There’s been a constant stream of negative articles about the Research Excellence Framework (for non-UK readers, this is the “system for assessing the quality of research in UK higher education institutions”) over the last few months, and two more have appeared recently (from David Shaw, writing in the Times Higher, and from Peter Wells on the LSE Impact Blog) which have prompted me to respond with something of a defence of the Research Excellence Framework.
One crucial fact that I left out of the description of the REF in the previous paragraph is that “funding bodies intend to use the assessment outcomes to inform the selective allocation of their research funding to HEIs, with effect from 2015-16”. And I think this is a fact that’s also overlooked by some critics. While a lot of talk is about prestige and ‘league tables’, what’s really driving the process is the need to have some mechanism for divvying out the cash for funding research – QR funding. We could most likely do without a “system for assessing the quality of research” across every discipline and every UK university in a single exercise using common criteria, but we can’t do without a method of dividing up the cake as long as there’s still cake to share out.
In spite of the current spirit of perpetual revolution in the sector, money is still paid (via HEFCE) to universities for research, without much in the way of strings attached. This basic, core funding is one half of the dual funding system for research in the UK – the other half being funding for individual research projects and other activities through the Research Councils. What universities do with their QR funding varies, but I think typically a lot of it is in staff salaries, so that the number of staff in any given discipline is partly a function of teaching income and research income.
I do have sympathy for some of the arguments against the REF, but I find myself returning to the same question – if not this way, then how?
It’s unfair to expect anyone who objects to any aspect of the REF to furnish the reader with a fully worked up alternative, but constructive criticism must at least point the way. One person who doesn’t fight shy of coming up with an alternative is Patrick Dunleavy, who has argued for a ‘digital census’ involving the use of citation data as a cheap, simple, and transparent replacement for the REF. That’s not a debate I feel qualified to participate in, but my sense is that Dunleavy’s position on this is a minority one in UK academia.
In general, I think that criticisms of the REF tend to fall into the following broad categories. I don’t claim to address decisively every last criticism made (hence the title), but for what it’s worth, here are the categories that I’ve identified, and what I think the arguments are.
1. Criticism over details
The REF team have a difficult balancing act. On the one hand, they need rules which are sensitive to the very real differences between academic disciplines. On the other, fairness and efficiency call for as much similarity in approach, rules, and working methods as possible between panels. The more differences between panels, the greater the chance of confusion and of mistakes being made in the process of planning and submitting REF returns – mistakes which could seriously affect both notional league table placing and cold hard cash. The more complicated the process, the greater the transaction costs. Which brings me on to the second balancing act. On the one hand, it needs to be a rigorous and thorough process, with so much public money at stake. On the other hand, it needs to be lean and efficient, minimising the demands on the time of institutions, researchers, and panel members. This isn’t to say that the compromise reached on any given point between particularism and uniformity, and between rigour and efficiency, is necessarily the right one, of course. But it’s not easy.
2. Impact
The use of impact at all. The relative weighting of impact. The particular approach to impact. The degree of uncertainty about impact. It’s a step into the unknown for everyone, but I would have thought that the idea that there be some notion of impact – some expectation that where academic research makes a difference in the real world, we should be able to show that it does – is hard to object to. I have much more sympathy for some academic disciplines than others as regards objections to the impact agenda. Impact is really a subject for a blog post in itself, but for now, it’s worth noting that it would be inconsistent to argue both against the inclusion of impact in the REF and that the REF is too narrow in terms of what it values and what it assesses.
3. Encouraging game playing
While it’s true that the REF will encourage game playing in similar (though different) ways to its predecessors, I can’t help but think this is inevitable and would also be true of every possible alternative method of assessment. And what some would regard as gaming, others would regard as just doing what is asked of them.
One particular ‘game’ that is played – or, if you prefer, strategic decision that is made – concerns where the threshold for submission is set. It’s clear that there’s no incentive to include those whose outputs are likely to fall below the minimum threshold for attracting funding. But it’s common for some institutions, in some disciplines, to set a minimum above this, with an eye not only on QR funding, but also on league table position. There are two arguments that can be made against this. One is that QR funding shouldn’t be so heavily concentrated on the top-rated submissions, and/or that more funding should be available. But that’s not an argument against the REF as such. The other is that institutions should be obliged to submit everyone. But the costs of doing so would be huge, and it’s not clear to me what the advantages would be – would we really get better or more accurate results with which to share out the funding? Because ultimately the REF is not about individuals, but institutions.
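To make concrete why submission thresholds and the concentration of funding matter, here is a minimal sketch of a quality-weighted funding split of the broad kind QR funding uses. The star weightings, department names, profiles, and pot size are all invented for illustration; real allocation formulas also factor in volume measures, subject cost weights, and other adjustments.

```python
# Hypothetical sketch of a QR-style quality-weighted allocation.
# Weights and submissions are invented for illustration only.

# Assumed (not actual) funding weights per quality level: only work
# rated 3* and above attracts any funding, and 4* attracts far more.
WEIGHTS = {"4*": 3.0, "3*": 1.0, "2*": 0.0, "1*": 0.0}

def weighted_volume(profile, fte):
    """Quality-weighted research volume for one submission.

    profile: fraction of outputs at each star level (sums to 1.0)
    fte: number of staff submitted
    """
    return fte * sum(WEIGHTS[star] * share for star, share in profile.items())

def allocate(pot, submissions):
    """Split a funding pot pro rata to quality-weighted volume."""
    volumes = {name: weighted_volume(profile, fte)
               for name, (profile, fte) in submissions.items()}
    total = sum(volumes.values())
    return {name: pot * v / total for name, v in volumes.items()}

submissions = {
    # name: (quality profile, submitted FTE) – both hypothetical
    "Dept A": ({"4*": 0.4, "3*": 0.5, "2*": 0.1}, 20),
    "Dept B": ({"4*": 0.1, "3*": 0.4, "2*": 0.5}, 20),
}

shares = allocate(1_000_000, submissions)
```

With these made-up numbers, two departments of identical size end up with very different shares of the pot, which is why excluding staff likely to score below the funding line changes the outcome: submitting them adds FTE but no weighted volume for the profile, while dragging down the average quality on which league tables are based.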
4. Perverse incentives
David Shaw, in the Times Higher, sees a very dangerous incentive in the REF.
REF incentivises the dishonest attribution of authorship. If your boss asked you to add someone’s name to a paper because otherwise they wouldn’t be entered into the REF, it could be hard to refuse.
I don’t find this terribly convincing. While I’m sure that there will be game playing around who should be credited with co-authored publications, I’d see that as acceptable in a way that the fraudulent activity that Shaw fears (but stresses that he’s not experienced first-hand) just isn’t. There is opportunity for – and temptations to – fraud, bad behaviour and misconduct in pretty much everything we do, from marking students’ work to reporting our student numbers and graduate destinations. I’m not clear how that makes any of these activities ‘unethical’ in the way his article seems to argue. Fraud is rare in our sector, and when anyone does commit it, it’s a huge scandal and heads roll. It ruins careers and leaves a long shadow over institutions. Even leaving aside the residual decency and professionalism that’s the norm in our sector, it would be a brave Machiavellian Research Director who would risk attempting this kind of fraud. To make it work, you need the cooperation and the silence of two academic researchers for every single publication. Risk versus reward – just not worth it.
Peter Wells, on the LSE blog, makes the point that the REF acts as an active disincentive for researchers to co-author papers with colleagues at their own institution, as only one can return the output to the REF. That’s an oversimplification, but it’s certainly true that there’s active discouragement of the submission of the same output multiple times in the same return. There’s no such problem if the co-author is at another institution, of course. However, I’m not convinced that this theoretical disincentive makes a huge difference in practice. Don’t academics co-author papers with the most appropriate colleague, whether internal or external? How often – really – does a researcher choose to write something with a colleague at another institution rather than a colleague down the corridor? For REF reasons alone? And might the REF incentive to include junior colleagues as co-authors that Shaw identifies work in the other direction, for genuinely co-authored pieces?
In general, proving the theoretical possibility of a perverse incentive is not sufficient to prove its impact in reality.
5. Impact on morale
There’s no doubt that the REF causes stress and insecurity and can add significantly to the workload of those involved in leading on it. There’s no doubt that it’s a worrying time, waiting for news of the outcome of the R&R paper that will get you over whatever line your institution has set for inclusion. I’m sure it’s not pleasant being called in for a meeting with the Research Director to answer for your progress towards your REF targets, even with the most supportive regime.
However…. and please don’t hate me for this…. so what? I’m not sure that the bare fact that something causes stress and insecurity is a decisive argument. Sure, there’s a prima facie case for trying to make people’s lives better rather than worse, but that’s about it. And again, what alternative system would be equally effective at dishing out the cash while being less stressful? The fact is that every job – including university jobs – is sometimes stressful and has downsides as well as upsides. Among academic staff, the number one stress factor I’m seeing at the moment is marking, not the REF.
6. Effect on HE culture
I’ve got more time for this argument than for the stress argument, but I think a lot of the blame is misdirected. Take Peter Wells’ rather utopian account of what might replace the REF:
For example, everybody should be included, as should all activities. It is partly by virtue of the ‘teaching’ staff undertaking a higher teaching load that the research active staff can achieve their publications results; without academic admissions tutors working long hours to process student applications there would be nobody to receive research-led teaching, and insufficient funds to support the University.
What’s being described here is not in any sense a ‘Research Excellence Framework’. It’s a much broader ‘Academic Excellence Framework’, and that doesn’t strike me as something that’s particularly easy to assess. How on earth could we go about assessing absolutely everything that absolutely everyone does? Why would we give out research cash according to how good an admissions tutor someone is?
I suspect that what underlies this – and some of David Shaw’s concerns as well – is a much deeper unease about the relative prestige and status attached to different academic roles: the research superstar; the old-fashioned teaching and research lecturer; those with heavy teaching and admin loads who are de facto teaching only; and those who are de jure teaching only. There is certainly a strong sense that teaching is undervalued – in appointments, in promotions, in status, and in other ways. Those with higher teaching and admin workloads do enable others to research in precisely the way that Shaw argues, and respect and recognition for those tasks is certainly due. And I think the advent of increased tuition fees is going to change things, for the better in terms of the profile and status of excellent teaching.
But I’m not sure why any of these status problems are the fault of the REF. The REF is about assessing research excellence and giving out the cash accordingly. If the REF is allowed to drive everything, and non-inclusion is such a badge of dishonour that the contributions of academics in other areas are overlooked, well, that’s a serious problem. But it’s an institutional one, and not one that follows inevitably from the REF. We could completely change the way the REF works tomorrow, and it will make very little difference to the underlying status problem.
It’s not been my intention here to refute each and every argument against the REF, and I don’t think I’ve even addressed directly all of Shaw and Wells’ objections. What I have tried to do is to stress the real purpose of the REF, the difficulty of the task facing the REF team, and make a few limited observations about the kinds of objections that have been put forward. And all without a picture of Pierluigi Collina.
By way of preamble, this is a nice summary of the REF debate, so many thanks for this, I’ll definitely be using this in one way or another.
The one issue you miss in the analysis is that the REF isn’t just a way of divvying up the cash, but it’s a way of divvying up cash that steers the sector to behave in particular ways. You can also come up with a class of objections related to the desirability of that steering approach.
So you could come up with a whole range of ways of allocating the cash. You could do it like SRIF, based on historical allocations; like HEIF1/2, on the basis of allocations which people have to justify in order to qualify; you could do it on the basis of staff numbers, or even the numerological value of the Vice Chancellor’s name. These would all be methods for allocating the cash, and they would all have effects, direct and indirect, positive and negative.
The REF in practice is a means for government to steer the behavioural effects of the cash allocation process. If you have a historical method, there is no steering effect; but in the REF there are very strong steering effects, because there is a very clear (and by no means undisputed) model of what kinds of research government should be funding.
Some of these steering effects are desirable – stimulating increased research productivity (which is a proxy for quality) – and some are more controversial, like impact.
What you fail to distinguish in debunking opposition to impact, as do most of the writers, is between the desirability of creating impact, and the use of the REF to stimulate it via ten year ex post vignettes.
No one seriously argues that impact is not vital, or that academics should not deliver societal benefits in return for the public funding they receive. But it is fair to argue that the REF is not steering academics to create real impacts, in part because no one really understands in a generic sense how research creates impact.
So you can point to examples of research impact, and you can write nice narratives of how that has been produced, but that’s all they are, nice narratives: no one can define in conceptual terms how impact functions, and by implication, what is good, and hence better, impact.
So in the absence of a model of what is or is not good impact, the problem is that no one has the faintest idea what the real steering impacts will be of the approach to measuring and rewarding impact taken in the REF.
Nobody is discussing these things; instead there is, in effect, a heated row, with each side accusing the other of philistinism and ivory-tower isolation respectively. So a second argument about the REF would be that it is an imposed project, supported by assertions of what is politically necessary, and no one really knows what the outcomes will be.
As a coda: the one known among the outcomes is the Matthew principle: powerful people will get a bigger share of the cash, because these research management exercises only have credibility if they reflect what the field knows and respect existing prestige structures!
So there is also a disingenuousness about the REF: it claims that it will reward excellence, whereas in reality it is little more than an incremental rebalancing exercise in a dynamic field, and the effects of that rebalancing do indeed reflect groups’ capacities to play games rather than to execute excellent research.
Hi Paul, thanks for your comments. There’s quite a lot to unpack there, but then there’s more to the REF than can be adequately covered in a single blog post or comment.
On impact, I certainly don’t think that I’ve ‘debunked opposition’. Rather, I’ve acknowledged that it’s a point of contention, vaguely hinted at my own view, but essentially passed over impact as something for another day. Of course, it may be that the problems and tribulations of impact are just too important to brush over in the way that I have done. I think that perhaps you’re more dismissive than I would be about the efforts that have been made to try to pin down what’s meant by ‘impact’. I advise academic colleagues fairly regularly on impact statements, and I’d say I’ve got a passable handle on what’s required. It’s not easy, but we can go beyond “fairy tales of influence”.
I guess the dilemma about impact is that either (a) we take the plunge and recognise its importance and make it an important determinant of funding and league table position, or (b) we decide it’s too difficult and go back to ‘esteem indicators’. I guess there’s also a (c) which would include phasing in impact more gradually, and giving it a lower weighting for REF 2014, with an intent to increase later if all goes well. That’s something I’d support, I think, but without a significant enough percentage, it’s hard to see it being taken particularly seriously.
On the Matthew principle, well, I think that the issue of the concentration of research funding is a separate one from the REF. We could plug all kinds of formulae into the REF, although expectations about where the cut-off point for funding will be will obviously affect submissions.
I think I’ve got rather more confidence about the REF getting it (broadly) right about the allocation of funding. I find it quite hard to imagine that any submission will get substantially more (money/prestige) than it deserves. While I accept that good or bad game playing may make a small difference, I’m not sure that that difference will be significant compared to the effect of research excellence.
And if we’re saying that REF panel members – all experts in their field – and the best efforts of panel chairs and administrators are not able to do anything other than confirm the status quo with some minor tweaks here and there, then what alternative would be better?
I think you’re very wrong to try to defend the REF. It creates perverse incentives, discourages ground-breaking research, and is an incalculably inefficient use of time (actually, I would love it if someone did produce an estimate of the fEC of the REF, including all internal evaluation activities as well as the actual formal evaluations).
Donald Gillies’s book “How Should Research Be Organised?” has a detailed and extremely persuasive critique of the RAE/REF. He also offers a sketch of a replacement system which wouldn’t suffer from the same problems. I find his arguments extremely compelling. If you can’t get hold of the book, there is a cut-down version of the critique here:
http://www.paecon.net/PAEReview/issue37/Gillies37.htm
If you’re unpersuaded by Gillies’s clever alternative system, I have another idea. Set some minimum standard of “research-active” status (designed only to identify people who really don’t do any research of note at all) and then just divvy up the money equally between departments according to how many research-active staff they have. The efficiency savings of such a system would be incredible.
Thanks for your comments, and for the Gillies link. The link leads to information about his critique, but not his proposed alternative. I’ve followed a few links and found one to a podcast which might include more information, but I haven’t listened to it yet.
I studied some philosophy of science as an undergraduate, so it’s an approach that interests me. In particular, I’ve been wondering for some time whether research funding isn’t *too* focused on the next big paradigm shift – if there is to be one – through typically larger projects, and not enough on exploring and excavating the current paradigm through typically smaller projects.
I think Gillies is right about the ‘double peer review’ (or treble, if you include the review of the initial funding application), but I think he overestimates the closeness of the relationship between particular research projects/programmes and research funding. The money that flows from the RAE/REF exercises is given to universities to support research as they see fit – the assumption being that a track record of success (institutionally) is likely to lead to more success in the future. It’s not about starting up or shutting down individual projects, but about funding institutions. It would be a much more powerful criticism of research council type funding than of QR funding.
I have some doubts about there being any adequate solution to the problem of ideas which subsequently turn out to be game changers but which are initially regarded as worthless by conservative peers operating within the existing paradigm. I would imagine that at any given time there are a number of what we might call ‘heretical’ ideas and potential projects which call the status quo into question in a deep and fundamental way. However, I would also imagine that a large percentage of these will turn out to be simply and wildly wrong. The problem is, how do we tell the genius ideas from the crackpot ones? A lot of the people with the most radical ideas at the moment are in alternative ‘medicine’ and climate change ‘scepticism’, but these are not projects with sufficient intellectual rigour (or, arguably, intellectual honesty) to merit research funding. And what share of the resources should we give to promoting highly speculative work, and what proportion to working within current paradigms? Hindsight is a wonderful thing.
I’ll try to find out more about Gillies’ alternative system during the week, but I’d have real concerns about a system which just funds institutions a flat per-head rate for researchers who meet minimum standards (let’s say 4 x 2* or whatever) without any kind of quality differential. I see the perverse incentives this would cause as being much more serious. Why keep expensive research profs when there’s no more QR support for their salary than for junior colleagues who just scrape in? I can see the net effect being a much more even distribution of funding based upon FTE alone, which would make it much harder to keep high quality research groups together and make getting enough funding to support a critical mass of research in a particular area very difficult. If there’s no extra funding for quality, there’s no obvious financial incentive (other than its intrinsic value) for institutions to chase anything other than research adequacy. Should we give the sabbatical to the ground-breaking Prof, or to the plodder who needs more time to make sure they count as research adequate?
I do, however, have some sympathy with criticisms of the REF in terms of costs. It is incredibly costly, both in cash and in time, and alternatives which could deliver the same degree of rigour for less money are certainly worth considering.
I don’t like the idea that a university would only ever consider employing a senior professor because he/she would accrue REF points. Surely, a sufficient reason to employ a talented researcher is that the purpose of a university, along with teaching, is to conduct high quality research?
Jonathan Wolff made the point after the last RAE that, in some subjects at least, the “equal division” system essentially already exists. Given how much time everybody spent preparing for the last RAE his figures are a bit sickening:
http://www.guardian.co.uk/education/2009/may/05/jonathan-wolff-rae
Well, I guess my argument is vulnerable to my general worry about arguments about perverse incentives – just because something might be a perverse incentive in theory doesn’t mean it’s a real one in practice. In general, the argument that a university simply wouldn’t behave like that, because of the purpose of a university, can be a good one. Similarly for the argument that academics wouldn’t act in a particular way. But while I don’t think that universities will go on a sacking frenzy of senior profs, I do think that the fact that there’s no more research funding for a research chair than for a research-active junior lecturer means not only that there’s no incentive to build a critical mass of senior, experienced researchers, but also that there’s no means to do so. In the short term, such a radical rebalancing of research funding would lead to chaos, but even assuming that the shift was long term rather than short and handled well, I’d still see a future with a lot more research ‘adequacy’ and much less research excellence. A large department of reliably adequate academics – who can guarantee they’ll make the grade, have an outside chance of some research council or EU funding, but won’t bother about promotion – would end up being the most sustainable form of research unit. Perhaps they’d have a Prof or two for research leadership purposes, but that’s it.
Wolff’s article is interesting, and it would be interesting to know if those figures are right, and if it’s a pattern repeated across other topics. I also wonder what ratios were in place when he wrote it.