Initial Reactions to HEFCE’s ‘Initial decisions on REF 2021’

This lunchtime HEFCE have announced some more “Initial Decisions” on REF 2021, which I’ve summarised below.

Slightly frustratingly, the details are scattered across a few documents, and it’s easy to miss some of them. There’s an exec summary, a circular letter (which is more of a rectangle, really), the main text of the report that can be downloaded from the bottom of the exec summary page (along with an annex listing UoAs and further particulars for panel chair roles)… and annex A on a further consultation on the staff return and output portability, downloadable from the bottom of the circular letter page.

I’ve had a go at a quick summary, by bullet point theme rather than in the order they appear, or in a grand narrative sweep. This is one of my knee-jerk pieces, and I’ve added a few thoughts of my own. But it’s early days, and possibly I’ve missed something or misunderstood, so please let me know.

Outputs

  • Reserve output allowed where publication may not appear in time
  • Worth only 60% of total mark this time (see scoring system)

I think the reduction in the contribution of outputs to the overall mark (in favour of impact) is probably what surprised me most, and I suspect this will be controversial. I think the original plan was for environment to be downgraded to make way, but there’s a lot more demanded from the environment statement this time (see below) so it’s been protected. Great to have the option of submitting an insurance publication in case one of the in-press ones doesn’t appear by close of play.

Panels/Units of Assessment

  • Each sub-panel to have at least one appointed member for interdisciplinary research “with a specific role to ensure its equitable assessment”. A new identifier/flag will capture interdisciplinary outputs
  • Single UoA for engineering, multiple submissions allowed
  • Archaeology split from Geography and Environmental studies – now separate
  • Film and Screen Studies to be explicitly included in UoA 33 with Dance, Drama, Performing Arts
  • Decisions on forensic science and criminology (concerns about visibility) due in Autumn
  • Mapping staff to UoAs will be done by institutions, not HESA cost centres, but HEFCE may ask for more info in the event of any “major variances” from HESA data.

What do people think about a single UoA for engineering? That’s not an area I support much. Is this just tidying up, or does this have greater implications? Is it ironic that forensic science and criminology have been left with a cop show cliff-hanger ending?

Environment

  • Expansion of Unit of Assessment environment section to include sections on:
    • Structures to support interdisciplinary research
    • Supporting collaboration with “organisations beyond higher education”
    • Impact template will now be in the environment element
    • Approach to open research/open access
    • Supporting equality and diversity
  • More quant data in UoA environment template (we don’t know what yet)
  • Standard Institution level information
  • Non-assessed, invite-only pilot for institution level environment statement
  • Expansion of environment section is given as a justification for maintaining it at 15% of score rather than reducing as expected.

The inclusion of a statement about support for interdisciplinary work is interesting, as this moves beyond merely addressing justifiable criticism about the fate of interdisciplinary research (see the welcome addition to each UoA of an appointed ‘Member for Interdisciplinarity’ above). This makes it compulsory, and an end in itself. This will go down better in some UoAs than others.

Impact

  • Institutional level impact case studies will be piloted, but not assessed
  • Moves towards unifying definitions of “impact” and “academic impact” between REF and Research Councils – both part of dual funding system for research
  • Impact on teaching/curriculum will count as impact – more guidance to be published
  • Underpinning work “at least equivalent to 2*” and published between 1st Jan 2000 and 31st Dec 2020. Impact must take place between 1st Aug 2013 and 31st July 2020
  • New impact case study template, more questions asked, more directed, more standardised, more “prefatory” material to make assessment easier.
  • Require “routine provision of audit evidence” for case study templates, but not given to panel
  • Uncertain yet on formula for calculating number of case study requirements, but overall “should not significantly exceed… 2014”. Will be done on some measure of “volume of activity”, possibly outputs
  • Continuation of case studies from 2014 is allowed, but must meet date rules for both impact and publication, need to declare it is a continuation.
  • Increased to 25% of total score

And like a modern day impact superhero, here comes Mark Reed aka Fast Track Impact with a blog post of his own on the impact implications of the latest announcement. I have to say that I’m pleased that we’re only having a pilot for institutional case studies, because I’m not sure that’s a go-er.

Assessment and Scoring system

  • Sub-panels may decide to use metrics/citation data, but will set out criteria statements stating whether/how they’ll use it. HEFCE will provide the citation data
  • As 2014, overall excellence profile, 3 sub-profiles (outputs, impact, environment)
  • Five point scale from unclassified to 4*
  • Outputs 60, Impact 25, Environment 15. The increase of impact to 25 has come at the expense of outputs rather than environment, as extra environment info is being sought.
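To make the arithmetic concrete, here’s a minimal sketch of how the three weighted sub-profiles would combine into an overall excellence profile. The only figures taken from the announcement are the 60/25/15 weights; the sub-profile percentages below are made-up example numbers, not anything from the documents.

```python
# Weights from the announcement; everything else is illustrative.
WEIGHTS = {"outputs": 0.60, "impact": 0.25, "environment": 0.15}

def overall_profile(sub_profiles):
    """Combine sub-profiles (each a % split across quality levels)
    into the overall excellence profile using the REF weights."""
    levels = ["4*", "3*", "2*", "1*", "u/c"]
    return {
        level: round(sum(WEIGHTS[name] * profile[level]
                         for name, profile in sub_profiles.items()), 1)
        for level in levels
    }

# Hypothetical sub-profiles for one submission (percentages sum to 100)
example = {
    "outputs":     {"4*": 30, "3*": 50, "2*": 15, "1*": 5, "u/c": 0},
    "impact":      {"4*": 40, "3*": 40, "2*": 20, "1*": 0, "u/c": 0},
    "environment": {"4*": 50, "3*": 50, "2*": 0,  "1*": 0, "u/c": 0},
}
print(overall_profile(example))
# {'4*': 35.5, '3*': 47.5, '2*': 14.0, '1*': 3.0, 'u/c': 0.0}
```

The point of writing it out is to show how strong impact case studies now pull the overall profile up more than they did in 2014, when impact carried only 20%.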

There was some talk of a possible necessity for a 5* category to be able to differentiate at the very top, but I don’t think this gained much traction.

But on the really big questions… further consultation (deadline 29th Sept):

There’s been some kicking into the long grass, but things are looking a bit clearer…

(1) Staff submission:

All staff “with a significant responsibility to undertake research” will be submitted, but “no single indicator identifies those within the scope of the exercise”. Institutions have the option of submitting 100% of staff who meet the core eligibility requirement OR coming up with a code of practice that they’ll use to decide who is eligible. Auditable evidence will be required, and institutions can choose different options for different UoAs.

Proposed core eligibility requirements – staff must meet all of the following:

  • “have an academic employment function of ‘research only’ or ‘teaching and research’
  • are independent researchers [i.e. not research assistants unless ‘demonstrably’ independent]
  • hold minimum employment of 0.2 full time equivalent
  • have a substantive connection to the submitting institution.”
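Since staff must meet all four criteria to be in scope, the proposed test is a straightforward conjunction. Here’s a hedged sketch of that logic; the field names are my own invention for illustration, not anything HEFCE has specified:

```python
def meets_core_eligibility(staff):
    """A member of staff is in scope only if ALL four proposed
    core criteria hold (field names are hypothetical)."""
    return all([
        staff["employment_function"] in {"research only", "teaching and research"},
        staff["is_independent_researcher"],  # RAs only if demonstrably independent
        staff["fte"] >= 0.2,                 # minimum 0.2 full time equivalent
        staff["substantive_connection"],     # to the submitting institution
    ])

candidate = {
    "employment_function": "teaching and research",
    "is_independent_researcher": True,
    "fte": 0.5,
    "substantive_connection": True,
}
print(meets_core_eligibility(candidate))  # True
```

Failing any one criterion (say, an FTE of 0.1) takes someone out of scope entirely; there’s no trading off one criterion against another.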

I like this as an approach – it throws the question back to universities, and leaves it up to them whether they think it’s worth the time and trouble running an exercise in one or more UoAs. And I think the proposed core requirements look sensible, and faithful to the core aim, which is to maximise the number of researchers returned and prevent the hyper-selectivity game being played.

(2) Transition arrangements for non-portability of publications.

HEFCE are consulting on either:

(a) “The simplified model, whereby outputs would be eligible for return by the originating institution (i.e. the institution where the research output was demonstrably generated and at which the member of staff was employed) as well as by the newly employing institution”.
or
(b) “The hybrid approach, with a deadline (to be determined), after which a limited number of outputs would transfer with staff, with eligibility otherwise linked to the originating institution. (This would mean operating two rules for portability in this exercise: the outputs of staff employed before the specified date falling under the 2014 rules of full portability; outputs from staff employed after this date would fall under the new rules.)”

I wrote a previous post on portability and non-portability when the Stern Review was first published, which I still think is broadly correct.

I wonder how simple the simplified model will be… if we end up having to return n=2 publications, choosing them from a list of everything published by everyone while they worked here. But it’s probably less work than having a cut-off date.

More to follow….

Mistakes in Grant Writing, part 95 – “The Gollum”

Image: Alonso Javier Torres [CC BY 2.0] via Flickr
A version of this article first appeared in Funding Insight on 20th July 2017 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
* * * * * * * * * * * * * * * * * * * *

Previously I’ve written about the ‘Star Wars Error’ in grant writing, and my latest attempt to crowbar popular culture references into articles about grant writing mistakes is ‘the Gollum’. Gollum is a character from Lord of the Rings, a twisted, tortured figure – wicked and pitiable in equal measure. He’s an addict whose sole drive is possession of the Ring of Power, which he loves and hates with equal ferocity. A little like me and my thesis.

Only begotten

For current purposes, it’s his cry of “my precious!” and obsession with keeping the Ring for himself that I’m thinking of in terms of an analogy with research grant applicants, rather than (spoilers) eating raw fish, plunging into volcanoes, or murdering friends. Even in the current climate of ‘demand management’, internal peer review, and research development support, there are still researchers who treat their projects as their “precious” and are unable or unwilling to share them or to seek comment and feedback.

It’s easy to understand why – there’s the fear of being scooped and of someone else taking and using the idea. There’s the fear of public failure – with low success rates, a substantial majority of applications will be unsuccessful, and perhaps the thought is that if one is going to fail, few people should know about it. And let’s not pretend that internal review/filtering processes don’t at least raise questions about academic freedom.

Power play

But there are other fears. The first is about sabotage or interference from colleagues who might be opposed to the research, whether through ideological and methodological differences, or because they’re on the other side of some major scientific controversy. In my experience, this concern has been largely unfounded. I’ve been very fortunate to work with senior academics who are very clear about their role as internal reviewer, which is to further improve competitive applications and ideas, while filtering out or diverting uncompetitive ideas, or applications that simply aren’t ready. But while internal reviewers will have their views, I’ve not seen anyone let that power go to their heads.

Enough of experts

Second, if the concern isn’t about integrity or (unconscious) bias, it might be about background or knowledge. One view I’ve encountered – mainly unspoken, but occasionally spoken and once shouted – is that no-one else at the institution has the expertise to review their proposal and therefore internal review is a waste of time.

It might well be true that no-one else internally has equivalent expertise to the applicant, and (apart from early career researchers) that’s to be expected and welcomed. But if it’s true internally, it might also be true of the external expert reviewers chosen by the funder, and it’s even more likely to be true of the people on the final decision-making panel. The chances are that the principal applicant on any major project is one of the leaders in that field, and even if she regards a handful of others as appropriate reviewers, there’s absolutely no guarantee that she’ll get them.

Significant other

Ultimately, the purpose of a funding application is to convince expert peer reviewers from the same or cognate discipline and a much broader panel of distinguished scientists of the superior merits of your ideas and the greater significance of your research challenge compared to rival proposals. Because once the incompetent and the unfeasible have been weeded out – it’s all about significance.

A quality internal peer review process will mirror those conditions as closely as possible. It doesn’t matter that internal reviewer X isn’t from the same field and knows little about the topic – what’s of use to the applicant is what X makes of the application as a senior academic from another (sub)discipline. Can she understand the research challenges, why they’re significant and important? Does the application tell her exactly what the applicant proposes to do? What’s particularly valuable are creative misunderstandings – if an internal reviewer has misunderstood a key point or misinterpreted something, a wise applicant will return to the application and seek to identify the source of that misunderstanding and head it off, rather than just dismissing the feedback out of hand.

Forever alone

And that’s without touching on the value that research development support can add. People in my kind of role who may not be academics, but who have seen a great many grant applications over the years. People who aren’t academic experts, but who know when something isn’t clear, or doesn’t make sense to the intelligent lay person.

Most institutions that take research seriously will offer good support to their researchers. Despite this, there are still researchers who only engage with others where they absolutely must, and take little notice of feedback or experience during the grant application process. Do they really think that others are unworthy of gazing upon the magnificence of The Precious?

I’d like to urge them here to turn back, to take the advice and feedback that’s on offer, lest they end up wandering the dark places of the world, alone and unfunded.

“Once more unto the breach” – Should I resubmit my unsuccessful research grant application?

This article first appeared in Funding Insight on 11th May 2017 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
* * * * * * * * * * * * * * * * * * * *

Should I resubmit my unsuccessful research grant application?

No.

‘No’ is the short answer – unless you’ve received an invitation or steer from the funder to do so. Many funders don’t permit uninvited resubmissions, so the first step should always be to check your funder’s rules and definitions of resubmission with your research development team.

To be, or not to be

That’s not to say that you should abandon your research proposal – more that it’s a mistake to think of your next application on the same or similar topic as a resubmission. It’s much better – if you do wish to pursue it – to treat it as a fresh application and to give yourself and your team the opportunity to develop your ideas. It’s unlikely that nothing has changed between the date of submission and now. It’s also unlikely that nothing could be improved about the underpinning research idea or the way it was expressed in the application.

However, sometimes the best approach is to let an idea go, cut your losses, and avoid the sunk cost fallacy. Onwards and upwards to the next idea. I was recently introduced to the concept of a “negative CV”, which is the opposite of a normal CV, listing only failed grant applications, rejected papers, unsuccessful conference pitches and job market rejections. Even the most eminent scholars have lengthy negative CVs, and there’s no shame in being unsuccessful, especially as success rates are so low. It’s really difficult – you’ve got your team together, you’ve been through the discussions and debates and the honing of your idea and then the grant writing, and then the disappointment of not getting funded. It’s very definitely worth having meetings and discussions to see what can be salvaged and repurposed – publishing literature reviews, continuing to engage with stakeholders and so on. It’s only natural to look for some other avenue for your work, but sometimes it’s best to move on to something else.

Here are two bits of wisdom that are both true in their own way:

  • If at first you don’t succeed, try, try, try again (William Edward Hickson)
  • The definition of insanity is doing the same thing over and over but expecting different results (disputed – perhaps Einstein or Franklin, but I reckon US Narcotics Anonymous)

So what should you do? What factors should you consider in deciding whether to rise from the canvas like Rocky, or instead emulate Elsa and Let It Go?

What being unsuccessful means… and what it doesn’t

As a Canadian research council director once said, research funding is a contest, not a test. Research funding is a limited commodity, like Olympic medals, jobs, and winning lottery tickets. It’s not an unlimited commodity like driving licences or PhDs, commodities which everyone who reaches the required standard can obtain. Sometimes I think researchers confuse the two – if the driving test examiner says I failed on my three-point turn, then as long as I get it right next time (and make no further mistakes) I’ll pass. But even if I respond adequately to all of the points made in the referees’ comments, there’s still no guarantee I’ll get funded. The quality of my driving in the morning doesn’t affect your chances of passing your test in the afternoon, but if too many applications are better than yours, you won’t get funded. And just as many recruitment exercises produce more appointable candidates than posts, so funding calls attract far more fundable applications than the funds available can support.

Sometimes referees’ comments can be misinterpreted. Feedback might list the real or perceived faults with the application, but (once the fundamentally flawed have been excluded) ultimately it’s a competition about significance. What significance means is defined by the funder and the scheme and doesn’t necessarily mean impact – it could be about academic significance, contribution to the field and so on.

As a public panel member for an NIHR scheme I’ve seen this from the inside – project proposals which are technically competent, sensible and feasible, yet which don’t get funded, either because they fail to articulate their significance or because their research challenge is just not that significant an issue. They’re simply not competitive against similarly competent applications taking on much more significant and important research challenges. Feedback is given which would have improved the application, but simply addressing that feedback will seldom make it any more competitive.

When major Research Centre calls come out, I often have conversations with colleagues who have great ideas for perfectly formed projects which unfortunately I don’t think are significant enough to be one of three or four funded across the whole of social sciences. Ideally the significance question, the “so what/who cares?” question should be posed before applying in the first place, but you should definitely look again at what was funded and ask it again of your project before considering trying to rework it.

Themed Calls Cast a Long Shadow

One of the most dispiriting grant rejection experiences is rejection from a targeted call which seemed perfect. It’s not like an open call where you have to compete with rival bids on significance from all across your research council’s remit – rather, the significance is already recognised.

Yet the reality is that narrower calls often have similarly low success rates. Although they’re narrower, everyone who can pile in, does pile in. And deciding what to do next is much harder. Themed calls cast a long shadow – if as a funder I’ve just made a major investment in field X through niche call Y, I’m not sure how I’m going to feel about an X-related application coming back in through the open call route. Didn’t we just fund a lot of this stuff? Should we fund more, especially if an idea like this was unsuccessful last time? Shouldn’t we support something else? And I think this effect might be true even with different funders who will be aware of what’s going on elsewhere. If a tranche of projects in your research area have been funded through a particular call, it’s going to be very difficult to get investment through any other scheme anytime soon.

Switching calls, Switching funders

An exception to this might be the Global Challenges Research Fund or perhaps other areas where there’s a lot of funding available (relatively speaking) and a number of different calls with slightly different priorities. Being unsuccessful with an application to an open call or a broader call and then looking to repurpose the research idea in response to a narrower themed call is more likely to pay off than the other way round, moving from a specific call to a general one. But even so, my advice would be to ban the “r” word entirely. It’s not a ‘resubmission’, it’s an entirely new application written for a different funding scheme with different priorities, even if some of the underlying ideas are similar.

This goes double when it comes to switching funders. A good way of wasting everyone’s time is trying to crowbar a previously unsuccessful application into the format required by a different funder. Different funders have different priorities and different application procedures, formats and rules, and so you must treat it as a fresh application. Not doing so is a bit like getting out some love letters you sent to a former paramour, changing the name at the top, and reposting them to the current object of your affections. Neither will end well.

The Leverhulme Trust are admirably clear on this point: they’re “keen to avoid assuming the role of ‘funder of last resort’; that is, of routinely providing support for proposals which have been fully matched to the requirement of another funding agency, but have failed to win support on the grounds of either lack of quality or insufficient available funds.” If you’re going to apply to the Leverhulme Trust, for example, make it a Leverhulme-y application, and that means shifting not just the presentational style but also the substance of what you’re proposing.

Whatever the change, forget any notion of resubmission if you’re taking an idea from one call to another. Yes, you may be able to reuse some of your previous materials, but if you submit something clearly written for another call with the crowbar marks still visible, you won’t get funded.

The Five Stages of Grant Application Failure

I’m reluctant to draw this comparison, but I wonder if responding to grant application rejection is a bit like the Kübler-Ross model of grief (denial, anger, bargaining, depression, and acceptance). Perhaps one question to ask yourself is whether your resubmission plans are coming from a position of acceptance – in which case fine, but don’t regard it as a resubmission – or are part of the bargaining stage. In which case… perhaps take a little longer to decide what to do.

Further reading: What to do if your grant application is unsuccessful. Part 1 – What it Means and What it Doesn’t and Part 2 – Next Steps.

‘Unimaginative’ research funding models and picking winners

XKCD 1827 – Survivorship Bias  (used under Creative Commons Attribution-NonCommercial 2.5 License)

Times Higher Education recently published an interesting article by Donald Braben and endorsed by 36 eminent scholars including a number of Nobel laureates. They criticise “today’s academic research management” and claim that as an unforeseen consequence, “exciting, imaginative, unpredictable research without thought of practical ends is stymied”. The article fires off somewhat scattergun criticism of the usual bêtes noires – the inherent conservatism of peer review; the impact agenda and lack of funding for blue skies research; and grant application success rates.

I don’t deny that there’s a lot of truth in their criticisms… I think in terms of research policy and deciding how best to use limited resources… it’s all a bit more complicated than that.

Picking Winners and Funding Outsiders

Look, I love an underdog story as much as the next person. There’s an inherent appeal in the tale of the renegade scholar, the outsider, the researcher who rejects the smug, cosy consensus (held mainly by old white guys) and whose heterodox ideas – considered heretical nonsense by the establishment – are  ultimately triumphantly vindicated. Who wouldn’t want to fund someone like that? Who wouldn’t want research funding to support the most radical, most heterodox, most risky, most amazing-if-true research? I think I previously characterised such researchers as a combination of Albert Einstein and Jimmy McNulty from ‘The Wire’, and it’s a really seductive picture. Perhaps this is part of the reason for the MMR fiasco.

The problem is that the most radical outsiders are functionally indistinguishable from cranks and charlatans. Are there many researchers with a more radical vision than the homeopath, whose beliefs imply not only that much of modern medicine is misguided, but that so is our fundamental understanding of the physical laws of the universe? Or the anti-vaxxers? Or the holocaust deniers?

Of course, no-one is suggesting that these groups be funded, and, yes, I’ll admit it’s a bit of a cheap shot aimed at a straw target. But even if we can reliably eliminate the cranks and the charlatans, we’ll still be left with a lot of fringe science. An accompanying THE article quotes Dudley Herschbach, joint winner of the 1986 Nobel Prize for Chemistry, as saying that his research was described as being at the “lunatic fringe” of chemistry. How can research funders tell the difference between lunatic ideas with promise (both interesting-if-true and interesting-even-if-not-true) and lunatic ideas that are just… lunatic? If it’s possible to pick winners, then great. But if not, it sounds a lot like buying lottery tickets and crossing your fingers. And once we’re into the business of applying a greater degree of scrutiny in picking winners, we’re back to having peer review again.

One of the things that struck me about much of the history of science is that there are many stories of people who believe they are right – in spite of the scientific consensus and in spite of the state of the evidence available at the time – but who proceed anyway, heroically ignoring objections and evidence, until ultimately vindicated. We remember these people because they were ultimately proved right, or rather, their theories were ultimately proved to have more predictive power than those they replaced.

But I’ve often wondered about such people. They turned out to be right, but were they right because of some particular insight, or were they right because they were lucky in that their particular prejudice happened to line up with the actuality? Was it just that the stopped clock is right twice per day? Might their pig-headedness equally well have carried them along another (wrong) path entirely, leaving them to be forgotten as just another crank? And just because someone is right once, is there any particular reason to think that they’ll be right again? (Insert obligatory reference to Newton’s dabblings with alchemy here). Are there good reasons for thinking that the people who predicted the last economic crisis will also predict the next one?

A clear way in which luck – interestingly rebadged as ‘serendipity’ – is involved is through accidental discoveries. Researchers are looking at X when… oh look at Y, I wonder if Z… and before you know it, you have a great discovery which isn’t what you were after at all. Free packets of post-it notes all round. Or when ‘blue skies’ research which had no obvious practical application at the time becomes a key enabling technology or insight later on.

The problem is that all these stories of serendipity, of surprise impact and of radical outsider researchers are examples of lotteries in which history only remembers the winning tickets. In an act of serendipity, XKCD published a cartoon illustrating this point nicely (see above) just as I was thinking about these issues.

But what history doesn’t tell us is how many lottery tickets research funding agencies have to buy in order to have those spectacular successes. And just as importantly, whether or not a ‘lottery ticket’ approach to research funding will ultimately yield a greater return on investment than a more ‘unimaginative’ approach to funding using the tired old processes of peer review undertaken by experts in the relevant field, followed by prioritisation decisions taken by a panel of eminent scientists drawn from across the funder’s remit. And of course, great successes achieved through this method – having a great idea, having the greatness of the idea acknowledged by experts, and then carrying out the research – make for a much less compelling narrative or origin story, probably to the point of invisibility.

A mixed ecosystem of conventional and high risk-high reward funding streams

I think there would be broad agreement that the research funding landscape needs a mixture of funding methods and approaches. I don’t take Braben and his co-signatories to be calling for wholesale abandonment of peer review, of themed calls around particular issues, or even of the impact agenda. And while I’d defend all those things, I similarly recognise merit in high risk-high reward research funding, and in attempts by major funders to try to address the problem of peer review conservatism. But how do we achieve the right balance?

Braben acknowledges that “some agencies have created schemes to search for potentially seminal ideas that might break away from a rigorously imposed predictability” and we might include the European Research Council and the UK Economic and Social Research Council as examples of funders who’ve tried to do this, at least in some of their schemes. The ESRC in particular on one scheme abandoned traditional peer review for a Dragon’s Den style pitch-to-peers format, and the EPSRC is making increasing use of sandpits.

It’s interesting that Braben mentions British Petroleum’s Venture Research Initiative as a model for a UCL pilot aimed at supporting transformative discoveries. I’ll return to that pilot later, but he also mentions that the one project that scheme funded was later funded by an unnamed “international benefactor”, which I take to be a charity or private foundation or other philanthropic endeavour rather than a publicly funded research council or comparable organisation. I don’t think this is accidental – private companies have much more freedom to create blue skies research and innovation funding as long as the rest of the operation generates enough money to pay the bills and enough of their lottery tickets end up winning to keep management happy. Similarly with private foundations, which have near total freedom to operate apart perhaps from charity rules.

But I would imagine that it’s much harder for publicly funded research councils to take these kinds of risks, especially during austerity. (“Sorry Minister, none of our numbers came up this year, but I’m sure we’ll do better next time.”) In a UK context, the Leverhulme Trust – a happy historical accident funded largely through dividend payments from its bequeathed shareholding in Unilever – seeks to differentiate itself from the research councils by styling itself as more open to risky and/or interdisciplinary research, and could perhaps develop further in this direction.

The scheme that Braben outlines is genuinely interesting. Internal only within UCL, a very light touch application process mainly involving interviews/discussion, with decisions taken by “one or two senior scientists appointed by the university” – not subject experts, I infer, as they’re the same people for each application. Over 50 applications since 2008 have so far led to one success. There’s no obligation to make an award to anyone, and they can fund more than one. It’s not entirely clear from this article whether the applicant was – as Braben proposes for the kinds of schemes he calls for – “exempt from normal review procedures for at least 10 years. They should not be set targets either, and should be free to tackle any problem for as long as it takes”.

From the article I would infer that his project received external funding after 3 years, but I don’t want to pick holes in a scheme which is only partially outlined and which I don’t know any more about, so instead I’ll talk about Braben’s more general proposal, not the UCL scheme in particular.

It’s a lot of power in a very few hands to give out these awards, and represents a very large and very blank cheque. While the use of interviews and discussion cuts down on grant-writing time, my worry is that a small panel and interview-based decision making may open the door to unconscious bias, and to greater success for more accomplished social operators. Anyone who’s been on many interview panels will probably have seen fellow panel members making heroic leaps of inference about candidates based on some deep intuition, and the tendency of some people to want to appoint the more confident and self-assured interviewee ahead of a visibly more nervous but far better qualified and more experienced rival. I have similar worries about “sand pits” as a way of distributing research funding – do better social operators win out?

The proposal is for no normal review procedures, and for ten years in which to work, possibly longer. At Nottingham – as I’m sure at many other places – our nearest equivalent scheme is something like a strategic investment fund which can cover research as well as teaching and other innovations. (Here we stray into things I’m probably not supposed to talk about, so I’ll stop). But these are major investments, and there’s surely got to be some kind of accountability during decision-making processes and some sort of stop-go criteria or review mechanism during the project’s life cycle. I’d say that courage to start up some high risk, high reward research project has to be accompanied by the courage to shut it down too. And that’s hard, especially if livelihoods and professional reputations depend upon it – it’s a tough decision for those leading the work and for the funder too. But being open to the possibility of shutting down work implies a review process of some kind.

To be clear, I’m not saying let’s not have more high-risk high-reward curiosity driven research. By all means let’s consider alternative approaches to peer review and to decision making and to project reporting. But I think high risk/high reward schemes raise a lot of difficult questions, not least what the balance should be between lottery ticket projects and ‘building society savings account’ projects. We need to be aware of the ‘survivor bias’ illustrated by the XKCD cartoon above and be aware that serendipity and vindicated radical researchers are both lotteries in which we only see the winning tickets. We also need to think very carefully about fair selection and decision making processes, and the danger of too much power and too little accountability in too few hands.

It’s all about the money, money, money…

But ultimately the problem is that there are a lot more researchers and academics than there used to be, and their numbers – in many disciplines – are determined not by the amount of research funding available nor the size of the research challenges, but by the demand for their discipline from taught-course students. And as higher education has expanded hugely since the days in which most of Braben’s “500 major discoveries” were made, there are just far more academics and researchers than there is funding to go around. And that’s especially true given recent “flat cash” settlements. I also suspect that the costs of research are now much higher than they used to be, given both the technology available and the technology required to push further at the boundaries of human understanding.

I think what’s probably needed is a mixed ecology of research funders and schemes. Publicly funded research bodies are probably not best placed to fund risky research because of accountability issues, and perhaps this is a space in which private foundations, research funding charities, and universities themselves are better able to operate.

HEFCE publishes ‘Consultation on the second Research Excellence Framework (REF 2021)’

“Let’s all meet up in the Year… 2021”

In my previous post I wrote about the Stern Review, and in particular the portability issue – whereby publications remained with the institution where they were written, rather than moving institutions with the researcher – which seemed by some distance the most vexatious and controversial issue, at least judging by my Twitter feed.

Since then there has been a further announcement about a forthcoming consultation exercise which would seek to look at the detail of the implementation of the Stern Review, giving a pretty clear signal that the overall principles and rationale had been accepted, and that Lord Stern’s comments that his recommendations were meant to be taken as a whole and were not amenable to cherry picking, had been heard and taken to heart.

Today – only ten days or so behind schedule – the consultation has been launched.  It invites “responses from higher education institutions and other groups and organisations with an interest in the conduct, quality, funding or use of research”. In paragraph 15, this invitation is opened out to include “individuals”. So as well as contributing to your university response, you’ve also got the opportunity to respond personally. Rather than just complain about it on Twitter.

Responses are only accepted via an online form, although the questions on that form are available for download as a Word document. There are 44 questions for which responses are invited, and although these are free-text fields, the format of the consultation is to solicit responses to very specific questions, as perhaps would be expected given that the consultation is about detail and implementation. Paragraph 10 states that

“we have taken the [research excellence] framework as implemented in 2014 as our starting position for this consultation, with proposals made only in those areas where our evidence suggests a need or desire for change, or where Lord Stern’s Independent Review recommends change. In developing our proposals, we have been mindful of the level of burden indicated, and have identified where certain options may offer a more deregulated approach than in the previous framework. We do not intend to introduce new aspects to the assessment framework that will increase burden.”

In other words, I think we can assume that 2014 plus Stern = the default and starting position, and I would be surprised if any radical departures from this resulted from the consultation. Anyone wanting to propose something radically different is wasting their time, even if the first question invites “comments on the proposal to maintain an overall continuity of approach with REF 2014.”

So what can we learn from the questions? The first thing that strikes me is that it’s a very detailed and very long list of questions on a lot of issues, some of which aren’t particularly contentious. But it’s indicative of an admirable thoroughness and rigour. The second is that they’re all about implementation. The third is that reduction of burden on institutions is a key criterion, which has to be welcome.

Units of Assessment 

It looks as if there’s a strong preference to keep UoAs pretty much as they are, though the consultation flags up inconsistencies of approach from institutions around the choice of which of the four Engineering Panels to submit to. Interestingly, one of the issues is comparability of outcome (i.e. league tables) which isn’t technically supposed to be something that the REF is concerned with – others draw up league tables using their own methodologies, there’s no ‘official’ table.

It also flags up concerns expressed by the panel about Geography and Archaeology, and worries about forensic science, criminology, and film and media studies, I think around subject visibility under current structures. But while some tweaks may be allowed, there will be no change to the current Main Panel/Sub Panel structure – so no sub-sub-panels – though one of the consultation possibilities is about sub-panels setting different sub-profiles for different areas that they cover.

Returning all research active staff

This section takes as a starting point that all research active staff will be returned, and seeks views on how to mitigate game-playing and unintended consequences. The consultation makes a technical suggestion around using HESA cost centres to link research active staff to units of assessment, rather than leaving institutions the flexibility to decide – to choose a completely hypothetical example drawn in no way from experience with a previous employer – to submit Economists and Educationalists into a beefed-up Business and Management UoA. This would reduce that element of game playing, but would also negatively affect those whose research identity doesn’t match their teaching/School/Department identity – say, bioethicists based in medical or veterinary schools, and those involved in area studies and another discipline (business, history, law) who legitimately straddle more than one school. A ‘get returned where you sit’ approach might penalise them and might affect an institution’s ability to tell the strongest possible story about each UoA.

As you’d expect, there’s also an awareness of very real worries about this requirement to return all research active staff leading to the contractual status of some staff being changed to teaching-only. Just as last time some UoAs played the ‘GPA game’ and submitted only their best and brightest, this time they might continue that strategy by formally taking many people out of ‘research’ entirely. They’d like respondents to say how this might be prevented, and make the point that HESA data could be used to track such wholesale changes, but presumably there would need to be consequences in some form, or at least a disincentive for doing so. But any such move would intrude onto institutional autonomy, which would be difficult. I suppose the REF could backdate the audit point for this REF, but it wouldn’t prevent such sweeping changes for next time. Another alternative would be to use the Environment section of the REF to penalise those with a research culture based around a small proportion of staff.

Personally, I’m just unclear how much of a problem this will be. Will there be institutions/UoAs where this happens and where whole swathes of active researchers producing respectable research (say, 2-3 star) are moved to teaching contracts? Or is the effect likely to be smaller, with perhaps smaller groups of individuals who aren’t research active or who perhaps haven’t been producing being moved to teaching and admin only? And again, I don’t want to presume that will always be a negative move for everyone, especially now we have the TEF on the horizon and we are now holding teaching in appropriate esteem. But it’s hard to avoid the conclusion that things might end up looking a bit bleak for people who are meant to be research active, want to continue to be research active, but who are deemed by bosses not to be producing.

Decoupling staff from outputs

In the past, researchers were returned with four publications minus any reductions for personal circumstances. Stern proposed that the number of publications to be returned should be double the number of research active staff, with each person being able to return between 0 and 6 publications. A key advantage of this is that it will dispense with the need to consider personal circumstances and reductions in the number of publications – straightforward in cases of early career researchers and maternity leaves, but less so for researchers needing to make the case on the basis of health problems or other potentially traumatic life events. Less admin, less intrusion, less distress.

One worry expressed in the document is about whether this will allow panel members to differentiate between very high quality submissions with only double the number of publications to be returned. But they argue that sampling would be required if a greater multiple were to be returned.

There’s also concern that allowing a maximum of six publications could allow a small number of superstars to dominate a submission, and a suggestion is that the minimum number moves from 0 to 1, so at least one publication from every member of research active staff is returned. Now this really would cause a rush to move those perceived – rightly or wrongly – as weak links off research contracts! I’m reminded of my MPhil work on John Rawls here, and his work on the difference principle, under which a just society seeks to maximise the minimum position in terms of material wealth – to have the richest poorest possible. Would this lead to a renewed focus on support for career young researchers, and for those struggling for whatever reason, in an attempt to increase the quality of the weakest paper in the submission and have the highest rated lowest rated paper possible?

Or is there any point in doing any of that, when income is only associated with 3 (just) and 4? Do we know how the quality of the ‘tail’ will feed into research income, or into league tables if it’s prestige that counts? I’ll need to think a bit more about this one. My instinct is that I like this idea, but I worry about unintended consequences (“Quick, Professor Fourstar, go and write something – anything – with Dr Career Young!”).
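The arithmetic of the decoupling rule is worth making concrete. Here’s a minimal sketch, with hypothetical function and parameter names of my own, of the Stern-style constraint: total outputs must equal twice the number of research active staff, with each person contributing between a minimum (0 as proposed, 1 in the variant discussed above) and a maximum of 6.

```python
def feasible_return(staff_count, per_person_counts, min_per_person=0, max_per_person=6):
    """Check a hypothetical output allocation against the Stern-style rule:
    the total must equal 2 x staff, and each person's count must fall
    within [min_per_person, max_per_person]."""
    if len(per_person_counts) != staff_count:
        return False
    # Per-person bounds: 0-6 as proposed, or 1-6 under the suggested variant.
    if any(not (min_per_person <= n <= max_per_person) for n in per_person_counts):
        return False
    # The unit must return exactly double the staff headcount in outputs.
    return sum(per_person_counts) == 2 * staff_count

# A five-person unit must return 10 outputs in total.
print(feasible_return(5, [6, 4, 0, 0, 0]))                    # True: two superstars carry the unit
print(feasible_return(5, [6, 4, 0, 0, 0], min_per_person=1))  # False: everyone must contribute
```

The two example calls illustrate the point in the text: under a minimum of 0, a couple of prolific researchers can supply the whole return, while a minimum of 1 rules that strategy out.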

Portability

On portability – whether a researcher’s publications move with them (as previously) or stay with the institution where they were produced (like impact) – the consultation first notes possible issues about what it doesn’t call a “transfer window” round about the REF census date. If you’re going to recruit someone new, the best time to get them is either at the start of a REF cycle or during the meaningless end-of-season games towards the end of the previous one. That way, you get them and their outputs for the whole season. True enough – but hard to see that this is worse than the current situation where someone can be poached in the 89th minute and bring all their outputs with them.

The consultation’s second concern is verification. If someone moves institution, how do we know which institution can claim what? As we found with open access, the point of acceptance isn’t always straightforward to determine, and that’s before we get into forms of output other than journal articles. I suppose my first thought is that point-of-submission might be the right point, as institutional affiliation would have to be provided, but then that’s self declared information.

The consultation document recognises the concern expressed about the disadvantage that portability may have for certain groups – early career researchers and (a group I hadn’t considered) people moving into/out of industry. Two interesting options are proposed: firstly, that publications are portable for anyone on a fixed-term contract (though this may inadvertently include some Emeritus Profs); and secondly, that they are portable for anyone who wasn’t returned to REF 2014.

One other non-Stern alternative is proposed – that proportionate publication sharing between old and new employer take place for researchers who move close to the end date. But this seems messy, especially as different institutions may want to claim different papers. For example, if Dr Nomad wrote a great publication with co-authors from Old and from New, neither would want it as much as a great publication that she wrote by herself or with co-authors from abroad. This is because both Old and New could still return that publication without Dr Nomad, since each has co-authors who could claim it, and publications can only be returned once per UoA, though perhaps multiple times by different UoAs.
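The once-per-UoA rule behind the Dr Nomad example can be sketched as a toy model. All names here are hypothetical illustrations, not anything from the consultation: each output lists the (institution, UoA) pairs that could claim it through an author, and using a set per institution/UoA enforces “returned at most once per UoA”.

```python
from collections import defaultdict

# Hypothetical outputs: each maps to the (institution, UoA) pairs whose
# authors could claim it. Old and New can each claim the co-authored paper
# through their own co-authors, with or without Dr Nomad.
outputs = {
    "co-authored paper": [("Old", "UoA17"), ("New", "UoA17")],
    "solo paper":        [("New", "UoA17")],
}

returns = defaultdict(set)
for title, claimants in outputs.items():
    for institution, uoa in claimants:
        # A set means each output counts at most once per institution/UoA.
        returns[(institution, uoa)].add(title)

print(sorted(returns[("Old", "UoA17")]))  # ['co-authored paper']
print(sorted(returns[("New", "UoA17")]))  # ['co-authored paper', 'solo paper']
```

The sketch shows why the sharing proposal gets messy: the same co-authored paper legitimately appears in both institutions’ returns, so “splitting” it between them adds nothing that the existing co-author rules don’t already allow.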

Overall though – that probable non-starter aside – I’d say portability is happening, and it’s just a case of how to protect career young researchers. And either non-return last time, or fixed term contract = portability seem like good ideas to me.

Interestingly, there’s also a question about whether impact should become portable. It would seem a bit odd to me if impact and publications were to swap over in terms of portability rules, so I don’t see impact becoming portable.

Impact

I’m not going to say too much about impact here and now – this post is already too long, and I suspect someone else will say it better.

Miscellaneous 

Other than that… should ORCID be mandatory? Should Category C (staff not employed by the university, but who research in the UoA) be removed as an eligible category? Should there be a minimum fraction of FTE to be returnable (to prevent overseas superstars being returnable on slivers of contracts)? What exactly is a research assistant anyway? Should a reserve publication be allowed when publication of a returned article is expected horrifyingly close to the census date? Should quant data be used to support assessment in disciplines where it’s deemed appropriate? Why do birds suddenly appear, every time you are near, and what metrics should be used for measuring such birds?

There’s a lot more to say about this, and I’ll be following discussions and debates on Twitter with interest. If time allows I’ll return to this post or write some more, less knee-jerky comments over the next days and weeks.