Prêt-à-non-portability? Implications and possible responses to the phasing out of publication portability

“How much has been decided about the REF? About this much. And how much of the REF period is there to go? Well, again…”

Last week I attended an Open Forum Events one-day conference with the slightly confusing title ‘Research Impact: Strengthening the Excellence Framework’ and gave a short presentation with the same title as this blog post. It was a very interesting event with some great speakers (and me), and I was lucky enough to meet up with quite a few people I only previously ‘knew’ through Twitter. I’d absolutely endorse Sarah Hayes’ blogpost for Research Whisperer about the benefits of social media for networking for introverts.

Oh, and if you’re an academic looking for something approaching a straightforward explanation of the REF, can I recommend Charlotte Mathieson’s excellent blog post. For those of you after in-depth half-baked REF policy stuff, read on…

I was really pleased with how the talk went – it’s one thing writing up summaries and knee-jerk analyses for a mixed audience of semi-engaged academics and research development professionals, but it’s quite another giving a REF-related talk to a room full of REF experts. It was based in part on a previous post I’ve written on portability, but my views (and what we know about the REF) have moved on since then, so I thought I’d have a go at summarising the key points.

I started by briefly outlining the problem and the proposed interim arrangements before looking at the key principles that needed to form part of any settled solution on portability for the REF after next.

Why non-portability? What’s the problem?

I addressed most of this in my previous post, but I think the key problem is that it turns what ought to be something like a football league season into an Olympic event. With a league system, the winner is whoever earns the most points over a long, drawn out season. Three points is three points, whatever stage of the season it comes in. With Olympic events, it’s all about peaking at the right time during the cycle – and in some events within the right ten seconds of that cycle. Both are valid as sporting competition formats, but for me, Clive, the REF should be more like a league season than a contest to see who can peak best on census day. And that’s what the previous REF rules encouraged – fractional short-term appointments around the census date; bulking out the submission then letting people go afterwards; rent-seeking behaviour from some academics holding their institution to ransom; poaching and instability; transfer window effects on mobility; and panic buying.

If the point of the REF is to reward sustained excellence over the previous REF cycle with funding to institutions to support research over the next REF cycle, surely it’s a “league season” model we should be looking at, not an Olympic model. The problem with portability is that it’s all about who each unit of assessment has under contract and able to return at the time, even if that’s not a fair reflection of their average over the REF cycle. So if a world class researcher moves six months before the REF census date, her new institution would get REF credit for all of her work over the last REF cycle, and the one which actually paid her salary would get nothing in REF terms. Strictly speaking, this isn’t a problem of publication portability, it’s a problem of publication non-retention. Of which more later.

I summarised what’s being proposed as regards portability as a transition measure in my ‘Initial Reactions’ post, but briefly, by far the most likely outcome for this REF is one that retains full portability and full retention. In other words, when someone moves institution, she takes her publications with her and leaves them behind. I’m going to follow Phil Ward of Fundermentals and call these Schrödinger’s Publications, but as HEFCE point out, plenty of publications were returned multiple times by multiple institutions in the last REF, as each co-author could return it for her institution. It would be interesting to see what proportion of publications were returned multiple times, and what the record is for the number of times that a single publication has been submitted.

Researcher Mobility is a Good Thing

Marie Curie and Mr Spock have more in common than radiation-related deaths – they’re both examples of success through researcher mobility. And researcher mobility is important – it spreads ideas and methods, allows critical masses of expertise to be formed. And researchers are human too, and are likely to need to relocate for personal reasons, are entitled to seek better paid work and better conditions, and might – like any other employee – just benefit from a change of scene.

For all these reasons, future portability rules need to treat mobility as positive, and as a human right. We need to minimise ‘transfer window’ effects that force movement into specific stages of the REF cycle – although it’s worth noting that plenty of other professions have transfer windows – teachers, junior doctors (I think), footballers, and probably others too.

And for this reason, and for reasons of fairness, publications from staff who have departed need to be assessed in exactly the same way as publications from staff who are still employed by the returning UoA. Certainly no UoA should be marked down or regarded as living on past glories for returning as much of the work of former colleagues as they see fit.

Render unto Caesar

Institutions are entitled to a fair return on investment in terms of research, though as I mentioned earlier, it’s not portability that’s the problem here so much as non-retention. As Fantasy REF Manager I’m not that bothered by someone else submitting some of my departed star player’s work written on my £££, but I’m very much bothered if I can’t get any credit for it. Universities are given funding on the basis of their research performance as evaluated through the previous REF cycle to support their ongoing endeavours in the next one. This is a really strong argument for publication retention, and it seems to me to be the same argument that underpins impact being retained by the institution.

However, there is a problem which I didn’t properly appreciate in my previous writings on this. It’s the investment/divestment asymmetry issue, as absolutely no-one except me is calling it. It’s an issue not for the likely interim solution, but for the kind of full non-portability system we might have for the REF after next.

In my previous post I imagined a Fantasy REF Manager operating largely a one-in, one-out policy – thus I didn’t need new appointees’ publications because I got to keep their predecessors’. And provided that staff mobility was largely one-in, one-out, that’s fine. But it’s less straightforward if it’s not. At the moment the University of Nottingham is looking to invest in a lot of new posts around specific areas (“beacons”) of research strength – really inspiring projects, such as the new Rights Lab which aims to help end modern slavery. And I’m sure plenty of other institutions have similar plans to create or expand areas of critical mass.

Imagine a scenario where I as Fantasy REF Manager decide to sack a load of people immediately prior to the REF census date. Under the proposed rules I get to return all of their publications and I can have all of the associated income for the duration of the next REF cycle – perhaps seven years’ funding. On the other hand, if I choose to invest in extra posts that don’t merely replace departed staff, it could be a very long time before I see any return, via REF funding at least. It’s not just that I can’t return their publications that appeared before I recruited them, it’s that the consequences of not being able to return a full REF cycle’s worth of publications will have funding implications for the whole of the next REF cycle. The no-REF-disincentive-to-divest and long-lead-time-for-REF-reward-for-investment looks lopsided and problematic.

If I’m a smart Fantasy REF Manager, I’ll save up my redundancy axe wielding (at worst) or recruitment freeze (at best) for the end of the REF cycle, and I’ll be looking to invest only right at the beginning of the REF cycle. I’ve no idea what the net effect of all this will be repeated across the sector, but it looks to me as if non-portability just creates new transfer windows and feast and famine around recruitment. And I’d be very worried if universities end up delaying or cancelling or scaling back major strategic research investments because of a lack of REF recognition in terms of new funding.

Looking forward: A settled portability policy

A few years back, HEFCE issued some guidance about Open Access and its place in the coming REF. They did this more or less ‘without prejudice’ to any other aspect of the REF – essentially, whatever the rest of the REF looks like, these will be the open access rules. And once we’ve settled the portability rules for this time (almost certainly using the Schrödinger’s publications model), I’d like to see them issue some similar ‘without prejudice’ guidelines for the following REF.

I think it’s generally agreed that the more complicated but more accurate model that would allow limited portability and full retention can’t be implemented at such short notice. But perhaps something similar could work with adequate notice and warning for institutions to get the right systems in place, which was essentially the point of the OA announcement.

I don’t think a full non-portability, full-retention system as currently envisaged could work without some finessing, and every bit of finessing for fairness comes at the cost of complication. As well as the investment/divestment asymmetry problem outlined above, there are other issues too.

The academic ‘precariat’ – those on fixed term/teaching only/fractional/sessional contracts – need special rules. An institution employing someone to teach one module with no research allocation surely shouldn’t be allowed to return that person’s publications. One option would be to say something like ‘teaching only’ = full portability, no retention; and ‘fixed term with research allocation’ = the Schrödinger system of publications being retained and being portable. Granted, this opens the door to other games (perhaps turning down a permanent contract to retain portability?), but I don’t think these are as serious as the current games, and I’m sure they could be finessed.

While I argued previously that career young researchers have more to gain than to lose from a system whereby appointments are made more on potential than on track record, the fact that so many are as concerned as they are means that there needs to be some sort of reassurance or allowance for those not in permanent roles.

Disorder at the border. What happens to publications written on Old Institution’s Time, but eventually published under New Institution’s affiliation? We can also easily imagine publication filibustering, whereby researchers delay publication to maximise their position in the job market. Not only are delays in publication bad for science, but there’s also the potential for inappropriate pressure to be applied by institutions to hold something back/rush something out. It could easily put researchers in an impossible position, and has the potential to poison relationships with previous employers and with new ones. Add in the possible effects of multiple job moves on multi-author publications and this gets messy very quickly.

One possible response to this would be to allow a portability/retention window that goes two ways – so my previous institution could still return my work published (or accepted) up to (say) a year after my official leave date. Of course, this creates a lot of admin, but it’s entirely up to my former institution whether it thinks that it’s worth tracking my publications once I’ve gone.

What about retired staff? As far as I can see there’s nothing in any documents about the status of the publications of retired staff, either in this REF or in any future plans. The logic should be that they’re returnable in the same way as those of any other researcher who has left during the REF period. Otherwise we’ll end up with pressure on staff to stay on, and perhaps other kinds of odd incentives not to appoint people who might retire before the end of a REF cycle.

One final suggestion…

One further half-serious suggestion… if we really object to game playing, perhaps the only fair way to properly reward excellent research and impact and to minimise game playing is to keep the exact rules of the REF a secret for as long as possible in each cycle, forcing institutions just to focus on “doing good stuff” and to worry less about gaming the REF.

  • If you’re really interested, you can download a copy of my presentation … but if you weren’t there, you’ll just have to wonder about the blank page…

Initial Reactions to HEFCE’s ‘Initial decisions on REF 2021’

This lunchtime HEFCE have announced some more “Initial Decisions” on REF 2021, which I’ve summarised below.

Slightly frustratingly, the details are scattered across a few documents, and it’s easy to miss some of them. There’s an exec summary, a circular letter (which is more of a rectangle, really), the main text of the report that can be downloaded from the bottom of the exec summary page (along with an annex listing UoAs and further particulars for panel chair roles)… and annex A on a further consultation on staff returns and output portability, downloadable from the bottom of the circular letter page.

I’ve had a go at a quick summary, by bullet point theme rather than in the order they appear, or in a grand narrative sweep. This is one of my knee-jerk pieces, and I’ve added a few thoughts of my own. But it’s early days, and possibly I’ve missed something or misunderstood, so please let me know.

Outputs

  • Reserve output allowed where publication may not appear in time
  • Worth only 60% of total mark this time (see scoring system)

I think the reduction in the contribution of outputs to the overall mark (in favour of impact) is probably what surprised me most, and I suspect this will be controversial. I think the original plan was for environment to be downgraded to make way, but there’s a lot more demanded from the environment statement this time (see below), so it’s been protected. Great to have the option of submitting an insurance publication in case one of the in-press ones doesn’t appear by close of play.

Panels/Units of Assessment

  • Each sub-panel to have at least one appointed member for interdisciplinary research “with a specific role to ensure its equitable assessment”. A new identifier/flag to capture interdisciplinary outputs
  • Single UoA for engineering, multiple submissions allowed
  • Archaeology split from Geography and Environmental studies – now separate
  • Film and Screen Studies to be explicitly included in UoA 33 with Dance, Drama, Performing Arts
  • Decisions on forensic science and criminology (concerns about visibility) due in Autumn
  • Mapping staff to UoAs will be done by institutions, not by HESA cost centres, but HEFCE may ask for more info in the event of any “major variances” from HESA data.

What do people think about a single UoA for engineering? That’s not an area I support much. Is this just tidying up, or does this have greater implications? Is it ironic that forensic science and criminology have been left with a cop show cliffhanger ending?

Environment

  • Expansion of Unit of Assessment environment section to include sections on:
    • Structures to support interdisciplinary research
    • Supporting collaboration with “organisations beyond higher education”
    • Impact template will now be in the environment element
    • Approach to open research/open access
    • Supporting equality and diversity
  • More quant data in UoA environment template (we don’t know what yet)
  • Standard Institution level information
  • Non-assessed invite only pilot for institution level environment statement
  • Expansion of environment section is given as a justification for maintaining it at 15% of score rather than reducing as expected.

The inclusion of a statement about support for interdisciplinary work is interesting, as this moves beyond merely addressing justifiable criticism about the fate of interdisciplinary research (see the welcome addition to each UoA of an appointed ‘Member for Interdisciplinarity’ above). This makes it compulsory, and an end in itself. This will go down better in some UoAs than others.

Impact

  • Institutional level impact case studies will be piloted, but not assessed
  • Moves towards unifying definitions of “impact” and “academic impact” between REF and Research Councils – both part of dual funding system for research
  • Impact on teaching/curriculum will count as impact – more guidance to be published
  • Underpinning work “at least equivalent to 2*” and published between 1st Jan 2000 and 31st Dec 2020. Impact must take place between 1st Aug 2013 and 31st July 2020
  • New impact case study template, more questions asked, more directed, more standardised, more “prefatory” material to make assessment easier.
  • Require “routine provision of audit evidence” for case study templates, but not given to panel
  • Uncertain yet on formula for calculating number of case study requirements, but overall “should not significantly exceed… 2014”. Will be done on some measure of “volume of activity”, possibly outputs
  • Continuation of case studies from 2014 is allowed, but must meet date rules for both impact and publication, need to declare it is a continuation.
  • Increased to 25% of total score

And like a modern day impact superhero, here comes Mark Reed aka Fast Track Impact with a blog post of his own on the impact implications of the latest announcement. I have to say that I’m pleased that we’re only having a pilot for institutional case studies, because I’m not sure that’s a go-er.

Assessment and Scoring system

  • Sub-panels may decide to use metrics/citation data, but will set out criteria statements stating whether/how they’ll use it. HEFCE will provide the citation data
  • As 2014, overall excellence profile, 3 sub-profiles (outputs, impact, environment)
  • Five point scale from unclassified to 4*
  • Outputs 60, Impact 25, Environment 15. The increase of impact to 25 has come at the expense of outputs, as the extra environment info being sought means environment has been protected.

There was some talk of a possible necessity for a 5* category to be able to differentiate at the very top, but I don’t think this gained much traction.
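Mechanically, the stated weightings just give a weighted combination of the three sub-profiles. A minimal sketch of that arithmetic (my own illustration with made-up sub-profiles – the weights are from the announcement, but this is the obvious weighted average, not HEFCE’s published calculation method):

```python
# Weights as announced: outputs 60%, impact 25%, environment 15%.
WEIGHTS = {"outputs": 0.60, "impact": 0.25, "environment": 0.15}

def overall_profile(sub_profiles):
    """Combine sub-profiles (percent of activity at each star level)
    into an overall quality profile using the REF 2021 weights."""
    levels = ["4*", "3*", "2*", "1*", "u/c"]
    return {
        level: round(sum(WEIGHTS[part] * profile[level]
                         for part, profile in sub_profiles.items()), 1)
        for level in levels
    }

# An entirely made-up unit of assessment:
subs = {
    "outputs":     {"4*": 30, "3*": 40, "2*": 20, "1*": 10, "u/c": 0},
    "impact":      {"4*": 40, "3*": 40, "2*": 20, "1*": 0,  "u/c": 0},
    "environment": {"4*": 50, "3*": 50, "2*": 0,  "1*": 0,  "u/c": 0},
}
print(overall_profile(subs))
# e.g. 4* share = 0.60×30 + 0.25×40 + 0.15×50 = 35.5
```

The point the sketch makes visible is how hard outputs still dominate: even at 60% rather than 65%, a swing in the outputs sub-profile moves the overall profile more than twice as much as the same swing in impact.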

But on the really big questions… further consultation (deadline 29th Sept):

There’s been some kicking into the long grass, but things are looking a bit clearer…

(1) Staff submission:

All staff “with a significant responsibility to undertake research” will be submitted, but “no single indicator identifies those within the scope of the exercise”. Institutions have the option of submitting 100% of staff who meet the core eligibility requirements OR coming up with a code of practice that they’ll use to decide who is eligible. Auditable evidence will be required, and institutions can choose different options for different UoAs.

Proposed core eligibility requirements – staff must meet all of the following:

  • “have an academic employment function of ‘research only’ or ‘teaching and research’
  • are independent researchers [i.e. not research assistants unless ‘demonstrably’ independent]
  • hold minimum employment of 0.2 full time equivalent
  • have a substantive connection to the submitting institution.”

I like this as an approach – it throws the question back to universities, and leaves it up to them whether they think it’s worth the time and trouble running an exercise in one or more UoAs. And I think the proposed core requirements look sensible, and faithful to the core aim which is to maximise the number of researchers returned and prevent the hyper selectivity game being played.
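Since the four proposed requirements must all be met, they amount to a simple conjunction. Something like this sketch (the field names and example records are my own invention for illustration, not anything from the consultation documents):

```python
# Illustrative check of the four proposed core eligibility requirements.
# All field names here are hypothetical.

def core_eligible(staff):
    """True if a staff record meets all four proposed core requirements."""
    return (
        staff["function"] in {"research only", "teaching and research"}
        and staff["independent_researcher"]   # not an RA, unless demonstrably independent
        and staff["fte"] >= 0.2               # minimum 0.2 full-time equivalent
        and staff["substantive_connection"]   # to the submitting institution
    )

alice = {"function": "teaching and research", "independent_researcher": True,
         "fte": 0.5, "substantive_connection": True}
bob = {"function": "teaching only", "independent_researcher": True,
       "fte": 1.0, "substantive_connection": True}
print(core_eligible(alice), core_eligible(bob))  # True False
```

Note that the interesting policy questions all sit inside the predicates – what counts as “demonstrably independent” or a “substantive connection” is exactly what the codes of practice will have to define.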

(2) Transition arrangements for non-portability of publications.

HEFCE are consulting on either:

(a) “The simplified model, whereby outputs would be eligible for return by the originating institution (i.e. the institution where the research output was demonstrably generated and at which the member of staff was employed) as well as by the newly employing institution”.
or
(b) “The hybrid approach, with a deadline (to be determined), after which a limited number of outputs would transfer with staff, with eligibility otherwise linked to the originating institution. (This would mean operating two rules for portability in this exercise: the outputs of staff employed before the specified date falling under the 2014 rules of full portability; outputs from staff employed after this date would fall under the new rules.)”

I wrote a previous post on portability and non-portability when the Stern Review was first published, which I still think is broadly correct.

I wonder how simple the simplified model will be… if we end up having to return n=2 publications, and choose those publications from a list of everything published by everyone while they worked here. But it’s probably less work than having a cut-off date.

More to follow….

Mistakes in Grant Writing, part 95 – “The Gollum”

Image: Alonso Javier Torres [CC BY 2.0] via Flickr
A version of this article first appeared in Funding Insight on 20th July 2017 and is reproduced with kind permission of Research Professional. For more articles like this, visit  www.researchprofessional.com
* * * * * * * * * * * * * * * * * * * *

Previously I’ve written about the ‘Star Wars Error’ in grant writing, and my latest attempt to crowbar popular culture references into articles about grant writing mistakes is ‘the Gollum’. Gollum is a character from Lord of the Rings, a twisted, tortured figure – wicked and pitiable in equal measure. He’s an addict whose sole drive is possession of the Ring of Power, which he loves and hates with equal ferocity. A little like me and my thesis.

Only begotten

For current purposes, it’s his cry of “my precious!” and obsession with keeping the Ring for himself that I’m thinking of in terms of an analogy with research grant applicants, rather than (spoilers) eating raw fish, plunging into volcanoes, or murdering friends. Even in the current climate of ‘demand management’, internal peer review, and research development support, there are still researchers who treat their projects as their “precious” and are unable or unwilling to share them or to seek comment and feedback.

It’s easy to understand why – there’s the fear of being scooped and of someone else taking and using the idea. There’s the fear of public failure – with low success rates, a substantial majority of applications will be unsuccessful, and perhaps the thought is that if one is going to fail, few people should know about it. And let’s not pretend that internal review/filtering processes don’t at least raise questions about academic freedom.

Power play

But there are other fears. The first is about sabotage or interference from colleagues who might be opposed to the research, whether through ideological and methodological differences, or because they’re on the other side of some major scientific controversy. In my experience, this concern has been largely unfounded. I’ve been very fortunate to work with senior academics who are very clear about their role as internal reviewer, which is to further improve competitive applications and ideas, while filtering out or diverting uncompetitive ideas, or applications that simply aren’t ready. But while internal reviewers will have their views, I’ve not seen anyone let that power go to their heads.

Enough of experts

Second, if the concern isn’t about integrity or (unconscious) bias, it might be about background or knowledge. One view I’ve encountered – mainly unspoken, but occasionally spoken and once shouted – is that no-one else at the institution has the expertise to review their proposal and therefore internal review is a waste of time.

It might well be true that no-one else internally has equivalent expertise to the applicant, and (apart from early career researchers) that’s to be expected and welcomed. But if it’s true internally, it might also be true of the external expert reviewers chosen by the funder, and it’s even more likely to be true of the people on the final decision-making panel. The chances are that the principal applicant on any major project is one of the leaders in that field, and even if she regards a handful of others as appropriate reviewers, there’s absolutely no guarantee that she’ll get them.

Significant other

Ultimately, the purpose of a funding application is to convince expert peer reviewers from the same or cognate discipline and a much broader panel of distinguished scientists of the superior merits of your ideas and the greater significance of your research challenge compared to rival proposals. Because once the incompetent and the unfeasible have been weeded out – it’s all about significance.

A quality internal peer review process will mirror those conditions as closely as possible. It doesn’t matter that internal reviewer X isn’t from the same field and knows little about the topic – what’s of use to the applicant is what X makes of the application as a senior academic from another (sub)discipline. Can she understand the research challenges, why they’re significant and important? Does the application tell her exactly what the applicant proposes to do? What’s particularly valuable are creative misunderstandings – if an internal reviewer has misunderstood a key point or misinterpreted something, a wise applicant will return to the application and seek to identify the source of that misunderstanding and head it off, rather than just dismissing the feedback out of hand.

Forever alone

And that’s without touching on the value that research development support can add. People in my kind of role who may not be academics, but who have seen a great many grant applications over the years. People who aren’t academic experts, but who know when something isn’t clear, or doesn’t make sense to the intelligent lay person.

Most institutions that take research seriously will offer good support to their researchers. Despite this, there are still researchers who only engage with others where they absolutely must, and take little notice of feedback or experience during the grant application process. Do they really think that others are unworthy of gazing upon the magnificence of The Precious?

I’d like to urge them here to turn back, to take the advice and feedback that’s on offer, lest they end up wandering the dark places of the world, alone and unfunded.

“Once more unto the breach” – Should I resubmit my unsuccessful research grant application?

This article first appeared in Funding Insight on 11th May 2017 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
* * * * * * * * * * * * * * * * * * * *

Should I resubmit my unsuccessful research grant application?

No.

‘No’ is the short answer – unless you’ve received an invitation or steer from the funder to do so. Many funders don’t permit uninvited resubmissions, so the first step should always be to check your funder’s rules and definitions of resubmission with your research development team.

To be, or not to be

That’s not to say that you should abandon your research proposal – more that it’s a mistake to think of your next application on the same or similar topic as a resubmission. It’s much better – if you do wish to pursue it – to treat it as a fresh application and to give yourself and your team the opportunity to develop your ideas. It’s unlikely that nothing has changed between the date of submission and now. It’s also unlikely that nothing could be improved about the underpinning research idea or the way it was expressed in the application.

However, sometimes the best approach is to let an idea go, cut your losses, and avoid the sunk cost fallacy. Onwards and upwards to the next idea. I was recently introduced to the concept of a “negative CV”, which is the opposite of a normal CV, listing only failed grant applications, rejected papers, unsuccessful conference pitches and job market rejections. Even the most eminent scholars have lengthy negative CVs, and there’s no shame in being unsuccessful, especially as success rates are so low. It’s really difficult – you’ve got your team together, you’ve been through the discussions and debates and the honing of your idea and then the grant writing, and then the disappointment of not getting funded. It’s very definitely worth having meetings and discussions to see what can be salvaged and repurposed – publishing literature reviews, continuing to engage with stakeholders and so on. It’s only natural to look for some other avenue for your work, but sometimes it’s best to move on to something else.

Here are two bits of wisdom that are both true in their own way:

  • If at first you don’t succeed, try, try, try again (William Edward Hickson)
  • The definition of insanity is doing the same thing over and over but expecting different results (disputed – perhaps Einstein or Franklin, but I reckon US Narcotics Anonymous)

So what should you do? What factors should you consider in deciding whether to rise from the canvas like Rocky, or instead emulate Elsa and Let It Go?

What being unsuccessful means… and what it doesn’t

As a Canadian research council director once said, research funding is a contest, not a test. Research funding is a limited commodity, like Olympic medals, jobs, and winning lottery tickets. It’s not an unlimited commodity like driving licences or PhDs, which everyone who reaches the required standard can obtain. Sometimes I think researchers confuse the two – if the driving test examiner says I failed on my three point turn, and I get it right next time (and make no further mistakes), I’ll pass. But even if I respond adequately to all of the points made in the referees’ comments, there’s still no guarantee I’ll get funded. The quality of my driving in the morning doesn’t affect your chances of passing your test in the afternoon, but if too many applications are better than yours, you won’t get funded. And just as many recruitment exercises produce more appointable candidates than posts, so funding calls attract far more fundable applications than the funds available.

Sometimes referees’ comments can be misinterpreted. Feedback might list the real or perceived faults with the application, but (once the fundamentally flawed have been excluded) ultimately it’s a competition about significance. What significance means is defined by the funder and the scheme and doesn’t necessarily mean impact – it could be about academic significance, contribution to the field and so on.

As a public panel member for an NIHR scheme I’ve seen this from the inside – project proposals which are technically competent, sensible and feasible. Yet either because they fail to articulate the significance or because their research challenge is just not that significant an issue, they don’t get funded because they’re not competitive against similarly competent applications taking on much more significant and important research challenges. Feedback is given which would have improved the application, but simply addressing that feedback will seldom make it any more competitive.

When major Research Centre calls come out, I often have conversations with colleagues who have great ideas for perfectly formed projects which unfortunately I don’t think are significant enough to be one of three or four funded across the whole of social sciences. Ideally the significance question, the “so what/who cares?” question should be posed before applying in the first place, but you should definitely look again at what was funded and ask it again of your project before considering trying to rework it.

Themed Calls Cast a Long Shadow

One of the most dispiriting grant application experiences is rejection from a targeted call which seemed perfect. It’s not like an open call where you have to compete with rival bids on significance from all across your research council’s remit – rather, the significance is already recognised.

Yet the reality is that narrower calls often have similarly low success rates. Although they’re narrower, everyone who can pile in, does pile in. And deciding what to do next is much harder. Themed calls cast a long shadow – if as a funder I’ve just made a major investment in field X through niche call Y, I’m not sure how I’m going to feel about an X-related application coming back in through the open call route. Didn’t we just fund a lot of this stuff? Should we fund more, especially if an idea like this was unsuccessful last time? Shouldn’t we support something else? And I think this effect might be true even with different funders who will be aware of what’s going on elsewhere. If a tranche of projects in your research area have been funded through a particular call, it’s going to be very difficult to get investment through any other scheme anytime soon.

Switching Calls, Switching Funders

An exception to this might be the Global Challenges Research Fund or perhaps other areas where there’s a lot of funding available (relatively speaking) and a number of different calls with slightly different priorities. Being unsuccessful with an application to an open call or a broader call and then looking to repurpose the research idea in response to a narrower themed call is more likely to pay off than the other way round, moving from a specific call to a general one. But even so, my advice would be to ban the “r” word entirely. It’s not a ‘resubmission’, it’s an entirely new application written for a different funding scheme with different priorities, even if some of the underlying ideas are similar.

This goes double when it comes to switching funders. A good way of wasting everyone’s time is trying to crowbar a previously unsuccessful application into the format required by a different funder. Different funders have different priorities and different application procedures, formats and rules, and so you must treat it as a fresh application. Not doing so is a bit like getting out some love letters you sent to a former paramour, changing the name at the top, and reposting them to the current object of your affections. Neither will end well.

The Leverhulme Trust are admirably clear on this point: they’re “keen to avoid assuming the role of ‘funder of last resort’; that is, of routinely providing support for proposals which have been fully matched to the requirement of another funding agency, but have failed to win support on the grounds of either lack of quality or insufficient available funds.” If you’re going to apply to the Leverhulme Trust, for example, make it a Leverhulme-y application, and that means shifting not just the presentational style but also the substance of what you’re proposing.

Whatever the change, forget any notion of resubmission if you’re taking an idea from one call to another. Yes, you may be able to reuse some of your previous materials, but if you submit something clearly written for another call with the crowbar marks still visible, you won’t get funded.

The Five Stages of Grant Application Failure

I’m reluctant to draw this comparison, but I wonder if responding to grant application rejection is a bit like moving through the Kübler-Ross stages of grief (denial, anger, bargaining, depression, and acceptance). Perhaps one question to ask yourself is whether your resubmission plans are coming from a position of acceptance – in which case fine, but don’t regard it as a resubmission – or are part of the bargaining stage. In which case… perhaps take a little longer to decide what to do.

Further reading: What to do if your grant application is unsuccessful. Part 1 – What it Means and What it Doesn’t and Part 2 – Next Steps.

‘Unimaginative’ research funding models and picking winners

XKCD 1827 – Survivorship Bias  (used under Creative Commons Attribution-NonCommercial 2.5 License)

Times Higher Education recently published an interesting article by Donald Braben, endorsed by 36 eminent scholars including a number of Nobel laureates. They criticise “today’s academic research management” and claim that as an unforeseen consequence, “exciting, imaginative, unpredictable research without thought of practical ends is stymied”. The article fires off somewhat scattergun criticism of the usual bêtes noires – the inherent conservatism of peer review; the impact agenda and the lack of funding for blue skies research; and grant application success rates.

I don’t deny that there’s a lot of truth in their criticisms, but when it comes to research policy and deciding how best to use limited resources, it’s all a bit more complicated than that.

Picking Winners and Funding Outsiders

Look, I love an underdog story as much as the next person. There’s an inherent appeal in the tale of the renegade scholar, the outsider, the researcher who rejects the smug, cosy consensus (held mainly by old white guys) and whose heterodox ideas – considered heretical nonsense by the establishment – are ultimately triumphantly vindicated. Who wouldn’t want to fund someone like that? Who wouldn’t want research funding to support the most radical, most heterodox, most risky, most amazing-if-true research? I think I previously characterised such researchers as a combination of Albert Einstein and Jimmy McNulty from ‘The Wire’, and it’s a really seductive picture. Perhaps this is part of the reason for the MMR fiasco.

The problem is that the most radical outsiders are functionally indistinguishable from cranks and charlatans. Are there many researchers with a more radical vision than the homeopath, whose beliefs imply not only that much of modern medicine is misguided, but that so is our fundamental understanding of the physical laws of the universe? Or the anti-vaxxers? Or the holocaust deniers?

Of course, no-one is suggesting that these groups be funded, and, yes, I’ll admit it’s a bit of a cheap shot aimed at a straw target. But even if we can reliably eliminate the cranks and the charlatans, we’ll still be left with a lot of fringe science. An accompanying THE article quotes Dudley Herschbach, joint winner of the 1986 Nobel Prize for Chemistry, as saying that his research was described as being at the “lunatic fringe” of chemistry. How can research funders tell the difference between lunatic ideas with promise (both interesting-if-true and interesting-even-if-not-true) and lunatic ideas that are just… lunatic? If it’s possible to pick winners, then great. But if not, it sounds a lot like buying lottery tickets and crossing your fingers. And once we’re into the business of applying a greater degree of scrutiny in picking winners, we’re back into having peer review again.

One of the things that strikes me about the history of science is how many stories there are of people who believed they were right – in spite of the scientific consensus and in spite of the state of the evidence available at the time – but who proceeded anyway, heroically ignoring objections and evidence, until ultimately vindicated. We remember these people because they were ultimately proved right – or rather, their theories were ultimately proved to have more predictive power than those they replaced.

But I’ve often wondered about such people. They turned out to be right, but were they right because of some particular insight, or were they right because they were lucky, in that their particular prejudice happened to line up with reality? Was it just that the stopped clock is right twice a day? Might their pig-headedness equally well have carried them along another (wrong) path entirely, leaving them to be forgotten as just another crank? And just because someone is right once, is there any particular reason to think that they’ll be right again? (Insert obligatory reference to Newton’s dabblings with alchemy here.) Are there good reasons for thinking that the people who predicted the last economic crisis will also predict the next one?

A clear way in which luck – interestingly rebadged as ‘serendipity’ – is involved is through accidental discoveries. Researchers are looking at X when… oh look at Y, I wonder if Z… and before you know it, you have a great discovery which isn’t what you were after at all. Free packets of post-it notes all round. Or when ‘blue skies’ research which had no obvious practical application at the time becomes a key enabling technology or insight later on.

The problem is that these stories of serendipity, of surprise impact, and of radical outsider researchers are all examples of lotteries in which history only remembers the winning tickets. Through an act of serendipity, XKCD published a cartoon illustrating this point nicely (see above) just as I was thinking about these issues.

But what history doesn’t tell us is how many lottery tickets research funding agencies have to buy in order to have those spectacular successes. And just as importantly, whether or not a ‘lottery ticket’ approach to research funding will ultimately yield a greater return on investment than a more ‘unimaginative’ approach to funding using the tired old processes of peer review undertaken by experts in the relevant field, followed by prioritisation decisions taken by a panel of eminent scientists drawn from across the funder’s remit. And of course, great successes achieved through this method – having a great idea, having the greatness of the idea acknowledged by experts, and then carrying out the research – make for a much less compelling narrative or origin story, probably to the point of invisibility.

A mixed ecosystem of conventional and high risk-high reward funding streams

I think there would be broad agreement that the research funding landscape needs a mixture of funding methods and approaches. I don’t take Braben and his co-signatories to be calling for wholesale abandonment of peer review, of themed calls around particular issues, or even of the impact agenda. And while I’d defend all those things, I similarly recognise merit in high risk-high reward research funding, and in attempts by major funders to try to address the problem of peer review conservatism. But how do we achieve the right balance?

Braben acknowledges that “some agencies have created schemes to search for potentially seminal ideas that might break away from a rigorously imposed predictability” and we might include the European Research Council and the UK Economic and Social Research Council as examples of funders who’ve tried to do this, at least in some of their schemes. The ESRC in particular abandoned traditional peer review on one scheme in favour of a Dragon’s Den-style pitch-to-peers format, and the EPSRC is making increasing use of sandpits.

It’s interesting that Braben mentions British Petroleum’s Venture Research Initiative as a model for a UCL pilot aimed at supporting transformative discoveries. I’ll return to that pilot later, but he also mentions that the one project that scheme funded was later funded by an unnamed “international benefactor”, which I take to be a charity, private foundation, or other philanthropic endeavour rather than a publicly funded research council or comparable organisation. I don’t think this is accidental – private companies have much more freedom to fund blue skies research and innovation, as long as the rest of the operation generates enough money to pay the bills and enough of their lottery tickets end up winning to keep management happy. The same goes for private foundations, which have near-total freedom to operate, apart perhaps from charity rules.

But I would imagine that it’s much harder for publicly funded research councils to take these kinds of risks, especially during austerity. (“Sorry Minister, none of our numbers came up this year, but I’m sure we’ll do better next time.”) In a UK context, the Leverhulme Trust – a happy historical accident funded largely through dividend payments from its bequeathed shareholding in Unilever – seeks to differentiate itself from the research councils by styling itself as more open to risky and/or interdisciplinary research, and could perhaps develop further in this direction.

The scheme that Braben outlines is genuinely interesting. Internal only within UCL, very light touch application process mainly involving interviews/discussion, decisions taken by “one or two senior scientists appointed by the university” – not subject experts, I infer, as they’re the same people for each application. Over 50 applications since 2008 have so far led to one success. There’s no obligation to make an award to anyone, and they can fund more than one. It’s not entirely clear from this article whether the applicant was – as Braben proposes for the kinds of schemes he calls for – “exempt from normal review procedures for at least 10 years. They should not be set targets either, and should be free to tackle any problem for as long as it takes”.

From the article I would infer that his project received external funding after 3 years, but I don’t want to pick holes in a scheme which is only partially outlined and which I don’t know any more about, so instead I’ll talk about Braben’s more general proposal, not the UCL scheme in particular.

It’s a lot of power to place in very few hands, and represents a very large and very blank cheque. While the use of interviews and discussion cuts down on grant writing time, my worry is that a small panel and interview-based decision making may open the door to unconscious bias and to greater success for more accomplished social operators. Anyone who’s been on many interview panels will probably have seen fellow panel members make heroic leaps of inference about candidates based on some deep intuition, and will have seen the tendency of some people to want to appoint the more confident and self-assured interviewee ahead of a visibly more nervous but far better qualified and more experienced rival. I have similar worries about sandpits as a way of distributing research funding – do the better social operators win out?

The proposal is for no normal review procedures, and for ten years in which to work, possibly longer. At Nottingham – as I’m sure at many other places – our nearest equivalent scheme is something like a strategic investment fund which can cover research as well as teaching and other innovations. (Here we stray into things I’m probably not supposed to talk about, so I’ll stop.) But these are major investments, and there’s surely got to be some kind of accountability in the decision-making process and some sort of stop-go criteria or review mechanism during the project’s life cycle. I’d say that the courage to start up a high risk-high reward research project has to be accompanied by the courage to shut it down too. And that’s hard, especially if livelihoods and professional reputations depend upon it – it’s a tough decision for those leading the work and for the funder too. But being open to the possibility of shutting down work implies a review process of some kind.

To be clear, I’m not saying let’s not have more high risk-high reward curiosity-driven research. By all means let’s consider alternative approaches to peer review, to decision making, and to project reporting. But I think high risk-high reward schemes raise a lot of difficult questions, not least what the balance should be between lottery ticket projects and ‘building society savings account’ projects. We need to be aware of the survivorship bias illustrated by the XKCD cartoon above, and aware that serendipity and vindicated radical researchers are both lotteries in which we only see the winning tickets. We also need to think very carefully about fair selection and decision making processes, and the danger of too much power and too little accountability in too few hands.

It’s all about the money, money, money…

But ultimately the problem is that there are a lot more researchers and academics than there used to be, and their numbers – in many disciplines – are determined not by the amount of research funding available, nor by the size of the research challenges, but by the demand for their discipline from taught-course students. And as higher education has expanded hugely since the days in which most of Braben’s “500 major discoveries” were made, there are just far more academics and researchers than there is funding to go around. And that’s especially true given recent “flat cash” settlements. I also suspect that the costs of research are now much higher than they used to be, given both the technology available and the technology required to push further at the boundaries of human understanding.

I think what’s probably needed is a mixed ecology of research funders and schemes. Publicly funded research bodies are probably not best placed to fund risky research because of accountability issues, and perhaps this is a space in which private foundations, research funding charities, and universities themselves are better able to operate.