Coping with rejection: What to do if your grant application is unsuccessful. Part 1: Understand what it means…. and what it doesn’t mean

You can't have any research funding. In this life, or the next....

Some application and assessment processes are for limited goods, and some are for unlimited goods, and it’s important to understand the difference.  PhD vivas and driving tests are assessments for unlimited goods – there’s no limit on how many PhDs or driving licences can be issued.  In principle, everyone could have one if they met the requirements.  You’re not going to fail your driving test because there are better drivers than you.  Other processes are for limited goods – there is (usually) only one job vacancy that you’re all competing for, only so many papers that a top journal can accept, and only so much grant money available.

You’d think this was a fairly obvious point to make.  But talking to researchers who have been unsuccessful with a particular application, there’s sometimes more than a hint of hurt in their voices as they discuss it, and talk in terms of their research being rejected, or not being judged good enough.  They end up taking it rather personally.  And given the amount of time and effort that researchers must put into their applications, that’s not surprising.

It reminds me of an unsuccessful job applicant whose opening gambit at a feedback meeting was to ask me why I didn’t think that she was good enough to do the job.  Well, my answer was that I was very confident that she could do the job, it’s just that there was someone more qualified and only one post to fill.  In this case, the unsuccessful applicant was simply unlucky – an exceptional applicant was offered the job, and nothing she could have said or done (short of assassination) would have made much difference.  While I couldn’t give the applicant the job she wanted or make the disappointment go away, I could at least pass on the panel’s unanimous verdict on her appointability.  My impression was that this restored some lost confidence, and did something to salve the hurt and disappointment.  You did the best that you could.  With better luck you’ll get the next one.

Of course, with grant applications, the chances are that you won’t get to speak to the chair of the panel who will explain the decision.  You’ll get a letter with the decision and something about how oversubscribed the scheme was and how hard the decisions were, which might or might not be true.  Your application might have missed out by a fraction, or been one of the first into the discard pile.

Some funders, like the ESRC, will pass on anonymised referees’ comments, but oddly, this isn’t always constructive and can even damage confidence in the quality of the peer review process.  In my experience, every batch of referees’ comments will contain at least one weird, wrong-headed, careless, or downright bizarre comment, and sometimes several.  Perhaps a claim about the current state of knowledge that’s just plain wrong, a misunderstanding that can only come from not reading the application properly, or a criticism on the spurious grounds that it’s not the project that they would have done.  These apples are fine as far as they go, but they should really taste of oranges.  I like oranges.

Don’t get me wrong – most referees’ reports that I see are careful, conscientious, and insightful, but it’s those misconceived criticisms that unsuccessful applicants will remember.  Even ahead of the valid ones.  And sometimes they will conclude that it’s those wrong criticisms that are the reason for not getting funded.  Everything else was positive, so that one negative review must be the reason, yes?  Well, maybe not.  It’s also possible that that bizarre comment was discounted by the panel too, and the reason that your project wasn’t funded was simply that the money ran out before they reached your project.  But we don’t know.  I really, really, really want to believe that that’s the case when referees write that a project is “too expensive” without explaining how or why.  I hope the panel read our carefully constructed budget and our detailed justification for resources and treated that comment with the fECing contempt that it deserves.

Fortunately, the ESRC have announced changes to procedures which not only allow a right of reply to referees’ comments, but also communicate the final grade awarded.  This should give a much stronger indication of whether it was a near miss or miles off.  Of course, the news that an application was miles off the required standard may come gift-wrapped with sanctions.  So it’s not all good news.

But this is where we should be heading with feedback.  Funders shouldn’t be shy about saying that the application was a no-hoper, and they should be giving as much detail as possible.  Not so long ago, I was copied into a lovely rejection letter, if there’s any such thing.  It passed on comments, included some platitudes, but also told the applicant what the overall ranking was (very close, but no cigar) and how many applications there were (many more than the team expected).  Now at least one of the comments was surprising, but we know the application was taken seriously and given a thorough review.  And that’s something….

So… in conclusion….  just because your project wasn’t funded doesn’t (necessarily) mean that it wasn’t fundable.  And don’t take it personally.  It’s not personal.  Just the business of research funding.

New year’s wishes….

The new calendar year is traditionally a time for reflection and for resolutions, but in a fit of hubris I’ve put together a list of resolutions I’d like to see for the sector, research funders, and university culture in general.  In short, for everyone but me.  But to show willing, I’ll join in too.

No more of the following, please….

1.  “Impactful”

Just…. no.  I don’t think of myself as a linguistic purist or a grammar-fascist, though I am a pedant for professional purposes.  I recognise that language changes and evolves over time, and I welcome changes that bring new colour and new descriptive power to our language.  While I accept that the ‘impact agenda’ is here to stay for the foreseeable future, the ‘impactful’ agenda need not be.  The technical case against this monstrosity of a word is outlined at Grammarist, but surely the aesthetic case is conclusive in itself.  I warn anyone using this word in my presence that I reserve the right to tell them precisely how annoyful they’re being.

2.  The ‘Einstein fallacy’

This is a misguided delusion that a small but significant proportion of academics appear to be suffering from.  It runs a bit like this:
1) Einstein was a genius
2) Einstein was famously absent-minded and shambolic in his personal organisation
3) Conclusion:  If I am or pretend to be absent-minded and shambolic, either:
(3a) I will be a genius; or
(3b) People will think I am a genius; or
(3c) Both.

I accept that some academics are genuinely bad at administration and organisation.  In some cases it’s a lack of practice/experience, in others a lack of confidence, and I accept that this is just not where their interests and talents lie.  Fair enough.  But please stop being deliberately bad at it to try to impress people.  Oh, and you can only act like a prima donna if you have the singing skills to back it up…

3)  Lack of predictability in funding calls

Yes, I’m looking at you, ESRC.  Before the comprehensive spending review and all of the changes that followed from that, we had a fairly predictable annual cycle of calls, very few of which had very early autumn deadlines.  Now we’re into a new cycle which may or may not be predictable, and a lot of them seem to be very early in the academic year.  Sure, let’s have one-off calls on particular topics, but let’s have a predictable annual cycle for everything else with as much advance notice as possible.  It’ll help hugely with ‘demand management’ because it’ll be much easier to postpone applications that aren’t ready if we know there will be another call.  For example, I was aware of a couple of very strong seminar series ideas which needed further work and discussion within the relevant research and research-user communities.  My advice was to start that work now using the existence of the current call as impetus, and to submit next year.  But we’ve taken a gamble, as we don’t know if there will be another call in the future, and you can’t tell me because apparently a decision has yet to be made.

4)  Lazy “please forward as appropriate” emails

Stuff sent to me from outside the Business School with the expectation that I’ll just send it on to everyone.  No.  Email overload is a real problem, and I write most of my emails with the expectation that I have ten seconds at most either to get the message across, or to earn an attention extension.  I mean, you’re not even reading this properly, are you?  You’re probably skim reading this in case there’s a nugget of wit amongst the whinging.  Every email I send creates work for others, and every duff, dodgy, or irrelevant email I send reduces my e-credit rating.  I know for a fact that at least some former colleagues deleted everything I sent without reading it – there’s no other explanation I can think of for missing two emails with the header including the magic words “sabbatical leave”.

So… will I be spending my e-credit telling my colleagues about your non-Business School related event which will be of interest to no-one?  No, no, and most assuredly no.  I will forward it “as appropriate”, if by “appropriate” you mean my deleted items folder.

Sometimes, though, a handful of people might be interested.  Or quite a lot of people might be interested, but it’s not worth an individual email.  Maybe I’ll put it on the portal, or include it in one of my occasional news and updates emails.  Maybe.

If you’d like me to do that, though, how about sending me the message in a form I can forward easily and without embarrassment?  With a meaningful subject line, and a succinct and accurate summary in the opening two sentences?  So that I don’t have to do it for you before I feel I can send it on.  There’s a lovely internet abbreviation – TL;DR – which stands for Too Long; Didn’t Read.  I think its existence tells us something.

5)  People who are lucky enough to have interesting, rewarding and enjoyable jobs with an excellent employer and talented and supportive colleagues, who always manage to find some petty irritants to complain about, rather than counting their blessings.


Outstanding researcher or Outstanding grant writer?

"It's all the game, yo....."

The Times Higher has a report on Sir Paul Nurse’s ‘Anniversary Day’ address to the Royal Society.  Although the Royal Society is a learned society in the natural rather than the social sciences, he makes an interesting distinction that seems to have – more or less unchallenged – become a piece of received wisdom across many if not all fields of research.

Here’s part of what Sir Paul had to say (my underline added)

Given this emphasis on the primacy of the individuals carrying out the research, decisions should be guided by the effectiveness of the researchers making the research proposal. The most useful criterion for effectiveness is immediate past progress. Those that have recently carried out high quality research are most likely to continue to do so. In coming to research funding decisions the objective is not to simply support those that write good quality grant proposals but those that will actually carry out good quality research. So more attention should be given to actual performance rather than planned activity. Obviously such an emphasis needs to be tempered for those who have only a limited recent past record, such as early career researchers or those with a break in their careers. In these cases making more use of face-to-face interviews can be very helpful in determining the quality of the researcher making the application.

I guess my first reaction to this is to wonder whether interviews are the best way of deciding research funding for early career researchers.  Apart from the cost, inconvenience and potential equal opportunities issues of holding interviews, I wonder if they’re even a particularly good way of making decisions.  When it comes to job interviews, I’ve seen many cases where interview performance seems to take undue priority over CV and experience.  And if the argument is that sometimes the best researchers aren’t the best communicators (which is fair), it’s not clear to me how an interview will help.

My second reaction is to wonder about the right balance between funding excellent research and funding excellent researchers.  And I think this is really the point that Sir Paul is making.  But that’s a subject for another entry, another time.  Coming soon!

My third reaction – and what this entry is about – is the increasingly common assumption that there is one tribe of researchers who can write outstanding applications, and another which actually does outstanding research.  One really good expression of this can be found in a cartoon at the ever-excellent Research Counselling.  Okay, so it’s only a cartoon, but it wouldn’t have made it there unless it was tapping into some deeper cultural assumptions.  This article from the Times Higher back at the start of November speaks of ‘Dr Plods’ – for whom getting funding is an aim in itself – and ‘Dr Sparks’ – the ones who deserve it – and there seems to be little challenge from readers in the comments section below.

But does this assumption have any basis in fact?  Are those who get funded mere journeymen and women researchers, mere average intellects, whose sole mark of distinction is their ability to toady effectively to remote and out-of-touch funding bodies?  To spot the research priority flavour-of-the-month from the latest Delivery Plan, and cynically twist their research plans to match it?  It’s a comforting thought for the increasingly large number of people who don’t get funding for their project.  We’d all like to be the brilliant-but-eccentric-misunderstood-radical-unappreciated genius, who doesn’t play by the rules, cuts a few corners but gets the job done, and to hell with the pencil pushers at the DA’s office in city hall – or rather, in RCUK’s offices in downtown Swindon.  A weird kind of cross between Albert Einstein and Jimmy McNulty from ‘The Wire’.

While I don’t think anyone is seriously claiming that the Sparks-and-Plods picture should be taken literally, I’m not even sure how much truth there is in it as a parable or generalisation.  For one thing, I don’t see how anyone could realistically Plod their way very far from priority to priority as they change and still have a convincing track record for all of them.  I’m sure that a lot of deserving proposals don’t get funded, but I doubt very much that many undeserving proposals do get the green light.  The brute fact is that there are more good ideas than there is money to spend on funding them, and the chances of that changing in the near future are pretty much zero.  I think that’s one part of what’s powering this belief – if good stuff isn’t being funded, that must be because mediocre stuff is being funded.  Right?  Er, well…. probably not.  I think the reality is that it’s the Sparks who get funded, but it’s those Sparks who are better able to communicate their ideas and make a convincing case for fit with funders’ or scheme priorities.  Plods, and their ‘incremental’ research (a term that damns with faint praise in some ESRC referees’ reports that I’ve seen) shouldn’t even be applying to the ESRC – or at least not to the standard Research Grants scheme.

A share of this Sparks/Plods view is probably caused by the impact agenda.  If impact is hard for the social sciences, it’s at least ten times as hard for basic research in many of the natural sciences.  I can understand why people don’t like the impact agenda, and I can understand why people are hostile.  However, I’ve always understood the impact agenda, as far as research funding applications are concerned, to mean that if a project has the potential for impact, it ought to pursue it, and there ought to be a good, solid, thought-through, realistic, and defensible plan for bringing it about.  If there genuinely is no impact, argue the case in the impact statement.  Consider this, from the RCUK impact FAQ.

How do Pathways to Impact affect funding decisions within the peer review process?

The primary criterion within the peer review process for all Research Councils is excellent research. This has always been the case and remains unchanged. As such, problematic research with an excellent Pathways to Impact will not be funded. There are a number of other criteria that are assessed within research proposals, and Pathways to Impact is now one of those (along with e.g. management of the research and academic beneficiaries).

Of course, how this plays out in practice is another matter, but every indication I’ve had from the ESRC is that this is taken very seriously.  Research excellence comes first.  Impact (and other factors) second.  These may end up being used in tie-breakers, but if it’s not excellent, it won’t get funded.  Things may be different at the other Research Councils that I know less about, especially the EPSRC which is repositioning itself as a sponsor of research, and is busy dividing and subdividing and prioritising research areas for expansion or contraction in funding terms.

It’s worth recalling that it’s academics who make decisions on funding.  It’s not Suits in Swindon.  It’s academics.  Your peers.  I’d be willing to take seriously arguments that the form of peer review that we have can lead to conservatism and caution in funding decisions.  But I find it much harder to accept the argument that senior academics – researchers and achievers in their own right – are funding projects of mediocre quality but good impact stories ahead of genuinely innovative, ground-breaking research which could drive the relevant discipline forward.

But I guess my message to anyone reading this who considers herself to be more of a ‘Doctor Spark’ who is losing out to ‘Doctor Plod’ is to point out that it’s easier for Sparky to do what Ploddy does well than vice versa.  Ploddy will never match your genius, but you can get the help of academic colleagues and your friendly neighbourhood research officer – some of whom are uber-Plods, which in at least some cases is a large part of the reason why they’re doing their job rather than yours.

Want funding?  Maximise your chances of getting it.  Want to win?  Learn the rules of the game and play it better.  Might your impact plan be holding you back?  Take advantage of any support that your institution offers you – and if it does, be aware of the advantage that this gives you.  Might your problem be the art of grant writing?  Communicating your ideas to a non-specialised audience?  To reviewers and panel members from a cognate discipline?  To a referee not from your precise area?  Take advice.  Get others to read it.  Take their impressions and even their misunderstandings seriously.

Or you could write an application with little consideration for impact, with little concern for clarity of expression or the likely audience, and then if you’re unsuccessful, you can console yourself with the thought that it’s the system, not you, that’s at fault.

ESRC Demand Management Part 5: And the winner is…. researcher sanctions!

"And the prize for best supporting sanctions scheme goes to...."
And the winner is.....

The ESRC today revealed the outcome of the ‘Demand Management’ consultation, with the consultation exercise showing a strong preference for researcher sanctions rather than the other main options, which were institutional sanctions, institutional quotas, or charging for applications.  And therefore….

Given this clear message, it is likely that any further steps will reflect these views.

Which I think means that that’s what they’re going to do.  But being (a) academics, and (b) British, it has to be expressed in the passive voice and as tentatively as possible.

Individual researcher sanctions got the vote of 82% of institutional responses, 80% of learned society responses, and 44% of individual responses.  To put that in context, though, 32% of the individual responses were interpreted as backing none of the possible measures, which I don’t think was ever going to be a particularly convincing response.  Institutional sanctions came second among institutions (11%), and institutional quotas (20%) among individual respondents.  Charging for applications was, as I expected, a non-starter, apparently attracting the support of two institutions and one learned society or ‘other agency’.  I’m surprised it got that many.

The issue of the presentation of the results as a ‘vote’ is an interesting one, as I don’t think that’s what this exercise was presented as at the time.  The institutional response that I was involved in was – I like to think – a bit more nuanced and thoughtful than just a ‘vote’ for one particular option.  In any case, if it was a vote, I’m sure that the ‘First Past the Post’ system which appears to have been used wouldn’t be appropriate – some kind of ‘alternative vote’ system to find the least unpopular option would surely have been more appropriate.  I’m also puzzled by the combining of the results from institutions, individuals, and learned societies into totals for ‘all respondents’ which seems to give the same weighting to individual and institutional responses.
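For what it’s worth, the kind of ‘alternative vote’ count I have in mind is easy to sketch.  This is purely illustrative – the ballots below are entirely made up and bear no relation to the actual consultation responses:

```python
from collections import Counter

def alternative_vote(ballots):
    """Instant-runoff count: repeatedly eliminate the least-supported
    option until one option holds a majority of the remaining ballots."""
    ballots = [list(b) for b in ballots]  # don't mutate the caller's data
    while True:
        # Count first preferences on ballots that still have choices left
        tally = Counter(b[0] for b in ballots if b)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        # Eliminate the option with fewest first preferences and recount
        loser = min(tally, key=tally.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Made-up ballots: 'quotas' leads on first preferences (the first-past-the-post
# winner), but 'researcher sanctions' wins once 'charging' is eliminated.
ballots = ([["quotas", "charging"]] * 4
           + [["researcher sanctions", "quotas"]] * 3
           + [["charging", "researcher sanctions"]] * 2)
print(alternative_vote(ballots))
```

The point of the toy example is just that the least unpopular option can differ from the most popular first choice, which is why the counting method matters.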

Fortunately – or doubly-fortunately – those elements of the research community which responded delivered a clear signal about the preferred method of demand management, and, in my view at least, it’s the right one.  I’ll admit to being a bit surprised by how clear cut the verdict appears to be, but it’s very much one I welcome.

It’s not all good news, though.  The announcement is silent on exactly what form the programme of researcher sanctions will take, and there is still the possibility that sanctions may apply to co-investigators as well as the principal investigator.  As I’ve argued before, I think this would be a mistake, and would be grossly unfair in far too many cases.  I know that there are some non-Nottingham folks reading this blog, so if your institution isn’t one of the ones that responded (and remember only 44 of 115 universities did), it might be worth finding out why not, and making your views known on this issue.

One interesting point that is stressed in the announcement is that individual researcher sanctions – or any form of further ‘demand management’ measures – may never happen.  The ESRC have been clear about this all along – the social science research community was put on notice about the unsustainability of the current volume of applications being submitted, and that a review would take place in autumn 2012.  The consultation was about the general form of any further steps should they prove necessary.  And interestingly the ESRC are apparently ‘confident’ that they will not.

We remain confident that by working in partnership with HEIs there will be no need to take further steps. There has been a very positive response from institutions to our call for greater self-regulation, and we expect that this will lead to a reduction in uncompetitive proposals.

Contrast that with this, from March, when the consultation was launched:

We very much hope that we will not need additional measures.

Might none of this happen?  I’d like to think so, but I don’t share their confidence, and I fear that “very much hope” was nearer the mark.  I can well believe that each institution is keen to up its game, and I’m sure discussions are going on about new forms of internal peer review, mentoring, research leadership etc in institutions all across the country.  Whether this will lead to a sufficient fall in the number of uncompetitive applications, well, I’m not so sure.

I think there needs to be an acceptance that there are plenty of perfectly good research ideas that would lead to high quality research outputs in quality journals, perhaps with strong non-academic impact, which nevertheless aren’t ‘ESRC-able’ – because they’re merely ‘very good’ or ‘excellent’ rather than ‘outstanding’.  And it’s only the really outstanding ideas that are going to be competitive.  If all institutions realise this, researcher sanctions may never happen.  But if hubris wins out, and everyone concludes that it’s everyone else’s applications that are the problem, then researcher sanctions are inevitable.

A visit from the British Academy….

The British Academy logo, featuring the Greek Muse Clio, according to wikipedia...
The British Academy

Ken Emond, Head of Research Awards of the British Academy, came to visit the University of Nottingham the other week to talk about the various and nefarious research funding schemes that are on offer from the British Academy.  To make an event of it, my colleagues in the Centre for Advanced Studies also arranged for various internal beneficiaries of the Academy’s largesse to come and talk about the role that Academy funding had had in their research career.  I hope no-one minds if I repeat some of the things that were said – there was no mention of ‘Chatham House’ rules or of ‘confidential learning agreements’, and I don’t imagine that Ken gives privileged information to the University of Nottingham alone, no matter how wonderful we are.

Much of what funders’ representatives tend to say during institutional visits or ARMA conferences is pretty much identical to the information already available on their website in one form or another, but it’s interesting how many academics seem to prefer to hear the information in person rather than read it in their own time.  And it’s good to put a face to names, and faces to institutions.  Although I think I shall probably always share Phil Ward‘s mental image of the BA as an exclusive Rowley Birkin QC-style private members club.  But it’s good to have a reminder of what’s on offer, and have an opportunity to ask questions.

I met Ken very briefly at the ARMA conference in 2010, and his enthusiasm for the Small Grants Scheme then (and now) was obvious.  I was very surprised when it was scrapped, and it seems likely that this was imposed rather than freely chosen.  However, it’s great to see it back again, and this time including support for conference funding to disseminate the project findings.  It seems the call is going to be at least annual, with no decision taken yet on whether there will be a second call this year, as in previous years.

It seems much more sensible than having separate schemes for projects and for conference funding.  It’s unlikely that we’re going to see a return of the BA Overseas Conference Scheme, but…. it was quite a lot of work in writing and assessing for really very small amounts of money.  Although having said that, when I was at Keele those very small amounts of money really did help us send researchers to prestigious conferences (especially in the States) they wouldn’t otherwise have attended.

One of the questions asked was about the British Academy’s attitude to demand management, of the kind that the EPSRC have introduced and that the ESRC are proposing.  The response was that they currently have no plans in this direction – they don’t think that any institutions are submitting an excessive number of applications.

Although the British Academy has some of the lowest success rates in town for its major schemes, they are all light touch applications – certainly compared to the Research Councils.  Mid-Career and Post-Doc Fellowships both have an outline stage, and the Senior Research Fellowships application form is hardly more taxing than a Small Grant one.  Presumably they’re also quick and easy to review – I wonder how many of those a referee could get through in the time it took them to review a single Research Council application?  Which does raise the suggestion from Mavan, a commenter on one of my previous posts, about cutting the ESRC application form dramatically.

But… it’s possible that the relative brevity of the application forms is itself increasing the number of applications, and that’s certainly something that the ESRC were concerned about when considering their own move to outline stage applications.

I guess a funding scheme could be credible and sustainable with a low success rate and a low ‘overhead’ cost of writing and reviewing applications, or with a high success rate and a high overhead cost.  The problem is when we get to where we are at the moment with the ESRC, with low success rates and high overhead costs.

A few scrawled lines in defence of the ESRC…

A picture of lotto balls
Lotto? Balls

There’s a very strange article in the Times Higher today which claims that the ESRC’s latest “grant application figures raise questions about its future”.

Er…. do they?  Seriously?  Why?

It’s true that success rates are a problem – down to 16% overall, and 12% for the Research Grants Scheme (formerly Standard Grants).  According to the article, these are down from 17% and 14% respectively the year before.  It’s also true that RCUK stated in 2007 that 20% should be the minimum success rate.  But this long term decline in success rates – plus a cut in funding in real terms – is exactly why the ESRC has started a ‘demand management’ strategy.

A comment attributed to one academic (which could have been a rhetorical remark taken out of context) appears to equate the whole thing to a lottery, and calls for the whole thing to be scrapped and the funding distributed via the RAE/REF.  This strikes me as an odd view, though not one, I’m sure, confined to the person quoted.  But it’s not a majority view, not even among the select number of academics approached for comments.  All of the other academics named in the article seem to be calling for more funding for social sciences, so it would probably be legitimate to wonder why the focus of the article is about “questions” about the ESRC’s “future”, rather than calls for more funding.  But perhaps that’s just how journalism works.  It certainly got my attention.

While I don’t expect these calls for greater funding for social science research will be heard in the current politico-economic climate, it’s hard to see that abolishing the ESRC and splitting its budget will achieve very much.  The great strength of the dual funding system is that while the excellence of the Department of TopFiveintheRAE at the University of Russell deserves direct funding, it’s also possible for someone at the Department of X at Poppleton University to get substantial funding for their research if their research proposal is outstanding enough.  Maybe your department gets nothing squared from HEFCE as a result of the last RAE, but if your idea is outstanding it could be you – to use a lottery slogan.  This strikes me as a massively important principle – even if in practice, most of it will go to the Universities of Russell.  As a community of social science scholars, calling for the ESRC to be abolished sounds like cutting off the nose to spite the face.

Yes, success rates are lower than we’d like, and yes, there is a strong element of luck in getting funded.  But it’s inaccurate to call it a “lottery”.  If your application isn’t of outstanding quality, it won’t get funded.  If it is, it still might not get funded, but… er… that’s not a lottery.

According to the ESRC’s figures between 2007 and 2011, 9% of Standard Grant applications were either withdrawn or rejected at ‘office’ stage for various reasons.  13% fell at the referee stage (beta or reject grades), and 21% fell at the assessor stage (alpha minus).  So… 43% of applications never even got as far as the funding panel before being screened out on quality or eligibility grounds.

So… while the headline success rate might be 12%, the success rate for fundable applications is rather better.  12 funded out of 100 applications is 12%, but 12 funded out of the 57 competitive applications is about 21%.  That’s what I tell my academic colleagues – if your application is outstanding, then you’re looking at roughly 1 in 5.  If it’s not outstanding, but merely interesting, or valuable, or would ‘add to the literature’, then look to other (increasingly limited) options.
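For anyone who wants to check the back-of-envelope arithmetic, here it is as a short sketch.  The stage percentages are the ESRC figures quoted above; the 12% headline rate is a round number, so treat the result as indicative rather than exact:

```python
# Sketch of the 'adjusted' success rate, using the ESRC 2007-2011
# Standard Grant figures quoted above.  All inputs are rough.
office_stage = 0.09    # withdrawn or rejected at 'office' stage
referee_stage = 0.13   # beta or reject grades from referees
assessor_stage = 0.21  # alpha minus at assessor stage

screened_out = office_stage + referee_stage + assessor_stage  # ~43%
competitive = 1 - screened_out                                # ~57%

headline_rate = 0.12                         # 12 funded per 100 applications
adjusted_rate = headline_rate / competitive  # success rate among competitive bids

print(f"competitive: {competitive:.0%}, adjusted success: {adjusted_rate:.0%}")
# prints: competitive: 57%, adjusted success: 21%
```

In other words, the odds facing a genuinely competitive application are roughly one in five, not the one in eight the headline figure suggests.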

So…. we need the ESRC.  It would be a disaster for social science research if it were not to have a Research Council.  We may not agree with everything it does and all of the decisions it makes, we may be annoyed and frustrated when it won’t fund our projects, but we need a funder of social science with money to invest in individual research projects, rather than merely in excellent Departments.

The ESRC and “Demand Management”: Part 4 – Quotas and Sanctions, PIs and Co-Is….

A picture of Stuart Pearce and Fabio Capello
It won't just be Fabio getting sanctioned if the referee's comments aren't favourable.

Previously in this series of posts on ESRC Demand Management I’ve discussed the background to the current unsustainable situation and aspects of the initial changes, such as the greater use of sifting and outline stages, and the new ban on (uninvited) resubmissions.  In this post I’ll be looking forward to the possible measures that might be introduced in a year or so’s time should application numbers not drop substantially….

When the ESRC put their proposals out to consultation, there were four basic strategies proposed.

  • Charging for applications
  • Quotas for numbers of applications per institution
  • Sanctions for institutions
  • Sanctions for individual researchers

Reading between the lines of the demand management section of the presentation that the ESRC toured the country with in the spring, charging for applications is a non-starter.  Even in the consultation documents, this option only appeared to be included for the sake of completeness – it was readily admitted that there was no evidence that it would have the desired effect.

I think we can also all-but-discount quotas as an option.  The advantage of quotas is that they would allow the ESRC to control precisely the maximum number of applications that could be submitted.  The problem is, it’s the nuclear option, and I think it would be sensible to try less radical options first.  If their call for better self-regulation and internal peer review within institutions fails, and then sanctions schemes are tried and fail, then (and only then) should they be thinking about quotas.  Sanctions (and the threat of sanctions) seek to modify application submission behaviour, while quotas pretty much dictate it.  There may yet be a time when quotas are necessary, though I really hope not.

What’s wrong with quotas, then?  Well, there will be difficulties in assigning quotas fairly to institutions, in spite of complex plans for banding and ‘promotion’ and ‘relegation’ between the bands.  That’ll lead to a lot of game playing, and it’s also likely that there will be a lot of mucking around with who the lead applicant is.  If one of my colleagues has a brilliant idea and we’re out of quota, well, maybe we’ll find someone at an institution that isn’t and ask them to lead.  I can also imagine a lot of bickering over who should spend their quota on submitting an application with a genuinely 50-50 institutional split.

But my main worry is that institutions are not good at comparing applications from different disciplines.  If we have applications from (say) Management and Law vying for the last precious quota slot, how is the institution to choose between them?  Even if it has experts who are not on the project team, they will inevitably have a conflict of interest – there would be a worry that they would support their ‘team’.  We could give it a pretty good cognate discipline review, but I’m not confident we would always get the decision right.  It won’t take long before institutions start teaming up to provide external preliminary peer review of each other’s applications, and before you know it, we end up just shifting the burden from post-submission to pre-submission for very little gain.

In short, I think quotas are a last resort, and shouldn’t be seriously considered unless we end up with a combination of (a) the failure of other demand management measures and (b) significant cuts in the amount of funding available.

Which leaves sanctions – either on individual researchers or on their institutions.  The EPSRC has had a policy of researcher sanctions for some time, and it’s had quite a considerable effect.  I don’t think that’s so much through sanctioning people and taking them out of the system as through a kind of chilling or deterrent effect, whereby greater self-selection is taking place.  Once there’s a penalty for throwing in applications and hoping that some stick, people stop.

As I argued previously, I think a lot of the pressure for increased submissions comes from institutions rather than individuals – in many cases researchers are either following direct instructions and expectations, or at least a very strong steer.  As a result, I was initially in favour of a hybrid system of sanctions where both individual researchers and institutions could potentially be sanctioned.  Both bear a responsibility for the application, and both are expected to put their name to it.  But after discussions internally, I’ve been persuaded that individual sanctions are the way to go, in order to have a consistent approach with the EPSRC, and with the other Research Councils, which I think are very likely to have their own versions.  While the formulae may vary according to application profiles, as much of a common approach as possible should be adopted, unless of course there are overwhelming reasons why one of the RCs that I’m less familiar with should be different.

For me, the big issue is not whether we end up with individual, institutional, or hybrid sanctions, but whether the ESRC go ahead with plans to penalise co-investigators (and/or their institutions) as well as PIs in cases where an application does not reach the required standard.

This is a terrible, terrible, terrible idea and I would urge them to drop it.  The EPSRC don’t do it, and it’s not clear why the ESRC want to.  For me, the co-I issue is more important than which sanction model we end up with.

Most of the ESRC’s documents on demand management are thoughtful and thorough.  They’re written to inform the consultation exercise rather than dictate a solution, and I think the author(s) should be – on the whole – congratulated on their work.  Clearly a lot of hard work has gone into the proposals, which, given their seriousness, is only right.  However, nowhere is there any kind of argument or justification for why co-investigators (insert your own ‘and/or institutions’ from here on) should be regarded as equally culpable.

I guess the argument (which the ESRC doesn’t make) might be that an application will be given yet more careful consideration if more than just the principal investigator has something to lose.  At the moment, I don’t do a great deal if an application is led from elsewhere – I offer my services, and sometimes that offer is taken up, sometimes it isn’t.  But no doubt I’d be more forceful in my ‘offer’ if a colleague or my university could end up with a sanctions strike against us.  Further, I’d probably be recommending that none of my academic colleagues get involved in an application without it going through our own rigorous internal peer review processes.  Similarly, I’d imagine that academics would be much more careful about what they allowed their name to be put to, and would presumably take a more active role in drafting the application.  Both institutions and individual academics can, I think, be guilty of regarding an application led from elsewhere as a free roll of the dice.  But we’re taking action on this – or at least I am.

The problem is that these benefits are achieved (if they are achieved at all) at the cost of abandoning basic fairness.  It’s just not clear to me why an individual/institution with only a minor role in a major project should be subject to the same penalty as the principal investigator and/or the institution that failed to spot that the application was unfundable.  It’s not clear to me why the career-young academic named as co-I on a much more senior colleague’s proposal should be held responsible for its poor quality.  I understand that there’s a term in aviation – ‘cockpit gradient’ – which refers to the difference in seniority between pilot and co-pilot: a very senior pilot and a very junior co-pilot is a bad mix, because the junior will be reluctant to challenge the senior.  I don’t understand why someone named as co-I for an advisory role – on methodology perhaps, or for a discrete task – should bear the same responsibility.  And so on and so forth.  One response might be to create a new category of research team member less responsible than a ‘co-investigator’ but more involved in the project direction (or part of it) than a ‘researcher’, but do we really want to go down the road of redefining categories?

Now granted, there are proposals where the PI is primus inter pares among a team of equally engaged and responsible investigators, where there is no single, obvious candidate for the role of PI.  In those circumstances, we might think it fair for all of them to pay the penalty.  But I wonder what proportion of applications are like this, with genuine joint leadership?  Even in such cases, every one of those joint leaders ought to be happy to be named as PI, because they’ve all had equal input.  And one person getting a strike against their name (and the others not) is surely much less unfair than the examples above.

As projects become larger, with £200k (very roughly, between two and two and a half person-years including overheads and project expenses) now being the minimum, the complex, multi-armed, innovative, interdisciplinary project is likely to become more and more common, because that’s what the ESRC says it wants to fund.  But the threat of a potential sanction (or step towards sanction) for every last co-I involved is going to be (a) a massive disincentive to large-scale collaboration, (b) a logistical and organisational nightmare, or (c) both.

Institutionally, it makes things very difficult.  Do we insist that every last application involving one of our academics goes through our peer review processes?  Or do we trust the lead institution?  Or do we trust some (University of Russell) but not others (Poppleton University)?  How does the PI manage writing and guiding the project through various different approval processes, with the danger that team members may withdraw (or be withdrawn by their institution)?  I’d like to think that, in the event of sanctions on co-Is and/or their institutions, most Research Offices would come up with sensible proposals for managing the risk of junior-partnerdom in a proportionate manner, but it only takes one or two to start demanding to see everything and to run everything to their timetable to make things very difficult indeed.

The ESRC and “Demand Management”: Part 2 – Sifting and Outlines

ESRC office staff start their new sifting role

In part one of this fortnight-long series of posts on the ESRC and “demand management”, I attempted to sketch out some context.  Briefly, we’re here because demand has increased while the available funds have remained static at best, and are now declining in real terms.  Phil Ward and Paul Benneworth have both added interesting comments – Phil has a longer professional memory than I do, and Paul makes some very useful comments from the perspective of a researcher starting his career during the period in question.  If you read the previous post before their comments appeared, I’d recommend going back and having a read.

It’s easy to think of “demand management” as something that’s at least a year away, but there are some changes that are being implemented straight away – this post is about outline applications and “sifting”.  Next I’ll talk about the ban on (uninvited) resubmissions.

Greater use of outline stages for managed mode schemes (i.e. pretty much everything except open call Research Grants), for example, seems very sensible to me, provided that the application form is cut down sufficiently to represent a genuine time and effort saving for individuals and institutions, while still allowing applicants enough space to make a case.  It’s also important that reviewers treat outline applications as just that, and are sensitive to the space constraints.  I understand that the ESRC are developing a new grading scheme for outline applications, which is a very good thing.  At outline stage, I would imagine that they’re looking for ideas that are of the right size in terms of scale and ambition, and at least some evidence that (a) the research team has the right skills and (b) granting them more time and space to lay out their arguments will result in a competitive application.

With Standard Grants (now known as Research Grants, as there are no longer ‘large’ or ‘small’ grants), there will be “greater internal sifting by ESRC staff”.  I don’t know if this is in place yet, but I understand that there’s a strong possibility that this might not be done by academics.  I’m very relaxed about that – in fact, I welcome it – though I can imagine that some academics will be appalled.  But…. the fact is that about a third of the applications the ESRC receives are “uncompetitive”, which is a lovely British way of saying unfundable.  Not good enough.  Where all these applications are coming from I’ve no idea, and while I don’t think any of them are being submitted on my watch, it would be an act of extreme hubris to declare that with certainty.  However, I strongly suspect that they’re largely coming from universities that don’t have a strong research culture and/or don’t have high quality research support and/or are just firing off as many applications as possible in the mistaken belief that the ESRC is some kind of lottery.

I’d back myself to pick out the unfundable third in a pile of applications.  I wouldn’t back myself to pick the grant recipients, but even then I reckon I’d get close.  I can differentiate between what I don’t understand and what doesn’t make sense with a fair degree of accuracy, and while I’m no expert on research methods, I know when there isn’t a good account of the methods, or when they’re not explained or justified.  I can spot a Case for Support that is 80% literature review and only 20% new proposal.  I can tell when the research question(s) subtly change from section to section.  And I’d back others with similar roles to me to be able to do the same – if we can’t tell the difference between a sinner and a winner…. why are research intensive universities bothering to employ us?

And if I can do it with a combination of some academic background (MPhil political philosophy) and professional experience, I’m sure others could too, including ESRC staff.  They’d only have to sort the no-hopers from the rest, and if a few no-hopers slip through, or if a few fundable-but-a-very-long-way-down-the-list some-hopers drop out at that stage, it would make very little difference.  Unless, of course, one of the demand management sanction options is introduced, at which point the notion of non-academics making decisions that could lead to individual or institutional sanctions becomes a little more complicated.  But again, I think I’d back myself to spot grant applications that should not have been submitted, even if I wouldn’t necessarily want a sanctions decision depending on my judgement alone.

Even if they were to go with a very conservative policy of only sifting out applications which, say, three ESRC staff members think are dreadful, that could still make a substantial difference to the demands on academic reviewers.  I guess that’s the deal – you submit to a non-academic having some limited judgement role over your application, and in return, they stop sending you hopeless applications to review.

If I were an academic I’d take that like a shot.

The ESRC and “Demand Management”: Part 1 – How did we get here?

A picture of Oliver Twist asking for more
Developing appropriate demand management strategies is not a new challenge

The ESRC have some important decisions to make this summer about “demand management”.  The consultation on these changes closed in June, and I understand about 70 responses were received.  Whatever they come up with is unlikely to be popular, but I think there’s no doubt that some kind of action is required.

I’ve got a few thoughts on this, and I’m going to split them across a number of blog posts over the next week or so.  I’m going to talk about the context, the steps already taken, the timetable, possible future steps, and how I think we in the “grant getting community” should respond.

*          *          *          *          *

According to the presentation that the ESRC toured around the country this spring, the number of applications received has increased by about a third over the last five years.  For most of those five years, there was no more money, and because of the flat cash settlement at the last comprehensive spending review, there’s now effectively less money than before.  As a result, success rates have plummeted, down to about 13% on average.  There are a number of theories as to why application rates have risen.  One hypothesis is that there are just more social science researchers than ever before, and while I’m sure that’s a factor, I think there’s something else going on.

I wonder if the current problem has its roots in the last RAE.  On the whole, it wasn’t good in brute financial terms for social science – improving quality in relative terms (unofficial league tables) or absolute terms was far from a guarantee of maintaining levels of funding.  A combination of protection for the STEM subjects, grade inflation (or rising standards), and increased numbers of staff FTE returns shrank the unit of resource.  The units that did best in brute financial terms, it seems to me, were those that were able to maintain or improve quality but submit a much greater number of staff FTEs.  The unit of assessment that I was closest to in the last RAE achieved just this.

What happened next?  Well, I think a lot of institutions and academic units looked at a reduction in income, looked at the lucrative funding rules of research council funding, pondered briefly, and then concluded that perhaps the ESRC (and the other research councils) would giveth where the RAE had taken away.

Problem is, I think everyone had the same idea.

On reflection, this may only have accelerated a process that started with the introduction of Full Economic Costing (fEC).  This had just started as I moved into research development, so I don’t really remember what went before it.  I do remember two things, though: firstly, that although research technically still represented a loss-making activity (in that it only paid 80% of the full cost) the reality was that the lucrative overhead payments were very welcome indeed.  The second thing I remember is that puns about the hilarious acronym grew very stale very quickly.

So…. institutions wanted to encourage grant-getting activities.  How did they do this?  They created posts like mine.  They added grant-getting to the criteria for academic promotions.  They started to set expectations.  In some places, I think this even took the form of targets – either for individuals or for research groups.  One view I heard expressed was along the lines of: well, if Dr X has a research time allocation of Y, shouldn’t we expect her to produce Z applications per year?  Er…. if Dr X can produce outstanding research proposals at that rate, and applying for funding is the best use of her time, then sure, why not?  But not all researchers are ESRC-able ideas factories, and some of them are probably best advised to spend at least some of their time, er, writing papers.  And my nightmare for social science in the UK is that everyone spends their QR-funded research time writing grant applications, rather than doing any actual research.

Did the sector as a whole adopt a scattergun policy of firing off as many applications as possible, believing that the more you fired, the more likely it was that some would hit the target?  Have academics been applying for funding because they think it’s expected of them, and/or because they have one eye on promotion?  Has the imperative to apply for funding for something come first, and the actual research topic second?  Has there been a tendency to treat the process of getting research council funding as a lottery, for which one should simply buy as many tickets as possible?  Is all this one of the reasons why we are where we are today, with the ESRC considering demand management measures?  How many rhetorical questions can you pose without irritating the hell out of your reader?

I think the answer to these questions (bar the last one) is very probably ‘yes’.

But my view is based on conversations with a relatively small number of colleagues at a relatively small number of institutions.  I’d be very interested to hear what others think.

“It’s a bad review, we got a bad review …oh lord”

A picture of Clacton Pier
A large sandpit and a pier (re)view

A healthy portion of food for thought has been served up by the publication of a RAND Europe report into alternatives to peer review for research project funding.  Peer review is something that I – as an alleged research funding professional – have rather taken for granted as being the natural and obvious way to allocate (increasingly) scarce resources.  How do we decide who gets funded?  Well, let’s ask experts to report, and then make a judgement based upon what those experts say.  I’ve been aware of other ways, but I’ve not given them much thought – I’m a poacher, not a gamekeeper.

The Guardian Higher Education Network ran a poll over the second half of last week, and a whopping 70.8% of those who voted said that they had had a research proposal turned down and thought the process should be changed.  I’m aware of the limitations of peer review – it’s only as good as the peers, and the effort they’re prepared to make and the care they’re prepared to take with their review.  Anyone who has had any involvement in research funding will be aware of examples where comments come back that are frankly baffling: drawing odd conclusions, obsessing over irrelevancies, wanting the research to be about something else, making unsupported statements, or assertions that are just demonstrably false.

[Personally, I hate it when ‘Reviewer Q’ remarks that the project “seems expensive”, without further comment or justification about what’s too expensive.  That’s our carefully crafted budget you’re talking about there, Reviewer Q.  It’s meticulously pedantic, and pedantically meticulous.  We’ve Justified our Resources… so how about you justify your comment?  I wonder how annoyed I’d get if I wrote the whole application…..]

One commentator on the Guardian poll page, dianthusmed, said that

Anyone voting to change the peer review process, I will not take you seriously unless you tell me what you’d replace it with.

And that’s surely the $64,000 question (at 80% fEC)…. we’re all more or less familiar with the potential shortcomings of peer review as a method of allocating funding, but if not peer review… then what?

In fact, the RAND Europe report is not an anti-peer-review polemic, and deserves a more nuanced response than a “peer review: yes or no” on-line poll.  The only sensible answer, surely, is: well, it depends what you want to achieve.  The report itself aims to

inspire thinking amongst research funders by showing how the research funding review process can be changed, and to give funders the confidence to try novel methods by explaining where and how such approaches have been used previously.

But crucially…

This is not intended to replace peer review, which remains the best method for review of grant applications in many situations. Rather, we hope that by considering some of the alternatives to peer review, where appropriate, research funders will be able to support a wider portfolio of projects, leading to more innovative, high-impact work.

A number of the options in the report seem to be more about changing the nature and scope of calls for proposals than changing the nature of peer review itself – many in ways that aren’t unfamiliar.  But I’d like to pick out one idea for particular comment: sandpits.

I believe the origin of the term is from computing, where the term ‘sand box’ or ‘sand pit’ was used to describe an area for experimentation or testing, where no damage could be done to the overall system architecture.  I guess the notion of harmless – even playful – experimentation is what advocates have in mind.

They sound like a very interesting idea – get a group of people with expertise to bring to bear on a particular problem, put them all in the same place for a day, or a number of days, and see what emerges from the discussions.  It hasn’t really caught on yet in the social sciences, although social scientists have been involved, of course.  The notion of cooperating rather than competing, and of new research collaborations forming, is an interesting and appealing one.  As a way of bringing new perspectives to bear on a particular problem – especially an interdisciplinary problem – it looks like an attractive alternative.

There are problems, though.  If there are more applications to participate than there are places, choices will inevitably have to be made, with some applications accepted and others rejected.  I would imagine that questions of fit and balance would be relevant as well as questions of experience and expertise, but someone, or some group of people, will have to choose.  From the application forms I’ve seen, this is often done on the basis of a short CV and a short statement.  So… don’t we end up relying on some element of peer review anyway?

Secondly, I wonder about equal opportunities.  If a sand pit event is to take place over several days in a hotel, it will inevitably be difficult or even impossible for some to attend. Those who are parents and/or carers. Those who have timetabled lectures and tutorials.  Those who have other professional or personal diary commitments that just can’t be moved.  For a standard peer reviewed call, no-one is excluded completely because it clashes with an important family event.  Can we be sure that all of the best researchers will even apply?

I should say that I’ve never attended a sandpit event, but I have attended graduate recruitment/selection events (offered, deferred, and finally declined, since you ask) and residential training courses.  They’re all strange situations where both competitive and cooperative behaviours are rewarded, and I wonder how people react.  If I were a funder, I’d be worried that the prizes might go to the best social operators, rather than to those with the best ideas.  It’s a myth that academic brilliance is always found in inverse proportion to social skills, of course, but even so, my concern would be that one or more dominant figures could end up forming projects around themselves.  I also wonder about existing cliques or vested interests of whatever kind having a disproportionate influence.

I’m sure that effective facilitation and chairing can go a long way to minimising at least some of the potential problems, and while I think sandpits are an intriguing and promising alternative to peer review, they’re not without problems of their own.  I’d be very interested to hear from anyone who’s attended a sandpit – am I doing them a disservice here?

Although I’m open to other ideas for distributing research funding – by all means, let’s be creative, and let’s look at alternatives – I don’t see a replacement for peer review.  Which isn’t to say that there isn’t scope to improve the quality of peer review.  Because, Reviewer Q, there certainly is.

And perhaps that’s the point that the 70.8% were trying to make.