ESRC – sweeping changes to the standard grants scheme

The ESRC have just announced a huge change to their standard grants scheme, and I think it’s fair to say that it’s going to prove somewhat controversial.

At the moment, it’s possible to apply to the ESRC Standard Grant Scheme at any time for grants of between £200k and £2 million. From the end of June this year, the minimum threshold will rise from £200k to £350k, and the maximum threshold will drop from £2m to £1m.

Probably those numbers don’t mean very much to you if you’re not familiar with research grant costing, but as a rough rule of thumb, a full-time researcher for a year (including employment costs and overheads) comes to somewhere around £70k-80k. The rule of thumb I used to use was that if your project needed two years of researcher time, it was big enough for the scheme. So… for £350k you’d probably need three researcher years, a decent amount of PI and Co-I time, and a fair chunk of non-pay costs. That’s a big project. I don’t have my files in front of me as I’m writing this, so maybe I’ll add a better illustration later on.
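In the meantime, here’s a back-of-envelope sketch of that arithmetic. All the figures are illustrative assumptions based on the rule of thumb above – a real fEC costing varies by institution, staff grade, and the mix of costs:

```python
# Back-of-envelope costing sketch. All figures are illustrative assumptions
# based on the rough rule of thumb above, not real fEC rates.

RESEARCHER_YEAR = 75_000  # assumed midpoint of the £70k-80k rule of thumb

def rough_cost(researcher_years: float, pi_coi_time: int, non_pay: int) -> int:
    """Crude total: researcher staff costs plus PI/Co-I time and non-pay
    costs (travel, fieldwork, transcription, events, etc.)."""
    return int(researcher_years * RESEARCHER_YEAR + pi_coi_time + non_pay)

# The old £200k minimum: roughly two researcher years plus a little extra.
print(rough_cost(2, pi_coi_time=30_000, non_pay=25_000))  # ~£205k

# The new £350k minimum: three researcher years, decent PI/Co-I time,
# and a fair chunk of non-pay costs.
print(rough_cost(3, pi_coi_time=70_000, non_pay=60_000))  # ~£355k
```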

This isn’t the first time the lower limit has been raised. Up until February 2011 there was a “Small Grants Scheme” for projects up to £200k; when it was shut, £200k became the new minimum. The argument at the time was that larger grants delivered more, and had fewer overheads in terms of the costs of reviewing, processing and administering. And although the idea was that small grants would help early career researchers, the figures didn’t really show that.

The reasons given for this change are a little puzzling. Firstly, this:

The changes are a response to the pattern of demand that is being placed on the standard grants scheme by the social science community. The average value of a standard grant application has steadily increased and is now close to £500,000, so we have adjusted the centre of gravity of the scheme to reflect applicant behaviour.

Now that’s an interesting tidbit of information – I wouldn’t have guessed that the “average value” would be that high. But you don’t have to be an expert in statistics (and believe me, in spite of giving 110% in maths class at school, I’m not one) to wonder what “average” means here – a mean can be dragged upwards by a handful of very expensive applications – and, further, why it even matters. It’s presented as a justification, but I don’t see how applicant behaviour alone provides a rationale for the change.

Then we have this….

The changes are also a response to feedback from our Grant Assessment Panels who have found it increasingly difficult to assess and compare the value of applications ranging from £200,000 to £2 million, where there is variable level of detail on project design, costs and deliverables. This issue has become more acute as the number of grant applications over £1 million has steadily increased over the last two years. Narrowing the funding range of the scheme will help to maintain the robustness of the assessment process, ensuring all applications get a fair hearing.

I have every sympathy for the Grant Assessment Panel members here – how do you choose between funding one £2m project and funding ten £200k projects, or any combination in between? It’s not so much comparing apples with oranges as comparing grapes with watermelons. And they’re right to point out the “variable” level of detail provided – but that’s largely a product of their own rules, which allow a maximum of six A4 pages for the Case for Support for projects under £1m and twelve pages for those over. If that sounds superficially reasonable, notice that it’s at most double the space to argue for up to ten times the money. I’ve supported applications of £1m+, and twelve sides of A4 is nowhere near enough, compared to the relative luxury of six sides for £200k. This is a problem.

In my view it makes sense to “introduce an annual open competition for grants between £1 million and £2.5 million”, which is what the ESRC propose to do. So I think there’s a good argument for lowering the upper threshold from £2m to £1m and setting it up as a separate competition. I know the ESRC want to reduce the number of calls/schemes, but this makes sense. As things stand I’ve regularly steered people away from the Centres/Large Grants competition towards Standard Grants instead, where I think success rates will be higher and they’ll get a fairer hearing. So I’d be all in favour of having some kind of single Centres/Large/Huge/Grants of Unusual Size competition.

But nothing here seems to me to be an argument for raising the lower limit.

But finally, I think we come to what I suspect is the real reason, and judging by Twitter comments so far, I’m not alone in thinking this.

We anticipate that these changes will reduce the volume of applications we receive through the Standard Grants scheme. That will increase overall success rates for those who do apply as well as reducing the peer review requirements we need to place on the social science community.

There’s a real problem with ESRC success rates, which dropped to 10% in the July open call, with over half the “excellent” proposals unfunded. That’s down from success rates of around 25%, which had much improved over the last few years. I don’t know whether this is a blip – perhaps a few very expensive projects were funded and a lot of cheaper ones missed out – but it’s not good news. So it’s hard not to see this change as driven largely by a desire to get success rates up, and perhaps as an indication that this wasn’t a blip after all.

In a recent interview with Adam Smith of Research Professional, Chief Executive Jane Elliott appeared to rule out the option of individual sanctions, which had been threatened if institutional restraint failed to bring down the number of poor quality applications. It appears that the problem is not so much poor quality applications as lots of high quality applications, not enough money, plummeting success rates, and something needing to be done.

All this raises some difficult questions.

  • Where are social science researchers now supposed to go for funding for projects whose “natural size” is between £10k (British Academy Small Grants) and £350k, the proposed new minimum threshold? There’s only really the Leverhulme Trust, whose schemes will suit some project types but not others, and they’re not exclusively a social science funder.
  • Where will the next generation of PIs to be entrusted with £350k of taxpayers’ money have an opportunity to cut their teeth, both in terms of proving themselves academically and managerially?
  • What about early career researchers? At least here we can expect a further announcement – there has been talk of merging the ‘future leaders scheme’ into Standard Grants, so perhaps there will be a lower minimum for them. But we’ll see.
  • Given that the minimum threshold has been raised by 75%, what consultation has been carried out? I’m just a humble Business School Research Manager (I mean I’m humble, my Business School is outstanding, obviously), so perhaps it’s not surprising that this is the first I’ve heard of it. But was there any meaningful consultation over this? Is there any evidence underpinning claims for the efficiency of fewer, longer and larger grants?
  • How do institutions respond? I guess one way will be to work harder to create bigger gestalt projects with multiple themes and streams and work packages. But surely expectations of grant getting for promotion and other purposes need to be dialled right back, if they haven’t been already. Do we encourage or resist a rush to get applications in before the change, at a time when success rates will inevitably be dire?

Of course, the underlying problem is that there’s not enough money in the ESRC’s budget to support excellent social science after years and years of “flat cash” settlements. And it’s hard to see what can be done about that in the current political climate.

Grant Writing Mistakes part 94: The “Star Wars”

Have you seen Star Wars?  Even if you haven’t, you might be aware of the iconic opening scene, and in particular the scrolling text that begins

“A long time ago, in a galaxy far, far away….”

(Incidentally, this means that the Star Wars films are set in the past, not the future. Which is a nice bit of trivia and the basis for a good pub quiz question).  What relevance does any of this have for research grant applications?  Patience, Padawan, and all will become clear.

What I’m calling the “Star Wars” error in grant writing is starting the main body of your proposal from the position of “A long time ago…”, before going on to review the literature at great length, quoting everything that calls for more research, and in general taking a lot of time and space to lay the groundwork and justify the research – all without yet telling the reader what the project is about, why it’s important, or why it’s you and your team who should do it.

This information about the present project will generally emerge in its own sweet time, but often not until two thirds of the way through the available space.  What then follows is a rushed exposition with inadequate detail about the research questions and the methods to be employed.  The reviewer is left with an encyclopaedic knowledge of everything that went before – the academic origin story of the proposal – but precious little about the project for which funding is actually being requested.  And without a clear and compelling account of what the project is about, the chances of getting funded are pretty much zero.  Reviewers will, not unreasonably, want more detail, and may speculate that its absence is an indication that the applicants themselves aren’t clear about what they want to do.

Yes, an application does need to locate itself in the literature, but this should be done quickly, succinctly, clearly, and economically as regards the space available.  Depending on the nature of the funder, I’d suggest not starting with the background: instead, open with what the present project is about, and then zoom out and locate it in the literature once the reader knows what it is that’s being located.  Certainly if your background/literature review section takes up more than about a quarter of the available space, it’s too long.

(Although I think “the Star Wars” is a defensible name for this grant application writing mistake, it rests entirely on the words “A long time ago, in a galaxy far, far away….”.  In fairness, the scrolling text itself is a really elegant, pared-down summary of what the viewer needs to know to make sense of what follows… and then we’re straight into planets, lasers, a fleeing spaceship and a huge Star Destroyer that seems to take forever to fly through the shot.)

In summary, if you want the best chance of getting funded, you should, er… restore balance to the force…. of your argument. Or something.

ESRC success rates 2013/2014

The ESRC Annual Report for 2013-14 has been out for quite a while now, and a quick summary and analysis from me is long overdue.

Although I was tempted to skip straight through all of the good news stories about ESRC successes and investments and dive straight in looking for success rates, I’m glad I took the time to at least skim read some of the earlier stuff.  When you’re involved in the minutiae of supporting research, it’s sometimes easy to miss the big picture of all the great stuff that’s being produced by social science researchers and supported by the ESRC.  Chapeau, everyone.

In terms of interesting policy stuff, it’s great to read that the “Urgency Grants” mechanism for rapid responses to “rare or unforeseen events” which I’ve blogged about before is being used, and has funded work “on the Philippines typhoon, UK floods, and the Syrian crisis”.  While I’ve not been involved in supporting an Urgency Grant application, it’s great to know that the mechanism is there, that it works, and that at least some projects have been funded.

The “demand management” agenda

This is what the report has to say on “demand management” – the concerted effort to reduce the number of applications submitted, so as to increase the success rates and (more importantly) reduce the wasted effort of writing and reviewing applications with little realistic chance of success.

Progress remains positive with an overall reduction in application numbers of 41 per cent, close to our target of 50 per cent. Success rates have also increased to 31 per cent, comparable with our RCUK partners. The overall quality of applications is up, whilst peer review requirements are down.

There are, however, signs that this positive momentum may be under threat as in certain schemes application volume is beginning to rise once again. For example, in the Research Grants scheme the proposal count has recently exceeded pre-demand management levels. It is critical that all HEIs continue to build upon early successes, maintaining the downward pressure on the submission of applications across all schemes.

It was always likely that “demand management” might become the victim of its own success – as success rates creep up again, getting a grant appears more likely, and so researchers and research managers encourage and submit more applications.  Other factors might also be involved – the stage of the REF cycle, for example.  Or perhaps, now that talk of researcher or institutional sanctions has faded away, there’s less incentive for restraint.

Another possibility is that some universities haven’t yet got the message or don’t think it applies to them.  It’s also not hard to imagine that the kinds of internal review mechanisms that some of us have had for years and that we’re all now supposed to have are focusing on improving the quality of applications, rather than filtering out uncompetitive ideas.  But is anyone disgracing themselves?

Looking down the list of successes by institution (p. 41) it’s hard to pick out any obvious bad behaviour.  Most of those who’ve submitted more than 10 applications have an above-average success rate.  You’d only really pick out Leeds (10 applications, none funded), Edinburgh (8/1) and Southampton (14/2), and a clutch of institutions on 5/0 (including top-funded Essex, surprisingly), but in all those cases one or two more successes would change the picture.  Similarly for the top performers – King’s College (7/3), King Leicester III (9/4), Oxford (14/6) – it’s hard to make much of a case for the excellence or inadequacy of internal peer review systems from these figures alone.  What might be more interesting is a list of applications by institution which failed to reach the required minimum standard, but that’s not been made public to the best of my knowledge.  And of course, all these figures only refer to the response mode Standard Grant applications in the financial year (not academic year) 2013-14.
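To put a number on just how little these small samples tell us, here’s a quick sketch of my own (not from the report) that puts rough 95% confidence intervals around those success rates:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion - a standard
    way of expressing the uncertainty in a rate from a small sample."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Application/award counts as quoted above from the report (p. 41)
for name, apps, funded in [("Leeds", 10, 0), ("Edinburgh", 8, 1),
                           ("Southampton", 14, 2), ("Oxford", 14, 6)]:
    lo, hi = wilson_interval(funded, apps)
    print(f"{name}: {funded}/{apps} = {funded/apps:.0%}, 95% CI {lo:.0%}-{hi:.0%}")
```

On these numbers, even Leeds’ 0 from 10 is statistically consistent with an underlying success rate approaching 30% – which is exactly the point: one or two awards either way transforms the picture.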

Concentration of Funding

Another interesting stat (well, true for some values of “interesting”) concerns the level of concentration of funding.  The report records the expenditure levels for the top eleven (why 11, no idea…) institutions by research expenditure and by training expenditure.  Interesting question for you… what percentage of the total expenditure do the top 11 institutions get?  I could tell you, but if I tell you without making you guess first, it’ll just confirm what you already think about concentration of funding.  So I’m only going to tell you that (unsurprisingly) training expenditure is more concentrated than research funding.  The figures you can look up for yourself.  Go on, have a guess, go and check (p. 44) and see how close you are.

Research Funding by Discipline

On page 40, and usually the most interesting/contentious.  Overall success rate was 25% – a little down from last year, but a huge improvement on 14% two years ago.

Big winners?  History (4 from 6), Linguistics (5 from 9), Social Anthropology (4 from 9), Political and International Studies (9 from 22), and Psychology (26 from 88 – just under 30% of all grants funded were in psychology).  Big losers?  Education (1 from 27), Human Geography (1 from 19), Management and Business Studies (2 from 22).

Has this changed much from previous years?  Well, you can read what I said last year and the year before on this, but overall it’s hard to say because we’re talking about relatively small numbers for most subjects, and because some discipline classifications have changed over the last few years.  But, once again, for the third year in a row, Business and Management and Education do very, very poorly.

Human Geography has also had a below average success rate for the last few years, but going from 3 from 14 to 1 from 19 probably isn’t that dramatic a collapse – though it’s certainly a bad year.  I always make a point of trying to be nice about Human Geography, because I suspect they know where I live.  Where all of us live.  Oh, and Psychology gets a huge slice of the overall funding, albeit not a disproportionate one given the number of applications.

Which kind of brings us back to the same questions I asked in my most-read-ever piece – what on earth is going on with Education and with Business and Management research, and why do they do so badly with the ESRC?  I still don’t have an entirely satisfactory answer.

I’ve put together a table showing changes to disciplinary success rates over the last few years which I’m happy to share, but you’ll have to email me for a copy.  I’ve not uploaded it here because I need to check it again with fresh eyes before it’s used – fiddly, all those tables and numbers.

Pre-mortems: Tell me why your current grant application or research project will fail

I came across a really interesting idea the other week via the Freakonomics podcast – the idea of a project “pre-mortem”, or “prospective hindsight”.  They interviewed Gary Klein, who described it as follows:

KLEIN:  I need you to be in a relaxed state of mind.  So lean back in your chair. Get yourself calm and just a little bit dreamy. I don’t want any daydreaming but I just want you to be ready to be thinking about things. And I’m looking in a crystal ball. And uh, oh, gosh…the image in the crystal ball is a really ugly image. And this is a six-month effort. We are now three months into the effort and it’s clear that this project has failed. There’s no doubt about it. There’s no way that it’s going to succeed. Oh, and I’m looking at another scene a few months later, the project is over and we don’t even want to talk about it. And when we pass each other in the hall, we don’t even make eye contact. It’s that painful. OK. So this project has failed, no doubt about it [….] I want each of you to write down all the reasons why this project has failed. We know it failed. No doubts. Write down why it failed.

The thinking here is that such an approach to projects reduces overconfidence; elsewhere the podcast discusses the problems of overconfidence, “go fever”, the Challenger shuttle disaster, and how cultural/organisational issues can make it difficult to bring up potential problems and obstacles.  The pre-mortem exercise might free people from that, and encourage them (as a team) to find reasons for failure and then respond to them.  I don’t do full justice to the arguments here, but you can listen for yourself (or read the transcript) at the link above.  It reminds me of some of the material covered in a MOOC I took, which showed how very small changes in the way that questions are posed and framed can make surprisingly large differences to the decisions that people make – so perhaps this very subtle shift in mindset might be useful.

How might we use the idea of a pre-mortem in research development?  My first thought was about grant applications.  Would it help to get the applicants to undertake the pre-mortem exercise?  I’m not sure that overconfidence is often a huge problem among research teams (a kind of grumpy, passive-aggressive form of entitled pessimism is probably more common), so perhaps groupthink-style overconfidence and excessive positivity are less of an issue than in larger project teams where nobody wants to be the one to be negative.  But perhaps there’s value in asking the question anyway, and re-focusing applicants on the fact that they’re writing an application for reviewers and for a funding body, not for themselves.  A reminder that the views, priorities, and (mis)interpretations of others are crucial to their chances of success or failure.

Would it help to say to internal reviewers “assume this project wasn’t funded – tell me why”?  Possibly.  It might flush out issues that reviewers may be too polite or insufficiently assertive to raise otherwise, and again, it focuses minds on the nature of the process as a competition.  It could also help reviewers identify where the biggest danger for the application lies.

Another way it could be used is in helping applicants risk-assess their own project.  Saying to them “you got funded, but didn’t achieve the objectives you set for yourself – why not?” might be a good way of identifying project risks to minimise in the management plan, or risks to alleviate through better advance planning.  It might prompt researchers to think more cautiously about the project timescale, especially around issues that are largely out of their control.

So… has anyone used anything like this before in research development?  Might it be a useful way of thinking?  Why will your current application fail?

Adam Golberg announces new post about Ministers inserting themselves into research grant announcements

“You might very well think that as your hypothesis, but I couldn’t possibly comment”

Here’s something I’ve been wondering recently.  Is it just me, or have major research council funding announcements started to be made by government ministers, rather than by the, er, research councils?

Here’s a couple of examples that caught my eye from the last week or so. First, David Willetts MP “announces £29 million of funding for ESRC Centres and Large Grants“.  Thanks Dave!  To be fair, he is Minister of State for Universities and Science.  Rather more puzzling is George Osborne announcing “22 new Centres for Doctoral Training“, though apparently he found the money as Chancellor of the Exchequer.  Seems a bit tenuous to me.

So I had a quick look back through the ESRC and EPSRC press release archives to see whether the prominence of government ministers in research council funding announcements was a new thing or not, because I hadn’t noticed it before.  With the ESRC, it is new: here’s the equivalent announcement from last year, in which no government minister is mentioned.  With the EPSRC, it’s been going on for longer.  This year’s archive and the 2013 archive show government ministers (mainly Willetts, sometimes Cable or Osborne) front and centre in major announcements.  In 2012 they get a name check, but normally in the second or third paragraph, not in the headline, and they don’t get a picture of themselves attached to the story.

Does any of this matter?  Perhaps not, but here’s why I think it’s worth mentioning.  The Haldane Principle is generally defined as the principle that “decisions about what to spend research funds on should be made by researchers rather than politicians”.  One of my worries is that closely associating political figures with funding decisions gives the wrong impression.  Read the recent ESRC announcement again: it’s only when you get down to the ‘Notes for Editors’ section that there’s any indication that there was a competition, and you have to infer quite heavily from those notes that decisions were taken independently of government.

Why is this happening?  It might be for quite benign reasons – perhaps research council PR people think (probably not unreasonably) that name-checking a government minister gives them a greater chance of media coverage.  But I worry that it might be for less benign reasons related to political spin – seeking credit and basking in the reflected glory of all these new investments, which to the non-expert eye look to be something novel, rather than research council business as usual.  To be fair, there are good arguments for thinking that the current government does deserve some credit for protecting research budgets – a flat cash settlement (i.e. cut only by the rate of inflation each year) is less good than many want, but better than many feared.  But it would be deeply misleading if the general public were to think that these announcements represented anything above and beyond the normal day-to-day work of the research councils.
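For anyone unfamiliar with the jargon, here’s a trivial sketch of what “flat cash” means in practice – the inflation figure is an assumption for illustration, not the actual settlement:

```python
# "Flat cash": the nominal budget is frozen, so inflation erodes its real
# value each year. The inflation rate here is an assumed illustration.
budget_index = 100.0   # nominal budget, held flat
inflation = 0.025      # assumed 2.5% a year
years = 5

real_value = budget_index / (1 + inflation) ** years
print(f"Real-terms value after {years} years: {real_value:.1f}")
# ~88.4 - roughly a 12% real-terms cut despite an unchanged nominal budget
```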

Jo VanEvery tells me via Twitter that ministerial announcements are normal practice in Canada, but something doesn’t quite sit right with me about this, and it’s not a party political worry.  I feel there’s a real risk of appearing to politicise research.  If government claims credit, it’s reasonable for the opposition to criticise – that might be over the level of investment, but might it extend to the particular investments chosen?  Or do politicians know better than to go there for cheap political points?

Or should we stop worrying and just embrace it? It’s not clear that many people outside of the research ‘industry’ notice anyway (though the graphene announcement was very high profile), and so perhaps the chances of the electorate being misled (about this, at least) are fairly small.

But we could go further.  MEPs to announce Horizon 2020 funding?  Perhaps Nick Clegg should announce the results of the British Academy/Leverhulme Small Grants Scheme – although given the Victorian origins of the investments and wealth that support the Leverhulme Trust’s work, perhaps the honour should go to the ghosts of Gladstone or Disraeli.