ESRC – sweeping changes to the standard grants scheme

The ESRC have just announced a huge change to their standard grants scheme, and I think it’s fair to say that it’s going to prove somewhat controversial.

At the moment, it’s possible to apply to the ESRC Standard Grant Scheme at any time for grants of between £200k and £2 million. From the end of June this year, the minimum threshold will rise from £200k to £350k, and the maximum threshold will drop from £2m to £1m.

Those numbers probably don’t mean very much to you if you’re not familiar with research grant costing, but as a rough rule of thumb, a full time researcher for a year (including employment costs and overheads) comes to somewhere around £70k-80k. The test I used to apply was that if your project needed two years of researcher time, it was big enough. So… for £350k you’d probably need three researcher years, a decent amount of PI and Co-I time, and a fair chunk of non-pay costs. That’s a big project. I don’t have my files in front of me as I’m writing this, so maybe I’ll add a better illustration later on.
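In lieu of that better illustration, here’s the rule of thumb as a back-of-envelope sketch in Python. All the figures are illustrative assumptions on my part – the £75k researcher-year is just the midpoint of the £70k-80k range above, and the PI/Co-I and non-pay figures are invented for the example:

```python
# Back-of-envelope project costing using the rough rule of thumb above.
# All figures are illustrative assumptions, not real costings - actual
# numbers depend on salary points, overhead rates, and fEC rules.
RESEARCHER_YEAR = 75_000  # midpoint of the £70k-80k rule of thumb

def rough_cost(researcher_years, pi_coi_time, non_pay):
    """Crude total: researcher time plus PI/Co-I time plus non-pay costs."""
    return researcher_years * RESEARCHER_YEAR + pi_coi_time + non_pay

# Three researcher-years plus a decent amount of PI/Co-I time and a
# fair chunk of non-pay costs lands right at the proposed new minimum:
print(rough_cost(3, pi_coi_time=85_000, non_pay=40_000))  # 350000
```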

This isn’t the first time the lower limit has been raised. Until February 2011 there was a “Small Grants Scheme” for projects up to £200k; when that was shut, £200k became the new minimum. The argument at the time was that larger grants delivered more, and had fewer overheads in terms of the costs of reviewing, processing and administering. And although the idea was that small grants would help early career researchers, the figures didn’t really show that.

The reasons given for this change are a little disingenuous – sorry, puzzling. Firstly, this:

The changes are a response to the pattern of demand that is being placed on the standard grants scheme by the social science community. The average value of a standard grant application has steadily increased and is now close to £500,000, so we have adjusted the centre of gravity of the scheme to reflect applicant behaviour.

Now that’s an interesting tidbit of information – I wouldn’t have guessed that the “average value” would be that high, but you don’t have to be an expert in statistics (and believe me, in spite of giving 110% in maths class at school, I’m not one) to wonder which “average” is meant here, and, more to the point, why it even matters. This might be an attempt at justification, but I don’t see why it provides a rationale for change.

Then we have this….

The changes are also a response to feedback from our Grant Assessment Panels who have found it increasingly difficult to assess and compare the value of applications ranging from £200,000 to £2 million, where there is variable level of detail on project design, costs and deliverables. This issue has become more acute as the number of grant applications over £1 million has steadily increased over the last two years. Narrowing the funding range of the scheme will help to maintain the robustness of the assessment process, ensuring all applications get a fair hearing.

I have every sympathy for the Grant Assessment Panel members here – how do you choose between funding one £2m project and funding 10 x £200k projects, or any combination in between? It’s not so much comparing apples to oranges as comparing grapes to watermelons. And they’re right to point out the “variable” level of detail provided – but that’s only because their own rules give a maximum of six A4 pages for the Case for Support for projects under £1m and twelve for those over. If that sounds superficially reasonable, notice that it’s at most double the space to argue for up to ten times the money. I’ve supported applications of £1m+ and twelve sides of A4 is nowhere near enough, compared to the relative luxury of six sides for £200k. This is a problem.

In my view it makes sense to “introduce an annual open competition for grants between £1 million and £2.5 million”, which is what the ESRC propose to do. So I think there’s a good argument for lowering the upper threshold from £2m to £1m and setting it up as a separate competition. I know the ESRC want to reduce the number of calls/schemes, but this makes sense. As things stand I’ve regularly steered people away from the Centres/Large Grants competition towards Standard Grants instead, where I think success rates will be higher and they’ll get a fairer hearing. So I’d be all in favour of having some kind of single Centres/Large/Huge/Grants of Unusual Size competition.

But nothing here seems to me to be an argument for raising the lower limit.

But finally we come to what I suspect is the real reason, and judging by Twitter comments so far, I’m not alone in thinking this.

We anticipate that these changes will reduce the volume of applications we receive through the Standard Grants scheme. That will increase overall success rates for those who do apply as well as reducing the peer review requirements we need to place on the social science community.

There’s a real problem with ESRC success rates, which dropped to 10% in the July open call, with over half of the “excellent” proposals unfunded. That’s down from success rates of around 25%, achieved after much improvement over the last few years. I don’t know whether this is a blip – perhaps a few very expensive projects were funded and a lot of cheaper ones missed out – but it’s not good news. So it’s hard not to see this change as driven entirely by a desire to get success rates up, and perhaps as an indication that this wasn’t a blip.

In a recent interview with Adam Smith of Research Professional, Chief Executive Jane Elliott appeared to rule out the option of individual sanctions, which had been threatened if institutional restraint failed to bring down the number of poor quality applications. It appears that the problem is not so much poor quality applications as lots of high quality applications, not enough money, plummeting success rates, and something needing to be done.

All this raises some difficult questions.

  • Where are social science researchers now supposed to go for funding for projects whose “natural size” is between £10k (British Academy Small Grants) and £350k, the proposed new minimum threshold? There’s only really the Leverhulme Trust, whose schemes will suit some project types but not others, and they’re not exclusively a social science funder.
  • Where will the next generation of PIs to be entrusted with £350k of taxpayers’ money have an opportunity to cut their teeth, both in terms of proving themselves academically and managerially?
  • What about early career researchers? At least here we can expect a further announcement – there has been talk of merging the Future Research Leaders scheme into Standard Grants, so perhaps there will be a lower minimum for them. But we’ll see.
  • Given that the minimum threshold has been almost doubled, what consultation has been carried out? I’m just a humble Business School Research Manager (I mean I’m humble, my Business School is outstanding, obviously) so perhaps it’s not surprising that this is the first I’ve heard of it. But was there any meaningful consultation over this? Is there any evidence underpinning claims for the efficiency of fewer, longer and larger grants?
  • How do institutions respond? I guess one way will be to work harder to create bigger gestalt projects with multiple themes and streams and work packages. But surely expectations of grant getting for promotion and other purposes need to be dialled right back, if they haven’t been already. Do we encourage or resist a rush to get applications in before the change, at a time when success rates will inevitably be dire?

Of course, the underlying problem is that there’s not enough money in the ESRC’s budget to support excellent social science after years and years of “flat cash” settlements. And it’s hard to see what can be done about that in the current political climate.

Grant Writing Mistakes part 94: The “Star Wars”

Have you seen Star Wars?  Even if you haven’t, you might be aware of the iconic opening scene, and in particular the scrolling text that begins

“A long time ago, in a galaxy far, far away….”

(Incidentally, this means that the Star Wars films are set in the past, not the future. Which is a nice bit of trivia and the basis for a good pub quiz question).  What relevance does any of this have for research grant applications?  Patience, Padawan, and all will become clear.

What I’m calling the “Star Wars” error in grant writing is starting the main body of your proposal from the position of “A long time ago…”, before going on to review the literature at great length, quoting everything that calls for more research, and in general taking a lot of time and space to lay the groundwork and justify the research – without yet telling the reader what it’s about, why it’s important, or why it’s you and your team that should do it.

This information about the present project will generally emerge in its own sweet time, but often not until two thirds of the way through the available space.  What then follows is a rushed exposition with inadequate detail about the research questions and about the methods to be employed.  The reviewer is left with an encyclopaedic knowledge of all that went before – of the academic origin story of the proposal – but precious little about the project for which funding is being requested.  And without a clear and compelling account of what the project is about, the chances of getting funded are pretty much zero.  Reviewers will not unreasonably want more detail, and may speculate that its absence is an indication that the applicants themselves aren’t clear about what they want to do.

Yes, an application does need to locate itself in the literature, but this should be done quickly, succinctly, clearly, and economically as regards the space available.  Depending on the nature of the funder, I’d suggest not starting with the background, and instead opening with what the present project is about, then zooming out to locate it in the literature once the reader knows what it is that’s being located.  Certainly if your background/literature review section takes up more than about a quarter of the available space, it’s too long.

(Although I think “the Star Wars”  is a defensible name for this grant application writing mistake, it’s only because of the words “A long time ago, in a galaxy far, far away….”. Actually the scrolling text is a really elegant, pared down summary of what the viewer needs to know to make sense of what follows… and then we’re straight into planets, lasers, a fleeing spaceship and a huge Star Destroyer that seems to take forever to fly through the shot.)

In summary, if you want the best chance of getting funded, you should, er… restore balance to the force…. of your argument. Or something.

ESRC success rates 2013/2014

The ESRC Annual Report for 2013-14 has been out for quite a while now, and a quick summary and analysis from me is long overdue.

Although I was tempted to skip straight through all of the good news stories about ESRC successes and investments and dive straight in looking for success rates, I’m glad I took the time to at least skim read some of the earlier stuff.  When you’re involved in the minutiae of supporting research, it’s sometimes easy to miss the big picture of all the great stuff that’s being produced by social science researchers and supported by the ESRC.  Chapeau, everyone.

In terms of interesting policy stuff, it’s great to read that the “Urgency Grants” mechanism for rapid responses to “rare or unforeseen events” which I’ve blogged about before is being used, and has funded work “on the Philippines typhoon, UK floods, and the Syrian crisis”.  While I’ve not been involved in supporting an Urgency Grant application, it’s great to know that the mechanism is there, that it works, and that at least some projects have been funded.

The “demand management” agenda

This is what the report has to say on “demand management” – the concerted effort to reduce the number of applications submitted, so as to increase the success rates and (more importantly) reduce the wasted effort of writing and reviewing applications with little realistic chance of success.

Progress remains positive with an overall reduction in application numbers of 41 per cent, close to our target of 50 per cent. Success rates have also increased to 31 per cent, comparable with our RCUK partners. The overall quality of applications is up, whilst peer review requirements are down.

There are, however, signs that this positive momentum may be under threat as in certain schemes application volume is beginning to rise once again. For example, in the Research Grants scheme the proposal count has recently exceeded pre-demand management levels. It is critical that all HEIs continue to build upon early successes, maintaining the downward pressure on the submission of applications across all schemes.

It was always likely that “demand management” might be the victim of its own success – as success rates creep up again, getting a grant appears more likely, and so researchers and research managers encourage and submit more applications.  Other factors might also be involved – the stage of the REF cycle, for example.  Or perhaps now that talk of researcher or institutional sanctions has faded away, there’s less incentive for restraint.

Another possibility is that some universities haven’t yet got the message or don’t think it applies to them.  It’s also not hard to imagine that the kinds of internal review mechanisms that some of us have had for years and that we’re all now supposed to have are focusing on improving the quality of applications, rather than filtering out uncompetitive ideas.  But is anyone disgracing themselves?

Looking down the list of successes by institution (p. 41) it’s hard to pick out any obvious bad behaviour.  Most of those who’ve submitted more than 10 applications have an above-average success rate.  You’d only really pick out Leeds (10 applications, none funded), Edinburgh (8/1) and Southampton (14/2), and a clutch of institutions on 5/0 (including top-funded Essex, surprisingly), but in all those cases one or two more successes would change the picture.  Similarly for the top performers – King’s College (7/3), King Leicester III (9/4), Oxford (14/6) – it’s hard to make much of a case for the excellence or inadequacy of internal peer review systems from these figures alone.  What might be more interesting is a list of applications by institution which failed to reach the required minimum standard, but to the best of my knowledge that’s not been made public.  And of course, all these figures only refer to the response mode Standard Grant applications in the financial year (not academic year) 2013-14.
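For what it’s worth, a quick and dirty check supports the “one or two more successes would change the picture” point. This is my own back-of-envelope binomial sketch (in Python), not anything from the ESRC, and it assumes – unrealistically – that every application independently had the same chance of success as the overall rate of roughly 25%:

```python
# How surprising are the worst institutional records if every application
# independently had the same ~25% chance of success? (Assumed rate and
# a simplified binomial model - applications aren't really independent.)
from scipy.stats import binom

p = 0.25
print(binom.pmf(0, 10, p))  # Leeds, 0 from 10:         ~0.056
print(binom.cdf(1, 8, p))   # Edinburgh, <=1 from 8:    ~0.37
print(binom.cdf(2, 14, p))  # Southampton, <=2 from 14: ~0.28
```

None of those probabilities is anywhere near small enough to cry foul over, which is rather the point.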

Concentration of Funding

Another interesting stat (well, true for some values of “interesting”) concerns the level of concentration of funding.  The report records the expenditure levels for the top eleven (why 11, no idea…) institutions by research expenditure and by training expenditure.  Interesting question for you… what percentage of the total expenditure do the top 11 institutions get?  I could tell you, but if I tell you without making you guess first, it’ll just confirm what you already think about concentration of funding.  So I’m only going to tell you that (unsurprisingly) training expenditure is more concentrated than research funding.  The figures you can look up for yourself.  Go on, have a guess, go and check (p. 44) and see how close you are.

Research Funding by Discipline

On page 40, and usually the most interesting/contentious.  Overall success rate was 25% – a little down from last year, but a huge improvement on 14% two years ago.

Big winners?  History (4 from 6), Linguistics (5 from 9), Social Anthropology (4 from 9), Political and International Studies (9 from 22), and Psychology (26 from 88 – just under 30% of all grants funded were in psychology).  Big losers?  Education (1 from 27), Human Geography (1 from 19), Management and Business Studies (2 from 22).

Has this changed much from previous years?  Well, you can read what I said last year and the year before on this, but overall it’s hard to say because we’re talking about relatively small numbers for most subjects, and because some discipline classifications have changed over the last few years.  But, once again, for the third year in a row, Business and Management and Education do very, very poorly.

Human Geography has also had a below average success rate for the last few years, but going from 3 from 14 to 1 from 19 probably isn’t that dramatic a collapse – though it’s certainly a bad year.  I always make a point of trying to be nice about Human Geography, because I suspect they know where I live.  Where all of us live.  Oh, and Psychology gets a huge slice of the overall funding, albeit not a disproportionate one given the number of applications.

Which kind of brings us back to the same questions I asked in my most-read-ever piece – what on earth is going on with Education and Business and Management research, and why do they do so badly with the ESRC?  I still don’t have an entirely satisfactory answer.

I’ve put together a table showing changes to disciplinary success rates over the last few years which I’m happy to share, but you’ll have to email me for a copy.  I’ve not uploaded it here because I need to check it again with fresh eyes before it’s used – fiddly, all those tables and numbers.

Adam Golberg announces new post about Ministers inserting themselves into research grant announcements

“You might very well think that as your hypothesis, but I couldn’t possibly comment”

Here’s something I’ve been wondering recently.  Is it just me, or have major research council funding announcements started to be made by government ministers, rather than by the, er, research councils?

Here’s a couple of examples that caught my eye from the last week or so. First, David Willetts MP “announces £29 million of funding for ESRC Centres and Large Grants“.  Thanks Dave!  To be fair, he is Minister of State for Universities and Science.  Rather more puzzling is George Osborne announcing “22 new Centres for Doctoral Training“, though apparently he found the money as Chancellor of the Exchequer.  Seems a bit tenuous to me.

So I had a quick look back through the ESRC and EPSRC press release archives to see if the prominence of government ministers in research council funding announcements was a new thing or not, because I hadn’t noticed it before.  With the ESRC, it is new.  Here’s the equivalent announcement from last year in which no government minister is mentioned.  With the EPSRC, it’s been going on for longer.  This year’s archive and the 2013 archive show government ministers (mainly Willetts, sometimes Cable or Osborne) front and centre in major announcements.  In 2012 they get a name check, but normally in the second or third paragraph, not in the headline, and they don’t get a picture of themselves attached to the story.

Does any of this matter? Perhaps not, but here’s why I think it’s worth mentioning.  The Haldane Principle is generally defined as “decisions about what to spend research funds on should be made by researchers rather than politicians”.  And one of my worries is that in closely associating political figures with funding decisions, the wrong impression is given.  Read the recent ESRC announcement again, and it’s only when you get down to the ‘Notes for Editors’ section that there’s any indication that there was a competition, and you have to infer quite heavily from those notes that decisions were taken independently of government.

Why is this happening? It might be for quite benign reasons – perhaps research council PR people think (probably not unreasonably) that name-checking a government minister gives them a greater chance of media coverage. But I worry that it might be for less benign reasons related to political spin – seeking credit and basking in the reflected glory of all these new investments, which to the non-expert eye look to be something novel, rather than research council business as usual.  To be fair, there are good arguments for thinking that the current government does deserve some credit for protecting research budgets – a flat cash settlement (i.e. cut only be the rate of inflation each year) is less good than many want, but better than many feared. But it would be deeply misleading if the general public were to think that these announcements represented anything above and beyond the normal day-to-day work of the research councils.

Jo VanEvery tells me via Twitter that ministerial announcements are normal practice in Canada, but something doesn’t quite sit right with me about this, and it’s not a party political worry.  I feel there’s a real risk of appearing to politicise research.  If government claims credit, it’s reasonable for the opposition to criticise… that criticism might be about the level of investment, but might it extend to the choice of investments?  Or do politicians know better than to go there for cheap political points?

Or should we stop worrying and just embrace it? It’s not clear that many people outside of the research ‘industry’ notice anyway (though the graphene announcement was very high profile), and so perhaps the chances of the electorate being misled (about this, at least) are fairly small.

But we could go further.  MEPs to announce Horizon 2020 funding? Perhaps Nick Clegg should announce the results of the British Academy/Leverhulme Small Grants Scheme, although given the Victorian origins of the investments and wealth that support the Leverhulme Trust’s work, perhaps the honour should go to the ghosts of Gladstone or Disraeli.

Six writing habits I reckon you ought to avoid in grant applications…..

There are lots of mistakes to avoid in writing grant applications, and I’ve written a bit about some of them in previous posts (see the “advice on grant applications” link above).  This one is more about writing habits.  I read a lot of draft grant applications, and as a result I’ve got an increasingly long list of writing quirks, tics, habits, styles and affectations that Get On My Nerves.

Imagine I’m a reviewer… Okay, I’ll start again… imagine I’m a proper reviewer with some kind of power and influence… imagine further that I’ve got a pile of applications to review that’s as high as a high pile of applications.  Imagine how well disposed I’d feel towards anyone who makes reading their writing easier, clearer, or in the least bit more pleasant.  Remember how the really well-written essays make your own personal marking hell a little bit less sulphurous for a short time.  That.  Whatever that tiny burst of goodwill – or antibadwill – is worth, you want it.

The passive voice is excessively used

I didn’t know the difference between active and passive voice until relatively recently, and if you’re also from a generation where grammar wasn’t really teached in schools then you might not either.  Google is your friend for a proper explanation by people who actually know what they’re talking about, and you should probably read that first, but my favourite explanation is from Rebecca Johnson – if you can add “by zombies”, then it’s passive voice. I’ve also got the beginnings of a theory that the Borg from Star Trek use the passive voice, and that’s one of the things that makes them creepy (“resistance is futile” and “you will be assimilated”)  but I don’t know enough about grammar or Star Trek to make a case for this.   Sometimes the use of the passive voice (by zombies) is appropriate, but often it makes for distant and slightly tepid writing.  Consider:

A one day workshop will be held (by zombies) at which the research findings will be disseminated (by zombies).  A recording of the event will be made (bz) and posted on our blog (bz).  Relevant professional bodies will be approached (bz)…

This will be done, that will be done.  Yawn.  Although, to be fair, a workshop with that many zombies probably won’t be a tepid affair.  But much better, I think, to take ownership… we will do these things, co-Is A and B will lead on X.  Academic writing seems to encourage depersonalisation and formality and distancing (which is why politicians love it – “mistakes were made [perhaps by zombies, but not by me]”).

I think there are three reasons why I don’t like it.  One is that it’s just dull.  A second is that I think it can read like a way of avoiding detail or specifics or responsibility for precisely the reasons that politicians use it, so it can subconsciously undermine the credibility of what’s being proposed.  The third reason is that I think for at least some kinds of projects, who the research team are – and in particular who the PI is – really matters.  I can understand the temptation to be distant and objective and sciency as if the research speaks entirely for itself.  But this is your grant application, it’s something that you ought to be excited and enthused by, and that should come across. If you’re not, don’t even bother applying.

First Person singular, First Person plural, Third Person

Pat Thomson’s blog Patter has a much fuller and better discussion about the use of  “we” and “I” in academic writing that I can’t really add much to. But I think the key thing is to be consistent – don’t be calling yourself Dr Referstoherselfinthethirdperson in one part of the application, “I” in another, “the applicant” somewhere else, and “your humble servant”/ “our man in Havana” elsewhere.  Whatever you choose will feel awkward, but choose a consistent method of awkwardness and have done with it. Oh, and don’t use “we” if you’re the sole applicant.  Unless you’re Windsor (ii), E.

And don’t use first names for female team members and surnames for male team members.  Or, worse, first names for women, titles and surnames for men. I’ve not seen this myself, but I read about it in a tweet with the hashtag #everydaysexism

Furthermore and Moreover…

Is anyone willing to mount a defence for the utility of either of these words, other than (1) general diversity of language and (2) padding out undergraduate essays to the required word count? I’m just not sure what either of these words actually means or adds, other than perhaps as an attempted rhetorical flourish, or, more likely, a way of bridging non-sequiturs or propping up poor structuring.

“However” and “Yet”…. I’ll grudgingly allow to live.  For now.

Massive (Right Justified) Wall-o-Text

Few things make my heart sink more than having to read a draft application that regards the use of paragraphs and other formatting devices as illustrative of a lack of seriousness and rigour. There is a distinction between densely argued and just dense.  Please make it easier to read… and that means not using right hand justification.  Yes, it has a kind of superficial neatness, but it makes the text much less readable.

Superabundance of Polysyllabic Terminology

Too many long words. It’s not academic language and (entirely necessary) technical terms and jargon that I particularly object to – apart from in the lay summary, of course.  It’s a general inflation of linguistic complexity – using a dozen words where one will do, never using a simple word where a complex one will do, never making your point twice when a rhetorically-pleasing triple is on offer.

I guess this is all done in an attempt to make the application or the text seem as scholarly and intellectually rigorous as possible, and I think students may make similar mistakes.  As an undergraduate I think I went through a deeply regrettable phase of trying to ape the style of academic papers in my essay writing, and probably made myself sound like one of the most pompous nineteen-year-olds on the planet.

If you find yourself using words like “effectuate”, you might want to think about whether you might be guilty of this.

Sta. Cca. To. Sen. Ten. Ces.

Varying and manipulating sentence length can be done deliberately to produce certain effects.  Language has a natural rhythm and pace.  Most people probably have some awareness of what that is.  They are aware that sentences which are one paced can be very dull.  They are aware that there is something tepid about this paragraph.  But not everyone can feel the music in language.  I think it is a lack of commas that is killing this paragraph.  Probably there is a technical term for this.

So… anyone willing to defend “moreover” or “furthermore”? Any particularly irritating habits I’ve missed?  Anyone who actually knows any grammar or linguistics willing to provide technical terms for any of these habits?

ESRC success rates by discipline for 2012-13

Update: 2013/14 figures here.

With all of the fanfare of a cat-burglar slipping in through a first floor window in the back office of a diamond museum, the ESRC has published its Vital Statistics for 2012-13, including the success rates by academic discipline.  I’ve been looking forward to seeing these figures to see if there’s been any change since last year’s, which showed huge variations in success rates between different disciplines – from 1 in 68 for Business and Management and 2 in 62 for Education to 7 of 18 for Socio-Legal Studies.

The headline news, as trumpeted in the Times Higher, is that success rates are indeed up, and that “demand management” appears to be working.  Their table shows how applications, the amount of money distributed, and success rates have varied over the last few years, and has figures for all of the research councils.  For the ESRC, the numbers in their Vital Statistics document are slightly different (315 applications, 27% success rate) from those in the Times Higher table (310, 26%), possibly because some non-university recipients have been excluded.  The overall picture is hugely encouraging and a great improvement on last year’s 14% success rate.  And it’s also worth repeating that these figures don’t seem to include the Knowledge Exchange scheme, which now has a 52% success rate.  That rate is apparently too high, as the scheme is going to end in March next year, to be replaced by passing funding directly to institutions based on their ESRC funding record – similar to the EPSRC’s approach of delegating responsibility for running impact/knowledge exchange schemes to universities.

For the ESRC, “demand management” measures so far have largely consisted of:
(i) Telling universities to stop submitting crap applications (I paraphrase, obviously…..)
(ii) Telling universities that they have to have some kind of internal peer review process
(iii) Threatening some kind of researcher sanctions if (i) and (ii) don’t do the trick.

And the message appears to have been getting through.  Though I do wonder how much of this gain is through eliminating “small” research grants – up to £200k – which I think in recent times had a worse success rate than Standard Grants, though that wasn’t always the case historically.  Although it’s more work to process and review applications for four pots of £100k than for one of £400k, the loss of the Small Grants scheme is to be regretted, as it’s now very difficult indeed to get funding for social science projects with a natural size of £20k-£199k.

But what you’re probably wondering is how your academic discipline got on this time round.  Well, you can find this year’s and last year’s Vital Statistics documents hidden away in a part of the ESRC’s website that even I struggle to find, and I’ve collated them for easy comparison purposes here.  But the figures aren’t comparing like with like – the 2011/12 figures included the last six months of the old Small Grants Scheme, which distorts things.  It’s also difficult (obviously) to make judgements based on small numbers which probably aren’t statistically significant. Also, in the 2011-12 figures there were 43 applications (about 6% of the total) which were flagged as “no lead discipline”, which isn’t a category this year.  But some overall trends have emerged:

  • Socio-legal Studies (7 from 18, 3 from 8), Linguistics (6 from 27, 5 from 15) and Social Anthropology (5 from 18, 4 from 5) have done significantly better than the average for the last two years
  • Business and Management (1 from 68, 2 from 17) and Education (2 from 62, 2 from 19) continue to do very poorly.
  • Economics and Economics and Social History did very well the year before last, but much less well this year.
  • Psychology got one-third of all the successes last year, and over a quarter the year before, though the success rate is only very slightly above average in both years.
  • No projects in the last two years funded from Environmental Planning or Science and Technology Studies
  • Demography (2 from 2) and Social Work (3 from 6) have their first projects funded since 2009/10.

Last year I speculated briefly about what the causes of these differences might be and looked at success rates in previous years, and much of that is still relevant.  Although we should welcome the overall rise in success rates, it’s still the case that some academic subjects do consistently better than others with the ESRC.  While we shouldn’t expect to see exactly even success rates, when some consistently outperform the average, and some under-perform, we ought to wonder why that is.

Meanwhile, over at the ESRC…

There have been a few noteworthy developments at the ESRC over the summer months which I think are probably worth drawing together into a single blog post for those (like me) who’ve made the tactical error of choosing to have some time off over the summer.

1.  The annual report

I’ve been looking forward to this (I know, I know….) to see whether there’s been any substantial change to the huge differences in success rates between different academic disciplines.  I wrote a post about this back in October and it’s by some distance the most read article on my blog. Have there been any improvements since 2011/12, when Business and Management had 1 of 68 applications funded and Education 2 of 62, compared to Socio-Legal Studies (39%, 7 of 18) and Social Anthropology (28%, 5 from 18)?

Sadly, we still don’t know, because this information is nowhere to be found in the annual report. We know the expenditure by region and the top 11 (sic) recipients of research expenditure, research and training expenditure, and the two combined.  But we don’t know how this breaks down by subject.  To be fair, that information wasn’t published until October last year, and so presumably it will be forthcoming.  And presumably the picture will be better this year.

That’s not to say that there’s no useful information in the annual report. We learn that the ESRC Knowledge Exchange Scheme has a very healthy success rate of 52%, though I think I’m right in saying that the scheme will have been through a number of variations in the period in question. Historically it’s not been an easy scheme to apply for, partly because of the need for co-funding from research partners, and partly because of a number of very grey areas around costing rules.

For the main Research Grants Scheme success rates are also up, though by how much is unclear.  The text of the report (p. 18) states that

After a period where rates plummeted to as low as 11 per cent, they have now risen to 35 per cent, in part because we have committed additional funding to the scheme [presumably through reallocation, rather than new money] but also because application volume has decreased. This shows the effects of our demand management strategy, with HEIs now systematically quality assuring their applications and filtering out those which are not ready for submission. We would encourage HEIs to continue to develop their demand management strategies as this means academics and administrators in both HEIs and the ESRC have been able to focus efforts on processing and peer-reviewing a smaller number of good quality applications, rather than spending time on poor quality proposals which have no chance of being funded.

Oddly the accompanying table gives a 27% success rate, and unfortunately (at the time of writing) the document with success rates for individual panel meetings hasn’t been updated since April 2012, and the individual panel meeting documents only list funded projects, not success rates. But whatever the success rate is, it does appear to be a sign that “demand management” is working and that institutions are practising restraint in their application habits.  Success rates of between a quarter and a third sound about right to me – enough applications to allow choice, but not so many as to be a criminal waste of time and effort.

The report also contains statistics about the attendance of members at Council and Audit Committee Meetings, but you’ll have to look them up for yourself as I have a strict “no spoilers” policy on this blog.

I very much look forward – as I suspect the research community does too – to seeing the success rates by academic discipline at a later date.

2. A new Urgency Grants Mechanism

More good news…. a means by which research funding decisions can be taken quickly in response to the unexpected and significant.  The example given is the Riots of summer 2011, and I remember thinking that someone would get a grant out of all this as I watched TV pictures of my former stomping ground of Croydon burning.  But presumably less… explosive unexpected opportunities might arise too.  All this seems only sensible, and allows a way for urgent requests to be considered in a timely and transparent manner.

3. ESRC Future Research Leaders call

But “sensible” isn’t a word I’d apply to the timing of this latest call.  First you’ve heard of it?  Well, better get your skates on, because the deadline is the 24th September. Outline applications?  Expressions of interest?  Nope, a full application.  And in all likelihood, you should probably take your skates off again, because chances are that your institution’s deadlines for internal peer review have already been and gone.

The call came out on or about the 23rd July, with a deadline of 24th September. Notwithstanding what I’ve said previously about no time of the academic year being a good time to get anything done, it’s very hard to understand why this happened.  Surely the ESRC know that August/September is when a lot of academic staff (and therefore research support) are away from the university on a mixture of annual leave and undertaking research.  Somehow, institutions are expected to cobble together a process of internal review and institutional support, and individuals are expected to find time to write the application.  It’s hard enough for the academics to write the applications, but if we take the demand management agenda seriously, we should be looking at both the track record and the proposed project of potential applicants, thinking seriously about mentoring and support, and having difficult conversations with people we don’t think are ready.  That needs a lot of senior time, and a lot of research management time.

This scheme is a substantial investment – effectively 70 projects worth up to £250k each (at 80% fEC).  Given that the Small Grants scheme has gone and British Academy Fellowship success rates are tiny, this is really the major opportunity to be PI on a substantial project.  This scheme is overtly picking research leaders of the future, but the timetable means that it’s picking those leaders from those who didn’t have holiday booked in the wrong couple of weeks, or who could clear their diaries to write the application, or who don’t have a ton of teaching to prepare for – which rules out most early career academics, I would imagine.

Now it might be objected that we should have known that the call was coming.  Well…. yes and no. The timing was similar last year, and it was tight then, but it’s worse this year – the call was announced on about the same date last year, but with a deadline of 4th October, almost two working weeks later.  Two working weeks that turn this from a tall order into something nigh on impossible, and which can only favour those with lighter workloads in the run-up to the new academic year. And even knowing that it’s probably coming doesn’t help.  Do we really expect people to start making holiday plans around when a particular call might come out?  Really?  If we must have a September deadline, can we know about it in January?  Or even earlier?  To be fair, the ESRC has got much better with pre-call announcements of late, at least for very narrow schemes, but this really isn’t good enough.

I also have a recollection (backed up by a quick search through old emails, but not by documentary evidence) that last year the ESRC were talking about changing the scheme for this year, possibly with multiple deadlines or even going open call.  Surely, I remember thinking, this start-of-year madness can only be a one-off.

Apparently not.

Is there a danger that research funding calls are getting too narrow?

The ESRC have recently added a little more detail to a previous announcement about a pending call for European-Chinese joint research projects on Green Economy and Population Change.  Specifically, they’re after projects which address the following themes:

Green Economy

  • The ‘greenness and dynamics of economies’
  • Institutions, policies and planning for a green economy
  • The green economy in cities and metropolitan areas
  • Consumer behaviour and lifestyles in a green economy

Understanding Population Change

  • changing life course
  • urbanisation and migration
  • labour markets and social security dynamics
  • methodology, modelling and forecasting
  • care provision
  • comparative policy learning

Projects will need to involve institutions from at least two of the participating European countries (UK, France (involvement TBC), Germany, Netherlands) and two institutions in China. On top of this is an expectation of sustainability/capacity building around the research collaborations, plus the usual plus points of involving stakeholders and interdisciplinary research.

Before I start being negative, or potentially negative, I have one blatant plug and some positive things to say. The blatant plug is that the University of Nottingham has a campus in Ningbo in China which is eligible for NSFC funding and therefore would presumably count as one Chinese partner. I wouldn’t claim to know all about all aspects of our Ningbo research expertise, but I know people who do.  Please feel free to contact me with ideas/research agendas and I’ll see if I can put you in touch with people who know people.

The positive things.  The topics seem to me to be important, and we’ve been given advance notice of the call and a fair amount of time to put something together.  There’s a reference to Open Research Area procedures and mechanisms – agreements between the UK, France, Netherlands and Germany on a common decision-making process for joint projects, in which each partner is funded by their national funder under their own national funding rules.  This is excellent, as it doesn’t require anyone to become an expert in another country’s funding rules, and doesn’t have the double or treble jeopardy problem of previous calls where decisions were taken by individual funders.  It’s also good that national funders are working together on common challenges – this adds fresh insight, invites interesting comparative work and pools intellectual and financial resources.

However, what concerns me about calls like this is that the area at the centre of the particular Venn diagram of this call is really quite small.  It’s open to researchers with research interests in the right areas, with collaborators in the right European countries, and with collaborators in China.  That’s two – arguably three – circles in the diagram.  Of course, there’s one more – proposals that are outstanding.  Will there be enough strong competition on the hallowed ground at the centre of all these circles? It’s hard to say, as we don’t know yet how much money is available.

I’m all for calls that encourage, incentivise, and facilitate international research.  I’m in favour of calls on specific topics which are under-researched, which are judged of particular national or international importance, or where co-funding from partners can be found to address areas of common interest.

But I’m less sure about having both in one call – both very specific requirements in terms of the nationality of the partner institutions, and in terms of the call themes. Probably the scope of this call is wide enough – presumably the funders think so – but I can’t help thinking that less onerous eligibility requirements in terms of partners could lead to greater numbers of high quality applications.

ESRC “demand management” measures working….. and why rises and falls in institutions’ levels of research funding are not news

There was an interesting snippet of information in an article in this week’s Times Higher about the latest research council success rates.

 [A] spokeswoman for the ESRC said that since the research council had begun requiring institutions from June 2011 to internally sift applications before submitting them, it had recorded an overall success rate of 24 per cent, rising to 33 per cent for its most recent round of responsive mode grants.  She said that application volumes had also dropped by 37 per cent, “which is an encouraging start towards our demand management target of a 50 per cent reduction” by the end of 2014-15.

Back in October last year I noticed what I thought was a change in tone from the ESRC which gave the impression that they were more confident that institutions had taken note of the shot across the bows of the “demand management” measures consultation exercise(s), and that perhaps asking for greater restraint in putting forward applications would be sufficient.  I hope it is, because the current formal demand management proposals – which will be implemented if required – unfairly and unreasonably include co-applicants in any sanction.

I’ve written before (and others have added very interesting comments) about how I think we arrived at the situation where social science research units were flinging in as many applications as possible in the hope that some of them would stick. And I hope the recent improvement in success rates to around 1-in-3 or 1-in-4 doesn’t serve to re-encourage this kind of behaviour. We need long term, sustainable, careful restraint in terms of what applications are submitted by institutions to the ESRC (and other major funders, for that matter) and the state in which they’re submitted.

Everyone will want to improve the quality of applications, and internal mentoring and peer review and the kind of lay review that I do will assist with that, but we also need to make sure that the underlying research idea is what I call ‘ESRC-able’.  At Nottingham University Business School, I secured agreement a while ago now to introduce a ‘proof of concept’ review phase for ESRC applications, where we review a two page outline first, before deciding whether to give the green light for the development of a full application.  I think this allows time for changes to be made at the earliest stage, and makes it much easier for us to say that the idea isn’t right and shouldn’t be developed than if a full application was in front of us.

And what isn’t ‘ESRC-able’?  I think a look at the assessment schema gives some useful clues – if you can’t honestly say that your application would fit in the top two categories on the final page, you probably shouldn’t bother.  ‘Dull but worthy’ stuff won’t get funded, and I’ve seen the phrase “incremental progress” used in referees’ comments to damn with faint praise.  There’s now a whole category of research that is of good quality and would doubtless score respectably in any REF exercise, but which simply won’t be competitive with the ESRC.  This, of course, raises the question of how the non-groundbreaking stuff gets funded – the stuff that’s more than a series of footnotes to Plato, but which builds on and advances the findings of ground-breaking research by others.  And to that I have no answer – we have a system which craves the theoretically and methodologically innovative, but after a paradigm has been shifted, there’s no money available to explore the consequences.

*     *     *     *     *

Also in the Times Higher this week is the kind of story that appears every year – some universities have done better with their research funding and success rates than in previous years, and some have done worse.  Some of those who have done better and worse are the traditional big players, and some are in the chasing pack.  Those who have done well credit their brilliant internal systems, and those who have done badly contest the figures or point to extenuating circumstances, such as the ending of large grants.

While one always wants to see one’s own institution doing well and doing better, and everyone always enjoys a good bit of schadenfreude at the expense of their rivals – sorry, benchmark institutions – and any apparent difficulties that a big beast finds itself in, are any of these short term variations of actual, real, statistical significance?  Apparent big gains can be down to a combination of a few big wins, grants transferring in with new staff, and just… well… the kind of natural variation you’d expect to see.  Big losses could be big grants ending, staff moving on, and – again – natural variance.  Yes, you could ascribe your big gains to your shiny new review processes, but would you also conclude that there’s a problem with those same processes and people the year after, when performance is apparently less good?

Why these short term (and mostly meaningless) variations are more newsworthy than the radical variation in ESRC success rates for different social science disciplines, I have no idea….

ESRC success rates by discipline: what on earth is going on?

Update – read this post for the 2012/13 stats for success rates by discipline

The ESRC have recently published a set of ‘vital statistics‘ which are “a detailed breakdown of research funding for the 2011/12 financial year” (see page 22).  While differences in success rates between academic disciplines are nothing new, this year’s figures show some really quite dramatic disparities which – in my view at least – require an explanation and action.

The overall success rate was 14% (779 applications, 108 funded) for the last tranche of responsive mode Small Grants and responsive mode Standard Grants (now Research Grants).  However, Business and Management researchers submitted 68 applications, of which 1 was funded.  One.  One single funded application.  In the whole year.  For the whole discipline.  Education fared little better with 2 successes out of 62.

Just pause for a moment to let that sink in.  Business and Management.  1 of 68.  Education.  2 of 62.

Others did worse still.  Nothing for Demography (4 applications), Environmental Planning (8), Science and Technology Studies (4), Social Stats, Computing, Methods (11), or Social Work (10).  However, with a 14% success rate working out at about 1 in 7, low volumes of applications may explain this.  It’s rather harder to explain a total of 3 applications funded from 130.
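The 3 from 130, on the other hand, is very hard to put down to chance.  A back-of-envelope binomial check of my own (not anything the ESRC publishes) makes the point – assuming, purely for the sake of argument, that each of those applications independently had the overall 14% chance of success:

```python
# Probability of 3 or fewer successes from 130 applications, if each
# independently had the overall 14% success rate (a simplifying
# assumption, purely for illustration).
from scipy.stats import binom

print(binom.cdf(3, 130, 0.14))  # ~5e-06, roughly 1 in 200,000
```

So whatever is driving those numbers, it isn’t bad luck.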

Next least successful were ‘no lead discipline’ (4 of 43) and Human Geography (3 from 32).  No other subjects had success rates in single figures.  At the top end were Socio-Legal Studies (a stonking 39%, 7 of 18), and Social Anthropology (28%, 5 from 18), with Linguistics; Economics; and Economic and Social History also having hit rates over 20%.  Special mention for Psychology (185 applications, 30 funded, 16% success rate) which scored the highest number of projects – almost as many as Sociology and Economics (the second and third most funded) combined.

Is this year unusual, or is there a worrying and peculiar trend developing?  Well, you can judge for yourself from this table on page 49 of last year’s annual report, which has success rates going back to the heady days of 06/07.  Three caveats, though, before you go haring off to see your own discipline’s stats.  One is that the reports refer to financial years, not academic years, which may (but probably doesn’t) make a difference.  The second is that the figures refer to Small and Standard Grants only (not Future Leaders/First Grants, Seminar Series, or specific targeted calls).  The third is that funded projects are categorised by lead discipline only, so the figures may not tell the full story as regards involvement in interdisciplinary research.

You can pick out your own highlights, but it looks to me as if this year is only a more extreme version of trends that have been going on for a while.  Last year’s Education success rate?  5%.  The years before?  8% and 14%.  Business and Management?  A heady 11%, compared to 10% and 7% for the preceding years. And you’ve got to go all the way back to 09/10 to find the last time any projects were funded in Demography, Environmental Planning, or Social Work.  And Psychology has always been the most funded, and always got about twice as many projects as the second and third subjects, albeit from a proportionately large number of applications.

When I have more time I’ll try to pull all the figures together in a single spreadsheet, but at first glance many of the trends seem similar.

So what’s going on here?  Well, there are a number of possibilities.  One is that Socio-Legal Studies research in this country is tip-top, and B&M and Education research are comparatively very weak.  Certainly I’ve heard it said that B&M research tends to suffer from poor research methodologies.  Another possibility is that some academic disciplines are very collegiate and supportive in nature, and scratch each other’s backs when it comes to funding, while other disciplines are more back-stabby than back-scratchy.

But are any or all of these possibilities sufficient to explain the difference in funding rates?  I really don’t think so.  So what’s going on?  Unconscious bias?  Snobbery?  Institutional bias?  Politics?  Hidden agendas?  All of the above?  Anyone know?

More pertinently, what do we do about it?  Personally, I’d like to see the appropriate disciplinary bodies putting a bit of pressure on the ESRC for some answers, some assurances, and the production of some kind of plan for addressing the imbalance.  While no-one would expect to see equal success rates for every subject, this year’s figures – in my view – are very troubling.

And something needs to be done about it, whether that’s a re-thinking of priorities, putting the knives away, addressing real disciplinary weaknesses where they exist, ring-fenced funding, or some combination of all of the above.  Over to greater minds than mine…..