How useful is reading examples of successful grant applications?

This article is prompted by a couple of Twitter conversations around a Times Higher Education article quoting Ross Mounce, founding editor of Research Ideas and Outcomes, who argues for open publication at every stage of the research process, including (successful and unsuccessful) grant applications. The article acknowledges that this is likely to be controversial, but it got a few of us thinking about the value of reading other people’s grant applications to improve one’s own.

I’m asked about this a lot by prospective grant applicants – “do you have any examples of successful applications that you can share?” – and while generally I will supply them if I have access to them, I also add substantial caveats and health warnings about their use.

The first and perhaps most obvious worry is that most schemes change and evolve over time, and what works for one call might not work in another. Even if the application form hasn’t changed substantially, funder priorities – both hard priorities and softer steers – may have. And even if neither has changed, competitive pressures and improved grant writing skills may well be raising the bar: an application that got funded, say, three or four years ago might not get funded today. Not necessarily because the project is weaker, but because the exposition and argument would now need to be stronger. This is particularly the case for impact – it’s hard to imagine that many of the impact sections on RCUK applications written in the early days of impact would pass muster now.

The second, and more serious, worry is that potential applicants take the successful grant application far too seriously and far too literally. I’ve seen smart, sensible, sophisticated people become obsessed with a successful grant application and try to copy everything about it, whether relevant or not, as if there were some mystical secret encoded into the text, and any subtle deviation would prevent the magic from working. Things like… the exact balance of the application; the tables/diagrams used or not used (“but the successful application didn’t have diagrams!”); the referencing system; the font choice; the level of technical detail; the choice and exposition of methods; whether there are critical friends and/or a steering group; the number of Profs on the bid; the amount of RA time; the balance between academic and stakeholder impact.

It’s a bit like a locksmith borrowing someone else’s front door key, making as exact a replica as she can, and then expecting it to open her front door too. Or a bit like taking a recipe that you’ve successfully followed and using it to make a completely different dish by changing the ingredients while keeping the cooking processes the same. Is it a bit like cargo cult thinking? Attempting to replicate an observed success or desired outcome by copying everything around it as closely as possible, without sufficient reflection on cause and effect? It’s certainly generalising inappropriately from a very small sample size (often n=1).

But I think – subject to caveats and health warnings – it can be useful to look at previously successful applications from the same scheme. I think it can sometimes even be useful to look at unsuccessful applications. I’ve changed my thinking on this quite a bit in the last few years – I used to steer people away from them much more strongly. I think they can be useful in the following ways:

  1. Getting a sense of what’s required. It’s one thing seeing a blank application form and list of required annexes and additional documents, it’s another seeing the full beast. This will help potential applicants get a sense of the time and commitment that’s required, and make sensible, informed decisions about their workload and priorities and whether to apply or not.
  2. It also highlights all of the required sections, so no requirement of the application should come as a shock. Increasingly with the impact agenda it’s a case of getting your ducks in a row before you even think about applying, and it’s good to find that out early.
  3. It makes success feel real, and possible, especially if the grant winner is someone the applicant knows, or who works at the same institution. Low success rates can be demoralising, but it helps to know not only that someone, somewhere is successful, but that someone here and close by has been successful.
  4. It does set a benchmark in terms of the state of readiness, detail, thoroughness, and ducks-in-a-row-ness that the attentive potential applicant should aspire to at least equal, if not exceed. Early draft and early stage research applications often have larger or smaller pockets of vaguery and are often held together with a generous helping of fudge. Successful applications should show what’s needed in terms of clarity and detail, especially around methods.
  5. Writing skills. Writing grant applications is a very different skill to writing academic papers, which may go some way towards explaining why the Star Wars error in grant writing is so common. So it’s going to be useful to see examples of that skill used successfully… but having said that, I have a few examples in my library of successes which were clearly great ideas, but which were pretty mediocre as examples of how to craft a grant application.
  6. Concrete ideas and inspiration. Perhaps about how to use social media, or ways to engage stakeholders, or about data management, or other kinds of issues, questions and challenges – if (and only if) they’re also relevant for the new proposal.

So on balance, I think reading (funder and scheme) relevant, recent, and highly rated (even if not successful) funding applications can help prospective applicants…. provided that they remember that what they’re reading and drawing inspiration from is a different application from a different team to do different things for different reasons at a different time.

And not a mystical, magical, alchemical formula for funding success.

Getting research funding: the significance of significance

"So tell me, Highlander, what is peer review?"
“I’m Professor Connor Macleod of the Clan Macleod, and this is my research proposal!”

In an excellent recent blog post, Lachlan Smith wrote about the “who cares?” question that potential grant applicants ought to consider, and that research development staff ought to pose to applicants on a regular basis.

Why is this research important, and why should it be funded? And crucially, why should we fund this, rather than that? In a comment on a previous post on this blog, Jo VanEvery quoted some wise words from a Canadian research funding panel member: “it’s not a test, it’s a contest”. In other words, research funding is not an unlimited good like a driving test or a PhD viva, where there’s no limit to how many people can (in principle) succeed. Rather, it’s more like a job interview, qualification for the Olympic Games, or the film Highlander – not everyone can succeed. And sometimes, there can be only one.

I’ve recently been fortunate enough to serve on a funding panel myself, as a patient/public involvement representative for a health services research scheme. Assessing significance in the form of potential benefit for patients and carers is a vitally important part of the scheme, and while I’m limited in what I’m allowed to say about my experience, I don’t think I’m speaking out of turn when I say that significance – and demonstrating that significance – is key.

I think there’s a real danger when writing – and indeed supporting the writing of – research grant applications that the focus gets very narrow, and the process becomes almost inward looking. It becomes about improving the application internally, writing deeply for subject experts, rather than writing broadly for a panel of people with a range of expertise and experiences. It almost goes without saying that the proposed project must convince the kinds of subject experts who will typically be asked to review it, but even then there’s no guarantee that reviewers will know as much as the applicant. In fact, it would be odd indeed if the reviewers and panel members knew more about the topic than the applicant. I’d probably go as far as to say that if you think the referees and reviewers know more than you, you probably shouldn’t be applying – though I’m open to persuasion about some early career schemes and some very specific calls on very narrow topics.

So I think it’s important to write broadly, to give background and context, to seek to convince others of the importance and significance of the research question. To educate and inform and persuade – almost like a briefing. I’m always badgering colleagues for what I call “killer stats” – how big is the problem, how many people does it affect, by how much is it getting worse, how much is it costing the economy, how much is it costing individuals, what difference might a solution to this problem make? If there’s a gap in the literature or in human knowledge, make a case for the importance, or potential importance, of filling that gap.

For blue skies research it’s obviously harder, but even here there is scope for discussing the potential academic significance of the possible findings – academic impact – and what new avenues of research may be opened up, or closed off by a decisive negative finding that would allow effort to be refocused elsewhere. If all research is standing on the shoulders of giants, what could be seen by future researchers standing on the shoulders of your research?

It’s hugely frustrating for reviewers when applicants don’t do this – when they don’t give decision makers the background and information they need to draw informed conclusions about the proposed project. A motivated reviewer with a lighter workload and a role in introducing your proposal may have time to do her own research, but you shouldn’t expect this, and she shouldn’t have to. That’s your job.

It’s worth noting, by the way, that the existence of a gap in the literature is not itself an argument for it being filled, or at least not through large amounts of scarce research funding. There must be a near infinite number of gaps, such as the one that used to exist about the effect of peanut butter on the rotation of the earth – but we need more than the bare fact of the existence of a gap – or the fact that other researchers can be quoted as saying there’s a gap – to persuade.

Oh, and if you do want to claim there’s a gap, please check Google Scholar or similar first – reviewers and panel members (especially introducers) may very well do just that. And from my limited experience of sitting on a funding panel, there’s nothing like an introducer or panel member reeling off a list of studies on a topic where there’s supposedly a gap (and which aren’t referenced in the proposal) to finish off an application’s chances. I’ve not seen enthusiasm or support for a project sucked out of the room so completely and so quickly by any other means.

And sometimes, if there aren’t killer stats or facts and figures, or if a case for significance can’t be made, it may be best either to move on to another idea, or to find a different and cheaper way of addressing the challenge. While it may be a good research idea, a key question before deciding to apply is whether or not the application is competitive on significance, given the likely competition, the scale of the award, the ambition sought by the funder, and the number of projects to be funded. Given the limits to the research funding available, and its increasing concentration into larger grants, there really isn’t much funding for dull-but-worthy work which, taken together, adds marginal gains to the sum of human knowledge. I think this is a real problem for research, but we are where we are.

Significance may well be the final decider in research funding schemes that are open to a range of research questions. There are many hurdles which must be cleared before this final decider, and while they’re not insignificant, they mainly come down to technical competence and feasibility. Is the methodology not only appropriate, but clearly explained and robustly justified? Does the team have the right mix of expertise? Are the project timescale and deliverables realistic? Are the research questions clearly outlined and consistent throughout? All of these things – and more – are important, but what they do is get you safely through into the final reckoning for funding.

Once all of the flawed or technically unfeasible or muddled or unpersuasive or unclear or non-novel proposals have been knocked out, perhaps at earlier stages, perhaps at the final funding panel stage, what’s left is a battle of significance. To stand the best chance of success, your application needs to convince and even inspire non-expert reviewers to support your project ahead of the competition.

But while this may be the last question, or the final decider between quality projects, it’s one that I’d argue potential grant applicants should consider first of all.

The significance of significance is that if you can’t persuasively demonstrate the significance of your proposed project, your grant application may turn out to be a significant waste of your time.

ESRC success rates 2014/2015 – a quick and dirty commentary

"meep meep"
Success rates. Again.

The ESRC has issued its annual report and accounts for the financial year 2014-15, and they don’t make good reading. As predicted by Brian Lingley and Phil Ward back in January on the basis of the figures from the July open call, the success rate is well down – to 13% – from the 25% I commented on last year, 27% in 2012-13, and 14% in 2011-12.

Believe it or not, there is a straw-grasping positive way of looking at these figures… of which more later.

This Research Professional article has a nice overview to which I can’t add much, so read it first. Three caveats about these figures, though…

  • They’re for the standard open call research grant scheme, not for all calls/schemes
  • They relate to the financial year, not the academic year
  • It’s very difficult to compare year-on-year due to changes to the scheme rules, including substantial changes to the minimum and maximum thresholds.

In previous years I’ve focused on how different academic disciplines have got on, but there’s probably very little to add this time. You can read the figures for yourself (p. 38), but the report only bothers to calculate success rates for the disciplines with the highest numbers of applications – presumably beyond that there’s little statistical significance. I could claim that it’s been a bumper year for Education research, which for years bumped along at the bottom of the league table with Business and Management Studies in terms of success rates, but which this year received 3 awards from 22 applications – tracking the average success rate. Political Science and Socio-Legal Studies did well, as they always tend to do. But all this is generalising from small numbers.

As last year, there is also a table of success rates by institution. In an earlier section on demand management, the report states that the ESRC “are discussing ways of enhancing performance with those HEIs where application volume is high and quality is relatively weak”. But as with last year, it’s hard to see from the raw success rate figures which institutions these might be – though of course detailed institutional profiles showing the final scores for applications might tell a very different story. Last year I picked out Leeds (10/0), Edinburgh (8/1), and Southampton (14/2) as doing poorly, and Kings College (7/3), King Leicester III (9/4), and Oxford (14/6) as doing well – though again, one success more or fewer changes the picture.

This year, Leeds (8/1) and Edinburgh (6/1) have stats that look much better. Southampton doesn’t look to have improved at all (12/0), and is one of the worst performers. Of those who did well last year, none did so well this year – Kings were down to 11/1, Leicester to 2/0, and Oxford to 11/2. Along with Southampton, this year’s poor performers were Durham (10/0), UCL (15/1) and Sheffield (11/0) – though all three had respectable enough scores last time. This year’s standout was Cambridge at 10/4. Perhaps someone with more time than me can combine success rates from the last two years, and I’m sure someone at the ESRC already has….
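Since the two-year figures are all quoted above, here’s a minimal sketch of what that combination might look like – the pooling (simply summing applications and awards across the two years) is my own rough-and-ready choice, and one award either way still swings these numbers considerably:

```python
# Pooling the two years of ESRC open-call figures quoted above.
# Values are (applications, awards) for 2013-14 and 2014-15 respectively.
figures = {
    "Leeds":       [(10, 0), (8, 1)],
    "Edinburgh":   [(8, 1), (6, 1)],
    "Southampton": [(14, 2), (12, 0)],
    "Kings":       [(7, 3), (11, 1)],
    "Leicester":   [(9, 4), (2, 0)],
    "Oxford":      [(14, 6), (11, 2)],
}

for name, years in figures.items():
    apps = sum(a for a, _ in years)       # total applications over both years
    awards = sum(w for _, w in years)     # total awards over both years
    print(f"{name:12s} {apps:2d} applications, {awards} awards ({awards / apps:.0%})")
```

On this crude pooling, only Southampton (2 awards from 26 applications) really stands out at the bottom across both years – which matches the eyeball verdict below.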

So… on the basis of success rates alone, probably only Southampton jumps out as doing consistently poorly. But again, much depends on the quality profile of the applications being submitted – it’s entirely possible that they were very unlucky, and that small numbers mask much more slapdash grant submission behaviour from other institutions. And of course, these figures only relate to the lead institution as far as I know.

It’s worth noting that demand management has worked… after a fashion.

We remain committed to managing application volume, with the aim of focusing sector-wide efforts on the submission of a fewer number of higher quality proposals with a genuine chance of funding. General progress is positive. Application volume is down by 48 per cent on pre-demand management levels – close to our target of 50 per cent. Quality is improving with the proportion of applications now in the ‘fundable range’ up by 13 per cent on pre-demand management levels, to 42 per cent. (p. 21)

I remember the target of reducing the numbers of applications received by 50% as being regarded as very ambitious at the time, and even if some of it was achieved by changing scheme rules to increase the minimum value of a grant application and banning resubmissions, it’s still some achievement. Back in October 2011 I argued that the ESRC had started to talk optimistically about meeting that target after researcher sanctions (in some form) had started to look inevitable. And in November 2012 things looked nicely on track.

But reducing the brute number of applications is all very well; if only 42% of applications are within the “fundable range”, a lot of what’s being submitted still isn’t good enough. This is also where there’s cause for optimism – if fewer than half of the applications are fundable, your own chances should be more than double the average success rate, assuming that your application is of “fundable” quality. So there’s your good news. Problem is, no-one applies who doesn’t think their application is fundable.
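To spell out the arithmetic behind that cause for optimism – a back-of-the-envelope sketch, assuming (and it is an assumption) that every funded application was itself rated as being in the “fundable range”:

```python
# Back-of-the-envelope conditional success rate, using the figures above.
overall_success = 0.13  # headline open-call success rate, 2014-15
fundable_share = 0.42   # proportion of applications in the "fundable range"

# If every funded application is in the fundable range, then
# P(funded | fundable) = P(funded) / P(fundable)
print(f"{overall_success / fundable_share:.0%}")  # roughly 31%
```

So a genuinely fundable application faced odds of roughly 31%, not 13%.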

Internal peer review/demand management processes are often framed in terms of improving the quality of what gets submitted, but perhaps not enough in terms of filtering out what shouldn’t be submitted at all. So we refine and we polish and we make 101 incremental improvements… but ultimately you can’t polish a sow’s ear. Or something.

Proper internal filtering is really, really hard to do – sometimes it’s just easier to let through stuff from people who won’t be told, and see whether what happens is exactly what you think will happen. (It always is.) There’s also a fine line (though one I think can be held and defended) between preventing perceived uncompetitive applications from being submitted and impinging on academic freedom. I don’t think telling someone they can’t submit a crap application is infringing their academic freedom, but any such decisions need to be taken with a great deal of care. There’s always the possibility of suspicion of ulterior motives – be it personal, be it subject or methods-based prejudice, or senior people just overstepping the mark and inappropriately imposing their convictions (ideological, methodological etc.) on others. Like the external examiner who insists on “more of me” on the reading list….

The elephant in the room, of course, is the flat cash settlement and the fact that that’s now really biting, and that there’s nowhere near enough funding to go around for all of the quality social science research that’s badly needed. But we can’t do much about that – and we can do something about the quality of the applications we’re submitting and allowing to be submitted.

I wrote something for Research Professional a few years back on how not to do demand management/filtering processes, and I think it still stands up reasonably well – it’s even quite funny in places (though I say so myself). So I’m going to link to it, as I seem to be linking to a disproportionate amount of my back catalogue in this post.

A combination of a new minimum of £350k for the ESRC standard research grants scheme and the latest drop in success rates makes me think it’s worth writing a companion piece to this blog post about what potential ESRC applicants need to consider before applying, and what I think is expected of a “fundable” application.

Hopefully something for the autumn…. a few other things to write about first.

Grant Writing Mistakes part 94: The “Star Wars”

Have you seen Star Wars?  Even if you haven’t, you might be aware of the iconic opening scene, and in particular the scrolling text that begins

“A long time ago, in a galaxy far, far away….”

(Incidentally, this means that the Star Wars films are set in the past, not the future. Which is a nice bit of trivia and the basis for a good pub quiz question).  What relevance does any of this have for research grant applications?  Patience, Padawan, and all will become clear.

What I’m calling the “Star Wars” error in grant writing is starting the main body of your proposal from the position of “A long time ago…”, then reviewing the literature at great length, quoting everything that calls for more research, and in general taking a lot of time and space to lay the groundwork and justify the research – all without telling the reader what the project is about, why it’s important, or why it’s you and your team who should do it.

This information about the present project will generally emerge in its own sweet time, but often not until two thirds of the way through the available space.  What then follows is a rushed exposition with inadequate detail about the research questions and the methods to be employed.  The reviewer is left with an encyclopaedic knowledge of all that went before – the academic origin story of the proposal – but precious little about the project for which funding is actually being requested.  And without a clear and compelling account of what the project is about, the chances of getting funded are pretty much zero.  Reviewers will not unreasonably want more detail, and may speculate that its absence is an indication that the applicants themselves aren’t clear about what they want to do.

Yes, an application does need to locate itself in the literature, but this should be done quickly, succinctly, clearly, and economically as regards the space available.  Depending on the nature of the funder, I’d suggest not starting with the background, and instead opening with what the present project is about, then zooming out to locate it in the literature once the reader knows what it is that’s being located.  Certainly if your background/literature review section takes up more than a quarter of the available space, it’s too long.

(Although I think “the Star Wars”  is a defensible name for this grant application writing mistake, it’s only because of the words “A long time ago, in a galaxy far, far away….”. Actually the scrolling text is a really elegant, pared down summary of what the viewer needs to know to make sense of what follows… and then we’re straight into planets, lasers, a fleeing spaceship and a huge Star Destroyer that seems to take forever to fly through the shot.)

In summary, if you want the best chance of getting funded, you should, er… restore balance to the force…. of your argument. Or something.

ESRC success rates 2013/2014

The ESRC Annual Report for 2013-14 has been out for quite a while now, and a quick summary and analysis from me is long overdue.

Although I was tempted to skip straight through all of the good news stories about ESRC successes and investments and dive straight in looking for success rates, I’m glad I took the time to at least skim read some of the earlier stuff.  When you’re involved in the minutiae of supporting research, it’s sometimes easy to miss the big picture of all the great stuff that’s being produced by social science researchers and supported by the ESRC.  Chapeau, everyone.

In terms of interesting policy stuff, it’s great to read that the “Urgency Grants” mechanism for rapid responses to “rare or unforeseen events” which I’ve blogged about before is being used, and has funded work “on the Philippines typhoon, UK floods, and the Syrian crisis”.  While I’ve not been involved in supporting an Urgency Grant application, it’s great to know that the mechanism is there, that it works, and that at least some projects have been funded.

The “demand management” agenda

This is what the report has to say on “demand management” – the concerted effort to reduce the number of applications submitted, so as to increase the success rates and (more importantly) reduce the wasted effort of writing and reviewing applications with little realistic chance of success.

Progress remains positive with an overall reduction in application numbers of 41 per cent, close to our target of 50 per cent. Success rates have also increased to 31 per cent, comparable with our RCUK partners. The overall quality of applications is up, whilst peer review requirements are down.

There are, however, signs that this positive momentum may be under threat as in certain schemes application volume is beginning to rise once again. For example, in the Research Grants scheme the proposal count has recently exceeded pre-demand management levels. It is critical that all HEIs continue to build upon early successes, maintaining the downward pressure on the submission of applications across all schemes.

It was always likely that “demand management” might be the victim of its own success – as success rates creep up again, getting a grant appears more likely, and so researchers and research managers encourage and submit more applications.  Other factors might also be involved – the stage of the REF cycle, for example.  Or perhaps, now that talk of researcher or institutional sanctions has faded away, there’s less incentive for restraint.

Another possibility is that some universities haven’t yet got the message or don’t think it applies to them.  It’s also not hard to imagine that the kinds of internal review mechanisms that some of us have had for years and that we’re all now supposed to have are focusing on improving the quality of applications, rather than filtering out uncompetitive ideas.  But is anyone disgracing themselves?

Looking down the list of successes by institution (p. 41) it’s hard to pick out any obvious bad behaviour.  Most of those who’ve submitted more than 10 applications have an above-average success rate.  You’d only really pick out Leeds (10 applications, none funded), Edinburgh (8/1) and Southampton (14/2), and a clutch of institutions on 5/0 (including top-funded Essex, surprisingly), but in all those cases one or two more successes would change the picture.  Similarly for the top performers – Kings College (7/3), King Leicester III (9/4), Oxford (14/6) – it’s hard to make much of a case for the excellence or inadequacy of internal peer review systems from these figures alone.  What might be more interesting is a list of applications by institution which failed to reach the required minimum standard, but that hasn’t been made public to the best of my knowledge.  And of course, all these figures refer only to response mode Standard Grant applications in the financial year (not academic year) 2013-14.

Concentration of Funding

Another interesting stat (well, true for some values of “interesting”) concerns the level of concentration of funding.  The report records the expenditure levels for the top eleven (why 11, no idea…) institutions by research expenditure and by training expenditure.  Interesting question for you… what percentage of the total expenditure do the top 11 institutions get?  I could tell you, but if I tell you without making you guess first, it’ll just confirm what you already think about concentration of funding.  So I’m only going to tell you that (unsurprisingly) training expenditure is more concentrated than research funding.  The figures you can look up for yourself.  Go on, have a guess, go and check (p. 44) and see how close you are.

Research Funding by Discipline

On page 40, and usually the most interesting/contentious part of the report.  Overall success rate was 25% – a little down from last year, but a huge improvement on the 14% of two years ago.

Big winners?  History (4 from 6), Linguistics (5 from 9), Social Anthropology (4 from 9), Political and International Studies (9 from 22), and Psychology (26 from 88 – just under 30% of all grants funded were in Psychology).  Big losers?  Education (1 from 27), Human Geography (1 from 19), and Management and Business Studies (2 from 22).

Has this changed much from previous years?  Well, you can read what I said last year and the year before on this, but overall it’s hard to say because we’re talking about relatively small numbers for most subjects, and because some discipline classifications have changed over the last few years.  But, once again, for the third year in a row, Business and Management and Education do very, very poorly.
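One way of putting a number on that small-numbers problem is to wrap rough uncertainty bands around the counts. A quick sketch – the Wilson score interval is my own choice of method here, not anything the ESRC reports – using the counts quoted above:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# (awards, applications) as quoted above
for name, won, apps in [("History", 4, 6), ("Psychology", 26, 88), ("Education", 1, 27)]:
    lo, hi = wilson_interval(won, apps)
    print(f"{name:10s} {won:2d}/{apps:2d} = {won / apps:4.0%}, 95% interval {lo:.0%}-{hi:.0%}")
```

History’s headline 67% comes with an interval stretching from roughly 30% to 90%; Psychology’s 30% sits in a much tighter band. Which is why only the high-volume disciplines support much in the way of year-on-year comparison.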

Human Geography has also had a below average success rate for the last few years, but going from 3 from 14 to 1 from 19 probably isn’t that dramatic a collapse – though it’s certainly a bad year.  I always make a point of trying to be nice about Human Geography, because I suspect they know where I live.  Where all of us live.  Oh, and Psychology gets a huge slice of the overall funding, albeit not a disproportionate one given the number of applications.

Which kind of brings us back to the same questions I asked in my most-read-ever piece – what on earth is going on with Education and with Business and Management research, and why do they do so badly with the ESRC?  I still don’t have an entirely satisfactory answer.

I’ve put together a table showing changes to disciplinary success rates over the last few years which I’m happy to share, but you’ll have to email me for a copy.  I’ve not uploaded it here because I need to check it again with fresh eyes before it’s used – fiddly, all those tables and numbers.