Mistakes in grant writing – cut and paste text

A version of this article first appeared in Funding Insight in November 2018 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com

Given the ever-expanding requirements of most research funding application forms, it’s inevitable that applicants are tempted to pay less attention to some sections and end up writing text so generic, so bland, that it could be cut and pasted – with minimal editing of names and topics – into almost any other proposal.

Resist that temptation. Using text that looks like it could be cut and pasted between proposals suggests that you haven’t thought through the specifics of your project or fellowship, and it will make it seem less plausible as a result. 

Content free

I often see responses that are so content free they make my heart sink. For example:

1)  “We will present the findings at major international conferences and publish in world class journals”

2)  “The findings will be of interest to researchers in A, B, and C.”

3)  “This is a methodologically innovative, timely, and original project which represents a step change in our understanding”

4)  “We will set up a project Twitter account and a blog, and with the support of our outstanding press office, write about our research for a general audience.”

5)  “Funding will enable me to lead my own project for the first time, and support me in making the transition to independent researcher”.

These claims might well be true and can read well in isolation. But they’re only superficially plausible, and while they contain the buzzwords that applicants think funders are after, they’re entirely content-, evidence-, and argument-free.

Self harm

Why should you care? Because your proposal doesn’t just have to be good enough to meet a certain standard, it has to be better than its rivals. If there are sections of your application that could be transferred into any rival application, this might be a sign that that section is not as strong or distinctive as it could be and is not giving you any competitive edge.

Cut and paste sections may be actively harming your chances. They may read well in isolation but when compared directly to more thoughtful and more detailed sections in rival applications, they can look weak and lazy, especially if they don’t take full advantage of the word count.

Cut and pasteable text tends to occur in the sections of the application form that are trickier to write and that get less attention: dissemination; impact pathway/plan; academic impact; personal development plan; data management plan; choice of host institution. Sometimes these generic statements emerge because the applicants don’t know what to write, and sometimes because it’s all they can be bothered to write for a section they wrongly regard as of lesser importance.

Give evidence

Give these sections the time, attention and thought they deserve. Add details. Add specifics. Add argument. Add evidence. Find things to say that only apply to your application. If you don’t know how to answer a question strongly, get advice from your research development colleagues.

The more editing it would take to transplant a section into someone else’s bid, the better. Here are some thoughts on improving the earlier examples:

1)  “We will present the findings at major international conferences and publish in world class journals”. I find it hard to understand vagueness about plans for academic impact. Even allowing for the fact that the findings of the research will affect plans, it’s surely not too much to expect some target journals and conferences to be named. If applicants can’t demonstrate knowledge of realistic targets, it undermines their credibility.

2)  “The findings will be of interest to researchers in A, B, and C.” I’d ban the phrase “of interest to” when explaining potential academic impact. It tells the reader nothing about the likely academic impact – who will cite your work, and what difference do you anticipate it will make to the field?

3)  “This is a methodologically innovative, timely, and original project which represents a step change in our understanding”. Who will use your methods? Who will use your frameworks? If all research is standing on the shoulders of giants, how much further can future researchers see perched atop your work? How exactly does your project go beyond the state of the art, and what might be the new state of the art after your project?

4)  “We will set up a project Twitter account and a blog, and with the support of our outstanding press office, write about our research for a general audience.” If you’re talking about engaging with social media, talk about how you are going to find readers and/or followers. What’s your plan for your presence in terms of the existing ecosystem of social media accounts that are active in this area? Who are the current key influencers?

5)  “Funding will enable me to lead my own project for the first time, and support me in making the transition to independent researcher”. How does funding take you to what’s next? What’s the path from the conclusions of this project to your future research agenda?

Looking for cut and paste text – and improving it where you find it – is an excellent review technique to polish your draft application, and particularly to improve those harder-to-write sections. Hammering out the detail is more difficult, but it could give you an advantage in the race for funding.

Applying for research funding – is it worth it? Part II – Costs and Benefits

A version of this article first appeared in Funding Insight on 9th March 2018 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com

“Just when I thought I was out, they pull me back in!”

My previous post posed a question about whether applying for research funding was worth it or not, and concluded with a list of questions to consider to work out the answer. This follow-up is a list of costs and benefits associated with applying for external research funding, whether successful or unsuccessful. Weirdly, my list appears to contain more costs than benefits for success and more benefits than costs for failure, but perhaps that’s just me being contrary…

If you’re successful:

Benefits….

  • You get to do the research you really want to do
  • In career terms, whether for moving institution or internal promotion, there’s a big tick in the box marked ‘external research funding’.
  • Your status in your institution and within your discipline is likely to rise. Bringing in funding via a competitive external process gives you greater external validation, and that changes perceptions – perhaps it marks you out as a leader in your field, perhaps it marks a shift from career young researcher to fulfilling your evident promise.
  • Success tends to beget success in terms of research funding. Deliver this project and any future application will look more credible for it.

Costs…

  • You’ve got to deliver on what you promised. That means all the areas of fudge or doubt or uncertainty about who-does-what need to be sorted out in practice. If you’ve under-costed any element of the project – your time, consumables, travel and subsistence – you’ll have to deal with it, and it might not be much fun.
  • Congratulations, you’ve just signed yourself up for a shedload of admin. Even with the best and most supportive post-award team, you’ll have project management to do: financial monitoring; recruitment, selection, and line management of one or more research associates. And it doesn’t finish when the research finishes – thanks to the impact agenda, you’ll probably be reporting on your project via Researchfish for years to come.
  • Every time any comparable call comes round in the future, your colleagues will ask you to give a presentation about your application/sit on the internal sifting panel/undertake peer review. Once a funding agency has given you money, you can bet they’ll be asking you to peer review other applications. I’ve listed this as a cost for workload purposes, but there are also a lot of benefits to getting involved in peer reviewing applications, because it’ll improve your own too. Also, the chances are that you benefited from such support/advice from senior colleagues, so pay it forward. But be ready to pay.
  • You’ve just raised the bar for yourself. Don’t be surprised if certain people in research management start talking about your next project before this one is done as if it’s a given or an inevitability.
  • Unless you’re careful, you may not see as much recognition in your workload as you might have expected. Of course, your institution is obliged to make the time promised in the grant application available to you, but unless you’ve secured agreement in advance, you may find that much of this is taken out of your existing research allocation rather than out of teaching and admin, especially as these days we no longer think of teaching as a chore to buy ourselves out from. Think very carefully about which elements of your workload you would like to lose if your application is successful.
  • The potential envy and enmity of colleagues who are picking up bits of what was your work.

If you’re unsuccessful…

Benefits…

  • The chances are that there’s plenty to be salvaged even from an unsuccessful application. Once you’ve gone through the appropriate stages of grief, there’s probably at least one paper (even if ‘only’ a literature review) in the work that you’ve done. And if you, your academic colleagues, and your stakeholders are still keen, there may well be something you can do together, even if it’s not what you ideally wanted to do.
  • Writing an application will force you to develop your research ideas. This is particularly the case for career young researchers, where the pursuit of one of those long-shot Fellowships can be worth it if only to get proper support in developing your research agenda.
  • If you’ve submitted a credible, competitive application, you’ve at least shown willing in terms of grant-getting. No-one can say that you haven’t tried. Depending on the pressures/expectations you’re under, having had a credible attempt at it buys you some licence to concentrate on your papers for a bit.
  • If it’s your first application, you’ll have learnt a lot from the process, and you’ll be better prepared next time. Depending on your field, you could even add a credible unsuccessful application to a CV, or a job application question about grant-getting experience.
  • If your institution has an internal peer review panel or other selection process, you’ve put yourself and your research onto the radar of some senior people. You’ll be more visible, and this may well lead to further conversations with colleagues, especially outside your school. In the past I’ve recommended that people put forward internal expressions of interest even if they’re not sure they’re ready, for precisely this reason.

Costs…

  • You’ve just wasted your time – and quite a lot of time at that. And not just work time… often evenings and weekends too.
  • It’ll come as a disappointment, which may take some time to get over.
  • Even if you’ve kept it quiet, people in your institution will know that you’ve been unsuccessful.

I’ve written two longer pieces on what to do if your research grant application is unsuccessful, which can be found here and here.

USS Pensions Strike – Could deliberative democracy be a way out of the impasse?

“Freedom for the University of Tooting!”

My headline, unfortunately, is a classic QTWTAIN (‘question to which the answer is no’) because I can’t see any evidence that the employers want to negotiate or seek alternatives or engage in any meaningful way. You can find an admirably clear (and referenced) summary of the current situation at the time of writing here.

And if you want to read my previous wibblings from a previous dispute about why you should join the union, see the second half of this post.

All I’ll add is that we’ve been here before as regards pension cuts… again and again and again… only previously the changes were salami slices, at least compared to what’s being proposed now. These previous changes, we were told, would put the scheme back on the right track, and were necessary due to increased life expectancy and so on. So my question is… were those previous claims about past changes just straightforward lies, or have things got worse? And if they’ve got worse, is that down to wider economic conditions, or to incompetence? And either way, why are the people responsible taking huge pay increases? Why is my pension scheme on the way to becoming a regular in the pages of Private Eye?

Anyway… I wanted to talk about deliberative democracy. I listened to a really interesting Reasons to be Cheerful podcast (presented by Ed Miliband and Geoff Lloyd) on deliberative democracy the other week. If we ask everyone what they think on a particular topic, the problem is that not everyone will be equally well informed, will have the necessary time to follow the arguments and find the evidence, or will come to the topic with an open mind. The idea of deliberative democracy is to find a small, representative group, give them full access to the evidence, the arguments, and the expertise, and then, through deliberation, work towards a consensus decision if possible.

Trial by jury follows this model very closely, though we don’t typically think of a jury as an expression of democracy. These are twelve ordinary people, selected at random (with some exceptions and criteria), and trusted to follow the arguments in a criminal trial. But we regard this as fair, and as legitimate, and my perception is that there’s widespread faith in trial by jury as an institution.

Could we extend this to other issues? For example, the current strike action about cuts to the USS pension scheme. At the moment I’m reading a lot of criticism of the methods of calculation, the underpinning assumptions, and some very questionable motivations and methods of reaching and spinning decisions by Universities UK. Some of that criticism comes from people who aren’t experts in this particular area, but who have relevant expertise in related areas, or in areas that share a skill set. Such cognate-experts might well be right, but equally there might be good explanations for some of the peculiar-looking assumptions. In keeping with the Dunning-Kruger effect, might such people be overestimating their own expertise and underestimating that of genuine experts? I don’t know.

Hence my interest in deliberative democracy… get a representative group of pension scheme members (academic and APM, a range of ages (including PhD students and retired staff), union and non-union members, a range of seniority and experience and subject area/specialism), give them access to experts and evidence, and let’s see what they come back with. A report from such a group that contends that, yes, the pension scheme is in trouble and an end to defined benefit is the only thing that will keep it sustainable, would have credibility and legitimacy. On the other hand, a report that came back with other options and which denied that case for the necessity for such a drastic step, would also be persuasive. This would be a decision by my peers who have taken more time and more trouble than I have, who have access to expertise and arguments and evidence, and who I would therefore trust.

“Down with this sort of thing!”

I strongly suspect that we have two very polarised actuarial valuations of the scheme – one, from the employers, which seems to me to be laughably flawed (but again, Dunning-Kruger… what do I know?), and another, from UCU, which may turn out to be laughably optimistic. The point is, I don’t know, and I don’t want to make the mistake of assuming that the truth must lie somewhere in between.

One objection is that this might be little different from recent accusations about university Vice-Chancellors sitting on the committees that set their salaries. However, a range of ages and career stages could guard against this – younger group members would surely resist any attempt to let the scheme limp on until older members are likely to have retired while leaving them with little or nothing. We could also include information about affordability and HE finance in general to ensure that we don’t end up with recommendations that are completely unaffordable. I’d also like to think that those who chose careers in academia or in university management – in most cases ahead of more lucrative careers – have a commitment to the sector and its future.

And no-one’s saying that the report of such a group need be binding, but a properly constituted group undertaking deliberative work with access to evidence and expertise would carry a great deal of authority and would be hard to simply set aside. It’s an example of what John Rawls called ‘pure procedural justice’. Its outcome is fair because it is set up and operates in a way that’s fair.

So I guess that’s my challenge to Universities UK and (to a lesser extent) the UCU too. If, UUK, your argument is that ‘There Is No Alternative’ (TINA) – which we’ve heard before, ad nauseam – let’s see if that’s really the case. Your complete refusal to engage on the issue of the ending of defined benefit doesn’t bode well here, nor does the obvious disingenuousness of offering “talks” while refusing to negotiate on the issue the strike is actually about. But let’s see if UCU’s claims bear scrutiny too. No-one is immune from wishful thinking, and some elements within UCU seem to enjoy being on strike a bit too much for my liking.

Because, frankly, I’d quite like (a) to get back to work; and (b) have some sort of security in retirement, and the same for generations of academics and APM staff to come.

My fictional heroes of research development (and perhaps university management more generally)

Consider this post a bit of an early Christmas indulgence, by which I mean feel free to largely ignore it.

As an undergraduate, I was very taken with Aristotelian ethics, and in particular ideas about character and about exemplars of moral excellences and other kinds of excellences (public speaking, bravery, charisma, etc.). Roughly, a good way to learn is to observe people who do certain things well and learn from their example. Conversely, one can also learn from people who are terrible at things, and avoid their mistakes. No-one’s so awful that they can’t at least serve as a bad example and as a warning to others. I remember later conversations about who the ultimate Aristotelian exemplar might be – in reality or in contemporary culture – with a lot of votes for Captains Kirk and Picard, and the merits of real-life (and therefore more complex) figures taking second place to a who’s-your-favourite-Star-Trek-captain debate.

Years later, I fell to wondering who the exemplars are for research development, or perhaps university management/leadership/academic wrangling more generally. I could write about people who’ve influenced my thinking and my career, but instead, like the Star Trek fans, I’ve been distracted by fictional examples.

My first nomination comes via a 2012 Inside Higher Education blog post from ‘Dean Dad’ in the US, and is for Kermit the Frog. The nomination goes as follows:

[Kermit] keeps the show running, but it’s clear that he actually enjoys the Chaos Muppets and wants them to be able to do what they do.  His work makes it possible for Gonzo to jump through the flaming hoop with a chicken under his arm while reciting Shakespeare, even though Kermit would never do that himself.

Kermit endures snark from Statler and Waldorf in the balcony; let’s just say I get that.  And the few times that Kermit freaks out have much more impact than when, say, Animal does, because a freaked-out Kermit threatens the working of the show.  Freaking out is just what Animal does.

The nomination comes complete with a whole theory of academic management based on the Muppet Show. Muppets can be divided into ‘order’ and ‘chaos’ muppets, with ‘hard’ and ‘soft’ examples of each. Kermit is the epitome of a soft order muppet because he understands the importance of order and structure, but doesn’t enjoy it for its own sake and wants to help others do what they do best. I’d quite like to add “soft order muppet” to my email signature and even my office door sign, but I don’t think the world’s quite ready for that.

My second nomination is Sergeant Wilson of ‘Dad’s Army’, played by John Le Mesurier. Catchphrases include “would you mind awfully…” and the eventual title of a biography of JLeM: “Do you think that’s wise, sir?” He’s usually a model of subtle and understated influence, providing gentle but timely challenge to those set above him. Good humoured, unflappable, wise, and reassuring, he’s the ideal sergeant.

My third – more controversially – is Edmund Blackadder, and in particular Blackadder III. This exchange alone – when reviewing the Prince of Wales’ first draft of a love letter – makes him the patron fictional saint of research development staff.

“Would you mind if I changed just one tiny aspect of it?”
“What’s that?”
“The words.”

Blackadder loses points for deviousness, more points for largely unsuccessful deviousness, consistent mistreatment of those he line manages, and general cynicism about and contempt for those in power. However, in the latter case, in his world, he’s got something of a point. But, as I said, exemplars can embody what not to do as well as what to do.

Next up, a trip to Fawlty Towers and Polly Sherman (Connie Booth, who also co-wrote the series), the voice of sanity (mostly) and a model of competence, dedication, and loyalty. She usually manages to keep her head while all around are losing theirs, and has a level of compassion, understanding, and tolerance for the eccentricities of those around her which the likes of Edmund Blackadder never reach.

Finally, one I’ve changed my mind over. Initially, my nominee was Sir Humphrey Appleby, the Permanent Secretary in Yes (Prime) Minister. While the Minister, Jim Hacker, was all fresh ideas and act-without-thinking, Sir Humphrey was the voice of experience and the embodiment of institutional memory.

On reflection, though, the real hero is Bernard Woolley, the Principal Private Secretary. Sir Humphrey’s first priority is the civil service, and no academic management role model can put the cart before the horse in such a way. More seriously, the ‘Sir Humphrey’ view of the civil service, and of administration and management more generally, is a reactionary, cynical, and highly damaging one. My Nottingham colleague Steven Fielding wrote an interesting piece about the effects of Yes Minister on perceptions of civil servants and cynicism about government. Sir Humphrey is an example of someone who has concluded that success and good governance aren’t possible without an effective and professional civil service, but who, in seeking to defend the means, ends up forgetting the end. And that’s a kind of negative exemplar as well. Let’s none of us forget who we’re here for, or why. Kermit doesn’t think the Muppet Show is all about him.

Bernard Woolley, though, struggles to manage conflicting loyalties (multiple stakeholders and bottom lines), and is under pressure both from Jim Hacker – the government minister actually in charge of the Department, but likely to have only a very limited term of office – and from Sir Humphrey, his rather more permanent boss, with huge power over his career prospects. Anyone else ever felt like that? (Temporary) Heads of School or Research Directors or other fixed-term academic leaders on one side, and more permanent senior administrative, professional, and managerial colleagues on the other? People who won’t be stepping down inside eighteen months and returning, a good job well done, to their research and teaching?

Well, you may have felt like that, but I couldn’t possibly comment.

So… who have I missed? Who else deserves a mention? Kryten from Red Dwarf, perhaps? Smithers from The Simpsons? Bunk Moreland from The Wire?

Mistakes in Grant Writing, part 95 – “The Gollum”

Image: Alonso Javier Torres [CC BY 2.0] via Flickr
A version of this article first appeared in Funding Insight on 20th July 2017 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
* * *

Previously I’ve written about the ‘Star Wars Error’ in grant writing, and my latest attempt to crowbar popular culture references into articles about grant writing mistakes is ‘the Gollum’. Gollum is a character from Lord of the Rings, a twisted, tortured figure – wicked and pitiable in equal measure. He’s an addict whose sole drive is possession of the Ring of Power, which he loves and hates with equal ferocity. A little like me and my thesis.

Only begotten

For current purposes, it’s his cry of “my precious!” and obsession with keeping the Ring for himself that I’m thinking of in terms of an analogy with research grant applicants, rather than (spoilers) eating raw fish, plunging into volcanoes, or murdering friends. Even in the current climate of ‘demand management’, internal peer review, and research development support, there are still researchers who treat their projects as their “precious” and are unable or unwilling to share them or to seek comment and feedback.

It’s easy to understand why – there’s the fear of being scooped and of someone else taking and using the idea. There’s the fear of public failure – with low success rates, a substantial majority of applications will be unsuccessful, and perhaps the thought is that if one is going to fail, few people should know about it. And let’s not pretend that internal review/filtering processes don’t at least raise questions about academic freedom.

Power play

But there are other fears. The first is about sabotage or interference from colleagues who might be opposed to the research, whether through ideological and methodological differences, or because they’re on the other side of some major scientific controversy. In my experience, this concern has been largely unfounded. I’ve been very fortunate to work with senior academics who are very clear about their role as internal reviewer, which is to further improve competitive applications and ideas, while filtering out or diverting uncompetitive ideas, or applications that simply aren’t ready. But while internal reviewers will have their views, I’ve not seen anyone let that power go to their heads.

Enough of experts

Second, if the concern isn’t about integrity or (unconscious) bias, it might be about background or knowledge. One view I’ve encountered – mainly unspoken, but occasionally spoken and once shouted – is that no-one else at the institution has the expertise to review their proposal and therefore internal review is a waste of time.

It might well be true that no-one else internally has equivalent expertise to the applicant, and (apart from early career researchers) that’s to be expected and welcomed. But if it’s true internally, it might also be true of the external expert reviewers chosen by the funder, and it’s even more likely to be true of the people on the final decision-making panel. The chances are that the principal applicant on any major project is one of the leaders in that field, and even if she regards a handful of others as appropriate reviewers, there’s absolutely no guarantee that she’ll get them.

Significant other

Ultimately, the purpose of a funding application is to convince expert peer reviewers from the same or cognate discipline and a much broader panel of distinguished scientists of the superior merits of your ideas and the greater significance of your research challenge compared to rival proposals. Because once the incompetent and the unfeasible have been weeded out – it’s all about significance.

A quality internal peer review process will mirror those conditions as closely as possible. It doesn’t matter that internal reviewer X isn’t from the same field and knows little about the topic – what’s of use to the applicant is what X makes of the application as a senior academic from another (sub)discipline. Can she understand the research challenges, why they’re significant and important? Does the application tell her exactly what the applicant proposes to do? What’s particularly valuable are creative misunderstandings – if an internal reviewer has misunderstood a key point or misinterpreted something, a wise applicant will return to the application and seek to identify the source of that misunderstanding and head it off, rather than just dismissing the feedback out of hand.

Forever alone

And that’s without touching on the value that research development support can add. People in my kind of role who may not be academics, but who have seen a great many grant applications over the years. People who aren’t academic experts, but who know when something isn’t clear, or doesn’t make sense to the intelligent lay person.

Most institutions that take research seriously will offer good support to their researchers. Despite this, there are still researchers who only engage with others where they absolutely must, and take little notice of feedback or experience during the grant application process. Do they really think that others are unworthy of gazing upon the magnificence of The Precious?

I’d like to urge them here to turn back, to take the advice and feedback that’s on offer, lest they end up wandering the dark places of the world, alone and unfunded.

Getting research funding: the significance of significance

“So tell me, Highlander, what is peer review?”
“I’m Professor Connor Macleod of the Clan Macleod, and this is my research proposal!”

In an excellent recent blog post, Lachlan Smith wrote about the “who cares?” question that potential grant applicants ought to consider, and that research development staff ought to pose to applicants on a regular basis.

Why is this research important, and why should it be funded? And crucially, why should we fund this, rather than that? In a comment on a previous post on this blog, Jo VanEvery quoted some wise words from a Canadian research funding panel member: “it’s not a test, it’s a contest”. In other words, research funding is not an unlimited good like a driving test or a PhD viva, where there’s no limit to how many people can (in principle) succeed. Rather, it’s more like a job interview, qualification for the Olympic Games, or the film Highlander – not everyone can succeed. And sometimes, there can be only one.

I’ve recently been fortunate enough to serve on a funding panel myself, as a patient/public involvement representative for a health services research scheme. Assessing significance in the form of potential benefit for patients and carers is a vitally important part of the scheme, and while I’m limited in what I’m allowed to say about my experience, I don’t think I’m speaking out of turn when I say that significance – and demonstrating that significance – is key.

I think there’s a real danger when writing – and indeed supporting the writing of – research grant applications that the focus gets very narrow and the process becomes almost inward-looking. It becomes about improving the application internally, writing deeply for subject experts, rather than writing broadly for a panel of people with a range of expertise and experiences. It almost goes without saying that the proposed project must convince the kinds of subject expert who will typically be asked to review it, but even then there’s no guarantee that reviewers will know as much as the applicant. In fact, it would be odd indeed if there were an application where the reviewers and panel members knew more about the topic than the applicant. I’d probably go as far as to say that if you think the referees and the reviewers know more than you, you probably shouldn’t be applying – though I’m open to persuasion about some early career schemes and some very specific calls on very narrow topics.

So I think it’s important to write broadly, to give background and context, to seek to convince others of the importance and significance of the research question. To educate and inform and persuade – almost like a briefing. I’m always badgering colleagues for what I call “killer stats” – how big is the problem, how many people does it affect, by how much is it getting worse, how much is it costing the economy, how much is it costing individuals, what difference might a solution to this problem make? If there’s a gap in the literature or in human knowledge, make a case for the importance or potential importance in filling that gap.

For blue skies research it’s obviously harder, but even here there is scope for discussing the potential academic significance of the possible findings – academic impact – and what new avenues of research may be opened out, or closed off by a decisive negative finding which would allow effort to be refocused elsewhere. If all research is standing on the shoulders of giants, what could be seen by future researchers standing on the shoulders of your research?

It’s hugely frustrating for reviewers when applicants don’t do this – when they don’t give decision makers the background and information they need to be able to draw informed conclusions about the proposed project. Maybe a motivated reviewer with a lighter workload and a role in introducing your proposal may have time to do her own research, but you shouldn’t expect this, and she shouldn’t have to. That’s your job.

It’s worth noting, by the way, that the existence of a gap in the literature is not itself an argument for it being filled, or at least not through large amounts of scarce research funding. There must be a near infinite number of gaps, such as the one that used to exist about the effect of peanut butter on the rotation of the earth – but we need more than the bare fact of the existence of a gap – or the fact that other researchers can be quoted as saying there’s a gap – to persuade.

Oh, and if you do want to claim there’s a gap, please check Google Scholar or similar first – reviewers and panel members (especially introducers) may very well do that. And from my limited experience of sitting on a funding panel, there’s nothing like an introducer or panel member reeling off a list of studies on a topic where there’s supposedly a gap (and which aren’t referenced in the proposal) to finish off an application’s chances. I’ve not seen enthusiasm or support for a project sucked out of the room so completely and so quickly by any other means.

And sometimes, if there aren’t killer stats or facts and figures, or if a case for significance can’t be made, it may be best either to move on to another idea, or to find a different and cheaper way of addressing the challenge. While it may be a good research idea, a key question before deciding to apply is whether the application is competitive on significance, given the likely competition, the scale of the award, the ambition sought by the funder, and the number of projects to be funded. Given the limits to the research funding available, and its increasing concentration into larger grants, there really isn’t much funding for dull-but-worthy work which, taken together, leads to the aggregation of marginal gains to the sum of human knowledge. I think this is a real problem for research, but we are where we are.

Significance may well be the final decider in research funding schemes that are open to a range of research questions. There are many hurdles which must be cleared before this final decider, and while they’re not insignificant, they mainly come down to technical competence and feasibility. Is the methodology not only appropriate, but clearly explained and robustly justified? Does the team have the right mix of expertise? Are the project timescale and deliverables realistic? Are the research questions clearly outlined and consistent throughout? All of these things – and more – are important, but what they do is get you safely through into the final reckoning for funding.

Once all of the flawed or technically unfeasible or muddled or unpersuasive or unclear or non-novel proposals have been knocked out, perhaps at earlier stages, perhaps at the final funding panel stage, what’s left is a battle of significance. To stand the best chance of success, your application needs to convince and even inspire non-expert reviewers to support your project ahead of the competition.

But while this may be the last question, or the final decider between quality projects, it’s one that I’d argue potential grant applicants should consider first of all.

The significance of significance is that if you can’t persuasively demonstrate the significance of your proposed project, your grant application may turn out to be a significant waste of your time.

ESRC success rates 2014/2015 – a quick and dirty commentary

Success rates. Again.

The ESRC has issued its annual report and accounts for the financial year 2014/15, and they don’t make good reading. As predicted by Brian Lingley and Phil Ward back in January on the basis of the figures from the July open call, the success rate is well down – to 13% – from the 25% I commented on last year, 27% in 2012-13 and 14% in 2011-12.

Believe it or not, there is a straw-grasping positive way of looking at these figures… of which more later.

This Research Professional article has a nice overview which I can’t add much to, so read it first. Three caveats about these figures, though…

  • They’re for the standard open call research grant scheme, not for all calls/schemes
  • They relate to the financial year, not the academic year
  • It’s very difficult to compare year-on-year due to changes to the scheme rules, including minimum and maximum thresholds which have changed substantially.

In previous years I’ve focused on how different academic disciplines have got on, but there’s probably very little to add. You can read the figures for yourself (p. 38), but the report only bothers to calculate success rates for the disciplines with the highest numbers of applications – presumably beyond that there’s little statistical significance. I could claim that it’s been a bumper year for Education research, which for years bumped along at the bottom of the league table with Business and Management Studies in terms of success rates, but which this year received 3 awards from 22 applications, tracking the average success rate. Political Science and Socio-Legal Studies did well, as they always tend to do. But it’s generalising from small numbers.

As last year, there is also a table of success rates by institution. In an earlier section on demand management, the report states that the ESRC “are discussing ways of enhancing performance with those HEIs where application volume is high and quality is relatively weak”. But as with last year, it’s hard to see from the raw success rate figures which these institutions might be – though of course detailed institutional profiles showing the final scores for applications might tell a very different story. Last year I picked out Leeds (10/0), Edinburgh (8/1), and Southampton (14/2) as doing poorly, and Kings College (7/3), King Leicester III (9/4), Oxford (14/6) as doing well – though again, one more or less success changes the picture.

This year, Leeds (8/1) and Edinburgh (6/1) have stats that look much better. Southampton doesn’t look to have improved (12/0) at all, and is one of the worst performers. Of those who did well last year, none did so well this year – Kings were down to 11/1, Leicester to 2/0, and Oxford to 11/2. Along with Southampton, this year’s poor performers were Durham (10/0), UCL (15/1) and Sheffield (11/0) – though all three had respectable enough scores last time. This year’s standout was Cambridge at 10/4. Perhaps someone with more time than me can combine success rates from the last two years, and I’m sure someone at the ESRC already has….

So… on the basis of success rates alone, probably only Southampton jumps out as doing consistently poorly. But again, much depends on the quality profile of the applications being submitted – it’s entirely possible that they were very unlucky, and that small numbers mask much more slapdash grant submission behaviour from other institutions. And of course, these figures only relate to the lead institution as far as I know.

It’s worth noting that demand management has worked… after a fashion.

We remain committed to managing application volume, with
the aim of focusing sector-wide efforts on the submission
of a fewer number of higher quality proposals with a
genuine chance of funding. General progress is positive.
Application volume is down by 48 per cent on pre-demand
management levels – close to our target of 50 per cent.
Quality is improving with the proportion of applications now
in the ‘fundable range’ up by 13 per cent on pre-demand
management levels, to 42 per cent. (p. 21).

I remember the target of reducing the numbers of applications received by 50% as being regarded as very ambitious at the time, and even if some of it was achieved by changing scheme rules to increase the minimum value of a grant application and banning resubmissions, it’s still some achievement. Back in October 2011 I argued that the ESRC had started to talk optimistically about meeting that target after researcher sanctions (in some form) had started to look inevitable. And in November 2012 things looked nicely on track.

But reducing brute numbers of applications is all very well. If only 42% of applications are within the “fundable range”, that’s a problem, because it means that a lot of the applications being submitted still aren’t good enough. This is where there’s cause for optimism – if less than half of the applications are fundable, your own chances should be more than double the average success rate, assuming that your application is of “fundable” quality. So there’s your good news. The problem is, no-one applies who doesn’t think their application is fundable.
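The arithmetic behind that bit of optimism can be sketched in a couple of lines, using the report’s own figures (13% overall success rate, 42% of applications in the fundable range):

```python
# Back-of-envelope check of the "fundable range" arithmetic above.
# Figures are taken from the report quoted in this post: a 13% overall
# success rate and 42% of applications within the fundable range.
overall_success_rate = 0.13
fundable_fraction = 0.42

# If only fundable applications can win, every funded grant comes from
# that 42%, so the success rate *among fundable applications* is:
success_if_fundable = overall_success_rate / fundable_fraction

print(f"{success_if_fundable:.0%}")  # roughly 31% - well over double 13%
```

Of course, this only holds if your application really is in the fundable range, which is precisely the thing no applicant can be sure of.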

Internal peer review/demand management processes are often framed in terms of improving the quality of what gets submitted, but perhaps they’re not enough of a filtering process. So we refine and we polish and we make 101 incremental improvements… but ultimately you can’t polish a sow’s ear. Or something.

Proper internal filtering is really, really hard to do – sometimes it’s just easier to let applications from people who won’t be told go through, and see whether what happens is exactly what you think will happen, which it always is. There’s also a fine line (though one I think that can be held and defended) between preventing perceived uncompetitive applications from being submitted and impinging on academic freedom. I don’t think telling someone they can’t submit a crap application is infringing their academic freedom, but any such decisions need to be taken with a great deal of care. There’s always the possibility of suspicion of ulterior motives – be it personal, be it subject or methods-based prejudice, or senior people just overstepping the mark and inappropriately imposing their convictions (ideological, methodological etc.) on others. Like the external examiner who insists on “more of me” on the reading list….

The elephant in the room, of course, is the flat cash settlement and the fact that that’s now really biting, and that there’s nowhere near enough funding to go around for all of the quality social science research that’s badly needed. But we can’t do much about that – and we can do something about the quality of the applications we’re submitting and allowing to be submitted.

I wrote something for Research Professional a few years back on how not to do demand management/filtering processes, and I think it still stands up reasonably well – it’s even quite funny in places (though I say so myself). So I’m going to link to it, as I seem to be linking to a disproportionate amount of my back catalogue in this post.

A combination of a new minimum of £350k for the ESRC standard research grants scheme and the latest drop in success rates makes me think it’s worth writing a companion piece to this blog post about what potential ESRC applicants need to consider before applying, and what I think is expected of a “fundable” application.

Hopefully something for the autumn…. a few other things to write about first.

ESRC – sweeping changes to the standard grants scheme

The ESRC have just announced a huge change to their standard grants scheme, and I think it’s fair to say that it’s going to prove somewhat controversial.

At the moment, it’s possible to apply to the ESRC Standard Grant Scheme at any time for grants of between £200k and £2 million. From the end of June this year, the minimum threshold will rise from £200k to £350k, and the maximum threshold will drop from £2m to £1m.

Probably those numbers don’t mean very much to you if you’re not familiar with research grant costing, but as a rough rule of thumb, a full-time researcher for a year (including employment costs and overheads) comes to somewhere around £70k-£80k, so I used to say that if your project needed two years of researcher time, it was big enough. For £350k, then, you’d probably need three researcher years, a decent amount of PI and Co-I time, and a fair chunk of non-pay costs. That’s a big project. I don’t have my files in front of me as I’m writing this, so maybe I’ll add a better illustration later on.
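As a rough sketch of how that rule of thumb stacks up against the new minimum – the £70k-£80k per researcher-year figure is the post’s own estimate, and the investigator-time and non-pay figures below are invented purely for illustration:

```python
# Rough illustration of what a £350k minimum implies, using this post's
# rule of thumb of £70k-£80k per full-time researcher-year (including
# employment costs and overheads). The other line items are assumptions
# made up for illustration only, not real costings.
researcher_year = 75_000          # midpoint of the £70k-£80k range
researcher_years = 3
pi_and_coi_time = 60_000          # assumed chunk of PI and Co-I time
non_pay_costs = 40_000            # assumed travel, transcription, events

total = researcher_year * researcher_years + pi_and_coi_time + non_pay_costs
print(f"£{total:,}")  # £325,000 - still just short of the new £350k floor
```

Even on these generous assumptions, three researcher-years plus investigator time and non-pay costs barely clears the new threshold, which gives a sense of just how big the smallest fundable project now has to be.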

This isn’t the first time the lower limit has been raised. Up until February 2011 there was a “Small Grants Scheme” for projects up to £200k; when that was shut, £200k became the new minimum. The argument at the time was that larger grants delivered more, and had fewer overheads in terms of the costs of reviewing, processing and administering. And although the idea was that they’d help early career researchers, the figures didn’t really show that.

The reasons given for this change are a little puzzling. Firstly, this:

The changes are a response to the pattern of demand that is being placed on the standard grants scheme by the social science community. The average value of a standard grant application has steadily increased and is now close to £500,000, so we have adjusted the centre of gravity of the scheme to reflect applicant behaviour.

Now that’s an interesting tidbit of information – I wouldn’t have guessed that the “average value” would be that high, but you don’t have to be an expert in statistics (and believe me, in spite of giving 110% in maths class at school, I’m not one) to wonder what “average” means here, and further, why it even matters. This might be an attempt at justification, but I don’t see why it provides a rationale for change.

Then we have this….

The changes are also a response to feedback from our Grant Assessment Panels who have found it increasingly difficult to assess and compare the value of applications ranging from £200,000 to £2 million, where there is variable level of detail on project design, costs and deliverables. This issue has become more acute as the number of grant applications over £1 million has steadily increased over the last two years. Narrowing the funding range of the scheme will help to maintain the robustness of the assessment process, ensuring all applications get a fair hearing.

I have every sympathy for the Grant Assessment Panel members here – how do you choose between funding one £2m project and funding 10 x £200k projects, or any combination you can think of? It’s not so much comparing apples to oranges as comparing grapes to watermelons. And they’re right to point out the “variable” level of detail provided – but that’s only because their own rules give a maximum of 6 A4 pages for the Case for Support for projects under £1m and 12 for those over. If you think that sounds superficially reasonable, notice that it’s potentially double the space to argue for ten times the money. I’ve supported applications of £1m+ and 12 sides of A4 is nowhere near enough, compared to the relative luxury of 6 sides for £200k. This is a problem.

In my view it makes sense to “introduce an annual open competition for grants between £1 million and £2.5 million”, which is what the ESRC propose to do. So I think there’s a good argument for lowering the upper threshold from £2m to £1m and setting it up as a separate competition. I know the ESRC want to reduce the number of calls/schemes, but this makes sense. As things stand I’ve regularly steered people away from the Centres/Large Grants competition towards Standard Grants instead, where I think success rates will be higher and they’ll get a fairer hearing. So I’d be all in favour of having some kind of single Centres/Large/Huge/Grants of Unusual Size competition.

But nothing here seems to me to be an argument for raising the lower limit.

But finally, I think we come to what I suspect is the real reason, and judging by Twitter comments so far, I’m not alone in thinking this.

We anticipate that these changes will reduce the volume of applications we receive through the Standard Grants scheme. That will increase overall success rates for those who do apply as well as reducing the peer review requirements we need to place on the social science community.

There’s a real problem with ESRC success rates, which dropped to 10% in the July open call, with over half the “excellent” proposals unfunded. This is down from success rates of around 25%, which had much improved over the last few years. I don’t know whether this is a blip – perhaps a few very expensive projects were funded and a lot of cheaper ones missed out – but it’s not good news. So it’s hard not to see this change as driven entirely by a desire to get success rates up, and perhaps as an indication that this wasn’t a blip.

In a recent interview with Adam Smith of Research Professional, Chief Executive Jane Elliott appeared to rule out the option of individual sanctions, which had been threatened if institutional restraint failed to bring down the number of poor quality applications. It appears that the problem is not so much poor quality applications as lots of high quality applications, not enough money, plummeting success rates, and something needing to be done.

All this raises some difficult questions.

  • Where are social science researchers now supposed to go for funding for projects whose “natural size” is between £10k (British Academy Small Grants) and £350k, the proposed new minimum threshold? There’s only really the Leverhulme Trust, whose schemes will suit some project types but not others, and they’re not exclusively a social science funder.
  • Where will the next generation of PIs to be entrusted with £350k of taxpayer’s money have an opportunity to cut their teeth, both in terms of proving themselves academically and managerially?
  • What about early career researchers? At least here we can expect a further announcement – there has been talk of merging the ‘future leaders scheme’ into Standard Grants, so perhaps there will be a lower minimum for them. But we’ll see.
  • Given that the minimum threshold has almost been doubled, what consultation has been carried out? I’m just a humble Business School Research Manager (I mean that I’m humble – my Business School is outstanding, obviously), so perhaps it’s not surprising that this is the first I’ve heard of it. But was there any meaningful consultation over this? Is there any evidence underpinning claims for the efficiency of fewer, longer and larger grants?
  • How do institutions respond? I guess one way will be to work harder to create bigger gestalt projects with multiple themes and streams and work packages. But surely expectations of grant getting for promotion and other purposes need to be dialled right back, if they haven’t been already. Do we encourage or resist a rush to get applications in before the change, at a time when success rates will inevitably be dire?

Of course, the underlying problem is that there’s not enough money in the ESRC’s budget to support excellent social science after years and years of “flat cash” settlements. And it’s hard to see what can be done about that in the current political climate.

Grant Writing Mistakes part 94: The “Star Wars”

Have you seen Star Wars?  Even if you haven’t, you might be aware of the iconic opening scene, and in particular the scrolling text that begins

“A long time ago, in a galaxy far, far away….”

(Incidentally, this means that the Star Wars films are set in the past, not the future. Which is a nice bit of trivia and the basis for a good pub quiz question).  What relevance does any of this have for research grant applications?  Patience, Padawan, and all will become clear.

What I’m calling the “Star Wars” error in grant writing is starting the main body of your proposal from the position of “A long time ago…”, before going on to review the literature at great length, quoting everything that calls for more research, and in general taking a lot of time and space to lay the groundwork and justify the research – without yet telling the reader what the project is about, why it’s important, or why it’s you and your team that should do it.

This information about the present project will generally emerge in its own sweet time and space, but not until two thirds of the way through the available space.  What then follows is a rushed exposition with inadequate detail about the research questions and about the methods to be employed.  The reviewer is left with an encyclopaedic knowledge of all that went before it, of the academic origin story of the proposal, but precious little about the project for which funding is being requested.  And without a clear and compelling account of what the project is about, the chances of getting funded are pretty much zero.  Reviewers will not unreasonably want more detail, and may speculate that its absence is an indication that the applicants themselves aren’t clear what they want to do.

Yes, an application does need to locate itself in the literature, but this should be done quickly, succinctly, clearly, and economically as regards the space available.  Depending on the nature of the funder, I’d suggest not starting with the background, and instead opening with what the present project is about, then zooming out to locate it in the literature once the reader knows what it is that’s being located.  Certainly if your background/literature review section takes up more than a quarter of the available space, it’s too long.

(Although I think “the Star Wars”  is a defensible name for this grant application writing mistake, it’s only because of the words “A long time ago, in a galaxy far, far away….”. Actually the scrolling text is a really elegant, pared down summary of what the viewer needs to know to make sense of what follows… and then we’re straight into planets, lasers, a fleeing spaceship and a huge Star Destroyer that seems to take forever to fly through the shot.)

In summary, if you want the best chance of getting funded, you should, er… restore balance to the force…. of your argument. Or something.

ESRC success rates 2013/2014

The ESRC Annual Report for 2013-14 has been out for quite a while now, and a quick summary and analysis from me is long overdue.

Although I was tempted to skip straight through all of the good news stories about ESRC successes and investments and dive straight in looking for success rates, I’m glad I took the time to at least skim read some of the earlier stuff.  When you’re involved in the minutiae of supporting research, it’s sometimes easy to miss the big picture of all the great stuff that’s being produced by social science researchers and supported by the ESRC.  Chapeau, everyone.

In terms of interesting policy stuff, it’s great to read that the “Urgency Grants” mechanism for rapid responses to “rare or unforeseen events” which I’ve blogged about before is being used, and has funded work “on the Philippines typhoon, UK floods, and the Syrian crisis”.  While I’ve not been involved in supporting an Urgency Grant application, it’s great to know that the mechanism is there, that it works, and that at least some projects have been funded.

The “demand management” agenda

This is what the report has to say on “demand management” – the concerted effort to reduce the number of applications submitted, so as to increase the success rates and (more importantly) reduce the wasted effort of writing and reviewing applications with little realistic chance of success.

Progress remains positive with an overall reduction in application numbers of 41 per cent, close to our target of 50 per cent. Success rates have also increased to 31 per cent, comparable with our RCUK partners. The overall quality of applications is up, whilst peer review requirements are down.

There are, however, signs that this positive momentum may
be under threat as in certain schemes application volume is
beginning to rise once again. For example, in the Research
Grants scheme the proposal count has recently exceeded
pre-demand management levels. It is critical that all HEIs
continue to build upon early successes, maintaining the
downward pressure on the submission of applications across
all schemes.

It was always likely that “demand management” might be the victim of its own success – as success rates creep up again, getting a grant appears more likely and so researchers and research managers encourage and submit more applications.  Other factors might also be involved – the stage of the REF cycle, for example.  Or perhaps now talk of researcher or institutional sanctions has faded away, there’s less incentive for restraint.

Another possibility is that some universities haven’t yet got the message or don’t think it applies to them.  It’s also not hard to imagine that the kinds of internal review mechanisms that some of us have had for years and that we’re all now supposed to have are focusing on improving the quality of applications, rather than filtering out uncompetitive ideas.  But is anyone disgracing themselves?

Looking down the list of successes by institution (p. 41) it’s hard to pick out any obvious bad behaviour.  Most of those who’ve submitted more than 10 applications have an above-average success rate.  You’d only really pick out Leeds (10 applications, none funded), Edinburgh (8/1) and Southampton (14/2), and a clutch of institutions on 5/0 (including top-funded Essex, surprisingly), but in all those cases one or two more successes would change the picture.  Similarly for the top performers – Kings College (7/3), King Leicester III (9/4), Oxford (14/6) – it’s hard to make much of a case for the excellence or inadequacy of internal peer review systems from these figures alone.  What might be more interesting is a list of applications by institution which failed to reach the required minimum standard, but that’s not been made public to the best of my knowledge.  And of course, all these figures only refer to the response mode Standard Grant applications in the financial year (not academic year) 2013-14.

Concentration of Funding

Another interesting stat (well, true for some values of “interesting”) concerns the level of concentration of funding.  The report records the expenditure levels for the top eleven (why 11, no idea…) institutions by research expenditure and by training expenditure.  Interesting question for you… what percentage of the total expenditure do the top 11 institutions get?  I could tell you, but if I tell you without making you guess first, it’ll just confirm what you already think about concentration of funding.  So I’m only going to tell you that (unsurprisingly) training expenditure is more concentrated than research funding.  The figures you can look up for yourself.  Go on, have a guess, go and check (p. 44) and see how close you are.

Research Funding by Discipline

On page 40, and usually the most interesting/contentious.  Overall success rate was 25% – a little down from last year, but a huge improvement on 14% two years ago.

Big winners?  History (4 from 6), Linguistics (5 from 9), Social Anthropology (4 from 9), Political and International Studies (9 from 22), and Psychology (26 from 88 – just under 30% of all grants funded were in psychology).  Big losers?  Education (1 from 27), Human Geography (1 from 19), Management and Business Studies (2 from 22).

Has this changed much from previous years?  Well, you can read what I said last year and the year before on this, but overall it’s hard to say because we’re talking about relatively small numbers for most subjects, and because some discipline classifications have changed over the last few years.  But, once again, for the third year in a row, Business and Management and Education do very, very poorly.

Human Geography has also had a below-average success rate for the last few years, but going from 3 from 14 to 1 in 19 probably isn’t that dramatic a collapse – though it’s certainly a bad year.  I always make a point of trying to be nice about Human Geography, because I suspect they know where I live.  Where all of us live.  Oh, and Psychology gets a huge slice of the overall funding, albeit not a disproportionate one given the number of applications.

Which kind of brings us back to the same questions I asked in my most-read-ever piece – what on earth is going on with Education and Business and Management research, and why do they do so badly with the ESRC?  I still don’t have an entirely satisfactory answer.

I’ve put together a table showing changes to disciplinary success rates over the last few years which I’m happy to share, but you’ll have to email me for a copy.  I’ve not uploaded it here because I need to check it again with fresh eyes before it’s used – fiddly, all those tables and numbers.