Grant Writing Mistakes part 94: The “Star Wars”

Have you seen Star Wars?  Even if you haven’t, you might be aware of the iconic opening scene, and in particular the scrolling text that begins

“A long time ago, in a galaxy far, far away….”

(Incidentally, this means that the Star Wars films are set in the past, not the future. Which is a nice bit of trivia and the basis for a good pub quiz question).  What relevance does any of this have for research grant applications?  Patience, Padawan, and all will become clear.

What I’m calling the “Star Wars” error in grant writing is starting the main body of your proposal from the position of “A long time ago…”: going on to review the literature at great length, quoting everything that calls for more research, and in general taking a lot of time and space to lay the groundwork and justify the research, without yet telling the reader what it’s about, why it’s important, or why it’s you and your team who should do it.

This information about the present project does generally emerge in its own sweet time, but often not until two thirds of the way through the available space.  What then follows is a rushed exposition with inadequate detail about the research questions and the methods to be employed.  The reviewer is left with an encyclopaedic knowledge of everything that went before, the academic origin story of the proposal, but precious little about the project for which funding is being requested.  And without a clear and compelling account of what the project is about, the chances of getting funded are pretty much zero.  Reviewers will, not unreasonably, want more detail, and may speculate that its absence is an indication that the applicants themselves aren’t clear about what they want to do.

Yes, an application does need to locate itself in the literature, but this should be done quickly, succinctly, clearly, and economically as regards the space available.  Depending on the nature of the funder, I’d suggest not starting with the background, and instead opening with what the present project is about, then zooming out to locate it in the literature once the reader knows what it is that’s being located.  Certainly if your background/literature review section takes up more than about a quarter of the available space, it’s too long.

(Although I think “the Star Wars” is a defensible name for this grant application writing mistake, it’s only defensible because of the words “A long time ago, in a galaxy far, far away….”. Actually the scrolling text is a really elegant, pared-down summary of what the viewer needs to know to make sense of what follows… and then we’re straight into planets, lasers, a fleeing spaceship and a huge Star Destroyer that seems to take forever to fly through the shot.)

In summary, if you want the best chance of getting funded, you should, er… restore balance to the force…. of your argument. Or something.

ESRC success rates 2013/2014

The ESRC Annual Report for 2013-14 has been out for quite a while now, and a quick summary and analysis from me is long overdue.

Although I was tempted to skip straight through all of the good news stories about ESRC successes and investments and dive straight in looking for success rates, I’m glad I took the time to at least skim read some of the earlier stuff.  When you’re involved in the minutiae of supporting research, it’s sometimes easy to miss the big picture of all the great stuff that’s being produced by social science researchers and supported by the ESRC.  Chapeau, everyone.

In terms of interesting policy stuff, it’s great to read that the “Urgency Grants” mechanism for rapid responses to “rare or unforeseen events”, which I’ve blogged about before, is being used, and has funded work “on the Philippines typhoon, UK floods, and the Syrian crisis”.  While I’ve not been involved in supporting an Urgency Grant application, it’s great to know that the mechanism is there, that it works, and that at least some projects have been funded.

The “demand management” agenda

This is what the report has to say on “demand management” – the concerted effort to reduce the number of applications submitted, so as to increase the success rates and (more importantly) reduce the wasted effort of writing and reviewing applications with little realistic chance of success.

Progress remains positive with an overall reduction in application numbers of 41 per cent, close to our target of 50 per cent. Success rates have also increased to 31 per cent, comparable with our RCUK partners. The overall quality of applications is up, whilst peer review requirements are down.

There are, however, signs that this positive momentum may be under threat as in certain schemes application volume is beginning to rise once again. For example, in the Research Grants scheme the proposal count has recently exceeded pre-demand management levels. It is critical that all HEIs continue to build upon early successes, maintaining the downward pressure on the submission of applications across all schemes.

It was always likely that “demand management” might be the victim of its own success – as success rates creep up again, getting a grant appears more likely, and so researchers and research managers encourage and submit more applications.  Other factors might also be involved – the stage of the REF cycle, for example.  Or perhaps, now that talk of researcher or institutional sanctions has faded away, there’s less incentive for restraint.

Another possibility is that some universities haven’t yet got the message or don’t think it applies to them.  It’s also not hard to imagine that the kinds of internal review mechanisms that some of us have had for years and that we’re all now supposed to have are focusing on improving the quality of applications, rather than filtering out uncompetitive ideas.  But is anyone disgracing themselves?

Looking down the list of successes by institution (p. 41), it’s hard to pick out any obvious bad behaviour.  Most of those who’ve submitted more than 10 applications have an above-average success rate.  You’d only really pick out Leeds (10 applications, none funded), Edinburgh (8/1) and Southampton (14/2), and a clutch of institutions on 5/0 (including top-funded Essex, surprisingly), but in all those cases one or two more successes would change the picture.  Similarly for the top performers – King’s College (7/3), King Leicester III (9/4), Oxford (14/6) – it’s hard to make much of a case for the excellence or inadequacy of internal peer review systems from these figures alone.  What might be more interesting is a list of applications by institution which failed to reach the required minimum standard, but to the best of my knowledge that hasn’t been made public.  And of course, all these figures only refer to the response mode Standard Grant applications in the financial year (not academic year) 2013-14.

Concentration of Funding

Another interesting stat (well, true for some values of “interesting”) concerns the level of concentration of funding.  The report records the expenditure levels for the top eleven (why 11, no idea…) institutions by research expenditure and by training expenditure.  Interesting question for you… what percentage of the total expenditure do the top 11 institutions get?  I could tell you, but if I tell you without making you guess first, it’ll just confirm what you already think about concentration of funding.  So I’m only going to tell you that (unsurprisingly) training expenditure is more concentrated than research funding.  The figures you can look up for yourself.  Go on, have a guess, go and check (p. 44) and see how close you are.

Research Funding by Discipline

This is on page 40, and is usually the most interesting/contentious section.  The overall success rate was 25% – a little down from last year, but a huge improvement on the 14% of two years ago.

Big winners?  History (4 from 6), Linguistics (5 from 9), Social Anthropology (4 from 9), Political and International Studies (9 from 22), and Psychology (26 from 88 – just under 30% of all grants funded were in psychology).  Big losers?  Education (1 from 27), Human Geography (1 from 19), Management and Business Studies (2 from 22).

Has this changed much from previous years?  Well, you can read what I said last year and the year before on this, but overall it’s hard to say because we’re talking about relatively small numbers for most subjects, and because some discipline classifications have changed over the last few years.  But, once again, for the third year in a row, Business and Management and Education do very, very poorly.

Human Geography has also had a below average success rate for the last few years, but going from 3 from 14 to 1 from 19 probably isn’t that dramatic a collapse – though it’s certainly a bad year.  I always make a point of trying to be nice about Human Geography, because I suspect they know where I live.  Where all of us live.  Oh, and Psychology gets a huge slice of the overall funding, albeit not a disproportionate one given the number of applications.

Which kind of brings us back to the same questions I asked in my most-read-ever piece – what on earth is going on with Education and with Business and Management research, and why do they do so badly with the ESRC?  I still don’t have an entirely satisfactory answer.

I’ve put together a table showing changes to disciplinary success rates over the last few years which I’m happy to share, but you’ll have to email me for a copy.  I’ve not uploaded it here because I need to check it again with fresh eyes before it’s used – fiddly, all those tables and numbers.

Pre-mortems: Tell me why your current grant application or research project will fail

I came across a really interesting idea the other week via the Freakonomics podcast – the idea of a project “pre-mortem” or “prospective hindsight”.  They interviewed Gary Klein, who described it as follows:

KLEIN:  I need you to be in a relaxed state of mind.  So lean back in your chair. Get yourself calm and just a little bit dreamy. I don’t want any daydreaming but I just want you to be ready to be thinking about things. And I’m looking in a crystal ball. And uh, oh, gosh…the image in the crystal ball is a really ugly image. And this is a six-month effort. We are now three months into the effort and it’s clear that this project has failed. There’s no doubt about it. There’s no way that it’s going to succeed. Oh, and I’m looking at another scene a few months later, the project is over and we don’t even want to talk about it. And when we pass each other in the hall, we don’t even make eye contact. It’s that painful. OK. So this project has failed, no doubt about it [….] I want each of you to write down all the reasons why this project has failed. We know it failed. No doubts. Write down why it failed.

The thinking here is that such an approach to projects reduces overconfidence, and elsewhere the podcast discusses the problems of overconfidence, “go fever”, the Challenger shuttle disaster, and how cultural/organisational issues can make it difficult to bring up potential problems and obstacles.  The pre-mortem exercise might free people from that, and encourages people (as a team) to find reasons for failure and then respond to them.  I don’t do full justice to the arguments here, but you can listen to it for yourself (or read the transcript) at the link above.  It reminds me of some of the material covered in a MOOC I took which showed how very small changes in the way that questions are posed and framed can make surprisingly large differences to the decisions that people make, so perhaps this very subtle shift in mindset might be useful.

How might we use the idea of a pre-mortem in research development?  My first thought was about grant applications.  Would it help to get the applicants to undertake the pre-mortem exercise?  I’m not sure that overconfidence is often a huge problem among research teams (a kind of grumpy, passive-aggressive form of entitled pessimism is probably more common), so perhaps that kind of groupthink-driven overconfidence or excessive positivity is less of an issue than in larger project teams where nobody wants to be the one to be negative.  But perhaps there’s value in asking the question anyway, and re-focusing applicants on the fact that they’re writing an application for reviewers and for a funding body, not for themselves.  A reminder that the views, priorities, and (mis)interpretations of others are crucial to their chances of success or failure.

Would it help to say to internal reviewers “assume this project wasn’t funded – tell me why”?  Possibly.  It might flush out issues that reviewers may be too polite or insufficiently assertive to raise otherwise, and, again, it focuses minds on the nature of the process as a competition.  It could also help reviewers identify where the biggest danger for the application lies.

Another way it could usefully be applied is in helping applicants risk-assess their own project.  Saying to them “you got funded, but didn’t achieve the objectives you set for yourself.  Why not?” might be a good way of identifying project risks to minimise in the management plan, or risks to alleviate through better advance planning.  It might prompt researchers to think more cautiously about the project timescale, especially around issues that are largely out of their control.

So… has anyone used anything like this before in research development?  Might it be a useful way of thinking?  Why will your current application fail?

Six writing habits I reckon you ought to avoid in grant applications…..

There are lots of mistakes to avoid in writing grant applications, and I’ve written a bit about some of them in previous posts (see the “advice on grant applications” link above).  This one is more about writing habits.  I read a lot of draft grant applications, and as a result I’ve got an increasingly long list of writing quirks, tics, habits, styles and affectations that Get On My Nerves.

Imagine I’m a reviewer… Okay, I’ll start again… imagine I’m a proper reviewer with some kind of power and influence… imagine further that I’ve got a pile of applications to review that’s as high as a high pile of applications.  Imagine how well disposed I’d feel towards anyone who makes reading their writing easier, clearer, or in the least bit more pleasant.  Remember how the really well-written essays make your own personal marking hell a little bit less sulphurous for a short time.  That.  Whatever that tiny burst of goodwill – or antibadwill – is worth, you want it.

The passive voice is excessively used

I didn’t know the difference between active and passive voice until relatively recently, and if you’re also from a generation where grammar wasn’t really teached in schools then you might not either.  Google is your friend for a proper explanation by people who actually know what they’re talking about, and you should probably read that first, but my favourite explanation is from Rebecca Johnson – if you can add “by zombies”, then it’s passive voice.  I’ve also got the beginnings of a theory that the Borg from Star Trek use the passive voice, and that’s one of the things that makes them creepy (“resistance is futile” and “you will be assimilated”), but I don’t know enough about grammar or Star Trek to make a case for this.  Sometimes the use of the passive voice (by zombies) is appropriate, but often it makes for distant and slightly tepid writing.  Consider:

A one day workshop will be held (by zombies) at which the research findings will be disseminated (by zombies).  A recording of the event will be made (bz) and posted on our blog (bz).  Relevant professional bodies will be approached (bz)…

This will be done, that will be done.  Yawn.  Although, to be fair, a workshop with that many zombies probably won’t be a tepid affair.  But much better, I think, to take ownership… we will do these things, co-Is A and B will lead on X.  Academic writing seems to encourage depersonalisation and formality and distancing (which is why politicians love it – “mistakes were made [perhaps by zombies, but not by me]”).

I think there are three reasons why I don’t like it.  One is that it’s just dull.  A second is that it can read like a way of avoiding detail or specifics or responsibility, for precisely the reasons that politicians use it, so it can subconsciously undermine the credibility of what’s being proposed.  The third is that, for at least some kinds of projects, who the research team are – and in particular who the PI is – really matters.  I can understand the temptation to be distant and objective and sciency, as if the research speaks entirely for itself.  But this is your grant application; it’s something that you ought to be excited and enthused by, and that should come across.  If you’re not, don’t even bother applying.

First Person singular, First Person plural, Third Person

Pat Thomson’s blog Patter has a much fuller and better discussion about the use of  “we” and “I” in academic writing that I can’t really add much to. But I think the key thing is to be consistent – don’t be calling yourself Dr Referstoherselfinthethirdperson in one part of the application, “I” in another, “the applicant” somewhere else, and “your humble servant”/ “our man in Havana” elsewhere.  Whatever you choose will feel awkward, but choose a consistent method of awkwardness and have done with it. Oh, and don’t use “we” if you’re the sole applicant.  Unless you’re Windsor (ii), E.

And don’t use first names for female team members and surnames for male team members.  Or, worse, first names for women, titles and surnames for men.  I’ve not seen this myself, but I read about it in a tweet with the hashtag #everydaysexism.

Furthermore and Moreover…

Is anyone willing to mount a defence for the utility of either of these words, other than (1) general diversity of language and (2) padding out undergraduate essays to the required word count? I’m just not sure what either of these words actually means or adds, other than perhaps as an attempted rhetorical flourish, or, more likely, a way of bridging non-sequiturs or propping up poor structuring.

“However” and “Yet”…. I’ll grudgingly allow to live.  For now.

Massive (Right Justified) Wall-o-Text

Few things make my heart sink more than having to read a draft application that regards the use of paragraphs and other formatting devices as illustrative of a lack of seriousness and rigour. There is a distinction between densely argued and just dense.  Please make it easier to read… and that means not using right hand justification.  Yes, it has a kind of superficial neatness, but it makes the text much less readable.

Superabundance of Polysyllabic Terminology

Too many long words. It’s not academic language and (entirely necessary) technical terms and jargon that I particularly object to – apart from in the lay summary, of course.  It’s a general inflation of linguistic complexity – using a dozen words where one will do, never using a simple word where a complex one will do, never making your point twice when a rhetorically-pleasing triple is on offer.

I guess this is all done in an attempt to make the application or the text seem as scholarly and intellectually rigorous as possible, and I think students may make similar mistakes.  As an undergraduate I think I went through a deeply regrettable phase of trying to ape the style of academic papers in my essay writing, and probably made myself sound like one of the most pompous nineteen year olds on the planet.

If you find yourself using words like “effectuate”, you might want to think about whether you might be guilty of this.

Sta. Cca. To. Sen. Ten. Ces.

Varying and manipulating sentence length can be done deliberately to produce certain effects.  Language has a natural rhythm and pace.  Most people probably have some awareness of what that is.  They are aware that sentences which are one paced can be very dull.  They are aware that there is something tepid about this paragraph.  But not everyone can feel the music in language.  I think it is a lack of commas that is killing this paragraph.  Probably there is a technical term for this.

So… anyone willing to defend “moreover” or “furthermore”? Any particularly irritating habits I’ve missed?  Anyone who actually knows any grammar or linguistics able to provide technical terms for any of these habits?

Demand mismanagement: a practical guide

I’ve written an article on Demand (Mis)management for Research Professional. While most of the site’s content is behind a paywall, they’ve been kind enough to make my article open access.  Which saves me the trouble of cutting and pasting it here.

Universities are striving to make their grant applications as high in quality as possible, avoid wasting time and energy, and run a supportive yet critical internal review process. Here are a few tips on how not to do it. [read the full article]

In other news, I was at the ARMA conference earlier this week and co-presented a session on Research Development for the Special Interest Group with Dr Jon Hunt from the University of Bath.  A copy of the presentation and some further thoughts will follow once I’ve caught up with my email backlog….

Book review: The Research Funding Toolkit (Part 1)

For the purposes of this review, I’ve set aside my aversion to the use of terms like ‘toolkit’ and ‘workshop’.

The existence of a market for The Research Funding Toolkit, by Jacqueline Aldridge and Andrew Derrington, is yet more evidence of how difficult it is to get research funding in the current climate.  Although the primary target audience is an academic one, research managers and those in similar roles “will also find most of this book useful”, and I’d certainly have no hesitation in recommending this book to researchers who want to improve their chances of getting funding, as well as to both new and experienced research managers.  In particular, academics who don’t have regular access to research managers (or similar) and to experienced grant getters and givers at their own institution should consider this book essential reading if they entertain serious ambitions of obtaining research funding.  While no amount of skill in grant writing will get a poor idea funded, a lack of skill in grant writing can certainly prevent an outstanding idea from getting the hearing it deserves if the application lacks clarity, fails to highlight the key issues, or fails to make a powerful case for its importance.

The authors have sought to distil a substantial amount of advice and experience down into one short book which covers finding appropriate funding sources, planning an application, understanding application forms, and assembling budgets.  But it goes beyond mere administrative advice, and also addresses writing style, getting useful (rather than merely polite) feedback on draft versions, the internal politics of grant getting, the challenges of collaborative projects, and the key questions that need to be addressed in every application.  Crucially, it demystifies what really goes on at grant decision making meetings – something that far too many applicants know far too little about.  Applicants would love to think that the scholarly and eminent panel spend hours subjecting every facet of their magnum opus to detailed, rigorous, and forensic analysis.  The reality is – unavoidably given application numbers  – rather different.

Aldridge and Derrington are well-situated to write a book about obtaining research funding.  Aldridge is Research Manager at Kent Business School and has over eight years’ experience of research management and administration.  Derrington is Pro-Vice Chancellor for Humanities and Social Sciences at the University of Liverpool, and has served on grant committees for several UK research councils and for the Wellcome Trust.  His research has been “continuously funded” by various schemes and funders for 30 years.  I think a book like this could only have been written in close collaboration between an academic with grant getting and giving experience, and a research manager with experience of supporting applications over a number of years.

The book practises what it preaches by applying the principles of grant writing that it advocates to its own style and layout.  It is organised into 13 distinct chapters, each with a summary and introduction, and a conclusion at the end to draw out the key points and lessons.  It includes 19 different practical tools, as well as examples from successful grant applications. One of the appendices offers advice on running institutional events on grant getting.  As it advises applicants to do, it breaks the text down into small chunks, makes good use of headings and subheadings, and uses clear, straightforward language.  It’s certainly an easy, straightforward read which won’t take too long to get through cover-to-cover, and the structure allows the reader to dip back in to re-read appropriate sections later.  Probably the most impressive thing for me about the style is how lightly it wears its expertise – genuinely useful advice without falling into the traps of condescension, smugness, or preaching.  Although the prose sacrifices sparkle for clarity and brevity, the book coins a number of useful phrases and distinctions that will be of value, and I’ll certainly be adopting one or two of them.

Writing a book of this nature raises a number of challenges about specificity and relevance.  Different subjects have different funders with different priorities and conventions, and arrangements vary from country to country and – of course – over time.  The authors have deliberately sought to use a wide range of example funders, including funders from Australia, America, and Europe, though as you might expect the majority of exemplar funders are UK-based.  Different Research Councils are used as case studies, but I would imagine that the advice given is generalisable enough to be of real value across academic disciplines and countries.  It’s harder to tell how well this book will date (references to web resources all date from October 2011), but much of the advice flows directly from (a) the scarcity of resources, and (b) the way that grant panels are organised and work, and it’s hard to imagine either changing substantially.  The authors are careful not to make generalisations or sweeping assertions based on any particular funder or scheme, so I would be broadly optimistic about the book’s continuing relevance and utility in years to come.  There’s also a website to accompany the book where new materials and updates may be added in the future; there are already a number of blog posts published since the book came out.

Worries about appearing dated may account for the book having comparatively little to say about the impact agenda and how to go about writing an impact statement.  Only two pages address this directly, and much of that space is taken up with examples.  Although not all UK funders ask for impact statements yet, the research councils have been asking for them for some time, and the indications are that other countries are more likely to follow suit than not.  However, I think the authors were right not to devote a substantial section to this, as understandings of and approaches to impact are still comparatively in their infancy, and such a section would be likely to date quickly.

I’ve attempted a fairly general review in this post, and I’ll save most of my personal reaction for Part 2.  As well as highlighting a few areas that I found particularly useful, I’m going to raise a few issues arising from the book as a jumping off point for debate and discussion.  Attempting to do that in this first post would make it too long, and would unbalance the review by placing excessive focus on areas where I’d tentatively disagree, rather than on the overwhelming majority of the points and arguments made in the book, which I thoroughly agree with and endorse absolutely.

The Research Funding Toolkit (£21.99 for the paperback version) is available from Sage.  The Sage website also mentions an ebook version, but the link doesn’t appear to be working at the time of writing.

Declarations of interest:
Publishers Sage were kind enough to provide me with a free review copy of this book.  I have had some very brief Twitter interactions with Derrington and I met Aldridge briefly at the ARMA conference earlier this year.

News from the ESRC: International co-investigators and the Future Leaders Scheme

"They don't come over here, they take our co-investigator jobs..."I’m still behind on my blogging – I owe the internet the second part of the impact series, and a book review I really must get round to writing.  But I picked up an interesting nugget of information regarding the ESRC and international co-investigators that’s worthy of sharing and commenting upon.

ESRC communications send round an occasional email entitled ‘All the latest from the ESRC’, which is well worth subscribing to and reading very carefully, as quite big announcements and changes are often smuggled out in the small print.  In the latest version, for example, the headline news is the Annual Report (2011-12), while the announcement of the ESRC Future Leaders call for 2012 is only the fifth item down a list of funding opportunities.  To be fair, it was also announced on Twitter and perhaps elsewhere too, and perhaps the email has a wider audience than people like me.  But even so, it’s all a bit low key.

I’ve not got much to add to what I said last year about the Future Leaders Scheme other than to note with interest the lack of an outline stage this year, and the decision to ring fence some of the funding for very early career researchers – current doctoral students and those who have just passed their PhD.  Perhaps the ESRC are now more confident in institutions’ ability to regulate their own submission behaviour, and I can see this scheme being a real test of this.  I know at the University of Nottingham we’re taking all this very seriously indeed, and grant writing is now neither a sprint nor a marathon but more like a steeplechase, and my impression from the ARMA conference is that we’re far from alone in this.  Balancing ‘demand management’ with a desire to encourage applications is a topic for another blog post.  As is the effect of all these calls with early Autumn deadlines – I’d argue it’s much harder to demand manage over the summer months when applicants, reviewers, and research managers are likely to be away on holiday and/or researching.

Something else mentioned in the ESRC email is a light-touch review of the ESRC’s international co-investigator policy.  One of the findings was that

“…grant applications with international co-investigators are nearly twice as likely to be successful in responsive mode competitions as those without, strengthening the argument that international cooperation delivers better research.”

This is very interesting indeed.  My first reaction is to wonder whether all of that greater success can be explained by higher quality, or whether the extra value for money offered has made a difference.  Outside of the various international co-operation/bilateral schemes, the ESRC would generally expect to pay only directly incurred research costs for ICo-Is, such as travel, subsistence, transcription, and research assistance.  It won’t normally pay for investigator time and will never pay overheads, which represents a substantial saving compared with naming a UK-based Co-I.

While the added value for money argument will generally go in favour of the application, there are circumstances where it might make it technically ineligible.  When the ESRC abolished the Small Grants scheme and introduced the floor of £200k as the minimum to be applied for through the research grants scheme, that figure was considered to represent the minimum scale, scope, and ambition that they were prepared to entertain.  But a project with a UK Co-I may sneak in just over £200k and be eligible, yet an identical project with an ICo-I would not be eligible, as it would not have salary costs or overheads to bump up the cost.  I did raise this with the ESRC a while back when I was supporting an application that would have been ineligible under the new rules, but we managed to submit it before the final deadline for Small Grants.  The issue did not arise for us then, but I’m sure it will arise (and probably already has) for others.
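To make that arithmetic concrete, here’s a toy illustration.  Every figure below is an invented placeholder (real fEC costings are more involved than this), but it shows how dropping the salary and overhead lines can pull an otherwise identical project below the floor.

```python
# Toy illustration of the £200k floor quirk described above.
# All figures are invented placeholders, not real fEC rates or costs.

FLOOR = 200_000  # minimum value for the ESRC research grants scheme at the time

shared_costs = 120_000                # PI time, fieldwork, transcription, etc. (same in both versions)
uk_coi_salary_and_overheads = 85_000  # payable when the co-investigator is UK-based
icoi_direct_costs = 15_000            # only directly incurred costs payable for an ICo-I

versions = {
    "UK Co-I": shared_costs + uk_coi_salary_and_overheads,   # 205,000
    "International Co-I": shared_costs + icoi_direct_costs,  # 135,000
}

for label, total in versions.items():
    status = "eligible" if total >= FLOOR else "ineligible (below the floor)"
    print(f"{label}: £{total:,} -> {status}")
```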

The ESRC has clarified the circumstances under which they will pay overseas co-investigator salary costs:

“….only in circumstances where payment of salaries is absolutely required for the research project to be conducted. For example, where the policy of the International Co-Investigator’s home institution requires researchers to obtain funding for their salaries for time spent on externally-funded research projects.

In instances where the research funding structure of the collaborating country is such that national research funding organisations equivalent to the ESRC do not normally provide salary costs, these costs will not be considered. Alternative arrangements to secure researcher time, such as teaching replacement costs, will be considered where these are required by the co-investigator’s home institution.”

This all seems fairly sensible, and would allow the participation of researchers based in institutes where they’re expected to bring in their own salary, and in those where there isn’t a substantial research time allocation that could straightforwardly be used for the project.

While it would clearly be inadvisable to add on an ICo-I in the hope of boosting chances of success or for value for money alone, it’s good to know that applications with ICo-Is are doing well with the ESRC even outside of the formal collaborative schemes, and that we shouldn’t shy away from looking abroad for the very best people to work with.   Few would argue with the ESRC’s contention that

[m]any major issues requiring research evidence (eg the global economic crisis, climate change, security etc.) are international in scope, and therefore must be addressed with a global research response.

Responding to Referees

Preliminary evidence appears to show that this approach to responding to referees is - on balance - probably sub-optimal. (Photo by Tseen Khoo)

This post is co-authored by Adam Golberg of Cash for Questions (UK), and Jonathan O’Donnell and Tseen Khoo of The Research Whisperer (Australia).

It arises out of a comment that Jonathan made about understanding and responding to referees on one of Adam’s posts about what to do if your grant application is unsuccessful. This seemed like a good topic for an article of its own, so here it is, cross-posted to our respective blogs.

A quick opening note on terminology: We use ‘referee’ or ‘assessor’ to refer to academics who read and review research grant applications, then feed their comments into the final decision-making process. Terminology varies a bit between funders, and between the UK and Australia. We’re not talking about journal referees, although some of the advice that follows may also apply there.

————————————-

There are funding schemes that offer applicants the opportunity to respond to referees’ comments. These responses are then considered alongside the assessors’ scores/comments by the funding panel. Some funders (including the Economic and Social Research Council [ESRC] in the UK) have a filtering process before this point, so if you are being asked to respond to referees’ comments, you should consider it a positive sign as not all applications get this far. Others, such as the Australian Research Council (ARC), offer you the chance to write a rejoinder regardless of the level of referees’ reports.

If the funding body offers you the option of a response, you should consider your response as one of the most important parts of the application process.  A good response can draw the sting from criticisms, emphasise the positive comments, and enhance your chances of getting funding.  A bad one can doom your application.

And if you submit no response at all? That can signal negative things about your project and research team that might live on beyond this grant round.

The first thing you might need to do when you get the referees’ comments about your grant application is kick the (imaginary) cat.* This is an important process. Embrace it.

When that’s out of your system, here are four strategies for putting together a persuasive response and pulling that slaved-over application across the funding finish line.

1. Attitude and tone

Be nice.  Start with a brief statement thanking the anonymous referees for their careful and insightful comments, even if actually you suspect some of them are idiots who haven’t read your masterpiece properly. Think carefully about the tone of the rest of the response as well.  You’re aiming for calm, measured, and appropriately assertive.  There’s nothing wrong with saying that a referee is just plain wrong on a particular point, but do it calmly and politely.  If you’re unhappy about a criticism or reviewer, there’s a good chance that it will take several drafts before you eliminate all the spikiness from the text.  If it makes you feel better (and it might), you can write what you really think in the tone that you think it in but, whatever you do, don’t send that version! This is the version that may spontaneously combust from the deadly mixture of vitriol and pleading contained within.

Preparing a response is not about comprehensively refuting every criticism, or establishing intellectual superiority over the referees. You need to sift the comments to identify the ones that really matter. What are the criticisms (or backhanded compliments) that will harm your cause? Highlight those and answer them methodically (see below). Petty argy-bargy isn’t worth spending your time on.

2. Understanding and interpreting referees’ comments

One UK funder provides referee report templates that invite the referees to state their level of familiarity with the topic and even a little about their research background, so that the final decision-making panel can put their comments into context. This is a great idea, and we would encourage other funding agencies to embrace it.

Beyond this volunteered information (if provided), never assume you know who the referee is, or that you can infer anything else about them because you could be going way off-base with your rant against econometricians who don’t ‘get’ sociological work. If there’s one thing worse than an ad hominem response, it’s an ad hominem response aimed at the wrong target!

One exercise that you might find useful is to produce a matrix listing all of the criticisms, and indicating the referee(s) who made those objections. As these reports are produced independently, the more referees make a particular point, the more problematic it might be.  This tabled information can be sorted by section (e.g. methodology, impact/dissemination plan, alternative approaches). You can then repeat the exercise with the positive comments that were made. While assimilating and processing information is a task that academics tend to be good at, it’s worth being systematic about this because it’s easy to overlook praise or attach too much weight to objections that are the most irritating.

Also, look out for, and highlight, any requests that you do a different project. Sometimes, these can be as obvious as “you should be doing Y instead”, where Y is a rather different project and probably closer to the reviewer’s own interests. These can be quite difficult criticisms to deal with, as what they are proposing may be sensible enough, but not what you want to do.  In such cases, stick to your guns, be clear what you want to do, and why it’s of at least as much value as the alternative proposal.

Using the matrix that you have prepared, consider further how damaging each criticism might be in the minds of the decision makers.  Combining the weight of opinion (positive remarks on a particular point minus criticisms) with the potential damage each could do, you should now have a sense of which criticisms are the most serious.
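If it helps to make that concrete, here’s a rough sketch of the scoring in code.  It’s purely illustrative – the criticisms, counts, and damage scores are made up, and the scoring rule is just one way you might operationalise “weight of opinion times potential damage” (flipped so that a higher score means a bigger problem).

```python
# Purely illustrative sketch of the criticism-weighting exercise described above.
# Each entry: (criticism, referees who raised it, referees who praised that aspect,
#              estimated damage if the panel takes it seriously, on a 1-5 scale).
criticisms = [
    ("Sample too small for the proposed subgroup analysis", 3, 0, 5),
    ("Dissemination plan is vague", 1, 1, 2),
    ("Hotel costs look high", 1, 0, 1),
]

def priority(raised, praised, damage):
    # Weight of opinion (criticisms minus praise, floored at zero) times potential damage.
    return max(raised - praised, 0) * damage

# Sort so the criticisms most likely to sink the application come first -
# roughly the order in which they deserve space in your response.
for text, raised, praised, damage in sorted(
        criticisms, key=lambda c: priority(c[1], c[2], c[3]), reverse=True):
    print(f"{priority(raised, praised, damage):>2}  {text}")
```

A spreadsheet does exactly the same job, of course; the point is to be systematic rather than to trust your memory of which comments stung the most.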

Preparing a response is not a task to be attempted in isolation. You should involve other members of your team, and make full use of your research support office and senior colleagues (who are not directly involved in the application). Take advantage of assistance in interpreting the referees’ comments, and reviewing multiple drafts of your response.

Don’t read the assessor reports by themselves; go back to your whole application as well, several times if necessary. It has probably been some time since you submitted the application, and new eyes and a bit of distance will help you to see it as the referees may have seen it. You may be able to pinpoint the reasons for particular criticisms, or for misunderstandings you had assumed were the referees’ fault. While their criticisms may not be valid for the application you thought you wrote, they may very well be so for the one that you actually submitted.

3. The response

You should plan to use the available space in line with the exercise above, setting aside space for each criticism in proportion to its risk of stopping you getting funded.

Quibbles about your budgeted expenditure for hotel accommodation are insignificant compared to objections that question your entire approach, devalue your track-record, invalidate your methodology, or claim that you’re adding little that’s new to the sum of human knowledge. So, your response should:

  • Make it easy for the decision-makers: Be clear and concise.
  • Be specific when rebutting, referring back to the application. For example: “As we stated on page 24, paragraph 3…”. However, don’t lose sight of the need to create a document that can be understood in isolation as far as possible.
  • If possible and appropriate, introduce something that you’ve done in the time since submission to rebut a negative comment (be careful, though, as some schemes may not allow the introduction of new material).
  • Acknowledge any misunderstandings that arise from the application’s explanatory shortcomings or limitations of space, and be open to new clarifications.
  • Be grateful for the positive comments, but focus on rebutting the negative comments.

4. Be the reviewer

The best way to really get an idea of what the response dynamic is all about in these funding rounds is to become a grant referee yourself. Once you’ve assessed a few applications and cut your teeth on a whole funding round (they can often be year-long processes), you quickly learn about the demands of the job and how regular referees ‘value’ applications.

Look out for chances to be on grant assessment panels, and say yes to invitations to review for various professional bodies or government agencies. Almost all funding schemes could do with a larger and more diverse pool of academics to act as their ‘gate-keepers’.

Finally: Remember to keep your eyes on the prize. The purpose of this response exercise is to give your project the best possible chance of getting funding. It is an inherent part of many funding rounds these days, and not only an afterthought to your application.

* The writers and their respective organisations do not, in any way, endorse the mistreatment of animals. We love cats.  We don’t kick them, and neither should you. It’s just an expression. For those who’ve never met it, it means ‘to vent your frustration and powerlessness’.

I’ve disabled comments on this entry so that we can keep conversations on this article to one place – please head over to the Research Whisperer if you’d like to comment. (AG).

Coping with rejection: What to do if your grant application is unsuccessful. Part 2: Next Steps

Look, I know I said that not getting funded doesn't mean they disliked your proposal, but I need a picture and it's either this or a picture of Simon Cowell with his thumb down. Think on.

In the first part of this series, I argued that it’s important not to misunderstand or misinterpret the reasons for a grant application being unsuccessful.  In the comments, Jo VanEvery shared a phrase that she’s heard from a senior figure at one of the Canadian Research Councils – that research funding “is not a test, it’s a contest”.  Not getting funded doesn’t necessarily mean that your research isn’t considered to be of high quality.  This second entry is about what steps to consider next.

1.  Some words of wisdom

‘Tis a lesson you should heed:  Try, try, try again.
If at first you don’t succeed, Try, try, try again
William Edward Hickson (1803-1870)

The definition of insanity is doing the same thing over and over but expecting different results
Ben Franklin, Albert Einstein, or Narcotics Anonymous

I like these quotes because they’re both correct in their own way.  There’s value to Hickson’s exhortation.  Success rates are low for most schemes and most funders, so even if you’ve done everything right, the chances are against you.  To be successful, you need a degree of resilience to look for another funder or a new project, rather than embarking on a decade-long sulk, muttering plaintively about how “the ESRC doesn’t like” your research whenever the topic of external funding is raised.

However, Franklin et al (or al?) also have a point about repeating the same mistakes without learning anything as you drift from application to application.  While doing this, you can convince yourself that research funding is a lottery (which it isn’t) and that all you have to do is submit enough applications and eventually your number will come up (which it won’t).  This is the kind of approach (on the part of institutions as well as individuals) that’s pushed us close to ‘demand management’ measures with the ESRC.  More on learning from the experience in a moment or two.

2.  Can you do the research anyway?

This might seem like an odd question to ask, but it’s always the first one I ask academic colleagues who’ve been unsuccessful with a grant application (yes, this does happen, even at Nottingham University Business School).  The main component of most research projects is staff time.  And if you’re fortunate enough to be employed by a research-intensive institution which gives you a generous research time allocation, then this shouldn’t be a problem.  Granted, you can’t have that full-time research associate you wanted, but could you cut down the project and take on some or all of that work yourself, or share it between the investigators?  Could you involve more people – perhaps junior colleagues – to help cover the work? Would others be willing to be involved if they could either co-author or be sole author on some of the outputs?  Could it be a PhD project?

Directly incurred research expenses are more of a problem – transcription costs, data costs, travel and expenses – especially if you and your co-investigators don’t have personal research accounts to dip into.  But if it turns out that all you need is your expenses paying, then a number of other funding options become viable – some external, but perhaps also some internal.

Of course, doing it anyway isn’t always possible, but it’s worth asking yourself and your team that question.  It’s also one that’s well worth asking before you decide to apply for funding.

3.  What can you learn for next time?

It’s not nice not getting your project funded.  Part of you probably wants to lock that application away and not think about it again.  Move onwards and upwards, and perhaps try again with another research idea.  While resilience is important, it’s just as important to learn whatever lessons there are to learn, to give yourself the best possible chance next time.

One lesson you might be able to take from the experience is about planning the application.  If you found yourself running out of time, not getting sufficient input from senior colleagues, or not taking full advantage of the support available within your institution, well, that’s a lesson to learn.  Give yourself more time, start well before the deadline, and don’t make yourself rush it.  If you did all this last time, remember that you did, and the difference that it made.  If you didn’t, then the fact is that your application was almost certainly not as strong as it could have been.  And if your application document is not the strongest possible iteration of your research idea, your chances of getting funded are pretty minimal.

I’d recommend reading through your application and the call guidance notes once again in the light of referees’ comments.  Now that you have sufficient distance from the application, you should ‘referee’ it yourself as well.  What would you do better next time?  Not necessarily individual application-specific aspects, but more general points.  Did your application address the priorities of the call specifically enough, or were the crowbar marks far too visible?  Did you get the balance right between exposition and background and writing about the current project?  Did you pay enough attention to each section?  Did you actually answer the questions asked?  Do you understand any criticisms that the referees had?

4. Can you reapply?  Should you reapply?

If it’s the ESRC you’re thinking about, then the answer’s no unless you’re invited.  I think we’re still waiting on guidance from the ESRC about what constitutes a resubmission, but if you find yourself thinking about how much you might need to tinker with your unsuccessful project to make it a fresh submission, then the chances are that you’ll be barking up the wrong tree.  Worst case scenario is that it’s thrown straight out without review, and best case is probably that you end up with something a little too contrived to stand any serious chance of funding.

Some other research funders do allow resubmissions, but generally you will need to declare it.  While you might get lucky with a straight resubmission, my sense is that if it was unsuccessful once it will be unsuccessful again. But if you were to thoroughly revise it, polish it, take advice from anyone willing to give it, and have one more go, well, who knows?

But there’s really no shame in walking away.  Onwards and upwards to the next idea.  Let this one go for now, and work on something new and fresh and exciting instead.  Just remember everything that you learnt along the way.  One former colleague once told me that he usually got at least one paper out of an application even if it was unsuccessful.  I don’t know how true that might be more generally, but you’ve obviously done a literature review and come up with some ideas for future research.  Might there be a paper in all that somewhere?

Another option, which I hinted at earlier when I mentioned looking to cover only the directly incurred costs, is resubmitting to another funder.  My advice on this is simple…. don’t resubmit to another funder.  Or at least, don’t treat it like a resubmission.  Every research funder, every scheme, has different interests and priorities.  You wrote an application for one funder, which presumably was tailored to that funder (it was, wasn’t it?).  So a few alterations probably won’t be enough.

For one thing, the application form is almost certainly different, and that eight-page monstrosity won’t fit into two pages.  And if you cut it down crudely, and it reads like it’s been cut down crudely, you have no chance.  I’ve never worked for a research funding body (unless you count internal schemes where I’ve had a role in managing the process), but I would imagine that if I did, the best way to annoy me (other than using the word ‘impactful’) would be to send me some other funder’s cast-offs.  It’s not quite like romancing a potential new partner and using your old flame’s name by mistake, but you get the picture.  Your new funder wants to feel special and loved.  They want you to have picked them out – them and them alone – for their unique and enlightened approach to funding.  Only they can fill the hole in your heart (sorry, wallet), and satisfy your deep yearning for fulfilment.

And where should you look if your first choice funder does not return your affections?  Well, I’m not going to tell you (not without a consultancy fee, anyway).  But I’m sure your research funding office will be able to help find you some new prospective partners.

 

Coping with rejection: What to do if your grant application is unsuccessful. Part 1: Understand what it means…. and what it doesn’t mean

You can't have any research funding. In this life, or the next....

Some application and assessment processes are for limited goods, and some are for unlimited goods, and it’s important to understand the difference.  PhD vivas and driving tests are assessments for unlimited goods – there’s no limit on how many PhDs or driving licences can be issued.  In principle, everyone could have one if they met the requirements.  You’re not going to fail your driving test because there are better drivers than you.  Other processes are for limited goods – there is (usually) only one job vacancy that you’re all competing for, only so many papers that a top journal can accept, and only so much grant money available.

You’d think this was a fairly obvious point to make.  But talking to researchers who have been unsuccessful with a particular application, there’s sometimes more than a hint of hurt in their voices as they discuss it, and they talk in terms of their research being rejected, or not being judged good enough.  They end up taking it rather personally.  And given the amount of time and effort that researchers must put into their applications, that’s not surprising.

It reminds me of an unsuccessful job applicant whose opening gambit at a feedback meeting was to ask me why I didn’t think that she was good enough to do the job.  Well, my answer was that I was very confident that she could do the job, it’s just that there was someone more qualified and only one post to fill.  In this case, the unsuccessful applicant was simply unlucky – an exceptional applicant was offered the job, and nothing she could have said or done (short of assassination) would have made much difference.  While I couldn’t give the applicant the job she wanted or make the disappointment go away, I could at least pass on the panel’s unanimous verdict on her appointability.  My impression was that this restored some lost confidence, and did something to salve the hurt and disappointment.  You did the best that you could.  With better luck you’ll get the next one.

Of course, with grant applications, the chances are that you won’t get to speak to the chair of the panel who can explain the decision.  You’ll get a letter with the decision and something about how oversubscribed the scheme was and how hard the decisions were, which might or might not be true.  Your application might have missed out by a fraction, or been one of the first into the discard pile.

Some funders, like the ESRC, will pass on anonymised referees’ comments, but oddly, this isn’t always constructive and can even damage confidence in the quality of the peer review process.  In my experience, every batch of referees’ comments will contain at least one weird, wrong-headed, careless, or downright bizarre comment, and sometimes several.  Perhaps a claim about the current state of knowledge that’s just plain wrong, a misunderstanding that can only come from not reading the application properly, and/or a criticism on the spurious grounds that it’s not the project that they would have done.  These apples are fine as far as they go, but they should really taste of oranges.  I like oranges.

Don’t get me wrong – most referees’ reports that I see are careful, conscientious, and insightful, but it’s those misconceived criticisms that unsuccessful applicants will remember, even ahead of the valid ones.  And sometimes they will conclude that it’s those wrong criticisms that are the reason for not getting funded.  Everything else was positive, so that one negative review must be the reason, yes?  Well, maybe not.  It’s also possible that that bizarre comment was discounted by the panel too, and the reason that your project wasn’t funded was simply that the money ran out before they reached it.  But we don’t know.  I really, really, really want to believe that that’s the case when referees write that a project is “too expensive” without explaining how or why.  I hope the panel read our carefully constructed budget and our detailed justification for resources and treat that comment with the fECing contempt that it deserves.

Fortunately, the ESRC have announced changes to procedures which will not only allow a right of reply to referees’ comments, but also communicate the final grade awarded.  This should give a much stronger indication of whether it was a near miss or miles off.  Of course, the news that an application was miles off the required standard may come gift-wrapped with sanctions.  So it’s not all good news.

But this is where we should be heading with feedback.  Funders shouldn’t be shy about saying that the application was a no-hoper, and they should be giving as much detail as possible.  Not so long ago, I was copied into a lovely rejection letter, if there’s any such thing.  It passed on comments, included some platitudes, but also told the applicant what the overall ranking was (very close, but no cigar) and how many applications there were (many more than the team expected).  Now at least one of the comments was surprising, but we know the application was taken seriously and given a thorough review.  And that’s something….

So… in conclusion….  just because your project wasn’t funded doesn’t (necessarily) mean that it wasn’t fundable.  And don’t take it personally.  It’s not personal.  Just the business of research funding.