Setting Grant Getting Targets in the Social Sciences

I’m writing this in the final week of my current role as Research Development Manager (Social Sciences) at the University of Nottingham before I move to my role as Research Development Manager (Research Charities) at the University of Nottingham. This may or may not change the focus of this blog, but I won’t abandon the social sciences entirely – not least because I’m stuck with the web address.

Image by Tookapic

I’ve been thinking about strategies and approaches to research funding, and the place and prioritisation of applying for research grants in academic structures. It’s good for institutions to be ambitious in terms of their grant getting activities. However, these ambitions need to be at least on a nodding acquaintance with:
(a) the actual amount of research funding historically available to any given discipline; and
(b) the chances of any given unit, school, or individual competing successfully for that funding, given the strength of the competition.

To use a football analogy, if I want my team to get promotion, I should moderate my expectations in the light of how many promotion places are available, and how strong the likely competition for those limited spots will be. In both cases, we want to set targets that are challenging, stretching, and ambitious, but which are also realistic and informed by the evidence.

How do we do that? Well, in a social science context, a good place to start is the ESRC success rates, and other disciplines could do worse than take a similar approach with their most relevant funding council. The ESRC produce quite a lot of data and analysis on funding and success rates, and Alex Hulkes of the ESRC Insights team writes semi-regular blog posts. Given the effort put into creating and curating this information, it seems only right that we use it to inform our strategies. This level of transparency is a huge (and very welcome) change from previous practice, when very limited information was rather hidden away. Obvious caveats – the ESRC is by no means the only funder in town for the social sciences, but they’ve got the deepest pockets and offer the best financial terms. Another (and probably better) way would be to compare HESA research income stats, but let’s stick to the ESRC for now.

The table below shows the running three-year total (2015/16 to 2017/18) and number of applications for each discipline for all calls, and the total for the period 2011/12 to 2017/18. You can access the data for yourself on the ESRC web page. This data is linked as ‘Application and success rate data (2011-12 to 2017-18)’ and was published in ODS format in May 2018. For ease of reading I’ve hidden the results from individual years.

Lots of caveats here. Unsuccessful outline proposals aren’t included (as no outline application leads directly to funding), but ‘office rejects’ (often for eligibility reasons) are. The ‘core discipline’ of each application is taken into account – secondary disciplines are not. The latest figures here are from 2017-2018 (financial year), so there’s a bit of a lag – in particular, the influence of the Global Challenges Research Fund (GCRF) or Industrial Strategy Challenge Fund (ISCF) will not be fully reflected in these figures. I think the ‘all data’ figures may include now-defunct schemes such as the ESRC Seminar Series, though I think Small Grants had largely gone by the start of the period covered by these figures.

Perhaps most importantly, because these are the results for all schemes, they include targeted calls, which will rarely be open to all disciplines equally. Fortunately, the ESRC also publishes similar figures for their open call (Standard) Research Grants scheme for the same time period. Note that (as far as I can tell) the data above includes the data below, just as the ‘all data’ column (which goes back to 2011/12) also includes the three-year total.

This table is important because the Research Grants scheme is bottom-up, open-call, and open to any application that’s at least 50% social science. Any social science researcher could apply to this scheme, whereas directed calls will inevitably appeal only to a subset. These figures are the success rates for those whose work does not fit squarely into a directed scheme, and could arguably be regarded as a more accurate measure of disciplinary success rates. It’s worth noting that a targeted call that’s very friendly to a particular discipline is likely to boost that discipline’s successes, but may decrease its overall success rate if it attracts a lot of bids. It’s also possible that major targeted calls that are friendly to a particular discipline may result in fewer bids to the open call.

To be fair, there are a few other regular ESRC schemes that are similarly open and should arguably be included if we wanted to look at the balance of disciplines and what a discipline target might look like. The New Investigator Scheme is open in terms of academic discipline, if not in time-since-PhD, and the Open Research Area call is open in terms of discipline if not in terms of collaborators. The Secondary Data Analysis Initiative is similarly open in terms of discipline, if not in terms of methods. Either way, we don’t have (or I can’t find) data which combines those schemes into a non-directed total.

Nevertheless, caveats and qualifications aside, I think these two tables give us a good sense of the size of the prize available for each discipline. There are approximately 29 funded projects per year (of which 5 open call) for Economics, and 11 per year (of which 2 open call) for Business and Management. Armed with that information and a knowledge of the relative strength of the discipline/school in our own institution, we ought to get a sense of what a realistic target might look like, and of how well we’re already doing. Given what we know about our expertise, eminence, and environment, and the figures for funded projects, what ought our share of those projects to be?
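To make the arithmetic concrete, here’s a minimal sketch in Python using the per-year figures quoted above. The 5% institutional share in the example is entirely hypothetical – you’d substitute your own honest judgement of your school’s national standing:

```python
# Per-discipline funded projects per year (total, of which open call),
# using the rough figures quoted in the text above.
DISCIPLINE_AWARDS_PER_YEAR = {
    "Economics": (29, 5),
    "Business and Management": (11, 2),
}

def realistic_target(discipline: str, assumed_share: float) -> tuple[float, float]:
    """Expected (total, open-call) awards per year for an institution
    judged to hold the assumed share of the national discipline."""
    total, open_call = DISCIPLINE_AWARDS_PER_YEAR[discipline]
    return total * assumed_share, open_call * assumed_share

# A hypothetical school judged to hold ~5% of national Economics capacity:
total, open_call = realistic_target("Economics", 0.05)
print(f"~{total:.1f} awards/year, of which ~{open_call:.2f} from open call")
```

This is obviously no substitute for judgement, but it turns “be ambitious” into a number you can argue about.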

We could ask a further question about how those successes are distributed between universities, and about any correlation between successes and (unofficial) subject league tables from the last REF, calculated on the basis of Grade Point Average or research power. However, even if that data were available, we’d be looking at small numbers. We do know that the ESRC have done a lot of work on looking at funding distribution and concentration, and their key findings are that:

  • ESRC peer review processes do not concentrate funding to a degree greater than that apparent in the proposals that request the funding.
  • ROs which apply infrequently appear to have lower success rates than do those which are more active applicants.

In other words, most universities have broadly comparable success rates: those that apply more often do a little better than average, and those that apply rarely do a little worse. This sounds intuitively right – those who apply more are likely more research-active, at least in the social sciences, and therefore more likely to generate stronger applications. But this holds at an overall level, not at discipline level.

I’d also note that we shouldn’t only measure success by the number of projects we lead. As grants get larger on average, there’s more research income available for co-investigators on bids led elsewhere. I think a strategy that focuses only on leading bids and being the lead institution neglects the opportunities offered by being involved in strong bids led by world-class researchers based elsewhere. I’m sure it’s not unusual for co-I research income to exceed PI income for academic units.

I’ve not made any comment about the different success rates for different disciplines. I’ve written about this already for many of the years covered by the full data (though Alex Hulkes has done this far more effectively over the last few years, having the benefit of actual data skills) and I don’t really want to cover old ground again. The same disparities continue much as before. Perhaps GCRF will provide a much-needed boost for Education research (or at least the international aspects) and ISCF for management and business research.

Maybe.

Summary-time, and the writing ain’t easy…

A version of this article first appeared in Funding Insight in March 2019 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com

“…. the words”.

All funding schemes have a summary section as an essential part of the application form. On the UK research councils’ Je-S form, the instruction is to “[d]escribe the proposed research in simple terms in a way that could be publicised to a general audience”.

But the summary is not just about publicising your research. The summary also:

  • primes your reader’s expectations and understanding of your project
  • helps the funder to identify suitable experts to review your application
  • gives the reviewer a clear, straightforward, and complete overview of your project
  • helps your nominated funding panel introducer to summarise your application for the rest of the decision-making panel
  • acts as a prompt for the other panel members to recall your application, which they may have only skim-read
  • can be used once your project is successful, for a variety of purposes including ethics review, participant recruitment, and impact work
  • will appeal to non-academic panel members, especially for Leverhulme Trust applications

How to make a mess of a summary

There are three main ways to make a mess of your summary.

1. Concentrating on the context – writing an introduction, not a summary.

I’ve written before about using too much background or introductory material (what I describe as ‘the Star Wars error’) but it’s a particular problem for a summary. The reader needs some context and background, but if it’s more than a sentence or two, it’s probably too much. If you don’t reach the “this research will” tipping point by a third of the way through (or worse, even later), there’s too much background.

2. Writing to avoid spoilers – writing a blurb, not a précis

I really admire the film editors who produce movie trailers: they capture the essence of a film while minimising spoilers. But while a trailer for The Sixth Sense, The Usual Suspects, or The Crying Game should omit certain key elements, a project summary needs to include all of them. An unexpected fifth-act twist is great for film fans, but not for reviewers. Springing a hitherto unheralded extra research question or work package on them is more likely to earn you bafflement and a poor review than an Oscar.

3. Ignoring Plain English

The National Institute for Health Research’s Research for Patient Benefit form asks for a “Plain English summary of research”. As a former regional panel member, I have read many applications and some great examples of Plain English summaries of very complex projects. I have read applications from teams that have not tried at all, and from those whose commitment to Plain English lasts the first three paragraphs, before they lapse back.

Writing in Plain English is hard. It involves finding a way to forget how you usually communicate to colleagues, and putting yourself in the situation of someone who knows a fraction of what you do. Without dumbing down or patronising. If it wasn’t hard to write in Plain English, we wouldn’t need the expert shorthand of specialist language, which is usually created to simplify complicated concepts to facilitate clear and concise communication among colleagues.

Very few people can write their own Plain English summary. It’s something you probably need help with, and your friendly neighbourhood research development officer might be well placed to do this – and might even draft it for you.

Incidentally, with NIHR schemes, beware not only of using specialist language, but also of using higher-reading-level vocabulary and expressions when simpler ones will do. There’s no need for a superabundance of polysyllabic terminology. The Leverhulme Trust offers some useful guidance on writing for the ‘lay reader’.
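If you want a rough, automated sanity check on reading level, the standard Flesch reading ease formula is easy to sketch in Python. The syllable counter below is deliberately naive (it just counts vowel runs), and the example sentences are invented, so treat the scores as indicative only:

```python
import re

def count_syllables(word: str) -> int:
    # Crude estimate: each run of vowels counts as one syllable, minimum one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch reading ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    # Higher scores are easier; roughly 60-70 corresponds to plain English.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

plain = "We will ask fifty patients what they think of the new clinic."
fancy = ("We will operationalise a superabundance of polysyllabic "
         "terminology to problematise the phenomenology of the clinic.")
print(flesch_reading_ease(plain) > flesch_reading_ease(fancy))  # the plain version scores higher
```

To be clear, NIHR doesn’t (as far as I know) score applications this way; it’s just a quick way to catch yourself lapsing back into jargon after paragraph three.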

When to write it

Should you write the summary when you start the application, or when you’ve finished it? Ideally both.

You should sketch out your summary when you start writing – if you can’t produce a bullet point summary of your project you’re probably not ready to write it up as an application. Save plenty of time at the very end to rework it in the light of your completed project. Above all, get as much outside input on your summary as you can. It is the most important part of the application, and well worth your time and trouble.

How can we help researchers with grant applications? The contribution of Research Development professionals

A version of this article first appeared in Funding Insight in February 2019 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com

Duck/Rabbit, Joseph Jastrow (1899).

You are the academic expert, in the process of applying for funding to make a major advance in your field. I am not. I am a Research Development Manager – perhaps I have a PhD or MPhil in a cognate or entirely different field, or nothing postgraduate at all. How can I possibly help you?

The answer lies in this difference of experience and perspective. Sure, we may look at the same things, but different levels of knowledge and understanding – as well as different background assumptions – mean we find very different meanings in them. We all look at the world through lenses tinted by our own experiences and expectations, and if we didn’t, we couldn’t make sense of it.

Interpreting funding calls

When academics look at funding calls, they notice and emphasise the elements of the call that suit their agenda, and often downplay or fail to notice other elements. Early in my career I was baffled as to why a very senior professor thought that a funding call was appropriate for a project. He’s smarter than me, more experienced… so obviously I assumed I’d got it wrong. I went back to the call expecting to find my mistake and confirm that his interpretation was correct. But no… my instinct was right.

Since then I’ve regularly had these conversations, pointing out that an idea would need crowbarring to death to fit a particular call, and even then would be uncompetitive. I’ve had to point out basic eligibility problems that have escaped the finely honed research skills of frighteningly bright people.

When research development professionals like me look at a funding call, we see it through tinted glasses too, but ours are tinted by the comparable calls we’ve seen before. We notice what has unusual or disproportionate emphasis, what is underplayed, and the significance of what’s missing entirely. We know what we’re looking for and whereabouts in the call we expect to see it. Our reading of calls is enhanced by a deep knowledge of the funder and its priorities, and of what might be the motivation or source of funding behind a particular call.

Of course, some academics have an excellent understanding of particular funders. Especially if they’ve received funding from them, or served on a panel or as a referee, or been invited to a scoping workshop to inform the design and remit of a funding call.

But if you’re not in that position, the chances are that your friendly neighbourhood research development professional can advise you on how to interpret any given funder or scheme, or put you in touch with someone who can.

We can help you identify the most appropriate scheme and call for what you want to do, and just as importantly, prevent you from wasting your time on bids that are a poor fit. Often the best thing I do on any given day is talking someone out of spending weeks writing an application that never had any realistic chance of success.

Reading draft applications

You should have internal expert peer review: encourage your academic colleagues to be brave enough to criticise your ideas and point out weaknesses in the current iteration. Don’t be Gollum.

Research Development professionals can’t usually offer expert review, but a form of lay review can be just as useful. We may not be experts in your area, but we’ve seen lots and lots of grant applications, good and bad. We have a sense of what works. We know when the balance is wrong. We know when we don’t understand sections that we think we should be able to understand, such as the lay summary. We notice when the significance or unique contribution is not clearly spelled out. We know when the methods are asserted, rather than defended. We know when sections are vague or undercooked, or fudged. Or inconsistent. When research questions appear, disappear, or mutate during the course of an application.

When I meet with academics and they explain their project, I often find there’s a mismatch between what I understood from reading a draft proposal and what they actually meant. It’s very common for only 75–90% of an idea to be on the page. The rest will be in the mind of the applicant, who will think the missing elements are present in the document because they can’t help but read the draft through the lens of their complete idea.

If your research development colleagues misunderstand or misread your application, it may be because they lack the background, but it’s more likely that what you’ve written isn’t clear enough. There’s a lot to be learned from creative misinterpretation.

None of this is a criticism of academics; it’s true for everyone. We all see our own writing through the prism of what we intend to write, not what we’ve actually written. It’s why this article would be even worse without Research Professional’s editorial team.

A Fantastic ‘Funding Friday’ in Finland

Back in February, I was delighted to be invited to give the keynote talk at the University of Turku’s inaugural Funding Friday event. Before the invitation I didn’t know very much about Finland (other than the joke that in Finland, an extrovert is someone who stares at your shoes) and still less about the Finnish research funding environment. But I presumed (largely, if not entirely, correctly) that there are a great many issues in common, and that advice about writing grant applications would be reasonably universal.

When someone takes Finnish stereotypes too seriously
Finnish Nightmares, by Karoliina Korhonen

When I reached the venue I was slightly surprised to see early arrivals each sitting at their own individual one-person desk. For a moment I did wonder if the Finnish stereotype was true to the extent that even sharing a desk was regarded as excessively extrovert. However, there was a more obvious explanation – it was exam season and the room doubled as an exam hall.


I was very impressed with the Funding Friday event. I was surprised to realise that I’d never been to a university-wide event on research funding – rather, we’ve tended to organise on a Faculty or School basis. The structure of the event was a brief introduction, then my presentation (Applying for Research Funding: Preparations, Proposals, and Post-Mortems), followed by a panel discussion with five UTU academics who have served on funding panels. Maria guided the panel through a series of questions about their experiences – how they ended up on a funding panel, what they’d learnt, what they looked for in a proposal, and what really annoyed them – and took questions from the floor. This was a really valuable exercise, and something that I’d like to repeat at Nottingham. I’m always trying to humanise reviewers and panel members in the minds of grant applicants and to help them understand the processes of review and evaluation, and having a range of panel members from across academic disciplines willing to share their experiences was fascinating. Of course, not everyone agreed on everything, but there seemed to be relative uniformity across panels and academic disciplines in terms of what panel members wanted to see, what made their jobs easier, and what irritated them and made things harder.

In the afternoon, we had a series of shorter sessions from UTU’s research funding specialists. Lauri spoke about applying Aristotle’s teachings on rhetoric (ethos, pathos, and logos) to structuring research grant proposals – a really interesting approach that I’d not come across before. What is a grant application if not an attempt to persuade, and what’s rhetoric if not the art of persuasion? Anu talked about funding opportunities relative to career stage, and Johanna discussed the impact agenda, and it was particularly fascinating to hear how that’s viewed in Finland, given its growth and prominence in the UK. From discussions in the room there are clearly worries about the balance between funding for ‘blue skies’ or basic research and for applied research with impact potential. Finally, we heard from Samira, a successful grant winner, about her experiences of applying for funding. It’s great to hear from successful applicants to show that success is possible in spite of dispiriting success rates.

To resubmit, or not to resubmit, that is the question

I’d arrived with the assumption that research – like almost everything else in the Nordic social democracies – would be significantly better funded pro rata than in the UK. (See, for example, the existence of an affordable, reliable railway system with play areas for small children on intercity trains). However, success rates are broadly comparable. One significant difference between the UK and Finland funding landscapes is the prevalence of the UK ‘demand management’ agenda. This limits – or even bans – the re-submission of unsuccessful applications, or imposes individual or institutional sanctions/limits on numbers and timing of future applications. The motivating force behind this is to reduce the burden of peer review and assessment, both on funders and on academic reviewers and panel members. Many UK funders, especially the ESRC, felt that a lot of the applications they were receiving were of poor quality and stood little chance of funding.

Finnish funders take an approach that’s more like the European Research Council or the Marie Curie Fellowships, where resubmissions are not only allowed but often seem to be a part of the process. Apply, be unsuccessful, get some feedback, respond to it, improve the application, and get funded the second or a subsequent time round. However, one problem – as our panel of panel members discussed – is that panel membership varies from year to year, and the panel who very nearly funded your proposal one year is not going to be the same panel who reviews the improved version the following year. For this reason, we probably shouldn’t always expect absolute consistency from panels between years, especially as the application will be up against a different set of rival bids. Also, the feedback may not contain the reasons why an application wasn’t funded, nor instructions on how to make it fundable next time. Sometimes panels will point out the flaws in applications, but can be reluctant to say what needs to be said – that no version of this application, however polished, will ever be competitive. I’ve written previously about deciding whether to resubmit or not, although it was written with the UK context in mind.

The room was very much split on whether or not those receiving the lowest marks should be prevented from applying again for a time, or even about a more modest limitation on applying again with a similar project. Of course, what the UK system does is move the burden of peer review back to universities, who are often poorly placed to review their own applications as almost all their expertise will be named on the bid. But I also worry about a completely open resubmission policy if it’s not accompanied by rigorous feedback, making it clear not only how an application can be improved, but on how competitive even the best possible iteration of that idea would be.

One of the themes to emerge from the day was about when to resubmit and when to move on. Funding (and paper, and job) rejection is a fact of academic life, calling for more than a measure of determination, resilience, bouncebackability – or, as they say in Finland, sisu. But carried too far, it ends up turning into stubbornness, especially if the same unsuccessful application is submitted over and over again with few or no changes. I think most people would accept that there is an element of luck in getting research funding – I’ve seen for myself how one negative comment can prompt others, leading to a criticism spiral which sinks an initially well-received application. Sometimes – by chance – there’s one person on the panel who is a particular subject expert and really likes/really hates a particular proposal and swings the discussion in a way that wouldn’t have happened without their presence. But the existence of an element of luck does not mean that research funding is a lottery in which all you need do is keep buying your ticket until your number comes up. Luck is involved, but only regarding which competitive applications are funded.

I’ve written a couple of posts before (part one, and part two) about what to do when your grant application is unsuccessful, and they might form the beginnings of a strategy for responding and deciding what to do next. At the very least, I think a thorough review of the application and any feedback offered is in order before making any decisions. My sense is that in any system where resubmissions are an accepted feature, and where it’s common for resubmissions to be successful, it would be a shame to give up after the first attempt. By the twelfth, though…

Watching your language

I was fascinated to learn that responsibility for training in research grant application writing is shared between UTU’s research development team and their English language unit. National funders tend to give the option of writing in English or in Finnish, though writing in English makes it easier to find international referees and reviewers for grant applications – and indeed one of my Business School colleagues is a regular reviewer.

One issue I’m going to keep thinking about is support for researchers writing grant applications in their second or additional language. English language support is an obvious service for a university to offer in a country whose language is not commonly spoken beyond the borders of its immediate neighbours, particularly in Finland, where the language isn’t part of the Indo-European family shared by most of the rest of Europe. But it’s not something we think about much in the UK.

I’d say about half of the researchers I support speak English as a second language, and some of the support I provide is around proofreading and sense-making – expressing ideas clearly and eliminating errors that obscure meaning or might irritate the reader. I tend to think that reviewers will forgive some minor mistakes or awkward phrasing in English provided that the application does not contain lazy or careless errors. If a reviewer is to take the time to read it, she wants to see that the applicant has taken the time to write it.

I think most universities run courses on academic English, though I suspect most of them are designed for students. Could we do more for academic staff who want to improve their academic English – not just for grant writing, but for teaching and for the purposes of writing journal papers? And could we (and should we) normalise that support as part of professional development? Or do we just assume that immersion in an English-speaking country will be sufficient?

However… I do think that academics writing in their second language have one potential advantage. I’ve written elsewhere about the ‘Superabundance of Polysyllabic Terminology’ (aka too many long words) error in grant writing, to which native English speakers are more prone. Second-language academics tend to write more clearly, more simply, and more directly. Over-complicated language can be confusing and/or annoying for a native English speaker reviewing your work, and there’s a decent chance that your reviewers and panel members will themselves speak English as a second language – in which case they’ll be even more irritated. One piece of advice I once heard for writing EU grant applications was to write as if your application were going to be reviewed by someone reading it in their fourth language while waiting to catch a flight. Because it might well be.

It was a real honour to visit Turku, and I’d have loved to have stayed longer. While there’s a noticeable quietness and reserve about Finnish people – even compared to the UK – everyone I met couldn’t have been more welcoming and friendly. So, to Soile, Lauri, Anu, Johanna, Jeremy, Samira, the Turku hotel receptionist who told me how to pronounce sisu, everyone else I met, and especially to Maria for organising… kiitos, everyone.

Mistakes in grant writing – cut and paste text

A version of this article first appeared in Funding Insight in November 2018 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com

Given the ever-expanding requirements of most research funding application forms, it’s inevitable that applicants are tempted to pay less attention to some sections and end up writing text so generic, so bland, that it could be cut and pasted – with minimal editing of names and topics – into almost any other proposal.

Resist that temptation. Using text that looks like it could be cut and pasted between proposals suggests that you haven’t thought through the specifics of your project or fellowship, and it will make it seem less plausible as a result. 

Content free

I often see responses that are so content free they make my heart sink. For example:

1)  “We will present the findings at major international conferences and publish in world class journals”

2)  “The findings will be of interest to researchers in A, B, and C.”

3)  “This is a methodologically innovative, timely, and original project which represents a step change in our understanding”

4)  “We will set up a project Twitter account and a blog, and with the support of our outstanding press office, write about our research for a general audience.”

5)  “Funding will enable me to lead my own project for the first time, and support me in making the transition to independent researcher”.

These claims might well be true, and can read well in isolation. But they’re only superficially plausible: while they contain the buzzwords that applicants think funders are after, they’re entirely content, evidence, and argument free.

Self harm

Why should you care? Because your proposal doesn’t just have to be good enough to meet a certain standard: it has to be better than its rivals. If there are sections of your application that could be transferred into any rival application, this might be a sign that the section is not as strong or distinctive as it could be, and is not giving you any competitive edge.

Cut and paste sections may be actively harming your chances. They may read well in isolation but when compared directly to more thoughtful and more detailed sections in rival applications, they can look weak and lazy, especially if they don’t take full advantage of the word count.

Cut-and-pasteable text tends to crop up in the sections of the application form that are trickier to write and that get less attention: dissemination; impact pathway/plan; academic impact; personal development plan; data management plan; choice of host institution. Sometimes these generic statements emerge because applicants don’t know what to write, and sometimes because it’s all they can be bothered to write for a section they wrongly regard as of lesser importance.

Give evidence

Give these sections the time, attention, and thought they deserve. Add details. Add specifics. Add argument. Add evidence. Find things to say that only apply to your application. If you don’t know how to answer a question strongly, get advice from your research development colleagues.

The more editing it would take to put it into someone else’s bid, the better. Here are some thoughts on improving the earlier examples:

1)  “We will present the findings at major international conferences and publish in world class journals”. I find it hard to understand vagueness about plans for academic impact. Even allowing for the fact that the findings of the research will affect plans, it’s surely not too much to expect some target journals and conferences to be named. If applicants can’t demonstrate knowledge of realistic targets, it undermines their credibility.

2)  “The findings will be of interest to researchers in A, B, and C.” I’d ban the phrase “of interest to” when explaining potential academic impact. It tells the reader nothing about the likely academic impact – who will cite your work, and what difference do you anticipate it will make to the field?

3)  “This is a methodologically innovative, timely, and original project which represents a step change in our understanding”. Who will use your methods? Who will use your frameworks? If all research is standing on the shoulders of giants, how much further can future researchers see perched atop your work? How exactly does your project go beyond the state of the art, and what might be the new state of the art after your project?

4)  “We will set up a project Twitter account and a blog, and with the support of our outstanding press office, write about our research for a general audience.” If you’re talking about engaging with social media, talk about how you are going to find readers and/or followers. What’s your plan for your presence in terms of the existing ecosystem of social media accounts that are active in this area? Who are the current key influencers?

5)  “Funding will enable me to lead my own project for the first time, and support me in making the transition to independent researcher”. How does funding take you to what’s next? What’s the path from the conclusions of this project to your future research agenda?

Looking for cut and paste text – and improving it where you find it – is an excellent review technique to polish your draft application, and particularly to improve those harder-to-write sections. Hammering out the detail is more difficult, but it could give you an advantage in the race for funding.
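For what it’s worth, this review technique can even be partially automated. The Python sketch below flags sections of a draft that closely resemble known boilerplate; the stock phrases and the 0.75 threshold are invented for illustration, so treat it as a prompt for human review, not a verdict:

```python
from difflib import SequenceMatcher

# Hypothetical examples of generic, cut-and-pasteable text (lowercased).
STOCK_PHRASES = [
    "we will present the findings at major international conferences",
    "the findings will be of interest to researchers",
    "we will set up a project twitter account and a blog",
]

def boilerplate_score(section: str) -> float:
    """Best similarity (0-1) between a section and any stock phrase."""
    section = section.lower()
    return max(SequenceMatcher(None, section, phrase).ratio()
               for phrase in STOCK_PHRASES)

def flag_generic(sections: dict[str, str], threshold: float = 0.75) -> list[str]:
    """Names of sections whose text closely matches known boilerplate."""
    return [name for name, text in sections.items()
            if boilerplate_score(text) >= threshold]

draft = {
    "dissemination": "We will present the findings at major international conferences.",
    "methods": "Interviews will be transcribed and coded using framework analysis.",
}
print(flag_generic(draft))  # flags 'dissemination'
```

A flagged section isn’t necessarily bad, but it’s a candidate for the details, specifics, and evidence discussed above.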