All funding schemes have a summary section as an essential part of the application form. On the UK research councils’ JeS form, the instruction is to “[d]escribe the proposed research in simple terms in a way that could be publicised to a general audience”.
But the summary is not just about publicising your research. The summary also:
primes your reader’s expectations and understanding of your project
helps the funder to identify suitable experts to review your application
gives the reviewer a clear, straightforward, and complete overview of your project
helps your nominated funding panel introducer to summarise your application for the rest of the decision-making panel
acts as a prompt for the other panel members to recall your application, which they may have only skim-read
can be used once your project is successful, for a variety of purposes including ethics review, participant recruitment, and impact work.
There are three main ways to make a mess of your summary.
1. Concentrating on the context – writing an introduction, not a summary.
I’ve written before about using too much background or introductory material (what I describe as ‘the Star Wars error’) but it’s a particular problem for a summary. The reader needs some context and background, but if it’s more than a sentence or two, it’s probably too much. If the “this research will” tipping point doesn’t arrive until a third of the way through (or, worse, even later), there’s too much background.
2. Writing to avoid spoilers – writing a blurb, not a precis
I really admire film editors who produce movie trailers: they capture the essence of the film while minimising spoilers. However, while a film trailer for The Sixth Sense, The Usual Suspects, or The Crying Game should omit certain key elements, a project summary needs to include all of them. An unexpected fifth-act twist is great for film fans, but not for reviewers. Their reaction to your dramatic twist – adding a hitherto unheralded extra research question or work package – is more likely to be bafflement and a poor review than an Oscar.
3. Ignoring Plain English
The National Institute for Health Research’s Research for Patient Benefit form asks for a “Plain English summary of research”. As a former regional panel member, I have read many applications and some great examples of Plain English summaries of very complex projects. I have also read applications from teams that have not tried at all, and from those whose commitment to Plain English lasts only for the first three paragraphs before they lapse back into jargon.
Writing in Plain English is hard. It involves finding a way to forget how you usually communicate to colleagues, and putting yourself in the situation of someone who knows a fraction of what you do. Without dumbing down or patronising. If it wasn’t hard to write in Plain English, we wouldn’t need the expert shorthand of specialist language, which is usually created to simplify complicated concepts to facilitate clear and concise communication among colleagues.
Incidentally, with NIHR schemes, beware not only of using specialist language, but also of using higher-reading-level vocabulary and expressions when simpler ones will do. There’s no need for a superabundance of polysyllabic terminology. The Leverhulme Trust offers some useful guidance on writing for the ‘lay reader’.
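One rough way to sanity-check the reading level of a draft summary is a readability formula such as Flesch Reading Ease (higher scores mean easier text; plain prose typically lands around 60–70). The sketch below is a quick heuristic, not a validated tool – the syllable counter in particular is approximate – but it will reliably flag a summary drowning in polysyllabic terminology.

```python
import re


def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels,
    discounting a common silent trailing 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)


def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences)
                                     - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Run on a plain sentence versus a jargon-heavy one, the plain version scores dramatically higher – which is the only comparison such a crude metric is good for.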
When to write it
Should you write the summary when you start the application, or when you’ve finished it? Ideally both.
You should sketch out your summary when you start writing – if you can’t produce a bullet point summary of your project you’re probably not ready to write it up as an application. Save plenty of time at the very end to rework it in the light of your completed project. Above all, get as much outside input on your summary as you can. It is the most important part of the application, and well worth your time and trouble.
You are the academic expert, in the process of applying for funding to make a major advance in your field. I am not. I am a Research Development Manager – perhaps I have a PhD or MPhil in a cognate or entirely different field, or nothing postgraduate at all. How can I possibly help you?
The answer lies in this difference of experience and perspective. Sure, we may look at the same things, but different levels of knowledge and understanding – as well as different background assumptions – mean we find very different meanings in them. We all look at the world through lenses tinted by our own experiences and expectations, and if we didn’t, we couldn’t make sense of it.
Interpreting funding calls
When academics look at funding calls, they notice and emphasise the elements of the call that suit their agenda and often downplay or fail to notice other elements. Early in my career I was baffled as to why a very senior professor thought that a funding call was appropriate for a project. He’s smarter than me, more experienced…so obviously I assumed I’d got it wrong. I went back to the call expecting to find my mistake and confirm that his interpretation was correct. But no…my instinct was right.
Since then I’ve regularly had these conversations and pointed out that an idea would need crowbarring to death to fit a particular call, and even then would be uncompetitive. I’ve had to point out basic eligibility problems that have escaped the finely-honed research skills of frighteningly bright people.
When research development professionals like me look at a funding call, we see it through tinted glasses too, but these are tinted by comparable calls that we’ve seen before. We see what has unusual or disproportionate emphasis or lack of emphasis, or even the significance of what’s missing. We know what we’re looking for and whereabouts in the call we expect to see it. Our reading of calls is enhanced by a deep knowledge of the funder and its priorities, and what might be the motivation or source of funding behind a particular call.
Of course, some academics have an excellent understanding of particular funders. Especially if they’ve received funding from them, or served on a panel or as a referee, or been invited to a scoping workshop to inform the design and remit of a funding call.
But if you’re not in that position, the chances are that your friendly neighbourhood research development professional can advise you on how to interpret any given funder or scheme, or put you in touch with someone who can.
We can help you identify the most appropriate scheme and call for what you want to do, and just as importantly, prevent you from wasting your time on bids that are a poor fit. Often the best thing I do on any given day is talking someone out of spending weeks writing an application that never had any realistic chance of success.
Reading draft applications
You must have internal expert peer review, and encourage your academic colleagues to be brave enough to criticise your ideas and point out weaknesses in each iteration. Don’t be Gollum.
Research Development professionals can’t usually offer expert review, but a form of lay review can be just as useful. We may not be experts in your area, but we’ve seen lots and lots of grant applications, good and bad. We have a sense of what works. We know when the balance is wrong. We know when we don’t understand sections that we think we should be able to understand, such as the lay summary. We notice when the significance or unique contribution is not clearly spelled out. We know when the methods are asserted, rather than defended. We know when sections are vague or undercooked, or fudged. Or inconsistent. When research questions appear, disappear, or mutate during the course of an application.
When I meet with academics and they explain their project, I often find there’s a mismatch between what I understood from reading a draft proposal and what they actually meant. It’s very common for only 75%-90% of an idea to be on the page. The rest will be in the mind of the applicant, who will think the missing elements are present in the document because they can’t help but read the draft through the lens of their complete idea.
If your research development colleagues misunderstand or misread your application, it may be because they lack the background, but it’s more likely that what you’ve written isn’t clear enough. There’s a lot to be learned from creative misinterpretation.
None of this is a criticism of academics; it’s true for everyone. We all see our own writing through the prism of what we intend to write, not what we’ve actually written. It’s why this article would be even worse without Research Professional’s editorial team.
Back in February, I was delighted to be invited to give the keynote talk at the University of Turku’s inaugural Funding Friday event. Before the invitation I didn’t know very much about Finland (other than the joke that in Finland, an extrovert is someone who stares at your shoes) and still less about the Finnish research funding environment. But I presumed (largely, if not entirely correctly) that there are a great many issues in common, and that advice about writing grant applications would be reasonably universal.
When I reached the venue I was slightly surprised to see early arrivals each sitting at their own individual one-person desk. For a moment I did wonder if the Finnish stereotype was true to the extent that even sharing a desk was regarded as excessively extrovert. However, there was a more obvious explanation – it was exam season and the room doubled as an exam hall.
I was very impressed with the Funding Friday event. I was surprised to realise that I’d never been to a university-wide event on research funding – rather, we’ve tended to organise on a Faculty or School basis. The structure of the event was a brief introduction, my presentation (Applying for Research Funding: Preparations, Proposals, and Post-Mortems) followed by a panel discussion with five UTU academics who served on funding panels. Maria guided the panel through a series of questions about their experiences – how they ended up on a funding panel, what they’d learnt, what they looked for in a proposal, and what really annoyed them – and took questions from the floor. This was a really valuable exercise, and something that I’d like to repeat at Nottingham. I’m always trying to humanise reviewers and panel members in the minds of grant applicants and to help them understand the processes of review and evaluation, and having a range of panel members from across academic disciplines willing to share their experiences was fascinating. Of course, not everyone agreed on everything, but there seemed to be relative uniformity across panels and academic disciplines in terms of what panel members wanted to see, what made their jobs easier, and what irritated them and made things harder.
In the afternoon, we had a series of shorter sessions from UTU’s research funding specialists. Lauri spoke about applying Aristotle’s teachings on rhetoric (ethos, pathos, and logos) to structuring research grant proposals – a really interesting approach that I’d not come across before. What is a grant application if not an attempt to persuade, and what’s rhetoric if not the art of persuasion? Anu talked about funding opportunities relative to career stage, and Johanna discussed the impact agenda, and it was particularly fascinating to hear how that’s viewed in Finland, given its growth and prominence in the UK. From discussions in the room there are clearly worries about the balance between funding for ‘blue skies’ or basic research and for applied research with impact potential. Finally, we heard from Samira, a successful grant winner, about her experiences of applying for funding. It’s great to hear from successful applicants to show that success is possible in spite of dispiriting success rates.
To resubmit, or not to resubmit, that is the question
I’d arrived with the assumption that research – like almost everything else in the Nordic social democracies – would be significantly better funded pro rata than in the UK. (See, for example, the existence of an affordable, reliable railway system with play areas for small children on intercity trains). However, success rates are broadly comparable. One significant difference between the UK and Finland funding landscapes is the prevalence of the UK ‘demand management’ agenda. This limits – or even bans – the re-submission of unsuccessful applications, or imposes individual or institutional sanctions/limits on numbers and timing of future applications. The motivating force behind this is to reduce the burden of peer review and assessment, both on funders and on academic reviewers and panel members. Many UK funders, especially the ESRC, felt that a lot of the applications they were receiving were of poor quality and stood little chance of funding.
Finnish funders take an approach that’s more like the European Research Council or the Marie Curie Fellowships, where resubmissions are not only allowed but often seem to be a part of the process. Apply, be unsuccessful, get some feedback, respond to it, improve the application, and get funded the second or a subsequent time round. However, one problem – as our panel of panel members discussed – is that panel membership varies from year to year, and the panel who very nearly funded your proposal one year is not going to be the same panel who reviews the improved version the following year. For this reason, we probably shouldn’t always expect absolute consistency from panels between years, especially as the application will be up against a different set of rival bids. Also, the feedback may contain neither the reasons why an application wasn’t funded nor instructions on how to make it fundable next time. Sometimes panels will point out the flaws in applications, but can be reluctant to say what needs to be said – that no version of this application, however polished, will ever be competitive. I’ve written previously about deciding whether to resubmit or not, although it was written with the UK context in mind.
The room was very much split on whether or not those receiving the lowest marks should be prevented from applying again for a time, or even about a more modest limitation on applying again with a similar project. Of course, what the UK system does is move the burden of peer review back to universities, who are often poorly placed to review their own applications as almost all their expertise will be named on the bid. But I also worry about a completely open resubmission policy if it’s not accompanied by rigorous feedback, making it clear not only how an application can be improved, but on how competitive even the best possible iteration of that idea would be.
One of the themes to emerge from the day was about when to resubmit and when to move on. Funding (and paper, and job) rejection is a fact of academic life, calling for more than a measure of determination, resilience, bouncebackability, or (as they say in Finland) sisu. But carried too far, it ends up turning into stubbornness, especially if the same unsuccessful application is submitted over and over again with little or no change. I think most people would accept that there is an element of luck in getting research funding – I’ve seen for myself how one negative comment can prompt others, leading to a criticism spiral which sinks an initially well-received application. Sometimes – by chance – there’s one person on the panel who is a particular subject expert and really likes/really hates a particular proposal and swings the discussion in a way that wouldn’t have happened without their presence. But the existence of an element of luck does not mean that research funding is a lottery in which all you need do is keep buying your ticket until your number comes up. Luck is involved, but only regarding which competitive applications are funded.
I’ve written a couple of posts before (part one, and part two) about what to do when your grant application is unsuccessful, and they might form the beginnings of a strategy to respond and to decide what to do next. At the very least, I think a thorough review of the application and any feedback offered is in order before making any decisions. My sense is that in any system where resubmissions are an accepted feature, and where it’s common for resubmissions to be successful, it would be a shame to give up after the first attempt. By the twelfth, though…
Watching your language
I was fascinated to learn that responsibility for training in research grant application writing is shared between UTU’s research development team and their English language unit. National funders tend to give the option of writing in English or in Finnish, though writing in English makes it easier to find international referees and reviewers for grant applications – and indeed one of my Business School colleagues is a regular reviewer.
One issue I’m going to keep thinking about is support for researchers writing grant applications in their second or additional language. English language support is an obvious service to offer for a university in a country whose own language is not commonly spoken beyond the borders of immediate neighbours, and particularly in Finland, where the language isn’t part of the same Indo-European language group as most of the rest of Europe. But it’s not something we think about much in the UK.
I’d say about half of the researchers I support speak English as a second language, and some of the support I provide can be around proofreading and sense-making – expressing ideas clearly and eliminating errors that obscure meaning or which might irritate the reader. I tend to think that reviewers will understand some minor mistakes or awkward phrasing in English provided that the application does not contain lazy or careless errors. If reviewers are going to take the time to read it, they want to see that the applicant has taken the time to write it.
I think most universities run courses on academic English, though I suspect most of them are designed for students. Could we do more for academic staff who want to improve their academic English – not just for grant writing, but for teaching and for the purposes of writing journal papers? And could we (and should we) normalise that support as part of professional development? Or do we just assume that immersion in an English-speaking country will be sufficient?
However… I do think that academics writing in their second language have one potential advantage. I’ve written elsewhere about the ‘Superabundance of Polysyllabic Terminology’ (aka too many long words) error in grant writing, to which native English speakers are more prone. Second-language academics tend to write more clearly, more simply, and more directly. Over-complicated language can be confusing and/or annoying for a native English speaker reviewing your work, but there’s a decent chance that your reviewers and panel members will themselves speak English as a second language, and they will be even more irritated. One piece of advice I once heard for writing EU grant applications was to write as if your application were going to be reviewed by someone reading it in their fourth language while waiting to catch their flight. Because it might well be.
It was a real honour to visit Turku, and I’d have loved to have stayed longer. While there’s a noticeable quietness and a reserve about Finnish people – even compared to the UK – everyone I met couldn’t have been more welcoming and friendly. So, to Soile, Lauri, Anu, Johanna, Jeremy, Samira, the Turku hotel receptionist who told me how to pronounce sisu, everyone else I met, and especially to Maria for organising… kiitos, everyone.
A version of this article first appeared in Funding Insight in November 2018 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
Given the ever-expanding requirements of most research funding application forms, it’s inevitable that applicants are tempted to pay less attention to some sections and end up writing text so generic, so bland, that it could be cut and pasted – with minimal editing of names and topics – into almost any other proposal.
Resist that temptation. Using text that looks like it could be cut and pasted between proposals suggests that you haven’t thought through the specifics of your project or fellowship, and it will make it seem less plausible as a result.
I often see responses that are so content free they make my heart sink. For example:
1) “We will present the findings at major international conferences and publish in world class journals”
2) “The findings will be of interest to researchers in A, B, and C.”
3) “This is a methodologically innovative, timely, and original project which represents a step change in our understanding”
4) “We will set up a project Twitter account and a blog, and with the support of our outstanding press office, write about our research for a general audience.”
5) “Funding will enable me to lead my own project for the first time, and support me in making the transition to independent researcher”.
These claims might well be true and can read well in isolation. But they’re only superficially plausible, and while they contain buzzwords that applicants think funders are after, they’re entirely content-, evidence-, and argument-free.
Why should you care? Because your proposal doesn’t just have to be good enough to meet a certain standard, it has to be better than its rivals. If there are sections of your application that could be transferred into any rival application, this might be a sign that that section is not as strong or distinctive as it could be and is not giving you any competitive edge.
Cut and paste sections may be actively harming your chances. They may read well in isolation but when compared directly to more thoughtful and more detailed sections in rival applications, they can look weak and lazy, especially if they don’t take full advantage of the word count.
Cut-and-pasteable text tends to occur in the sections of the application form that are trickier to write and get less attention: dissemination; impact pathway/plan; academic impact; personal development plan; data management plan; choice of host institution. Sometimes these generic statements emerge because the applicants don’t know what to write, and sometimes because it’s all they can be bothered to write for a section they wrongly regard as of lesser importance.
Give these sections the time, attention and thought they deserve. Add details. Add specifics. Add argument. Add evidence. Find things to say that only apply to your application. If you don’t know how to answer a question strongly, get advice from your research development colleagues.
The more editing it would take to put it into someone else’s bid, the better. Here are some thoughts on improving the earlier examples:
1) “We will present the findings at major international conferences and publish in world class journals”. I find it hard to understand vagueness about plans for academic impact. Even allowing for the fact that the findings of the research will affect plans, it’s surely not too much to expect some target journals and conferences to be named. If applicants can’t demonstrate knowledge of realistic targets, it undermines their credibility.
2) “The findings will be of interest to researchers in A, B, and C.” I’d ban the phrase “of interest to” when explaining potential academic impact. It tells the reader nothing about the likely academic impact – who will cite your work, and what difference do you anticipate it will make to the field?
3) “This is a methodologically innovative, timely, and original project which represents a step change in our understanding” Who will use your methods? Who will use your frameworks? If all research is standing on the shoulders of giants, how much further can future researchers see perched atop your work? How exactly does your project go beyond the state of the art, and what might be the new state of the art after your project?
4) “We will set up a project Twitter account and a blog, and with the support of our outstanding press office, write about our research for a general audience.” If you’re talking about engaging with social media, talk about how you are going to find readers and/or followers. What’s your plan for your presence in terms of the existing ecosystem of social media accounts that are active in this area? Who are the current key influencers?
5) “Funding will enable me to lead my own project for the first time, and support me in making the transition to independent researcher”. How does funding take you to what’s next? What’s the path from the conclusions of this project to your future research agenda?
Looking for cut and paste text – and improving it where you find it – is an excellent review technique to polish your draft application, and particularly to improve those harder-to-write sections. Hammering out the detail is more difficult, but it could give you an advantage in the race for funding.
A version of this article first appeared in Funding Insight in June 2018 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
Post-doctoral or early career research fellowships in the social sciences have low success rates and are scarcely less competitive than academic posts. But if you have a strong proposal, at least some publications, realistic expectations and a plan B, applying for one of these schemes can be an opportunity to firm up your research ideas and make connections.
If you’re thinking of applying for a postdoc or early career social science fellowship, you should ask yourself the following:
Are you likely to be one of the top (say) six or seven applicants in your academic discipline?
Does your current track record demonstrate this, or at least trajectory towards it?
Is applying for a Fellowship the best use of your time?
There’s a lot of naivety about the number of social science fellowships there are and the competition for them. Perhaps some PhD supervisors paint too rosy a picture, perhaps it is applicant wishful thinking, or perhaps the phrasing of some calls understates the reality of what’s required of a competitive proposal. But the reality is that Postdoc Fellowships in the social sciences are barely less competitive than lectureships. Competitive pressures mean that standards are driven sky high and demand exceeds supply by a huge margin.
The British Academy has a success rate of around 5%, with 45 Fellowships across arts, humanities, and social sciences. The Leverhulme Trust success rate is 14%, with around 100 Fellowships across all the disciplines they support (i.e. nearly all). The ESRC scheme is new – no success rates yet – but it will support 30-35 social science Fellowships. Marie Curie Fellowships are still available, but require relocating to another European country. There are the new UKRI Future Leader Fellowships which will fund 100 per call, but that’s across all subjects, and these are very much ‘future leader’ not ‘postdoc’ calls. Although some institutions have responded to a lack of external funding by establishing internal schemes – such as the Nottingham Research Fellowships – standards and expectations are also very, very high.
That’s not to say that you shouldn’t apply – Fellowships do exist, applicants do get them – but you need to take a realistic view of your chances of success and decide about the best use of your time. If you’re writing a Fellowship application, you’re not writing up a paper, or writing a job application.
Top Tips for applications
Credible applicants need their own (not their supervisor’s) original, detailed and significant Fellowship project. Doing ‘more of the same’ is unlikely to be competitive – it’s fine to want to mine your PhD for publications and for there to be a connection to the new programme of work, but a Fellowship is really about the next stage.
If you don’t have any publications, you have little to make you stand out, and therefore little to no chance. Like all grant applications, this is a contest, not a test. It’s not about being sufficiently promising to be worth funding (most applicants are), it’s about presenting a stronger and more compelling case than your rivals.
If you have co-authored publications, make your contribution clear. If you have co-written a paper with your supervisor, make sure reviewers can tell whether (a) it is your work, with supervisory input; or (b) it is your supervisor’s work, for which you provided research assistance.
Give serious consideration to moving institution unless (a) you’re already at the best place for what you want to do; or (b) your personal circumstances prevent this. Moving institution doubles your network, may give you a better research environment, and gives you a fresh start where you’re seen as an early career researcher, not as the PhD student you used to be. If you’re already at the best place for your work or you can’t move, make the case. Funders are becoming a bit less dogmatic on this point and more aware that not everyone can relocate, but don’t assume that staying put is the best idea.
Don’t neglect training and development plans. Who would you like to meet or work with, what would you like training in, what extra research and impact skills would you like to have? Fellowships are about producing the researcher as well as the research.
Success rates are very low. Don’t get your hopes up, and don’t put all your eggs in one basket and neglect other opportunities.
Even if you’re ultimately unsuccessful, you can also use the application as a vehicle to support the development of your post-PhD research agenda. By expressing a credible interest in applying for a Fellowship at an institution that’s serious about research, you will get feedback on your research plans from senior academics and potential mentors and from research development staff. It also forces you to put your ideas down on paper in a coherent way. Whether you apply for a Fellowship or not, you’ll need this for the academic job market.
A version of this article first appeared in Funding Insight in July 2018 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
Many major research funding calls for substantial UKRI investments now include one or more workshops or events. These events typically aim:
(a) to publicise the call and answer questions from potential bidders; and
(b) to facilitate networking and to develop consortia, often including non-academic partners.
There’s an application process to gauge demand and to allocate or ration places (if required) between different disciplines and institutions. These events are distinct from ‘sandpit’ events – which have a more rigorous and competitive application process and where direct research funding may result. They’re also distinct from scoping meetings, which define and shape future calls. Some of the advice below might be applicable for those events, but my experience is limited to the call information day.
I’ve attended one such meeting and I found it very useful in terms of understanding the call and the likely competition for funding. While I’ve attended networking and idea generation events before, this was my first UKRI event, and I’ve come up with a few hints and tips that might help other first time attendees.
Don’t send Research Development staff. People like me are more experienced at identifying similarities/differences in emphasis in calls, but we can only go so far in terms of networking and representing academics. However well briefed, there will come a point at which we can’t answer further questions because we’re not academics. Send an academic if you possibly can.
Hone your pitch. A piece of me dies inside every time I use a phrase like “elevator pitch”, but you’re going to be introducing yourself, your team, and your ideas many, many times during the day. Prepare a short version and a long version of what you want to say. It doesn’t have to be crafted word-for-word, but prepare the structure of a clear, concise introduction that you can comfortably reel off.
Be clear about what you want and what you’re looking for. If you’re planning on leading a bid, say so. If you’re looking to add your expertise on X to another bid TBC, say so. If you’re not sure yet, say so. I’m not sure what possible advantage could be gained from being coy. You could finesse your starting position by talking of “looking to” or “planning to” lead a bid if you want, but it’s much better to be clear.
Don’t just talk to your friends. Chances are that you’ll have friends/former colleagues at the event who you may not see as often as you’d like, but resist spending too much time in your comfort zone. It’ll limit your opportunities and will make you appear cliquey. Consider arranging to meet before or after the event, or at another time to catch up properly.
Be realistic about what’s achievable. I’m persuadable that these events can and do shape the composition/final teams of some bids, but I wonder whether any collaboration starting from ground level at one of these events has a realistic chance of success.
Do your homework. Most call meetings invite delegates to submit information in advance, usually a brief biog and a statement of research interests. It’s worth taking time to do this well, and having a read of the information submitted by others. Follow up with web searches about potential partners to find out more about their work, follow them on twitter, and find out what they look like if you don’t already know. It’s not stalking if it’s for research collaboration.
Brush up your networking skills. If networking is something you struggle with, have a quick read of some basic networking guides. Best tip I was ever given: regard networking as a process to identify “how can I help these people?” rather than “how can I use these people to my advantage?” and it’s much easier. Also, I find “I think I follow you on Twitter” an effective icebreaker.
Don’t expect any new call info. There will be a presentation and Q&A, but don’t expect major new insights. As not everyone can make these events, funders avoid giving any unfair advantages. Differences in nuance and emphasis can emerge in presentations and through questions, but don’t expect radical additional insights or secret insider knowledge.
If your target call has an event along these lines, you should make every effort to attend. Send your prospective PI if you can, another academic if not, and your research development staff only if you must. Do a bit of homework… be clear about what you want to achieve, prepare your pitch, and identify the people you want to talk to, and you’ll have a much better chance of achieving your goals.
A version of this article first appeared in Funding Insight on 9th March 2018 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
My previous post posed a question about whether applying for research funding was worth it or not, and concluded with a list of questions to consider to work out the answer. This follow-up is a list of costs and benefits associated with applying for external research funding, whether successful or unsuccessful. Weirdly, my list appears to contain more costs than benefits for success and more benefits than costs for failure, but perhaps that’s just me being contrary…
If you’re successful:
You get to do the research you really want to do
In career terms, whether for moving institution or internal promotion, there’s a big tick in the box marked ‘external research funding’.
Your status in your institution and within your discipline is likely to rise. Bringing in funding via a competitive external process gives you greater external validation, and that changes perceptions – perhaps it marks you out as a leader in your field, perhaps it marks a shift from career young researcher to fulfilling your evident promise.
Success tends to beget success in terms of research funding. Deliver this project and any future application will look more credible for it.
You’ve got to deliver on what you promised. That means all the areas of fudge or doubt or uncertainty about who-does-what need to be sorted out in practice. If you’ve under-costed any element of the project – your time, consumables, travel and subsistence – you’ll have to deal with it, and it might not be much fun.
Congratulations, you’ve just signed yourself up for a shedload of admin. Even with the best and most supportive post-award team, you’ll have project management to do. Financial monitoring; recruitment, selection, and line management of one or more research associates. And it doesn’t finish when the research finishes – thanks to the impact agenda, you’ll probably be reporting on your project via Researchfish for years to come.
Every time any comparable call comes round in the future, your colleagues will ask you to give a presentation about your application/sit on the internal sifting panel/undertake peer review. Once a funding agency has given you money, you can bet they’ll be asking you to peer review other applications. Listed as a cost for workload purposes, but there are also a lot of benefits to getting involved in peer reviewing applications because it’ll improve your own too. Also, the chances are that you benefited from such support/advice from senior colleagues, so pay it forward. But be ready to pay.
You’ve just raised the bar for yourself. Don’t be surprised if certain people in research management start talking about your next project before this one is done as if it’s a given or an inevitability.
Unless you’re careful, you may not see as much recognition in your workload as you might have expected. Of course, your institution is obliged to make the time promised in the grant application available to you, but unless you’ve secured agreement in advance, you may find that much of this is taken out of your existing research allocation rather than out of teaching and admin. Especially as these days we no longer think of teaching as a chore to buy ourselves out from. Think very carefully about which elements of your workload you would like to lose if your application is successful.
The potential envy and enmity of colleagues who are picking up bits of what was your work.
If you’re unsuccessful…
The chances are that there’s plenty to be salvaged even from an unsuccessful application. Once you’ve gone through the appropriate stages of grief, there’s a good chance that there’s at least one paper (even if ‘only’ a literature review) in the work that you’ve done. If you and your academic colleagues and your stakeholders are still keen, the chances are that there’s something you can do together, even if it’s not what you ideally wanted to do.
Writing an application will force you to develop your research ideas. This is particularly the case for career young researchers, where the pursuit of one of those long-shot Fellowships can be worth it if only to get proper support in developing your research agenda.
If you’ve submitted a credible, competitive application, you’ve at least shown willing in terms of grant-getting. No-one can say that you haven’t tried. Depending on the pressures/expectations you’re under, having had a credible attempt at it buys you some licence to concentrate on your papers for a bit.
If it’s your first application, you’ll have learnt a lot from the process, and you’ll be better prepared next time. Depending on your field, you could even add a credible unsuccessful application to a CV, or a job application question about grant-getting experience.
If your institution has an internal peer review panel or other selection process, you’ve put yourself and your research onto the radar of some senior people. You’ll be more visible, and this may well lead to further conversations with colleagues, especially outside your school. In the past I’ve recommended that people put forward internal expressions of interest even if they’re not sure they’re ready for precisely this reason.
You’ve just wasted your time – and quite a lot of time at that. And not just work time… often evenings and weekends too.
It’ll come as a disappointment, which may take some time to get over.
Even if you’ve kept it quiet, people in your institution will know that you’ve been unsuccessful.
I’ve written two longer pieces on what to do if your research grant application is unsuccessful, which can be found here and here.
A version of this article first appeared in Funding Insight on 6th March 2018 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
Success rates are low and applications are more and more time consuming to write. Is it worth it? Here’s a quick list of considerations that might help you reach a better decision.
While the latest success rates from UK research councils showed a very modest overall improvement after five consecutive annual falls, most observers regard this as a blip rather than as a sign of better times to come. Outside the Research Councils, success rates are often even lower, with some social science/humanities fellowship schemes having single digit success rates.
While success rates have fallen, demands on applicants have steadily risen. The impact agenda has brought first the impact summary and then the pathways to impact statement, and more recently we’ve seen greater emphasis on data management plans and on detailed letters of support from project partners that require significant coordination to obtain. It would be one thing if it were just a question of volume – if you want a six or seven figure sum of what’s ultimately public money, it’s not unreasonable to be asked to work for it. But it’s not just that, it’s also the fiddly nature of using JeS and understanding funder requirements. I’m forever having to explain the difference between the pathways to impact and the impact summary, and there are lots of little quirks and hidden sections that can trip people up.
But beyond even that, there’s the institutional effort of internal peer review from research development staff and senior and very busy academic staff. Whether that’s an internal review mandated by the research council – shifting the burden of review onto institutions – or introduced as a means of improving quality, it’s another cost.
Given the low success rates, the effort and time required, and the opportunity costs of doing so, are we wasting our time? And how would we know?
Do you need funding to do the research? If not, might it be a better idea just to get on with it, rather than spend a month writing an application and six months waiting for a response? And if you only need a small amount of funding, consider a smaller scheme with a less onerous application process.
Do you have a clear idea of what you want to achieve? If you can’t identify some clear research questions, and what your project will deliver, the chances are it needs more thinking through before it’s ready to be turned into an application.
Are you and your team passionate and enthused and excited about your proposal? If you’re not, why should anyone else be?
Is your research idea competitive? That’s not the same question as ‘is it good’? To quote a research director from a Canadian Research Council – it’s not a test, it’s a contest. Lots and lots and lots of good ideas go unfunded. Just because you could get something in that’s in scope and has at least some text in every box doesn’t mean you should.
Is your research idea significant? In other words, does it pass the ‘so what, who cares’ test? My experience on an NIHR funding panel is that once the flawed are eliminated, funding is a battle of significance. Is your research idea significant, would others outside your field regard it as significant, and can you communicate its significance?
Are your motivations intrinsic to the research – to do with the research and what you and your team want to discover and achieve and contribute – or are they extrinsic?
Are you applying for funding because you want promotion? When you come and talk to me and my colleagues about ‘applying for funding’ but bring less a coherent project than a list of random keywords, don’t think we don’t know.
Is it because you/your research group/school is being pressured to bring in more funding? Football manager Harry Redknapp’s tactical instructions to a substitute apparently once consisted of “just flipping run around a bit” (I paraphrase) and I sometimes worry that in some parts of some institutions that’s what passes for a grant capture strategy that values activity over outcomes.
Is it because you want to keep researchers on fixed term contracts/your promising PhD student in work? That’s a laudable aim, but without the right application and idea, you risk giving them false hope if the application is just to do more of the same with the same people.
Do you have the time you need to write a competitive application? Just as importantly, do your team? Will they be able to deliver on the bits of the application they’ll need to write? As Yoda said, “do or do not, there is no try” (Lucas, 1980). If you can’t turn your idea into a really well written, competitive, proposal in time, perhaps don’t.
Do you have your ducks in a row? Your collaborators and co-Is, your industry, government, or third sector partners lined up and on board? Are your impact plans ready? Or are you still scratching around for project partners while your competitors are polishing the fourth iteration of the complete application? Who are your rivals for this funding? Not relevant for ‘open’ calls, but for targeted schemes, who else is likely to be going for this?
Does what you want to do fit the call you’re considering applying for? Read the call, read it again, and then speak to your friendly neighbourhood Research Development professional and see if your understanding of the call matches hers. Why? Because it’s hard for researchers to read a call for proposals without seeing it through the lens of their own research priorities. Make sure others think it’s a good fit – don’t trust yourself or your co-Is to make that decision alone.
Is this the best use of your time right now? Might your time be better spent on impact, publishing papers from the last project, revising a dated module, running professional development courses?
A companion piece on the costs and benefits to researchers of applying for funding will be republished here next week.
Recently, I attended an Open Forum Events one day conference with the slightly confusing title ‘Research Impact: Strengthening the Excellence Framework‘ and gave a short presentation with the same title as this blog post. It was a very interesting event with some great speakers (and me), and I was lucky enough to meet up with quite a few people I only previously ‘knew’ through Twitter. I’d absolutely endorse Sarah Hayes‘ blogpost for Research Whisperer about the benefits of social media for networking for introverts.
Oh, and if you’re an academic looking for something approaching a straightforward explanation about the REF, can I recommend Charlotte Mathieson‘s excellent blog post. For those of you after in-depth half-baked REF policy stuff, read on…
I was really pleased with how the talk went – it’s one thing writing up summaries and knee-jerk analyses for a mixed audience of semi-engaged academics and research development professionals, but it’s quite another giving a REF-related talk to a room full of REF experts. It was based in part on a previous post I’ve written on portability, but my views (and what we know about the REF) have moved on since then, so I thought I’d have a go at summarising the key points.
I started by briefly outlining the problem and the proposed interim arrangements before looking at the key principles that needed to form part of any settled solution on portability for the REF after next.
Why non-portability? What’s the problem?
I addressed most of this in my previous post, but I think the key problem is that it turns what ought to be something like a football league season into an Olympic event. With a league system, the winner is whoever earns the most points over a long, drawn out season. Three points is three points, whatever stage of the season it comes in. With Olympic events, it’s all about peaking at the right time during the cycle – and in some events within the right ten seconds of that cycle. Both are valid as sporting competition formats, but for me, Clive, the REF should be more like a league season than a contest to see who can peak best on census day. And that’s what the previous REF rules encouraged – fractional short term appointments around the census date; bulking out the submission then letting people go afterwards; rent-seeking behaviour from some academics holding their institution to ransom; poaching and instability; transfer window effects on mobility; and panic buying.
If the point of the REF is to reward sustained excellence over the previous REF cycle with funding to institutions to support research over the next REF cycle, surely it’s a “league season” model we should be looking at, not an Olympic model. The problem with portability is that it’s all about who each unit of assessment has under contract and able to return at the time, even if that’s not a fair reflection of their average over the REF cycle. So if a world class researcher moves six months before the REF census date, her new institution would get REF credit for all of her work over the last REF cycle, and the one which actually paid her salary would get nothing in REF terms. Strictly speaking, this isn’t a problem of publication portability, it’s a problem of publication non-retention. Of which more later.
I summarised what’s being proposed as regards portability as a transition measure in my ‘Initial Reactions‘ post, but briefly, by far the most likely outcome for this REF is one that retains full portability and full retention. In other words, when someone moves institution, she takes her publications with her and leaves them behind. I’m going to follow Phil Ward of Fundermentals and call these Schrodinger’s Publications, but as HEFCE point out, plenty of publications were returned multiple times by multiple institutions in the last REF, as each co-author could return it for her institution. It would be interesting to see what proportion of publications were returned multiple times, and what the record is for the number of times that a single publication has been submitted.
Researcher Mobility is a Good Thing
Marie Curie and Mr Spock have more in common than radiation-related deaths – they’re both examples of success through researcher mobility. And researcher mobility is important – it spreads ideas and methods, allows critical masses of expertise to be formed. And researchers are human too, and are likely to need to relocate for personal reasons, are entitled to seek better paid work and better conditions, and might – like any other employee – just benefit from a change of scene.
For all these reasons, future portability rules need to treat mobility as positive, and as a human right. We need to minimise ‘transfer window’ effects that force movement into specific stages of the REF cycle – although it’s worth noting that plenty of other professions have transfer windows – teachers, junior doctors (I think), footballers, and probably others too.
And for this reason, and for reasons of fairness, publications from staff who have departed need to be assessed in exactly the same way as publications from staff who are still employed by the returning UoA. Certainly no UoA should be marked down or regarded as living on past glories for returning as much of the work of former colleagues as they see fit.
Render unto Caesar
Institutions are entitled to a fair return on investment in terms of research, though as I mentioned earlier, it’s not portability that’s the problem here so much as non-retention. As Fantasy REF Manager I’m not that bothered by someone else submitting some of my departed star player’s work written on my £££, but I’m very much bothered if I can’t get any credit for it. Universities are given funding on the basis of their research performance as evaluated through the previous REF cycle to support their ongoing endeavours in the next one. This is a really strong argument for publication retention, and it seems to me to be the same argument that underpins impact being retained by the institution.
However, there is a problem which I didn’t properly appreciate in my previous writings on this. It’s the investment/divestment asymmetry issue, as absolutely no-one except me is calling it. It’s an issue not for the likely interim solution, but for the kind of full non-portability system we might have for the REF after next.
In my previous post I imagined a Fantasy REF Manager operating largely a one-in, one-out policy – thus I didn’t need new appointee’s publications because I got to keep their predecessors. And provided that staff mobility was largely one-in, one-out, that’s fine. But it’s less straightforward if it’s not. At the moment the University of Nottingham is looking to invest in a lot of new posts around specific areas (“beacons”) of research strength – really inspiring projects, such as the new Rights Lab which aims to help end modern slavery. And I’m sure plenty of other institutions have similar plans to create or expand areas of critical mass.
Imagine a scenario where I as Fantasy REF Manager decide to sack a load of people immediately prior to the REF census date. Under the proposed rules I get to return all of their publications and I can have all of the income associated for the duration of the next REF cycle – perhaps seven years funding. On the other hand, if I choose to invest in extra posts that don’t merely replace departed staff, it could be a very long time before I see any return, via REF funding at least. It’s not just that I can’t return their publications that appeared before I recruited them, it’s that the consequences of not being able to return a full REF cycle’s worth of publications will have funding implications for the whole of the next REF cycle. The no-REF-disincentive-to-divest and long-lead-time-for-REF-reward-for-investment looks lopsided and problematic.
If I’m a smart Fantasy REF Manager, I’ll save up my redundancy axe wielding (at worst) or recruitment freeze (at best) for the end of the REF cycle, and I’ll be looking to invest only right at the beginning of the REF cycle. I’ve no idea what the net effect of all this will be repeated across the sector, but it looks to me as if non-portability just creates new transfer windows and feast and famine around recruitment. And I’d be very worried if universities end up delaying or cancelling or scaling back major strategic research investments because of a lack of REF recognition in terms of new funding.
Looking forward: A settled portability policy
A few years back, HEFCE issued some guidance about Open Access and its place in the coming REF. They did this more or less ‘without prejudice’ to any other aspect of the REF – essentially, whatever the rest of the REF looks like, these will be the open access rules. And once we’ve settled the portability rules for this time (almost certainly using the Schrodinger’s publications model), I’d like to see them issue some similar ‘without prejudice’ guidelines for the following REF.
I think it’s generally agreed that the more complicated but more accurate model that would allow limited portability and full retention can’t be implemented at such short notice. But perhaps something similar could work with adequate notice and warning for institutions to get the right systems in place, which was essentially the point of the OA announcement.
I don’t think a full non-portability full-retention system as currently envisaged could work without some finessing, and every bit of finessing for fairness comes at the cost of complication. As well as the investment-divestment asymmetry problem outlined above, there are other issues too.
The academic ‘precariat’ – those on fixed term/teaching only/fractional/sessional contracts need special rules. An institution employing someone to teach one module with no research allocation surely shouldn’t be allowed to return that person’s publications. One option would be to say something like ‘teaching only’ = full portability, no retention; and ‘fixed term with research allocation’ = the Schrodinger system of publications being retained and being portable. Granted this opens the door to other games to be played (perhaps turning down a permanent contract to retain portability?) but I don’t think these are as serious as current games, and I’m sure could be finessed.
While I argued previously that career young researchers had more to gain than to lose from a system whereby appointments are made more on potential rather than track record, the fact that so many are as concerned as they are means that there needs to be some sort of reassurance or allowance for those not in permanent roles.
Disorder at the border. What happens about publications written on Old Institution’s Time, but eventually published under New Institution’s affiliation? We can also easily imagine publication filibustering whereby researchers delay publication to maximise their position in the job market. Not only are delays in publication bad for science, but there’s also the potential for inappropriate pressure to be applied by institutions to hold something back/rush something out. It could easily put researchers in an impossible position, and has the potential to poison relationships with previous employers and with new ones. Add in the possible effects of multiple job moves on multi-author publications and this gets messy very quickly.
One possible response to this would be to allow a portability/retention window that goes two ways – so my previous institution could still return my work published (or accepted) up to (say) a year after my official leave date. Of course, this creates a lot of admin, but it’s entirely up to my former institution whether it thinks that it’s worth tracking my publications once I’ve gone.
What about retired staff? As far as I can see there’s nothing in any documents about the status of the publications of retired staff either in this REF or in any future plans. The logic should be that they’re returnable in the same way as those of any other researcher who has left during the REF period. Otherwise we’ll end up with pressure to stay on and perhaps other kinds of odd incentives not to appoint people who retire before the end of a REF cycle.
One final suggestion…
One further half-serious suggestion… if we really object to game playing, perhaps the only fair way to properly reward excellent research and impact and to minimise game playing is to keep the exact rules of the REF a secret for as long as possible in each cycle, forcing institutions just to focus on “doing good stuff” and to worry less about gaming the REF.
If you’re really interested, you can download a copy of my presentation … but if you weren’t there, you’ll just have to wonder about the blank page…
This lunchtime HEFCE have announced some more “Initial Decisions” on REF 2021, which I’ve summarised below.
Slightly frustratingly, the details are scattered across a few documents, and it’s easy to miss some of them. There’s an exec summary, a circular letter (which is more of a rectangle, really), the main text of the report that can be downloaded from the bottom of the exec summary page (along with an annex listing UoAs and further particulars for panel chair roles)… and annex A on a further consultation staff return and output portability, downloadable from the bottom of the circular letter page.
I’ve had a go at a quick summary, by bullet point theme rather than in the order they appear, or in a grand narrative sweep. This is one of my knee-jerk pieces, and I’ve added a few thoughts of my own. But it’s early days, and possibly I’ve missed something or misunderstood, so please let me know.
Reserve output allowed where publication may not appear in time
Worth only 60% of total mark this time (see scoring system)
I think the reduction in the contribution of outputs to the overall mark (at the expense of impact) is probably what surprised me most, and I suspect this will be controversial. I think the original plan was for environment to be downgraded to make way, but there’s a lot more demanded from the environment statement this time (see below) so it’s been protected. Great to have the option of submitting an insurance publication in case one of the in-press ones doesn’t appear by close of play.
Panels/Units of Assessment
Each sub-panel to have at least one appointed member for interdisciplinary research “with a specific role to ensure its equitable assessment”. New identifier/flag to capture interdisciplinary outputs
Single UoA for engineering, multiple submissions allowed
Archaeology split from Geography and Environmental studies – now separate
Film and Screen Studies to be explicitly included in UoA 33 with Dance, Drama, Performing Arts
Decisions on forensic science and criminology (concerns about visibility) due in Autumn
Mapping staff to UoAs will be done by institutions, not HESA cost centres, but may ask for more info in the event of any “major variances” from HESA data.
What do people think about a single UoA for engineering? That’s not an area I support much. Is this just tidying up, or does this have greater implications? Is it ironic that forensic science and criminology have been left with a cop-show cliff-hanger ending?
Expansion of Unit of Assessment environment section to include sections on:
Structures to support interdisciplinary research
Supporting collaboration with “organisations beyond higher education”
Impact template will now be in the environment element
Approach to open research/open access
Supporting equality and diversity
More quant data in UoA environment template (we don’t know what yet)
Standard Institution level information
Non-assessed invite only pilot for institution level environment statement
Expansion of environment section is given as a justification for maintaining it at 15% of score rather than reducing as expected.
The inclusion of a statement about support for interdisciplinary work is interesting, as this moves beyond merely addressing justifiable criticism about the fate of interdisciplinary research (see the welcome addition to each UoA of an appointed ‘Member for Interdisciplinarity’ above). This makes it compulsory, and an end in itself. This will go down better in some UoAs than others.
Institutional level impact case studies will be piloted, but not assessed
Moves towards unifying definitions of “impact” and “academic impact” between REF and Research Councils – both part of dual funding system for research
Impact on teaching/curriculum will count as impact – more guidance to be published
Underpinning work “at least equivalent to 2*” and published between 1st Jan 2000 and 31st Dec 2020. Impact must take place between 1st Aug 2013 and 31st July 2020
New impact case study template, more questions asked, more directed, more standardised, more “prefatory” material to make assessment easier.
Require “routine provision of audit evidence” for case study templates, but not given to panel
Uncertain yet on formula for calculating number of case study requirements, but overall “should not significantly exceed… 2014”. Will be done on some measure of “volume of activity”, possibly outputs
Continuation of case studies from 2014 is allowed, but must meet date rules for both impact and publication, need to declare it is a continuation.
Increased to 25% of total score
And like a modern-day impact superhero, here comes Mark Reed (aka Fast Track Impact) with a blog post of his own on the impact implications of the latest announcement. I have to say that I’m pleased that we’re only having a pilot for institutional case studies, because I’m not sure that’s a go-er.
Assessment and Scoring system
Sub-panels may decide to use metrics/citation data, but will set out criteria statements stating whether/how they’ll use it. HEFCE will provide the citation data
As in 2014: an overall excellence profile with three sub-profiles (outputs, impact, environment)
Five point scale from unclassified to 4*
Outputs 60%, Impact 25%, Environment 15%. The increase of impact to 25% has, given the extra environment information sought, come at the expense of outputs.
There was some talk of a possible need for a 5* category to differentiate at the very top, but I don’t think this gained much traction.
But on the really big questions… further consultation (deadline 29th Sept):
There’s been some kicking into the short grass, but things are looking a bit clearer…
(1) Staff submission:
All staff “with a significant responsibility to undertake research” will be submitted, but “no single indicator identifies those within the scope of the exercise”. Institutions have the option of submitting 100% of staff who meet the core eligibility requirement, OR coming up with a code of practice that they’ll use to decide who is eligible. Auditable evidence will be required, and institutions can choose different options for different UoAs.
Proposed core eligibility requirements – staff must meet all of the following:
“have an academic employment function of ‘research only’ or ‘teaching and research’
are independent researchers [i.e. not research assistants unless ‘demonstrably’ independent]
hold minimum employment of 0.2 full time equivalent
have a substantive connection to the submitting institution.”
I like this as an approach – it throws the question back to universities, and leaves it up to them whether they think it’s worth the time and trouble of running such an exercise in one or more UoAs. And I think the proposed core requirements look sensible, and faithful to the core aim, which is to maximise the number of researchers returned and prevent the hyper-selectivity game being played.
(2) Transition arrangements for non-portability of publications.
HEFCE are consulting on either:
(a) “The simplified model, whereby outputs would be eligible for return by the originating institution (i.e. the institution where the research output was demonstrably generated and at which the member of staff was employed) as well as by the newly employing institution”.
(b) “The hybrid approach, with a deadline (to be determined), after which a limited number of outputs would transfer with staff, with eligibility otherwise linked to the originating institution. (This would mean operating two rules for portability in this exercise: the outputs of staff employed before the specified date falling under the 2014 rules of full portability; outputs from staff employed after this date would fall under the new rules.)”
I wrote a previous post on portability and non-portability when the Stern Review was first published, and I still think it’s broadly correct.
I wonder how simple the simplified model will be… if we end up having to return n=2 publications, choosing them from a list of everything published by everyone while they worked here. But it’s probably less work than having a cut-off date.