You’re applying for UK research council funding and suddenly you’re confronted with massive overhead costs. Adam Golberg tries to explain what you need to know.
Trying to explain Full Economic Costing is not straightforward. For
current purposes, I’ll be assuming that you’re an academic applying for UK
Research Council funding; that you want to know enough to understand your
budget; and that you don’t really want to know much more than that.
If you do already know a lot about costing or research finances, be warned – this article contains simplifications, generalisations, and omissions, and you may not like it.
What are Full Economic Costs, and why are they taking up so much of my budget?
Full Economic Costs (fEC) are paid as part of UK Research and Innovation grants to cover a fair share of the wider costs of running the university – the infrastructure that supports your research. There are a few different cost categories, but you don’t need to worry about the distinctions.
Every UK university calculates its own overhead rates using a common methodology. I’m not going to try to explain how this works, because (a) I don’t know; and (b) you don’t need to know. Most other research funders (charities, EU funders, industry) do not pay fEC for most of their schemes. However, qualifying peer-reviewed charity funding does attract a hidden overhead of around 19% through QR funding (the same source as REF funding). But it’s so well hidden that a lot of people don’t know about it. And that’s not important right now.
How does fEC work?
In effect, this methodology produces a flat daily overhead rate to be charged
relative to academic time on your project. This rate is the same for the time
of the most senior professor and the earliest of early career researchers.
One effect of this is to make postdoc researchers seem proportionally more expensive. Senior academics are more expensive because of higher employment costs (salary etc.), but the overheads generated by both will be the same. Don’t be surprised if the overheads generated by a full-time researcher are greater than her employment costs.
All fEC costs are calculated at today’s rates. Inflation and increments
will be added later to the final award value.
Do we have to charge fEC overheads?
Yes. This is a methodology that all universities use to make sure that
research is funded properly, and there are good arguments for not undercutting
each other. Rest assured that everyone – including your competitors – is
playing by the same rules and ends up with broadly comparable rates. Reviewers
are not going to be shocked by your overhead costs compared to rival bids. Your
university is not shooting itself (or you) in the foot.
There are fairness reasons not to waive overheads. The point of Research
Councils is to fund the best individual research proposals regardless of the
university they come from, while the REF (through QR) funds broad,
sustained research excellence based on historical performance. If we start
waiving overheads, wealthier universities will have an unfair advantage as they
can waive while others drown.
Further, the budget allocations set by funders are decided with fEC overheads in mind. They’re expecting overhead costs. If your project is too expensive for the call, the problem is with your proposal, not with overheads. Either it contains activities that shouldn’t be there, or there’s a problem with the scope and scale of what you propose.
However, there are (major) funding calls where “evidence of institutional
commitment” is expected. This could include a waiver of some overheads, but
more likely it will be contributions in kind – some free academic staff time, a
PhD studentship, new facilities, a separate funding stream for related work.
Different universities have different policies on co-funding and it probably
won’t hurt to ask. But ask early (because approval is likely to be complex) and
have an idea of what you want.
What’s this 80% business?
This is where things get unnecessarily complicated. Costs are calculated
at 100% fEC but paid by the research councils at 80%. This leaves the remaining
20% of costs to be covered by the university. Fortunately, the overhead income
is usually enough to cover the missing 20% of direct costs. However, if you have a
lot of non-pay costs and relatively
little academic staff time, check with your costings team that the project is
still affordable.
Why 80%? In around 2005 it was deemed ‘affordable’ – a compromise figure intended to make a significant contribution to university costs but without breaking the bank. Again, you don’t need to worry about any of this.
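The arithmetic above can be sketched in a few lines of Python. Everything here is invented for illustration – the flat daily overhead rate and the example project figures are all hypothetical, and real rates come from your university’s costing team:

```python
# Purely illustrative sketch of the fEC arithmetic described above.
# The daily overhead rate and all example figures are invented --
# real rates vary by institution and come from your costing team.

DAILY_OVERHEAD_RATE = 160   # hypothetical flat daily rate, GBP
FUNDER_PAYS = 0.80          # research councils pay 80% of fEC

def fec_summary(academic_days: int, direct_costs: float) -> dict:
    """Overheads scale with academic time; the funder pays 80% of the total."""
    overheads = academic_days * DAILY_OVERHEAD_RATE
    total_fec = direct_costs + overheads
    funder_pays = total_fec * FUNDER_PAYS
    return {
        "overheads": overheads,
        "total_fec": total_fec,
        "funder_pays": funder_pays,
        "university_covers": total_fec - funder_pays,
    }

# Hypothetical project: 220 academic days, GBP 120,000 of direct costs
summary = fec_summary(academic_days=220, direct_costs=120_000)
```

In this toy example the overhead income (80% of £35,200) is larger than the 20% shortfall on the direct costs, which is the point made above: the overheads usually absorb the gap.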
Can I game the fEC system, and if so, how?
Academic time is what drives overheads, so reducing academic time reduces
overheads. One way to do this is to think about whether you really need as much
researcher time on the project. If you really need to save money, could
contracts finish earlier or start later in the project?
Note that non-academic time (project administrators, managers,
technicians) does not attract overheads, and so is good value for money under
this system. If some of the tasks you’d like your research associate to do are
project management/administration tasks, your budget will go further if you
cost in administrative time instead.
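As a toy illustration of why administrative time stretches the budget further (all figures invented, including the flat overhead rate):

```python
# Toy comparison with invented figures: academic time attracts the flat
# daily overhead rate; non-academic (administrative) time does not.

OVERHEAD_PER_DAY = 160  # hypothetical flat daily overhead rate, GBP

def staff_cost(days: int, daily_salary_cost: float, academic: bool) -> float:
    """Employment cost for a block of project time, plus overheads if academic."""
    overheads = days * OVERHEAD_PER_DAY if academic else 0
    return days * daily_salary_cost + overheads

researcher = staff_cost(60, 180, academic=True)       # 60 researcher days
administrator = staff_cost(60, 140, academic=False)   # 60 administrator days
```

With these made-up numbers the researcher days cost £20,400 against £8,400 for the administrator days – the overhead element, not just the salary difference, drives the gap.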
However, if your final application has unrealistically low amounts of academic time and/or costs in administrators to do researcher roles, the panel will conclude that either (a) you don’t understand the resource implications of your own proposal; or (b) a lack of resources means the project risks being unable to achieve its stated aims. Either way, it won’t be funded. Funding panels are especially alert for ‘salami projects’ which include lots of individual co-investigators for thin slivers of time in which the programme of research cannot possibly be completed. Or for undercooked projects which put too much of a burden on not enough postdoc researcher time. As mentioned earlier, if the project is too big for the call budget, the problem is with your project.
The best way to game fEC is not to worry about it. If you have support with your research costings, you’ll be working with someone who can cost your application, advise you on where and how it can be tweaked, and explain which costs are eligible. That’s their job – leave it to them, trust what they tell you, and use the time saved to write the rest of the application.
Thanks to
Nathaniel Golden (Nottingham Trent) and Jonathan Hollands (University of
Nottingham) for invaluable comments on earlier versions of this article. Any
errors that remain are my own.
The relentless drive for research excellence has created a culture in modern science that cares exclusively about what is achieved and not about how it is achieved.
As I speak to people at every stage of a scientific career, although I hear stories of wonderful support and mentorship, I’m also hearing more and more about the troubling impact of prevailing culture.
People tell me about instances of destructive hyper-competition, toxic power dynamics and poor leadership behaviour – leading to a corresponding deterioration in researchers’ wellbeing. We need to cultivate, reward, and encourage the best while challenging what is wrong.
We know that Wellcome has helped to create this focus on excellence. Our aim has rightly been to support research with the potential to benefit society. But I believe that we now also have an important role to play in changing and improving the prevailing research culture. A culture in which, however unintentionally, it can be hard to be kind.
If we want science to be firing on all cylinders, we need everyone in the research system – individuals, institutions and funders – working in step to foster a positive working culture.
Which leads me to wonder what the role of research development and other research support professionals should be in moving towards a more positive research culture. I don’t know the answer, and this post is an open invitation to share your thoughts. I’ll pull these together into a crowd-sourced post with credit for those who want it and anonymity for those who don’t. This approach seemed to work well for a previous post around supporting a new academic discipline, so perhaps it will work here too.
I don’t want to say too much in this post, but as I’m asking others I should at least share a few indicative thoughts about areas to think about.
We should look at our own profession, our own culture, and how we treat each other. In my time in research development I’ve generally found it to be a supportive profession, both internally within the universities where I’ve worked, and (especially) externally through ARMA. However, I’m white, male, heterosexual, middle-aged, middle class, so I’m very much playing on ‘easy mode‘. I don’t get mistaken for an administrator, and either I’m super diplomatic and great at influencing and persuading, or I get taken more seriously by some people because of my jackpot of categories of privilege. As I’ve alluded to on this blog before, I do have a slight stammer and have written about the challenges it can cause me, but it has seldom held me back and I don’t think it’s affected how I’m perceived.
In terms of our own profession and our own behaviour, the phrase “be the change you want to see in the world” came to mind. Although… when I went to Google to find out who said it, I found an interesting blog post arguing that Mahatma Gandhi (to whom it is usually attributed) said and meant something rather different and much more challenging. It’s not simply about living our values, but reflecting on them and changing ourselves where necessary. As a philosopher by training I also thought about Aristotle and his writings on the importance of character and virtues – if you nurture the right character and the right virtues, the chances that you’ll respond in the right way when tested or under pressure will be higher. But how do we do that? Practice, reflection, courage, and learning from the example of others, both positive and negative.
Less esoterically, a second category of issues is around our role in supporting research and researchers, especially around grant getting and grant writing activity. Competition for funding, low success rates, increasingly long and complicated application forms, and pressure from university management all form part of research culture. While we rarely have formal power or authority over academic staff, we do have a measure of influence on research culture. So how do we use that influence and our roles for good? What’s our role in preventing research excellence coming at the expense of those who make it happen – which includes us, in our small way? I’ll kick things off with three issues I’ve been thinking about recently…
Firstly, forwarding funding opportunities and supporting applications. When I send funding opportunities onto academics, am I guilty of unconscious bias? Am I committing the availability error and just emailing the first people who come to mind? Does that mean some people with certain characteristics are more likely to receive those emails than others? Does unconscious bias affect how I respond to tentative enquiries about opportunities, or about how I divide my time between proposals?
Honest answer is that I don’t know. But I’ve been influenced by the pushback against ‘manels’ (all male panels at conferences)… and if my funding opportunity distribution list looks like a manel, especially a white manel (because intersectionality is key) I’m taking time to stop and think about who I might have missed. Sometimes structural inequalities or call specifics mean that I got it right first time, but it’s worth a check.
Secondly, what’s our role around workload and work-life balance? Could we do more to minimise the burden on researchers at all levels of seniority? Partly this is around efficiency and systems and processes, but partly I think there are cultural issues to consider too. I recently had a discussion with organisers of a research network which ran funding calls about the appropriateness of having a deadline of (something like) 23:59 on Sunday evening. The argument was that academics preferred this because it gave them more time than, say, a Friday 4:00pm deadline. But it’s time over a weekend, and arguably this increases the expectation that academics work weekends. When do we set our internal deadlines for various tasks, from REF reviews to internal peer reviews to internal deadlines for draft applications? Do we assume that academic colleagues will be working weekends?
Thirdly, when we advise on the staffing of research projects, are we creating good jobs with fair salaries and training and career development opportunities? The issue of ‘good jobs’ on research projects (for academics and managers/administrators) was something that Wellcome brought up at a visit I attended a few weeks ago. I have to admit that under cost pressures on UKRI applications, there’s a strong incentive to try to cut researcher time as much as possible to reduce both employment costs and overheads. Of course, we should never over-cost for any post for any funder, but it’s likely I’ve had a role in creating (potential) jobs that are lower quality than they might otherwise be.
That’s probably enough for now – this was supposed to be a short post. But this is an open invitation to email me with any thoughts you have about challenges we face, or steps we might take, in responding to the Wellcome Trust’s challenge to reimagine how we do research. I’ll be sharing this invitation via the ARMA Research Development email list and via Twitter for greater international reach.
I’ve recently moved from a role supporting the Business School and the School of Economics to a central role at the University of Nottingham, looking after our engagement with research charities. I’m going from a role where I know a few corners of the university very well to a role where I’m going to have to get to know more about much more of it.
My academic background (such as it is) is in political
philosophy and for most of my research development career I’ve been supporting
(broadly) social sciences, with a few outliers. I’m now trying to develop my
understanding of academic disciplines that I have little background or
experience in – medical research, life sciences, physics, biochemistry etc. I
suspect the answer is just time, practice, familiarity, confidence (and
Wikipedia), but I found myself wondering if there are any short cuts or
particularly good resources to speed things up.
Fortunately, if you’re a member of ARMA, you’re never on your own, and I sent an email around the Research Development Special Interest Group email list, with a promise (a) to write up contributions as a blog post and (b) to add some hints and tips of my own, especially for the social sciences.
So here goes… the collated and collected wisdom of the SIG… bookmark this post and revisit it if your remit changes…
Don’t panic… and focus on what you can do
In my original email, the first requirement I suggested was ‘time’, and that’s been echoed in a lot of the responses. “Time, practice, familiarity, confidence (and Wikipedia)” as Chris Hewson puts it. It’s easy to be overwhelmed by a sea of new faces and names and an alphabet soup of new acronyms – and to regard other people’s hard-won institutional/school/faculty knowledge as some kind of magical superpower.
Lorna Wilson suggests that disciplinary differences are overrated and “sometimes the narrative of ‘difference’ is what makes things harder. The skills and expertise we have as research development professionals are transferable across the board, and I think that the silos of disciplines led to a silo-ing of roles (especially in larger universities). With the changes in the external landscape and push with more challenge-led interdisciplinary projects, the silos of disciplines AND of roles I think is eroding.”
But there are differences in practices and norms – there are differences in terminology, outlook, career structures, internal politics, norms, and budget sizes – and I’m working hard trying not to carry social science assumptions with me. Though perhaps I’m equally likely to be too hesitant to generalise from social science experience where it would be entirely appropriate to do so.
Rommany Jenkins has “moved from Arts and Humanities to Life Sciences” and thinks that while “the perception might be that it’s the harder direction to go in because of the complexity of the subject matter […] it’s probably easier because the culture is quite straightforward […] although there are differences between translational / clinical and basic, the principles of the PI lab and team are basically the same”. She thinks that perhaps “it’s more of a culture shock moving into Arts and Humanities, because people are all so independently minded and come at things from so many different directions and don’t fit neatly into the funding boxes. […] I know a lot of people just find it totally bizarre that you can ask a Prof in Arts what they need in terms of costings and they genuinely don’t know.”
Charlotte Johnson moved in the opposite direction, from science to arts. “The shortcut was trying to find commonalities in how the different disciplines think and prepare their research. Once you realise that an artist and a chemist would go about planning their research project very similarly, and they only start to diverge in the experimental/interpretation stage, it does actually make it all quite easy to understand“
Muriel Swijghuisen Reigersberg says that her contribution “tends to be not so much on the science front, but on the social and economic or policy and political implications of the work STEMM colleagues are doing and recommendations around impact and engagement or even interdisciplinary angles to enquiries for larger projects.”
My colleague Liz Humphreys makes a similar (and very reassuring) point about using the same “skills to assess any bid by not focusing on the technical things but focus on all the other usual things that a bid writer can strengthen”. A lay summary that doesn’t make any lay sense is an issue regardless of discipline, as is a summary that doesn’t summarise and reads more like an introduction. Getting good at reviewing research grants can transcend academic disciplines. “If someone can’t explain to me what they’re doing,” says Claire Edwards, “then it’s unlikely to convince reviewers or a panel.”
Kate Clift makes a similar point: “When I am working in a discipline which is alien to me I tend to try and ground the proposed research in something which I do understand so I can appreciate the bigger picture, context etc. I will ask lots of ‘W’ questions – Why is it important? What do you want to do? Who is going to do it? Less illuminating to me in this situations is HOW they are going to do it”.
Roger Singleton Escofet makes the very sensible point that some subjects are very theoretical “where you will always struggle to understand what is being proposed”. I certainly found this with Economics – I could hope to try to understand what a proposed project did, but how it worked would always be beyond me. Reminds me a bit of this Armstrong and Miller sketch in which they demonstrate how not to do public engagement in theoretical physics.
Ann Onymous-Contributor says that “multidisciplinary projects are the best way to ease yourself into other disciplines and their own specific languages. My background is in social sciences but because of the projects I have worked on I have experience of, and familiarity with a range of arts and hard science disciplines and the languages they use. Broad, shallow knowledge accumulated on this basis can be very useful; sometimes specific disciplinary knowledge is less important than understanding connections between different disciplines, or the application of knowledge, which typically also tend to be the things which specialists miss.” I think this is a really good point – if we allow ourselves to include the other disciplines that we’ve supported as part of interdisciplinary bids, we may find we’ve more experience than we thought.
Finding the Shallow End, Producing your Cheat Sheet
Lorna Wilson suggests “[h]aving a basic understanding” of methodologies in different disciplines, “helps to demonstrate how [research questions] are answered and hypotheses evidenced, and I think breaks through some of the ‘difference’. What makes things slightly more difficult is also accessibility, in terms of language of disciplines, we could almost do with a cheat sheet in terms of terms!”
Richard Smith suggests identifying academics in the field who are effective and willing communicators “who appreciate the benefits and know the means of conveying approaches and fields to non-experts… and do it with enthusiasm”. Harry Moriarty’s experience has been that ECRs and PhD students are often a particularly good source – many are more willing to engage, and perhaps have more to gain from our advice and support.
Muriel Swijghuisen Reigersberg suggests attending public
lectures (rather than expert seminars) which will be aimed at the generalist,
and notes that expert-novice conversations will benefit the academic expert in
terms of practising explanations of complex topics to a generalist audience. I
think we can all recognise academics who enjoy talking about their work to
non-specialists and have a gift for explanation, and those who don’t, haven’t,
or both.
Other non-academic colleagues can help too, Richard argues – especially impact and public or business engagement staff working in that area, but also admin staff and School managers. Sanja Vlaisavljevic wanted to “understand how our various departments operate, not just in terms of subject-matter but the internal politics”. This is surely right – I’m sure we’re all aware of historical disagreements or clashes between powerful individuals or whole research groups/Schools that stand in the way of certain kinds of collaboration or joint working. Whether we work to try to erode these obstructions or navigate deftly around them, we need to know that they’re there.
Caroline Moss-Gibbons adds librarians to the list, citing their resource guides and access/role with the university repository. Claire Edwards observes that many research development staff have particular academic backgrounds that might be useful.
Don’t try to fake it till you make it
“Be open that you’re new to the area, but if they’re looking for funding they need to be able to explain their research to a non-specialist” says Jeremy Barraud.
I’ve always found that a full, frank, and even cheerful confession of a lack of knowledge is very effective. I often include a blank slide in presentations to illustrate what I don’t know. My experience is that admitting what I don’t know earns me a better hearing on matters that I do know about (as long as I do both together), but I’m aware that as a straight, white, middle aged, middle class male perhaps that’s easier for me to do. I’ve suspected for some time now that being male (and therefore less likely to be mistaken for an “administrator”) means I’m probably playing research development on easy mode. There’s an interesting project around EDI and research development that I’m probably not best placed to do.
While no-one is arguing for outright deception, I’ve heard
it argued that frank admissions of ignorance about a particular topic area may
make it harder to engage academic colleagues and to find out more. If academic
colleagues make certain assumptions about background, perhaps try to live up to
those with a bit of background reading. It’s easy to be written off and written
out, which then makes it harder to learn later.
I always think half the battle is convincing academic colleagues that we’re on their side and the side of their research (rather than, say, motivated by university income targets or an easier life), and perhaps it’s easy to underestimate the importance of showing an interest and a willingness to learn. Asking intelligent, informed, interested lay questions of an expert – alongside demonstrating our own expertise in grant writing etc – is one way to build relationships. My own experience with my MPhil is that research can be a lonely business, and so an outsider showing interest and enthusiasm – rather than their eyes glazing over and disengaging – can be really heartening.
Kate Clift makes an important point about combining any admissions of relative ignorance with a stress on what she can do/does know/can contribute. “I’m always very upfront with people and say I don’t have an understanding of their research but I do understand how to craft a submission – that way everyone plays to their strengths. I can focus on structure and language and the academic can focus on scientific content.”
Find a niche, get involved, be visible
For Jeremy Barraud, that was being secretary for an ethics committee. In my early days with Economics, it was supporting the production of the newsletter and writing research summaries – even though it wasn’t technically part of my remit, it was a great way to get my name known, get to know people, and have a go at summarising Economics working papers.
Suzannah Laver is a research development manager in a Medical School, but has a background in project management and strategy rather than medicine or science. For her it was “just time” and getting involved “[a]ttending the PI meetings, away days, seminars, and arranging pitching events or networking events.” Mary Caspillo-Brewer adds project inception meetings and dissemination events to the list, and also suggests attending academic seminars and technical meetings (as does Roger Singleton Escofet), even if they’re aimed at academics. This is great in terms of visibility and in terms of evidence of commitment – sending a message that we’re interested and committed, even if we don’t always entirely understand.
Mark Smith suggests visiting research labs or clinics, however terrifying they may first appear. So far I’ve only met academics in their offices – I’m not sure I trust myself anywhere near a lab. I’m still half-convinced I’ll knock over the wrong rack of test tubes and trigger a zombie epidemic. But lab visits are perhaps something I could do more of in the future when I know people better. And as Mark says, taking an interest is key.
Do your homework
I’ve blogged before about the problems with the uses and abuses of successful applications, but Nat Golden is definitely onto something when he suggests reading successful applications to look at good practice and what the particular requirements of a funder are. Oh, and reading the guidance notes.
Roger Singleton Escofet (and others) have mentioned that the Royal Society and Royal Academy of Engineering produce useful reports that “may be technical but offer good overviews on topical issues across disciplines. Funders such as research councils or Wellcome may also be useful sources since funders tend to follow (or set) the emerging areas.” Hilary Noone also suggests looking to the funders for guidance – trying to “understand the funders real meaning (crucial for new programmes and calls where they themselves are not clear on what they are trying to achieve)”.
There’s a series of short ‘Bluffer’s Guide’ books which are somewhat dated, but potentially very useful. Bluff your way in Philosophy was on my undergraduate reading list. Bluff your way in Economics gave me an excellent grounding when my role changed, and explained (among many other things) the difference between exogenous and endogenous factors. When supporting a Geography application, I learned the difference between pluvial and fluvial flooding. These little things make a difference, and it’s probably the absence of that kind of basic grounding in many of the disciplines that I’m now supporting that’s making me feel uneasy. In a good way.
Harry Moriarty argues that it’s more complicated than just reading Wikipedia – the work he supported “was necessarily at the cutting edge and considerably beyond the level that I could get to in a sensible order – I had to take the work and climb back through the Wikipedia pages in layers, and then, once I had some underpinning knowledge, go back through the same pages in light of my new understanding”.
Specific things to do
“Become an NIHR Public Reviewer”, says Jeremy Barraud. “It’s easy to sign up and they’re keen to get more reviewers. Being on the other side of the funding fence gives a real insight into how decisions are reached (and bolsters your professional reputation when speaking with researchers). “
I absolutely second this – I’ve been reviewing for NIHR for some time and just finished a four-year term as a patient/public representative on an RfPB panel. I’d recommend doing this not just to gain experience of new research areas, but as a valuable public service that you as a research development professional can perform. If you’ve got experience of a health condition, using NHS services (as a patient or carer), and you’re not a healthcare professional or researcher, I’m sure they’d love to hear from you.
Being a research participant, argues Jeremy Barraud, is “professionally insightful and personally fulfilling. The more experience you have on research in all its different angles, the better your professional standing”. This is also something I’ve done – in many ways it’s hard not to get involved in research if you’re hanging around a university. I’m part of a study looking at running and knee problems, and I’ve recently been invited to participate in another study.
Bonhi Bhattacharya registered for a MOOC (Massive Open Online Course) – an “Introduction to Ecology” – Bonhi is a mathematician by training – “and it was immensely helpful in getting a grounding in the subject, as well as a useful primer in terminology.“ It can be a bit of a time commitment, but they’re also fascinating – and as above, really shows willing. I wrote about my experience with a MOOC on behavioural economics in a post a few years ago. Bonhi also suggests reading academics’ papers – even if only the introduction and conclusion.
Resources
Subscribe to The Conversation, says Claire Edwards, it’s “a great source of academic content aimed at a non-specialist audience”. In a similar vein, Helen Walker recommends the Wellcome-funded website Mosaic which is “great for stories that give the bigger picture ‘around’ science/research – sometimes research journeys, sometimes stories showing the broader context of science-related research.” Both Mosaic and The Conversation have podcast companions. Recent Conversation podcast series have looked at the Indian elections and moon exploration.
I’m a huge fan of podcasts, and there are loads that can help with gaining a basic understanding of new academic areas – in addition to being interesting (and sometimes amusing).
A quick search of the BBC has identified three science podcasts I should think about listening to – The Science Hour, Discovery, and BBC Inside Science. Very open to other suggestions – please tweet me or let me know in the comments/via email.
A huge thank you to all contributors:
I’m very grateful to everyone for their comments. I’ve not been able to include everything everyone said, in the interests of avoiding duplication/repetition and in the interests of keeping this post to a manageable length.
I don’t think there’s any great secret to success in supporting a new discipline or working in research development in a new institution – it’s really a case of remembering and repeating the steps that worked last time. And hopefully this blog post will serve as a reminder to others, as it is doing to me.
Jeremy Barraud is Deputy Director, Research Management and Administration, at the University of the Arts, London.
Bonhi Bhattacharya is Research Development Manager at the University of Reading
Mary Caspillo-Brewer is Research Coordinator at the Institute for Global Health, University College London
Kate Clift is Research Development Manager at Loughborough University
Anne Onymous-Contributor is something or other at the University of Redacted
Claire Edwards is Research Bid Development Manager at the University of Surrey.
Adam Forristal Golberg is Research Development Manager (Charities), at the University of Nottingham
Nathanial Golden is Research Development Manager (ADHSS) at Nottingham Trent University
Chris Hewson is Social Science Research Impact Manager at the University of York
Liz Humphreys is Research Development Manager for Life Sciences, University of Nottingham
Rommany Jenkins is Research Development Manager for Medical and Dental Sciences, University of Birmingham.
Charlotte Johnson is Senior Research Development Manager, University of Reading
Suzannah Laver is Research Development Manager at the University of Exeter Medical School
Harry Moriarty is Research Accelerator Project Manager at the University of Nottingham.
Caroline Moss-Gibbons is Parasol Librarian at the University of Gibraltar.
Hilary Noone is Project Officer (REF Environment and NUCoREs) at the University of Newcastle
Roger Singleton Escofet is Research Strategy and Development Manager for the Faculty of Science, University of Warwick.
Mark Smith is Programme Manager – The Bloomsbury SET, at the Royal Veterinary College
Richard Smith is Research and Innovation Funding Manager, Faculty of Arts, Humanities and Social Sciences, Anglia Ruskin University.
Muriel Swijghuisen Reigersberg is Researcher Development Manager (Strategy) at the University of Sydney.
Sanja Vlaisavljevic is Enterprise Officer at Goldsmiths, University of London
Helen Walker is Research and Innovation Officer at the University of Portsmouth
Lorna Wilson is Head of Research Development, Durham University
I’m writing this in the final week of my current role as Research Development Manager (Social Sciences) at the University of Nottingham before I move to my role as Research Development Manager (Research Charities) at the University of Nottingham. This may or may not change the focus of this blog, but I won’t abandon the social sciences entirely – not least because I’m stuck with the web address.
I’ve been thinking about strategies and approaches to research funding, and the place and prioritisation of applying for research grants in academic structures. It’s good for institutions to be ambitious in their grant-getting activities. However, these ambitions need to have at least a nodding acquaintance with: (a) the actual amount of research funding historically available to any given discipline; and (b) the chances of any given unit, school, or individual competing successfully for that funding, given the strength of the competition.
To use a football analogy, if I want my team to get promotion, I should moderate my expectations in the light of how many promotion places are available, and how strong the likely competition for those limited spots will be. In both cases, we want to set targets that are challenging, stretching, and ambitious, but which are also realistic and informed by the evidence.
How do we do that? Well, in a social science context, a good place to start is the ESRC success rates, and other disciplines could do worse than take a similar approach with their most relevant funding council. The ESRC produce quite a lot of data and analysis on funding and success rates, and Alex Hulkes of the ESRC Insights team writes semi-regular blog posts. Given the effort put into creating and curating this information, it seems only right that we use it to inform our strategies. This level of transparency is a huge (and very welcome) change from previous practice, when very limited information was rather hidden away. Obvious caveats – the ESRC is by no means the only funder in town for the social sciences, but they’ve got the deepest pockets and offer the best financial terms. Another (and probably better) way would be to compare HESA research income stats, but let’s stick to the ESRC for now.
The table below shows the running three-year total (2015/16 to 2017/18) of awards and number of applications for each discipline across all calls, and the total for the period 2011/12 to 2017/18. You can access the data for yourself on the ESRC web page. This data is linked as ‘Application and success rate data (2011-12 to 2017-18)’ and was published in ODS format in May 2018. For ease of reading I’ve hidden the results from individual years.
Lots of caveats here. Unsuccessful outline proposals aren’t included (as no outline application leads directly to funding), but ‘office rejects’ (often for eligibility reasons) are. The ‘core discipline’ of each application is taken into account – secondary disciplines are not. The latest figures here are from 2017-2018 (financial year), so there’s a bit of a lag – in particular, the influence of the Global Challenges Research Fund (GCRF) or Industrial Strategy Challenge Fund (ISCF) will not be fully reflected in these figures. I think the ‘all data’ figures may include now-defunct schemes such as the ESRC Seminar Series, though I think Small Grants had largely gone by the start of the period covered by these figures.
Perhaps most importantly, because these are the results for all schemes, they include targeted calls which will rarely be open to all disciplines equally. Fortunately, the ESRC also publishes similar figures for their open call (Standard) Research Grants scheme for the same time period. Note that (as far as I can tell) the data above includes the data below, just as the ‘all data’ column (which goes back to 2011/12) also includes the three-year total.
This table is important because the Research Grants Scheme is bottom-up, open-call, and open to any application that’s at least 50% social sciences. Any social science researcher could apply to this scheme, whereas directed calls will inevitably appeal only to a subset. These are the chances/success rates for those whose work does not fit squarely into a directed scheme, and could arguably be regarded as a more accurate measure of disciplinary success rates. It’s worth noting that a specific call that’s very friendly to a particular discipline is likely to boost the number of successes, but may decrease the disciplinary success rate if it attracts a lot of bids. It’s also possible that major targeted calls that are friendly to a particular discipline may result in fewer bids to the open call.
To be fair, there are a few other regular ESRC schemes that are similarly open and should arguably be included if we wanted to look at the balance of disciplines and what a discipline target might look like. The New Investigator Scheme is open in terms of academic discipline, if not in time-since-PhD, and the Open Research Area call is open in terms of discipline if not in terms of collaborators. The Secondary Data Analysis Initiative is similarly open in terms of discipline, if not in terms of methods. Either way, we don’t have (or I can’t find) data which combines those schemes into a non-directed total.
Nevertheless, caveats and qualifications aside, I think these two tables give us a good sense of the size of the prize available for each discipline. There are approximately 29 funded projects per year (of which 5 open call) for Economics, and 11 per year (of which 2 open call) for Business and Management. Armed with that information and a knowledge of the relative strength of the discipline/school in our own institution, we ought to get a sense of what a realistic target might look like and a sense of how well we’re already doing. Given what we know about our expertise, eminence, and environment, and the figures for funded projects, what ought our share of those projects to be?
We could ask a further question about how those successes are distributed between universities, and about any correlation between successes and (unofficial) subject league tables from the last REF, calculated on the basis of Grade Point Average or Research Power. However, even if that data were available, we’d be looking at small numbers. We do know that the ESRC have done a lot of work on looking at funding distribution and concentration, and their key findings are that:
ESRC peer review processes do not concentrate funding to a degree greater than that apparent in the proposals that request the funding.
ROs which apply infrequently appear to have lower success rates than do those which are more active applicants
In other words, most universities typically have comparable success rates, except that those that apply more often do a little better than average, and those that apply rarely do a little worse. This sounds intuitively right – those who apply more are likely more research-active, at least in the social sciences, and therefore more likely to generate stronger applications. But this is at an overall level, not discipline level.
I’d also note that we shouldn’t only measure success by the number of projects we lead. As grants get larger on average, there’s more research income available for co-investigators on bids led elsewhere. I think a strategy that focuses only on leading bids and being lead institution neglects the opportunities offered by being involved in strong bids led by world class researchers based elsewhere. I’m sure it’s not unusual for co-I research income to exceed PI income for academic units.
I’ve not made any comment about the different success rates for different disciplines. I’ve written about this already for many of the years covered by the full data (though Alex Hulkes has done this far more effectively over the last few years, having the benefit of actual data skills) and I don’t really want to cover old ground again. The same disparities continue much as before. Perhaps GCRF will provide a much-needed boost for Education research (or at least the international aspects) and ISCF for management and business research.
All funding schemes have a summary section as an essential part of the application form. On the UK research councils’ Je-S form, the instruction is to “[d]escribe the proposed research in simple terms in a way that could be publicised to a general audience”.
But the summary is not just about publicising your research. The summary also:
primes your reader’s expectations and understanding of your project
helps the funder to identify suitable experts to review your application
gives the reviewer a clear, straightforward, and complete overview of your project
helps your nominated funding panel introducer to summarise your application for the rest of the decision-making panel
acts as a prompt for the other panel members to recall your application, which they may have only skim-read
can be used once your project is successful, for a variety of purposes including ethics review; participant recruitment; impact work.
There are three main ways to make a mess of your summary.
1. Concentrating on the context – writing an introduction, not a summary.
I’ve written before about using too much background or introductory material (what I describe as ‘the Star Wars error’) but it’s a particular problem for a summary. The reader needs some context and background, but if it’s more than a sentence or two, it’s probably too much. If you don’t reach the “this research will” tipping point by a third of the way through (or worse, even later), there’s too much background.
2. Writing to avoid spoilers – writing a blurb not a precis
I really admire the film editors who produce movie trailers: they capture the essence of the film while minimising spoilers. However, while a trailer for The Sixth Sense, The Usual Suspects, or The Crying Game should omit certain key elements, a project summary needs to include all of them. An unexpected fifth-act twist is great for film fans, but not for reviewers. Springing a dramatic twist on them by adding a hitherto unheralded extra research question or work package is more likely to earn you bafflement and a poor review than an Oscar.
3. Ignoring Plain English
The National Institute for Health Research’s Research for Patient Benefit form asks for a “Plain English summary of research”. As a former regional panel member, I have read many applications, including some great examples of Plain English summaries of very complex projects. I have also read applications from teams that have not tried at all, and from those whose commitment to Plain English lasts for the first three paragraphs before they lapse back into jargon.
Writing in Plain English is hard. It involves finding a way to forget how you usually communicate with colleagues, and putting yourself in the position of someone who knows a fraction of what you do – without dumbing down or patronising. If it weren’t hard to write in Plain English, we wouldn’t need the expert shorthand of specialist language, which is usually created to simplify complicated concepts and facilitate clear and concise communication among colleagues.
Very few people can write their own Plain English summary. It’s something you probably need help with, and your friendly neighbourhood research development officer might be well placed to do this – and might even draft it for you.
Incidentally, with NIHR schemes, beware not only of using specialist language, but also of using higher-reading-level vocabulary and expressions when simpler ones will do. There’s no need for a superabundance of polysyllabic terminology. The Leverhulme Trust offers some useful guidance on writing for the ‘lay reader’.
When to write it
Should you write the summary when you start the application, or when you’ve finished it? Ideally both.
You should sketch out your summary when you start writing – if you can’t produce a bullet point summary of your project you’re probably not ready to write it up as an application. Save plenty of time at the very end to rework it in the light of your completed project. Above all, get as much outside input on your summary as you can. It is the most important part of the application, and well worth your time and trouble.