An applicant’s guide to Full Economic Costing

A version of this article first appeared in Funding Insight in July 2019 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com

You’re applying for UK research council funding and suddenly you’re confronted with massive overhead costs. Adam Golberg tries to explain what you need to know.

Trying to explain Full Economic Costing is not straightforward. For current purposes, I’ll be assuming that you’re an academic applying for UK Research Council funding; that you want to know enough to understand your budget; and that you don’t really want to know much more than that.

If you do already know a lot about costing or research finances, be warned – this article contains simplifications, generalisations, and omissions, and you may not like it.

What are Full Economic Costs, and why are they taking up so much of my budget?

Full Economic Costs (fEC) are paid as part of UK Research and Innovation grants to cover a fair share of the wider costs of running the university – the infrastructure that supports your research. There are a few different cost categories, but you don’t need to worry about the distinctions.

Every UK university calculates its own overhead rates using a common methodology. I’m not going to try to explain how this works, because (a) I don’t know; and (b) you don’t need to know. Most other research funders (charities, EU funders, industry) do not pay fEC for most of their schemes. However, qualifying peer-reviewed charity funding does attract a hidden overhead of around 19% through QR funding (the same source as REF funding). But it’s so well hidden that a lot of people don’t know about it. And that’s not important right now.

How does fEC work?

In effect, this methodology produces a flat daily overhead rate to be charged relative to academic time on your project. This rate is the same for the time of the most senior professor and the earliest of early career researchers.

One effect of this is to make postdoc researchers seem proportionally more expensive. Senior academics cost more because of higher employment costs (salary etc.), but the overheads generated by both will be the same. Don’t be surprised if the overheads generated by a full-time researcher are greater than her employment costs.
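To make the effect concrete, here’s a toy sketch of the arithmetic. All the figures are invented for illustration – real daily rates vary by institution and are calculated centrally, not by the applicant.

```python
# A toy illustration of a flat daily overhead rate (all figures invented;
# real rates vary by institution and are set centrally).

DAILY_OVERHEAD_RATE = 350  # GBP per day of academic time (hypothetical)

project_staff = {
    "Senior professor":   600,  # daily employment cost, GBP (hypothetical)
    "Postdoc researcher": 250,  # daily employment cost, GBP (hypothetical)
}

for role, daily_cost in project_staff.items():
    days = 100  # days costed on the project
    employment = days * daily_cost
    overheads = days * DAILY_OVERHEAD_RATE  # same rate regardless of seniority
    print(f"{role}: employment £{employment:,}, overheads £{overheads:,}")
```

On these made-up numbers the postdoc’s overheads (£35,000) exceed her employment costs (£25,000), while the professor’s (£35,000 on £60,000) do not.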

All fEC costs are calculated at today’s rates. Inflation and increments will be added later to the final award value.

Do we have to charge fEC overheads?

Yes. This is a methodology that all universities use to make sure that research is funded properly, and there are good arguments for not undercutting each other. Rest assured that everyone – including your competitors – is playing by the same rules and ends up with broadly comparable rates. Reviewers are not going to be shocked by your overhead costs compared to rival bids. Your university is not shooting itself (or you) in the foot.

There are fairness reasons not to waive overheads. The point of Research Councils is to fund the best individual research proposals regardless of the university they come from, while the REF (through QR) funds broad, sustained research excellence based on historical performance. If we start waiving overheads, wealthier universities will have an unfair advantage, as they can afford to waive while others drown.

Further, the budget allocations set by funders are decided with fEC overheads in mind. They’re expecting overhead costs. If your project is too expensive for the call, the problem is with your proposal, not with overheads. Either it contains activities that shouldn’t be there, or there’s a problem with the scope and scale of what you propose.

However, there are (major) funding calls where “evidence of institutional commitment” is expected. This could include a waiver of some overheads, but more likely it will be contributions in kind – some free academic staff time, a PhD studentship, new facilities, a separate funding stream for related work. Different universities have different policies on co-funding and it probably won’t hurt to ask. But ask early (because approval is likely to be complex) and have an idea of what you want.

What’s this 80% business?

This is where things get unnecessarily complicated. Costs are calculated at 100% fEC but paid by the research councils at 80%, leaving the remaining 20% of costs to be covered by the university. Fortunately, on most projects there’s enough money from overheads to cover the missing 20% of direct costs. However, if you have a lot of non-pay costs and relatively little academic staff time, check with your costings team that the project is still affordable.
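Here’s a minimal sketch of that arithmetic, again with invented figures:

```python
# A minimal sketch of the 80% fEC arithmetic (all figures invented).

direct_costs = 200_000  # staff, travel, consumables (hypothetical, GBP)
overheads    = 150_000  # overheads charged on academic time (hypothetical, GBP)

full_economic_cost = direct_costs + overheads    # costed at 100% fEC
funder_pays        = 0.80 * full_economic_cost   # research council pays 80%
university_covers  = full_economic_cost - funder_pays

print(f"100% fEC:          £{full_economic_cost:,.0f}")  # £350,000
print(f"Funder pays (80%): £{funder_pays:,.0f}")         # £280,000
print(f"University covers: £{university_covers:,.0f}")   # £70,000
```

On these numbers the funder’s 80% (£280,000) comfortably exceeds the direct costs (£200,000), so the overhead income absorbs the shortfall. If overheads were small relative to non-pay costs, that cushion could disappear – hence the advice to check affordability.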

Why 80%? In around 2005 it was deemed ‘affordable’ – a compromise figure intended to make a significant contribution to university costs but without breaking the bank. Again, you don’t need to worry about any of this.

Can I game the fEC system, and if so, how?

Academic time is what drives overheads, so reducing academic time reduces overheads. One way to do this is to think about whether you really need as much researcher time on the project. If you really need to save money, could contracts finish earlier or start later in the project?

Note that non-academic time (project administrators, managers, technicians) does not attract overheads, and so is good value for money under this system. If some of the tasks you’d like your research associate to do are project management/administration tasks, your budget will go further if you cost in administrative time instead.
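To see the size of the effect, compare the same number of days costed as researcher time and as administrator time – a rough sketch with invented figures:

```python
# Why administrative time stretches a budget further under fEC.
# Overheads are charged on academic time only; all figures invented.

DAILY_OVERHEAD_RATE = 350       # GBP per day of academic time (hypothetical)
researcher_daily_cost = 250     # daily employment cost, GBP (hypothetical)
administrator_daily_cost = 180  # daily employment cost, GBP (hypothetical)

days = 50
researcher_total = days * (researcher_daily_cost + DAILY_OVERHEAD_RATE)
administrator_total = days * administrator_daily_cost  # no overheads charged

print(f"{days} days of researcher time:    £{researcher_total:,}")     # £30,000
print(f"{days} days of administrator time: £{administrator_total:,}")  # £9,000
```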

However, if your final application has unrealistically low amounts of academic time and/or costs in administrators to do researcher roles, the panel will conclude that either (a) you don’t understand the resource implications of your own proposal; or (b) a lack of resources means the project risks being unable to achieve its stated aims. Either way, it won’t be funded. Funding panels are especially alert for ‘salami projects’, which include lots of individual co-investigators for thin slivers of time in which the programme of research cannot possibly be completed, and for undercooked projects which put too much of a burden on too little postdoc researcher time. As mentioned earlier, if the project is too big for the call budget, the problem is with your project.

The best way to game fEC is not to worry about it. If you have support with your research costings, you’ll be working with someone who can cost your application and advise you on where and how it can be tweaked and what costs are eligible. That’s their job – leave it to them, trust what they tell you, and use the time saved to write the rest of the application.

Thanks to Nathaniel Golden (Nottingham Trent) and Jonathan Hollands (University of Nottingham) for invaluable comments on earlier versions of this article. Any errors that remain are my own.

Setting Grant Getting Targets in the Social Sciences

I’m writing this in the final week of my current role as Research Development Manager (Social Sciences) at the University of Nottingham before I move to my new role as Research Development Manager (Research Charities) at the University of Nottingham. This may or may not change the focus of this blog, but I won’t abandon the social sciences entirely – not least because I’m stuck with the web address.


I’ve been thinking about strategies and approaches to research funding, and the place and prioritisation of applying for research grants in academic structures. It’s good for institutions to be ambitious in terms of their grant-getting activities. However, these ambitions need to have at least a nodding acquaintance with:
(a) the actual amount of research funding historically available to any given discipline; and
(b) the chances of any given unit, school, or individual competing successfully for that funding, given the strength of the competition.

To use a football analogy, if I want my team to get promotion, I should moderate my expectations in the light of how many promotion places are available, and how strong the likely competition for those limited spots will be. In both cases, we want to set targets that are challenging, stretching, and ambitious, but which are also realistic and informed by the evidence.

How do we do that? Well, in a social science context, a good place to start is the ESRC success rates, and other disciplines could do worse than take a similar approach with their most relevant funding council. The ESRC produce quite a lot of data and analysis on funding and success rates, and Alex Hulkes of the ESRC Insights team writes semi-regular blog posts. Given the effort put into creating and curating this information, it seems only right that we use it to inform our strategies. This level of transparency is a huge (and very welcome) change from previous practices of very limited information being rather hidden away. Obvious caveats: the ESRC is by no means the only funder in town for the social sciences, but they’ve got the deepest pockets and offer the best financial terms. Another (and probably better) way would be to compare HESA research income stats, but let’s stick to the ESRC for now.

The table below shows the running three-year total (2015/16 to 2017/18) and the number of applications for each discipline across all calls, plus the total for the period 2011/12 to 2017/18. You can access the data for yourself on the ESRC web page. This data is linked as ‘Application and success rate data (2011-12 to 2017-18)’ and was published in ODS format in May 2018. For ease of reading I’ve hidden the results from individual years.

Lots of caveats here. Unsuccessful outline proposals aren’t included (as no outline application leads directly to funding), but ‘office rejects’ (often for eligibility reasons) are. The ‘core discipline’ of each application is taken into account – secondary disciplines are not. The latest figures here are from 2017-2018 (financial year), so there’s a bit of a lag – in particular, the influence of the Global Challenges Research Fund (GCRF) or Industrial Strategy Challenge Fund (ISCF) will not be fully reflected in these figures. I think the ‘all data’ figures may include now-defunct schemes such as the ESRC Seminar Series, though I think Small Grants had largely gone by the start of the period covered by these figures.

Perhaps most importantly, because these are the results for all schemes, they include targeted calls which are rarely open to all disciplines equally. Fortunately, the ESRC also publishes similar figures for their open call (Standard) Research Grants scheme for the same time period. Note that (as far as I can tell) the data above includes the data below, just as the ‘all data’ column (which goes back to 2011/12) also includes the three-year total.

This table is important because the Research Grants Scheme is bottom-up, open-call, and open to any application that’s at least 50% social sciences. Any social science researcher could apply to this scheme, whereas directed calls will inevitably appeal only to a subset. These are the chances/success rates for those whose work does not fit squarely into a directed scheme, and could arguably be regarded as a more accurate measure of disciplinary success rates. It’s worth noting that a specific call that’s very friendly to a particular discipline is likely to boost that discipline’s successes, but may decrease its success rate if it attracts a lot of bids. It’s also possible that major targeted calls that are friendly to a particular discipline may result in fewer bids to the open call.

To be fair, there are a few other regular ESRC schemes that are similarly open and should arguably be included if we wanted to look at the balance of disciplines and what a discipline target might look like. The New Investigator Scheme is open in terms of academic discipline, if not in time-since-PhD, and the Open Research Area call is open in terms of discipline if not in terms of collaborators. The Secondary Data Analysis Initiative is similarly open in terms of discipline, if not in terms of methods. Either way, we don’t have (or I can’t find) data which combines those schemes into a non-directed total.

Nevertheless, caveats and qualifications aside, I think these two tables give us a good sense of the size of the prize available for each discipline. There are approximately 29 funded projects per year (of which 5 through the open call) for Economics, and 11 per year (of which 2 open call) for Business and Management. Armed with that information and a knowledge of the relative strength of the discipline/school in our own institution, we ought to get a sense of what a realistic target might look like and a sense of how well we’re already doing. Given what we know about our expertise, eminence, and environment, and the figures for funded projects, what ought our share of those projects be?
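As a back-of-the-envelope sketch of that target-setting arithmetic – the award counts come from the tables above, but the institutional ‘share’ is invented and would need grounding in your own data:

```python
# Back-of-the-envelope target setting from ESRC award counts.
# Award counts per year are from the tables above; the assumed
# national share for our institution is purely hypothetical.

awards_per_year = {
    "Economics": 29,
    "Business and Management": 11,
}

assumed_national_share = 0.05  # hypothetical: our realistic slice of the pot

for discipline, awards in awards_per_year.items():
    target = awards * assumed_national_share
    print(f"{discipline}: a target of ~{target:.1f} ESRC awards per year")
```

Even a fairly optimistic share of the national pot implies a target of one or two awards a year rather than a dozen, which is the point about keeping ambitions on nodding terms with the evidence.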

We could ask a further question about how those successes are distributed between universities, and about any correlation between successes and (unofficial) subject league tables from the last REF, calculated on the basis of Grade Point Average or Research Power. However, even if that data were available, we’d be looking at small numbers. We do know that the ESRC have done a lot of work on looking at funding distribution and concentration, and their key findings are that:

ESRC peer review processes do not concentrate funding to a degree greater than that apparent in the proposals that request the funding.

ROs which apply infrequently appear to have lower success rates than do those which are more active applicants

In other words, most universities typically have comparable success rates, except that those that apply more often do a little better than average and those who apply rarely do a little worse. This sounds intuitively right – those who apply more are likely more research-active, at least in the social sciences, and therefore more likely to generate stronger applications. But this is at an overall level, not discipline level.

I’d also note that we shouldn’t only measure success by the number of projects we lead. As grants get larger on average, there’s more research income available for co-investigators on bids led elsewhere. I think a strategy that focuses only on leading bids and being lead institution neglects the opportunities offered by being involved in strong bids led by world class researchers based elsewhere. I’m sure it’s not unusual for co-I research income to exceed PI income for academic units.

I’ve not made any comment about the different success rates for different disciplines. I’ve written about this already for many of the years covered by the full data (though Alex Hulkes has done this far more effectively over the last few years, having the benefit of actual data skills) and I don’t really want to cover old ground again. The same disparities continue much as before. Perhaps GCRF will provide a much-needed boost for Education research (or at least the international aspects) and ISCF for management and business research.

Maybe.

Top application tips for postdoc fellowships in the social sciences

A version of this article first appeared in Funding Insight in June 2018 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com

Post-doctoral or early career research fellowships in the social sciences have low success rates and are scarcely less competitive than academic posts. But if you have a strong proposal, at least some publications, realistic expectations and a plan B, applying for one of these schemes can be an opportunity to firm up your research ideas and make connections.

Reality check

If you’re thinking of applying for a postdoc or early career social science fellowship, you should ask yourself the following:

  • Are you likely to be one of the top (say) six or seven applicants in your academic discipline?
  • Does your current track record demonstrate this, or at least a trajectory towards it?
  • Is applying for a Fellowship the best use of your time?

There’s a lot of naivety about the number of social science fellowships available and the competition for them. Perhaps some PhD supervisors paint too rosy a picture, perhaps it’s applicant wishful thinking, or perhaps the phrasing of some calls understates what’s required of a competitive proposal. But the reality is that Postdoc Fellowships in the social sciences are barely less competitive than lectureships. Competitive pressures mean that standards are driven sky high and demand exceeds supply by a huge margin.

The British Academy has a success rate of around 5%, with 45 Fellowships across arts, humanities, and social sciences. The Leverhulme Trust success rate is 14%, with around 100 Fellowships across all the disciplines they support (i.e. nearly all). The ESRC scheme is new – no success rates yet – but it will support 30-35 social science Fellowships. Marie Curie Fellowships are still available, but require relocating to another European country. There are the new UKRI Future Leaders Fellowships, which will fund 100 per call, but that’s across all subjects, and these are very much ‘future leader’ rather than ‘postdoc’ calls. Although some institutions have responded to a lack of external funding by establishing internal schemes – such as the Nottingham Research Fellowships – standards and expectations are also very, very high.

That’s not to say that you shouldn’t apply – Fellowships do exist, applicants do get them – but you need to take a realistic view of your chances of success and decide about the best use of your time. If you’re writing a Fellowship application, you’re not writing up a paper, or writing a job application.

Top Tips for applications

  • Credible applicants need their own (not their supervisor’s) original, detailed and significant Fellowship project. Doing ‘more of the same’ is unlikely to be competitive – it’s fine to want to mine your PhD for publications and for there to be a connection to the new programme of work, but a Fellowship is really about the next stage.
  • If you don’t have any publications, you have little to make you stand out, and therefore little to no chance. Like all grant applications, this is a contest, not a test. It’s not about being sufficiently promising to be worth funding (most applicants are), it’s about presenting a stronger and more compelling case than your rivals.
  • If you have co-authored publications, make your contribution clear. If you have co-written a paper with your supervisor, make sure reviewers can tell whether (a) it is your work, with supervisory input; or (b) it is your supervisor’s work, for which you provided research assistance.
  • Give serious consideration to moving institution unless (a) you’re already at the best place for what you want to do; or (b) your personal circumstances prevent this. Moving institution doubles your network, may give you a better research environment, and gives you a fresh start where you’re seen as an early career researcher, not as the PhD student you used to be. If you’re already at the best place for your work or you can’t move, make the case. Funders are becoming a bit less dogmatic on this point and more aware that not everyone can relocate, but don’t assume that staying put is the best idea.
  • Don’t neglect training and development plans. Who would you like to meet or work with, what would you like training in, what extra research and impact skills would you like to have? Fellowships are about producing the researcher as well as the research.
  • Success rates are very low. Don’t get your hopes up, and don’t put all your eggs in one basket and neglect other opportunities.
  • Much of the rest of my advice on research grant writing applies to Fellowships too.

Even if you’re ultimately unsuccessful, you can also use the application as a vehicle to support the development of your post-PhD research agenda. By expressing a credible interest in applying for a Fellowship at an institution that’s serious about research, you will get feedback on your research plans from senior academics and potential mentors and from research development staff. It also forces you to put your ideas down on paper in a coherent way. Whether you apply for a Fellowship or not, you’ll need this for the academic job market.

Eight tips for attending a research call information and networking day

A version of this article first appeared in Funding Insight in July 2018 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com

‘School of Athens’ by Raphael. Aristotle is willing to join Plato’s project as co-I, but only if his research group gets at least two FT research fellows. Unfortunately, Plato’s proposal turns out to be merely a pale imitation of the perfect (JeS) form and isn’t invited to full application stage.

Many major research funding calls for substantial UKRI investments now include one or more workshops or events. These events typically aim:

(a) to publicise the call and answer questions from potential bidders; and
(b) to facilitate networking and to develop consortia, often including non-academic partners.

There’s an application process to gauge demand and to allocate or ration places (if required) between different disciplines and institutions. These events are distinct from ‘sandpit’ events – which have a more rigorous and competitive application process and where direct research funding may result. They’re also distinct from scoping meetings, which define and shape future calls. Some of the advice below might be applicable for those events, but my experience is limited to the call information day.

I’ve attended one such meeting and I found it very useful in terms of understanding the call and the likely competition for funding. While I’ve attended networking and idea generation events before, this was my first UKRI event, and I’ve come up with a few hints and tips that might help other first time attendees.

  1. Don’t send Research Development staff. People like me are more experienced at identifying similarities/differences in emphasis in calls, but we can only go so far in terms of networking and representing academics. However well briefed, there will come a point at which we can’t answer further questions because we’re not academics. Send an academic if you possibly can.
  2. Hone your pitch. A piece of me dies inside every time I use a phrase like “elevator pitch”, but you’re going to be introducing yourself, your team, and your ideas many, many times during the day. Prepare a short version and a long version of what you want to say. It doesn’t have to be crafted word-for-word, but prepare the structure of a clear, concise introduction that you can comfortably reel off.
  3. Be clear about what you want and what you’re looking for. If you’re planning on leading a bid, say so. If you’re looking to add your expertise on X to another bid TBC, say so. If you’re not sure yet, say so. I’m not sure what possible advantage could be gained by being coy. You could finesse your starting position by talking of “looking to” or “planning to” lead a bid if you want, but it’s much better to be clear.
  4. Don’t just talk to your friends. Chances are that you’ll have friends/former colleagues at the event who you may not see as often as you’d like, but resist spending too much time in your comfort zone. It’ll limit your opportunities and will make you appear cliquey. Consider arranging to meet before or after the event, or at another time to catch up properly.
  5. Be realistic about what’s achievable. I’m persuadable that these events can and do shape the composition/final teams of some bids, but I wonder whether any collaboration starting from ground level at one of these events has a realistic chance of success.
  6. Do your homework. Most call meetings invite delegates to submit information in advance, usually a brief biog and a statement of research interests. It’s worth taking time to do this well, and having a read of the information submitted by others. Follow up with web searches about potential partners to find out more about their work, follow them on Twitter, and find out what they look like if you don’t already know. It’s not stalking if it’s for research collaboration.
  7. Brush up your networking skills. If networking is something you struggle with, have a quick read of some basic networking guides. The best tip I was ever given: regard networking as a process of identifying “how can I help these people?” rather than “how can I use these people to my advantage?” and it becomes much easier. Also, I find “I think I follow you on Twitter” an effective icebreaker.
  8. Don’t expect any new call info. There will be a presentation and Q&A, but don’t expect major new insights. As not everyone can make these events, funders avoid giving any unfair advantages. Differences in nuance and emphasis can emerge in presentations and through questions, but don’t expect radical additional insights or secret insider knowledge.

If your target call has an event along these lines, you should make every effort to attend. Send your prospective PI if you can, another academic if not, and your research development staff only if you must. Do a bit of homework… be clear about what you want to achieve, prepare your pitch, and identify the people you want to talk to, and you’ll have a much better chance of achieving your goals.

Applying for research funding – is it worth it? Part II – Costs and Benefits

A version of this article first appeared in Funding Insight on 9th March 2018 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com

“Just when I thought I was out, they pull me back in!”

My previous post posed a question about whether applying for research funding was worth it or not, and concluded with a list of questions to consider to work out the answer. This follow-up is a list of costs and benefits associated with applying for external research funding, whether successful or unsuccessful. Weirdly, my list appears to contain more costs than benefits for success and more benefits than costs for failure, but perhaps that’s just me being contrary…

If you’re successful:

Benefits….

  • You get to do the research you really want to do
  • In career terms, whether for moving institution or internal promotion, there’s a big tick in the box marked ‘external research funding’.
  • Your status in your institution and within your discipline is likely to rise. Bringing in funding via a competitive external process gives you greater external validation, and that changes perceptions – perhaps it marks you out as a leader in your field, perhaps it marks a shift from career young researcher to fulfilling your evident promise.
  • Success tends to beget success in terms of research funding. Deliver this project and any future application will look more credible for it.

Costs…

  • You’ve got to deliver on what you promised. That means all the areas of fudge or doubt or uncertainty about who-does-what need to be sorted out in practice. If you’ve under-costed any element of the project – your time, consumables, travel and subsistence – you’ll have to deal with it, and it might not be much fun.
  • Congratulations, you’ve just signed yourself up for a shedload of admin. Even with the best and most supportive post-award team, you’ll have project management to do: financial monitoring, plus recruitment, selection, and line management of one or more research associates. And it doesn’t finish when the research finishes – thanks to the impact agenda, you’ll probably be reporting on your project via Researchfish for years to come.
  • Every time any comparable call comes round in the future, your colleagues will ask you to give a presentation about your application/sit on the internal sifting panel/undertake peer review. Once a funding agency has given you money, you can bet they’ll be asking you to peer review other applications. Listed as a cost for workload purposes, but there are also a lot of benefits to getting involved in peer reviewing applications, because it’ll improve your own too. Also, the chances are that you benefited from such support/advice from senior colleagues, so pay it forward. But be ready to pay.
  • You’ve just raised the bar for yourself. Don’t be surprised if certain people in research management start talking about your next project before this one is done as if it’s a given or an inevitability.
  • Unless you’re careful, you may not see as much recognition in your workload as you might have expected. Of course, your institution is obliged to make the time promised in the grant application available to you, but unless you’ve secured agreement in advance, you may find that much of this is taken out of your existing research allocation rather than out of teaching and admin. Especially as these days we no longer think of teaching as a chore to buy ourselves out from. Think very carefully about what elements of your workload you would like to lose if your application is successful.
  • The potential envy and enmity of colleagues who are picking up bits of what was your work.

If you’re unsuccessful…

Benefits…

  • The chances are that there’s plenty to be salvaged even from an unsuccessful application. Once you’ve gone through the appropriate stages of grief, there’s a good chance that there’s at least one paper (even if ‘only’ a literature review) in the work that you’ve done. If you and your academic colleagues and your stakeholders are still keen, the chances are that there’s something you can do together, even if it’s not what you ideally wanted to do.
  • Writing an application will force you to develop your research ideas. This is particularly the case for career young researchers, where the pursuit of one of those long-shot Fellowships can be worth it if only to get proper support in developing your research agenda.
  • If you’ve submitted a credible, competitive application, you’ve at least shown willing in terms of grant-getting. No-one can say that you haven’t tried. Depending on the pressures/expectations you’re under, having had a credible attempt at it buys you some licence to concentrate on your papers for a bit.
  • If it’s your first application, you’ll have learnt a lot from the process, and you’ll be better prepared next time. Depending on your field, you could even add a credible unsuccessful application to a CV, or a job application question about grant-getting experience.
  • If your institution has an internal peer review panel or other selection process, you’ve put yourself and your research onto the radar of some senior people. You’ll be more visible, and this may well lead to further conversations with colleagues, especially outside your school. In the past I’ve recommended that people put forward internal expressions of interest even if they’re not sure they’re ready, for precisely this reason.

Costs…

  • You’ve just wasted your time – and quite a lot of time at that. And not just work time… often evenings and weekends too.
  • It’ll come as a disappointment, which may take some time to get over.
  • Even if you’ve kept it quiet, people in your institution will know that you’ve been unsuccessful.

I’ve written two longer pieces on what to do if your research grant application is unsuccessful, which can be found here and here.

“Once more unto the breach” – Should I resubmit my unsuccessful research grant application?

This article first appeared in Funding Insight on 11th May 2017 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com

Should I resubmit my unsuccessful research grant application?

No.

‘No’ is the short answer – unless you’ve received an invitation or steer from the funder to do so. Many funders don’t permit uninvited resubmissions, so the first step should always be to check your funder’s rules and definitions of resubmission with your research development team.

To be, or not to be

That’s not to say that you should abandon your research proposal – more that it’s a mistake to think of your next application on the same or similar topic as a resubmission. It’s much better – if you do wish to pursue it – to treat it as a fresh application and to give yourself and your team the opportunity to develop your ideas. It’s unlikely that nothing has changed between the date of submission and now. It’s also unlikely that nothing could be improved about the underpinning research idea or the way it was expressed in the application.

However, sometimes the best approach is to let an idea go, cut your losses, and avoid the sunk cost fallacy. Onwards and upwards to the next idea. I was recently introduced to the concept of a “negative CV”, which is the opposite of a normal CV, listing only failed grant applications, rejected papers, unsuccessful conference pitches and job market rejections. Even the most eminent scholars have lengthy negative CVs, and there’s no shame in being unsuccessful, especially as success rates are so low. It’s really difficult – you’ve got your team together, you’ve been through the discussions and debates and the honing of your idea and then the grant writing, and then the disappointment of not getting funded. It’s very definitely worth having meetings and discussions to see what can be salvaged and repurposed – publishing literature reviews, continuing to engage with stakeholders and so on. It’s only natural to look for some other avenue for your work, but sometimes it’s best to move on to something else.

Here are two bits of wisdom that are both true in their own way:

  • If at first you don’t succeed, try, try, try again (William Edward Hickson)
  • The definition of insanity is doing the same thing over and over but expecting different results (disputed – perhaps Einstein or Franklin, but I reckon US Narcotics Anonymous)

So what should you do? What factors should you consider in deciding whether to rise from the canvas like Rocky, or instead emulate Elsa and Let It Go?

What being unsuccessful means… and what it doesn’t

As a Canadian research council director once said, research funding is a contest, not a test. Research funding is a limited commodity, like Olympic medals, jobs, and winning lottery tickets. It’s not an unlimited commodity like driving licences or PhDs, which everyone who reaches the required standard can obtain. Sometimes I think researchers confuse the two – if the driving test examiner says I failed on my three-point turn, then if I get it right next time (and make no further mistakes) I’ll pass. But even if I respond adequately to all of the points made in the referees’ comments, there’s still no guarantee I’ll get funded. The quality of my driving in the morning doesn’t affect your chances of passing your test in the afternoon, but if too many applications are better than yours, you won’t get funded. And just as many recruitment exercises produce more appointable candidates than posts, so funding calls attract far more fundable applications than the funds available.

Sometimes referees’ comments can be misinterpreted. Feedback might list the real or perceived faults with the application, but (once the fundamentally flawed have been excluded) ultimately it’s a competition about significance. What significance means is defined by the funder and the scheme and doesn’t necessarily mean impact – it could be about academic significance, contribution to the field and so on.

As a public panel member for an NIHR scheme I’ve seen this from the inside – project proposals which are technically competent, sensible and feasible. Yet because they fail to articulate their significance, or because their research challenge just isn’t that significant an issue, they’re not competitive against similarly competent applications taking on much more significant and important research challenges, and they don’t get funded. Feedback is given which would have improved the application, but simply addressing that feedback will seldom make it any more competitive.

When major Research Centre calls come out, I often have conversations with colleagues who have great ideas for perfectly formed projects which unfortunately I don’t think are significant enough to be one of three or four funded across the whole of social sciences. Ideally the significance question, the “so what/who cares?” question should be posed before applying in the first place, but you should definitely look again at what was funded and ask it again of your project before considering trying to rework it.

Themed Calls Cast a Long Shadow

One of the most dispiriting grant rejection experiences is rejection from a targeted call which seemed perfect. It’s not like an open call where you have to compete with rival bids on significance from all across your research council’s remit – rather, the significance is already recognised.

Yet the reality is that narrower calls often have similarly low success rates. Although they’re narrower, everyone who can pile in, does pile in. And deciding what to do next is much harder. Themed calls cast a long shadow – if as a funder I’ve just made a major investment in field X through niche call Y, I’m not sure how I’m going to feel about an X-related application coming back in through the open call route. Didn’t we just fund a lot of this stuff? Should we fund more, especially if an idea like this was unsuccessful last time? Shouldn’t we support something else? And I think this effect might be true even with different funders who will be aware of what’s going on elsewhere. If a tranche of projects in your research area have been funded through a particular call, it’s going to be very difficult to get investment through any other scheme anytime soon.

Switching calls, Switching funders

An exception to this might be the Global Challenges Research Fund or perhaps other areas where there’s a lot of funding available (relatively speaking) and a number of different calls with slightly different priorities. Being unsuccessful with an application to an open call or a broader call and then looking to repurpose the research idea in response to a narrower themed call is more likely to pay off than the other way round, moving from a specific call to a general one. But even so, my advice would be to ban the “r” word entirely. It’s not a ‘resubmission’, it’s an entirely new application written for a different funding scheme with different priorities, even if some of the underlying ideas are similar.

This goes double when it comes to switching funders. A good way of wasting everyone’s time is trying to crowbar a previously unsuccessful application into the format required by a different funder. Different funders have different priorities and different application procedures, formats and rules, and so you must treat it as a fresh application. Not doing so is a bit like getting out some love letters you sent to a former paramour, changing the name at the top, and reposting them to the current object of your affections. Neither will end well.

The Leverhulme Trust are admirably clear on this point: they’re “keen to avoid assuming the role of ‘funder of last resort’; that is, of routinely providing support for proposals which have been fully matched to the requirement of another funding agency, but have failed to win support on the grounds of either lack of quality or insufficient available funds.” If you’re going to apply to the Leverhulme Trust, for example, make it a Leverhulme-y application, and that means shifting not just the presentational style but also the substance of what you’re proposing.

Whatever the change, forget any notion of resubmission if you’re taking an idea from one call to another. Yes, you may be able to reuse some of your previous materials, but if you submit something clearly written for another call with the crowbar marks still visible, you won’t get funded.

The Five Stages of Grant Application Failure

I’m reluctant to draw this comparison, but I wonder if responding to grant application rejection is a bit like the Kübler-Ross model of grief (denial, anger, bargaining, depression, and acceptance). Perhaps one question to ask yourself is whether your resubmission plans are coming from a position of acceptance – in which case fine, but don’t regard it as a resubmission – or are part of the bargaining stage. In which case… perhaps take a little longer to decide what to do.

Further reading: What to do if your grant application is unsuccessful. Part 1 – What it Means and What it Doesn’t and Part 2 – Next Steps.

‘Unimaginative’ research funding models and picking winners

XKCD 1827 – Survivorship Bias (used under Creative Commons Attribution-NonCommercial 2.5 License)

Times Higher Education recently published an interesting article by Donald Braben, endorsed by 36 eminent scholars including a number of Nobel laureates. They criticise “today’s academic research management” and claim that, as an unforeseen consequence, “exciting, imaginative, unpredictable research without thought of practical ends is stymied”. The article fires off somewhat scattergun criticism of the usual bêtes noires: the inherent conservatism of peer review; the impact agenda and the lack of funding for blue skies research; and grant application success rates.

I don’t deny that there’s a lot of truth in their criticisms, but when it comes to research policy and deciding how best to use limited resources… it’s all a bit more complicated than that.

Picking Winners and Funding Outsiders

Look, I love an underdog story as much as the next person. There’s an inherent appeal in the tale of the renegade scholar, the outsider, the researcher who rejects the smug, cosy consensus (held mainly by old white guys) and whose heterodox ideas – considered heretical nonsense by the establishment – are ultimately triumphantly vindicated. Who wouldn’t want to fund someone like that? Who wouldn’t want research funding to support the most radical, most heterodox, most risky, most amazing-if-true research? I think I previously characterised such researchers as a combination of Albert Einstein and Jimmy McNulty from ‘The Wire’, and it’s a really seductive picture. Perhaps this is part of the reason for the MMR fiasco.

The problem is that the most radical outsiders are functionally indistinguishable from cranks and charlatans. Are there many researchers with a more radical vision than the homeopath, whose beliefs imply not only that much of modern medicine is misguided, but that so is our fundamental understanding of the physical laws of the universe? Or the anti-vaxxers? Or the Holocaust deniers?

Of course, no-one is suggesting that these groups be funded, and, yes, I’ll admit it’s a bit of a cheap shot aimed at a straw target. But even if we can reliably eliminate the cranks and the charlatans, we’ll still be left with a lot of fringe science. An accompanying THE article quotes Dudley Herschbach, joint winner of the 1986 Nobel Prize for Chemistry, as saying that his research was described as being at the “lunatic fringe” of chemistry. How can research funders tell the difference between lunatic ideas with promise (both interesting-if-true and interesting-even-if-not-true) and lunatic ideas that are just… lunatic? If it’s possible to pick winners, then great. But if not, it sounds a lot like buying lottery tickets and crossing your fingers. And once we’re into the business of applying a greater degree of scrutiny in picking winners, we’re back to having peer review again.

One of the things that struck me about much of the history of science is that there are many stories of people who believe they are right – in spite of the scientific consensus and in spite of the state of the evidence available at the time – but who proceed anyway, heroically ignoring objections and evidence, until ultimately vindicated. We remember these people because they were ultimately proved right, or rather, their theories were ultimately proved to have more predictive power than those they replaced.

But I’ve often wondered about such people. They turned out to be right, but were they right because of some particular insight, or were they right because they were lucky in that their particular prejudice happened to line up with the actuality? Was it just that the stopped clock is right twice per day? Might their pig-headedness equally well have carried them along another (wrong) path entirely, leaving them to be forgotten as just another crank? And just because someone is right once, is there any particular reason to think that they’ll be right again? (Insert obligatory reference to Newton’s dabblings with alchemy here). Are there good reasons for thinking that the people who predicted the last economic crisis will also predict the next one?

A clear way in which luck – interestingly rebadged as ‘serendipity’ – is involved is through accidental discoveries. Researchers are looking at X when… oh look at Y, I wonder if Z… and before you know it, you have a great discovery which isn’t what you were after at all. Free packets of post-it notes all round. Or when ‘blue skies’ research which had no obvious practical application at the time becomes a key enabling technology or insight later on.

The problem is that all these stories of serendipity, of surprise impact, and of radical outsider researchers are examples of lotteries in which history only remembers the winning tickets. Through an act of serendipity, XKCD published a cartoon illustrating this point nicely (see above) just as I was thinking about these issues.

But what history doesn’t tell us is how many lottery tickets research funding agencies have to buy in order to have those spectacular successes. And just as importantly, whether or not a ‘lottery ticket’ approach to research funding will ultimately yield a greater return on investment than a more ‘unimaginative’ approach: the tired old processes of peer review undertaken by experts in the relevant field, followed by prioritisation decisions taken by a panel of eminent scientists drawn from across the funder’s remit. And of course, a great success achieved through this method – having a great idea, having the greatness of the idea acknowledged by experts, and then carrying out the research – is a much less compelling narrative or origin story, probably to the point of invisibility.

A mixed ecosystem of conventional and high risk-high reward funding streams

I think there would be broad agreement that the research funding landscape needs a mixture of funding methods and approaches. I don’t take Braben and his co-signatories to be calling for wholesale abandonment of peer review, of themed calls around particular issues, or even of the impact agenda. And while I’d defend all those things, I similarly recognise merit in high risk-high reward research funding, and in attempts by major funders to try to address the problem of peer review conservatism. But how do we achieve the right balance?

Braben acknowledges that “some agencies have created schemes to search for potentially seminal ideas that might break away from a rigorously imposed predictability”, and we might include the European Research Council and the UK Economic and Social Research Council as examples of funders who’ve tried to do this, at least in some of their schemes. The ESRC in particular abandoned traditional peer review on one scheme in favour of a Dragons’ Den-style pitch-to-peers format, and the EPSRC is making increasing use of sandpits.

It’s interesting that Braben mentions British Petroleum’s Venture Research Initiative as a model for a UCL pilot aimed at supporting transformative discoveries. I’ll return to that pilot later, but he also mentions that the one project that scheme funded was later funded by an unnamed “international benefactor”, which I take to be a charity, private foundation, or other philanthropic endeavour rather than a publicly funded research council or comparable organisation. I don’t think this is accidental – private companies have much more freedom to create blue skies research and innovation funding, as long as the rest of the operation generates enough money to pay the bills and enough of their lottery tickets end up winning to keep management happy. Similarly with private foundations, which have near total freedom to operate apart perhaps from charity rules.

But I would imagine that it’s much harder for publicly funded research councils to take these kinds of risks, especially during austerity. (“Sorry Minister, none of our numbers came up this year, but I’m sure we’ll do better next time.”) In a UK context, the Leverhulme Trust – a happy historical accident funded largely through dividend payments from its bequeathed shareholding in Unilever – seeks to differentiate itself from the research councils by styling itself as more open to risky and/or interdisciplinary research, and could perhaps develop further in this direction.

The scheme that Braben outlines is genuinely interesting. Internal only within UCL, a very light touch application process mainly involving interviews/discussion, decisions taken by “one or two senior scientists appointed by the university” – not subject experts, I infer, as they’re the same people for each application. Over 50 applications since 2008 have so far led to one success. There’s no obligation to make an award to anyone, and they can fund more than one. It’s not entirely clear from the article whether the applicant was – as Braben proposes for the kinds of schemes he calls for – “exempt from normal review procedures for at least 10 years. They should not be set targets either, and should be free to tackle any problem for as long as it takes”.

From the article I would infer that his project received external funding after 3 years, but I don’t want to pick holes in a scheme which is only partially outlined and which I don’t know any more about, so instead I’ll talk about Braben’s more general proposal, not the UCL scheme in particular.

It’s a lot of power in a very few hands to give out these awards, and it represents a very large and very blank cheque. While the use of interviews and discussion cuts down on grant writing time, my worry is that a small panel and interview-based decision making may open the door to unconscious bias, and to greater success for more accomplished social operators. Anyone who’s been on many interview panels will probably have experienced fellow panel members making heroic leaps of inference about candidates based on some deep intuition, and the tendency of some people to want to appoint the more confident and self-assured interviewee ahead of a visibly more nervous but far better qualified and more experienced rival. I have similar worries about “sandpits” as a way of distributing research funding – do better social operators win out?

The proposal is for no normal review procedures, and for ten years in which to work, possibly longer. At Nottingham – as I’m sure at many other places – our nearest equivalent scheme is something like a strategic investment fund which can cover research as well as teaching and other innovations. (Here we stray into things I’m probably not supposed to talk about, so I’ll stop). But these are major investments, and there’s surely got to be some kind of accountability during decision-making processes and some sort of stop-go criteria or review mechanism during the project’s life cycle. I’d say that courage to start up some high risk, high reward research project has to be accompanied by the courage to shut it down too. And that’s hard, especially if livelihoods and professional reputations depend upon it – it’s a tough decision for those leading the work and for the funder too. But being open to the possibility of shutting down work implies a review process of some kind.

To be clear, I’m not saying let’s not have more high-risk high-reward curiosity driven research. By all means let’s consider alternative approaches to peer review and to decision making and to project reporting. But I think high risk/high reward schemes raise a lot of difficult questions, not least what the balance should be between lottery ticket projects and ‘building society savings account’ projects. We need to be aware of the ‘survivor bias’ illustrated by the XKCD cartoon above and be aware that serendipity and vindicated radical researchers are both lotteries in which we only see the winning tickets. We also need to think very carefully about fair selection and decision making processes, and the danger of too much power and too little accountability in too few hands.

It’s all about the money, money, money…

But ultimately the problem is that there are a lot more researchers and academics than there used to be, and their numbers – in many disciplines – are determined not by the amount of research funding available nor the size of the research challenges, but by the demand for their discipline from taught-course students. And as higher education has expanded hugely since the days in which most of Braben’s “500 major discoveries” were made, there are just far more academics and researchers than there is funding to go around. That’s especially true given recent “flat cash” settlements. I also suspect that the costs of research are now much higher than they used to be, given both the technology available and the technology required to push further at the boundaries of human understanding.

I think what’s probably needed is a mixed ecology of research funders and schemes. Publicly funded research bodies are probably not best placed to fund risky research because of accountability issues, and perhaps this is a space in which private foundations, research funding charities, and universities themselves are better able to operate.

How useful is reading examples of successful grant applications?

This article is prompted by a couple of Twitter conversations around a Times Higher Education article which quotes Ross Mounce, founding editor of Research Ideas and Outcomes, who argues for open publication at every stage of the research process, including (successful and unsuccessful) grant applications. The article acknowledges that this is likely to be controversial, but it got a few of us thinking about the value of reading other people’s grant applications to improve one’s own.

I’m asked about this a lot by prospective grant applicants – “do you have any examples of successful applications that you can share?” – and while generally I will supply them if I have access to them, I also add substantial caveats and health warnings about their use.

The first and perhaps most obvious worry is that most schemes change and evolve over time, and what works for one call might not work in another. Even if the application form hasn’t changed substantially, funder priorities – both hard priorities and softer steers – may have changed. And even if neither has changed, competitive pressures and improved grant writing skills may well be raising the bar, and an application that got funded – say – three or four years ago might not get funding today. Not necessarily because the project is weaker, but because the exposition and argument would now need to be stronger. This is particularly the case for impact – it’s hard to imagine that many of the impact sections on RCUK applications written in the early days of impact would pass muster now.

The second, and more serious worry, is that potential applicants take the successful grant application far too seriously and far too literally. I’ve seen smart, sensible, sophisticated people become obsessed with a successful grant application and try to copy everything about it, whether relevant or not, as if there was some mystical secret encoded into the text, and any subtle deviation would prevent the magic from working. Things like… the exact balance of the application, the tables/diagrams used or not used (“but the successful application didn’t have diagrams!”), the referencing system, the font choice, the level of technical detail, the choice and exposition of methods, whether there are critical friends and/or a steering group, the number of Profs on the bid, the amount of RA time, the balance between academic and stakeholder impact.

It’s a bit like a locksmith borrowing someone else’s front door key, making as exact a replica as she can, and then expecting it to open her front door too. Or a bit like taking a recipe that you’ve successfully followed and using it to make a completely different dish by changing the ingredients while keeping the cooking processes the same. Is it a bit like cargo cult thinking? Attempting to replicate an observed success or desired outcome by copying everything around it as closely as possible, without sufficient reflection on cause and effect? It’s certainly generalising inappropriately from a very small sample size (often n=1).

But I think – subject to caveats and health warnings – it can be useful to look at previously successful applications from the same scheme. I think it can sometimes even be useful to look at unsuccessful applications. I've changed my thinking on this quite a bit in the last few years; I used to steer people away from them much more strongly. I think they can be useful in the following ways:

  1. Getting a sense of what’s required. It’s one thing seeing a blank application form and list of required annexes and additional documents, it’s another seeing the full beast. This will help potential applicants get a sense of the time and commitment that’s required, and make sensible, informed decisions about their workload and priorities and whether to apply or not.
  2. It also highlights all of the required sections, so no requirement of the application should come as a shock. Increasingly with the impact agenda it’s a case of getting your ducks in a row before you even think about applying, and it’s good to find that out early.
  3. It makes success feel real, and possible, especially if the grant winner is someone the applicant knows, or who works at the same institution. Low success rates can be demoralising, but it helps to know not only that someone, somewhere is successful, but that someone here and close by has been successful.
  4. It does set a benchmark in terms of the state of readiness, detail, thoroughness, and ducks-in-a-row-ness that the attentive potential applicant should aspire to at least equal, if not exceed. Early draft and early stage research applications often have larger or smaller pockets of vaguery and are often held together with a generous helping of fudge. Successful applications should show what’s needed in terms of clarity and detail, especially around methods.
  5. Writing skills. Writing grant applications is a very different skill to writing academic papers, which may go some way towards explaining why the Star Wars error in grant writing is so common. So it’s going to be useful to see examples of that skill used successfully… but having said that, I have a few examples in my library of successes which were clearly great ideas, but which were pretty mediocre as examples of how to craft a grant application.
  6. Concrete ideas and inspiration. Perhaps about how to use social media, or ways to engage stakeholders, or about data management, or other kinds of issues, questions and challenges if (and only if) they’re also relevant for the new proposal.

So on balance, I think reading (funder and scheme) relevant, recent, and highly rated (even if not successful) funding applications can help prospective applicants… provided that they remember that what they're reading and drawing inspiration from is a different application from a different team to do different things for different reasons at a different time.

And not a mystical, magical, alchemical formula for funding success.

Getting research funding: the significance of significance

"So tell me, Highlander, what is peer review?"
“I’m Professor Connor Macleod of the Clan Macleod, and this is my research proposal!”

In an excellent recent blog post, Lachlan Smith wrote about the "who cares?" question that potential grant applicants ought to consider, and that research development staff ought to pose to applicants on a regular basis.

Why is this research important, and why should it be funded? And crucially, why should we fund this, rather than that? In a comment on a previous post on this blog, Jo VanEvery quoted some wise words from a Canadian research funding panel member: "it's not a test, it's a contest". In other words, research funding is not an unlimited good like a driving test or a PhD viva, where there's no limit to how many people can (in principle) succeed. Rather, it's more like a job interview, qualification for the Olympic Games, or the film Highlander – not everyone can succeed. And sometimes, there can be only one.

I’ve recently been fortunate enough to serve on a funding panel myself, as a patient/public involvement representative for a health services research scheme. Assessing significance in the form of potential benefit for patients and carers is a vitally important part of the scheme, and while I’m limited in what I’m allowed to say about my experience, I don’t think I’m speaking out of turn when I say that significance – and demonstrating that significance – is key.

I think there's a real danger when writing – and indeed supporting the writing of – research grant applications that the focus gets very narrow, and the process becomes almost inward looking. It becomes about improving the application internally, writing deeply for subject experts, rather than writing broadly for a panel of people with a range of expertise and experiences. It almost goes without saying that the proposed project must convince the kinds of subject expert who will typically be asked to review it, but even then there's no guarantee that reviewers will know as much as the applicant. In fact, it would be odd indeed if the reviewers and panel members knew more about the topic than the applicant. I'd probably go as far as to say that if you think the referees and reviewers know more than you, you probably shouldn't be applying – though I'm open to persuasion about some early career schemes and some very specific calls on very narrow topics.

So I think it’s important to write broadly, to give background and context, to seek to convince others of the importance and significance of the research question. To educate and inform and persuade – almost like a briefing. I’m always badgering colleagues for what I call “killer stats” – how big is the problem, how many people does it affect, by how much is it getting worse, how much is it costing the economy, how much is it costing individuals, what difference might a solution to this problem make? If there’s a gap in the literature or in human knowledge, make a case for the importance or potential importance in filling that gap.

For blue skies research it’s obviously harder, but even here there is scope for discussing the potential academic significance of the possible findings – academic impact – and what new avenues of research may be opened out, or closed off by a decisive negative finding which would allow effort to be refocused elsewhere. If all research is standing on the shoulders of giants, what could be seen by future researchers standing on the shoulders of your research?

It's hugely frustrating for reviewers when applicants don't do this – when they don't give decision makers the background and information they need to draw informed conclusions about the proposed project. A motivated reviewer with a lighter workload and a role in introducing your proposal may have time to do her own research, but you shouldn't expect this, and she shouldn't have to. That's your job.

It’s worth noting, by the way, that the existence of a gap in the literature is not itself an argument for it being filled, or at least not through large amounts of scarce research funding. There must be a near infinite number of gaps, such as the one that used to exist about the effect of peanut butter on the rotation of the earth – but we need more than the bare fact of the existence of a gap – or the fact that other researchers can be quoted as saying there’s a gap – to persuade.

Oh, and if you do want to claim there's a gap, please check Google Scholar or similar first – reviewers and panel members (especially introducers) may very well do that. And from my limited experience of sitting on a funding panel, there's nothing like one introducer or panel member reeling off a list of studies on a topic where there's supposedly a gap (and which aren't referenced in the proposal) to finish off the chances of an application. I've not seen enthusiasm or support for a project sucked out of the room so completely and so quickly by any other means.

And sometimes, if there aren't killer stats or facts and figures, or if a case for significance can't be made, it may be best either to move on to another idea, or to find a different and cheaper way of addressing the challenge. While it may be a good research idea, a key question before deciding to apply is whether or not the application is competitive for significance, given the likely competition, the scale of the award, the ambition sought by the funder, and the number of projects to be funded. Given the limits to the research funding available, and its increasing concentration into larger grants, there really isn't much funding for dull-but-worthy work which, taken together, adds marginal gains to the sum of human knowledge. I think this is a real problem for research, but we are where we are.

Significance may well be the final decider in research funding schemes that are open to a range of research questions. There are many hurdles which must be cleared before this final decider, and while they're not insignificant, they mainly come down to technical competence and feasibility. Is the methodology not only appropriate, but clearly explained and robustly justified? Does the team have the right mix of expertise? Are the project timescale and deliverables realistic? Are the research questions clearly outlined and consistent throughout? All of these things – and more – are important, but what they do is get you safely through into the final reckoning for funding.

Once all of the flawed or technically unfeasible or muddled or unpersuasive or unclear or non-novel proposals have been knocked out, perhaps at earlier stages, perhaps at the final funding panel stage, what’s left is a battle of significance. To stand the best chance of success, your application needs to convince and even inspire non-expert reviewers to support your project ahead of the competition.

But while this may be the last question, or the final decider between quality projects, it’s one that I’d argue potential grant applicants should consider first of all.

The significance of significance is that if you can’t persuasively demonstrate the significance of your proposed project, your grant application may turn out to be a significant waste of your time.

ESRC success rates 2014/2015 – a quick and dirty commentary

"meep meep"
Success rates. Again.

The ESRC has issued its annual report and accounts for the financial year 2014/15, and they don't make good reading. As predicted by Brian Lingley and Phil Ward back in January on the basis of the figures from the July open call, the success rate is well down – to 13% – from the 25% I commented on last year, 27% in 2012-13, and 14% in 2011-12.

Believe it or not, there is a straw-grasping positive way of looking at these figures… of which more later.

This Research Professional article has a nice overview which I can't add much to, so read it first. Three caveats about these figures, though…

  • They’re for the standard open call research grant scheme, not for all calls/schemes
  • They relate to the financial year, not the academic year
  • It's very difficult to compare year-on-year due to changes to the scheme rules, including substantial changes to the minimum and maximum funding thresholds.

In previous years I've focused on how different academic disciplines have got on, but there's probably very little to add this time. You can read the figures for yourself (p. 38), but the report only bothers to calculate success rates for the disciplines with the highest numbers of applications – presumably beyond that there's little statistical significance. I could claim that it's been a bumper year for Education research, which for years bumped along at the bottom of the league table with Business and Management Studies in terms of success rates, but which this year received 3 awards from 22 applications, tracking the average success rate. Political Science and Socio-Legal Studies did well, as they always tend to do. But all of this is generalising from small numbers.

As last year, there is also a table of success rates by institution. In an earlier section on demand management, the report states that the ESRC "are discussing ways of enhancing performance with those HEIs where application volume is high and quality is relatively weak". But as with last year, it's hard to see from the raw success rate figures which these institutions might be – though of course detailed institutional profiles showing the final scores for applications might tell a very different story. Last year I picked out Leeds (10 applications/0 awards), Edinburgh (8/1), and Southampton (14/2) as doing poorly, and Kings College (7/3), King Leicester III (9/4), and Oxford (14/6) as doing well – though again, one success more or less changes the picture.

This year, Leeds (8/1) and Edinburgh (6/1) have stats that look much better. Southampton doesn't look to have improved at all (12/0), and is one of the worst performers. Of those who did well last year, none did so well this year – Kings were down to 11/1, Leicester to 2/0, and Oxford to 11/2. Along with Southampton, this year's poor performers were Durham (10/0), UCL (15/1) and Sheffield (11/0) – though all three had respectable enough scores last time. This year's standout was Cambridge at 10/4. Perhaps someone with more time than me can combine success rates from the last two years – I'm sure someone at the ESRC already has…
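For what it's worth, pooling the two years of figures is a quick job once they're in a usable form. Here's a minimal sketch in Python, using only the application/award pairs quoted above for the institutions reported in both years, and assuming that simply adding the two years together is a fair way to combine them (the pooling approach is my illustrative assumption, not the ESRC's own analysis):

    # Pool two years of ESRC standard grant outcomes per institution.
    # Each pair is (applications, awards), taken from the figures
    # quoted in this post; pooling across years is an assumption.
    two_years = {
        "Leeds":       [(10, 0), (8, 1)],
        "Edinburgh":   [(8, 1), (6, 1)],
        "Southampton": [(14, 2), (12, 0)],
        "Kings":       [(7, 3), (11, 1)],
        "Leicester":   [(9, 4), (2, 0)],
        "Oxford":      [(14, 6), (11, 2)],
    }

    for inst, years in sorted(two_years.items()):
        apps = sum(a for a, _ in years)
        awards = sum(w for _, w in years)
        print(f"{inst:12} {awards}/{apps} funded ({awards / apps:.0%})")

On those pooled figures, Southampton comes out at 2 awards from 26 applications (about 8%) and Oxford at 8 from 25 (32%) – though with numbers this small, all the caveats above about luck and quality profiles still apply.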

So… on the basis of success rates alone, probably only Southampton jumps out as doing consistently poorly. But again, much depends on the quality profile of the applications being submitted – it’s entirely possible that they were very unlucky, and that small numbers mask much more slapdash grant submission behaviour from other institutions. And of course, these figures only relate to the lead institution as far as I know.

It’s worth noting that demand management has worked… after a fashion.

We remain committed to managing application volume, with the aim of focusing sector-wide efforts on the submission of a fewer number of higher quality proposals with a genuine chance of funding. General progress is positive. Application volume is down by 48 per cent on pre-demand management levels – close to our target of 50 per cent. Quality is improving with the proportion of applications now in the ‘fundable range’ up by 13 per cent on pre-demand management levels, to 42 per cent. (p. 21)

I remember the target of reducing the number of applications received by 50% being regarded as very ambitious at the time, and even if some of it was achieved by changing scheme rules – increasing the minimum value of a grant application and banning resubmissions – it's still some achievement. Back in October 2011 I argued that the ESRC had started to talk optimistically about meeting that target only after researcher sanctions (in some form) had started to look inevitable. And in November 2012 things looked nicely on track.

But reducing brute numbers of applications is all very well – if only 42% of applications are within the "fundable range", that's still a problem, because it means that a lot of the applications being submitted aren't good enough. This is where there's cause for optimism, though: if less than half of the applications are fundable, your own chances should be more than double the average success rate – assuming that your application is of "fundable" quality. So there's your good news. The problem is, no-one applies who doesn't think their application is fundable.
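To put rough numbers on that claim – a back-of-envelope calculation, assuming that all funded applications come from within the "fundable range":

    \[
    P(\text{funded} \mid \text{fundable})
      = \frac{P(\text{funded})}{P(\text{fundable})}
      \approx \frac{0.13}{0.42}
      \approx 0.31
    \]

So a genuinely fundable application would face odds of roughly 31% rather than the headline 13% – a little more than double, as claimed.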

Internal peer review and demand management processes are often framed in terms of improving the quality of what gets submitted, but perhaps don't do enough actual filtering. So we refine and we polish and we make 101 incremental improvements… but ultimately you can't polish a sow's ear. Or something.

Proper internal filtering is really, really hard to do – sometimes it's just easier to let through applications from people who won't be told, and see if what happens is exactly what you think will happen. (It always is.) There's also a fine line (though one I think can be held and defended) between preventing perceived uncompetitive applications from being submitted and impinging on academic freedom. I don't think telling someone they can't submit a crap application infringes their academic freedom, but any such decisions need to be taken with a great deal of care. There's always the possibility of suspicion of ulterior motives – be they personal, subject- or methods-based prejudice, or senior people just overstepping the mark and inappropriately imposing their convictions (ideological, methodological, etc.) on others. Like the external examiner who insists on "more of me" on the reading list…

The elephant in the room, of course, is the flat cash settlement, which is now really biting: there's nowhere near enough funding to go around for all of the quality social science research that's badly needed. But we can't do much about that – and we can do something about the quality of the applications we're submitting and allowing to be submitted.

I wrote something for Research Professional a few years back on how not to do demand management/filtering processes, and I think it still stands up reasonably well, and is even quite funny in places (though I say so myself). So I'm going to link to it, as I seem to be linking to a disproportionate amount of my back catalogue in this post.

A combination of the new £350k minimum for the ESRC standard research grants scheme and the latest drop in success rates makes me think it's worth writing a companion piece to this blog post about what potential ESRC applicants need to consider before applying, and what I think is expected of a "fundable" application.

Hopefully something for the autumn… a few other things to write about first.