The ESRC and “Demand Management”: Part 1 – How did we get here?

[Image: Oliver Twist asking for more. Caption: "Developing appropriate demand management strategies is not a new challenge."]

The ESRC have some important decisions to make this summer about “demand management”.  The consultation on the proposed changes closed in June, and I understand that around 70 responses were received.  Whatever they come up with is unlikely to be popular, but I think there’s no doubt that some kind of action is required.

I’ve got a few thoughts on this, which I’m going to split across a number of blog posts over the next week or so.  I’ll cover the context, the steps already taken, the timetable, possible future steps, and how I think we in the “grant-getting community” should respond.

*          *          *          *          *

According to the presentation the ESRC gave around the country this spring, the number of applications received has increased by about a third over the last five years.  For most of those five years there was no more money, and because of the flat cash settlement at the last comprehensive spending review, there’s now effectively less money than before.  As a result, success rates have plummeted, to about 13% on average.  There are a number of theories as to why application rates have risen.  One hypothesis is that there are simply more social science researchers than ever before, and while I’m sure that’s a factor, I think there’s something else going on.

I wonder if the current problem has its roots in the last RAE.  On the whole, it wasn’t good in brute financial terms for social science – improving quality in relative terms (unofficial league tables) or absolute terms was far from a guarantee of maintaining levels of funding.  A combination of protection for the STEM subjects, grade inflation (or rising standards, depending on your view), and increased numbers of staff FTEs returned shrank the unit of resource.  The units that did best in brute financial terms, it seems to me, were those that were able to maintain or improve quality while submitting a much greater number of staff FTEs.  The unit of assessment that I was closest to in the last RAE achieved just this.

What happened next?  Well, I think a lot of institutions and academic units looked at a reduction in income, looked at the lucrative rules of research council funding, pondered briefly, and then concluded that perhaps the ESRC (and other research councils) would giveth where the RAE had taken away.

Problem is, I think everyone had the same idea.

On reflection, this may only have accelerated a process that started with the introduction of Full Economic Costing (fEC).  This had just been introduced as I moved into research development, so I don’t really remember what went before it.  I do remember two things, though: firstly, that although research technically still represented a loss-making activity (in that funders paid only 80% of the full cost), the reality was that the lucrative overhead payments were very welcome indeed.  The second thing I remember is that puns about the hilarious acronym grew very stale very quickly.
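
To make the fEC arithmetic concrete, here’s a rough worked example, using invented figures rather than anything from a real award:

    full economic cost (fEC) = direct costs + estates and indirect costs
                             = £50,000 + £50,000 = £100,000
    research council pays 80% of fEC = £80,000

So the award covers the £50,000 of direct costs plus £30,000 towards overheads – a 20% shortfall on paper, but £30,000 of overhead contribution that the institution wouldn’t otherwise see, which is why those payments felt so welcome.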

So… institutions wanted to encourage grant-getting activities.  How did they do this?  They created posts like mine.  They added grant-getting to the criteria for academic promotions.  They started to set expectations.  In some places, I think this even took the form of targets – either for individuals or for research groups.  One view I heard expressed was along the lines of: well, if Dr X has a research time allocation of Y, shouldn’t we expect her to produce Z applications per year?  Er… if Dr X can produce outstanding research proposals at that rate, and if applying for funding is the best use of her time, then sure, why not?  But not all researchers are ESRC-able ideas factories, and some of them are probably best advised to spend at least some of their time, er, writing papers.  And my nightmare for social science in the UK is that everyone spends their QR-funded research time writing grant applications, rather than doing any actual research.

Did the sector as a whole adopt a scattergun policy of firing off as many applications as possible, believing that the more you fired, the more likely it would be that some would hit the target?  Have academics been applying for funding because they think it’s expected of them, and/or they have one eye on promotion?  Has the imperative to apply for funding for something come first, and the actual research topic second?  Has there been a tendency to treat the process of getting research council funding as a lottery, for which one should simply buy as many tickets as possible?  Is all this one of the reasons why we are where we are today, with the ESRC considering demand management measures?  How many rhetorical questions can you pose without irritating the hell out of your reader?

I think the answer to these questions (bar the last one) is very probably ‘yes’.

But my view is based on conversations with a relatively small number of colleagues at a relatively small number of institutions.  I’d be very interested to hear what others think.

8 thoughts on “The ESRC and “Demand Management”: Part 1 – How did we get here?”

  1. Another great post Adam. I agree that the RAE played a part, but it’s more likely to be the 2001 rather than the 2008 one. The rush for applications has not just happened post-2009 but has, as you suggest, been building for some time. Certainly there was a recognition after 2001 of the importance of grant income as an indicator of research activity/excellence. I wonder, too, if it was also the way that universities came to be seen as businesses, as ‘generators of income’ rather than merely ‘generators of knowledge’ (*shudder*).

    1. Thanks Phil. That’s interesting – RAE 2001 was before my time. Although I think I was employed in HE by the time the results came out, my role didn’t really involve research. I can see how 2001 might have set off a virtuous circle – the more grant income we get, the more QR funding we get, and the more QR funding we get, the more time we can devote to grant-getting.

      The university as business versus the university as, er, university is a difficult one. I expect there are multiple causes for that, and I’m still not sure what I think about it. On the one hand, my inner pragmatist cannot help but suspect special pleading on at least some occasions where it’s argued that “knowledge” must trump financial realities. On the other, well, something about Oscar Wilde and the price of everything and the value of nothing…

  2. The question of FEC is an interesting one, particularly in terms of how much explanatory power can be placed in the hands of a narrative.

    What we can clearly say is that there is a story actively being told in research communities about FEC and the effects it has had on researcher activity.

    You’ve hit the nail on the head with your description of the way the story is being told, with the possible omission of one point: universities regard the overhead as a replacement for funds withdrawn as relative funding levels have stagnated in the block grant, and so have a sense of entitlement to those funds, and so feel justified in compelling their staff to chase FEC-bearing grants.

    What is less clear are the conclusions that can be drawn from the fact that everyone is telling these stories, or at least that these stories are widespread amongst academics.

    Probably the least valid conclusion that can be drawn is that the reality is accurately described by the story. I’m not saying that’s the conclusion that you draw, but I’d caution against assuming that the reality matches the story.

    What you term rhetorical questions are probably closer to real research questions, of use in answering the broader question of what has happened. I think more kinds of stories need consideration if we are to understand what has happened in a very complex research funding system.

    Part of the reality, to my mind, is that from the early 2000s the ESRC put a lot of effort into investing in the people who could write a lot of applications: the post-doctoral fellowships and first grants were specifically about taking post-doc researchers with experience of doing research and giving them experience of getting and managing their own grant, as a pillar of academic life. RCUK backed this up with its academic fellowship schemes, for which the economics were such that unless you wanted to be a galley slave, you needed to win your own independent funding.

    So there has been an evolution of the norms of what it means to be a social science researcher: writing grant applications is part of the career, it is a good thing, and there are plenty of role models, mentors and exemplars in the field. And at the same time, research funding for the social sciences has collapsed.

    The effect is that there is an oversupply of academics who believe that part of doing good research is writing good research proposals and getting funding for them, at a time when there are much lower chances of that happening.

    What makes less sense about demand management, looked at through this lens, is that it imposes a shift in the nature of the academic career and a novel set of values and norms, under which seeking research funding is more exceptional than before.

    For the sake of transparency, I have to be clear that in a sense this is my story. I was one of this cohort, as an ESRC post-doc fellow who used the PDF to get a Small Grant, and then moved onto an RCUK academic fellowship, where I won ESRC programme funding for a project within the Regional Economic Impact of HEIs programme.

    1. Thanks Paul. I think you’re absolutely right about being cautious about grand narratives. Just because a narrative is consistent with the known facts doesn’t mean it’s true. I present it as a kind of interpretation, or perhaps a story about net effects and intended and unintended consequences. I’d be surprised if (m)any universities had actually been through that thought process.

      I certainly recognise your narrative. When I moved into research development in 2005 (rather later in this process than I’d previously thought, from what you’ve said and from Phil’s comment above), a key part of what was expected of me in my new role was changing the culture to encourage grant-getting. Part of that was about putting in place new and better pre- and post-award support, removing some of the obstacles and disincentives to applying, but just as much it was about “mainstreaming” grant-getting as a core part of the academic role (though that’s a horrible noun-as-verb expression).

      Now I think that culture has changed. I can’t remember the last time I had to try to convince an early career researcher that grant-getting is a good thing – they’ve got that message loud and clear, but in ways that aren’t always helpful or healthy. Sometimes the ambition to get a grant seems to come before the ambition to undertake a specific research project. I worry about poorly-advised early career researchers wasting a proportion of their honeymoon period as academics on writing unsuccessful grant applications.

      I’m going to say more about what I think the response to Demand Management might be in a later blog post, but briefly: I wonder if one aspect will be to reverse this cultural change and, as you say, move back to a time when applying for funding was more “exceptional”.
