The ESRC and “Demand Management”: Part 4 – Quotas and Sanctions, PIs and Co-Is….

A picture of Stuart Pearce and Fabio Capello
It won't just be Fabio getting sanctioned if the referee's comments aren't favourable.

Previously in this series of posts on ESRC Demand Management I’ve discussed the background to the current unsustainable situation and aspects of the initial changes, such as the greater use of sifting and outline stages, and the new ban on (uninvited) resubmissions.  In this post I’ll be looking ahead to the possible measures that might be introduced in a year or so’s time should application numbers not drop substantially….

When the ESRC put their proposals out to consultation, four basic strategies were proposed:

  • Charging for applications
  • Quotas for numbers of applications per institution
  • Sanctions for institutions
  • Sanctions for individual researchers

Reading between the lines of the demand management section of the presentation that the ESRC toured the country with in the spring, charging for applications is a non-starter.  Even in the consultation documents, this option only appeared to be included for the sake of completeness – it was readily admitted that there was no evidence that it would have the desired effect.

I think we can also all-but-discount quotas as an option.  The advantage of quotas is that they would allow the ESRC to control precisely the maximum number of applications that could be submitted.  The problem is, it’s the nuclear option, and I think it would be sensible to try less radical options first.  If their call for better self-regulation and internal peer review within institutions fails, and then sanctions schemes are tried and fail, then (and only then) should they be thinking about quotas.  Sanctions (and the threat of sanctions) seek to modify application submission behaviour, while quotas pretty much dictate it.  There may yet be a time when Quotas are necessary, though I really hope not.

What’s wrong with Quotas, then?  Well, there will be difficulties in assigning quotas fairly to institutions, in spite of complex plans for banding and ‘promotion’ and ‘relegation’ from the bands.  That’ll lead to a lot of game playing, and it’s also likely that there will be a lot of mucking around with the lead applicant.  If one of my colleagues has a brilliant idea and we’re out of Quota, well, maybe we’ll find someone at an institution that isn’t and ask them to lead.  I can imagine a lot of bickering over who should spend their quota on submitting an application with a genuinely 50-50 institutional split.

But my main worry is that institutions are not good at comparing applications from different disciplines.  If we have applications from (say) Management and Law vying for the last precious quota slot, how is the institution to choose between them?  Even if it has experts who are not on the project team, they will inevitably have a conflict of interest – there would be a worry that they would support their ‘team’.  We could give it a pretty good cognate discipline review, but I’m not confident we would always get the decision right.  It won’t take long before institutions start teaming up to provide external preliminary peer review of each other’s applications, and before you know it, we end up just shifting the burden from post-submission to pre-submission for very little gain.

In short, I think quotas are a last resort, and shouldn’t be seriously considered unless we end up with (a) the failure of other demand management measures, and/or (b) significant cuts in the amount of funding available.

Which leaves sanctions – either on individual researchers or on their institutions.  The EPSRC has had a policy of researcher sanctions for some time, and that’s had quite a considerable effect.  I don’t think that’s so much through sanctioning people and taking them out of the system as through a kind of chill or placebo effect, whereby greater self-selection is taking place.  Once there’s a penalty for throwing in applications and hoping that some stick, people will stop.

As I argued previously, I think a lot of that pressure for increased submissions is down to institutions rather than individuals, who in many cases are either following direct instructions and expectations, or at least a very strong steer.  As a result, I was initially in favour of a hybrid system of sanctions where both individual researchers and institutions could potentially be sanctioned.  Both bear a responsibility for the application, and both are expected to put their name to it.  But after discussions internally, I’ve been persuaded that individual sanctions are the way to go, in order to have a consistent approach with the EPSRC, and with the other Research Councils, who I think are very likely to have their own version.  While the formulae may vary according to application profiles, as much of a common approach as possible should be adopted, unless of course there are overwhelming reasons why one of the RCs that I’m less familiar with should be different.

For me, the big issue is not whether we end up with individual, institutional, or hybrid sanctions, but whether the ESRC go ahead with plans to penalise co-investigators (and/or their institutions) as well as PIs in cases where an application does not reach the required standard.

This is a terrible, terrible, terrible idea and I would urge them to drop it.  The EPSRC don’t do it, and it’s not clear why the ESRC want to.

Most of the ESRC’s documents on demand management are thoughtful and thorough.  They’re written to inform the consultation exercise rather than dictate a solution, and I think the author(s) should be – on the whole – congratulated on their work.  Clearly a lot of hard work has gone into the proposals, which given their seriousness is only right.  However, nowhere can I find any kind of argument or justification for why co-investigators (insert your own ‘and/or institutions’ from here on) should be regarded as equally culpable.

I guess the argument (which the ESRC doesn’t make) might be that an application will be given yet more careful consideration if more than the principal investigator has something to lose.  At the moment, I don’t do a great deal if an application is led from elsewhere – I offer my services, and sometimes that offer is taken up, sometimes it isn’t.  But no doubt I’d be more forceful in my ‘offer’ if a colleague or my university could end up with a sanctions strike against us.  Further, I’d probably be recommending that none of my academic colleagues get involved in an application without it going through our own rigorous internal peer review processes.  Similarly, I’d imagine that academics would be much more careful about what they allowed their name to be put to, and would presumably take a more active role in drafting the application.  Both institutions and individual academics can, I think, be guilty of regarding an application led from elsewhere as being a free roll of the dice.  But we’re taking action on this – or at least I am.

The problem is that these benefits are achieved (if they are achieved at all) at the cost of abandoning basic fairness.  It’s just not clear to me why an individual/institution with only a minor role in a major project should be subject to the same penalty as the principal investigator and/or the institution that failed to spot that the application was unfundable.  It’s not clear to me why the career-young academic named as co-I on a much more senior colleague’s proposal should be held responsible for its poor quality.  I understand that there’s a term in aviation – cockpit gradient – which refers to the difference in seniority between pilot and co-pilot.  A very senior pilot and a very junior co-pilot is a bad mix because the junior will be reluctant to challenge the senior.  I don’t understand why someone named as co-I for an advisory role – on methodology perhaps, or for a discrete task – should bear the same responsibility.  And so on and so forth.  One response might be to create a new category of research team member less responsible than a ‘co-investigator’ but more involved in the project direction (or part of the project direction) than a ‘researcher’, but do we really want to go down the road of redefining categories?

Now granted, there are proposals where the PI is primus inter pares among a team of equally engaged and responsible investigators, where there is no single, obvious candidate for the role of PI.  In those circumstances, we might think it would be fair for all of them to pay the penalty.  But I wonder what proportion of applications are like this, with genuine joint leadership?  Even in such cases, every one of those joint leaders ought to be happy to be named as PI, because they’ve all had equal input.  But the unfairness inherent in only one person getting a strike against their name (and the others not) is surely much less than in the examples above?

As projects become larger, with £200k (very roughly, between two and two and a half person-years including overheads and project expenses) now being the minimum, the complex, multi-armed, innovative, interdisciplinary project is likely to become more and more common, because that’s what the ESRC says that it wants to fund.  But the threat of a potential sanction (or step towards sanction) for every last co-I involved is going to be a) a massive disincentive to large-scale collaboration, b) a logistical and organisational nightmare, or c) both.

Institutionally, it makes things very difficult.  Do we insist that every last application involving one of our academics goes through our peer review processes?  Or do we trust the lead institution?  Or do we trust some (University of Russell) but not others (Poppleton University)?  How does the PI manage writing and guiding the project through various different approval processes, with the danger that team members may withdraw (or be forced to withdraw) by their institution?  I’d like to think that in the event of sanctions on co-Is and/or institutions that most Research Offices would come up with some sensible proposals for managing the risk of junior-partnerdom in a proportionate manner, but it only takes one or two to start demanding to see everything and to run everything to their timetable to make things very difficult indeed.

The ESRC and “Demand Management”: Part 3 – Submissions and re-submissions

A picture of a boomerang

In the previous post in this series, I said a few things about the increased use of outline application stages and greater use of ‘sifting’ processes to filter out uncompetitive applications before they reach the refereeing stage.  But that’s not the only change taking place straight away.  The new prohibition on “uninvited” resubmissions for the open-call Research Grants scheme has been controversial, and it’s fair to say that it’s not a move that found universal favour in our internal discussions about our institutional response to the ESRC’s second Demand Management consultation.  Having said that, I personally think it’s sensible – which in my very British way is quite high praise.

In recent years I’ve advised against resubmissions on the grounds that I strongly suspected they were a waste of time.  Although they were technically allowed, the guidance notes gave the strong impression that this was grudging – perhaps even to the extent of being a case of yes in principle, no in practice.  After all, resubmissions were supposed to demonstrate that they had been “substantially revised”, or some such phrase.

But the resubmissions the ESRC might actually have wanted presumably wouldn’t need to be “substantially revised” – a little tightening up perhaps, some refocusing, addressing criticisms, that kind of thing.  But “substantially revised”?  From memory, I don’t think an increase or decrease in scale would count.  Am I being unfair in thinking that any proposal that could be “substantially revised” and remain the same proposal (of which more later) was, well, unfundable, and shouldn’t have been submitted in the first place?  The time and place for “substantially revising” your proposal is surely before submission.

The figures are interesting – apparently banning resubmissions should reduce application numbers by about 7% – a significant step towards the very ambitious goal of halving the number of applications by 2014.  Of those resubmissions, 80% are unsuccessful.  A 20% success rate sounds high compared to some scheme averages, but it’s not clear what period of time that figure relates to, nor how it’s split over different schemes.  But even if it was just this last year, a 20% success rate for resubmissions compared to about 15% for first-time applications is not a substantial improvement.  We should probably expect resubmissions to be of a higher standard, after all, and that’s not much of a higher standard.
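To make the arithmetic concrete, here’s a rough back-of-envelope sketch in Python.  The total application count is invented purely for illustration; only the 7% resubmission share and the 20%/15% success rates are figures from the discussion above.

```python
# Back-of-envelope check of the resubmission figures quoted above.
# The total of 1,000 applications is a made-up illustrative number;
# the 7% resubmission share and the 20% / 15% success rates are the
# figures discussed in the post.
total_apps = 1000
resub_share = 0.07
resub_success = 0.20
first_time_success = 0.15

resubs = total_apps * resub_share               # 70 resubmissions
resubs_funded = resubs * resub_success          # 14 funded
resubs_rejected = resubs * (1 - resub_success)  # 56 rejected, twice reviewed

# Extra funded projects gained by allowing resubmissions, versus treating
# them as if they succeeded at the ordinary first-time rate:
extra_funded = resubs * (resub_success - first_time_success)

print(f"{resubs:.0f} resubmissions: {resubs_funded:.0f} funded, "
      f"{resubs_rejected:.0f} rejected")
print(f"Extra awards over the first-time baseline: {extra_funded:.1f}")
```

On these (invented) totals, banning resubmissions removes seventy applications from peer review at a cost of only a handful of extra awards – which is presumably the calculation the ESRC has made.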

But moving to invited-only resubmissions shouldn’t be understood in isolation.  With very little fanfare, the ESRC have changed their policy on a right to respond to referees’ comments.  They do have a habit of sneaking stuff onto their website when I’m not looking, and this one caught me out a bit.  Previously the right to respond was only available to those asking for more than £500k – now it’s for all Standard Grant applications.  I’m amazed that the ESRC hasn’t linked this policy change more explicitly to the resubmissions change – I’m sure most applicants would happily swap the right to resubmit for the right to respond to referees’ comments.

There are problems with this idea of “invited resubmissions”, though, and I suspect that the ESRC are grappling with them at the moment.

The first problem will be identifying the kinds of applications that would benefit from being allowed a second bite of the cherry.  I would imagine these might be very promising ideas which are perhaps let down by poor exposition and/or grant writing – A for ideas, E for execution type applications.  Others might be very promising applications which have a single glaring weakness that could be addressed.  But I wonder how many applications really fall into either of these categories.  If you’re good enough to have a fundable idea, it’s hard to imagine that you’d struggle to write it up, or that it would contain a fixable weakness.  But perhaps there are applications like this, where (for example) a pathways to impact plan is unacceptably poor, or where the panel wants to fund one arm of the project, but not the other.  Clearly the 20% figure indicates that there are at least some like this.

The danger is that the “invited resubmission” might become a runner-up prize for the applications that came closest to getting funding but which didn’t quite make it.  But if they’re that good, is there really any point in asking for a full resubmission?  Wouldn’t it be better for the ESRC to think about having a repêchage, where a very small number of high quality applications would get another chance in the next funding round?  I’m told that there can be a large element of luck involved in the number, quality, and costs of the competition at each funding meeting, so perhaps allowing a very small number of unsuccessful applications to be carried forward might make sense.  It might mean re-costing because of changed start dates, but I’m sure we’d accept that as a price worth paying – or the application could simply be re-costed on the same basis for the new project dates if successful.

A second problem is determining when an application is a resubmission, and when it’s a fresh application on a related topic.  So far we have this definition:

“a ‘new’ application needs to be substantively different from a previous submission with fresh or significantly modified aims and objectives, a different or revised methodological approach and potentially a different team of investigators. This significant change of focus will be accompanied by a different set of costings to deliver the project. Applications that fall short of these broad criteria and reflect more minor amendments based on peer review feedback alone will be counted as re-submissions.”

Some of my former colleagues in philosophy might appreciate this particular version of the identity problem.  I’ve had problems with this distinction in the past, where I’ve been involved in an application submitted to the ESRC which was bounced back as a resubmission without having the required letter explaining the changes.  Despite what I said last time about having broad confidence in ESRC staff to undertake sifting activities, in this case they got it wrong.  In fairness, it was a very technical economics application with a superficial similarity to a previous application, but you’d have to be an economist to know that.  In the end, the application was allowed as a new application, but wasn’t funded.  That case was merely frustrating, but the ESRC are planning on counting undeclared resubmissions as unsuccessful, with potential sanctions/quota consequences, so we need to get this right.  Fortunately…

“The identification of uninvited re-submissions will rest with staff within the ESRC, as is currently the practice. In difficult cases advice will be taken from GAP [Grant Assessment Panel] members. Applications identified as uninvited re-submissions will not be processed and classified as unsuccessful on quality grounds under any sanctions policy that we may introduce.”

Even so, I’d like to see the “further guidance” that the ESRC intend to produce on this.  While we don’t want applicants disguising resubmissions as fresh applications, there’s a danger of a chilling effect which could serve to dissuade genuinely fresh applications on a similar or related topic.  However, I’m heartened to see the statement about the involvement of GAP members in getting this right – that should provide some measure of reassurance.

The ESRC and “Demand Management”: Part 2 – Sifting and Outlines

ESRC office staff start their new sifting role

In part one of this fortnight-long series of posts on the ESRC and “demand management”, I attempted to sketch out some context.  Briefly, we’re here because demand has increased while the available funds have remained static at best, and are now declining in real terms.  Phil Ward and Paul Benneworth have both added interesting comments – Phil has a longer professional memory than I do, and Paul makes some very useful comments from the perspective of a researcher starting his career during the period in question.  If you read the previous post before their comments appeared, I’d recommend going back and having a read.

It’s easy to think of “demand management” as something that’s at least a year away, but there are some changes that are being implemented straight away – this post is about outline applications and “sifting”.  Next I’ll talk about the ban on (uninvited) resubmissions.

Greater use of outline stages for managed mode schemes (i.e. pretty much everything except open call Research Grants), for example, seems very sensible to me, provided that the application form is cut down sufficiently to represent a genuine time and effort saving for individuals and institutions, while still allowing applicants enough space to make a case.  It’s also important that reviewers treat outline applications as just that, and are sensitive to space constraints.  I understand that the ESRC are developing a new grading scheme for outline applications, which is a very good thing.  At outline stage, I would imagine that they’re looking for ideas that are of the right size in terms of scale and ambition, and at least some evidence that (a) the research team has the right skills and (b) that granting them more time and space to lay out their arguments will result in a competitive application.

With Standard Grants (now known as Research Grants, as there are no longer ‘large’ or ‘small’ grants), there will be “greater internal sifting by ESRC staff”.  I don’t know if this is in place yet, but I understand that there’s a strong possibility that this might not be done by academics.  I’m very relaxed about that – in fact, I welcome it – though I can imagine that some academics will be appalled.  But…. the fact is that about a third of the applications the ESRC receives are “uncompetitive”, which is a lovely British way of saying unfundable.  Not good enough.  Where all these applications are coming from I’ve no idea, and while I don’t think any of them are being submitted on my watch, it would be an act of extreme hubris to declare that absolutely.  However, I strongly suspect that they’re largely coming from universities that don’t have a strong research culture and/or don’t have high quality research support and/or are just firing off as many applications as possible in a mistaken belief that the ESRC is some kind of lottery.

I’d back myself to pick out the unfundable third in a pile of applications.  I wouldn’t back myself to pick the grant recipients, but even then I reckon I’d get close.  I can differentiate between what I don’t understand and what doesn’t make sense with a fair degree of accuracy, and while I’m no expert on research methods, I know when there isn’t a good account of methods, or when it’s not explained or justified.  I can spot a Case for Support that is 80% literature review and only 20% new proposal.  I can tell when the research question(s) subtly change from section to section.  And I’d back others with similar roles to me to be able to do the same – if we can’t tell the difference between a sinner and a winner…. why are research intensive universities bothering to employ us?

And if I can do it with a combination of some academic background (MPhil political philosophy) and professional experience, I’m sure others could too, including ESRC staff.  They’d only have to sort the no-hopers from the rest, and if a few no-hopers slip through, or if a few low quality fundable some-hopers-but-a-very-long-way-down-the-list drop out at that stage, it would make very little difference.  Unless, of course, one of the demand management sanction options is introduced, at which point the notion of non-academics making decisions that could lead to individual or institutional sanctions becomes a little more complicated.  But again, I think I’d back myself to spot grant applications that should not have been submitted, even if I wouldn’t necessarily want a sanctions decision depending on my judgement alone.

Even if they were to go with a very conservative policy of only sifting out applications which, say, three ESRC staff think are dreadful, that could still make a substantial difference to the demands on academic reviewers.  I guess that’s the deal – you submit to a non-academic having some limited judgement role over your application, and in return, they stop sending you hopeless applications to review.

If I were an academic I’d take that like a shot.

The ESRC and “Demand Management”: Part 1 – How did we get here?

A picture of Oliver Twist asking for more
Developing appropriate demand management strategies is not a new challenge

The ESRC have some important decisions to make this summer about what to do about “demand management”.  The consultation on these changes closed in June, and I understand about 70 responses were received.  Whatever they come up with is unlikely to be popular, but I think there’s no doubt that some kind of action is required.

I’ve got a few thoughts on this, and I’m going to split them across a number of blog posts over the next week or so.  I’m going to talk about the context, the steps already taken, the timetable, possible future steps, and how I think we in the “grant getting community” should respond.

*          *          *          *          *

According to the presentation that the ESRC toured around the country this spring, the number of applications received has increased by about a third over the last five years.  For most of those five years, there was no more money, and because of the flat cash settlement at the last comprehensive spending review, there’s now effectively less money than before.  As a result, success rates have plummeted, down to about 13% on average.  There are a number of theories as to why application rates have risen.  One hypothesis is that there are just more social science researchers than ever before, and while I’m sure that’s a factor, I think there’s something else going on.
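The relationship between those two figures is simple but worth spelling out.  A short sketch (with a hypothetical award count and baseline – only the “up by about a third” and roughly 13% figures come from the presentation) shows how a one-third rise in applications against a flat number of awards produces exactly that kind of fall:

```python
# How a one-third rise in applications against flat funding squeezes the
# success rate. The award count and baseline application count below are
# hypothetical; only the "up by about a third" rise and the ~13% endpoint
# are taken from the post.
funded_per_year = 130              # assumed fixed number of awards (flat cash)
apps_before = 750                  # hypothetical baseline application count
apps_after = apps_before * 4 / 3   # "increased by about a third"

rate_before = funded_per_year / apps_before   # roughly 17.3%
rate_after = funded_per_year / apps_after     # 13.0%

print(f"Success rate before: {rate_before:.1%}")
print(f"Success rate after:  {rate_after:.1%}")
```

In other words, even with no cut at all, a third more applications knocks roughly four percentage points off a mid-teens success rate – before any real-terms decline in funding is factored in.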

I wonder if the current problem has its roots in the last RAE.  On the whole, it wasn’t good in brute financial terms for social science – improving quality in relative terms (unofficial league tables) or absolute terms was far from a guarantee of maintaining levels of funding.  A combination of protection for the STEM subjects, grade inflation (or rising standards), and increased numbers of staff FTE returns shrunk the unit of resource.  The units that did best in brute financial terms, it seems to me, were those that were able to maintain or improve quality, but submit a much greater number of staff FTEs.  The unit of assessment that I was closest to in the last RAE achieved just this.

What happened next?  Well, I think a lot of institutions and academic units looked at a reduction in income, looked at the lucrative funding rules of the research councils, pondered briefly, and then concluded that perhaps the ESRC (and the other research councils) would giveth where RAE had taken away.

Problem is, I think everyone had the same idea.

On reflection, this may only have accelerated a process that started with the introduction of Full Economic Costing (fEC).  This had just started as I moved into research development, so I don’t really remember what went before it.  I do remember two things, though: firstly, that although research technically still represented a loss-making activity (in that it only paid 80% of the full cost) the reality was that the lucrative overhead payments were very welcome indeed.  The second thing I remember is that puns about the hilarious acronym grew very stale very quickly.
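To illustrate why those overhead payments were so welcome, here’s a rough sketch of the fEC arithmetic.  All of the figures below are invented for illustration; the only number taken from the text is the 80% of full cost that the research councils pay.

```python
# Illustrative full economic costing (fEC) arithmetic. The research
# councils pay 80% of the full economic cost, so on paper a project is
# loss-making - but the overhead element of that payment is new money
# compared with a direct-costs-only award. All figures are invented.
direct_costs = 100_000              # staff, travel, consumables
overheads = 80_000                  # estates and indirect costs
fec = direct_costs + overheads      # full economic cost: 180,000

funder_pays = 0.8 * fec             # 80% of fEC
nominal_loss = fec - funder_pays    # the 20% "loss" on paper

# Compared with a hypothetical direct-costs-only award, the institution
# actually receives this much extra towards its overheads:
extra_over_direct = funder_pays - direct_costs

print(f"Funder pays:        £{funder_pays:,.0f}")
print(f"Nominal loss:       £{nominal_loss:,.0f}")
print(f"Extra over directs: £{extra_over_direct:,.0f}")
```

So although the project “loses” £36,000 on paper, the institution is £44,000 better off than it would be under a hypothetical directs-only award – hence the enthusiasm.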

So…. institutions wanted to encourage grant-getting activities.  How did they do this?  They created posts like mine.  They added grant-getting to the criteria for academic promotions.  They started to set expectations.  In some places, I think this even took the form of targets – either for individuals or for research groups.  One view I heard expressed was along the lines of: well, if Dr X has a research time allocation of Y, shouldn’t we expect her to produce Z applications per year?  Er…. if Dr X can produce outstanding research proposals at that rate, and if applying for funding is the best use of her time, then sure, why not?  But not all researchers are ESRC-able ideas factories, and some of them are probably best advised to spend at least some of their time, er, writing papers.  And my nightmare for social science in the UK is that everyone spends their QR-funded research time writing grant applications, rather than doing any actual research.

Did the sector as a whole adopt a scattergun policy of firing off as many applications as possible, believing that the more you fired, the more likely it would be that some would hit the target?  Have academics been applying for funding because they think it’s expected for them, and/or they have one eye on promotion?  Has the imperative to apply for funding for something come first, and the actual research topic second?  Has there been a tendency to treat the process of getting research council funding as a lottery, for which one should simply buy as many tickets as possible?  Is all this one of the reasons why we are where we are today, with the ESRC considering demand management measures?  How many rhetorical questions can you pose without irritating the hell out of your reader?

I think the answer to these questions (bar the last one) is very probably ‘yes’.

But my view is based on conversations with a relatively small number of colleagues at a relatively small number of institutions.  I’d be very interested to hear what others think.