The ESRC today revealed the outcome of the ‘Demand Management’ consultation, with the consultation exercise showing a strong preference for researcher sanctions rather than the other main options, which were institutional sanctions, institutional quotas, or charging for applications. And therefore….
Given this clear message, it is likely that any further steps will reflect these views.
Which I think means that that’s what they’re going to do. But being (a) academics, and (b) British, it has to be expressed in the passive voice and as tentatively as possible.
Individual researcher sanctions got the vote of 82% of institutional responses, 80% of learned society responses, and 44% of individual responses. To put that in context, though, 32% of the individual responses were interpreted as backing none of the possible measures, which I don’t think was ever going to be a particularly convincing response. Institutional sanctions came second among institutions (11%), and institutional quotas (20%) among individual respondents. Charging for applications was, as I expected, a non-starter, apparently attracting the support of two institutions and one learned society or ‘other agency’. I’m surprised it got that many.
The issue of the presentation of the results as a ‘vote’ is an interesting one, as I don’t think that’s what this exercise was presented as at the time. The institutional response that I was involved in was – I like to think – a bit more nuanced and thoughtful than just a ‘vote’ for one particular option. In any case, if it was a vote, I’m sure that the ‘First Past the Post’ system which appears to have been used wouldn’t be the right choice – some kind of ‘alternative vote’ system to find the least unpopular option would surely have been more appropriate. I’m also puzzled by the combining of the results from institutions, individuals, and learned societies into totals for ‘all respondents’, which seems to give the same weighting to individual and institutional responses.
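For what it’s worth, the kind of count I have in mind can be sketched in a few lines. This is a minimal instant-runoff (‘alternative vote’) sketch using entirely hypothetical ballots – not the actual consultation responses – and it ignores tie-breaking:

```python
from collections import Counter

def alternative_vote(ballots):
    """Instant-runoff: repeatedly eliminate the option with the fewest
    first preferences until one option holds a majority of them."""
    while True:
        tally = Counter(ballot[0] for ballot in ballots if ballot)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        loser = min(tally, key=tally.get)  # fewest first preferences
        ballots = [[o for o in b if o != loser] for b in ballots]

# Entirely hypothetical ranked ballots over the four consultation options.
# On first preferences alone, 'institutional quotas' leads; after the
# weakest option is eliminated, its second preferences transfer.
ballots = (
    [["institutional quotas"]] * 6 +
    [["researcher sanctions", "institutional sanctions"]] * 5 +
    [["charging for applications", "researcher sanctions"]] * 4
)
print(alternative_vote(ballots))  # → researcher sanctions
```

Under first-past-the-post the six quota ballots would carry the day; the runoff instead surfaces the option fewest respondents actively oppose, which is roughly what ‘least unpopular’ means.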
Fortunately – or doubly-fortunately – those elements of the research community which responded delivered a clear signal about the preferred method of demand management, and, in my view at least, it’s the right one. I’ll admit to being a bit surprised by how clear cut the verdict appears to be, but it’s very much one I welcome.
It’s not all good news, though. The announcement is silent on exactly what form the programme of researcher sanctions will take, and there is still the possibility that sanctions may apply to co-investigators as well as the principal investigator. As I’ve argued before, I think this would be a mistake, and would be grossly unfair in far too many cases. I know that there are some non-Nottingham folks reading this blog, so if your institution isn’t one of the ones that responded (and remember only 44 of 115 universities did), it might be worth finding out why not, and making your views known on this issue.
One interesting point that is stressed in the announcement is that individual researcher sanctions – or any form of further ‘demand management’ measures – may never happen. The ESRC have been clear about this all along – the social science research community was put on notice about the unsustainability of the current volume of applications being submitted, and that a review would take place in autumn 2012. The consultation was about the general form of any further steps should they prove necessary. And interestingly the ESRC are apparently ‘confident’ that they will not.
We remain confident that by working in partnership with HEIs there will be no need to take further steps. There has been a very positive response from institutions to our call for greater self-regulation, and we expect that this will lead to a reduction in uncompetitive proposals.
Contrast that with this, from March, when the consultation was launched:
We very much hope that we will not need additional measures.
Might none of this happen? I’d like to think so, but I don’t share their confidence, and I fear that the earlier “very much hope” was nearer the mark. I can well believe that each institution is keen to up its game, and I’m sure discussions are going on about new forms of internal peer review, mentoring, research leadership etc in institutions all across the country. Whether this will lead to a sufficient fall in the number of uncompetitive applications, well, I’m not so sure.
I think there needs to be an acceptance that there are plenty of perfectly good research ideas that would lead to high quality research outputs in quality journals, perhaps with strong non-academic impact, which nevertheless aren’t ‘ESRC-able’ – because they’re merely ‘very good’ or ‘excellent’ rather than ‘outstanding’. And it’s only the really outstanding ideas that are going to be competitive. If all institutions realise this, researcher sanctions may never happen. But if hubris wins out, and everyone concludes that it’s everyone else’s applications that are the problem, then researcher sanctions are inevitable.
Ken Emond, Head of Research Awards of the British Academy, came to visit the University of Nottingham the other week to talk about the various and nefarious research funding schemes that are on offer from the British Academy. To make an event of it, my colleagues in the Centre for Advanced Studies also arranged for various internal beneficiaries of the Academy’s largesse to come and talk about the role that Academy funding had had in their research career. I hope no-one minds if I repeat some of the things that were said – there was no mention of ‘Chatham House’ rules or of ‘confidential learning agreements’, and I don’t imagine that Ken gives privileged information to the University of Nottingham alone, no matter how wonderful we are.
Much of what funders’ representatives tend to say during institutional visits or ARMA conferences is pretty much identical to the information already available on their website in one form or another, but it’s interesting how many academics seem to prefer to hear the information in person rather than read it in their own time. And it’s good to put a face to names, and faces to institutions. Although I think I shall probably always share Phil Ward‘s mental image of the BA as an exclusive Rowley Birkin QC-style private members club. But it’s good to have a reminder of what’s on offer, and have an opportunity to ask questions.
I met Ken very briefly at the ARMA conference in 2010, and his enthusiasm for the Small Grants Scheme then (and now) was obvious. I was very surprised when it was scrapped, and it seems likely that this was imposed rather than freely chosen. However, it’s great to see it back again, and this time including support for conference funding to disseminate the project findings. It seems the call is going to be at least annual, with no decision taken yet on whether there will be a second call this year, as in previous years.
It seems much more sensible than having separate schemes for projects and for conference funding. It’s unlikely that we’re going to see a return of the BA Overseas Conference Scheme – it was quite a lot of work in writing and assessing for really very small amounts of money. Although having said that, when I was at Keele those very small amounts of money really did help us send researchers to prestigious conferences (especially in the States) they wouldn’t otherwise have attended.
One of the questions asked was about the British Academy’s attitude to demand management, of the kind that the EPSRC have introduced and that the ESRC are proposing. The response was that they currently have no plans in this direction – they don’t think that any institutions are submitting an excessive number of applications.
Although the British Academy has some of the lowest success rates in town for its major schemes, the applications are all light touch – certainly compared to the Research Councils. Mid-Career and Post-Doc Fellowships both have an outline stage, and the Senior Research Fellowship application form is hardly more taxing than a Small Grant one. Presumably they’re also quick and easy to review – I wonder how many of those a referee could get through in the time it took them to review a single Research Council application? Which does bring to mind the suggestion from Mavan, a commenter on one of my previous posts, about cutting the ESRC application form dramatically.
But… it’s possible that the relative brevity of the application forms is itself increasing the number of applications, and that’s certainly something that the ESRC were concerned about when considering their own move to outline stage applications.
I guess a funding scheme could be credible and sustainable with a low success rate and a low ‘overhead’ cost of writing and reviewing applications, or with a high success rate and a high overhead cost. The problem is when we get to where we are at the moment with the ESRC: low success rates and high overhead costs.
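To make that trade-off concrete, here’s a back-of-envelope sketch. The numbers are invented for illustration (they’re not ESRC figures), but the shape of the calculation holds: the community effort expended per funded grant is roughly the per-application cost of writing and reviewing divided by the success rate.

```python
def effort_per_award(days_per_application, success_rate):
    """Total writing-and-reviewing effort expended per funded grant."""
    return days_per_application / success_rate

# Light-touch forms, low success rate: sustainable
print(effort_per_award(5, 0.10))    # 50 days of effort per award
# Heavyweight forms, high success rate: also sustainable
print(effort_per_award(40, 0.30))   # ~133 days per award
# Heavyweight forms, low success rate – roughly where the ESRC is now
print(effort_per_award(40, 0.12))   # ~333 days per award
```

The first two regimes cost the community broadly comparable amounts per funded project; it’s the third combination that burns effort fastest.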
There’s a very strange article in the Times Higher today which claims that the ESRC’s latest “grant application figures raise questions about its future”.
Er…. do they? Seriously? Why?
It’s true that success rates are a problem – down to 16% overall, and 12% for the Research Grants Scheme (formerly Standard Grants). According to the article, these are down from 17% and 14% respectively the year before. It’s also true that RCUK stated in 2007 that 20% should be the minimum success rate. But this long term decline in success rates – plus a cut in funding in real terms – is exactly why the ESRC has started a ‘demand management’ strategy.
A comment attributed to one academic (which could have been a rhetorical remark taken out of context) appears to equate the whole thing to a lottery, and calls for it to be scrapped and the funding distributed via the RAE/REF. This strikes me as an odd view, though not one, I’m sure, confined to the person quoted. But it’s not a majority view, not even among the select number of academics approached for comments. All of the other academics named in the article seem to be calling for more funding for social sciences, so it would probably be legitimate to wonder why the focus of the article is about “questions” about the ESRC’s “future”, rather than calls for more funding. But perhaps that’s just how journalism works. It certainly got my attention.
While I don’t expect these calls for greater funding for social science research will be heard in the current politico-economic climate, it’s hard to see that abolishing the ESRC and splitting its budget will achieve very much. The great strength of the dual funding system is that while the excellence of the Department of TopFiveintheRAE at the University of Russell deserves direct funding, it’s also possible for someone at the Department of X at Poppleton University to get substantial funding for their research if their research proposal is outstanding enough. Maybe your department gets nothing squared from HEFCE as a result of the last RAE, but if your idea is outstanding it could be you – to use a lottery slogan. This strikes me as a massively important principle – even if in practice, most of it will go to the Universities of Russell. As a community of social science scholars, calling for the ESRC to be abolished sounds like cutting off the nose to spite the face.
Yes, success rates are lower than we’d like, and yes, there is a strong element of luck in getting funded. But it’s inaccurate to call it a “lottery”. If your application isn’t of outstanding quality, it won’t get funded. If it is, it still might not get funded, but… er… that’s not a lottery.
According to the ESRC’s figures between 2007 and 2011, 9% of Standard Grant applications were either withdrawn or rejected at ‘office’ stage for various reasons. 13% fell at the referee stage (beta or reject grades), and 21% fell at the assessor stage (alpha minus). So… 43% of applications never even got as far as the funding panel before being screened out on quality or eligibility grounds.
So… while the headline success rate might be 12%, the success rate for fundable applications is rather better. 12 funded out of 100 applications is 12%, but 12 funded out of the 57 competitive applications is about 21%. That’s what I tell my academic colleagues – if your application is outstanding, then you’re looking at better than 1 in 5. If it’s not outstanding, but merely interesting, or valuable, or would ‘add to the literature’, then look to other (increasingly limited) options.
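Working through the stage-by-stage screening figures above, the conditional success rate falls out directly:

```python
total = 100
screened_out = 9 + 13 + 21          # office + referee + assessor stages
competitive = total - screened_out  # 57 reach the funding panel
funded = 12                         # the headline 12% success rate

print(f"{funded / total:.0%} headline")             # 12% headline
print(f"{funded / competitive:.0%} if competitive") # 21% if competitive
```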
So…. we need the ESRC. It would be a disaster for social science research if it were not to have a Research Council. We may not agree with everything it does and all of the decisions it makes, we may be annoyed and frustrated when they won’t fund our projects, but we need a funder of social science with money to invest in individual research projects, rather than merely in excellent Departments.
Previously in this series of posts on ESRC Demand Management I’ve discussed the background to the current unsustainable situation and aspects of the initial changes, such as the greater use of sifting and outline stages, and the new ban on (uninvited) resubmissions. In this post I’ll be looking forward to the possible measures that might be introduced in a year or so’s time should application numbers not drop substantially….
When the ESRC put their proposals out to consultation, there were four basic strategies proposed.
Charging for applications
Quotas for numbers of applications per institution
Sanctions for institutions
Sanctions for individual researchers
Reading in between the lines of the demand management section of the presentation that the ESRC toured the country with in the spring, charging for applications is a non-starter. Even in the consultation documents, this option only appeared to be included for the sake of completeness – it was readily admitted that there was no evidence that it would have the desired effect.
I think we can also all-but-discount quotas as an option. The advantage of quotas is that they would allow the ESRC to control precisely the maximum number of applications that could be submitted. The problem is, it’s the nuclear option, and I think it would be sensible to try less radical options first. If their call for better self-regulation and internal peer review within institutions fails, and then sanctions schemes are tried and fail, then (and only then) should they be thinking about quotas. Sanctions (and the threat of sanctions) seek to modify application submission behaviour, while quotas pretty much dictate it. There may yet be a time when quotas are necessary, though I really hope not.
What’s wrong with quotas, then? Well, there will be difficulties in assigning quotas fairly to institutions, in spite of complex plans for banding and ‘promotion’ and ‘relegation’ between the bands. That’ll lead to a lot of game playing, and it’s also likely that there will be a lot of mucking around with who the lead applicant is. If one of my colleagues has a brilliant idea and we’re out of quota, well, maybe we’ll find someone at an institution that isn’t and ask them to lead. I can imagine a lot of bickering over who should spend their quota on submitting an application with a genuinely 50-50 institutional split.
But my main worry is that institutions are not good at comparing applications from different disciplines. If we have applications from (say) Management and Law vying for the last precious quota slot, how is the institution to choose between them? Even if it has experts who are not on the project team, they will inevitably have a conflict of interest – there would be a worry that they would support their ‘team’. We could give it a pretty good cognate discipline review, but I’m not confident we would always get the decision right. It won’t take long before institutions start teaming up to provide external preliminary peer review of each other’s applications, and before you know it, we end up just shifting the burden from post-submission to pre-submission for very little gain.
In short, I think quotas are a last resort, and shouldn’t be seriously considered unless we end up with a combination of (a) the failure of other demand management measures and/or (b) significant cuts in the amount of funding available.
Which leaves sanctions – either on individual researchers or on their institutions. The EPSRC has had a policy of researcher sanctions for some time, and that’s had quite a considerable effect. I don’t think it’s so much through sanctioning people and taking them out of the system as through a kind of chilling or placebo effect, whereby greater self-selection is taking place. Once there’s a penalty for throwing in applications and hoping that some stick, people will stop.
As I argued previously, I think a lot of that pressure for increased submissions is down to institutions rather than individuals, who in many cases are either following direct instructions and expectations, or at least a very strong steer. As a result, I was initially in favour of a hybrid system of sanctions where both individual researchers and institutions could potentially be sanctioned. Both bear a responsibility for the application, and both are expected to put their name to it. But after discussions internally, I’ve been persuaded that individual sanctions are the way to go, in order to have a consistent approach with the EPSRC, and with the other Research Councils, who I think are very likely to have their own version. While the formulae may vary according to application profiles, as much of a common approach as possible should be adopted, unless of course there are overwhelming reasons why one of the RCs that I’m less familiar with should be different.
For me, the big issue is not whether we end up with individual, institutional, or hybrid sanctions, but whether the ESRC go ahead with plans to penalise co-investigators (and/or their institutions) as well as PIs in cases where an application does not reach the required standard.
This is a terrible, terrible, terrible idea and I would urge them to drop it. The EPSRC don’t do it, and it’s not clear why the ESRC want to. For me, the co-I issue is more important than which sanction model we end up with.
Most of the ESRC’s documents on demand management are thoughtful and thorough. They’re written to inform the consultation exercise rather than dictate a solution, and I think the author(s) should be – on the whole – congratulated on their work. Clearly a lot of hard work has gone into the proposals, which given their seriousness is only right. However, nowhere is there any kind of argument or justification to be found for why co-investigators (insert your own ‘and/or institutions’ from here on) should be regarded as equally culpable.
I guess the argument (which the ESRC doesn’t make) might be that an application will be given yet more careful consideration if more than the principal investigator has something to lose. At the moment, I don’t do a great deal if an application is led from elsewhere – I offer my services, and sometimes that offer is taken up, sometimes it isn’t. But no doubt I’d be more forceful in my ‘offer’ if a colleague or my university could end up with a sanctions strike against us. Further, I’d probably be recommending that none of my academic colleagues get involved in an application without it going through our own rigorous internal peer review processes. Similarly, I’d imagine that academics would be much more careful about what they allowed their name to be put to, and would presumably take a more active role in drafting the application. Both institutions and individual academics can, I think, be guilty of regarding an application led from elsewhere as being a free roll of the dice. But we’re taking action on this – or at least I am.
The problem is that these benefits are achieved (if they are achieved at all) at the cost of abandoning basic fairness. It’s just not clear to me why an individual/institution with only a minor role in a major project should be subject to the same penalty as the principal investigator and/or the institution that failed to spot that the application was unfundable. It’s not clear to me why the career-young academic named as co-I on a much more senior colleague’s proposal should be held responsible for its poor quality. I understand that there’s a term in aviation – cockpit gradient – which refers to the difference in seniority between Pilot and Co-Pilot. A very senior Pilot and a very junior co-Pilot is a bad mix because the junior will be reluctant to challenge the senior. I don’t understand why someone named as co-I for an advisory role – on methodology perhaps, or for a discrete task – should bear the same responsibility. And so on and so forth. One response might be to create a new category of research team member less responsible than a ‘co-investigator’ but more involved in the project direction (or part of the project direction) than a ‘researcher’, but do we really want to go down the road of redefining categories?
Now granted, there are proposals where the PI is primus inter pares among a team of equally engaged and responsible investigators, where there is no single, obvious candidate for the role of PI. In those circumstances, we might think it would be fair for all of them to pay the penalty. But I wonder what proportion of applications are like this, with genuine joint leadership? Even in such cases, every one of those joint leaders ought to be happy to be named as PI, because they’ve all had equal input. And the unfairness of only one person getting a strike against their name (while the others do not) is surely much smaller than in the examples above?
As projects become larger, with £200k (very roughly, between two and two and a half person-years including overheads and project expenses) now being the minimum, the complex, multi-armed, innovative, interdisciplinary project is likely to become more and more common, because that’s what the ESRC says that it wants to fund. But the threat of a potential sanction (or step towards sanction) for every last co-I involved is going to be a) a massive disincentive to large-scale collaboration, b) a logistical and organisational nightmare, or c) both.
Institutionally, it makes things very difficult. Do we insist that every last application involving one of our academics goes through our peer review processes? Or do we trust the lead institution? Or do we trust some (University of Russell) but not others (Poppleton University)? How does the PI manage writing and guiding the project through various different approval processes, with the danger that team members may withdraw (or be forced to withdraw) by their institution? I’d like to think that in the event of sanctions on co-Is and/or institutions that most Research Offices would come up with some sensible proposals for managing the risk of junior-partnerdom in a proportionate manner, but it only takes one or two to start demanding to see everything and to run everything to their timetable to make things very difficult indeed.
In the previous post in this series, I said a few things about the increased use of outline application stages and greater use of ‘sifting’ processes to filter out uncompetitive applications before they reach the refereeing stage. But that’s not the only change taking place straight away. The new prohibition on “uninvited” resubmissions for the open-call Research Grants scheme has been controversial, and it’s fair to say that it’s not a move that found universal favour in our internal discussions about our institutional response to the ESRC’s second Demand Management consultation. Having said that, I personally think it’s sensible – which in my very British way is quite high praise.
In recent years I’ve advised against resubmissions on the grounds that I strongly suspected that they were a waste of time. Although they were technically allowed, the guidance notes gave the strong impression that this was grudging – perhaps even to the extent of being a case of yes in principle, no in practice. After all, resubmissions were supposed to demonstrate that they had been “substantially revised” or some such phrase.
But the resubmissions the ESRC might have wanted presumably wouldn’t need to be “substantially revised” – tightening up perhaps, refocusing a bit, addressing criticisms, that kind of thing. But “substantially revised”? From memory, I don’t think an increase or decrease in scale would count. Am I being unfair in thinking that any proposal that could be “substantially revised” and remain the same proposal (of which more later) was, well, unfundable, and shouldn’t have been submitted in the first place? The time and place for “substantially revising” your proposal is surely before submission.
The figures are interesting – apparently banning resubmissions should reduce application numbers by about 7% – a significant step towards the very ambitious goal of halving the number of applications by 2014. Of those resubmissions, 80% are unsuccessful. A 20% success rate sounds high compared to some scheme averages, but it’s not clear what period of time that figure relates to, nor how it’s split over different schemes. But even if it were just this last year, a 20% success rate for resubmissions compared to about 15% for first time applications is not a substantial improvement. We should probably expect resubmissions to be of a higher standard, after all, and that’s not much higher.
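Taking the quoted figures at face value – resubmissions at about 7% of applications, a 20% success rate for them against roughly 15% for first-time applications – a quick sanity-check runs as follows (a sketch of my own, not official ESRC analysis):

```python
resub_share = 0.07         # resubmissions as a share of all applications
resub_success = 0.20       # quoted resubmission success rate
first_time_success = 0.15  # rough first-time success rate

# How much better a resubmission fares than a first-time application
print(resub_success / first_time_success)  # ≈ 1.33, only a third better

# How far the ban alone goes towards halving application numbers
print(resub_share / 0.50)  # ≈ 0.14, about a seventh of the way there
```

So even accepting the ESRC’s own numbers, resubmissions are only modestly stronger than first attempts, and banning them covers only a fraction of the reduction target.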
But moving to invited-only resubmissions shouldn’t be understood in isolation. With very little fanfare, the ESRC have changed their policy on a right to respond to referees’ comments. They do have a habit of sneaking stuff onto their website when I’m not looking, and this one caught me out a bit. Previously the right to respond was only available to those asking for more than £500k – now it’s for all Standard Grant applications. I’m amazed that the ESRC hasn’t linked this policy change more explicitly to the resubmissions change – I’m sure most applicants would happily swap the right to resubmit for the right to respond to referees’ comments.
There are problems with this idea of “invited resubmissions”, though, and I suspect that the ESRC are grappling with them at the moment.
The first problem will be identifying the kinds of applications that would benefit from being allowed a second bite of the cherry. I would imagine these might be very promising ideas, but which perhaps are let down by poor exposition and/or grant writing – A for ideas, E for execution type applications. Others might be very promising applications which have a single glaring weakness that could be addressed. But I wonder how many applications really fall into either of these categories. If you’re good enough to have a fundable idea, it’s hard to imagine that you’d struggle to write it up, or that it would contain a fixable weakness. But perhaps there are applications like this, where (for example) a pathways to impact plan is unacceptably poor, or where the panel wants to fund one arm of the project, but not the other. Clearly the 20% figure indicates that there are at least some like this.
The danger is that the “invited resubmission” might be a runner up prize for the applications that came closest to getting funding but which didn’t quite make it. But if they’re that good, is there really any point asking for a full resubmission? Wouldn’t it be better for the ESRC to think about having a repêchage, where a very small number of high quality applications will get another chance in the next funding round. I’m told that there can be a large element of luck involved in the number, quality, and costs of the competition at each funding meeting, so perhaps allowing a very small number of unsuccessful applications to be carried forward might make sense. It might mean re-costing because of changed start dates, but I’m sure we’d accept that as a price to pay. Or we could re-cost on the same basis for the new project dates if successful.
A second problem is determining when an application is a resubmission, and when it’s a fresh application on a related topic. So far we have this definition:
“a ‘new’ application needs to be substantively different from a previous submission with fresh or significantly modified aims and objectives, a different or revised methodological approach and potentially a different team of investigators. This significant change of focus will be accompanied by a different set of costings to deliver the project. Applications that fall short of these broad criteria and reflect more minor amendments based on peer review feedback alone will be counted as re-submissions.”
Some of my former colleagues in philosophy might appreciate this particular version of the identity problem. I’ve had problems with this distinction in the past, where I’ve been involved in an application submitted to the ESRC which was bounced back as a resubmission without having the required letter explaining the changes. Despite what I said last time about having broad confidence in ESRC staff to undertake sifting activities, in this case they got it wrong. In fairness, it was a very technical economics application with a superficial similarity to a previous application, but you’d have to be an economist to know that. In the end, the application was allowed as a new application, but wasn’t funded. That case was merely frustrating, but the ESRC are planning on counting undeclared resubmissions as unsuccessful, with potential sanctions/quota consequences, so we need to get this right. Fortunately…
“The identification of uninvited re-submissions will rest with staff within the ESRC, as is currently the practice. In difficult cases advice will be taken from GAP [Grant Assessment Panel] members. Applications identified as uninvited re-submissions will not be processed and classified as unsuccessful on quality grounds under any sanctions policy that we may introduce.”
Even so, I’d like to see the “further guidance” that the ESRC intend to produce on this. While we don’t want applicants disguising resubmissions as fresh applications, there’s a danger of a chilling effect which could serve to dissuade genuinely fresh applications on a similar or related topic. However, I’m heartened to see the statement about the involvement of GAP members in getting this right – that should provide some measure of reassurance.