The ESRC and “Demand Management”: Part 3 – Submissions and re-submissions

A picture of a boomerang

In the previous post in this series, I said a few things about the increased use of outline application stages and greater use of ‘sifting’ processes to filter out uncompetitive applications before they reach the refereeing stage.  But that’s not the only change taking place straight away.  The new prohibition on “uninvited” resubmissions for the open-call Research Grants scheme has been controversial, and it’s fair to say that it’s not a move that found universal favour in our internal discussions about our institutional response to the ESRC’s second Demand Management consultation.  Having said that, I personally think it’s sensible – which in my very British way is quite high praise.

In recent years I’ve advised against resubmissions on the grounds that I strongly suspected that they were a waste of time.  Although they were technically allowed, the guidance notes gave the strong impression that this was grudging – perhaps even to the extent of being a case of yes in principle, no in practice.  After all, resubmissions were supposed to demonstrate that they had been “substantially revised” or some such phrase.

But the resubmissions the ESRC might have wanted presumably wouldn’t need to be “substantially revised” – tightening up perhaps, refocusing a bit, addressing criticisms, that kind of thing.  But “substantially revised”?  From memory, I don’t think an increase or decrease in scale would count.  Am I being unfair in thinking that any proposal that could be “substantially revised” and remain the same proposal (of which more later) was, well, unfundable, and shouldn’t have been submitted in the first place?  The time and place for “substantially revising” your proposal is surely before submission.

The figures are interesting – apparently resubmissions account for about 7% of applications, so banning them should reduce application numbers by roughly that much – a significant step towards the very ambitious goal of halving the number of applications by 2014.  Of those resubmissions, 80% are unsuccessful.  A 20% success rate sounds high compared to some scheme averages, but it’s not clear what period of time that figure relates to, nor how it’s split over different schemes.  But even if it was just this last year, a 20% success rate for resubmissions compared to about 15% for first-time applications is not a substantial improvement.  We should probably expect resubmissions to be of a higher standard, after all, and that’s not much of a higher standard.
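A quick back-of-the-envelope sketch of that arithmetic, using the approximate figures quoted above (the total of 1,000 applications is purely illustrative):

```python
# Illustrative figures only: the 1,000 total is invented; the rates
# are the approximate ones quoted above.
total_apps = 1000
resub_share = 0.07      # resubmissions: roughly 7% of all applications
resub_success = 0.20    # quoted success rate for resubmissions
first_success = 0.15    # rough success rate for first-time applications

resubs = total_apps * resub_share
resub_awards = resubs * resub_success
first_time = total_apps - resubs
first_awards = first_time * first_success

print(f"{resubs:.0f} resubmissions -> {resub_awards:.0f} funded")
print(f"{first_time:.0f} first-time applications -> {first_awards:.0f} funded")
```

On these (invented) volumes, the ban removes around 70 applications from the pile at the cost of around 14 awards that would otherwise have been made – which is the sense in which a 20% versus 15% success rate is "not much of a higher standard".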

But moving to invited-only resubmissions shouldn’t be understood in isolation.  With very little fanfare, the ESRC have changed their policy on a right to respond to referees’ comments.  They do have a habit of sneaking stuff onto their website when I’m not looking, and this one caught me out a bit.  Previously the right to respond was only available to those asking for more than £500k – now it’s for all Standard Grant applications.  I’m amazed that the ESRC hasn’t linked this policy change more explicitly to the resubmissions change – I’m sure most applicants would happily swap the right to resubmit for the right to respond to referees’ comments.

There are problems with this idea of “invited resubmissions”, though, and I suspect that the ESRC are grappling with them at the moment.

The first problem will be identifying the kinds of applications that would benefit from being allowed a second bite of the cherry.  I would imagine these might be very promising ideas, but which perhaps are let down by poor exposition and/or grant writing – A for ideas, E for execution type applications.  Others might be very promising applications which have a single glaring weakness that could be addressed.  But I wonder how many applications really fall into either of these categories.  If you’re good enough to have a fundable idea, it’s hard to imagine that you’d struggle to write it up, or that it would contain a fixable weakness.  But perhaps there are applications like this, where (for example) a pathways to impact plan is unacceptably poor, or where the panel wants to fund one arm of the project, but not the other.  Clearly the 20% figure indicates that there are at least some like this.

The danger is that the “invited resubmission” might be a runner-up prize for the applications that came closest to getting funding but which didn’t quite make it.  But if they’re that good, is there really any point asking for a full resubmission?  Wouldn’t it be better for the ESRC to think about having a repêchage, where a very small number of high quality applications would get another chance in the next funding round?  I’m told that there can be a large element of luck involved in the number, quality, and costs of the competition at each funding meeting, so perhaps allowing a very small number of unsuccessful applications to be carried forward might make sense.  It might mean re-costing because of changed start dates, but I’m sure we’d accept that as a price worth paying.  Or we could re-cost on the same basis for the new project dates if successful.

A second problem is determining when an application is a resubmission, and when it’s a fresh application on a related topic.  So far we have this definition:

“a ‘new’ application needs to be substantively different from a previous submission with fresh or significantly modified aims and objectives, a different or revised methodological approach and potentially a different team of investigators. This significant change of focus will be accompanied by a different set of costings to deliver the project. Applications that fall short of these broad criteria and reflect more minor amendments based on peer review feedback alone will be counted as re-submissions.”

Some of my former colleagues in philosophy might appreciate this particular version of the identity problem.  I’ve had problems with this distinction in the past, where I’ve been involved in an application submitted to the ESRC which was bounced back as a resubmission without having the required letter explaining the changes.  Despite what I said last time about having broad confidence in ESRC staff to undertake sifting activities, in this case they got it wrong.  In fairness, it was a very technical economics application with a superficial similarity to a previous application, but you’d have to be an economist to know that.  In the end, the application was allowed as a new application, but wasn’t funded.  That case was merely frustrating, but the ESRC are planning on counting undeclared resubmissions as unsuccessful, with potential sanctions/quota consequences, so we need to get this right.  Fortunately…

“The identification of uninvited re-submissions will rest with staff within the ESRC, as is currently the practice. In difficult cases advice will be taken from GAP [Grant Assessment Panel] members. Applications identified as uninvited re-submissions will not be processed and classified as unsuccessful on quality grounds under any sanctions policy that we may introduce.”

Even so, I’d like to see the “further guidance” that the ESRC intend to produce on this.  While we don’t want applicants disguising resubmissions as fresh applications, there’s a danger of a chilling effect which could serve to dissuade genuinely fresh applications on a similar or related topic.  However, I’m heartened to see the statement about the involvement of GAP members in getting this right – that should provide some measure of reassurance.

The ESRC and “Demand Management”: Part 2 – Sifting and Outlines

ESRC office staff start their new sifting role

In part one of this fortnight-long series of posts on the ESRC and “demand management”, I attempted to sketch out some context.  Briefly, we’re here because demand has increased while the available funds have remained static at best, and are now declining in real terms.  Phil Ward and Paul Benneworth have both added interesting comments – Phil has a longer professional memory than I do, and Paul makes some very useful comments from the perspective of a researcher starting his career during the period in question.  If you read the previous post before their comments appeared, I’d recommend going back and having a read.

It’s easy to think of “demand management” as something that’s at least a year away, but there are some changes that are being implemented straight away – this post is about outline applications and “sifting”.  Next I’ll talk about the ban on (uninvited) resubmissions.

Greater use of outline stages for managed mode schemes (i.e. pretty much everything except open call Research Grants), for example, seems very sensible to me, provided that the application form is cut down sufficiently to represent a genuine time and effort saving for individuals and institutions, while still allowing applicants enough space to make a case.  It’s also important that reviewers treat outline applications as just that, and are sensitive to space constraints.  I understand that the ESRC are developing a new grading scheme for outline applications, which is a very good thing.  At outline stage, I would imagine that they’re looking for ideas that are of the right size in terms of scale and ambition, and at least some evidence that (a) the research team has the right skills and (b) that granting them more time and space to lay out their arguments will result in a competitive application.

With Standard Grants (now known as Research Grants, as there are no longer ‘large’ or ‘small’ grants), there will be “greater internal sifting by ESRC staff”.  I don’t know if this is in place yet, but I understand that there’s a strong possibility that this might not be done by academics.  I’m very relaxed about that – in fact, I welcome it – though I can imagine that some academics will be appalled.  But…. the fact is that about a third of the applications the ESRC receives are “uncompetitive”, which is a lovely British way of saying unfundable.  Not good enough.  Where all these applications are coming from I’ve no idea, and while I don’t think any of them are being submitted on my watch, it would be an act of extreme hubris to declare that absolutely.  However, I strongly suspect that they’re largely coming from universities that don’t have a strong research culture and/or don’t have high quality research support and/or are just firing off as many applications as possible in a mistaken belief that the ESRC is some kind of lottery.

I’d back myself to pick out the unfundable third in a pile of applications.  I wouldn’t back myself to pick the grant recipients, but even then I reckon I’d get close.  I can differentiate between what I don’t understand and what doesn’t make sense with a fair degree of accuracy, and while I’m no expert on research methods, I know when there isn’t a good account of methods, or when it’s not explained or justified.  I can spot a Case for Support that is 80% literature review and only 20% new proposal.  I can tell when the research question(s) subtly change from section to section.  And I’d back others with similar roles to me to be able to do the same – if we can’t tell the difference between a sinner and a winner…. why are research intensive universities bothering to employ us?

And if I can do it with a combination of some academic background (MPhil political philosophy) and professional experience, I’m sure others could too, including ESRC staff.  They’d only have to sort the no-hopers from the rest, and if a few no-hopers slip through, or if a few low quality fundable some-hopers-but-a-very-long-way-down-the-lists drop out at that stage, it would make very little difference.  Unless, of course, one of the demand management sanction options is introduced, at which point the notion of non-academics making decisions that could lead to individual or institutional sanctions becomes a little more complicated.  But again, I think I’d back myself to spot grant applications that should not have been submitted, even if I wouldn’t necessarily want a sanctions decision depending on my judgement alone.

Even if they were to go with a very conservative policy of only sifting out applications which, say, three ESRC staff think are dreadful, that could still make a substantial difference to the demands on academic reviewers.  I guess that’s the deal – you submit to a non-academic having some limited judgement role over your application, and in return, they stop sending you hopeless applications to review.

If I were an academic I’d take that like a shot.

Costs of interview transcription: Take a letter, Miss Jones….

A picture of Michelle from "'Allo 'allo"
"Listen verry carefully.... I will say zis anly wance"

Quick post on something other than the ESRC, for a change…..

Transcription is a major category of expense for social science research projects, and I’ve been wondering for some time whether it’s possible to make cost savings without sacrificing accuracy, consistency, confidentiality, speed of turnaround, and all of the other things we require.

One problem is that there seems to be a wide variety of pricing models.  Some charge by the hour of tape, some by the hour of staff time, some by some other, smaller unit of time.  Another is that there are different types of transcription – verbatim (which includes every last hesitation and verbal tic) and then varying degrees of near-verbatim stuff.  Some transcription is of fairly straightforward one-on-one interviews, but sometimes it’s whole focus groups or meetings where individual speakers need identifying.  The quality of the recordings and the clarity of those speaking may be variable.  I’ve also been assured that there are cases where a Research Associate with specialist knowledge (rather than a generalist audio typist) is required, though that was for a video recording.
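One way to make these quotes comparable is to convert everything to a cost per hour of recorded audio.  A minimal sketch of the idea follows – the default ratio of four hours of typist time per hour of tape is purely an assumption for illustration, and in practice varies a great deal with audio quality and transcription style:

```python
def cost_per_audio_hour(rate, unit, work_ratio=4.0):
    """Normalise a transcription quote to cost per hour of recorded audio.

    rate       -- the price quoted
    unit       -- 'audio_hour', 'staff_hour', or 'audio_minute'
    work_ratio -- assumed hours of typist time per hour of audio
                  (an assumption: verbatim work on poor recordings
                  can take far longer than 4:1)
    """
    if unit == "audio_hour":
        return rate
    if unit == "staff_hour":
        return rate * work_ratio
    if unit == "audio_minute":
        return rate * 60
    raise ValueError(f"unknown unit: {unit}")

# e.g. comparing a per-staff-hour quote with a per-audio-hour quote
print(cost_per_audio_hour(15, "staff_hour"))  # 60.0 per audio hour at 4:1
print(cost_per_audio_hour(70, "audio_hour"))  # 70
```

Even a crude normalisation like this at least puts quotes on the same axis, though it does nothing to capture the quality differences discussed above.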

I imagine there are plenty of models of sourcing transcription across universities – in-house capacity, a list of current/former staff looking for extra work, or a contract with a preferred supplier.  Or some kind of mixture of provision.  One option would be to look at getting better value, but given the difficulty in comparing price and quality, I’m not sure how far this would get us.  I’m also a little unhappy at the thought of trying to reduce what I suspect are already fairly low rates of pay.

I wonder if technology has reached a point where it would be worth looking seriously at voice recognition software for producing a first pass transcript.  At least for non-verbatim requirements, this might produce a document that would just need correcting and tidying up, which might be quicker (and therefore cheaper) than transcribing the whole thing.  However, I can’t help remembering an episode when a friend tried voice recognition software which couldn’t cope with his Saarrf Lahndahn accent… which got more pronounced the more frustrated he got with its utter failure to anderstan’ wot ee waz sayin.  But I’m sure technology has moved on.

The ever-reliable Wikipedia reckons that 50% of live TV subtitles were produced via voice recognition as of 2005, though there’s a “citation needed” for this claim.  But even if true, I would imagine that a fair amount of speech on live TV is more scripted and rehearsed – and therefore easier to automatically transcribe – than what someone might say in a research interview.  More RP accents, too, I’d imagine.

Anyone have any experience of using voice recognition software for transcription?  Or is the technology not quite there yet?

The ESRC and “Demand Management”: Part 1 – How did we get here?

A picture of Oliver Twist asking for more
Developing appropriate demand management strategies is not a new challenge

The ESRC have some important decisions to make this summer about what to do about “demand management”.  The consultation on these changes closed in June, and I understand about 70 responses were received.  Whatever they come up with is unlikely to be popular, but I think there’s no doubt that some kind of action is required.

I’ve got a few thoughts on this, and I’m going to split them across a number of blog posts over the next week or so.  I’m going to talk about the context, the steps already taken, the timetable, possible future steps, and how I think we in the “grant getting community” should respond.

*          *          *          *          *

According to the presentation that the ESRC gave around the country this spring, the number of applications received has increased by about a third over the last five years.  For most of those five years, there was no more money, and because of the flat cash settlement at the last comprehensive spending review, there’s now effectively less money than before.  As a result, success rates have plummeted, down to about 13% on average.  There are a number of theories as to why application rates have risen.  One hypothesis is that there are just more social science researchers than ever before, and while I’m sure that’s a factor, I think there’s something else going on.

I wonder if the current problem has its roots in the last RAE.  On the whole, it wasn’t good in brute financial terms for social science – improving quality in relative terms (unofficial league tables) or absolute terms was far from a guarantee of maintaining levels of funding.  A combination of protection for the STEM subjects, grade inflation (or rising standards), and increased numbers of staff FTE returns shrank the unit of resource.  The units that did best in brute financial terms, it seems to me, were those that were able to maintain or improve quality, but submit a much greater number of staff FTEs.  The unit of assessment that I was closest to in the last RAE achieved just this.

What happened next?  Well, I think a lot of institutions and academic units looked at a reduction in income, looked at the lucrative funding rules of research council funding, pondered briefly, and then concluded that perhaps the ESRC (and other research councils) would giveth where the RAE had taken away.

Problem is, I think everyone had the same idea.

On reflection, this may only have accelerated a process that started with the introduction of Full Economic Costing (fEC).  This had just started as I moved into research development, so I don’t really remember what went before it.  I do remember two things, though: firstly, that although research technically still represented a loss-making activity (in that it only paid 80% of the full cost) the reality was that the lucrative overhead payments were very welcome indeed.  The second thing I remember is that puns about the hilarious acronym grew very stale very quickly.

So…. institutions wanted to encourage grant-getting activities.  How did they do this?  They created posts like mine.  They added grant-getting to the criteria for academic promotions.  They started to set expectations.  In some places, I think this even took the form of targets – either for individuals or for research groups.  One view I heard expressed was along the lines of, well if Dr X has a research time allocation of Y, shouldn’t we expect her to produce Z applications per year?  Er…. if Dr X can produce outstanding research proposals at that rate, and applying for funding is the best use of her time, then sure, why not?  But not all researchers are ESRC-able ideas factories, and some of them are probably best advised to spend at least some of their time, er, writing papers.  And my nightmare for social science in the UK is that everyone spends their QR-funded research time writing grant applications, rather than doing any actual research.

Did the sector as a whole adopt a scattergun policy of firing off as many applications as possible, believing that the more you fired, the more likely it would be that some would hit the target?  Have academics been applying for funding because they think it’s expected of them, and/or they have one eye on promotion?  Has the imperative to apply for funding for something come first, and the actual research topic second?  Has there been a tendency to treat the process of getting research council funding as a lottery, for which one should simply buy as many tickets as possible?  Is all this one of the reasons why we are where we are today, with the ESRC considering demand management measures?  How many rhetorical questions can you pose without irritating the hell out of your reader?

I think the answer to these questions (bar the last one) is very probably ‘yes’.

But my view is based on conversations with a relatively small number of colleagues at a relatively small number of institutions.  I’d be very interested to hear what others think.