The ESRC and “Demand Management”: Part 2 – Sifting and Outlines

ESRC office staff start their new sifting role

In part one of this fortnight-long series of posts on the ESRC and “demand management”, I attempted to sketch out some context.  Briefly, we’re here because demand has increased while the available funds have remained static at best, and are now declining in real terms.  Phil Ward and Paul Benneworth have both added interesting comments – Phil has a longer professional memory than I do, and Paul makes some very useful comments from the perspective of a researcher starting his career during the period in question.  If you read the previous post before their comments appeared, I’d recommend going back and having a read.

It’s easy to think of “demand management” as something that’s at least a year away, but there are some changes that are being implemented straight away – this post is about outline applications and “sifting”.  Next I’ll talk about the ban on (uninvited) resubmissions.

Greater use of outline stages for managed mode schemes (i.e. pretty much everything except open-call Research Grants), for example, seems very sensible to me, provided that the application form is cut down sufficiently to represent a genuine time and effort saving for individuals and institutions, while still allowing applicants enough space to make a case.  It’s also important that reviewers treat outline applications as just that, and are sensitive to space constraints.  I understand that the ESRC are developing a new grading scheme for outline applications, which is a very good thing.  At outline stage, I would imagine that they’re looking for ideas that are of the right size in terms of scale and ambition, and at least some evidence that (a) the research team has the right skills and (b) that granting them more time and space to lay out their arguments will result in a competitive application.

With Standard Grants (now known as Research Grants, as there are no longer ‘large’ or ‘small’ grants), there will be “greater internal sifting by ESRC staff”.  I don’t know if this is in place yet, but I understand that there’s a strong possibility that this might not be done by academics.  I’m very relaxed about that – in fact, I welcome it – though I can imagine that some academics will be appalled.  But…. the fact is that about a third of the applications the ESRC receives are “uncompetitive”, which is a lovely British way of saying unfundable.  Not good enough.  Where all these applications are coming from I’ve no idea, and while I don’t think any of them are being submitted on my watch, it would be an act of extreme hubris to declare that absolutely.  However, I strongly suspect that they’re largely coming from universities that don’t have a strong research culture and/or don’t have high quality research support and/or are just firing off as many applications as possible in a mistaken belief that the ESRC is some kind of lottery.

I’d back myself to pick out the unfundable third in a pile of applications.  I wouldn’t back myself to pick the grant recipients, but even then I reckon I’d get close.  I can differentiate between what I don’t understand and what doesn’t make sense with a fair degree of accuracy, and while I’m no expert on research methods, I know when there isn’t a good account of methods, or when the chosen approach isn’t explained or justified.  I can spot a Case for Support that is 80% literature review and only 20% new proposal.  I can tell when the research question(s) subtly change from section to section.  And I’d back others with similar roles to me to be able to do the same – if we can’t tell the difference between a sinner and a winner…. why are research intensive universities bothering to employ us?

And if I can do it with a combination of some academic background (MPhil political philosophy) and professional experience, I’m sure others could too, including ESRC staff.  They’d only have to sort the no-hopers from the rest, and if a few no-hopers slip through, or if a few low quality fundable some-hopers-but-a-very-long-way-down-the-lists drop out at that stage, it would make very little difference.  Unless, of course, one of the demand management sanction options is introduced, at which point the notion of non-academics making decisions that could lead to individual or institutional sanctions becomes a little more complicated.  But again, I think I’d back myself to spot grant applications that should not have been submitted, even if I wouldn’t necessarily want a sanctions decision depending on my judgement alone.

Even if they were to go with a very conservative policy of only sifting out applications which, say, three ESRC staff think are dreadful, that could still make a substantial difference to the demands on academic reviewers.  I guess that’s the deal – you submit to a non-academic having some limited judgement role over your application, and in return, they stop sending you hopeless applications to review.

If I were an academic I’d take that like a shot.

Costs of interview transcription: Take a letter, Miss Jones….

A picture of Michelle from "'Allo 'allo"
"Listen verry carefully.... I will say zis anly wance"

Quick post on something other than the ESRC, for a change…..

Transcription is a major category of expense for social science research projects, and I’ve been wondering for some time whether it’s possible to make cost savings without sacrificing accuracy, consistency, confidentiality, speed of turnaround, and all of the other things we require.

One problem is that there seems to be a wide variety of pricing models: some charge by hour of tape, some by hour of staff time, some by some other, smaller unit of time.  Another is that there are different types of transcription – verbatim (which includes every last hesitation and verbal tic) and then varying degrees of near-verbatim.  Some transcription is of fairly straightforward one-on-one interviews, but sometimes it’s whole focus groups or meetings where individual speakers need identifying.  The quality of the recordings and the clarity of those speaking may be variable.  I’ve also been assured that there are cases where a Research Associate with specialist knowledge (rather than a generalist audio typist) is required, though that was for a video recording.
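One way to make quotes comparable across these different pricing models is to normalise everything to a cost per hour of recorded audio.  A minimal sketch of that arithmetic follows; the supplier names, the rates, and the assumed four-to-one ratio of typing time to tape time are all made up for illustration, not real quotes:

```python
# Hypothetical comparison of transcription quotes charged in different
# units, normalised to cost per hour of recorded audio.
# All figures below are illustrative assumptions, not real supplier quotes.

def cost_per_audio_hour(rate, unit, typing_ratio=4.0):
    """Return the cost (GBP) of transcribing one hour of audio.

    rate         -- price charged per billing unit
    unit         -- 'audio_hour', 'staff_hour', or 'audio_minute'
    typing_ratio -- assumed hours of typing needed per hour of tape
    """
    if unit == "audio_hour":
        return rate
    if unit == "staff_hour":
        return rate * typing_ratio
    if unit == "audio_minute":
        return rate * 60
    raise ValueError(f"unknown pricing unit: {unit}")

quotes = {
    "Supplier A": (60.00, "audio_hour"),    # flat fee per hour of tape
    "Supplier B": (14.00, "staff_hour"),    # hourly pay for the typist
    "Supplier C": (1.10, "audio_minute"),   # per-minute-of-audio rate
}

for name, (rate, unit) in quotes.items():
    print(f"{name}: £{cost_per_audio_hour(rate, unit):.2f} per audio hour")
```

The interesting point is how sensitive the staff-time model is to the typing ratio: a hard-to-hear focus group pushing the ratio from four to six changes that quote by fifty per cent while the per-tape-hour quotes stay fixed, which is exactly why like-for-like comparison is so difficult.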

I imagine there are plenty of models of sourcing transcription across universities – in house capacity, a list of current/former staff looking for extra work, or a contract with a preferred supplier.  Or some kind of mixture of provision.  One option would be to look at getting better value, but given the difficulty in comparing price and quality, I’m not sure how far this would get us.  I’m also a little unhappy at the thought of trying to reduce what I suspect are already fairly low rates of pay.

I wonder if technology has reached a point where it would be worth looking seriously at voice recognition software for producing a first pass transcript.  At least for non-verbatim requirements, this might produce a document that would just need correcting and tidying up, which might be quicker (and therefore cheaper) than transcribing the whole thing.  However, I can’t help remembering an episode when a friend tried voice recognition software which couldn’t cope with his Saarrf Lahndahn accent… which got more pronounced the more frustrated he got with its utter failure to anderstan’ wot ee waz sayin.  But I’m sure technology has moved on.

The ever-reliable Wikipedia reckons that 50% of live TV subtitles were produced via voice recognition as of 2005, though there’s a “citation needed” for this claim.  But even if true, I would imagine that a fair amount of speech on live TV is more scripted and rehearsed – and therefore easier to automatically transcribe – than what someone might say in a research interview.  More RP accents, too, I’d imagine.

Anyone have any experience of using voice recognition software for transcription?  Or is the technology not quite there yet?

The ESRC and “Demand Management”: Part 1 – How did we get here?

A picture of Oliver Twist asking for more
Developing appropriate demand management strategies is not a new challenge

The ESRC have some important decisions to make this summer about what to do about “demand management”.  The consultation on these changes closed in June, and I understand about 70 responses were received.  Whatever they come up with is unlikely to be popular, but I think there’s no doubt that some kind of action is required.

I’ve got a few thoughts on this, and I’m going to split them across a number of blog posts over the next week or so.  I’m going to talk about the context, the steps already taken, the timetable, possible future steps, and how I think we in the “grant getting community” should respond.

*          *          *          *          *

According to the presentation that the ESRC presented around the country this spring, the number of applications received has increased by about a third over the last five years.  For most of those five years, there was no more money, and because of the flat cash settlement at the last comprehensive spending review, there’s now effectively less money than before.  As a result, success rates have plummeted, down to about 13% on average.  There are a number of theories as to why application rates have risen.  One hypothesis is that there are just more social science researchers than ever before, and while I’m sure that’s a factor, I think there’s something else going on.

I wonder if the current problem has its roots in the last RAE.  On the whole, it wasn’t good in brute financial terms for social science – improving quality in relative terms (unofficial league tables) or absolute terms was far from a guarantee of maintaining levels of funding.  A combination of protection for the STEM subjects, grade inflation (or rising standards), and increased numbers of staff FTE returns shrunk the unit of resource.  The units that did best in brute financial terms, it seems to me, were those that were able to maintain or improve quality, but submit a much greater number of staff FTEs.  The unit of assessment that I was closest to in the last RAE achieved just this.

What happened next?  Well, I think a lot of institutions and academic units looked at a reduction in income, looked at the lucrative funding rules of research council funding, pondered briefly, and then concluded that perhaps the ESRC (and other research councils) would giveth where RAE had taken away.

Problem is, I think everyone had the same idea.

On reflection, this may only have accelerated a process that started with the introduction of Full Economic Costing (fEC).  This had just started as I moved into research development, so I don’t really remember what went before it.  I do remember two things, though: firstly, that although research technically still represented a loss-making activity (in that it only paid 80% of the full cost) the reality was that the lucrative overhead payments were very welcome indeed.  The second thing I remember is that puns about the hilarious acronym grew very stale very quickly.

So…. institutions wanted to encourage grant-getting activities.  How did they do this?  They created posts like mine.  They added grant-getting to the criteria for academic promotions.  They started to set expectations.  In some places, I think this even took the form of targets – either for individuals or for research groups.  One view I heard expressed was along the lines of, well, if Dr X has a research time allocation of Y, shouldn’t we expect her to produce Z applications per year?  Er…. if Dr X can produce outstanding research proposals at that rate, and if applying for funding is the best use of her time, then sure, why not?  But not all researchers are ESRC-able ideas factories, and some of them are probably best advised to spend at least some of their time, er, writing papers.  And my nightmare for social science in the UK is that everyone spends their QR-funded research time writing grant applications, rather than doing any actual research.

Did the sector as a whole adopt a scattergun policy of firing off as many applications as possible, believing that the more you fired, the more likely it would be that some would hit the target?  Have academics been applying for funding because they think it’s expected of them, and/or they have one eye on promotion?  Has the imperative to apply for funding for something come first, and the actual research topic second?  Has there been a tendency to treat the process of getting research council funding as a lottery, for which one should simply buy as many tickets as possible?  Is all this one of the reasons why we are where we are today, with the ESRC considering demand management measures?  How many rhetorical questions can you pose without irritating the hell out of your reader?

I think the answer to these questions (bar the last one) is very probably ‘yes’.

But my view is based on conversations with a relatively small number of colleagues at a relatively small number of institutions.  I’d be very interested to hear what others think.

“It’s a bad review, we got a bad review …oh lord”

A picture of Clacton Pier
A large sandpit and a pier (re)view

A healthy portion of food for thought has been served up by the publication of a RAND Europe report into alternatives to peer review for research project funding.  Peer review is something that I – as an alleged research funding professional – have rather taken for granted as being the natural and obvious way to allocate (increasingly) scarce resources.  How do we decide who gets funded?  Well, let’s ask experts to report, and then make a judgement based upon what those experts say.  I’ve been aware of other ways, but I’ve not given them much thought – I’m a poacher, not a gamekeeper.

The Guardian Higher Education Network ran a poll over the second half of last week, and a whopping 70.8% of those who voted said that they had had a research proposal turned down and thought the process should be changed.  I’m aware of the limitations of peer review – it’s only as good as the peers, and the effort they’re prepared to make and the care they’re prepared to take with their review.  Anyone who has had any involvement in research funding will be aware of examples where comments come back that are frankly baffling: drawing odd conclusions, obsessing over irrelevancies, wanting the research to be about something else, making unsupported statements, or assertions that are just demonstrably false.

[Personally, I hate it when ‘Reviewer Q’ remarks that the project “seems expensive”, without further comment or justification about what’s too expensive.  That’s our carefully crafted budget you’re talking about there, Reviewer Q.  It’s meticulously pedantic, and pedantically meticulous.  We’ve Justified our Resources… so how about you justify your comment?  I wonder how annoyed I’d get if I wrote the whole application…..]

One commentator on the Guardian poll page, dianthusmed, said that

Anyone voting to change the peer review process, I will not take you seriously unless you tell me what you’d replace it with.

And that’s surely the $64,000 question (at 80% fEC)…. we’re all more or less familiar with the potential shortcomings of peer review as a method of allocating funding, but if not peer review… then what?

In fact, the RAND Europe report is not an anti-peer-review polemic, and deserves a more nuanced response than a “peer review: yes or no” online poll.  The only sensible answer, surely, is: well, it depends what you want to achieve.  The report itself aims to

inspire thinking amongst research funders by showing how the research funding review process can be changed, and to give funders the confidence to try novel methods by explaining where and how such approaches have been used previously.

But crucially…

This is not intended to replace peer review, which remains the best method for review of grant applications in many situations. Rather, we hope that by considering some of the alternatives to peer review, where appropriate, research funders will be able to support a wider portfolio of projects, leading to more innovative, high-impact work.

A number of the options in the report seem to be more related to changing the nature and scope of calls for proposals than changing the nature of peer review itself – many in ways that aren’t unfamiliar.  But I’d like to pick out one idea for particular comment: sand pits.

I believe the origin of the term is from computing, where the term ‘sand box’ or ‘sand pit’ was used to describe an area for experimentation or testing, where no damage could be done to the overall system architecture.  I guess the notion of harmless – even playful – experimentation is what advocates have in mind.

They sound like a very interesting idea – get a group of people with expertise to bring to bear on a particular problem, put them all in the same place for a day, or a number of days, and see what emerges from discussions.  It hasn’t really caught on yet in the social sciences, although social scientists have been involved, of course.  The notion of cooperating rather than competing, and of new research collaborations forming, is an interesting and an appealing idea.  As a way of bringing new perspectives to bear on a particular problem – especially an interdisciplinary problem – it looks like an attractive alternative.

There are problems, though.  If there are more applications to participate than there are places, there will inevitably need to be choices made and applications accepted and rejected.  I would imagine that questions of fit and balance would be relevant as well as questions of experience and expertise, but someone or some group of people will have to make choices.  From the application forms I’ve seen, this is often done on the basis of a short CV and a short statement.  So… don’t we end up relying on some element of peer review anyway?

Secondly, I wonder about equal opportunities.  If a sand pit event is to take place over several days in a hotel, it will inevitably be difficult or even impossible for some to attend. Those who are parents and/or carers. Those who have timetabled lectures and tutorials.  Those who have other professional or personal diary commitments that just can’t be moved.  For a standard peer reviewed call, no-one is excluded completely because it clashes with an important family event.  Can we be sure that all of the best researchers will even apply?

I should say that I’ve never attended a sandpit event, but I have attended graduate recruitment/selection events (offered, deferred, and finally declined, since you ask), and residential training courses.  They’re all strange situations where both competitive and cooperative behaviours are rewarded, and I wonder how people react.  If I were a funder, I’d be worried that the prizes might be going to the best social operators, rather than those with the best ideas.  It’s a myth that academic brilliance is always found in inverse proportion to social skills, of course, but even so, my concern would be about whether one or more dominant figures could end up forming projects around themselves.  I also wonder about existing cliques or vested interests of whatever kind having a disproportionate influence.

I’m sure that effective facilitation and chairing can go a long way to minimising at least some of the potential problems, and while I think sandpits are an intriguing and promising alternative to peer review, they’re not without problems of their own.  I’d be very interested to hear from anyone who’s attended a sandpit – am I doing them a disservice here?

Although I’m open to other ideas for distributing research funding – by all means, let’s be creative, and let’s look at alternatives – I don’t see a replacement for peer review.  Which isn’t to say that there isn’t scope to improve the quality of peer review.  Because, Reviewer Q, there certainly is.

And perhaps that’s the point that the 70.8% were trying to make.