Costs of interview transcription: Take a letter, Miss Jones….

A picture of Michelle from "'Allo 'allo"
"Listen verry carefully.... I will say zis anly wance"

Quick post on something other than the ESRC, for a change…..

Transcription is a major category of expense for social science research projects, and I’ve been wondering for some time whether it’s possible to make cost savings without sacrificing accuracy, consistency, confidentiality, speed of turnaround, and all of the other things we require.

One problem is that there seems to be a wide variety of pricing models: some charge by the hour of tape, some by the hour of staff time, some by smaller units of time.  Another is that there are different types of transcription – verbatim (which includes every last hesitation and verbal tic) and then varying degrees of near-verbatim stuff.  Some transcription is of fairly straightforward one-on-one interviews, but sometimes it’s whole focus groups or meetings where individual speakers need identifying.  The quality of the recordings and the clarity of those speaking may be variable.  I’ve also been assured that there are cases where a Research Associate with specialist knowledge (rather than a generalist audio typist) is required, though that was for a video recording.

I imagine there are plenty of models of sourcing transcription across universities – in house capacity, a list of current/former staff looking for extra work, or a contract with a preferred supplier.  Or some kind of mixture of provision.  One option would be to look at getting better value, but given the difficulty in comparing price and quality, I’m not sure how far this would get us.  I’m also a little unhappy at the thought of trying to reduce what I suspect are already fairly low rates of pay.

I wonder if technology has reached a point where it would be worth looking seriously at voice recognition software for producing a first pass transcript.  At least for non-verbatim requirements, this might produce a document that would just need correcting and tidying up, which might be quicker (and therefore cheaper) than transcribing the whole thing.  However, I can’t help remembering an episode when a friend tried voice recognition software which couldn’t cope with his Saarrf Lahndahn accent… which got more pronounced the more frustrated he got with its utter failure to anderstan’ wot ee waz sayin.  But I’m sure technology has moved on.
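As a back-of-envelope sketch of where the saving would come from: every figure below is an assumption invented for illustration (the hourly rate, the typing-time ratios, the software cost), not a quoted price – the point is only that the saving hinges on whether correcting a machine-generated draft really is faster than typing from scratch.

```python
# Illustrative cost comparison: manual transcription vs correcting an
# automatic first-pass transcript. All figures are assumptions made up
# for this example, not real quotes.

HOURLY_RATE = 15.0  # assumed pay per hour of staff time (GBP)

def manual_cost(audio_hours, ratio=4.0):
    """Assume roughly 4 hours of typing per hour of audio."""
    return audio_hours * ratio * HOURLY_RATE

def corrected_cost(audio_hours, ratio=2.0, software_per_hour=1.0):
    """Assume correcting a machine draft takes ~2 hours per audio hour,
    plus a small per-audio-hour software/processing cost."""
    return audio_hours * (ratio * HOURLY_RATE + software_per_hour)

for hours in (1, 10, 50):
    m, c = manual_cost(hours), corrected_cost(hours)
    print(f"{hours:>3}h of audio: manual £{m:.0f}, "
          f"corrected draft £{c:.0f}, saving £{m - c:.0f}")
```

On these made-up numbers the draft-and-correct route roughly halves the cost; if correction turned out to take nearly as long as typing from scratch (quite possible with a heavy accent or poor recording), the saving evaporates.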

The ever-reliable Wikipedia reckons that 50% of live TV subtitles were produced via voice recognition as of 2005, though there’s a “citation needed” for this claim.  But even if true, I would imagine that a fair amount of speech on live TV is more scripted and rehearsed – and therefore easier to automatically transcribe – than what someone might say in a research interview.  More RP accents, too, I’d imagine.

Anyone have any experience of using voice recognition software for transcription?  Or is the technology not quite there yet?

The ESRC and “Demand Management”: Part 1 – How did we get here?

A picture of Oliver Twist asking for more
Developing appropriate demand management strategies is not a new challenge

The ESRC have some important decisions to make this summer about “demand management”.  The consultation on these changes closed in June, and I understand about 70 responses were received.  Whatever they come up with is unlikely to be popular, but I think there’s no doubt that some kind of action is required.

I’ve got a few thoughts on this, and I’m going to split them across a number of blog posts over the next week or so.  I’m going to talk about the context, the steps already taken, the timetable, possible future steps, and how I think we in the “grant getting community” should respond.

*          *          *          *          *

According to the presentation that the ESRC presented around the country this spring, the number of applications received has increased by about a third over the last five years.  For most of those five years, there was no more money, and because of the flat cash settlement at the last comprehensive spending review, there’s now effectively less money than before.  As a result, success rates have plummeted, down to about 13% on average.  There are a number of theories as to why application rates have risen.  One hypothesis is that there are just more social science researchers than ever before, and while I’m sure that’s a factor, I think there’s something else going on.
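The arithmetic behind that plummet is mechanical, and worth spelling out.  The sketch below uses an entirely hypothetical baseline application count (only the ~13% figure and the one-third rise come from the ESRC presentation): hold the number of funded awards flat and grow applications by a third, and the success rate falls from roughly 17% to roughly 13% without anything else changing.

```python
# Illustrative arithmetic only: flat awards + a third more applications
# mechanically depresses the success rate. The ~13% figure and the
# one-third rise are from the ESRC presentation; the baseline
# application count is a made-up number for the example.

baseline_apps = 3000                           # hypothetical, five years ago
awards = round(baseline_apps * 4 / 3 * 0.13)   # awards implied by ~13% today

old_rate = awards / baseline_apps              # success rate five years ago
new_rate = awards / (baseline_apps * 4 / 3)    # success rate now

print(f"Awards held flat at {awards}")
print(f"Old success rate: {old_rate:.1%}")
print(f"New success rate: {new_rate:.1%}")
```

The same proportions hold whatever baseline you pick, which is the point: the fall in success rates needs no drop in quality to explain it, just more applications chasing the same (or, post-CSR, effectively less) money.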

I wonder if the current problem has its roots in the last RAE.  On the whole, it wasn’t good in brute financial terms for social science – improving quality in relative terms (unofficial league tables) or absolute terms was far from a guarantee of maintaining levels of funding.  A combination of protection for the STEM subjects, grade inflation (or rising standards, depending on your view), and increased numbers of staff FTE returns shrank the unit of resource.  The units that did best in brute financial terms, it seems to me, were those that were able to maintain or improve quality, but submit a much greater number of staff FTEs.  The unit of assessment that I was closest to in the last RAE achieved just this.

What happened next?  Well, I think a lot of institutions and academic units looked at a reduction in income, looked at the lucrative funding rules of the research councils, pondered briefly, and then concluded that perhaps the ESRC (and other research councils) would giveth where the RAE had taken away.

Problem is, I think everyone had the same idea.

On reflection, this may only have accelerated a process that started with the introduction of Full Economic Costing (fEC).  This had just started as I moved into research development, so I don’t really remember what went before it.  I do remember two things, though: firstly, that although research technically still represented a loss-making activity (in that it only paid 80% of the full cost) the reality was that the lucrative overhead payments were very welcome indeed.  The second thing I remember is that puns about the hilarious acronym grew very stale very quickly.

So…. institutions wanted to encourage grant-getting activities.  How did they do this?  They created posts like mine.  They added grant-getting to the criteria for academic promotions.  They started to set expectations.  In some places, I think this even took the form of targets – either for individuals or for research groups.  One view I heard expressed was along the lines of, well if Dr X has a research time allocation of Y, shouldn’t we expect her to produce Z applications per year?  Er…. if Dr X can produce outstanding research proposals at that rate, and if applying for funding is the best use of her time, then sure, why not?  But not all researchers are ESRC-able ideas factories, and some of them are probably best advised to spend at least some of their time, er, writing papers.  And my nightmare for social science in the UK is that everyone spends their QR-funded research time writing grant applications, rather than doing any actual research.

Did the sector as a whole adopt a scattergun policy of firing off as many applications as possible, believing that the more you fired, the more likely it would be that some would hit the target?  Have academics been applying for funding because they think it’s expected of them, and/or they have one eye on promotion?  Has the imperative to apply for funding for something come first, and the actual research topic second?  Has there been a tendency to treat the process of getting research council funding as a lottery, for which one should simply buy as many tickets as possible?  Is all this one of the reasons why we are where we are today, with the ESRC considering demand management measures?  How many rhetorical questions can you pose without irritating the hell out of your reader?

I think the answer to these questions (bar the last one) is very probably ‘yes’.

But my view is based on conversations with a relatively small number of colleagues at a relatively small number of institutions.  I’d be very interested to hear what others think.

“It’s a bad review, we got a bad review …oh lord”

A picture of Clacton Pier
A large sandpit and a pier (re)view

A healthy portion of food for thought has been served up by the publication of a RAND Europe report into alternatives to peer review for research project funding.  Peer review is something that I – as an alleged research funding professional – have rather taken for granted as being the natural and obvious way to allocate (increasingly) scarce resources.  How do we decide who gets funded?  Well, let’s ask experts to report, and then make a judgement based upon what those experts say.  I’ve been aware of other ways, but I’ve not given them much thought – I’m a poacher, not a gamekeeper.

The Guardian Higher Education Network ran a poll over the second half of last week, and a whopping 70.8% of those who voted thought the process should be changed.  I’m aware of the limitations of peer review – it’s only as good as the peers, and the effort they’re prepared to make and the care they’re prepared to take with their review.  Anyone who has had any involvement in research funding will be aware of examples where comments come back that are frankly baffling: drawing odd conclusions, obsessing over irrelevancies, wanting the research to be about something else, making unsupported statements, or making assertions that are just demonstrably false.

[Personally, I hate it when ‘Reviewer Q’ remarks that the project “seems expensive”, without further comment or justification about what’s too expensive.  That’s our carefully crafted budget you’re talking about there, Reviewer Q.  It’s meticulously pedantic, and pedantically meticulous.  We’ve Justified our Resources… so how about you justify your comment?  I wonder how annoyed I’d get if I wrote the whole application…..]

One commentator on the Guardian poll page, dianthusmed, said that

Anyone voting to change the peer review process, I will not take you seriously unless you tell me what you’d replace it with.

And that’s surely the $64,000 question (at 80% fEC)…. we’re all more or less familiar with the potential shortcomings of peer review as a method of allocating funding, but if not peer review… then what?

In fact, the RAND Europe report is not an anti-peer-review polemic, and deserves a more nuanced response than a “peer review: yes or no” online poll.  The only sensible answer, surely, is: well, it depends what you want to achieve.  The report itself aims to

inspire thinking amongst research funders by showing how the research funding review process can be changed, and to give funders the confidence to try novel methods by explaining where and how such approaches have been used previously.

But crucially…

This is not intended to replace peer review, which remains the best method for review of grant applications in many situations. Rather, we hope that by considering some of the alternatives to peer review, where appropriate, research funders will be able to support a wider portfolio of projects, leading to more innovative, high-impact work.

A number of the options in the report seem to be more related to changing the nature and scope of calls for proposals than changing the nature of peer review itself – many in ways that aren’t unfamiliar.  But I’d like to pick out one idea for particular comment: sand pits.

I believe the origin of the term is from computing, where the term ‘sand box’ or ‘sand pit’ was used to describe an area for experimentation or testing, where no damage could be done to the overall system architecture.  I guess the notion of harmless – even playful – experimentation is what advocates have in mind.

They sound like a very interesting idea – get a group of people with expertise to bring to bear on a particular problem, put them all in the same place for a day, or a number of days, and see what emerges from discussions.  It hasn’t really caught on yet in the social sciences, although social scientists have been involved, of course.  The notion of cooperating rather than competing, and of new research collaborations forming, is an interesting and appealing idea.  As a way of bringing new perspectives to bear on a particular problem – especially an interdisciplinary problem – it looks like an attractive alternative.

There are problems, though.  If there are more applications to participate than there are places, there will inevitably need to be choices made and applications accepted and rejected.  I would imagine that questions of fit and balance would be relevant as well as questions of experience and expertise, but someone or some group of people will have to make choices.  From the application forms I’ve seen, this is often on the basis of a short CV and a short statement.  So… don’t we end up relying on some element of peer review anyway?

Secondly, I wonder about equal opportunities.  If a sand pit event is to take place over several days in a hotel, it will inevitably be difficult or even impossible for some to attend. Those who are parents and/or carers. Those who have timetabled lectures and tutorials.  Those who have other professional or personal diary commitments that just can’t be moved.  For a standard peer reviewed call, no-one is excluded completely because it clashes with an important family event.  Can we be sure that all of the best researchers will even apply?

I should say that I’ve never attended a sandpit event, but I have attended graduate recruitment/selection events (offered, deferred, and finally declined, since you ask), and residential training courses.  They’re all strange situations where both competitive and cooperative behaviours are rewarded, and I wonder how people react.  If I were a funder, I’d be worried that the prizes might be going to the best social operators, rather than those with the best ideas.  It’s a myth that academic brilliance is always found in inverse proportion to social skills, of course, but even so, my concern would be about whether one or more dominant figures could end up forming projects around themselves.  I also wonder about existing cliques or vested interests of whatever kind having a disproportionate influence.

I’m sure that effective facilitation and chairing can go a long way to minimising at least some of the potential problems, and while I think sandpits are an intriguing and promising alternative to peer review, they’re not without problems of their own.  I’d be very interested to hear from anyone who’s attended a sandpit – am I doing them a disservice here?

Although I’m open to other ideas for distributing research funding – by all means, let’s be creative, and let’s look at alternatives – I don’t see a replacement for peer review.  Which isn’t to say that there isn’t scope to improve the quality of peer review.  Because, Reviewer Q, there certainly is.

And perhaps that’s the point that the 70.8% were trying to make.

ESRC Centres and Large Grants competition

A pot of gold at the end of a rainbow
"The leprechauns wanted in principle to leave a pot of gold at the end of your rainbow, but unfortunately funds are limited...."

The ESRC Centres and Large Grants competition was launched earlier this week.

We already knew a few things – that the full call would be out sometime this month, that it would have some steer towards some version of the three strategic priorities, and that there would be funding for about 5 centres or projects at £2m-£5m each.  We knew that the new scheme would be a combination of the formerly-separate Centres and Large Grant schemes.  Although there’s an argument that a ‘Centre’ and a ‘Large Grant’ are different beasts, this seems to me like another example of a sensible merger of schemes, as with the new Future Research Leaders call combining the First Grants and the Postdoc Fellowship schemes.

We also knew that competition would be fierce.  It must be eighteen months, perhaps longer, since the last comparable call.  It wouldn’t be surprising, then, if there are two calls’ worth of ideas and projects being prepared for this call.  Unsurprisingly, there’s an outline proposal stage, followed by an invited full proposal stage, followed by short listing for interviews.  It would be interesting to know how many applications the ESRC foresee making it through each stage.  I’m sure this will depend in part on the quality of the applications they receive, but they must have a rough ratio in mind.  Whichever way you look at it, even for those with exceptional ideas, the odds aren’t great.  But then, they seldom are.

So, what do we know now that we didn’t know before?

We know that there are three areas – each an aspect of one of the three priorities – which the ESRC would “particularly welcome” applications on:

Risk: The importance of risk and its relationship with behaviours: for individuals and organisations, understanding the role of attitudes, decisions and consequences; for organisations and society the implications of public and practitioner constructions of risk and divergent framings; the challenges for effective governance, national and international – and the significance of social gradients and inequalities in essential areas of risk…

Behaviour change: Causes and agents of behavioural change: understanding how social norms, signals and triggers such as new technologies or novel regulation impact on decisions and actions of people, social groups and organisations, how and why behaviour changes at key periods and in what social, national and international contexts – thus informing the development and evaluation of interventions…

Community, participation and democracy in an era of austerity: Understanding how individuals and communities most effectively make their voices heard, and how social and physical mobility changes when in countries like the UK, the state retrenches…

Some might regard the third priority area as a brave move after the AHRC controversy.  But I guess as long as no-one mentions the government’s “BS” by name, probably no-one will notice.  And it is a legitimate and important area for research.

So….. three themes, an open element, and five to be funded.  One per theme and two open seems a likely outcome, though I’m sure that’s not pre-decided.  I guess the question for those with a project in mind is how far they’re willing and able to bend it to meet the themes, or whether they just ignore the steer and aim directly at the open element of the call.  And the question for the decision-makers is how they respond to bids covered in crowbar marks hidden under a thin veneer of priority-speak.  I think my advice to potential applicants would probably be either to write an application that speaks directly and indisputably to one of the three areas of steer, or to go for the open element.  Or to swerve this call entirely, and go for the Research Grants scheme, which has an upper limit of £2m, the same as the lower limit for this call.

What else is striking about the call?  That academic merit alone won’t be enough.  Not to be trusted with up to £5 million of taxpayers’ cash in a time of austerity.

“…but it is likely that successful applications will be led by experienced researchers who are internationally recognised and have a well established publication track record within their field of study, and where we can be assured of the ability to manage a large scale research project.” [my underline]

And from the list of assessment criteria:

“A robust management structure with a nominated director(s) (for Centre applications) and clear arrangements for co-ordination and management of the strategic direction of the Centre/Grant”

At outline stage, one page of the available four for the Case for Support needs to be a Management Plan.  A full quarter of the available free text space, even at this early stage of the process.  The ‘Pathways to Impact’ document is not part of the outline stage, but the Management Plan is.  That surely tells its own story – have a strong story to tell about project management, or don’t expect to make it to the next stage.

And of course, it makes sense.  If I were in the unenviable position of thinning the field in the search for the famous five to be funded, one sifting approach I’d want to use is to knock out any that – regardless of the brilliance of their ideas – I don’t feel absolutely confident in trusting with the money.  These are massive, massive investments, and they’ve got to deliver.  They’ve got to give the ESRC success stories to shout about, given the relative generosity of the flat cash CSR settlement.  They just have to.

I hope there’s space for creativity and delegation in management planning, though, rather than expecting a superhuman PI to do everything.  And I hope other kinds of management experience (Head of School and similar roles, pre-academic career experience) as well as running large research projects will be acceptable assurances of ability.  In the medium and long term, though, with the fractured funding landscape, I can’t help but wonder how people are meant to get experience of leading projects.

One other thing struck me.  I was half-expecting that there might be some kind of ‘demand management’ measure here, perhaps limiting each institution to submitting one bid as lead partner.  But I’m pleased to see that there’s nothing like that – institutions aren’t in a good position to choose between competing proposals, as they lack experts without a conflict of interest.  Which is one of the reasons why I’m against quota systems of demand management.

Demand management.  The fractured funding landscape.  Two things I promise I’ll blog about soon.