ESRC success rates by discipline: what on earth is going on?

Update – read this post for the 2012/13 stats for success rates by discipline

The ESRC have recently published a set of ‘vital statistics’ which are “a detailed breakdown of research funding for the 2011/12 financial year” (see page 22).  While differences in success rates between academic disciplines are nothing new, this year’s figures show some really quite dramatic disparities which – in my view at least – require an explanation and action.

The overall success rate was 14% (779 applications, 108 funded) for the last tranche of responsive mode Small Grants and responsive mode Standard Grants (now Research Grants).  However, Business and Management researchers submitted 68 applications, of which 1 was funded.  One.  One single funded application.  In the whole year.  For the whole discipline.  Education fared little better with 2 successes out of 62.

Just pause for a moment to let that sink in.  Business and Management.  1 of 68.  Education.  2 of 62.

Others did worse still.  Nothing for Demographics (4 applications), Environmental Planning (8), Science and Technology Studies (4), Social Stats, Computing, Methods (11), and Social Work (10).  However, with a 14% success rate working out at about 1 in 7, low volumes of applications may explain this.  It’s rather harder to explain a total of 3 applications funded from 130.

Next least successful were ‘no lead discipline’ (4 of 43) and Human Geography (3 from 32).  No other subjects had success rates in single figures.  At the top end were Socio-Legal Studies (a stonking 39%, 7 of 18), and Social Anthropology (28%, 5 from 18), with Linguistics; Economics; and Economic and Social History also having hit rates over 20%.  Special mention for Psychology (185 applications, 30 funded, 16% success rate) which scored the highest number of projects – almost as many as Sociology and Economics (the second and third most funded) combined.

Is this year unusual, or is there a worrying and peculiar trend developing?  Well, you can judge for yourself from this table on page 49 of last year’s annual report, which has success rates going back to the heady days of 06/07.  Three caveats, though, before you go haring off to see your own discipline’s stats.  One is that the reports refer to financial years, not academic years, which may (but probably doesn’t) make a difference.  The second is that the figures refer to Small and Standard Grants only (not Future Leaders/First Grants, Seminar Series, or specific targeted calls).  The third is that funded projects are categorised by lead discipline only, so the figures may not tell the full story as regards involvement in interdisciplinary research.

You can pick out your own highlights, but it looks to me as if this year is only a more extreme version of trends that have been going on for a while.  Last year’s Education success rate?  5%.  The years before?  8% and 14%.  Business and Management?  A heady 11%, compared to 10% and 7% for the preceding years.  And you’ve got to go all the way back to 09/10 to find the last time any projects were funded in Demography, Environmental Planning, or Social Work.  And Psychology has always been the most funded, and has always got about twice as many projects as the second and third subjects, albeit from a proportionately large number of applications.

When I have more time I’ll try to pull all the figures together in a single spreadsheet, but at first glance many of the trends seem similar.
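In the meantime, here’s a rough sketch of the kind of tabulation I have in mind – a few lines of Python using only the handful of 2011/12 figures quoted above (the discipline labels are my own shorthand rather than the ESRC’s official category names), so treat it as illustrative rather than anything definitive:

```python
# Responsive mode figures for 2011/12 quoted in this post: (applications, funded).
# Labels are my shorthand, not necessarily the ESRC's official category names.
figures = {
    "Business and Management": (68, 1),
    "Education": (62, 2),
    "Human Geography": (32, 3),
    "No lead discipline": (43, 4),
    "Socio-Legal Studies": (18, 7),
    "Social Anthropology": (18, 5),
    "Psychology": (185, 30),
}

# Print a small table of success rates, worst first.
print(f"{'Discipline':<28}{'Apps':>6}{'Funded':>8}{'Rate':>8}")
for discipline, (applied, funded) in sorted(
        figures.items(), key=lambda item: item[1][1] / item[1][0]):
    print(f"{discipline:<28}{applied:>6}{funded:>8}{funded / applied:>8.0%}")
```

Extending that to the earlier years in the annual reports would make the trends (or lack of them) much easier to see.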

So what’s going on here?  Well, there are a number of possibilities.  One is that our Socio-Legal Studies research in this country is tip top, and B&M research and Education research are comparatively very weak.  Certainly I’ve heard it said that B&M research tends to suffer from poor research methodologies.  Another possibility is that some academic disciplines are very collegiate and supportive in nature, and scratch each other’s backs when it comes to funding, while other disciplines are more back-stabby than back-scratchy.

But are any or all of these possibilities sufficient to explain the difference in funding rates?  I really don’t think so.  So what’s going on?  Unconscious bias?  Snobbery?  Institutional bias?  Politics?  Hidden agendas?  All of the above?  Anyone know?

More pertinently, what do we do about it?  Personally, I’d like to see the appropriate disciplinary bodies putting a bit of pressure on the ESRC for some answers, some assurances, and the production of some kind of plan for addressing the imbalance.  While no-one would expect to see equal success rates for every subject, this year’s figures – in my view – are very troubling.

And something needs to be done about it, whether that’s a re-thinking of priorities, putting the knives away, addressing real disciplinary weaknesses where they exist, ring-fenced funding, or some combination of all of the above.  Over to greater minds than mine…

News from the ESRC: International co-investigators and the Future Leaders Scheme

"They don't come over here, they take our co-investigator jobs..."I’m still behind on my blogging – I owe the internet the second part of the impact series, and a book review I really must get round to writing.  But I picked up an interesting nugget of information regarding the ESRC and international co-investigators that’s worthy of sharing and commenting upon.

ESRC communications send round an occasional email entitled ‘All the latest from the ESRC’, which is well worth subscribing to and reading very carefully, as quite big announcements and changes are often smuggled out in the small print.  In the latest version, for example, the headline news is the Annual Report (2011-12), while the announcement of the ESRC Future Leaders call for 2012 is only the fifth item down a list of funding opportunities.  To be fair, it was also announced on Twitter and perhaps elsewhere too, and perhaps the email has a wider audience than people like me.  But even so, it’s all a bit low key.

I’ve not got much to add to what I said last year about the Future Leaders Scheme other than to note with interest the lack of an outline stage this year, and the decision to ring-fence some of the funding for very early career researchers – current doctoral students and those who have just passed their PhD.  Perhaps the ESRC are now more confident in institutions’ ability to regulate their own submission behaviour, and I can see this scheme being a real test of that confidence.  I know at the University of Nottingham we’re taking all this very seriously indeed, and grant writing is now neither a sprint nor a marathon but more like a steeplechase, and my impression from the ARMA conference is that we’re far from alone in this.  Balancing ‘demand management’ with a desire to encourage applications is a topic for another blog post.  As is the effect of all these calls with early Autumn deadlines – I’d argue it’s much harder to demand manage over the summer months when applicants, reviewers, and research managers are likely to be away on holiday and/or researching.

Something else mentioned in the ESRC email is a light-touch review of the ESRC’s international co-investigator policy.  One of the findings was that

“…grant applications with international co-investigators are nearly twice as likely to be successful in responsive mode competitions as those without, strengthening the argument that international cooperation delivers better research.”

This is very interesting indeed.  My first reaction is to wonder whether all of that greater success can be explained by higher quality, or whether the extra value for money offered has made a difference.  Outside of the various international co-operation/bilateral schemes, the ESRC would generally expect only to pay directly incurred research costs for ICo-Is, such as travel, subsistence, transcription, and research assistance.  It won’t normally pay for investigator time and will never pay overheads, which represents a substantial saving compared with naming a UK-based Co-I.

While the added value for money argument will generally work in favour of the application, there are circumstances where involving an ICo-I might make an application technically ineligible.  When the ESRC abolished the small grants scheme and introduced the floor of £200k as the minimum to be applied for through the research grants scheme, the figure of £200k was considered to represent the minimum scale/scope/ambition that they were prepared to entertain.  But a project with a UK Co-I may sneak in just over £200k and be eligible, yet an identical project with an ICo-I would not be eligible, as it would not have salary costs or overheads to bump up the cost.  I did raise this with the ESRC a while back when I was supporting an application that would be ineligible under the new rules, but we managed to submit it before the final deadline for Small Grants.  The issue did not arise for us then, but I’m sure it will arise – and probably already has – for others.
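To make that eligibility point concrete, here’s a sketch with purely made-up numbers – the £165k of research costs and £40k of UK Co-I salary and overheads are my own hypotheticals, not from any real application – showing how an otherwise identical project could fall below the £200k floor once a UK Co-I is swapped for an ICo-I:

```python
# Purely illustrative figures: how removing a UK Co-I's salary and overheads
# from an otherwise identical costing can drop the total below the £200k floor.
FLOOR = 200_000  # minimum value for applications to the research grants scheme

core_research_costs = 165_000           # hypothetical: fieldwork, travel, transcription, RA time etc.
uk_coi_salary_and_overheads = 40_000    # hypothetical: UK Co-I time plus estates/indirect costs

totals = {
    "With UK Co-I": core_research_costs + uk_coi_salary_and_overheads,   # £205,000
    "With international Co-I": core_research_costs,                      # £165,000
}

for label, total in totals.items():
    status = "eligible" if total >= FLOOR else "below the £200k floor"
    print(f"{label}: £{total:,} – {status}")
```

Same project, same scale of research – but only one version of it clears the bar.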

The ESRC has clarified the circumstances under which they will pay overseas co-investigator salary costs:

“….only in circumstances where payment of salaries is absolutely required for the research project to be conducted. For example, where the policy of the International Co-Investigator’s home institution requires researchers to obtain funding for their salaries for time spent on externally-funded research projects.

In instances where the research funding structure of the collaborating country is such that national research funding organisations equivalent to the ESRC do not normally provide salary costs, these costs will not be considered. Alternative arrangements to secure researcher time, such as teaching replacement costs, will be considered where these are required by the co-investigator’s home institution.”

This all seems fairly sensible, and would allow the participation of researchers involved in Institutes where they’re expected to bring in their own salary, and those where there isn’t a substantial research time allocation that could be straightforwardly used for the project.

While it would clearly be inadvisable to add on an ICo-I in the hope of boosting chances of success or for value for money alone, it’s good to know that applications with ICo-Is are doing well with the ESRC even outside of the formal collaborative schemes, and that we shouldn’t shy away from looking abroad for the very best people to work with.   Few would argue with the ESRC’s contention that

[m]any major issues requiring research evidence (eg the global economic crisis, climate change, security etc.) are international in scope, and therefore must be addressed with a global research response.

Are institutions over-reacting to impact?

Interesting article and leader in this week’s Times Higher on the topic of impact, both of which carry arguments that “university managers” have over-reacted to the impact agenda.  I’m not sure whether that’s true or not, but I suspect that it’s all a bit more complicated than either article makes it appear.

The article quotes James Ladyman, Professor of Philosophy at the University of Bristol, as saying that university managers had overreacted and created “an incentive structure and environment in which an ordinary academic who works on a relatively obscure area of research feels that what they are doing isn’t valued”.

If that’s happened anywhere, then obviously things have gone wrong.  However, I do think that this needs to be understood in the context of other groups and sub-groups of academics who likewise feel – or have felt – undervalued.  I can well understand why academics whose research does not lend itself to impact activities would feel alienated and threatened by the impact agenda, especially if it is wrongly presented (or perceived) as a compulsory activity for everyone – regardless of their area of research, skills, and comfort zone – and (wrongly) as a prerequisite for funding.

Another group of researchers who felt – and perhaps still feel – under-valued are those undertaking very applied research.  It’s very hard for them to get their stuff into highly rated (aka valued) journals.  Historically the RAE has not been kind to them.  University promotion criteria have perhaps failed to sufficiently recognise public engagement and impact activity – and perhaps still do.  While all the plaudits go to their highly theoretical colleagues, the applied researchers feel looked down upon, and struggle to get academic recognition.  If we were to ask academics whose roles are mainly teaching (or teaching and admin) rather than research, I think we may find that they feel undervalued by a system which many of them feel is obsessed by research and sets little store by excellent (rather than merely adequate) teaching.  Doubtless increased fees will change this, and perhaps we will hear complaints of the subsequent under-valuing of research relative to teaching.

So if academics working in non-impact friendly (NIFs, from now on) areas of research are now feeling under-valued, they’re very far from alone.  It’s true that the impact agenda has brought about changes to how we do things, but I think it could be argued that it’s not that the NIFs are now under-valued, but that other kinds of research and academic endeavour – namely applied research and impact activities (ARIA from now on) – are now being valued to a greater degree than before.  Dare I say it, to an appropriate degree?  Problem is, ‘value’ and ‘valuing’ tend to be seen as a zero sum game – if I decide to place greater emphasis on apples, the oranges may feel that they have lost fruit bowl status and are no longer the, er, top banana.  Even if I love oranges just as much as before.

Exactly how institutions ‘value’ (whatever we mean by that) NIF research and ARIA is an interesting question.  It seems clear to me that an institution/school/manager/grant giving body/REF/whatever could err either way by undervaluing and under-rewarding either.  We need both.  And we need excellent teachers.  And – dare I say it – non-academic staff too.  Perhaps the challenge for institutions is getting the balance right and making everyone feel valued, and reflecting different academic activities fairly in recruitment and selection processes and promotion criteria.  Not easy, when any increased emphasis on any one area seems to cause others to feel threatened.

How can we help researchers get responses for web questionnaires?

A picture of an energy saving lightbulb
*Insert your own hilarious and inaccurate joke about how long energy saving lightbulbs take to warm up here*

I’ve had an idea, and I’d like you, the internet, to tell me if it’s a good one or not.  Or how it might be made into a good one.

Would it be useful to set up a central list/blog/twitter account for ongoing research projects (including student projects) which need responses to an internet questionnaire from the general public?  Would researchers use it?  Would it add value?  Would people participate?

Every so often, I receive an email or tweet asking for people to complete a research questionnaire on a particular topic.  I usually do (if it’s not too long, and the topic isn’t one I consider intrusive), partly because some of them are quite interesting, partly because I feel a general duty to assist with research when asked, and partly because I probably need to get out more.  The latest was one that a friend and former colleague shared on Facebook.  It was a PhD project from a student in her department about sun tanning knowledge and behaviour, and it’s here if you feel like taking part.  Now this is not a subject that I feel passionately about, but perhaps that’s why it might be useful for the likes of me to respond.

I guess the key assumptions that I’m making are that there are sufficient numbers of other people like me who would be willing to spend a few minutes every so often completing a web survey to support research, and that nothing like this exists already.  If there is, I’ve not heard about it, and I’d have thought that I would have done.  But I’d rather be embarrassed now than later!  Another assumption is that such a resource might be useful.  I strongly suspect that any such resource would have a deeply atypical demographic – I’d imagine it would be mainly university staff and students.  But I’d imagine that well designed research questionnaires would be asking sufficiently detailed demographic information to be able to factor this in.  For some student projects where the main challenge can be quantity rather than variety, this might not even matter too much.  I guess it depends what questions are being asked as part of the research.
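For what it’s worth, here’s a toy sketch of what ‘factoring this in’ could look like – simple post-stratification-style weights comparing the sample’s demographic mix with assumed population shares.  Every number here is invented purely for illustration:

```python
# Toy example: if university staff/students are heavily over-represented among
# respondents, weight their answers down (and everyone else's up) so the
# weighted sample looks more like the assumed population.  All figures invented.
population_share = {"university staff/students": 0.05, "everyone else": 0.95}
sample_share = {"university staff/students": 0.70, "everyone else": 0.30}

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

print(weights)  # roughly {'university staff/students': 0.07, 'everyone else': 3.17}
```

Whether that kind of correction is good enough obviously depends on the questions being asked, which is exactly the caveat above.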

I’ve not really thought this through at all yet.  I would imagine that only projects which could be completed by anyone would be suitable for inclusion, or at least only projects where responses are invited from a broad range of people.  Very specific projects probably wouldn’t work, and would make it harder for participants to find ones which they can do.  Obviously all projects would need to have ethical approval from their institution.  There would be an expectation that beneficiaries are prepared to reciprocate and help others in return.  And clearly there has to be a strategy to tell people about it.

In practical terms, I’m thinking about either a separate blog or a separate page of this one, and probably a separate twitter account.  Researchers could add details in a comment on a monthly blog post, and either tweet the account and ask for a re-tweet, or email me a tweet to send.  Participants could follow the twitter feed and subscribe to the comments and blog.

So… what do you think?  Please comment below (or email me if you prefer).  Would this be useful?  Would you participate?  What have I missed?  If I do set this up, how might I go about telling people about it?

A partial, qualified, cautious defence of the Research Excellence Framework (REF)

No hilarious visual puns on REF / Referees from me....

There’s been a constant stream of negative articles about the Research Excellence Framework (for non-UK readers, this is the “system for assessing the quality of research in UK higher education institutions”) over the last few months, and two more have appeared recently (from David Shaw, writing in the Times Higher, and from Peter Wells on the LSE Impact Blog) which have prompted me to respond with something of a defence.

One crucial fact that I left out of the description of the REF in the previous paragraph is that “funding bodies intend to use the assessment outcomes to inform the selective allocation of their research funding to HEIs, with effect from 2015-16”.  And I think this is a fact that’s also overlooked by some critics.  While a lot of talk is about prestige and ‘league tables’, what’s really driving the process is the need to have some mechanism for divvying out the cash for funding research – QR funding.  We could most likely do without a “system for assessing the quality of research” across every discipline and every UK university in a single exercise using common criteria, but we can’t do without a method of dividing up the cake as long as there’s still cake to share out.

In spite of the current spirit of perpetual revolution in the sector, money  is still paid (via HEFCE) to universities for research, without much in the way of strings attached.  This basic, core funding is one half of the dual funding system for research in the UK – the other half being funding for individual research projects and other activities through the Research Councils.  What universities do with their QR funding varies, but I think typically a lot of it is in staff salaries, so that the number of staff in any given discipline is partly a function of teaching income and research income.

I do have sympathy for some of the arguments against the REF, but I find myself returning to the same question – if not this way, then how? 

It’s unfair to expect anyone who objects to any aspect of the REF to furnish the reader with a fully worked up alternative, but constructive criticism must at least point the way.  One person who doesn’t fight shy of coming up with an alternative is Patrick Dunleavy, who has argued for a ‘digital census’ involving the use of citation data as a cheap, simple, and transparent replacement for the REF.  That’s not a debate I feel qualified to participate in, but my sense is that Dunleavy’s position on this is a minority one in UK academia.

In general, I think that criticisms of the REF tend to fall into the following broad categories.  I don’t claim to address decisively every last criticism made (hence the title), but for what it’s worth, here are the categories that I’ve identified, and what I think the arguments are.

1.  Criticism over details

The REF team have a difficult balancing act.  On the one hand, they need rules which are sensitive to the very real differences between academic disciplines.  On the other, fairness and efficiency call for as much similarity in approach, rules, and working methods as possible between panels.  The more differences between panels, the greater the chances of confusion and of mistakes being made in the process of planning and submitting REF returns, which could seriously affect both notional league table placing and cold hard cash.  The more complicated the process, the greater the transaction costs.  Which brings me onto the second balancing act.  On the one hand, it needs to be a rigorous and thorough process, with so much public money at stake.  On the other hand, it needs to be lean and efficient, minimising the demands on the time of institutions, researchers, and panel members.  This isn’t to say that the compromise reached on any given point between particularism and uniformity, and between rigour and efficiency, is necessarily the right one, of course.  But it’s not easy.

2.  Impact

The use of impact at all.  The relative weighting of impact.  The particular approach to impact.  The degree of uncertainty about impact.  It’s a step into the unknown for everyone, but I would have thought that the idea that there should be some notion of impact – some expectation that where academic research makes a difference in the real world, we should ensure it does so – is hard to argue against.  I have much more sympathy for some academic disciplines than others as regards objections to the impact agenda.  Impact is really a subject for a blog post in itself, but for now, it’s worth noting that it would be inconsistent to argue against the inclusion of impact in the REF and also to argue that the REF is too narrow in terms of what it values and what it assesses.

3.  Encouraging game playing

While it’s true that the REF will encourage game playing in similar (though different) ways to its predecessors, I can’t help but think this is inevitable and would also be true of every possible alternative method of assessment.  And what some would regard as gaming, others would regard as just doing what is asked of them.

One particular ‘game’ that is played – or, if you prefer, strategic decision that is made – is about where to set the threshold for submission.  It’s clear that there’s no incentive to include those whose outputs are likely to fall below the minimum threshold for attracting funding.  But it’s common for some institutions, in some disciplines, to set a threshold above this, with one eye not only on the QR funding, but also on league table position.  There are two arguments that can be made against this.  One is that QR funding shouldn’t be so heavily concentrated on the top rated submissions and/or that more funding should be available.  But that’s not an argument against the REF as such.  The other is that institutions should be obliged to submit everyone.  But the costs of doing so would be huge, and it’s not clear to me what the advantages would be – would we really get better or more accurate results with which to share out the funding?  Because ultimately the REF is not about individuals, but institutions.
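To illustrate the arithmetic behind that – with weights that are purely hypothetical and emphatically not HEFCE’s actual funding formula – here’s a sketch of why outputs expected to score below the funding threshold add nothing to the cash, while still dragging down the averages that league tables tend to be built on:

```python
# Hypothetical funding weights per output by star rating (NOT the real formula):
# in this sketch only 4* and 3* work attracts any money at all.
weights = {4: 3.0, 3: 1.0, 2: 0.0, 1: 0.0}

submission = [4, 4, 3, 3, 3, 2, 2, 1]   # predicted ratings for a hypothetical return

funding_units = sum(weights[score] for score in submission)
grade_point_average = sum(submission) / len(submission)

print(f"Funding units: {funding_units}")           # 9.0 – the 2* and 1* outputs add nothing
print(f"Average score: {grade_point_average:.2f}")  # 2.75 – but they do pull the average down
```

Hence the temptation to set the bar above the bare funding threshold when league table position matters as well as the cash.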

4. Perverse incentives

David Shaw, in the Times Higher, sees a very dangerous incentive in the REF.

REF incentivises the dishonest attribution of authorship. If your boss asked you to add someone’s name to a paper because otherwise they wouldn’t be entered into the REF, it could be hard to refuse.

I don’t find this terribly convincing.  While I’m sure that there will be game playing around who should be credited with co-authored publications, I’d see that as acceptable in a way that the fraudulent activity that Shaw fears (but stresses that he’s not experienced first-hand) just isn’t.  There are opportunities for – and temptations to commit – fraud, bad behaviour and misconduct in pretty much everything we do, from marking students’ work to reporting our student numbers to graduate destinations.  I’m not clear how that makes any of these activities ‘unethical’ in the way his article seems to argue.  Fraud is rare in our sector, and if anyone does commit fraud, it’s a huge scandal and heads roll.  It ruins careers and leaves a long shadow over institutions.  Even leaving aside the residual decency and professionalism that’s the norm in our sector, it would be a brave Machiavellian Research Director who would risk attempting this kind of fraud.  To make it work, you need the cooperation and the silence of two academic researchers for every single publication.  Risk versus reward – just not worth it.

Peter Wells, on the LSE blog, makes the point that the REF acts as an active disincentive for researchers to co-author papers with colleagues at their own institution, as only one can return the output to the REF.  That’s an oversimplification, but it’s certainly true that there’s active discouragement of the submission of the same output multiple times in the same return.  There’s no such problem if the co-author is at another institution, of course.  However, I’m not convinced that this theoretical disincentive makes a huge difference in practice.  Don’t academics co-author papers with the most appropriate colleague, whether internal or external?  How often – really – does a researcher choose to write something with a colleague at another institution rather than a colleague down the corridor?  For REF reasons alone?  And might the REF incentive to include junior colleagues as co-authors that Shaw identifies work in the other direction, for genuinely co-authored pieces?

In general, proving the theoretical possibility of a perverse incentive is not sufficient to prove its impact in reality.

5.  Impact on morale

There’s no doubt that the REF causes stress and insecurity and can add significantly to the workload of those involved in leading on it.  There’s no doubt that it’s a worrying time, waiting for news of the outcome of the R&R paper that will get you over whatever line your institution has set for inclusion.  I’m sure it’s not pleasant being called in for a meeting with the Research Director to answer for your progress towards your REF targets, even with the most supportive regime.

However… and please don’t hate me for this… so what?  I’m not sure that the bare fact that something causes stress and insecurity is a decisive argument.  Sure, there’s a prima facie case for trying to make people’s lives better rather than worse, but that’s about it.  And again, what alternative system would be equally effective at dishing out the cash while being less stressful?  The fact is that every job – including university jobs – is sometimes stressful and has downsides as well as upsides.  Among academic staff, the number one stress factor I’m seeing at the moment is marking, not the REF.

6.  Effect on HE culture

I’ve got more time for this argument than for the stress argument, but I think a lot of the blame is misdirected.  Take Peter Wells’ rather utopian account of what might replace the REF:

For example, everybody should be included, as should all activities.  It is partly by virtue of the ‘teaching’ staff undertaking a higher teaching load that the research active staff can achieve their publications results; without academic admissions tutors working long hours to process student applications there would be nobody to receive research-led teaching, and insufficient funds to support the University.

What’s being described here is not in any sense a ‘Research Excellence Framework’.  It’s a much broader ‘Academic Excellence Framework’, and that doesn’t strike me as something that’s particularly easy to assess.  How on earth could we go about assessing absolutely everything that absolutely everyone does?  Why would we give out research cash according to how good an admissions tutor someone is?

I suspect that what underlies this – and some of David Shaw’s concerns as well – is a much deeper unease about the relative prestige and status attached to different academic roles: the research superstar; the old fashioned teaching and research lecturer; those with heavy teaching and admin loads who are de facto teaching only; and those who are de jure teaching only.  There is certainly a strong sense that teaching is undervalued – in appointments, promotions, in status, and in other ways.  Those with higher teaching and admin workloads do enable others to research in precisely the way that Shaw argues, and respect and recognition for those tasks is certainly due.  And I think the advent of increased tuition fees is going to change things, and for the better in the sense of the profile and status of excellent teaching.

But I’m not sure why any of these status problems are the fault of the REF.  The REF is about assessing research excellence and giving out the cash accordingly.  If the REF is allowed to drive everything, and non-inclusion is such a badge of dishonour that the contributions of academics in other areas are overlooked, well, that’s a serious problem.  But it’s an institutional one, and not one that follows inevitably from the REF.  We could completely change the way the REF works tomorrow, and it will make very little difference to the underlying status problem.

It’s not been my intention here to refute each and every argument against the REF, and I don’t think I’ve even addressed directly all of Shaw and Wells’ objections.  What I have tried to do is to stress the real purpose of the REF, the difficulty of the task facing the REF team, and make a few limited observations about the kinds of objections that have been put forward.  And all without a picture of Pierluigi Collina.