ESRC success rates 2013/2014

The ESRC Annual Report for 2013-14 has been out for quite a while now, and a quick summary and analysis from me is long overdue.

Although I was tempted to skip straight through all of the good news stories about ESRC successes and investments and dive straight in looking for success rates, I’m glad I took the time to at least skim read some of the earlier stuff.  When you’re involved in the minutiae of supporting research, it’s sometimes easy to miss the big picture of all the great stuff that’s being produced by social science researchers and supported by the ESRC.  Chapeau, everyone.

In terms of interesting policy stuff, it’s great to read that the “Urgency Grants” mechanism for rapid responses to “rare or unforeseen events” which I’ve blogged about before is being used, and has funded work “on the Philippines typhoon, UK floods, and the Syrian crisis”.  While I’ve not been involved in supporting an Urgency Grant application, it’s great to know that the mechanism is there, that it works, and that at least some projects have been funded.

The “demand management” agenda

This is what the report has to say on “demand management” – the concerted effort to reduce the number of applications submitted, so as to increase the success rates and (more importantly) reduce the wasted effort of writing and reviewing applications with little realistic chance of success.

Progress remains positive with an overall reduction in application numbers of 41 per cent, close to our target of 50 per cent. Success rates have also increased to 31 per cent, comparable with our RCUK partners. The overall quality of applications is up, whilst peer review requirements are down.

There are, however, signs that this positive momentum may be under threat as in certain schemes application volume is beginning to rise once again. For example, in the Research Grants scheme the proposal count has recently exceeded pre-demand management levels. It is critical that all HEIs continue to build upon early successes, maintaining the downward pressure on the submission of applications across all schemes.

It was always likely that “demand management” might be the victim of its own success – as success rates creep up again, getting a grant appears more likely and so researchers and research managers encourage and submit more applications.  Other factors might also be involved – the stage of the REF cycle, for example.  Or perhaps now talk of researcher or institutional sanctions has faded away, there’s less incentive for restraint.

Another possibility is that some universities haven’t yet got the message or don’t think it applies to them.  It’s also not hard to imagine that the kinds of internal review mechanisms that some of us have had for years and that we’re all now supposed to have are focusing on improving the quality of applications, rather than filtering out uncompetitive ideas.  But is anyone disgracing themselves?

Looking down the list of successes by institution (p. 41) it’s hard to pick out any obvious bad behaviour.  Most of those who’ve submitted more than 10 applications have an above-average success rate.  You’d only really pick out Leeds (10 applications, none funded), Edinburgh (8/1) and Southampton (14/2), and a clutch of institutions on 5/0 (including top-funded Essex, surprisingly), but in all those cases one or two more successes would change the picture.  Similarly for the top performers – King’s College (7/3), King Leicester III (9/4), Oxford (14/6) – it’s hard to make much of a case for the excellence or inadequacy of internal peer review systems from these figures alone.  What might be more interesting is a list of applications by institution which failed to reach the required minimum standard, but that’s not been made public to the best of my knowledge.  And of course, all these figures only refer to the response mode Standard Grant applications in the financial year (not academic year) 2013-14.
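To put some rough numbers on the “one or two more successes” point, here’s a quick back-of-envelope binomial sketch (my own, not from the report).  It assumes each application is an independent trial at the overall ~25% success rate – obviously a simplification – and asks how likely a blank 0-from-10 year is for an institution whose applications are exactly average:

```python
# Back-of-envelope check: if each of 10 applications independently had
# the overall ~25% success rate, how likely is a 0-from-10 year?
# (The 25% rate and the 10-application count are taken from the figures
# discussed above; independence is an assumption.)
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p, n = 0.25, 10
prob_none = binom_pmf(0, n, p)                               # P(0 funded from 10)
prob_two_plus = 1 - binom_pmf(0, n, p) - binom_pmf(1, n, p)  # P(2 or more funded)

print(f"P(0 from 10)  = {prob_none:.3f}")   # roughly a 1-in-18 chance
print(f"P(2+ from 10) = {prob_two_plus:.3f}")
```

A roughly 1-in-18 chance of a blank year is well within normal luck, which is why these tables can’t tell us much about internal peer review systems on their own.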

Concentration of Funding

Another interesting stat (well, true for some values of “interesting”) concerns the level of concentration of funding.  The report records the expenditure levels for the top eleven (why 11, no idea…) institutions by research expenditure and by training expenditure.  Interesting question for you… what percentage of the total expenditure do the top 11 institutions get?  I could tell you, but if I tell you without making you guess first, it’ll just confirm what you already think about concentration of funding.  So I’m only going to tell you that (unsurprisingly) training expenditure is more concentrated than research funding.  The figures you can look up for yourself.  Go on, have a guess, go and check (p. 44) and see how close you are.

Research Funding by Discipline

This is on page 40, and it’s usually the most interesting/contentious section.  The overall success rate was 25% – a little down from last year, but a huge improvement on the 14% of two years ago.

Big winners?  History (4 from 6), Linguistics (5 from 9), Social Anthropology (4 from 9), Political and International Studies (9 from 22), and Psychology (26 from 88 – just under 30% of all grants funded were in psychology).  Big losers?  Education (1 from 27), Human Geography (1 from 19), Management and Business Studies (2 from 22).

Has this changed much from previous years?  Well, you can read what I said last year and the year before on this, but overall it’s hard to say because we’re talking about relatively small numbers for most subjects, and because some discipline classifications have changed over the last few years.  But, once again, for the third year in a row, Business and Management and Education do very, very poorly.

Human Geography has also had a below average success rate for the last few years, but going from 3 from 14 to 1 from 19 probably isn’t that dramatic a collapse – though it’s certainly a bad year.  I always make a point of trying to be nice about Human Geography, because I suspect they know where I live.  Where all of us live.  Oh, and Psychology gets a huge slice of the overall funding, albeit not a disproportionate one given the number of applications.

Which kind of brings us back to the same questions I asked in my most-read-ever piece – what on earth is going on with Education and with Business and Management research, and why do they do so badly with the ESRC?  I still don’t have an entirely satisfactory answer.

I’ve put together a table showing changes to disciplinary success rates over the last few years which I’m happy to share, but you’ll have to email me for a copy.  I’ve not uploaded it here because I need to check it again with fresh eyes before it’s used – fiddly, all those tables and numbers.

Is there a danger that research funding calls are getting too narrow?

The ESRC have recently added a little more detail to a previous announcement about a pending call for European-Chinese joint research projects on Green Economy and Population Change.  Specifically, they’re after projects which address the following themes:

Green Economy

  • The ‘greenness and dynamics of economies’
  • Institutions, policies and planning for a green economy
  • The green economy in cities and metropolitan areas
  • Consumer behaviour and lifestyles in a green economy

Understanding Population Change

  • Changing life course
  • Urbanisation and migration
  • Labour markets and social security dynamics
  • Methodology, modelling and forecasting
  • Care provision
  • Comparative policy learning

Projects will need to involve institutions from at least two of the participating European countries (UK, France (involvement TBC), Germany, Netherlands) and two institutions in China.  On top of this there is an expectation of sustainability/capacity building around the research collaborations, plus the usual plus points for involving stakeholders and for interdisciplinary research.

Before I start being negative, or potentially negative, I have one blatant plug and some positive things to say. The blatant plug is that the University of Nottingham has a campus in Ningbo in China which is eligible for NSFC funding and therefore would presumably count as one Chinese partner. I wouldn’t claim to know all about all aspects of our Ningbo research expertise, but I know people who do.  Please feel free to contact me with ideas/research agendas and I’ll see if I can put you in touch with people who know people.

The positive things.  The topics seem to me to be important, and we’ve been given advance notice of the call and a fair amount of time to put something together.  There’s a reference to Open Research Area procedures and mechanisms, which refers to agreements between the UK, France, Netherlands and Germany on a common decision making process for joint projects in which each partner is funded by their national funder under their own national funding rules.  This is excellent, as it doesn’t require anyone to become an expert in another country’s national funder’s rules, and doesn’t have the double or treble jeopardy problem of previous calls where decisions were taken by individual funders.  It’s also good that national funders are working together on common challenges – this adds fresh insight, invites interesting comparative work and pools intellectual and financial resources.

However, what concerns me about calls like this is that the area at the centre of the particular Venn diagram of this call is really quite small.  It’s open to researchers with research interests in the right areas, with collaborators in the right European countries, with collaborators in China.   That’s two – arguably three – circles in the diagram.  Of course, there’s a fourth – proposals that are outstanding.  Will there be enough strong competition on the hallowed ground at the centre of all these circles? It’s hard to say, as we don’t know yet how much money is available.

I’m all for calls that encourage, incentivise, and facilitate international research.  I’m in favour of calls on specific topics which are under-researched, which are judged of particular national or international importance, or where co-funding from partners can be found to address areas of common interest.

But I’m less sure about having both in one call – very specific requirements both in terms of the nationality of the partner institutions and in terms of the call themes.  Probably the scope of this call is wide enough – presumably the funders think so – but I can’t help thinking that less onerous eligibility requirements in terms of partners could lead to greater numbers of high quality applications.

The consequences of Open Access, part 2: Are researchers prepared for greater scrutiny?

In part 1 of this post, I raised questions about how academic writing might have to change in response to the open access agenda.  The spirit of open access surely requires not just the availability of academic papers, but the accessibility of those papers to research users and stakeholders.  I argued that lay summaries and context pieces will increasingly be required, and I was pleased to discover that at least some open access journals are already thinking about this.  In this second part, I want to raise questions about whether researchers and those who support them are ready for the potential extra degree of scrutiny and attention that open access may bring.

On February 23rd 2012, the Journal of Medical Ethics published a paper called After-birth abortion: why should the baby live? by Alberto Giubilini and Francesca Minerva.  The point of the paper was not to advocate “after-birth abortion” (i.e. infanticide), but to argue that many of the arguments that are said to justify abortion also turn out to justify infanticide.  This isn’t a new argument by any means, but presumably there was sufficient novelty in the construction of the argument to warrant publication.  To those familiar with the conventions of applied ethics – the intended readers of the article – it’s understood that it was playing devil’s advocate: seeing how far arguments can be stretched, taking things to their logical conclusion, seeing how far the thin end of the wedge will drive, what’s at the bottom of the slippery slope, just what kind of absurdum can be reductio-ed to.  While the paper isn’t satire in the same way as Jonathan Swift’s A Modest Proposal, no sensible reader would have concluded that the authors were calling for infanticide to be made legal, in spite of the title.

I understand that what happened next was that the existence of the article – for some reason – attracted attention in the right wing Christian blogosphere, prompting a rash of complaints, hostile commentary, fury, racist attacks, and death threats.  Journal editor Julian Savulescu wrote a blog post about the affair, below which are 624 comments.   It’s enlightening and depressing reading in equal measure.  Quick declaration of interest here – my academic background (such as it is) is in philosophy, and I used to work at Keele University’s Centre for Professional Ethics marketing their courses.  I know some of the people involved in the JME’s response, though not Savulescu or the authors of the paper.

There’s a lot that can (and probably should) be said about the deep misunderstanding that occurred between professional bioethicists and the non-academics concerned about ethical issues who read the paper, or who heard about it.  Part of that misunderstanding is about what ethicists do – they explore arguments, analyse concepts, test theories, follow the arguments.  They don’t have any special access to moral truth, and while their private views are often much better thought out than most people’s, most see their role as helping to understand arguments, not pushing any particular position.  Though some of them do that too, especially if it gets them on Newsnight.  I’m not really well informed enough to comment too much on this, but it seems to me that the ethicists haven’t done a great job of explaining what they do to their more moderate and sensible critics.  Those who post death threats and racist abuse are probably past reasoned argument and probably love having something to rail against because it justifies their peculiar world view, but for everyone else, I think it ought to be possible to explain.  Perhaps the notion of a lay summary that I mentioned last time might be helpful here.

Part of the reason for the fuss might have been because the article wasn’t available via open access, so some critics may not have had the opportunity to read the article and make up their own mind.  This might be thought of as a major argument in favour of open access – and of course, it is – the reasonable and sensible would have at least skim-read the article, and it’s easier to marshal a response when what’s being complained about is out there for reference.

However… the unfortunate truth is that there are elements out there who are looking for the next scandal, for the next chance to whip up outrage, for the next witch hunt.  And I’m not just talking about the blogosphere, I’m talking about elements of the mainstream media, who (regardless of our personal politics) have little respect or regard for notions of truth, integrity and fairness.  If they get their paper sales, web hits, outraged comments, and resulting manufactured “scandal”, then they’re happy.  Think I’m exaggerating?  Ask Hilary Mantel, who was on the receiving end of an entirely manufactured fuss, with comments she made in a long and thoughtful lecture taken deliberately and dishonestly out of context.

While open access will make things easier for high quality journalism and for the open-minded citizen and/or professional, it’ll also make it easier for the scandal-mongers (in the mainstream media and in the blogosphere) to identify the next victim to be thrown to the ravenous outrage-hungry wolves that make up their particular constituency.  It’s already risky to be known to be researching and publishing in certain areas – anything involving animal research, climate change, crop science, evolutionary theory, Münchhausen’s by proxy, vaccination, or (oddly) chronic fatigue syndrome/ME – each of which appears to have a hostile activist community ready to pounce on any research that comes back with the “wrong” answer.

I don’t want to go too far in presenting the world outside the doors of the academy as a swamp of unreason and prejudice.  But the fact is that alongside the majority of the general public (and bloggers and journalists) who are both rational and reasonable, there is an element that would be happy to twist (or invent) things to suit their own agenda, especially if that agenda involves whipping up manufactured outrage to enable their constituency to confirm their existing prejudices.  Never mind the facts, just get angry!

Doubtless we all know academics who would probably relish the extra attention and are already comfortable with the public spotlight.  But I’m sure we also know academics who do not seek the limelight, who don’t trust the media, and who would struggle to cope with even five minutes of (in)fame(y).  One day you’re a humble bioethicist, presumably little known outside your professional circles, and the next, hundreds of people are wishing you dead and calling you every name under the sun.  While Richard Dawkins seems to revel in his (sweary) hate mail, I think a lot of people would find it very distressing to receive emails hoping for their painful death.  I know it would upset me a lot, so please don’t send me any, okay?  And be nice in the comments…..

Of course, even if things never get that far or go that badly, with open access there’s always a greater chance of hostile comment or criticism from the more mainstream and reasonable media, who have a much bigger platform from which to speak than an academic journal.  This criticism need not be malicious – it could be legitimate opinion, or it could be based on a misunderstanding.  Open access opens up the academy to greater scrutiny and greater criticism.

As for what we do about this… it’s hard to say.  I certainly don’t say that we retreat behind the safety of our paywalls and sally forth with our research only when guarded by a phalanx of heavy infantry to protect us from the swinish multitude besieging our ivory tower.  But I think that there are things that we can do in order to be better prepared.  The use of lay summaries, and greater consideration of the lay reader when writing academic papers, will help guard against misunderstandings.

University external relations departments need to be ready to support and defend academic colleagues, and perhaps need to think about planning for these kind of problems, if they don’t do so already.

The consequences of Open Access: Part 1: Is anyone thinking about the “lay” reader?

The thorny issue of “open access” – which I take to mean the question of how to make the fruits of publicly-funded research freely and openly available to the public – is one that’s way above my pay grade and therefore not one I’ll be resolving in this blog post.  Sorry about that.  I’ve been following the debates with some interest, though not, I confess, an interest which I’d call “keen” or “close”.  No doubt some of the nuances and arguments have escaped me, and so I’ll be going to an internal event in a week or so to catch up.  I expect it’ll be similar to this one helpfully written up by Phil Ward over at Fundermentals.  Probably the best single overview of the history and arguments about open access is Paul Jump’s article in this week’s Times Higher – well worth a read.

I’ve been wondering about some of the consequences of open access that I haven’t seen discussed anywhere yet.  This first post is about the needs of research users, and I’ll be following it up with a post about some consequences of open access for academics that may require more thought.

I wonder if enough consideration is being given to the needs and interests of potential readers and users of all this research which is to be liberated from paywalls and other restrictions.  It seems to me that if Joe Public and Joanna Interested-Professional are going to be able to get their mitts on all this research, then this has very serious implications for academic research and academic writing.  I’d go as far as to say it’s potentially revolutionary, and may require radical and permanent changes to the culture and practice of academic writing for publication in a number of research fields.  I’m writing this to try to find out what thought has been given to this, amidst all the sound and fury about green and gold.

If I were reading an academic paper in a field that I was unfamiliar with, I think there are two things I’d struggle with.  One would be properly and fully understanding the article in itself, and the second would be understanding the article in the context of the broader literature and the state of knowledge in that area.  By way of example, a few years back I was looking into buying a rebounder – a kind of indoor mini-trampoline.  Many vendors made much of a study attributed to NASA which they interpreted as making dramatic claims about the efficacy of rebounder exercising compared to other kinds of exercise.  Being of a sceptical nature and armed with campus access to academic papers that weren’t open access, I went and had a look myself.  At the time, I concluded that these claims weren’t borne out by the study, which was really aimed at helping astronauts recover from spending time in weightlessness.  I don’t have access to the article as I’m writing this, so I can’t re-check, but here’s the abstract.  I see that this paper is over 30 years old, and that eight people is a very small sample size… so perhaps it’s been superseded, and it’s not very highly powered.  I think the final line of the abstract may back up my recollection (“… a finding that might help identify acceleration parameters needed for the design of remedial procedures to avert deconditioning in persons exposed to weightlessness”).

For the avoidance of doubt, I infer no dishonesty nor nefarious intent on the part of rebounder vendors and advocates – I may be wrong in my interpretation, and even if I’m not, I expect this is more likely to be a case of misunderstanding a fairly opaque paper rather than deliberate distortion.   In any case, my own experience with rebounders has been very positive, though I still don’t think they’re a miracle or magic bullet exercise.

How would open access help me here?  Well, obviously it would give me access to the paper.  But it won’t help me understand it, won’t help me draw inferences from it, won’t help me place it in the context of the broader literature.  Those numbers in that abstract look great, but I don’t have the first clue what they mean.  Now granted, with full open access I can carry out my own literature search if I have the time, knowledge and inclination.  But it’ll still be difficult for me to compare and contrast and form my own conclusions.  And I imagine that it’ll be harder still for others without a university education and a degree of familiarity with academic papers, or who haven’t read Ben Goldacre’s excellent Bad Science.

I worry that open access will only make it easier for people with an agenda (to sell products, or to push a certain political position) to cherry-pick evidence and put together an ill-deserved veneer of respectability by linking to academic papers and presenting (or feigning to present) a summary of their contents and arguments.  The intellectually dishonest are already doing this, and open access might make it easier.

I don’t present this as an argument against open access, and I don’t agree with a paternalist elitist view that holds that only those with sufficient letters after their name can be trusted to look at the precious research.  Open access will make it easier to debunk the charlatans and the quacks, and that’s a good thing.  But perhaps we need to think about how academics write papers from now on – they’re not writing just for each other and for their students, but for ordinary members of the public and/or research users of various kinds who might find (or be referred to) their paper online.  Do we need to start thinking about a “lay summary” for each paper to go alongside the abstract, setting out what the conclusions are in clear terms, what it means, and what it doesn’t mean?

What do we do with papers that present evidence for a conclusion that further research demonstrates to be false?  In cases of research misconduct, these can be formally withdrawn, but we wouldn’t want to do that with papers that have just been superseded, not least because they might turn out to be correct after all, and they remain a valid and important part of the debate.  And of course the current scientific consensus on any particular issue may not be clear, and it’s less clear still how the state of the debate can be impartially communicated to research users.

I’d argue that we need to think about a format or template for an “information for non-academic readers” section or something similar.  This would set out a lay summary of the research, its limitations, links to key previous studies, details of the publishing journal and evidence of its bona fides.  Of course, it’s possible that regularly written and re-written evidence briefings on particular topics, designed for research users, would be more useful.  One source of lay reviews I particularly like is the NHS Behind the Headlines service, which comments on the accuracy (or otherwise) of media coverage of health research news.  It’s nicely written, easily accessible, and isn’t afraid to criticise or praise media coverage when warranted.  But even so, as the journals are the original source, some kind of standard boilerplate information section might be in order.

Has there been any discussion of these issues that I’ve missed?  This all seems important to me, and I wouldn’t want us to be in a position of finally agreeing what colour our open access ought to be, only to find that next to no thought has been given to potential readers.  I’ve talked mainly about health/exercise examples in this post, but all this could apply just as well to pretty much any other field of research where non-academics might take an interest.

ESRC “demand management” measures working….. and why rises and falls in institutions’ levels of research funding are not news

There was an interesting snippet of information in an article in this week’s Times Higher about the latest research council success rates.

 [A] spokeswoman for the ESRC said that since the research council had begun requiring institutions from June 2011 to internally sift applications before submitting them, it had recorded an overall success rate of 24 per cent, rising to 33 per cent for its most recent round of responsive mode grants.  She said that application volumes had also dropped by 37 per cent, “which is an encouraging start towards our demand management target of a 50 per cent reduction” by the end of 2014-15.

Back in October last year I noticed what I thought was a change in tone from the ESRC which gave the impression that they were more confident that institutions had taken note of the shot across the bows of the “demand management” measures consultation exercise(s), and that perhaps asking for greater restraint in putting forward applications would be sufficient.  I hope it is, because the current formal demand management proposals – which will be implemented if required – unfairly and unreasonably include co-applicants in any sanction.

I’ve written before (and others have added very interesting comments) about how I think we arrived at the situation where social science research units were flinging in as many applications as possible in the hope that some of them would stick.  And I hope the recent improvements in success rates to around 1-in-3 or 1-in-4 don’t serve to re-encourage this kind of behaviour.  We need long-term, sustainable, careful restraint in terms of what applications are submitted by institutions to the ESRC (and other major funders, for that matter) and the state in which they’re submitted.

Everyone will want to improve the quality of applications, and internal mentoring and peer review and the kind of lay review that I do will assist with that, but we also need to make sure that the underlying research idea is what I call ‘ESRC-able’.  At Nottingham University Business School, I secured agreement a while ago now to introduce a ‘proof of concept’ review phase for ESRC applications, where we review a two page outline first, before deciding whether to give the green light for the development of a full application.  I think this allows time for changes to be made at the earliest stage, and makes it much easier for us to say that the idea isn’t right and shouldn’t be developed than if a full application was in front of us.

And what isn’t ‘ESRC-able’?  I think a look at the assessment schema gives some useful clues – if you can’t honestly say that your application would fit in the top two categories on the final page, you probably shouldn’t bother.  ‘Dull but worthy’ stuff won’t get funded, and I’ve seen the phrase “incremental progress” used in referees’ comments to damn with faint praise.  There’s now a whole category of research that is of good quality and would doubtless score respectably in any REF exercise, but which simply won’t be competitive with the ESRC.  This, of course, raises the question about how non-groundbreaking stuff gets funded – the stuff that’s more than a series of footnotes to Plato, but which builds on and advances the findings of ground-breaking research by others.  And to that I have no answer – we have a system which craves the theoretically and methodologically innovative, but after a paradigm has been shifted, there’s no money available to explore the consequences.

*     *     *     *     *

Also in the Times Higher this week is the kind of story that appears every year – some universities have done better this year at getting research funding/with their success rates than in previous years, and some have done worse.  Some of those who have done better and worse are the traditional big players, and some are in the chasing pack.  Those who have done well credit their brilliant internal systems and those who have done badly will contest the figures or point to extenuating circumstances, such as the ending of large grants.

While one always wants to see one’s own institution doing well and doing better, and everyone always enjoys a good bit of schadenfreude at the expense of their rivals – sorry, benchmark institutions – and any apparent difficulties that a big beast finds itself in, are any of these short term variations of actual, real, statistical significance?  Apparent big gains can be down to a combination of a few big wins, grants transferring in with new staff, and just… well… the kind of natural variation you’d expect to see.  Big losses could be big grants ending, staff moving on, and – again – natural variance.  Yes, you could ascribe your big gains to your shiny new review processes, but would you also conclude that there’s a problem with those same processes and people the year after, when performance is apparently less good?
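The scale of that natural variation is easy to sketch.  Treating each application as an independent trial (a simplification, and the n and p below are illustrative values of my own, not taken from any report), the year-to-year spread in awards won is just binomial noise:

```python
# Illustrative back-of-envelope: year-to-year spread in grants won, if an
# institution submits n applications, each with success probability p.
# (n and p are made-up illustrative values, not figures from the report.)
from math import sqrt

n, p = 20, 0.25
expected = n * p            # mean number of awards per year
sd = sqrt(n * p * (1 - p))  # standard deviation of that count

print(f"expected awards per year: {expected:.1f} ± {sd:.1f}")
```

With an expected five awards a year and a standard deviation of nearly two, swings of a couple of grants either way are entirely ordinary noise – no shiny new review process (or failure thereof) is needed to explain them.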

Why these short term (and mostly meaningless) variations are more newsworthy than the radical variation in ESRC success rates for different social science disciplines, I have no idea…