Update – read this post for the 2012/13 stats for success rates by discipline
The ESRC have recently published a set of ‘vital statistics’ which are “a detailed breakdown of research funding for the 2011/12 financial year” (see page 22). While differences in success rates between academic disciplines are nothing new, this year’s figures show some really quite dramatic disparities which – in my view at least – demand both an explanation and action.
The overall success rate was 14% (779 applications, 108 funded) for the last tranche of responsive mode Small Grants and responsive mode Standard Grants (now Research Grants). However, Business and Management researchers submitted 68 applications, of which 1 was funded. One. One single funded application. In the whole year. For the whole discipline. Education fared little better with 2 successes out of 62.
Just pause for a moment to let that sink in. Business and Management. 1 of 68. Education. 2 of 62.
Others did worse still. Nothing for Demography (4 applications), Environmental Planning (8), Science and Technology Studies (4), Social Statistics, Computing and Methods (11), and Social Work (10). However, with a 14% success rate working out at about 1 in 7, low volumes of applications may explain this. It’s rather harder to explain a total of 3 applications funded from 130.
Next least successful were ‘no lead discipline’ (4 of 43) and Human Geography (3 of 32). No other subjects had success rates in single figures. At the top end were Socio-Legal Studies (a stonking 39%, 7 of 18), and Social Anthropology (28%, 5 of 18), with Linguistics; Economics; and Economic and Social History also having hit rates over 20%. Special mention for Psychology (185 applications, 30 funded, 16% success rate) which secured the highest number of funded projects – almost as many as Sociology and Economics (the second and third most funded) combined.
Is this year unusual, or is there a worrying and peculiar trend developing? Well, you can judge for yourself from this table on page 49 of last year’s annual report, which has success rates going back to the heady days of 06/07. Three caveats, though, before you go haring off to see your own discipline’s stats. One is that the reports refer to financial years, not academic years, which may (but probably doesn’t) make a difference. The second is that the figures refer to Small and Standard Grants only (not Future Leaders/First Grants, Seminar Series, or specific targeted calls). The third is that funded projects are categorised by lead discipline only, so the figures may not tell the full story as regards involvement in interdisciplinary research.
You can pick out your own highlights, but it looks to me as if this year is only a more extreme version of trends that have been going on for a while. Last year’s Education success rate? 5%. The years before? 8% and 14%. Business and Management? A heady 11%, compared to 10% and 7% for the preceding years. And you’ve got to go all the way back to 09/10 to find the last time any projects were funded in Demography, Environmental Planning, or Social Work. And Psychology has always been the most funded, and has always got about twice as many projects as the second and third subjects, albeit from a proportionately large number of applications.
When I have more time I’ll try to pull all the figures together in a single spreadsheet, but at first glance many of the trends seem similar.
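In the meantime, the basic sums are easy enough to check. Here’s a quick sketch – using only the 2011/12 figures quoted above, not the full published dataset – that recomputes the success rates and orders the disciplines from worst to best. The dictionary of figures is just a hand-typed subset for illustration:

```python
# Success rates from the 2011/12 figures quoted in this post
# (responsive mode Small and Standard Grants only).
# This is an illustrative subset, not the ESRC's full dataset.
figures = {
    "Business and Management": (68, 1),
    "Education": (62, 2),
    "No lead discipline": (43, 4),
    "Human Geography": (32, 3),
    "Psychology": (185, 30),
    "Social Anthropology": (18, 5),
    "Socio-Legal Studies": (18, 7),
    "All disciplines": (779, 108),
}

# Sort by success rate, lowest first, and print a small table.
for discipline, (apps, funded) in sorted(
        figures.items(), key=lambda kv: kv[1][1] / kv[1][0]):
    rate = 100 * funded / apps
    print(f"{discipline:<24} {funded:>3} of {apps:<3} = {rate:4.1f}%")
```

Run against these numbers, it confirms the picture above: Business and Management at roughly 1.5%, Education at just over 3%, and the overall rate at about 14%.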
So what’s going on here? Well, there are a number of possibilities. One is that our Socio-Legal Studies research in this country is tip top, and B&M research and Education research are comparatively very weak. Certainly I’ve heard it said that B&M research tends to suffer from poor research methodologies. Another possibility is that some academic disciplines are very collegiate and supportive in nature, and scratch each other’s backs when it comes to funding, while other disciplines are more back-stabby than back-scratchy.
But are any or all of these possibilities sufficient to explain the difference in funding rates? I really don’t think so. So what’s going on? Unconscious bias? Snobbery? Institutional bias? Politics? Hidden agendas? All of the above? Anyone know?
More pertinently, what do we do about it? Personally, I’d like to see the appropriate disciplinary bodies putting a bit of pressure on the ESRC for some answers, some assurances, and the production of some kind of plan for addressing the imbalance. While no-one would expect to see equal success rates for every subject, this year’s figures – in my view – are very troubling.
And something needs to be done about it, whether that’s a re-thinking of priorities, putting the knives away, addressing real disciplinary weaknesses where they exist, ring-fenced funding, or some combination of all of the above. Over to greater minds than mine…