Demand mismanagement: a practical guide

I’ve written an article on Demand (Mis)management for Research Professional. While most of the site’s content is behind a paywall, they’ve been kind enough to make my article open access, which saves me the trouble of cutting and pasting it here.

Universities are striving to make their grant applications as high in quality as possible, avoid wasting time and energy, and run a supportive yet critical internal review process. Here are a few tips on how not to do it. [read the full article]

In other news, I was at the ARMA conference earlier this week and co-presented a session on Research Development for the Special Interest Group with Dr Jon Hunt from the University of Bath.  A copy of the presentation and some further thoughts will follow once I’ve caught up with my email backlog…

ESRC “demand management” measures working… and why rises and falls in institutions’ levels of research funding are not news

There was an interesting snippet of information in an article in this week’s Times Higher about the latest research council success rates.

 [A] spokeswoman for the ESRC said that since the research council had begun requiring institutions from June 2011 to internally sift applications before submitting them, it had recorded an overall success rate of 24 per cent, rising to 33 per cent for its most recent round of responsive mode grants.  She said that application volumes had also dropped by 37 per cent, “which is an encouraging start towards our demand management target of a 50 per cent reduction” by the end of 2014-15.

Back in October last year I noticed what I thought was a change in tone from the ESRC which gave the impression that they were more confident that institutions had taken note of the shot across the bows of the “demand management” measures consultation exercise(s), and that perhaps asking for greater restraint in putting forward applications would be sufficient.  I hope it is, because the current formal demand management proposals, which will be implemented if required, unfairly and unreasonably include co-applicants in any sanction.

I’ve written before (and others have added very interesting comments) about how I think we arrived at the situation where social science research units were flinging in as many applications as possible in the hope that some of them would stick.  And I hope the recent improvements in success rates to around 1-in-3 or 1-in-4 don’t serve to re-encourage this kind of behaviour.  We need long-term, sustainable, careful restraint in terms of which applications institutions submit to the ESRC (and other major funders, for that matter), and the state in which they’re submitted.

Everyone will want to improve the quality of applications, and internal mentoring and peer review and the kind of lay review that I do will assist with that, but we also need to make sure that the underlying research idea is what I call ‘ESRC-able’.  At Nottingham University Business School, I secured agreement a while ago now to introduce a ‘proof of concept’ review phase for ESRC applications, where we review a two page outline first, before deciding whether to give the green light for the development of a full application.  I think this allows time for changes to be made at the earliest stage, and makes it much easier for us to say that the idea isn’t right and shouldn’t be developed than if a full application was in front of us.

And what isn’t ‘ESRC-able’?  I think a look at the assessment schema gives some useful clues – if you can’t honestly say that your application would fit in the top two categories on the final page, you probably shouldn’t bother.  ‘Dull but worthy’ stuff won’t get funded, and I’ve seen the phrase “incremental progress” used in referees’ comments to damn with faint praise.  There’s now a whole category of research that is of good quality and would doubtless score respectably in any REF exercise, but which simply won’t be competitive with the ESRC.  This, of course, raises the question of how non-groundbreaking stuff gets funded – the stuff that’s more than a series of footnotes to Plato, but which builds on and advances the findings of groundbreaking research by others.  And to that I have no answer – we have a system which craves the theoretically and methodologically innovative, but after a paradigm has been shifted, there’s no money available to explore the consequences.

*     *     *     *     *

Also in the Times Higher this week is the kind of story that appears every year – some universities have done better this year at getting research funding/with their success rates than in previous years, and some have done worse.  Some of those who have done better and worse are the traditional big players, and some are in the chasing pack.  Those who have done well credit their brilliant internal systems and those who have done badly will contest the figures or point to extenuating circumstances, such as the ending of large grants.

While one always wants to see one’s own institution doing well and doing better, and everyone enjoys a good bit of schadenfreude at the expense of rival benchmark institutions and any apparent difficulties that a big beast finds itself in, are any of these short-term variations of actual, real, statistical significance?  Apparent big gains can be down to a combination of a few big wins, grants transferring in with new staff, and just… well… the kind of natural variation you’d expect to see.  Big losses could be down to big grants ending, staff moving on, and – again – natural variance.  Yes, you could ascribe your big gains to your shiny new review processes, but would you also conclude that there’s a problem with those same processes and people the year after, when performance is apparently less good?
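To put some rough numbers on that natural variation: if we model each application as an independent coin flip (a crude simplification, and the figures below are entirely hypothetical, not any real institution’s), the year-to-year wobble in an observed success rate is surprisingly large at realistic application volumes.

```python
import math

def success_rate_sd(n_apps: int, true_rate: float) -> float:
    """Standard deviation of an observed success rate if each of
    n_apps applications independently succeeds with probability
    true_rate (a binomial model)."""
    return math.sqrt(true_rate * (1 - true_rate) / n_apps)

# Hypothetical institution: 100 applications a year, with underlying
# quality corresponding to a 'true' 20% success rate.
sd = success_rate_sd(100, 0.20)
print(f"one-year sd: {sd:.1%}")  # about 4 percentage points
```

On these (made-up) numbers, a swing from 16% one year to 24% the next is well within the range you’d expect from chance alone, with no change at all in the quality of applications or internal processes.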

Why these short-term (and mostly meaningless) variations are more newsworthy than the radical variation in ESRC success rates for different social science disciplines, I have no idea…

ESRC success rates by discipline: what on earth is going on?

Update – read this post for the 2012/13 stats for success rates by discipline

The ESRC have recently published a set of ‘vital statistics’ which are “a detailed breakdown of research funding for the 2011/12 financial year” (see page 22).  While differences in success rates between academic disciplines are nothing new, this year’s figures show some really quite dramatic disparities which – in my view at least – require an explanation and action.

The overall success rate was 14% (779 applications, 108 funded) for the last tranche of responsive mode Small Grants and responsive mode Standard Grants (now Research Grants).  However, Business and Management researchers submitted 68 applications, of which 1 was funded.  One.  One single funded application.  In the whole year.  For the whole discipline.  Education fared little better, with 2 successes out of 62.

Just pause for a moment to let that sink in.  Business and Management.  1 of 68.  Education.  2 of 62.

Others did worse still.  Nothing for Demographics (4 applications), Environmental Planning (8), Science and Technology Studies (4), Social Stats, Computing, Methods (11), and Social Work (10).  However, with a 14% success rate working out at about 1 in 7, low volumes of applications may explain this.  It’s rather harder to explain a total of 3 applications funded from 130.
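As a back-of-the-envelope check on whether small numbers alone could produce these figures, we can treat each application as an independent draw at the overall 14% success rate (obviously a simplification – it ignores differences in application quality and panel behaviour) and ask how likely these outcomes would be under that assumption.

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p): the chance of seeing at most
    k funded applications out of n, if each application independently
    had success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Business and Management: 1 funded from 68, against an overall 14% rate.
print(binom_cdf(1, 68, 0.14))  # well under 1 in 1,000
# Education: 2 funded from 62.
print(binom_cdf(2, 62, 0.14))  # under 1 in 100
```

By contrast, zero successes from 4 or 8 applications is entirely unremarkable at a 14% rate – which is why the low-volume disciplines are harder to draw conclusions from than the 3-from-130 combined figure.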

Next least successful were ‘no lead discipline’ (4 of 43) and Human Geography (3 from 32).  No other subjects had success rates in single figures.  At the top end were Socio-Legal Studies (a stonking 39%, 7 of 18), and Social Anthropology (28%, 5 from 18), with Linguistics; Economics; and Economic and Social History also having hit rates over 20%.  Special mention for Psychology (185 applications, 30 funded, 16% success rate) which scored the highest number of projects – almost as many as Sociology and Economics (the second and third most funded) combined.

Is this year unusual, or is there a worrying and peculiar trend developing?  Well, you can judge for yourself from this table on page 49 of last year’s annual report, which has success rates going back to the heady days of 06/07.  Three caveats, though, before you go haring off to see your own discipline’s stats.  One is that the reports refer to financial years, not academic years, which may (but probably doesn’t) make a difference.  The second is that the figures refer to Small and Standard Grants only (not Future Leaders/First Grants, Seminar Series, or specific targeted calls).  The third is that funded projects are categorised by lead discipline only, so the figures may not tell the full story as regards involvement in interdisciplinary research.

You can pick out your own highlights, but it looks to me as if this year is only a more extreme version of trends that have been going on for a while.  Last year’s Education success rate?  5%.  The years before?  8% and 14%.  Business and Management?  A heady 11%, compared to 10% and 7% for the preceding years.  And you’ve got to go all the way back to 09/10 to find the last time any projects were funded in Demography, Environmental Planning, or Social Work.  Psychology has always been the most funded, and has always secured about twice as many projects as the second and third subjects, albeit from a proportionately large number of applications.

When I have more time I’ll try to pull all the figures together in a single spreadsheet, but at first glance many of the trends seem similar.

So what’s going on here?  Well, there are a number of possibilities.  One is that our Socio-Legal Studies research in this country is tip-top, and B&M and Education research is comparatively very weak.  Certainly I’ve heard it said that B&M research tends to suffer from poor research methodologies.  Another possibility is that some academic disciplines are very collegiate and supportive in nature, and scratch each other’s backs when it comes to funding, while other disciplines are more back-stabby than back-scratchy.

But are any or all of these possibilities sufficient to explain the difference in funding rates?  I really don’t think so.  So what’s going on?  Unconscious bias?  Snobbery?  Institutional bias?  Politics?  Hidden agendas?  All of the above?  Anyone know?

More pertinently, what do we do about it?  Personally, I’d like to see the appropriate disciplinary bodies putting a bit of pressure on the ESRC for some answers, some assurances, and the production of some kind of plan for addressing the imbalance.  While no-one would expect to see equal success rates for every subject, this year’s figures – in my view – are very troubling.

And something needs to be done about it, whether that’s a re-thinking of priorities, putting the knives away, addressing real disciplinary weaknesses where they exist, ring-fenced funding, or some combination of all of the above.  Over to greater minds than mine…

Book review: The Research Funding Toolkit (Part 1)

For the purposes of this review, I’ve set aside my aversion to the use of terms like ‘toolkit’ and ‘workshop’.

The existence of a market for The Research Funding Toolkit, by Jacqueline Aldridge and Andrew Derrington, is yet more evidence of how difficult it is to get research funding in the current climate.  Although the primary target audience is an academic one, research managers and those in similar roles “will also find most of this book useful”, and I’d certainly have no hesitation in recommending it to researchers who want to improve their chances of getting funding, and to both new and experienced research managers.  In particular, academics who don’t have regular access to research managers (or similar) or to experienced grant getters and givers at their own institution should consider this book essential reading if they entertain serious ambitions of obtaining research funding.  While no amount of skill in grant writing will get a poor idea funded, a lack of skill can certainly prevent an outstanding idea from getting the hearing it deserves if the application lacks clarity, fails to highlight the key issues, or fails to make a powerful case for its importance.

The authors have sought to distil a substantial amount of advice and experience down into one short book which covers finding appropriate funding sources, planning an application, understanding application forms, and assembling budgets.  But it goes beyond mere administrative advice, and also addresses writing style, getting useful (rather than merely polite) feedback on draft versions, the internal politics of grant getting, the challenges of collaborative projects, and the key questions that need to be addressed in every application.  Crucially, it demystifies what really goes on at grant decision making meetings – something that far too many applicants know far too little about.  Applicants would love to think that the scholarly and eminent panel spend hours subjecting every facet of their magnum opus to detailed, rigorous, and forensic analysis.  The reality is – unavoidably given application numbers  – rather different.

Aldridge and Derrington are well-situated to write a book about obtaining research funding.  Aldridge is Research Manager at Kent Business School and has over eight years’ experience of research management and administration.  Derrington is Pro-Vice Chancellor for Humanities and Social Sciences at the University of Liverpool, and has served on grant committees for several UK research councils and for the Wellcome Trust.  His research has been “continuously funded” by various schemes and funders for 30 years.  I think a book like this could only have been written in close collaboration between an academic with grant getting and giving experience, and a research manager with experience of supporting applications over a number of years.

The book practises what it preaches by applying the principles of grant writing that it advocates to the style and layout of the book itself.  It is organised into 13 distinct chapters, each with a summary and introduction, and a conclusion that draws out the key points and lessons.  It includes 19 different practical tools, as well as examples from successful grant applications.  One of the appendices offers advice on running institutional events on grant getting.  As it advises applicants to do, it breaks the text down into small chunks, makes good use of headings and subheadings, and uses clear, straightforward language.  It’s certainly an easy, straightforward read which won’t take too long to finish cover-to-cover, and the structure allows the reader to dip back in to re-read appropriate sections later.  Probably the most impressive thing for me about the style is how lightly it wears its expertise – genuinely useful advice without falling into the traps of condescension, smugness, or preaching.  Although the prose sacrifices sparkle for clarity and brevity, the book coins a number of useful phrases and distinctions that will be of value, and I’ll certainly be adopting one or two of them.

Writing a book of this nature raises a number of challenges about specificity and relevance.  Different subjects have different funders with different priorities and conventions, and arrangements vary from country to country and – of course – over time.  The authors have deliberately sought to use a wide range of example funders, including funders from Australia, America, and Europe – though, as you might expect, the majority of exemplar funders are UK-based.  Different Research Councils are used as case studies, and I would imagine that the advice given is generalisable enough to be of real value across academic disciplines and countries.  It’s harder to tell how well this book will age (references to web resources all date from October 2011), but much of the advice flows directly from (a) the scarcity of resources, and (b) the way that grant panels are organised and work, and it’s hard to imagine either changing substantially.  The authors are careful not to make generalisations or sweeping assertions based on any particular funder or scheme, so I would be broadly optimistic about the book’s continuing relevance and utility in years to come.  There’s also a website to accompany the book where new materials and updates may be added in the future; there are already a number of blog posts published since the book itself.

Worries about appearing dated may account for the book having comparatively little to say about the impact agenda and how to go about writing an impact statement.  Only two pages address this directly, and much of that space is taken up with examples.  Although not all UK funders ask for impact statements yet, the research councils have been asking for them for some time, and indications are that other countries are more likely to follow suit than not.  However, I think the authors were right not to devote a substantial section to this: understandings of and approaches to impact are still in their comparative infancy, and such a section would be likely to date quickly.

I’ve attempted a fairly general review in this post, and I’ll save most of my personal reaction for Part 2.  As well as highlighting a few areas that I found particularly useful, I’m going to raise a few issues arising from the book as a jumping-off point for debate and discussion.  Attempting to do that in this first post would make it too long, and would unbalance the review by placing excessive focus on areas where I’d tentatively disagree, rather than on the overwhelming majority of the points and arguments in the book, which I thoroughly agree with and endorse absolutely.

‘The Research Funding Toolkit’ (£21.99 for the paperback version) is available from Sage.  The Sage website also mentions an ebook version, but the link doesn’t appear to be working at the time of writing.

Declarations of interest:
Publishers Sage were kind enough to provide me with a free review copy of this book.  I have had some very brief Twitter interactions with Derrington and I met Aldridge briefly at the ARMA conference earlier this year.

News from the ESRC: International co-investigators and the Future Leaders Scheme

"They don't come over here, they take our co-investigator jobs..."

I’m still behind on my blogging – I owe the internet the second part of the impact series, and a book review I really must get round to writing.  But I picked up an interesting nugget of information regarding the ESRC and international co-investigators that’s worth sharing and commenting upon.

ESRC communications send round an occasional email entitled ‘All the latest from the ESRC’, which is well worth subscribing to and reading very carefully, as quite big announcements and changes are often smuggled out in the small print.  In the latest edition, for example, the headline news is the Annual Report (2011-12), while the announcement of the ESRC Future Leaders call for 2012 is only the fifth item down a list of funding opportunities.  To be fair, it was also announced on Twitter and perhaps elsewhere too, and perhaps the email has a wider audience than people like me.  But even so, it’s all a bit low key.

I’ve not got much to add to what I said last year about the Future Leaders Scheme other than to note with interest the lack of an outline stage this year, and the decision to ring fence some of the funding for very early career researchers – current doctoral students and those who have just passed their PhD.  Perhaps the ESRC are now more confident in institutions’ ability to regulate their own submission behaviour, and I can see this scheme being a real test of this.  I know at the University of Nottingham we’re taking all this very seriously indeed, and grant writing is now neither a sprint nor a marathon but more like a steeplechase, and my impression from the ARMA conference is that we’re far from alone in this.  Balancing ‘demand management’ with a desire to encourage applications is a topic for another blog post.  As is the effect of all these calls with early Autumn deadlines – I’d argue it’s much harder to demand manage over the summer months when applicants, reviewers, and research managers are likely to be away on holiday and/or researching.

Something else mentioned in the ESRC email is a light-touch review of the ESRC’s international co-investigator policy.  One of the findings was that

“…grant applications with international co-investigators are nearly twice as likely to be successful in responsive mode competitions as those without, strengthening the argument that international cooperation delivers better research.”

This is very interesting indeed.  My first reaction is to wonder whether all of that greater success can be explained by higher quality, or whether the extra value for money offered has made a difference.  Outside of the various international co-operation/bilateral schemes, the ESRC would generally expect only to pay directly incurred research costs for ICo-Is, such as travel, subsistence, transcription, and research assistance.  It won’t normally pay for investigator time and will never pay overheads, which represents a substantial saving on naming a UK-based Co-I.

While the added value-for-money argument will generally go in favour of the application, there are circumstances where it might make it technically ineligible.  When the ESRC abolished the Small Grants scheme and introduced the floor of £200k as the minimum to be applied for through the research grants scheme, the figure of £200k was considered to represent the minimum scale/scope/ambition that they were prepared to entertain.  But a project with a UK Co-I may sneak in just over £200k and be eligible, while an identical project with an ICo-I would not be eligible, as it would not have salary costs or overheads to bump up the cost.  I did raise this with the ESRC a while back when I was supporting an application that would have been ineligible under the new rules, but we managed to submit it before the final deadline for Small Grants, so the issue did not arise for us then.  I’m sure it will (and probably has) arisen for others.
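The threshold problem can be sketched with some entirely hypothetical figures (none of these are real ESRC costings): the same underlying project clears the £200k floor when the second investigator is UK-based, because their salary and overheads count towards the total, but falls well short with an international co-investigator for whom only directly incurred costs can normally be charged.

```python
# All figures are hypothetical, for illustration only.
DIRECT_RESEARCH_COSTS = 150_000  # PI time, fieldwork, transcription, RA, travel
THRESHOLD = 200_000              # minimum request under the research grants scheme

# Same project, two ways of costing the second investigator:
uk_coi_salary = 25_000
uk_coi_overheads = 30_000        # estates/indirect costs charged on that salary
intl_coi_direct = 5_000          # travel and subsistence only: no salary, no overheads

uk_version = DIRECT_RESEARCH_COSTS + uk_coi_salary + uk_coi_overheads  # 205,000
intl_version = DIRECT_RESEARCH_COSTS + intl_coi_direct                 # 155,000

print(uk_version >= THRESHOLD)    # True  - eligible
print(intl_version >= THRESHOLD)  # False - identical project, ineligible
```

On these made-up numbers, the cheaper (and arguably better-value) version of the project is the one that fails the eligibility test.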

The ESRC has clarified the circumstances under which they will pay overseas co-investigator salary costs:

“….only in circumstances where payment of salaries is absolutely required for the research project to be conducted. For example, where the policy of the International Co-Investigator’s home institution requires researchers to obtain funding for their salaries for time spent on externally-funded research projects.

In instances where the research funding structure of the collaborating country is such that national research funding organisations equivalent to the ESRC do not normally provide salary costs, these costs will not be considered. Alternative arrangements to secure researcher time, such as teaching replacement costs, will be considered where these are required by the co-investigator’s home institution.”

This all seems fairly sensible, and would allow the participation of researchers based in institutes where they’re expected to bring in their own salary, as well as those where there isn’t a substantial research time allocation that could be straightforwardly used for the project.

While it would clearly be inadvisable to add on an ICo-I in the hope of boosting chances of success or for value for money alone, it’s good to know that applications with ICo-Is are doing well with the ESRC even outside of the formal collaborative schemes, and that we shouldn’t shy away from looking abroad for the very best people to work with.   Few would argue with the ESRC’s contention that

[m]any major issues requiring research evidence (eg the global economic crisis, climate change, security etc.) are international in scope, and therefore must be addressed with a global research response.