Meanwhile, over at the ESRC…

There have been a few noteworthy developments at the ESRC over the summer months which I think are probably worth drawing together into a single blog post for those (like me) who’ve made the tactical error of choosing to have some time off over the summer.

1.  The annual report

I’ve been looking forward to this (I know, I know….) to see whether there’s been any substantial change to the huge differences in success rates between different academic disciplines.  I wrote a post about this back in October and it’s by some distance the most read article on my blog. Have there been any improvements since 2011/12, when Business and Management had 1 of 68 applications funded and Education 2 of 62, compared with Socio-Legal Studies (39%, 7 of 18) and Social Anthropology (28%, 5 of 18)?

Sadly, we still don’t know, because this information is nowhere to be found in the annual report. We know the expenditure by region and the top 11 (sic) recipients of research expenditure, research and training expenditure, and the two combined.  But we don’t know how this breaks down by subject.  To be fair, that information wasn’t published until October last year, and so presumably it will be forthcoming.  And presumably the picture will be better this year.

That’s not to say that there’s no useful information in the annual report. We learn that the ESRC Knowledge Exchange Scheme has a very healthy success rate of 52%, though I think I’m right in saying that the scheme will have been through a number of variations in the period in question. Historically it’s not been an easy scheme to apply for, partly because of the need for co-funding from research partners, and partly because of a number of very grey areas around costing rules.

For the main Research Grants Scheme, success rates are also up, though by how much is unclear.  The text of the report (p. 18) states that

After a period where rates plummeted to as low as 11 per cent, they have now risen to 35 per cent, in part because we have committed additional funding to the scheme [presumably through reallocation, rather than new money] but also because application volume has decreased. This shows the effects of our demand management strategy, with HEIs now systematically quality assuring their applications and filtering out those which are not ready for submission. We would encourage HEIs to continue to develop their demand management strategies as this means academics and administrators in both HEIs and the ESRC have been able to focus efforts on processing and peer-reviewing a smaller number of good quality applications, rather than spending time on poor quality proposals which have no chance of being funded.

Oddly, the accompanying table gives a 27% success rate, and unfortunately (at the time of writing) the document with success rates for individual panel meetings hasn’t been updated since April 2012, and the individual panel meeting documents only list funded projects, not success rates. But whatever the success rate is, it does appear to be a sign that “demand management” is working and that institutions are practising restraint in their application habits.  Success rates of between a quarter and a third sound about right to me – enough applications to allow choice, but not so many as to be a criminal waste of time and effort.

The report also contains statistics about the attendance of members at Council and Audit Committee Meetings, but you’ll have to look them up for yourself as I have a strict “no spoilers” policy on this blog.

I very much look forward – and I think the research community does too – to seeing the success rates by academic discipline at a later date.

2. A new Urgency Grants Mechanism

More good news…. a means by which research funding decisions can be taken quickly in response to the unexpected and significant.  The example given is the riots of summer 2011, and I remember thinking that someone would get a grant out of all this as I watched TV pictures of my former stomping ground of Croydon burning.  But presumably less… explosive unexpected opportunities might arise too.  All this seems only sensible, and allows a way for urgent requests to be considered in a timely and transparent manner.

3. ESRC Future Research Leaders call

But “sensible” isn’t a word I’d apply to the timing of this latest call.  First you’ve heard of it?  Well, better get your skates on, because the deadline is the 24th September. Outline applications?  Expressions of interest?  Nope, a full application.  And in all likelihood you should take your skates off again, because chances are that your institution’s deadlines for internal peer review have already been and gone.

The call came out on or about the 23rd July, with a deadline of 24th September. Notwithstanding what I’ve said previously about no time of the academic year being a good time to get anything done, it’s very hard to understand why this happened.  Surely the ESRC know that August/September is when a lot of academic staff (and therefore research support) are away from the university on a mixture of annual leave and undertaking research.  Somehow, institutions are expected to cobble together a process of internal review and institutional support, and individuals are expected to find time to write the application.  It’s hard enough for the academics to write the applications, but if we take the demand management agenda seriously, we should be looking at both the track record and the proposed project of potential applicants, thinking seriously about mentoring and support, and having difficult conversations with people we don’t think are ready.  That needs a lot of senior time, and a lot of research management time.

This scheme is a substantial investment – effectively 70 projects worth up to £250k (at 80% fEC) – and given that success rates for the Small Grants scheme and British Academy Fellowships are tiny, it’s really the major opportunity to be PI on a substantial project.  The scheme is overtly picking research leaders of the future, but the timetable means that it’s picking those leaders from those who didn’t have holiday booked in the wrong couple of weeks, or who could clear their diaries to write the application, or who don’t have a ton of teaching to prepare for – which rules out most early career academics, I would imagine.

Now it might be objected that we should have known that the call was coming.  Well…. yes and no. The timing was similar last year, and it was tight then, but it’s worse this year – last year’s call was announced on about the same date, but with a deadline of 4th October, almost two working weeks later.  Two working weeks whose loss turns a tall order into something nigh on impossible, and which can only favour those with lighter workloads in the run-up to the new academic year. And even knowing that it’s probably coming doesn’t help.  Do we really expect people to start making holiday plans around when a particular call might come out?  Really?  If we must have a September deadline, can we know about it in January?  Or even earlier?  To be fair, the ESRC has got much better with pre-call announcements of late, at least for very narrow schemes, but this really isn’t good enough.

I also have a recollection (backed up by a quick search through old emails, but not by documentary evidence) that last year the ESRC were talking about changing the scheme for this year, possibly with multiple deadlines or even going open call.  Surely, I remember thinking, this start-of-year madness can only be a one-off.

Apparently not.

Is there a danger that research funding calls are getting too narrow?

The ESRC have recently added a little more detail to a previous announcement about a pending call for European-Chinese joint research projects on Green Economy and Population Change.  Specifically, they’re after projects which address the following themes:

Green Economy

  • The ‘greenness and dynamics of economies’
  • Institutions, policies and planning for a green economy
  • The green economy in cities and metropolitan areas
  • Consumer behaviour and lifestyles in a green economy

Understanding Population Change

  • Changing life course
  • Urbanisation and migration
  • Labour markets and social security dynamics
  • Methodology, modelling and forecasting
  • Care provision
  • Comparative policy learning

Projects will need to involve institutions from at least two of the participating European countries (UK, France (involvement TBC), Germany, Netherlands) and two institutions in China. On top of this there’s an expectation of sustainability/capacity building around the research collaborations, plus the usual bonus points for involving stakeholders and for interdisciplinary research.

Before I start being negative, or potentially negative, I have one blatant plug and some positive things to say. The blatant plug is that the University of Nottingham has a campus in Ningbo in China which is eligible for NSFC funding and therefore would presumably count as one Chinese partner. I wouldn’t claim to know all about all aspects of our Ningbo research expertise, but I know people who do.  Please feel free to contact me with ideas/research agendas and I’ll see if I can put you in touch with people who know people.

The positive things.  The topics seem to me to be important, and we’ve been given advance notice of the call and a fair amount of time to put something together.  There’s a reference to Open Research Area procedures and mechanisms, which refers to agreements between the UK, France, Netherlands and Germany on a common decision making process for joint projects in which each partner is funded by their national funder under their own national funding rules.  This is excellent, as it doesn’t require anyone to become an expert in another country’s national funder’s rules, and doesn’t have the double or treble jeopardy problem of previous calls where decisions were taken by individual funders.  It’s also good that national funders are working together on common challenges – this adds fresh insight, invites interesting comparative work and pools intellectual and financial resources.

However, what concerns me about calls like this is that the area at the centre of the particular Venn diagram of this call is really quite small.  It’s open to researchers with research interests in the right areas, with collaborators in the right European countries, with collaborators in China.   That’s two – arguably three – circles in the diagram.  Of course, there’s a fourth – proposals that are outstanding.  Will there be enough strong competition on the hallowed ground at the centre of all these circles? It’s hard to say, as we don’t know yet how much money is available.

I’m all for calls that encourage, incentivise, and facilitate international research.  I’m in favour of calls on specific topics which are under-researched, which are judged of particular national or international importance, or where co-funding from partners can be found to address areas of common interest.

But I’m less sure about having both in one call – very specific requirements both in terms of the nationality of the partner institutions and in terms of the call themes. Probably the scope of this call is wide enough – presumably the funders think so – but I can’t help thinking that less onerous eligibility requirements in terms of partners could lead to greater numbers of high quality applications.

Demand mismanagement: a practical guide

I’ve written an article on Demand (Mis)management for Research Professional. While most of the site’s content is behind a paywall, they’ve been kind enough to make my article open access.  Which saves me the trouble of cutting and pasting it here.

Universities are striving to make their grant applications as high in quality as possible, avoid wasting time and energy, and run a supportive yet critical internal review process. Here are a few tips on how not to do it. [read the full article]

In other news, I was at the ARMA conference earlier this week and co-presented a session on Research Development for the Special Interest Group with Dr Jon Hunt from the University of Bath.  A copy of the presentation and some further thoughts will follow once I’ve caught up with my email backlog….

ESRC success rates by discipline: what on earth is going on?

Update – read this post for the 2012/13 stats for success rates by discipline

The ESRC have recently published a set of ‘vital statistics‘ which are “a detailed breakdown of research funding for the 2011/12 financial year” (see page 22).  While differences in success rates between academic disciplines are nothing new, this year’s figures show some really quite dramatic disparities which – in my view at least – require an explanation and action.

The overall success rate was 14% (779 applications, 108 funded) for the last tranche of responsive mode Small Grants and responsive mode Standard Grants (now Research Grants).  However, Business and Management researchers submitted 68 applications, of which 1 was funded.  One.  One single funded application.  In the whole year.  For the whole discipline.  Education fared little better with 2 successes out of 62.

Just pause for a moment to let that sink in.  Business and Management.  1 of 68.  Education.  2 of 62.

Others did worse still.  Nothing for Demographics (4 applications), Environmental Planning (8), Science and Technology Studies (4), Social Stats, Computing, Methods (11), and Social Work (10).  However, with a 14% success rate working out at about 1 in 7, low volumes of applications may explain this.  It’s rather harder to explain a total of 3 applications funded from 130.

Next least successful were ‘no lead discipline’ (4 of 43) and Human Geography (3 from 32).  No other subjects had success rates in single figures.  At the top end were Socio-Legal Studies (a stonking 39%, 7 of 18) and Social Anthropology (28%, 5 from 18), with Linguistics, Economics, and Economic and Social History also having hit rates over 20%.  Special mention for Psychology (185 applications, 30 funded, 16% success rate), which scored the highest number of funded projects – almost as many as Sociology and Economics (the second and third most funded) combined.

Is this year unusual, or is there a worrying and peculiar trend developing?  Well, you can judge for yourself from this table on page 49 of last year’s annual report, which has success rates going back to the heady days of 06/07.  Three caveats, though, before you go haring off to see your own discipline’s stats.  One is that the reports refer to financial years, not academic years, which may (but probably doesn’t) make a difference.  The second is that the figures refer to Small and Standard Grants only (not Future Leaders/First Grants, Seminar Series, or specific targeted calls).  The third is that funded projects are categorised by lead discipline only, so the figures may not tell the full story as regards involvement in interdisciplinary research.

You can pick out your own highlights, but it looks to me as if this year is only a more extreme version of trends that have been going on for a while.  Last year’s Education success rate?  5%.  The years before?  8% and 14%.  Business and Management?  A heady 11%, compared to 10% and 7% for the preceding years. And you’ve got to go all the way back to 09/10 to find the last time any projects were funded in Demography, Environmental Planning, or Social Work.  And Psychology has always been the most funded, and has always got about twice as many projects as the second and third subjects, albeit from a proportionately large number of applications.

When I have more time I’ll try to pull all the figures together in a single spreadsheet, but at first glance many of the trends seem similar.

So what’s going on here?  Well, there are a number of possibilities.  One is that our Socio-Legal Studies research in this country is tip top, and our B&M and Education research is comparatively very weak.  Certainly I’ve heard it said that B&M research tends to suffer from poor research methodologies.  Another possibility is that some academic disciplines are very collegiate and supportive in nature, and scratch each other’s backs when it comes to funding, while other disciplines are more back-stabby than back-scratchy.

But are any or all of these possibilities sufficient to explain the difference in funding rates?  I really don’t think so.  So what’s going on?  Unconscious bias?  Snobbery?  Institutional bias?  Politics?  Hidden agendas?  All of the above?  Anyone know?

More pertinently, what do we do about it?  Personally, I’d like to see the appropriate disciplinary bodies putting a bit of pressure on the ESRC for some answers, some assurances, and the production of some kind of plan for addressing the imbalance.  While no-one would expect to see equal success rates for every subject, this year’s figures – in my view – are very troubling.

And something needs to be done about it, whether that’s a re-thinking of priorities, putting the knives away, addressing real disciplinary weaknesses where they exist, ring-fenced funding, or some combination of all of the above.  Over to greater minds than mine…..

News from the ESRC: International co-investigators and the Future Leaders Scheme

"They don't come over here, they take our co-investigator jobs..."

I’m still behind on my blogging – I owe the internet the second part of the impact series, and a book review I really must get round to writing.  But I picked up an interesting nugget of information regarding the ESRC and international co-investigators that’s worthy of sharing and commenting upon.

ESRC communications send round an occasional email entitled ‘All the latest from the ESRC’, which is well worth subscribing to and reading very carefully, as quite big announcements and changes are often smuggled out in the small print.  In the latest version, for example, the headline news is the Annual Report (2011-12), while the announcement of the ESRC Future Leaders call for 2012 is only the fifth item down a list of funding opportunities.  To be fair, it was also announced on Twitter and perhaps elsewhere too, and perhaps the email has a wider audience than people like me.  But even so, it’s all a bit low key.

I’ve not got much to add to what I said last year about the Future Leaders Scheme, other than to note with interest the lack of an outline stage this year, and the decision to ring-fence some of the funding for very early career researchers – current doctoral students and those who have just passed their PhD.  Perhaps the ESRC are now more confident in institutions’ ability to regulate their own submission behaviour, and I can see this scheme being a real test of that.  I know at the University of Nottingham we’re taking all this very seriously indeed – grant writing is now neither a sprint nor a marathon but more like a steeplechase – and my impression from the ARMA conference is that we’re far from alone in this.  Balancing ‘demand management’ with a desire to encourage applications is a topic for another blog post.  As is the effect of all these calls with early Autumn deadlines – I’d argue it’s much harder to demand manage over the summer months when applicants, reviewers, and research managers are likely to be away on holiday and/or researching.

Something else mentioned in the ESRC email is a light touch review of the ESRC’s international co-investigator policy.  One of the findings was that

“…grant applications with international co-investigators are nearly twice as likely to be successful in responsive mode competitions as those without, strengthening the argument that international cooperation delivers better research.”

This is very interesting indeed.  My first reaction is to wonder whether all of that greater success can be explained by higher quality, or whether the extra value for money offered has made a difference.  Outside of the various international co-operation/bilateral schemes, the ESRC would generally expect only to pay directly incurred research costs for ICo-Is, such as travel, subsistence, transcription, and research assistance.  It won’t normally pay for investigator time and will never pay overheads, which represents a substantial saving compared with naming a UK-based Co-I.

While the added value for money argument will generally go in favour of the application, there are circumstances where it might make it technically ineligible.  When the ESRC abolished the small grants scheme and introduced the floor of £200k as the minimum to be applied for through the research grants scheme, the figure of £200k was considered to represent the minimum scale/scope/ambition that they were prepared to entertain.  But a project with a UK Co-I may sneak in just over £200k and be eligible, yet an identical project with an ICo-I would not be eligible, as it would not have salary costs or overheads to bump up the cost.  I did raise this with the ESRC a while back when I was supporting an application that would have been ineligible under the new rules, but we managed to submit it before the final deadline for Small Grants.  The issue did not arise for us then, but I’m sure it will arise (and probably already has) for others.
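To make the quirk concrete, here’s a toy comparison. The £200k floor is real, but every other figure below is invented purely for illustration:

```python
# Toy illustration of the eligibility quirk described above.  The £200k
# floor is real; all project figures are made up.
ESRC_MINIMUM = 200_000  # £ floor for the ESRC Research Grants scheme

base_costs = 185_000       # PI time, fieldwork, transcription, etc. (assumed)
uk_coi_salary = 15_000     # UK Co-I time (assumed)
uk_coi_overheads = 12_000  # indirect/estates costs on that time (assumed)
icoi_expenses = 5_000      # ICo-I travel and subsistence only (assumed)

uk_version = base_costs + uk_coi_salary + uk_coi_overheads  # £212,000
icoi_version = base_costs + icoi_expenses                   # £190,000

for name, cost in [("UK Co-I", uk_version), ("ICo-I", icoi_version)]:
    status = "eligible" if cost >= ESRC_MINIMUM else "ineligible"
    print(f"{name} version: £{cost:,} -> {status}")
```

The same project, costed two ways: the UK Co-I version clears the floor on the strength of salary and overheads alone, while the ICo-I version – identical in scale and ambition – falls below it.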

The ESRC has clarified the circumstances under which they will pay overseas co-investigator salary costs:

“….only in circumstances where payment of salaries is absolutely required for the research project to be conducted. For example, where the policy of the International Co-Investigator’s home institution requires researchers to obtain funding for their salaries for time spent on externally-funded research projects.

In instances where the research funding structure of the collaborating country is such that national research funding organisations equivalent to the ESRC do not normally provide salary costs, these costs will not be considered. Alternative arrangements to secure researcher time, such as teaching replacement costs, will be considered where these are required by the co-investigator’s home institution.”

This all seems fairly sensible, and would allow the participation of researchers involved in Institutes where they’re expected to bring in their own salary, and those where there isn’t a substantial research time allocation that could be straightforwardly used for the project.

While it would clearly be inadvisable to add on an ICo-I in the hope of boosting chances of success or for value for money alone, it’s good to know that applications with ICo-Is are doing well with the ESRC even outside of the formal collaborative schemes, and that we shouldn’t shy away from looking abroad for the very best people to work with.   Few would argue with the ESRC’s contention that

[m]any major issues requiring research evidence (eg the global economic crisis, climate change, security etc.) are international in scope, and therefore must be addressed with a global research response.

An Impact Statement: Part 1: Impact and the REF

If your research leads directly or indirectly to this, we'll be having words.....

Partly inspired by a twitter conversation and partly to try to bring some semblance of order to my own thoughts, I’m going to have a go at writing about impact.  Roughly, I’d argue that:

  • The impact agenda is – broadly – a good thing
  • Although there are areas of uncertainty and plenty of scope for collective learning, I think the whole area is much less opaque than many commentators seem to think
  • While the Research Councils and the REF have a common definition of ‘impact’, they’re looking at it from different ends of the telescope.

This post will come in three parts.  In part one, I’ll try to sketch a bit of background and say something about the position of impact in the REF.  In part two, I’ll turn to the Research Councils and think about how ‘impact’ differs from previous different – but related – agendas.  In part three, I’ll pose some questions that are puzzling me about impact and test my thinking with examples.

Why Impact?

What’s going on?  Where’s it come from?  What’s driving it?  I’d argue that to understand the impact agenda properly, it’s important to first understand the motivations.  Broadly speaking, I think there are two.

Firstly, I think it arises from a worry about a gap between academic research and those who might find it useful in some way.  How many valuable insights of various kinds from various disciplines have never got further than an academic journal or conference?  While some academics have always considered providing policy advice or writing for practitioner journals a key part of their role, I’m sure that’s not universally true.  I can imagine some of these researchers now complaining, like music obsessives, that they were into impact before anyone else, and that it sold out and went all mainstream.  As I’ve argued previously, one advantage of the impact agenda is that it gives engaged academics some long overdue recognition, as well as a much greater incentive for others to become involved in impact-related activities.

Secondly, I think it’s about finding concrete, credible, and communicable evidence of the importance and value of academic research.  If we want to keep research funding at current levels, there’s a need to show return on investment and that the taxpayer is getting value for money.  Some will cringe at the reduction of the importance and value of research to such crude and instrumentalist terms, but we live in a crude and instrumentalist age.  There is an overwhelming case for the social and economic benefits of research, and that case must be made.  Whether we like it or not, no government of any likely hue is just going to keep signing the cheques.  The champions of research in policy circles do not intend to go naked into the conference chamber when they fight our corner.  To what extent the impact agenda comes directly from government, or whether it’s a pre-emptive move, I’m not quite sure.  But the effect is pretty much the same.

What’s Impact in the REF?

The REF definition of impact is as follows:

140. For the purposes of the REF, impact is defined as an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia (as set out in paragraph 143).
141. Impact includes, but is not limited to, an effect on, change or benefit to:
• the activity, attitude, awareness, behaviour, capacity, opportunity, performance, policy, practice, process or understanding
• of an audience, beneficiary, community, constituency, organisation or individuals
• in any geographic location whether locally, regionally, nationally or internationally.
142. Impact includes the reduction or prevention of harm, risk, cost or other negative effects.

Assessment Framework and Guidance on Submissions, page 26.

Paragraph 143 goes on to rule out academic impact on the grounds that it’s assessed in the outputs and environment sections.  Fair enough.  More controversially, it goes on to state that “impacts on students, teaching, and other activities within the submitting HEI are excluded”.  But it’s possible to understand the reasoning.  If it were included, there’s a danger that far too many impact case studies would be about how research affects teaching – and while that’s important, I don’t think we’d want it to dominate.  There’s also an argument that the link between research and teaching ought to be so obvious that there’s no need to measure it for particular reward.  In practical terms, I think it would be hard to measure.  I might know how my new theory has changed how I teach my module on (say) organisational behaviour to undergraduates, but it would be hard to track that change across all UK business schools.  I’d also worry about the possible perverse incentives on the shape of the curriculum that allowing impact on teaching might create.

The Main Panel C (the panel for most social sciences) criteria state that:

The main panel acknowledges that impact within its remit may take many forms and occur in a wide range of spheres. These may include (but are not restricted to): creativity, culture and society; the economy, commerce or organisations; the environment; health and welfare; practitioners and professional services; public policy, law and services. The categories used to define spheres of impact, for the purpose of this document, inevitably overlap and should not be taken as restrictive. Case studies may describe impacts which have affected more than one sphere. (para 77, pg. 68)

There’s actually a lot of detail and some good illustrations of what forms impact might take, and I’d recommend having a read.  I wonder how many academics not directly involved in REF preparations have read this?  One difficulty is finding it – it’s not the easiest document to track down.  For my non-social science reader(s), the other panel working methods can be found here.  Helpfully, nothing on that page will tell you which panel is which, but (roughly) Panel A is health and life sciences; B is natural sciences, computers, maths and engineering; C is social science; and D humanities.  Each panel criteria document has a table with examples of impact.

What else do we know about the place of impact in the REF?  Well, we know that impact has to have occurred in the REF period (1 January 2008 to 31 July 2013) and that impact has to be underpinned by excellent research (at least 2*) produced at the submitting university at some point between 1 January 1993 and 31 December 2013.  It doesn’t matter if the researchers producing the research are still at the institution – while publications move with the author, impact stays with the institution.  However, I can’t help wondering if an excessive reliance on research undertaken by departed staff won’t look too much like trading on past glories.  But probably it’s about getting the balance right.  The number of case studies required is approximately 1 per 8 FTE submitted, but see page 28 of the guidance document for a table.

Impact will have a weighting of 20%, with environment 15% and outputs (publications) 65%, and it looks likely that the weighting of impact will increase next time.  However, I wouldn’t be at all surprised if its actual contribution ends up being less than that.  If there’s a general trend for overall scores for impact to be lower than those for (say) publications, then the de facto contribution will end up being less than 20%.  My understanding is that for some units of assessment, environment was consistently rated more highly, thus de facto increasing its weighting.  Unfortunately this is just a recollection of something I read years ago, and which I can’t now find.  But if it’s right, and if impact does come in with lower marks overall, we neglect environment at our peril.
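To see why, here’s a toy calculation – the 65/15/20 weightings are the real ones, but the profile scores are invented:

```python
# Toy REF profile: real weightings, invented scores on the 0-4* scale.
weights = {"outputs": 0.65, "environment": 0.15, "impact": 0.20}
scores = {"outputs": 3.0, "environment": 3.0, "impact": 2.0}  # impact marked lower

total = sum(weights[k] * scores[k] for k in weights)         # 2.80 grade points
impact_share = weights["impact"] * scores["impact"] / total  # ~0.143

print(f"Overall grade point average: {total:.2f}")
print(f"Impact's de facto share: {impact_share:.1%}")  # ~14.3%, not 20%
```

If impact is marked more harshly than outputs across the board, its effective contribution to the overall profile drops below the nominal 20% weighting – here, to around 14%.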

Are institutions over-reacting to impact?

Interesting article and leader in this week’s Times Higher on the topic of impact, both of which carry arguments that “university managers” have over-reacted to the impact agenda.  I’m not sure whether that’s true or not, but I suspect that it’s all a bit more complicated than either article makes it appear.

The article quotes James Ladyman, Professor of Philosophy at the University of Bristol, as saying that university managers had overreacted and created “an incentive structure and environment in which an ordinary academic who works on a relatively obscure area of research feels that what they are doing isn’t valued”.

If that’s happened anywhere, then obviously things have gone wrong.  However, I do think that this needs to be understood in the context of other groups and sub-groups of academics who likewise feel – or have felt – undervalued.  I can well understand why academics whose research does not lend itself to impact activities would feel alienated and threatened by the impact agenda, especially if it is wrongly presented (or perceived) as a compulsory activity for everyone – regardless of their area of research, skills, and comfort zone – and (wrongly) as a prerequisite for funding.

Another group of researchers who felt – and perhaps still feel – undervalued are those undertaking very applied research.  It’s very hard for them to get their stuff into highly rated (aka valued) journals.  Historically the RAE has not been kind to them.  University promotion criteria perhaps failed to sufficiently recognise public engagement and impact activity – and perhaps still do.  While all the plaudits go to their highly theoretical colleagues, the applied researchers feel looked down upon, and struggle to get academic recognition.  If we were to ask academics whose roles are mainly teaching (or teaching and admin) rather than research, I think we might find that they feel undervalued by a system which many of them feel is obsessed by research and sets little store by excellent (rather than merely adequate) teaching.  Doubtless increased fees will change this, and perhaps we will then hear complaints of the subsequent under-valuing of research relative to teaching.

So if academics working in non-impact-friendly (NIFs, from now on) areas of research are now feeling undervalued, they’re very far from alone.  It’s true that the impact agenda has brought about changes to how we do things, but I think it could be argued that it’s not that the NIFs are now undervalued, but that other kinds of research and academic endeavour – namely applied research and impact activities (ARIA from now on) – are now being valued to a greater degree than before.  Dare I say it, to an appropriate degree?  The problem is that ‘value’ and ‘valuing’ tend to be seen as a zero sum game – if I decide to place greater emphasis on apples, the oranges may feel that they have lost fruit bowl status and are no longer the, er, top banana.  Even if I love oranges just as much as before.

Exactly how institutions ‘value’ (whatever we mean by that) NIF research and ARIA is an interesting question.  It seems clear to me that an institution/school/manager/grant-giving body/REF/whatever could err in either direction, undervaluing and under-rewarding one or the other.  We need both.  And we need excellent teachers.  And – dare I say it – non-academic staff too.  Perhaps the challenge for institutions is getting the balance right and making everyone feel valued, and reflecting different academic activities fairly in recruitment and selection processes and promotion criteria.  Not easy, when any increased emphasis on one area seems to cause others to feel threatened.

Responding to Referees

Preliminary evidence appears to show that this approach to responding to referees is - on balance - probably sub-optimal. (Photo by Tseen Khoo)

This post is co-authored by Adam Golberg of Cash for Questions (UK), and Jonathan O’Donnell and Tseen Khoo of The Research Whisperer (Australia).

It arises out of a comment that Jonathan made about understanding and responding to referees on one of Adam’s posts about what to do if your grant application is unsuccessful. This seemed like a good topic for an article of its own, so here it is, cross-posted to our respective blogs.

A quick opening note on terminology: We use ‘referee’ or ‘assessor’ to refer to academics who read and review research grant applications, then feed their comments into the final decision-making process. Terminology varies a bit between funders, and between the UK and Australia. We’re not talking about journal referees, although some of the advice that follows may also apply there.

————————————-

There are funding schemes that offer applicants the opportunity to respond to referees’ comments. These responses are then considered alongside the assessors’ scores/comments by the funding panel. Some funders (including the Economic and Social Research Council [ESRC] in the UK) have a filtering process before this point, so if you are being asked to respond to referees’ comments, you should consider it a positive sign as not all applications get this far. Others, such as the Australian Research Council (ARC), offer you the chance to write a rejoinder regardless of the level of referees’ reports.

If the funding body offers you the option of a response, you should consider your response as one of the most important parts of the application process.  A good response can draw the sting from criticisms, emphasise the positive comments, and enhance your chances of getting funding.  A bad one can doom your application.

And if you submit no response at all? That can signal negative things about your project and research team that might live on beyond this grant round.

The first thing you might need to do when you get the referees’ comments about your grant application is kick the (imaginary) cat.* This is an important process. Embrace it.

When that’s out of your system, here are four strategies for putting together a persuasive response and pulling that slaved-over application across the funding finish line.

1. Attitude and tone

Be nice.  Start with a brief statement thanking the anonymous referees for their careful and insightful comments, even if actually you suspect some of them are idiots who haven’t read your masterpiece properly. Think carefully about the tone of the rest of the response as well.  You’re aiming for calm, measured, and appropriately assertive.  There’s nothing wrong with saying that a referee is just plain wrong on a particular point, but do it calmly and politely.  If you’re unhappy about a criticism or reviewer, there’s a good chance that it will take several drafts before you eliminate all the spikiness from the text.  If it makes you feel better (and it might), you can write what you really think in the tone that you think it in but, whatever you do, don’t send that version! This is the version that may spontaneously combust from the deadly mixture of vitriol and pleading contained within.

Preparing a response is not about comprehensively refuting every criticism, or establishing intellectual superiority over the referees. You need to sift the comments to identify the ones that really matter. What are the criticisms (or backhanded compliments) that will harm your cause? Highlight those and answer them methodically (see below). Petty argy-bargy isn’t worth spending your time on.

2. Understanding and interpreting referees’ comments

One UK funder provides referee report templates that invite the referees to state their level of familiarity with the topic and even a little about their research background, so that the final decision-making panel can put their comments into context. This is a great idea, and we would encourage other funding agencies to embrace it.

Beyond this volunteered information (if provided), never assume you know who the referee is, or that you can infer anything else about them because you could be going way off-base with your rant against econometricians who don’t ‘get’ sociological work. If there’s one thing worse than an ad hominem response, it’s an ad hominem response aimed at the wrong target!

One exercise that you might find useful is to produce a matrix listing all of the criticisms, and indicating the referee(s) who made those objections. As these reports are produced independently, the more referees make a particular point, the more problematic it might be.  This tabled information can be sorted by section (e.g. methodology, impact/dissemination plan, alternative approaches). You can then repeat the exercise with the positive comments that were made. While assimilating and processing information is a task that academics tend to be good at, it’s worth being systematic about this because it’s easy to overlook praise or attach too much weight to objections that are the most irritating.

Also, look out for, and highlight, any requests that you do a different project. Sometimes, these can be as obvious as “you should be doing Y instead”, where Y is a rather different project and probably closer to the reviewer’s own interests. These can be quite difficult criticisms to deal with, as what they are proposing may be sensible enough, but not what you want to do.  In such cases, stick to your guns, be clear what you want to do, and why it’s of at least as much value as the alternative proposal.

Using the matrix that you have prepared, consider further how damaging each criticism might be in the minds of the decision makers.  Take the weight of opinion on each point (positive remarks on that point minus criticisms of it) and multiply it by the potential damage; the most negative scores mark out the most serious criticisms.
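If you like to be really systematic about it, here’s a minimal sketch of that scoring exercise – the comments, counts, and damage ratings are all invented:

```python
# A minimal sketch of the matrix exercise, with invented comments and weights.
# "damage" is your own judgement of how harmful a point could be in the
# decision-makers' minds (1 = quibble, 5 = fatal).
criticisms = [
    # (point, referees raising it, positive remarks on it, damage)
    ("Sample size may be too small",           2, 0, 4),
    ("Dissemination plan is thin",             1, 1, 2),
    ("Hotel costs look generous",              1, 0, 1),
    ("Unclear contribution beyond prior work", 1, 0, 5),
]

def weight_of_opinion(raised, positives, damage):
    # Positive remarks minus criticisms, multiplied by potential damage:
    # the more negative the result, the more serious the criticism.
    return (positives - raised) * damage

# Most serious (most negative) first.
for point, raised, positives, damage in sorted(
        criticisms, key=lambda c: weight_of_opinion(*c[1:])):
    print(f"{weight_of_opinion(raised, positives, damage):>3}  {point}")
```

A spreadsheet does the job just as well, of course; the point is simply to be systematic rather than letting the most irritating comment dominate your attention.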

Preparing a response is not a task to be attempted in isolation. You should involve other members of your team, and make full use of your research support office and senior colleagues (who are not directly involved in the application). Take advantage of assistance in interpreting the referees’ comments, and reviewing multiple drafts of your response.

Don’t read the assessor reports by themselves; you should also go back to your whole application, several times if necessary. It has probably been some time since you submitted the application, and new eyes and a bit of distance will help you to see it as the referees may have seen it. You may be able to pinpoint the reasons for particular criticisms, or spot where the referees have misunderstood you. While their criticisms may not be valid for the application you thought you wrote, they may very well be so for the one that you actually submitted.

3. The response

You should plan to use the available space in line with the exercise above, setting aside space for each criticism in proportion to its risk of stopping you getting funded.
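Continuing the sketch above with made-up numbers, one rough way to divide a hypothetical word limit in proportion to seriousness:

```python
# Split a hypothetical 1,000-word response budget across criticisms in
# proportion to how serious each one is (scores carried over from the
# matrix sketch, sign flipped so that bigger = more serious).
WORD_LIMIT = 1000

severity = {"Sample size": 8, "Contribution": 5, "Hotel costs": 1}
total = sum(severity.values())

for point, s in severity.items():
    print(f"{point}: ~{round(WORD_LIMIT * s / total)} words")
```

Treat the output as a starting point rather than a rule; the aim is just to stop a one-point quibble eating space that the make-or-break objections need.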

Quibbles about your budgeted expenditure for hotel accommodation are insignificant compared to objections that question your entire approach, devalue your track-record, invalidate your methodology, or claim that you’re adding little that’s new to the sum of human knowledge. So, your response should:

  • Make it easy for the decision-makers: Be clear and concise.
  • Be specific when rebutting, and cite from the application. For example: “As we stated on page 24, paragraph 3…”. However, don’t lose sight of the need to create a document that can be understood in isolation as far as possible.
  • If possible and appropriate, introduce something that you’ve done in the time since submission to rebut a negative comment (be careful, though, as some schemes may not allow the introduction of new material).
  • Acknowledge any misunderstandings that arise from the application’s explanatory shortcomings or limitations of space, and offer clarifications.
  • Be grateful for the positive comments, but focus on rebutting the negative comments.

4. Be the reviewer

The best way to really get an idea of what the response dynamic is all about in these funding rounds is to become a grant referee yourself. Once you’ve assessed a few applications and cut your teeth on a whole funding round (they can often be year-long processes), you quickly learn about the demands of the job and how regular referees ‘value’ applications.

Look out for chances to be on grant assessment panels, and say yes to invitations to review for various professional bodies or government agencies. Almost all funding schemes could do with a larger and more diverse pool of academics to act as their ‘gate-keepers’.

Finally: remember to keep your eyes on the prize. The purpose of this response exercise is to give your project the best possible chance of getting funded. It is an inherent part of many funding rounds these days, not merely an afterthought to your application.

* The writers and their respective organisations do not, in any way, endorse the mistreatment of animals. We love cats.  We don’t kick them, and neither should you. It’s just an expression. For those who’ve never met it, it means ‘to vent your frustration and powerlessness’.

I’ve disabled comments on this entry so that we can keep conversations on this article to one place – please head over to the Research Whisperer if you’d like to comment. (AG).

Coping with rejection: What to do if your grant application is unsuccessful. Part 2: Next Steps

Look, I know I said that not getting funded doesn't mean they disliked your proposal, but I need a picture and it's either this or a picture of Simon Cowell with his thumb down. Think on.

In the first part of this series, I argued that it’s important not to misunderstand or misinterpret the reasons for a grant application being unsuccessful.  In the comments, Jo VanEvery shared a phrase that she’s heard from a senior figure at one of the Canadian Research Councils – that research funding “is not a test, it’s a contest”.  Not getting funded doesn’t necessarily mean that your research isn’t considered to be of high quality.  This second entry is about what steps to consider next.

1.  Some words of wisdom

‘Tis a lesson you should heed:  Try, try, try again.
If at first you don’t succeed, Try, try, try again
William Edward Hickson (1803-1870)

The definition of insanity is doing the same thing over and over but expecting different results
Ben Franklin, Albert Einstein, or Narcotics Anonymous

I like these quotes because they’re both correct in their own way.  There’s value to Hickson’s exhortation.  Success rates are low for most schemes and most funders, so even if you’ve done everything right, the chances are against you.  To be successful, you need a degree of resilience to look for another funder or a new project, rather than embarking on a decade-long sulk, muttering plaintively about how “the ESRC doesn’t like” your research whenever the topic of external funding is raised.

However, Franklin et al (or al?) also have a point about repeating the same mistakes without learning anything as you drift from application to application.  While doing this, you can convince yourself that research funding is a lottery (which it isn’t) and that all you have to do is submit enough applications and eventually your number will come up (which it won’t).  This is the kind of approach (on the part of institutions as well as individuals) that’s pushed us close to ‘demand management’ measures with the ESRC.  More on learning from the experience in a moment or two.

2.  Can you do the research anyway?

This might seem like an odd question to ask, but it’s always the first one I ask academic colleagues who’ve been unsuccessful with a grant application (yes, this does happen, even at Nottingham University Business School).  The main component of most research projects is staff time.  And if you’re fortunate enough to be employed by a research-intensive institution which gives you a generous research time allocation, then this shouldn’t be a problem.  Granted, you can’t have that full-time research associate you wanted, but could you cut down the project and take on some or all of that work yourself, or share it between the investigators?  Could you involve more people – perhaps junior colleagues – to help cover the work? Would others be willing to be involved if they could either co-author or be sole author on some of the outputs?  Could it be a PhD project?

Directly incurred research expenses are more of a problem – transcription costs, data costs, travel and expenses – especially if you and your co-investigators don’t have personal research accounts to dip into.  But if it turns out that all you need is your expenses paying, then a number of other funding options become viable – some external, but perhaps also some internal.

Of course, doing it anyway isn’t always possible, but it’s worth asking yourself and your team that question.  It’s also one that’s well worth asking before you decide to apply for funding.

3.  What can you learn for next time?

It’s not nice not getting your project funded.  Part of you probably wants to lock that application away and not think about it again – to move onwards and upwards, and perhaps try again with another research idea.  While resilience is important, it’s just as important to learn whatever lessons there are to learn, to give yourself the best possible chance next time.

One lesson you might be able to take from the experience is about planning the application.  If you found yourself running out of time, not getting sufficient input from senior colleagues, or not taking full advantage of the support available within your institution, well, that’s a lesson to learn.  Give yourself more time, start well before the deadline, and don’t make yourself rush it.  If you did all this last time, remember that you did, and the difference that it made.  If you didn’t, then the fact is that your application was almost certainly not as strong as it could have been.  And if your application document is not the strongest possible iteration of your research idea, your chances of getting funded are pretty minimal.

I’d recommend reading through your application and the call guidance notes once again in the light of referees’ comments.  Now that you have sufficient distance from the application, you should ‘referee’ it yourself as well.  What would you do better next time?  Not necessarily individual application-specific aspects, but more general points.  Did your application address the priorities of the call specifically enough, or were the crowbar marks far too visible?  Did you get the balance right between exposition and background and writing about the current project?  Did you pay enough attention to each section?  Did you actually answer the questions asked?  Do you understand any criticisms that the referees had?

4. Can you reapply?  Should you reapply?

If it’s the ESRC you’re thinking about, then the answer’s no unless you’re invited.  I think we’re still waiting on guidance from the ESRC about what constitutes a resubmission, but if you find yourself thinking about how much you might need to tinker with your unsuccessful project to make it a fresh submission, then the chances are that you’ll be barking up the wrong tree.  Worst case scenario is that it’s thrown straight out without review, and best case is probably that you end up with something a little too contrived to stand any serious chance of funding.

Some other research funders do allow resubmissions, but generally you will need to declare it.  While you might get lucky with a straight resubmission, my sense is that if it was unsuccessful once it will be unsuccessful again. But if you were to thoroughly revise it, polish it, take advice from anyone willing to give it, and have one more go, well, who knows?

But there’s really no shame in walking away.  Onwards and upwards to the next idea.  Let this one go for now, and work on something new and fresh and exciting instead.  Just remember everything that you learnt along the way.  One former colleague told me that he usually got at least one paper out of an application even if it was unsuccessful.  I don’t know how true that might be more generally, but you’ve obviously done a literature review and come up with some ideas for future research.  Might there be a paper in all that somewhere?

Another option, which I hinted at earlier when I mentioned seeking funding for the directly incurred costs only, is resubmitting to another funder.  My advice on this is simple…. don’t resubmit to another funder.  Or at least, don’t treat it like a resubmission.  Every research funder, every scheme, has different interests and priorities.  You wrote an application for one funder, which presumably was tailored to that funder (it was, wasn’t it?).  So a few alterations probably won’t be enough.

For one thing, the application form is almost certainly different, and that eight-page monstrosity won’t fit into two pages.  But cut it down crudely, and if it reads like it’s been cut down crudely, you have no chance.  I’ve never worked for a research funding body (unless you count internal schemes where I’ve had a role in managing the process), but I would imagine that if I did, the best way to annoy me (other than using the word ‘impactful‘) would be to send me some other funder’s cast-offs.  It’s not quite like romancing a potential new partner and using your old flame’s name by mistake, but you get the picture.  Your new funder wants to feel special and loved.  They want you to have picked them out – and them alone – for their unique and enlightened approach to funding.  Only they can fill the hole in your heart (read: wallet), and satisfy your deep yearning for fulfilment.

And where should you look if your first choice funder does not return your affections?  Well, I’m not going to tell you (not without a consultancy fee, anyway).  But I’m sure your research funding office will be able to help find you some new prospective partners.

 

Leverhulme Trust to support British Academy Small Research Grant scheme

BA staff examine the Leverhulme memorandum of understanding

The British Academy announced yesterday that it has reached a new collaborative agreement with the Leverhulme Trust on funding for its Small Grants Scheme.  This is very good news for researchers in the humanities and the social sciences, and I’m interrupting my series of gloom-and-doom posts on what to do if your application is unsuccessful to inflict my take on some really good news upon you, oh gentle reader.  And to see if I can set a personal best for the number of links in an opening sentence.  Which I can.

When I first started supporting grant-getting activity back in the halcyon days of 2005ish, the British Academy Small Grants scheme was a small and beautifully formed thing.  It funded up to £7.5k or so for projects of up to two years, and only covered research expenses – so no funding for investigator time, replacement teaching, or overheads, but it would cover travel, subsistence, transcription, data, casual research assistance and so on.  It was a light touch application on a simple form, and enjoyed a success rate of around 50%.  The criterion for funding was academic merit.  Nothing else mattered.  It funded some brilliant work, and Ken Emond of the British Academy has always spoken very warmly about the scheme, and considered it a real success story.  Gradually people started cottoning on to just how good a scheme it was, and success rates started to drop – but that’s what happens when you’re successful.

Then along came the Comprehensive Spending Review, and budgets were cut.  I presume the scheme was scrapped under government pressure, only for our heroes at the BA to eventually win the argument.  At the same time, the ESRC decided that their reviewers weren’t going to get out of bed in the morning for less than £200k.  Suddenly bigger projects were the only option, and (funded) academic research looked to be all about perpetual paradigm shifts, with only outstanding stuff that would change everything getting funded.  And there was no evidence of any thought as to how these major theoretical breakthroughs gained through massive grants might be developed and expanded and exploited and extended through smaller projects.

Although it was great to see the BA SGS scheme survive in any form, the reduced funding made it inevitable that success rates would plummet.  However, the increased funding from the Leverhulme Trust could make a difference.  According to the announcement, the Trust has promised £1.5 million funding over three years.  Let’s assume:

  • that every penny goes to supporting research, and not a penny goes on infrastructure and overheads and that it’s all additional (rather than replacement) funding
  • that £10k will remain the maximum available
  • that the average amount awarded will be £7.5k

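Under those assumptions, the back-of-envelope sum (worked through in prose below) looks like this:

```python
# Back-of-envelope sum under the assumptions listed above.
total_pledge = 1_500_000  # £1.5m over three years
years = 3
avg_award = 7_500         # assumed average award, in £

per_year = total_pledge / years       # £500,000 a year
extra_awards = per_year / avg_award   # ~66.7, call it 67 extra awards a year

print(f"£{per_year:,.0f} per year -> roughly {round(extra_awards)} extra awards")
```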
So…. £1.5m over three years is £500k per year.  £500k divided by a £7.5k average project cost gives about 67 extra projects a year.  While we don’t know how many projects will be funded in this year’s reduced scheme, we do know about last year.  According to the British Academy’s 2010/11 annual report:

For the two rounds of competition held during 2010/11 the Academy received 1,561 applications for consideration and 538 awards were made, a success rate of 34.5%. Awards were spread over the whole range of Humanities and Social Sciences, and were made to individuals based in more than 110 institutions, as well as to more than 20 independent scholars.

2010/11 was the last year that the scheme ran in full, and at the time we all thought that the spring 2011 call would be the last, so I suspect that the success rate might have been squeezed by a number of ‘now-or-never’ applications.  We won’t know until next month how many awards were made in the Autumn 2011 call, nor what the success rate was, so we won’t know until then whether the Leverhulme cash will restore the scheme to its former glory.  I suspect that it won’t, and that the combined total of the BA’s own funds and the Leverhulme contribution will add up to less than was available for the scheme before the comprehensive spending review struck.

Nevertheless, there will be about 67 more small social science and humanities projects funded a year than would otherwise have been the case.  So let’s raise a non-alcoholic beverage to the Leverhulme Trust, and to the memory of founder William Hesketh Lever and his family’s values of “liberalism, nonconformity, and abstinence”.

23rd Jan update:  In response to a question on Twitter from @Funding4Res (aka Marie-Claire from the University of Huddersfield’s Research and Enterprise team), the British Academy have said that “they’ll be rounds for Small Research Grants in the spring and autumn. Dates will be announced soon.”