Leverhulme Trust to support British Academy Small Research Grant scheme

BA staff examine the Leverhulme memorandum of understanding

The British Academy announced yesterday that it has reached a new collaborative agreement with the Leverhulme Trust about funding for its Small Grants Scheme.  This is very good news for researchers in the humanities and the social sciences, and I’m interrupting my series of gloom-and-doom posts on what to do if your application is unsuccessful to inflict my take on some really good news upon you, oh gentle reader.  And to see if I can set a personal best for the number of links in an opening sentence.  Which I can.

When I first started supporting grant-getting activity back in the halcyon days of 2005ish, the British Academy Small Grants scheme was a small and beautifully formed scheme.  It funded up to £7.5k or so for projects of up to two years, and only covered research expenses – so no funding for investigator time, replacement teaching, or overheads, but it would cover travel, subsistence, transcription, data, casual research assistance and so on.  It was a light-touch application on a simple form, and enjoyed a success rate of around 50%.  The criterion for funding was academic merit.  Nothing else mattered.  It funded some brilliant work, and Ken Emond of the British Academy has always spoken very warmly about the scheme and considered it a real success story.  Gradually people started cottoning on to just how good a scheme it was, and success rates started to drop – but that’s what happens when you’re successful.

Then along came the Comprehensive Spending Review and budgets were cut.  I presume the scheme came under pressure to be scrapped, only for our heroes at the BA to eventually win the argument to keep it.  At the same time, the ESRC decided that their reviewers weren’t going to get out of bed in the morning for less than £200k.  Suddenly bigger projects were the only option, and (funded) academic research looked to be all about perpetual paradigm shifts, with only the outstanding stuff that would change everything getting funded.  And there was no evidence of any thought as to how these major theoretical breakthroughs gained through massive grants might be developed and expanded and exploited and extended through smaller projects.

Although it was great to see the BA SGS scheme survive in any form, the reduced funding made it inevitable that success rates would plummet.  However, the additional funding from the Leverhulme Trust could make a difference.  According to the announcement, the Trust has promised £1.5 million of funding over three years.  Let’s assume:

  • that every penny goes to supporting research and not a penny on infrastructure and overheads, and that it’s all additional (rather than replacement) funding
  • that £10k will remain the maximum available
  • that the average amount awarded will be £7.5k

So…. £1.5m over three years is £500k per year.  £500k divided by an average project cost of £7.5k gives about 67 extra projects a year.  While we don’t know how many projects will be funded in this year’s reduced scheme, we do know about last year.  According to the British Academy’s 2010/11 annual report:

For the two rounds of competition held during 2010/11 the Academy received 1,561 applications for consideration and 538 awards were made, a success rate of 34.5%. Awards were spread over the whole range of Humanities and Social Sciences, and were made to individuals based in more than 110 institutions, as well as to more than 20 independent scholars.

2010/11 was the last year that the scheme ran in full, and at the time we all thought that the spring 2011 call would be the last, so I suspect that the success rate might have been squeezed by a number of ‘now-or-never’ applications.  We won’t know until next month how many awards were made in the autumn 2011 call, nor what the success rate was, so we won’t know until then whether the Leverhulme cash will restore the scheme to its former glory.  I suspect that it won’t, and that the combined total of the BA’s own funds and the Leverhulme contribution will add up to less than was available for the scheme before the comprehensive spending review struck.
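For anyone who wants to check my arithmetic, here’s the back-of-the-envelope sum as a quick Python sketch.  The figures are just the assumptions listed above (including the £7.5k average award), not anything the BA or the Leverhulme Trust have confirmed.

```python
# Back-of-the-envelope estimate of the extra awards the Leverhulme money might fund.
# All figures are assumptions taken from the bullet list above, not confirmed numbers.

leverhulme_total = 1_500_000   # £1.5m promised over three years
years = 3
average_award = 7_500          # assumed average award (maximum assumed to stay at £10k)

per_year = leverhulme_total / years        # £500,000 per year
extra_projects = per_year / average_award  # roughly 66.7

print(f"£{per_year:,.0f} per year, or about {round(extra_projects)} extra projects a year")
```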

Nevertheless, there will be about 67 more small social science and humanities projects funded than would otherwise have been the case.  So let’s raise a non-alcoholic beverage to the Leverhulme Trust, and to the memory of its founder, William Hesketh Lever, and his family’s values of “liberalism, nonconformity, and abstinence”.

23rd Jan update:  In response to a question on Twitter from @Funding4Res (aka Marie-Claire from the University of Huddersfield’s Research and Enterprise team), the British Academy have said that “they’ll be rounds for Small Research Grants in the spring and autumn. Dates will be announced soon.”

Coping with rejection: What to do if your grant application is unsuccessful. Part 1: Understand what it means…. and what it doesn’t mean

You can't have any research funding. In this life, or the next....

Some application and assessment processes are for limited goods, and some are for unlimited goods, and it’s important to understand the difference.  PhD vivas and driving tests are assessments for unlimited goods – there’s no limit on how many PhDs or driving licences can be issued.  In principle, everyone could have one if they met the requirements.  You’re not going to fail your driving test because there are better drivers than you.  Other processes are for limited goods – there is (usually) only one job vacancy that you’re all competing for, only so many papers that a top journal can accept, and only so much grant money available.

You’d think this was a fairly obvious point to make.  But talking to researchers who have been unsuccessful with a particular application, there’s sometimes more than a hint of hurt in their voices as they discuss it, and they talk in terms of their research being rejected, or not being judged good enough.  They end up taking it rather personally.  And given the amount of time and effort that researchers must put into their applications, that’s not surprising.

It reminds me of an unsuccessful job applicant whose opening gambit at a feedback meeting was to ask me why I didn’t think that she was good enough to do the job.  Well, my answer was that I was very confident that she could do the job; it was just that there was someone more qualified and only one post to fill.  In this case, the unsuccessful applicant was simply unlucky – an exceptional applicant was offered the job, and nothing she could have said or done (short of assassination) would have made much difference.  While I couldn’t give the applicant the job she wanted or make the disappointment go away, I could at least pass on the panel’s unanimous verdict on her appointability.  My impression was that this restored some lost confidence, and did something to salve the hurt and disappointment.  You did the best that you could.  With better luck you’ll get the next one.

Of course, with grant applications, the chances are that you won’t get to speak to the chair of the panel who can explain the decision.  You’ll get a letter with the decision and something about how oversubscribed the scheme was and how hard the decisions were, which might or might not be true.  Your application might have missed out by a fraction, or been one of the first into the discard pile.

Some funders, like the ESRC, will pass on anonymised referees’ comments, but oddly, this isn’t always constructive and can even damage confidence in the quality of the peer review process.  In my experience, every batch of referees’ comments will contain at least one weird, wrong-headed, careless, or downright bizarre comment, and sometimes several.  Perhaps a claim about the current state of knowledge that’s just plain wrong, a misunderstanding that can only come from not reading the application properly, or a criticism on the spurious grounds that it’s not the project the referee would have done.  These apples are fine as far as they go, but they should really taste of oranges.  I like oranges.

Don’t get me wrong – most referees’ reports that I see are careful, conscientious, and insightful, but it’s those misconceived criticisms that unsuccessful applicants will remember, even ahead of the valid ones.  And sometimes they will conclude that it’s those wrong criticisms that are the reason for not getting funded.  Everything else was positive, so that one negative review must be the reason, yes?  Well, maybe not.  It’s also possible that the bizarre comment was discounted by the panel too, and that the reason your project wasn’t funded was simply that the money ran out before the panel reached it.  But we don’t know.  I really, really, really want to believe that that’s the case when referees write that a project is “too expensive” without explaining how or why.  I hope the panel read our carefully constructed budget and our detailed justification for resources and treated that comment with the fECing contempt that it deserves.

Fortunately, the ESRC have announced changes to their procedures which will not only allow a right of reply to referees’ comments, but also communicate the final grade awarded.  This should give a much stronger indication of whether an application was a near miss or miles off.  Of course, the news that an application was miles off the required standard may come gift-wrapped with sanctions.  So it’s not all good news.

But this is where we should be heading with feedback.  Funders shouldn’t be shy about saying that the application was a no-hoper, and they should be giving as much detail as possible.  Not so long ago, I was copied into a lovely rejection letter, if there’s any such thing.  It passed on comments, included some platitudes, but also told the applicant what the overall ranking was (very close, but no cigar) and how many applications there were (many more than the team expected).  Now at least one of the comments was surprising, but we know the application was taken seriously and given a thorough review.  And that’s something….

So… in conclusion….  just because your project wasn’t funded doesn’t (necessarily) mean that it wasn’t fundable.  And don’t take it personally.  It’s not personal.  Just the business of research funding.

New Year’s wishes….

The new calendar year is traditionally a time for reflection and for resolutions, but in a fit of hubris I’ve put together a list of resolutions I’d like to see for the sector, research funders, and university culture in general.  In short, for everyone but me.  But to show willing, I’ll join in too.

No more of the following, please….

1.  “Impactful”

Just…. no.  I don’t think of myself as a linguistic purist or a grammar-fascist, though I am a pedant for professional purposes.  I recognise that language changes and evolves over time, and I welcome changes that bring new colour and new descriptive power to our language.  While I accept that the ‘impact agenda’ is here to stay for the foreseeable future, the ‘impactful’ agenda need not be.  The technical case against this monstrosity of a word is outlined at Grammarist, but surely the aesthetic case is conclusive in itself.  I warn anyone using this word in my presence that I reserve the right to tell them precisely how annoyful they’re being.

2.  The ‘Einstein fallacy’

This is a mistaken and misguided delusion that a small but significant proportion of academics appear to be suffering from.  It runs a bit like this:
1) Einstein was a genius
2) Einstein was famously absent-minded and shambolic in his personal organisation
3) Conclusion:  If I am or pretend to be absent-minded and shambolic, either:
(3a) I will be a genius; or
(3b) People will think I am a genius; or
(3c) Both.

I accept that some academics are genuinely bad at administration and organisation.  In some cases it’s a lack of practice or experience, in others a lack of confidence, and in others still it’s simply not where their interests and talents lie.  Fair enough.  But please stop being deliberately bad at it to try to impress people.  Oh, and you can only act like a prima donna if you have the singing skills to back it up…

3.  Lack of predictability in funding calls

Yes, I’m looking at you, ESRC.  Before the comprehensive spending review and all of the changes that followed from it, we had a fairly predictable annual cycle of calls, very few of which had very early autumn deadlines.  Now we’re into a new cycle which may or may not be predictable, and a lot of the calls seem to come very early in the academic year.  Sure, let’s have one-off calls on particular topics, but let’s have a predictable annual cycle for everything else, with as much advance notice as possible.  It’ll help hugely with ‘demand management’ because it’ll be much easier to postpone applications that aren’t ready if we know there will be another call.  For example, I was aware of a couple of very strong seminar series ideas which needed further work and discussion within the relevant research and research-user communities.  My advice was to start that work now, using the existence of the current call as impetus, and to submit next year.  But we’ve taken a gamble, as we don’t know whether there will be another call in the future, and you can’t tell me because apparently a decision has yet to be made.

4.  Lazy “please forward as appropriate” emails

Stuff sent to me from outside the Business School with the expectation that I’ll just send it on to everyone.  No.  Email overload is a real problem, and I write most of my emails with the expectation that I have ten seconds at most either to get the message across or to earn an attention extension.  I mean, you’re not even reading this properly, are you?  You’re probably skim-reading it in case there’s a nugget of wit amongst the whinging.  Every email I send creates work for others, and every duff, dodgy, or irrelevant email I send reduces my e-credit rating.  I know for a fact that at least some former colleagues deleted everything I sent without reading it – there’s no other explanation I can think of for missing two emails with headers including the magic words “sabbatical leave”.

So… will I be spending my e-credit telling my colleagues about your non-Business-School-related event which will be of interest to no-one?  No, no, and most assuredly no.  I will forward it “as appropriate”, if by “appropriate” you mean my deleted items folder.

Sometimes, though, a handful of people might be interested.  Or quite a lot of people might be interested, but it’s not worth an individual email.  Maybe I’ll put it on the portal, or include it in one of my occasional news and updates emails.  Maybe.

If you’d like me to do that, though, how about sending me the message in a form I can forward easily and without embarrassment?  With a meaningful subject line, and a succinct and accurate summary in the opening two sentences?  So that I don’t have to do it for you before I feel I can send it on.  There’s a lovely internet abbreviation – TL;DR – which stands for Too Long; Didn’t Read.  I think its existence tells us something.

5.  People who are lucky enough to have interesting, rewarding and enjoyable jobs with an excellent employer and talented and supportive colleagues, who always manage to find some petty irritants to complain about, rather than counting their blessings.


Outstanding researcher or Outstanding grant writer?

"It's all the game, yo....."

The Times Higher has a report on Sir Paul Nurse‘s ‘Anniversary Day’ address to the Royal Society.  Although the Royal Society is a learned society in the natural rather than the social sciences, he makes an interesting distinction that seems to have – more or less unchallenged – become a piece of received wisdom across many if not all fields of research.

Here’s part of what Sir Paul had to say (my underline added)

Given this emphasis on the primacy of the individuals carrying out the research, decisions should be guided by the effectiveness of the researchers making the research proposal. The most useful criterion for effectiveness is immediate past progress. Those that have recently carried out high quality research are most likely to continue to do so. In coming to research funding decisions the objective is not to simply support those that write good quality grant proposals but those that will actually carry out good quality research. So more attention should be given to actual performance rather than planned activity. Obviously such an emphasis needs to be tempered for those who have only a limited recent past record, such as early career researchers or those with a break in their careers. In these cases making more use of face-to-face interviews can be very helpful in determining the quality of the researcher making the application.

I guess my first reaction to this is to wonder whether interviews are the best way of deciding research funding for early career researchers.  Apart from the cost, inconvenience and potential equal opportunities issues of holding interviews, I wonder if they’re even a particularly good way of making decisions.  When it comes to job interviews, I’ve seen many cases where interview performance seems to take undue priority over CV and experience.  And if the argument is that sometimes the best researchers aren’t the best communicators (which is fair), it’s not clear to me how an interview will help.

My second reaction is to wonder about the right balance between funding excellent research and funding excellent researchers.  And I think this is really the point that Sir Paul is making.  But that’s a subject for another entry, another time.  Coming soon!

My third reaction – and what this entry is about – is the increasingly common assumption that there is one tribe of researchers who can write outstanding applications, and another which actually does outstanding research.  One really good expression of this can be found in a cartoon at the ever-excellent Research Counselling.  Okay, so it’s only a cartoon, but it wouldn’t have made it there unless it was tapping into some deeper cultural assumptions.  This article from the Times Higher back at the start of November speaks of ‘Dr Plods’ – for whom getting funding is an aim in itself – and ‘Dr Sparks’ – the ones who deserve it – and there seems to be little challenge from readers in the comments section below.

But does this assumption have any basis in fact?  Are those who get funded mere journeymen and journeywomen researchers, average intellects whose sole mark of distinction is their ability to toady effectively to remote and out-of-touch funding bodies?  To spot the research priority flavour-of-the-month from the latest Delivery Plan, and cynically twist their research plans to match it?  It’s a comforting thought for the increasingly large number of people who don’t get funding for their projects.  We’d all like to be the brilliant-but-eccentric-misunderstood-radical-unappreciated genius who doesn’t play by the rules, cuts a few corners but gets the job done, and to hell with the pencil pushers at the DA’s office in city hall – sorry, RCUK’s offices in downtown Swindon.  A weird kind of cross between Albert Einstein and Jimmy McNulty from ‘The Wire’.

While I don’t think anyone is seriously claiming that the Sparks-and-Plods picture should be taken literally, I’m not even sure how much truth there is in it as a parable or generalisation.  For one thing, I don’t see how anyone could realistically Plod their way very far from priority to priority as they change and still have a convincing track record for all of them.  I’m sure that a lot of deserving proposals don’t get funded, but I doubt very much that many undeserving proposals do get the green light.  The brute fact is that there are more good ideas than there is money to fund them, and the chances of that changing in the near future are pretty much zero.  I think that’s one part of what’s powering this belief – if good stuff isn’t being funded, that must be because mediocre stuff is being funded.  Right?  Er, well…. probably not.  I think the reality is that it’s the Sparks who get funded – specifically, those Sparks who are better able to communicate their ideas and make a convincing case for fit with funders’ or scheme priorities.  Plods, and their ‘incremental’ research (a term that damns with faint praise in some ESRC referees’ reports that I’ve seen), shouldn’t even be applying to the ESRC – or at least not to the standard Research Grants scheme.

A share of this Sparks/Plods view is probably caused by the impact agenda.  If impact is hard for the social sciences, it’s at least ten times as hard for basic research in many of the natural sciences.  I can understand why people don’t like the impact agenda, and I can understand why people are hostile to it.  However, my understanding of the impact agenda, as far as research funding applications are concerned, has always been that if a project has the potential for impact, it ought to pursue it, and there ought to be a good, solid, thought-through, realistic, and defensible plan for bringing it about.  If there genuinely is no potential for impact, argue that case in the impact statement.  Consider this, from the RCUK impact FAQ:

How do Pathways to Impact affect funding decisions within the peer review process?

The primary criterion within the peer review process for all Research Councils is excellent research. This has always been the case and remains unchanged. As such, problematic research with an excellent Pathways to Impact will not be funded. There are a number of other criteria that are assessed within research proposals, and Pathways to Impact is now one of those (along with e.g. management of the research and academic beneficiaries).

Of course, how this plays out in practice is another matter, but every indication I’ve had from the ESRC is that this is taken very seriously.  Research excellence comes first.  Impact (and other factors) come second.  These may end up being used as tie-breakers, but if the research isn’t excellent, it won’t get funded.  Things may be different at the other Research Councils, which I know less about – especially the EPSRC, which is repositioning itself as a sponsor of research and is busy dividing and subdividing and prioritising research areas for expansion or contraction in funding terms.

It’s worth recalling that it’s academics who make decisions on funding.  It’s not Suits in Swindon.  It’s academics.  Your peers.  I’d be willing to take seriously arguments that the form of peer review that we have can lead to conservatism and caution in funding decisions.  But I find it much harder to accept the argument that senior academics – researchers and achievers in their own right – are funding projects of mediocre quality but good impact stories ahead of genuinely innovative, ground-breaking research which could drive the relevant discipline forward.

But I guess my message to anyone reading this who considers herself to be more of a ‘Doctor Spark’ who is losing out to ‘Doctor Plod’ is to point out that it’s easier for Sparky to do what Ploddy does well than vice versa.  Ploddy will never match your genius, but you can get the help of academic colleagues and your friendly neighbourhood research officer – some of whom are uber-Plods, which in at least some cases is a large part of the reason why they’re doing their job rather than yours.

Want funding?  Maximise your chances of getting it.  Want to win?  Learn the rules of the game and play it better.  Might your impact plan be holding you back?  Take advantage of any support that your institution offers you – and if it does, be aware of the advantage that this gives you.  Might your problem be the art of grant writing?  Communicating your ideas to a non-specialised audience?  To reviewers and panel members from a cognate discipline?  To a referee not from your precise area?  Take advice.  Get others to read it.  Take their impressions and even their misunderstandings seriously.

Or you could write an application with little consideration for impact, with little concern for clarity of expression or the likely audience, and then if you’re unsuccessful, you can console yourself with the thought that it’s the system, not you, that’s at fault.

ESRC Demand Management Part 5: And the winner is…. researcher sanctions!

"And the prize for best supporting sanctions scheme goes to...."

The ESRC today revealed the outcome of the ‘Demand Management’ consultation, with the consultation exercise showing a strong preference for researcher sanctions rather than the other main options, which were institutional sanctions, institutional quotas, or charging for applications.  And therefore….

Given this clear message, it is likely that any further steps will reflect these views.

Which I think means that that’s what they’re going to do.  But being (a) academics, and (b) British, it has to be expressed in the passive voice and as tentatively as possible.

Individual researcher sanctions got the vote of 82% of institutional responses, 80% of learned society responses, and 44% of individual responses.  To put that last figure in context, though, 32% of the individual responses were interpreted as backing none of the possible measures, which I don’t think was ever going to be a particularly convincing response.  Institutional sanctions came second among institutions (11%), while institutional quotas came second among individual respondents (20%).  Charging for applications was, as I expected, a non-starter, apparently attracting the support of two institutions and one learned society or ‘other agency’.  I’m surprised it got that many.

The presentation of the results as a ‘vote’ is an interesting issue, as I don’t think that’s how the exercise was presented at the time.  The institutional response that I was involved in was – I like to think – a bit more nuanced and thoughtful than just a ‘vote’ for one particular option.  In any case, if it was a vote, I’m sure that the ‘First Past the Post’ system which appears to have been used wouldn’t be appropriate – some kind of ‘alternative vote’ system to find the least unpopular option would surely have been better.  I’m also puzzled by the combining of the results from institutions, individuals, and learned societies into totals for ‘all respondents’, which seems to give the same weighting to individual and institutional responses.

Fortunately – or doubly-fortunately – those elements of the research community which responded delivered a clear signal about the preferred method of demand management, and, in my view at least, it’s the right one.  I’ll admit to being a bit surprised by how clear cut the verdict appears to be, but it’s very much one I welcome.

It’s not all good news, though.  The announcement is silent on exactly what form the programme of researcher sanctions will take, and there is still the possibility that sanctions may apply to co-investigators as well as the principal investigator.  As I’ve argued before, I think this would be a mistake, and would be grossly unfair in far too many cases.  I know that there are some non-Nottingham folks reading this blog, so if your institution isn’t one of the ones that responded (and remember only 44 of 115 universities did), it might be worth finding out why not, and making your views known on this issue.

One interesting point that is stressed in the announcement is that individual researcher sanctions – or any form of further ‘demand management’ measures – may never happen.  The ESRC have been clear about this all along – the social science research community was put on notice about the unsustainability of the current volume of applications being submitted, and told that a review would take place in autumn 2012.  The consultation was about the general form of any further steps, should they prove necessary.  And interestingly, the ESRC are apparently ‘confident’ that they will not.

We remain confident that by working in partnership with HEIs there will be no need to take further steps. There has been a very positive response from institutions to our call for greater self-regulation, and we expect that this will lead to a reduction in uncompetitive proposals.

Contrast that with this, from March, when the consultation was launched:

We very much hope that we will not need additional measures.

Might none of this happen?  I’d like to think so, but I don’t share their confidence, and I fear that “very much hope” was nearer the mark.  I can well believe that each institution is keen to up its game, and I’m sure discussions are going on about new forms of internal peer review, mentoring, research leadership and so on in institutions all across the country.  Whether this will lead to a sufficient fall in the number of uncompetitive applications, well, I’m not so sure.

I think there needs to be an acceptance that there are plenty of perfectly good research ideas that would lead to high quality research outputs in quality journals, perhaps with strong non-academic impact, which nevertheless aren’t ‘ESRC-able’ – because they’re merely ‘very good’ or ‘excellent’ rather than ‘outstanding’.  And it’s only the really outstanding ideas that are going to be competitive.  If all institutions realise this, researcher sanctions may never happen.  But if hubris wins out, and everyone concludes that it’s everyone else’s applications that are the problem, then researcher sanctions are inevitable.