Coping with rejection: What to do if your grant application is unsuccessful. Part 1: Understand what it means…. and what it doesn’t mean

You can't have any research funding. In this life, or the next....

Some application and assessment processes are for limited goods, and some are for unlimited goods, and it’s important to understand the difference.  PhD vivas and driving tests are assessments for unlimited goods – there’s no limit on how many PhDs or driving licences can be issued.  In principle, everyone could have one if they met the requirements.  You’re not going to fail your driving test because there are better drivers than you.  Other processes are for limited goods – there is (usually) only one job vacancy that you’re all competing for, only so many papers that a top journal can accept, and only so much grant money available.

You’d think this was a fairly obvious point to make.  But talking to researchers who have been unsuccessful with a particular application, there’s sometimes more than a hint of hurt in their voices as they discuss it, and they talk in terms of their research being rejected, or not being judged good enough.  They end up taking it rather personally.  And given the amount of time and effort that researchers must put into their applications, that’s not surprising.

It reminds me of an unsuccessful job applicant whose opening gambit at a feedback meeting was to ask me why I didn’t think that she was good enough to do the job.  Well, my answer was that I was very confident that she could do the job – it’s just that there was someone more qualified and only one post to fill.  In this case, the unsuccessful applicant was simply unlucky – an exceptional applicant was offered the job, and nothing she could have said or done (short of assassination) would have made much difference.  While I couldn’t give the applicant the job she wanted or make the disappointment go away, I could at least pass on the panel’s unanimous verdict on her appointability.  My impression was that this restored some lost confidence, and did something to salve the hurt and disappointment.  You did the best that you could.  With better luck you’ll get the next one.

Of course, with grant applications, the chances are that you won’t get to speak to the chair of the panel who can explain the decision.  You’ll get a letter with the decision and something about how oversubscribed the scheme was and how hard the decisions were, which might or might not be true.  Your application might have missed out by a fraction, or been one of the first into the discard pile.

Some funders, like the ESRC, will pass on anonymised referees’ comments, but oddly, this isn’t always constructive and can even damage confidence in the quality of the peer review process.  In my experience, every batch of referees’ comments will contain at least one weird, wrong-headed, careless, or downright bizarre comment, and sometimes several.  Perhaps a claim about the current state of knowledge that’s just plain wrong, a misunderstanding that can only come from not reading the application properly, or a criticism on the spurious grounds that it’s not the project that they would have done.  These apples are fine as far as they go, but they should really taste of oranges.  I like oranges.

Don’t get me wrong – most referees’ reports that I see are careful, conscientious, and insightful, but it’s those misconceived criticisms that unsuccessful applicants will remember.  Even ahead of the valid ones.  And sometimes they will conclude that it’s those wrong-headed criticisms that are the reason for not getting funded.  Everything else was positive, so that one negative review must be the reason, yes?  Well, maybe not.  It’s also possible that the bizarre comment was discounted by the panel too, and the reason that your project wasn’t funded was simply that the money ran out before they reached it.  But we don’t know.  I really, really, really want to believe that that’s the case when referees write that a project is “too expensive” without explaining how or why.  I hope the panel read our carefully constructed budget and our detailed justification for resources and treated that comment with the fECing contempt that it deserves.

Fortunately, the ESRC have announced changes to procedures which allow not only a right of reply to referees’ comments, but also the communication of the final grade awarded.  This should give a much stronger indication of whether it was a near miss or miles off.  Of course, the news that an application was miles off the required standard may come gift-wrapped with sanctions.   So it’s not all good news.

But this is where we should be heading with feedback.  Funders shouldn’t be shy about saying that the application was a no-hoper, and they should be giving as much detail as possible.  Not so long ago, I was copied into a lovely rejection letter, if there’s any such thing.  It passed on comments, included some platitudes, but also told the applicant what the overall ranking was (very close, but no cigar) and how many applications there were (many more than the team expected).  Now at least one of the comments was surprising, but we know the application was taken seriously and given a thorough review.  And that’s something….

So… in conclusion….  just because your project wasn’t funded doesn’t (necessarily) mean that it wasn’t fundable.  And don’t take it personally.  It’s not personal.  Just the business of research funding.

Estimating Investigator and Researcher Time on a Project

PS: Time is also overheads

Prompted in part by an interesting discussion of the importance of the budget in establishing the overall credibility and shape of a research proposal at the ever-excellent Research Whisperer, I thought I’d put fingers-to-keyboard on the vexed issue of estimating staff time on research grant applications.  Partly this is to share some of what I do and what I recommend, but mostly it’s to ask others how they approach it. Comments, thoughts, suggestions, and experiences welcome as ever in the comments below.

Estimating staff time is by some distance the hardest part of the budget, and often when I discuss this with academic colleagues, we end up in a kind of “I dunno, what do you reckon?” impasse.  (I should say that it’s me who speaks like that, not them).  I’ve never been involved in a research project (other than my ‘desk research’ MPhil), so I’ve really no idea, and I don’t have any particular insight into how long certain tasks will take.  Career-young academics, and the increasing number of academics who have never had the opportunity to lead on a research project, have little experience to draw upon, and even those who have experience seem to find it difficult.  I can understand that, because I’d find it difficult too.  If someone were to ask me to estimate how much time I spent over the course of a year on (say) duties related to my role as School Research Ethics Officer, I’d struggle to give them an answer in terms of a percentage FTE or a total number of working days.

It’s further complicated by the convenient fiction that academic staff work a standard five-day, 37.5-hour week.  Even those who don’t regularly work evenings and weekends will almost certainly have flexible working patterns that make it even harder to estimate how long something will take.  This means that the standard units of staff time that are most often used – total days and percentage of full time – aren’t straightforwardly convertible from one currency to the other.  To make matters worse, some European funding sources seem to prefer ‘person months’ – which, rather than the standard working week, probably reflects the reality of academic work in many institutions.

My question about how much time the project will take is often answered with another question.  What will the funder pay for?  What looks right?  What feels right?  A longer, thinner project, or a shorter, more intensive one?  The answer is always and inevitably ‘it depends’.  It depends upon what is right for the project.  A longer, less intensive project might make sense if you have to wait for survey responses to come in, or for other things to happen.  On the other hand, if it’s something that you can and want to work on intensively, go for it.  But the project has to come first.  What would it take to properly take on this research challenge?

Often the answer is a bit hand-wavy and guesstimate-ish.  Oh, I don’t know….say… about, two days per week?  Two and a half?  Would they fund three?  This is generally a good starting point.  What’s not a good starting point is the kind of fifteen-minutes-every-Tuesday approach, where you end up with a ‘salami project’ – a few people involved, but sliced so thinly it’s hard to see how anything will get done.  This just isn’t credible, and it’s very unlikely to get funded by the likes of the ESRC (and other funders too, I’m sure) if they don’t believe that you could possibly deliver for the amount of money (time) you’re asking for.  Either they’ll conclude that you don’t understand what your own research requires (which is fatal for any chance of funding), or they’ll think that you’re trying to pull a fast one in terms of the value for money criterion.  And they won’t like that much either.

The other way to make sure that you’re not going to get funded is to overshoot and get greedy.  I’ve noticed that – oddly – academics seldom over-estimate their own time, but often over-estimate the amount of researcher time required.  I’ve seen potential bids before now that include a heavy smattering of research associates, but no clear idea of what they’ll actually be doing all day, other than stuff that the lead investigator doesn’t want to do.  In a UK context, overheads are calculated on the basis of investigator and researcher time, so including researchers is doubly expensive.  One question I often recommend asking is whether what’s required is actually a research associate (who attracts overheads) or an academic-related project manager or administrator (who doesn’t).  But if it’s hard to estimate how long it will take you to do any given task, it’s doubly hard to estimate how long it will take someone else – usually someone less experienced and less skilled, and perhaps from a cognate sub-discipline rather than your own.
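To see what ‘doubly expensive’ means in cash terms, here’s a minimal sketch with entirely invented figures – real salary, estates, and indirect cost rates vary from institution to institution, so treat every number below as a placeholder:

```python
# Illustrative only: why researcher time is "doubly expensive" under
# full economic costing. All rates below are invented for the example.

ESTATES_RATE = 10_000   # hypothetical estates charge per FTE per year (£)
INDIRECT_RATE = 45_000  # hypothetical indirect cost per FTE per year (£)

def annual_cost(salary: int, attracts_overheads: bool) -> int:
    """Full-year cost of one full-time post, with or without overheads."""
    overheads = (ESTATES_RATE + INDIRECT_RATE) if attracts_overheads else 0
    return salary + overheads

# A research associate attracts estates and indirect costs...
print(annual_cost(32_000, attracts_overheads=True))   # 87000
# ...whereas an academic-related project administrator on the same
# salary doesn't.
print(annual_cost(32_000, attracts_overheads=False))  # 32000
```

On those made-up numbers, the same salary costs the funder nearly three times as much when the post attracts overheads – which is why it’s worth being sure the role really is a research one.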

My usual advice is for would-be principal investigators to draw up a table showing the various phases of the project as rows, and project staff as columns.  In addition to the main phases of the research, extra rows should be added for the various stages of dissemination and impact activity; for project coordination with colleagues; for line management of any researchers or administrative/managerial staff; for professional development where appropriate.  Don’t forget travelling time associated with meetings, fieldwork, conferences etc.  The main thing that I usually see underestimated is project management time.  As principal investigator, you will have to meet reasonably regularly with your finance people, and you’ll have to manage and direct the project research associates.  Far too many academics seem to see RAs as clones who will instinctively know what to do and don’t need much in the way of direction, advice, or feedback.

Once this is done, I suggest adding up the columns and then working backwards from those total numbers of project days to a percentage FTE, or days per week, or whatever alternative metric you prefer.  In the UK, the research councils assume that 220 days = 1 working year, so you can use this to calculate the percentage of full time.  The number that you come out with should feel intuitively about right.  If it doesn’t, then something has probably gone wrong.  Go back to the table, and adjust things a little.  By going to and fro, from the precision of the table to the intuition of the percentage of time, you should reach what my philosophical hero and subject of my thesis, John Rawls, called ‘reflective equilibrium’.  Though he wasn’t talking about investigator time.  Once you’re happy with the result, you should probably use the figure from the table, and you should certainly consider putting the table in full in the ‘justification for resources’ section.
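For the sake of illustration, here’s a minimal sketch of that table-and-totals approach – the phases, roles, and day counts are all invented, and only the 220-days-per-year convention comes from the research councils:

```python
# A sketch of the phases-by-staff table described above.
# Phases, roles, and day counts are invented for illustration;
# 220 working days = 1 FTE year is the research council convention.

DAYS_PER_YEAR = 220
PROJECT_YEARS = 2  # hypothetical project length

# Rows are project phases; columns are members of the project team (days).
table = {
    "Literature review":        {"PI": 5,  "Co-I": 3,  "RA": 40},
    "Fieldwork":                {"PI": 10, "Co-I": 10, "RA": 150},
    "Analysis":                 {"PI": 15, "Co-I": 10, "RA": 100},
    "Dissemination and impact": {"PI": 10, "Co-I": 5,  "RA": 30},
    "Project management":       {"PI": 20, "Co-I": 2,  "RA": 0},
}

# Sum each column, then work backwards to a percentage FTE.
for person in ("PI", "Co-I", "RA"):
    total_days = sum(phase[person] for phase in table.values())
    fte = total_days / (DAYS_PER_YEAR * PROJECT_YEARS)
    print(f"{person}: {total_days} days, or roughly {fte:.0%} FTE")
```

If the percentages that fall out of the table don’t feel intuitively right – an RA at 15%, say, or a PI at 2% – that’s the cue to go back and adjust.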

Something I’m starting at the moment as part of the end-of-project review process is to go back to the original estimates of staff time, and to get a sense from the research team of how accurate they were, and what they would estimate differently next time, if anything.  The two things that have come out most strongly from this so far are the ones I’ve outlined above – managing staff and project administration – but I’ll be looking out for others.

So…. over to you.  How do you estimate researcher and investigator time?  Have you been involved in a funded project?  If so, what did you miss in your forecasts (if anything)?  What would you do differently next time?

What would wholesale academic adoption of social media look like?

A large crowd of people
Crowded out?

Last week, the LSE Impact of Social Sciences blog was asking its readers and followers to nominate their favourite academic tweeters.  This got me thinking.  While that’s a sensible question to ask now, and one that could create a valuable resource, I wonder whether the question would make as much sense if asked in a few years’ time?

The drivers for academics (and arguably academic-related types like me) to start to use Twitter and to contribute to a blog are many – brute self-promotion; the desire to join a community or communities; to share ideas; to test ideas; to network and make new contacts; to satisfy the impact requirements of the research funder; and so on and so forth.  I think most current PhD students would be very well advised to take advantage of social media to start building themselves an online presence as an early investment in their search for a job (academic or otherwise).   I’d imagine that a social media strategy is now all-but-standard in most ESRC ‘Pathways to Impact’ documents.  Additionally, there are now many senior, credible, well-established academic bloggers and tweeters, many of whom are also advocates for the use of social media.

So, what would happen if there was a huge upsurge in the number of academics (and academic-relateds) using social media?  What if, say, participation rates reached 20% or so?  Would the utility of social media scale, or would the signal-to-noise ratio fall to the point where its usefulness decreased?

This isn’t a rhetorical question – I’ve really no idea and I’m curious.  Anyone?  Any thoughts?

I guess that there’s a difference between different types of social media.  I have friends who are outside the academy and who have Twitter accounts for following and listening, rather than for leading or talking.  They follow the Brookers and the Frys and the Goldacres, and perhaps some news sources.  They use Twitter like a form of RSS feed, essentially.

But what about blogging, or using Twitter to transmit, rather than to receive?  If even 10% of academics have an active blog, will it still be possible or practical to keep track of everything relevant that’s written?  In my field, I think I’ve linked to pretty much every related blog in the UK (see links in the sidebar), and one from Australia.  In certain academic fields it’s probably similarly straightforward to keep track of everyone significant and relevant.  If this blogging lark catches on, there will come a point at which it’s no longer possible for anyone to keep up with everything in any given field.  So maybe people become more selective and we drop down to sub-specialisms, and it becomes sensible to ask for our favourite academic tweeters on non-linear economics, or something like that.

On the other hand, it might be that new entrants to the blogging market will be limited and inhibited by the number already present.  Or we might see more multi-author blogs, mergers, and so on, until we re-invent the journal.  Or strategies that involve attracting the attention and comment of influential bloggers and the academic twitterati (a little bit of me died inside typing that, I hope you’re happy….).  Might that be what happens?  That e-hierarchies form (arguably they already exist) that echo real-world hierarchies, and effectively squeeze out new entrants?  Although… I guess good content will always have a chance of ‘going viral’ within relevant communities.

Of course, it may well be that something else will happen.  That Twitter will end up in the same pile as MySpace.  Or that it simply won’t be widely adopted or become mainstream at all.  After all, most academics still don’t have much of a web 1.0 presence beyond a perfunctory page on their Department website.

That’s all a bit rambly and far longer than I meant it to be.  But as someone who is going to be recommending the greater use of social media to researchers, I’d like to have a sense of where all this might be going, and what the future might hold.  Would the usefulness of social media as an academic communication, information sharing, and networking tool effectively start to diminish once a certain point is reached?  Or would it scale?


The ESRC and “Demand Management”: Part 4 – Quotas and Sanctions, PIs and Co-Is….

A picture of Stuart Pearce and Fabio Capello
It won't just be Fabio getting sanctioned if the referee's comments aren't favourable.

Previously in this series of posts on ESRC Demand Management I’ve discussed the background to the current unsustainable situation and aspects of the initial changes, such as the greater use of sifting and outline stages, and the new ban on (uninvited) resubmissions.  In this post I’ll be looking forward to the possible measures that might be introduced in a year or so’s time should application numbers not drop substantially….

When the ESRC put their proposals out to consultation, there were four basic strategies proposed.

  • Charging for applications
  • Quotas for numbers of applications per institution
  • Sanctions for institutions
  • Sanctions for individual researchers

Reading between the lines of the demand management section of the presentation that the ESRC toured the country with in the spring, charging for applications is a non-starter.  Even in the consultation documents, this option only appeared to be included for the sake of completeness – it was readily admitted that there was no evidence that it would have the desired effect.

I think we can also all-but-discount quotas as an option.  The advantage of quotas is that they would allow the ESRC to control precisely the maximum number of applications that could be submitted.  Problem is, it’s the nuclear option, and I think it would be sensible to try less radical options first.  If their call for better self-regulation and internal peer review within institutions fails, and then sanctions schemes are tried and fail, then (and only then) should they be thinking about quotas.  Sanctions (and the threat of sanctions) seek to modify application submission behaviour, while quotas pretty much dictate it.  There may yet be a time when quotas are necessary, though I really hope not.

What’s wrong with quotas, then?  Well, there will be difficulties in assigning quotas fairly to institutions, in spite of complex plans for banding and ‘promotion’ and ‘relegation’ from the bands.  That’ll lead to a lot of game playing, and it’s also likely that there will be a lot of mucking around with the lead applicant.  If one of my colleagues has a brilliant idea and we’re out of quota, well, maybe we’ll find someone at an institution that isn’t and ask them to lead.  I can imagine a lot of bickering over who should spend their quota on submitting an application with a genuinely 50-50 institutional split.

But my main worry is that institutions are not good at comparing applications from different disciplines.  If we have applications from (say) Management and Law vying for the last precious quota slot, how is the institution to choose between them?  Even if it has experts who are not on the project team, they will inevitably have a conflict of interest – there would be a worry that they would support their ‘team’.  We could give it a pretty good cognate discipline review, but I’m not confident we would always get the decision right.  It won’t take long before institutions start teaming up to provide external preliminary peer review of each other’s applications, and before you know it, we end up just shifting the burden from post-submission to pre-submission for very little gain.

In short, I think quotas are a last-resort idea, and shouldn’t be seriously considered unless we end up with some combination of (a) the failure of other demand management measures and (b) significant cuts in the amount of funding available.

Which leaves sanctions – either on individual researchers or on their institutions.  The EPSRC has had a policy of researcher sanctions for some time, and that’s had quite a considerable effect.  I don’t think it’s so much through sanctioning people and taking them out of the system as through a kind of chill or placebo effect, whereby greater self-selection is taking place.  Once there’s a penalty for throwing in applications and hoping that some stick, people will stop.

As I argued previously, I think a lot of that pressure for increased submissions is down to institutions rather than individuals, who in many cases are either following direct instructions and expectations, or at least a very strong steer.  As a result, I was initially in favour of a hybrid system of sanctions where both individual researchers and institutions could potentially be sanctioned.  Both bear a responsibility for the application, and both are expected to put their name to it.  But after discussions internally, I’ve been persuaded that individual sanctions are the way to go, in order to have a consistent approach with the EPSRC, and with the other Research Councils, who I think are very likely to have their own version.  While the formulae may vary according to application profiles, as much of a common approach as possible should be adopted, unless of course there are overwhelming reasons why one of the RCs that I’m less familiar with should be different.

For me, the big issue is not whether we end up with individual, institutional, or hybrid sanctions, but whether the ESRC go ahead with plans to penalise co-investigators (and/or their institutions) as well as PIs in cases where an application does not reach the required standard.

This is a terrible, terrible, terrible idea and I would urge them to drop it.  The EPSRC don’t do it, and it’s not clear why the ESRC want to.  For me, the co-I issue is more important than which sanction model we end up with.

Most of the ESRC’s documents on demand management are thoughtful and thorough.  They’re written to inform the consultation exercise rather than dictate a solution, and I think the author(s) should be – on the whole – congratulated on their work.  Clearly a lot of hard work has gone into the proposals, which, given their seriousness, is only right.  However, nowhere is any kind of argument or justification to be found for why co-investigators (insert your own ‘and/or institutions’ from here on) should be regarded as equally culpable.

I guess the argument (which the ESRC doesn’t make) might be that an application will be given yet more careful consideration if more than the principal investigator has something to lose.  At the moment, I don’t do a great deal if an application is led from elsewhere – I offer my services, and sometimes that offer is taken up, sometimes it isn’t.  But no doubt I’d be more forceful in my ‘offer’ if a colleague or my university could end up with a sanctions strike against us.  Further, I’d probably be recommending that none of my academic colleagues get involved in an application without it going through our own rigorous internal peer review processes.  Similarly, I’d imagine that academics would be much more careful about what they allowed their name to be put to, and would presumably take a more active role in drafting the application.  Both institutions and individual academics can, I think, be guilty of regarding an application led from elsewhere as a free roll of the dice.  But we’re taking action on this – or at least I am.

The problem is that these benefits are achieved (if they are achieved at all) at the cost of abandoning basic fairness.  It’s just not clear to me why an individual/institution with only a minor role in a major project should be subject to the same penalty as the principal investigator and/or the institution that failed to spot that the application was unfundable.  It’s not clear to me why the career-young academic named as co-I on a much more senior colleague’s proposal should be held responsible for its poor quality.  I understand that there’s a term in aviation – cockpit gradient – which refers to the difference in seniority between pilot and co-pilot.  A very senior pilot and a very junior co-pilot is a bad mix, because the junior will be reluctant to challenge the senior.  Nor do I understand why someone named as co-I for an advisory role – on methodology perhaps, or for a discrete task – should bear the same responsibility.  And so on and so forth.  One response might be to create a new category of research team member, less responsible than a ‘co-investigator’ but more involved in the project direction (or part of it) than a ‘researcher’ – but do we really want to go down the road of redefining categories?

Now granted, there are proposals where the PI is primus inter pares among a team of equally engaged and responsible investigators, where there is no single, obvious candidate for the role of PI.  In those circumstances, we might think it would be fair for all of them to pay the penalty.  But I wonder what proportion of applications are like this, with genuine joint leadership?  Even in such cases, every one of those joint leaders ought to be happy to be named as PI, because they’ve all had equal input.  And the unfairness inherent in only one person getting a strike against their name (and the other(s) not) is surely much smaller than in the examples above?

As projects become larger, with £200k (very roughly, between two and two and a half person-years including overheads and project expenses) now being the minimum, the complex, multi-armed, innovative, interdisciplinary project is likely to become more and more common, because that’s what the ESRC says that it wants to fund.   But the threat of a potential sanction (or step towards sanction) for every last co-I involved is going to be (a) a massive disincentive to large-scale collaboration, (b) a logistical and organisational nightmare, or (c) both.

Institutionally, it makes things very difficult.  Do we insist that every last application involving one of our academics goes through our peer review processes?  Or do we trust the lead institution?  Or do we trust some (University of Russell) but not others (Poppleton University)?  How does the PI manage writing and guiding the project through various different approval processes, with the danger that team members may withdraw (or be forced to withdraw) by their institution?  I’d like to think that in the event of sanctions on co-Is and/or institutions that most Research Offices would come up with some sensible proposals for managing the risk of junior-partnerdom in a proportionate manner, but it only takes one or two to start demanding to see everything and to run everything to their timetable to make things very difficult indeed.

ESRC Future Research Leaders call announced

A Brazilian football badge
Brazilian international footballers... guaranteed four stars....

The ESRC has recently launched their long-awaited Future Research Leaders scheme, and it’s a mixture of good news and not so good news.

The good news first – that there’s a scheme at all, and that there’s funding at all.  As senior ESRC staff are quick to point out, the research councils did well to get a ‘flat cash’ settlement in the comprehensive spending review.  It could be much, much worse.  Another piece of good news, I think, has been the merger of the old ‘First Grants Scheme’ and the ‘Post-doctoral Fellowship’ scheme.  The problem with the PDF was that those who had a permanent academic contract could not apply.  I don’t know about other disciplines, but in Business and Management, I think it’s fair to say that most of the best and brightest career-young researchers would be snapped up.  Now, it’s possible that some of the best and brightest might have turned down a permanent academic (research and teaching) contract for a year or so of concentrated research time, but that would be a brave move.  So I wonder if the ESRC ended up funding the best of the best who didn’t get permanent jobs – but perhaps that’s unfair.

So… limitations of the old PDF scheme and reduced budgets make a consolidated scheme seem sensible.  But the change in emphasis is clear even from the language.  The clue’s in the name – the old ‘First Grants Scheme’ was about outstanding career-young researchers with outstanding ideas who hadn’t yet had a chance to be PI on their own project.  Make no mistake – it was always very competitive, and before the ESRC introduced an outline stage, the success rates were lower than for the late lamented Small Grants Scheme.  But ‘Future Research Leaders’ strikes a rather different note.  When I first heard the name, I thought it marked a shift from a broad scheme to a much narrower, much more elitist one.  And that’s been confirmed by the call specification.

“We expect to see only a limited number of outline applications from a single research organisation; only bids from outstanding individuals, with the potential in Research Excellence Framework terms to become the 4* researchers of the future, should be submitted through this call”

And there are other limitations too.  If I remember rightly, the old FGS eligibility rules allowed applications up to seven years post-PhD.  With FRL, we’re down to four years.  Add in the fact that there was no call last year because of the Comprehensive Spending Review, and it’s obvious that a whole cohort of early career researchers will miss out on this opportunity.  The only people who should be applying are those sitting right at the centre of a Venn diagram of demonstrable 4* potential, post-PhD eligibility, and an absolutely first-class, outstanding project.  Anyone else looking at this call, frankly, is wasting their time.

While I’m not sure about the eligibility rule changes, did anyone really think that those getting funding through this scheme or its predecessors weren’t the 4*ers of the future?  Perhaps this is just an example of the ESRC being more up front about its funding criteria – or, better, about what it actually takes to get funding through this call.  But I do think that the current social science research funding landscape has very serious problems.  Yes, let’s encourage the 4*s of the future, but we also need 3*s, and even 2*s and 1*s, both in their own right and to properly exploit, comment upon, and explore the implications and applications of 4* research.  But the dysfunction of the funding landscape is a topic for another blog post.

But….. no-one can accuse the ESRC of not being absolutely up front about this.  And it’s not hard to see why.  With no call for two years, other funding sources drying up, institutional hunger for attracting research funding, rising teaching loads across the sector, and promotion incentives for grant getting, there was a real danger of the ESRC drowning in a tidal wave of applications.  In many ways, this is the first test of the ESRC’s “demand management” request for institutions to self-regulate.  Let’s see if we’re capable.