Coping with rejection: What to do if your grant application is unsuccessful. Part 2: Next Steps

Look, I know I said that not getting funded doesn't mean they disliked your proposal, but I need a picture and it's either this or a picture of Simon Cowell with his thumb down. Think on.

In the first part of this series, I argued that it’s important not to misunderstand or misinterpret the reasons for a grant application being unsuccessful.  In the comments, Jo VanEvery shared a phrase that she’s heard from a senior figure at one of the Canadian Research Councils – that research funding “is not a test, it’s a contest”.  Not getting funded doesn’t necessarily mean that your research isn’t considered to be of high quality.  This second entry is about what steps to consider next.

1.  Some words of wisdom

‘Tis a lesson you should heed:  Try, try, try again.
If at first you don’t succeed, Try, try, try again
William Edward Hickson (1803-1870)

The definition of insanity is doing the same thing over and over but expecting different results
Ben Franklin, Albert Einstein, or Narcotics Anonymous

I like these quotes because they’re both correct in their own way.  There’s value to Hickson’s exhortation.  Success rates are low for most schemes and most funders, so even if you’ve done everything right, the chances are against you.  To be successful, you need a degree of resilience to look for another funder or a new project, rather than embarking on a decade-long sulk, muttering plaintively about how “the ESRC doesn’t like” your research whenever the topic of external funding is raised.

However, Franklin et al (or is it just al?) also have a point: there’s nothing to be gained from repeating the same mistakes and learning nothing as you drift from application to application.  While doing this, you can convince yourself that research funding is a lottery (which it isn’t) and that all you have to do is submit enough applications and eventually your number will come up (which it won’t).  This is the kind of approach (on the part of institutions as well as individuals) that’s pushed us close to ‘demand management’ measures with the ESRC.  More on learning from the experience in a moment or two.

2.  Can you do the research anyway?

This might seem like an odd question to ask, but it’s always the first one I ask academic colleagues who’ve been unsuccessful with a grant application (yes, this does happen, even at Nottingham University Business School).  The main component of most research projects is staff time.  And if you’re fortunate enough to be employed by a research-intensive institution which gives you a generous research time allocation, then this shouldn’t be a problem.  Granted, you can’t have that full-time research associate you wanted, but could you cut down the project and take on some or all of that work yourself, or share it between the investigators?  Could you involve more people – perhaps junior colleagues – to help cover the work?  Would others be willing to be involved if they could either co-author or be sole author on some of the outputs?  Could it be a PhD project?

Directly incurred research expenses are more of a problem – transcription costs, data costs, travel and expenses – especially if you and your co-investigators don’t have personal research accounts to dip into.  But if it turns out that all you need is your expenses paying, then a number of other funding options become viable – some external, but perhaps also some internal.

Of course, doing it anyway isn’t always possible, but it’s worth asking yourself and your team that question.  It’s also one that’s well worth asking before you decide to apply for funding.

3.  What can you learn for next time?

It’s not nice not getting your project funded.  Part of you probably wants to lock that application away and not think about it again.  Move onwards and upwards, and perhaps try again with another research idea.  While resilience is important, it’s just as important to learn whatever lessons there are to learn, to give yourself the best possible chance next time.

One lesson you might be able to take from the experience is about planning the application.  If you found yourself running out of time, not getting sufficient input from senior colleagues, or not taking full advantage of the support available within your institution, well, that’s a lesson to learn.  Give yourself more time, start well before the deadline, and don’t make yourself rush it.  If you did all this last time, remember that you did, and the difference that it made.  If you didn’t, then the fact is that your application was almost certainly not as strong as it could have been.  And if your application document is not the strongest possible iteration of your research idea, your chances of getting funded are pretty minimal.

I’d recommend reading through your application and the call guidance notes once again in the light of the referees’ comments.  Now that you have some distance from the application, you should ‘referee’ it yourself as well.  What would you do better next time?  Not necessarily individual, application-specific aspects, but more general points.  Did your application address the priorities of the call specifically enough, or were the crowbar marks far too visible?  Did you get the balance right between background exposition and writing about the current project?  Did you pay enough attention to each section?  Did you actually answer the questions asked?  Do you understand any criticisms that the referees had?

4. Can you reapply?  Should you reapply?

If it’s the ESRC you’re thinking about, then the answer’s no unless you’re invited.  I think we’re still waiting on guidance from the ESRC about what constitutes a resubmission, but if you find yourself thinking about how much you might need to tinker with your unsuccessful project to make it a fresh submission, then the chances are that you’ll be barking up the wrong tree.  Worst case scenario is that it’s thrown straight out without review, and best case is probably that you end up with something a little too contrived to stand any serious chance of funding.

Some other research funders do allow resubmissions, but generally you will need to declare it.  While you might get lucky with a straight resubmission, my sense is that if it was unsuccessful once it will be unsuccessful again. But if you were to thoroughly revise it, polish it, take advice from anyone willing to give it, and have one more go, well, who knows?

But there’s really no shame in walking away.  Onwards and upwards to the next idea.  Let this one go for now, and work on something new and fresh and exciting instead.  Just remember everything that you learnt along the way.  One former colleague once told me that he usually got at least one paper out of an application even if it was unsuccessful.  I don’t know how true that might be more generally, but you’ve obviously done a literature review and come up with some ideas for future research.  Might there be a paper in all that somewhere?

Another option, which I hinted at earlier when I mentioned looking for the directly incurred costs only, is resubmitting to another funder.  My advice on this is simple…. don’t resubmit to another funder.  Or at least, don’t treat it like a resubmission.  Every research funder, every scheme, has different interests and priorities.  You wrote an application for one funder, which presumably was tailored to that funder (it was, wasn’t it?).  So a few alterations probably won’t be enough.

For one thing, the application form is almost certainly different, and that eight-page monstrosity won’t fit into two pages.  And if you cut it down crudely, and it reads like it’s been cut down crudely, you have no chance.  I’ve never worked for a research funding body (unless you count internal schemes where I’ve had a role in managing the process), but I would imagine that if I did, the best way to annoy me (other than using the word ‘impactful‘) would be sending me some other funder’s cast-offs.  It’s not quite like romancing a potential new partner and using your old flame’s name by mistake, but you get the picture.  Your new funder wants to feel special and loved.  They want you to have picked them out – and them alone – for their unique and enlightened approach to funding.  Only they can fill the hole in your heart (well, wallet) and satisfy your deep yearning for fulfilment.

And where should you look if your first choice funder does not return your affections?  Well, I’m not going to tell you (not without a consultancy fee, anyway).  But I’m sure your research funding office will be able to help find you some new prospective partners.

 

A partial, qualified, cautious defence of the Research Excellence Framework (REF)

No hilarious visual puns on REF / Referees from me....

There’s been a constant stream of negative articles about the Research Excellence Framework (for non-UK readers, this is the “system for assessing the quality of research in UK higher education institutions”) over the last few months, and two more have appeared recently (from David Shaw, writing in the Times Higher, and from Peter Wells on the LSE Impact Blog), which have prompted me to respond with something of a defence of the REF.

One crucial fact that I left out of the description of the REF in the previous paragraph is that “funding bodies intend to use the assessment outcomes to inform the selective allocation of their research funding to HEIs, with effect from 2015-16”.  And I think this is a fact that’s also overlooked by some critics.  While a lot of talk is about prestige and ‘league tables’, what’s really driving the process is the need to have some mechanism for divvying out the cash for funding research – QR funding.  We could most likely do without a “system for assessing the quality of research” across every discipline and every UK university in a single exercise using common criteria, but we can’t do without a method of dividing up the cake as long as there’s still cake to share out.

In spite of the current spirit of perpetual revolution in the sector, money  is still paid (via HEFCE) to universities for research, without much in the way of strings attached.  This basic, core funding is one half of the dual funding system for research in the UK – the other half being funding for individual research projects and other activities through the Research Councils.  What universities do with their QR funding varies, but I think typically a lot of it is in staff salaries, so that the number of staff in any given discipline is partly a function of teaching income and research income.

I do have sympathy for some of the arguments against the REF, but I find myself returning to the same question – if not this way, then how? 

It’s unfair to expect anyone who objects to any aspect of the REF to furnish the reader with a fully worked up alternative, but constructive criticism must at least point the way.  One person who doesn’t fight shy of coming up with an alternative is Patrick Dunleavy, who has argued for a ‘digital census’ involving the use of citation data as a cheap, simple, and transparent replacement for the REF.  That’s not a debate I feel qualified to participate in, but my sense is that Dunleavy’s position on this is a minority one in UK academia.

In general, I think that criticisms of the REF tend to fall into the following broad categories.  I don’t claim to address decisively every last criticism made (hence the title), but for what it’s worth, here are the categories that I’ve identified, and what I think the arguments are.

1.  Criticism over details

The REF team have a difficult balancing act.  On the one hand, they need rules which are sensitive to the very real differences between academic disciplines.  On the other, fairness and efficiency call for as much similarity in approach, rules, and working methods as possible between panels.  The more differences between panels, the greater the chances of confusion, and of mistakes being made in the process of planning and submitting REF returns – mistakes which could seriously affect both notional league table placing and cold hard cash.  The more complicated the process, the greater the transaction costs.  Which brings me on to the second balancing act.  On the one hand, it needs to be a rigorous and thorough process, with so much public money at stake.  On the other hand, it needs to be lean and efficient, minimising the demands on the time of institutions, researchers, and panel members.  This isn’t to say that the compromise reached on any given point between particularism and uniformity, and between rigour and efficiency, is necessarily the right one, of course.  But it’s not easy.

2.  Impact

The use of impact at all.  The relative weighting of impact.  The particular approach to impact.  The degree of uncertainty about impact.  It’s a step into the unknown for everyone, but I would have thought that the basic idea is hard to argue with: that there should be some notion of impact, some expectation that where academic research makes a difference in the real world, we should ensure that it does so.  I have much more sympathy for some academic disciplines than others as regards objections to the impact agenda.  Impact is really a subject for a blog post in itself, but for now, it’s worth noting that it would be inconsistent to argue against the inclusion of impact in the REF and also to argue that it’s too narrow in terms of what it values and what it assesses.

3.  Encouraging game playing

While it’s true that the REF will encourage game playing in similar (though different) ways to its predecessors, I can’t help but think this is inevitable and would also be true of every possible alternative method of assessment.  And what some would regard as gaming, others would regard as just doing what is asked of them.

One particular ‘game’ that is played – or, if you prefer, strategic decision that is made – is about where to set the threshold for inclusion.  It’s clear that there’s no incentive to include those whose outputs are likely to fall below the minimum threshold for attracting funding.  But it’s common for some institutions, in some disciplines, to set the bar higher than this, with one eye not only on the QR funding, but also on league table position.  There are two arguments that can be made against this.  One is that QR funding shouldn’t be so heavily concentrated on the top-rated submissions and/or that more funding should be available.  But that’s not an argument against the REF as such.  The other is that institutions should be obliged to submit everyone.  But the costs of doing so would be huge, and it’s not clear to me what the advantages would be – would we really get better or more accurate results with which to share out the funding?  Because ultimately the REF is not about individuals, but institutions.

4. Perverse incentives

David Shaw, in the Times Higher, sees a very dangerous incentive in the REF.

REF incentivises the dishonest attribution of authorship. If your boss asked you to add someone’s name to a paper because otherwise they wouldn’t be entered into the REF, it could be hard to refuse.

I don’t find this terribly convincing.  While I’m sure that there will be game playing around who should be credited with co-authored publications, I’d see that as acceptable in a way that the fraudulent activity that Shaw fears (but stresses that he’s not experienced first-hand) just isn’t.  There are opportunities for – and temptations towards – fraud, bad behaviour and misconduct in pretty much everything we do, from marking students’ work to reporting our student numbers and graduate destinations.  I’m not clear how that makes any of these activities ‘unethical’ in the way his article seems to argue.  Fraud is rare in our sector, and if anyone does commit it, it’s a huge scandal and heads roll.  It ruins careers and leaves a long shadow over institutions.  Even leaving aside the residual decency and professionalism that’s the norm in our sector, it would be a brave Machiavellian Research Director who would risk attempting this kind of fraud.  To make it work, you’d need the cooperation and the silence of two academic researchers for every single publication.  Risk versus reward – it’s just not worth it.

Peter Wells, on the LSE blog, makes the point that the REF acts as an active disincentive for researchers to co-author papers with colleagues at their own institution, as only one of them can return the output to the REF.  That’s an oversimplification, but it’s certainly true that there’s active discouragement of the submission of the same output multiple times in the same return.  There’s no such problem if the co-author is at another institution, of course.  However, I’m not convinced that this theoretical disincentive makes a huge difference in practice.  Don’t academics co-author papers with the most appropriate colleague, whether internal or external?  How often – really – does a researcher choose to write something with a colleague at another institution rather than a colleague down the corridor?  For REF reasons alone?  And might the REF incentive to include junior colleagues as co-authors that Shaw identifies work in the other direction, for genuinely co-authored pieces?

In general, proving the theoretical possibility of a perverse incentive is not sufficient to prove its impact in reality.

5.  Impact on morale

There’s no doubt that the REF causes stress and insecurity and can add significantly to the workload of those involved in leading on it.  There’s no doubt that it’s a worrying time, waiting for news of the outcome of the R&R paper that will get you over whatever line your institution has set for inclusion.  I’m sure it’s not pleasant being called in for a meeting with the Research Director to answer for your progress towards your REF targets, even with the most supportive regime.

However…. and please don’t hate me for this…. so what?  I’m not sure that the bare fact that something causes stress and insecurity is a decisive argument.  Sure, there’s a prima facie case for trying to make people’s lives better rather than worse, but that’s about it.  And again, what alternative system would be equally effective at dishing out the cash while being less stressful?  The fact is that every job – including university jobs – is sometimes stressful and has downsides as well as upsides.  Among academic staff, the number one stress factor I’m seeing at the moment is marking, not the REF.

6.  Effect on HE culture

I’ve got more time for this argument than for the stress argument, but I think a lot of the blame is misdirected.  Take Peter Wells’ rather utopian account of what might replace the REF:

For example, everybody should be included, as should all activities.  It is partly by virtue of the ‘teaching’ staff undertaking a higher teaching load that the research active staff can achieve their publications results; without academic admissions tutors working long hours to process student applications there would be nobody to receive research-led teaching, and insufficient funds to support the University.

What’s being described here is not in any sense a ‘Research Excellence Framework’.  It’s a much broader ‘Academic Excellence Framework’, and that doesn’t strike me as something that’s particularly easy to assess.  How on earth could we go about assessing absolutely everything that absolutely everyone does?  Why would we give out research cash according to how good an admissions tutor someone is?

I suspect that what underlies this – and some of David Shaw’s concerns as well – is a much deeper unease about the relative prestige and status attached to different academic roles: the research superstar; the old-fashioned teaching-and-research lecturer; those with heavy teaching and admin loads who are de facto teaching only; and those who are de jure teaching only.  There is certainly a strong sense that teaching is undervalued – in appointments, in promotions, in status, and in other ways.  Those with higher teaching and admin workloads do enable others to research in precisely the way that Wells argues, and respect and recognition for those tasks is certainly due.  And I think the advent of increased tuition fees is going to change things, for the better in terms of the profile and status of excellent teaching.

But I’m not sure why any of these status problems are the fault of the REF.  The REF is about assessing research excellence and giving out the cash accordingly.  If the REF is allowed to drive everything, and non-inclusion is such a badge of dishonour that the contributions of academics in other areas are overlooked, well, that’s a serious problem.  But it’s an institutional one, and not one that follows inevitably from the REF.  We could completely change the way the REF works tomorrow, and it would make very little difference to the underlying status problem.

It’s not been my intention here to refute each and every argument against the REF, and I don’t think I’ve even addressed directly all of Shaw and Wells’ objections.  What I have tried to do is to stress the real purpose of the REF, the difficulty of the task facing the REF team, and make a few limited observations about the kinds of objections that have been put forward.  And all without a picture of Pierluigi Collina.

Leverhulme Trust to support British Academy Small Research Grant scheme

BA staff examine the Leverhulme memorandum of understanding

The British Academy announced yesterday that it has reached a new collaborative agreement with the Leverhulme Trust over funding for its Small Grants Scheme.  This is very good news for researchers in the humanities and the social sciences, and I’m interrupting my series of gloom-and-doom posts on what to do if your application is unsuccessful to inflict my take on some really good news upon you, oh gentle reader.  And to see if I can set a personal best for the number of links in an opening sentence.  Which I can.

When I first started supporting grant-getting activity back in the halcyon days of 2005ish, the British Academy Small Grants scheme was a small and beautifully formed thing.  It funded up to £7.5k or so for projects of up to two years, and only covered research expenses – so no funding for investigator time, replacement teaching, or overheads, but it would cover travel, subsistence, transcription, data, casual research assistance and so on.  It was a light-touch application on a simple form, and it enjoyed a success rate of around 50%.  The criterion for funding was academic merit.  Nothing else mattered.  It funded some brilliant work, and Ken Emond of the British Academy has always spoken very warmly about the scheme and considered it a real success story.  Gradually people started cottoning on to just how good a scheme it was, and success rates started to drop – but that’s what happens when you’re successful.

Then along came the Comprehensive Spending Review and budgets were cut.  I presume the scheme was to be scrapped under government pressure, only for our heroes at the BA eventually to win the argument.  At the same time, the ESRC decided that their reviewers weren’t going to get out of bed in the morning for less than £200k.  Suddenly bigger projects were the only option, and (funded) academic research looked to be all about perpetual paradigm shifts, with only the outstanding stuff that promises to change everything getting funded.  And there was no evidence of any thought as to how these major theoretical breakthroughs gained through massive grants might be developed, expanded, exploited and extended through smaller projects.

Although it was great to see the BA SGS scheme survive in any form, the reduced funding made it inevitable that success rates would plummet.  However, the extra funding from the Leverhulme Trust could make a difference.  According to the announcement, the Trust has promised £1.5 million of funding over three years.  Let’s assume:

  • that every penny goes to supporting research, and not a penny goes on infrastructure and overheads and that it’s all additional (rather than replacement) funding
  • that £10k will remain the maximum available
  • that the average amount awarded will be £7.5k

So…. £1.5m over three years is £500k per year.  And £500k divided by an average project cost of £7.5k gives about 67 extra projects a year.  While we don’t know how many projects will be funded in this year’s reduced scheme, we do know about last year.  According to the British Academy’s 2010/11 annual report:

For the two rounds of competition held during 2010/11 the Academy received 1,561 applications for consideration and 538 awards were made, a success rate of 34.5%. Awards were spread over the whole range of Humanities and Social Sciences, and were made to individuals based in more than 110 institutions, as well as to more than 20 independent scholars.

2010/11 was the last year that the scheme ran in full, and at the time we all thought that the spring 2011 call would be the last, so I suspect that the success rate might have been squeezed by a number of ‘now-or-never’ applications.  We won’t know until next month how many awards were made in the autumn 2011 call, nor what the success rate was, so we won’t know until then whether the Leverhulme cash will restore the scheme to its former glory.  I suspect that it won’t, and that the combined total of the BA’s own funds and the Leverhulme contribution will add up to less than was available for the scheme before the comprehensive spending review struck.

Nevertheless, there will be about 67 more small social science and humanities projects funded each year than would otherwise have been the case.  So let’s raise a non-alcoholic beverage to the Leverhulme Trust, and in memory of founder William Hesketh Lever and his family’s values of “liberalism, nonconformity, and abstinence”.
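For anyone who wants to check those back-of-envelope sums – or rerun them when the real figures come out next month – here’s a minimal sketch in Python.  The inputs are the figures quoted above; the £7.5k average award is purely my assumption, not anything the British Academy or the Leverhulme Trust have confirmed.

```python
# Back-of-envelope sums for the Leverhulme contribution to the BA Small
# Research Grants scheme. Inputs are taken from the post or are my own
# assumptions -- they are not official British Academy / Leverhulme figures.

leverhulme_total = 1_500_000   # £1.5m pledged over three years (per the announcement)
years = 3
average_award = 7_500          # assumed average award (the maximum is £10k)

per_year = leverhulme_total / years
extra_projects_per_year = per_year / average_award

# 2010/11 figures from the BA annual report quoted above
applications_2010_11 = 1_561
awards_2010_11 = 538
success_rate = awards_2010_11 / applications_2010_11

print(f"Leverhulme money per year: £{per_year:,.0f}")
print(f"Extra projects per year (at a £{average_award:,} average): {extra_projects_per_year:.0f}")
print(f"2010/11 success rate: {success_rate:.1%}")
```

Run as-is, that prints roughly £500,000 a year, about 67 extra projects, and the 34.5% success rate quoted in the annual report – so at least the arithmetic holds up, even if the assumptions turn out not to.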

23rd Jan update:  In response to a question on Twitter from @Funding4Res (aka Marie-Claire from the University of Huddersfield’s Research and Enterprise team), the British Academy have said that “they’ll be rounds for Small Research Grants in the spring and autumn. Dates will be announced soon.”

Coping with rejection: What to do if your grant application is unsuccessful. Part 1: Understand what it means…. and what it doesn’t mean

You can't have any research funding. In this life, or the next....

Some application and assessment processes are for limited goods, and some are for unlimited goods, and it’s important to understand the difference.  PhD vivas and driving tests are assessments for unlimited goods – there’s no limit on how many PhDs or driving licences can be issued.  In principle, everyone could have one if they met the requirements.  You’re not going to fail your driving test because there are better drivers than you.  Other processes are for limited goods – there is (usually) only one job vacancy that you’re all competing for, only so many papers that a top journal can accept, and only so much grant money available.

You’d think this was a fairly obvious point to make.  But talking to researchers who have been unsuccessful with a particular application, there’s sometimes more than a hint of hurt in their voices as they discuss it, and they talk in terms of their research being rejected, or not being judged good enough.  They end up taking it rather personally.  And given the amount of time and effort that researchers must put into their applications, that’s not surprising.

It reminds me of an unsuccessful job applicant whose opening gambit at a feedback meeting was to ask me why I didn’t think that she was good enough to do the job.  Well, my answer was that I was very confident that she could do the job, it’s just that there was someone more qualified and only one post to fill.  In this case, the unsuccessful applicant was simply unlucky – an exceptional applicant was offered the job, and nothing she could have said or done (short of assassination) would have made much difference.  While I couldn’t give the applicant the job she wanted or make the disappointment go away, I could at least pass on the panel’s unanimous verdict on her appointability.  My impression was that this restored some lost confidence, and did something to salve the hurt and disappointment.  You did the best that you could.  With better luck you’ll get the next one.

Of course, with grant applications, the chances are that you won’t get to speak to the chair of the panel who will explain the decision.  You’ll get a letter with the decision and perhaps something about how oversubscribed the scheme was and how hard the decisions were, which might or might not be true.  Your application might have missed out by a fraction, or been one of the first into the discard pile.

Some funders, like the ESRC, will pass on anonymised referees’ comments, but oddly, this isn’t always constructive and can even damage confidence in the quality of the peer review process.  In my experience, every batch of referees’ comments will contain at least one weird, wrong-headed, careless, or downright bizarre comment, and sometimes several.  Perhaps a claim about the current state of knowledge that’s just plain wrong, a misunderstanding that can only come from not reading the application properly, and/or criticising it on the spurious grounds of not being the project that they would have done.  These apples are fine as far as they go, but they should really taste of oranges.  I like oranges.

Don’t get me wrong – most referees’ reports that I see are careful, conscientious, and insightful, but it’s those misconceived criticisms that unsuccessful applicants will remember.  Even ahead of the valid ones.  And sometimes they will conclude that it’s those wrong criticisms that are the reason for not getting funded.  Everything else was positive, so that one negative review must be the reason, yes?  Well, maybe not.  It’s also possible that that bizarre comment was discounted by the panel too, and the reason that your project wasn’t funded was simply that the money ran out before they reached your project.  But we don’t know.  I really, really, really want to believe that that’s the case when referees write that a project is “too expensive” without explaining how or why.  I hope the panel read our carefully constructed budget and our detailed justification for resources and treat that comment with the fECing contempt that it deserves.

Fortunately, the ESRC have announced changes to their procedures which will allow not only a right of reply to referees’ comments, but also the communication of the final grade awarded.  This should give a much stronger indication of whether it was a near miss or miles off.  Of course, the news that an application was miles off the required standard may come gift-wrapped with sanctions.  So it’s not all good news.

But this is where we should be heading with feedback.  Funders shouldn’t be shy about saying that the application was a no-hoper, and they should be giving as much detail as possible.  Not so long ago, I was copied into a lovely rejection letter, if there’s any such thing.  It passed on comments, included some platitudes, but also told the applicant what the overall ranking was (very close, but no cigar) and how many applications there were (many more than the team expected).  Now at least one of the comments was surprising, but we know the application was taken seriously and given a thorough review.  And that’s something….

So… in conclusion….  just because your project wasn’t funded doesn’t (necessarily) mean that it wasn’t fundable.  And don’t take it personally.  It’s not personal.  Just the business of research funding.

New year’s wishes….

The new calendar year is traditionally a time for reflection and for resolutions, but in a fit of hubris I’ve put together a list of resolutions I’d like to see for the sector, research funders, and university culture in general.  In short, for everyone but me.  But to show willing, I’ll join in too.

No more of the following, please….

1.  “Impactful”

Just…. no.  I don’t think of myself as a linguistic purist or a grammar-fascist, though I am a pedant for professional purposes.  I recognise that language changes and evolves over time, and I welcome changes that bring new colour and new descriptive power to our language.  While I accept that the ‘impact agenda’ is here to stay for the foreseeable future, the ‘impactful’ agenda need not be.  The technical case against this monstrosity of a word is outlined at Grammarist, but surely the aesthetic case is conclusive in itself.  I warn anyone using this word in my presence that I reserve the right to tell them precisely how annoyful they’re being.

2.  The ‘Einstein fallacy’

This is a mistaken and misguided delusion that a small but significant proportion of academics appear to be suffering from.  It runs a bit like this:
1) Einstein was a genius
2) Einstein was famously absent-minded and shambolic in his personal organisation
3) Conclusion:  If I am or pretend to be absent-minded and shambolic, either:
(3a) I will be a genius; or
(3b) People will think I am a genius; or
(3c) Both.

I accept that some academics are genuinely bad at administration and organisation.  In some cases it’s a lack of practice or experience, in others a lack of confidence, and in others still I accept that it’s just not where their interests and talents lie.  Fair enough.  But please stop being deliberately bad at it to try to impress people.  Oh, and you can only act like a prima donna if you have the singing skills to back it up…

3.  Lack of predictability in funding calls

Yes, I’m looking at you, ESRC.  Before the comprehensive spending review and all of the changes that followed from it, we had a fairly predictable annual cycle of calls, very few of which had very early autumn deadlines.  Now we’re into a new cycle which may or may not be predictable, and a lot of the deadlines seem to fall very early in the academic year.  Sure, let’s have one-off calls on particular topics, but let’s have a predictable annual cycle for everything else, with as much advance notice as possible.  It’ll help hugely with ‘demand management’ because it’ll be much easier to postpone applications that aren’t ready if we know there will be another call.  For example, I was aware of a couple of very strong seminar series ideas which needed further work and discussion within the relevant research and research-user communities.  My advice was to start that work now, using the existence of the current call as impetus, and to submit next year.  But we’ve taken a gamble, as we don’t know if there will be another call in the future, and you can’t tell me because apparently a decision has yet to be made.

4.  Lazy “please forward as appropriate” emails

Stuff sent to me from outside the Business School with the expectation that I’ll just send it on to everyone.  No.  Email overload is a real problem, and I write most of my emails with the expectation that I have ten seconds at most either to get the message across, or to earn an attention extension.  I mean, you’re not even reading this properly, are you?  You’re probably skim-reading it in case there’s a nugget of wit amongst the whinging.  Every email I send creates work for others, and every duff, dodgy, or irrelevant email I send reduces my e-credit rating.  I know for a fact that at least some former colleagues deleted everything I sent without reading it – there’s no other explanation I can think of for their missing two emails with a header including the magic words “sabbatical leave”.

So… will I be spending my e-credit telling my colleagues about your non-Business-School-related event which will be of interest to no-one?  No, no, and most assuredly no.  I will forward it “as appropriate”, if by “appropriate” you mean my deleted items folder.

Sometimes, though, a handful of people might be interested.  Or quite a lot of people might be interested, but it’s not worth an individual email.  Maybe I’ll put it on the portal, or include it in one of my occasional news and updates emails.  Maybe.

If you’d like me to do that, though, how about sending me the message in a form I can forward easily and without embarrassment?  With a meaningful subject line, and a succinct and accurate summary in the opening two sentences?  So that I don’t have to do it for you before I feel I can send it on.  There’s a lovely internet abbreviation – TL;DR – which stands for Too Long; Didn’t Read.  I think its existence tells us something.

5.  People who are lucky enough to have interesting, rewarding and enjoyable jobs with an excellent employer and talented and supportive colleagues, who always manage to find some petty irritants to complain about, rather than counting their blessings.