Book review: The Research Funding Toolkit (Part 1)

For the purposes of this review, I’ve set aside my aversion to the use of terms like ‘toolkit’ and ‘workshop’.

The existence of a market for The Research Funding Toolkit, by Jacqueline Aldridge and Andrew Derrington, is yet more evidence of how difficult it is to get research funding in the current climate.  Although the primary target audience is an academic one, research managers and those in similar roles “will also find most of this book useful”, and I’d certainly have no hesitation in recommending this book to researchers who want to improve their chances of getting funding, and to both new and experienced research managers.  In particular, academics who don’t have regular access to research managers (or similar) and to experienced grant getters and givers at their own institution should consider this book essential reading if they entertain serious ambitions of obtaining research funding.  While no amount of skill in grant writing will get a poor idea funded, a lack of skill can certainly prevent an outstanding idea from getting the hearing it deserves if the application lacks clarity, fails to highlight the key issues, or fails to make a powerful case for its importance.

The authors have sought to distil a substantial amount of advice and experience into one short book which covers finding appropriate funding sources, planning an application, understanding application forms, and assembling budgets.  But it goes beyond mere administrative advice, and also addresses writing style, getting useful (rather than merely polite) feedback on draft versions, the internal politics of grant getting, the challenges of collaborative projects, and the key questions that need to be addressed in every application.  Crucially, it demystifies what really goes on at grant decision-making meetings – something that far too many applicants know far too little about.  Applicants would love to think that the scholarly and eminent panel spend hours subjecting every facet of their magnum opus to detailed, rigorous, and forensic analysis.  The reality – unavoidably, given application numbers – is rather different.

Aldridge and Derrington are well-situated to write a book about obtaining research funding.  Aldridge is Research Manager at Kent Business School and has over eight years’ experience of research management and administration.  Derrington is Pro-Vice Chancellor for Humanities and Social Sciences at the University of Liverpool, and has served on grant committees for several UK research councils and for the Wellcome Trust.  His research has been “continuously funded” by various schemes and funders for 30 years.  I think a book like this could only have been written in close collaboration between an academic with grant getting and giving experience, and a research manager with experience of supporting applications over a number of years.

The book practises what it preaches by applying the principles of grant writing that it advocates to its own style and layout.  It is organised into 13 distinct chapters, each containing a summary and introduction, and a conclusion summarising the key points and lessons to be taken.  It includes 19 different practical tools, as well as examples from successful grant applications.  One of the appendices offers advice on running institutional events on grant getting.  As it advises applicants to do, it breaks the text down into small chunks, makes good use of headings and subheadings, and uses clear, straightforward language.  It’s certainly an easy, straightforward read which won’t take long to finish cover-to-cover, and the structure allows the reader to dip back in to re-read appropriate sections later.  Probably the most impressive thing for me about the style is how lightly it wears its expertise – genuinely useful advice without falling into the traps of condescension, smugness, or preaching.  Although the prose sacrifices sparkle for clarity and brevity, the book coins a number of useful phrases and distinctions that will be of value, and I’ll certainly be adopting one or two of them.

Writing a book of this nature raises a number of challenges about specificity and relevance.  Different subjects have different funders with different priorities and conventions, and arrangements vary from country to country and – of course – over time.  The authors have deliberately sought to use a wide range of example funders, including funders from Australia, America, and Europe – though, as you might expect, the majority of exemplar funders are UK-based.  Different Research Councils are used as case studies, and I would imagine that the advice given is generalisable enough to be of real value across academic disciplines and countries.  It’s harder to tell how this book will date (references to web resources all date from Oct 2011), but much of the advice flows directly from (a) the scarcity of resources, and (b) the way that grant panels are organised and work, and it’s hard to imagine either changing substantially.  The authors are careful not to make generalisations or sweeping assertions based on any particular funder or scheme, so I would be broadly optimistic about the book’s continuing relevance and utility in years to come.  There’s also a website to accompany the book where new materials and updates may be added in the future; there are already a number of blog posts subsequent to the publication date of the book.

Worries about appearing dated may account for the book having comparatively little to say about the impact agenda and how to go about writing an impact statement.  Only two pages address this directly, and much of that space is taken up with examples.  Although not all UK funders ask for impact statements yet, the research councils have been asking for them for some time, and indications are that other countries are more likely to follow suit than not.  However, I think the authors were right not to devote a substantial section to this: understandings of and approaches to impact are still in their infancy, and such a section would be likely to date quickly.

I’ve attempted a fairly general review in this post, and I’ll save most of my personal reaction for Part 2.  As well as highlighting a few areas that I found particularly useful, I’m going to raise a few issues arising from the book as a jumping-off point for debate and discussion.  Attempting to do that here would make this first post too long, and unbalance the review by placing excessive focus on areas where I’d tentatively disagree, rather than on the overwhelming majority of the points and arguments made in the book, which I’d thoroughly agree with and endorse absolutely.

The Research Funding Toolkit (£21.99 for the paperback version) is available from Sage.  The Sage website also mentions an ebook version, but the link doesn’t appear to be working at the time of writing.

Declarations of interest:
Publishers Sage were kind enough to provide me with a free review copy of this book.  I have had some very brief Twitter interactions with Derrington and I met Aldridge briefly at the ARMA conference earlier this year.

News from the ESRC: International co-investigators and the Future Leaders Scheme

"They don't come over here, they take our co-investigator jobs..."I’m still behind on my blogging – I owe the internet the second part of the impact series, and a book review I really must get round to writing.  But I picked up an interesting nugget of information regarding the ESRC and international co-investigators that’s worthy of sharing and commenting upon.

ESRC communications send round an occasional email entitled ‘All the latest from the ESRC’, which is well worth subscribing to and reading very carefully, as quite big announcements and changes are often smuggled out in the small print.  In the latest edition, for example, the headline news is the Annual Report (2011-12), while the announcement of the ESRC Future Leaders call for 2012 is only the fifth item down a list of funding opportunities.  To be fair, it was also announced on Twitter and perhaps elsewhere too, and perhaps the email has a wider audience than people like me.  But even so, it’s all a bit low key.

I’ve not got much to add to what I said last year about the Future Leaders Scheme, other than to note with interest the lack of an outline stage this year, and the decision to ring-fence some of the funding for very early career researchers – current doctoral students and those who have only just passed their PhD.  Perhaps the ESRC are now more confident in institutions’ ability to regulate their own submission behaviour, and I can see this scheme being a real test of that.  I know that at the University of Nottingham we’re taking all this very seriously indeed – grant writing is now neither a sprint nor a marathon but more like a steeplechase – and my impression from the ARMA conference is that we’re far from alone in this.  Balancing ‘demand management’ with a desire to encourage applications is a topic for another blog post.  As is the effect of all these calls with early autumn deadlines – I’d argue it’s much harder to demand manage over the summer months, when applicants, reviewers, and research managers are likely to be away on holiday and/or researching.

Something else mentioned in the ESRC email is a light touch review of the ESRC’s international co-investigator policy.  One of the findings was that

“…grant applications with international co-investigators are nearly twice as likely to be successful in responsive mode competitions as those without, strengthening the argument that international cooperation delivers better research.”

This is very interesting indeed.  My first reaction is to wonder whether all of that greater success can be explained by higher quality, or whether the extra value for money offered has made a difference.  Outside of the various international co-operation/bilateral schemes, the ESRC would generally expect to pay only directly incurred research costs for ICo-Is, such as travel, subsistence, transcription, and research assistance.  It won’t normally pay for investigator time and will never pay overheads, which represents a substantial saving compared with naming a UK-based Co-I.

While the added value for money argument will generally go in favour of the application, there are circumstances where it might make it technically ineligible.  When the ESRC abolished the small grants scheme and introduced £200k as the minimum that could be applied for through the research grants scheme, that figure was considered to represent the minimum scale/scope/ambition that they were prepared to entertain.  But a project with a UK Co-I may sneak in just over £200k and be eligible, yet an identical project with an ICo-I would not be eligible, as it would not have salary costs or overheads to bump up the cost.  I did raise this with the ESRC a while back when I was supporting an application that would have been ineligible under the new rules, but we managed to submit it before the final deadline for Small Grants.  The issue did not arise for us then, but I’m sure it has arisen – or will arise – for others.
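To put some flesh on that eligibility point, here’s a quick sketch with entirely invented costings – the £200k floor is real, but the salary, overhead rate, and expense figures below are hypothetical, purely for illustration:

```python
# Hypothetical costings illustrating the eligibility quirk described above.
# The £200k floor is real; every other figure is invented for illustration.

def total_cost(research_expenses, coinvestigator_salary, overhead_rate):
    """Crude total: expenses, plus Co-I salary, plus overheads on that salary."""
    return research_expenses + coinvestigator_salary * (1 + overhead_rate)

FLOOR = 200_000  # ESRC research grants scheme minimum

# Same project twice: once with a UK Co-I (salary and overheads payable),
# once with an international Co-I (no salary, no overheads).
with_uk_coi = total_cost(150_000, 30_000, 0.8)
with_icoi = total_cost(150_000, 0, 0.8)

print(f"UK Co-I version: £{with_uk_coi:,.0f} -> eligible: {with_uk_coi >= FLOOR}")
print(f"ICo-I version:   £{with_icoi:,.0f} -> eligible: {with_icoi >= FLOOR}")
# The cheaper, otherwise identical project falls below the floor precisely
# because it offers better value for money.
```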

The ESRC has clarified the circumstances under which they will pay overseas co-investigator salary costs:

“….only in circumstances where payment of salaries is absolutely required for the research project to be conducted. For example, where the policy of the International Co-Investigator’s home institution requires researchers to obtain funding for their salaries for time spent on externally-funded research projects.

In instances where the research funding structure of the collaborating country is such that national research funding organisations equivalent to the ESRC do not normally provide salary costs, these costs will not be considered. Alternative arrangements to secure researcher time, such as teaching replacement costs, will be considered where these are required by the co-investigator’s home institution.”

This all seems fairly sensible, and would allow the participation of researchers based at institutes where they’re expected to bring in their own salary, as well as those without a substantial research time allocation that could straightforwardly be used for the project.

While it would clearly be inadvisable to add on an ICo-I in the hope of boosting chances of success or for value for money alone, it’s good to know that applications with ICo-Is are doing well with the ESRC even outside of the formal collaborative schemes, and that we shouldn’t shy away from looking abroad for the very best people to work with.   Few would argue with the ESRC’s contention that

[m]any major issues requiring research evidence (eg the global economic crisis, climate change, security etc.) are international in scope, and therefore must be addressed with a global research response.

An Impact Statement: Part 1: Impact and the REF

If your research leads directly or indirectly to this, we'll be having words.....

Partly inspired by a Twitter conversation and partly to try to bring some semblance of order to my own thoughts, I’m going to have a go at writing about impact.  Roughly, I’d argue that:

  • The impact agenda is – broadly – a good thing
  • Although there are areas of uncertainty and plenty of scope for collective learning, I think the whole area is much less opaque than many commentators seem to think
  • While the Research Councils and the REF have a common definition of ‘impact’, they’re looking at it from different ends of the telescope.

This post will come in three parts.  In part one, I’ll try to sketch a bit of background and say something about the position of impact in the REF.  In part two, I’ll turn to the Research Councils and think about how ‘impact’ differs from previous, related-but-different agendas.  In part three, I’ll pose some questions that are puzzling me about impact and test my thinking with examples.

Why Impact?

What’s going on?  Where’s it come from?  What’s driving it?  I’d argue that to understand the impact agenda properly, it’s important to first understand the motivations.  Broadly speaking, I think there are two.

Firstly, I think it arises from a worry about a gap between academic research and those who might find it useful in some way.  How many valuable insights of various kinds from various disciplines have never got further than an academic journal or conference?  While some academics have always considered providing policy advice or writing for practitioner journals a key part of their role, I’m sure that’s not universally true.  I can imagine some of these researchers now complaining, like music obsessives, that they were into impact before anyone else, before it sold out and went all mainstream.  As I’ve argued previously, one advantage of the impact agenda is that it gives engaged academics some long overdue recognition, as well as a much greater incentive for others to become involved in impact-related activities.

Secondly, I think it’s about finding concrete, credible, and communicable evidence of the importance and value of academic research.  If we want to keep research funding at current levels, there’s a need to show return on investment and that the taxpayer is getting value for money.  Some will cringe at the reduction of the importance and value of research to such crude and instrumentalist terms, but we live in a crude and instrumentalist age.  There is an overwhelming case for the social and economic benefits of research, and that case must be made.  Whether we like it or not, no government of any likely hue is just going to keep signing the cheques.  The champions of research in policy circles do not intend to go naked into the conference chamber when they fight our corner.  To what extent the impact agenda comes directly from government, or whether it’s a pre-emptive move, I’m not quite sure.  But the effect is pretty much the same.

What’s Impact in the REF?

The REF definition of impact is as follows:

140. For the purposes of the REF, impact is defined as an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia (as set out in paragraph 143).
141. Impact includes, but is not limited to, an effect on, change or benefit to:
• the activity, attitude, awareness, behaviour, capacity, opportunity, performance, policy, practice, process or understanding
• of an audience, beneficiary, community, constituency, organisation or individuals
• in any geographic location whether locally, regionally, nationally or internationally.
142. Impact includes the reduction or prevention of harm, risk, cost or other negative effects.
Assessment Framework and Guidance on Submissions, page 26.

Paragraph 143 goes on to rule out academic impact on the grounds that it’s assessed in the outputs and environment sections.  Fair enough.  More controversially, it goes on to state that “impacts on students, teaching, and other activities within the submitting HEI are excluded”.  But it’s possible to understand the reasoning.  If such impact were included, there’s a danger that far too many impact case studies would be about how research affects teaching – and while that’s important, I don’t think we’d want it to dominate.  There’s also an argument that the link between research and teaching ought to be so obvious that there’s no need to measure it for particular reward.  In practical terms, I think it would be hard to measure.  I might know how my new theory has changed how I teach my module on (say) organisational behaviour to undergraduates, but it would be hard to track that change across all UK business schools.  I’d also worry about the possible perverse incentives on the shape of the curriculum that allowing impact on teaching might create.

The Main Panel C (the panel for most social sciences) criteria state that:

The main panel acknowledges that impact within its remit may take many forms and occur in a wide range of spheres. These may include (but are not restricted to): creativity, culture and society; the economy, commerce or organisations; the environment; health and welfare; practitioners and professional services; public policy, law and services.  The categories used to define spheres of impact, for the purpose of this document, inevitably overlap and should not be taken as restrictive. Case studies may describe impacts which have affected more than one sphere. (para 77, pg. 68)

There’s actually a lot of detail and some good illustrations of what forms impact might take, and I’d recommend having a read.  I wonder how many academics not directly involved in REF preparations have read it?  One difficulty is finding it – it’s not the easiest document to track down.  For my non-social science reader(s), the other panels’ working methods can be found here.  Helpfully, nothing on that page will tell you which panel is which, but (roughly) Panel A is health and life sciences; B is natural sciences, computing, maths and engineering; C is social sciences; and D is the humanities.  Each panel criteria document has a table with examples of impact.

What else do we know about the place of impact in the REF?  Well, we know that impact has to have occurred in the REF period (1 January 2008 to 31 July 2013) and that it has to be underpinned by excellent research (at least 2*) produced at the submitting university at some point between 1 January 1993 and 31 December 2013.  It doesn’t matter whether the researchers who produced the research are still at the institution – while publications move with the author, impact stays with the institution.  However, I can’t help wondering whether an excessive reliance on research undertaken by departed staff won’t look too much like trading on past glories; probably it’s about getting the balance right.  The number of case studies required is approximately 1 per 8 FTE submitted, but see page 28 of the guidance document for a table.

Impact will have a weighting of 20%, with environment at 15% and outputs (publications) at 65%, and it looks likely that the weighting of impact will increase next time.  However, I wouldn’t be at all surprised if the actual contribution ends up being less than that.  If there’s a general trend for overall scores for impact to be lower than those for (say) publications, then impact’s contribution will end up being less than 20%.  My understanding is that for some units of assessment, environment was consistently rated more highly, thus de facto increasing its weighting.  Unfortunately this is just a recollection of something I read years ago, and which I can’t now find.  But if it’s right, and if impact does come in with lower marks overall, we neglect environment at our peril.
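As a back-of-envelope illustration of why a lower-scoring element contributes less than its nominal weighting, here’s a sketch with entirely invented grade point averages (only the 65/20/15 weightings are real):

```python
# Illustration only: the 65/20/15 weightings are from the REF documentation;
# the grade point averages below are made up.
weights = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}
scores = {"outputs": 3.0, "impact": 2.4, "environment": 3.1}  # invented GPAs

overall = sum(weights[k] * scores[k] for k in weights)  # 2.895 with these numbers
for k in weights:
    share = weights[k] * scores[k] / overall
    print(f"{k}: nominal weight {weights[k]:.0%}, effective share {share:.1%}")

# With these made-up scores, impact contributes about 16.6% of the overall
# result – noticeably less than its nominal 20% weighting.
```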

Are institutions over-reacting to impact?

Interesting article and leader in this week’s Times Higher on the topic of impact, both of which carry arguments that “university managers” have over-reacted to the impact agenda.  I’m not sure whether that’s true or not, but I suspect that it’s all a bit more complicated than either article makes it appear.

The article quotes James Ladyman, Professor of Philosophy at the University of Bristol, as saying that university managers had overreacted and created “an incentive structure and environment in which an ordinary academic who works on a relatively obscure area of research feels that what they are doing isn’t valued”.

If that’s happened anywhere, then obviously things have gone wrong.  However, I do think that this needs to be understood in the context of other groups and sub-groups of academics who likewise feel – or have felt – undervalued.  I can well understand why academics whose research does not lend itself to impact activities would feel alienated and threatened by the impact agenda, especially if it is wrongly presented (or perceived) as a compulsory activity for everyone – regardless of their area of research, skills, and comfort zone – and (wrongly) as a prerequisite for funding.

Another group of researchers who felt – and perhaps still feel – undervalued are those undertaking very applied research.  It’s very hard for them to get their work into highly rated (aka highly valued) journals, and historically the RAE has not been kind to them.  University promotions criteria have perhaps failed to sufficiently recognise public engagement and impact activity – and perhaps still do.  While all the plaudits go to their highly theoretical colleagues, applied researchers can feel looked down upon, and struggle to get academic recognition.  And if we were to ask academics whose roles are mainly teaching (or teaching and admin) rather than research, I think we might find that they feel undervalued by a system which many of them see as obsessed with research and setting little store by excellent (rather than merely adequate) teaching.  Doubtless increased fees will change this, and perhaps we will then hear complaints about the under-valuing of research relative to teaching.

So if academics working in non-impact-friendly (NIF, from now on) areas of research are now feeling undervalued, they’re very far from alone.  It’s true that the impact agenda has brought about changes to how we do things, but I think it could be argued that it’s not that NIF research is now undervalued, but that other kinds of research and academic endeavour – namely applied research and impact activities (ARIA from now on) – are now being valued to a greater degree than before.  Dare I say it, to an appropriate degree?  The problem is that ‘value’ and ‘valuing’ tend to be seen as a zero-sum game – if I decide to place greater emphasis on apples, the oranges may feel that they have lost fruit bowl status and are no longer the, er, top banana.  Even if I love oranges just as much as before.

Exactly how institutions ‘value’ (whatever we mean by that) NIF research and ARIA is an interesting question.  It seems clear to me that an institution/school/manager/grant-giving body/REF/whatever could err either way, by undervaluing and under-rewarding either.  We need both.  And we need excellent teachers.  And – dare I say it – non-academic staff too.  Perhaps the challenge for institutions is getting the balance right and making everyone feel valued, reflecting different academic activities fairly in recruitment and selection processes and promotion criteria.  Not easy, when any increased emphasis on one area seems to cause others to feel threatened.

Responding to Referees

Preliminary evidence appears to show that this approach to responding to referees is - on balance - probably sub-optimal. (Photo by Tseen Khoo)

This post is co-authored by Adam Golberg of Cash for Questions (UK), and Jonathan O’Donnell and Tseen Khoo of The Research Whisperer (Australia).

It arises out of a comment that Jonathan made about understanding and responding to referees on one of Adam’s posts about what to do if your grant application is unsuccessful. This seemed like a good topic for an article of its own, so here it is, cross-posted to our respective blogs.

A quick opening note on terminology: We use ‘referee’ or ‘assessor’ to refer to academics who read and review research grant applications, then feed their comments into the final decision-making process. Terminology varies a bit between funders, and between the UK and Australia. We’re not talking about journal referees, although some of the advice that follows may also apply there.

————————————-

There are funding schemes that offer applicants the opportunity to respond to referees’ comments. These responses are then considered alongside the assessors’ scores/comments by the funding panel. Some funders (including the Economic and Social Research Council [ESRC] in the UK) have a filtering process before this point, so if you are being asked to respond to referees’ comments, you should consider it a positive sign as not all applications get this far. Others, such as the Australian Research Council (ARC), offer you the chance to write a rejoinder regardless of the level of referees’ reports.

If the funding body offers you the option of a response, you should consider your response as one of the most important parts of the application process.  A good response can draw the sting from criticisms, emphasise the positive comments, and enhance your chances of getting funding.  A bad one can doom your application.

And if you submit no response at all? That can signal negative things about your project and research team that might live on beyond this grant round.

The first thing you might need to do when you get the referees’ comments about your grant application is kick the (imaginary) cat.* This is an important process. Embrace it.

When that’s out of your system, here are four strategies for putting together a persuasive response and pulling that slaved-over application across the funding finish line.

1. Attitude and tone

Be nice.  Start with a brief statement thanking the anonymous referees for their careful and insightful comments, even if you actually suspect some of them are idiots who haven’t read your masterpiece properly.  Think carefully about the tone of the rest of the response as well.  You’re aiming for calm, measured, and appropriately assertive.  There’s nothing wrong with saying that a referee is just plain wrong on a particular point, but do it calmly and politely.  If you’re unhappy about a criticism or a reviewer, there’s a good chance that it will take several drafts before you eliminate all the spikiness from the text.  If it makes you feel better (and it might), you can write what you really think in the tone that you think it in but, whatever you do, don’t send that version!  This is the version that may spontaneously combust from the deadly mixture of vitriol and pleading contained within.

Preparing a response is not about comprehensively refuting every criticism, or establishing intellectual superiority over the referees. You need to sift the comments to identify the ones that really matter. What are the criticisms (or backhanded compliments) that will harm your cause? Highlight those and answer them methodically (see below). Petty argy-bargy isn’t worth spending your time on.

2. Understanding and interpreting referees’ comments

One UK funder provides referee report templates that invite the referees to state their level of familiarity with the topic and even a little about their research background, so that the final decision-making panel can put their comments into context. This is a great idea, and we would encourage other funding agencies to embrace it.

Beyond this volunteered information (if provided), never assume you know who the referee is, or that you can infer anything else about them – you could be going way off-base with your rant against econometricians who don’t ‘get’ sociological work.  If there’s one thing worse than an ad hominem response, it’s an ad hominem response aimed at the wrong target!

One exercise that you might find useful is to produce a matrix listing all of the criticisms, indicating which referee(s) made each objection.  As these reports are produced independently, the more referees who make a particular point, the more problematic it might be.  This tabulated information can be sorted by section (e.g. methodology, impact/dissemination plan, alternative approaches).  You can then repeat the exercise with the positive comments that were made.  While assimilating and processing information is a task that academics tend to be good at, it’s worth being systematic about this, because it’s easy to overlook praise or attach too much weight to the objections that are the most irritating.

Also, look out for, and highlight, any requests that you do a different project. Sometimes, these can be as obvious as “you should be doing Y instead”, where Y is a rather different project and probably closer to the reviewer’s own interests. These can be quite difficult criticisms to deal with, as what they are proposing may be sensible enough, but not what you want to do.  In such cases, stick to your guns, be clear what you want to do, and why it’s of at least as much value as the alternative proposal.

Using the matrix that you have prepared, consider further how damaging each criticism might be in the minds of the decision makers.  By taking the weight of opinion on each point (positive remarks minus criticisms) and multiplying by the potential damage, you should now have a sense of which are the most serious criticisms.
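If it helps to be concrete, here’s a minimal sketch of that prioritisation exercise – every issue, referee, and number below is invented, and the scoring simply flips the sign of the ‘weight of opinion’ so that a higher score flags a more serious criticism:

```python
# Hypothetical example of the matrix exercise described above.
# Score = (criticisms minus positive remarks on the same point) * damage,
# i.e. the "weight of opinion" with the sign flipped, so a higher score
# marks a more serious criticism.

criticisms = [
    # (issue, referees raising it, positive remarks on same point, damage 1-5)
    ("sample size too small for subgroup analysis", {"R1", "R3"}, 0, 5),
    ("dissemination plan is thin",                  {"R2"},       1, 2),
    ("timeline looks optimistic",                   {"R1"},       0, 3),
]

def seriousness(entry):
    issue, raised_by, positives, damage = entry
    return (len(raised_by) - positives) * damage

# Deal with the highest-scoring criticisms first, and give them the most space.
for entry in sorted(criticisms, key=seriousness, reverse=True):
    issue, raised_by, _, _ = entry
    print(f"{seriousness(entry):>3}  {issue}  (raised by {sorted(raised_by)})")
```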

Preparing a response is not a task to be attempted in isolation. You should involve other members of your team, and make full use of your research support office and senior colleagues (who are not directly involved in the application). Take advantage of assistance in interpreting the referees’ comments, and reviewing multiple drafts of your response.

Don’t read the assessor reports by themselves; go back to your whole application too, several times if necessary.  It has probably been some time since you submitted it, and new eyes and a bit of distance will help you to see the application as the referees may have seen it.  You may be able to pinpoint the reasons for particular criticisms, or the misunderstandings you assume the referees made.  While their criticisms may not be valid for the application you thought you wrote, they may very well be valid for the one that you actually submitted.

3. The response

You should plan to use the available space in line with the exercise above, setting aside space for each criticism in proportion to its risk of stopping you getting funded.

Quibbles about your budgeted expenditure for hotel accommodation are insignificant compared to objections that question your entire approach, devalue your track-record, invalidate your methodology, or claim that you’re adding little that’s new to the sum of human knowledge. So, your response should:

  • Make it easy for the decision-makers: Be clear and concise.
  • Be specific when rebutting, quoting from the application where possible.  For example: “As we stated on page 24, paragraph 3…”.  However, don’t lose sight of the need to create a document that can be understood in isolation as far as possible.
  • If possible and appropriate, introduce something that you’ve done in the time since submission to rebut a negative comment (be careful, though, as some schemes may not allow the introduction of new material).
  • Acknowledge any misunderstandings that arise from the application’s explanatory shortcomings or limitations of space, and be open to new clarifications.
  • Be grateful for the positive comments, but focus on rebutting the negative comments.

4. Be the reviewer

The best way to really get an idea of what the response dynamic is all about in these funding rounds is to become a grant referee yourself.  Once you’ve assessed a few applications and cut your teeth on a whole funding round (they can often be year-long processes), you quickly learn about the demands of the job and how regular referees ‘value’ applications.

Look out for chances to be on grant assessment panels, and say yes to invitations to review for various professional bodies or government agencies. Almost all funding schemes could do with a larger and more diverse pool of academics to act as their ‘gate-keepers’.

Finally: Remember to keep your eyes on the prize. The purpose of this response exercise is to give your project the best possible chance of getting funding. It is an inherent part of many funding rounds these days, and not only an afterthought to your application.

* The writers and their respective organisations do not, in any way, endorse the mistreatment of animals. We love cats.  We don’t kick them, and neither should you. It’s just an expression. For those who’ve never met it, it means ‘to vent your frustration and powerlessness’.

I’ve disabled comments on this entry so that we can keep conversations on this article to one place – please head over to the Research Whisperer if you’d like to comment. (AG).

Coping with rejection: What to do if your grant application is unsuccessful. Part 2: Next Steps

Look, I know I said that not getting funded doesn't mean they disliked your proposal, but I need a picture and it's either this or a picture of Simon Cowell with his thumb down. Think on.

In the first part of this series, I argued that it’s important not to misunderstand or misinterpret the reasons for a grant application being unsuccessful.  In the comments, Jo VanEvery shared a phrase that she’s heard from a senior figure at one of the Canadian Research Councils – that research funding “is not a test, it’s a contest”.  Not getting funded doesn’t necessarily mean that your research isn’t considered to be of high quality.  This second entry is about what steps to consider next.

1.  Some words of wisdom

‘Tis a lesson you should heed:  Try, try, try again.
If at first you don’t succeed, Try, try, try again
William Edward Hickson (1803-1870)

The definition of insanity is doing the same thing over and over but expecting different results
Ben Franklin, Albert Einstein, or Narcotics Anonymous

I like these quotes because they’re both correct in their own way.  There’s value to Hickson’s exhortation.  Success rates are low for most schemes and most funders, so even if you’ve done everything right, the chances are against you.  To be successful, you need a degree of resilience to look for another funder or a new project, rather than embarking on a decade-long sulk, muttering plaintively about how “the ESRC doesn’t like” your research whenever the topic of external funding is raised.

However, Franklin et al (or al?) also have a point about repeating the same mistakes without learning anything as you drift from application to application.  While doing this, you can convince yourself that research funding is a lottery (which it isn’t) and that all you have to do is submit enough applications and eventually your number will come up (which it won’t).  This is the kind of approach (on the part of institutions as well as individuals) that’s pushed us close to ‘demand management’ measures with the ESRC.  More on learning from the experience in a moment or two.

2.  Can you do the research anyway?

This might seem like an odd question to ask, but it’s always the first one I ask academic colleagues who’ve been unsuccessful with a grant application (yes, this does happen, even at Nottingham University Business School).  The main component of most research projects is staff time, and if you’re fortunate enough to be employed by a research-intensive institution which gives you a generous research time allocation, then this shouldn’t be a problem.  Granted, you can’t have that full-time research associate you wanted, but could you cut down the project and take on some or all of that work yourself, or share it between the investigators?  Could you involve more people – perhaps junior colleagues – to help cover the work?  Would others be willing to be involved if they could either co-author or be sole author on some of the outputs?  Could it be a PhD project?

Directly incurred research expenses are more of a problem – transcription costs, data costs, travel – especially if you and your co-investigators don’t have personal research accounts to dip into.  But if it turns out that all you need is your expenses paid, then a number of other funding options become viable – some external, but perhaps also some internal.

Of course, doing it anyway isn’t always possible, but it’s worth asking yourself and your team that question.  It’s also one that’s well worth asking before you decide to apply for funding.

3.  What can you learn for next time?

It’s not nice not getting your project funded.  Part of you probably wants to lock that application away and not think about it again – move onwards and upwards, and perhaps try again with another research idea.  While resilience is important, it’s just as important to learn whatever lessons there are to learn, to give yourself the best possible chance next time.

One lesson you might be able to take from the experience is about planning the application.  If you found yourself running out of time, not getting sufficient input from senior colleagues, or not taking full advantage of the support available within your institution, well, that’s a lesson to learn.  Give yourself more time, start well before the deadline, and don’t make yourself rush it.  If you did all this last time, remember that you did, and the difference that it made.  If you didn’t, then the fact is that your application was almost certainly not as strong as it could have been.  And if your application document is not the strongest possible iteration of your research idea, your chances of getting funded are pretty minimal.

I’d recommend reading through your application and the call guidance notes once again in the light of the referees’ comments.  Now that you have some distance from the application, you should ‘referee’ it yourself as well.  What would you do better next time?  Not necessarily individual application-specific aspects, but more general points.  Did your application address the priorities of the call specifically enough, or were the crowbar marks far too visible?  Did you get the balance right between exposition of the background and writing about the current project?  Did you pay enough attention to each section?  Did you actually answer the questions asked?  Do you understand any criticisms that the referees had?

4. Can you reapply?  Should you reapply?

If it’s the ESRC you’re thinking about, then the answer’s no, unless you’re invited to resubmit.  I think we’re still waiting on guidance from the ESRC about what constitutes a resubmission, but if you find yourself thinking about how much you might need to tinker with your unsuccessful project to make it count as a fresh submission, then the chances are that you’re barking up the wrong tree.  The worst case scenario is that it’s thrown straight out without review, and the best case is probably that you end up with something a little too contrived to stand any serious chance of funding.

Some other research funders do allow resubmissions, but generally you will need to declare it.  While you might get lucky with a straight resubmission, my sense is that if it was unsuccessful once it will be unsuccessful again. But if you were to thoroughly revise it, polish it, take advice from anyone willing to give it, and have one more go, well, who knows?

But there’s really no shame in walking away.  Onwards and upwards to the next idea.  Let this one go for now, and work on something new and fresh and exciting instead.  Just remember everything that you learnt along the way.  One former colleague told me that he usually got at least one paper out of an application, even if it was unsuccessful.  I don’t know how true that might be more generally, but you’ve obviously done a literature review and come up with some ideas for future research.  Might there be a paper in all that somewhere?

Another option – which I hinted at earlier when I mentioned seeking funding for the directly incurred costs only – is resubmitting to another funder.  My advice on this is simple…. don’t.  Or at least, don’t treat it like a resubmission.  Every research funder, every scheme, has different interests and priorities.  You wrote an application for one funder, which presumably was tailored to that funder (it was, wasn’t it?).  So a few alterations probably won’t be enough.

For one thing, the application form is almost certainly different, and that eight-page monstrosity won’t fit into two pages.  But cut it down crudely, and if it reads like it’s been cut down crudely, you have no chance.  I’ve never worked for a research funding body (unless you count internal schemes where I’ve had a role in managing the process), but I would imagine that if I did, the best way to annoy me (other than using the word ‘impactful‘) would be sending me some other funder’s cast-offs.  It’s not quite like romancing a potential new partner and using your old flame’s name by mistake, but you get the picture.  Your new funder wants to feel special and loved.  They want you to have picked out them – and them alone – for their unique and enlightened approach to funding.  Only they can fill the hole in your heart, er, wallet, and satisfy your deep yearning for fulfilment.

And where should you look if your first choice funder does not return your affections?  Well, I’m not going to tell you (not without a consultancy fee, anyway).  But I’m sure your research funding office will be able to help find you some new prospective partners.

 

A partial, qualified, cautious defence of the Research Excellence Framework (REF)

No hilarious visual puns on REF / Referees from me....

There’s been a constant stream of negative articles about the Research Excellence Framework (for non-UK readers, this is the “system for assessing the quality of research in UK higher education institutions”) over the last few months, and two more have appeared recently (from David Shaw, writing in the Times Higher, and from Peter Wells on the LSE Impact Blog)  which have prompted me to respond with something of a defence of the Research Excellence Framework.

One crucial fact that I left out of the description of the REF in the previous paragraph is that “funding bodies intend to use the assessment outcomes to inform the selective allocation of their research funding to HEIs, with effect from 2015-16”.  And I think this is a fact that’s also overlooked by some critics.  While a lot of talk is about prestige and ‘league tables’, what’s really driving the process is the need to have some mechanism for divvying out the cash for funding research – QR funding.  We could most likely do without a “system for assessing the quality of research” across every discipline and every UK university in a single exercise using common criteria, but we can’t do without a method of dividing up the cake as long as there’s still cake to share out.

In spite of the current spirit of perpetual revolution in the sector, money  is still paid (via HEFCE) to universities for research, without much in the way of strings attached.  This basic, core funding is one half of the dual funding system for research in the UK – the other half being funding for individual research projects and other activities through the Research Councils.  What universities do with their QR funding varies, but I think typically a lot of it is in staff salaries, so that the number of staff in any given discipline is partly a function of teaching income and research income.

I do have sympathy for some of the arguments against the REF, but I find myself returning to the same question – if not this way, then how? 

It’s unfair to expect anyone who objects to any aspect of the REF to furnish the reader with a fully worked up alternative, but constructive criticism must at least point the way.  One person who doesn’t fight shy of coming up with an alternative is Patrick Dunleavy, who has argued for a ‘digital census’ involving the use of citation data as a cheap, simple, and transparent replacement for the REF.  That’s not a debate I feel qualified to participate in, but my sense is that Dunleavy’s position on this is a minority one in UK academia.

In general, I think that criticisms of the REF tend to fall into the following broad categories.  I don’t claim to address decisively every last criticism made (hence the title), but for what it’s worth, here are the categories that I’ve identified, and what I think the arguments are.

1.  Criticism over details

The REF team have a difficult balancing act.  On the one hand, they need rules which are sensitive to the very real differences between academic disciplines.  On the other, fairness and efficiency call for as much similarity in approach, rules, and working methods as possible between panels.  The more differences between panels, the greater the chances of confusion, and of mistakes being made in the process of planning and submitting REF returns – mistakes which could seriously affect both notional league table placing and cold hard cash.  The more complicated the process, the greater the transaction costs.  Which brings me to the second balancing act.  On the one hand, it needs to be a rigorous and thorough process, with so much public money at stake.  On the other hand, it needs to be lean and efficient, minimising the demands on the time of institutions, researchers, and panel members.  This isn’t to say that the compromise reached on any given point between particularism and uniformity, and between rigour and efficiency, is necessarily the right one, of course.  But it’s not easy.

2.  Impact

The use of impact at all.  The relative weighting of impact.  The particular approach to impact.  The degree of uncertainty about impact.  It’s a step into the unknown for everyone, but I would have thought that the idea that there be some notion of impact – some expectation that where academic research makes a difference in the real world, we should ensure that it does so – is hard to object to.  I have much more sympathy with some academic disciplines than others as regards objections to the impact agenda.  Impact is really a subject for a blog post in itself, but for now, it’s worth noting that it would be inconsistent to argue against the inclusion of impact in the REF while also arguing that the REF is too narrow in terms of what it values and what it assesses.

3.  Encouraging game playing

While it’s true that the REF will encourage game playing in similar (though different) ways to its predecessors, I can’t help but think this is inevitable and would also be true of every possible alternative method of assessment.  And what some would regard as gaming, others would regard as just doing what is asked of them.

One particular ‘game’ that is played – or, if you prefer, strategic decision that is made – concerns the threshold for submission.  It’s clear that there’s no incentive to include those whose outputs are likely to fall below the minimum threshold for attracting funding.  But it’s common for some institutions, in some disciplines, to set a threshold above this, with one eye not only on the QR funding, but also on league table position.  There are two arguments that can be made against this.  One is that QR funding shouldn’t be so heavily concentrated on the top-rated submissions, and/or that more funding should be available.  But that’s not an argument against the REF as such.  The other is that institutions should be obliged to submit everyone.  But the costs of doing so would be huge, and it’s not clear to me what the advantages would be – would we really get better or more accurate results with which to share out the funding?  Because ultimately the REF is not about individuals, but institutions.

4. Perverse incentives

David Shaw, in the Times Higher, sees a very dangerous incentive in the REF.

REF incentivises the dishonest attribution of authorship. If your boss asked you to add someone’s name to a paper because otherwise they wouldn’t be entered into the REF, it could be hard to refuse.

I don’t find this terribly convincing.  While I’m sure that there will be game playing around who should be credited with co-authored publications, I’d see that as acceptable in a way that the fraudulent activity that Shaw fears (but stresses that he’s not experienced first-hand) just isn’t.  There is opportunity for – and temptation to – fraud, bad behaviour and misconduct in pretty much everything we do, from marking students’ work to reporting our student numbers and graduate destinations.  I’m not clear how that makes any of these activities ‘unethical’ in the way his article seems to argue.  Fraud is rare in our sector, and when anyone does commit fraud, it’s a huge scandal and heads roll.  It ruins careers and leaves a long shadow over institutions.  Even leaving aside the residual decency and professionalism that’s the norm in our sector, it would be a brave Machiavellian Research Director who would risk attempting this kind of fraud.  To make it work, you need the cooperation and the silence of two academic researchers for every single publication.  Risk versus reward – it’s just not worth it.

Peter Wells, on the LSE blog, makes the point that the REF acts as an active disincentive for researchers to co-author papers with colleagues at their own institution, as only one of them can return the output to the REF.  That’s an oversimplification, but it’s certainly true that there’s active discouragement of submitting the same output multiple times in the same return.  There’s no such problem if the co-author is at another institution, of course.  However, I’m not convinced that this theoretical disincentive makes a huge difference in practice.  Don’t academics co-author papers with the most appropriate colleague, whether internal or external?  How often – really – does a researcher choose to write something with a colleague at another institution rather than a colleague down the corridor, for REF reasons alone?  And might the REF incentive to include junior colleagues as co-authors that Shaw identifies work in the other direction, for genuinely co-authored pieces?

In general, proving the theoretical possibility of a perverse incentive is not sufficient to prove its impact in reality.

5.  Impact on morale

There’s no doubt that the REF causes stress and insecurity and can add significantly to the workload of those involved in leading on it.  There’s no doubt that it’s a worrying time, waiting for news of the outcome of the R&R paper that will get you over whatever line your institution has set for inclusion.  I’m sure it’s not pleasant being called in for a meeting with the Research Director to answer for your progress towards your REF targets, even with the most supportive regime.

However…. and please don’t hate me for this…. so what?  I’m not sure that the bare fact that something causes stress and insecurity is a decisive argument.  Sure, there’s a prima facie case for trying to make people’s lives better rather than worse, but that’s about it.  And again, what alternative system would be equally effective at dishing out the cash while being less stressful?  The fact is that every job – including university jobs – is sometimes stressful and has downsides as well as upsides.  Among academic staff, the number one stress factor I’m seeing at the moment is marking, not the REF.

6.  Effect on HE culture

I’ve got more time for this argument than for the stress argument, but I think a lot of the blame is misdirected.  Take Peter Wells’ rather utopian account of what might replace the REF:

For example, everybody should be included, as should all activities.  It is partly by virtue of the ‘teaching’ staff undertaking a higher teaching load that the research active staff can achieve their publications results; without academic admissions tutors working long hours to process student applications there would be nobody to receive research-led teaching, and insufficient funds to support the University.

What’s being described here is not in any sense a ‘Research Excellence Framework’.  It’s a much broader ‘Academic Excellence Framework’, and that doesn’t strike me as something that’s particularly easy to assess.  How on earth could we go about assessing absolutely everything that absolutely everyone does?  Why would we give out research cash according to how good an admissions tutor someone is?

I suspect that what underlies this – and some of David Shaw’s concerns as well – is a much deeper unease about the relative prestige and status attached to different academic roles: the research superstar; the old-fashioned teaching-and-research lecturer; those with heavy teaching and admin loads who are de facto teaching only; and those who are de jure teaching only.  There is certainly a strong sense that teaching is undervalued – in appointments, in promotions, in status, and in other ways.  Those with higher teaching and admin workloads do enable others to research in precisely the way that Wells argues, and respect and recognition for those tasks is certainly due.  And I think the advent of increased tuition fees is going to change things – for the better, in the sense of raising the profile and status of excellent teaching.

But I’m not sure why any of these status problems are the fault of the REF.  The REF is about assessing research excellence and giving out the cash accordingly.  If the REF is allowed to drive everything, and non-inclusion is such a badge of dishonour that the contributions of academics in other areas are overlooked, well, that’s a serious problem.  But it’s an institutional one, and not one that follows inevitably from the REF.  We could completely change the way the REF works tomorrow, and it will make very little difference to the underlying status problem.

It’s not been my intention here to refute each and every argument against the REF, and I don’t think I’ve even addressed directly all of Shaw and Wells’ objections.  What I have tried to do is to stress the real purpose of the REF, the difficulty of the task facing the REF team, and make a few limited observations about the kinds of objections that have been put forward.  And all without a picture of Pierluigi Collina.

Leverhulme Trust to support British Academy Small Research Grant scheme

BA staff examine the Leverhulme memorandum of understanding

The British Academy announced yesterday that it has reached a new collaborative agreement with the Leverhulme Trust over funding for its Small Grants Scheme.  This is very good news for researchers in the humanities and the social sciences, and I’m interrupting my series of gloom-and-doom posts on what to do if your application is unsuccessful to inflict my take on some really good news upon you, oh gentle reader.  And to see if I can set a personal best for the number of links in an opening sentence.  Which I can.

When I first started supporting grant-getting activity back in the halcyon days of 2005ish, the British Academy Small Grants scheme was a small and beautifully formed one.  It funded up to £7.5k or so for projects of up to two years, and only covered research expenses – so no funding for investigator time, replacement teaching, or overheads, but it would cover travel, subsistence, transcription, data, casual research assistance and so on.  It was a light-touch application on a simple form, and enjoyed a success rate of around 50% or so.  The criterion for funding was academic merit.  Nothing else mattered.  It funded some brilliant work, and Ken Emond of the British Academy has always spoken very warmly about the scheme and considered it a real success story.  Gradually people started cottoning on to just how good a scheme it was, and success rates started to drop – but that’s what happens when you’re successful.

Then along came the Comprehensive Spending Review and budgets were cut.  I presume the scheme was slated to be scrapped under government pressure, only for our heroes at the BA eventually to win the argument.  At the same time, the ESRC decided that their reviewers weren’t going to get out of bed in the morning for less than £200k.  Suddenly bigger projects were the only option, and (funded) academic research looked to be all about perpetual paradigm shifts, with only outstanding work that would change everything getting funded.  And there was no evidence of any thought as to how these major theoretical breakthroughs, gained through massive grants, might be developed, expanded, exploited, and extended through smaller projects.

Although it was great to see the BA SGS scheme survive in any form, the reduced funding made it inevitable that success rates would plummet.  However, the increased funding from the Leverhulme Trust could make a difference.  According to the announcement, the Trust has promised £1.5 million of funding over three years.  Let’s assume:

  • that every penny goes on supporting research rather than on infrastructure and overheads, and that it’s all additional (rather than replacement) funding
  • that £10k will remain the maximum available
  • that the average amount awarded will be £7.5k

So…. £1.5m over three years is £500k per year.  £500k divided by £7.5k average project cost is about 67 extra projects a year.  While we don’t know how many projects will be funded in this year’s reduced scheme, we do know about last year.  According to the British Academy’s 2010/11 annual report:

For the two rounds of competition held during 2010/11 the Academy received 1,561 applications for consideration and 538 awards were made, a success rate of 34.5%.  Awards were spread over the whole range of Humanities and Social Sciences, and were made to individuals based in more than 110 institutions, as well as to more than 20 independent scholars.

2010/11 was the last year that the scheme ran in full, and at the time we all thought that the spring 2011 call would be the last, so I suspect that the success rate might have been squeezed by a number of ‘now-or-never’ applications.  We won’t know until next month how many awards were made in the autumn 2011 call, nor what the success rate was, so we won’t know until then whether the Leverhulme cash will restore the scheme to its former glory.  I suspect that it won’t, and that the combined total of the BA’s own funds and the Leverhulme contribution will add up to less than was available for the scheme before the comprehensive spending review struck.
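For the numerically inclined, here’s the back-of-the-envelope arithmetic as a quick Python sketch.  The £1.5m and the 2010/11 figures come from the announcement and the annual report respectively; the £7.5k average award is, as above, just my assumption:

    # Back-of-the-envelope estimate of the extra projects the Leverhulme
    # money might fund, under the assumptions listed above.
    leverhulme_total = 1_500_000  # £1.5m over three years, per the announcement
    years = 3
    avg_award = 7_500             # assumed average award (£) – my guess, not a BA figure

    per_year = leverhulme_total / years    # £500k per year
    extra_projects = per_year / avg_award  # about 66.7, i.e. roughly 67 extra projects
    print(f"Extra projects per year: {extra_projects:.0f}")

    # For comparison, the 2010/11 figures from the BA annual report:
    applications, awards = 1561, 538
    print(f"2010/11 success rate: {awards / applications:.1%}")

Run it and you get 67 projects a year and a 34.5% success rate – reassuringly, the same numbers as above.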

Nevertheless, there will be about 67 more small social science and humanities projects funded than would otherwise have been the case.  So let’s raise a non-alcoholic beverage to the Leverhulme Trust, and to the memory of founder William Hesketh Lever and his family’s values of “liberalism, nonconformity, and abstinence”.

23rd Jan update:  In response to a question on Twitter from @Funding4Res (aka Marie-Claire from the University of Huddersfield’s Research and Enterprise team), the British Academy have said that “there’ll be rounds for Small Research Grants in the spring and autumn. Dates will be announced soon.”

Coping with rejection: What to do if your grant application is unsuccessful. Part 1: Understand what it means…. and what it doesn’t mean

[Image caption: “You can’t have any research funding. In this life, or the next….”]

Some application and assessment processes are for limited goods, and some are for unlimited goods, and it’s important to understand the difference.  PhD vivas and driving tests are assessments for unlimited goods – there’s no limit on how many PhDs or driving licences can be issued.  In principle, everyone could have one if they met the requirements.  You’re not going to fail your driving test because there are better drivers than you.  Other processes are for limited goods – there is (usually) only one job vacancy that you’re all competing for, only so many papers that a top journal can accept, and only so much grant money available.

You’d think this was a fairly obvious point to make.  But talking to researchers who have been unsuccessful with a particular application, there’s sometimes more than a hint of hurt in their voices as they discuss it, and they talk in terms of their research being rejected, or not being judged good enough.  They end up taking it rather personally.  And given the amount of time and effort that researchers must put into their applications, that’s not surprising.

It reminds me of an unsuccessful job applicant whose opening gambit at a feedback meeting was to ask me why I didn’t think that she was good enough to do the job.  Well, my answer was that I was very confident that she could do the job, it’s just that there was someone more qualified and only one post to fill.  In this case, the unsuccessful applicant was simply unlucky – an exceptional applicant was offered the job, and nothing she could have said or done (short of assassination) would have made much difference.  While I couldn’t give the applicant the job she wanted or make the disappointment go away, I could at least pass on the panel’s unanimous verdict on her appointability.  My impression was that this restored some lost confidence, and did something to salve the hurt and disappointment.  You did the best that you could.  With better luck you’ll get the next one.

Of course, with grant applications, the chances are that you won’t get to speak to the chair of the panel for an explanation of the decision.  You’ll get a letter with the decision and something about how oversubscribed the scheme was and how hard the decisions were, which might or might not be true.  Your application might have missed out by a fraction, or been one of the first into the discard pile.

Some funders, like the ESRC, will pass on anonymised referees’ comments, but oddly, this isn’t always constructive, and can even damage confidence in the quality of the peer review process.  In my experience, every batch of referees’ comments will contain at least one weird, wrong-headed, careless, or downright bizarre comment, and sometimes several.  Perhaps a claim about the current state of knowledge that’s just plain wrong, a misunderstanding that could only come from not reading the application properly, or a criticism on the spurious grounds that it’s not the project the referee would have done.  These apples are fine as far as they go, but they should really taste of oranges.  I like oranges.

Don’t get me wrong – most referees’ reports that I see are careful, conscientious, and insightful, but it’s those misconceived criticisms that unsuccessful applicants will remember, even ahead of the valid ones.  And sometimes they will conclude that it’s those wrong criticisms that are the reason for not getting funded.  Everything else was positive, so that one negative review must be the reason, yes?  Well, maybe not.  It’s also possible that the bizarre comment was discounted by the panel too, and the reason that your project wasn’t funded was simply that the money ran out before they reached it.  But we don’t know.  I really, really, really want to believe that that’s the case when referees write that a project is “too expensive” without explaining how or why.  I hope the panel read our carefully constructed budget and our detailed justification for resources, and treat that comment with the fECing contempt that it deserves.

Fortunately, the ESRC have announced changes to procedures which allow not only a right of reply to referees’ comments, but also the communication of the final grade awarded.  This should give a much stronger indication of whether an application was a near miss or miles off.  Of course, the news that an application was miles off the required standard may come gift-wrapped with sanctions.  So it’s not all good news.

But this is where we should be heading with feedback.  Funders shouldn’t be shy about saying that the application was a no-hoper, and they should be giving as much detail as possible.  Not so long ago, I was copied into a lovely rejection letter, if there’s any such thing.  It passed on comments, included some platitudes, but also told the applicant what the overall ranking was (very close, but no cigar) and how many applications there were (many more than the team expected).  Now at least one of the comments was surprising, but we know the application was taken seriously and given a thorough review.  And that’s something….

So… in conclusion….  just because your project wasn’t funded doesn’t (necessarily) mean that it wasn’t fundable.  And don’t take it personally.  It’s not personal.  Just the business of research funding.

New year’s wishes….

The new calendar year is traditionally a time for reflection and for resolutions, but in a fit of hubris I’ve put together a list of resolutions I’d like to see for the sector, research funders, and university culture in general.  In short, for everyone but me.  But to show willing, I’ll join in too.

No more of the following, please….

1.  “Impactful”

Just…. no.  I don’t think of myself as a linguistic purist or a grammar-fascist, though I am a pedant for professional purposes.  I recognise that language changes and evolves over time, and I welcome changes that bring new colour and new descriptive power to our language.  While I accept that the ‘impact agenda’ is here to stay for the foreseeable future, the ‘impactful’ agenda need not be.  The technical case against this monstrosity of a word is outlined at Grammarist, but surely the aesthetic case is conclusive in itself.  I warn anyone using this word in my presence that I reserve the right to tell them precisely how annoyful they’re being.

2.  The ‘Einstein fallacy’

This is a misguided delusion that a small but significant proportion of academics appear to be suffering from.  It runs a bit like this:
1) Einstein was a genius
2) Einstein was famously absent-minded and shambolic in his personal organisation
3) Conclusion:  If I am, or pretend to be, absent-minded and shambolic, either:
(3a) I will be a genius; or
(3b) People will think I am a genius; or
(3c) Both.

I accept that some academics are genuinely bad at administration and organisation.  In some cases it’s a lack of practice or experience, in others a lack of confidence, and sometimes it’s just not where their interests and talents lie.  Fair enough.  But please stop being deliberately bad at it to try to impress people.  You can only act like a prima donna if you have the singing skills to back it up…

3.  Lack of predictability in funding calls

Yes, I’m looking at you, ESRC.  Before the comprehensive spending review and all of the changes that followed from it, we had a fairly predictable annual cycle of calls, very few of which had very early autumn deadlines.  Now we’re into a new cycle which may or may not be predictable, and a lot of the deadlines seem to fall very early in the academic year.  Sure, let’s have one-off calls on particular topics, but let’s have a predictable annual cycle for everything else, with as much advance notice as possible.  It’ll help hugely with ‘demand management’, because it’ll be much easier to postpone applications that aren’t ready if we know there will be another call.  For example, I was aware of a couple of very strong seminar series ideas which needed further work and discussion within the relevant research and research-user communities.  My advice was to start that work now, using the existence of the current call as impetus, and to submit next year.  But we’ve taken a gamble, as we don’t know if there will be another call in the future, and you can’t tell me because apparently a decision has yet to be made.

4.  Lazy “please forward as appropriate” emails

Stuff sent to me from outside the Business School with the expectation that I’ll just send it on to everyone.  No.  Email overload is a real problem, and I write most of my emails with the expectation that I have ten seconds at most either to get the message across, or to earn an attention extension.  I mean, you’re not even reading this properly, are you?  You’re probably skim-reading this in case there’s a nugget of wit amongst the whinging.  Every email I send creates work for others, and every duff, dodgy, or irrelevant email I send reduces my e-credit rating.  I know for a fact that at least some former colleagues deleted everything I sent without reading it – there’s no other explanation I can think of for missing two emails whose subject lines included the magic words “sabbatical leave”.

So… will I be spending my e-credit telling my colleagues about your non-Business-School-related event, which will be of interest to no-one?  No, no, and most assuredly no.  I will forward it “as appropriate”, if by “appropriate” you mean my deleted items folder.

Sometimes, though, a handful of people might be interested.  Or quite a lot of people might be interested, but it’s not worth an individual email.  Maybe I’ll put it on the portal, or include it in one of my occasional news and updates emails.  Maybe.

If you’d like me to do that, though, how about sending me the message in a form I can forward easily and without embarrassment?  With a meaningful subject line, and a succinct and accurate summary in the opening two sentences?  So that I don’t have to write them for you before I feel I can send it on.  There’s a lovely internet abbreviation – TL;DR – which stands for Too Long; Didn’t Read.  I think its existence tells us something.

5.  People who are lucky enough to have interesting, rewarding and enjoyable jobs with an excellent employer and talented and supportive colleagues, who always manage to find some petty irritants to complain about, rather than counting their blessings.