An Impact Statement: Part 1: Impact and the REF

If your research leads directly or indirectly to this, we'll be having words.....

Partly inspired by a twitter conversation and partly to try to bring some semblance of order to my own thoughts, I’m going to have a go at writing about impact.  Roughly, I’d argue that:

  • The impact agenda is – broadly – a good thing
  • Although there are areas of uncertainty and plenty of scope for collective learning, I think the whole area is much less opaque than many commentators seem to think
  • While the Research Councils and the REF have a common definition of ‘impact’, they’re looking at it from different ends of the telescope.

This post will come in three parts.  In part one, I’ll try to sketch a bit of background and say something about the position of impact in the REF.  In part two, I’ll turn to the Research Councils and think about how ‘impact’ differs from previous – but related – agendas.  In part three, I’ll pose some questions that are puzzling me about impact and test my thinking with examples.

Why Impact?

What’s going on?  Where’s it come from?  What’s driving it?  I’d argue that to understand the impact agenda properly, it’s important to first understand the motivations.  Broadly speaking, I think there are two.

Firstly, I think it arises from a worry about a gap between academic research and those who might find it useful in some way.  How many valuable insights of various kinds from various disciplines have never got further than an academic journal or conference?  While some academics have always considered providing policy advice or writing for practitioner journals as a key part of their role as academics, I’m sure that’s not universally true.  I can imagine some of these researchers now complaining like music obsessives that they were into impact before anyone else and it sold out and went all mainstream.  As I’ve argued previously, one advantage of the impact agenda is that it gives engaged academics some long overdue recognition, as well as a much greater incentive for others to become involved in impact related activities.

Secondly, I think it’s about finding concrete, credible, and communicable evidence of the importance and value of academic research.  If we want to keep research funding at current levels, there’s a need to show return on investment and that the taxpayer is getting value for money.  Some will cringe at the reduction of the importance and value of research to such crude and instrumentalist terms, but we live in a crude and instrumentalist age.  There is an overwhelming case for the social and economic benefits of research, and that case must be made.  Whether we like it or not, no government of any likely hue is just going to keep signing the cheques.  The champions of research in policy circles do not intend to go naked into the conference chamber when they fight our corner.  To what extent the impact agenda comes directly from government, or whether it’s a pre-emptive move, I’m not quite sure.  But the effect is pretty much the same.

What’s Impact in the REF?

The REF definition of impact is as follows:

140. For the purposes of the REF, impact is defined as an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia (as set out in paragraph 143).
141. Impact includes, but is not limited to, an effect on, change or benefit to:
• the activity, attitude, awareness, behaviour, capacity, opportunity, performance, policy, practice, process or understanding
• of an audience, beneficiary, community, constituency, organisation or individuals
• in any geographic location whether locally, regionally, nationally or internationally.
142. Impact includes the reduction or prevention of harm, risk, cost or other negative effects.
(Assessment Framework and Guidance on Submissions, page 26)

Paragraph 143 goes on to rule out academic impact on the grounds that it’s assessed in the outputs and environment sections.  Fair enough.  More controversially, it goes on to state that “impacts on students, teaching, and other activities within the submitting HEI are excluded”.  But it’s possible to understand the reasoning.  If it were included, there’s a danger that far too many impact case studies would be about how research affects teaching – and while that’s important, I don’t think we’d want it to dominate.  There’s also an argument that the link between research and teaching ought to be so obvious that there’s no need to measure it for particular reward.  In practical terms, I think it would be hard to measure.  I might know how my new theory has changed how I teach my module on (say) organisational behaviour to undergraduates, but it would be hard to track that change across all UK business schools.  I’d also worry about the possible perverse incentives on the shape of the curriculum that allowing impact on teaching might create.

The Main Panel C (the panel for most social sciences) criteria state that:

The main panel acknowledges that impact within its remit may take many forms and occur in a wide range of spheres. These may include (but are not restricted to): creativity, culture and society; the economy, commerce or organisations; the environment; health and welfare; practitioners and professional services; public policy, law and services.
The categories used to define spheres of impact, for the purpose of this document, inevitably overlap and should not be taken as restrictive. Case studies may describe impacts which have affected more than one sphere. (para 77, pg. 68)

There’s actually a lot of detail and some good illustrations of what forms impact might take, and I’d recommend having a read.  I wonder how many academics not directly involved in REF preparations have read this?  One difficulty is finding it – it’s not the easiest document to track down.  For my non-social science reader(s), the other panel working methods can be found here.  Helpfully, nothing on that page will tell you which panel is which, but (roughly) Panel A is health and life sciences; B is natural sciences, computers, maths and engineering; C is social science; and D humanities.  Each panel criteria document has a table with examples of impact.

What else do we know about the place of impact in the REF?  Well, we know that impact has to have occurred in the REF period (1 January 2008 to 31 July 2013) and that impact has to be underpinned by excellent research (at least 2*) produced at the submitting university at some point between 1 January 1993 and 31 December 2013.  It doesn’t matter if the researchers producing the research are still at the institution – while publications move with the author, impact stays with the institution.  However, I can’t help wondering if an excessive reliance on research undertaken by departed staff won’t look too much like trading on past glories.  But probably it’s about getting the balance right.  The number of case studies required is approximately 1 per 8 FTE submitted, but see page 28 of the guidance document for a table.

Impact will have a weighting of 20%, with environment 15% and outputs (publications) 65%, and it looks likely that the weighting of impact will increase next time.  However, I wouldn’t be at all surprised if the actual contribution ends up being less than that.  If there’s a general trend for overall scores for impact to be lower than those for (say) publications, then the contribution will end up being less than 20%.  My understanding is that for some units of assessment in the last RAE, environment was consistently rated more highly, thus de facto increasing its weighting.  Unfortunately this is just a recollection of something I read years ago, which I can’t now find.  But if this is right, and if impact does come in with lower marks overall, we neglect environment at our peril.
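To make that concrete with purely invented numbers (a back-of-the-envelope sketch of my own, not anything from the guidance): suppose a submission averages 3.0 for outputs and environment but only 2.0 for impact.

```python
# Purely illustrative: invented scores, REF 2014 nominal weightings.
weights = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}
scores = {"outputs": 3.0, "impact": 2.0, "environment": 3.0}   # hypothetical average scores

overall = sum(weights[k] * scores[k] for k in weights)          # 2.80
impact_share = weights["impact"] * scores["impact"] / overall   # ~0.14

print(f"Overall profile: {overall:.2f}")
print(f"Impact's effective share: {impact_share:.0%}")          # ~14%, not 20%
```

On those made-up numbers, impact contributes roughly 14% of the overall score rather than the nominal 20% – which is all I mean by the ‘actual contribution’ ending up lower.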

Jobs in university administration

This man had hair before he started shortlisting.....

The Guardian Higher Education network recently hosted a careers clinic on ‘How to break into university administration‘, and I posted a few thoughts that I thought might be useful.  According to my referral stats for my blog, a number of visitors end up here with similar questions about both recruitment processes and what it’s like to work for a university.  I think it’s mainly my post on Academics vs University Administrators part 94 that gets those hits.  I’ve also been asked by friends and relatives for my very limited wisdom on this topic.

I also think it’s good to share this information, because one of my worries whenever I’m involved in recruiting staff is that we end up employing people who are best at writing applications and being interviewed.  In my particular line of work, that’s fine – if you can’t write a strong job application against set criteria, you probably shouldn’t be helping academics with grant applications.  But that’s the exception.

So what follows is me spilling the beans on my very limited experience of recruiting administrative staff in two institutions, both as panel chair and as an external panel member.  I’m not an HR expert.  I’m not a careers advisor.  But for what it’s worth, what follows is an edited and expanded version of what I posted on the Guardian page.

——————————————————————————

When an administrative job is advertised, a document called a ‘person specification’ is drawn up. Formats vary, but usually this is a list of skills, attributes, experiences, and attitudes that are classed as either “essential” or “desirable”. Often it’ll say at which stage of the recruitment process each will be assessed (application, aptitude test, or interview).

In all of the recruitment I’ve been involved in, this is an absolutely vital document. Decisions about who to shortlist for interview – and ultimately who to appoint – will be made on the basis of this person specification, and justified in those terms.  And we must be able to justify our decisions if challenged.  As panel chair I was required to (briefly) explain the reasons for rejecting everyone we didn’t interview, and then everyone we didn’t appoint.  I’m sure the importance of the person specification isn’t unique to universities.

To get an interview, an applicant needs to show that they meet all of the essential criteria and as many of the desirable ones as possible. My advice to applicants is that if they don’t have some of the desirable criteria, they should make the case for having something equivalent, or a plan to get that skill. For example, if a person spec lists “web design” as desirable and you can’t do it, express willingness to go on a course. For bonus points, find a course that you’d like to go on.  If you’re offered an interview, you can use the person spec to predict the interview questions – they’ll be questions aimed at getting evidence about your fit with the person spec.  You could do worse than to imagine that you’re on the interview panel and think of the questions you’d ask to get evidence about candidates’ fit with those criteria.  Chances are you won’t be a million miles off.

Unfortunately, if you don’t meet the essential criteria, it’s a waste of time applying.  You won’t get an interview.

As an applicant, your job in your application form is to make it as obvious as possible to the panel members that you meet the criteria. Back it up with evidence and at least some detail. If a criterion concerns supporting committees with minute taking and agenda prep, don’t just assert you’ve done it – say a bit about the committee, and what you did exactly, and how you did it.  Culturally, we’re not good at blowing our own trumpets, and a good and effective way round this is to just stick to the facts.  Don’t tell, show.

Panel members really appreciate it when applicants make it easy – they can just look down the person spec, look through the application, and tick, tick, tick, you’re on the potential interviewees pile.  Don’t make panel members guess or try to interpret what you say to measure it against the criteria.  There’s nothing more frustrating than an applicant who might be exactly what we need, but who hasn’t made a strong enough or clear enough case, especially about transferable skills.

Panel members can tell the difference between an application that’s been tweaked slightly and sent to every job vacancy, and one that’s been tailored for that particular vacancy. Do that, put in the effort, and you will stand out, because so many people don’t. Take the application seriously, and you’ll be taken seriously in turn. And the spell checker and careful proofreading are your friends.  A good admin vacancy at a university in the current climate can attract hundreds of applications.  That’s not an exaggeration.

Two other tips. The first is to always ask for feedback if you’re unsuccessful at interview. In every process I’ve been involved in, there’s useful feedback there for you if you want it. Even if it’s “someone else was better suited, and there’s nothing you could have done differently/better”, you still want to know that. If you were good, chances are that the university in question would like you to apply again in the future. The second is to always take up any offer of an informal conversation in advance of applying.  If you can ask sensible questions that show you’ve read all the documents thoroughly, there’s a chance that you’ll be remembered when you apply. You won’t get special treatment, but it can’t hurt.

Jobs will be advertised in a variety of places, depending on the grade and the degree of specialism needed.  Universities will have a list of current vacancies on their websites, and often use local papers for non-specialist roles.  Jobs.ac.uk is also widely used, and has customisable searches/vacancy emails, as well as some more good advice on job seeking.

Finally….. every job interview process that I’ve been involved with has attracted outstanding candidates. Some with little work experience, some with NHS or local authority admin experience, many from the private sector too. Universities are generally good employers and good places to work. It’s competitive at the best of times, and will be doubly so now.

————————————————————————–

The fact that most of you reading this (a) already have university jobs and (b) know perfectly well how the recruitment process works isn’t lost on me.  But this one’s for my random google visitors.  Normal service will resume shortly.

Are institutions over-reacting to impact?

Interesting article and leader in this week’s Times Higher on the topic of impact, both of which carry arguments that “university managers” have over-reacted to the impact agenda.  I’m not sure whether that’s true or not, but I suspect that it’s all a bit more complicated than either article makes it appear.

The article quotes James Ladyman, Professor of Philosophy at the University of Bristol, as saying that university managers had overreacted and created “an incentive structure and environment in which an ordinary academic who works on a relatively obscure area of research feels that what they are doing isn’t valued”.

If that’s happened anywhere, then obviously things have gone wrong.  However, I do think that this needs to be understood in the context of other groups and sub-groups of academics who likewise feel – or have felt – undervalued.  I can well understand why academics whose research does not lend itself to impact activities would feel alienated and threatened by the impact agenda, especially if it is wrongly presented (or perceived) as a compulsory activity for everyone – regardless of their area of research, skills, and comfort zone – and (wrongly) as a prerequisite for funding.

Another group of researchers who felt – and perhaps still feel – under-valued are those undertaking very applied research.  It’s very hard for them to get their stuff into highly rated (aka valued) journals.  Historically the RAE has not been kind to them.  University promotions criteria have perhaps failed to sufficiently recognise public engagement and impact activity – and perhaps still do.  While all the plaudits go to their highly theoretical colleagues, the applied researchers feel looked down upon, and struggle to get academic recognition.  If we were to ask academics whose roles are mainly teaching (or teaching and admin) rather than research, I think we might find that they feel undervalued by a system which many of them feel is obsessed by research and sets little store by excellent (rather than merely adequate) teaching.  Doubtless increased fees will change this, and perhaps we will then hear complaints of the subsequent under-valuing of research relative to teaching.

So if academics working in non-impact friendly (NIFs, from now on) areas of research are now feeling under-valued, they’re very far from alone.  It’s true that the impact agenda has brought about changes to how we do things, but I think it could be argued that it’s not that the NIFs are now under-valued, but that other kinds of research and academic endeavour – namely applied research and impact activities (ARIA from now on) – are now being valued to a greater degree than before.  Dare I say it, to an appropriate degree?  Problem is, ‘value’ and ‘valuing’ tend to be seen as a zero sum game – if I decide to place greater emphasis on apples, the oranges may feel that they have lost fruit bowl status and are no longer the, er, top banana.  Even if I love oranges just as much as before.

Exactly how institutions ‘value’ (whatever we mean by that) NIF research and ARIA is an interesting question.  It seems clear to me that an institution/school/manager/grant giving body/REF/whatever could err either way by undervaluing and under-rewarding either.  We need both.  And we need excellent teachers.  And – dare I say it – non-academic staff too.  Perhaps the challenge for institutions is getting the balance right and making everyone feel valued, and reflecting different academic activities fairly in recruitment and selection processes and promotion criteria.  Not easy, when any increased emphasis on one area seems to cause others to feel threatened.

Resource list for academics new to social media

(This didn't happen)
"You will make sure that your research methodology links with your research questions, you snivelling little maggot!"

This week I was asked to be involved in a Research Grant application ‘bootcamp’ to talk in particular about the use of social media in pathways to impact plans, and academic blogging in general.  I was quick to disclaim expertise in this area – I’ve been blogging for a while now, but I’m not an academic and I’m certainly not an expert on social media.  I’m also not sure about this use of the word ‘bootcamp’.  We already have ‘workshop’ and ‘surgery’ as workplace-based metaphors for types of activity, and I’m not sure we’re ready for ‘bootcamp’.  So unless the event turns out to involve buzzcuts, a ten mile run, and an assault course, I’ll be asking for my money back.

But I thought I’d try to put together a list of resources and examples that I was already aware of in time for the session, and then I wondered about ‘crowdsourcing’ (i.e. lazily asking my readers/twitter followers for) some others that I might have missed.  Hopefully we’ll then end up with a general list of resources that everyone can use.  I’ve pasted some links below, along with a few observations of my own.  Please do chip in with your thoughts, experiences, tips, and recommendations for resources.

———————————–

Things I have learnt about using social media

Blogging

  • You must have a clear idea about your intended audience and what you hope to achieve.  Blogging for the sake of it or because it’s flavour of the month or because you think it is expected is unlikely to be sustainable or to achieve the desired results.
  • A good way to start is to search for people doing a similar thing and contact them asking if you can link to their blog.  Everyone likes being linked to, and this is a good way to start conversations.  Once established, support others in the same way.
  • You have to build something of a track record of posts and tweets to be credible as a consistent source of quality content – you’ve got to earn a following, and this takes time, work, and patience.  And even then, it might not work.  Consider a ‘soft launch’ to build your track record, and then a second wave of more intensive effort to get noticed.
  • Posting quality comments on other people’s blogs, either in their comments section, or in a post on your blog, can be a good way to attract attention.
  • Illustrate blog posts with a picture (perhaps found through google images) – a lot of successful bloggers seem to do this.
  • Multi-author blogs and/or guest posts are a good way to share the load.
  • And consequently, offering guest posts or content to established blogs is a way to get noticed.
  • The underlying technology is now very straightforward.  Anyone who is reasonably computer literate will have little trouble learning the technical skills.  The editing frame I’m writing this in looks a lot like Word, and I’ve used precisely no programming/HTML stuff – that can all be automated now.

Twitter

  • The technology of @s and # is fairly straightforward to pick up – find some relevant/interesting people to follow and you’ll soon pick it up, or read one of the guides below.
  • A good way to reach people is to get “retweets” – essentially when someone else with a bigger following forwards your message.  You do this by addressing posts to them using the @ symbol.
  • Generally, people seem to retweet things they find interesting and that suit their own message.  So… the ESRC retweeted my blog post linking to their regional visit presentation when that post said nice things about the visit and linked to their presentation.
  • There’s a weird mix of personal and professional.  Some twitter accounts are purely professional, others purely personal, but many seem to be a mixture.  Some of the usual barriers seem not to apply, or apply only loosely.  Care needs to be taken here.

General

  • Social media is potentially a huge time sink – keep in mind costs in time versus benefits gained
  • It can be a struggle if you’re naturally shy and attention seeking doesn’t come easily to you

Resources and further reading:

Examples of individual UoN blogs:

Patter – Pat Thomson, School of Education http://patthomson.wordpress.com/
Political Apparitions – Steven Fielding, School of Politics http://stevenfielding.com/
Registrarism – Paul Greatrix, University Registrar  http://registrarism.wordpress.com/
Cash for Questions, Adam Golberg, NUBS  https://socialscienceresearchfunding.co.uk/

UoN Group/institutional/project blogs:

Bullets and Ballots – UoN School of Politics: http://nottspolitics.org/
China Policy Institute http://blogs.nottingham.ac.uk/chinapolicyinstitute/
Centre for Corporate Social Responsibility http://blogs.nottingham.ac.uk/betterbusiness/

UoN blogs home http://blogs.nottingham.ac.uk/

Guides:

Twitter Guide – LSE Impact in Social Sciences
http://blogs.lse.ac.uk/impactofsocialsciences/2011/09/29/twitter-guide/

6 tips on blogging about research (Sarah Stewart, EdD student, Otago University, NZ)
http://sarah-stewart.blogspot.co.uk/2012/04/my-top-6-tips-for-how-to-blog-about.html

Blogging about your research – first steps  (University of Warwick)
http://www2.warwick.ac.uk/services/library/researchexchange/topics/gd0007/

Is blogging or tweeting about research papers worth it? (Melissa Terras, UCL)
http://blogs.lse.ac.uk/impactofsocialsciences/2012/04/19/blog-tweeting-papers-worth-it/

A gentle introduction to twitter for the apprehensive academic (Dorothy Bishop, University of Oxford)
http://deevybee.blogspot.co.uk/2011/06/gentle-introduction-to-twitter-for.html

Twitter accounts:

List of official University of Nottingham Twitter accounts
https://twitter.com/#!/UniofNottingham/uontwitteraccounts

Lists of academic twitter accounts (Curator: LSE Impact project team)
https://twitter.com/#!/LSEImpactBlog/soc-sci-academic-tweeters

https://twitter.com/#!/LSEImpactBlog/business-tweeters

https://twitter.com/#!/LSEImpactBlog/arts-academic-tweeters

https://twitter.com/#!/LSEImpactBlog/think-tanks

——————

Some of the links and choices of examples are more than a little University of Nottingham-centric, but then this was an internal event.  I’ve not checked with the authors of the various resources I’ve linked to, and have taken the liberty of assuming that they won’t mind the link and recognition.  But I’m happy to remove any on request.

Any resources I’ve missed?  Any more thoughts and suggestions?  Please comment below….

Responding to Referees

Preliminary evidence appears to show that this approach to responding to referees is - on balance - probably sub-optimal. (Photo by Tseen Khoo)

This post is co-authored by Adam Golberg of Cash for Questions (UK), and Jonathan O’Donnell and Tseen Khoo of The Research Whisperer (Australia).

It arises out of a comment that Jonathan made about understanding and responding to referees on one of Adam’s posts about what to do if your grant application is unsuccessful. This seemed like a good topic for an article of its own, so here it is, cross-posted to our respective blogs.

A quick opening note on terminology: We use ‘referee’ or ‘assessor’ to refer to academics who read and review research grant applications, then feed their comments into the final decision-making process. Terminology varies a bit between funders, and between the UK and Australia. We’re not talking about journal referees, although some of the advice that follows may also apply there.

————————————-

There are funding schemes that offer applicants the opportunity to respond to referees’ comments. These responses are then considered alongside the assessors’ scores/comments by the funding panel. Some funders (including the Economic and Social Research Council [ESRC] in the UK) have a filtering process before this point, so if you are being asked to respond to referees’ comments, you should consider it a positive sign as not all applications get this far. Others, such as the Australian Research Council (ARC), offer you the chance to write a rejoinder regardless of the level of referees’ reports.

If the funding body offers you the option of a response, you should consider your response as one of the most important parts of the application process.  A good response can draw the sting from criticisms, emphasise the positive comments, and enhance your chances of getting funding.  A bad one can doom your application.

And if you submit no response at all? That can signal negative things about your project and research team that might live on beyond this grant round.

The first thing you might need to do when you get the referees’ comments about your grant application is kick the (imaginary) cat.* This is an important process. Embrace it.

When that’s out of your system, here are four strategies for putting together a persuasive response and pulling that slaved-over application across the funding finish line.

1. Attitude and tone

Be nice.  Start with a brief statement thanking the anonymous referees for their careful and insightful comments, even if actually you suspect some of them are idiots who haven’t read your masterpiece properly. Think carefully about the tone of the rest of the response as well.  You’re aiming for calm, measured, and appropriately assertive.  There’s nothing wrong with saying that a referee is just plain wrong on a particular point, but do it calmly and politely.  If you’re unhappy about a criticism or reviewer, there’s a good chance that it will take several drafts before you eliminate all the spikiness from the text.  If it makes you feel better (and it might), you can write what you really think in the tone that you think it in but, whatever you do, don’t send that version! This is the version that may spontaneously combust from the deadly mixture of vitriol and pleading contained within.

Preparing a response is not about comprehensively refuting every criticism, or establishing intellectual superiority over the referees. You need to sift the comments to identify the ones that really matter. What are the criticisms (or backhanded compliments) that will harm your cause? Highlight those and answer them methodically (see below). Petty argy-bargy isn’t worth spending your time on.

2. Understanding and interpreting referees’ comments

One UK funder provides referee report templates that invite the referees to state their level of familiarity with the topic and even a little about their research background, so that the final decision-making panel can put their comments into context. This is a great idea, and we would encourage other funding agencies to embrace it.

Beyond this volunteered information (if provided), never assume you know who the referee is, or that you can infer anything else about them – you could be going way off-base with your rant against econometricians who don’t ‘get’ sociological work. If there’s one thing worse than an ad hominem response, it’s an ad hominem response aimed at the wrong target!

One exercise that you might find useful is to produce a matrix listing all of the criticisms, and indicating the referee(s) who made those objections. As these reports are produced independently, the more referees make a particular point, the more problematic it might be.  This tabled information can be sorted by section (e.g. methodology, impact/dissemination plan, alternative approaches). You can then repeat the exercise with the positive comments that were made. While assimilating and processing information is a task that academics tend to be good at, it’s worth being systematic about this because it’s easy to overlook praise or attach too much weight to objections that are the most irritating.

Also, look out for, and highlight, any requests that you do a different project. Sometimes, these can be as obvious as “you should be doing Y instead”, where Y is a rather different project and probably closer to the reviewer’s own interests. These can be quite difficult criticisms to deal with, as what they are proposing may be sensible enough, but not what you want to do.  In such cases, stick to your guns, be clear what you want to do, and why it’s of at least as much value as the alternative proposal.

Using the matrix that you have prepared, consider further how damaging each criticism might be in the minds of the decision makers.  Using a combination of weight of opinion (positive remarks on a particular point minus criticisms) and multiplying by potential damage, you should now have a sense of which are the most serious criticisms.
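If it helps to be systematic about it, here’s a minimal sketch of the sort of tally I have in mind – the criticisms, counts, and ‘damage’ ratings below are entirely made up for illustration:

```python
# Entirely hypothetical example of ranking referee criticisms.
# Each entry: (section, criticism, referees raising it, positive remarks
#              on the same point, estimated damage on a 1-5 scale).
criticisms = [
    ("methodology", "sample size looks too small", 3, 0, 5),
    ("impact plan", "dissemination plan too vague", 1, 1, 2),
    ("budget", "hotel costs seem high", 1, 0, 1),
]

def priority(item):
    """Weight of opinion (raised minus praised) multiplied by potential damage."""
    _, _, raised, praised, damage = item
    return (raised - praised) * damage

# Deal with the highest-priority criticisms first, and at greatest length.
for item in sorted(criticisms, key=priority, reverse=True):
    print(f"[{item[0]}] {item[1]}: priority {priority(item)}")
```

A spreadsheet does the same job, of course; the point is simply to force yourself to weigh the criticisms, rather than react first to the ones that sting the most.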

Preparing a response is not a task to be attempted in isolation. You should involve other members of your team, and make full use of your research support office and senior colleagues (who are not directly involved in the application). Take advantage of assistance in interpreting the referees’ comments, and reviewing multiple drafts of your response.

Don’t read the assessor reports by themselves; go back to your whole application as well, several times if necessary. It has probably been some time since you submitted the application, and new eyes and a bit of distance will help you to see it as the referees may have seen it. You may be able to pinpoint the reasons for particular criticisms, or for misunderstandings that you had assumed were the referees’ fault. While their criticisms may not be valid for the application you thought you wrote, they may very well be so for the one that you actually submitted.

3. The response

You should plan to use the available space in line with the exercise above, setting aside space for each criticism in proportion to its risk of stopping you getting funded.

Quibbles about your budgeted expenditure for hotel accommodation are insignificant compared to objections that question your entire approach, devalue your track-record, invalidate your methodology, or claim that you’re adding little that’s new to the sum of human knowledge. So, your response should:

  • Make it easy for the decision-makers: Be clear and concise.
  • Be specific when rebutting, referring back to the application. For example: “As we stated on page 24, paragraph 3…”. However, don’t lose sight of the need to create a document that can be understood in isolation as far as possible.
  • If possible and appropriate, introduce something that you’ve done in the time since submission to rebut a negative comment (be careful, though, as some schemes may not allow the introduction of new material).
  • Acknowledge any misunderstandings that arise from the application’s explanatory shortcomings or limitations of space, and be open to new clarifications.
  • Be grateful for the positive comments, but focus on rebutting the negative comments.

4. Be the reviewer

The best way to really get an idea of what the response dynamic is all about in these funding rounds is to become a grant referee. Once you’ve assessed a few applications and cut your teeth on a whole funding round (they can often be year-long processes), you quickly learn about the demands of the job and how regular referees ‘value’ applications.

Look out for chances to be on grant assessment panels, and say yes to invitations to review for various professional bodies or government agencies. Almost all funding schemes could do with a larger and more diverse pool of academics to act as their ‘gate-keepers’.

Finally: Remember to keep your eyes on the prize. The purpose of this response exercise is to give your project the best possible chance of getting funding. It is an inherent part of many funding rounds these days, and not only an afterthought to your application.

* The writers and their respective organisations do not, in any way, endorse the mistreatment of animals. We love cats.  We don’t kick them, and neither should you. It’s just an expression. For those who’ve never met it, it means ‘to vent your frustration and powerlessness’.

I’ve disabled comments on this entry so that we can keep conversations on this article to one place – please head over to the Research Whisperer if you’d like to comment. (AG).

Russell Group signs four new institutions

I've got nothing remotely clever or informative to say about this, and yes, this post is largely an excuse for this pun.  SoSueMe.....
The Russell brand apparently has quite an appeal....

The Russell Group announced today that the Universities of Durham, Exeter, York, and Queen Mary University of London have been offered and accepted membership, taking the group from 20 to 24.  The 1994 Group – their former mission group home – has yet to announce whether they will rename themselves the 1990 Group or look to make some new signings of their own.  There’s a fair few out-of-contract unaffiliated universities who are up for grabs, so perhaps that will be the next logical step.

Christopher Cook, the Education correspondent of the Financial Times, reported the story  thusly on twitter:

Russell Group to expand to include universities everyone thought were already in it – Durham, Exeter, QMUL and York.

… which I think sums it up nicely.  Speaking of Twitter, it’s surely a sign of something when ‘Russell Group’ starts to trend.  It’s very odd reading the spambots tweeting about it as well – clearly the realignment of HE mission groups is a hot topic in the world of the internet fraudster and spammer.  Trending is normally reserved for topics that I’m reliably told are related to a Canadian singing beaver, footballists who have done a goal, celebrities who have just died, the twoutrage du jour, One Directioners – presumably a re-branding of the Girl Guides – and wretched, wretched Saturday night reality television.

Getting ‘Russell Group’ trending is a sign that the LSE Impact Blog’s mission to get every last academic on Twitter by 2014 is well on track.  And when we see Bertrand Russell trending, we’ll know they’ve finally won.  Or that Twitter has gone the way of MySpace.

How can we help researchers get responses for web questionnaires?

*Insert your own hilarious and inaccurate joke about how long energy saving lightbulbs take to warm up here*

I’ve had an idea, and I’d like you, the internet, to tell me if it’s a good one or not.  Or how it might be made into a good one.

Would it be useful to set up a central list/blog/twitter account for ongoing research projects (including student projects) which need responses to an internet questionnaire from the general public?  Would researchers use it?  Would it add value?  Would people participate?

Every so often, I receive an email or tweet asking for people to complete a research questionnaire on a particular topic.  I usually do (if it’s not too long, and the topic isn’t one I consider intrusive), partly because some of them are quite interesting, partly because I feel a general duty to assist with research when asked, and partly because I probably need to get out more.  The latest was one that a friend and former colleague shared on Facebook.  It was a PhD project from a student in her department about sun tanning knowledge and behaviour, and it’s here if you feel like taking part.  Now this is not a subject that I feel passionately about, but perhaps that’s why it might be useful for the likes of me to respond.

I guess the key assumptions that I’m making are that there are sufficient numbers of other people like me who would be willing to spend a few minutes every so often completing a web survey to support research, and that nothing like this exists already.  If there is, I’ve not heard about it, and I’d have thought that I would have done.  But better to be embarrassed now than later!  Another assumption is that such a resource might be useful.  I strongly suspect that any such resource would have a deeply atypical demographic – I’d imagine it would be mainly university staff and students.  But I’d also imagine that well-designed research questionnaires would ask for sufficiently detailed demographic information to be able to factor this in.  For some student projects where the main challenge can be quantity rather than variety, this might not even matter too much.  I guess it depends what questions are being asked as part of the research.

I’ve not really thought this through at all yet.  I would imagine that only projects which could be completed by anyone would be suitable for inclusion, or at least only projects where responses are invited from a broad range of people.  Very specific projects probably wouldn’t work, and would make it harder for participants to find ones which they can do.  Obviously all projects would need to have ethical approval from their institution.  There would be an expectation that beneficiaries are prepared to reciprocate and help others in return.  And clearly there has to be a strategy to tell people about it.

In practical terms, I’m thinking about either a separate blog or a separate page of this one, and probably a separate twitter account.  Researchers could add details in a comment on a monthly blog post, and either tweet the account and ask for a re-tweet, or email me a tweet to send.  Participants could follow the twitter feed and subscribe to the comments and blog.

So… what do you think?  Please comment below (or email me if you prefer).  Would this be useful?  Would you participate?  What have I missed?  If I do set this up, how might I go about telling people about it?

Coping with rejection: What to do if your grant application is unsuccessful. Part 2: Next Steps

Look, I know I said that not getting funded doesn't mean they disliked your proposal, but I need a picture and it's either this or a picture of Simon Cowell with his thumb down. Think on.

In the first part of this series, I argued that it’s important not to misunderstand or misinterpret the reasons for a grant application being unsuccessful.  In the comments, Jo VanEvery shared a phrase that she’s heard from a senior figure at one of the Canadian Research Councils – that research funding “is not a test, it’s a contest”.  Not getting funded doesn’t necessarily mean that your research isn’t considered to be of high quality.  This second entry is about what steps to consider next.

1.  Some words of wisdom

‘Tis a lesson you should heed:  Try, try, try again.
If at first you don’t succeed, Try, try, try again
William Edward Hickson (1803-1870)

The definition of insanity is doing the same thing over and over but expecting different results
Ben Franklin, Albert Einstein, or Narcotics Anonymous

I like these quotes because they’re both correct in their own way.  There’s value to Hickson’s exhortation.  Success rates are low for most schemes and most funders, so even if you’ve done everything right, the chances are against you.  To be successful, you need a degree of resilience to look for another funder or a new project, rather than embarking on a decade-long sulk, muttering plaintively about how “the ESRC doesn’t like” your research whenever the topic of external funding is raised.

However, Franklin et al (or al?) also have a point about repeating the same mistakes without learning anything as you drift from application to application.  While doing this, you can convince yourself that research funding is a lottery (which it isn’t) and that all you have to do is submit enough applications and eventually your number will come up (which it won’t).  This is the kind of approach (on the part of institutions as well as individuals) that’s pushed us close to ‘demand management’ measures with the ESRC.  More on learning from the experience in a moment or two.

2.  Can you do the research anyway?

This might seem like an odd question to ask, but it’s always the first one I ask academic colleagues who’ve been unsuccessful with a grant application (yes, this does happen,  even at Nottingham University Business School).  The main component of most research projects is staff time.  And if you’re fortunate enough to be employed by a research-intensive institution which gives you a generous research time allocation, then this shouldn’t be a problem.  Granted, you can’t have that full time research associate you wanted, but could you cut down the project and take on some or all of that work yourself or between the investigators?  Could you involve more people – perhaps junior colleagues – to help cover the work? Would others be willing to be involved if they can either co-author or be sole author on some of the outputs?  Could it be a PhD project?

Directly incurred research expenses are more of a problem – transcription costs, data costs, travel and expenses – especially if you and your co-investigators don’t have personal research accounts to dip into.  But if it turns out that all you need is your expenses paying, then a number of other funding options become viable – some external, but perhaps also some internal.

Of course, doing it anyway isn’t always possible, but it’s worth asking yourself and your team that question.  It’s also one that’s well worth asking before you decide to apply for funding.

3.  What can you learn for next time?

It’s not nice not getting your project funded.  Part of you probably wants to lock that application away and not think about it again.  Move onwards and upwards, and perhaps try again with another research idea.  While resilience is important, it’s just as important to learn whatever lessons there are to learn to give yourself the best possible chance next time.

One lesson you might be able to take from the experience is about planning the application.  If you found yourself running out of time, not getting sufficient input from senior colleagues, or not taking full advantage of the support available within your institution, well, that’s a lesson to learn.  Give yourself more time, start earlier before the deadline, and don’t make yourself rush it.  If you did all this last time, remember that you did, and the difference that it made.  If you didn’t, then the fact is that your application was almost certainly not as strong as it could have been.  And if your application document is not the strongest possible iteration of your research idea, your chances of getting funded are pretty minimal.

I’d recommend reading through your application and the call guidance notes once again in the light of referees’ comments.  Now that you have sufficient distance from the application, you should ‘referee’ it yourself as well.  What would you do better next time?  Not necessarily individual application-specific aspects, but more general points.  Did your application address the priorities of the call specifically enough, or were the crowbar marks far too visible?  Did you get the balance right between exposition and background and writing about the current project?  Did you pay enough attention to each section?  Did you actually answer the questions asked?  Do you understand any criticisms that the referees had?

4. Can you reapply?  Should you reapply?

If it’s the ESRC you’re thinking about, then the answer’s no unless you’re invited.  I think we’re still waiting on guidance from the ESRC about what constitutes a resubmission, but if you find yourself thinking about how much you might need to tinker with your unsuccessful project to make it a fresh submission, then the chances are that you’ll be barking up the wrong tree.  Worst case scenario is that it’s thrown straight out without review, and best case is probably that you end up with something a little too contrived to stand any serious chance of funding.

Some other research funders do allow resubmissions, but generally you will need to declare it.  While you might get lucky with a straight resubmission, my sense is that if it was unsuccessful once it will be unsuccessful again. But if you were to thoroughly revise it, polish it, take advice from anyone willing to give it, and have one more go, well, who knows?

But there’s really no shame in walking away.  Onwards and upwards to the next idea.  Let this one go for now, and work on something new and fresh and exciting instead.  Just remember everything that you learnt along the way.  One former colleague once told me that he usually got at least one paper out of an application even if it was unsuccessful.  I don’t know how true that might be more generally, but you’ve obviously done a literature review and come up with some ideas for future research.  Might there be a paper in all that somewhere?

Another option – which I hinted at earlier when I mentioned covering only the directly incurred costs – is resubmitting to another funder.  My advice on this is simple…. don’t resubmit to another funder.  Or at least, don’t treat it like a resubmission.  Every research funder, every scheme, has different interests and priorities.  You wrote an application for one funder, which presumably was tailored to that funder (it was, wasn’t it?).  So a few alterations probably won’t be enough.

For one thing, the application form is almost certainly different, and that eight page monstrosity won’t fit into two pages.  But cut it down crudely, and if it reads like it’s been cut down crudely, you have no chance.  I’ve never worked for a research funding body (unless you count internal schemes where I’ve had a role in managing the process), but I would imagine that if I did, the best way to annoy me (other than using the word ‘impactful‘) would be sending me some other funder’s cast-offs.  It’s not quite like romancing a potential new partner and using your old flame’s name by mistake, but you get the picture.  Your new funder wants to feel special and loved.  They want you to have picked them out – and them alone – for their unique and enlightened approach to funding.  Only they can fill the hole in your heart (sorry, wallet), and satisfy your deep yearning for fulfilment.

And where should you look if your first choice funder does not return your affections?  Well, I’m not going to tell you (not without a consultancy fee, anyway).  But I’m sure your research funding office will be able to help find you some new prospective partners.

 

A partial, qualified, cautious defence of the Research Excellence Framework (REF)

No hilarious visual puns on REF / Referees from me....

There’s been a constant stream of negative articles about the Research Excellence Framework (for non-UK readers, this is the “system for assessing the quality of research in UK higher education institutions”) over the last few months, and two more have appeared recently (from David Shaw, writing in the Times Higher, and from Peter Wells on the LSE Impact Blog) which have prompted me to respond with something of a defence.

One crucial fact that I left out of the description of the REF in the previous paragraph is that “funding bodies intend to use the assessment outcomes to inform the selective allocation of their research funding to HEIs, with effect from 2015-16”.  And I think this is a fact that’s also overlooked by some critics.  While a lot of talk is about prestige and ‘league tables’, what’s really driving the process is the need to have some mechanism for divvying out the cash for funding research – QR funding.  We could most likely do without a “system for assessing the quality of research” across every discipline and every UK university in a single exercise using common criteria, but we can’t do without a method of dividing up the cake as long as there’s still cake to share out.

In spite of the current spirit of perpetual revolution in the sector, money  is still paid (via HEFCE) to universities for research, without much in the way of strings attached.  This basic, core funding is one half of the dual funding system for research in the UK – the other half being funding for individual research projects and other activities through the Research Councils.  What universities do with their QR funding varies, but I think typically a lot of it is in staff salaries, so that the number of staff in any given discipline is partly a function of teaching income and research income.

I do have sympathy for some of the arguments against the REF, but I find myself returning to the same question – if not this way, then how? 

It’s unfair to expect anyone who objects to any aspect of the REF to furnish the reader with a fully worked up alternative, but constructive criticism must at least point the way.  One person who doesn’t fight shy of coming up with an alternative is Patrick Dunleavy, who has argued for a ‘digital census’ involving the use of citation data as a cheap, simple, and transparent replacement for the REF.  That’s not a debate I feel qualified to participate in, but my sense is that Dunleavy’s position on this is a minority one in UK academia.

In general, I think that criticisms of the REF tend to fall into the following broad categories.  I don’t claim to address decisively every last criticism made (hence the title), but for what it’s worth, here are the categories that I’ve identified, and what I think the arguments are.

1.  Criticism over details

The REF team have a difficult balancing act.  On the one hand, they need rules which are sensitive to the very real differences between academic disciplines.  On the other, fairness and efficiency call for as much similarity in approach, rules, and working methods as possible between panels.  The more differences between panels, the greater the chances of confusion and of mistakes being made in the process of planning and submitting REF returns – mistakes which could seriously affect both notional league table placing and cold hard cash.  The more complicated the process, the greater the transaction costs.  Which brings me onto the second balancing act.  On the one hand, it needs to be a rigorous and thorough process, with so much public money at stake.  On the other hand, it needs to be lean and efficient, minimising the demands on the time of institutions, researchers, and panel members.  This isn’t to say that the compromise reached on any given point between particularism and uniformity, and between rigour and efficiency, is necessarily the right one, of course.  But it’s not easy.

2.  Impact

The use of impact at all.  The relative weighting of impact.  The particular approach to impact.  The degree of uncertainty about impact.  It’s a step into the unknown for everyone, but I would have thought that the idea that there should be some notion of impact – some expectation that where academic research makes a difference in the real world, we should ensure it does so – is hard to object to in principle.  I have much more sympathy for some academic disciplines than others as regards objections to the impact agenda.  Impact is really a subject for a blog post in itself, but for now, it’s worth noting that it would be inconsistent to argue against the inclusion of impact in the REF and at the same time to argue that it’s too narrow in terms of what it values and what it assesses.

3.  Encouraging game playing

While it’s true that the REF will encourage game playing in similar (though different) ways to its predecessors, I can’t help but think this is inevitable and would also be true of every possible alternative method of assessment.  And what some would regard as gaming, others would regard as just doing what is asked of them.

One particular ‘game’ that is played – or, if you prefer, strategic decision that is made – is about where to set the threshold for inclusion.  It’s clear that there’s no incentive to include those whose outputs are likely to fall below the minimum threshold for attracting funding.  But it’s common for some institutions, in some disciplines, to set a minimum above this, with one eye not only on the QR funding, but also on league table position.  There are two arguments that can be made against this.  One is that QR funding shouldn’t be so heavily concentrated on the top rated submissions and/or that more funding should be available.  But that’s not an argument against the REF as such.  The other is that institutions should be obliged to submit everyone.  But the costs of doing so would be huge, and it’s not clear to me what the advantages would be – would we really get better or more accurate results with which to share out the funding?  Because ultimately the REF is not about individuals, but institutions.

4. Perverse incentives

David Shaw, in the Times Higher, sees a very dangerous incentive in the REF.

REF incentivises the dishonest attribution of authorship. If your boss asked you to add someone’s name to a paper because otherwise they wouldn’t be entered into the REF, it could be hard to refuse.

I don’t find this terribly convincing.  While I’m sure that there will be game playing around who should be credited with co-authored publications, I’d see that as acceptable in a way that the fraudulent activity that Shaw fears (but stresses that he’s not experienced first-hand) just isn’t.  There are opportunities for – and temptations towards – fraud, bad behaviour and misconduct in pretty much everything we do, from marking students’ work to reporting our student numbers and graduate destinations.  I’m not clear how that makes any of these activities ‘unethical’ in the way his article seems to argue.  Fraud is rare in our sector, and when anyone does commit it, it’s a huge scandal and heads roll.  It ruins careers and leaves a long shadow over institutions.  Even leaving aside the residual decency and professionalism that’s the norm in our sector, it would be a brave Machiavellian Research Director who would risk attempting this kind of fraud.  To make it work, you need the cooperation and the silence of two academic researchers for every single publication.  Risk versus reward – just not worth it.

Peter Wells, on the LSE blog, makes the point that the REF acts as an active disincentive for researchers to co-author papers with colleagues at their own institution, as only one can return the output to the REF.  That’s an oversimplification, but it’s certainly true that there’s active discouragement of the submission of the same output multiple times in the same return.  There’s no such problem if the co-author is at another institution, of course.  However, I’m not convinced that this theoretical disincentive makes a huge difference in practice.  Don’t academics co-author papers with the most appropriate colleague, whether internal or external?  How often – really – does a researcher choose to write something with a colleague at another institution rather than a colleague down the corridor?  For REF reasons alone?  And might the REF incentive to include junior colleagues as co-authors that Shaw identifies work in the other direction, for genuinely co-authored pieces?

In general, proving the theoretical possibility of a perverse incentive is not sufficient to prove its impact in reality.

5.  Impact on morale

There’s no doubt that the REF causes stress and insecurity and can add significantly to the workload of those involved in leading on it.  There’s no doubt that it’s a worrying time, waiting for news of the outcome of the R&R paper that will get you over whatever line your institution has set for inclusion.  I’m sure it’s not pleasant being called in for a meeting with the Research Director to answer for your progress towards your REF targets, even with the most supportive regime.

However…. and please don’t hate me for this…. so what?  I’m not sure that the bare fact that something causes stress and insecurity is a decisive argument.  Sure, there’s a prima facie case for trying to make people’s lives better rather than worse, but that’s about it.  And again, is there an alternative system that would be equally effective at dishing out the cash while being less stressful?  The fact is that every job – including university jobs – is sometimes stressful and has downsides as well as upsides.  Among academic staff, the number one stress factor I’m seeing at the moment is marking, not the REF.

6.  Effect on HE culture

I’ve got more time for this argument than for the stress argument, but I think a lot of the blame is misdirected.  Take Peter Wells’ rather utopian account of what might replace the REF:

For example, everybody should be included, as should all activities.  It is partly by virtue of the ‘teaching’ staff undertaking a higher teaching load that the research active staff can achieve their publications results; without academic admissions tutors working long hours to process student applications there would be nobody to receive research-led teaching, and insufficient funds to support the University.

What’s being described here is not in any sense a ‘Research Excellence Framework’.  It’s a much broader ‘Academic Excellence Framework’, and that doesn’t strike me as something that’s particularly easy to assess.  How on earth could we go about assessing absolutely everything that absolutely everyone does?  Why would we give out research cash according to how good an admissions tutor someone is?

I suspect that what underlies this – and some of David Shaw’s concerns as well – is a much deeper unease about the relative prestige and status attached to different academic roles: the research superstar; the old-fashioned teaching and research lecturer; those with heavy teaching and admin loads who are de facto teaching only; and those who are de jure teaching only.  There is certainly a strong sense that teaching is undervalued – in appointments, in promotions, in status, and in other ways.  Those with higher teaching and admin workloads do enable others to research in precisely the way described above, and respect and recognition for those tasks is certainly due.  And I think the advent of increased tuition fees is going to change things, for the better as far as the profile and status of excellent teaching are concerned.

But I’m not sure why any of these status problems are the fault of the REF.  The REF is about assessing research excellence and giving out the cash accordingly.  If the REF is allowed to drive everything, and non-inclusion is such a badge of dishonour that the contributions of academics in other areas are overlooked, well, that’s a serious problem.  But it’s an institutional one, and not one that follows inevitably from the REF.  We could completely change the way the REF works tomorrow, and it would make very little difference to the underlying status problem.

It’s not been my intention here to refute each and every argument against the REF, and I don’t think I’ve even addressed directly all of Shaw’s and Wells’s objections.  What I have tried to do is to stress the real purpose of the REF and the difficulty of the task facing the REF team, and to make a few limited observations about the kinds of objections that have been put forward.  And all without a picture of Pierluigi Collina.

Leverhulme Trust to support British Academy Small Research Grant scheme

[Image caption: BA staff examine the Leverhulme memorandum of understanding]

The British Academy announced yesterday that it has reached a new collaborative agreement with the Leverhulme Trust on funding for its Small Grants Scheme.  This is very good news for researchers in the humanities and the social sciences, and I’m interrupting my series of gloom-and-doom posts on what to do if your application is unsuccessful to inflict my take on some really good news upon you, oh gentle reader.  And to see if I can set a personal best for the number of links in an opening sentence.  Which I can.

When I first started supporting grant-getting activity back in the halcyon days of 2005ish, the British Academy Small Grants scheme was a small and beautifully formed scheme.  It funded up to £7.5k or so for projects of up to two years, and only covered research expenses – so no funding for investigator time, replacement teaching, or overheads, but it would cover travel, subsistence, transcription, data, casual research assistance and so on.  It was a light-touch application on a simple form, and enjoyed a success rate of around 50%.  The criterion for funding was academic merit.  Nothing else mattered.  It funded some brilliant work, and Ken Emond of the British Academy has always spoken very warmly about the scheme and considered it a real success story.  Gradually people started cottoning on to just how good a scheme it was, and success rates started to drop – but that’s what happens when you’re successful.

Then along came the Comprehensive Spending Review and budgets were cut.  I presume the scheme was slated to be scrapped under government pressure, only for our heroes at the BA eventually to win the argument.  At the same time, the ESRC decided that their reviewers weren’t going to get out of bed in the morning for less than £200k.  Suddenly bigger projects were the only option, and (funded) academic research looked to be all about perpetual paradigm shifts, with only outstanding work that promised to change everything getting funded.  And there was no evidence of any thought as to how these major theoretical breakthroughs gained through massive grants might be developed, expanded, exploited and extended through smaller projects.

Although it was great to see the BA SGS scheme survive in any form, the reduced funding made it inevitable that success rates would plummet.  However, the increased funding from the Leverhulme Trust could make a difference.  According to the announcement, the Trust has promised £1.5 million funding over three years.  Let’s assume:

  • that every penny goes on supporting research rather than on infrastructure and overheads, and that it’s all additional (rather than replacement) funding
  • that £10k will remain the maximum available
  • that the average amount awarded will be £7.5k

So…. £1.5m over three years is £500k per year.  £500k divided by an average project cost of £7.5k gives about 67 extra projects a year.  While we don’t know how many projects will be funded in this year’s reduced scheme, we do know about last year.  According to the British Academy’s 2010/11 annual report:

For the two rounds of competition held during 2010/11 the Academy received 1,561 applications for consideration and 538 awards were made, a success rate of 34.5%.  Awards were spread over the whole range of Humanities and Social Sciences, and were made to individuals based in more than 110 institutions, as well as to more than 20 independent scholars.

2010/11 was the last year that the scheme ran in full, and at the time we all thought that the spring 2011 call would be the last, so I suspect that the success rate might have been squeezed by a number of ‘now-or-never’ applications.  We won’t know until next month how many awards were made in the Autumn 2011 call, nor what the success rate was, and so we won’t know until then whether the Leverhulme cash will restore the scheme to its former glory.  I suspect that it won’t, and that the combined total of the BA’s own funds and the Leverhulme contribution will add up to less than was available for the scheme before the comprehensive spending review struck.
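For anyone who wants to check my sums, here’s the back-of-envelope arithmetic as a minimal Python sketch.  It only uses the figures quoted above – the £1.5m over three years from the announcement, my assumed £7.5k average award, and the 2010/11 numbers from the annual report – and none of it should be read as an official projection:

```python
# Back-of-envelope sketch of the figures discussed above.
# All inputs are either quoted figures or my own assumptions, not official projections.

leverhulme_total = 1_500_000   # £, promised over three years (per the announcement)
years = 3
average_award = 7_500          # £, assumed average award size

per_year = leverhulme_total / years           # £500,000 per year
extra_projects = per_year / average_award     # ~66.7, i.e. about 67 extra awards a year

# For comparison, the 2010/11 figures quoted from the BA annual report:
applications_2010_11 = 1_561
awards_2010_11 = 538
success_rate = awards_2010_11 / applications_2010_11   # ~0.345, i.e. 34.5%

print(f"Leverhulme funding per year: £{per_year:,.0f}")
print(f"Extra projects per year:    ~{extra_projects:.0f}")
print(f"2010/11 success rate:       {success_rate:.1%}")
```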

Nevertheless, there will be about 67 more small social science and humanities projects funded each year than otherwise would have been the case.  So let’s raise a non-alcoholic beverage to the Leverhulme Trust, and in memory of founder William Hesketh Lever and his family’s values of ‘liberalism, nonconformity, and abstinence’.

23rd Jan update:  In response to a question on Twitter from @Funding4Res (aka Marie-Claire from the University of Huddersfield’s Research and Enterprise team), the British Academy have said that “they’ll be rounds for Small Research Grants in the spring and autumn. Dates will be announced soon.”