Book review: The Research Funding Toolkit (Part 1)

For the purposes of this review, I’ve set aside my aversion to the use of terms like ‘toolkit’ and ‘workshop’.

The existence of a market for The Research Funding Toolkit, by Jacqueline Aldridge and Andrew Derrington, is yet more evidence of how difficult it is to get research funding in the current climate.  Although the primary target audience is an academic one, research managers and those in similar roles “will also find most of this book useful”, and I’d have no hesitation in recommending it to researchers who want to improve their chances of getting funding, and to both new and experienced research managers.  In particular, academics who don’t have regular access to research managers (or similar) and to experienced grant getters and givers at their own institution should consider this book essential reading if they have serious ambitions of obtaining research funding.  While no amount of skill in grant writing will get a poor idea funded, a lack of it can certainly prevent an outstanding idea from getting the hearing it deserves if the application lacks clarity, fails to highlight the key issues, or fails to make a powerful case for its importance.

The authors have sought to distil a substantial amount of advice and experience down into one short book which covers finding appropriate funding sources, planning an application, understanding application forms, and assembling budgets.  But it goes beyond mere administrative advice, and also addresses writing style, getting useful (rather than merely polite) feedback on draft versions, the internal politics of grant getting, the challenges of collaborative projects, and the key questions that need to be addressed in every application.  Crucially, it demystifies what really goes on at grant decision-making meetings – something that far too many applicants know far too little about.  Applicants would love to think that the scholarly and eminent panel spend hours subjecting every facet of their magnum opus to detailed, rigorous, and forensic analysis.  The reality is – unavoidably, given application numbers – rather different.

Aldridge and Derrington are well-situated to write a book about obtaining research funding.  Aldridge is Research Manager at Kent Business School and has over eight years’ experience of research management and administration.  Derrington is Pro-Vice Chancellor for Humanities and Social Sciences at the University of Liverpool, and has served on grant committees for several UK research councils and for the Wellcome Trust.  His research has been “continuously funded” by various schemes and funders for 30 years.  I think a book like this could only have been written in close collaboration between an academic with grant getting and giving experience, and a research manager with experience of supporting applications over a number of years.

The book practises what it preaches by applying the principles of grant writing that it advocates to its own style and layout.  It is organised into 13 distinct chapters, each with a summary and introduction, and a conclusion that draws together the key points and lessons.  It includes 19 different practical tools, as well as examples from successful grant applications.  One of the appendices offers advice on running institutional events on grant getting.  As it advises applicants to do, it breaks the text down into small chunks, makes good use of headings and subheadings, and uses clear, straightforward language.  It’s an easy, straightforward read which won’t take too long to get through cover-to-cover, and the structure allows the reader to dip back in and re-read appropriate sections later.  Probably the most impressive thing for me about the style is how lightly it wears its expertise – genuinely useful advice without falling into the traps of condescension, smugness, or preaching.  Although the prose sacrifices sparkle for clarity and brevity, the book coins a number of useful phrases and distinctions, and I’ll certainly be adopting one or two of them.

Writing a book of this nature raises a number of challenges about specificity and relevance.  Different subjects have different funders with different priorities and conventions, and arrangements vary from country to country and – of course – over time.  The authors have deliberately drawn on a wide range of example funders, including funders from Australia, America, and Europe – though, as you might expect, the majority are UK-based.  Different UK Research Councils are used as case studies, and I would imagine that the advice given is generalisable enough to be of real value across academic disciplines and countries.  It’s harder to tell how the book will date (references to web resources all date from October 2011), but much of the advice flows directly from (a) the scarcity of resources, and (b) the way that grant panels are organised and work, and it’s hard to imagine either changing substantially.  The authors are careful not to make generalisations or sweeping assertions based on any particular funder or scheme, so I would be broadly optimistic about the book’s continuing relevance and utility in years to come.  There’s also a website to accompany the book where new materials and updates may be added in future; there are already a number of blog posts published since the book came out.

Worries about appearing dated may account for the book having comparatively little to say about the impact agenda and how to go about writing an impact statement.  Only two pages address this directly, and much of that space is taken up with examples.  Although not all UK funders ask for impact statements yet, the research councils have been asking for them for some time, and indications are that funders in other countries are more likely to follow suit than not.  However, I think the authors were right not to devote a substantial section to this: understandings of and approaches to impact are still in their infancy, and such a section would be likely to date quickly.

I’ve attempted a fairly general review in this post, and I’ll save most of my personal reaction for Part 2.  As well as highlighting a few areas that I found particularly useful, I’m going to raise a few issues arising from the book as a jumping-off point for debate and discussion.  Attempting to do that here would make this post too long, and would unbalance the review by placing excessive focus on areas where I’d tentatively disagree, rather than on the overwhelming majority of the points and arguments in the book, which I’d thoroughly endorse.

‘The Research Funding Toolkit’ (£21.99 for the paperback version) is available from Sage.  The Sage website also mentions an ebook version, but the link doesn’t appear to be working at the time of writing.

Declarations of interest:
Publishers Sage were kind enough to provide me with a free review copy of this book.  I have had some very brief Twitter interactions with Derrington and I met Aldridge briefly at the ARMA conference earlier this year.

News from the ESRC: International co-investigators and the Future Leaders Scheme

"They don't come over here, they take our co-investigator jobs..."I’m still behind on my blogging – I owe the internet the second part of the impact series, and a book review I really must get round to writing.  But I picked up an interesting nugget of information regarding the ESRC and international co-investigators that’s worthy of sharing and commenting upon.

ESRC communications send round an occasional email entitled ‘All the latest from the ESRC’, which is well worth subscribing to and reading very carefully, as quite big announcements and changes are often smuggled out in the small print.  In the latest edition, for example, the headline news is the Annual Report (2011-12), while the announcement of the ESRC Future Leaders call for 2012 is only the fifth item down a list of funding opportunities.  To be fair, it was also announced on Twitter and perhaps elsewhere too, and perhaps the email has a wider audience than people like me.  But even so, it’s all a bit low key.

I’ve not got much to add to what I said last year about the Future Leaders scheme, other than to note with interest the lack of an outline stage this year, and the decision to ring-fence some of the funding for very early career researchers – current doctoral students and those who have just passed their PhD.  Perhaps the ESRC are now more confident in institutions’ ability to regulate their own submission behaviour, and I can see this scheme being a real test of that.  I know that at the University of Nottingham we’re taking all this very seriously indeed – grant writing is now neither a sprint nor a marathon but more like a steeplechase – and my impression from the ARMA conference is that we’re far from alone in this.  Balancing ‘demand management’ with a desire to encourage applications is a topic for another blog post.  As is the effect of all these calls with early autumn deadlines – I’d argue it’s much harder to demand manage over the summer months, when applicants, reviewers, and research managers are likely to be away on holiday and/or researching.

Something else mentioned in the ESRC email is a light-touch review of the ESRC’s international co-investigator policy.  One of the findings was that

“…grant applications with international co-investigators are nearly twice as likely to be successful in responsive mode competitions as those without, strengthening the argument that international cooperation delivers better research.”

This is very interesting indeed.  My first reaction is to wonder whether all of that greater success can be explained by higher quality, or whether the extra value for money on offer has made a difference.  Outside of the various international co-operation/bilateral schemes, the ESRC would generally expect to pay only the directly incurred research costs of ICo-Is, such as travel, subsistence, transcription, and research assistance.  It won’t normally pay for investigator time and will never pay overheads, which represents a substantial saving compared with naming a UK-based Co-I.

While the added value for money argument will generally count in the application’s favour, there are circumstances where it might make the application technically ineligible.  When the ESRC abolished the small grants scheme and introduced a floor of £200k as the minimum that can be applied for through the research grants scheme, that figure was taken to represent the minimum scale/scope/ambition they were prepared to entertain.  But a project with a UK Co-I may sneak in just over £200k and be eligible, while an identical project with an ICo-I would not be, as it would have no salary costs or overheads to bump up the cost.  I raised this with the ESRC a while back when I was supporting an application that would have been ineligible under the new rules, but we managed to submit it before the final deadline for Small Grants.  So the issue did not arise for us then, but I’m sure it will (and probably already has) for others.
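To make the arithmetic behind that eligibility quirk concrete, here’s a minimal sketch with entirely invented figures – the £200k floor is real, but every cost below is a hypothetical illustration, not taken from any actual application:

```python
# Hypothetical illustration of the £200k eligibility floor.
# All cost figures below are invented for the sake of the example.

THRESHOLD = 200_000  # minimum that can be applied for under the research grants scheme

# Shared directly incurred costs (travel, transcription, research assistance, etc.)
shared_costs = 160_000

# Version A: UK-based Co-I, whose salary time and overheads are charged to the grant
uk_coi_total = shared_costs + 25_000 + 30_000   # + hypothetical Co-I salary + overheads

# Version B: identical project with an international Co-I, for whom only
# directly incurred costs would normally be paid (no salary, no overheads)
int_coi_total = shared_costs

for label, total in [("UK Co-I", uk_coi_total), ("International Co-I", int_coi_total)]:
    verdict = "eligible" if total >= THRESHOLD else "below the £200k floor"
    print(f"{label}: £{total:,} -> {verdict}")
```

Same project, two prices: only the version that happens to carry UK salary and overhead costs clears the threshold.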

The ESRC has clarified the circumstances under which they will pay overseas co-investigator salary costs:

“….only in circumstances where payment of salaries is absolutely required for the research project to be conducted. For example, where the policy of the International Co-Investigator’s home institution requires researchers to obtain funding for their salaries for time spent on externally-funded research projects.

In instances where the research funding structure of the collaborating country is such that national research funding organisations equivalent to the ESRC do not normally provide salary costs, these costs will not be considered. Alternative arrangements to secure researcher time, such as teaching replacement costs, will be considered where these are required by the co-investigator’s home institution.”

This all seems fairly sensible, and would allow the participation of researchers based in institutes where they’re expected to bring in their own salary, as well as those without a substantial research time allocation that could be straightforwardly used for the project.

While it would clearly be inadvisable to add an ICo-I in the hope of boosting chances of success or for value for money alone, it’s good to know that applications with ICo-Is are doing well with the ESRC even outside the formal collaborative schemes, and that we shouldn’t shy away from looking abroad for the very best people to work with.  Few would argue with the ESRC’s contention that

[m]any major issues requiring research evidence (eg the global economic crisis, climate change, security etc.) are international in scope, and therefore must be addressed with a global research response.

The ARMA conference, social media, the future of this blog, and some downtime

The Association of Research Managers and Administrators conference was held in Southampton last week, and I’ve only got time to scribble a few words about it.  It’s a little frustrating, really – I’ve come back from the conference with various ideas and schemes for work, and a few for the blog, but I’m on annual leave until the end of July.  While I’ve always written this blog in my own time, I’m going to have a near-complete break (apart from perhaps a little Twitter lurking) so my reader will have to wait until July at the very earliest for the second instalment of my impact series.

I co-presented a session at ARMA on ‘Social Media in Research Support’ with Phil Ward of ‘Fundermentals‘ and the University of Kent, Julie Northam (Bournemouth University Research blog), and David Young (Northumbria University Research blog).  Phil has written a concise summary of the plenary sessions, and our presentation can be found on the Northumbria blog.

I have a slight stammer that I’m told most people don’t notice, so I’m not a ‘natural’ public speaker, but I’m very pleased with the way the session went.  I’m very grateful to my three co-presenters for their efforts and for what amounted to quite a lot of preparation time, including a meeting in London.  I’m also very grateful to the delegates who attended – I think I counted 50 or so, which, for the final session of the conference, scheduled against a very strong line-up of parallel sessions, was pretty good.  It was a very warm afternoon, but energy and attention levels in the room felt high, and this helped enormously.  So if you made it, thank you for coming, thank you for your attention, and most importantly of all, thank you for laughing at our jokes.

David opened the session by asking about the audience’s experience with social media, and I was surprised at how much there was in the room.  We weren’t far short of 100% on Facebook, probably 20% or more on (or having used) Twitter, and four or five bloggers.  Perhaps it shouldn’t have been a surprise – the title of the session would presumably have appealed particularly to those with an interest or previous experience.  But it was good to have an idea of the level at which to pitch things.

The session consisted of a brief introduction and explanation of social media, followed by four case studies.  Phil and I talked about our motivations in setting up our own blogs, our experiences, lessons learnt, and the benefits and challenges.  Julie and David talked about their experience of setting up institutional research blogs, and how they went about getting institutional acceptance and academic buy-in.  It was interesting to see that the Open University had a poster presentation about a research blog they’ve set up, though that’s internal-only at the moment.  ARMA itself is now on Twitter, and this was the first year that the conference had an official hashtag – #ARMA2012.  While there’s no need for an official one – sometimes they just emerge – it’s very helpful to have an element of coordination.  I don’t think blogging or social media are going away any time soon, and I can only see their usage increasing – though I do have reservations about scalability and sustainability.

As I said in the presentation, my main motivation in setting up a blog was to try to join a broader conversation with academics, funders, and people like me.  We get to do a lot of that at the annual ARMA conference, but it would be good to keep it going throughout the rest of the year too.  A secondary motivation was to learn by doing – I’m expected to help academics write their pathways to impact, which almost inevitably involve social media, and by getting involved myself I understand it in a way that I never could have as a mere bystander.

My blog is now a few weeks shy of its first birthday, an auspicious event marked not by a birthday card but by an invoice from my hosting provider, and a time for reflection.  I’ve managed reasonably well to hit an average of 2-3 posts per month – some reactions to news, some more detailed think pieces, and some lighter reflections on university culture and life.  That’s not too bad, but looking to the future I wonder whether I’ll be able to sustain this, and whether I’ll want to keep spending my own time writing about these things.  While I’m hopeful that I might be able to shift a little of the blog into my ‘day job’ (discussions on that to follow), one other option is to share the load, and I think the future for most blogs is multi-author.  Producing semi-regular content of consistent quality is a challenge, and I’m going to be soliciting guest posts in future to feature alongside my own – whether semi-regular or one-off.  So if you’d like to write occasionally but don’t want a whole blog of your own, this might be a good opportunity.  I’m happy to discuss anything that’s a good fit with the overall theme of the blog.  Please drop me an email if you’re interested – I don’t bite.

One issue that came up in the questions (and afterwards on Twitter) was the relationship between the personal and the professional.  My sense was that a fair few people in the room already had their own Twitter accounts, but used them for personal rather than professional purposes, and were concerned about mixing the two.  Probably there was little or no reference to their job in their bio, and they tweet about their interests and talk to family and friends.  This is something we touched on only very briefly in our talk, and mainly in reference to blogs rather than Twitter.  But it’s clearly something that concerns people, and may be an active barrier to more people getting involved in Twitter conversations.  Probably the one thing I’d do differently about the presentation would be to say more about this, and I’ve added it to my list of topics for future blog posts.

Unless anyone else wants to write it?

An Impact Statement: Part 1: Impact and the REF

If your research leads directly or indirectly to this, we'll be having words.....

Partly inspired by a Twitter conversation, and partly to try to bring some semblance of order to my own thoughts, I’m going to have a go at writing about impact.  Roughly, I’d argue that:

  • The impact agenda is – broadly – a good thing
  • Although there are areas of uncertainty and plenty of scope for collective learning, I think the whole area is much less opaque than many commentators seem to think
  • While the Research Councils and the REF have a common definition of ‘impact’, they’re looking at it from different ends of the telescope.

This post will come in three parts.  In part one, I’ll try to sketch a bit of background and say something about the position of impact in the REF.  In part two, I’ll turn to the Research Councils and think about how ‘impact’ differs from previous – related but different – agendas.  In part three, I’ll pose some questions that are puzzling me about impact and test my thinking with examples.

Why Impact?

What’s going on?  Where’s it come from?  What’s driving it?  I’d argue that to understand the impact agenda properly, it’s important to first understand the motivations.  Broadly speaking, I think there are two.

Firstly, I think it arises from a worry about a gap between academic research and those who might find it useful in some way.  How many valuable insights, of various kinds and from various disciplines, have never got further than an academic journal or conference?  While some academics have always considered providing policy advice or writing for practitioner journals a key part of their role, I’m sure that’s not universally true.  I can imagine some of those researchers now complaining, like music obsessives, that they were into impact before anyone else, before it sold out and went all mainstream.  As I’ve argued previously, one advantage of the impact agenda is that it gives engaged academics some long overdue recognition, as well as a much greater incentive for others to become involved in impact-related activities.

Secondly, I think it’s about finding concrete, credible, and communicable evidence of the importance and value of academic research.  If we want to keep research funding at current levels, there’s a need to show return on investment and that the taxpayer is getting value for money.  Some will cringe at the reduction of the importance and value of research to such crude and instrumentalist terms, but we live in a crude and instrumentalist age.  There is an overwhelming case for the social and economic benefits of research, and that case must be made.  Whether we like it or not, no government of any likely hue is just going to keep signing the cheques.  The champions of research in policy circles do not intend to go naked into the conference chamber when they fight our corner.  Whether the impact agenda comes directly from government, or is a pre-emptive move, I’m not quite sure.  But the effect is pretty much the same.

What’s Impact in the REF?

The REF definition of impact is as follows:

140. For the purposes of the REF, impact is defined as an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia (as set out in paragraph 143).
141. Impact includes, but is not limited to, an effect on, change or benefit to:
• the activity, attitude, awareness, behaviour, capacity, opportunity, performance, policy, practice, process or understanding
• of an audience, beneficiary, community, constituency, organisation or individuals
• in any geographic location whether locally, regionally, nationally or internationally.
142. Impact includes the reduction or prevention of harm, risk, cost or other negative effects.
Assessment Framework and Guidance on Submissions, page 26.

Paragraph 143 goes on to rule out academic impact, on the grounds that it’s assessed in the outputs and environment sections.  Fair enough.  More controversially, it goes on to state that “impacts on students, teaching, and other activities within the submitting HEI are excluded”.  But it’s possible to understand the reasoning.  If it were included, there’s a danger that far too many impact case studies would be about how research affects teaching – and while that’s important, I don’t think we’d want it to dominate.  There’s also an argument that the link between research and teaching ought to be so obvious that there’s no need to measure it for particular reward.  In practical terms, I think it would be hard to measure: I might know how my new theory has changed how I teach my module on (say) organisational behaviour to undergraduates, but it would be hard to track that change across all UK business schools.  I’d also worry about the perverse incentives for the shape of the curriculum that allowing impact on teaching might create.

The Main Panel C (the panel for most social sciences) criteria state that:

The main panel acknowledges that impact within its remit may take many forms and occur in a wide range of spheres. These may include (but are not restricted to): creativity, culture and society; the economy, commerce or organisations; the environment; health and welfare; practitioners and professional services; public policy, law and services. The categories used to define spheres of impact, for the purpose of this document, inevitably overlap and should not be taken as restrictive. Case studies may describe impacts which have affected more than one sphere. (para 77, pg. 68)

There’s actually a lot of detail and some good illustrations of what forms impact might take, and I’d recommend having a read.  I wonder how many academics not directly involved in REF preparations have read this?  One difficulty is finding it – it’s not the easiest document to track down.  For my non-social science reader(s), the other panels’ working methods can be found here.  Helpfully, nothing on that page will tell you which panel is which, but (roughly) Panel A is health and life sciences; B is natural sciences, computing, maths and engineering; C is social sciences; and D is the humanities.  Each panel criteria document has a table with examples of impact.

What else do we know about the place of impact in the REF?  Well, we know that the impact has to have occurred in the REF period (1 January 2008 to 31 July 2013) and that it has to be underpinned by excellent research (at least 2*) produced at the submitting university at some point between 1 January 1993 and 31 December 2013.  It doesn’t matter if the researchers who produced the research are still at the institution – while publications move with the author, impact stays with the institution.  However, I can’t help wondering whether an excessive reliance on research undertaken by departed staff might look too much like trading on past glories; probably it’s about getting the balance right.  The number of case studies required is approximately one per 8 FTE submitted, but see page 28 of the guidance document for a table.

Impact will have a weighting of 20%, with environment at 15% and outputs (publications) at 65%, and it looks likely that the weighting of impact will increase next time.  However, I wouldn’t be at all surprised if the actual contribution ends up being less than that.  If there’s a general trend for overall impact scores to be lower than those for (say) publications, then the contribution will end up being less than 20%.  My understanding is that for some units of assessment, environment was consistently rated more highly, thus de facto increasing its weighting.  Unfortunately this is just a recollection of something I read years ago, which I can’t now find.  But if it’s right, and if impact does come in with lower marks overall, we neglect environment at our peril.
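As a rough, back-of-the-envelope illustration of that point, here’s a minimal sketch using invented sub-profile scores – the 65/15/20 weightings are the published ones, but every score below is hypothetical:

```python
# Hypothetical illustration: if impact is scored lower on average than outputs,
# its share of the overall weighted score falls below its nominal 20% weighting.
# All scores below are invented for the example.

weights = {"outputs": 0.65, "environment": 0.15, "impact": 0.20}
scores = {"outputs": 3.0, "environment": 3.0, "impact": 2.0}   # hypothetical average star ratings

overall = sum(weights[k] * scores[k] for k in weights)          # 2.80 with these numbers
impact_share = (weights["impact"] * scores["impact"]) / overall # impact's slice of the total

print(f"Overall weighted score: {overall:.2f}")
print(f"Impact's actual share of the overall score: {impact_share:.0%}")  # roughly 14%, not 20%
```

With those made-up numbers, impact contributes only about 14% of the overall weighted score despite its nominal 20% weighting – which is the sense in which a lower-scoring element counts for less de facto.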

Jobs in university administration

This man had hair before he started shortlisting.....

The Guardian Higher Education network recently hosted a careers clinic on ‘How to break into university administration‘, and I posted a few thoughts that might be useful.  According to my blog’s referral stats, a number of visitors end up here with similar questions, both about recruitment processes and about what it’s like to work for a university.  I think it’s mainly my post on Academics vs University Administrators part 94 that gets those hits.  I’ve also been asked by friends and relatives for my very limited wisdom on this topic.

I also think it’s good to share this information, because one of my worries whenever I’m involved in recruiting staff is that we end up employing the people who are best at writing applications and being interviewed, rather than the people who would be best at the job.  In my particular line of work, that’s fine – if you can’t write a strong job application against set criteria, you probably shouldn’t be helping academics with grant applications.  But that’s the exception.

So what follows is me spilling the beans, based on my admittedly limited experience of recruiting administrative staff at two institutions, both as panel chair and as an external panel member.  I’m not an HR expert.  I’m not a careers adviser.  But for what it’s worth, here is an edited and expanded version of what I posted on the Guardian page.

——————————————————————————

When an administrative job is advertised, a document called a ‘person specification’ is drawn up. Formats vary, but usually this is a list of skills, attributes, experiences, and attitudes, each classed as either “essential” or “desirable”. Often it will also say at which stage of the recruitment process each will be assessed (application, aptitude test, or interview).

In all of the recruitment I’ve been involved in, this is an absolutely vital document. Decisions about who to shortlist for interview and, ultimately, who to appoint will be made and justified on the basis of this person specification.  And we must be able to justify our decisions if challenged.  As panel chair I was required to (briefly) explain the reasons for rejecting everyone we didn’t interview, and then everyone we didn’t appoint.  I’m sure the importance of the person specification isn’t unique to universities.

To get an interview, an applicant needs to show that they meet all of the essential criteria and as many of the desirable ones as possible. My advice to applicants is that if they don’t have some of the desirable criteria, they should make the case for having something equivalent, or a plan to get that skill. For example, if a person spec lists “web design” as desirable and you can’t do it, express willingness to go on a course. For bonus points, find a course that you’d like to go on.  If you’re offered an interview, you can use the person spec to predict the interview questions – they’ll be questions aimed at getting evidence about your fit with the person spec.  You could do worse than to imagine that you’re on the interview panel and think of the questions you’d ask to get evidence about candidates’ fit with those criteria.  Chances are you won’t be a million miles off.

Unfortunately, if you don’t meet the essential criteria, it’s a waste of time applying.  You won’t get an interview.

As an applicant, your job in your application form is to make it as obvious as possible to the panel members that you meet the criteria. Back it up with evidence and at least some detail. If a criterion concerns supporting committees with minute taking and agenda preparation, don’t just assert that you’ve done it – say a bit about the committee, exactly what you did, and how you did it.  Culturally, we’re not good at blowing our own trumpets, and an effective way round this is to stick to the facts.  Don’t tell, show.

Panel members really appreciate it when applicants make it easy – they can just look down the person spec, look through the application, and tick, tick, tick, you’re on the potential interviewees pile.  Don’t make panel members guess or try to interpret what you say to measure it against the criteria.  There’s nothing more frustrating than an applicant who might be exactly what we need, but who hasn’t made a strong enough or clear enough case, especially about transferable skills.

Panel members can tell the difference between an application that’s been tweaked slightly and sent to every vacancy, and one that’s been tailored for that particular vacancy. Do that, put in the effort, and you will stand out, because so many people don’t. Take the application seriously, and you’ll be taken seriously in turn. And spell-checking and proofreading are your friends.  A good admin vacancy in a university in the current climate can attract hundreds of applications.  That’s not an exaggeration.

Two other tips. One is to always ask for feedback if you’re unsuccessful at interview. In every process I’ve been involved in, there’s been useful feedback there for you if you want it. Even if it’s “someone else was better suited, and there’s nothing you could have done differently/better”, you still want to know that. If you were good, chances are that the university in question would like you to apply again in the future. The second is to always take up any offer of an informal conversation in advance of applying.  If you can ask sensible questions that show you’ve read all the documents thoroughly, there’s a chance you’ll be remembered when you apply. You won’t get special treatment, but it can’t hurt.

Jobs will be advertised in a variety of places, depending on the grade and the degree of specialism needed.  Universities will have a list of current vacancies on their websites, and often use local papers for non-specialist roles.  Jobs.ac.uk is also widely used, and has customisable searches/vacancy emails, as well as some more good advice on job seeking.

Finally….. every job interview process that I’ve been involved with has attracted outstanding candidates. Some with little work experience, some with NHS or local authority admin experience, many from the private sector too. Universities are generally good employers and good places to work. It’s competitive at the best of times, and will be doubly so now.

————————————————————————–

The fact that most of you reading this (a) already have university jobs and (b) know perfectly well how the recruitment process works isn’t lost on me.  But this one’s for my random Google visitors.  Normal service will resume shortly.