A partial, qualified, cautious defence of the Research Excellence Framework (REF)

No hilarious visual puns on REF / Referees from me....

There’s been a constant stream of negative articles about the Research Excellence Framework (for non-UK readers, this is the “system for assessing the quality of research in UK higher education institutions”) over the last few months, and two recent additions (from David Shaw, writing in the Times Higher, and from Peter Wells on the LSE Impact Blog) have prompted me to respond with something of a defence of it.

One crucial fact that I left out of the description of the REF in the previous paragraph is that “funding bodies intend to use the assessment outcomes to inform the selective allocation of their research funding to HEIs, with effect from 2015-16”.  And I think this is a fact that’s also overlooked by some critics.  While a lot of the talk is about prestige and ‘league tables’, what’s really driving the process is the need for some mechanism for divvying out the cash for funding research – QR funding.  We could most likely do without a “system for assessing the quality of research” across every discipline and every UK university in a single exercise using common criteria, but we can’t do without a method of dividing up the cake as long as there’s still cake to share out.

In spite of the current spirit of perpetual revolution in the sector, money is still paid (via HEFCE) to universities for research, without much in the way of strings attached.  This basic, core funding is one half of the dual funding system for research in the UK – the other half being funding for individual research projects and other activities through the Research Councils.  What universities do with their QR funding varies, but I think typically a lot of it goes on staff salaries, so that the number of staff in any given discipline is partly a function of teaching income and research income.

I do have sympathy for some of the arguments against the REF, but I find myself returning to the same question – if not this way, then how? 

It’s unfair to expect anyone who objects to any aspect of the REF to furnish the reader with a fully worked-up alternative, but constructive criticism must at least point the way.  One person who doesn’t fight shy of coming up with an alternative is Patrick Dunleavy, who has argued for a ‘digital census’ involving the use of citation data as a cheap, simple, and transparent replacement for the REF.  That’s not a debate I feel qualified to participate in, but my sense is that Dunleavy’s position on this is a minority one in UK academia.

In general, I think that criticisms of the REF tend to fall into the following broad categories.  I don’t claim to address decisively every last criticism made (hence the title), but for what it’s worth, here are the categories that I’ve identified, and what I think the arguments are.

1.  Criticism over details

The REF team have a difficult balancing act.  On the one hand, they need rules which are sensitive to the very real differences between academic disciplines.  On the other, fairness and efficiency call for as much similarity in approach, rules, and working methods as possible between panels.  The more differences between panels, the greater the chances of confusion and of mistakes being made in the process of planning and submitting REF returns which could seriously affect both notional league table placing and cold hard cash.  The more complicated the process, the greater the transaction costs.  Which brings me on to the second balancing act.  On the one hand, it needs to be a rigorous and thorough process, with so much public money at stake.  On the other hand, it needs to be lean and efficient, minimising the demands on the time of institutions, researchers, and panel members.  This isn’t to say that the compromise reached on any given point between particularism and uniformity, and between rigour and efficiency, is necessarily the right one, of course.  But it’s not easy.

2.  Impact

The use of impact at all.  The relative weighting of impact.  The particular approach to impact.  The degree of uncertainty about impact.  It’s a step into the unknown for everyone, but I would have thought that the idea that there be some notion of impact – some expectation that where academic research can make a difference in the real world, we should ensure that it does – is hard to argue with.  I have much more sympathy for some academic disciplines than others as regards objections to the impact agenda.  Impact is really a subject for a blog post in itself, but for now, it’s worth noting that it would be inconsistent to argue against the inclusion of impact in the REF while also arguing that the REF is too narrow in terms of what it values and what it assesses.

3.  Encouraging game playing

While it’s true that the REF will encourage game playing in similar (though not identical) ways to its predecessors, I can’t help but think this is inevitable and would also be true of every possible alternative method of assessment.  And what some would regard as gaming, others would regard as just doing what is asked of them.

One particular ‘game’ that is played – or, if you prefer, strategic decision that is made – concerns the threshold for submission.  It’s clear that there’s no incentive to include those whose outputs are likely to fall below the minimum threshold for attracting funding.  But it’s common for institutions to set the bar higher than this for some disciplines, with one eye not only on the QR funding, but also on league table position.  There are two arguments that can be made against this.  One is that QR funding shouldn’t be so heavily concentrated on the top rated submissions and/or that more funding should be available.  But that’s not an argument against the REF as such.  The other is that institutions should be obliged to submit everyone.  But the costs of doing so would be huge, and it’s not clear to me what the advantages would be – would we really get better or more accurate results with which to share out the funding?  Because ultimately the REF is not about individuals, but institutions.

4. Perverse incentives

David Shaw, in the Times Higher, sees a very dangerous incentive in the REF.

REF incentivises the dishonest attribution of authorship. If your boss asked you to add someone’s name to a paper because otherwise they wouldn’t be entered into the REF, it could be hard to refuse.

I don’t find this terribly convincing.  While I’m sure that there will be game playing around who should be credited with co-authored publications, I’d see that as acceptable in a way that the fraudulent activity that Shaw fears (but stresses that he’s not experienced first-hand) just isn’t.  There is opportunity for – and temptation to commit – fraud, bad behaviour and misconduct in pretty much everything we do, from marking students’ work to reporting our student numbers and graduate destinations.  I’m not clear how that makes any of these activities ‘unethical’ in the way his article seems to argue.  The incidence of fraud is low in our sector, and if anyone does commit it, it’s a huge scandal and heads roll.  It ruins careers and leaves a long shadow over institutions.  Even leaving aside the residual decency and professionalism that’s the norm in our sector, it would be a brave Machiavellian Research Director who would risk attempting this kind of fraud.  To make it work, you need the cooperation and the silence of two academic researchers for every single publication.  Risk versus reward – just not worth it.

Peter Wells, on the LSE blog, makes the point that the REF acts as an active disincentive for researchers to co-author papers with colleagues at their own institution, as only one can return the output to the REF.  That’s an oversimplification, but it’s certainly true that there’s active discouragement of the submission of the same output multiple times in the same return.  There’s no such problem if the co-author is at another institution, of course.  However, I’m not convinced that this theoretical disincentive makes a huge difference in practice.  Don’t academics co-author papers with the most appropriate colleague, whether internal or external?  How often – really – does a researcher choose to write something with a colleague at another institution rather than a colleague down the corridor?  For REF reasons alone?  And might the REF incentive to include junior colleagues as co-authors that Shaw identifies work in the other direction, for genuinely co-authored pieces?

In general, proving the theoretical possibility of a perverse incentive is not sufficient to prove its impact in reality.

5.  Impact on morale

There’s no doubt that the REF causes stress and insecurity and can add significantly to the workload of those involved in leading on it.  There’s no doubt that it’s a worrying time, waiting for news of the outcome of the R&R paper that will get you over whatever line your institution has set for inclusion.  I’m sure it’s not pleasant being called in for a meeting with the Research Director to answer for your progress towards your REF targets, even with the most supportive regime.

However…. and please don’t hate me for this…. so what?  I’m not sure that the bare fact that something causes stress and insecurity is a decisive argument.  Sure, there’s a prima facie case for trying to make people’s lives better rather than worse, but that’s about it.  And again, what alternative system would be equally effective at dishing out the cash while being less stressful?  The fact is that every job – including university jobs – is sometimes stressful and has downsides as well as upsides.  Among academic staff, the number one stress factor I’m seeing at the moment is marking, not the REF.

6.  Effect on HE culture

I’ve got more time for this argument than for the stress argument, but I think a lot of the blame is misdirected.  Take Peter Wells’ rather utopian account of what might replace the REF:

For example, everybody should be included, as should all activities.  It is partly by virtue of the ‘teaching’ staff undertaking a higher teaching load that the research active staff can achieve their publications results; without academic admissions tutors working long hours to process student applications there would be nobody to receive research-led teaching, and insufficient funds to support the University.

What’s being described here is not in any sense a ‘Research Excellence Framework’.  It’s a much broader ‘Academic Excellence Framework’, and that doesn’t strike me as something that’s particularly easy to assess.  How on earth could we go about assessing absolutely everything that absolutely everyone does?  Why would we give out research cash according to how good an admissions tutor someone is?

I suspect that what underlies this – and some of David Shaw’s concerns as well – is a much deeper unease about the relative prestige and status attached to different academic roles: the research superstar; the old-fashioned teaching and research lecturer; those with heavy teaching and admin loads who are de facto teaching only; and those who are de jure teaching only.  There is certainly a strong sense that teaching is undervalued – in appointments, in promotions, in status, and in other ways.  Those with higher teaching and admin workloads do enable others to research in precisely the way that Wells argues, and respect and recognition for those tasks is certainly due.  And I think the advent of increased tuition fees is going to change things, for the better, in terms of the profile and status of excellent teaching.

But I’m not sure why any of these status problems are the fault of the REF.  The REF is about assessing research excellence and giving out the cash accordingly.  If the REF is allowed to drive everything, and non-inclusion is such a badge of dishonour that the contributions of academics in other areas are overlooked, well, that’s a serious problem.  But it’s an institutional one, and not one that follows inevitably from the REF.  We could completely change the way the REF works tomorrow, and it would make very little difference to the underlying status problem.

It’s not been my intention here to refute each and every argument against the REF, and I don’t think I’ve even addressed directly all of Shaw and Wells’ objections.  What I have tried to do is to stress the real purpose of the REF, the difficulty of the task facing the REF team, and make a few limited observations about the kinds of objections that have been put forward.  And all without a picture of Pierluigi Collina.

New year’s wishes….

The new calendar year is traditionally a time for reflection and for resolutions, but in a fit of hubris I’ve put together a list of resolutions I’d like to see for the sector, research funders, and university culture in general.  In short, for everyone but me.  But to show willing, I’ll join in too.

No more of the following, please….

1.  “Impactful”

Just…. no.  I don’t think of myself as a linguistic purist or a grammar-fascist, though I am a pedant for professional purposes.  I recognise that language changes and evolves over time, and I welcome changes that bring new colour and new descriptive power to our language.  While I accept that the ‘impact agenda’ is here to stay for the foreseeable future, the ‘impactful’ agenda need not be.  The technical case against this monstrosity of a word is outlined at Grammarist, but surely the aesthetic case is conclusive in itself.  I warn anyone using this word in my presence that I reserve the right to tell them precisely how annoyful they’re being.

2.  The ‘Einstein fallacy’

This is a mistaken and misguided delusion that a small but significant proportion of academics appear to be suffering from.  It runs a bit like this:
1) Einstein was a genius
2) Einstein was famously absent-minded and shambolic in his personal organisation
3) Conclusion:  If I am or pretend to be absent-minded and shambolic, either:
(3a) I will be a genius; or
(3b) People will think I am a genius; or
(3c) Both.

I accept that some academics are genuinely bad at administration and organisation.  In some cases it’s a lack of practice/experience, in others a lack of confidence, and I accept that in some cases this is just not where their interests and talents lie.  Fair enough.  But please stop being deliberately bad at it to try to impress people.  Oh, and you can only act like a prima donna if you have the singing skills to back it up…

3.  Lack of predictability in funding calls

Yes, I’m looking at you, ESRC.  Before the comprehensive spending review and all of the changes that followed from it, we had a fairly predictable annual cycle of calls, very few of which had very early autumn deadlines.  Now we’re into a new cycle which may or may not be predictable, and a lot of the calls seem to fall very early in the academic year.  Sure, let’s have one-off calls on particular topics, but let’s have a predictable annual cycle for everything else with as much advance notice as possible.  It’ll help hugely with ‘demand management’, because it’ll be much easier to postpone applications that aren’t ready if we know there will be another call.  For example, I was aware of a couple of very strong seminar series ideas which needed further work and discussion within the relevant research and research-user communities.  My advice was to start that work now, using the existence of the current call as impetus, and to submit next year.  But we’ve taken a gamble, as we don’t know if there will be another call in the future, and you can’t tell me, because apparently a decision has yet to be made.

4.  Lazy “please forward as appropriate” emails

Stuff sent to me from outside the Business School with the expectation that I’ll just send it on to everyone.  No.  Email overload is a real problem, and I write most of my emails with the expectation that I have ten seconds at most either to get the message across, or to earn an attention extension.  I mean, you’re not even reading this properly, are you?  You’re probably skim-reading this in case there’s a nugget of wit amongst the whinging.  Every email I send creates work for others, and every duff, dodgy, or irrelevant email I send reduces my e-credit rating.  I know for a fact that at least some former colleagues deleted everything I sent without reading it – there’s no other explanation I can think of for their missing two emails with the magic words “sabbatical leave” in the header.

So… will I be spending my e-credit telling my colleagues about your non-Business School related event which will be of interest to no-one?  No, no, and most assuredly no.  I will forward it “as appropriate”, if by “appropriate” you mean my deleted items folder.

Sometimes, though, a handful of people might be interested.  Or quite a lot of people might be interested, but it’s not worth an individual email.  Maybe I’ll put it on the portal, or include it in one of my occasional news and updates emails.  Maybe.

If you’d like me to do that, though, how about sending me the message in a form I can forward easily and without embarrassment?  With a meaningful subject line, and a succinct and accurate summary in the opening two sentences?  So that I don’t have to write them for you before I feel I can send it on.  There’s a lovely internet abbreviation – TL;DR – which stands for Too Long; Didn’t Read.  I think its existence tells us something.

5.  People who are lucky enough to have interesting, rewarding and enjoyable jobs with an excellent employer and talented and supportive colleagues, who always manage to find some petty irritants to complain about, rather than counting their blessings.


Season’s greetings to both my readers….

The build-up to Christmas tends to be a funny time at universities.  Well, I say ‘build up’, but it’s more of a ‘fade out’ as people slope off a few at a time on annual leave.  We do very well in leave terms over Christmas because of ‘university holidays’, and I’m grateful for that.  I get quite annoyed by the way that the sales and other commercial stuff seems to start up again straight away.  Can’t we all have a bit of a break over Christmas?

Apropos of very little, and without even the flimsiest of justifications, here’s my favourite Christmas song… ‘It’s Clichéd to be Cynical at Christmas’ by the incomparable ‘Half Man Half Biscuit’.

Best wishes to you and yours for the festive season…..

Adam


Outstanding researcher or Outstanding grant writer?

"It's all the game, yo....."

The Times Higher has a report on Sir Paul Nurse’s ‘Anniversary Day’ address to the Royal Society.  Although the Royal Society is a learned society in the natural rather than the social sciences, he makes an interesting distinction that seems to have – more or less unchallenged – become a piece of received wisdom across many if not all fields of research.

Here’s part of what Sir Paul had to say (my underline added):

Given this emphasis on the primacy of the individuals carrying out the research, decisions should be guided by the effectiveness of the researchers making the research proposal. The most useful criterion for effectiveness is immediate past progress. Those that have recently carried out high quality research are most likely to continue to do so. In coming to research funding decisions the objective is not to simply support those that write good quality grant proposals but those that will actually carry out good quality research. So more attention should be given to actual performance rather than planned activity. Obviously such an emphasis needs to be tempered for those who have only a limited recent past record, such as early career researchers or those with a break in their careers. In these cases making more use of face-to-face interviews can be very helpful in determining the quality of the researcher making the application.

I guess my first reaction to this is to wonder whether interviews are the best way of deciding research funding for early career researchers.  Apart from the cost, inconvenience and potential equal opportunities issues of holding interviews, I wonder if they’re even a particularly good way of making decisions.  When it comes to job interviews, I’ve seen many cases where interview performance seems to take undue priority over CV and experience.  And if the argument is that sometimes the best researchers aren’t the best communicators (which is fair), it’s not clear to me how an interview will help.

My second reaction is to wonder about the right balance between funding excellent research and funding excellent researchers.  And I think this is really the point that Sir Paul is making.  But that’s a subject for another entry, another time.  Coming soon!

My third reaction – and what this entry is about – is the increasingly common assumption that there is one tribe of researchers who can write outstanding applications, and another which actually does outstanding research.  One really good expression of this can be found in a cartoon at the ever-excellent Research Counselling.  Okay, so it’s only a cartoon, but it wouldn’t have made it there unless it was tapping into some deeper cultural assumptions.  This article from the Times Higher back at the start of November speaks of ‘Dr Plods’ – for whom getting funding is an aim in itself – and ‘Dr Sparks’ – the ones who deserve it – and there seems to be little challenge from readers in the comments section below.

But does this assumption have any basis in fact?  Are those who get funded mere journeymen and journeywomen researchers, mere average intellects, whose sole mark of distinction is their ability to toady effectively to remote and out-of-touch funding bodies?  To spot the research priority flavour-of-the-month from the latest Delivery Plan, and cynically twist their research plans to match it?  It’s a comforting thought for the increasingly large number of people who don’t get funding for their project.  We’d all like to be the brilliant-but-eccentric-misunderstood-radical-unappreciated genius, who doesn’t play by the rules, cuts a few corners, but gets the job done, and to hell with the pencil pushers at the DA’s office in city hall (or rather, at RCUK’s offices in downtown Swindon).  A weird kind of cross between Albert Einstein and Jimmy McNulty from ‘The Wire’.

While I don’t think anyone is seriously claiming that the Sparks-and-Plods picture should be taken literally, I’m not even sure how much truth there is in it as a parable or generalisation.  For one thing, I don’t see how anyone could realistically Plod their way very far from priority to priority as they change and still have a convincing track record for all of them.  I’m sure that a lot of deserving proposals don’t get funded, but I doubt very much that many undeserving proposals do get the green light.  The brute fact is that there are more good ideas than there is money to spend on funding them, and the chances of that changing in the near future are pretty much zero.  I think that’s one part of what’s powering this belief – if good stuff isn’t being funded, that must be because mediocre stuff is being funded.  Right?  Er, well…. probably not.  I think the reality is that it’s the Sparks who get funded – but specifically those Sparks who are better able to communicate their ideas and make a convincing case for fit with funder or scheme priorities.  Plods, and their ‘incremental’ research (a term that damns with faint praise in some ESRC referees’ reports that I’ve seen), shouldn’t even be applying to the ESRC – or at least not to the standard Research Grants scheme.

Some of this Sparks/Plods view is probably down to the impact agenda.  If impact is hard for the social sciences, it’s at least ten times as hard for basic research in many of the natural sciences.  I can understand why people don’t like the impact agenda, and why some are hostile to it.  However, my understanding of the impact agenda, as far as research funding applications are concerned, has always been that if a project has the potential for impact, it ought to pursue it, and there ought to be a good, solid, thought-through, realistic, and defensible plan for bringing it about.  If there genuinely is no impact, argue the case in the impact statement.  Consider this, from the RCUK impact FAQ.

How do Pathways to Impact affect funding decisions within the peer review process?

The primary criterion within the peer review process for all Research Councils is excellent research. This has always been the case and remains unchanged. As such, problematic research with an excellent Pathways to Impact will not be funded. There are a number of other criteria that are assessed within research proposals, and Pathways to Impact is now one of those (along with e.g. management of the research and academic beneficiaries).

Of course, how this plays out in practice is another matter, but every indication I’ve had from the ESRC is that this is taken very seriously.  Research excellence comes first.  Impact (and other factors) second.  These may end up being used in tie-breakers, but if it’s not excellent, it won’t get funded.  Things may be different at the other Research Councils that I know less about, especially the EPSRC which is repositioning itself as a sponsor of research, and is busy dividing and subdividing and prioritising research areas for expansion or contraction in funding terms.

It’s worth recalling that it’s academics who make decisions on funding.  It’s not Suits in Swindon.  It’s academics.  Your peers.  I’d be willing to take seriously arguments that the form of peer review that we have can lead to conservatism and caution in funding decisions.  But I find it much harder to accept the argument that senior academics – researchers and achievers in their own right – are funding projects of mediocre quality but good impact stories ahead of genuinely innovative, ground-breaking research which could drive the relevant discipline forward.

But I guess my message to anyone reading this who considers herself to be more of a ‘Doctor Spark’ who is losing out to ‘Doctor Plod’ is to point out that it’s easier for Sparky to do what Ploddy does well than vice versa.  Ploddy will never match your genius, but you can get the help of academic colleagues and your friendly neighbourhood research officer – some of whom are uber-Plods, which in at least some cases is a large part of the reason why they’re doing their job rather than yours.

Want funding?  Maximise your chances of getting it.  Want to win?  Learn the rules of the game and play it better.  Might your impact plan be holding you back?  Take advantage of any support that your institution offers you – and if it does, be aware of the advantage that this gives you.  Might your problem be the art of grant writing?  Communicating your ideas to a non-specialised audience?  To reviewers and panel members from a cognate discipline?  To a referee not from your precise area?  Take advice.  Get others to read it.  Take their impressions and even their misunderstandings seriously.

Or you could write an application with little consideration for impact, with little concern for clarity of expression or the likely audience, and then if you’re unsuccessful, you can console yourself with the thought that it’s the system, not you, that’s at fault.

On strike…..

"Careful Now"
"Down with this sort of thing!"

I hate having to take strike action.  I hate having to take action short of a strike, which recently involved the highly radical step of, er, working to contract.

I particularly hate it at the moment because tomorrow will be my second anniversary at Nottingham University Business School.  I think I’m very lucky to be at a well-run university and in a well-run School.  I admire and respect my colleagues, and have no reason to think that that respect isn’t returned.  I enjoy my work – challenging enough to stretch me, not so stressful that it might break me.  I hope these words won’t come back to haunt me, but for now, I consider myself to be very, very lucky.

So I don’t want to strike.  I also don’t want to ‘politicise’ my blog by saying too much about it.  Not least because it’s hard to get to the bottom of what’s really going on.  I’ve foolishly neglected to become an expert in pensions, and so I don’t fully grasp the issues.  I know enough not to take at face value the information that the employers are giving us, nor the information from the UCU.  On the one hand, it’s hard not to conclude (regardless of your personal politics) that the government is doing all kinds of things that it’s secretly wanted to do for ages under the guise of TINA (‘There Is No Alternative’).  It’s also hard to avoid the fact that changes were made to our pension scheme not so long ago that were supposed to address the (undoubted) issue of longer life expectancy.  So it’s hard not to wonder why we’re back again so soon.  And hard not to wonder how long it will be before we’re back revisiting and adjusting again.  And again.  And again.

There’s something of the theatrical about all of the public posturing and negotiations and the wars of words and the spin that goes on with every industrial dispute.  Often I think what’s really going on is not what it seems.  “Offers” are made which are intended to be rejected, and in the full knowledge that a better offer will be made after the inevitable industrial action.  Unions ask for more than they could possibly expect to get.  In the end, we usually end up with an agreement which lets both sides claim victory and appease their constituencies.  But what we will have tomorrow is a show of the strength of feeling and stomach for a fight.  It may or may not make any difference in the short term.  But in the long term, it sends a clear signal and it will make a difference to the eventual outcome of the war, even if the ‘battle’ is stage managed.

I’d recommend union membership to anyone.  If you can join a union, you should.  Not only will they represent members’ interests collectively, they’ll also have your back if things go bad and make sure you get due process and fair treatment.  If I had a pound for every story I’ve heard about union representation and support making a real difference to how someone is treated, I’d make at least some of the cash back that I’ll lose by striking tomorrow.

Estimating Investigator and Researcher Time on a Project

PS: Time is also overheads

Prompted in part by an interesting discussion of the importance of the budget in establishing the overall credibility and shape of a research proposal at the ever-excellent Research Whisperer, I thought I’d put fingers-to-keyboard on the vexed issue of estimating staff time on research grant applications.  Partly this is to share some of what I do and what I recommend, but mostly it’s to ask others how they approach it. Comments, thoughts, suggestions, and experiences welcome as ever in the comments below.

Estimating staff time is by some distance the hardest part of the budget, and often when I discuss this with academic colleagues, we end up in a kind of “I dunno, what do you reckon?” impasse.  (I should say that it’s me who speaks like that, not them).  I’ve never been involved in a research project (other than my ‘desk research’ MPhil), so I’ve really no idea, and I don’t have any particular insight into how long certain tasks will take.  Early career academics, and the increasing number of academics who have never had the opportunity to lead on a research project, have little experience to draw upon, and even those who have experience seem to find it difficult.  I can understand that, because I’d find it difficult too.  If someone were to ask me to estimate how much time I spent over the course of a year on (say) duties related to my role as School Research Ethics Officer, I’d struggle to give them an answer in terms of a percentage FTE or a total number of working days.

It’s further complicated by the convenient fiction that academic staff work a standard five-day, 37.5-hour week.  Even those who don’t regularly work evenings and weekends will almost certainly have flexible working patterns that make it even harder to estimate how long something will take.  This means that the standard units of staff time that are most often used – total days and percentage of full time – aren’t straightforwardly convertible from one currency to the other.  To make matters worse, some European funding sources seem to prefer ‘person months’, which, rather than the standard working week, probably reflects the reality of academic work in many institutions.
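
For what it’s worth, here’s the basic arithmetic as a rough code sketch, purely illustrative.  The conversion constants are assumptions of mine: 220 working days per FTE year (the research councils’ convention, which I come back to below) and 12 person-months per FTE year for funders who count that way.

    # Rough sketch: converting between common units of staff time.
    # Assumed conversions (mine, not any funder's): 220 working days
    # = 1 FTE year, and 12 person-months = 1 FTE year.

    DAYS_PER_FTE_YEAR = 220
    MONTHS_PER_FTE_YEAR = 12

    def days_to_fte_percent(total_days, project_years):
        """Convert total project days into a percentage of full time."""
        return 100 * total_days / (DAYS_PER_FTE_YEAR * project_years)

    def fte_percent_to_person_months(fte_percent, project_years):
        """Convert a percentage FTE over the project into person-months."""
        return (fte_percent / 100) * MONTHS_PER_FTE_YEAR * project_years

    # Example: 132 days of investigator time on a three-year project
    fte = days_to_fte_percent(132, 3)            # 20.0, i.e. one day a week
    pm = fte_percent_to_person_months(fte, 3)    # 7.2 person-months
    print(f"{fte:.0f}% FTE, or {pm:.1f} person-months")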

My question about how much time the project will take is often answered with another question.  What will the funder pay for?  What looks right?  What feels right?  Longer, thinner project, or shorter and more intensive?  The answer is always and inevitably, ‘it depends’.  It depends upon what is right for the project.  A longer, less intensive project might make sense if you have to wait for survey responses to come in, or for other things to happen.  On the other hand, if it’s something that you can and want to work on intensively, go for it.  But the project has to come first.  What would it take to properly take on this research challenge?

Often the answer is a bit hand-wavy and guesstimate-ish.  Oh, I don’t know…. say… about two days per week?  Two and a half?  Would they fund three?  This is generally a good starting point.  What’s not a good starting point is the kind of fifteen-minutes-every-Tuesday approach, where you end up with a ‘salami project’ – a few people involved, but sliced so thinly it’s hard to see how anything will get done.  This just isn’t credible, and it’s very unlikely to get funded by the likes of the ESRC (and other funders too, I’m sure) if they don’t believe that you’ll be able to deliver for the amount of money (time) you’re asking for.  Either they’ll conclude that you don’t understand what your own research requires (which is fatal for any chance of funding), or they’ll think that you’re trying to pull a fast one in terms of the value for money criterion.  And they won’t like that much either.

The other way to make sure that you’re not going to get funded is to err in the opposite direction and get greedy.  I’ve noticed that – oddly – academics seldom over-estimate their own time, but often over-estimate the amount of researcher time required.  I’ve seen potential bids before now that include a heavy smattering of research associates, but no clear idea what it is they’ll actually be doing all day, other than stuff that the lead investigator doesn’t want to do.  In a UK context, overheads are calculated on the basis of investigator and researcher time, so including researchers is doubly expensive.  One way round this that I often recommend is to check whether what’s needed really is a research associate (who attracts overheads), or whether an academic-related project manager or administrator (who doesn’t) would do.  But if it’s hard to estimate how long it will take you to do any given task, it’s doubly hard to estimate how long it will take someone else – usually someone less experienced and less skilled – and perhaps from a cognate sub-discipline rather than your own.
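
To put some entirely made-up numbers on ‘doubly expensive’, here’s a quick sketch.  The salaries and the 100% overhead rate are my assumptions for illustration only; real full economic costing rates vary by institution and by funder.

    # Why researcher time is 'doubly expensive': overheads are charged
    # on investigator and researcher time, but not (on this assumption)
    # on academic-related administrator time.  All figures hypothetical.

    RA_SALARY = 32_000       # assumed annual salary, research associate
    ADMIN_SALARY = 28_000    # assumed annual salary, project administrator
    OVERHEAD_RATE = 1.0      # assumed overheads at 100% of salary

    ra_cost = RA_SALARY * (1 + OVERHEAD_RATE)    # salary plus overheads
    admin_cost = ADMIN_SALARY                    # salary only

    print(f"RA: £{ra_cost:,.0f} per year; administrator: £{admin_cost:,.0f}")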

My usual advice is for would-be principal investigators to draw up a table showing the various phases of the project as rows, and project staff as columns.  In addition to the main phases of the research, extra rows should be added for the various stages of dissemination and impact activity; for project coordination with colleagues; for line management of any researchers or administrative/managerial staff; for professional development where appropriate.  Don’t forget travelling time associated with meetings, fieldwork, conferences etc.  The main thing that I usually see underestimated is project management time.  As principal investigator, you will have to meet reasonably regularly with your finance people, and you’ll have to manage and direct the project research associates.  Far too many academics seem to see RAs as clones who will instinctively know what to do and don’t need much in the way of direction, advice, or feedback.

Once this is done, I suggest adding up the columns and then working backwards from those total numbers of project days to a percentage FTE, or days per week, or whatever alternative metric you prefer.  In the UK, the research councils assume that 220 days = 1 working year, so you can use this to calculate the percentage of full time.  The number that you come out with should feel intuitively about right.  If it doesn’t, then something has probably gone wrong.  Go back to the table, and adjust things a little.  By going to and fro, from the precision of the table to the intuition of the percentage of time, you should reach what my philosophical hero and subject of my thesis, John Rawls, called ‘reflective equilibrium’.  Though he wasn’t talking about investigator time.  Once you’re happy with the result, you should probably use the figure from the table, and you should certainly consider putting the table in full in the ‘justification for resources’ section.
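
Here’s a minimal sketch of that table-and-totals approach, with invented phases, staff, and day counts; the only ‘real’ number in it is the research councils’ 220-day convention.

    # Sketch of the phases-by-staff table: rows are project phases,
    # columns are days of each person's time.  All day counts invented.

    DAYS_PER_FTE_YEAR = 220   # research councils' convention
    PROJECT_YEARS = 2

    table = {
        "literature review":    {"PI": 10, "RA": 30},
        "fieldwork":            {"PI": 15, "RA": 80},
        "analysis":             {"PI": 20, "RA": 60},
        "dissemination/impact": {"PI": 15, "RA": 20},
        "project management":   {"PI": 20, "RA": 10},  # most often underestimated
    }

    # Sum each column, then convert total days to a percentage of full time.
    for person in ("PI", "RA"):
        total_days = sum(phase[person] for phase in table.values())
        fte = 100 * total_days / (DAYS_PER_FTE_YEAR * PROJECT_YEARS)
        print(f"{person}: {total_days} days = {fte:.0f}% FTE over the project")
        # If that percentage doesn't feel intuitively right, go back to
        # the table and adjust: the 'reflective equilibrium' loop.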

Something I’m starting at the moment as part of the end-of-project review process is to go back to the original estimates of staff time, and to get a sense from the research team about how accurate they were, and what they would estimate differently next time, if anything.  The two things that have come out strongly so far are the ones I’ve outlined above – managing staff and project administration – but I’ll be looking out for others.

So…. over to you.  How do you estimate researcher and investigator time?  Have you been involved in a funded project?  If so, what did you miss in your forecasts (if anything)?  What would you do differently next time?

ESRC Demand Management Part 5: And the winner is…. researcher sanctions!

"And the prize for best supporting sanctions scheme goes to...."

The ESRC today revealed the outcome of the ‘Demand Management’ consultation, with the exercise showing a strong preference for researcher sanctions over the other main options: institutional sanctions, institutional quotas, or charging for applications.  And therefore….

Given this clear message, it is likely that any further steps will reflect these views.

Which I think means that that’s what they’re going to do.  But being (a) academics, and (b) British, it has to be expressed in the passive voice and as tentatively as possible.

Individual researcher sanctions got the vote of 82% of institutional responses, 80% of learned society responses, and 44% of individual responses.  To put that in context, though, 32% of the individual responses were interpreted as backing none of the possible measures, which I don’t think was ever going to be a particularly convincing position.  Institutional sanctions came second among institutions (11%), and institutional quotas (20%) came second among individual respondents.  Charging for applications was, as I expected, a non-starter, apparently attracting the support of two institutions and one learned society or ‘other agency’.  I’m surprised it got that many.

The presentation of the results as a ‘vote’ is interesting, as I don’t think that’s how the exercise was framed at the time.  The institutional response that I was involved in was – I like to think – a bit more nuanced and thoughtful than just a ‘vote’ for one particular option.  In any case, if it was a vote, I’m sure that the ‘First Past the Post’ system which appears to have been used wouldn’t be right – some kind of ‘alternative vote’ system to find the least unpopular option would surely have been more appropriate.  I’m also puzzled by the combining of the results from institutions, individuals, and learned societies into totals for ‘all respondents’, which seems to give the same weighting to individual and institutional responses.

Fortunately – or doubly-fortunately – those elements of the research community which responded delivered a clear signal about the preferred method of demand management, and, in my view at least, it’s the right one.  I’ll admit to being a bit surprised by how clear cut the verdict appears to be, but it’s very much one I welcome.

It’s not all good news, though.  The announcement is silent on exactly what form the programme of researcher sanctions will take, and there is still the possibility that sanctions may apply to co-investigators as well as the principal investigator.  As I’ve argued before, I think this would be a mistake, and would be grossly unfair in far too many cases.  I know that there are some non-Nottingham folks reading this blog, so if your institution isn’t one of the ones that responded (and remember only 44 of 115 universities did), it might be worth finding out why not, and making your views known on this issue.

One interesting point that is stressed in the announcement is that individual researcher sanctions – or any form of further ‘demand management’ measures – may never happen.  The ESRC have been clear about this all along – the social science research community was put on notice about the unsustainability of the current volume of applications being submitted, and told that a review would take place in autumn 2012.  The consultation was about the general form of any further steps, should they prove necessary.  And interestingly, the ESRC are apparently ‘confident’ that they will not.

We remain confident that by working in partnership with HEIs there will be no need to take further steps. There has been a very positive response from institutions to our call for greater self-regulation, and we expect that this will lead to a reduction in uncompetitive proposals.

Contrast that with this, from March, when the consultation was launched:

We very much hope that we will not need additional measures.

Might none of this happen?  I’d like to think so, but I don’t share their confidence, and I fear that “very much hope” was nearer the mark.  I can well believe that each institution is keen to up its game, and I’m sure discussions are going on about new forms of internal peer review, mentoring, research leadership and so on in institutions all across the country.  Whether this will lead to a sufficient fall in the number of uncompetitive applications, well, I’m not so sure.

I think there needs to be an acceptance that there are plenty of perfectly good research ideas that would lead to high quality research outputs in quality journals, perhaps with strong non-academic impact, which nevertheless aren’t ‘ESRC-able’ – because they’re merely ‘very good’ or ‘excellent’ rather than ‘outstanding’.  And it’s only the really outstanding ideas that are going to be competitive.  If all institutions realise this, researcher sanctions may never happen.  But if hubris wins out, and everyone concludes that it’s everyone else’s applications that are the problem, then researcher sanctions are inevitable.

Yet another ‘oh look, the start of term’ blog post….

Apologies for the lack of posts recently.  I’ve been off on leave for a couple of weeks, and although this blog is written in my own time and in a personal capacity, I ended up taking a complete break from all things research funding related.  And yes, I did have a nice break, thanks for asking…. part ‘stay-cation’ and part ‘prepare for house move that won’t now take place this leave year after all’.


“Hello! Hello! It’s good to be back!”

I managed to miss the first week of term, although the return of the students is fairly hard to miss in university cities like Nottingham.  Suddenly there are young people everywhere, and about a third of them look lost.  I played my part in supporting the student induction experience by giving directions to an undergraduate who had lost herself between two of the University of Nottingham campuses (campi?).  Easily done.  This has usually been the limit of my interaction with undergraduates, other than telling them that, no, I don’t know the code to the computer room, and that they should ask at reception.

Universities are strange, almost depressing, places outside term time.  A little bit like I’d imagine the whole world would be after a ‘rapture’ of the kind that some odd kinds of Christians are expecting.  Sure, it’s nice for a day or so to have the place to ourselves, but when the students go, so does the infrastructure.  Limited choice of sandwiches at lunchtime, a reduced bus service, and of course, the staff slope off as well.  Academics disappear for a combination of annual leave and research time (except this year, of course.  Thanks, ESRC, for those September deadlines.  Thank you so much), and the rest of us look to take the bulk of our leave then too.  On one level, you’d think it would be a good time to get things done, but on the other, the people you need to get on board to get anything done tend not to be around.  And as we’ve seen, no time of the year is really any good.

Does anyone else play the ‘out of office’ lottery?  Trying to predict how many out of office emails you’ll get in a day, or in response to any one particular email.  (On the subject of which, wouldn’t it be handy to have an ‘oh, never mind, enjoy yourself’ option for responding to o-o-o emails, which would delete your original email so they’d have one less to deal with when they return?  I’d also quite like an “I’ve told you once already” o-o-o email which subtly escalates in annoyance if more emails are received from the same person.)

But it’s remarkable how soon the spring in the step fades, even on a warm October morning.  The campus is bustling with activity, academic and non-academic colleagues are around (if busy), and the corridors are full of students’ chatter.  Office doors everywhere are left just a little bit ajar, colleagues are catching up on their summer holidays (sorry, research), buses are more frequent (if a little less reliable), optimism and excitement are in the air, and the university feels, well, like a university again.

But then I’m asked for the code to the computer room before I even get as far as unlocking my office door, I have to queue for ten minutes for a sandwich at lunchtime, I can’t get on to the hopper bus after a meeting and have to walk back to base, and the corridors are blocked with students who are lost, dawdling, or just ‘hanging out’.  Though I suspect no-one says ‘hangs out’ any more.  And then I start to yearn for the peace and quiet of the summer.  Almost.

Not really…. Hello, Nottingham University Business School.  Hello, term time.  It’s good to be back.

University Life: Why now is the wrong time of the academic year to get anything done….

Better luck next year?

… and why that’s true for absolutely any value of “now”…..

September
“It’s the start of the academic year soon, everyone’s concentrating on preparing their teaching”

October
“It’s the busiest time for teaching.  I’ve got 237 tutorials this week alone”

November
“I’ve got about 4,238 essays to mark.”

December
“It’s nearly Christmas, nothing gets done at this time of year”

January
“I’ve got sixty thousand exam scripts to mark”

February
[See October]

March-April
“With the Easter break coming up, well…”

May
[See January, but with added Finalist-related urgency and some conferences]

June-August
“Conference season…. annual leave…. concentrated period of research in Tuscany”

What would wholesale academic adoption of social media look like?

Crowded out?

Last week, the LSE Impact of Social Sciences blog asked its readers and followers to nominate their favourite academic tweeters.  This got me thinking.  While that’s a sensible question to ask now, and one that could create a valuable resource, I wonder whether the question would make as much sense if asked in a few years’ time?

The drivers for academics (and arguably academic-related types like me) to start to use twitter and to contribute to a blog are many – brute self-promotion; desire to join a community or communities; to share ideas; to test ideas; to network and make new contacts; to satisfy the impact requirements of the research funder; and so on and so forth.  I think most current PhD students would be very well advised to take advantage of social media to start building themselves an online presence as an early investment in their search for a job (academic or otherwise).   I’d imagine that a social media strategy is now all-but-standard in most ESRC ‘Pathways to Impact’ documents.  Additionally, there are now many senior, credible, well-established academic bloggers and twitterers, many of whom are also advocates for the use of social media.

So, what would happen if there was a huge upsurge in the number of academics (and academic-relateds) using social media?  What if, say, participation rates reach about 20% or so?  Would the utility of social media scale, or would the noise to signal ratio be such that its usefulness would decrease?

This isn’t a rhetorical question – I’ve really no idea and I’m curious.  Anyone?  Any thoughts?

I guess that there’s a difference between different types of social media.  I have friends who are outside the academy and who have Twitter accounts for following and listening, rather than for leading or talking.  They follow the Brookers and the Frys and the Goldacres, and perhaps some news sources.  They use Twitter like a form of RSS feed, essentially.

But what about blogging, or using Twitter to transmit, rather than to receive?  If even 10% of academics have an active blog, will it still be possible or practical to keep track of everything relevant that’s written?  In my field, I think I’ve linked to pretty much every related blog (see links in the sidebar) in the UK, and one from Australia.  In certain academic fields it’s probably similarly straightforward to keep track of everyone significant and relevant.  If this blogging lark catches on, there will come a point at which it’s no longer possible for anyone to keep up with everything in any given field.  So, maybe people become more selective and we drop down to sub-specialisms, and it becomes sensible to ask for our favourite academic tweeters on non-linear economics, or something like that.

On the other hand, it might be that new entrants to the blogging market will be limited and inhibited by the number already present.  Or we might see more multi-author blogs, mergers, and so on, until we re-invent the journal.  Or strategies that involve attracting the attention and comment of influential bloggers and the academic twitterati (a little bit of me died inside typing that, I hope you’re happy….).  Might that be what happens?  That e-hierarchies form (arguably they already exist) that echo real world hierarchies, and effectively squeeze out new entrants?  Although… I guess good content will always have a chance of ‘going viral’ within relevant communities.

Of course, it may well be that something else will happen.  That Twitter will end up in the same pile as MySpace.  Or that it simply won’t be widely adopted or become mainstream at all.  After all, most academics still don’t have much of a web 1.0 presence beyond a perfunctory page on their Department website.

That’s all a bit rambly and far longer than I meant it to be.  But as someone who is going to be recommending the greater use of social media to researchers, I’d like to have a sense of where all this might be going, and what the future might hold.  Would the usefulness of social media as an academic communication, information sharing, and networking tool effectively start to diminish once a certain point is reached?  Or would it scale?