Is there a danger that research funding calls are getting too narrow?

The ESRC have recently added a little more detail to a previous announcement about a pending call for European-Chinese joint research projects on Green Economy and Population Change.  Specifically, they’re after projects which address the following themes:

Green Economy

  • The ‘greenness and dynamics of economies’
  • Institutions, policies and planning for a green economy
  • The green economy in cities and metropolitan areas
  • Consumer behaviour and lifestyles in a green economy

Understanding Population Change

  • changing life course
  • urbanisation and migration
  • labour markets and social security dynamics
  • methodology, modelling and forecasting
  • care provision
  • comparative policy learning

Projects will need to involve institutions from at least two of the participating European countries (UK, France (involvement TBC), Germany, Netherlands) and two institutions in China. On top of this is an expectation that there will be sustainability/capacity building around the research collaborations, plus the usual further plus points of involving stakeholders and interdisciplinary research.

Before I start being negative, or potentially negative, I have one blatant plug and some positive things to say. The blatant plug is that the University of Nottingham has a campus in Ningbo in China which is eligible for NSFC funding and therefore would presumably count as one Chinese partner. I wouldn’t claim to know all about all aspects of our Ningbo research expertise, but I know people who do.  Please feel free to contact me with ideas/research agendas and I’ll see if I can put you in touch with people who know people.

The positive things.  The topics seem to me to be important, and we’ve been given advance notice of the call and a fair amount of time to put something together.  There’s a reference to Open Research Area procedures and mechanisms, which refers to agreements between the UK, France, Netherlands and Germany on a common decision making process for joint projects in which each partner is funded by their national funder under their own national funding rules.  This is excellent, as it doesn’t require anyone to become an expert in another country’s national funder’s rules, and doesn’t have the double or treble jeopardy problem of previous calls where decisions were taken by individual funders.  It’s also good that national funders are working together on common challenges – this adds fresh insight, invites interesting comparative work and pools intellectual and financial resources.

However, what concerns me about calls like this is that the area at the centre of the particular Venn diagram of this call is really quite small.  It’s open to researchers with research interests in the right areas, with collaborators in the right European countries, and with collaborators in China.   That’s three – arguably two – circles in the diagram.  Of course, there’s a fourth – proposals that are outstanding.  Will there be enough strong competition on the hallowed ground at the centre of all these circles? It’s hard to say, as we don’t know yet how much money is available.

I’m all for calls that encourage, incentivise, and facilitate international research.  I’m in favour of calls on specific topics which are under-researched, which are judged of particular national or international importance, or where co-funding from partners can be found to address areas of common interest.

But I’m less sure about having both in one call – very specific requirements both in terms of the nationality of the partner institutions and in terms of the call themes. Probably the scope of this call is wide enough – presumably the funders think so – but I can’t help thinking that less onerous eligibility requirements in terms of partners could lead to greater numbers of high quality applications.

The consequences of Open Access, part 2: Are researchers prepared for greater scrutiny?

In part 1 of this post, I raised questions about how academic writing might have to change in response to the open access agenda.  The spirit of open access surely requires not just the availability of academic papers, but the accessibility of those papers to research users and stakeholders.  I argued that lay summaries and context pieces will increasingly be required, and I was pleased to discover that at least some open access journals are already thinking about this.  In this second part, I want to raise questions about whether researchers and those who support them are ready for the potential extra degree of scrutiny and attention that open access may bring.

On February 23rd 2012, the Journal of Medical Ethics published a paper called After-birth abortion: why should the baby live? by Alberto Giubilini and Francesca Minerva.   The point of the paper was not to advocate “after birth abortion” (i.e. infanticide), but to argue that many of the arguments that are said to justify abortion also turn out to justify infanticide.  This isn’t a new argument by any means, but presumably there was sufficient novelty in the construction of the argument to warrant publication.  To those familiar with the conventions of applied ethics – the intended readers of the article – it’s understood that it was playing devil’s advocate, seeing how far arguments can be stretched, taking things to their logical conclusion, seeing how far the thin end of the wedge will drive, what’s at the bottom of the slippery slope, just what kind of absurdum can be reductio-ed to.  While the paper isn’t satire in the same way as Jonathan Swift’s A Modest Proposal, no sensible reader would have concluded that the authors were calling for infanticide to be made legal, in spite of the title.

I understand that what happened next was that the existence of the article – for some reason – attracted attention in the right wing Christian blogosphere, prompting a rash of complaints, hostile commentary, fury, racist attacks, and death threats.  Journal editor Julian Savulescu wrote a blog post about the affair, below which are 624 comments.   It’s enlightening and depressing reading in equal measure.  Quick declaration of interest here – my academic background (such as it is) is in philosophy, and I used to work at Keele University’s Centre for Professional Ethics marketing their courses.  I know some of the people involved in the JME’s response, though not Savulescu or the authors of the paper.

There’s a lot that can (and probably should) be said about the deep misunderstanding that occurred between professional bioethicists and non-academics concerned about ethical issues who read the paper, or who heard about it.  Part of that misunderstanding is about what ethicists do – they explore arguments, analyse concepts, test theories, follow the arguments where they lead.  They don’t have any special access to moral truth, and while their private views are often much better thought out than most people’s, most see their role as helping to understand arguments, not pushing any particular position.  Though some of them do that too, especially if it gets them on Newsnight.  I’m not really well informed enough to comment too much on this, but it seems to me that the ethicists haven’t done a great job of explaining what they do to those more moderate and sensible critics.  Those who post death threats and racist abuse are probably past reasoned argument and probably love having something to rail against because it justifies their peculiar world view, but for everyone else, I think it ought to be possible to explain.  Perhaps the notion of a lay summary that I mentioned last time might be helpful here.

Part of the reason for the fuss might have been because the article wasn’t available via open access, so some critics may not have had the opportunity to read the article and make up their own mind.  This might be thought of as a major argument in favour of open access – and of course, it is – the reasonable and sensible would have at least skim-read the article, and it’s easier to marshal a response when what’s being complained about is out there for reference.

However….. the unfortunate truth is that there are elements out there who are looking for the next scandal, for the next chance to whip up outrage, for the next witch hunt.  And I’m not just talking about the blogosphere, I’m talking about elements of the mainstream media, who (regardless of our personal politics) have little respect or regard for notions of truth, integrity and fairness.  If they get their paper sales, web hits, outraged comments, and resulting manufactured “scandal”, then they’re happy.  Think I’m exaggerating?  Ask Hilary Mantel, who was on the receiving end of an entirely manufactured fuss in which comments she made in a long and thoughtful lecture were taken deliberately and dishonestly out of context.

While open access will make things easier for high quality journalism and for the open-minded citizen and/or professional, it’ll also make it easier for the scandal-mongers (in the mainstream media and in the blogosphere) to identify the next victim to be thrown to the ravenous outrage-hungry wolves that make up their particular constituency.  It’s already risky to be known to be researching and publishing in certain areas – anything involving animal research, climate change, crop science, evolutionary theory, Münchhausen’s by proxy, vaccination, or (oddly) chronic fatigue syndrome/ME appears to have a hostile activist community ready to pounce on any research that comes back with the “wrong” answer.

I don’t want to go too far in presenting the world outside the doors of the academy as a swamp of unreason and prejudice.  But the fact is that alongside the majority of the general public (and bloggers and journalists) who are both rational and reasonable, there is an element that would be happy to twist (or invent) things to suit their own agenda, especially if that agenda involves whipping up manufactured outrage to enable their constituency to confirm their existing prejudices. Never mind the facts, just get angry!

Doubtless we all know academics who would probably relish the extra attention and are already comfortable with the public spotlight.  But I’m sure we also know academics who do not seek the limelight, who don’t trust the media, and who would struggle to cope with even five minutes of (in)fame(y).  One day you’re a humble bioethicist, presumably little known outside your professional circles, and the next, hundreds of people are wishing you dead and calling you every name under the sun.  While Richard Dawkins seems to revel in his (sweary) hate mail, I think a lot of people would find it very distressing to receive emails hoping for their painful death.  I know it would upset me a lot, so please don’t send me any, okay?  And be nice in the comments…..

Of course, even if things never get that far or go that badly, with open access there’s always a greater chance of hostile comment or criticism from the more mainstream and reasonable media, who have a much bigger platform from which to speak than an academic journal.  This criticism need not be malicious, could be legitimate opinion, could be based on a misunderstanding.  Open access opens up the academy to greater scrutiny and greater criticism.

As for what we do about this….. it’s hard to say.  I’m certainly not saying that we should retreat behind the safety of our paywalls and only sally forth with our research when guarded by a phalanx of heavy infantry to protect us from the swinish multitude besieging our ivory tower.  But I think that there are things that we can do in order to be better prepared.  The use of lay summaries, and greater consideration of the lay reader when writing academic papers, will help guard against misunderstandings.

University external relations departments need to be ready to support and defend academic colleagues, and perhaps need to think about planning for these kinds of problems, if they don’t do so already.

The consequences of Open Access: Part 1: Is anyone thinking about the “lay” reader?

The thorny issue of “open access” – which I take to mean the question of how to make the fruits of publicly-funded research freely and openly available to the public – is one that’s way above my pay grade and therefore not one I’ll be resolving in this blog post.  Sorry about that.  I’ve been following the debates with some interest, though not, I confess, an interest which I’d call “keen” or “close”.  No doubt some of the nuances and arguments have escaped me, and so I’ll be going to an internal event in a week or so to catch up.  I expect it’ll be similar to this one helpfully written up by Phil Ward over at Fundermentals.  Probably the best single overview of the history and arguments about open access is an article in this week’s Times Higher by Paul Jump – well worth a read.

I’ve been wondering about some of the consequences of open access that I haven’t seen discussed anywhere yet.  This first post is about the needs of research users, and I’ll be following it up with a post about some consequences of open access for academics that may require more thought.

I wonder if enough consideration is being given to the needs and interests of potential readers and users of all this research which is to be liberated from paywalls and other restrictions.  It seems to me that if Joe Public and Joanna Interested-Professional are going to be able to get their mitts on all this research, then this has very serious implications for academic research and academic writing.  I’d go as far as to say it’s potentially revolutionary, and may require radical and permanent changes to the culture and practice of academic writing for publication in a number of research fields.  I’m writing this to try to find out what thought has been given to this, amidst all the sound and fury about green and gold.

If I were reading an academic paper in a field that I was unfamiliar with, I think there are two things I’d struggle with.  One would be properly and fully understanding the article in itself, and the second would be understanding the article in the context of the broader literature and the state of knowledge in that area.  By way of example, a few years back I was looking into buying a rebounder – a kind of indoor mini-trampoline.  Many vendors made much of a study attributed to NASA which they interpreted as making dramatic claims about the efficacy of rebounder exercising compared to other kinds of exercise.  Being of a sceptical nature and armed with campus access to academic papers that weren’t open access, I went and had a look myself.  At the time, I concluded that these claims weren’t borne out by the study, which was really aimed at helping astronauts recover from spending time in weightlessness.  I don’t have access to the article as I’m writing this, so I can’t re-check, but here’s the abstract.  I see that this paper is over 30 years old, and that eight people is a very small sample size…. so… perhaps superseded and not very highly powered.  I think the final line of the abstract may back up my recollection (“… a finding that might help identify acceleration parameters needed for the design of remedial procedures to avert deconditioning in persons exposed to weightlessness”).

For the avoidance of doubt, I’m not imputing any dishonesty or nefarious intent to rebounder vendors and advocates – I may be wrong in my interpretation, and even if I’m not, I expect this is more likely to be a case of misunderstanding a fairly opaque paper than deliberate distortion.   In any case, my own experience with rebounders has been very positive, though I still don’t think they’re a miracle or magic bullet exercise.

How would open access help me here?  Well, obviously it would give me access to the paper.  But it won’t help me understand it, won’t help me draw inferences from it, won’t help me place it in the context of the broader literature.  Those numbers in that abstract look great, but I don’t have the first clue what they mean.  Now granted, with full open access I can carry out my own literature search if I have the time, knowledge and inclination.  But it’ll still be difficult for me to compare and contrast and form my own conclusions.  And I imagine that it’ll be harder still for others without a university education and a degree of familiarity with academic papers, or who haven’t read Ben Goldacre’s excellent Bad Science.

I worry that open access will only make it easier for people with an agenda (to sell products, or to push a particular political position) to cherry-pick evidence and put together an ill-deserved veneer of respectability by linking to academic papers and presenting (or feigning to present) a summary of their contents and arguments.  The intellectually dishonest are already doing this, and open access might make it easier.

I don’t present this as an argument against open access, and I don’t agree with a paternalist elitist view that holds that only those with sufficient letters after their name can be trusted to look at the precious research.  Open access will make it easier to debunk the charlatans and the quacks, and that’s a good thing.  But perhaps we need to think about how academics write papers from now on – they’re not writing just for each other and for their students, but for ordinary members of the public and/or research users of various kinds who might find (or be referred to) their paper online.  Do we need to start thinking about a “lay summary” for each paper to go alongside the abstract, setting out what the conclusions are in clear terms, what it means, and what it doesn’t mean?

What do we do with papers that present evidence for a conclusion that further research demonstrates to be false?  In cases of research misconduct, these can be formally withdrawn, but we wouldn’t want to do that in cases of papers that have just been superseded, not least because they might turn out to be correct after all, and are still a valid and important part of the debate.  Of course, the current scientific consensus on any particular issue may not be clear, and it’s less clear still how the state of the debate can be impartially communicated to research users.

I’d argue that we need to think about a format or template for an “information for non-academic readers” or something similar.  This would set out a lay summary of the research, its limitations, links to key previous studies, details of the publishing journal and evidence of its bona fides.  Of course, it’s possible that what would be more useful would be regularly written and re-written evidence briefings on particular topics designed for research users.  One source of lay reviews I particularly like is the NHS Behind the Headlines which comments on the accuracy (or otherwise) of media coverage of health research news.  It’s nicely written, easily accessible, and isn’t afraid to criticise or praise media coverage when warranted.  But even so, as the journals are the original source, some kind of standard boiler plate information section might be in order.

Has there been any discussion of these issues that I’ve missed?  This all seems important to me, and I wouldn’t want us to be in a position of finally agreeing what colour our open access ought to be, only to find that next to no thought has been given to potential readers.  I’ve talked mainly about health/exercise examples in this entry, but all this could apply  just as well to pretty much any other field of research where non-academics might take an interest.

ESRC “demand management” measures working….. and why rises and falls in institutions’ levels of research funding are not news

There was an interesting snippet of information in an article in this week’s Times Higher about the latest research council success rates.

 [A] spokeswoman for the ESRC said that since the research council had begun requiring institutions from June 2011 to internally sift applications before submitting them, it had recorded an overall success rate of 24 per cent, rising to 33 per cent for its most recent round of responsive mode grants.  She said that application volumes had also dropped by 37 per cent, “which is an encouraging start towards our demand management target of a 50 per cent reduction” by the end of 2014-15.

Back in October last year I noticed what I thought was a change in tone from the ESRC which gave the impression that they were more confident that institutions had taken note of the shot across the bows of the “demand management” measures consultation exercise(s), and that perhaps asking for greater restraint in putting forward applications would be sufficient.  I hope it is, because the formal demand management proposals that will be implemented if required unfairly and unreasonably include co-applicants in any sanction.

I’ve written before (and others have added very interesting comments) about how I think we arrived at the situation where social science research units were flinging as many applications in as possible in the hope that some of them would stick.  And I hope the recent improvements in success rates to around 1-in-3, 1-in-4 don’t serve to re-encourage this kind of behaviour. We need long term, sustainable, careful restraint in terms of what applications are submitted by institutions to the ESRC (and other major funders, for that matter) and the state in which they’re submitted.

Everyone will want to improve the quality of applications, and internal mentoring and peer review and the kind of lay review that I do will assist with that, but we also need to make sure that the underlying research idea is what I call ‘ESRC-able’.  At Nottingham University Business School, I secured agreement a while ago now to introduce a ‘proof of concept’ review phase for ESRC applications, where we review a two page outline first, before deciding whether to give the green light for the development of a full application.  I think this allows time for changes to be made at the earliest stage, and makes it much easier for us to say that the idea isn’t right and shouldn’t be developed than if a full application was in front of us.

And what isn’t ‘ESRC-able’?  I think a look at the assessment schema gives some useful clues – if you can’t honestly say that your application would fit in the top two categories on the final page, you probably shouldn’t bother.  ‘Dull but worthy’ stuff won’t get funded, and I’ve seen the phrase “incremental progress” used in referees’ comments to damn with faint praise.  There’s now a whole category of research that is of good quality and would doubtless score respectably in any REF exercise, but which simply won’t be competitive with the ESRC.  This, of course, raises the question about how non-groundbreaking stuff gets funded – the stuff that’s more than a series of footnotes to Plato, but which builds on and advances the findings of ground-breaking research by others.  And to that I have no answer – we have a system which craves the theoretically and methodologically innovative, but after a paradigm has been shifted, there’s no money available to explore the consequences.

*     *     *     *     *

Also in the Times Higher this week is the kind of story that appears every year – some universities have done better this year at getting research funding/with their success rates than in previous years, and some have done worse.  Some of those who have done better and worse are the traditional big players, and some are in the chasing pack.  Those who have done well credit their brilliant internal systems and those who have done badly will contest the figures or point to extenuating circumstances, such as the ending of large grants.

While one always wants to see one’s own institution doing well and doing better, and everyone always enjoys a good bit of schadenfreude at the expense of their rivals – sorry, benchmark institutions – and any apparent difficulties that a big beast finds itself in, are any of these short term variations of actual, real, statistical significance?  Apparent big gains can be down to a combination of a few big wins, grants transferring in with new staff, and just… well… the kind of natural variation you’d expect to see.  Big losses could be big grants ending, staff moving on, and – again – natural variance.  Yes, you could ascribe your big gains to your shiny new review processes, but would you also conclude that there’s a problem with those same processes and people the year after, when performance is apparently less good?
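To illustrate the natural variation point, here’s a minimal sketch (all figures invented for illustration, not real funding data): an institution whose underlying quality and application volume never change can still show apparently dramatic year-to-year swings through binomial chance alone.

```python
import random

# Illustrative only: a hypothetical institution submitting 40 applications
# a year to a funder whose true underlying success rate never changes.
random.seed(42)

TRUE_RATE = 0.25      # assumed constant 25% chance of success per application
APPS_PER_YEAR = 40    # assumed constant application volume

for year in range(1, 6):
    awards = sum(random.random() < TRUE_RATE for _ in range(APPS_PER_YEAR))
    print(f"Year {year}: {awards} awards from {APPS_PER_YEAR} applications "
          f"({awards / APPS_PER_YEAR:.0%})")

# Nothing about the institution changes from year to year, yet the observed
# success rate can easily swing by ten percentage points or more -- enough
# to look like a 'surge' or a 'slump' in a league table story.
```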

Why these short term (and mostly meaningless) variations are more newsworthy than the radical variation in ESRC success rates for different social science disciplines, I have no idea….

ESRC success rates by discipline: what on earth is going on?

Update – read this post for the 2012/13 stats for success rates by discipline

The ESRC have recently published a set of ‘vital statistics‘ which are “a detailed breakdown of research funding for the 2011/12 financial year” (see page 22).  While differences in success rates between academic disciplines are nothing new, this year’s figures show some really quite dramatic disparities which – in my view at least – require an explanation and action.

The overall success rate was 14% (779 applications, 108 funded) for the last tranche of responsive mode Small Grants and responsive mode Standard Grants (now Research Grants).  However, Business and Management researchers submitted 68 applications, of which 1 was funded.  One.  One single funded application.  In the whole year.  For the whole discipline.  Education fared little better with 2 successes out of 62.

Just pause for a moment to let that sink in.  Business and Management.  1 of 68.  Education.  2 of 62.

Others did worse still.  Nothing for Demographics (4 applications), Environmental Planning (8), Science and Technology Studies (4), Social Stats, Computing, Methods (11), and Social Work (10).  However, with a 14% success rate working out at about 1 in 7, low volumes of applications may explain this.  It’s rather harder to explain a total of 3 applications funded from 130.

Next least successful were ‘no lead discipline’ (4 of 43) and Human Geography (3 from 32).  No other subjects had success rates in single figures.  At the top end were Socio-Legal Studies (a stonking 39%, 7 of 18), and Social Anthropology (28%, 5 from 18), with Linguistics; Economics; and Economic and Social History also having hit rates over 20%.  Special mention for Psychology (185 applications, 30 funded, 16% success rate) which scored the highest number of projects – almost as many as Sociology and Economics (the second and third most funded) combined.
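For anyone who wants to check the arithmetic, here’s a quick sketch that recomputes the success rates from the application and award counts quoted above (the counts are as reported; the code itself is just illustrative):

```python
# Applications and awards by lead discipline, as quoted above.
figures = {
    "Business and Management": (68, 1),
    "Education": (62, 2),
    "Socio-Legal Studies": (18, 7),
    "Social Anthropology": (18, 5),
    "Psychology": (185, 30),
}

for discipline, (apps, funded) in sorted(figures.items(),
                                         key=lambda kv: kv[1][1] / kv[1][0]):
    print(f"{discipline:<25} {funded:>3} of {apps:<4} = {funded / apps:.0%}")

# Overall responsive mode figure quoted above: 108 funded from 779 applications.
print(f"{'Overall':<25} {108:>3} of {779:<4} = {108 / 779:.0%}")
```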

Is this year unusual, or is there a worrying and peculiar trend developing?  Well, you can judge for yourself from this table on page 49 of last year’s annual report, which has success rates going back to the heady days of 06/07.  Three caveats, though, before you go haring off to see your own discipline’s stats.  One is that the reports refer to financial years, not academic years, which may (but probably doesn’t) make a difference.  The second is that the figures refer to Small and Standard Grants only (not Future Leaders/First Grants, Seminar Series, or specific targeted calls).  The third is that funded projects are categorised by lead discipline only, so the figures may not tell the full story as regards involvement in interdisciplinary research.

You can pick out your own highlights, but it looks to me as if this year is only a more extreme version of trends that have been going on for a while.  Last year’s Education success rate?  5%.  The years before?  8% and 14%.  Business and Management?  A heady 11%, compared to 10% and 7% for the preceding years. And you’ve got to go all the way back to 09/10 to find the last time any projects were funded in Demography, Environmental Planning, or Social Work.  And Psychology has always been the most funded, and always got about twice as many projects as the second and third subjects, albeit from a proportionately large number of applications.

When I have more time I’ll try to pull all the figures together in a single spreadsheet, but at first glance many of the trends seem similar.

So what’s going on here?  Well, there are a number of possibilities.  One is that our Socio-Legal Studies research in this country is tip top, and B&M research and Education research is comparatively very weak.  Certainly I’ve heard it said that B&M research tends to suffer from poor research methodologies.  Another possibility is that some academic disciplines are very collegiate and supportive in nature, and scratch each other’s backs when it comes to funding, while other disciplines are more back-stabby than back-scratchy.

But are any or all of these possibilities sufficient to explain the difference in funding rates?  I really don’t think so.  So what’s going on?  Unconscious bias?  Snobbery?  Institutional bias?  Politics?  Hidden agendas?  All of the above?  Anyone know?

More pertinently, what do we do about it?  Personally, I’d like to see the appropriate disciplinary bodies putting a bit of pressure on the ESRC for some answers, some assurances, and the production of some kind of plan for addressing the imbalance.  While no-one would expect to see equal success rates for every subject, this year’s figures – in my view – are very troubling.

And something needs to be done about it, whether that’s a re-thinking of priorities, putting the knives away, addressing real disciplinary weaknesses where they exist, ring-fenced funding, or some combination of all of the above.  Over to greater minds than mine…..

News from the ESRC: International co-investigators and the Future Leaders Scheme

"They don't come over here, they take our co-investigator jobs..."I’m still behind on my blogging – I owe the internet the second part of the impact series, and a book review I really must get round to writing.  But I picked up an interesting nugget of information regarding the ESRC and international co-investigators that’s worthy of sharing and commenting upon.

ESRC communications send round an occasional email entitled ‘All the latest from the ESRC’, which is well worth subscribing to, and reading very carefully as often quite big announcements and changes are smuggled out in the small print.  In the latest version, for example, the headline news is the Annual Report (2011-12), while the announcement of the ESRC Future Leaders call for 2012 is only the fifth item down a list of funding opportunities.  To be fair, it was also announced on Twitter and perhaps elsewhere too, and perhaps the email has a wider audience than people like me.  But even so, it’s all a bit low key.

I’ve not got much to add to what I said last year about the Future Leaders Scheme other than to note with interest the lack of an outline stage this year, and the decision to ring fence some of the funding for very early career researchers – current doctoral students and those who have just passed their PhD.  Perhaps the ESRC are now more confident in institutions’ ability to regulate their own submission behaviour, and I can see this scheme being a real test of this.  I know at the University of Nottingham we’re taking all this very seriously indeed, and grant writing is now neither a sprint nor a marathon but more like a steeplechase, and my impression from the ARMA conference is that we’re far from alone in this.  Balancing ‘demand management’ with a desire to encourage applications is a topic for another blog post.  As is the effect of all these calls with early Autumn deadlines – I’d argue it’s much harder to demand manage over the summer months when applicants, reviewers, and research managers are likely to be away on holiday and/or researching.

Something else mentioned in the ESRC email is a light touch review of the ESRC’s international co-investigator policy.  One of the findings was that

“…grant applications with international co-investigators are nearly twice as likely to be successful in responsive mode competitions as those without, strengthening the argument that international cooperation delivers better research.”

This is very interesting indeed.  My first reaction is to wonder whether all of that greater success can be explained by higher quality, or whether the extra value for money offered has made a difference.  Outside of the various international co-operation/bilateral schemes, the ESRC would generally expect only to pay directly incurred research costs for ICo-Is, such as travel, subsistence, transcription, and research assistance.  It won’t normally pay for investigator time and will never pay overheads, which represents a substantial saving compared with naming a UK-based Co-I.

While the added value for money argument will generally go in favour of the application, there are circumstances where it might make it technically ineligible.  When the ESRC abolished the small grants scheme and introduced the floor of £200k as the minimum to be applied for through the research grants scheme, the figure of £200k was considered to represent the minimum scale/scope/ambition that they were prepared to entertain.  But a project with a UK Co-I may sneak in just over £200k and be eligible, yet an identical project with an ICo-I would not be eligible, as it would not have salary costs or overheads to bump up the cost.  I did raise this with the ESRC a while back when I was supporting an application that would have been ineligible under the new rules, but we managed to submit it before the final deadline for Small Grants.  The issue did not arise for us then, but I’m sure it will arise (and probably already has) for others.
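To make that eligibility quirk concrete, here’s a rough sketch. The £200k floor is real; every other figure below is invented purely for illustration.

```python
# Two otherwise identical projects; only the co-investigator arrangement differs.
# The £200k research grants floor is real; all costings below are made up.
FLOOR = 200_000

shared_costs = {
    "PI time": 80_000,
    "Research fellow": 70_000,
    "Travel, transcription, fieldwork": 30_000,
}

variants = {
    "UK Co-I (time and overheads charged)": {"UK Co-I time + overheads": 40_000},
    "International Co-I (expenses only)": {"ICo-I travel and subsistence": 10_000},
}

for label, extra in variants.items():
    total = sum(shared_costs.values()) + sum(extra.values())
    verdict = "eligible" if total >= FLOOR else "below the £200k floor, so ineligible"
    print(f"{label}: £{total:,} -- {verdict}")
```

Same research, same ambition; the only difference is which costs the funding rules allow you to charge.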

The ESRC has clarified the circumstances under which they will pay overseas co-investigator salary costs:

“….only in circumstances where payment of salaries is absolutely required for the research project to be conducted. For example, where the policy of the International Co-Investigator’s home institution requires researchers to obtain funding for their salaries for time spent on externally-funded research projects.

In instances where the research funding structure of the collaborating country is such that national research funding organisations equivalent to the ESRC do not normally provide salary costs, these costs will not be considered. Alternative arrangements to secure researcher time, such as teaching replacement costs, will be considered where these are required by the co-investigator’s home institution.”

This all seems fairly sensible, and would allow the participation of researchers based in institutes where they’re expected to bring in their own salary, and those where there isn’t a substantial research time allocation that could be straightforwardly used for the project.

While it would clearly be inadvisable to add on an ICo-I in the hope of boosting chances of success or for value for money alone, it’s good to know that applications with ICo-Is are doing well with the ESRC even outside of the formal collaborative schemes, and that we shouldn’t shy away from looking abroad for the very best people to work with.   Few would argue with the ESRC’s contention that

[m]any major issues requiring research evidence (eg the global economic crisis, climate change, security etc.) are international in scope, and therefore must be addressed with a global research response.

Are institutions over-reacting to impact?

Interesting article and leader in this week’s Times Higher on the topic of impact, both of which carry arguments that “university managers” have over-reacted to the impact agenda.  I’m not sure whether that’s true or not, but I suspect that it’s all a bit more complicated than either article makes it appear.

The article quotes James Ladyman, Professor of Philosophy at the University of Bristol, as saying that university managers had overreacted and created “an incentive structure and environment in which an ordinary academic who works on a relatively obscure area of research feels that what they are doing isn’t valued”.

If that’s happened anywhere, then obviously things have gone wrong.  However, I do think that this needs to be understood in the context of other groups and sub-groups of academics who likewise feel – or have felt – undervalued.  I can well understand why academics whose research does not lend itself to impact activities would feel alienated and threatened by the impact agenda, especially if it is wrongly presented (or perceived) as a compulsory activity for everyone – regardless of their area of research, skills, and comfort zone – and (wrongly) as a prerequisite for funding.

Another group of researchers who felt – and perhaps still feel – under-valued are those undertaking very applied research.  It’s very hard for them to get their stuff into highly rated (aka valued) journals.  Historically the RAE has not been kind to them.  University promotions criteria have perhaps failed to sufficiently recognise public engagement and impact activity – and perhaps still do.  While all the plaudits go to their highly theoretical colleagues, the applied researchers feel looked down upon, and struggle to get academic recognition.  If we were to ask academics whose roles are mainly teaching (or teaching and admin) rather than research, I think we might find that they feel undervalued by a system which many of them feel is obsessed by research and sets little store by excellent (rather than merely adequate) teaching.  Doubtless increased fees will change this, and perhaps we will hear complaints of the subsequent under-valuing of research relative to teaching.

So if academics working in non-impact friendly (NIFs, from now on) areas of research are now feeling under-valued, they’re very far from alone.  It’s true that the impact agenda has brought about changes to how we do things, but I think it could be argued that it’s not that the NIFs are now under-valued, but that other kinds of research and academic endeavour – namely applied research and impact activities (ARIA from now on) – are now being valued to a greater degree than before.  Dare I say it, to an appropriate degree?  Problem is, ‘value’ and ‘valuing’ tends to be seen as a zero sum game – if I decide to place greater emphasis on apples, the oranges may feel that they have lost fruit bowl status and are no longer the, er, top banana.  Even if I love oranges just as much as before.

Exactly how institutions ‘value’ (whatever we mean by that) NIF research and ARIA is an interesting question.  It seems clear to me that an institution/school/manager/grant giving body/REF/whatever could err either way by undervaluing and under-rewarding either.  We need both.  And we need excellent teachers.  And – dare I say it – non-academic staff too.  Perhaps the challenge for institutions is getting the balance right and making everyone feel valued, and reflecting different academic activities fairly in recruitment and selection processes and promotion criteria.  Not easy, when any increased emphasis on any one area seems to cause others to feel threatened.

How can we help researchers get responses for web questionnaires?

A picture of an energy saving lightbulb
*Insert your own hilarious and inaccurate joke about how long energy saving lightbulbs take to warm up here*

I’ve had an idea, and I’d like you, the internet, to tell me if it’s a good one or not.  Or how it might be made into a good one.

Would it be useful to set up a central list/blog/twitter account for ongoing research projects (including student projects) which need responses to an internet questionnaire from the general public?  Would researchers use it?  Would it add value?  Would people participate?

Every so often, I receive an email or tweet asking for people to complete a research questionnaire on a particular topic.  I usually do (if it’s not too long, and the topic isn’t one I consider intrusive), partly because some of them are quite interesting, partly because I feel a general duty to assist with research when asked, and partly because I probably need to get out more.  The latest was one which a friend and former colleague shared on Facebook.  It was a PhD project from a student in her department about sun tanning knowledge and behaviour, and it’s here if you feel like taking part.  Now this is not a subject that I feel passionately about, but perhaps that’s why it might be useful for the likes of me to respond.

I guess the key assumptions that I’m making are that there are sufficient numbers of other people like me who would be willing to spend a few minutes every so often completing a web survey to support research, and that nothing like this exists already.  If there is, I’ve not heard about it, and I’d have thought that I would have done.  But I’d rather be embarrassed now than later!  Another assumption is that such a resource might be useful.  I strongly suspect that any such resource would have a deeply atypical demographic – I’d imagine it would be mainly university staff and students.  But I’d imagine that well designed research questionnaires would be asking sufficiently detailed demographic information to be able to factor this in.  For some student projects where the main challenge can be quantity rather than variety, this might not even matter too much.  I guess it depends what questions are being asked as part of the research.

I’ve not really thought this through at all yet.  I would imagine that only projects which could be completed by anyone would be suitable for inclusion, or at least only projects where responses are invited from a broad range of people.  Very specific projects probably wouldn’t work, and would make it harder for participants to find ones which they can do.  Obviously all projects would need to have ethical approval from their institution.  There would be an expectation that beneficiaries are prepared to reciprocate and help others in return.  And clearly there has to be a strategy to tell people about it.

In practical terms, I’m thinking about either a separate blog or a separate page of this one, and probably a separate twitter account.  Researchers could add details in a comment on a monthly blog post, and either tweet the account and ask for a re-tweet, or email me a tweet to send.  Participants could follow the twitter feed and subscribe to the comments and blog.

So… what do you think?  Please comment below (or email me if you prefer).  Would this be useful?  Would you participate?  What have I missed?  If I do set this up, how might I go about telling people about it?

A partial, qualified, cautious defence of the Research Excellence Framework (REF)

No hilarious visual puns on REF / Referees from me....

There’s been a constant stream of negative articles about the Research Excellence Framework (for non-UK readers, this is the “system for assessing the quality of research in UK higher education institutions”) over the last few months, and two more have appeared recently (from David Shaw, writing in the Times Higher, and from Peter Wells on the LSE Impact Blog) which have prompted me to respond with something of a defence of the REF.

One crucial fact that I left out of the description of the REF in the previous paragraph is that “funding bodies intend to use the assessment outcomes to inform the selective allocation of their research funding to HEIs, with effect from 2015-16”.  And I think this is a fact that’s also overlooked by some critics.  While a lot of talk is about prestige and ‘league tables’, what’s really driving the process is the need to have some mechanism for divvying out the cash for funding research – QR funding.  We could most likely do without a “system for assessing the quality of research” across every discipline and every UK university in a single exercise using common criteria, but we can’t do without a method of dividing up the cake as long as there’s still cake to share out.

In spite of the current spirit of perpetual revolution in the sector, money  is still paid (via HEFCE) to universities for research, without much in the way of strings attached.  This basic, core funding is one half of the dual funding system for research in the UK – the other half being funding for individual research projects and other activities through the Research Councils.  What universities do with their QR funding varies, but I think typically a lot of it is in staff salaries, so that the number of staff in any given discipline is partly a function of teaching income and research income.

I do have sympathy for some of the arguments against the REF, but I find myself returning to the same question – if not this way, then how? 

It’s unfair to expect anyone who objects to any aspect of the REF to furnish the reader with a fully worked up alternative, but constructive criticism must at least point the way.  One person who doesn’t fight shy of coming up with an alternative is Patrick Dunleavy, who has argued for a ‘digital census’ involving the use of citation data as a cheap, simple, and transparent replacement for the REF.  That’s not a debate I feel qualified to participate in, but my sense is that Dunleavy’s position on this is a minority one in UK academia.

In general, I think that criticisms of the REF tend to fall into the following broad categories.  I don’t claim to address decisively every last criticism made (hence the title), but for what it’s worth, here are the categories that I’ve identified, and what I think the arguments are.

1.  Criticism over details

The REF team have a difficult balancing act.  On the one hand, they need rules which are sensitive to the very real differences between different academic disciplines.  On the other, fairness and efficiency call for as much similarity in approach, rules, and working methods as possible between panels.  The more differences between panels, the greater the chances of confusion and of mistakes being made in the process of planning and submitting REF returns, which could seriously affect both notional league table placing and cold hard cash.  The more complicated the process, the greater the transaction costs.   Which brings me onto the second balancing act.  On the one hand, it needs to be a rigorous and thorough process, with so much public money at stake.  On the other hand, it needs to be lean and efficient, minimising the demands on the time of institutions, researchers, and panel members.   This isn’t to say that the compromise reached on any given point between particularism and uniformity, and between rigour and efficiency, is necessarily the right one, of course.  But it’s not easy.

2.  Impact

The use of impact at all.  The relative weighting of impact.  The particular approach to impact.  The degree of uncertainty about impact.  It’s a step into the unknown for everyone, but I would have thought that the idea that there be some notion of impact – some expectation that where academic research makes a difference in the real world, we should ensure it does so – is a reasonable one.  I have much more sympathy for some academic disciplines than others as regards objections to the impact agenda.  Impact is really a subject for a blog post in itself, but for now, it’s worth noting that it would be inconsistent to argue against the inclusion of impact in the REF and also to argue that the REF is too narrow in terms of what it values and what it assesses.

3.  Encouraging game playing

While it’s true that the REF will encourage game playing in similar (though different) ways to its predecessors, I can’t help but think this is inevitable and would also be true of every possible alternative method of assessment.  And what some would regard as gaming, others would regard as just doing what is asked of them.

One particular ‘game’ that is played – or, if you prefer, strategic decision that is made – is about where to set the threshold for inclusion.  It’s clear that there’s no incentive to include those whose outputs are likely to fall below the minimum threshold for attracting funding.  But it’s common for some institutions, in some disciplines, to set a threshold above this, with one eye not only on the QR funding, but also on league table position.  There are two arguments that can be made against this.  One is that QR funding shouldn’t be so heavily concentrated on the top rated submissions and/or that more funding should be available.  But that’s not an argument against the REF as such.  The other is that institutions should be obliged to submit everyone.  But the costs of doing so would be huge, and it’s not clear to me what the advantages would be – would we really get better or more accurate results with which to share out the funding?  Because ultimately the REF is not about individuals, but institutions.

4. Perverse incentives

David Shaw, in the Times Higher, sees a very dangerous incentive in the REF.

REF incentivises the dishonest attribution of authorship. If your boss asked you to add someone’s name to a paper because otherwise they wouldn’t be entered into the REF, it could be hard to refuse.

I don’t find this terribly convincing.  While I’m sure that there will be game playing around who should be credited with co-authored publications, I’d see that as acceptable in a way that the fraudulent activity that Shaw fears (but stresses that he’s not experienced first-hand) just isn’t.  There are opportunities for – and temptations to – fraud, bad behaviour and misconduct in pretty much everything we do, from marking students’ work to reporting our student numbers and graduate destinations.  I’m not clear how that makes any of these activities ‘unethical’ in the way his article seems to argue.  Fraud is rare in our sector, and if anyone does commit fraud, it’s a huge scandal and heads roll.  It ruins careers and leaves a long shadow over institutions.  Even leaving aside the residual decency and professionalism that’s the norm in our sector, it would be a brave Machiavellian Research Director who would risk attempting this kind of fraud.  To make it work, you need the cooperation and the silence of two academic researchers for every single publication.  Risk versus reward – it’s just not worth it.

Peter Wells, on the LSE blog, makes the point that the REF acts as an active disincentive for researchers to co-author papers with colleagues at their own institution, as only one can return the output to the REF.  That’s an oversimplification, but it’s certainly true that there’s active discouragement of the submission of the same output multiple times in the same return.  There’s no such problem if the co-author is at another institution, of course.  However, I’m not convinced that this theoretical disincentive makes a huge difference in practice.  Don’t academics co-author papers with the most appropriate colleague, whether internal or external?  How often – really – does a researcher choose to write something with a colleague at another institution rather than a colleague down the corridor, for REF reasons alone?  And might the REF incentive to include junior colleagues as co-authors that Shaw identifies work in the other direction, for genuinely co-authored pieces?

In general, proving the theoretical possibility of a perverse incentive is not sufficient to prove its impact in reality.

5.  Impact on morale

There’s no doubt that the REF causes stress and insecurity and can add significantly to the workload of those involved in leading on it.  There’s no doubt that it’s a worrying time, waiting for news of the outcome of the R&R paper that will get you over whatever line your institution has set for inclusion.  I’m sure it’s not pleasant being called in for a meeting with the Research Director to answer for your progress towards your REF targets, even with the most supportive regime.

However…. and please don’t hate me for this…. so what?  I’m not sure that the bare fact that something causes stress and insecurity is a decisive argument.  Sure, there’s a prima facie case for trying to make people’s lives better rather than worse, but that’s about it.  And again, what alternative system would be equally effective at dishing out the cash while being less stressful?  The fact is that every job – including university jobs – is sometimes stressful and has downsides as well as upsides.  Among academic staff, the number one stress factor I’m seeing at the moment is marking, not the REF.

6.  Effect on HE culture

I’ve got more time for this argument than for the stress argument, but I think a lot of the blame is misdirected.  Take Peter Wells’ rather utopian account of what might replace the REF:

For example, everybody should be included, as should all activities.  It is partly by virtue of the ‘teaching’ staff undertaking a higher teaching load that the research active staff can achieve their publications results; without academic admissions tutors working long hours to process student applications there would be nobody to receive research-led teaching, and insufficient funds to support the University.

What’s being described here is not in any sense a ‘Research Excellence Framework’.  It’s a much broader ‘Academic Excellence Framework’, and that doesn’t strike me as something that’s particularly easy to assess.  How on earth could we go about assessing absolutely everything that absolutely everyone does?  Why would we give out research cash according to how good an admissions tutor someone is?

I suspect that what underlies this – and some of David Shaw’s concerns as well – is a much deeper unease about the relative prestige and status attached to different academic roles: the research superstar; the old fashioned teaching and research lecturer; those with heavy teaching and admin loads who are de facto teaching only; and those who are de jure teaching only.  There is certainly a strong sense that teaching is undervalued – in appointments, promotions, in status, and in other ways.  Those with higher teaching and admin workloads do enable others to research in precisely the way that Wells argues, and respect and recognition for those tasks is certainly due.  And I think the advent of increased tuition fees is going to change things, for the better, as regards the profile and status of excellent teaching.

But I’m not sure why any of these status problems are the fault of the REF.  The REF is about assessing research excellence and giving out the cash accordingly.  If the REF is allowed to drive everything, and non-inclusion is such a badge of dishonour that the contributions of academics in other areas are overlooked, well, that’s a serious problem.  But it’s an institutional one, and not one that follows inevitably from the REF.  We could completely change the way the REF works tomorrow, and it will make very little difference to the underlying status problem.

It’s not been my intention here to refute each and every argument against the REF, and I don’t think I’ve even addressed directly all of Shaw and Wells’ objections.  What I have tried to do is to stress the real purpose of the REF, the difficulty of the task facing the REF team, and make a few limited observations about the kinds of objections that have been put forward.  And all without a picture of Pierluigi Collina.

Leverhulme Trust to support British Academy Small Research Grant scheme

The logo of the British Academy
BA staff examine the Leverhulme memorandum of understanding

The British Academy announced yesterday that a new collaborative agreement has been reached with the Leverhulme Trust about funding for its Small Grants Scheme.  This is very good news for researchers in the humanities and the social sciences, and I’m interrupting my series of gloom-and-doom posts on what to do if your application is unsuccessful to inflict my take on some really good news upon you, oh gentle reader.  And to see if I can set a personal best for the number of links in an opening sentence.  Which I can.

When I first started supporting grant-getting activity back in the halcyon days of 2005ish, the British Academy Small Grants scheme was a small and beautifully formed scheme.  It funded up to £7.5k or so for projects of up to two years, and only covered research expenses – so no funding for investigator time, replacement teaching, or overheads, but it would cover travel, subsistence, transcription, data, casual research assistance and so on.  It was a light touch application on a simple form, and enjoyed a success rate of around 50% or so.  The criterion for funding was academic merit.  Nothing else mattered.  It funded some brilliant work, and Ken Emond of the British Academy has always spoken very warmly about this scheme, and considered it a real success story.  Gradually people started cottoning on to just how good a scheme it was, and success rates started to drop – but that’s what happens when you’re successful.

Then along came the Comprehensive Spending Review and budgets were cut.  I presume the scheme was scrapped under government pressure, only for our heroes at the BA to eventually win the argument.  At the same time, the ESRC decided that their reviewers weren’t going to get out of bed in the morning for less than £200k.  Suddenly bigger projects were the only option and (funded) academic research looked to be all about perpetual paradigm shifts, with only outstanding stuff that would change everything getting funded.  And there was no evidence of any thought as to how these major theoretical breakthroughs gained through massive grants might be developed and expanded and exploited and extended through smaller projects.

Although it was great to see the BA SGS scheme survive in any form, the reduced funding made it inevitable that success rates would plummet.  However, the increased funding from the Leverhulme Trust could make a difference.  According to the announcement, the Trust has promised £1.5 million funding over three years.  Let’s assume:

  • that every penny goes to supporting research, and not a penny goes on infrastructure and overheads and that it’s all additional (rather than replacement) funding
  • that £10k will remain the maximum available
  • that the average amount awarded will be £7.5k

So…. £1.5m over three years is £500k per year.  £500k divided by £7.5k average project cost is about 67 extra projects.  While we don’t know how many projects will be funded in this year’s reduced scheme, we do know about last year.  According to the British Academy’s 2010/11 annual report

For the two rounds of competition held during 2010/11 the Academy received 1,561 applications for consideration and 538 awards were made, a success rate of 34.5%. Awards were spread over the whole range of Humanities and Social Sciences, and were made to individuals based in more than 110 institutions, as well as to more than 20 independent scholars.

2010/11 was the last year that the scheme ran in full and at the time, we all thought that the spring 2011 call would be the last, so I suspect that the success rate might have been squeezed by a number of ‘now-or-never’ applications.  We won’t know until next month how many awards were made in the Autumn 2011 call, nor what the success rate is, so we won’t know until then whether the Leverhulme cash will restore the scheme to its former glory.  I suspect that it won’t, and that the combined total of the BA’s own funds and the Leverhulme contribution will add up to less than was available for the scheme before the comprehensive spending review struck.
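For completeness, here’s the back-of-envelope sum from above as a quick sketch, using the assumptions listed earlier and the 2010/11 figures from the annual report:

```python
# Back-of-envelope only; assumptions as listed above.
leverhulme_total = 1_500_000   # £1.5m promised over three years
years = 3
average_award = 7_500          # assumed average award of £7.5k

extra_per_year = leverhulme_total / years / average_award
print(f"Extra awards per year: about {round(extra_per_year)}")   # ~67

# For comparison, the last full year of the scheme (2010/11):
apps, awards = 1561, 538
print(f"2010/11: {awards} awards from {apps} applications ({awards / apps:.1%})")
```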

Nevertheless, there will be about 67 more small social science and humanities projects funded than otherwise would have been the case.  So let’s raise a non-alcoholic beverage to the Leverhulme Trust, and in memory of founder William Hesketh Lever and his family’s values of “liberalism, nonconformity, and abstinence”.

23rd Jan update:  In response to a question on Twitter from @Funding4Res (aka Marie-Claire from the University of Huddersfield’s Research and Enterprise team), the British Academy have said that “they’ll be rounds for Small Research Grants in the spring and autumn. Dates will be announced soon.”