‘Unimaginative’ research funding models and picking winners

XKCD 1827 – Survivorship Bias  (used under Creative Commons Attribution-NonCommercial 2.5 License)

Times Higher Education recently published an interesting article by Donald Braben, endorsed by 36 eminent scholars including a number of Nobel laureates. They criticise “today’s academic research management” and claim that, as an unforeseen consequence, “exciting, imaginative, unpredictable research without thought of practical ends is stymied”. The article fires off somewhat scattergun criticism of the usual bêtes noires – the inherent conservatism of peer review; the impact agenda and the lack of funding for blue skies research; and grant application success rates.

I don’t deny that there’s a lot of truth in their criticisms, but in terms of research policy and deciding how best to use limited resources, I think it’s all a bit more complicated than that.

Picking Winners and Funding Outsiders

Look, I love an underdog story as much as the next person. There’s an inherent appeal in the tale of the renegade scholar, the outsider, the researcher who rejects the smug, cosy consensus (held mainly by old white guys) and whose heterodox ideas – considered heretical nonsense by the establishment – are  ultimately triumphantly vindicated. Who wouldn’t want to fund someone like that? Who wouldn’t want research funding to support the most radical, most heterodox, most risky, most amazing-if-true research? I think I previously characterised such researchers as a combination of Albert Einstein and Jimmy McNulty from ‘The Wire’, and it’s a really seductive picture. Perhaps this is part of the reason for the MMR fiasco.

The problem is that the most radical outsiders are functionally indistinguishable from cranks and charlatans. Are there many researchers with a more radical vision than the homeopath, whose beliefs imply not only that much of modern medicine is misguided, but that so is our fundamental understanding of the physical laws of the universe? Or the anti-vaxxers? Or the holocaust deniers?

Of course, no-one is suggesting that these groups be funded, and, yes, I’ll admit it’s a bit of a cheap shot aimed at a straw target. But even if we can reliably eliminate the cranks and the charlatans, we’ll still be left with a lot of fringe science. An accompanying THE article quotes Dudley Herschbach, joint winner of the 1986 Nobel Prize for Chemistry, as saying that his research was described as being at the “lunatic fringe” of chemistry. How can research funders tell the difference between lunatic ideas with promise (both interesting-if-true and interesting-even-if-not-true) and lunatic ideas that are just… lunatic? If it’s possible to pick winners, then great. But if not, it sounds a lot like buying lottery tickets and crossing your fingers. And once we’re in the business of applying greater scrutiny to picking winners, we’re back to having peer review again.

One of the things that strikes me about the history of science is that there are many stories of people who believed they were right – in spite of the scientific consensus and in spite of the state of the evidence available at the time – but who proceeded anyway, heroically ignoring objections and evidence, until ultimately vindicated. We remember these people because they were ultimately proved right, or rather, because their theories were ultimately proved to have more predictive power than those they replaced.

But I’ve often wondered about such people. They turned out to be right, but were they right because of some particular insight, or were they right because they were lucky, in that their particular prejudice happened to line up with the actuality? Was it just that a stopped clock is right twice a day? Might their pig-headedness equally well have carried them along another (wrong) path entirely, leaving them to be forgotten as just another crank? And just because someone is right once, is there any particular reason to think that they’ll be right again? (Insert obligatory reference to Newton’s dabblings with alchemy here). Are there good reasons for thinking that the people who predicted the last economic crisis will also predict the next one?

A clear way in which luck – interestingly rebadged as ‘serendipity’ – is involved is through accidental discoveries. Researchers are looking at X when… oh look at Y, I wonder if Z… and before you know it, you have a great discovery which isn’t what you were after at all. Free packets of post-it notes all round. Or when ‘blue skies’ research which had no obvious practical application at the time becomes a key enabling technology or insight later on.

The problem is that these stories of serendipity, of surprise impact, and of radical outsider researchers are all examples of lotteries in which history only remembers the winning tickets. Through an act of serendipity, XKCD published a cartoon illustrating this point nicely (see above) just as I was thinking about these issues.

But what history doesn’t tell us is how many lottery tickets research funding agencies have to buy in order to have those spectacular successes. And just as importantly, whether or not a ‘lottery ticket’ approach to research funding will ultimately yield a greater return on investment than a more ‘unimaginative’ approach using the tired old processes of peer review undertaken by experts in the relevant field, followed by prioritisation decisions taken by a panel of eminent scientists drawn from across the funder’s remit. And of course, great successes achieved through this method – having a great idea, having the greatness of the idea acknowledged by experts, and then carrying out the research – make for a much less compelling narrative or origin story, probably to the point of invisibility.

A mixed ecosystem of conventional and high risk-high reward funding streams

I think there would be broad agreement that the research funding landscape needs a mixture of funding methods and approaches. I don’t take Braben and his co-signatories to be calling for wholesale abandonment of peer review, of themed calls around particular issues, or even of the impact agenda. And while I’d defend all those things, I similarly recognise merit in high risk-high reward research funding, and in attempts by major funders to try to address the problem of peer review conservatism. But how do we achieve the right balance?

Braben acknowledges that “some agencies have created schemes to search for potentially seminal ideas that might break away from a rigorously imposed predictability” and we might include the European Research Council and the UK Economic and Social Research Council as examples of funders who’ve tried to do this, at least in some of their schemes. The ESRC in particular on one scheme abandoned traditional peer review for a Dragon’s Den style pitch-to-peers format, and the EPSRC is making increasing use of sandpits.

It’s interesting that Braben mentions British Petroleum’s Venture Research Initiative as a model for a UCL pilot aimed at supporting transformative discoveries. I’ll return to that pilot later, but he also mentions that the one project that scheme funded was later funded by an unnamed “international benefactor”, which I take to be a charity, private foundation, or other philanthropic endeavour rather than a publicly funded research council or comparable organisation. I don’t think this is accidental – private companies have much more freedom to create blue skies research and innovation funding, as long as the rest of the operation generates enough money to pay the bills and enough of their lottery tickets end up winning to keep management happy. The same goes for private foundations, which have near total freedom to operate, apart perhaps from charity rules.

But I would imagine that it’s much harder for publicly funded research councils to take these kinds of risks, especially during austerity. (“Sorry Minister, none of our numbers came up this year, but I’m sure we’ll do better next time.”) In a UK context, the Leverhulme Trust – a happy historical accident funded largely through dividend payments from its bequeathed shareholding in Unilever – seeks to differentiate itself from the research councils by styling itself as more open to risky and/or interdisciplinary research, and could perhaps develop further in this direction.

The scheme that Braben outlines is genuinely interesting. Internal only within UCL, a very light touch application process mainly involving interviews/discussion, decisions taken by “one or two senior scientists appointed by the university” – not subject experts, I infer, as they’re the same people for each application. Over 50 applications since 2008 have so far led to one success. There’s no obligation to make an award to anyone, and they can fund more than one. It’s not entirely clear from this article whether the applicant was – as Braben proposes for the kinds of schemes he calls for – “exempt from normal review procedures for at least 10 years. They should not be set targets either, and should be free to tackle any problem for as long as it takes”.

From the article I would infer that his project received external funding after 3 years, but I don’t want to pick holes in a scheme which is only partially outlined and which I don’t know any more about, so instead I’ll talk about Braben’s more general proposal, not the UCL scheme in particular.

It’s a lot of power in a very few hands to give out these awards, and it represents a very large and very blank cheque. While the use of interviews and discussion cuts down on grant writing time, my worry is that a small panel and interview-based decision making may open the door to unconscious bias, and to greater success for more accomplished social operators. Anyone who’s been on many interview panels will probably have experienced fellow panel members making heroic leaps of inference about candidates based on some deep intuition, and the tendency of some people to want to appoint the more confident and self-assured interviewee ahead of a visibly more nervous but far better qualified and more experienced rival. I have similar worries about “sand pits” as a way of distributing research funding – do better social operators win out?

The proposal is for no normal review procedures, and for ten years in which to work, possibly longer. At Nottingham – as I’m sure at many other places – our nearest equivalent scheme is something like a strategic investment fund which can cover research as well as teaching and other innovations. (Here we stray into things I’m probably not supposed to talk about, so I’ll stop). But these are major investments, and there’s surely got to be some kind of accountability during decision-making processes and some sort of stop-go criteria or review mechanism during the project’s life cycle. I’d say that courage to start up some high risk, high reward research project has to be accompanied by the courage to shut it down too. And that’s hard, especially if livelihoods and professional reputations depend upon it – it’s a tough decision for those leading the work and for the funder too. But being open to the possibility of shutting down work implies a review process of some kind.

To be clear, I’m not saying let’s not have more high-risk, high-reward, curiosity-driven research. By all means let’s consider alternative approaches to peer review and to decision making and to project reporting. But I think high risk/high reward schemes raise a lot of difficult questions, not least what the balance should be between lottery ticket projects and ‘building society savings account’ projects. We need to be aware of the survivorship bias illustrated by the XKCD cartoon above, and remember that serendipity and vindicated radical researchers are both lotteries in which we only see the winning tickets. We also need to think very carefully about fair selection and decision-making processes, and the danger of too much power and too little accountability in too few hands.

It’s all about the money, money, money…

But ultimately the problem is that there are a lot more researchers and academics than there used to be, and their numbers – in many disciplines – are determined not by the amount of research funding available nor the size of the research challenges, but by the demand for their discipline from taught-course students. And as higher education has expanded hugely since the days in which most of Braben’s “500 major discoveries” were made, there are just far more academics and researchers than there is funding to go around. That’s especially true given recent “flat cash” settlements. I also suspect that the costs of research are now much higher than they used to be, given both the technology available and the technology required to push further at the boundaries of human understanding.

I think what’s probably needed is a mixed ecology of research funders and schemes. Publicly funded research bodies are probably not best placed to fund risky research because of accountability issues, and perhaps this is a space in which private foundations, research funding charities, and universities themselves are better able to operate.

HEFCE publishes ‘Consultation on the second Research Excellence Framework (REF 2021)’

“Let’s all meet up in the Year… 2021”

In my previous post I wrote about the Stern Review, and in particular the portability issue – whereby publications remained with the institution where they were written, rather than moving institutions with the researcher – which seemed by some distance the most vexatious and controversial issue, at least judging by my Twitter feed.

Since then there has been a further announcement about a forthcoming consultation exercise looking at the detail of implementing the Stern Review – a pretty clear signal that the overall principles and rationale had been accepted, and that Lord Stern’s comment that his recommendations were meant to be taken as a whole and were not amenable to cherry-picking had been heard and taken to heart.

Today – only ten days or so behind schedule – the consultation has been launched.  It invites “responses from higher education institutions and other groups and organisations with an interest in the conduct, quality, funding or use of research”. In paragraph 15, this invitation is opened out to include “individuals”. So as well as contributing to your university response, you’ve also got the opportunity to respond personally. Rather than just complain about it on Twitter.

Responses are only accepted via an online form, although the questions on that form are available for download as a Word document. There are 44 questions for which responses are invited, and although these are free text fields, the format of the consultation is to solicit responses to very specific questions, as perhaps would be expected given that the consultation is about detail and implementation. Paragraph 10 states that

“we have taken the [research excellence] framework as implemented in 2014 as our starting position for this consultation, with proposals made only in those areas where our evidence suggests a need or desire for change, or where Lord Stern’s Independent Review recommends change. In developing our proposals, we have been mindful of the level of burden indicated, and have identified where certain options may offer a more deregulated approach than in the previous framework. We do not intend to introduce new aspects to the assessment framework that will increase burden.”

In other words, I think we can assume that 2014 plus Stern = the default and starting position, and I would be surprised if any radical departures from this resulted from the consultation. Anyone wanting to propose something radically different is wasting their time, even if the first question invites “comments on the proposal to maintain an overall continuity of approach with REF 2014.”

So what can we learn from the questions? The first thing that strikes me is that it’s a very detailed and very long list of questions on a lot of issues, some of which aren’t particularly contentious. But it’s indicative of an admirable thoroughness and rigour. The second is that they’re all about implementation. The third is that reduction of burden on institutions is a key criterion, which has to be welcome.

Units of Assessment 

It looks as if there’s a strong preference to keep UoAs pretty much as they are, though the consultation flags up inconsistencies of approach from institutions around the choice of which of the four Engineering Panels to submit to. Interestingly, one of the issues is comparability of outcome (i.e. league tables) which isn’t technically supposed to be something that the REF is concerned with – others draw up league tables using their own methodologies, there’s no ‘official’ table.

It also flags up concerns expressed by the panel about Geography and Archaeology, and worries about forensic science, criminology, and film and media studies, I think around subject visibility under current structures. But while some tweaks may be allowed, there will be no change to the current Main Panel/Sub-Panel structure, so no sub-sub-panels, though one of the consultation possibilities is about sub-panels setting different sub-profiles for different areas that they cover.

Returning all research active staff

This section takes as a starting point that all research active staff will be returned, and seeks views on how to mitigate game-playing and unintended consequences. The consultation makes a technical suggestion around using HESA cost centres to link research active staff to units of assessment, rather than leaving institutions the flexibility to decide – to choose a completely hypothetical example drawn in no way from experience with a previous employer – to submit Economists and Educationalists to a beefed-up Business and Management UoA. This would reduce that element of game playing, but would also negatively affect those whose research identity doesn’t match their teaching/School/Department identity – say, bioethicists based in medical or veterinary schools, or those involved in area studies and another discipline (business, history, law) who legitimately straddle more than one school. A ‘get returned where you sit’ approach might penalise them and might affect an institution’s ability to tell the strongest possible story about each UoA.

As you’d expect, there’s also an awareness of very real worries about this requirement to return all research active staff leading to the contractual status of some staff being changed to teaching-only. Just as last time some UoAs played the ‘GPA game’ and submitted only their best and brightest, this time they might continue that strategy by formally taking many people out of ‘research’ entirely. They’d like respondents to say how this might be prevented, and make the point that HESA data could be used to track such wholesale changes, but presumably there would need to be consequences in some form, or at least a disincentive for doing so. But any such move would intrude on institutional autonomy, which would be difficult. I suppose the REF could backdate the audit point for this REF, but it wouldn’t prevent such sweeping changes for next time. Another alternative would be to use the Environment section of the REF to penalise those with a research culture based around a small proportion of staff.

Personally, I’m just unclear how much of a problem this will be. Will there be institutions/UoAs where this happens and where whole swathes of active researchers producing respectable research (say, 2-3 star) are moved to teaching contracts? Or is the effect likely to be smaller, with perhaps smaller groups of individuals who aren’t research active or who perhaps haven’t been producing being moved to teaching and admin only? And again, I don’t want to presume that will always be a negative move for everyone, especially now that we have the TEF on the horizon and are holding teaching in appropriate esteem. But it’s hard to avoid the conclusion that things might end up looking a bit bleak for people who are meant to be research active, want to continue to be research active, but who are deemed by bosses not to be producing.

Decoupling staff from outputs

In the past, researchers were returned with four publications, minus any reductions for personal circumstances. Stern proposed that the number of publications to be returned should be double the number of research active staff, with each person able to return between 0 and 6 publications. A key advantage of this is that it will dispense with the need to consider personal circumstances and reductions in the number of publications – straightforward in cases of early career researchers and maternity leave, but less so for researchers needing to make the case on the basis of health problems or other potentially traumatic life events. Less admin, less intrusion, less distress.

One worry expressed in the document is whether this will allow panel members to differentiate between very high quality submissions if only double the number of publications is returned. But they argue that sampling would be required if a greater multiple were to be returned.

There’s also concern that allowing a maximum of six publications could allow a small number of superstars to dominate a submission, and a suggestion is that the minimum number moves from 0 to 1, so at least one publication from every member of research active staff is returned. Now this really would cause a rush to move those perceived – rightly or wrongly – as weak links off research contracts! I’m reminded of my MPhil work on John Rawls here, and his work on the difference principle, under which a nearly just society seeks to maximise the minimum position in terms of material wealth – to have the richest poorest possible. Would this lead to a renewed focus on support for career young researchers, and for those struggling for whatever reason, in an attempt to increase the quality of the weakest paper in the submission and have the highest rated lowest rated paper possible?

Or is there any point in doing any of that, when income is only associated with 3* (and only just) and 4*? Do we know how the quality of the ‘tail’ will feed into research income, or into league tables if it’s prestige that counts? I’ll need to think a bit more about this one. My instinct is that I like this idea, but I worry about unintended consequences (“Quick, Professor Fourstar, go and write something – anything – with Dr Career Young!”).
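To make the arithmetic concrete, here’s a minimal sketch (in Python, purely for illustration) of how a unit might pick outputs under the rules described above: a total cap of twice the number of research active staff, with each person contributing between one and six outputs. The names and quality scores are invented, and the greedy selection is my own toy approach rather than anything Stern or HEFCE propose – real REF scoring is of course nothing like this tidy.

```python
# A toy selection under the Stern-style caps: total outputs = 2 x staff,
# with each person contributing between per_person_min and per_person_max.
# Names and scores are invented for illustration only.

def select_outputs(staff_outputs: dict[str, list[float]],
                   per_person_min: int = 1, per_person_max: int = 6) -> list[tuple[str, float]]:
    """Greedily pick the best-scoring outputs subject to the caps."""
    total_cap = 2 * len(staff_outputs)
    selected = []

    # First, satisfy the per-person minimum with each person's best output(s).
    for person, scores in staff_outputs.items():
        for score in sorted(scores, reverse=True)[:per_person_min]:
            selected.append((person, score))

    # Then fill the remaining slots with the best of what's left,
    # never taking more than per_person_max from any one person.
    remaining = []
    for person, scores in staff_outputs.items():
        for score in sorted(scores, reverse=True)[per_person_min:per_person_max]:
            remaining.append((person, score))
    remaining.sort(key=lambda pair: pair[1], reverse=True)
    selected.extend(remaining[: total_cap - len(selected)])
    return selected

unit = {
    "Prof Fourstar": [4.0, 4.0, 3.5, 3.0],
    "Dr Career Young": [2.5, 2.0],
    "Dr Steady": [3.0, 3.0, 2.5],
}
for person, score in select_outputs(unit):
    print(person, score)
```

Run the toy example and the minimum-of-one rule does exactly what the consultation worries about: Dr Career Young’s 2.5 gets in at the expense of a stronger paper from Dr Steady, which is precisely why the ‘weakest link’ question matters.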

Portability

On portability – whether a researcher’s publications move with them (as previously) or stay with the institution where they were produced (like impact) – the consultation first notes possible issues about what it doesn’t call a “transfer window” round about the REF census date. If you’re going to recruit someone new, the best time to get them is either at the start of a REF cycle or during the meaningless end-of-season games towards the end of the previous one. That way, you get them and their outputs for the whole season. True enough – but hard to see that this is worse than the current situation where someone can be poached in the 89th minute and bring all their outputs with them.

The consultation’s second concern is verification. If someone moves institution, how do we know which institution can claim what? As we found with open access, the point of acceptance isn’t always straightforward to determine, and that’s before we get into forms of output other than journal articles. I suppose my first thought is that point-of-submission might be the right point, as institutional affiliation would have to be provided, but then that’s self declared information.

The consultation document recognises the concern expressed about the disadvantage that portability may have for certain groups – early career researchers and (a group I hadn’t considered) people moving into/out of industry. Two interesting options are proposed: firstly, that publications are portable for anyone on a fixed-term contract (though this may inadvertently include some Emeritus Profs); and secondly, that they’re portable for anyone who wasn’t returned to REF 2014.

One other non-Stern alternative is proposed – that proportionate publication sharing between old and new employer take place for researchers who move close to the end date. But this seems messy, especially as different institutions may want to claim different papers. For example if Dr Nomad wrote a great publication with co-authors from Old and from New, neither would want it as much as a great publication that she wrote by herself or with co-authors from abroad. This is because both Old and New could still return that publication without Dr Nomad because they had co-authors who could claim that publication, and publications can only be returned once per UoA, but perhaps multiple times by different UoAs.

Overall though – that probable non-starter aside – I’d say portability is happening, and it’s just a case of how to protect career young researchers. And either non-return last time, or fixed term contract = portability seem like good ideas to me.

Interestingly, there’s also a question about whether impact should become portable. It would seem a bit odd to me if impact and publications were to swap over in terms of portability rules, so I don’t see impact becoming portable.

Impact

I’m not going to say too much about impact here and now- this post is already too long, and I suspect someone else will say it better.

Miscellaneous 

Other than that…. should ORCID be mandatory? Should Category C (staff not employed by the university, but who research in the UOA) be removed as an eligible category? Should there be a minimum fraction of FTE to be returnable (to prevent overseas superstars being returnable on slivers of contracts)? What exactly is a research assistant anyway? Should a reserve publication be allowed when publication of a returned article is expected horrifyingly close to the census date? Should quant data be used to support assessment in disciplines where it’s deemed appropriate? Why do birds suddenly appear, every time you are near, and what metrics should be used for measuring such birds?

There’s a lot more to say about this, and I’ll be following discussions and debates on twitter with interest. If time allows I’ll return to this post or write some more, less knee-jerky comments over the next days and weeks.

The Stern Review – Publications, Portability, and Panic

Research Managers everywhere, earlier today.

The Stern Review on the future of the REF is out today, and there are any number of good summaries of the key recommendations that you can read. You could also follow the #sternreview hashtag on Twitter, or read it for yourself. It’s not particularly long, and it’s an easy read considering. The first point worth noting is that these are recommendations, not final policy, and they’re certainly nothing like a worked up final set of guidance notes for the next REF. I won’t repeat the summary, and I won’t add much on the impact issue, which Prof Mark Reed aka @fasttrackimpact has covered already.

The issue that has set twitter ablaze is that of portability – that is, which institution gets to return an academic’s publications when she moves from one institution to another. Under the old rules, there was full portability. So if Professor Portia Bililty moved from one institution to another in the final months of a REF cycle, all of her publications would come with her, and would all be returnable by her new employer. Her old employer lost all claim. Impact was different – that remained with the institution where it was created.

This caused problems. As the report puts it

72. There is a problem in the current REF system associated with the demonstrable increase in the number of individuals being recruited from other institutions shortly before the census date. This has costs for the UK HEI system in terms of recruitment and retention. An institution might invest very significantly in the recruitment, start up and future career of a faculty member, only to see the transfer market prior to REF drastically reduce the returns to that investment. This is a distortion to investment incentives in the direction of short-termism and can encourage rent-seeking by individuals and put pressure on budgets.

There was also some fairly grubby game-playing whereby big names from outside the UK were brought in on fractional contracts for their publications alone. To be fair, I’ve heard about places where this was done for other reasons, where these big names regularly attended their new fractional employer, helped develop research culture, mentored career young researchers and published articles with existing faculty. But let’s not pretend that happened everywhere.

So there’s a problem to be solved.

Stern’s response is to say that outputs – like impact – will no longer be portable.

73. We therefore recommend that outputs should be submitted only by the institution where the output was demonstrably generated. If individuals transfer between institutions (including from overseas) during the REF period, their works should be allocated to the HEI where they were based when the work was accepted for publication. A smaller maximum number of outputs might be permitted for the outputs of staff who have left an institution through retirement or to another HEI. Bearing in mind Recommendation 2, which recommends that any individual should be able to submit up to six outputs, a maximum of three outputs from those who have left the institution in the REF period would seem appropriate.
74. HEIs hiring staff during the REF cycle would be able to include them in their staff return. But they would be able to include only outputs by the individual that have been accepted for publication after joining the institution. Disincentivising short-term and narrowly-motivated movement across the sector, whilst still incentivising long-term investment in people will benefit UK research and should also encourage greater collaboration across the system.

I have to say that my first reaction to this was extremely positive. The poaching and gameplaying were very dispiriting, and this just seems…. fairer.

However, looking at the Twitter reaction, the response was rather different. Concern was expressed that this would make it very difficult for researchers to move institutions, and it would make it especially difficult for early career researchers. I’ve been back and forth on this, and I’m no longer convinced that this is such a problem.

Let’s play Fantasy REF Manager 2020. It’s the start of the 2016/2017 season (sorry, academic year). All of the existing publications from my squad of academics are mine to return, whatever happens to them and whatever career choices they make. Let’s say that one of my promising youth players (sorry, early career researchers) gets an offer from elsewhere. I can try to match or beat whatever offer she has, but whatever happens, my team gets credit for the publications she’s produced. Let’s say that she moves on, and I want to recruit a replacement, and I identify the person I want. He’s got some great publications which he can’t bring with him… but I don’t need them, because I’ve got those belonging to his predecessor. Of course, I’d be very interested in track record, but I’m appointing entirely on potential. His job is to pick up where she left off.

Might recruiting on potential actually work in favour of early career researchers? Under the old system, if I were a short termist manager, I’d probably favour the solid early-mid career plodder who can bring me a number of guaranteed, safe publications, rather than someone who is much longer on promise but shorter on actual published publications. Might it also bring an end to the system where very early career researchers were advantaged just by having *any* bankable publications that had actually appeared?

I wonder if some early career researchers are so used to a system where they’re (unfairly) judged by the sole criterion of potential REF contribution that they’re imagining a scenario where they – and perhaps they alone – are being prevented from using the only thing that makes them employable. Institutions with foresight and with long term planning have always recruited on the basis of potential and other indicators and factors beyond the REF, and this change may force more of them to do that.

However, I can see a few problems that I might have as Fantasy REF Manager. The example above presumed one-in, one-out. But what if I want to increase the size of my squad through building new areas of specialism, or put together an entirely new School or Research Group? This might present more of a problem, because it’ll take much longer for me to see any REF benefits in exchange for my investment. However, rival managers would argue that the old rules meant I could do an academic-Chelsea or academic-Manchester City, and just buy all those REF benefits straight away. And that doesn’t feel right.

Another problem might be if I was worried about returning publications from people who have left. What image does it give to the REF panel if more than a certain small percentage of your returned publications are from researchers who’ve left? Would it make us look like we were trading on past glories, while in fact we’d deteriorated rapidly? Perhaps some guidance to the panels that they’re to take no account of this in assessing submissions would help here, and a clear signal that a good publication by a researcher-past has the same value as one by a researcher-current.

Does the new system give me as the Fantasy REF Manager too much power over my players, early career or not? I’m not sure. It’s true that I have their publications in the bag, so they can’t threaten me with taking them away. But I’m still going to want to keep them on my team if I think they’re going to continue to produce work of that standard that I want in the future. If I don’t think that – for whatever reason – then I’ve no reason to want to keep them. They can still hold me to ransom, but what they’re holding over me is their future potential, not recent past glories. And to me, that seems more like an appropriate correction in the balance of power. Though… might any discrimination be more likely to be against career elderly researchers who I think are winding down? Not sure.

Of course, there are compromise positions between full portability and no portability. Perhaps a one or two year window of portability, and perhaps longer for early career researchers… though that might give some too great an advantage. That would be an improvement on the status quo, and might assuage some worries that a lot of ECRs (judging by my timeline on Twitter, anyway) have at the moment.

Even with a window, there are potential problems around game-playing. Do researchers looking for a move hold off from submitting their papers? Might they filibuster corrections and final changes? Might editors be pressurised to delay formal acceptances? Are we clear what constitutes a formal date of acceptance (open access experience suggests not)? And probably most seriously, might papers “under review” rather than papers published be the new currency?

Probably the last point is what worries me most, but I think these are relatively small issues, and I’d be worried if hiring decisions were based on such small margins. But perhaps they are.

This article is entirely knee-jerk. I’m making it up as I go along, changing my mind, being influenced. But I think that ECRs have less to worry about than many fear, and I think my tentative view is that limiting portability – either entirely, or with a narrow window – is significantly better than the current situation of unlimited portability. But I may have missed something, and I’m open to convincing.

Please feel free to tell me what I’ve missed in the comments, or tweet me.

UPDATE: 29th July AM

I’ve been following the discussion on Twitter with some interest, and I’ve been reflecting on whether or not there’s a particular issue for early career researchers. As I said earlier, I’ve been going backwards and forwards on this. Martin Eve has written an excellent post in which he argues that some of the concern may be because

“the current hiring paradigm is so geared towards REF and research it can be hard to imagine what a new hiring environment looks like”

He also makes an important point about ownership of IP, which a lot of academics don’t seem to understand.

Athene Donald has written a really interesting post in which she describes “egregious examples” of game-playing which she’s seen first hand, and anyone who doesn’t think this is a serious issue needs to read this. She also draws much-needed attention to a major benefit of the proposals – that returning everyone, with n×2 publications, does away with all of the personal circumstances exceptions work required last time to earn the right to submit fewer than four outputs – this is difficult and time consuming for institutions, and potentially distressing for individuals. She also echoes Martin Eve’s point about some career young researchers not being able to think into a new paradigm yet, by recalling her long experience of REFs and RAEs.

However, I do – on the whole – think that some early career researchers are overreacting, perhaps not understanding that the game changes for everyone, and that appointments are now on potential, not on recent publishing history. And that this might benefit them, as I argued above.

Having said that, I am now persuaded that there are good arguments for an exception to the portability rules for ECRs. My sense is that there’s a fair amount of mining and developing the PhD for publications that could be done, but after that, there has to come a stage of moving on to the next thing, adding new strings to the bow, and that that might in principle be a less productive time in terms of publishing. And although I think at least some ECR worries are misplaced, if what I’m reading on Twitter is representative, I think there’s a case for taking them seriously and doing something to assuage those fears with an exemption or limited exemption. There’s a lot that’s positive about the Stern Review, but I think the confidence of the ECR community is important in itself.

Some really interesting issues have been raised that relate to detail and to exceptions and which would have to be ironed out later, but are worth consideration. Can an institution claim the publications of a teaching fellow? (I’d argue no). What happens to publications accepted when the author has two fractional (and presumably temporary) contracts? (I’d argue they can’t be claimed, certainly not if the contract is sessional). What if the author is unemployed?

One argument I’ve read a few times is that there’s a strong incentive for institutions to hire from within, rather than from without. But I’m not clear why that is – in my example above, I already have any publications from internal candidates, whether or not I make an internal appointment. I can’t have the publications of anyone from outside – so it’s a case of the internal candidate’s future publications (plus broader contribution, but let’s take that as read) versus the external candidate’s. I think that sounds like a reasonably level playing field, but perhaps I’m missing something. I suppose I wouldn’t have to return the publications of someone who’s left if I make an internal appointment, but if there’s no penalty (formal or informal) for this, why should I – as Fantasy REF Manager – care? If there were portability, I’d be choosing between the internal candidate’s past and potential, and the external candidate’s past and potential. That might change my calculations, depending on those publications – though actually, if the internal candidate’s publications were co-authored with existing faculty, I might not mind if they go. So…. yes, there is a whole swamp of unintended consequences here, but I’m not sure whether allowing ECR portability helps any.

The rise of the machines – automation and the future of research development

"I've seen research ideas you people wouldn't believe. Impact plans on fire off the shoulder of Orion. I watched JeS-beams glitter in the dark near the Tannhäuser ResearchGate. All those proposals will be lost in time, like tears...in...rain. Time to revise and resubmit."
“I’ve seen first drafts you people wouldn’t believe. Impact plans on fire off the shoulder of Orion. I watched JeS beams glitter in the dark near the Tannhäuser ResearchGate. All those research proposals will be lost in time, like tears…in…rain. Time to resubmit.”

In the wake of this week’s Association of Research Managers and Administrators’ conference in Birmingham, Research Professional has published an interesting article by Richard Bond, head of research administration at the University of the West of England. The article – From ARMA to avatars: expansion today, automation tomorrow? – speculates about the future of the research management/development profession given the likely advances of automation and artificial intelligence. Each successive ARMA conference is hailed as the largest ever, and ARMA’s membership has grown rapidly over recent years, probably reflecting increasing numbers of research support roles, increased professionalism, and increased awareness of ARMA and the attractiveness of what it offers in terms of professional development. But might better, smarter computer systems reduce, or perhaps even eliminate, the need for some research development roles?

In many ways, the future is already here. In my darker moments I’ve wondered whether some colleagues might be replicants or cylons. But many universities already have (or are in the process of getting) some form of cradle-to-grave research management information system which has the potential to automate many research support tasks, both pre- and post-award. Although I wasn’t in the session on the future of JeS, the online grant submission system used by RCUK (now UKRI), tweets from the session indicate that JeS 2.0 is being seen as a “grant getting service” and a platform to do more than just process applications, which could well include distribution of funding opportunities. Who knows what else it might be able to do? Presumably it can link much better to costing tools and systems, allowing direct transfer of costings and other information to and from university systems.

A really good costing tool might be able to do a lot of things automatically. Staff costs are already relatively straightforward to calculate with the right tools – the complication largely comes from whether or not funders expect figures to include inflation and cost of living/salary increment pay rises. But greater uniformity across funders could help, and setting up templates for individual funders could be done, and in many places is already done. Non-pay costs are harder, but one could imagine a system that linked to travel and booking websites and calculated the average cost of travel from A to B. Standard costs could be available for computers and for consumables, again linking to suppliers’ catalogues. This could in principle allow the applicant (rather than a research administrator) to do the budget for the grant application, but I wonder how much appetite there is among applicants for doing that. I also think there’s a role for the research costing administrator in terms of helping applicants flush out all of the likely costs – not all of which will occur to the PI – as well as dealing with the exceptions that the system doesn’t cover. But even if specialist human involvement is still required, giving people better tools to work smarter and more efficiently – especially if the system is able to populate the costings section of the application form directly without duplication – would reduce the number of humans required.
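As a concrete illustration of the kind of calculation such a tool would automate, here’s a minimal sketch in Python. The increment, inflation and on-cost figures are purely illustrative assumptions of mine, not any funder’s or university’s actual rules.

```python
# A toy staff-costing function. All rates below are illustrative assumptions,
# not real funder or HR figures.

def staff_cost(base_salary: float, fte: float, years: int,
               annual_increment: float = 0.03, inflation: float = 0.02,
               on_costs: float = 0.25, include_inflation: bool = True) -> float:
    """Estimate the total cost of a post over the life of a project.

    base_salary       starting salary for the post
    fte               fraction of full time charged to the project
    annual_increment  assumed salary-scale increment per year
    inflation         assumed cost-of-living uplift per year
    on_costs          employer pension, NI etc. as a fraction of salary
    include_inflation whether this funder wants inflation built into the figures
    """
    total = 0.0
    salary = base_salary
    for year in range(years):
        uplift = (1 + inflation) ** year if include_inflation else 1.0
        total += salary * uplift * fte * (1 + on_costs)
        salary *= 1 + annual_increment  # increment applies from the following year
    return round(total, 2)

# e.g. a 0.5 FTE research associate on a notional £32,000 for three years
print(staff_cost(32_000, fte=0.5, years=3))
```

The point is not the numbers but the shape: once each funder’s rules are captured as templates, the fiddly part – which uplifts this particular funder allows – becomes a configuration question rather than a judgement call made afresh for every application.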

While I don’t think we’re there yet, it’s not hard to imagine systems which could put the right funding opportunities in front of the right academics at the right time and in the right format. Research Professional has offered a customisable research funding alerts service for many years now, and there’s potential for research management systems to integrate this data, combine it with what’s known about the interests of individual researchers and research teams, and put that information in front of them automatically.

I say we’re not there yet, because I don’t think the information is arriving in the right format – in a quick and simple summary that allows researchers to make very quick decisions about whether to read on, or move on to the next of the twelvety-hundred-and-six unread emails. I also wonder whether the means of targeting the right academics are sufficiently nuanced. A ‘keywords’ approach might help if we could combine the research interest keyword sets used by funders, research intelligence systems, and academics. But we’d need a really sophisticated set of keywords, covering not just discipline and sub-discipline, but career stage, countries of interest, interdisciplinary grand challenges and problems, etc. Another problem is that I don’t think funders’ call summaries are – in general – particularly well written (though they are getting better), though we could perhaps imagine them being tailored for use in these kinds of systems in the future. A really good research intelligence system could also draw in data about previous bids to the scheme from the institution, data about success rates for previous calls, and access to previously successful applications (though their use is not without its drawbacks).
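For what it’s worth, the crude version of that keyword matching is trivial to sketch – which is rather the point, because the crude version is all a naive system gives you. Everything below (the keywords, the use of Jaccard similarity as the score) is an illustrative assumption of mine, not how any real research intelligence system actually works.

```python
# A deliberately crude keyword match between a funding call and a researcher profile.
# Keywords and scoring method are invented for illustration.

def keyword_match(call_keywords: set[str], researcher_keywords: set[str]) -> float:
    """Return a percentage 'match' based on keyword overlap (Jaccard similarity)."""
    if not call_keywords or not researcher_keywords:
        return 0.0
    overlap = call_keywords & researcher_keywords
    union = call_keywords | researcher_keywords
    return round(100 * len(overlap) / len(union), 1)

call = {"ageing", "dementia", "care homes", "early career", "uk"}
profile = {"dementia", "care homes", "qualitative methods", "ageing"}

print(keyword_match(call, profile))  # 50.0 – a number, but not an analysis
```

A score like that says nothing about eligibility, career stage, fit with current plans, or whether the call is worth the opportunity cost – which is exactly the gap a human with context fills.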

But even with all this in place, I still think there’s a role for human research development staff in getting opportunities out there. If all we’re doing is forwarding Research Professional emails, then we could and should be replaced. But if we’re adding value through our own analysis of the opportunity, and customising the email for the intended audience, we might be allowed to live. A research intelligence system inevitably just churns out emails that might be well targeted or poorly targeted. A human with detailed knowledge of the research interests, plans, and ambitions of individual researchers or groups can not only target much better, but can make a much more detailed, personalised, and context sensitive analysis of the advantages and disadvantages of a possible application. I can get excited about a call and tell someone it’s ideal for them, and because of my existing relationship with them, that’ll carry weight … a computer can tell them that it’s got a 94.8% match.

It’s rather harder to see automation replacing training researchers in grant writing skills or undertaking lay review of draft grant applications, not least because often the trick with lay review is spotting what’s not there rather than what is. But I’d be intrigued to learn what linguistic analysis tools might be able to do in terms of assessing the required reading level, perhaps making stylistic observations or recommendations, and perhaps flagging up things like the regularity with which certain terms appear in the application relative to the call etc. All this would need interpreting, of course, and even then may not be any use. But it would be interesting to see how things develop.
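Here’s a minimal sketch of the simplest version of that kind of check – counting how often the call’s most frequent terms actually appear in a draft application. The sample texts, the crude tokeniser and the ‘top ten terms’ cut-off are all invented for illustration; a real tool would need to be far more sophisticated, and the output would still need human interpretation.

```python
# Toy check: how often do the call's most frequent terms appear in the draft?
# Texts and thresholds are invented for illustration.

import re
from collections import Counter

def term_coverage(call_text: str, draft_text: str, top_n: int = 10) -> dict[str, int]:
    """Count occurrences in the draft of the call's most frequent terms."""
    tokenise = lambda text: re.findall(r"[a-z]{4,}", text.lower())  # crude: lowercase words of 4+ letters
    call_terms = [term for term, _ in Counter(tokenise(call_text)).most_common(top_n)]
    draft_counts = Counter(tokenise(draft_text))
    return {term: draft_counts.get(term, 0) for term in call_terms}

call = "Proposals should demonstrate interdisciplinary collaboration and clear pathways to impact."
draft = "Our interdisciplinary team will deliver impact through collaboration with stakeholders."
print(term_coverage(call, draft))
```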

Impact is perhaps another area where it’s hard to see humans being replaced. Probably sophisticated models of impact development could and should be turned into tools to help academics identify the key stakeholders, come up with appropriate strategies, and identify potential intermediaries within their own institution. But I think human insight and creativity could still add substantial value here.

Post-award isn’t really my area these days, but I’d imagine that project setup could become much easier and involve fewer pieces of paper and documents flying around. Even better and more intuitive financial tools would help PIs manage their project, but there are still accounting rules and procedures to be interpreted, and again, I think many PIs would prefer someone else to deal with the details.

Overall it’s hard to disagree with Bond’s view that a reduction in overall headcount across research administration and management (along with many other areas of work) is likely, and it’s not hard to imagine that some less research intensive institutions might be happy that the service that automated systems could deliver is good enough for them. At more research intensive institutions, better tools and systems will increase efficiency and will enable human staff to work more effectively. I’d imagine that some of this extra capacity will be filled by people doing more, and some of it may lead to a reduction in headcount.

But overall, I’d say – and you can remind me of this when I’m out of a job and emailing you all begging for scraps of consultancy work, or mindlessly entering call details into a database – that I’m probably excited by the possibilities of automation and better and more powerful tools than I am worried about being replaced by them.

I for one welcome our new research development AI overlords.

How useful is reading examples of successful grant applications?

This article is prompted by a couple of twitter conversations around a Times Higher Education article which quotes Ross Mounce, founding editor of Research Ideas and Outcomes, who argues for open publication at every stage of the research process, including (successful and unsuccessful) grant applications. The article acknowledges that this is likely to be controversial, but it got a few of us thinking about the value of reading other people’s grant applications to improve one’s own.

I’m asked about this a lot by prospective grant applicants – “do you have any examples of successful applications that you can share?” – and while generally I will supply them if I have access to them, I also add substantial caveats and health warnings about their use.

The first and perhaps most obvious worry is that most schemes change and evolve over time, and what works for one call might not work in another. Even if the application form hasn’t changed substantially, funder priorities – both hard priorities and softer steers – may have changed. And even if neither have changed, competitive pressures and improved grant writing skills may well be raising the bar, and an application that got funded – say – three or four years ago might not get funding today. Not necessarily because the project is weaker, but because the exposition and argument would now need to be stronger. This is particularly the case for impact – it’s hard to imagine that many of the impact sections on RCUK applications written in the early days of impact would pass muster now.

The second, and more serious worry, is that potential applicants take the successful grant application far too seriously and far too literally. I’ve seen smart, sensible, sophisticated people become obsessed with a successful grant application and try to copy everything about it, whether relevant or not, as if there was some mystical secret encoded into the text, and any subtle deviation would prevent the magic from working. Things like… the exact balance of the application, the tables/diagrams used or not used (“but the successful application didn’t have diagrams!”), the referencing system, the font choice, the level of technical detail, the choice and exposition of methods, whether there are critical friends and/or a steering group, the number of Profs on the bid, the amount of RA time, the balance between academic and stakeholder impact.

It’s a bit like a locksmith borrowing someone else’s front door key, making as exact a replica as she can, and then expecting it to open her front door too. Or a bit like taking a recipe that you’ve successfully followed and using it to make a completely different dish by changing the ingredients while keeping the cooking processes the same. Is it a bit like cargo cult thinking? Attempting to replicate an observed success or desired outcome by copying everything around it as closely as possible, without sufficient reflection on cause and effect? It’s certainly generalising inappropriately from a very small sample size (often n=1).

But I think – subject to caveats and health warnings – it can be useful to look at previously successful applications from the same scheme. I think it can sometimes even be useful to look at unsuccessful applications. I’ve changed my thinking on this quite a bit in the last few years, when I used to steer people away from them much more strongly. I think they can be useful in the following ways:

  1. Getting a sense of what’s required. It’s one thing seeing a blank application form and list of required annexes and additional documents, it’s another seeing the full beast. This will help potential applicants get a sense of the time and commitment that’s required, and make sensible, informed decisions about their workload and priorities and whether to apply or not.
  2. It also highlights all of the required sections, so no requirement of the application should come as a shock. Increasingly with the impact agenda it’s a case of getting your ducks in a row before you even think about applying, and it’s good to find that out early.
  3. It makes success feel real, and possible, especially if the grant winner is someone the applicant knows, or who works at the same institution. Low success rates can be demoralising, but it helps to know not only that someone, somewhere is successful, but that someone here and close by has been successful.
  4. It does set a benchmark in terms of the state of readiness, detail, thoroughness, and ducks-in-a-row-ness that the attentive potential applicant should aspire to at least equal, if not exceed. Early draft and early stage research applications often have larger or smaller pockets of vaguery and are often held together with a generous helping of fudge. Successful applications should show what’s needed in terms of clarity and detail, especially around methods.
  5. Writing skills. Writing grant applications is a very different skill to writing academic papers, which may go some way towards explaining why the Star Wars error in grant writing is so common. So it’s going to be useful to see examples of that skill used successfully… but having said that, I have a few examples in my library of successes which were clearly great ideas, but which were pretty mediocre as examples of how to craft a grant application.
  6. Concrete ideas and inspiration. Perhaps about how to use social media, or ways to engage stakeholders, or about data management, or other kinds of issues, questions and challenges if (and only if) they’re also relevant for the new proposal.

So on balance, I think reading (funder and scheme) relevant, recent, and highly rated (even if not successful) funding applications can help prospective applicants…. provided that they remember that what they’re reading and drawing inspiration from is a different application from a different team to do different things for different reasons at a different time.

And not a mystical, magical, alchemical formula for funding success.

Getting research funding: the significance of significance

"So tell me, Highlander, what is peer review?"
“I’m Professor Connor Macleod of the Clan Macleod, and this is my research proposal!”

In an excellent recent blog post, Lachlan Smith wrote about the “who cares?” question that potential grant applicants ought to consider, and that research development staff ought to pose to applicants on a regular basis.

Why is this research important, and why should it be funded? And crucially, why should we fund this, rather than that? In a comment on a previous post on this blog Jo VanEvery quoted some wise words from a Canadian research funding panel member: “it’s not a test, it’s a contest”. In other words, research funding is not an unlimited good like a driving test or a PhD viva where there’s no limit to how many people can (in principle) succeed. Rather, it’s more like a job interview, qualification for the Olympic Games, or the film Highlander – not everyone can succeed. And sometimes, there can be only one.

I’ve recently been fortunate enough to serve on a funding panel myself, as a patient/public involvement representative for a health services research scheme. Assessing significance in the form of potential benefit for patients and carers is a vitally important part of the scheme, and while I’m limited in what I’m allowed to say about my experience, I don’t think I’m speaking out of turn when I say that significance – and demonstrating that significance – is key.

I think there’s a real danger when writing – and indeed supporting the writing – of research grant applications that the focus gets very narrow, and the process becomes almost inward looking. It becomes about improving the application internally, writing deeply for subject experts, rather than writing broadly for a panel of people with a range of expertise and experiences. It almost goes without saying that the proposed project must convince the kinds of subject expert who will typically be asked to review it, but even then there’s no guarantee that reviewers will know as much as the applicant. In fact, it would be odd indeed if there were to be an application where the reviewers and panel members knew more about the topic than the applicant. I’d probably go as far as to say that if you think the referees and the reviewers know more than you, you probably shouldn’t be applying – though I’m open to persuasion about some early career schemes and some very specific calls on very narrow topics.

So I think it’s important to write broadly, to give background and context, to seek to convince others of the importance and significance of the research question. To educate and inform and persuade – almost like a briefing. I’m always badgering colleagues for what I call “killer stats” – how big is the problem, how many people does it affect, by how much is it getting worse, how much is it costing the economy, how much is it costing individuals, what difference might a solution to this problem make? If there’s a gap in the literature or in human knowledge, make a case for the importance or potential importance in filling that gap.

For blue skies research it’s obviously harder, but even here there is scope for discussing the potential academic significance of the possible findings – academic impact – and what new avenues of research may be opened out, or closed off by a decisive negative finding which would allow effort to be refocused elsewhere. If all research is standing on the shoulders of giants, what could be seen by future researchers standing on the shoulders of your research?

It’s hugely frustrating for reviewers when applicants don’t do this – when they don’t give decision makers the background and information they need to be able to draw informed conclusions about the proposed project. A motivated reviewer with a lighter workload and a role in introducing your proposal might have time to do her own research, but you shouldn’t expect this, and she shouldn’t have to. That’s your job.

It’s worth noting, by the way, that the existence of a gap in the literature is not itself an argument for it being filled, or at least not through large amounts of scarce research funding. There must be a near infinite number of gaps, such as the one that used to exist about the effect of peanut butter on the rotation of the earth – but we need more than the bare fact of the existence of a gap – or the fact that other researchers can be quoted as saying there’s a gap – to persuade.

Oh, and if you do want to claim there’s a gap, please check Google Scholar or similar first – reviewers and panel members (especially introducers) may very well do that. And from my limited experience of sitting on a funding panel, there’s nothing like one introducer or panel member reeling off a list of studies on a topic where there’s supposedly a gap (and which aren’t referenced in the proposal) to finish off the chances of an application. I’ve not seen enthusiasm or support for a project sucked out of the room so completely and so quickly by any other means.

And sometimes, if there aren’t killer stats or facts and figures, or if a case for significance can’t be made, it may be best either to move on to another idea, or to find a different and cheaper way of addressing the challenge. While it may be a good research idea, a key question before deciding to apply is whether or not the application is competitive on significance, given the likely competition, the scale of the award, the ambition sought by the funder, and the number of projects to be funded. Given the limits to the research funding available, and its increasing concentration into larger grants, there really isn’t much funding for dull-but-worthy work which, taken together, would add the aggregation of marginal gains to the sum of human knowledge. I think this is a real problem for research, but we are where we are.

Significance may well be the final decider in research funding schemes that are open to a range of research questions. There are many hurdles which must be cleared before this final decider, and while they’re not insignificant, they mainly come down to technical competence and feasibility. Is the methodology not only appropriate, but clearly explained and robustly justified? Does the team have the right mix of expertise? Are the project timescale and deliverables realistic? Are the research questions clearly outlined and consistent throughout? All of these things – and more – are important, but what they do is get you safely through into the final reckoning for funding.

Once all of the flawed or technically unfeasible or muddled or unpersuasive or unclear or non-novel proposals have been knocked out, perhaps at earlier stages, perhaps at the final funding panel stage, what’s left is a battle of significance. To stand the best chance of success, your application needs to convince and even inspire non-expert reviewers to support your project ahead of the competition.

But while this may be the last question, or the final decider between quality projects, it’s one that I’d argue potential grant applicants should consider first of all.

The significance of significance is that if you can’t persuasively demonstrate the significance of your proposed project, your grant application may turn out to be a significant waste of your time.

I’m running a marathon….

“The first rule of Running Club is that you DO NOT stop talking about running.”

It starts with the couch-to-5k running programme. This is a relatively gentle start to talking about running, with typical sessions involving only talking about running for a minute or so before resting for another minute while someone else talks about something else before you continue to talk about running. A good way to start is to talk about all your new gear – your suspicion that “gait analysis” may have a slightly dodgy scientific basis and that that nice bloke at the shop might not be a fully qualified podiatrist, but having said that, your new shoes fit brilliantly and running now feels so much easier on your joints.

Once you’re a couch-to-5k graduate, you get to talk about Parkrun – free, weekly, inclusive 5k runs which take place all over the UK (and Ireland, and a few other places) on Saturday mornings. You can talk about how surprised you were about how supportive everyone was, perhaps about how you felt like a real runner for the first time, and about how they’re open to everyone from serious club runners to couch-to-5k graduates. After you’ve been a few times, you can start talking about “PBs” and how much time you’ve beaten your previous best by, and what your target is now. You can drop “building towards a sub-25” into your conversations.

So once you can run 5k without stopping, you can probably talk about running non-stop for a decent length of time. Attempting a 10k sounds daunting, as you’re doubling the duration of both running and talking about running. But the first 5k/30 minutes is the hardest, and after you’ve done that it’s easier than you’d think to build towards 10k by doing more of what you’ve been doing. By this time you might be a member of a local running club (if you’re not already) or a lone wolf getting advice off the interweb. And you’ve got a whole lot more terms to sprinkle your running talk with… tempo runs, hill training, the LSR, interval training, fartleks. You might even be talking about being able to run “negative splits” on race day, though you should probably explain that’s a good thing and not a terrible injury. And if you did join a running club, you’ve got all your new mates to talk about, as well as regional cross country or summer league races.

So things are going great – double it again, add interest, and you’re at the half marathon stage. At this stage, you must seriously advise anyone who’ll listen (and those who won’t) that a half marathon is not a half of anything, and although that’s logically and mathematically false, if you say it in a serious enough tone, no one will pick you up on it. At half marathon stage, you can litter your running talk with pacing strategies and “race day” strategies, carb loading, and about not wanting to be overtaken by a bloke dressed as a gorilla.

If you’re a bloke, you can regale your soon-to-be-former friends with tales of nipple chafing and associated micropore/vaseline dilemmas, and of course there’s runner’s trots (if you don’t know, don’t ask).

And this is the stage I’m at at the moment. I’ve run five half marathons and I’m going to run my first full marathon in Nottingham at the end of September. I can comfortably talk about running for at least three hours, but on race day I’m going to have to stretch it out to between 3:45 and 4:00 to go the full distance. My training is going really well, and I couldn’t be happier with the progress I’m making in turning into a monumental bore. I’m having to spend a full three hours every weekend out on my “long slow run”, talking about “nutrition”, and I’ve even caught myself referring to the question of what snacks to take with me as a “refuelling strategy”. Believe me, all this is turning me into a five-star prick, and my only redeeming feature is that I don’t wear lycra for training or racing.

And that’s before we get started on requests for sponsorship. So far in my running career I’ve taken the view that it’s basically my leisure activity and I shouldn’t ask people to donate their money to a charity of my choice whose work is clearly in my own interest. But this is a marathon… it’s a monumental challenge even for a semi-regular half-marathoner and underwhelming club runner like me, and to be honest I’m scared. So scared that I have to spend ages talking about it to get reassurance.

So, for the first and almost certainly last time, I’m asking for sponsorship.

If the excellent work that Crohn’s and Colitis UK do won’t motivate you to sponsor me, and if you’ve not got sufficient value out of my blog in the last few years to warrant even a small donation, then please consider the effect of all this on my ever-more-distant-nearest and dearest. Won’t someone think of my colleagues, who dare not ask “how was your weekend” in my hearing any more?

And if all that doesn’t move you, consider this….. at least I’m not a cyclist. Cyclist bores are the worst.

ESRC success rates 2014/2015 – a quick and dirty commentary

"meep meep"
Success rates. Again.

The ESRC has issued its annual report and accounts for the financial year 2014/15, and they don’t make good reading. As predicted by Brian Lingley and Phil Ward back in January on the basis of the figures from the July open call, the success rate is well down – to 13% – from the 25% I commented on last year, 27% in 2012-13 and 14% in 2011-12.

Believe it or not there is a straw-grasping positive way of looking at these figures… of which more later.

This Research Professional article has a nice overview which I can’t add much to, so read it first. Three caveats about these figures, though…

  • They’re for the standard open call research grant scheme, not for all calls/schemes
  • They relate to the financial year, not the academic year
  • It’s very difficult to compare year-on-year due to changes to the scheme rules, including minimum and maximum thresholds which have changed substantially.

In previous years I’ve focused on how different academic disciplines have got on, but this year there’s probably very little to add. You can read the figures for yourself (p. 38), but the report only bothers to calculate success rates for the disciplines with the highest numbers of applications – presumably because beyond that the numbers are too small to be meaningful. I could claim that it’s been a bumper year for Education research, which for years bumped along at the bottom of the league table with Business and Management Studies in terms of success rates, but which this year received 3 awards from 22 applications, tracking the average success rate. Political Science and Socio-Legal Studies did well, as they always tend to do. But that’s generalising from small numbers.

As last year, there is also a table of success rates by institution. In an earlier section on demand management, the report states that the ESRC “are discussing ways of enhancing performance with those HEIs where application volume is high and quality is relatively weak”. But as with last year, it’s hard to see from the raw success rate figures which these institutions might be – though of course detailed institutional profiles showing the final scores for applications might tell a very different story. Last year I picked out Leeds (10/0), Edinburgh (8/1), and Southampton (14/2) as doing poorly, and Kings College (7/3), King Leicester III (9/4), Oxford (14/6) as doing well – though again, one more or less success changes the picture.

This year, Leeds (8/1) and Edinburgh (6/1) have stats that look much better. Southampton doesn’t look to have improved at all (12/0), and is one of the worst performers. Of those who did well last year, none did so well this year – Kings were down to 11/1, Leicester to 2/0, and Oxford to 11/2. Along with Southampton, this year’s poor performers were Durham (10/0), UCL (15/1) and Sheffield (11/0) – though all three had respectable enough scores last time. This year’s standout was Cambridge at 10/4. Perhaps someone with more time than me can combine success rates from the last two years – and I’m sure someone at the ESRC already has…
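
If anyone does fancy that combining exercise, here’s a minimal sketch – reading each pair quoted in this post as applications/awards (my assumption), and with all the usual caveats about small numbers and year-on-year comparability:

    # Figures as quoted above, read as (applications, awards); two years of the
    # open call only, so treat the combined rates with plenty of caution.
    last_year = {"Leeds": (10, 0), "Edinburgh": (8, 1), "Southampton": (14, 2),
                 "Kings College": (7, 3), "Leicester": (9, 4), "Oxford": (14, 6)}
    this_year = {"Leeds": (8, 1), "Edinburgh": (6, 1), "Southampton": (12, 0),
                 "Kings College": (11, 1), "Leicester": (2, 0), "Oxford": (11, 2),
                 "Durham": (10, 0), "UCL": (15, 1), "Sheffield": (11, 0),
                 "Cambridge": (10, 4)}

    for hei in sorted(set(last_year) | set(this_year)):
        apps = last_year.get(hei, (0, 0))[0] + this_year.get(hei, (0, 0))[0]
        awards = last_year.get(hei, (0, 0))[1] + this_year.get(hei, (0, 0))[1]
        print(f"{hei}: {awards}/{apps} awarded ({awards / apps:.0%})")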

So… on the basis of success rates alone, probably only Southampton jumps out as doing consistently poorly. But again, much depends on the quality profile of the applications being submitted – it’s entirely possible that they were very unlucky, and that small numbers mask much more slapdash grant submission behaviour from other institutions. And of course, these figures only relate to the lead institution as far as I know.

It’s worth noting that demand management has worked… after a fashion.

We remain committed to managing application volume, with the aim of focusing sector-wide efforts on the submission of a fewer number of higher quality proposals with a genuine chance of funding. General progress is positive. Application volume is down by 48 per cent on pre-demand management levels – close to our target of 50 per cent. Quality is improving with the proportion of applications now in the ‘fundable range’ up by 13 per cent on pre-demand management levels, to 42 per cent. (p. 21).

I remember the target of reducing the numbers of applications received by 50% as being regarded as very ambitious at the time, and even if some of it was achieved by changing scheme rules to increase the minimum value of a grant application and banning resubmissions, it’s still some achievement. Back in October 2011 I argued that the ESRC had started to talk optimistically about meeting that target after researcher sanctions (in some form) had started to look inevitable. And in November 2012 things looked nicely on track.

Reducing brute numbers of applications is all very well, but if only 42% of applications are within the “fundable range”, that’s a problem – it means that a lot of the applications being submitted still aren’t good enough. This is also where there’s cause for optimism: if less than half of the applications are fundable, your own chances should be more than double the average success rate – assuming that your application is of “fundable” quality. So there’s your good news. The problem is, no-one applies who doesn’t think their application is fundable.
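
To put back-of-the-envelope numbers on that good news – assuming the 13% success rate and the 42% “fundable range” figure describe the same pool of applications:

    # Back-of-the-envelope only: if awards can only go to applications in the
    # "fundable range", the success rate among those applications is roughly
    # the overall rate divided by the fundable share.
    overall_success_rate = 0.13   # this year's open call success rate, as above
    fundable_share = 0.42         # proportion of applications in the "fundable range"

    success_rate_if_fundable = overall_success_rate / fundable_share
    print(f"Success rate for fundable applications: {success_rate_if_fundable:.0%}")
    # -> roughly 31%, i.e. more than double the headline 13%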

Internal peer review/demand management processes are often framed in terms of improving the quality of what gets submitted, but perhaps not enough in terms of filtering out what shouldn’t be submitted at all. So we refine and we polish and we make 101 incremental improvements… but ultimately you can’t polish a sow’s ear. Or something.

Proper internal filtering is really, really hard to do – sometimes it’s just easier to let through applications from people who won’t be told, and see whether what happens is exactly what you think will happen. (It always is.) There’s also a fine line (though one I think can be held and defended) between preventing applications perceived as uncompetitive from being submitted and impinging on academic freedom. I don’t think telling someone they can’t submit a crap application is infringing their academic freedom, but any such decisions need to be taken with a great deal of care. There’s always the possibility of suspicion of ulterior motives – be it personal, be it subject or methods-based prejudice, or senior people just overstepping the mark and inappropriately imposing their convictions (ideological, methodological etc.) on others. Like the external examiner who insists on “more of me” on the reading list…

The elephant in the room, of course, is the flat cash settlement and the fact that that’s now really biting, and that there’s nowhere near enough funding to go around for all of the quality social science research that’s badly needed. But we can’t do much about that – and we can do something about the quality of the applications we’re submitting and allowing to be submitted.

I wrote something for Research Professional a few years back on how not to do demand management/filtering processes, and I think it still stands up reasonably well and is even quite funny in places (though I say so myself). So I’m going to link to it, as I seem to be linking to a disproportionate amount of my back catalogue in this post.

A combination of a new minimum of £350k for the ESRC standard research grants scheme and the latest drop in success rates makes me think it’s worth writing a companion piece to this blog post about what potential ESRC applicants need to consider before applying, and what I think is expected of a “fundable” application.

Hopefully something for the autumn…. a few other things to write about first.

ESRC – sweeping changes to the standard grants scheme

The ESRC have just announced a huge change to their standard grants scheme, and I think it’s fair to say that it’s going to prove somewhat controversial.

At the moment, it’s possible to apply to the ESRC Standard Grant Scheme at any time for grants of between £200k and £2 million. From the end of June this year, the minimum threshold will rise from £200k to £350k, and the maximum threshold will drop from £2m to £1m.

Probably those numbers don’t mean very much to you if you’re not familiar with research grant costing, but as a rough rule of thumb, a full-time researcher for a year (including employment costs and overheads) comes to somewhere around £70k-80k. The rule of thumb I used to use was that if your project needed two years of researcher time, it was big enough to clear the old £200k minimum. So… for £350k you’d probably need three researcher years, a decent amount of PI and Co-I time, and a fair chunk of non-pay costs. That’s a big project. I don’t have my files in front of me as I’m writing this, so maybe I’ll add a better illustration later on.
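
In the meantime, here’s a purely illustrative back-of-the-envelope version – the per-researcher-year figure is the rule of thumb above, and everything else is a hypothetical placeholder rather than anything resembling real costings:

    # Purely illustrative: only the per-researcher-year figure comes from the rule
    # of thumb above; the other numbers are hypothetical placeholders.
    researcher_year = 75_000      # middle of the £70k-80k range (salary + overheads)
    researcher_years = 3

    pi_coi_time = 75_000          # hypothetical: PI and Co-I time across the project
    non_pay_costs = 55_000        # hypothetical: travel, fieldwork, transcription, events

    total = researcher_year * researcher_years + pi_coi_time + non_pay_costs
    print(f"Indicative project cost: £{total:,}")   # -> £355,000, just over the new £350k floor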

This isn’t the first time the lower limit has been raised. Until February 2011 there was a “Small Grants Scheme” for projects of up to £200k; when that was shut, £200k became the new minimum. The argument at the time was that larger grants delivered more, and had fewer overheads in terms of the costs of reviewing, processing and administering. And although the idea was that they’d help early career researchers, the figures didn’t really show that.

The reasons given for this change are a little puzzling. Firstly, this:

The changes are a response to the pattern of demand that is being placed on the standard grants scheme by the social science community. The average value of a standard grant application has steadily increased and is now close to £500,000, so we have adjusted the centre of gravity of the scheme to reflect applicant behaviour.

Now that’s an interesting tidbit of information – I wouldn’t have guessed that the “average value” would be that high, but you don’t have to be an expert in statistics (and believe me, in spite of giving 110% in maths class at school I’m not one) to wonder what “average” means, and further, why it even matters. This might be an attempt at justification, but I don’t see why this provides a rationale for change.

Then we have this….

The changes are also a response to feedback from our Grant Assessment Panels who have found it increasingly difficult to assess and compare the value of applications ranging from £200,000 to £2 million, where there is variable level of detail on project design, costs and deliverables. This issue has become more acute as the number of grant applications over £1 million has steadily increased over the last two years. Narrowing the funding range of the scheme will help to maintain the robustness of the assessment process, ensuring all applications get a fair hearing.

I have every sympathy for the Grant Assessment Panel members here – how do you choose between funding one £2m project and funding 10 x £200k projects, or any combination you can think of? It’s not so much comparing apples to oranges as comparing grapes to watermelons. And they’re right to point out the “variable” level of detail provided – but that’s only because their own rules give a maximum of 6 A4 pages for the Case for Support for projects under £1m and 12 for those over. If that sounds superficially reasonable, notice that it’s potentially double the space to argue for ten times the money. I’ve supported applications of £1m+ and 12 sides of A4 is nowhere near enough, compared to the relative luxury of 6 sides for £200k. This is a problem.

In my view it makes sense to “introduce an annual open competition for grants between £1 million and £2.5 million”, which is what the ESRC propose to do. So I think there’s a good argument for lowering the upper threshold from £2m to £1m and setting it up as a separate competition. I know the ESRC want to reduce the number of calls/schemes, but this makes sense. As things stand I’ve regularly steered people away from the Centres/Large Grants competition towards Standard Grants instead, where I think success rates will be higher and they’ll get a fairer hearing. So I’d be all in favour of having some kind of single Centres/Large/Huge/Grants of Unusual Size competition.

But nothing here seems to me to be an argument for raising the lower limit.

But finally, I think we come to what I suspect is the real reason, and judging by Twitter comments so far, I’m not alone in thinking this.

We anticipate that these changes will reduce the volume of applications we receive through the Standard Grants scheme. That will increase overall success rates for those who do apply as well as reducing the peer review requirements we need to place on the social science community.

There’s a real problem with ESRC success rates, which dropped to 10% in the July open call, with over half of the “excellent” proposals unfunded. This is down from success rates of around 25%, which had themselves much improved over the last few years. I don’t know whether this is a blip – perhaps a few very expensive projects were funded and a lot of cheaper ones missed out – but it’s not good news. So it’s hard not to see this change as driven entirely by a desire to get success rates up, and perhaps as an indication that this wasn’t a blip.

In a recent interview with Adam Smith of Research Professional, Chief Executive Jane Elliott appeared to rule out the option of individual sanctions, which had been threatened if institutional restraint failed to bring down the number of poor quality applications. It seems the problem now is not so much poor quality applications as lots of high quality applications, not enough money, plummeting success rates, and something needing to be done.

All this raises some difficult questions.

  • Where are social science researchers now supposed to go for funding for projects whose “natural size” is between £10k (British Academy Small Grants) and £350k, the proposed new minimum threshold? There’s only really the Leverhulme Trust, whose schemes will suit some project types but not others, and they’re not exclusively a social science funder.
  • Where will the next generation of PIs to be entrusted with £350k of taxpayer’s money have an opportunity to cut their teeth, both in terms of proving themselves academically and managerially?
  • What about early career researchers? At least here we can expect a further announcement – there has been talk of merging the “future leaders” scheme into Standard Grants, so perhaps there will be a lower minimum for them. But we’ll see.
  • Given that the minimum threshold has been almost doubled, what consultation has been carried out? I’m just a humble Business School Research Manager (I mean I’m humble, my Business School is outstanding, obviously) so perhaps it’s not surprising that this is the first I’ve heard of it. But was there any meaningful consultation over this? Is there any evidence underpinning claims for the efficiency of fewer, longer and larger grants?
  • How do institutions respond? I guess one way will be to work harder to create bigger gestalt projects with multiple themes and streams and work packages. But surely expectations of grant getting for promotion and other purposes need to be dialled right back, if they haven’t been already. Do we encourage or resist a rush to get applications in before the change, at a time when success rates will inevitably be dire?

Of course, the underlying problem is that there’s not enough money in the ESRC’s budget to support excellent social science after years and years of “flat cash” settlements. And it’s hard to see what can be done about that in the current political climate.

Using Social Media to Support Research Management – ARMA training and development event

Last week I gave a brief presentation at a training and development event organised by ARMA (Association of Research Managers and Administrators) entitled ‘Using Social Media to Support Research Management’. Also presenting were Professor Andy Miah of the University of Salford, Sierra Williams of the LSE Impact of Social Sciences blog, Terry Bucknell of Altmetric, and Phil Ward of Fundermentals and the University of Kent. A .pdf of my wibblings as inflicted can be found here.

I guess there are three things from the presentation and from the day as a whole that I’d pick out for particular comment.

Firstly, if you’re involved in research management/support/development/impact, then you should be familiar with social media, and by familiar I don’t mean just knowing the difference between Twitter and Friends Reunited – I mean actually using it. That’s not to say that everyone must or should dash off and start a blog – for one thing, I’m not sure I could handle the competition. But I do think you should have a professional presence on Twitter. And I think the same applies to any academics whose research interests involve social media in any way – I’ve spoken to researchers wanting to use Twitter data who are not themselves on Twitter. Call it a form of ethnography if you like (or, probably better, action research): I think you only really understand social media by getting involved – you should “inhabit the ecosystem”, as Andy Miah put it in a quite brilliant presentation that you should definitely make time to watch.

I’ve listed some of the reasons for getting involved, and some of the advantages and challenges, in my presentation. But briefly, it’s only by using it and experiencing for yourself the challenge of finding people to follow, getting followers, getting attention for the messages you want to transmit, risking putting yourself and your views out there that you come to understand it. I used to just throw words like “blog” and “twitter” and “social media engagement” around like zeitgeisty confetti when talking to academic colleagues about their various project impact plans, without understanding any of it properly. Now I can talk about plans to get twitter followers, strategies to gain readers for the project blog, the way the project’s social media presence will be involved in networks and ecosystems relevant to the topic.

One misunderstanding that a lot of people have is that you have to tweet a lot of original content – in fact, it’s better not to. Andy mentioned a “70/30” rule – 70% other people’s stuff, 30% yours, as a rough rule of thumb. Even if your social media presence is just as a kind of curator – finding and retweeting interesting links and making occasional comments – you’re still contributing and you’re still part of the ecosystem, and if your interests overlap with mine, I’ll want to follow you because you’ll find things I miss. David Gauntlett wrote a really interesting article for the LSE impact blog on the value of “publish, then filter” systems for finding good content, which is well worth a read. Filtering is important work.

The second issue I’d like to draw out is an issue around personal and professional identity on Twitter. When Phil Ward, Julie Northam, David Young and I gave a presentation on social media at the ARMA conference in 2012, many delegates were already using Twitter in a personal capacity, but were nervous about mixing the personal and professional. I used to think this was much more of a problem/challenge than I do now. In last week’s presentation, I argued that there were essentially three kinds of Twitter account – the institutional, the personal, and what I called “Adam at work”. Institutional wears a shirt and tie and is impersonal and professional. Personal is sat in its pants on the sofa tweeting about football or television programmes or politics. Adam-at-work is more ‘smart casual’ and tweets about professional stuff, but without being so strait-laced as the institutional account.

Actually Adam-at-Work (and, for that matter You-at-Work) are not difficult identities to work out and to stick to. We all manage it every day.  We’re professional and focused and on-topic, but we also build relations with our office mates and co-workers, and some of that relationship building is through sharing weekend plans, holidays, interests etc. I want to try to find a way of explaining this without resorting to the words “water cooler” or (worse) “banter”, but I’m sure you know what I mean. Just as we need to show our human sides to bond with colleagues in everyday life, we need to do the same on Twitter. Essentially, if you wouldn’t lean over and tell it to the person at the desk next to you, don’t tweet about it. I think we’re all well capable of doing this, and we should trust ourselves to do it. By all means keep a separate personal twitter account (because you don’t want your REF tweets to send your friends to sleep) and use that to shout at the television if you’d like to.

I think it’s easy to exaggerate the dangers of social media, not least because of regular stories about people doing or saying something ill-advised. But it’s worth remembering that a lot of those people are famous or noteworthy in some way, and so attract attention and provocation in a way that we just don’t. While a footballer might get tweeted all kinds of nonsense after a poor performance, I’m unlikely to get twitter-trolled by someone who disagrees with something I’ve written, or booed while catching a train. Though I do think a football crowd style crescendo of booing might be justified in the workplace for people who send mass emails without the intended attachment/with the incorrect date/both.

Having said all that… this is just my experience, and as a white male it may well be that I don’t attract that kind of negative attention on social media. I trust/hope that female colleagues have had similar positive experiences and I’ve no reason to think they haven’t, but I don’t want to pass off my experience as universal. (*polishes feminist badge*).

The third thing is to repeat an invitation which I’ve made before – if anyone would like to write a guest post for my blog on any topic relevant to its general themes, please do get in touch. And if anyone has any questions about Twitter, blogging, or social media that they think I might have a sporting chance of answering, please ask away.