USS Pensions Strike – Could deliberative democracy be a way out of the impasse?

“Freedom for the University of Tooting!”

My headline, unfortunately, is a classic QTWTAIN (‘question to which the answer is no’) because I can’t see any evidence that the employers want to negotiate or seek alternatives or engage in any meaningful way. You can find an admirably clear (and referenced) summary of the current situation at the time of writing here.

And if you want to read my wibblings from a previous dispute about why you should join the union, see the second half of this post.

All I’ll add is that we’ve been here before as regards pension cuts… again and again and again… only previously the cuts were salami slices, at least compared to what’s being proposed now. These previous changes, we were told, would put the scheme back on the right track, and were necessary due to increased life expectancy and so on. So my question is… were those previous claims about past changes just straightforward lies, or have things got worse? And if they’ve got worse, is that wider economic conditions, or incompetence? And either way, why are the people responsible taking huge pay increases? Why is my pension scheme on the way to becoming a regular in the pages of Private Eye?

Anyway… I wanted to talk about deliberative democracy. I listened to a really interesting Reasons to be Cheerful podcast (presented by Ed Miliband and Geoff Lloyd) on deliberative democracy the other week. If we ask everyone what they think on a particular topic, the problem is that not everyone will be equally well informed, will have the necessary time to follow the arguments and find the evidence, or will come to the topic with an open mind. The idea of deliberative democracy is to find a small, representative group, give them full access to the evidence, the arguments, and the relevant expertise, and then, through deliberation, work towards a consensus decision if possible.

Trial by jury follows this model very closely, though we don’t typically think of a jury as an expression of democracy. These are twelve ordinary people, selected at random (with some exceptions and criteria), and trusted to follow the arguments in a criminal trial. But we regard this as fair, and as legitimate, and my perception is that there’s widespread faith in trial by jury as an institution.

Could we extend this to other issues? For example, the current strike action about cuts to the USS pension scheme. At the moment I’m reading a lot of criticism about the methods of calculation, the underpinning assumptions, and some very questionable motivations and methods of reaching and spinning decisions by Universities UK. But some of that criticism comes from people who aren’t experts in this area, but who have relevant expertise in other related areas, or in areas that share a skill set. Such cognate-experts might well be right, but equally there might be good explanations for some of the peculiar-looking assumptions. In keeping with the Dunning-Kruger Effect, might such people be overestimating their own expertise and underestimating that of genuine experts? I don’t know.

Hence my interest in deliberative democracy… get a representative group of pension scheme members (academic and APM, a range of ages (including PhD students and retired staff), union and non-union members, a range of seniority and experience and subject area/specialism), give them access to experts and evidence, and let’s see what they come back with. A report from such a group contending that, yes, the pension scheme is in trouble and an end to defined benefit is the only thing that will keep it sustainable would have credibility and legitimacy. On the other hand, a report that came back with other options, and which denied the case for the necessity of such a drastic step, would also be persuasive. This would be a decision by my peers who have taken more time and more trouble than I have, who have access to expertise and arguments and evidence, and who I would therefore trust.

“Down with this sort of thing!”

I strongly suspect that we have two very polarised actuarial valuations of the scheme – one, from the employers, which seems to me to be laughably flawed (but again, Dunning-Kruger… what do I know?), and another, from UCU, which may turn out to be laughably optimistic. Point is, I don’t know, and I don’t want to make the mistake of assuming that the truth must lie somewhere in between.

One objection is that this might be little different to recent accusations about university Vice Chancellors sitting on the committees that set their salaries. However, a range of ages and career stages could militate against this – younger group members would surely resist any attempt to let the scheme limp on until older members are likely to be retired while leaving them with little or nothing. We can also include information about affordability and HE finance in general to ensure that we don’t end up with recommendations that are completely unaffordable. I’d also like to think that those who chose careers in academia or in university management – in most cases ahead of more lucrative careers – have a commitment to the sector and its future.

And no-one’s saying that the report of such a group need be binding, but a properly constituted group undertaking deliberative work with access to evidence and expertise would carry a great deal of authority and would be hard to simply set aside. It’s an example of what John Rawls called ‘pure procedural justice’. Its outcome is fair because it is set up and operates in a way that’s fair.

So I guess that’s my challenge to Universities UK and (to a lesser extent) the UCU too. If, UUK, your argument is that ‘There Is No Alternative’ (TINA) – which we’ve heard before, ad nauseam – let’s see if that’s really the case. Their complete refusal to engage on the issue of the ending of defined benefit doesn’t bode well here, nor does the obvious disingenuousness of offering “talks” while refusing to negotiate on the issue the strike is actually about. But let’s see if UCU’s claims bear scrutiny too. No-one is immune from wishful thinking, and some elements within UCU seem to enjoy being on strike a bit too much for my liking.

Because, frankly, I’d quite like (a) to get back to work; and (b) to have some sort of security in retirement, and the same for generations of academics and APM staff to come.

My fictional heroes of research development (and perhaps university management more generally)

Consider this post a bit of an early Christmas indulgence, by which I mean largely ignore it.

As an undergraduate, I was very taken with Aristotelian ethics, and in particular ideas about character and about exemplars of moral excellences and other kinds of excellences (public speaking, bravery, charisma etc). Roughly, a good way to learn is to observe people who do certain things well and learn from their example. Conversely, one can also learn from people who are terrible at things, and avoid their mistakes. No-one’s so awful that they can’t at least serve as a bad example and as a warning to others. I remember later conversations about who the ultimate Aristotelian exemplar might be – in reality or in contemporary culture – with a lot of votes for Captains Kirk and Picard, and with the merits of real-life (and therefore more complex) figures taking second place to a who’s-your-favourite-Star-Trek-captain debate.

Years later, I fell to wondering who the exemplars are for research development, or perhaps university management/leadership/academic wrangling more generally. I could write about people who’ve influenced my thinking and my career, but instead, like the Star Trek fans, I’ve been distracted by fictional examples.

My first nomination comes via a 2012 Inside Higher Education blog post from ‘Dean Dad’ in the US, and is for Kermit the Frog. The nomination goes as follows:

[Kermit] keeps the show running, but it’s clear that he actually enjoys the Chaos Muppets and wants them to be able to do what they do.  His work makes it possible for Gonzo to jump through the flaming hoop with a chicken under his arm while reciting Shakespeare, even though Kermit would never do that himself.

Kermit endures snark from Statler and Waldorf in the balcony; let’s just say I get that.  And the few times that Kermit freaks out have much more impact than when, say, Animal does, because a freaked-out Kermit threatens the working of the show.  Freaking out is just what Animal does.

The nomination comes complete with a whole theory of academic management based on the Muppet Show. Muppets can be divided into ‘order’ and ‘chaos’ muppets, with ‘hard’ and ‘soft’ examples of each. Kermit is the epitome of a soft order muppet because he understands the importance of order and structure, but doesn’t enjoy it for its own sake and wants to help others do what they do best. I’d quite like to add “soft order muppet” to my email signature and even my office door sign, but I don’t think the world’s quite ready for that.

My second nomination is Sergeant Wilson of ‘Dad’s Army’, played by John Le Mesurier. Catchphrases include “would you mind awfully…” and the eventual title of a biography of JLeM: “Do you think that’s wise, sir?” He’s usually a model of subtle and understated influence, providing gentle but timely challenge to those set above him. Good humoured, unflappable, wise, and reassuring, he’s the ideal sergeant.

My third – more controversially – is Edmund Blackadder, and in particular Blackadder III. This exchange alone – when reviewing the Prince of Wales’ first draft of a love letter – makes him the patron fictional saint of research development staff.

“Would you mind if I changed just one tiny aspect of it?”
“What’s that?”
“The words.”

Blackadder loses points for deviousness, more points for largely unsuccessful deviousness, consistent mistreatment of those he line manages, and general cynicism about and contempt for those in power. However, in the latter case, in his world, he’s got something of a point. But, as I said, exemplars can embody what not to do as well as what to do.

Next up, a trip to Fawlty Towers and Polly Sherman (Connie Booth, who also co-wrote the series), the voice of sanity (mostly) and a model of competence, dedication, and loyalty. She usually manages to keep her head while all around are losing theirs, and has a level of compassion, understanding, and tolerance for the eccentricities of those around her which the likes of Edmund Blackadder never reach.

Finally, one I’ve changed my mind over. Initially, my nominee was Sir Humphrey Appleby, the Permanent Secretary in Yes (Prime) Minister. While the Minister, Jim Hacker, was all fresh ideas and act-without-thinking, Sir Humphrey was the voice of experience and the embodiment of institutional memory.

On reflection, though, the real hero is Bernard Woolley, the Principal Private Secretary. Sir Humphrey’s first priority is the civil service, and no academic management role model can put the cart before the horse in such a way. More seriously, the ‘Sir Humphrey’ view of the civil service, and of administration and management more generally, is a reactionary, cynical, and highly damaging one. My Nottingham colleague Steven Fielding wrote an interesting piece about the effects of YM on perceptions of civil servants and cynicism about government. Sir Humphrey is an example of someone who has concluded that success and good governance aren’t possible without an effective and professional civil service, but who, in seeking to defend the means, ends up forgetting the end. And that’s a kind of negative exemplar as well. Let’s none of us forget who we’re here for, or why. Kermit doesn’t think the Muppet Show is all about him.

Bernard Woolley, though, struggles to manage conflicted loyalties (multiple stakeholders and bottom lines), and is under pressure both from Jim Hacker, the government minister actually in charge of the Department but only likely to have a very limited term of office, and from Sir Humphrey, his rather more permanent boss with huge power over his career prospects. Anyone else ever felt like that – (temporary) Heads of School or Research Directors or other fixed-term academic leaders on one side, and rather more permanent senior administrative, professional, and managerial colleagues on the other? People who won’t be stepping down inside eighteen months and returning, a good job well done, to their research and teaching?

Well, you may have felt like that, but I couldn’t possibly comment.

So… who have I missed? Who else deserves a mention? Kryten from Red Dwarf, perhaps? Smithers from the Simpsons? Bunk Moreland from The Wire?

Prêt-à-non-portability? Implications and possible responses to the phasing out of publication portability

“How much has been decided about the REF? About this much. And how much of the REF period is there to go? Well, again…

Recently, I attended an Open Forum Events one-day conference with the slightly confusing title ‘Research Impact: Strengthening the Excellence Framework‘ and gave a short presentation with the same title as this blog post. It was a very interesting event with some great speakers (and me), and I was lucky enough to meet up with quite a few people I only previously ‘knew’ through Twitter. I’d absolutely endorse Sarah Hayes’ blogpost for Research Whisperer about the benefits of social media for networking for introverts.

Oh, and if you’re an academic looking for something approaching a straightforward explanation of the REF, can I recommend Charlotte Mathieson’s excellent blog post. For those of you after in-depth half-baked REF policy stuff, read on…

I was really pleased with how the talk went – it’s one thing writing up summaries and knee-jerk analyses for a mixed audience of semi-engaged academics and research development professionals, but it’s quite another giving a REF-related talk to a room full of REF experts. It was based in part on a previous post I’ve written on portability, but my views (and what we know about the REF) have moved on since then, so I thought I’d have a go at summarising the key points.

I started by briefly outlining the problem and the proposed interim arrangements before looking at the key principles that needed to form part of any settled solution on portability for the REF after next.

Why non-portability? What’s the problem?

I addressed most of this in my previous post, but I think the key problem is that it turns what ought to be something like a football league season into an Olympic event. With a league system, the winner is whoever earns the most points over a long, drawn-out season. Three points is three points, whatever stage of the season it comes in. With Olympic events, it’s all about peaking at the right time during the cycle – and in some events, within the right ten seconds of that cycle. Both are valid as sporting competition formats, but for me, Clive, the REF should be more like a league season than a contest to see who can peak best on census day. And that’s what the previous REF rules encouraged – fractional short-term appointments around the census date; bulking out the submission then letting people go afterwards; rent-seeking behaviour from some academics holding their institution to ransom; poaching and instability; transfer-window effects on mobility; and panic buying.

If the point of the REF is to reward sustained excellence over the previous REF cycle with funding to institutions to support research over the next REF cycle, surely it’s a “league season” model we should be looking at, not an Olympic model. The problem with portability is that it’s all about who each unit of assessment has under contract and able to return at the time, even if that’s not a fair reflection of their average over the REF cycle. So if a world class researcher moves six months before the REF census date, her new institution would get REF credit for all of her work over the last REF cycle, and the one which actually paid her salary would get nothing in REF terms. Strictly speaking, this isn’t a problem of publication portability, it’s a problem of publication non-retention. Of which more later.

I summarised what’s being proposed as regards portability as a transition measure in my ‘Initial Reactions‘ post, but briefly, by far the most likely outcome for this REF is one that retains full portability and full retention. In other words, when someone moves institution, she takes her publications with her and leaves them behind. I’m going to follow Phil Ward of Fundermentals and call these Schrodinger’s Publications, but as HEFCE point out, plenty of publications were returned multiple times by multiple institutions in the last REF, as each co-author could return it for her institution. It would be interesting to see what proportion of publications were returned multiple times, and what the record is for the number of times a single publication has been submitted.
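Just to make the likely interim model concrete, here’s a toy sketch of my own (not anything from the HEFCE documents – all the names and data structures are invented for illustration). Under full portability plus full retention, an output is returnable by every institution that employed any of its authors during the cycle:

```python
# Toy model of the "Schrodinger's Publications" interim arrangement:
# full portability AND full retention, so the same output can appear
# in more than one submission. All field names here are hypothetical.

def eligible_returners(output):
    """Return the set of institutions that may return this output:
    every employer of every author during the REF cycle, old and new."""
    institutions = set()
    for author in output["authors"]:
        institutions.update(author["employers_this_cycle"])
    return institutions

paper = {
    "title": "A co-authored paper",
    "authors": [
        # Author A moved institution mid-cycle; both employers may return it.
        {"name": "A", "employers_this_cycle": ["Old University", "New University"]},
        # Author B stayed put; her institution may return it too.
        {"name": "B", "employers_this_cycle": ["Third University"]},
    ],
}

print(sorted(eligible_returners(paper)))
```

One mover plus one co-author already gives three possible returning institutions for a single paper, which is exactly how outputs came to be returned multiple times in 2014.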

Researcher Mobility is a Good Thing

Marie Curie and Mr Spock have more in common than radiation-related deaths – they’re both examples of success through researcher mobility. And researcher mobility is important – it spreads ideas and methods, allows critical masses of expertise to be formed. And researchers are human too, and are likely to need to relocate for personal reasons, are entitled to seek better paid work and better conditions, and might – like any other employee – just benefit from a change of scene.

For all these reasons, future portability rules need to treat mobility as positive, and as a human right. We need to minimise ‘transfer window’ effects that force movement into specific stages of the REF cycle – although it’s worth noting that plenty of other professions have transfer windows – teachers, junior doctors (I think), footballers, and probably others too.

And for this reason, and for reasons of fairness, publications from staff who have departed need to be assessed in exactly the same way as publications from staff who are still employed by the returning UoA. Certainly no UoA should be marked down or regarded as living on past glories for returning as much of the work of former colleagues as they see fit.

Render unto Caesar

Institutions are entitled to a fair return on investment in terms of research, though as I mentioned earlier, it’s not portability that’s the problem here so much as non-retention. As Fantasy REF Manager I’m not that bothered by someone else submitting some of my departed star player’s work written on my £££, but I’m very much bothered if I can’t get any credit for it. Universities are given funding on the basis of their research performance as evaluated through the previous REF cycle to support their ongoing endeavours in the next one. This is a really strong argument for publication retention, and it seems to me to be the same argument that underpins impact being retained by the institution.

However, there is a problem which I didn’t properly appreciate in my previous writings on this. It’s the investment/divestment asymmetry issue, as absolutely no-one except me is calling it. It’s an issue not for the likely interim solution, but for the kind of full non-portability system we might have for the REF after next.

In my previous post I imagined a Fantasy REF Manager operating largely a one-in, one-out policy – thus I didn’t need a new appointee’s publications because I got to keep their predecessor’s. And provided that staff mobility is largely one-in, one-out, that’s fine. But it’s less straightforward if it’s not. At the moment the University of Nottingham is looking to invest in a lot of new posts around specific areas (“beacons”) of research strength – really inspiring projects, such as the new Rights Lab which aims to help end modern slavery. And I’m sure plenty of other institutions have similar plans to create or expand areas of critical mass.

Imagine a scenario where I as Fantasy REF Manager decide to sack a load of people immediately prior to the REF census date. Under the proposed rules I get to return all of their publications and I can have all of the associated income for the duration of the next REF cycle – perhaps seven years’ funding. On the other hand, if I choose to invest in extra posts that don’t merely replace departed staff, it could be a very long time before I see any return, via REF funding at least. It’s not just that I can’t return their publications that appeared before I recruited them, it’s that the consequences of not being able to return a full REF cycle’s worth of publications will have funding implications for the whole of the next REF cycle. The no-REF-disincentive-to-divest and long-lead-time-for-REF-reward-for-investment looks lopsided and problematic.

If I’m a smart Fantasy REF Manager, this means I’ll save up my redundancy axe-wielding (at worst) or recruitment freeze (at best) for the end of the REF cycle, and I’ll be looking to invest only right at the beginning of the REF cycle. I’ve no idea what the net effect of all this will be repeated across the sector, but it looks to me as if non-portability just creates new transfer windows and feast and famine around recruitment. And I’d be very worried if universities end up delaying or cancelling or scaling back major strategic research investments because of a lack of REF recognition in terms of new funding.

Looking forward: A settled portability policy

A few years back, HEFCE issued some guidance about Open Access and its place in the coming REF. They did this more or less ‘without prejudice’ to any other aspect of the REF – essentially, whatever the rest of the REF looks like, these will be the open access rules. And once we’ve settled the portability rules for this time (almost certainly using the Schrodinger’s publications model), I’d like to see them issue some similar ‘without prejudice’ guidelines for the following REF.

I think it’s generally agreed that the more complicated but more accurate model that would allow limited portability and full retention can’t be implemented at such short notice. But perhaps something similar could work with adequate notice and warning for institutions to get the right systems in place, which was essentially the point of the OA announcement.

I don’t think a full non-portability, full-retention system as currently envisaged could work without some finessing, and every bit of finessing for fairness comes at the cost of complication. As well as the investment-divestment asymmetry problem outlined above, there are other issues too.

The academic ‘precariat’ – those on fixed-term/teaching-only/fractional/sessional contracts – need special rules. An institution employing someone to teach one module with no research allocation surely shouldn’t be allowed to return that person’s publications. One option would be to say something like ‘teaching only’ = full portability, no retention; and ‘fixed term with research allocation’ = the Schrodinger system of publications being retained and being portable. Granted this opens the door to other games being played (perhaps turning down a permanent contract to retain portability?), but I don’t think these are as serious as the current games, and I’m sure they could be finessed.

While I argued previously that career young researchers have more to gain than to lose from a system whereby appointments are made more on potential than on track record, the fact that so many are as concerned as they are means that there needs to be some sort of reassurance or allowance for those not in permanent roles.

Disorder at the border. What happens to publications written on Old Institution’s time, but eventually published under New Institution’s affiliation? We can also easily imagine publication filibustering, whereby researchers delay publication to maximise their position in the job market. Not only are delays in publication bad for science, but there’s also the potential for inappropriate pressure to be applied by institutions to hold something back/rush something out. It could easily put researchers in an impossible position, and has the potential to poison relationships with previous employers and with new ones. Add in the possible effects of multiple job moves on multi-author publications and this gets messy very quickly.

One possible response to this would be to allow a portability/retention window that goes two ways – so my previous institution could still return my work published (or accepted) up to (say) a year after my official leave date. Of course, this creates a lot of admin, but it’s entirely up to my former institution whether it thinks that it’s worth tracking my publications once I’ve gone.
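The window rule itself is simple enough to sketch (my own illustration of my own suggestion – the one-year window and the function name are placeholders, not a worked-out proposal):

```python
# Sketch of a two-way portability/retention window: the previous
# institution may still return work accepted up to (say) a year after
# the researcher's leave date. The window length is illustrative.
from datetime import date, timedelta

WINDOW = timedelta(days=365)

def former_employer_may_return(leave_date, acceptance_date):
    """True if the old institution can still return this output."""
    return acceptance_date <= leave_date + WINDOW

# A researcher leaves on 1 Jan 2019: a paper accepted that June is still
# returnable by the former employer; one accepted two years later is not.
print(former_employer_may_return(date(2019, 1, 1), date(2019, 6, 1)))
print(former_employer_may_return(date(2019, 1, 1), date(2021, 1, 1)))
```

The rule is trivial; the admin is in tracking acceptance dates for people who have left, which is why I’d leave it up to the former institution whether to bother.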

What about retired staff? As far as I can see there’s nothing in any documents about the status of the publications of retired staff, either in this REF or in any future plans. The logic should be that they’re returnable in the same way as those of any other researcher who has left during the REF period. Otherwise we’ll end up with pressure to stay on, and perhaps other kinds of odd incentives not to appoint people who will retire before the end of a REF cycle.

One final suggestion…

One further half-serious suggestion… if we really object to game playing, perhaps the only fair way to properly reward excellent research and impact, and to minimise game playing, is to keep the exact rules of the REF a secret for as long as possible in each cycle, forcing institutions to focus on “doing good stuff” and to worry less about gaming the REF.

  • If you’re really interested, you can download a copy of my presentation … but if you weren’t there, you’ll just have to wonder about the blank page…

Initial Reactions to HEFCE’s ‘Initial decisions on REF 2021’

This lunchtime HEFCE have announced some more “Initial Decisions” on REF 2021, which I’ve summarised below.

Slightly frustratingly, the details are scattered across a few documents, and it’s easy to miss some of them. There’s an exec summary,  a circular letter (which is more of a rectangle, really), the main text of the report that can be downloaded from the bottom of the exec summary page (along with an annex listing UoAs and further particulars for panel chair roles)… and annex A on a further consultation staff return and output portability, downloadable from the bottom of the circular letter page.

I’ve had a go at a quick summary, by bullet point theme rather than in the order they appear, or in a grand narrative sweep. This is one of my knee-jerk pieces, and I’ve added a few thoughts of my own. But it’s early days, and possibly I’ve missed something or misunderstood, so please let me know.

Outputs

  • Reserve output allowed where publication may not appear in time
  • Worth only 60% of total mark this time (see scoring system)

I think the reduction in the contribution of outputs to the overall mark (in favour of impact) is probably what surprised me most, and I suspect this will be controversial. I think the original plan was for environment to be downgraded to make way, but there’s a lot more demanded from the environment statement this time (see below) so it’s been protected. Great to have the option of submitting an insurance publication in case one of the in-press ones doesn’t appear by close of play.

Panels/Units of Assessment

  • Each sub-panel to have at least one appointed member for interdisciplinary research “with a specific role to ensure its equitable assessment”. New identifier/flag to capture interdisciplinary outputs
  • Single UoA for engineering, multiple submissions allowed
  • Archaeology split from Geography and Environmental studies – now separate
  • Film and Screen Studies to be explicitly included in UoA 33 with Dance, Drama, Performing Arts
  • Decisions on forensic science and criminology (concerns about visibility) due in Autumn
  • Mapping staff to UoAs will be done by institutions, not by HESA cost centres, but HEFCE may ask for more info in the event of any “major variances” from HESA data.

What do people think about a single UoA for engineering? That’s not an area I support much. Is this just tidying up, or does this have greater implications? Is it ironic that forensic science and criminology have been left with a cop-show cliffhanger ending?

Environment

  • Expansion of Unit of Assessment environment section to include sections on:
    • Structures to support interdisciplinary research
    • Supporting collaboration with “organisations beyond higher education”
    • Impact template will now be in the environment element
    • Approach to open research/open access
    • Supporting equality and diversity
  • More quant data in UoA environment template (we don’t know what yet)
  • Standard Institution level information
  • Non-assessed invite only pilot for institution level environment statement
  • Expansion of environment section is given as a justification for maintaining it at 15% of score rather than reducing as expected.

The inclusion of a statement about support for interdisciplinary work is interesting, as this moves beyond merely addressing justifiable criticism about the fate of interdisciplinary research (see the welcome addition to each UoA of an appointed ‘Member for Interdisciplinarity’ above). This makes it compulsory, and an end in itself. This will go down better in some UoAs than others.

Impact

  • Institutional level impact case studies will be piloted, but not assessed
  • Moves towards unifying definitions of “impact” and “academic impact” between REF and Research Councils – both part of dual funding system for research
  • Impact on teaching/curriculum will count as impact – more guidance to be published
  • Underpinning work “at least equivalent to 2*” and published between 1st Jan 2000 and 31st Dec 2020. Impact must take place between 1st Aug 2013 and 31st July 2020
  • New impact case study template, more questions asked, more directed, more standardised, more “prefatory” material to make assessment easier.
  • Require “routine provision of audit evidence” for case study templates, but not given to panel
  • Uncertain yet on formula for calculating number of case study requirements, but overall “should not significantly exceed… 2014”. Will be done on some measure of “volume of activity”, possibly outputs
  • Continuation of case studies from 2014 is allowed, but must meet date rules for both impact and publication, need to declare it is a continuation.
  • Increased to 25% of total score

And like a modern-day impact superhero, here comes Mark Reed, aka Fast Track Impact, with a blog post of his own on the impact implications of the latest announcement. I have to say that I’m pleased that we’re only having a pilot for institutional case studies, because I’m not sure that’s a go-er.

Assessment and Scoring system

  • Sub-panels may decide to use metrics/citation data, but will set out criteria statements stating whether/how they’ll use it. HEFCE will provide the citation data
  • As 2014, overall excellence profile, 3 sub-profiles (outputs, impact, environment)
  • Five point scale from unclassified to 4*
  • Outputs 60, Impact 25, Environment 15. Increase of impact to 25, but as extra environment info sought, has come at the expense of outputs.
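The arithmetic of the new weightings is straightforward; here’s a worked example of my own (the sub-profile grade point averages are invented purely for illustration):

```python
# REF 2021 weightings per the announcement: outputs 60%, impact 25%,
# environment 15%. The example sub-profile scores below are made up.

WEIGHTS = {"outputs": 0.60, "impact": 0.25, "environment": 0.15}

def overall_score(sub_profiles):
    """Weighted average of the three sub-profile grade point averages."""
    return sum(WEIGHTS[k] * v for k, v in sub_profiles.items())

example = {"outputs": 3.1, "impact": 3.5, "environment": 2.8}
print(round(overall_score(example), 3))  # 0.6*3.1 + 0.25*3.5 + 0.15*2.8
```

One consequence of the shift from 65/20/15 in 2014: a strong impact sub-profile now moves the overall score noticeably more than before, while the same outputs score counts for a little less.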

There was some talk of a possible need for a 5* category to be able to differentiate at the very top, but I don’t think this gained much traction.

But on the really big questions… further consultation (deadline 29th Sept):

There’s been some kicking into the short grass, but things are looking a bit clearer…

(1) Staff submission:

All staff “with a significant responsibility to undertake research” will be submitted, but “no single indicator identifies those within the scope of the exercise”. Institutions have the option of submitting 100% of staff who meet the core eligibility requirement OR coming up with a code of practice that they’ll use to decide who is eligible. Auditable evidence will be required, and institutions can choose different options for different UoAs.

Proposed core eligibility requirements – staff must meet all of the following:

  • “have an academic employment function of ‘research only’ or ‘teaching and research’
  • are independent researchers [i.e. not research assistants unless ‘demonstrably’ independent]
  • hold minimum employment of 0.2 full time equivalent
  • have a substantive connection to the submitting institution.”
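For the avoidance of doubt about the logic, the proposal reduces to a simple conjunction – a member of staff is eligible only if all four conditions hold. This is purely illustrative; the record field names are mine, not HEFCE’s:

```python
# Sketch of the proposed core eligibility test: staff must meet ALL four
# conditions. The record fields are invented for illustration only.

def meets_core_eligibility(staff):
    return (
        staff["employment_function"] in {"research only", "teaching and research"}
        and staff["independent_researcher"]   # RAs only if demonstrably independent
        and staff["fte"] >= 0.2               # minimum 0.2 full-time equivalent
        and staff["substantive_connection"]   # to the submitting institution
    )

# A research assistant who isn't demonstrably independent fails the test:
research_assistant = {"employment_function": "research only",
                      "independent_researcher": False,
                      "fte": 1.0, "substantive_connection": True}
# A 0.5 FTE teaching-and-research academic passes:
academic = {"employment_function": "teaching and research",
            "independent_researcher": True,
            "fte": 0.5, "substantive_connection": True}
```

The interesting question, of course, is how auditable the softer conditions (“independent”, “substantive connection”) really are compared with the hard FTE threshold.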

I like this as an approach – it throws the question back to universities, and leaves it up to them whether they think it’s worth the time and trouble running an exercise in one or more UoAs. And I think the proposed core requirements look sensible, and faithful to the core aim which is to maximise the number of researchers returned and prevent the hyper selectivity game being played.

(2) Transition arrangements for non-portability of publications.

HEFCE are consulting on either:

(a) “The simplified model, whereby outputs would be eligible for return by the originating institution (i.e. the institution where the research output was demonstrably generated and at which the member of staff was employed) as well as by the newly employing institution”.
or
(b) “The hybrid approach, with a deadline (to be determined), after which a limited number of outputs would transfer with staff, with eligibility otherwise linked to the originating institution. (This would mean operating two rules for portability in this exercise: the outputs of staff employed before the specified date falling under the 2014 rules of full portability; outputs from staff employed after this date would fall under the new rules.)”

I wrote a previous post on portability and non-portability when the Stern Review was first published, which I still think is broadly correct.

I wonder how simple the simplified model will be… if we end up having to return n=2 publications, choosing those publications from a list of everything published by everyone while they worked here. But it’s probably less work than having a cut-off date.

More to follow….

Mistakes in Grant Writing, part 95 – “The Gollum”

Image: Alonso Javier Torres [CC BY 2.0] via Flickr
A version of this article first appeared in Funding Insight on 20th July 2017 and is reproduced with kind permission of Research Professional. For more articles like this, visit  www.researchprofessional.com
* * * * * * * * * * * * * * * * * * * *

Previously I’ve written about the ‘Star Wars Error’ in grant writing, and my latest attempt to crowbar popular culture references into articles about grant writing mistakes is ‘the Gollum’. Gollum is a character from Lord of the Rings, a twisted, tortured figure – wicked and pitiable in equal measure. He’s an addict whose sole drive is possession of the Ring of Power, which he loves and hates with equal ferocity. A little like me and my thesis.

Only begotten

For current purposes, it’s his cry of “my precious!” and obsession with keeping the Ring for himself that I’m thinking of in terms of an analogy with research grant applicants, rather than (spoilers) eating raw fish, plunging into volcanoes, or murdering friends. Even in the current climate of ‘demand management’, internal peer review, and research development support, there are still researchers who treat their projects as their “precious” and are unable or unwilling to share them or to seek comment and feedback.

It’s easy to understand why – there’s the fear of being scooped and of someone else taking and using the idea. There’s the fear of public failure – with low success rates, a substantial majority of applications will be unsuccessful, and perhaps the thought is that if one is going to fail, few people should know about it. And let’s not pretend that internal review/filtering processes don’t at least raise questions about academic freedom.

Power play

But there are other fears. The first is about sabotage or interference from colleagues who might be opposed to the research, whether through ideological and methodological differences, or because they’re on the other side of some major scientific controversy. In my experience, this concern has been largely unfounded. I’ve been very fortunate to work with senior academics who are very clear about their role as internal reviewer, which is to further improve competitive applications and ideas, while filtering out or diverting uncompetitive ideas, or applications that simply aren’t ready. But while internal reviewers will have their views, I’ve not seen anyone let that power go to their heads.

Enough of experts

Second, if the concern isn’t about integrity or (unconscious) bias, it might be about background or knowledge. One view I’ve encountered – mainly unspoken, but occasionally spoken and once shouted – is that no-one else at the institution has the expertise to review their proposal and therefore internal review is a waste of time.

It might well be true that no-one else internally has equivalent expertise to the applicant, and (apart from early career researchers) that’s to be expected and welcomed. But if it’s true internally, it might also be true of the external expert reviewers chosen by the funder, and it’s even more likely to be true of the people on the final decision-making panel. The chances are that the principal applicant on any major project is one of the leaders in that field, and even if she regards a handful of others as appropriate reviewers, there’s absolutely no guarantee that she’ll get them.

Significant other

Ultimately, the purpose of a funding application is to convince expert peer reviewers from the same or cognate discipline and a much broader panel of distinguished scientists of the superior merits of your ideas and the greater significance of your research challenge compared to rival proposals. Because once the incompetent and the unfeasible have been weeded out – it’s all about significance.

A quality internal peer review process will mirror those conditions as closely as possible. It doesn’t matter that internal reviewer X isn’t from the same field and knows little about the topic – what’s of use to the applicant is what X makes of the application as a senior academic from another (sub)discipline. Can she understand the research challenges, why they’re significant and important? Does the application tell her exactly what the applicant proposes to do? What’s particularly valuable are creative misunderstandings – if an internal reviewer has misunderstood a key point or misinterpreted something, a wise applicant will return to the application and seek to identify the source of that misunderstanding and head it off, rather than just dismissing the feedback out of hand.

Forever alone

And that’s without touching on the value that research development support can add. People in my kind of role who may not be academics, but who have seen a great many grant applications over the years. People who aren’t academic experts, but who know when something isn’t clear, or doesn’t make sense to the intelligent lay person.

Most institutions that take research seriously will offer good support to their researchers. Despite this, there are still researchers who only engage with others where they absolutely must, and take little notice of feedback or experience during the grant application process. Do they really think that others are unworthy of gazing upon the magnificence of The Precious?

I’d like to urge them here to turn back, to take the advice and feedback that’s on offer, lest they end up wandering the dark places of the world, alone and unfunded.

“Once more unto the breach” – Should I resubmit my unsuccessful research grant application?

Image: a boomerang
This article first appeared in Funding Insight on 11th May 2017 and is reproduced with kind permission of Research Professional. For more articles like this, visit  www.researchprofessional.com
* * * * * * * * * * * * * * * * * * * *

Should I resubmit my unsuccessful research grant application?

No.

‘No’ is the short answer – unless you’ve received an invitation or steer from the funder to do so. Many funders don’t permit uninvited resubmissions, so the first step should always be to check your funder’s rules and definitions of resubmission with your research development team.

To be, or not to be

That’s not to say that you should abandon your research proposal – more that it’s a mistake to think of your next application on the same or similar topic as a resubmission. It’s much better – if you do wish to pursue it – to treat it as a fresh application and to give yourself and your team the opportunity to develop your ideas. It’s unlikely that nothing has changed between the date of submission and now. It’s also unlikely that nothing could be improved about the underpinning research idea or the way it was expressed in the application.

However, sometimes the best approach is to let an idea go, cut your losses, and avoid the sunk cost fallacy. Onwards and upwards to the next idea. I was recently introduced to the concept of a “negative CV”, the opposite of a normal CV, listing only failed grant applications, rejected papers, unsuccessful conference pitches and job market rejections. Even the most eminent scholars have lengthy negative CVs, and there’s no shame in being unsuccessful, especially as success rates are so low. It’s really difficult – you’ve got your team together, you’ve been through the discussions and debates, the honing of your idea, the grant writing, and then the disappointment of not getting funded. It’s very definitely worth meeting to discuss what can be salvaged and repurposed – publishing literature reviews, continuing to engage with stakeholders and so on. It’s only natural to look for some other avenue for your work, but sometimes it’s best to move on to something else.

Here are two bits of wisdom that are both true in their own way:

  • If at first you don’t succeed, try, try, try again (William Edward Hickson)
  • The definition of insanity is doing the same thing over and over but expecting different results (disputed – perhaps Einstein or Franklin, but I reckon US Narcotics Anonymous)

So what should you do? What factors should you consider in deciding whether to rise from the canvas like Rocky, or instead emulate Elsa and Let It Go?

What being unsuccessful means… and what it doesn’t

As a Canadian research council director once said, research funding is a contest, not a test. Research funding is a limited commodity, like Olympic medals, jobs, and winning lottery tickets. It’s not an unlimited commodity like driving licences or PhDs, which everyone who reaches the required standard can obtain. Sometimes I think researchers confuse the two – if the driving test examiner says I failed on my three-point turn, and I get it right next time (and make no further mistakes), I’ll pass. But even if I respond adequately to all of the points made in the referees’ comments, there’s still no guarantee I’ll get funded. The quality of my driving in the morning doesn’t affect your chances of passing your test in the afternoon, but if too many applications are better than yours, you won’t get funded. And just as many recruitment exercises produce more appointable candidates than posts, so funding calls attract far more fundable applications than the funds available.

Sometimes referees’ comments can be misinterpreted. Feedback might list the real or perceived faults with the application, but (once the fundamentally flawed have been excluded) ultimately it’s a competition about significance. What significance means is defined by the funder and the scheme and doesn’t necessarily mean impact – it could be about academic significance, contribution to the field and so on.

As a public panel member for an NIHR scheme I’ve seen this from the inside – project proposals which are technically competent, sensible and feasible, yet which don’t get funded, either because they fail to articulate their significance or because their research challenge just isn’t that significant an issue. They’re simply not competitive against similarly competent applications taking on much more significant and important research challenges. Feedback is given which would have improved the application, but simply addressing that feedback will seldom make it any more competitive.

When major Research Centre calls come out, I often have conversations with colleagues who have great ideas for perfectly formed projects which unfortunately I don’t think are significant enough to be one of three or four funded across the whole of social sciences. Ideally the significance question, the “so what/who cares?” question should be posed before applying in the first place, but you should definitely look again at what was funded and ask it again of your project before considering trying to rework it.

Themed Calls Cast a Long Shadow

One of the most dispiriting grant rejection experiences is rejection from a targeted call which seemed perfect. It’s not like an open call where you have to compete with rival bids on significance from all across your research council’s remit – rather, the significance is already recognised.

Yet the reality is that narrower calls often have similarly low success rates. Although they’re narrower, everyone who can pile in, does pile in. And deciding what to do next is much harder. Themed calls cast a long shadow – if as a funder I’ve just made a major investment in field X through niche call Y, I’m not sure how I’m going to feel about an X-related application coming back in through the open call route. Didn’t we just fund a lot of this stuff? Should we fund more, especially if an idea like this was unsuccessful last time? Shouldn’t we support something else? And I think this effect might be true even with different funders who will be aware of what’s going on elsewhere. If a tranche of projects in your research area have been funded through a particular call, it’s going to be very difficult to get investment through any other scheme anytime soon.

Switching calls, Switching funders

An exception to this might be the Global Challenges Research Fund or perhaps other areas where there’s a lot of funding available (relatively speaking) and a number of different calls with slightly different priorities. Being unsuccessful with an application to an open call or a broader call and then looking to repurpose the research idea in response to a narrower themed call is more likely to pay off than the other way round, moving from a specific call to a general one. But even so, my advice would be to ban the “r” word entirely. It’s not a ‘resubmission’, it’s an entirely new application written for a different funding scheme with different priorities, even if some of the underlying ideas are similar.

This goes double when it comes to switching funders. A good way of wasting everyone’s time is trying to crowbar a previously unsuccessful application into the format required by a different funder. Different funders have different priorities and different application procedures, formats and rules, and so you must treat it as a fresh application. Not doing so is a bit like getting out some love letters you sent to a former paramour, changing the name at the top, and reposting them to the current object of your affections. Neither will end well.

The Leverhulme Trust are admirably clear on this point: they’re “keen to avoid assuming the role of ‘funder of last resort’; that is, of routinely providing support for proposals which have been fully matched to the requirement of another funding agency, but have failed to win support on the grounds of either lack of quality or insufficient available funds.” If you’re going to apply to the Leverhulme Trust, make it a Leverhulme-y application, and that means shifting not just the presentational style but also the substance of what you’re proposing.

Whatever the change, forget any notion of resubmission if you’re taking an idea from one call to another. Yes, you may be able to reuse some of your previous materials, but if you submit something clearly written for another call with the crowbar marks still visible, you won’t get funded.

The Five Stages of Grant Application Failure

I’m reluctant to draw this comparison, but I wonder if responding to grant application rejection is a bit like the Kubler-Ross model of grief (denial, anger, bargaining, depression, and acceptance). Perhaps one question to ask yourself is if your resubmission plans are coming from a position of acceptance – in which case fine, but don’t regard it as a resubmission – or a part of the bargaining stage. In which case…. perhaps take a little longer to decide what to do.

Further reading: What to do if your grant application is unsuccessful. Part 1 – What it Means and What it Doesn’t and Part 2 – Next Steps.

‘Unimaginative’ research funding models and picking winners

XKCD 1827 – Survivorship Bias  (used under Creative Commons Attribution-NonCommercial 2.5 License)

Times Higher Education recently published an interesting article by Donald Braben, endorsed by 36 eminent scholars including a number of Nobel laureates. They criticise “today’s academic research management” and claim that as an unforeseen consequence, “exciting, imaginative, unpredictable research without thought of practical ends is stymied”. The article fires off somewhat scattergun criticism of the usual bêtes noires – the inherent conservatism of peer review; the impact agenda and lack of funding for blue skies research; and grant application success rates.

I don’t deny that there’s a lot of truth in their criticisms, but when it comes to research policy and deciding how best to use limited resources… it’s all a bit more complicated than that.

Picking Winners and Funding Outsiders

Look, I love an underdog story as much as the next person. There’s an inherent appeal in the tale of the renegade scholar, the outsider, the researcher who rejects the smug, cosy consensus (held mainly by old white guys) and whose heterodox ideas – considered heretical nonsense by the establishment – are  ultimately triumphantly vindicated. Who wouldn’t want to fund someone like that? Who wouldn’t want research funding to support the most radical, most heterodox, most risky, most amazing-if-true research? I think I previously characterised such researchers as a combination of Albert Einstein and Jimmy McNulty from ‘The Wire’, and it’s a really seductive picture. Perhaps this is part of the reason for the MMR fiasco.

The problem is that the most radical outsiders are functionally indistinguishable from cranks and charlatans. Are there many researchers with a more radical vision than the homeopath, whose beliefs imply not only that much of modern medicine is misguided, but that so is our fundamental understanding of the physical laws of the universe? Or the anti-vaxxers? Or the Holocaust deniers?

Of course, no-one is suggesting that these groups be funded, and, yes, I’ll admit it’s a bit of a cheap shot aimed at a straw target. But even if we can reliably eliminate the cranks and the charlatans, we’ll still be left with a lot of fringe science. An accompanying THE article quotes Dudley Herschbach, joint winner of the 1986 Nobel Prize for Chemistry, as saying that his research was described as being at the “lunatic fringe” of chemistry. How can research funders tell the difference between lunatic ideas with promise (both interesting-if-true and interesting-even-if-not-true) and lunatic ideas that are just… lunatic? If it’s possible to pick winners, then great. But if not, it sounds a lot like buying lottery tickets and crossing your fingers. And once we’re into the business of applying a greater degree of scrutiny in picking winners, we’re back to having peer review again.

One of the things that struck me about much of the history of science is that there are many stories of people who believe they are right – in spite of the scientific consensus and in spite of the state of the evidence available at the time – but who proceed anyway, heroically ignoring objections and evidence, until ultimately vindicated. We remember these people because they were ultimately proved right, or rather, their theories were ultimately proved to have more predictive power than those they replaced.

But I’ve often wondered about such people. They turned out to be right, but were they right because of some particular insight, or were they right because they were lucky in that their particular prejudice happened to line up with the actuality? Was it just that the stopped clock is right twice per day? Might their pig-headedness equally well have carried them along another (wrong) path entirely, leaving them to be forgotten as just another crank? And just because someone is right once, is there any particular reason to think that they’ll be right again? (Insert obligatory reference to Newton’s dabblings with alchemy here). Are there good reasons for thinking that the people who predicted the last economic crisis will also predict the next one?

A clear way in which luck – interestingly rebadged as ‘serendipity’ – is involved is through accidental discoveries. Researchers are looking at X when… oh look at Y, I wonder if Z… and before you know it, you have a great discovery which isn’t what you were after at all. Free packets of post-it notes all round. Or when ‘blue skies’ research which had no obvious practical application at the time becomes a key enabling technology or insight later on.

The problem is that all these stories of serendipity, surprise impact, and radical outsider researchers are examples of lotteries in which history only remembers the winning tickets. Through an act of serendipity, XKCD published a cartoon illustrating this point nicely (see above) just as I was thinking about these issues.

But what history doesn’t tell us is how many lottery tickets research funding agencies have to buy in order to have those spectacular successes. And just as importantly, whether or not a ‘lottery ticket’ approach to research funding will ultimately yield a greater return on investment than a more ‘unimaginative’ approach using the tired old processes of peer review undertaken by experts in the relevant field, followed by prioritisation decisions taken by a panel of eminent scientists drawn from across the funder’s remit. And of course, a great success achieved through this method – having a great idea, having the greatness of the idea acknowledged by experts, and then carrying out the research – is a much less compelling narrative or origin story, probably to the point of invisibility.

A mixed ecosystem of conventional and high risk-high reward funding streams

I think there would be broad agreement that the research funding landscape needs a mixture of funding methods and approaches. I don’t take Braben and his co-signatories to be calling for wholesale abandonment of peer review, of themed calls around particular issues, or even of the impact agenda. And while I’d defend all those things, I similarly recognise merit in high risk-high reward research funding, and in attempts by major funders to try to address the problem of peer review conservatism. But how do we achieve the right balance?

Braben acknowledges that “some agencies have created schemes to search for potentially seminal ideas that might break away from a rigorously imposed predictability” and we might include the European Research Council and the UK Economic and Social Research Council as examples of funders who’ve tried to do this, at least in some of their schemes. The ESRC in particular on one scheme abandoned traditional peer review for a Dragon’s Den style pitch-to-peers format, and the EPSRC is making increasing use of sandpits.

It’s interesting that Braben mentions British Petroleum’s Venture Research Initiative as a model for a UCL pilot aimed at supporting transformative discoveries. I’ll return to that pilot later, but he also mentions that the one project that scheme funded was later funded by an unnamed “international benefactor”, which I take to be a charity or private foundation or other philanthropic endeavour rather than a publicly funded research council or comparable organisation. I don’t think this is accidental – private companies have much more freedom to create blue skies research and innovation funding, as long as the rest of the operation generates enough money to pay the bills and enough of their lottery tickets end up winning to keep management happy. Similarly with private foundations, which have near total freedom to operate, apart perhaps from charity rules.

But I would imagine that it’s much harder for publicly funded research councils to take these kinds of risks, especially during austerity. (“Sorry Minister, none of our numbers came up this year, but I’m sure we’ll do better next time.”) In a UK context, the Leverhulme Trust – a happy historical accident funded largely through dividend payments from its bequeathed shareholding in Unilever – seeks to differentiate itself from the research councils by styling itself as more open to risky and/or interdisciplinary research, and could perhaps develop further in this direction.

The scheme that Braben outlines is genuinely interesting. Internal only within UCL, a very light touch application process mainly involving interviews/discussion, decisions taken by “one or two senior scientists appointed by the university” – not subject experts, I infer, as they’re the same people for each application. Over 50 applications since 2008 have so far led to one success. There’s no obligation to make an award to anyone, and they can fund more than one. It’s not entirely clear from this article whether the applicant was – as Braben proposes for the kinds of schemes he calls for – “exempt from normal review procedures for at least 10 years. They should not be set targets either, and should be free to tackle any problem for as long as it takes”.

From the article I would infer that his project received external funding after 3 years, but I don’t want to pick holes in a scheme which is only partially outlined and which I don’t know any more about, so instead I’ll talk about Braben’s more general proposal, not the UCL scheme in particular.

It’s a lot of power in very few hands to give out these awards, and represents a very large and very blank cheque. While the use of interviews and discussion cuts down on grant writing time, my worry is that a small panel and interview-based decision making may open the door to unconscious bias, and to greater success for more accomplished social operators. Anyone who’s been on many interview panels will probably have experienced fellow panel members making heroic leaps of inference about candidates based on some deep intuition, and the tendency of some people to want to appoint the more confident and self-assured interviewee ahead of a visibly more nervous but far better qualified and more experienced rival. I have similar worries about “sand pits” as a way of distributing research funding – do better social operators win out?

The proposal is for no normal review procedures, and for ten years in which to work, possibly longer. At Nottingham – as I’m sure at many other places – our nearest equivalent scheme is something like a strategic investment fund which can cover research as well as teaching and other innovations. (Here we stray into things I’m probably not supposed to talk about, so I’ll stop). But these are major investments, and there’s surely got to be some kind of accountability during decision-making processes and some sort of stop-go criteria or review mechanism during the project’s life cycle. I’d say that courage to start up some high risk, high reward research project has to be accompanied by the courage to shut it down too. And that’s hard, especially if livelihoods and professional reputations depend upon it – it’s a tough decision for those leading the work and for the funder too. But being open to the possibility of shutting down work implies a review process of some kind.

To be clear, I’m not saying let’s not have more high-risk high-reward curiosity driven research. By all means let’s consider alternative approaches to peer review and to decision making and to project reporting. But I think high risk/high reward schemes raise a lot of difficult questions, not least what the balance should be between lottery ticket projects and ‘building society savings account’ projects. We need to be aware of the ‘survivor bias’ illustrated by the XKCD cartoon above and be aware that serendipity and vindicated radical researchers are both lotteries in which we only see the winning tickets. We also need to think very carefully about fair selection and decision making processes, and the danger of too much power and too little accountability in too few hands.

It’s all about the money, money, money…

But ultimately the problem is that there are a lot more researchers and academics than there used to be, and their numbers – in many disciplines – are determined not by the amount of research funding available nor the size of the research challenges, but by the demand for their discipline from taught-course students. And as higher education has expanded hugely since the days in which most of Braben’s “500 major discoveries” were made, there are just far more academics and researchers than there is funding to go around. That’s especially true given recent “flat cash” settlements. I also suspect that the costs of research are now much higher than they used to be, given both the technology available and the technology required to push further at the boundaries of human understanding.

I think what’s probably needed is a mixed ecology of research funders and schemes. Publicly funded research bodies are probably not best placed to fund risky research because of accountability issues, and perhaps this is a space in which private foundations, research funding charities, and universities themselves are better able to operate.

HEFCE publishes ‘Consultation on the second Research Excellence Framework (REF 2021)’

“Let’s all meet up in the Year… 2021”

In my previous post I wrote about the Stern Review, and in particular the portability issue – whereby publications remained with the institution where they were written, rather than moving institutions with the researcher – which seemed by some distance the most vexatious and controversial issue, at least judging by my Twitter feed.

Since then there has been a further announcement about a forthcoming consultation exercise which would seek to look at the detail of the implementation of the Stern Review, giving a pretty clear signal that the overall principles and rationale had been accepted, and that Lord Stern’s comments that his recommendations were meant to be taken as a whole and were not amenable to cherry picking, had been heard and taken to heart.

Today – only ten days or so behind schedule – the consultation has been launched. It invites “responses from higher education institutions and other groups and organisations with an interest in the conduct, quality, funding or use of research”. In paragraph 15, this invitation is opened out to include “individuals”. So as well as contributing to your university response, you’ve also got the opportunity to respond personally. Rather than just complain about it on Twitter.

Responses are only accepted via an online form, although the questions on that form are available for download as a Word document. There are 44 questions for which responses are invited, and although these are free text fields, the format of the consultation is to solicit responses to very specific questions, as perhaps would be expected given that the consultation is about detail and implementation. Paragraph 10 states that

“we have taken the [research excellence] framework as implemented in 2014 as our starting position for this consultation, with proposals made only in those areas where our evidence suggests a need or desire for change, or where Lord Stern’s Independent Review recommends change. In developing our proposals, we have been mindful of the level of burden indicated, and have identified where certain options may offer a more deregulated approach than in the previous framework. We do not intend to introduce new aspects to the assessment framework that will increase burden.”

In other words, I think we can assume that 2014 plus Stern = the default and starting position, and I would be surprised if any radical departures from this resulted from the consultation. Anyone wanting to propose something radically different is wasting their time, even if the first question invites “comments on the proposal to maintain an overall continuity of approach with REF 2014.”

So what can we learn from the questions? I think the first thing that strikes me it’s that it’s a very detailed and very long list of questions on a lot of issues, some of which aren’t particularly contentious. But it’s indicative of an admirable thoroughness and rigour. The second this is that they’re all about implementation. The third is that reduction of burden on institutions is a key criterion, which has to be welcome.

Units of Assessment 

It looks as if there’s a strong preference to keep UoAs pretty much as they are, though the consultation flags up inconsistencies of approach from institutions around the choice of which of the four Engineering Panels to submit to. Interestingly, one of the issues is comparability of outcome (i.e. league tables) which isn’t technically supposed to be something that the REF is concerned with – others draw up league tables using their own methodologies, there’s no ‘official’ table.

It also flags up concerns expressed by the panel about Geography and Archaeology, and worries about forensic science, criminology and film and media studies, I think around subject visibility under current structures. But while some tweaks may be allowed, there will be no change to the current structure of Main Panel/Sub Panel, so no sub-sub-panels, though one of the consultation possibilities is is about sub-panels setting different sub-profiles for different areas that they cover.

Returning all research active staff

This section takes as a starting point that all research active staff will be returned, and seeks views on how to mitigate game-playing and unintended consequences. The consultation makes a technical suggestion around using HESA cost centres to link research active staff to units of assessment, rather than leaving institutions to the flexibility to decide to – to choose a completely hypothetical example drawn in no way from experience with a previous employer – to submit Economists and Educationalists into a beefed up Business and Management UoA. This would reduce that element of game playing, but would also negatively effect those whose research identity doesn’t match their teaching/School/Department identity – say – bioethicists based in medical or veterinary schools, and those involved in area studies and another discipline (business, history, law) who legitimately straddle more than one school. A ‘get returned where you sit’ approach might penalise them and might affect an institution’s ability to tell the strongest possible story about each UoA.

As you’d expect, there’s also an awareness of very real worries about this requirement to return all research active staff leading to the contractual status of some staff being changed to teaching-only. Just as last time some UoAs played the ‘GPA game’ and submitted only their best and brightest, this time they might continue that strategy by formally taking many people out of ‘research’ entirely. They’d like respondents to say how this might be prevented, and make the point that HESA data could be used to track such wholesale changes, but presumably there would need to be consequences in some form, or at least a disincentive for doing so. But any such move would intrude onto institutional autonomy, which would be difficult. I suppose the REF could backdate the audit point for this REF, but it wouldn’t prevent such sweeping changes for next time. Another alternative would be to use the Environment section of the REF to penalise those with a research culture based around a small proportion of staff.

Personally, I’m just unclear how much of a problem this will be. Will there be institutions/UoAs where this happens and where whole swathes of active researchers producing respectable research (say, 2-3 star) are moved to teaching contracts? Or is the effect likely to be smaller, with perhaps smaller groups of individuals who aren’t research active or who perhaps haven’t been producing being moved to teaching and admin only? And again, I don’t want to presume that will always be a negative move for everyone, especially now we have the TEF on the horizon and we are now holding teaching in appropriate esteem. But it’s hard to avoid the conclusion that things might end up looking a bit bleak for people who are meant to be research active, want to continue to be research active, but who are deemed by bosses not to be producing.

Decoupling staff from outputs

In the past, researchers were returned with four publications minus any reductions for personal circumstances. Stern proposed that the number of publications to be returned should be double the number of research active staff, with each person being about to return between 0 and 6 publications. A key advantage of this is that it will dispense with the need to consider personal circumstances and reductions in the number of publications – straightforward in cases of early career researchers and maternity leaves, but less so for researchers needing to make the case on the basis of health problems or other potentially traumatic life events. Less admin, less intrusion, less distress.

One worry expressed in the document is about whether this will allow panel members to differentiate between very high quality submissions with only double the number of publications to be returned. But they argue that sampling would be required if a greater multiple were to be returned.

There’s also concern that allowing a maximum of six publications could allow a small number of superstars to dominate a submission, and a suggestion is that the minimum number moves from 0 to 1, so at least one publication from every member of research active staff is returned. Now this really would cause a rush to move those perceived – rightly or wrongly – as weak links off research contracts! I’m reminded of my MPhil work on John Rawls here, and his work on the difference principle, under which nearly just society seeks to maximise the minimum position in terms of material wealth – to have the richest poorest possible. Would this lead to a renewed focus on support for career young researchers, for those struggling for whatever reason, to attempt to increase the quality of the weakest paper in the submission and have the highest rated lowest rated paper possible?

Or is there any point in doing any of that, when income is only associated with 3 (just) and 4? Do we know how the quality of the ‘tail’ will feed into research income, or into league tables if it’s prestige that counts? I’ll need to think a bit more about this one. My instinct is that I like this idea, but I worry about unintended consequences (“Quick, Professor Fourstar, go and write something – anything – with Dr Career Young!”).

Portability

On portability – whether a researcher’s publications move with them (as previously) or stay with the institution where they were produced (like impact) – the consultation first notes possible issues about what it doesn’t call a “transfer window” round about the REF census date. If you’re going to recruit someone new, the best time to get them is either at the start of a REF cycle or during the meaningless end-of-season games towards the end of the previous one. That way, you get them and their outputs for the whole season. True enough – but hard to see that this is worse than the current situation where someone can be poached in the 89th minute and bring all their outputs with them.

The consultation’s second concern is verification. If someone moves institution, how do we know which institution can claim what? As we found with open access, the point of acceptance isn’t always straightforward to determine, and that’s before we get into forms of output other than journal articles. I suppose my first thought is that point-of-submission might be the right point, as institutional affiliation would have to be provided, but then that’s self declared information.

The consultation document recognises the concern expressed about the disadvantage that portability may have for certain groups – early career researchers and (a group I hadn’t considered) people moving into/out of industry. Two interesting options are proposed – firstly, that publications are portable for anyone on a fixed term contract (though this may inadvertently include some Emeritus Profs) or for anyone who wasn’t returned to REF 2014.

One other non-Stern alternative is proposed – that proportionate publication sharing between old and new employer take place for researchers who move close to the end date. But this seems messy, especially as different institutions may want to claim different papers. For example if Dr Nomad wrote a great publication with co-authors from Old and from New, neither would want it as much as a great publication that she wrote by herself or with co-authors from abroad. This is because both Old and New could still return that publication without Dr Nomad because they had co-authors who could claim that publication, and publications can only be returned once per UoA, but perhaps multiple times by different UoAs.

Overall though – that probable non-starter aside – I’d say portability is happening, and it’s just a case of how to protect career young researchers. And either non-return last time, or fixed term contract = portability seem like good ideas to me.

Interestingly, there’s also a question about whether impact should become portable. It would seem a bit odd to me of impact and publications were to swap over in terms of portability rules, so I don’t see impact becoming portable.

Impact

I’m not going to say too much about impact here and now- this post is already too long, and I suspect someone else will say it better.

Miscellaneous 

Other than that…. should ORCID be mandatory? Should Category C (staff not employed by the university, but who research in the UOA) be removed as an eligible category? Should there be a minimum fraction of FTE to be returnable (to prevent overseas superstars being returnable on slivers of contracts)? What exactly is a research assistant anyway? Should a reserve publication be allowed when publication of a returned article is expected horrifyingly close to the census date? Should quant data be used to support assessment in disciplines where it’s deemed appropriate? Why do birds suddenly appear, every time you are near, and what metrics should be used for measuring such birds?

There’s a lot more to say about this, and I’ll be following discussions and debates on twitter with interest. If time allows I’ll return to this post or write some more, less knee-jerky comments over the next days and weeks.

The Stern Review – Publications, Portability, and Panic

Research Managers everywhere, earlier today.

The Stern Review on the future of the REF is out today, and there are any number of good summaries of the key recommendations that you can read. You could also follow the #sternreview hashtag on Twitter, or read it for yourself. It’s not particularly long, and it’s an easy read considering. The first point worth noting is that these are recommendations, not final policy, and they’re certainly nothing like a worked up final set of guidance notes for the next REF. I won’t repeat the summary, and I won’t add much on the impact issue, which Prof Mark Reed aka @fasttrackimpact has covered already.

The issue that has set twitter ablaze is that of portability – that is, which institution gets to return an academic’s publications when she moves from one institution to another. Under the old rules, there was full portability. So if Professor Portia Bililty moved from one institution to another in the final months of a REF cycle, all of her publications would come with her, and would all be returnable by her new employer. Her old employer lost all claim. Impact was different – that remained with the institution where it was created.

This caused problems. As the report puts it

72. There is a problem in the current REF system associated with the demonstrable increase in the number of individuals being recruited from other institutions shortly before the census date. This has costs for the UK HEI system in terms of recruitment and retention. An institution might invest very significantly in the recruitment, start up and future career of a faculty member, only to see the transfer market prior to REF drastically reduce the returns to that investment. This is a distortion to investment incentives in the direction of short-termism and can encourage rent-seeking by individuals and put pressure on budgets.

There was also some fairly grubby game-playing whereby big names from outside the UK were brought in on fractional contracts for their publications alone. To be fair, I’ve heard about places where this was done for other reasons, where these big names regularly attended their new fractional employer, helped develop research culture, mentored career young researchers and published articles with existing faculty. But let’s not pretend that happened everywhere.

So there’s a problem to be solved.

Stern’s response is to say that outputs – like impact – will no longer be portable.

73. We therefore recommend that outputs should be submitted only by the institution where the output was demonstrably generated. If individuals transfer between institutions (including from overseas) during the REF period, their works should be allocated to the HEI where they were based when the work was accepted for publication. A smaller maximum number of outputs might be permitted for the outputs of staff who have left an institution through retirement or to another HEI. Bearing in mind Recommendation 2, which recommends that any individual should be able to submit up to six outputs, a maximum of three outputs from those who have left the institution in the REF period would seem appropriate.
74. HEIs hiring staff during the REF cycle would be able to include them in their staff return. But they would be able to include only outputs by the individual that have been accepted for publication after joining the institution. Disincentivising short-term and narrowly-motivated movement across the sector, whilst still incentivising long-term investment in people will benefit UK research and should also encourage greater collaboration across the system.

I have to say that my first reaction to this will be extremely positive. The poaching and gameplaying were very dispiriting, and this just seems…. fairer.

However, looking at the Twitter reaction, the response was rather different. Concern was expressed that this would make it very difficult for researchers to move institutions, and it would make it especially difficult for early career researchers. I’ve been back and forth on this, and I’m no longer convinced that this is such a problem.

Let’s play Fantasy REF Manager 2020. It’s the start of the 2016/2017 season academic year. All of the existing publications from my squad of academics are mine to return, whatever happens to them and whatever career choices they make. Let’s say that one of my promising youth players  early career researchers gets an offer for elsewhere. I can try to match or beat whatever offer she has, but whatever happens, my team gets credit for the publications she’s produced. Let’s say that she moves on, and I want to recruit a replacement, and I identify the person I want. He’s got some great publications which he can’t bring with him… but I don’t need them, because I’ve got those belonging to his predecessor. Of course, I’d be very interested in track record, but I’m appointing entirely on potential. His job is to pick up where she left off.

Might recruiting on potential actually work in favour of early career researchers? Under the old system, if I were a short termist manager, I’d probably favour the solid early-mid career plodder who can bring me a number of guaranteed, safe publications, rather than someone who is much longer on promise but shorter on actual published publications. Might it also bring an end to the system where very early career researchers were advantaged just by having *any* bankable publications that had actually appeared?

I wonder if some early career researchers are so used to a system where they’re (unfairly) judged by the sole criterion of potential REF contribution that they’re imagining a scenario where they – and perhaps they alone – are being prevented from using the only thing that makes them employable. Institutions with foresight and with long term planning have always recruited on the basis of potential and other indicators and factors beyond the REF, and this change may force more of them to do that.

However, I can see a few problems that I might have as Fantasy REF Manager. The example above presumed one-in, one-out. But what if I want to increase the size of my squad through building new areas of specialism, or put together an entirely new School or Research Group? This might present more of a problem, because it’ll take much longer for me to see any REF benefits in exchange for my investment. However, rival managers would argue that the old rules meant I could do an academic-Chelsea or academic-Manchester City, and just buy all those REF benefits straight away. And that doesn’t feel right.

Another problem might be if I was worried about returning publications from people who have left. What image to it give to the REF panel if more than a certain small percentage of your returned publications are from researchers who’ve left? Would it make us look like we were trading on past glories, while in fact we’d deteriorated rapidly? Perhaps some guidance to the panels that they’re to take no account of this in assessing submissions would help here, and a clear signal that a good publication by a researcher-past has the same value as researcher-current.

Does the new system give me as the Fantasy REF Manager too much power over my players, early career or not? I’m not sure. It’s true that I have their publications in the bag, so they can’t threaten me with taking them away. But I’m still going to want to keep them on my team if I think they’re going to continue to produce work of that standard that I want in the future. If I don’t think that – for whatever reason – then I’ve no reason to want to keep them. They can still hold me to ransom, but what they’re holding over me is their future potential, not recent past glories. And to me, that seems more like an appropriate correction in the balance of power. Though… might any discrimination be more likely to be against career elderly researchers who I think are winding down? Not sure.

Of course, there are compromise positions between full portability and no portability. Perhaps a one or two year window of portability, and perhaps longer for early career researchers… though that might give some too great an advantage. That would be an improvement on the status quo, and might assuage some worries that a lot of ECRs (judging by my timeline on Twitter, anyway) have at the moment.

Even with a window, there are potential problems around game-playing. Do researchers looking for a move hold off from submitting their papers? Might they filibuster corrections and final changes? Might editors be pressurised to delay formal acceptances? Are we clear what constitutes a formal date of acceptance (open access experience suggests not)? And probably most seriously, might papers “under review” rather than papers published be the new currency?

Probably the last point is what worries me most, but I think these are relatively small issues, and I’d be worried if hiring decisions were based on such small margins. But perhaps they are.

This article is entirely knee-jerk. I’m making it up as I go along, changing my mind, being influenced. But I think that ECRs have less to worry about than many fear, and I think my tentative view is that limiting portability – either entirely, or with a narrow window – is significantly better than the current situation of unlimited portability. But I may have missed something, and I’m open to convincing.

Please feel free to tell me what I’ve missed in the comments, or tweet me.

UPDATE: 29th July AM

I’ve been following the discussion on Twitter with some interest, and I’ve been reflecting on whether or not there’s a particular issue for early career researchers. As I said earlier, I’ve been going backwards and forwards on this. Martin Eve has written an excellent post in which he argues that some of the concern may be because

“the current hiring paradigm is so geared towards REF and research it can be hard to imagine what a new hiring environment looks like”

He also makes an important point about ownership of IP, which a lot of academics don’t seem to understand.

Athene Donald has written a really interesting post in which she describes “egregious examples” of game-playing which she’s seen first hand, and anyone who doesn’t think this is a serious issue needs to read this. She also draws much-needed attention to a major benefit of the proposals – that returning everyone and having returning nx2 publications does away with all of the personal circumstances exceptions work required last time to earn the right to submit fewer than four outputs – this is difficult and time consuming for institutions, and potentially distressing for individuals. She also echoes Martin Eve’s point about some career young researchers not being able to think into a new paradigm yet by recalling her long experience of REFs and RAEs.

However, while I do – on the whole – think that some early career researchers are overreacting, perhaps not understanding that the game changes for everyone, and that appointments are now on potential, not on recent publishing history. And that this might benefit them as I argued above.

Having said that, I am now persuaded that there are good arguments for an exception to the portability rules for ECRs. My sense is that there’s a fair amount of mining and developing the PhD for publications that could be done, but after that, there has to come a stage of moving on to the next thing, adding new strings to the bow, and that that might in principle be a less productive time in terms of publishing. And although I think at least some ECR worries are misplaced, if what I’m reading on Twitter is representative, I think there’s a case for taking them seriously and doing something to assuage those fears with an exemption or limited exemption. There’s a lot that’s positive about the Stern Review, but I think the confidence of the ECR community is important in itself.

Some really interesting issues have been raised that relate to detail and to exceptions and which would have to be ironed out later, but are worth consideration. Can an institution claim the publications of a teaching fellow? (I’d argue no). What happens to publications accepted when the author has two fractional (and presumably temporary) contracts? (I’d argue they can’t be claimed, certainly not if the contract is sessional). What if the author is unemployed?

One argument I’ve read a few times is that there’s a strong incentive for institutions to hire from within, rather than from without. But I’m not clear why that is – in my example above, I already have any publications from internal candidates, whether or not I make an internal appointment. I can’t have the publications of anyone from outside – so it’s a case of the internal candidates future publications (plus broader contribution, but let’s take that as read) versus the external candidate’s. I think that sounds like a reasonably level playing field, but perhaps I’m missing something. I suppose I wouldn’t have to return publications of someone who’s left if I make an internal appointment, but if there’s no penalty (formal or informal) for this, why should I – as Fantasy REF Manager -care? If there were portability, I’d be choosing between the internal’s past and potential, and the external’s past and potential. That might change my calculations, depending on those publications – though actually if the internal’s publications were co-authored with existing faculty I might not mind if they go. So…. yes, there is a whole swamp of unintended consequences here, but I’m not sure whether allowing ECR portability helps any.

The rise of the machines – automation and the future of research development

"I've seen research ideas you people wouldn't believe. Impact plans on fire off the shoulder of Orion. I watched JeS-beams glitter in the dark near the Tannhäuser ResearchGate. All those proposals will be lost in time, like tears...in...rain. Time to revise and resubmit."
“I’ve seen first drafts you people wouldn’t believe. Impact plans on fire off the shoulder of Orion. I watched JeS beams glitter in the dark near the Tannhäuser ResearchGate. All those research proposals will be lost in time, like tears…in…rain. Time to resubmit.”

In the wake of this week’s Association of Research Managers and Administrator‘s conference in Birmingham, Research Professional has published an interesting article by Richard Bond, head of research administration at the University of the West of England. The article – From ARMA to avatars: expansion today, automation tomorrow? – speculates about the future of the research management/development profession given the likely advances of automation and artificial intelligence. Each successive ARMA conference is hailed as the largest ever, and ARMA’s membership has grown rapidly over recent years, probably reflecting increasing numbers of research support roles, increased professionalism, an increased awareness of ARMA and the attractiveness of what it offers in terms of professional development. But might better, smarter computer systems reduce, and perhaps even eliminate the need for some research development roles?

In many ways, the future is already here. In my darker moments I’ve wondered whether some colleagues might be replicants or cylons. But many universities already have (or are in the progress of getting) some form of cradle-to-grave research management information system which has the potential to automate many research support tasks, both pre and post award. Although I wasn’t in the session where the future of JeS, the online submission grant system used by RCUK UKRI, tweets from the session indicate that JeS 2.0 is being seen as a “grant getting service” and a platform to do more than just process applications, which could well include distribution of funding opportunities. Who knows what else it might be able to do? Presumably it can link much better to costing tools and systems, allowing direct transfer of costing and other informations to and from university systems.

A really good costing tool might be able to do a lot of things automatically. Staff costs are already relatively straightforward to calculate with the right tools  – the complication largely comes from whether funders expect figures to include inflation and cost of living/salary increment pay rises to be included or not. But greater uniformity across funders could help, and setting up templates for individual funders could be done, and in many places is already done. Non-pay costs are harder, but one could imagine a system that linked to travel and bookings websites and calculated the average cost of travel from A to B. Standard costs could be available for computers and for consumables, again, linking to suppliers’ catalogues. This could in principle allow the applicant (rather than a research administrator) to do the budget for the grant application, but I wonder if there’s much appetite for doing that from applicants who don’t do this. I also think there’s a role for the research costing administrator in terms of helping applicants flush out all of the likely costs – not all of which will occur to the PI – as well as dealing with the exceptions that the system doesn’t cover. But even if specialist human involvement is still required, giving people better tools to work smarter and more efficiently – especially if the system is able to populate the costings section application form directly without duplication – would reduce the amount of humans required.

While I don’t think we’re there yet, it’s not hard to imagine systems which could put the right funding opportunities in front of the right academics at the right time and in the right format. Research Professional has offered a customisable research funding alerts service for many years now, and there’s potential for research management systems to integrate this data, combine it with what’s known about individual researchers and research team’s interests, and put that information in front of them automatically.

I say we’re not there yet, because I don’t think the information is arriving in the right format – in a quick and simple summary that allows researchers to make very quick decisions about whether to read on, or move on to the next of the twelvety-hundred-and-six unread emails. I also wonder whether the means of targeting the right academics are sufficiently nuanced. A ‘keywords’ approach might help if we could combine research interest keyword sets used by funders, research intelligence systems, and academics. But we’d need a really sophisticated set of keywords, coving not just discipline and sub-discipline, but career stage, countries of interest, interdisciplinary grand challenges and problems etc. Another problem is that I don’t think call summaries are – in general – particularly well-written (though they are getting better) by funders, though we could perhaps imagine them being tailored for use in these kinds of systems in the future. A really good research intelligence system could also draw in data about previous bids to the scheme from the institution, data about success rates for previous calls, access to previously successful applications (though their use is not without its drawbacks).

But even with all this in place, I still think there’s a role for human research development staff in getting opportunities out there. If all we’re doing is forwarding Research Professional emails, then we could and should be replaced. But if we’re adding value through our own analysis of the opportunity, and customising the email for the intended audience, we might be allowed to live. A research intelligence system inevitably just churns out emails that might be well targeted or poorly targeted. A human with detailed knowledge of the research interests, plans, and ambitions of individual researchers or groups can not only target much better, but can make a much more detailed, personalised, and context sensitive analysis of the advantages and disadvantages of a possible application. I can get excited about a call and tell someone it’s ideal for them, and because of my existing relationship with them, that’ll carry weight … a computer can tell them that it’s got a 94.8% match.

It’s rather harder to see automation replacing training researchers in grant writing skills or undertaking lay review of draft grant applications, not least because often the trick with lay review is spotting what’s not there rather than what is. But I’d be intrigued to learn what linguistic analysis tools might be able to do in terms of assessing the required reading level, perhaps making stylistic observations or recommendations, and perhaps flagging up things like the regularity with which certain terms appear in the application relative to the call etc. All this would need interpreting, of course, and even then may not be any use. But it would be interesting to see how things develop.

Impact is perhaps another area where it’s hard to see humans being replaced. Probably sophisticated models of impact development could and should be turned in tools to help academics identify the key stakeholders, come up with appropriate strategies, and identify potential intermediaries with their own institution. But I think human insight and creativity could still add substantial value here.

Post-award isn’t really my area these days, but I’d imagine that project setup could become much easier and involve fewer pieces of paper and documents flying around. Even better and more intuitive financial tools would help PIs manage their project, but there are still accounting rules and procedures to be interpreted, and again, I think many PIs would prefer someone else to deal with the details.

Overall it’s hard to disagree with Bond’s view that a reduction in overall headcount across research administration and management (along with many other areas of work) is likely, and it’s not hard to imagine that some less research intensive institutions might be happy that the service that automated systems could deliver is good enough for them. At more research intensive institutions, better tools and systems will increase efficiency and will enable human staff to work more effectively. I’d imagine that some of this extra capacity will be filled by people doing more, and some of it may lead to a reduction in headcount.

But overall, I’d say – and you can remind me of this when I’m out of a job and emailing you all begging for scraps of consultancy work, or mindlessly entering call details into a database – that I’m probably excited by the possibilities of automation and better and more powerful tools than I am worried about being replaced by them.

I for one welcome our new research development AI overlords.