Initial Reactions to HEFCE’s ‘Initial decisions on REF 2021’

This lunchtime HEFCE have announced some more “Initial Decisions” on REF 2021, which I’ve summarised below.

Slightly frustratingly, the details are scattered across a few documents, and it’s easy to miss some of them. There’s an exec summary, a circular letter (which is more of a rectangle, really), the main text of the report that can be downloaded from the bottom of the exec summary page (along with an annex listing UoAs and further particulars for panel chair roles)… and annex A on a further consultation on staff returns and output portability, downloadable from the bottom of the circular letter page.

I’ve had a go at a quick summary, by bullet point theme rather than in the order they appear, or in a grand narrative sweep. This is one of my knee-jerk pieces, and I’ve added a few thoughts of my own. But it’s early days, and possibly I’ve missed something or misunderstood, so please let me know.

Outputs

  • Reserve output allowed where publication may not appear in time
  • Worth only 60% of total mark this time (see scoring system)

I think the reduction in the contribution of outputs to the overall mark (to make way for impact) is probably what surprised me most, and I suspect this will be controversial. I think the original plan was for environment to be downgraded to make way, but there’s a lot more demanded from the environment statement this time (see below), so it’s been protected. Great to have the option of submitting an insurance publication in case one of the in-press ones doesn’t appear by close of play.

Panels/Units of Assessment

  • Each sub-panel to have at least one appointed member for interdisciplinary research “with a specific role to ensure its equitable assessment”. New identifier/flag to capture interdisciplinary outputs
  • Single UoA for engineering, multiple submissions allowed
  • Archaeology split from Geography and Environmental studies – now separate
  • Film and Screen Studies to be explicitly included in UoA 33 with Dance, Drama, Performing Arts
  • Decisions on forensic science and criminology (concerns about visibility) due in Autumn
  • Mapping staff to UoAs will be done by institutions, not HESA cost centres, but may ask for more info in the event of any “major variances” from HESA data.

What do people think about a single UoA for engineering? That’s not an area I support much. Is this just tidying up, or does this have greater implications? And is it ironic that forensic science and criminology have been left on a cop-show cliffhanger?

Environment

  • Expansion of Unit of Assessment environment section to include sections on:
    • Structures to support interdisciplinary research
    • Supporting collaboration with “organisations beyond higher education”
    • Impact template will now be in the environment element
    • Approach to open research/open access
    • Supporting equality and diversity
  • More quant data in UoA environment template (we don’t know what yet)
  • Standard institution-level information
  • Non-assessed, invite-only pilot for an institution-level environment statement
  • Expansion of environment section is given as a justification for maintaining it at 15% of score rather than reducing as expected.

The inclusion of a statement about support for interdisciplinary work is interesting, as this moves beyond merely addressing justifiable criticism about the fate of interdisciplinary research (see the welcome addition to each UoA of an appointed ‘Member for Interdisciplinarity’ above). This makes it compulsory, and an end in itself. This will go down better in some UoAs than others.

Impact

  • Institutional level impact case studies will be piloted, but not assessed
  • Moves towards unifying definitions of “impact” and “academic impact” between REF and Research Councils – both part of dual funding system for research
  • Impact on teaching/curriculum will count as impact – more guidance to be published
  • Underpinning work “at least equivalent to 2*” and published between 1st Jan 2000 and 31st Dec 2020. Impact must take place between 1st Aug 2013 and 31st July 2020 (a rough sketch of how these date windows fit together follows this list)
  • New impact case study template, more questions asked, more directed, more standardised, more “prefatory” material to make assessment easier.
  • Require “routine provision of audit evidence” for case study templates, but not given to panel
  • Formula for calculating the number of case studies required is not yet settled, but overall the number “should not significantly exceed… 2014”. Will be based on some measure of “volume of activity”, possibly outputs
  • Continuation of case studies from 2014 is allowed, but must meet date rules for both impact and publication, need to declare it is a continuation.
  • Increased to 25% of total score
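
Mostly for my own benefit, here’s a very rough sketch of how I read those date windows fitting together. The function and field names are entirely my own invention rather than anything from the guidance, and it only checks dates – the “at least 2*” quality threshold and the continuation declaration are exactly the bits a few lines of code can’t settle.

```python
from datetime import date

# Date windows as summarised above - my reading of the announcement, not official wording.
UNDERPINNING_WINDOW = (date(2000, 1, 1), date(2020, 12, 31))   # publication of underpinning work
IMPACT_WINDOW = (date(2013, 8, 1), date(2020, 7, 31))          # when the impact itself occurs

def dates_look_eligible(publication_date: date, impact_start: date, impact_end: date) -> bool:
    """Rough check that a case study's dates fall inside both windows.

    The quality of the underpinning work and the continuation-declaration
    rules obviously can't be checked this way.
    """
    pub_ok = UNDERPINNING_WINDOW[0] <= publication_date <= UNDERPINNING_WINDOW[1]
    impact_ok = IMPACT_WINDOW[0] <= impact_start and impact_end <= IMPACT_WINDOW[1]
    return pub_ok and impact_ok

# Example: work published in 2005, with impact occurring 2015-2018, passes the date test.
print(dates_look_eligible(date(2005, 6, 1), date(2015, 1, 1), date(2018, 12, 31)))  # True
```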

And like a modern-day impact superhero, here comes Mark Reed aka Fast Track Impact with a blog post of his own on the impact implications of the latest announcement. I have to say that I’m pleased that we’re only having a pilot for institutional case studies, because I’m not sure that’s a go-er.

Assessment and Scoring system

  • Sub-panels may decide to use metrics/citation data, but will set out criteria statements stating whether/how they’ll use it. HEFCE will provide the citation data
  • As 2014, overall excellence profile, 3 sub-profiles (outputs, impact, environment)
  • Five point scale from unclassified to 4*
  • Outputs 60%, Impact 25%, Environment 15%. The increase of impact to 25% has come at the expense of outputs rather than environment, because of the extra environment information being sought (a toy illustration of how the weightings combine follows below).
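
Just to make the arithmetic concrete, here’s a toy illustration of how the three sub-profiles would combine under those weightings. It’s my own sketch, not anything official – and strictly speaking the REF produces quality profiles rather than a single number, though league table compilers tend to boil them down to a grade point average, which is what I’ve done here.

```python
# Weightings from the announcement: outputs 60%, impact 25%, environment 15%.
WEIGHTS = {"outputs": 0.60, "impact": 0.25, "environment": 0.15}

def overall_gpa(sub_profile_gpas: dict) -> float:
    """Combine sub-profile grade point averages (0-4 scale) into a weighted overall GPA."""
    return sum(WEIGHTS[element] * gpa for element, gpa in sub_profile_gpas.items())

# Hypothetical UoA: strong outputs, middling impact and environment.
print(round(overall_gpa({"outputs": 3.2, "impact": 2.8, "environment": 3.0}), 2))  # 3.07
```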

There was some talk of a possible necessity for a 5* category to be able to differentiate at the very top, but I don’t think this gained much traction.

But on the really big questions… further consultation (deadline 29th Sept):

There’s been some kicking into the short grass, but things are looking a bit clearer…

(1) Staff submission:

All staff “with a significant responsibility to undertake research” will be submitted, but “no single indicator identifies those within the scope of the exercise”. Institutions have the option of submitting 100% of staff who meet the core eligibility requirement OR coming up with a code of practice that they’ll use to decide who is eligible. Auditable evidence will be required, and institutions can choose different options for different UoAs.

Proposed core eligibility requirements – staff must meet all of the following (a rough sketch of the test as a checklist follows the list):

  • “have an academic employment function of ‘research only’ or ‘teaching and research’
  • are independent researchers [i.e. not research assistants unless ‘demonstrably’ independent]
  • hold minimum employment of 0.2 full time equivalent
  • have a substantive connection to the submitting institution.”
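
To show just how mechanical the proposed core test is on paper, here’s a rough sketch of those four requirements as a checklist. The field names are mine, not HEFCE’s, and of course the hard part in practice – deciding what counts as “demonstrably independent” or a “substantive connection” – is precisely the part that can’t be reduced to a few lines of code.

```python
from dataclasses import dataclass

@dataclass
class StaffRecord:
    # Field names are my own shorthand for the four proposed requirements.
    employment_function: str         # e.g. "research only", "teaching and research", "teaching only"
    is_independent_researcher: bool  # research assistants only if 'demonstrably' independent
    fte: float                       # contracted full-time equivalent
    has_substantive_connection: bool

def meets_core_eligibility(staff: StaffRecord) -> bool:
    """All four proposed requirements must be met."""
    return (
        staff.employment_function in ("research only", "teaching and research")
        and staff.is_independent_researcher
        and staff.fte >= 0.2
        and staff.has_substantive_connection
    )

# A 0.1 FTE appointment fails the minimum-FTE test, whatever else is true.
print(meets_core_eligibility(StaffRecord("teaching and research", True, 0.1, True)))  # False
```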

I like this as an approach – it throws the question back to universities, and leaves it up to them whether they think it’s worth the time and trouble of running such an exercise in one or more UoAs. And I think the proposed core requirements look sensible, and faithful to the core aim, which is to maximise the number of researchers returned and prevent the hyper-selectivity game being played.

(2) Transition arrangements for non-portability of publications.

HEFCE are consulting on either:

(a) “The simplified model, whereby outputs would be eligible for return by the originating institution (i.e. the institution where the research output was demonstrably generated and at which the member of staff was employed) as well as by the newly employing institution”.
or
(b) “The hybrid approach, with a deadline (to be determined), after which a limited number of outputs would transfer with staff, with eligibility otherwise linked to the originating institution. (This would mean operating two rules for portability in this exercise: the outputs of staff employed before the specified date falling under the 2014 rules of full portability; outputs from staff employed after this date would fall under the new rules.)”

I wrote a previous post on portability and non-portability when the Stern Review was first published, which I still think is broadly correct.

I wonder how simple the simplified model will be… if we end up having to return n×2 publications, choosing those publications from a list of everything published by everyone while they worked here. But it’s probably less work than having a cut-off date.

More to follow….

HEFCE publishes ‘Consultation on the second Research Excellence Framework (REF 2021)’

“Let’s all meet up in the Year… 2021”

In my previous post I wrote about the Stern Review, and in particular the portability issue – whereby publications remained with the institution where they were written, rather than moving institutions with the researcher – which seemed by some distance the most vexatious and controversial issue, at least judging by my Twitter feed.

Since then there has been a further announcement about a forthcoming consultation exercise which would seek to look at the detail of the implementation of the Stern Review, giving a pretty clear signal that the overall principles and rationale had been accepted, and that Lord Stern’s comments that his recommendations were meant to be taken as a whole and were not amenable to cherry picking, had been heard and taken to heart.

Today – only ten days or so behind schedule – the consultation has been launched.  It invites “responses from higher education institutions and other groups and organisations with an interest in the conduct, quality, funding or use of research”. In paragraph 15, this invitation is opened out to include “individuals”. So as well as contributing to your university response, you’ve also got the opportunity to respond personally. Rather than just complain about it on Twitter.

Responses are only accepted via an online form, although the questions on that online form are available for download in a Word document. There are 44 questions for which responses are invited, and although these are free-text fields, the format of the consultation is to solicit responses to very specific questions, as perhaps would be expected given that the consultation is about detail and implementation. Paragraph 10 states that

“we have taken the [research excellence] framework as implemented in 2014 as our starting position for this consultation, with proposals made only in those areas where our evidence suggests a need or desire for change, or where Lord Stern’s Independent Review recommends change. In developing our proposals, we have been mindful of the level of burden indicated, and have identified where certain options may offer a more deregulated approach than in the previous framework. We do not intend to introduce new aspects to the assessment framework that will increase burden.”

In other words, I think we can assume that 2014 plus Stern = the default and starting position, and I would be surprised if any radical departures from this resulted from the consultation. Anyone wanting to propose something radically different is wasting their time, even if the first question invites “comments on the proposal to maintain an overall continuity of approach with REF 2014.”

So what can we learn from the questions? I think the first thing that strikes me is that it’s a very detailed and very long list of questions on a lot of issues, some of which aren’t particularly contentious. But it’s indicative of an admirable thoroughness and rigour. The second thing is that they’re all about implementation. The third is that reduction of burden on institutions is a key criterion, which has to be welcome.

Units of Assessment 

It looks as if there’s a strong preference to keep UoAs pretty much as they are, though the consultation flags up inconsistencies of approach from institutions around the choice of which of the four Engineering Panels to submit to. Interestingly, one of the issues is comparability of outcome (i.e. league tables), which isn’t technically supposed to be something that the REF is concerned with – others draw up league tables using their own methodologies; there’s no ‘official’ table.

It also flags up concerns expressed by the panel about Geography and Archaeology, and worries about forensic science, criminology and film and media studies – I think around subject visibility under current structures. But while some tweaks may be allowed, there will be no change to the current structure of Main Panel/Sub-Panel, so no sub-sub-panels, though one of the consultation possibilities is about sub-panels setting different sub-profiles for different areas that they cover.

Returning all research active staff

This section takes as a starting point that all research active staff will be returned, and seeks views on how to mitigate game-playing and unintended consequences. The consultation makes a technical suggestion around using HESA cost centres to link research active staff to units of assessment, rather than leaving institutions the flexibility to decide – to choose a completely hypothetical example drawn in no way from experience with a previous employer – to submit Economists and Educationalists into a beefed-up Business and Management UoA. This would reduce that element of game-playing, but would also negatively affect those whose research identity doesn’t match their teaching/School/Department identity – say, bioethicists based in medical or veterinary schools, or those involved in area studies and another discipline (business, history, law) who legitimately straddle more than one school. A ‘get returned where you sit’ approach might penalise them and might affect an institution’s ability to tell the strongest possible story about each UoA.

As you’d expect, there’s also an awareness of very real worries about this requirement to return all research active staff leading to the contractual status of some staff being changed to teaching-only. Just as last time some UoAs played the ‘GPA game’ and submitted only their best and brightest, this time they might continue that strategy by formally taking many people out of ‘research’ entirely. They’d like respondents to say how this might be prevented, and make the point that HESA data could be used to track such wholesale changes, but presumably there would need to be consequences in some form, or at least a disincentive for doing so. But any such move would intrude on institutional autonomy, which would be difficult. I suppose the REF could backdate the audit point for this REF, but it wouldn’t prevent such sweeping changes for next time. Another alternative would be to use the Environment section of the REF to penalise those with a research culture based around a small proportion of staff.

Personally, I’m just unclear how much of a problem this will be. Will there be institutions/UoAs where this happens and where whole swathes of active researchers producing respectable research (say, 2-3 star) are moved to teaching contracts? Or is the effect likely to be smaller, with perhaps smaller groups of individuals who aren’t research active, or who perhaps haven’t been producing, being moved to teaching and admin only? And again, I don’t want to presume that will always be a negative move for everyone, especially now that we have the TEF on the horizon and are holding teaching in appropriate esteem. But it’s hard to avoid the conclusion that things might end up looking a bit bleak for people who are meant to be research active, want to continue to be research active, but who are deemed by bosses not to be producing.

Decoupling staff from outputs

In the past, researchers were returned with four publications, minus any reductions for personal circumstances. Stern proposed that the number of publications to be returned should be double the number of research active staff, with each person being able to return between 0 and 6 publications. A key advantage of this is that it will dispense with the need to consider personal circumstances and reductions in the number of publications – straightforward in cases of early career researchers and maternity leave, but less so for researchers needing to make the case on the basis of health problems or other potentially traumatic life events. Less admin, less intrusion, less distress.
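
As a back-of-the-envelope illustration of that decoupling, here’s a toy check of the Stern-style constraint – double the number of research-active staff in total, with each individual contributing between nought and six. It’s my own sketch of the proposal as I understand it, not any official algorithm, and it says nothing about which outputs you’d actually choose.

```python
def selection_satisfies_stern(outputs_per_person: list[int]) -> bool:
    """Check a proposed selection against the Stern-style rule:
    total outputs = 2 x number of research-active staff, each person contributing 0-6."""
    required_total = 2 * len(outputs_per_person)
    return (
        all(0 <= n <= 6 for n in outputs_per_person)
        and sum(outputs_per_person) == required_total
    )

# Ten staff must return twenty outputs between them; a few prolific colleagues
# can cover for those returning nothing.
print(selection_satisfies_stern([6, 5, 4, 3, 2, 0, 0, 0, 0, 0]))  # True (sums to 20)
```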

One worry expressed in the document is about whether this will allow panel members to differentiate between very high quality submissions with only double the number of publications to be returned. But they argue that sampling would be required if a greater multiple were to be returned.

There’s also concern that allowing a maximum of six publications could allow a small number of superstars to dominate a submission, and a suggestion is that the minimum number moves from 0 to 1, so at least one publication from every member of research active staff is returned. Now this really would cause a rush to move those perceived – rightly or wrongly – as weak links off research contracts! I’m reminded of my MPhil work on John Rawls here, and his work on the difference principle, under which a nearly just society seeks to maximise the minimum position in terms of material wealth – to have the richest poorest possible. Would this lead to a renewed focus on support for career young researchers, and for those struggling for whatever reason, to attempt to increase the quality of the weakest paper in the submission and have the highest rated lowest rated paper possible?

Or is there any point in doing any of that, when income is only associated with 3* (just) and 4*? Do we know how the quality of the ‘tail’ will feed into research income, or into league tables if it’s prestige that counts? I’ll need to think a bit more about this one. My instinct is that I like this idea, but I worry about unintended consequences (“Quick, Professor Fourstar, go and write something – anything – with Dr Career Young!”).

Portability

On portability – whether a researcher’s publications move with them (as previously) or stay with the institution where they were produced (like impact) – the consultation first notes possible issues about what it doesn’t call a “transfer window” round about the REF census date. If you’re going to recruit someone new, the best time to get them is either at the start of a REF cycle or during the meaningless end-of-season games towards the end of the previous one. That way, you get them and their outputs for the whole season. True enough – but hard to see that this is worse than the current situation where someone can be poached in the 89th minute and bring all their outputs with them.

The consultation’s second concern is verification. If someone moves institution, how do we know which institution can claim what? As we found with open access, the point of acceptance isn’t always straightforward to determine, and that’s before we get into forms of output other than journal articles. I suppose my first thought is that point-of-submission might be the right point, as institutional affiliation would have to be provided, but then that’s self declared information.

The consultation document recognises the concern expressed about the disadvantage that portability may have for certain groups – early career researchers and (a group I hadn’t considered) people moving into/out of industry. Two interesting options are proposed – firstly, that publications are portable for anyone on a fixed-term contract (though this may inadvertently include some Emeritus Profs); and secondly, that they’re portable for anyone who wasn’t returned to REF 2014.

One other non-Stern alternative is proposed – that proportionate publication sharing between old and new employer take place for researchers who move close to the end date. But this seems messy, especially as different institutions may want to claim different papers. For example, if Dr Nomad wrote a great publication with co-authors from Old and from New, neither would want it as much as a great publication that she wrote by herself or with co-authors from abroad. This is because both Old and New could still return that publication without Dr Nomad, as each has co-authors who could claim it – publications can only be returned once per UoA, but perhaps multiple times by different UoAs.

Overall though – that probable non-starter aside – I’d say non-portability is happening, and it’s just a case of how to protect career young researchers. And either non-return last time or being on a fixed-term contract as triggers for portability seem like good ideas to me.

Interestingly, there’s also a question about whether impact should become portable. It would seem a bit odd to me if impact and publications were to swap over in terms of portability rules, so I don’t see impact becoming portable.

Impact

I’m not going to say too much about impact here and now – this post is already too long, and I suspect someone else will say it better.

Miscellaneous 

Other than that…. should ORCID be mandatory? Should Category C (staff not employed by the university, but who research in the UOA) be removed as an eligible category? Should there be a minimum fraction of FTE to be returnable (to prevent overseas superstars being returnable on slivers of contracts)? What exactly is a research assistant anyway? Should a reserve publication be allowed when publication of a returned article is expected horrifyingly close to the census date? Should quant data be used to support assessment in disciplines where it’s deemed appropriate? Why do birds suddenly appear, every time you are near, and what metrics should be used for measuring such birds?

There’s a lot more to say about this, and I’ll be following discussions and debates on twitter with interest. If time allows I’ll return to this post or write some more, less knee-jerky comments over the next days and weeks.

The Stern Review – Publications, Portability, and Panic

Research Managers everywhere, earlier today.

The Stern Review on the future of the REF is out today, and there are any number of good summaries of the key recommendations that you can read. You could also follow the #sternreview hashtag on Twitter, or read it for yourself. It’s not particularly long, and it’s an easy read considering. The first point worth noting is that these are recommendations, not final policy, and they’re certainly nothing like a worked up final set of guidance notes for the next REF. I won’t repeat the summary, and I won’t add much on the impact issue, which Prof Mark Reed aka @fasttrackimpact has covered already.

The issue that has set twitter ablaze is that of portability – that is, which institution gets to return an academic’s publications when she moves from one institution to another. Under the old rules, there was full portability. So if Professor Portia Bililty moved from one institution to another in the final months of a REF cycle, all of her publications would come with her, and would all be returnable by her new employer. Her old employer lost all claim. Impact was different – that remained with the institution where it was created.

This caused problems. As the report puts it

72. There is a problem in the current REF system associated with the demonstrable increase in the number of individuals being recruited from other institutions shortly before the census date. This has costs for the UK HEI system in terms of recruitment and retention. An institution might invest very significantly in the recruitment, start up and future career of a faculty member, only to see the transfer market prior to REF drastically reduce the returns to that investment. This is a distortion to investment incentives in the direction of short-termism and can encourage rent-seeking by individuals and put pressure on budgets.

There was also some fairly grubby game-playing whereby big names from outside the UK were brought in on fractional contracts for their publications alone. To be fair, I’ve heard about places where this was done for other reasons, where these big names regularly attended their new fractional employer, helped develop research culture, mentored career young researchers and published articles with existing faculty. But let’s not pretend that happened everywhere.

So there’s a problem to be solved.

Stern’s response is to say that outputs – like impact – will no longer be portable.

73. We therefore recommend that outputs should be submitted only by the institution where the output was demonstrably generated. If individuals transfer between institutions (including from overseas) during the REF period, their works should be allocated to the HEI where they were based when the work was accepted for publication. A smaller maximum number of outputs might be permitted for the outputs of staff who have left an institution through retirement or to another HEI. Bearing in mind Recommendation 2, which recommends that any individual should be able to submit up to six outputs, a maximum of three outputs from those who have left the institution in the REF period would seem appropriate.
74. HEIs hiring staff during the REF cycle would be able to include them in their staff return. But they would be able to include only outputs by the individual that have been accepted for publication after joining the institution. Disincentivising short-term and narrowly-motivated movement across the sector, whilst still incentivising long-term investment in people will benefit UK research and should also encourage greater collaboration across the system.

I have to say that my first reaction to this was extremely positive. The poaching and game-playing were very dispiriting, and this just seems…. fairer.

However, looking at the Twitter reaction, the response was rather different. Concern was expressed that this would make it very difficult for researchers to move institutions, and it would make it especially difficult for early career researchers. I’ve been back and forth on this, and I’m no longer convinced that this is such a problem.

Let’s play Fantasy REF Manager 2020. It’s the start of the 2016/2017 season, er, academic year. All of the existing publications from my squad of academics are mine to return, whatever happens to them and whatever career choices they make. Let’s say that one of my promising youth players, er, early career researchers gets an offer from elsewhere. I can try to match or beat whatever offer she has, but whatever happens, my team gets credit for the publications she’s produced. Let’s say that she moves on, and I want to recruit a replacement, and I identify the person I want. He’s got some great publications which he can’t bring with him… but I don’t need them, because I’ve got those belonging to his predecessor. Of course, I’d be very interested in track record, but I’m appointing entirely on potential. His job is to pick up where she left off.

Might recruiting on potential actually work in favour of early career researchers? Under the old system, if I were a short-termist manager, I’d probably favour the solid early-to-mid-career plodder who can bring me a number of guaranteed, safe publications, rather than someone who is much longer on promise but shorter on actual published output. Might it also bring an end to the system where very early career researchers were advantaged just by having *any* bankable publications that had actually appeared?

I wonder if some early career researchers are so used to a system where they’re (unfairly) judged by the sole criterion of potential REF contribution that they’re imagining a scenario where they – and perhaps they alone – are being prevented from using the only thing that makes them employable. Institutions with foresight and with long term planning have always recruited on the basis of potential and other indicators and factors beyond the REF, and this change may force more of them to do that.

However, I can see a few problems that I might have as Fantasy REF Manager. The example above presumed one-in, one-out. But what if I want to increase the size of my squad through building new areas of specialism, or put together an entirely new School or Research Group? This might present more of a problem, because it’ll take much longer for me to see any REF benefits in exchange for my investment. However, rival managers would argue that the old rules meant I could do an academic-Chelsea or academic-Manchester City, and just buy all those REF benefits straight away. And that doesn’t feel right.

Another problem might be if I was worried about returning publications from people who have left. What image does it give to the REF panel if more than a certain small percentage of your returned publications are from researchers who’ve left? Would it make us look like we were trading on past glories, while in fact we’d deteriorated rapidly? Perhaps some guidance to the panels that they’re to take no account of this in assessing submissions would help here, and a clear signal that a good publication by a researcher-past has the same value as one by a researcher-current.

Does the new system give me as the Fantasy REF Manager too much power over my players, early career or not? I’m not sure. It’s true that I have their publications in the bag, so they can’t threaten me with taking them away. But I’m still going to want to keep them on my team if I think they’re going to continue to produce work of the standard that I want in the future. If I don’t think that – for whatever reason – then I’ve no reason to want to keep them. They can still hold me to ransom, but what they’re holding over me is their future potential, not recent past glories. And to me, that seems more like an appropriate correction in the balance of power. Though… might any discrimination be more likely to be against career elderly researchers who I think are winding down? Not sure.

Of course, there are compromise positions between full portability and no portability. Perhaps a one or two year window of portability, and perhaps longer for early career researchers… though that might give some too great an advantage. That would be an improvement on the status quo, and might assuage some worries that a lot of ECRs (judging by my timeline on Twitter, anyway) have at the moment.

Even with a window, there are potential problems around game-playing. Do researchers looking for a move hold off from submitting their papers? Might they filibuster corrections and final changes? Might editors be pressurised to delay formal acceptances? Are we clear what constitutes a formal date of acceptance (open access experience suggests not)? And probably most seriously, might papers “under review” rather than papers published be the new currency?

Probably the last point is what worries me most, but I think these are relatively small issues, and I’d be worried if hiring decisions were based on such small margins. But perhaps they are.

This article is entirely knee-jerk. I’m making it up as I go along, changing my mind, being influenced. But I think that ECRs have less to worry about than many fear, and I think my tentative view is that limiting portability – either entirely, or with a narrow window – is significantly better than the current situation of unlimited portability. But I may have missed something, and I’m open to convincing.

Please feel free to tell me what I’ve missed in the comments, or tweet me.

UPDATE: 29th July AM

I’ve been following the discussion on Twitter with some interest, and I’ve been reflecting on whether or not there’s a particular issue for early career researchers. As I said earlier, I’ve been going backwards and forwards on this. Martin Eve has written an excellent post in which he argues that some of the concern may be because

“the current hiring paradigm is so geared towards REF and research it can be hard to imagine what a new hiring environment looks like”

He also makes an important point about ownership of IP, which a lot of academics don’t seem to understand.

Athene Donald has written a really interesting post in which she describes “egregious examples” of game-playing which she’s seen first hand, and anyone who doesn’t think this is a serious issue needs to read this. She also draws much-needed attention to a major benefit of the proposals – that returning everyone and returning n×2 publications does away with all of the personal circumstances exceptions work required last time to earn the right to submit fewer than four outputs – this is difficult and time-consuming for institutions, and potentially distressing for individuals. She also echoes Martin Eve’s point about some career young researchers not yet being able to think themselves into a new paradigm by recalling her long experience of REFs and RAEs.

However, I do – on the whole – think that some early career researchers are overreacting, perhaps not understanding that the game changes for everyone, and that appointments would now be made on potential, not on recent publishing history. And that this might benefit them, as I argued above.

Having said that, I am now persuaded that there are good arguments for an exception to the portability rules for ECRs. My sense is that there’s a fair amount of mining and developing the PhD for publications that could be done, but after that, there has to come a stage of moving on to the next thing, adding new strings to the bow, and that that might in principle be a less productive time in terms of publishing. And although I think at least some ECR worries are misplaced, if what I’m reading on Twitter is representative, I think there’s a case for taking them seriously and doing something to assuage those fears with an exemption or limited exemption. There’s a lot that’s positive about the Stern Review, but I think the confidence of the ECR community is important in itself.

Some really interesting issues have been raised that relate to detail and to exceptions and which would have to be ironed out later, but are worth consideration. Can an institution claim the publications of a teaching fellow? (I’d argue no). What happens to publications accepted when the author has two fractional (and presumably temporary) contracts? (I’d argue they can’t be claimed, certainly not if the contract is sessional). What if the author is unemployed?

One argument I’ve read a few times is that there’s a strong incentive for institutions to hire from within, rather than from without. But I’m not clear why that is – in my example above, I already have any publications from internal candidates, whether or not I make an internal appointment. I can’t have the publications of anyone from outside – so it’s a case of the internal candidate’s future publications (plus broader contribution, but let’s take that as read) versus the external candidate’s. I think that sounds like a reasonably level playing field, but perhaps I’m missing something. I suppose I wouldn’t have to return publications of someone who’s left if I make an internal appointment, but if there’s no penalty (formal or informal) for this, why should I – as Fantasy REF Manager – care? If there were portability, I’d be choosing between the internal’s past and potential, and the external’s past and potential. That might change my calculations, depending on those publications – though actually if the internal’s publications were co-authored with existing faculty I might not mind if they go. So…. yes, there is a whole swamp of unintended consequences here, but I’m not sure whether allowing ECR portability helps any.

Getting research funding: the significance of significance

"So tell me, Highlander, what is peer review?"
“I’m Professor Connor Macleod of the Clan Macleod, and this is my research proposal!”

In an excellent recent blog post, Lachlan Smith wrote about the “who cares?” question that potential grant applicants ought to consider, and that research development staff ought to pose to applicants on a regular basis.

Why is this research important, and why should it be funded? And crucially, why should we fund this, rather than that? In a comment on a previous post on this blog Jo VanEvery quoted some wise words from a Canadian research funding panel member: “it’s not a test, it’s a contest”. In other words, research funding is not an unlimited good like a driving test or a PhD viva where there’s no limit to how many people can (in principle) succeed. Rather, it’s more like a job interview, qualification for the Olympic Games, or the film Highlander – not everyone can succeed. And sometimes, there can be only one.

I’ve recently been fortunate enough to serve on a funding panel myself, as a patient/public involvement representative for a health services research scheme. Assessing significance in the form of potential benefit for patients and carers is a vitally important part of the scheme, and while I’m limited in what I’m allowed to say about my experience, I don’t think I’m speaking out of turn when I say that significance – and demonstrating that significance – is key.

I think there’s a real danger when writing – and indeed supporting the writing – of research grant applications that the focus gets very narrow, and the process becomes almost inward looking. It becomes about improving it internally, writing deeply for subject experts, rather than writing broadly for a panel of people with a range of expertise and experiences. It almost goes without saying that the proposed project must convince the kinds of subject expert who will typically be asked to review a project, but even then there’s no guarantee that reviewers will know as much as the applicant. In fact, it would be odd indeed if there were to be an application where the reviewers and panel members knew more about the topic than the applicant. I’d probably go as far as to say that if you think the referees and the reviewers know more than you, you probably shouldn’t be applying – though I’m open to persuasion about some early career schemes and some very specific calls on very narrow topics.

So I think it’s important to write broadly, to give background and context, to seek to convince others of the importance and significance of the research question. To educate and inform and persuade – almost like a briefing. I’m always badgering colleagues for what I call “killer stats” – how big is the problem, how many people does it affect, by how much is it getting worse, how much is it costing the economy, how much is it costing individuals, what difference might a solution to this problem make? If there’s a gap in the literature or in human knowledge, make a case for the importance or potential importance in filling that gap.

For blue skies research it’s obviously harder, but even here there is scope for discussing the potential academic significance of the possible findings – academic impact – and what new avenues of research may be opened out, or closed off by a decisive negative finding which would allow effort to be refocused elsewhere. If all research is standing on the shoulders of giants, what could be seen by future researchers standing on the shoulders of your research?

It’s hugely frustrating for reviewers when applicants don’t do this – when they don’t give decision makers the background and information they need to be able to draw informed conclusions about the proposed project. Maybe a motivated reviewer with a lighter workload and a role in introducing your proposal may have time to do her own research, but you shouldn’t expect this, and she shouldn’t have to. That’s your job.

It’s worth noting, by the way, that the existence of a gap in the literature is not itself an argument for it being filled, or at least not through large amounts of scarce research funding. There must be a near infinite number of gaps, such as the one that used to exist about the effect of peanut butter on the rotation of the earth – but we need more than the bare fact of the existence of a gap – or the fact that other researchers can be quoted as saying there’s a gap – to persuade.

Oh, and if you do want to claim there’s a gap, please check Google Scholar or similar first – reviewers and panel members (especially introducers) may very well do that. And from my limited experience of sitting on a funding panel, there’s nothing like one introducer or panel member reeling off a list of studies on a topic where there’s supposedly a gap (and which aren’t referenced in the proposal) to finish off the chances of an application. I’ve not seen enthusiasm or support for a project sucked out of the room so completely and so quickly by any other means.

And sometimes, if there aren’t killer stats or facts and figures, or if a case for significance can’t be made, it may be best to either move on to another idea, or find a different and cheaper way of addressing the challenge. While it may be a good research idea, a key question before deciding to apply is whether or not the application is competitive on significance given the likely competition, the scale of the award, the ambition sought by the funder, and the number of successful projects to be awarded. Given the limits to research funding available, and its increasing concentration into larger grants, there really isn’t much funding for dull-but-worthy work which, taken together, leads to the aggregation of marginal gains to the sum of human knowledge. I think this is a real problem for research, but we are where we are.

Significance may well be the final decider in research funding schemes that are open to a range of research questions. There are many hurdles which must be cleared before this final decider, and while they’re not insignificant, they mainly come down to technical competence and feasibility. Is the methodology not only appropriate, but clearly explained and robustly justified? Does the team have the right mix of expertise? Are the project timescale and deliverables realistic? Are the research questions clearly outlined and consistent throughout? All of these things – and more – are important, but what they do is get you safely through into the final reckoning for funding.

Once all of the flawed or technically unfeasible or muddled or unpersuasive or unclear or non-novel proposals have been knocked out, perhaps at earlier stages, perhaps at the final funding panel stage, what’s left is a battle of significance. To stand the best chance of success, your application needs to convince and even inspire non-expert reviewers to support your project ahead of the competition.

But while this may be the last question, or the final decider between quality projects, it’s one that I’d argue potential grant applicants should consider first of all.

The significance of significance is that if you can’t persuasively demonstrate the significance of your proposed project, your grant application may turn out to be a significant waste of your time.

Using Social Media to Support Research Management – ARMA training and development event

Last week I gave a brief presentation at a training and development event organised by ARMA (Association of Research Managers and Administrators) entitled ‘Using Social Media to Support Research Management’. Also presenting were Professor Andy Miah of the University of Salford, Sierra Williams of the LSE Impact of Social Sciences blog, Terry Bucknell of Altmetric, and Phil Ward of Fundermentals and the University of Kent. A .pdf of my wibblings as inflicted can be found here.

I guess there are three things from the presentation and from the day as a whole that I’d pick out for particular comment.

Firstly, if you’re involved in research management/support/development/impact, then you should be familiar with social media, and by familiar I don’t mean just knowing the difference between Twitter and Friends Reunited – I mean actually using it. That’s not to say that everyone must or should dash off and start a blog – for one thing, I’m not sure I could handle the competition. But I do think you should have a professional presence on Twitter. And I think the same applies to any academics whose research interests involve social media in any way – I’ve spoken to researchers wanting to use Twitter data who are not themselves on Twitter. Call it a form of ethnography if you like (or, probably better, action research), I think you only really understand social media by getting involved – you should “inhabit the ecosystem”, as Andy Miah put it in a quite brilliant presentation that you should definitely make time to watch.

I’ve listed some of the reasons for getting involved, and some of the advantages and challenges, in my presentation. But briefly, it’s only by using it and experiencing for yourself the challenge of finding people to follow, getting followers, getting attention for the messages you want to transmit, risking putting yourself and your views out there that you come to understand it. I used to just throw words like “blog” and “twitter” and “social media engagement” around like zeitgeisty confetti when talking to academic colleagues about their various project impact plans, without understanding any of it properly. Now I can talk about plans to get twitter followers, strategies to gain readers for the project blog, the way the project’s social media presence will be involved in networks and ecosystems relevant to the topic.

One misunderstanding that a lot of people have is that you have to tweet a lot of original content – in fact, it’s better not to. Andy mentioned a “70/30” rule – 70% other people’s stuff, 30% yours, as a rough rule of thumb. Even if your social media presence is just as a kind of curator – finding and retweeting interesting links and making occasional comments, you’re still contributing and you’re still part of the ecosystem, and if your interests overlap with mine, I’ll want to follow you because you’ll find things I miss. David Gauntlett wrote a really interesting article for the LSE impact blog on the value of “publish, then filter” systems for finding good content, which is well worth a read. Filtering is important work.

The second issue I’d like to draw out is an issue around personal and professional identity on Twitter. When Phil Ward, Julie Northam, David Young and I gave a presentation on social media at the ARMA conference in 2012, many delegates were already using Twitter in a personal capacity, but were nervous about mixing the personal and professional. I used to think this was much more of a problem/challenge than I do now. In last week’s presentation, I argued that there were essentially three kinds of Twitter account – the institutional, the personal, and what I called “Adam at work”. Institutional wears a shirt and tie and is impersonal and professional. Personal is sat in its pants on the sofa tweeting about football or television programmes or politics. Adam-at-work is more ‘smart casual’ and tweets about professional stuff, but without being so straight-laced as the institutional account.

Actually Adam-at-Work (and, for that matter You-at-Work) are not difficult identities to work out and to stick to. We all manage it every day.  We’re professional and focused and on-topic, but we also build relations with our office mates and co-workers, and some of that relationship building is through sharing weekend plans, holidays, interests etc. I want to try to find a way of explaining this without resorting to the words “water cooler” or (worse) “banter”, but I’m sure you know what I mean. Just as we need to show our human sides to bond with colleagues in everyday life, we need to do the same on Twitter. Essentially, if you wouldn’t lean over and tell it to the person at the desk next to you, don’t tweet about it. I think we’re all well capable of doing this, and we should trust ourselves to do it. By all means keep a separate personal twitter account (because you don’t want your REF tweets to send your friends to sleep) and use that to shout at the television if you’d like to.

I think it’s easy to exaggerate the dangers of social media, not least because of regular stories about people doing or saying something ill-advised. But it’s worth remembering that a lot of those people are famous or noteworthy in some way, and so attract attention and provocation in a way that we just don’t. While a footballer might get tweeted all kinds of nonsense after a poor performance, I’m unlikely to get twitter-trolled by someone who disagrees with something I’ve written, or booed while catching a train. Though I do think a football crowd style crescendo of booing might be justified in the workplace for people who send mass emails without the intended attachment/with the incorrect date/both.

Having said all that… this is just my experience, and as a white male it may well be that I don’t attract that kind of negative attention on social media. I trust/hope that female colleagues have had similar positive experiences and I’ve no reason to think they haven’t, but I don’t want to pass off my experience as universal. (*polishes feminist badge*).

The third thing is to repeat an invitation which I’ve made before – if anyone would like to write a guest post for my blog on any topic relevant to its general themes, please do get in touch. And if anyone has any questions about twitter, blogging, or social media that they think I might have a sporting chance of answering, please ask away.

MOOCing about: My experience of a massively open online course

I’ve just completed my first Massively Open Online Course (or MOOC), entitled ‘The mind is flat: the shocking shallowness of human psychology’, run via the Futurelearn platform. It was run by Professor Nick Chater and PhD student Jess Whittlestone of Warwick Business School, and this is the second iteration of the course, which I understand will be running again at some point. Although teaching and learning in general (and MOOCs in particular) are off topic for this blog, I thought it might be interesting to jot down a few thoughts about my very limited experience of being on the receiving end of a MOOCing. There’s been a lot of discussion of MOOCs which I’ve been following in a kind of half-hearted way, but I’ve not seen much (if anything) written from the student perspective.

“Alright dudes… I’m the future of higher education, apparently. Could be worse… could be HAL 9000”

I was going to explain my motivations for signing up for the course to add a bit of context, but one of the key themes of the MOOC has been the shallowness and instability of human reasons and motivations.  We can’t just reach back into our minds, it seems, and retrieve our thinking and decision making processes from a previous point in time.  Rather, the mind is an improviser, and can cobble together – on demand – all kinds of retrospective justifications and explanations for our actions which fit the known facts including our previous decisions and the things we like to think motivate us.

So my post-hoc rationalisation of my decision to sign up is probably three-fold. Firstly, I think a desire for lifelong learning and in particular an interest in (popular) psychology are things I ascribe to myself.  Hence an undergraduate subsidiary module in psychology and having read Stuart Sutherland’s wonderful book ‘Irrationality‘.  A second plausible explanation is that I work with behavioural economists in my current role, and this MOOC would help me understand them and their work better.  A third possibility is that I wanted to find out what MOOCs were all about and what it was like to do one, not least because of their alleged disruptive potential for higher education.

So…. what does the course consist of?  Well, it’s a six week course requiring an estimated five hours of time per week.  Each week-long chunk has a broad overarching theme, and consists of a round-up of themes arising from questions from the previous week, and then a series of short videos (generally between 4 and 20 minutes) either in a lecture/talking head format, or in an interview format.  Interviewees have included other academics and industry figures.  There are a few very short written sections to read, a few experiments to do to demonstrate some of the theories, a talking point, and finally a multiple choice test.  Students are free to participate whenever they like, but there’s a definite steer towards trying to finish each week’s activities within that week, rather than falling behind or ploughing ahead. Each video or page provides the opportunity to add comments, and it’s possible for students to “like” each other’s comments and respond to them.  In particular there’s usually one ‘question of the week’ where comment is particularly encouraged.

The structure means that it’s very easy to fit alongside work and other commitments – so far I’ve found myself watching course videos during half time in Champions League matches (though the half time analysis could have told its own story about the shallowness of human psychology and the desire to create narratives), last thing at night in lieu of bedtime reading, and when killing time between finishing work and heading off to meet friends.  The fact that the videos are short means that it’s not a case of finding an hour or more at a time for uninterrupted study. Having said that, this is a course which assumes “no special knowledge or previous experience of studying”, and I can well imagine that other MOOCs require a much greater commitment in terms of time and attention.

I’ve really enjoyed the course, and I’ve found myself actively looking forward to the start of a new week, and to carving out a free half hour to make some progress into the new material. As a commitment-light, convenient way of learning, it’s brilliant. The fact that it’s free helps. Whether I’d pay for it or not I’m not sure, not least because I’ve learnt that we’re terrible at working out absolute value, as our brains are programmed to compare. Once a market develops and gives me some options to compare, I’d be able to think about it. Once I had a few MOOCs under my belt, I’d certainly consider paying actual money for the right course on the right topic at the right level with the right structure. At the moment it’s possible to pay for exams (about £120, or £24 for a “statement of participation”) on some courses, but as they’re not credit bearing it’s hard to imagine there would be much uptake. What might be a better option to offer is a smaller fee for a self-printable .pdf record of courses completed, especially once people start racking up course completions.

One drawback is the multiple choice method of examining/testing, which doesn’t allow much sophistication or nuance in answers.  A couple of the questions on the MOOC I completed were ambiguous or poorly phrased, and one in particular made very confusing use of “I” and “you” in a scenario question, and I’d still argue (sour grapes alert) that the official “correct” answer was wrong. I can see that multiple choice is the only really viable way of having tests at the moment (though one podcast I was listening to the other day mooted the possibility of machine text analysis marking for short essays based on marks given to a sample number), but I think a lot more work needs to go into developing best (and better) practice around question setting.  It’s difficult – as a research student I remember being asked to come up with some multiple choice questions about the philosophy of John Rawls for an undergraduate exam paper, and struggled with that.  Though I did remove the one from the previous paper which asked how many principles of justice there were (answer: it depends how you count them).

But could it replace an undergraduate degree programme?  Could I imagine doing a mega-MOOC as my de facto full time job, watching video lectures, reading course notes and core materials, taking multiple choice questions and (presumably) writing essays?  I think probably not.  I think the lack of human interaction would probably drive me mad – and I say this as a confirmed introvert.  Granted, a degree level MOOC would probably have more opportunities for social interaction – skype tutorials, better comments systems, more interaction with course tutors, local networks to meet fellow students who live nearby – but I think the feeling of disconnection, isolation, and alienation would just be too strong.  Having said that, perhaps to digital natives this won’t be the case, and perhaps compared (as our brains are good at comparing) to the full university experience a significantly lighter price tag might be attractive.  And of course, for those in developing countries or unable or unwilling to relocate to a university campus (for whatever reason), it could be a serious alternative.

But I can certainly see a future that blends MOOC-style delivery with more traditional university approaches to teaching and learning.  Why not restructure lectures into shorter chunks and make them available online, at the students’ convenience?  There are real opportunities to bring in extra content with expert guest speakers, especially industry figures, world leading academic experts, and particularly gifted and engaging communicators.  It’s not hard to imagine current student portals (moodle, blackboard etc) becoming more and more MOOC-like in terms of content and interactivity.  In particular, I can imagine a future where MOOCs offer opportunities for extra credit, or for non-credit bearing courses for students to take alongside their main programme of study.  These could be career-related courses, courses that complement their ‘major’, or entirely hobby or interest based.

One thought that struck me was whether it was FE rather than HE that might be threatened by MOOCs.  Or at least the Adult Ed/evening classes aspect of FE.  But I think even there the decision to – say – learn Spanish is only one motivation – another is often to meet new people and to learn together, and I don’t think that that’s an itch that MOOCs are entirely ready to scratch. But I can definitely see a future for MOOCs as the standard method of continuing professional development in any number of professional fields, whether these are university-led or not. This has already started to happen, with a course called ‘Discovering Business in Society’ counting as an exemption from one paper of an accounting qualification.  I also understand that Futurelearn are interested in pilot schemes for the use of MOOCs by 16-19 year olds to support learning outcomes in schools.

It’s also a great opportunity for hobbyists and dabblers like me to try something new and pursue other intellectual interests.  I can certainly imagine a future in which huge numbers of people are undertaking a MOOC of one kind or another, with many going from MOOC to MOOC and building up quite a CV of virtual courses, whether for career reasons, personal interest, or a combination of both.  Should we see MOOCs as the next logical, interactive step on from watching documentaries? Those who today watch Horizon and Timewatch and, well, most of BBC4, might in future carry that interest forward to MOOCs.

So perhaps rather than seeing MOOCs in terms of what they’re going to disrupt or displace or replace, we’re better off seeing them as something entirely new.

And I’m starting my next MOOC on Monday – Cooperation in the contemporary world: Unlocking International Politics led by Jamie Johnson of the University of Birmingham.  And there are several more that look tempting… How to read your boss from colleagues at the University of Nottingham, and England in the time of Richard III from – where else – the University of Leicester.

Adam Golberg announces new post about Ministers inserting themselves into research grant announcements

“You might very well think that as your hypothesis, but I couldn’t possibly comment”

Here’s something I’ve been wondering recently.  Is it just me, or have major research council funding announcements started to be made by government ministers, rather than by the, er, research councils?

Here are a couple of examples that caught my eye from the last week or so. First, David Willetts MP “announces £29 million of funding for ESRC Centres and Large Grants”.  Thanks Dave!  To be fair, he is Minister of State for Universities and Science.  Rather more puzzling is George Osborne announcing “22 new Centres for Doctoral Training”, though apparently he found the money as Chancellor of the Exchequer.  Seems a bit tenuous to me.

So I had a quick look back through the ESRC and EPSRC press release archives to see if the prominence of government ministers in research council funding announcements was a new thing or not.  Because I hadn’t noticed it before.  With the ESRC, it is new.  Here’s the equivalent announcement from last year in which no government minister is mentioned.  With the EPSRC, it’s been going on for longer.  This year’s archive and the 2013 archive show government ministers (mainly Willetts, sometimes Cable or Osborne) front and centre in major announcements.  In 2012 they get a name check, but normally in the second or third paragraph, not in the headline, and don’t get a picture of themselves attached to the story.

Does any of this matter? Perhaps not, but here’s why I think it’s worth mentioning.  The Haldane Principle is generally defined as “decisions about what to spend research funds on should be made by researchers rather than politicians”.  And one of my worries is that in closely associating political figures with funding decisions, the wrong impression is given.  Read the recent ESRC announcement again, and it’s only when you get down to the ‘Notes for Editors’ section that there’s any indication that there was a competition, and you have to infer quite heavily from those notes that decisions were taken independently of government.

Why is this happening? It might be for quite benign reasons – perhaps research council PR people think (probably not unreasonably) that name-checking a government minister gives them a greater chance of media coverage. But I worry that it might be for less benign reasons related to political spin – seeking credit and basking in the reflected glory of all these new investments, which to the non-expert eye look to be something novel, rather than research council business as usual.  To be fair, there are good arguments for thinking that the current government does deserve some credit for protecting research budgets – a flat cash settlement (i.e. cut only by the rate of inflation each year) is less good than many want, but better than many feared. But it would be deeply misleading if the general public were to think that these announcements represented anything above and beyond the normal day-to-day work of the research councils.

Jo VanEvery tells me via Twitter that ministerial announcements are normal practice in Canada, but something doesn’t quite sit right with me about this, and it’s not a party political worry.  I feel there’s a real risk of appearing to politicise research.  If government claims credit, it’s reasonable for the opposition to criticise… that criticism might be about the level of investment, but might it extend to the investments chosen?  Or do politicians know better than to go there for cheap political points?

Or should we stop worrying and just embrace it? It’s not clear that many people outside of the research ‘industry’ notice anyway (though the graphene announcement was very high profile), and so perhaps the chances of the electorate being misled (about this, at least) are fairly small.

But we could go further.  MEPs to announce Horizon 2020 funding? Perhaps Nick Clegg should announce the results of the British Academy/Leverhulme Small Grants Scheme, although given the Victorian origins of the investments and wealth that support the work of the Leverhulme Trust, perhaps the honour should go to the ghosts of Gladstone or Disraeli.

Is there a danger that research funding calls are getting too narrow?

The ESRC have recently added a little more detail to a previous announcement about a pending call for European-Chinese joint research projects on Green Economy and Population Change.  Specifically, they’re after projects which address the following themes:

Green Economy

  • The ‘greenness and dynamics of economies’
  • Institutions, policies and planning for a green economy
  • The green economy in cities and metropolitan areas
  • Consumer behaviour and lifestyles in a green economy

Understanding Population Change

  • Changing life course
  • Urbanisation and migration
  • Labour markets and social security dynamics
  • Methodology, modelling and forecasting
  • Care provision
  • Comparative policy learning

Projects will need to involve institutions from at least two of the participating European countries (UK, France (involvement TBC), Germany, Netherlands) and two institutions in China. On top of this is an expectation that there will be sustainability/capacity building around the research collaborations, plus the usual further plus points for involving stakeholders and interdisciplinary research.

Before I start being negative, or potentially negative, I have one blatant plug and some positive things to say. The blatant plug is that the University of Nottingham has a campus in Ningbo in China which is eligible for NSFC funding and therefore would presumably count as one Chinese partner. I wouldn’t claim to know all about all aspects of our Ningbo research expertise, but I know people who do.  Please feel free to contact me with ideas/research agendas and I’ll see if I can put you in touch with people who know people.

The positive things.  The topics seem to me to be important, and we’ve been given advance notice of the call and a fair amount of time to put something together.  There’s a reference to Open Research Area procedures and mechanisms, which refers to agreements between the UK, France, Netherlands and Germany on a common decision making process for joint projects in which each partner is funded by their national funder under their own national funding rules.  This is excellent, as it doesn’t require anyone to become an expert in another country’s national funder’s rules, and doesn’t have the double or treble jeopardy problem of previous calls where decisions were taken by individual funders.  It’s also good that national funders are working together on common challenges – this adds fresh insight, invites interesting comparative work and pools intellectual and financial resources.

However, what concerns me about calls like this is that the area at the centre of the particular Venn diagram of this call is really quite small.  It’s open to researchers with research interests in the right areas, with collaborators in the right European countries, with collaborators in China.   That’s two – arguably three – circles in the diagram.  Of course, there’s a fourth – proposals that are outstanding.  Will there be enough strong competition on the hallowed ground at the centre of all these circles? It’s hard to say, as we don’t know yet how much money is available.

I’m all for calls that encourage, incentivise, and facilitate international research.  I’m in favour of calls on specific topics which are under-researched, which are judged of particular national or international importance, or where co-funding from partners can be found to address areas of common interest.

But I’m less sure about having both in one call – both very specific requirements in terms of the nationality of the partner institutions, and in terms of the call themes. Probably the scope of this call is wide enough – presumably the funders think so – but I can’t help thinking that less onerous eligibility requirements in terms of partners could lead to greater numbers of high quality applications.

The consequences of Open Access, part 2: Are researchers prepared for greater scrutiny?

In part 1 of this post, I raised questions about how academic writing might have to change in response to the open access agenda.  The spirit of open access surely requires not just the availability of academic papers, but the accessibility of those papers to research users and stakeholders.  I argued that lay summaries and context pieces will increasingly be required, and I was pleased to discover that at least some open access journals are already thinking about this.  In this second part, I want to raise questions about whether researchers and those who support them are ready for the potential extra degree of scrutiny and attention that open access may bring.

On February 23rd 2012, the Journal of Medical Ethics published a paper called After-birth abortion: why should the baby live? by Alberto Giubilini and Francesca Minerva.   The point of the paper was not to advocate “after birth abortion” (i.e. infanticide), but to argue that many of the arguments that are said to justify abortion also turn out to justify infanticide.  This isn’t a new argument by any means, but presumably there was sufficient novelty in the construction of the argument to warrant publication.  To those familiar with the conventions of applied ethics – the intended readers of the article – it’s understood that it was playing devil’s advocate, seeing how far arguments can be stretched, taking things to their logical conclusion, seeing how far the thin end of the wedge will drive, what’s at the bottom of the slippery slope, just what kind of absurdum can be reductio-ed to.  While the paper isn’t satire in the same way as Jonathan Swift’s A Modest Proposal, no sensible reader would have concluded that the authors were calling for infanticide to be made legal, in spite of the title.

I understand that what happened next was that the existence of the article – for some reason – attracted attention in the right wing Christian blogosphere, prompting a rash of complaints, hostile commentary, fury, racist attacks, and death threats.  Journal editor Julian Savulescu wrote a blog post about the affair, below which are 624 comments.   It’s enlightening and depressing reading in equal measure.  Quick declaration of interest here – my academic background (such as it is) is in philosophy, and I used to work at Keele University’s Centre for Professional Ethics marketing their courses.  I know some of the people involved in the JME’s response, though not Savulescu or the authors of the paper.

There’s a lot that can (and probably should) be said about the deep misunderstanding that occurred between professional bioethicists and non-academics concerned about ethical issues who read the paper, or who heard about it.  Part of that misunderstanding is about what ethicists do – they explore arguments, analyse concepts, test theories, follow the arguments wherever they lead.  They don’t have any special access to moral truth, and while their private views are often much better thought out than most people’s, most see their role as helping to understand arguments, not pushing any particular position.  Though some of them do that too, especially if it gets them on Newsnight.  I’m not really well informed enough to comment too much on this, but it seems to me that the ethicists haven’t done a great job of explaining what they do to those more moderate and sensible critics.  Those who post death threats and racist abuse are probably past reasoned argument and probably love having something to rail against because it justifies their peculiar world view, but for everyone else, I think it ought to be possible to explain.  Perhaps the notion of a lay summary that I mentioned last time might be helpful here.

Part of the reason for the fuss might have been that the article wasn’t available via open access, so some critics may not have had the opportunity to read it and make up their own minds.  This might be thought of as a major argument in favour of open access – and of course, it is: the reasonable and sensible would at least have skim-read the article, and it’s easier to marshal a response when what’s being complained about is out there for reference.

However….. the unfortunate truth is that there are elements out there who are looking for the next scandal, for the next chance to whip up outrage, for the next witch hunt.  And I’m not just talking about the blogosphere, I’m talking about elements of the mainstream media, who (regardless of our personal politics) have little respect or regard for notions of truth, integrity and fairness.  If they get their paper sales, web hits, outraged comments, and resulting manufactured “scandal”, then they’re happy.  Think I’m exaggerating?  Ask Hilary Mantel, who was on the receiving end of an entirely manufactured fuss, with comments she made in a long and thoughtful lecture taken deliberately and dishonestly out of context.

While open access will make things easier for high quality journalism and for the open-minded citizen and/or professional, it’ll also make it easier for the scandal-mongers (in the mainstream media and in the blogosphere) to identify the next victim to be thrown to the ravenous outrage-hungry wolves that make up their particular constituency.  It’s already risky to be known to be researching and publishing in certain areas – animal research; climate change; crop science; evolutionary theory; Münchhausen’s by Proxy; vaccination; and (oddly) chronic fatigue syndrome/ME – each of which appears to have a hostile activist community ready to pounce on any research that comes back with the “wrong” answer.

I don’t want to go too far in presenting the world outside the doors of the academy as being a swamp of unreason and prejudice.  But the fact is that alongside the majority of the general public (and bloggers and journalists) who are both rational and reasonable, there is an element that would be happy to twist (or invent) things to suit their own agenda, especially if that agenda involves whipping up manufactured outrage to enable their constituency to confirm their existing prejudices. Never mind the facts, just get angry!

Doubtless we all know academics who would probably relish the extra attention and are already comfortable with the public spotlight.  But I’m sure we also know academics who do not seek the limelight, who don’t trust the media, and who would struggle to cope with even five minutes of (in)fame(y).  One day you’re a humble bioethicist, presumably little known outside your professional circles, and the next, hundreds of people are wishing you dead and calling you every name under the sun.  While Richard Dawkins seems to revel in his (sweary) hate mail, I think a lot of people would find it very distressing to receive emails hoping for their painful death.  I know it would upset me a lot, so please don’t send me any, okay?  And be nice in the comments…..

Of course, even if things never get that far or go that badly, with open access there’s always a greater chance of hostile comment or criticism from the more mainstream and reasonable media, who have a much bigger platform from which to speak than an academic journal.  This criticism need not be malicious, could be legitimate opinion, could be based on a misunderstanding.  Open access opens up the academy to greater scrutiny and greater criticism.

As for what we do about this….. it’s hard to say.  I certainly don’t say that we retreat behind the safety of our paywalls and sally forth with our research only when guarded by a phalanx of heavy infantry to protect us from the swinish multitude besieging our ivory tower.  But I think that there are things that we can do in order to be better prepared.  The use of lay summaries, and greater consideration of the lay reader when writing academic papers will help guard against misunderstandings.

University external relations departments need to be ready to support and defend academic colleagues, and perhaps need to think about planning for these kind of problems, if they don’t do so already.

The consequences of Open Access: Part 1: Is anyone thinking about the “lay” reader?

The thorny issue of “open access” – which I take to mean the question of how to make the fruits of publicly-funded research freely and openly available to the public – is one that’s way above my pay grade and therefore not one I’ll be resolving in this blog post.  Sorry about that.  I’ve been following the debates with some interest, though not, I confess, an interest which I’d call “keen” or “close”.  No doubt some of the nuances and arguments have escaped me, and so I’ll be going to an internal event in a week or so to catch up.  I expect it’ll be similar to this one helpfully written up by Phil Ward over at Fundermentals.  Probably the best single overview of the history and arguments about open access is an article by Paul Jump in this week’s Times Higher – well worth a read.

I’ve been wondering about some of the consequences of open access that I haven’t seen discussed anywhere yet.  This first post is about the needs of research users, and I’ll be following it up with a post about some consequences of open access for academics that may require more thought.

I wonder if enough consideration is being given to the needs and interests of potential readers and users of all this research which is to be liberated from paywalls and other restrictions.  It seems to me that if Joe Public and Joanna Interested-Professional are going to be able to get their mitts on all this research, then this has very serious implications for academic research and academic writing.  I’d go as far as to say it’s potentially revolutionary, and may require radical and permanent changes to the culture and practice of academic writing for publication in a number of research fields.  I’m writing this to try to find out what thought has been given to this, amidst all the sound and fury about green and gold.

If I were reading an academic paper in a field that I was unfamiliar with, I think there are two things I’d struggle with.  One would be properly and fully understanding the article in itself, and the second would be understanding the article in the context of the broader literature and the state of knowledge in that area.  By way of example, a few years back I was looking into buying a rebounder – a kind of indoor mini-trampoline.  Many vendors made much of a study attributed to NASA which they interpreted as making dramatic claims about the efficacy of rebounder exercising compared to other kinds of exercise.  Being of a sceptical nature and armed with campus access to academic papers that weren’t open access, I went and had a look myself.  At the time, I concluded that these claims weren’t borne out by the study, which was really aimed at looking at helping astronauts recover from spending time in weightlessness.  I don’t have access to the article as I’m writing this, so I can’t re-check, but here’s the abstract.  I see that this paper is over 30 years old, and that eight people is a very small sample size…. so… perhaps superseded and not very highly powered.  I think the final line of the abstract may back up my recollection (“… a finding that might help identify acceleration parameters needed for the design of remedial procedures to avert deconditioning in persons exposed to weightlessness”).

For the avoidance of doubt, I’m not imputing any dishonesty or nefarious intent to rebounder vendors and advocates – I may be wrong in my interpretation, and even if I’m not, I expect this is more likely to be a case of misunderstanding a fairly opaque paper rather than deliberate distortion.   In any case, my own experience with rebounders has been very positive, though I still don’t think they’re a miracle or magic bullet exercise.

How would open access help me here?  Well, obviously it would give me access to the paper.  But it won’t help me understand it, won’t help me draw inferences from it, won’t help me place it in the context of the broader literature.  Those numbers in that abstract look great, but I don’t have the first clue what they mean.  Now granted, with full open access I can carry out my own literature search if I have the time, knowledge and inclination.  But it’ll still be difficult for me to compare and contrast and form my own conclusions.  And I imagine that it’ll be harder still for others without a university education and a degree of familiarity with academic papers, or who haven’t read Ben Goldacre’s excellent Bad Science.

I worry that open access will only make it easier for people with an agenda (to sell products, or to push a certain political agenda) to cherry-pick evidence and put together a new ill-deserved veneer of respectability by linking to academic papers and presenting (or feigning to present) a summary of their contents and arguments.  The intellectually dishonest are already doing this, and open access might make it easier.

I don’t present this as an argument against open access, and I don’t agree with a paternalist elitist view that holds that only those with sufficient letters after their name can be trusted to look at the precious research.  Open access will make it easier to debunk the charlatans and the quacks, and that’s a good thing.  But perhaps we need to think about how academics write papers from now on – they’re not writing just for each other and for their students, but for ordinary members of the public and/or research users of various kinds who might find (or be referred to) their paper online.  Do we need to start thinking about a “lay summary” for each paper to go alongside the abstract, setting out what the conclusions are in clear terms, what it means, and what it doesn’t mean?

What do we do with papers that present evidence for a conclusion that further research demonstrates to be false?  In cases of research misconduct, these can be formally withdrawn, but we wouldn’t want to do that for papers that have merely been superseded, not least because they might turn out to be correct after all, and are still a valid and important part of the debate.  And of course, the current scientific consensus on any particular issue may not itself be clear, and it’s less clear still how the state of the debate can be impartially communicated to research users.

I’d argue that we need to think about a format or template for an “information for non-academic readers” section, or something similar.  This would set out a lay summary of the research, its limitations, links to key previous studies, details of the publishing journal and evidence of its bona fides.  Of course, it’s possible that what would be more useful would be regularly written and re-written evidence briefings on particular topics designed for research users.  One source of lay reviews I particularly like is the NHS Behind the Headlines, which comments on the accuracy (or otherwise) of media coverage of health research news.  It’s nicely written, easily accessible, and isn’t afraid to criticise or praise media coverage when warranted.  But even so, as the journals are the original source, some kind of standard boilerplate information section might be in order.

Has there been any discussion of these issues that I’ve missed?  This all seems important to me, and I wouldn’t want us to be in a position of finally agreeing what colour our open access ought to be, only to find that next to no thought has been given to potential readers.  I’ve talked mainly about health/exercise examples in this entry, but all this could apply just as well to pretty much any other field of research where non-academics might take an interest.