Prêt-à-non-portability? Implications and possible responses to the phasing out of publication portability

“How much has been decided about the REF? About this much. And how much of the REF period is there to go? Well, again…”

Recently, I attended an Open Forum Events one-day conference with the slightly confusing title ‘Research Impact: Strengthening the Excellence Framework’ and gave a short presentation with the same title as this blog post. It was a very interesting event with some great speakers (and me), and I was lucky enough to meet up with quite a few people I only previously ‘knew’ through Twitter. I’d absolutely endorse Sarah Hayes’ blog post for Research Whisperer about the benefits of social media for networking for introverts.

Oh, and if you’re an academic looking for something approaching a straightforward explanation of the REF, can I recommend Charlotte Mathieson’s excellent blog post. For those of you after in-depth half-baked REF policy stuff, read on…

I was really pleased with how the talk went – it’s one thing writing up summaries and knee-jerk analyses for a mixed audience of semi-engaged academics and research development professionals, but it’s quite another giving a REF-related talk to a room full of REF experts. It was based in part on a previous post I’ve written on portability, but my views (and what we know about the REF) have moved on since then, so I thought I’d have a go at summarising the key points.

I started by briefly outlining the problem and the proposed interim arrangements before looking at the key principles that needed to form part of any settled solution on portability for the REF after next.

Why non-portability? What’s the problem?

I addressed most of this in my previous post, but I think the key problem is that it turns what ought to be something like a football league season into an Olympic event. With a league system, the winner is whoever earns the most points over a long, drawn-out season. Three points is three points, whatever stage of the season it comes in. With Olympic events, it’s all about peaking at the right time during the cycle – and in some events, within the right ten seconds of that cycle. Both are valid as sporting competition formats, but for me, Clive, the REF should be more like a league season than a contest to see who can peak best on census day. And that’s what the previous REF rules encouraged – fractional short-term appointments around the census date; bulking out the submission then letting people go afterwards; rent-seeking behaviour from some academics holding their institution to ransom; poaching and instability; transfer-window effects on mobility; and panic buying.

If the point of the REF is to reward sustained excellence over the previous REF cycle with funding to institutions to support research over the next REF cycle, surely it’s a “league season” model we should be looking at, not an Olympic model. The problem with portability is that it’s all about who each unit of assessment has under contract and able to return at the time, even if that’s not a fair reflection of their average over the REF cycle. So if a world class researcher moves six months before the REF census date, her new institution would get REF credit for all of her work over the last REF cycle, and the one which actually paid her salary would get nothing in REF terms. Strictly speaking, this isn’t a problem of publication portability, it’s a problem of publication non-retention. Of which more later.

I summarised what’s being proposed as regards portability as a transition measure in my ‘Initial Reactions’ post, but briefly, by far the most likely outcome for this REF is one that retains full portability and full retention. In other words, when someone moves institution, she takes her publications with her and leaves them behind. I’m going to follow Phil Ward of Fundermentals and call these Schrodinger’s Publications, but as HEFCE point out, plenty of publications were returned multiple times by multiple institutions in the last REF, as each co-author could return it for her institution. It would be interesting to see what proportion of publications were returned multiple times, and what the record is for the number of times a single publication has been submitted.

Researcher Mobility is a Good Thing

Marie Curie and Mr Spock have more in common than radiation-related deaths – they’re both examples of success through researcher mobility. And researcher mobility is important – it spreads ideas and methods, and allows critical masses of expertise to form. Researchers are human too: they are likely to need to relocate for personal reasons, are entitled to seek better-paid work and better conditions, and might – like any other employee – just benefit from a change of scene.

For all these reasons, future portability rules need to treat mobility as positive, and as a human right. We need to minimise ‘transfer window’ effects that force movement into specific stages of the REF cycle – although it’s worth noting that plenty of other professions have transfer windows – teachers, junior doctors (I think), footballers, and probably others too.

And for this reason, and for reasons of fairness, publications from staff who have departed need to be assessed in exactly the same way as publications from staff who are still employed by the returning UoA. Certainly no UoA should be marked down or regarded as living on past glories for returning as much of the work of former colleagues as they see fit.

Render unto Caesar

Institutions are entitled to a fair return on investment in terms of research, though as I mentioned earlier, it’s not portability that’s the problem here so much as non-retention. As Fantasy REF Manager I’m not that bothered by someone else submitting some of my departed star player’s work written on my £££, but I’m very much bothered if I can’t get any credit for it. Universities are given funding on the basis of their research performance as evaluated through the previous REF cycle to support their ongoing endeavours in the next one. This is a really strong argument for publication retention, and it seems to me to be the same argument that underpins impact being retained by the institution.

However, there is a problem which I didn’t properly appreciate in my previous writings on this. It’s the investment/divestment asymmetry issue, as absolutely no-one except me is calling it. It’s an issue not for the likely interim solution, but for the kind of full non-portability system we might have for the REF after next.

In my previous post I imagined a Fantasy REF Manager operating largely a one-in, one-out policy – thus I didn’t need new appointees’ publications because I got to keep their predecessors’. And provided that staff mobility was largely one-in, one-out, that’s fine. But it’s less straightforward if it’s not. At the moment the University of Nottingham is looking to invest in a lot of new posts around specific areas (“beacons”) of research strength – really inspiring projects, such as the new Rights Lab which aims to help end modern slavery. And I’m sure plenty of other institutions have similar plans to create or expand areas of critical mass.

Imagine a scenario where I as Fantasy REF Manager decide to sack a load of people immediately prior to the REF census date. Under the proposed rules I get to return all of their publications and I keep all of the income associated with them for the duration of the next REF cycle – perhaps seven years’ funding. On the other hand, if I choose to invest in extra posts that don’t merely replace departed staff, it could be a very long time before I see any return, via REF funding at least. It’s not just that I can’t return their publications that appeared before I recruited them, it’s that the consequences of not being able to return a full REF cycle’s worth of publications will have funding implications for the whole of the next REF cycle. The combination of no-REF-disincentive-to-divest and long-lead-time-for-REF-reward-for-investment looks lopsided and problematic.

If I’m a smart Fantasy REF Manager, it means I’ll save up my redundancy axe wielding (at worst) or recruitment freeze (at best) for the end of the REF cycle, and I’ll be looking to invest only right at the beginning of the REF cycle. I’ve no idea what the net effect of all this will be repeated across the sector, but it looks to me as if non-portability just creates new transfer windows and feast and famine around recruitment. And I’d be very worried if universities end up delaying, cancelling or scaling back major strategic research investments because of a lack of REF recognition in terms of new funding.

Looking forward: A settled portability policy

A few years back, HEFCE issued some guidance about Open Access and its place in the coming REF. They did this more or less ‘without prejudice’ to any other aspect of the REF – essentially, whatever the rest of the REF looks like, these will be the open access rules. And once we’ve settled the portability rules for this time (almost certainly using the Schrodinger’s publications model), I’d like to see them issue some similar ‘without prejudice’ guidelines for the following REF.

I think it’s generally agreed that the more complicated but more accurate model that would allow limited portability and full retention can’t be implemented at such short notice. But perhaps something similar could work with adequate notice and warning for institutions to get the right systems in place, which was essentially the point of the OA announcement.

I don’t think a full non-portability, full-retention system as currently envisaged could work without some finessing, and every bit of finessing for fairness comes at the cost of complication.  As well as the investment-divestment asymmetry problem outlined above, there are other issues too.

The academic ‘precariat’ – those on fixed-term/teaching-only/fractional/sessional contracts – need special rules. An institution employing someone to teach one module with no research allocation surely shouldn’t be allowed to return that person’s publications. One option would be to say something like ‘teaching only’ = full portability, no retention; and ‘fixed term with research allocation’ = the Schrodinger system of publications being retained and being portable. Granted, this opens the door to other games being played (perhaps turning down a permanent contract to retain portability?), but I don’t think these are as serious as the current games, and I’m sure they could be finessed.

While I argued previously that career young researchers had more to gain than to lose from a system whereby appointments are made more on potential than on track record, the fact that so many are as concerned as they are means that there needs to be some sort of reassurance or allowance for those not in permanent roles.

Disorder at the border. What happens to publications written on Old Institution’s time, but eventually published under New Institution’s affiliation? We can also easily imagine publication filibustering, whereby researchers delay publication to maximise their position in the job market. Not only are delays in publication bad for science, but there’s also the potential for inappropriate pressure to be applied by institutions to hold something back or rush something out. It could easily put researchers in an impossible position, and has the potential to poison relationships with previous employers and with new ones. Add in the possible effects of multiple job moves on multi-author publications and this gets messy very quickly.

One possible response to this would be to allow a portability/retention window that goes two ways – so my previous institution could still return my work published (or accepted) up to (say) a year after my official leave date. Of course, this creates a lot of admin, but it’s entirely up to my former institution whether it thinks that it’s worth tracking my publications once I’ve gone.

What about retired staff? As far as I can see there’s nothing in any documents about the status of the publications of retired staff, either in this REF or in any future plans. The logic should be that they’re returnable in the same way as those of any other researcher who has left during the REF period. Otherwise we’ll end up with pressure on people to stay on, and perhaps other kinds of odd incentives not to appoint people who might retire before the end of a REF cycle.

One final suggestion…

One further half-serious suggestion… if we really object to game playing, perhaps the only fair way to properly reward excellent research and impact and to minimise game playing is to keep the exact rules of the REF a secret for as long as possible in each cycle, forcing institutions just to focus on “doing good stuff” and to worry less about gaming the REF.

  • If you’re really interested, you can download a copy of my presentation … but if you weren’t there, you’ll just have to wonder about the blank page…

HEFCE publishes ‘Consultation on the second Research Excellence Framework (REF 2021)’

“Let’s all meet up in the Year… 2021”

In my previous post I wrote about the Stern Review, and in particular the portability issue – whereby publications remained with the institution where they were written, rather than moving institutions with the researcher – which seemed by some distance the most vexatious and controversial issue, at least judging by my Twitter feed.

Since then there has been a further announcement about a forthcoming consultation exercise which would look at the detail of the implementation of the Stern Review, giving a pretty clear signal that the overall principles and rationale had been accepted, and that Lord Stern’s comment that his recommendations were meant to be taken as a whole and were not amenable to cherry-picking had been heard and taken to heart.

Today – only ten days or so behind schedule – the consultation has been launched.  It invites “responses from higher education institutions and other groups and organisations with an interest in the conduct, quality, funding or use of research”. In paragraph 15, this invitation is opened out to include “individuals”. So as well as contributing to your university response, you’ve also got the opportunity to respond personally. Rather than just complain about it on Twitter.

Responses are only accepted via an online form, although the questions on that form are available for download as a Word document. There are 44 questions for which responses are invited, and although these are free text fields, the format of the consultation is to solicit responses to very specific questions, as perhaps would be expected given that the consultation is about detail and implementation. Paragraph 10 states that

“we have taken the [research excellence] framework as implemented in 2014 as our starting position for this consultation, with proposals made only in those areas where our evidence suggests a need or desire for change, or where Lord Stern’s Independent Review recommends change. In developing our proposals, we have been mindful of the level of burden indicated, and have identified where certain options may offer a more deregulated approach than in the previous framework. We do not intend to introduce new aspects to the assessment framework that will increase burden.”

In other words, I think we can assume that 2014 plus Stern = the default and starting position, and I would be surprised if any radical departures from this resulted from the consultation. Anyone wanting to propose something radically different is wasting their time, even if the first question invites “comments on the proposal to maintain an overall continuity of approach with REF 2014.”

So what can we learn from the questions? The first thing that strikes me is that it’s a very detailed and very long list of questions on a lot of issues, some of which aren’t particularly contentious. But it’s indicative of an admirable thoroughness and rigour. The second is that they’re all about implementation. The third is that reduction of burden on institutions is a key criterion, which has to be welcome.

Units of Assessment 

It looks as if there’s a strong preference to keep UoAs pretty much as they are, though the consultation flags up inconsistencies of approach from institutions around the choice of which of the four Engineering Panels to submit to. Interestingly, one of the issues is comparability of outcome (i.e. league tables), which isn’t technically supposed to be something the REF is concerned with – others draw up league tables using their own methodologies; there’s no ‘official’ table.

It also flags up concerns expressed by the panel about Geography and Archaeology, and worries about forensic science, criminology and film and media studies, I think around subject visibility under current structures. But while some tweaks may be allowed, there will be no change to the current structure of Main Panel/Sub Panel, so no sub-sub-panels, though one of the consultation possibilities is about sub-panels setting different sub-profiles for different areas that they cover.

Returning all research active staff

This section takes as a starting point that all research active staff will be returned, and seeks views on how to mitigate game-playing and unintended consequences. The consultation makes a technical suggestion around using HESA cost centres to link research active staff to units of assessment, rather than leaving institutions the flexibility to decide – to choose a completely hypothetical example drawn in no way from experience with a previous employer – to submit Economists and Educationalists into a beefed-up Business and Management UoA. This would reduce that element of game playing, but would also negatively affect those whose research identity doesn’t match their teaching/School/Department identity – say, bioethicists based in medical or veterinary schools, and those involved in area studies and another discipline (business, history, law) who legitimately straddle more than one school. A ‘get returned where you sit’ approach might penalise them, and might affect an institution’s ability to tell the strongest possible story about each UoA.

As you’d expect, there’s also an awareness of very real worries about this requirement to return all research active staff leading to the contractual status of some staff being changed to teaching-only. Just as last time some UoAs played the ‘GPA game’ and submitted only their best and brightest, this time they might continue that strategy by formally taking many people out of ‘research’ entirely. They’d like respondents to say how this might be prevented, and make the point that HESA data could be used to track such wholesale changes, but presumably there would need to be consequences in some form, or at least a disincentive for doing so. But any such move would intrude on institutional autonomy, which would be difficult. I suppose the REF could backdate the audit point for this REF, but that wouldn’t prevent such sweeping changes next time. Another alternative would be to use the Environment section of the REF to penalise those with a research culture based around a small proportion of staff.

Personally, I’m just unclear how much of a problem this will be. Will there be institutions/UoAs where this happens and where whole swathes of active researchers producing respectable research (say, 2-3 star) are moved to teaching contracts? Or is the effect likely to be smaller, with perhaps smaller groups of individuals who aren’t research active, or who perhaps haven’t been producing, being moved to teaching and admin only? And again, I don’t want to presume that will always be a negative move for everyone, especially with the TEF on the horizon and teaching now being held in appropriate esteem. But it’s hard to avoid the conclusion that things might end up looking a bit bleak for people who are meant to be research active, want to continue to be research active, but who are deemed by their bosses not to be producing.

Decoupling staff from outputs

In the past, researchers were returned with four publications, minus any reductions for personal circumstances. Stern proposed that the number of publications to be returned should be double the number of research active staff, with each person able to return between 0 and 6 publications. A key advantage of this is that it will dispense with the need to consider personal circumstances and reductions in the number of publications – straightforward in cases of early career researchers and maternity leave, but less so for researchers needing to make the case on the basis of health problems or other potentially traumatic life events. Less admin, less intrusion, less distress.
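
To make the arithmetic concrete, here’s a minimal sketch in Python (my own illustration, not anything from Stern or the consultation) of the proposed limits: the unit’s output pool is twice its research-active headcount, and each person contributes between zero and six outputs. The function name and the reading of “double the number” as a cap rather than an exact target are my assumptions.

```python
# Hypothetical sketch of the Stern output-pool arithmetic (mine, not an official rule).

MIN_PER_PERSON = 0   # Stern's proposed floor; the consultation floats raising this to 1
MAX_PER_PERSON = 6   # proposed per-person ceiling

def satisfies_stern_rules(outputs_per_person):
    """Check a unit's per-person output counts against the proposed limits.

    Reads 'double the number of research active staff' as a cap on the total
    returned -- that reading is an assumption, not something the document states.
    """
    total_cap = 2 * len(outputs_per_person)
    total_returned = sum(outputs_per_person.values())
    per_person_ok = all(MIN_PER_PERSON <= n <= MAX_PER_PERSON
                        for n in outputs_per_person.values())
    return per_person_ok and total_returned <= total_cap

# A unit of five research-active staff could return at most ten outputs.
print(satisfies_stern_rules({"A": 6, "B": 3, "C": 1, "D": 0, "E": 0}))  # True: 10 outputs, all within 0-6
print(satisfies_stern_rules({"A": 7, "B": 3, "C": 0, "D": 0, "E": 0}))  # False: A exceeds the 6-output cap
```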

One worry expressed in the document is whether double the headcount will give panel members enough outputs to differentiate between very high quality submissions. But they argue that sampling would be required if a greater multiple were to be returned.

There’s also concern that allowing a maximum of six publications could allow a small number of superstars to dominate a submission, and one suggestion is that the minimum number moves from 0 to 1, so that at least one publication from every member of research active staff is returned. Now this really would cause a rush to move those perceived – rightly or wrongly – as weak links off research contracts! I’m reminded of my MPhil work on John Rawls here, and his work on the difference principle, under which a nearly just society seeks to maximise the minimum position in terms of material wealth – to have the richest poorest possible. Would this lead to a renewed focus on support for career young researchers, and for those struggling for whatever reason, in an attempt to increase the quality of the weakest paper in the submission and have the highest rated lowest rated paper possible?

Or is there any point in doing any of that, when income is only associated with 3* (just about) and 4*? Do we know how the quality of the ‘tail’ will feed into research income, or into league tables if it’s prestige that counts? I’ll need to think a bit more about this one. My instinct is that I like this idea, but I worry about unintended consequences (“Quick, Professor Fourstar, go and write something – anything – with Dr Career Young!”).

Portability

On portability – whether a researcher’s publications move with them (as previously) or stay with the institution where they were produced (like impact) – the consultation first notes possible issues about what it doesn’t call a “transfer window” round about the REF census date. If you’re going to recruit someone new, the best time to get them is either at the start of a REF cycle or during the meaningless end-of-season games towards the end of the previous one. That way, you get them and their outputs for the whole season. True enough – but hard to see that this is worse than the current situation where someone can be poached in the 89th minute and bring all their outputs with them.

The consultation’s second concern is verification. If someone moves institution, how do we know which institution can claim what? As we found with open access, the point of acceptance isn’t always straightforward to determine, and that’s before we get into forms of output other than journal articles. I suppose my first thought is that point of submission might be the right point, as institutional affiliation would have to be provided, but then that’s self-declared information.

The consultation document recognises the concern expressed about the disadvantage that non-portability may have for certain groups – early career researchers and (a group I hadn’t considered) people moving into/out of industry. Two interesting options are proposed – firstly, that publications are portable for anyone on a fixed-term contract (though this may inadvertently include some Emeritus Profs), and secondly, that they are portable for anyone who wasn’t returned to REF 2014.

One other non-Stern alternative is proposed – that proportionate publication sharing between old and new employer take place for researchers who move close to the end date. But this seems messy, especially as different institutions may want to claim different papers. For example, if Dr Nomad wrote a great publication with co-authors from Old and from New, neither institution would want it as much as a great publication she wrote by herself or with co-authors from abroad. This is because both Old and New could still return the co-authored publication without Dr Nomad – they have co-authors of their own who can claim it – and publications can only be returned once per UoA, though perhaps multiple times by different UoAs.

Overall though – that probable non-starter aside – I’d say non-portability is happening, and it’s just a case of how to protect career young researchers. And either non-return last time or a fixed-term contract conferring portability seem like good ideas to me.

Interestingly, there’s also a question about whether impact should become portable. It would seem a bit odd to me if impact and publications were to swap over in terms of portability rules, so I don’t see impact becoming portable.

Impact

I’m not going to say too much about impact here and now – this post is already too long, and I suspect someone else will say it better.

Miscellaneous 

Other than that…. should ORCID be mandatory? Should Category C (staff not employed by the university, but who research in the UOA) be removed as an eligible category? Should there be a minimum fraction of FTE to be returnable (to prevent overseas superstars being returnable on slivers of contracts)? What exactly is a research assistant anyway? Should a reserve publication be allowed when publication of a returned article is expected horrifyingly close to the census date? Should quant data be used to support assessment in disciplines where it’s deemed appropriate? Why do birds suddenly appear, every time you are near, and what metrics should be used for measuring such birds?

There’s a lot more to say about this, and I’ll be following discussions and debates on Twitter with interest. If time allows I’ll return to this post or write some more, less knee-jerky comments over the next days and weeks.

How useful is reading examples of successful grant applications?

This article is prompted by a couple of Twitter conversations around a Times Higher Education article which quotes Ross Mounce, founding editor of Research Ideas and Outcomes, who argues for open publication at every stage of the research process, including (successful and unsuccessful) grant applications. The article acknowledges that this is likely to be controversial, but it got a few of us thinking about the value of reading other people’s grant applications to improve one’s own.

I’m asked about this a lot by prospective grant applicants – “do you have any examples of successful applications that you can share?” – and while generally I will supply them if I have access to them, I also add substantial caveats and health warnings about their use.

The first and perhaps most obvious worry is that most schemes change and evolve over time, and what works for one call might not work in another. Even if the application form hasn’t changed substantially, funder priorities – both hard priorities and softer steers – may have changed. And even if neither have changed, competitive pressures and improved grant writing skills may well be raising the bar, and an application that got funded – say – three or four years ago might not get funding today. Not necessarily because the project is weaker, but because the exposition and argument would now need to be stronger. This is particularly the case for impact – it’s hard to imagine that many of the impact sections on RCUK applications written in the early days of impact would pass muster now.

The second, and more serious worry, is that potential applicants take the successful grant application far too seriously and far too literally. I’ve seen smart, sensible, sophisticated people become obsessed with a successful grant application and try to copy everything about it, whether relevant or not, as if there was some mystical secret encoded into the text, and any subtle deviation would prevent the magic from working. Things like… the exact balance of the application, the tables/diagrams used or not used (“but the successful application didn’t have diagrams!”), the referencing system, the font choice, the level of technical detail, the choice and exposition of methods, whether there are critical friends and/or a steering group, the number of Profs on the bid, the amount of RA time, the balance between academic and stakeholder impact.

It’s a bit like a locksmith borrowing someone else’s front door key, making as exact a replica as she can, and then expecting it to open her front door too. Or a bit like taking a recipe that you’ve successfully followed and using it to make a completely different dish by changing the ingredients while keeping the cooking processes the same. Is it a bit like cargo cult thinking? Attempting to replicate an observed success or desired outcome by copying everything around it as closely as possible, without sufficient reflection on cause and effect? It’s certainly generalising inappropriately from a very small sample size (often n=1).

But I think – subject to caveats and health warnings – it can be useful to look at previously successful applications from the same scheme. I think it can sometimes even be useful to look at unsuccessful applications. I’ve changed my thinking on this quite a bit in the last few years, when I used to steer people away from them much more strongly. I think they can be useful in the following ways:

  1. Getting a sense of what’s required. It’s one thing seeing a blank application form and list of required annexes and additional documents, it’s another seeing the full beast. This will help potential applicants get a sense of the time and commitment that’s required, and make sensible, informed decisions about their workload and priorities and whether to apply or not.
  2. It also highlights all of the required sections, so no requirement of the application should come as a shock. Increasingly with the impact agenda it’s a case of getting your ducks in a row before you even think about applying, and it’s good to find that out early.
  3. It makes success feel real, and possible, especially if the grant winner is someone the applicant knows, or who works at the same institution. Low success rates can be demoralising, but it helps to know not only that someone, somewhere is successful, but that someone here and close by has been successful.
  4. It does set a benchmark in terms of the state of readiness, detail, thoroughness, and ducks-in-a-row-ness that the attentive potential applicant should aspire to at least equal, if not exceed. Early draft and early stage research applications often have larger or smaller pockets of vaguery and are often held together with a generous helping of fudge. Successful applications should show what’s needed in terms of clarity and detail, especially around methods.
  5. Writing skills. Writing grant applications is a very different skill to writing academic papers, which may go some way towards explaining why the Star Wars error in grant writing is so common. So it’s going to be useful to see examples of that skill used successfully… but having said that, I have a few examples in my library of successes which were clearly great ideas, but which were pretty mediocre as examples of how to craft a grant application.
  6. Concrete ideas and inspiration. Perhaps about how to use social media, or ways to engage stakeholders, or about data management, or other kinds of issues, questions and challenges if (and only if) they’re also relevant for the new proposal.

So on balance, I think reading (funder and scheme) relevant, recent, and highly rated (even if not successful) funding applications can help prospective applicants…. provided that they remember that what they’re reading and drawing inspiration from is a different application from a different team to do different things for different reasons at a different time.

And not a mystical, magical, alchemical formula for funding success.

MOOCing about: My experience of a massively open online course

I’ve just completed my first Massively Open Online Course (or MOOC), entitled ‘The mind is flat: the shocking shallowness of human psychology’, run via the Futurelearn platform.  It was run by Professor Nick Chater and PhD student Jess Whittlestone of Warwick Business School, and this was the second iteration of the course, which I understand will be running again at some point. Although teaching and learning in general (and MOOCs in particular) are off topic for this blog, I thought it might be interesting to jot down a few thoughts about my very limited experience of being on the receiving end of a MOOCing.  There’s been a lot of discussion of MOOCs which I’ve been following in a kind of half-hearted way, but I’ve not seen much (if anything) written from the student perspective.

“Alright dudes… I’m the future of higher education, apparently. Could be worse… could be HAL 9000”

I was going to explain my motivations for signing up for the course to add a bit of context, but one of the key themes of the MOOC has been the shallowness and instability of human reasons and motivations.  We can’t just reach back into our minds, it seems, and retrieve our thinking and decision making processes from a previous point in time.  Rather, the mind is an improviser, and can cobble together – on demand – all kinds of retrospective justifications and explanations for our actions which fit the known facts including our previous decisions and the things we like to think motivate us.

So my post-hoc rationalisation of my decision to sign up is probably three-fold. Firstly, I think a desire for lifelong learning and in particular an interest in (popular) psychology are things I ascribe to myself.  Hence an undergraduate subsidiary module in psychology and having read Stuart Sutherland’s wonderful book ‘Irrationality‘.  A second plausible explanation is that I work with behavioural economists in my current role, and this MOOC would help me understand them and their work better.  A third possibility is that I wanted to find out what MOOCs were all about and what it was like to do one, not least because of their alleged disruptive potential for higher education.

So…. what does the course consist of?  Well, it’s a six week course requiring an estimated five hours of time per week.  Each week-long chunk has a broad overarching theme, and consists of a round-up of themes arising from questions from the previous week, and then a series of short videos (generally between 4 and 20 minutes) either in a lecture/talking head format, or in an interview format.  Interviewees have included other academics and industry figures.  There are a few very short written sections to read, a few experiments to do to demonstrate some of the theories, a talking point, and finally a multiple choice test.  Students are free to participate whenever they like, but there’s a definite steer towards trying to finish each week’s activities within that week, rather than falling behind or ploughing ahead. Each video or page provides the opportunity to add comments, and it’s possible for students to “like” each other’s comments and respond to them.  In particular there’s usually one ‘question of the week’ where comment is particularly encouraged.

The structure means that it’s very easy to fit alongside work and other commitments – so far I’ve found myself watching course videos during half time in Champions League matches (though the half time analysis could have told its own story about the shallowness of human psychology and the desire to create narratives), last thing at night in lieu of bedtime reading, and when killing time between finishing work and heading off to meet friends.  The fact that the videos are short means that it’s not a case of finding an hour or more at a time for uninterrupted study. Having said that, this is a course which assumes “no special knowledge or previous experience of studying”, and I can well imagine that other MOOCs require a much greater commitment in terms of time and attention.

I’ve really enjoyed the course, and I’ve found myself actively looking forward to the start of a new week, and to carving out a free half hour to make some progress into the new material.  As a commitment-light, convenient way of learning, it’s brilliant.  The fact that it’s free helps.  Whether I’d pay for it or not I’m not sure, not least because I’ve learnt that we’re terrible at working out absolute value, as our brains are programmed to compare.  Once a market develops and gives me some options to compare, I’d be able to think about it.  Once I had a few MOOCs under my belt, I’d certainly consider paying actual money for the right course on the right topic at the right level with the right structure. At the moment it’s possible to pay for exams (about £120, or £24 for a “statement of participation”) on some courses, but as they’re not credit bearing it’s hard to imagine there would be much uptake. What might be a better option would be a smaller fee for a self-printable .pdf record of courses completed, especially once people start racking up course completions.

One drawback is the multiple choice method of examining/testing, which doesn’t allow much sophistication or nuance in answers.  A couple of the questions on the MOOC I completed were ambiguous or poorly phrased, and one in particular made very confusing use of “I” and “you” in a scenario question, and I’d still argue (sour grapes alert) that the official “correct” answer was wrong. I can see that multiple choice is the only really viable way of having tests at the moment (though one podcast I was listening to the other day mooted the possibility of machine text analysis marking short essays, based on marks given to a sample), but I think a lot more work needs to go into developing best (and better) practice around question setting.  It’s difficult – as a research student I remember being asked to come up with some multiple choice questions about the philosophy of John Rawls for an undergraduate exam paper, and struggling with that.  Though I did remove the one from the previous paper which asked how many principles of justice there were (answer: it depends how you count them).

But could it replace an undergraduate degree programme?  Could I imagine doing a mega-MOOC as my de facto full time job, watching video lectures, reading course notes and core materials, taking multiple choice questions and (presumably) writing essays?  I think probably not.  I think the lack of human interaction would probably drive me mad – and I say this as a confirmed introvert.  Granted, a degree level MOOC would probably have more opportunities for social interaction – skype tutorials, better comments systems, more interaction with course tutors, local networks to meet fellow students who live nearby – but I think the feeling of disconnection, isolation, and alienation would just be too strong.  Having said that, perhaps to digital natives this won’t be the case, and perhaps compared (as our brains are good at comparing) to the full university experience a significantly lighter price tag might be attractive.  And of course, for those in developing countries or unable or unwilling to relocate to a university campus (for whatever reason), it could be a serious alternative.

But I can certainly see a future that blends MOOC-style delivery with more traditional university approaches to teaching and learning.  Why not restructure lectures into shorter chunks and make them available online, at the students’ convenience?  There are real opportunities to bring in extra content with expert guest speakers, especially industry figures, world leading academic experts, and particularly gifted and engaging communicators.  It’s not hard to imagine current student portals (moodle, blackboard etc) becoming more and more MOOC-like in terms of content and interactivity.  In particular, I can imagine a future where MOOCs offer opportunities for extra credit, or for non-credit bearing courses for students to take alongside their main programme of study.  These could be career-related courses, courses that complement their ‘major’, or entirely hobby or interest based.

One thought that struck me was whether it was FE rather than HE that might be threatened by MOOCs.  Or at least the Adult Ed/evening classes aspect of FE.  But I think even there, the decision to – say – learn Spanish rests on more than one motivation – another is often to meet new people and to learn together, and I don’t think that’s an itch that MOOCs are entirely ready to scratch. But I can definitely see a future for MOOCs as the standard method of continuing professional development in any number of professional fields, whether these are university-led or not. This has already started to happen, with a course called ‘Discovering Business in Society’ counting as an exemption towards one paper of an accounting qualification.  I also understand that Futurelearn are interested in pilot schemes for the use of MOOCs with 16-19 year olds to support learning outcomes in schools.

It’s also a great opportunity for hobbyists and dabblers like me to try something new and pursue other intellectual interests.  I can certainly imagine a future in which huge numbers of people are undertaking a MOOC of one kind or another, with many going from MOOC to MOOC and building up quite a CV of virtual courses, whether for career reasons, personal interest, or a combination of both.  Should we see MOOCs as the next logical and interactive step from watching documentaries? Those who today watch Horizon and Timewatch and, well, most of BBC4, might in future carry that interest forward to MOOCs.

So perhaps rather than seeing MOOCs in terms of what they’re going to disrupt or displace or replace, we’re better off seeing them as something entirely new.

And I’m starting my next MOOC on Monday – Cooperation in the contemporary world: Unlocking International Politics led by Jamie Johnson of the University of Birmingham.  And there are several more that look tempting… How to read your boss from colleagues at the University of Nottingham, and England in the time of Richard III from – where else – the University of Leicester.

Demand mismanagement: a practical guide

I’ve written an article on Demand (Mis)management for Research Professional. While most of the site’s content is behind a paywall, they’ve been kind enough to make my article open access.  Which saves me the trouble of cutting and pasting it here.

Universities are striving to make their grant applications as high in quality as possible, avoid wasting time and energy, and run a supportive yet critical internal review process. Here are a few tips on how not to do it. [read the full article]

In other news, I was at the ARMA conference earlier this week and co-presented a session on Research Development for the Special Interest Group with Dr Jon Hunt from the University of Bath.  A copy of the presentation and some further thoughts will follow once I’ve caught up with my email backlog….

The consequences of Open Access, part 2: Are researchers prepared for greater scrutiny?

In part 1 of this post, I raised questions about how academic writing might have to change in response to the open access agenda.  The spirit of open access surely requires not just the availability of academic papers, but the accessibility of those papers to research users and stakeholders.  I argued that lay summaries and context pieces will increasingly be required, and I was pleased to discover that at least some open access journals are already thinking about this.  In this second part, I want to raise questions about whether researchers and those who support them are ready for the potential extra degree of scrutiny and attention that open access may bring.

On February 23rd 2012, the Journal of Medical Ethics published a paper called After-birth abortion: why should the baby live? by Alberto Giubilini and Francesca Minerva.   The point of the paper was not to advocate “after-birth abortion” (i.e. infanticide), but to argue that many of the arguments that are said to justify abortion also turn out to justify infanticide.  This isn’t a new argument by any means, but presumably there was sufficient novelty in the construction of the argument to warrant publication.  To those familiar with the conventions of applied ethics – the intended readers of the article – it’s understood that it was playing devil’s advocate, seeing how far arguments can be stretched, taking things to their logical conclusion, seeing how far the thin end of the wedge will drive, what’s at the bottom of the slippery slope, just what kind of absurdum things can be reductio-ed to.  While the paper isn’t satire in the same way as Jonathan Swift’s A Modest Proposal, no sensible reader would have concluded that the authors were calling for infanticide to be made legal, in spite of the title.

I understand that what happened next was that the existence of the article – for some reason – attracted attention in the right wing Christian blogosphere, prompting a rash of complaints, hostile commentary, fury, racist attacks, and death threats.  Journal editor Julian Savulescu wrote a blog post about the affair, below which are 624 comments.   It’s enlightening and depressing reading in equal measure.  Quick declaration of interest here – my academic background (such as it is) is in philosophy, and I used to work at Keele University’s Centre for Professional Ethics marketing their courses.  I know some of the people involved in the JME’s response, though not Savulescu or the authors of the paper.

There’s a lot that can (and probably should) be said about the deep misunderstanding that occurred between professional bioethicists and non-academics concerned about ethical issues who read the paper, or who heard about it.  Part of that misunderstanding is about what ethicists do – they explore arguments, analyse concepts, test theories, follow the arguments.  They don’t have any special access to moral truth, and while their private views are often much better thought out than most people’s, most see their role as helping to understand arguments, not pushing any particular position.  Though some of them do that too, especially if it gets them on Newsnight.  I’m not really well informed enough to comment too much on this, but it seems to me that the ethicists haven’t done a great job of explaining what they do to the more moderate and sensible critics.  Those who post death threats and racist abuse are probably past reasoned argument and probably love having something to rail against because it justifies their peculiar world view, but for everyone else, I think it ought to be possible to explain.  Perhaps the notion of a lay summary that I mentioned last time might be helpful here.

Part of the reason for the fuss might have been because the article wasn’t available via open access, so some critics may not have had the opportunity to read the article and make up their own mind.  This might be thought of as a major argument in favour of open access – and of course, it is – the reasonable and sensible would have at least skim-read the article, and it’s easier to marshal a response when what’s being complained about is out there for reference.

However….. the unfortunate truth is that there are elements out there who are looking for the next scandal, for the next chance to whip up outrage, for the next witch hunt.  And I’m not just talking about the blogosphere, I’m talking about elements of the mainstream media, who (regardless of our personal politics) have little respect or regard for notions of truth, integrity and fairness.  If they get their paper sales, web hits, outraged comments, and resulting manufactured “scandal”, then they’re happy.  Think I’m exaggerating?  Ask Hilary Mantel, who was on the receiving end of an entirely manufactured fuss, with comments she made in a long and thoughtful lecture taken deliberately and dishonestly out of context.

While open access will make things easier for high quality journalism and for the open-minded citizen and/or professional, it’ll also make it easier for the scandal-mongers (in the mainstream media and in the blogosphere) to identify the next victim to be thrown to the ravenous outrage-hungry wolves that make up their particular constituency.  It’s already risky to be known to be researching and publishing in certain areas – anything involving animal research; climate change; crop science; evolutionary theory; Münchhausen’s by proxy; vaccination; or (oddly) chronic fatigue syndrome/ME – as each appears to have a hostile activist community ready to pounce on any research that comes back with the “wrong” answer.

I don’t want to go too far in presenting the world outside the doors of the academy as a swamp of unreason and prejudice.  But the fact is that alongside the majority of the general public (and bloggers and journalists) who are both rational and reasonable, there is an element that would be happy to twist (or invent) things to suit their own agenda, especially if that agenda involves whipping up manufactured outrage to enable their constituency to confirm their existing prejudices. Never mind the facts, just get angry!

Doubtless we all know academics who would probably relish the extra attention and are already comfortable with the public spotlight.  But I’m sure we also know academics who do not seek the limelight, who don’t trust the media, and who would struggle to cope with even five minutes of (in)fame(y).  One day you’re a humble bioethicist, presumably little known outside your professional circles, and the next, hundreds of people are wishing you dead and calling you every name under the sun.  While Richard Dawkins seems to revel in his (sweary) hate mail, I think a lot of people would find it very distressing to receive emails hoping for their painful death.  I know it would upset me a lot, so please don’t send me any, okay?  And be nice in the comments…..

Of course, even if things never get that far or go that badly, with open access there’s always a greater chance of hostile comment or criticism from the more mainstream and reasonable media, who have a much bigger platform from which to speak than an academic journal.  This criticism need not be malicious, could be legitimate opinion, could be based on a misunderstanding.  Open access opens up the academy to greater scrutiny and greater criticism.

As for what we do about this….. it’s hard to say.  I certainly don’t say that we retreat behind the safety of our paywalls and sally forth with our research only when guarded by a phalanx of heavy infantry to protect us from the swinish multitude besieging our ivory tower.  But I think that there are things that we can do in order to be better prepared.  The use of lay summaries, and greater consideration of the lay reader when writing academic papers will help guard against misunderstandings.

University external relations departments need to be ready to support and defend academic colleagues, and perhaps need to think about planning for these kind of problems, if they don’t do so already.

The consequences of Open Access: Part 1: Is anyone thinking about the “lay” reader?

The thorny issue of “open access” – which I take to mean the question of how to make the fruits of publicly-funded research freely and openly available to the public – is one that’s way above my pay grade and therefore not one I’ll be resolving in this blog post.  Sorry about that.  I’ve been following the debates with some interest, though not, I confess, an interest which I’d call “keen” or “close”.  No doubt some of the nuances and arguments have escaped me, and so I’ll be going to an internal event in a week or so to catch up.  I expect it’ll be similar to this one helpfully written up by Phil Ward over at Fundermentals.  Probably the best single overview of the history and arguments about open access is an article in this week’s Times Higher by Paul Jump – well worth a read.

I’ve been wondering about some of the consequences of open access that I haven’t seen discussed anywhere yet.  This first post is about the needs of research users, and I’ll be following it up with a post about some consequences of open access for academics that may require more thought.

I wonder if enough consideration is being given to the needs and interests of potential readers and users of all this research which is to be liberated from paywalls and other restrictions.  It seems to me that if Joe Public and Joanna Interested-Professional are going to be able to get their mitts on all this research, then this has very serious implications for academic research and academic writing.  I’d go as far as to say it’s potentially revolutionary, and may require radical and permanent changes to the culture and practice of academic writing for publication in a number of research fields.  I’m writing this to try to find out what thought has been given to this, amidst all the sound and fury about green and gold.

If I were reading an academic paper in a field that I was unfamiliar with, I think there are two things I’d struggle with.  One would be properly and fully understanding the article in itself, and the second would be understanding the article in the context of the broader literature and the state of knowledge in that area.  By way of example, a few years back I was looking into buying a rebounder – a kind of indoor mini-trampoline.  Many vendors made much of a study attributed to NASA which they interpreted as making dramatic claims about the efficacy of rebounder exercising compared to other kinds of exercise.  Being of a sceptical nature and armed with campus access to academic papers that weren’t open access, I went and had a look myself.  At the time, I concluded that these claims weren’t borne out by the study, which was really aimed at looking at helping astronauts recover from spending time in weightlessness.  I don’t have access to the article as I’m writing this, so I can’t re-check, but here’s the abstract.  I see that this paper is over 30 years old, and that eight people is a very small sample size…. so… perhaps superseded and not very highly powered.  I think the final line of the abstract may back up my recollection (“… a finding that might help identify acceleration parameters needed for the design of remedial procedures to avert deconditioning in persons exposed to weightlessness”).

For the avoidance of doubt, I’m not implying any dishonesty or nefarious intent on the part of rebounder vendors and advocates – I may be wrong in my interpretation, and even if I’m not, I expect this is more likely to be a case of misunderstanding a fairly opaque paper rather than deliberate distortion.   In any case, my own experience with rebounders has been very positive, though I still don’t think they’re a miracle or magic bullet exercise.

How would open access help me here?  Well, obviously it would give me access to the paper.  But it won’t help me understand it, won’t help me draw inferences from it, won’t help me place it in the context of the broader literature.  Those numbers in that abstract look great, but I don’t have the first clue what they mean.  Now granted, with full open access I can carry out my own literature search if I have the time, knowledge and inclination.  But it’ll still be difficult for me to compare and contrast and form my own conclusions.  And I imagine that it’ll be harder still for others without a university education and a degree of familiarity with academic papers, or who haven’t read Ben Goldacre’s excellent Bad Science.

I worry that open access will only make it easier for people with an agenda (to sell products, or to push a certain political agenda) to cherry-pick evidence and put together a new ill-deserved veneer of respectability by linking to academic papers and presenting (or feigning to present) a summary of their contents and arguments.  The intellectually dishonest are already doing this, and open access might make it easier.

I don’t present this as an argument against open access, and I don’t agree with a paternalist elitist view that holds that only those with sufficient letters after their name can be trusted to look at the precious research.  Open access will make it easier to debunk the charlatans and the quacks, and that’s a good thing.  But perhaps we need to think about how academics write papers from now on – they’re not writing just for each other and for their students, but for ordinary members of the public and/or research users of various kinds who might find (or be referred to) their paper online.  Do we need to start thinking about a “lay summary” for each paper to go alongside the abstract, setting out what the conclusions are in clear terms, what it means, and what it doesn’t mean?

What do we do with papers that present evidence for a conclusion that further research demonstrates to be false?  In cases of research misconduct, these can be formally withdrawn, but we wouldn’t want to do that in cases of papers that have just been superseded, not least because they might turn out to be correct after all, and are still a valid and important part of the debate.  Of course, the current scientific consensus on any particular issue may not be clear, and it’s less clear still how the state of the debate can be impartially communicated to research users.

I’d argue that we need to think about a format or template for an “information for non-academic readers” or something similar.  This would set out a lay summary of the research, its limitations, links to key previous studies, details of the publishing journal and evidence of its bona fides.  Of course, it’s possible that what would be more useful would be regularly written and re-written evidence briefings on particular topics designed for research users.  One source of lay reviews I particularly like is the NHS Behind the Headlines which comments on the accuracy (or otherwise) of media coverage of health research news.  It’s nicely written, easily accessible, and isn’t afraid to criticise or praise media coverage when warranted.  But even so, as the journals are the original source, some kind of standard boiler plate information section might be in order.

Has there been any discussion of these issues that I’ve missed?  This all seems important to me, and I wouldn’t want us to be in a position of finally agreeing what colour our open access ought to be, only to find that next to no thought has been given to potential readers.  I’ve talked mainly about health/exercise examples in this entry, but all this could apply  just as well to pretty much any other field of research where non-academics might take an interest.