HEFCE publishes ‘Consultation on the second Research Excellence Framework (REF 2021)’

“Let’s all meet up in the Year… 2021”

In my previous post I wrote about the Stern Review, and in particular the portability issue – whereby publications remained with the institution where they were written, rather than moving institutions with the researcher – which seemed by some distance the most vexatious and controversial issue, at least judging by my Twitter feed.

Since then there has been a further announcement about a forthcoming consultation exercise looking at the detail of implementing the Stern Review – a pretty clear signal that the overall principles and rationale had been accepted, and that Lord Stern’s comment that his recommendations were meant to be taken as a whole, and were not amenable to cherry-picking, had been heard and taken to heart.

Today – only ten days or so behind schedule – the consultation has been launched.  It invites “responses from higher education institutions and other groups and organisations with an interest in the conduct, quality, funding or use of research”. In paragraph 15, this invitation is opened out to include “individuals”. So as well as contributing to your university response, you’ve also got the opportunity to respond personally. Rather than just complain about it on Twitter.

Responses are only accepted via an online form, although the questions on that form are available for download as a Word document. There are 44 questions for which responses are invited, and although these are free text fields, the format of the consultation is to solicit responses to very specific questions – as you might expect, given that the consultation is about detail and implementation. Paragraph 10 states that

“we have taken the [research excellence] framework as implemented in 2014 as our starting position for this consultation, with proposals made only in those areas where our evidence suggests a need or desire for change, or where Lord Stern’s Independent Review recommends change. In developing our proposals, we have been mindful of the level of burden indicated, and have identified where certain options may offer a more deregulated approach than in the previous framework. We do not intend to introduce new aspects to the assessment framework that will increase burden.”

In other words, I think we can assume that 2014 plus Stern = the default and starting position, and I would be surprised if any radical departures from this resulted from the consultation. Anyone wanting to propose something radically different is wasting their time, even if the first question invites “comments on the proposal to maintain an overall continuity of approach with REF 2014.”

So what can we learn from the questions? The first thing that strikes me is that it’s a very detailed and very long list of questions on a lot of issues, some of which aren’t particularly contentious – but that’s indicative of an admirable thoroughness and rigour. The second is that they’re all about implementation. The third is that reducing the burden on institutions is a key criterion, which has to be welcome.

Units of Assessment 

It looks as if there’s a strong preference to keep UoAs pretty much as they are, though the consultation flags up inconsistencies of approach from institutions around the choice of which of the four Engineering Panels to submit to. Interestingly, one of the issues is comparability of outcome (i.e. league tables), which isn’t technically supposed to be something the REF is concerned with – others draw up league tables using their own methodologies; there’s no ‘official’ table.

It also flags up concerns expressed by the panel about Geography and Archaeology, and worries about forensic science, criminology, and film and media studies – I think around subject visibility under current structures. But while some tweaks may be allowed, there will be no change to the current Main Panel/Sub-Panel structure, so no sub-sub-panels, though one of the consultation possibilities is about sub-panels setting different sub-profiles for the different areas that they cover.

Returning all research active staff

This section takes as a starting point that all research active staff will be returned, and seeks views on how to mitigate game-playing and unintended consequences. The consultation makes a technical suggestion around using HESA cost centres to link research active staff to units of assessment, rather than leaving institutions the flexibility to decide – to choose a completely hypothetical example drawn in no way from experience with a previous employer – to submit Economists and Educationalists into a beefed-up Business and Management UoA. This would reduce that element of game playing, but would also negatively affect those whose research identity doesn’t match their teaching/School/Department identity – say, bioethicists based in medical or veterinary schools, or those involved in area studies and another discipline (business, history, law) who legitimately straddle more than one school. A ‘get returned where you sit’ approach might penalise them, and might affect an institution’s ability to tell the strongest possible story about each UoA.

As you’d expect, there’s also an awareness of very real worries that this requirement to return all research active staff will lead to the contractual status of some staff being changed to teaching-only. Just as last time some UoAs played the ‘GPA game’ and submitted only their best and brightest, this time they might continue that strategy by formally taking many people out of ‘research’ entirely. The consultation asks respondents how this might be prevented, and makes the point that HESA data could be used to track such wholesale changes – but presumably there would need to be consequences in some form, or at least a disincentive. Any such move would intrude on institutional autonomy, which would be difficult. I suppose the REF could backdate the audit point for this exercise, but that wouldn’t prevent such sweeping changes for next time. Another alternative would be to use the Environment section of the REF to penalise those with a research culture based around a small proportion of staff.

Personally, I’m just unclear how much of a problem this will be. Will there be institutions/UoAs where whole swathes of active researchers producing respectable research (say, 2-3 star) are moved to teaching contracts? Or is the effect likely to be more modest, with perhaps small groups of individuals who aren’t research active, or who haven’t been producing, being moved to teaching and admin only? And again, I don’t want to presume that will always be a negative move for everyone, especially now we have the TEF on the horizon and we are holding teaching in appropriate esteem. But it’s hard to avoid the conclusion that things might end up looking a bit bleak for people who are meant to be research active, want to continue to be research active, but who are deemed by bosses not to be producing.

Decoupling staff from outputs

In the past, researchers were returned with four publications minus any reductions for personal circumstances. Stern proposed that the number of publications to be returned should be double the number of research active staff, with each person able to return between 0 and 6 publications. A key advantage of this is that it dispenses with the need to consider personal circumstances and reductions in the number of publications – straightforward in cases of early career researchers and maternity leaves, but less so for researchers needing to make the case on the basis of health problems or other potentially traumatic life events. Less admin, less intrusion, less distress.

One worry expressed in the document is whether panel members will be able to differentiate between very high quality submissions when only double the number of publications is returned. But the document argues that sampling would be required if a greater multiple were to be returned.

There’s also concern that allowing a maximum of six publications could allow a small number of superstars to dominate a submission, and one suggestion is that the minimum moves from 0 to 1, so that at least one publication from every member of research active staff is returned. Now this really would cause a rush to move those perceived – rightly or wrongly – as weak links off research contracts! I’m reminded of my MPhil work on John Rawls here, and his difference principle, under which a nearly just society seeks to maximise the minimum position in terms of material wealth – to have the richest poorest possible. Would this lead to a renewed focus on support for career young researchers, and for those struggling for whatever reason, in an attempt to increase the quality of the weakest paper in the submission and have the highest rated lowest rated paper possible?
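
Out of idle curiosity, here’s a minimal sketch of that selection game. The names, scores, and greedy method are all mine and entirely hypothetical – nothing like this appears in the consultation – but flipping the per-person minimum from 0 to 1 shows how the rule changes who gets returned:

```python
# Hypothetical output scores per person (4.0 = world-leading, etc.)
scores_by_person = {
    "Prof Fourstar": [4.0, 4.0, 3.5, 3.0],
    "Dr Nomad": [3.5, 3.0, 2.5],
    "Dr Career Young": [2.5, 2.0],
}

def select_outputs(scores_by_person, min_per=0, max_per=6):
    """Greedily choose 2N outputs (N = headcount), with between
    min_per and max_per outputs per person, maximising total score."""
    target = 2 * len(scores_by_person)
    # Seed each person's compulsory minimum with their best papers.
    chosen = {p: sorted(s, reverse=True)[:min_per]
              for p, s in scores_by_person.items()}
    # Pool everyone's remaining papers, best first, and fill up.
    pool = sorted(((score, p) for p, s in scores_by_person.items()
                   for score in sorted(s, reverse=True)[min_per:]),
                  reverse=True)
    for score, p in pool:
        if sum(len(v) for v in chosen.values()) >= target:
            break
        if len(chosen[p]) < max_per:
            chosen[p].append(score)
    return chosen

print(select_outputs(scores_by_person, min_per=0))  # superstars can dominate
print(select_outputs(scores_by_person, min_per=1))  # everyone returns at least one
```

With a minimum of 0, Dr Career Young contributes nothing at all; with a minimum of 1, her best paper is in the submission whether it helps the average or not – which is precisely the incentive problem.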

Or is there any point in doing any of that, when funding is only associated with 3* (just) and 4*? Do we know how the quality of the ‘tail’ will feed into research income, or into league tables if it’s prestige that counts? I’ll need to think a bit more about this one. My instinct is that I like this idea, but I worry about unintended consequences (“Quick, Professor Fourstar, go and write something – anything – with Dr Career Young!”).

Portability

On portability – whether a researcher’s publications move with them (as previously) or stay with the institution where they were produced (like impact) – the consultation first notes possible issues about what it doesn’t call a “transfer window” round about the REF census date. If you’re going to recruit someone new, the best time to get them is either at the start of a REF cycle or during the meaningless end-of-season games towards the end of the previous one. That way, you get them and their outputs for the whole season. True enough – but hard to see that this is worse than the current situation where someone can be poached in the 89th minute and bring all their outputs with them.

The consultation’s second concern is verification. If someone moves institution, how do we know which institution can claim what? As we found with open access, the point of acceptance isn’t always straightforward to determine, and that’s before we get into forms of output other than journal articles. I suppose my first thought is that point-of-submission might be the right point, as institutional affiliation would have to be provided, but then that’s self declared information.

The consultation document recognises the concern expressed about the disadvantage that portability may have for certain groups – early career researchers and (a group I hadn’t considered) people moving into/out of industry. Two interesting options are proposed: first, that publications are portable for anyone on a fixed-term contract (though this may inadvertently include some Emeritus Profs); second, that they are portable for anyone who wasn’t returned to REF 2014.

One other non-Stern alternative is proposed – that publications be shared proportionately between old and new employer for researchers who move close to the end date. But this seems messy, especially as different institutions may want to claim different papers. For example, if Dr Nomad wrote a great publication with co-authors from Old and from New, neither institution would want it as much as a great publication that she wrote by herself or with co-authors from abroad. This is because both Old and New could return that publication without Dr Nomad, through the co-authors who could claim it – and publications can only be returned once per UoA, though perhaps multiple times by different UoAs.

Overall though – that probable non-starter aside – I’d say portability is happening, and it’s just a case of how to protect career young researchers. Either criterion – non-return last time, or being on a fixed-term contract – seems like a good trigger for portability to me.

Interestingly, there’s also a question about whether impact should become portable. It would seem a bit odd to me if impact and publications were to swap over in terms of portability rules, so I don’t see impact becoming portable.

Impact

I’m not going to say too much about impact here and now – this post is already too long, and I suspect someone else will say it better.

Miscellaneous 

Other than that…. should ORCID be mandatory? Should Category C (staff not employed by the university, but who research in the UoA) be removed as an eligible category? Should there be a minimum fraction of FTE to be returnable (to prevent overseas superstars being returned on slivers of contracts)? What exactly is a research assistant anyway? Should a reserve publication be allowed when publication of a returned article is expected horrifyingly close to the census date? Should quant data be used to support assessment in disciplines where it’s deemed appropriate? Why do birds suddenly appear, every time you are near, and what metrics should be used for measuring such birds?

There’s a lot more to say about all this, and I’ll be following the discussions and debates on Twitter with interest. If time allows I’ll return to this post or write some more, less knee-jerky comments over the next days and weeks.

The rise of the machines – automation and the future of research development

"I've seen research ideas you people wouldn't believe. Impact plans on fire off the shoulder of Orion. I watched JeS-beams glitter in the dark near the Tannhäuser ResearchGate. All those proposals will be lost in time, like tears...in...rain. Time to revise and resubmit."
“I’ve seen first drafts you people wouldn’t believe. Impact plans on fire off the shoulder of Orion. I watched JeS beams glitter in the dark near the Tannhäuser ResearchGate. All those research proposals will be lost in time, like tears…in…rain. Time to resubmit.”

In the wake of this week’s Association of Research Managers and Administrators’ conference in Birmingham, Research Professional has published an interesting article by Richard Bond, head of research administration at the University of the West of England. The article – From ARMA to avatars: expansion today, automation tomorrow? – speculates about the future of the research management/development profession given the likely advances of automation and artificial intelligence. Each successive ARMA conference is hailed as the largest ever, and ARMA’s membership has grown rapidly over recent years, probably reflecting increasing numbers of research support roles, increased professionalism, and an increased awareness of ARMA and the attractiveness of what it offers in terms of professional development. But might better, smarter computer systems reduce, and perhaps even eliminate, the need for some research development roles?

In many ways, the future is already here. In my darker moments I’ve wondered whether some colleagues might be replicants or cylons. But many universities already have (or are in the process of getting) some form of cradle-to-grave research management information system with the potential to automate many research support tasks, both pre and post award. Although I wasn’t in the session on the future of JeS, the online grant submission system used by RCUK (now UKRI), tweets from the session indicate that JeS 2.0 is being seen as a “grant getting service” and a platform to do more than just process applications, which could well include the distribution of funding opportunities. Who knows what else it might be able to do? Presumably it will link much better to costing tools and systems, allowing direct transfer of costings and other information to and from university systems.

A really good costing tool might be able to do a lot of things automatically. Staff costs are already relatively straightforward to calculate with the right tools – the complication largely comes from whether or not funders expect figures to include inflation and cost of living/salary increment pay rises. Greater uniformity across funders would help, and setting up templates for individual funders could be done – and in many places already is. Non-pay costs are harder, but one could imagine a system that linked to travel and booking websites and calculated the average cost of travel from A to B. Standard costs could be available for computers and for consumables, again linking to suppliers’ catalogues. This could in principle allow the applicant (rather than a research administrator) to do the budget for the grant application, though I wonder how much appetite there is among applicants for taking that on. I also think there’s a role for the research costing administrator in helping applicants flush out all of the likely costs – not all of which will occur to the PI – as well as dealing with the exceptions that the system doesn’t cover. But even if specialist human involvement is still required, giving people better tools to work smarter and more efficiently – especially if the system is able to populate the costings section of the application form directly without duplication – would reduce the number of humans required.
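
As a toy illustration of the inflation/increment point – the salary, FTE, and rates here are placeholders of my own, and real costings would follow funder and fEC rules rather than this back-of-envelope arithmetic:

```python
def staff_cost(base_salary, fte, years, inflation=0.02, increment=0.01):
    """Rough multi-year staff cost: each year costs salary * FTE,
    then the salary rises by inflation plus an increment rise.
    All rates are illustrative assumptions, not funder rules."""
    total, salary = 0.0, base_salary
    for _ in range(years):
        total += salary * fte
        salary *= (1 + inflation) * (1 + increment)
    return round(total, 2)

# e.g. a 0.5 FTE researcher at £35,000 over 3 years
print(staff_cost(35_000, 0.5, 3))  # ~£54,100
```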

While I don’t think we’re there yet, it’s not hard to imagine systems which could put the right funding opportunities in front of the right academics at the right time and in the right format. Research Professional has offered a customisable research funding alerts service for many years now, and there’s potential for research management systems to integrate this data, combine it with what’s known about individual researchers’ and research teams’ interests, and put that information in front of them automatically.

I say we’re not there yet, because I don’t think the information is arriving in the right format – in a quick and simple summary that allows researchers to make very quick decisions about whether to read on, or move on to the next of the twelvety-hundred-and-six unread emails. I also wonder whether the means of targeting the right academics are sufficiently nuanced. A ‘keywords’ approach might help if we could combine the research interest keyword sets used by funders, research intelligence systems, and academics. But we’d need a really sophisticated set of keywords, covering not just discipline and sub-discipline, but career stage, countries of interest, interdisciplinary grand challenges and problems, and so on. Another problem is that I don’t think funders’ call summaries are – in general – particularly well written (though they are getting better), though we could perhaps imagine them being tailored for use in these kinds of systems in the future. A really good research intelligence system could also draw in data about previous bids to the scheme from the institution, data about success rates for previous calls, and access to previously successful applications (though their use is not without its drawbacks).
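
By way of a sketch of what such keyword matching might look like under the hood – the keywords, the call, and the simple Jaccard-overlap scoring are all my own hypothetical choices, not any real system’s method – the crude version is easy; it’s the nuance that’s hard:

```python
def match_score(call_keywords, researcher_keywords):
    """Crude Jaccard overlap between a call's keywords and a
    researcher's declared interests. A real system would need far
    richer vocabularies: career stage, countries, grand challenges..."""
    call, res = set(call_keywords), set(researcher_keywords)
    union = call | res
    return len(call & res) / len(union) if union else 0.0

# Entirely made-up call and researcher profile
call = {"heritage", "digital", "archives", "early-career"}
profile = {"digital", "archives", "museums", "history"}
print(f"{match_score(call, profile):.1%} match")  # 33.3% match
```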

But even with all this in place, I still think there’s a role for human research development staff in getting opportunities out there. If all we’re doing is forwarding Research Professional emails, then we could and should be replaced. But if we’re adding value through our own analysis of the opportunity, and customising the email for the intended audience, we might be allowed to live. A research intelligence system inevitably just churns out emails that might be well targeted or poorly targeted. A human with detailed knowledge of the research interests, plans, and ambitions of individual researchers or groups can not only target much better, but can make a much more detailed, personalised, and context sensitive analysis of the advantages and disadvantages of a possible application. I can get excited about a call and tell someone it’s ideal for them, and because of my existing relationship with them, that’ll carry weight … a computer can tell them that it’s got a 94.8% match.

It’s rather harder to see automation replacing training researchers in grant writing skills or undertaking lay review of draft grant applications, not least because often the trick with lay review is spotting what’s not there rather than what is. But I’d be intrigued to learn what linguistic analysis tools might be able to do in terms of assessing the required reading level, perhaps making stylistic observations or recommendations, and perhaps flagging up things like the regularity with which certain terms appear in the application relative to the call etc. All this would need interpreting, of course, and even then may not be any use. But it would be interesting to see how things develop.
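
For what it’s worth, the reading-level and term-frequency parts are already easy to prototype. This is a rough sketch of my own using the standard Flesch reading ease formula and a crude vowel-group syllable heuristic – and note that spotting what’s *missing* from a draft, the real trick of lay review, is exactly what it can’t do:

```python
import re

def flesch_reading_ease(text):
    """Standard Flesch formula; higher = easier to read.
    Syllables are estimated by counting vowel groups (crude)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

def call_terms_missing_from_draft(draft, call, min_len=6):
    """Which longish terms from the call text never appear in the draft?"""
    call_terms = {w.lower() for w in re.findall(rf"[A-Za-z]{{{min_len},}}", call)}
    draft_terms = {w.lower() for w in re.findall(rf"[A-Za-z]{{{min_len},}}", draft)}
    return sorted(call_terms - draft_terms)

draft = "We will study how communities adapt to flooding."
call = "Proposals should address community resilience and interdisciplinary methods."
print(round(flesch_reading_ease(draft)))
print(call_terms_missing_from_draft(draft, call))
```

Even this toy version flags “community” as missing when the draft says “communities” – which is either a helpful prompt or an annoying false positive, and all of it would need human interpretation.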

Impact is perhaps another area where it’s hard to see humans being replaced. Sophisticated models of impact development could and probably should be turned into tools to help academics identify the key stakeholders, come up with appropriate strategies, and identify potential intermediaries within their own institution. But I think human insight and creativity could still add substantial value here.

Post-award isn’t really my area these days, but I’d imagine that project setup could become much easier and involve fewer pieces of paper and documents flying around. Even better and more intuitive financial tools would help PIs manage their project, but there are still accounting rules and procedures to be interpreted, and again, I think many PIs would prefer someone else to deal with the details.

Overall it’s hard to disagree with Bond’s view that a reduction in overall headcount across research administration and management (along with many other areas of work) is likely, and it’s not hard to imagine some less research intensive institutions deciding that the service automated systems can deliver is good enough for them. At more research intensive institutions, better tools and systems will increase efficiency and enable human staff to work more effectively. I’d imagine that some of this extra capacity will be filled by people doing more, and some of it may lead to a reduction in headcount.

But overall, I’d say – and you can remind me of this when I’m out of a job and emailing you all begging for scraps of consultancy work, or mindlessly entering call details into a database – that I’m probably more excited by the possibilities of automation and better and more powerful tools than I am worried about being replaced by them.

I for one welcome our new research development AI overlords.

MOOCing about: My experience of a massively open online course

I’ve just completed my first Massively Open Online Course (or MOOC), entitled ‘The mind is flat: the shocking shallowness of human psychology’, run via the FutureLearn platform. It was run by Professor Nick Chater and PhD student Jess Whittlestone of Warwick Business School, and this was the second iteration of the course, which I understand will be running again at some point. Although teaching and learning in general (and MOOCs in particular) are off topic for this blog, I thought it might be interesting to jot down a few thoughts about my very limited experience of being on the receiving end of a MOOCing. There’s been a lot of discussion of MOOCs which I’ve been following in a kind of half-hearted way, but I’ve not seen much (if anything) written from the student perspective.

“Alright dudes… I’m the future of higher education, apparently. Could be worse… could be HAL 9000”

I was going to explain my motivations for signing up for the course to add a bit of context, but one of the key themes of the MOOC has been the shallowness and instability of human reasons and motivations.  We can’t just reach back into our minds, it seems, and retrieve our thinking and decision making processes from a previous point in time.  Rather, the mind is an improviser, and can cobble together – on demand – all kinds of retrospective justifications and explanations for our actions which fit the known facts including our previous decisions and the things we like to think motivate us.

So my post-hoc rationalisation of my decision to sign up is probably three-fold. Firstly, I think a desire for lifelong learning and in particular an interest in (popular) psychology are things I ascribe to myself.  Hence an undergraduate subsidiary module in psychology and having read Stuart Sutherland’s wonderful book ‘Irrationality‘.  A second plausible explanation is that I work with behavioural economists in my current role, and this MOOC would help me understand them and their work better.  A third possibility is that I wanted to find out what MOOCs were all about and what it was like to do one, not least because of their alleged disruptive potential for higher education.

So…. what does the course consist of?  Well, it’s a six week course requiring an estimated five hours of time per week.  Each week-long chunk has a broad overarching theme, and consists of a round-up of themes arising from questions from the previous week, and then a series of short videos (generally between 4 and 20 minutes) either in a lecture/talking head format, or in an interview format.  Interviewees have included other academics and industry figures.  There are a few very short written sections to read, a few experiments to do to demonstrate some of the theories, a talking point, and finally a multiple choice test.  Students are free to participate whenever they like, but there’s a definite steer towards trying to finish each week’s activities within that week, rather than falling behind or ploughing ahead. Each video or page provides the opportunity to add comments, and it’s possible for students to “like” each other’s comments and respond to them.  In particular there’s usually one ‘question of the week’ where comment is particularly encouraged.

The structure means that it’s very easy to fit alongside work and other commitments – so far I’ve found myself watching course videos during half time in Champions League matches (though the half time analysis could have told its own story about the shallowness of human psychology and the desire to create narratives), last thing at night in lieu of bedtime reading, and when killing time between finishing work and heading off to meet friends.  The fact that the videos are short means that it’s not a case of finding an hour or more at a time for uninterrupted study. Having said that, this is a course which assumes “no special knowledge or previous experience of studying”, and I can well imagine that other MOOCs require a much greater commitment in terms of time and attention.

I’ve really enjoyed the course, and I’ve found myself actively looking forward to the start of a new week, and to carving out a free half hour to make some progress into the new material. As a commitment-light, convenient way of learning, it’s brilliant. The fact that it’s free helps. Whether I’d pay for it or not I’m not sure, not least because I’ve learnt that we’re terrible at working out absolute value, as our brains are programmed to compare. Once a market develops and gives me some options to compare, I’d be able to think about it. Once I had a few MOOCs under my belt, I’d certainly consider paying actual money for the right course on the right topic at the right level with the right structure. At the moment it’s possible to pay for exams (about £120, or £24 for a “statement of participation”) on some courses, but as they’re not credit bearing it’s hard to imagine there would be much uptake. A better option might be a smaller fee for a self-printable PDF record of courses completed, especially once people start racking up course completions.

One drawback is the multiple choice method of examining/testing, which doesn’t allow much sophistication or nuance in answers. A couple of the questions on the MOOC I completed were ambiguous or poorly phrased, and one in particular made very confusing use of “I” and “you” in a scenario question – I’d still argue (sour grapes alert) that the official “correct” answer was wrong. I can see that multiple choice is the only really viable way of having tests at the moment (though one podcast I was listening to the other day mooted the possibility of machine text analysis marking short essays, based on marks given to a sample), but I think a lot more work needs to go into developing best (and better) practice around question setting. It’s difficult – as a research student I remember being asked to come up with some multiple choice questions about the philosophy of John Rawls for an undergraduate exam paper, and struggled with that. Though I did remove the one from the previous paper which asked how many principles of justice there were (answer: it depends how you count them).

But could it replace an undergraduate degree programme? Could I imagine doing a mega-MOOC as my de facto full time job – watching video lectures, reading course notes and core materials, taking multiple choice tests and (presumably) writing essays? I think probably not. I think the lack of human interaction would probably drive me mad – and I say this as a confirmed introvert. Granted, a degree-level MOOC would probably have more opportunities for social interaction – Skype tutorials, better comments systems, more interaction with course tutors, local networks to meet fellow students who live nearby – but I think the feeling of disconnection, isolation, and alienation would just be too strong. Having said that, perhaps for digital natives this won’t be the case, and perhaps compared (as our brains are good at comparing) to the full university experience, a significantly lighter price tag might be attractive. And of course, for those in developing countries or unable or unwilling to relocate to a university campus (for whatever reason), it could be a serious alternative.

But I can certainly see a future that blends MOOC-style delivery with more traditional university approaches to teaching and learning. Why not restructure lectures into shorter chunks and make them available online, at the students’ convenience? There are real opportunities to bring in extra content from expert guest speakers, especially industry figures, world-leading academic experts, and particularly gifted and engaging communicators. It’s not hard to imagine current student portals (Moodle, Blackboard, etc.) becoming more and more MOOC-like in terms of content and interactivity. In particular, I can imagine a future where MOOCs offer opportunities for extra credit, or non-credit bearing courses for students to take alongside their main programme of study. These could be career-related courses, courses that complement their ‘major’, or entirely hobby or interest based.

One thought that struck me was whether it might be FE rather than HE that’s threatened by MOOCs – or at least the Adult Ed/evening classes aspect of FE. But I think even there a motivation to – say – learn Spanish is only one motivation; another is often to meet new people and to learn together, and I don’t think that’s an itch that MOOCs are entirely ready to scratch. But I can definitely see a future for MOOCs as a standard method of continuing professional development in any number of professional fields, whether university-led or not. This has already started to happen, with a course called ‘Discovering Business in Society‘ counting as an exemption towards one paper of an accounting qualification. I also understand that FutureLearn are interested in pilot schemes for the use of MOOCs with 16-19 year olds to support learning outcomes in schools.

It’s also a great opportunity for hobbyists and dabblers like me to try something new and pursue other intellectual interests.  I can certainly imagine a future in which huge numbers of people are undertaking a MOOC of one kind or another, with many going from MOOC to MOOC and building up quite a CV of virtual courses, whether for career reasons, personal interest, or a combination of both.  Should we see MOOCs as the next logical and interactive step from watching documentaries? Those who today watch Horizon and Timewatch and, well, most of BBC4, might in future carry that interest forward to MOOCs.

So perhaps rather than seeing MOOCs in terms of what they’re going to disrupt or displace or replace, we’re better off seeing them as something entirely new.

And I’m starting my next MOOC on Monday – Cooperation in the contemporary world: Unlocking International Politics led by Jamie Johnson of the University of Birmingham.  And there are several more that look tempting… How to read your boss from colleagues at the University of Nottingham, and England in the time of Richard III from – where else – the University of Leicester.

The consequences of Open Access: Part 1: Is anyone thinking about the “lay” reader?

The thorny issue of “open access” – which I take to mean the question of how to make the fruits of publicly-funded research freely and openly available to the public – is one that’s way above my pay grade, and therefore not one I’ll be resolving in this blog post. Sorry about that. I’ve been following the debates with some interest, though not, I confess, an interest I’d call “keen” or “close”. No doubt some of the nuances and arguments have escaped me, and so I’ll be going to an internal event in a week or so to catch up. I expect it’ll be similar to this one, helpfully written up by Phil Ward over at Fundermentals. Probably the best single overview of the history and arguments around open access is a piece in this week’s Times Higher by Paul Jump – well worth a read.

I’ve been wondering about some of the consequences of open access that I haven’t seen discussed anywhere yet. This first post is about the needs of research users; I’ll be following it up with a post about some consequences of open access for academics that may require more thought.

I wonder if enough consideration is being given to the needs and interests of potential readers and users of all this research which is to be liberated from paywalls and other restrictions.  It seems to me that if Joe Public and Joanna Interested-Professional are going to be able to get their mitts on all this research, then this has very serious implications for academic research and academic writing.  I’d go as far as to say it’s potentially revolutionary, and may require radical and permanent changes to the culture and practice of academic writing for publication in a number of research fields.  I’m writing this to try to find out what thought has been given to this, amidst all the sound and fury about green and gold.

If I were reading an academic paper in a field that I was unfamiliar with, I think there are two things I’d struggle with.  One would be properly and fully understanding the article in itself, and the second would be understanding the article in the context of the broader literature and the state of knowledge in that area.  By way of example, a few years back I was looking into buying a rebounder – a kind of indoor mini-trampoline.  Many vendors made much of a study attributed to NASA which they interpreted as making dramatic claims about the efficacy of rebounder exercising compared to other kinds of exercise.  Being of a sceptical nature and armed with campus access to academic papers that weren’t open access, I went and had a look myself.  At the time, I concluded that these claims weren’t borne out by the study, which was really aimed at looking at helping astronauts recover from spending time in weightlessness.  I don’t have access to the article as I’m writing this, so I can’t re-check, but here’s the abstract.  I see that this paper is over 30 years old, and that eight people is a very small sample size…. so… perhaps superseded and not very highly powered.  I think the final line of the abstract may back up my recollection (“… a finding that might help identify acceleration parameters needed for the design of remedial procedures to avert deconditioning in persons exposed to weightlessness”).

For the avoidance of doubt, I imply no dishonesty or nefarious intent on the part of rebounder vendors and advocates – I may be wrong in my interpretation, and even if I’m not, I expect this is more likely to be a case of misunderstanding a fairly opaque paper than deliberate distortion. In any case, my own experience with rebounders has been very positive, though I still don’t think they’re a miracle or magic bullet exercise.

How would open access help me here?  Well, obviously it would give me access to the paper.  But it won’t help me understand it, won’t help me draw inferences from it, won’t help me place it in the context of the broader literature.  Those numbers in that abstract look great, but I don’t have the first clue what they mean.  Now granted, with full open access I can carry out my own literature search if I have the time, knowledge and inclination.  But it’ll still be difficult for me to compare and contrast and form my own conclusions.  And I imagine that it’ll be harder still for others without a university education and a degree of familiarity with academic papers, or who haven’t read Ben Goldacre’s excellent Bad Science.

I worry that open access will only make it easier for people with an agenda – to sell products, or to push a certain political position – to cherry-pick evidence and put together an ill-deserved veneer of respectability by linking to academic papers and presenting (or feigning to present) a summary of their contents and arguments. The intellectually dishonest are already doing this, and open access might make it easier.

I don’t present this as an argument against open access, and I don’t agree with a paternalist elitist view that holds that only those with sufficient letters after their name can be trusted to look at the precious research.  Open access will make it easier to debunk the charlatans and the quacks, and that’s a good thing.  But perhaps we need to think about how academics write papers from now on – they’re not writing just for each other and for their students, but for ordinary members of the public and/or research users of various kinds who might find (or be referred to) their paper online.  Do we need to start thinking about a “lay summary” for each paper to go alongside the abstract, setting out what the conclusions are in clear terms, what it means, and what it doesn’t mean?

What do we do with papers that present evidence for a conclusion that further research demonstrates to be false? In cases of research misconduct these can be formally withdrawn, but we wouldn’t want to do that for papers that have merely been superseded – not least because they might turn out to be correct after all, and they remain a valid and important part of the debate. Of course, the current scientific consensus on any particular issue may not be clear, and it’s less clear still how the state of the debate can be impartially communicated to research users.

I’d argue that we need to think about a format or template for an “information for non-academic readers” or something similar.  This would set out a lay summary of the research, its limitations, links to key previous studies, details of the publishing journal and evidence of its bona fides.  Of course, it’s possible that what would be more useful would be regularly written and re-written evidence briefings on particular topics designed for research users.  One source of lay reviews I particularly like is the NHS Behind the Headlines which comments on the accuracy (or otherwise) of media coverage of health research news.  It’s nicely written, easily accessible, and isn’t afraid to criticise or praise media coverage when warranted.  But even so, as the journals are the original source, some kind of standard boiler plate information section might be in order.

Has there been any discussion of these issues that I’ve missed? This all seems important to me, and I wouldn’t want us to be in a position of finally agreeing what colour our open access ought to be, only to find that next to no thought has been given to potential readers. I’ve talked mainly about health/exercise examples in this entry, but all this could apply just as well to pretty much any other field of research where non-academics might take an interest.

ESRC success rates by discipline: what on earth is going on?

Update – read this post for the 2012/13 stats for success rates by discipline

The ESRC have recently published a set of ‘vital statistics‘ which are “a detailed breakdown of research funding for the 2011/12 financial year” (see page 22).  While differences in success rates between academic disciplines are nothing new, this year’s figures show some really quite dramatic disparities which – in my view at least – require an explanation and action.

The overall success rate was 14% (779 applications, 108 funded) for the last tranche of responsive mode Small Grants and responsive mode Standard Grants (now Research Grants). However, Business and Management researchers submitted 68 applications, of which 1 was funded. One. One single funded application. In the whole year. For the whole discipline. Education fared little better, with 2 successes out of 62.

Just pause for a moment to let that sink in.  Business and Management.  1 of 68.  Education.  2 of 62.

Others did worse still. Nothing for Demographics (4 applications); Environmental Planning (8); Science and Technology Studies (4); Social Stats, Computing and Methods (11); or Social Work (10). However, with a 14% success rate working out at about 1 in 7, low volumes of applications may explain those zeroes. It’s rather harder to explain Business and Management and Education’s combined total of 3 funded applications from 130.

Next least successful were ‘no lead discipline’ (4 of 43) and Human Geography (3 from 32).  No other subjects had success rates in single figures.  At the top end were Socio-Legal Studies (a stonking 39%, 7 of 18), and Social Anthropology (28%, 5 from 18), with Linguistics; Economics; and Economic and Social History also having hit rates over 20%.  Special mention for Psychology (185 applications, 30 funded, 16% success rate) which scored the highest number of projects – almost as many as Sociology and Economics (the second and third most funded) combined.
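
The disparities are stark enough just from tabulating the figures quoted above (numbers as given in the ‘vital statistics’; a quick sketch rather than anything rigorous):

```python
# (applications, funded) for 2011/12 responsive mode grants, as quoted above
figures = {
    "Business and Management": (68, 1),
    "Education": (62, 2),
    "Human Geography": (32, 3),
    "Psychology": (185, 30),
    "Social Anthropology": (18, 5),
    "Socio-Legal Studies": (18, 7),
    "All disciplines": (779, 108),
}
# Print success rates, worst first
for subject, (apps, funded) in sorted(figures.items(),
                                      key=lambda kv: kv[1][1] / kv[1][0]):
    print(f"{subject:24} {funded:>3}/{apps:<4} {funded / apps:>4.0%}")
```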

Is this year unusual, or is there a worrying and peculiar trend developing?  Well, you can judge for yourself from this table on page 49 of last year’s annual report, which has success rates going back to the heady days of 06/07.  Three caveats, though, before you go haring off to see your own discipline’s stats.  One is that the reports refer to financial years, not academic years, which may (but probably doesn’t) make a difference.  The second is that the figures refer to Small and Standard Grants only (not Future Leaders/First Grants, Seminar Series, or specific targeted calls).  The third is that funded projects are categorised by lead discipline only, so the figures may not tell the full story as regards involvement in interdisciplinary research.

You can pick out your own highlights, but it looks to me as if this year is only a more extreme version of trends that have been going on for a while. Last year’s Education success rate? 5%. The years before? 8% and 14%. Business and Management? A heady 11%, compared to 10% and 7% for the preceding years. And you’ve got to go all the way back to 09/10 to find the last time any projects were funded in Demography, Environmental Planning, or Social Work. And Psychology has always been the most funded, and has always got about twice as many projects as the second and third subjects, albeit from a proportionately large number of applications.

When I have more time I’ll try to pull all the figures together in a single spreadsheet, but at first glance many of the trends seem similar.

So what’s going on here?  Well, there are a number of possibilities.  One is that our Socio-Legal Studies research in this country is tip top, and B&M research and Education research is comparatively very weak.  Certainly I’ve heard it said that B&M research tends to suffer from poor research methodologies.  Another possibility is that some academic disciplines are very collegiate and supportive in nature, and scratch each other’s backs when it comes to funding, while other disciplines are more back-stabby than back-scratchy.

But are any or all of these possibilities sufficient to explain the difference in funding rates?  I really don’t think so.  So what’s going on?  Unconscious bias?  Snobbery?  Institutional bias?  Politics?  Hidden agendas?  All of the above?  Anyone know?

More pertinently, what do we do about it?  Personally, I’d like to see the appropriate disciplinary bodies putting a bit of pressure on the ESRC for some answers, some assurances, and the production of some kind of plan for addressing the imbalance.  While no-one would expect to see equal success rates for every subject, this year’s figures – in my view – are very troubling.

And something needs to be done about it, whether that’s a re-thinking of priorities, putting the knives away, addressing real disciplinary weaknesses where they exist, ring-fenced funding, or some combination of all of the above.  Over to greater minds than mine…..