Promotions and Commotions – part one: APM staff

King Louie, from ‘The Jungle Book’ (1967)

This is the first part of a three-part series about promotions and careers in UK universities. This first post focuses on “administrative, professional, and managerial” (APM) staff, although I touch on issues related to other job families, especially research and teaching. A second blogpost will have more to say about academic promotions, and a third will offer some thoughts on possible changes and reforms, and a few things I’ve learned over the years. I’ve not written the second or third yet, but I’m going to publish the first in the hope it motivates me to write the others faster.

Opportunities for career progression and promotion – and the fairness, transparency, and consistency (or lack thereof) of the processes involved – are inevitably a hot topic in every sector. However, I have a theory that the situation in universities can be particularly problematic because of mutual envy and incomprehension between academic and non-academic staff about each other’s promotion systems.

To a non-academic like me, academic promotions are odd. Sorry, but they are. It’s hard to think of many professions where it’s possible to be doing largely the same job – teaching, research, administration/management – while still having the potential for advancement from Assistant to Associate to full Prof, and then potentially up the various Professorial pay bandings.

Of course, that’s not entirely fair – the levels of performance, expertise, expectation, and responsibility in those three core areas increase as you move up the academic payscale. Or at least they should. I guess medical doctors are a good parallel case. And professional footballers.

APM careers work very differently. I’d note in passing that every institution seems to believe that its own chosen nomenclature for grades and job families (APM4, APM5, APM6) is universal and understood sector-wide, when in fact only the pay spine is common, not the grade boundaries.

Re-grading of APM jobs is not really a thing


For APM staff, it’s almost impossible to be promoted in-post – that is, for a job to be re-graded. I found this out the hard way. Instead, APM staff need to apply for an entirely new role. This wasn’t always the case. When I started what I laughably call my career, I knew some APM staff who started as something like ‘School Secretary’ and finished as ‘School Manager’ without a single competitive interview process. But I think that’s very much in the past now. We now have open competition for roles – apart from some specific cases during restructures – and that’s a Good Thing.

APM staff feel a level of promotion-envy because, as I said, job re-grading is rare. Re-grading isn’t about how good you are at your job but about the requirements and remit of the job itself. No amount of over-performance makes the job bigger than it is on paper and in the organogram. This happened to me at a previous institution.

In hindsight I can see it’s because I was doing things that not only weren’t in my job description but also weren’t envisaged (or indeed a requirement) for the role as originally set out. But I had an additional Very Specific Set of Skills. I was supposed to be a Centre Administrator. I was not supposed to be writing marketing materials, finding opportunities for income-generating professional development programmes, and so on, and therefore could not be rewarded or recognised for having done so. A similar thing happened to a friend working in the NHS, where what he actually did bore little resemblance to his job description.

My sense is that institutions hate having to re-grade jobs, because it risks Setting a Precedent, and we all know how much management hates that. Re-grading one job has the potential to disrupt entire structures. It also raises the uncomfortable question of whether an ‘upgraded’ role could or should go to the existing postholder, or whether there should be a recruitment process. Again, I’ve known people have their jobs re-graded upwards only to be deemed unappointable to the new role after interview, or passed over for someone else.

King Louie syndrome

I’m posting the live action version of this song, because… well… Christopher Walken.


No, not any of the legion of French monarchs, but the character in the Jungle Book. (“I’ve reached the top and had to stop, and that’s what’s botherin’ me”). I’m hardly a ‘Jungle VIP’ and anyone who accuses me of being ‘King of the Swingers’ will be hearing from my lawyers.

But I have long since reached the top of my payband, which is the summit of what any organisation is realistically prepared to pay me for the role I currently have. It’s a tricky place to be… although the employers’ attempts to spin increments as counting towards cost-of-living pay rises are obvious and disingenuous nonsense, it’s certainly true that inflation bites harder once you’re no longer getting an annual increment. Add to that the fact that pay increases tend to be targeted at lower spine points, which is fair enough in and of itself. But rinse and repeat often enough and pay differentials start to shrink, and the premium for experience gets lower and lower.

[To be fair, my own institution did add extra points onto the top of grades, allowing another increment for one year. Kudos, and sincere thanks for that. It still meant another substantial year-on-year real-terms pay cut, but less bad than it could have been.]
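Since the arithmetic of that compounding is easy to underestimate, here’s a minimal sketch in Python – with invented figures rather than any real pay spine – of what repeated flat consolidated awards do to the gap between the bottom and top of a scale:

```python
# Minimal sketch with invented figures: repeated flat consolidated awards
# compress the differential between the bottom and top of a pay scale.

bottom, top = 20_000.0, 40_000.0  # hypothetical salaries; top starts at 2x bottom

for year in range(10):
    bottom += 1_000  # the same flat award applied to every spine point
    top += 1_000

print(f"After 10 years: {top:,.0f} / {bottom:,.0f} = {top / bottom:.2f}x")
# -> After 10 years: 50,000 / 30,000 = 1.67x. The premium for experience
#    shrinks with every round, even though nobody's pay has been cut.
```

Nobody loses in cash terms, but rinse and repeat for a decade and the ‘King Louies’ at the top of their scales have quietly lost a third of their premium over new starters.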

It’s nice to move up an increment because it feels like progress, even apart from the extra cash. Not moving up an increment feels like stagnation and it can start to feel like failure. Even though it isn’t.

I suspect there are a lot of ‘King Louies’ in universities. Partly because university roles tend to be quite specialised, perhaps with no direct private sector equivalent. In some parts of the country there are groupings of institutions that are reasonably close together, which permits a bit of an internal market. In others, not so much. Where universities are very close, they’re usually very different – an ‘old’ university and a post-92. Given that, staff mobility doesn’t tend to be very high, which can limit opportunities for promotion by moving institution.

There’s another big issue which I’ll return to in a later post because I think it affects academics too. That’s that much of the sector is structured very much like a monopsony. A monopoly is when there is one seller… a monopsony is when there is one buyer. Or at least one pay spine. I’ll say more about common pay spines in part 3.

Many pay scales have what’s known as a “super-maximum” – a zone at the top of each pay grade where progression to further increments is much harder. But my sense (and I could be wrong) is that progression into the super-maximum used to be more common and easier than it is now.

Does your institution include statements in job ads like “progression beyond this salary range is subject to performance”? And if so, is that actually true? At mine, it is true… technically… but until I started doing a bit of research for this article, I didn’t know that it was, and I certainly didn’t know the process. It’s never been raised with me, and I’ve never been invited to make the case for an extra increment. And when I did raise it, no-one seemed to know how it works in practice, or even what “exceptional performance” looks like in my kind of role. Would I be wasting my time? What was the benchmark to hit? No-one could offer advice. It felt like the response was just to shrug it out until the deadline passed for the year, or I stopped asking.

I mean, don’t get me wrong, we do have an employee reward scheme of which I am a proud beneficiary. I’m grateful for the John Lewis voucher, and the towels I bought with it are lovely, but it’s no consolidated increment into the super-maximum, is it?

What to do, what to think

Leonard Rossiter as Reggie Perrin, in ‘The Fall and Rise of Reginald Perrin’ (BBC, 1976-1979)


At this point there are four options for King Louie. I’ve experimented with all four at various points, and I still have no settled view.

  • Make your peace. The higher the grade, the fewer the roles, the steeper the slopes, the greater the competition, the harder it is to evidence your ability to perform at a higher grade, and the harder it is to progress. Anyway… your pay isn’t bad. Do you really want a lot more stress for a little more money? If you’re good at what you do, enjoy it (on the whole), and have tolerable colleagues and humane line management, what are you complaining about? We’ve all been raised on television to believe that one day we’d all be millionaires, and movie gods, and rock stars. But we won’t. Universities talk a good game on career progression, as if it’s infinite, as if we’ll all get promoted. We won’t. Accept it, count your blessings, work to live, not live to work.
  • Up, up, up the ziggurat. Lickety-split. Stalk jobs.ac, stalk Research Professional… lordy, even stalk the job pages of those rankers at Times Higher if things have really got that bad. Get across every possible opportunity, have a Career Plan, work out what you need to get to the next rung, and do it. Do, or do not. There is no try. Registrar before I’m 50 or bust! Crush your enemies, see them driven before you, and hear the lamentations of your line reports!
  • Bit from column A, bit from column B. Eyes open for opportunities, but think carefully before leaping. How much more money would it take to induce me to accept a longer commute? How much £££ would it take to apply for a role in a Faculty/institution that everyone tells me is an Unhappy Ship? How much £££ to move from oop north to dahn sarf, where the higher costs of living (and especially housing) will quickly gobble up your pay bump? How do I weigh the time and effort it takes to write some of these bloody excruciating applications against the actual probability of landing the job?
  • Don’t derive all your validation and esteem from your day job. Develop a social media presence, write blogs, develop a professional network, visibility outside your organisation, and eventually a small side-line in external work made up of journalism and consultancy. And start running marathons, or some other classic midlife crisis behaviour. It’s worked for me. Kinda.

There’s a fifth option too, and I advise very strongly against it, because it’s an outstanding way to make yourself bitter and twisted. It’s…

  • Combine being discontented with your current role/salary/opportunities with not doing anything about it. That’s letting opportunities pass you by because they’re not perfect, or because you don’t want to step out of your comfort zone – whether that comfort zone is your role, your institution, your colleagues, or your line manager. That’s wanting promotion/more money/greater status, but not being willing to do what it takes to compete for those opportunities.

We must pay the price of the life we choose to live – if we choose comfort and familiarity and the known, we can’t expect more money and status. If we choose to chase promotion and reward and challenge, we can’t expect stability and comfort. At least not for a while.

At one point in my career, I turned down a promising opportunity to apply for a role at another institution that would have required either moving house or commuting. It would have been a perfectly normal, manageable commute, but it would have been significantly longer and more expensive than my commute at the time. It would have cost me money in the short term.

I decided that I valued the extra time saved by not commuting more than the extra cash/challenge/opportunity. Fine. My choice. I’m not saying I’d have got the role, ‘cos I know it attracted some great candidates. But I can’t then rail against my fate. I had an opportunity. I decided against pursuing it. I pay the price of the life I choose to live. I can’t then complain about a lack of opportunities. And if you made the same decisions, you can’t either.

Why your salary can’t keep going up forever

Bit of a thought experiment. Imagine someone appointed at a young age to an administrative role that’s fairly routine in nature. They come in and, inside x years, work their way up to the top of their pay scale. Let’s further imagine that every year they get better at their job – they know more people, understand the institution better, know the systems better. That improvement may well continue even after the increments have stopped. A classic example is a receptionist who’s been in post for years and knows everything and everyone. They know where the bodies are buried, who hid them there and why, and the dark web contact details of the hired killers should a similar issue arise again.

Tim (Martin Freeman) and Dawn (Lucy Davis), ‘The Office’ (BBC) (2001-2003)

Now, we can have a meaningful discussion about the importance of that kind of experienced receptionist to the operation of the unit and (more importantly) its culture and atmosphere. We could have a discussion about what a fair rate of pay would be, and about the value of the person who manages all of the birthday/retirement/leaving cards and collections. I’m right there with you on being appalled at discovering what grade some key APM colleagues were on.

But I think few people would argue that their pay could just keep going up an increment at a time indefinitely. We’d then potentially have a very experienced receptionist paid a lot more than his line manager, and a lot more than his newer colleagues doing what’s basically the same role. On one level, it’s an obvious point to make, but it’s what some people seem to expect because they see it (or think they see it) in academic staff.

Ultimately, should a receptionist be paid the same as (or more than) someone with a lot more responsibilities and a rarer skillset, like a contract specialist or a financial manager or a librarian? I mean, maybe there’s a case for parity, but if you think that, then your problem is with Late Capitalism, not universities. What some people seem to want – perhaps without knowing it – is an indefinite (or at least vastly extended) payscale that’ll pay them more and more for doing more or less the same thing.

The reality is that all our jobs have a maximum salary, which we can pretend is solely determined by market forces, even though that’s not entirely the case. In the absence of unicorns, when we reach that increment limit it’s a case of (try to) step up, or put up.

This is all pretty obvious when you think about it, but I think a lot of people don’t. Especially when there’s the example of academic promotions, where it can look from the outside like salaries can just keep going up and up. Of course, in reality it’s not as simple as that, but it can appear that way.

What about academics?


Yeah, what about academics? Well… I think some of what I’ve said is relevant – the preceding two sections apply partially to academics too, at least as regards attitudes to feeling stuck at the top of your grade. Make your peace with it, look for another job/go all out for internal promotion, take up running, or a bit of all of it. But don’t do the railing-against-it-while-doing-nothing-about-it thing. Please.

I said that academic promotions set a bad example to APM staff, and vice versa. I think some academics envy APM staff – being able to “just apply for another job” – perhaps not understanding how scarce and competitive those jobs are. One complaint I regularly hear from academics is about the absurdity of it being easier to win promotion via a new role elsewhere than via your own institution.

But I don’t think that’s a bug. I think that’s a feature. And I’ll try to explain why in the second post.

Reflections on #ResearchFishGate

So… this is a quick post because I’m pushed for time, but if you’ve not heard about #ResearchFishGate, then here’s a quick primer from Research Professional’s Sophie Inge.

Short version… academics have been complaining on social media about having to make their annual returns on their funded projects. In the best of all possible worlds, with the best possible system for collecting such information, academics would still complain about having to do it. Academics always complain about admin. However, I don’t think that accounting for how you’ve used public (or charity) money is itself unreasonable.

“I have shared my concerns with your funder”

However, I think the bulk of the complaints in this case have been less about having to do it at all and more about the software/platform that’s used to do it, and whether this information is ever actually used. I’ve not been involved in supporting ResearchFish returns for some time now, but my impression is that the platform has improved. But clearly not as fast as some people would like.

ResearchFish have – for some time – been, er, trawling twitter for mentions and have been responding to any criticism with a fairly standard form of words.

We understand that you’re not keen on reporting on your funding through Researchfish but this seems quite harsh and inappropriate. We have shared our concerns with your funder.

https://twitter.com/Researchfish/status/1504542085369712640

There have since been a number of apologies and attempted apologies, UKRI and other funders have weighed in, and it’s been a bit of a mess. At the time of writing it remains unclear whether concerns were shared with the funder in question, though there are stories on twitter of academics being ordered in front of HR/Heads of School to explain themselves. So something has been going on.

Responding to legitimate criticism with threats to report critics to funders has gone down very poorly indeed. Researchers have questioned the GDPR implications… how does ResearchFish know which funder to “share their concerns” with? Is it misusing data?

Anyway, never mind all that

I’m less interested in the specifics of #ResearchFishGate and more interested in the broader issues raised about social media use. I’m sure I’m not the only one who saw the tweet and had a moment of alarm… who or what have I criticised? Have I gone too far? Is anyone going to share their concerns with my funder?

I have permission, approval and (occasionally) encouragement for my social media activities. (It helps when your meta-meta-meta-boss is Registrarism). With the proviso that I don’t “start slagging off funders on twitter”. And I have been a good boy.

However… you don’t get very far on Twitter if you’re in corporate drone mode. I wrote something about social media personas in 2014 (2014!), in which I argued for “smart casual” as a sensible Twitter approach. Adam-at-work, if you will. By showing my human side, I build relationships. If I didn’t, I wouldn’t. And if I didn’t build networks and relationships, then what’s the point?

One key point from #ResearchFishGate is that few if any of the critics actively @-ed ResearchFish into the discussion. They were talking about ResearchFish, not to ResearchFish. This is a really important point. However….

Of course ResearchFish has a Twitter search column for mentions of their name. I have one for links to my blog so I can find out if anyone’s tweeting about it (spoiler: they’re usually not). I have one for an LSE Impact Blog article I wrote in which I definitely don’t slag off funders, and occasionally I’ll set them up for Research Professional articles I’ve written. And I’m just a vain blogger who craves validation, not a corporate behemoth.

So anyone who tweets about ‘ResearchFish’ or any other funder or ecosystem platform or player, even without @-ing them in, is being naive if they think they won’t see it. Perhaps even if you disguise the name to evade searches… they might have that search set up too. Replacing all the vowels with “*”, in the style of some newspapers’ treatment of swear words, isn’t that original.

The traditional social media advice was always that it’s public and permanent… don’t tweet anything you wouldn’t want everyone else to see… who knows what will go viral (possibly wildly out of context)? Of the comments I’ve seen, some do include industrial language, but if there have been any that are abusive of individuals or even @-ing ResearchFish in, I’ve not seen them.

I’m sure it’s not nice to read that people don’t like your product… especially if it’s something you’ve worked very hard on trying to improve… none of us like criticism when we’re doing our best. We especially don’t like it when we can’t use it to improve in any way… but the answer there is to grow a thicker skin and ignore it. It would be entirely sensible to use twitter for sentiment analysis, and to look for feedback – especially if there are concerns or issues that can be addressed instantly with user guide advice, or which can be fed back to the devs. That’s okay.

It’s not okay to trawl twitter for mentions and then issue threats. It might be okay… just about… to make a polite enquiry in response to criticism and ask how the product could be improved. But it’s still barging – uninvited – into someone else’s conversation, even if it’s a conversation taking place in the public square.

And I think that’s something that has changed during the pandemic. The tweets that drew the ire of the Fish of Research are the kind of thing that would – in the before-times – probably have been said around the metaphorical water cooler. Only we’re not there so often any more… we’re working from home, or our colleagues are. We have our Teams chats, but those are generally work stuff, or work-flavoured. But Twitter’s right there, and it’s a different and broader social circle. We’re all feeling more alone, more atomised… so those of us on Twitter are perhaps leaning on it more for conversation, companionship, interaction, and validation than before.

I’ve complained about an issue that… in hindsight… I probably shouldn’t have, as it’s an internal University of Nottingham issue. But I learned that it’s a problem elsewhere too, that people agreed it was a problem, and I heard some extra-egregious examples of the kind of thing I complained about. So I don’t regret doing it. I have raised it with my colleagues, but I think they’ve had enough of me moaning about it. Also… what can they really say? We’re in agreement about it.

Conclusions?

Are there any? I guess so. A few lessons.

(1) Big Brother is watching you. Criticise any product or organisation on Twitter – even without @-ing them in – and you should assume that they’ll see it. None of the old advice about social media use and who might see it has changed. Indications are that employers are getting more stringent/intrusive about this too.

(2) The default assumption for any organisation (or public figure) on the receiving end of criticism should be that the critics are talking about you, not to you. Without an @, it’s a private conversation, and you should think very carefully before intruding. And then you probably shouldn’t, unless you think your intervention might be welcome.

(3) In spite of (1), I do think that the pandemic and wider social media use mean that there should be greater allowances made. A conversation can be both in the public square and a private conversation, with at least some allowance for language and tone. Perhaps X wouldn’t have criticised ResearchFish in precisely those terms and with precisely that language if X had known they were eavesdropping, but the overall sentiment would be the same. That’s not to say that there aren’t still lines that shouldn’t be crossed… just that perhaps the tolerance band should be broader than before.

My defined contribution to the UCU strike ballot debate

At the time of writing, UCU are balloting on strike action in response to (among other things) draconian cuts to USS. My gut reaction is also three letters…. FFS.

Though – spoiler alert – I am going to be voting for strike action very reluctantly and with a very heavy heart.

“Freedom for the University of Tooting!”

I wrote a post about strike action and the importance of union membership back in 2013, and another on the pensions strike back in 2018. I think both posts hold up pretty well. But briefly, and contrary to popular demand, here are all the things I think about pensions.

  • Pension planning seems like a technocratic problem that ought to have a technocratic solution. Or, more properly, a range of technocratic solutions to choose from, depending on our priorities and preferences. Which, again, we can talk about.
  • Is there a genuine problem with the pension scheme that’s not been resolved by all of the many, many previous pension cuts we’ve had since I signed up to USS about twenty years ago? Each time we were promised that this cut would resolve the genuine problems with our pension scheme. Each time it hasn’t. If there is still a problem, it ought to be understandable and communicable. And something that can be negotiated about, around, and through.
  • But UUK/university management has made this impossible through failures of transparency, dubious consultations, and a level of spin that borders on the Trumpian. It’s all massively counterproductive – we’re not stupid, so stop treating us as if we are. Those minded to at least entertain the thought that there’s an issue with our pension scheme don’t trust UUK, because they have acted – and continue to act – in bad faith.
  • My suspicion is that they’ll be back again, and again, and again, and again for as long as they can get away with it. Same arguments each time. Back in 2018 we needed draconian cuts, apparently, and then after sustained industrial action, we didn’t any more. It’s almost as if… etc and so on. Universities may not be for-profit, but university management wants surpluses for reinvestment in their pet projects (at least some of which are genuinely good ideas) because they tend to want to make their mark. So it is in their interests to drive costs down as low as possible and keep them there.
  • Colleagues not paying into their pensions because contributions are too high is a genuine problem with our pension scheme. Even if framing it as the sole consideration – and not mentioning the, you know, massive cuts – is disingenuous in the extreme.
  • Pensions are not a perk, but deferred salary. Organisations whose continued existence is very certain (broadly, public services) are in a position to provide better pensions. As a trade-off, salaries are lower. We knew this when we chose our careers and expect the deal to be honoured. Why should we have better pensions than some other sectors? Because that was always part of the deal.
  • I hate being on strike. I hate arranging to have some of my work covered by colleagues who are themselves busy, especially when we’re several posts down. I hate the divisions it causes. I hate the stress it imposes and the difficult decisions about who to let down and how. I hate coming back to work to find that I’ve got a huge backlog that only I can clear. I don’t like not getting paid, and essentially having to work for free to catch up.
  • Some people like being on strike and the conflict and the associated rituals and the ‘winter of discontent’ cosplaying just that little bit too much.
  • Media coverage of all industrial action is always disgracefully one-sided. Management want ‘reform’ and are presented positively… management talking points always lead, and they are never challenged by the reporter. Workers are striking over ‘pay and conditions’… presented as selfish or short-sighted. And they are always challenged. The framing is always that of management. Always. Strikers will be vilified – ‘won’t somebody think of the [whoever is inconvenienced]’ – with no sense of awareness. The work that the strikers do isn’t important until they stop doing it, apparently. Often reporters will quote someone affected saying how annoyed they are. Again, this will be framed as the fault of the strikers rather than the failure of employers to manage their industrial relations in a competent manner. That question is never even asked, never mind answered. It’s a dance as old as time. Or at least as old as capitalism.

The Four Fights

But it’s not just about pensions. It’s about the ‘Four Fights’ too.

address the scandal of the gender, ethnic, and disability pay gap

end contract casualisation and rising job insecurity

tackle the rising workloads driving our members to breaking point

increase to all spine points on the national pay scale of £2,500. [to make up for a 17.6% real terms pay cut between 2009-2019]

(UCU website, accessed 18th Oct 2021)

All laudable goals. Especially the pay gaps… it really ought not to be beyond the wit of folks of good will in university management and UCU to come up with an action plan to start to address this. Granted, we cannot solve the problems of discrimination and inequality in wider society, but we can do our bit, and it ought not to be that expensive. I don’t really understand why this is so hard to agree on.

The others, though. Pragmatically… how are they to be achieved? And can they be achieved without costing more? And if not, can we afford all of them, and how do we prioritise?
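On the pay claim specifically, it’s worth being clear how a headline figure like a 17.6% real-terms cut accumulates. A minimal sketch – with illustrative numbers, not UCU’s actual methodology or real pay data – shows how modest annual gaps between awards and inflation compound:

```python
# Illustrative only -- not UCU's methodology or real pay/inflation figures:
# small annual gaps between pay awards and inflation compound over a decade.

def real_terms_change(awards, inflation):
    """Cumulative real-terms pay change (%) from yearly award/inflation rates (%)."""
    ratio = 1.0
    for award, infl in zip(awards, inflation):
        ratio *= (1 + award / 100) / (1 + infl / 100)
    return (ratio - 1) * 100

# Ten hypothetical years of 1% awards against 3% inflation:
print(f"{real_terms_change([1.0] * 10, [3.0] * 10):.1f}%")  # -> -17.8%
```

A two-point gap each year doesn’t sound like much at the time, which is rather the point.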

Let’s get a few red herrings out of the way first.

First, you might very reasonably be very cross about the above-average-inflation pay awards to some vice-chancellors and some senior university staff. You might be one of those people who – consistently – think this is an issue across the whole economy. Or you may be one of those people who – inconsistently – think nothing of the worst excesses of the private sector’s snouts-in-troughery, but object to anyone in public service being ‘paid more than the Prime Minister’. But… even if we cut executive pay by… let’s say a third… that by itself would give us nowhere near enough money to address any of our issues.

Second, you might form the view that there are too many ‘faceless’ managers and administrators. If so, I would invite you to (a) read this piece and reconsider; (b) reflect on the fact that ‘faceless’ just means you don’t know them or understand what they do; and (c)… we’re right here, folks. Striking alongside you. Those of us who can afford to.

“Down with this sort of thing!”

Low(er) cost solutions

Let’s consider what could be done quickly and relatively cheaply. How far can humane management/HR practices go in addressing casualisation and job insecurity? A fair bit, I’d imagine. We could be much better at giving researchers permanent or open-ended contracts, even if the reality is that redundancy processes can and will still be used to terminate contracts where there’s a lack of funding. We can treat our fixed-term staff better, and we can take our duty to support the development of our staff much more seriously. We should be setting them up for their next role, whether that’s with the same institution or elsewhere.

Demand for academic posts exceeds supply. This is a topic for another blogpost, because scarcity of opportunity and funding are wicked problems which drive a lot of what’s wrong with research culture. But we could do better at not ruthlessly exploiting that fact for fun and profit. To avoid a race to the bottom for competitive advantage, we need sector-wide norms and standards. And as far as I understand it (and correct me if I’m wrong, which I often am), this is what’s being resisted. I don’t believe that we have the most humane management/HR practices we could, and that’s why I’m reluctantly supporting industrial action on this point.

Can we tackle rising workloads without spending a lot of money? Again, there are certainly some things we can do. I’ve been coordinating my university’s response to the Review of Research Bureaucracy, which has been an eye-opening experience. It’s only looked at externally imposed bureaucracy, and only research bureaucracy. It may be that the real issues are internally imposed research bureaucracy, teaching bureaucracy (especially), and, well, administrative bureaucracy. I’m sure there’s more that can be done, and some of that may involve employing more administrative and managerial support. My vision is of a university where academics do academia, and administrators and managers do administration and management. And we do leadership together.

We might expect universities to take a long, hard look at what’s expected of the average academic and review which expectations are reasonable. Too many institutions have grant income targets that are scarcely on a nodding acquaintance with reality. They appear not to understand that limited funding means that for some to succeed, others must fail. I know it’s fashionable to blame the REF for everything. But actually the last REF’s rules, which moved away from demanding four publications per researcher, opened the door to greater flexibility of roles and expectations.

But for all the talk of ‘be kind’ and yoga and wellness and mindfulness and whatever else, there’s still far too much unkind management and unrealistic expectations. Personally, I’m currently lucky to benefit from supportive, enlightened and empowering management (hello, if you’re reading), but I’ve also experienced the opposite and there’s far too much of it about. Whether sector-wide strike action is the way to address rising workloads I’m not sure. What could we do at a national, sector-wide level? What would that look like? I’m convinced of the importance of the issue, but less so for the case for national strike action as the mechanism to resolve it. But I’m open to persuasion.

Fantasy Head of School

But another possible response to rising workloads is… well… sorry… it’s casualisation and job insecurity.

Sandra Oh, in ‘The Chair’ (Netflix)

Let’s play Fantasy Head of School. Or University Management Simulator. Hypothetically, anyway. Pressures on your permanent staff too great? Use your limited resources to buy in as much extra teaching capacity as possible… which means sessional teachers and short-term teaching fellowships or teaching-focused roles. Or… we treat those staff better, give them more professional development time and more scholarship time, and we get less teaching capacity for our buck. And workloads go up.

Look, if you don’t know who this is, just go and watch Community. Thank me later. It’s much funnier than ‘The Chair’.

That’s not the only issue – creating better bottom-rung-of-the-academic-ladder jobs – more hours, longer contracts – almost certainly means fewer such opportunities. Is that a good thing? I think, on balance, probably… but it’s not straightforward. My one remaining reader will no doubt be sounding the ‘false dichotomy’ klaxon at this point. Correctly so. We can, of course, find a compromise or a balance of sorts, but let’s not pretend it’s straightforward. We can’t have everything.

Do we employ more staff (to reduce workloads) on more secure contracts (to reduce insecurity)? Or do we address the real-terms 17.6% pay cut by increasing all spine points by £2.5k? And – dare I say it – pay higher employers’ pension contributions, if that is indeed actually needed? What gets priority? You’ll forgive me if I would rather see any £££ for pay rises focused at the lower end of the pay scale rather than giving Profs another two and a half grand. Whisper it quietly, but I’d rather spend it on the lowest-paid university employees, who tend to be represented by UNITE or UNISON rather than UCU. And a focus on the lowest-paid/lower spine grades and spine points might also be a good way to start addressing pay gaps.

Pay costs and non-pay costs

As Fantasy University Manager, could we hack away at non-pay costs? Conference funding? Seedcorn funding for new research ideas? Research facilities and infrastructure? The university’s core infrastructure and systems which – when working well – create efficiency savings and minimise friction? Estates and buildings? Student spaces? Lecture theatres? Seminar rooms? One of my constant frustrations as a Research Development Manager is working with brilliant colleagues with outstanding ideas who we can’t support with kit/seedcorn ££/infrastructure as they deserve.

I’ve read in a number of places that the percentage of average university income spent on staff costs has been in decline for some time. The best source I can find for this is this UCU article from 2017. I’m wary of trying to dive into HESA stats as I’m not competent to play with financial data without water wings and a lifeguard. If anyone has any better sources or more up-to-date info, please let me know via twitter, email, or in the comments. This decline may or may not coincide with a long run of real-terms pay cuts, and the two may be related. Or not. I’m also not sure what the percentage of staff costs for an organisation ought to look like… my instinct is that under 54.6% seems very, very low. But I’m not sure why I think that… some half-remembered presentation? Or a Business School research grant application? But if it is low, I don’t know why that might be, or what it might mean.

I’m not sure what I think about grand estates/infrastructure projects. Obviously some have gone very well, others very badly. Can we reduce investment on estates and infrastructure to spend more on staffing? There’s a balance to be struck. One option is that we say the balance has swung too far, and we cut back and spend more on staff. Another option is that we end everything but essential maintenance to spend more on staff, but that’s not sustainable in the long run. Unless we want dilapidated lecture theatres and ageing research kit, because if that happened we’d be the first to complain about a lack of investment.

Let’s assume for the sake of argument that the pendulum has swung too far, and that there would be extra money at all or most institutions if they were to cut back or delay or cancel some estates and infrastructure projects. Even on top of whatever COVID-related cuts have been made. If there is that money available, how do we spend it? Because I’m not convinced that there’s enough of a saving there to cover everything that UCU is asking for.

There isn’t a magic money tree. Pragmatically speaking, the resource envelope is what it is. Unless anyone is willing to spend, spend, spend and dare the government to shut them down or bail them out. Perhaps £££ will be increased under a future government of a more progressive frame of mind, willing to invest more in public and quasi-public services. But that won’t happen in the short, or perhaps even medium, term. And when it does, I suspect that universities will be some way down the priority list. As Fantasy Head of School, you need to make decisions now.

I’m aware, of course, that UCU’s demands are a wish list, a negotiating position. It’s also a way of achieving a broad consensus among colleagues whose interests are not precisely aligned. If we look at the Four Fights and the Pensions situation purely selfishly, we’d not all have the same list of priorities.

But ultimately, we have a long list of demands. Some of which can be met or addressed without prohibitively expensive measures… but for others, if there is money available, we’ll need to prioritise. And that prioritisation is likely to be controversial and uncomfortable. And we can either engage with prioritisation, or we can leave it to university management.

I know which I’d rather do.

Prêt-à-non-portability? Implications and possible responses to the phasing out of publication portability

“How much has been decided about the REF? About this much. And how much of the REF period is there to go? Well, again…”

Recently, I attended an Open Forum Events one-day conference with the slightly confusing title ‘Research Impact: Strengthening the Excellence Framework‘ and gave a short presentation with the same title as this blog post. It was a very interesting event with some great speakers (and me), and I was lucky enough to meet up with quite a few people I only previously ‘knew’ through Twitter. I’d absolutely endorse Sarah Hayes‘ blogpost for Research Whisperer about the benefits of social media for networking for introverts.

Oh, and if you’re an academic looking for something approaching a straightforward explanation about the REF, can I recommend Charlotte Mathieson‘s excellent blog post. For those of you after in-depth half-baked REF policy stuff, read on…

I was really pleased with how the talk went – it’s one thing writing up summaries and knee-jerk analyses for a mixed audience of semi-engaged academics and research development professionals, but it’s quite another giving a REF-related talk to a room full of REF experts. It was based in part on a previous post I’ve written on portability, but my views (and what we know about the REF) have moved on since then, so I thought I’d have a go at summarising the key points.

I started by briefly outlining the problem and the proposed interim arrangements before looking at the key principles that needed to form part of any settled solution on portability for the REF after next.

Why non-portability? What’s the problem?

I addressed most of this in my previous post, but I think the key problem is that it turns what ought to be something like a football league season into an Olympic event. With a league system, the winner is whoever earns the most points over a long, drawn-out season. Three points is three points, whatever stage of the season it comes at. With Olympic events, it’s all about peaking at the right time during the cycle – and in some events within the right ten seconds of that cycle. Both are valid as sporting competition formats, but for me, Clive, the REF should be more like a league season than a contest to see who can peak best on census day. And that’s what the previous REF rules encouraged – fractional short-term appointments around the census date; bulking out the submission then letting people go afterwards; rent-seeking behaviour from some academics holding their institution to ransom; poaching and instability; transfer window effects on mobility; and panic buying.

If the point of the REF is to reward sustained excellence over the previous REF cycle with funding to institutions to support research over the next REF cycle, surely it’s a “league season” model we should be looking at, not an Olympic model. The problem with portability is that it’s all about who each unit of assessment has under contract and able to return at the time, even if that’s not a fair reflection of their average over the REF cycle. So if a world class researcher moves six months before the REF census date, her new institution would get REF credit for all of her work over the last REF cycle, and the one which actually paid her salary would get nothing in REF terms. Strictly speaking, this isn’t a problem of publication portability, it’s a problem of publication non-retention. Of which more later.

I summarised what’s being proposed as regards portability as a transition measure in my ‘Initial Reactions‘ post, but briefly, by far the most likely outcome for this REF is one that retains full portability and full retention. In other words, when someone moves institution, she takes her publications with her and leaves them behind. I’m going to follow Phil Ward of Fundermentals and call these Schrodinger’s Publications, but as HEFCE point out, plenty of publications were returned multiple times by multiple institutions in the last REF, as each co-author could return it for her institution. It would be interesting to see what proportion of publications were returned multiple times, and what the record is for the number of times a single publication has been submitted.

Researcher Mobility is a Good Thing

Marie Curie and Mr Spock have more in common than radiation-related deaths – they’re both examples of success through researcher mobility. And researcher mobility is important – it spreads ideas and methods, and allows critical masses of expertise to form. Researchers are human too: they may need to relocate for personal reasons, are entitled to seek better-paid work and better conditions, and might – like any other employee – just benefit from a change of scene.

For all these reasons, future portability rules need to treat mobility as positive, and as a human right. We need to minimise ‘transfer window’ effects that force movement into specific stages of the REF cycle – although it’s worth noting that plenty of other professions have transfer windows – teachers, junior doctors (I think), footballers, and probably others too.

And for this reason, and for reasons of fairness, publications from staff who have departed need to be assessed in exactly the same way as publications from staff who are still employed by the returning UoA. Certainly no UoA should be marked down or regarded as living on past glories for returning as much of the work of former colleagues as they see fit.

Render unto Caesar

Institutions are entitled to a fair return on investment in terms of research, though as I mentioned earlier, it’s not portability that’s the problem here so much as non-retention. As Fantasy REF Manager I’m not that bothered by someone else submitting some of my departed star player’s work written on my £££, but I’m very much bothered if I can’t get any credit for it. Universities are given funding on the basis of their research performance as evaluated through the previous REF cycle to support their ongoing endeavours in the next one. This is a really strong argument for publication retention, and it seems to me to be the same argument that underpins impact being retained by the institution.

However, there is a problem which I didn’t properly appreciate in my previous writings on this. It’s the investment/divestment asymmetry issue, as absolutely no-one except me is calling it. It’s an issue not for the likely interim solution, but for the kind of full non-portability system we might have for the REF after next.

In my previous post I imagined a Fantasy REF Manager operating largely a one-in, one-out policy – thus I didn’t need new appointees’ publications because I got to keep their predecessors’. And provided that staff mobility was largely one-in, one-out, that’s fine. But it’s less straightforward if it’s not. At the moment the University of Nottingham is looking to invest in a lot of new posts around specific areas (“beacons”) of research strength – really inspiring projects, such as the new Rights Lab which aims to help end modern slavery. And I’m sure plenty of other institutions have similar plans to create or expand areas of critical mass.

Imagine a scenario where I as Fantasy REF Manager decide to sack a load of people immediately prior to the REF census date. Under the proposed rules I get to return all of their publications, and I can have all of the associated income for the duration of the next REF cycle – perhaps seven years’ funding. On the other hand, if I choose to invest in extra posts that don’t merely replace departed staff, it could be a very long time before I see any return, via REF funding at least. It’s not just that I can’t return their publications that appeared before I recruited them; it’s that the consequences of not being able to return a full REF cycle’s worth of publications will have funding implications for the whole of the next REF cycle. The no-REF-disincentive-to-divest and long-lead-time-for-REF-reward-for-investment looks lopsided and problematic.

If I’m a smart Fantasy REF Manager, this means I’ll save up my redundancy axe-wielding (at worst) or recruitment freeze (at best) for the end of the REF cycle, and I’ll be looking to invest only right at the beginning of the next one. I’ve no idea what the net effect of all this will be repeated across the sector, but it looks to me as if non-portability just creates new transfer windows and feast-and-famine cycles around recruitment. And I’d be very worried if universities ended up delaying or cancelling or scaling back major strategic research investments because of a lack of REF recognition in terms of new funding.

Looking forward: A settled portability policy

A few years back, HEFCE issued some guidance about Open Access and its place in the coming REF. They did this more or less ‘without prejudice’ to any other aspect of the REF – essentially, whatever the rest of the REF looks like, these will be the open access rules. And once we’ve settled the portability rules for this time (almost certainly using the Schrodinger’s publications model), I’d like to see them issue some similar ‘without prejudice’ guidelines for the following REF.

I think it’s generally agreed that the more complicated but more accurate model that would allow limited portability and full retention can’t be implemented at such short notice. But perhaps something similar could work with adequate notice and warning for institutions to get the right systems in place, which was essentially the point of the OA announcement.

I don’t think a full non-portability, full-retention system as currently envisaged could work without some finessing, and every bit of finessing for fairness comes at the cost of complication. As well as the investment/divestment asymmetry problem outlined above, there are other issues too.

The academic ‘precariat’ – those on fixed-term/teaching-only/fractional/sessional contracts – need special rules. An institution employing someone to teach one module with no research allocation surely shouldn’t be allowed to return that person’s publications. One option would be to say something like ‘teaching only’ = full portability, no retention; and ‘fixed term with research allocation’ = the Schrodinger system of publications being retained and being portable. Granted, this opens the door to other games (perhaps turning down a permanent contract to retain portability?), but I don’t think these are as serious as the current games, and I’m sure they could be finessed.

While I argued previously that career-young researchers have more to gain than to lose from a system whereby appointments are made more on potential than on track record, the fact that so many are as concerned as they are means that there needs to be some sort of reassurance or allowance for those not in permanent roles.

Disorder at the border. What happens to publications written on Old Institution’s time but eventually published under New Institution’s affiliation? We can also easily imagine publication filibustering, whereby researchers delay publication to maximise their position in the job market. Not only are delays in publication bad for science, but there’s also the potential for inappropriate pressure to be applied by institutions to hold something back or rush something out. It could easily put researchers in an impossible position, and has the potential to poison relationships with previous employers and with new ones. Add in the possible effects of multiple job moves on multi-author publications and this gets messy very quickly.

One possible response to this would be to allow a portability/retention window that goes two ways – so my previous institution could still return my work published (or accepted) up to (say) a year after my official leave date. Of course, this creates a lot of admin, but it’s entirely up to my former institution whether it thinks that it’s worth tracking my publications once I’ve gone.

What about retired staff? As far as I can see there’s nothing in any documents about the status of the publications of retired staff, either in this REF or in any future plans. The logic should be that they’re returnable in the same way as those of any other researcher who has left during the REF period. Otherwise we’ll end up with pressure to stay on, and perhaps other kinds of odd incentives not to appoint people who will retire before the end of a REF cycle.

One final suggestion…

One further half-serious suggestion… if we really object to game playing, perhaps the only fair way to properly reward excellent research and impact and to minimise game playing is to keep the exact rules of the REF a secret for as long as possible in each cycle. That would force institutions to focus on “doing good stuff” and worry less about gaming the REF.

  • If you’re really interested, you can download a copy of my presentation … but if you weren’t there, you’ll just have to wonder about the blank page…

‘Unimaginative’ research funding models and picking winners

XKCD 1827 – Survivorship Bias  (used under Creative Commons Attribution-NonCommercial 2.5 License)

Times Higher Education recently published an interesting article by Donald Braben, endorsed by 36 eminent scholars including a number of Nobel laureates. They criticise “today’s academic research management” and claim that, as an unforeseen consequence, “exciting, imaginative, unpredictable research without thought of practical ends is stymied”. The article fires off somewhat scattergun criticism of the usual bêtes noires – the inherent conservatism of peer review; the impact agenda and lack of funding for blue-skies research; and grant application success rates.

I don’t deny that there’s a lot of truth in their criticisms… but I think, in terms of research policy and deciding how best to use limited resources, it’s all a bit more complicated than that.

Picking Winners and Funding Outsiders

Look, I love an underdog story as much as the next person. There’s an inherent appeal in the tale of the renegade scholar, the outsider, the researcher who rejects the smug, cosy consensus (held mainly by old white guys) and whose heterodox ideas – considered heretical nonsense by the establishment – are ultimately triumphantly vindicated. Who wouldn’t want to fund someone like that? Who wouldn’t want research funding to support the most radical, most heterodox, most risky, most amazing-if-true research? I think I previously characterised such researchers as a combination of Albert Einstein and Jimmy McNulty from ‘The Wire’, and it’s a really seductive picture. Perhaps this is part of the reason for the MMR fiasco.

The problem is that the most radical outsiders are functionally indistinguishable from cranks and charlatans. Are there many researchers with a more radical vision than the homeopath, whose beliefs imply not only that much of modern medicine is misguided, but that so is our fundamental understanding of the physical laws of the universe? Or the anti-vaxxers? Or the holocaust deniers?

Of course, no-one is suggesting that these groups be funded, and, yes, I’ll admit it’s a bit of a cheap shot aimed at a straw target. But even if we can reliably eliminate the cranks and the charlatans, we’ll still be left with a lot of fringe science. An accompanying THE article quotes Dudley Herschbach, joint winner of the 1986 Nobel Prize for Chemistry, as saying that his research was described as being at the “lunatic fringe” of chemistry. How can research funders tell the difference between lunatic ideas with promise (both interesting-if-true and interesting-even-if-not-true) and lunatic ideas that are just… lunatic? If it’s possible to pick winners, then great. But if not, it sounds a lot like buying lottery tickets and crossing your fingers. And once we’re in the business of applying a greater degree of scrutiny to picking winners, we’re back to having peer review again.

One of the things that struck me about much of the history of science is that there are many stories of people who believed they were right – in spite of the scientific consensus and in spite of the state of the evidence available at the time – but who proceeded anyway, heroically ignoring objections and evidence, until ultimately vindicated. We remember these people because they were ultimately proved right, or rather, because their theories were ultimately proved to have more predictive power than those they replaced.

But I’ve often wondered about such people. They turned out to be right, but were they right because of some particular insight, or were they right because they were lucky in that their particular prejudice happened to line up with the actuality? Was it just that the stopped clock is right twice per day? Might their pig-headedness equally well have carried them along another (wrong) path entirely, leaving them to be forgotten as just another crank? And just because someone is right once, is there any particular reason to think that they’ll be right again? (Insert obligatory reference to Newton’s dabblings with alchemy here). Are there good reasons for thinking that the people who predicted the last economic crisis will also predict the next one?

A clear way in which luck – interestingly rebadged as ‘serendipity’ – is involved is through accidental discoveries. Researchers are looking at X when… oh look at Y, I wonder if Z… and before you know it, you have a great discovery which isn’t what you were after at all. Free packets of post-it notes all round. Or when ‘blue skies’ research which had no obvious practical application at the time becomes a key enabling technology or insight later on.

The problem is that all these stories of serendipity and of surprise impact and of radical outsider researchers are examples of lotteries in which history only remembers the winning tickets. Through an act of serendipity, XKCD published a cartoon illustrating this point nicely (see above) just as I was thinking about these issues.

But what history doesn’t tell us is how many lottery tickets research funding agencies have to buy in order to have those spectacular successes. And just as importantly, whether or not a ‘lottery ticket’ approach to research funding will ultimately yield a greater return on investment than a more ‘unimaginative’ approach to funding using the tired old processes of peer review undertaken by experts in the relevant field followed by prioritisation decisions taken by a panel of eminent scientists drawn from across the funder’s remit. And of course, great successes achieved through this method of having a great idea, having the greatness of the idea acknowledged by experts, and then carrying out the research is a much less compelling narrative or origin story, probably to the point of invisibility.

A mixed ecosystem of conventional and high-risk, high-reward funding streams

I think there would be broad agreement that the research funding landscape needs a mixture of funding methods and approaches. I don't take Braben and his co-signatories to be calling for wholesale abandonment of peer review, of themed calls around particular issues, or even of the impact agenda. And while I'd defend all those things, I similarly recognise merit in high-risk, high-reward research funding, and in attempts by major funders to try to address the problem of peer review conservatism. But how do we achieve the right balance?

Braben acknowledges that "some agencies have created schemes to search for potentially seminal ideas that might break away from a rigorously imposed predictability", and we might include the European Research Council and the UK Economic and Social Research Council as examples of funders who've tried to do this, at least in some of their schemes. The ESRC in particular abandoned traditional peer review on one scheme in favour of a Dragons' Den-style pitch-to-peers format, and the EPSRC is making increasing use of sandpits.

It’s interesting that Braben mentions British Petroleum’s Venture Research Initiative as a model for a UCL pilot aimed at supporting transformative discoveries. I’ll return to that pilot later, but he also mentions that the one project that scheme funded was later funded by an unnamed “international benefactor”, which I take to be a charity or private foundation or other philanthropic endeavor rather than a publically-funded research council or comparable organisation. I don’t think this is accidental – private companies have much more freedom to create blue skies research and innovation funding as long as the rest of the operation generates enough funding to pay the bills and enough of their lottery tickets end up winning to keep management happy. Similarly with private foundations with near total freedom to operate apart perhaps from charity rules.

But I would imagine that it's much harder for publicly funded research councils to take these kinds of risks, especially during austerity. ("Sorry Minister, none of our numbers came up this year, but I'm sure we'll do better next time.") In a UK context, the Leverhulme Trust – a happy historical accident funded largely through dividend payments from its bequeathed shareholding in Unilever – seeks to differentiate itself from the research councils by styling itself as more open to risky and/or interdisciplinary research, and could perhaps develop further in this direction.

The scheme that Braben outlines is genuinely interesting. Internal only within UCL, a very light touch application process mainly involving interviews/discussion, decisions taken by "one or two senior scientists appointed by the university" – not subject experts, I infer, as they're the same people for each application. Over 50 applications since 2008 have so far led to one success. There's no obligation to make an award to anyone, and they can fund more than one. It's not entirely clear from this article whether the applicant was – as Braben proposes for the kinds of schemes he calls for – "exempt from normal review procedures for at least 10 years. They should not be set targets either, and should be free to tackle any problem for as long as it takes".

From the article I would infer that his project received external funding after 3 years, but I don’t want to pick holes in a scheme which is only partially outlined and which I don’t know any more about, so instead I’ll talk about Braben’s more general proposal, not the UCL scheme in particular.

It’s a lot of power in a very few hands to give out these awards, and represents a very large and very blank cheque. While the use of interviews and discussion cuts down on grant writing time, my worry is that a small panel and interview based decision making may open the door to unconscious bias, and greater successes for more accomplished social operators. Anyone who’s been on many interview panels will probably have experienced fellow panel members making heroic leaps of inference about candidates based on some deep intuition, and in the tendency of some people to want to appoint the more confident and self-assured interviewee ahead of a visibly more nervous but far better qualified and more experienced rival. I have similar worries about “sand pits” as a way of distributing research funding – do better social operators win out?

The proposal is for no normal review procedures, and for ten years in which to work, possibly longer. At Nottingham – as I'm sure at many other places – our nearest equivalent scheme is something like a strategic investment fund which can cover research as well as teaching and other innovations. (Here we stray into things I'm probably not supposed to talk about, so I'll stop). But these are major investments, and there's surely got to be some kind of accountability in decision-making processes and some sort of stop-go criteria or review mechanism during the project's life cycle. I'd say that the courage to start up a high-risk, high-reward research project has to be accompanied by the courage to shut it down too. And that's hard, especially if livelihoods and professional reputations depend upon it – it's a tough decision for those leading the work and for the funder too. But being open to the possibility of shutting down work implies a review process of some kind.

To be clear, I’m not saying let’s not have more high-risk high-reward curiosity driven research. By all means let’s consider alternative approaches to peer review and to decision making and to project reporting. But I think high risk/high reward schemes raise a lot of difficult questions, not least what the balance should be between lottery ticket projects and ‘building society savings account’ projects. We need to be aware of the ‘survivor bias’ illustrated by the XKCD cartoon above and be aware that serendipity and vindicated radical researchers are both lotteries in which we only see the winning tickets. We also need to think very carefully about fair selection and decision making processes, and the danger of too much power and too little accountability in too few hands.

It’s all about the money, money, money…

But ultimately the problem is that there are a lot more researchers and academics than there used to be, and their numbers – in many disciplines – are determined not by the amount of research funding available nor the size of the research challenges, but by the demand for their discipline from taught-course students. And as higher education has expanded hugely since the days in which most of Braben's "500 major discoveries" were made, there are just far more academics and researchers than there is funding to go around. And that's especially true given recent "flat cash" settlements. I also suspect that the costs of research are now much higher than they used to be, given both the technology available and the technology required to push further at the boundaries of human understanding.

I think what’s probably needed is a mixed ecology of research funders and schemes. Probably publically funded research bodies are not best placed to fund risky research because of accountability issues, and perhaps this is a space in which private foundations, research funding charities, and universities themselves are better able to operate.

HEFCE publishes ‘Consultation on the second Research Excellence Framework (REF 2021)’

“Let’s all meet up in the Year… 2021”

In my previous post I wrote about the Stern Review, and in particular the portability issue – whereby publications remained with the institution where they were written, rather than moving institutions with the researcher – which seemed by some distance the most vexatious and controversial issue, at least judging by my Twitter feed.

Since then there has been a further announcement about a forthcoming consultation exercise which would look at the detail of the implementation of the Stern Review – a pretty clear signal that the overall principles and rationale had been accepted, and that Lord Stern's comments that his recommendations were meant to be taken as a whole and were not amenable to cherry picking had been heard and taken to heart.

Today – only ten days or so behind schedule – the consultation has been launched.  It invites “responses from higher education institutions and other groups and organisations with an interest in the conduct, quality, funding or use of research”. In paragraph 15, this invitation is opened out to include “individuals”. So as well as contributing to your university response, you’ve also got the opportunity to respond personally. Rather than just complain about it on Twitter.

Responses are only accepted via an online form, although the questions on that online form are available for download as a Word document. There are 44 questions for which responses are invited, and although these are free text fields, the format of the consultation is to solicit responses to very specific questions, as perhaps would be expected given that the consultation is about detail and implementation. Paragraph 10 states that

“we have taken the [research excellence] framework as implemented in 2014 as our starting position for this consultation, with proposals made only in those areas where our evidence suggests a need or desire for change, or where Lord Stern’s Independent Review recommends change. In developing our proposals, we have been mindful of the level of burden indicated, and have identified where certain options may offer a more deregulated approach than in the previous framework. We do not intend to introduce new aspects to the assessment framework that will increase burden.”

In other words, I think we can assume that 2014 plus Stern = the default and starting position, and I would be surprised if any radical departures from this resulted from the consultation. Anyone wanting to propose something radically different is wasting their time, even if the first question invites “comments on the proposal to maintain an overall continuity of approach with REF 2014.”

So what can we learn from the questions? The first thing that strikes me is that it's a very detailed and very long list of questions on a lot of issues, some of which aren't particularly contentious – but that's indicative of an admirable thoroughness and rigour. The second is that they're all about implementation. The third is that reduction of burden on institutions is a key criterion, which has to be welcome.

Units of Assessment 

It looks as if there’s a strong preference to keep UoAs pretty much as they are, though the consultation flags up inconsistencies of approach from institutions around the choice of which of the four Engineering Panels to submit to. Interestingly, one of the issues is comparability of outcome (i.e. league tables) which isn’t technically supposed to be something that the REF is concerned with – others draw up league tables using their own methodologies, there’s no ‘official’ table.

It also flags up concerns expressed by the panel about Geography and Archaeology, and worries about forensic science, criminology, and film and media studies, I think around subject visibility under current structures. But while some tweaks may be allowed, there will be no change to the current structure of Main Panel/Sub Panel, so no sub-sub-panels, though one of the consultation possibilities is about sub-panels setting different sub-profiles for different areas that they cover.

Returning all research active staff

This section takes as a starting point that all research active staff will be returned, and seeks views on how to mitigate game-playing and unintended consequences. The consultation makes a technical suggestion around using HESA cost centres to link research active staff to units of assessment, rather than leaving institutions the flexibility to decide – to choose a completely hypothetical example drawn in no way from experience with a previous employer – to submit Economists and Educationalists into a beefed-up Business and Management UoA. This would reduce that element of game playing, but would also negatively affect those whose research identity doesn't match their teaching/School/Department identity – say, bioethicists based in medical or veterinary schools, or those involved in area studies and another discipline (business, history, law) who legitimately straddle more than one school. A 'get returned where you sit' approach might penalise them and might affect an institution's ability to tell the strongest possible story about each UoA.

As you’d expect, there’s also an awareness of very real worries about this requirement to return all research active staff leading to the contractual status of some staff being changed to teaching-only. Just as last time some UoAs played the ‘GPA game’ and submitted only their best and brightest, this time they might continue that strategy by formally taking many people out of ‘research’ entirely. They’d like respondents to say how this might be prevented, and make the point that HESA data could be used to track such wholesale changes, but presumably there would need to be consequences in some form, or at least a disincentive for doing so. But any such move would intrude onto institutional autonomy, which would be difficult. I suppose the REF could backdate the audit point for this REF, but it wouldn’t prevent such sweeping changes for next time. Another alternative would be to use the Environment section of the REF to penalise those with a research culture based around a small proportion of staff.

Personally, I’m just unclear how much of a problem this will be. Will there be institutions/UoAs where this happens and where whole swathes of active researchers producing respectable research (say, 2-3 star) are moved to teaching contracts? Or is the effect likely to be smaller, with perhaps smaller groups of individuals who aren’t research active or who perhaps haven’t been producing being moved to teaching and admin only? And again, I don’t want to presume that will always be a negative move for everyone, especially now we have the TEF on the horizon and we are now holding teaching in appropriate esteem. But it’s hard to avoid the conclusion that things might end up looking a bit bleak for people who are meant to be research active, want to continue to be research active, but who are deemed by bosses not to be producing.

Decoupling staff from outputs

In the past, researchers were returned with four publications minus any reductions for personal circumstances. Stern proposed that the number of publications to be returned should be double the number of research active staff, with each person being able to return between 0 and 6 publications. A key advantage of this is that it dispenses with the need to consider personal circumstances and reductions in the number of publications – straightforward in cases of early career researchers and maternity leave, but less so for researchers needing to make the case on the basis of health problems or other potentially traumatic life events. Less admin, less intrusion, less distress.

One worry expressed in the document is about whether this will allow panel members to differentiate between very high quality submissions with only double the number of publications to be returned. But they argue that sampling would be required if a greater multiple were to be returned.

There’s also concern that allowing a maximum of six publications could allow a small number of superstars to dominate a submission, and a suggestion is that the minimum number moves from 0 to 1, so at least one publication from every member of research active staff is returned. Now this really would cause a rush to move those perceived – rightly or wrongly – as weak links off research contracts! I’m reminded of my MPhil work on John Rawls here, and his work on the difference principle, under which nearly just society seeks to maximise the minimum position in terms of material wealth – to have the richest poorest possible. Would this lead to a renewed focus on support for career young researchers, for those struggling for whatever reason, to attempt to increase the quality of the weakest paper in the submission and have the highest rated lowest rated paper possible?

Or is there any point in doing any of that, when income is only associated with 3* (just) and 4*? Do we know how the quality of the 'tail' will feed into research income, or into league tables if it's prestige that counts? I'll need to think a bit more about this one. My instinct is that I like this idea, but I worry about unintended consequences ("Quick, Professor Fourstar, go and write something – anything – with Dr Career Young!").
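As a thought experiment, here's a toy sketch of the selection problem the minimum-of-one rule creates. The names and star scores are invented, and real REF scores aren't known in advance – but it shows how a greedy, total-score-maximising submission still leans heavily on Professor Fourstar once everyone's obligatory single output is in:

```python
from heapq import nlargest

# Invented output scores (0-4 'stars') for a tiny hypothetical UoA.
staff_outputs = {
    "Prof Fourstar":   [4, 4, 4, 3, 3, 3, 2],
    "Dr Career Young": [2, 1],
    "Dr Nomad":        [3, 3, 2, 2],
}

def select_outputs(staff_outputs, multiple=2, min_per_person=1, max_per_person=6):
    """Pick multiple*N outputs in total, between min_per_person and
    max_per_person each, greedily maximising total score."""
    total_slots = multiple * len(staff_outputs)
    # Everyone's single best output goes in first (the mooted minimum of one).
    chosen = {name: nlargest(min_per_person, outs)
              for name, outs in staff_outputs.items()}
    # Pool the remaining outputs and fill the rest of the slots by score.
    pool = sorted(((score, name)
                   for name, outs in staff_outputs.items()
                   for score in sorted(outs, reverse=True)[min_per_person:]),
                  reverse=True)
    for score, name in pool:
        if sum(len(v) for v in chosen.values()) >= total_slots:
            break
        if len(chosen[name]) < max_per_person:
            chosen[name].append(score)
    return chosen

print(select_outputs(staff_outputs))
# {'Prof Fourstar': [4, 4, 4], 'Dr Career Young': [2], 'Dr Nomad': [3]}
```

Under these made-up numbers the rule does exactly what the consultation worries about: Dr Career Young's 2* paper must go in, and the incentive is either to improve it (the Rawlsian reading) or to move its author off a research contract (the cynical one).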

Portability

On portability – whether a researcher’s publications move with them (as previously) or stay with the institution where they were produced (like impact) – the consultation first notes possible issues about what it doesn’t call a “transfer window” round about the REF census date. If you’re going to recruit someone new, the best time to get them is either at the start of a REF cycle or during the meaningless end-of-season games towards the end of the previous one. That way, you get them and their outputs for the whole season. True enough – but hard to see that this is worse than the current situation where someone can be poached in the 89th minute and bring all their outputs with them.

The consultation’s second concern is verification. If someone moves institution, how do we know which institution can claim what? As we found with open access, the point of acceptance isn’t always straightforward to determine, and that’s before we get into forms of output other than journal articles. I suppose my first thought is that point-of-submission might be the right point, as institutional affiliation would have to be provided, but then that’s self declared information.

The consultation document recognises the concern expressed about the disadvantage that portability may have for certain groups – early career researchers and (a group I hadn't considered) people moving into/out of industry. Two interesting options are proposed: firstly, that publications are portable for anyone on a fixed-term contract (though this may inadvertently include some Emeritus Profs); and secondly, that they are portable for anyone who wasn't returned to REF 2014.

One other non-Stern alternative is proposed – that proportionate publication sharing between old and new employer take place for researchers who move close to the end date. But this seems messy, especially as different institutions may want to claim different papers. For example, if Dr Nomad wrote a great publication with co-authors from Old and from New, neither would want it as much as a great publication that she wrote by herself or with co-authors from abroad. That's because both Old and New could return that publication without Dr Nomad anyway, via the co-authors who could claim it – a publication can only be returned once per UoA, but may be returned by several different UoAs.

Overall though – that probable non-starter aside – I'd say portability is happening, and it's just a case of how to protect career young researchers. And either non-return last time or a fixed-term contract conferring portability seem like good ideas to me.

Interestingly, there’s also a question about whether impact should become portable. It would seem a bit odd to me of impact and publications were to swap over in terms of portability rules, so I don’t see impact becoming portable.

Impact

I’m not going to say too much about impact here and now- this post is already too long, and I suspect someone else will say it better.

Miscellaneous 

Other than that… should ORCID be mandatory? Should Category C (staff not employed by the university, but who research in the UoA) be removed as an eligible category? Should there be a minimum fraction of FTE to be returnable (to prevent overseas superstars being returnable on slivers of contracts)? What exactly is a research assistant anyway? Should a reserve publication be allowed when publication of a returned article is expected horrifyingly close to the census date? Should quant data be used to support assessment in disciplines where it's deemed appropriate? Why do birds suddenly appear, every time you are near, and what metrics should be used for measuring such birds?

There’s a lot more to say about this, and I’ll be following discussions and debates on twitter with interest. If time allows I’ll return to this post or write some more, less knee-jerky comments over the next days and weeks.

The rise of the machines – automation and the future of research development

"I've seen research ideas you people wouldn't believe. Impact plans on fire off the shoulder of Orion. I watched JeS-beams glitter in the dark near the Tannhäuser ResearchGate. All those proposals will be lost in time, like tears...in...rain. Time to revise and resubmit."
“I’ve seen first drafts you people wouldn’t believe. Impact plans on fire off the shoulder of Orion. I watched JeS beams glitter in the dark near the Tannhäuser ResearchGate. All those research proposals will be lost in time, like tears…in…rain. Time to resubmit.”

In the wake of this week’s Association of Research Managers and Administrator‘s conference in Birmingham, Research Professional has published an interesting article by Richard Bond, head of research administration at the University of the West of England. The article – From ARMA to avatars: expansion today, automation tomorrow? – speculates about the future of the research management/development profession given the likely advances of automation and artificial intelligence. Each successive ARMA conference is hailed as the largest ever, and ARMA’s membership has grown rapidly over recent years, probably reflecting increasing numbers of research support roles, increased professionalism, an increased awareness of ARMA and the attractiveness of what it offers in terms of professional development. But might better, smarter computer systems reduce, and perhaps even eliminate the need for some research development roles?

In many ways, the future is already here. In my darker moments I've wondered whether some colleagues might be replicants or cylons. But many universities already have (or are in the process of getting) some form of cradle-to-grave research management information system which has the potential to automate many research support tasks, both pre and post award. Although I wasn't in the session on the future of JeS, the online grant submission system used by RCUK (now UKRI), tweets from the session indicate that JeS 2.0 is being seen as a "grant getting service" and a platform to do more than just process applications, which could well include distribution of funding opportunities. Who knows what else it might be able to do? Presumably it can link much better to costing tools and systems, allowing direct transfer of costings and other information to and from university systems.

A really good costing tool might be able to do a lot of things automatically. Staff costs are already relatively straightforward to calculate with the right tools – the complication largely comes from whether or not funders expect figures to include inflation and cost-of-living/salary increment pay rises. But greater uniformity across funders could help, and setting up templates for individual funders could be done, and in many places is already done. Non-pay costs are harder, but one could imagine a system that linked to travel and booking websites and calculated the average cost of travel from A to B. Standard costs could be available for computers and for consumables, again linking to suppliers' catalogues. This could in principle allow the applicant (rather than a research administrator) to do the budget for the grant application, though I wonder how much appetite there is among applicants for doing so. I also think there's a role for the research costing administrator in terms of helping applicants flush out all of the likely costs – not all of which will occur to the PI – as well as dealing with the exceptions that the system doesn't cover. But even if specialist human involvement is still required, giving people better tools to work smarter and more efficiently – especially if the system is able to populate the costings section of the application form directly without duplication – would reduce the number of humans required.
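To make the staff-cost point concrete, here's a minimal sketch of the arithmetic such a tool automates. All the figures (salary, on-cost rate, uplift) are invented; real costings draw on institutional payscales and funder-specific rules:

```python
# Sketch of the staff-cost arithmetic a costing tool automates.
# All figures are invented; real calculations use institutional
# payscales, on-costs (pension/NI) and each funder's inflation rules.

def staff_cost(base_salary, fte, years, oncost_rate=0.25,
               annual_uplift=0.03, include_uplift=True):
    """Total cost of a post over a project, compounding an annual
    uplift (increments plus cost of living) where the funder allows it."""
    total, salary = 0.0, base_salary
    for _ in range(years):
        total += salary * fte * (1 + oncost_rate)
        if include_uplift:
            salary *= 1 + annual_uplift
    return round(total, 2)

# The same half-time, three-year post costed under two funders' rules:
print(staff_cost(35000, fte=0.5, years=3))                       # 67613.44
print(staff_cost(35000, fte=0.5, years=3, include_uplift=False)) # 65625.0
```

The interesting work, as ever, is in the exceptions the defaults don't cover – which is rather the point about where the humans stay useful.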

While I don’t think we’re there yet, it’s not hard to imagine systems which could put the right funding opportunities in front of the right academics at the right time and in the right format. Research Professional has offered a customisable research funding alerts service for many years now, and there’s potential for research management systems to integrate this data, combine it with what’s known about individual researchers and research team’s interests, and put that information in front of them automatically.

I say we’re not there yet, because I don’t think the information is arriving in the right format – in a quick and simple summary that allows researchers to make very quick decisions about whether to read on, or move on to the next of the twelvety-hundred-and-six unread emails. I also wonder whether the means of targeting the right academics are sufficiently nuanced. A ‘keywords’ approach might help if we could combine research interest keyword sets used by funders, research intelligence systems, and academics. But we’d need a really sophisticated set of keywords, coving not just discipline and sub-discipline, but career stage, countries of interest, interdisciplinary grand challenges and problems etc. Another problem is that I don’t think call summaries are – in general – particularly well-written (though they are getting better) by funders, though we could perhaps imagine them being tailored for use in these kinds of systems in the future. A really good research intelligence system could also draw in data about previous bids to the scheme from the institution, data about success rates for previous calls, access to previously successful applications (though their use is not without its drawbacks).

But even with all this in place, I still think there’s a role for human research development staff in getting opportunities out there. If all we’re doing is forwarding Research Professional emails, then we could and should be replaced. But if we’re adding value through our own analysis of the opportunity, and customising the email for the intended audience, we might be allowed to live. A research intelligence system inevitably just churns out emails that might be well targeted or poorly targeted. A human with detailed knowledge of the research interests, plans, and ambitions of individual researchers or groups can not only target much better, but can make a much more detailed, personalised, and context sensitive analysis of the advantages and disadvantages of a possible application. I can get excited about a call and tell someone it’s ideal for them, and because of my existing relationship with them, that’ll carry weight … a computer can tell them that it’s got a 94.8% match.

It’s rather harder to see automation replacing training researchers in grant writing skills or undertaking lay review of draft grant applications, not least because often the trick with lay review is spotting what’s not there rather than what is. But I’d be intrigued to learn what linguistic analysis tools might be able to do in terms of assessing the required reading level, perhaps making stylistic observations or recommendations, and perhaps flagging up things like the regularity with which certain terms appear in the application relative to the call etc. All this would need interpreting, of course, and even then may not be any use. But it would be interesting to see how things develop.

Impact is perhaps another area where it's hard to see humans being replaced. Sophisticated models of impact development probably could and should be turned into tools to help academics identify the key stakeholders, come up with appropriate strategies, and identify potential intermediaries within their own institution. But I think human insight and creativity could still add substantial value here.

Post-award isn’t really my area these days, but I’d imagine that project setup could become much easier and involve fewer pieces of paper and documents flying around. Even better and more intuitive financial tools would help PIs manage their project, but there are still accounting rules and procedures to be interpreted, and again, I think many PIs would prefer someone else to deal with the details.

Overall it’s hard to disagree with Bond’s view that a reduction in overall headcount across research administration and management (along with many other areas of work) is likely, and it’s not hard to imagine that some less research intensive institutions might be happy that the service that automated systems could deliver is good enough for them. At more research intensive institutions, better tools and systems will increase efficiency and will enable human staff to work more effectively. I’d imagine that some of this extra capacity will be filled by people doing more, and some of it may lead to a reduction in headcount.

But overall, I’d say – and you can remind me of this when I’m out of a job and emailing you all begging for scraps of consultancy work, or mindlessly entering call details into a database – that I’m probably excited by the possibilities of automation and better and more powerful tools than I am worried about being replaced by them.

I for one welcome our new research development AI overlords.

MOOCing about: My experience of a massive open online course

I’ve just completed my first Massively Open Online Course (or MOOC) entitled ‘The mind is flat: the shocking shallowness of human psychology run via the Futurelearn platform.  It was run by Professor Nick Chater and PhD student Jess Whittlestone of Warwick Business School and this is the second iteration of the course, which I understand will be running again at some point. Although teaching and learning in general (and MOOCs in particular) are off topic for this blog, I thought it might be interesting to jot down a few thoughts about my very limited experience of being on the receiving end of a MOOCing.  There’s been a lot of discussion of MOOCs which I’ve been following in a kind of half-hearted way, but I’ve not seen much (if anything) written from the student perspective.

“Alright dudes… I’m the future of higher education, apparently. Could be worse… could be HAL 9000”

I was going to explain my motivations for signing up for the course to add a bit of context, but one of the key themes of the MOOC has been the shallowness and instability of human reasons and motivations.  We can’t just reach back into our minds, it seems, and retrieve our thinking and decision making processes from a previous point in time.  Rather, the mind is an improviser, and can cobble together – on demand – all kinds of retrospective justifications and explanations for our actions which fit the known facts including our previous decisions and the things we like to think motivate us.

So my post-hoc rationalisation of my decision to sign up is probably three-fold. Firstly, I think a desire for lifelong learning and in particular an interest in (popular) psychology are things I ascribe to myself.  Hence an undergraduate subsidiary module in psychology and having read Stuart Sutherland’s wonderful book ‘Irrationality‘.  A second plausible explanation is that I work with behavioural economists in my current role, and this MOOC would help me understand them and their work better.  A third possibility is that I wanted to find out what MOOCs were all about and what it was like to do one, not least because of their alleged disruptive potential for higher education.

So…. what does the course consist of?  Well, it’s a six week course requiring an estimated five hours of time per week.  Each week-long chunk has a broad overarching theme, and consists of a round-up of themes arising from questions from the previous week, and then a series of short videos (generally between 4 and 20 minutes) either in a lecture/talking head format, or in an interview format.  Interviewees have included other academics and industry figures.  There are a few very short written sections to read, a few experiments to do to demonstrate some of the theories, a talking point, and finally a multiple choice test.  Students are free to participate whenever they like, but there’s a definite steer towards trying to finish each week’s activities within that week, rather than falling behind or ploughing ahead. Each video or page provides the opportunity to add comments, and it’s possible for students to “like” each other’s comments and respond to them.  In particular there’s usually one ‘question of the week’ where comment is particularly encouraged.

The structure means that it’s very easy to fit alongside work and other commitments – so far I’ve found myself watching course videos during half time in Champions League matches (though the half time analysis could have told its own story about the shallowness of human psychology and the desire to create narratives), last thing at night in lieu of bedtime reading, and when killing time between finishing work and heading off to meet friends.  The fact that the videos are short means that it’s not a case of finding an hour or more at a time for uninterrupted study. Having said that, this is a course which assumes “no special knowledge or previous experience of studying”, and I can well imagine that other MOOCs require a much greater commitment in terms of time and attention.

I’ve really enjoyed the course, and I’ve found myself actively looking forward to the start of a new week, and to carving out a free half hour to make some progress into the new material.  As a commitment-light, convenient way of learning, it’s brilliant.  The fact that it’s free helps.  Whether I’d pay for it or not I’m not sure, not least because I’ve learnt that we’re terrible at working out absolute value, as our brains are programmed to compare.  Once a market develops and gives me some options to compare, I’d be able to think about it.  Once I had a few MOOCs under my belt, I’d certainly consider paying actual money for the right course on the right topic at the right level with the right structure. At the moment it’s possible to pay for exams (about £120, or £24 for a “statement of participation”) on some courses, but as they’re not credit bearing it’s hard to imagine there would be much uptake. What might be a better option to offer is a smaller see for a self-printable .pdf record of courses completed, especially once people start racking up course completions.

One drawback is the multiple choice method of examining/testing, which doesn't allow much sophistication or nuance in answers. A couple of the questions on the MOOC I completed were ambiguous or poorly phrased, and one in particular made very confusing use of "I" and "you" in a scenario question – I'd still argue (sour grapes alert) that the official "correct" answer was wrong. I can see that multiple choice is the only really viable way of having tests at the moment (though one podcast I was listening to the other day mooted the possibility of machine text analysis marking short essays, based on marks given to a sample), but I think a lot more work needs to go into developing best (and better) practice around question setting. It's difficult – as a research student I remember being asked to come up with some multiple choice questions about the philosophy of John Rawls for an undergraduate exam paper, and struggled with that. Though I did remove the one from the previous paper which asked how many principles of justice there were (answer: it depends how you count them).

But could it replace an undergraduate degree programme? Could I imagine doing a mega-MOOC as my de facto full time job, watching video lectures, reading course notes and core materials, taking multiple choice questions and (presumably) writing essays? I think probably not. I think the lack of human interaction would probably drive me mad – and I say this as a confirmed introvert. Granted, a degree level MOOC would probably have more opportunities for social interaction – Skype tutorials, better comments systems, more interaction with course tutors, local networks to meet fellow students who live nearby – but I think the feeling of disconnection, isolation, and alienation would just be too strong. Having said that, perhaps to digital natives this won't be the case, and perhaps compared (as our brains are good at comparing) to the full university experience a significantly lighter price tag might be attractive. And of course, for those in developing countries or unable or unwilling to relocate to a university campus (for whatever reason), it could be a serious alternative.

But I can certainly see a future that blends MOOC-style delivery with more traditional university approaches to teaching and learning. Why not restructure lectures into shorter chunks and make them available online, at the students' convenience? There are real opportunities to bring in extra content with expert guest speakers, especially industry figures, world leading academic experts, and particularly gifted and engaging communicators. It's not hard to imagine current student portals (Moodle, Blackboard, etc.) becoming more and more MOOC-like in terms of content and interactivity. In particular, I can imagine a future where MOOCs offer opportunities for extra credit, or for non-credit bearing courses for students to take alongside their main programme of study. These could be career-related courses, courses that complement their 'major', or entirely hobby or interest based.

One thought that struck me was whether it was FE rather than HE that might be threatened by MOOCs. Or at least the Adult Ed/evening classes aspect of FE. But I think even there the desire to – say – learn Spanish is only one motivation; another is often to meet new people and to learn together, and I don't think that's an itch that MOOCs are entirely ready to scratch. But I can definitely see a future for MOOCs as the standard method of continuing professional development in any number of professional fields, whether these are university-led or not. This has already started to happen, with a course called 'Discovering Business in Society' counting as an exemption towards one paper of an accounting qualification. I also understand that FutureLearn are interested in pilot schemes for the use of MOOCs by 16-19 year olds to support learning outcomes in schools.

It’s also a great opportunity for hobbyists and dabblers like me to try something new and pursue other intellectual interests.  I can certainly imagine a future in which huge numbers of people are undertaking a MOOC of one kind or another, with many going from MOOC to MOOC and building up quite a CV of virtual courses, whether for career reasons, personal interest, or a combination of both.  Should we see MOOCs as the next logical and interactive step from watching documentaries? Those who today watch Horizon and Timewatch and, well, most of BBC4, might in future carry that interest forward to MOOCs.

So perhaps rather than seeing MOOCs in terms of what they’re going to disrupt or displace or replace, we’re better off seeing them as something entirely new.

And I’m starting my next MOOC on Monday – Cooperation in the contemporary world: Unlocking International Politics led by Jamie Johnson of the University of Birmingham.  And there are several more that look tempting… How to read your boss from colleagues at the University of Nottingham, and England in the time of Richard III from – where else – the University of Leicester.

The consequences of Open Access: Part 1: Is anyone thinking about the “lay” reader?

The thorny issue of "open access" – which I take to mean the question of how to make the fruits of publicly-funded research freely and openly available to the public – is one that's way above my pay grade and therefore not one I'll be resolving in this blog post. Sorry about that. I've been following the debates with some interest, though not, I confess, an interest which I'd call "keen" or "close". No doubt some of the nuances and arguments have escaped me, and so I'll be going to an internal event in a week or so to catch up. I expect it'll be similar to this one helpfully written up by Phil Ward over at Fundermentals. Probably the best single overview of the history and arguments about open access is an article in this week's Times Higher by Paul Jump – well worth a read.

I’ve been wondering about some of the consequences of open access that I haven’t seen discussed anywhere yet.  This first post is about the needs of research users, and I’ll be following it up with a post about what some consequences of open access for academics that may require more thought.

I wonder if enough consideration is being given to the needs and interests of potential readers and users of all this research which is to be liberated from paywalls and other restrictions.  It seems to me that if Joe Public and Joanna Interested-Professional are going to be able to get their mitts on all this research, then this has very serious implications for academic research and academic writing.  I’d go as far as to say it’s potentially revolutionary, and may require radical and permanent changes to the culture and practice of academic writing for publication in a number of research fields.  I’m writing this to try to find out what thought has been given to this, amidst all the sound and fury about green and gold.

If I were reading an academic paper in a field that I was unfamiliar with, I think there are two things I’d struggle with.  One would be properly and fully understanding the article in itself, and the second would be understanding the article in the context of the broader literature and the state of knowledge in that area.  By way of example, a few years back I was looking into buying a rebounder – a kind of indoor mini-trampoline.  Many vendors made much of a study attributed to NASA which they interpreted as making dramatic claims about the efficacy of rebounder exercising compared to other kinds of exercise.  Being of a sceptical nature and armed with campus access to academic papers that weren’t open access, I went and had a look myself.  At the time, I concluded that these claims weren’t borne out by the study, which was really aimed at looking at helping astronauts recover from spending time in weightlessness.  I don’t have access to the article as I’m writing this, so I can’t re-check, but here’s the abstract.  I see that this paper is over 30 years old, and that eight people is a very small sample size…. so… perhaps superseded and not very highly powered.  I think the final line of the abstract may back up my recollection (“… a finding that might help identify acceleration parameters needed for the design of remedial procedures to avert deconditioning in persons exposed to weightlessness”).

For the avoidance of doubt, I'm not imputing dishonesty or nefarious intent to rebounder vendors and advocates – I may be wrong in my interpretation, and even if I'm not, I expect this is more likely to be a case of misunderstanding a fairly opaque paper rather than deliberate distortion. In any case, my own experience with rebounders has been very positive, though I still don't think they're a miracle or magic bullet exercise.

How would open access help me here?  Well, obviously it would give me access to the paper.  But it won’t help me understand it, won’t help me draw inferences from it, won’t help me place it in the context of the broader literature.  Those numbers in that abstract look great, but I don’t have the first clue what they mean.  Now granted, with full open access I can carry out my own literature search if I have the time, knowledge and inclination.  But it’ll still be difficult for me to compare and contrast and form my own conclusions.  And I imagine that it’ll be harder still for others without a university education and a degree of familiarity with academic papers, or who haven’t read Ben Goldacre’s excellent Bad Science.

I worry that open access will only make it easier for people with an agenda (to sell products, or to push a certain political agenda) to cherry-pick evidence and put together a new ill-deserved veneer of respectability by linking to academic papers and presenting (or feigning to present) a summary of their contents and arguments.  The intellectually dishonest are already doing this, and open access might make it easier.

I don’t present this as an argument against open access, and I don’t agree with a paternalist elitist view that holds that only those with sufficient letters after their name can be trusted to look at the precious research.  Open access will make it easier to debunk the charlatans and the quacks, and that’s a good thing.  But perhaps we need to think about how academics write papers from now on – they’re not writing just for each other and for their students, but for ordinary members of the public and/or research users of various kinds who might find (or be referred to) their paper online.  Do we need to start thinking about a “lay summary” for each paper to go alongside the abstract, setting out what the conclusions are in clear terms, what it means, and what it doesn’t mean?

What do we do with papers that present evidence for a conclusion that further research demonstrates to be false? In cases of research misconduct, these can be formally withdrawn, but we wouldn't want to do that in cases of papers that have just been superseded, not least because they might turn out to be correct after all, and are still a valid and important part of the debate. Of course, the current scientific consensus on any particular issue may not be clear, and it's less clear still how the state of the debate can be impartially communicated to research users.

I’d argue that we need to think about a format or template for an “information for non-academic readers” or something similar.  This would set out a lay summary of the research, its limitations, links to key previous studies, details of the publishing journal and evidence of its bona fides.  Of course, it’s possible that what would be more useful would be regularly written and re-written evidence briefings on particular topics designed for research users.  One source of lay reviews I particularly like is the NHS Behind the Headlines which comments on the accuracy (or otherwise) of media coverage of health research news.  It’s nicely written, easily accessible, and isn’t afraid to criticise or praise media coverage when warranted.  But even so, as the journals are the original source, some kind of standard boiler plate information section might be in order.

Has there been any discussion of these issues that I've missed? This all seems important to me, and I wouldn't want us to be in a position of finally agreeing what colour our open access ought to be, only to find that next to no thought has been given to potential readers. I've talked mainly about health/exercise examples in this entry, but all this could apply just as well to pretty much any other field of research where non-academics might take an interest.

ESRC success rates by discipline: what on earth is going on?

Update – read this post for the 2012/13 stats for success rates by discipline

The ESRC have recently published a set of ‘vital statistics‘ which are “a detailed breakdown of research funding for the 2011/12 financial year” (see page 22).  While differences in success rates between academic disciplines are nothing new, this year’s figures show some really quite dramatic disparities which – in my view at least – require an explanation and action.

The overall success rate was 14% (779 applications, 108 funded) for the last tranche of responsive mode Small Grants and responsive mode Standard Grants (now Research Grants). However, Business and Management researchers submitted 68 applications, of which 1 was funded. One. One single funded application. In the whole year. For the whole discipline. Education fared little better with 2 successes out of 62.

Just pause for a moment to let that sink in.  Business and Management.  1 of 68.  Education.  2 of 62.

Others did worse still. Nothing for Demographics (4 applications), Environmental Planning (8), Science and Technology Studies (4), Social Stats, Computing, Methods (11), and Social Work (10). However, with a 14% success rate working out at about 1 in 7, low volumes of applications may explain this. It's rather harder to explain a total of 3 funded applications from 130.
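It's worth a quick back-of-envelope check on how surprising that is. The calculation below assumes every application had the same, independent 14% chance of success – which of course they didn't, and that's the point of contention – but on that null hypothesis, 3 or fewer awards from 130 applications is vanishingly unlikely:

```python
# P(X <= 3) for X ~ Binomial(130, 0.14): could the B&M and Education
# results be bad luck if every application truly had a 14% chance?
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for a Binomial(n, p) random variable."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

print(binom_cdf(3, 130, 0.14))  # ~5.5e-06, i.e. roughly 1 in 200,000
```

So whatever is going on, it isn't just the luck of the draw – either those applications were systematically weaker, or something about the process disadvantages them.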

Next least successful were ‘no lead discipline’ (4 of 43) and Human Geography (3 from 32).  No other subjects had success rates in single figures.  At the top end were Socio-Legal Studies (a stonking 39%, 7 of 18), and Social Anthropology (28%, 5 from 18), with Linguistics; Economics; and Economic and Social History also having hit rates over 20%.  Special mention for Psychology (185 applications, 30 funded, 16% success rate) which scored the highest number of projects – almost as many as Sociology and Economics (the second and third most funded) combined.

Is this year unusual, or is there a worrying and peculiar trend developing?  Well, you can judge for yourself from this table on page 49 of last year’s annual report, which has success rates going back to the heady days of 06/07.  Three caveats, though, before you go haring off to see your own discipline’s stats.  One is that the reports refer to financial years, not academic years, which may (but probably doesn’t) make a difference.  The second is that the figures refer to Small and Standard Grants only (not Future Leaders/First Grants, Seminar Series, or specific targeted calls).  The third is that funded projects are categorised by lead discipline only, so the figures may not tell the full story as regards involvement in interdisciplinary research.

You can pick out your own highlights, but it looks to me as if this year is only a more extreme version of trends that have been going on for a while. Last year's Education success rate? 5%. The years before? 8% and 14%. Business and Management? A heady 11%, compared to 10% and 7% for the preceding years. And you've got to go all the way back to 09/10 to find the last time any projects were funded in Demography, Environmental Planning, or Social Work. And Psychology has always been the most funded, and has always got about twice as many projects as the second and third subjects, albeit from a proportionately large number of applications.

When I have more time I’ll try to pull all the figures together in a single spreadsheet, but at first glance many of the trends seem similar.

So what’s going on here?  Well, there are a number of possibilities.  One is that our Socio Legal Studies research in this country is tip top, and B&M research and Education research is comparatively very weak.  Certainly I’ve heard it said that B&M research tends to suffer from poor research methodologies.  Another possibility is that some academic disciplines are very collegiate and supportive in nature, and scratch each other’s backs when it comes to funding, while other disciplines are more back-stabby than back-scratchy.

But are any or all of these possibilities sufficient to explain the difference in funding rates?  I really don’t think so.  So what’s going on?  Unconscious bias?  Snobbery?  Institutional bias?  Politics?  Hidden agendas?  All of the above?  Anyone know?

More pertinently, what do we do about it?  Personally, I’d like to see the appropriate disciplinary bodies putting a bit of pressure on the ESRC for some answers, some assurances, and the production of some kind of plan for addressing the imbalance.  While no-one would expect to see equal success rates for every subject, this year’s figures – in my view – are very troubling.

And something needs to be done about it, whether that's a re-thinking of priorities, putting the knives away, addressing real disciplinary weaknesses where they exist, ring-fenced funding, or some combination of all of the above. Over to greater minds than mine…