Stammering and the Academy: What the rest of you need to know

This post is something of a departure from my usual focus, and normal service will be resumed next time. It's going to be about my work-related (and higher-education-related) experiences of coping with a stammer.  Is “coping” the right word?  I'm not entirely sure – working with, working around, working through… something along those lines, anyway.

You should continue to read this (even if it’s not the kind of topic you normally come here for) because you’re a decent human being who is interested in finding out what it’s like to walk in someone else’s shoes and how you can avoid making their lives harder than they need to be.  And in return, you can ask me anything you like about stammering in the comments below, or via email if you prefer.  I absolutely don’t set myself up as any kind of authority or spokesperson  – especially as my own stammer is atypical and relatively mild – but I have my own experiences, my own opinions, I’m open about my stammer, and willing to answer questions.

Some facts

Approximately 5% of under-fives and around 1% of adults have a stammer, and these numbers are pretty consistent across cultures, social classes, and over time.  Between 3.5 and 4 times as many men stammer as women. There's a weak genetic component.  There's no cure. While psychological factors may make a stammer worse or better at any given time, research points to a neurological cause. A stammer is not a sign of low intelligence, slow wits, childhood trauma, low resilience, or mental illness, and is not the same as – or caused by – shyness, nervousness or introversion, though it may itself cause or contribute to those things.  Stuttering and stammering are interchangeable terms.

Stammering ‘disfluency’ generally takes three forms: repetition (“r…r….repet-repetition”), stretching (“stretttccchhhhiiinnnng”), and blocking (“……. blocking”).  Repetition is probably what most people would recognise as stammering.  My own stammer is about 95% blocking, 5% stretching.

“Stammer? You don’t have a stammer”

My stammer is fairly mild and a lot of people don’t notice, or if they do, think it’s just something idiosyncratic about me rather than making the connection. Some people don’t believe me when I tell them. They don’t notice because I’m pretty good at passing for fluent through various tricks and techniques and distractions.  Where repetition-stammering is hard to hide, blocks can be worked around and disguised.  This can lead to a form of stammering called “covert” or (better) “interiorised”.  Like the swan gliding across the lake, there’s a huge amount of effort and kicking and splashing going into maintaining that illusion of effortless serenity. Everything about the way I speak is carefully crafted to hide my stammer. It’s hard to explain what that’s like, because I’ve never known anything different, but the best analogy I can come up with is a resource-hungry computer program, always running in the background, always taking up some measure of resource that could be better used for something else. For people who are not open about their stammer, being ‘caught out’ can be something they dread, and although some of that dread has dissipated for me, I still find myself working very hard to pass for fluent.

Particular Issues for Higher Education

1% of the adult population is really quite a lot, and I expect that’s a lot higher than most people think.  Probably 1% of your students, 1% of your colleagues.  The chances of you not knowing anyone who has a stammer are really quite small. Stammering usually counts as a disability under the Disability Discrimination Act – yet I’ve never heard it mentioned on any disability awareness training, whether in relation to supporting students or staff.

Meetings and Tutorials

Sorry to keep labouring this point, but if you have five tutorial groups with twenty students each, chances are that one of your students will have a stammer.  As a module tutor, or as chair of a meeting or a conference session, there are a few things you need to be aware of, and to consider.  For general information about how not to embarrass yourself when talking to someone who stammers, I’d suggest this [.pdf] as a brief guide, but if you’re chairing meetings or tutorials, you have to consider and control the behaviour of other people too.

At one point in my working life (details removed to protect the guilty), I had to attend a semi-regular meeting. I used to dread this meeting, because it had the perfect storm of a weak chair and a number of participants who sought to dominate. People would regularly talk over each other (increasingly loudly), not listen properly, and interrupt. When I was able to get a word in edgeways, I’d manage about half a sentence before someone would take a wild guess at what contribution I was going to make and then interrupt.  This was a problem for me for four reasons:

  • it’s damn rude. Were other people not taught this when growing up?
  • the view that had been ascribed to me or the impression I’d been permitted to give was often not what I’d intended to say.  What I hoped would be a nuanced and sophisticated and useful contribution would regularly be cut off before I’d made it past the obvious
  • I plan what I’m going to say quite carefully to maximise my chances of being able to say it, and I can’t recalibrate quickly to take account of interruptions or questions
  • I don’t have the vocal agility to verbally fend off people when they interrupt me

Back then I wasn’t open about my stammer, and virtually never mentioned it to anyone. These days I wouldn’t stand for being treated like that, but that’s because I have more life experience, more personal and professional confidence and higher expectations of the reactions of people around me.  You’ll note, though, that of the four reasons why you shouldn’t allow your meeting or tutorial to run as a Competitive Interruptathon, only two of them apply exclusively to people who stammer.  So there’s not really any special pleading here – just another reason why if you’re running a tutorial or chairing a meeting you actually need to chair it. It’s your job to stop people interrupting, to watch for signs that people are trying to contribute, to allow the less vocally assertive to have the chance to contribute, and for that contribution to be listened to. Please do your job.

It’s also a potential issue at conferences. As a postgraduate student I once went to a presentation from a visiting speaker and plucked up the courage to ask a question, which was answered.  But before I could follow up, the self-appointed Alpha Academic in the room (you know, the man – always a man – who always prefaces asking questions with a deep sigh) interrupted me. Not just spoke at the same time, actually deliberately interrupted and spoke right across me – effectively dismissing me and my ideas and my contribution as worthless, to ask yet another question about his hobby horse.  “Was there anything else you wanted to…” murmured the chair, afterwards, far too little and far too late.  I just shook my head and went home feeling furious and humiliated, and I can’t be sure that this incident didn’t play some role in me deciding that I didn’t want to pursue an academic career.

Another task as chair or tutor or even in everyday conversation is not to accidentally humiliate yourself or someone else through your own thoughtlessness.  If you ask me an open-ended question, I can navigate around my stammer and give you a fairly fluent answer.  If you ask me a closed question with only one possible answer, there’s a chance I might not be able to give it to you immediately. This is particularly common when I’m asked for my name, or for some other obvious snippet of information where I can’t cover my block by pretending to be thinking about it, making up my mind, or trying to remember. When I moved house a couple of years back, my ability to fluently state the address of potential abodes was a minor consideration in decision-making. Growing up, I used to dread maths (and to a lesser extent, sciences), where there was only one answer that could be given, and using faux-doubt to hide a slow answer wasn’t always an option.

Something that still happens to me occasionally is that someone asks me my name, I’m unable to answer within the apparently-obligatory two seconds, and the next question is along the lines of whether I’ve forgotten my name. I’ve already said that 1% of adults have a stammer, and although I don’t have the figure to hand for spontaneous amnesia, I’m prepared to stick my neck out and guess that it’s much lower.  So if it happens to you when you’re speaking to someone… you know… play the percentages, keep your mouth shut, and be a fraction more patient.

Laughing and asking me whether I’ve forgotten my name gives me a difficult choice to make. One option is just to laugh along with everyone else, use that to buy a bit of time, get an answer out somehow, try not to show how humiliated I’ve just been made to feel, and then say nothing else.  The other option is to call my interlocutor out on their behaviour, and explain that I have a stammer.  But by doing that, even calmly and politely, I embarrass the person who asked the question, risk putting everyone else present on edge, and reveal personal information which may not be appropriate. In any case, most people who do this aren’t malicious, and are just reacting (however inappropriately) to cover a moment that they perceive as awkward.

I used to think this was more of a dilemma than I do now – these days I think I have an educational duty, and I’m much less prepared to take the hit for someone else’s thoughtlessness. But if you factor in power differentials it’s much harder.  Did I call tutors out on this as a student?  Absolutely not.  Would I call out senior colleagues on it today?  I’d like to think so, but I’m honestly not sure.

Other issues

Presentations can be a particular issue for people who stammer. Public speaking is something that most people worry about, but for people who stammer it can be an even greater concern. I’m fairly fortunate in that I’ve had quite a lot of practice of public speaking/teaching/coaching, and that I’m usually able to process it as performance or acting rather than me speaking. I’ve taken to telling the audience at the start of presentations that I have a slight stammer, and I’ve found that that helps me. Even if I don’t subsequently stammer during the presentation, I know that if I do, it won’t come entirely as a surprise. In fact, the very first time I mentioned it was as part of a presentation for my current role, and the positive reaction told me a lot about the culture of the place.

It’s hard to say whether it’s best to expect someone who stammers to give a presentation – either as part of a module assessment or as part of their job – or make an exception for them. Certainly it’s hard to imagine being an academic without a requirement to present, and I’d imagine that presentation skills would be important for most graduate career paths. So I tend to think that unless someone has some combination of a very high level of anxiety and/or a severe stammer, it’s probably best to encourage them to present and make sure that there’s a supportive environment. Even little things like subtly asking if they have a preference about when to present may help – generally people don’t like being first up or having a nerve-shredding wait until the end. Of course, in a group exercise it may be that someone who stammers decides to contribute to their team in other ways, but I think it would be a shame not to get some experience.  But equally, getting another team member to present might well be a “reasonable adjustment” of the kind expected in the workplace by the DDA.

One final issue to mention is telephone calls. I hate using the phone, and will avoid it if I possibly can. Non-verbal communication and body language suddenly don’t work.  If I get a block, it just comes over as a silence on the other end, and that’s quite hard to cover for. I’m usually okay once I get going, and it’s easier if someone calls me than if I call them, as they already know who I am, and what the topic of the conversation will be. So you may well find that people who stammer prefer email or face-to-face conversations to phone calls.

Finally…

I’ve gradually “come out” about my stammer over the last three years or so, and although it’s not been my experience that being open about it has made it go away or reduced it, what it has done is enable me to worry about it less. Of course, many people who stammer don’t have the option of hiding it, but what I would say is that my experience of being much more open about it has been entirely positive. The vast majority of people are more than capable of focusing on what’s being said rather than how it’s being said, and these days (especially in university environments) I think there’s more acceptance and understanding of difference and disability than in the past.  Four out of five children who stammer will grow out of it, and I’d say that an even higher proportion of children who mock those who are different from them will also – largely – grow out of it. The workplace or the university campus is not the playground.

Right. That’s more than enough from me. Any comments or questions?


Posted in Frustrations, University culture | 3 Comments

The consequences of Open Access, part 2: Are researchers prepared for greater scrutiny?

In part 1 of this post, I raised questions about how academic writing might have to change in response to the open access agenda.  The spirit of open access surely requires not just the availability of academic papers, but the accessibility of those papers to research users and stakeholders.  I argued that lay summaries and context pieces will increasingly be required, and I was pleased to discover that at least some open access journals are already thinking about this.  In this second part, I want to raise questions about whether researchers and those who support them are ready for the potential extra degree of scrutiny and attention that open access may bring.

On February 23rd 2012, the Journal of Medical Ethics published a paper called After-birth abortion: why should the baby live? by Alberto Giubilini and Francesca Minerva.   The point of the paper was not to advocate “after-birth abortion” (i.e. infanticide), but to argue that many of the arguments that are said to justify abortion also turn out to justify infanticide.  This isn’t a new argument by any means, but presumably there was sufficient novelty in the construction of the argument to warrant publication.  To those familiar with the conventions of applied ethics – the intended readers of the article – it’s understood that it was playing devil’s advocate, seeing how far arguments can be stretched, taking things to their logical conclusion, seeing how far the thin end of the wedge will drive, what’s at the bottom of the slippery slope, just what kind of absurdum can be reductio-ed to.  While the paper isn’t satire in the same way as Jonathan Swift’s A Modest Proposal, no sensible reader would have concluded that the authors were calling for infanticide to be made legal, in spite of the title.

I understand that what happened next was that the existence of the article – for some reason – attracted attention in the right wing Christian blogosphere, prompting a rash of complaints, hostile commentary, fury, racist attacks, and death threats.  Journal editor Julian Savulescu wrote a blog post about the affair, below which are 624 comments.   It’s enlightening and depressing reading in equal measure.  Quick declaration of interest here – my academic background (such as it is) is in philosophy, and I used to work at Keele University’s Centre for Professional Ethics marketing their courses.  I know some of the people involved in the JME’s response, though not Savulescu or the authors of the paper.

There’s a lot that can (and probably should) be said about the deep misunderstanding that occurred between professional bioethicists and non-academics concerned about ethical issues who read the paper, or who heard about it.  Part of that misunderstanding is about what ethicists do – they explore arguments, analyse concepts, test theories, follow the arguments.  They don’t have any special access to moral truth, and while their private views are often much better thought out than most people’s, most see their role as helping to understand arguments, not pushing any particular position.  Though some of them do that too, especially if it gets them on Newsnight.  I’m not really well informed enough to comment too much on this, but it seems to me that the ethicists haven’t done a great job of explaining what they do to those more moderate and sensible critics.  Those who post death threats and racist abuse are probably past reasoned argument and probably love having something to rail against because it justifies their peculiar world view, but for everyone else, I think it ought to be possible to explain.  Perhaps the notion of a lay summary that I mentioned last time might be helpful here.

Part of the reason for the fuss might have been that the article wasn’t available via open access, so some critics may not have had the opportunity to read the article and make up their own mind.  This might be thought of as a major argument in favour of open access – and of course, it is – the reasonable and sensible would have at least skim-read the article, and it’s easier to marshal a response when what’s being complained about is out there for reference.

However….. the unfortunate truth is that there are elements out there who are looking for the next scandal, for the next chance to whip up outrage, for the next witch hunt.  And I’m not just talking about the blogosphere, I’m talking about elements of the mainstream media, who (regardless of our personal politics) have little respect or regard for notions of truth, integrity and fairness.  If they get their paper sales, web  hits, outraged comments, and resulting manufactured “scandal”, then they’re happy.  Think I’m exaggerating?  Ask Hilary Mantel, who was on the receiving end of an entirely manufactured fuss with comments she made in a long and thoughtful lecture being taken deliberately and dishonestly out of context.

While open access will make things easier for high quality journalism and for the open-minded citizen and/or professional, it’ll also make it easier for the scandal-mongers (in the mainstream media and in the blogosphere) to identify the next victim to be thrown to the ravenous outrage-hungry wolves that make up their particular constituency.  It’s already risky to be known to be researching and publishing in certain areas: anything involving animal research, climate change, crop science, evolutionary theory, Münchhausen’s by Proxy, vaccination, or (oddly) chronic fatigue syndrome/ME appears to have a hostile activist community ready to pounce on any research that comes back with the “wrong” answer.

I don’t want to go too far in presenting the world outside the doors of the academy as being a swamp of unreason and prejudice.  But the fact is that alongside the majority of the general public (and bloggers and journalists) who are both rational and reasonable, there is an element that would be happy to twist (or invent) things to suit their own agenda, especially if that agenda involves whipping up manufactured outrage to enable their constituency to confirm their existing prejudices. Never mind the facts, just get angry!

Doubtless we all know academics who would probably relish the extra attention and are already comfortable with the public spotlight.  But I’m sure we also know academics who do not seek the limelight, who don’t trust the media, and who would struggle to cope with even five minutes of (in)fame(y).  One day you’re a humble bioethicist, presumably little known outside your professional circles, and the next, hundreds of people are wishing you dead and calling you every name under the sun.  While Richard Dawkins seems to revel in his (sweary) hate mail, I think a lot of people would find it very distressing to receive emails hoping for their painful death.  I know it would upset me a lot, so please don’t send me any, okay?  And be nice in the comments…..

Of course, even if things never get that far or go that badly, with open access there’s always a greater chance of hostile comment or criticism from the more mainstream and reasonable media, who have a much bigger platform from which to speak than an academic journal.  This criticism need not be malicious, could be legitimate opinion, could be based on a misunderstanding.  Open access opens up the academy to greater scrutiny and greater criticism.

As for what we do about this….. it’s hard to say.  I certainly don’t say that we retreat behind the safety of our paywalls and sally forth with our research only when guarded by a phalanx of heavy infantry to protect us from the swinish multitude besieging our ivory tower.  But I think that there are things that we can do in order to be better prepared.  The use of lay summaries, and greater consideration of the lay reader when writing academic papers will help guard against misunderstandings.

University external relations departments need to be ready to support and defend academic colleagues, and perhaps need to think about planning for these kind of problems, if they don’t do so already.

Posted in Frustrations, Open Access, Research Costs, Research Impact, Social Media, University culture | 6 Comments

The consequences of Open Access: Part 1: Is anyone thinking about the “lay” reader?

The thorny issue of “open access” – which I take to mean the question of how to make the fruits of publicly-funded research freely and openly available to the public – is one that’s way above my pay grade and therefore not one I’ll be resolving in this blog post.  Sorry about that.  I’ve been following the debates with some interest, though not, I confess, an interest which I’d call “keen” or “close”.  No doubt some of the nuances and arguments have escaped me, and so I’ll be going to an internal event in a week or so to catch up.  I expect it’ll be similar to this one helpfully written up by Phil Ward over at Fundermentals.  Probably the best single overview of the history and arguments about open access is an article in this week’s Times Higher by Paul Jump – well worth a read.

I’ve been wondering about some of the consequences of open access that I haven’t seen discussed anywhere yet.  This first post is about the needs of research users, and I’ll be following it up with a post about some consequences of open access for academics that may require more thought.

I wonder if enough consideration is being given to the needs and interests of potential readers and users of all this research which is to be liberated from paywalls and other restrictions.  It seems to me that if Joe Public and Joanna Interested-Professional are going to be able to get their mitts on all this research, then this has very serious implications for academic research and academic writing.  I’d go as far as to say it’s potentially revolutionary, and may require radical and permanent changes to the culture and practice of academic writing for publication in a number of research fields.  I’m writing this to try to find out what thought has been given to this, amidst all the sound and fury about green and gold.

If I were reading an academic paper in a field that I was unfamiliar with, I think there are two things I’d struggle with.  One would be properly and fully understanding the article in itself, and the second would be understanding the article in the context of the broader literature and the state of knowledge in that area.  By way of example, a few years back I was looking into buying a rebounder – a kind of indoor mini-trampoline.  Many vendors made much of a study attributed to NASA which they interpreted as making dramatic claims about the efficacy of rebounder exercising compared to other kinds of exercise.  Being of a sceptical nature and armed with campus access to academic papers that weren’t open access, I went and had a look myself.  At the time, I concluded that these claims weren’t borne out by the study, which was really aimed at looking at helping astronauts recover from spending time in weightlessness.  I don’t have access to the article as I’m writing this, so I can’t re-check, but here’s the abstract.  I see that this paper is over 30 years old, and that eight people is a very small sample size…. so… perhaps superseded and not very highly powered.  I think the final line of the abstract may back up my recollection (“… a finding that might help identify acceleration parameters needed for the design of remedial procedures to avert deconditioning in persons exposed to weightlessness”).

For the avoidance of doubt, I impute no dishonesty or nefarious intent to rebounder vendors and advocates – I may be wrong in my interpretation, and even if I’m not, I expect this is more likely to be a case of misunderstanding a fairly opaque paper than of deliberate distortion.   In any case, my own experience with rebounders has been very positive, though I still don’t think they’re a miracle or magic bullet exercise.

How would open access help me here?  Well, obviously it would give me access to the paper.  But it won’t help me understand it, won’t help me draw inferences from it, won’t help me place it in the context of the broader literature.  Those numbers in that abstract look great, but I don’t have the first clue what they mean.  Now granted, with full open access I can carry out my own literature search if I have the time, knowledge and inclination.  But it’ll still be difficult for me to compare and contrast and form my own conclusions.  And I imagine that it’ll be harder still for others without a university education and a degree of familiarity with academic papers, or who haven’t read Ben Goldacre’s excellent Bad Science.

I worry that open access will only make it easier for people with an agenda (to sell products, or to push a certain political position) to cherry-pick evidence and put together a new ill-deserved veneer of respectability by linking to academic papers and presenting (or feigning to present) a summary of their contents and arguments.  The intellectually dishonest are already doing this, and open access might make it easier.

I don’t present this as an argument against open access, and I don’t agree with a paternalist elitist view that holds that only those with sufficient letters after their name can be trusted to look at the precious research.  Open access will make it easier to debunk the charlatans and the quacks, and that’s a good thing.  But perhaps we need to think about how academics write papers from now on – they’re not writing just for each other and for their students, but for ordinary members of the public and/or research users of various kinds who might find (or be referred to) their paper online.  Do we need to start thinking about a “lay summary” for each paper to go alongside the abstract, setting out what the conclusions are in clear terms, what it means, and what it doesn’t mean?

What do we do with papers that present evidence for a conclusion that further research demonstrates to be false?  In cases of research misconduct, these can be formally withdrawn, but we wouldn’t want to do that in cases of papers that have just been superseded, not least because they might turn out to be correct after all, and are still a valid and important part of the debate.  Of course, the current scientific consensus on any particular issue may not be clear, and it’s less clear still how the state of the debate can be impartially communicated to research users.

I’d argue that we need to think about a format or template for an “information for non-academic readers” section or something similar.  This would set out a lay summary of the research, its limitations, links to key previous studies, details of the publishing journal and evidence of its bona fides.  Of course, it’s possible that what would be more useful would be regularly written and re-written evidence briefings on particular topics designed for research users.  One source of lay reviews I particularly like is the NHS Behind the Headlines service, which comments on the accuracy (or otherwise) of media coverage of health research news.  It’s nicely written, easily accessible, and isn’t afraid to criticise or praise media coverage when warranted.  But even so, as the journals are the original source, some kind of standard boilerplate information section might be in order.

Has there been any discussion of these issues that I’ve missed?  This all seems important to me, and I wouldn’t want us to be in a position of finally agreeing what colour our open access ought to be, only to find that next to no thought has been given to potential readers.  I’ve talked mainly about health/exercise examples in this entry, but all this could apply just as well to pretty much any other field of research where non-academics might take an interest.

Posted in Open Access, Post-Award, Public Sector, Research Costs, Research Impact, University culture | 9 Comments

Best wishes for 2013, via the medium of my favourite university-related youtube clips of 2012….

Yes, I know I used the same picture last year. You can write to the usual address for your money back….

Hello everyone, and happy new year’s eve.  Or probably more likely by the time you’re reading this, happy first day back at work of 2013 and a prosperous new email backlog from people who had less time off over Christmas than you, and are anxious to demonstrate their productivity.  My last new year’s message was a bit of a whingeathon, so I’m going to be more positive this season and share some youtubes that I’ve enjoyed over the last year.  I know you’ve got a lot to do today, but why not leave this page open and watch the clips over lunch?

1. John Cleese, Jonathan Miller – Words… and things

This is a sketch from 1977 starring John Cleese and Jonathan Miller, which I think I’ve tweeted before with the title “Philosophers preparing their REF Impact statement”.  And while there’s a bit of that, what I like most about this is the superbly well observed and subtly exaggerated academic mannerisms.  Are those mannerisms a peculiar philosophy affectation, or are they more widespread?

2.  Armstrong and Miller Physics Special

In which Ben Miller (who I think has a PhD in physics) demonstrates how not to do public engagement/media work.  Watching this, it’s hard not to appreciate the effort that does go into communicating very complex science to the general public, particularly the efforts to explain the search for the Higgs via the medium of rap, when I suspect that the reality is pretty much as Miller’s character says.  Special hat tip on the science public engagement front to m’colleagues from the Periodic Videos team at the University of Nottingham’s School of Chemistry, though apparently they prefer Dubstep (whatever that is) to rap music.

3. A Very Peculiar Practice

I finally got round to watching this late 1980s TV series about a medical practice at a university.  It’s both very current (debates about research v. teaching; working with industry; student finances; university politics; the role of the university; the place of the arts/humanities) and very dated (haircuts; weird theme music and opening credits; accents – some weird London accents that have either died out or never existed at all).  On the down side, it does require the viewer to accept the premise that a university medical practice is a department of the university (was this ever the case anywhere?), and the overall tone and level of (sur)realism uneasily shifts between sitcom and comedy-drama.  On the up side, it’s an interesting view of 1980s campuses (Birmingham and Keele – my former stomping ground) and has a superb cast – Peter Davison, Barbara Flynn, John Bird, plus small early roles for Hugh Grant and Kathy Burke.  It’s worth a look – I’ve embedded the trailer for the DVD complete box set, though a fair bit of it is also on youtube if you want more of a taster before investing.

4. “Don’t wanna work in admin”

I’ve been a fan of Nick Helm’s brand of on-the-edge-of-a-breakdown stand-up and musical comedy since seeing him in Nottingham a few years back – hilarious and terrifying at the same time.  What I remember most about that performance was a song that will resonate with anyone who has or has had a basic admin job.  It’s very sweary and therefore not work safe, so I’m only going to link it rather than embed it.

Enjoy.  But probably not in the office.

Posted in Research Impact, Social Media, University culture | Comments Off on Best wishes for 2013, via the medium of my favourite university-related youtube clips of 2012….

ESRC “demand management” measures working….. and why rises and falls in institutions’ levels of research funding are not news

There was an interesting snippet of information in an article in this week’s Times Higher about the latest research council success rates.

 [A] spokeswoman for the ESRC said that since the research council had begun requiring institutions from June 2011 to internally sift applications before submitting them, it had recorded an overall success rate of 24 per cent, rising to 33 per cent for its most recent round of responsive mode grants.  She said that application volumes had also dropped by 37 per cent, “which is an encouraging start towards our demand management target of a 50 per cent reduction” by the end of 2014-15.

Back in October last year I noticed what I thought was a change in tone from the ESRC which gave the impression that they were more confident that institutions had taken note of the shot across the bows of the “demand management” measures consultation exercise(s), and that perhaps asking for greater restraint in putting forward applications would be sufficient.  I hope it is, because the current formal demand management proposals – which will be implemented if required – unfairly and unreasonably include co-applicants in any sanction.

I’ve written before (and others have added very interesting comments) about how I think we arrived at the situation where social science research units were flinging in as many applications as possible in the hope that some of them would stick.  And I hope the recent improvements in success rates to around 1-in-3, 1-in-4 don’t serve to re-encourage this kind of behaviour.  We need long-term, sustainable, careful restraint in terms of what applications institutions submit to the ESRC (and other major funders, for that matter) and the state in which they’re submitted.

Everyone will want to improve the quality of applications, and internal mentoring, peer review, and the kind of lay review that I do will assist with that, but we also need to make sure that the underlying research idea is what I call ‘ESRC-able’.  At Nottingham University Business School, I secured agreement a while ago to introduce a ‘proof of concept’ review phase for ESRC applications, where we review a two page outline first, before deciding whether to give the green light for the development of a full application.  I think this allows time for changes to be made at the earliest stage, and makes it much easier for us to say that the idea isn’t right and shouldn’t be developed than if a full application were in front of us.

And what isn’t ‘ESRC-able’?  I think a look at the assessment schema gives some useful clues – if you can’t honestly say that your application would fit in the top two categories on the final page, you probably shouldn’t bother.  ‘Dull but worthy’ stuff won’t get funded, and I’ve seen the phrase “incremental progress” used in referees’ comments to damn with faint praise.  There’s now a whole category of research that is of good quality and would doubtless score respectably in any REF exercise, but which simply won’t be competitive with the ESRC.  This, of course, raises the question about how non-groundbreaking stuff gets funded – the stuff that’s more than a series of footnotes to Plato, but which builds on and advances the findings of ground-breaking research by others.  And to that I have no answer – we have a system which craves the theoretically and methodologically innovative, but after a paradigm has been shifted, there’s no money available to explore the consequences.

*     *     *     *     *

Also in the Times Higher this week is the kind of story that appears every year – some universities have done better this year at getting research funding/with their success rates than in previous years, and some have done worse.  Some of those who have done better and worse are the traditional big players, and some are in the chasing pack.  Those who have done well credit their brilliant internal systems and those who have done badly will contest the figures or point to extenuating circumstances, such as the ending of large grants.

While one always wants to see one’s own institution doing well and doing better, and everyone always enjoys a good bit of schadenfreude at the expense of their rivals – sorry, benchmark institutions – and any apparent difficulties that the big beasts find themselves in, are any of these short term variations of actual, real, statistical significance?  Apparently big gains can be down to a combination of a few big wins, grants transferring in with new staff, and just… well… the kind of natural variation you’d expect to see.  Big losses could be big grants ending, staff moving on, and – again – natural variance.  Yes, you could ascribe your big gains to your shiny new review processes, but would you also conclude that there’s a problem with those same processes and people the year after, when performance is apparently less good?

Why these short term (and mostly meaningless) variations are more newsworthy than the radical variation in ESRC success rates for different social science disciplines, I have no idea….

Posted in ESRC, Frustrations, Funding, Research Costs, University culture | Comments Off on ESRC “demand management” measures working….. and why rises and falls in institutions’ levels of research funding are not news

ESRC success rates by discipline: what on earth is going on?

Update – read this post for the 2012/13 stats for success rates by discipline

The ESRC have recently published a set of ‘vital statistics’ which are “a detailed breakdown of research funding for the 2011/12 financial year” (see page 22).  While differences in success rates between academic disciplines are nothing new, this year’s figures show some really quite dramatic disparities which – in my view at least – require an explanation and action.

The overall success rate was 14% (779 applications, 108 funded) for the last tranche of responsive mode Small Grants and responsive mode Standard Grants (now Research Grants).  However, Business and Management researchers submitted 68 applications, of which 1 was funded.  One.  One single funded application.  In the whole year.  For the whole discipline.  Education fared little better with 2 successes out of 62.

Just pause for a moment to let that sink in.  Business and Management.  1 of 68.  Education.  2 of 62.

Others did worse still.  Nothing for Demographics (4 applications), Environmental Planning (8), Science and Technology Studies (4), Social Stats, Computing, Methods (11), and Social Work (10).  However, with a 14% success rate working out at about 1 in 7, low volumes of applications may explain this.  It’s rather harder to explain a total of 3 applications funded from 130.

Next least successful were ‘no lead discipline’ (4 of 43) and Human Geography (3 from 32).  No other subjects had success rates in single figures.  At the top end were Socio-Legal Studies (a stonking 39%, 7 of 18), and Social Anthropology (28%, 5 from 18), with Linguistics; Economics; and Economic and Social History also having hit rates over 20%.  Special mention for Psychology (185 applications, 30 funded, 16% success rate) which scored the highest number of projects – almost as many as Sociology and Economics (the second and third most funded) combined.

Is this year unusual, or is there a worrying and peculiar trend developing?  Well, you can judge for yourself from this table on page 49 of last year’s annual report, which has success rates going back to the heady days of 06/07.  Three caveats, though, before you go haring off to see your own discipline’s stats.  One is that the reports refer to financial years, not academic years, which may (but probably doesn’t) make a difference.  The second is that the figures refer to Small and Standard Grants only (not Future Leaders/First Grants, Seminar Series, or specific targeted calls).  The third is that funded projects are categorised by lead discipline only, so the figures may not tell the full story as regards involvement in interdisciplinary research.

You can pick out your own highlights, but it looks to me as if this year is only a more extreme version of trends that have been going on for a while.  Last year’s Education success rate?  5%.  The years before?  8% and 14%.  Business and Management?  A heady 11%, compared to 10% and 7% for the preceding years.  And you’ve got to go all the way back to 09/10 to find the last time any projects were funded in Demography, Environmental Planning, or Social Work.  And Psychology has always been the most funded, and has always got about twice as many projects as the second and third subjects, albeit from a proportionately large number of applications.

When I have more time I’ll try to pull all the figures together in a single spreadsheet, but at first glance many of the trends seem similar.

So what’s going on here?  Well, there are a number of possibilities.  One is that our Socio-Legal Studies research in this country is tip top, while B&M and Education research are comparatively very weak.  Certainly I’ve heard it said that B&M research tends to suffer from poor research methodologies.  Another possibility is that some academic disciplines are very collegiate and supportive in nature, and scratch each other’s backs when it comes to funding, while other disciplines are more back-stabby than back-scratchy.

But are any or all of these possibilities sufficient to explain the difference in funding rates?  I really don’t think so.  So what’s going on?  Unconscious bias?  Snobbery?  Institutional bias?  Politics?  Hidden agendas?  All of the above?  Anyone know?

More pertinently, what do we do about it?  Personally, I’d like to see the appropriate disciplinary bodies putting a bit of pressure on the ESRC for some answers, some assurances, and the production of some kind of plan for addressing the imbalance.  While no-one would expect to see equal success rates for every subject, this year’s figures – in my view – are very troubling.

And something needs to be done about it, whether that’s a re-thinking of priorities, putting the knives away, addressing real disciplinary weaknesses where they exist, ring-fenced funding, or some combination of all of the above.  Over to greater minds than mine…..

Posted in ESRC, Frustrations, Funding, Funding Policy, Public Sector, Research Costs, Research Impact, University culture | 8 Comments

Book review: The Research Funding Toolkit (Part 1)

For the purposes of this review, I’ve set aside my aversion to the use of terms like ‘toolkit’ and ‘workshop’.

The existence of a market for The Research Funding Toolkit, by Jacqueline Aldridge and Andrew Derrington, is yet more evidence of how difficult it is to get research funding in the current climate.  Although the primary target audience is an academic one, research managers and those in similar roles “will also find most of this book useful”, and I’d certainly have no hesitation in recommending this book to researchers who want to improve their chances of getting funding, and to both new and experienced research managers.  In particular, academics who don’t have regular access to research managers (or similar) and to experienced grant getters and givers at their own institution should consider this book essential reading if they entertain serious ambitions about obtaining research funding.  While no amount of skill in grant writing will get a poor idea funded, a lack of skill in grant writing can certainly prevent an outstanding idea from getting the hearing it deserves if the application lacks clarity, fails to highlight the key issues, or fails to make a powerful case for its importance.

The authors have sought to distil a substantial amount of advice and experience down into one short book which covers finding appropriate funding sources, planning an application, understanding application forms, and assembling budgets.  But it goes beyond mere administrative advice, and also addresses writing style, getting useful (rather than merely polite) feedback on draft versions, the internal politics of grant getting, the challenges of collaborative projects, and the key questions that need to be addressed in every application.  Crucially, it demystifies what really goes on at grant decision making meetings – something that far too many applicants know far too little about.  Applicants would love to think that the scholarly and eminent panel spend hours subjecting every facet of their magnum opus to detailed, rigorous, and forensic analysis.  The reality is – unavoidably given application numbers  – rather different.

Aldridge and Derrington are well-situated to write a book about obtaining research funding.  Aldridge is Research Manager at Kent Business School and has over eight years’ experience of research management and administration.  Derrington is Pro-Vice Chancellor for Humanities and Social Sciences at the University of Liverpool, and has served on grant committees for several UK research councils and for the Wellcome Trust.  His research has been “continuously funded” by various schemes and funders for 30 years.  I think a book like this could only have been written in close collaboration between an academic with grant getting and giving experience, and a research manager with experience of supporting applications over a number of years.

The book practises what it preaches by applying the principles of grant writing that it advocates to its own style and layout.  It is organised into 13 distinct chapters, each containing a summary and introduction, and a conclusion to draw together the key points and lessons.  It includes 19 different practical tools, as well as examples from successful grant applications.  One of the appendices offers advice on running institutional events on grant getting.  As it advises applicants to do, it breaks the text down into small chunks, makes good use of headings and subheadings, and uses clear, straightforward language.  It’s certainly an easy, straightforward read which won’t take too long to get through cover-to-cover, and the structure allows the reader to dip back in and re-read appropriate sections later.  Probably the most impressive thing for me about the style is how lightly it wears its expertise – genuinely useful advice without falling into the traps of condescension, smugness, or preaching.  Although the prose sacrifices sparkle for clarity and brevity, the book coins a number of useful phrases and distinctions that will be of value, and I’ll certainly be adopting one or two of them.

Writing a book of this nature raises a number of challenges about specificity and relevance.  Different subjects have different funders with different priorities and conventions, and arrangements vary from country to country, and – of course – over time.  The authors have deliberately sought to use a wide range of example funders, including funders from Australia, America, and from Europe – though as you might expect the majority of exemplar funders are UK-based.  Several different Research Councils are used as case studies, and I would imagine that the advice given is generalisable enough to be of real value across academic disciplines and countries.  It’s harder to tell how this book will date (references to web resources all date from Oct 2011), but much of the advice flows directly from (a) the scarcity of resources, and (b) the way that grant panels are organised and work, and it’s hard to imagine either changing substantially.  The authors are careful not to make generalisations or sweeping assertions based on any particular funder or scheme, so I would be broadly optimistic about the book’s continuing relevance and utility in years to come.  There’s also a website to accompany the book where new materials and updates may be added in the future.  There are already a number of blog posts subsequent to the publication date of the book.

Worries about appearing dated may account for the book having comparatively little to say about the impact agenda and how to go about writing an impact statement.  Only two pages address this directly, and much of that space is taken up with examples.  Although not all UK funders ask for impact statements yet, the research councils have been asking for them for some time, and indications are that other countries are more likely to follow suit than not.  However, I think the authors were right not to devote a substantial section to this, as understandings of and approaches to impact are still comparatively in their infancy, and such a section would probably date quickly.

I’ve attempted a fairly general review in this post, and I’ll save most of my personal reaction for Part 2 of this post.  As well as highlighting a few areas that I found particularly useful, I’m going to raise a few issues that arise from the book as a bit of a jumping off point for debate and discussion.  Attempting to do that in this first post will make it too long, and unbalance the review by placing excessive focus on areas where I’d tentatively disagree, rather than the overwhelming majority of the points and arguments made in the book which I’d thoroughly agree with and endorse absolutely.

‘The Research Funding Toolkit’ (£21.99 for the paperback version) is available from Sage.  The Sage website also mentions an ebook version, but the link doesn’t appear to be working at the time of writing.

Declarations of interest:
Publishers Sage were kind enough to provide me with a free review copy of this book.  I have had some very brief Twitter interactions with Derrington and I met Aldridge briefly at the ARMA conference earlier this year.

Posted in Application advice, Funding | Comments Off on Book review: The Research Funding Toolkit (Part 1)

News from the ESRC: International co-investigators and the Future Leaders Scheme

"They don't come over here, they take our co-investigator jobs..."

I’m still behind on my blogging – I owe the internet the second part of the impact series, and a book review I really must get round to writing.  But I picked up an interesting nugget of information regarding the ESRC and international co-investigators that’s worthy of sharing and commenting upon.

ESRC communications send round an occasional email entitled ‘All the latest from the ESRC’, which is well worth subscribing to and reading very carefully, as quite big announcements and changes are often smuggled out in the small print.  In the latest version, for example, the headline news is the Annual Report (2011-12), while the announcement of the ESRC Future Leaders call for 2012 is only the fifth item down a list of funding opportunities.  To be fair, it was also announced on Twitter and perhaps elsewhere too, and perhaps the email has a wider audience than people like me.  But even so, it’s all a bit low key.

I’ve not got much to add to what I said last year about the Future Leaders Scheme other than to note with interest the lack of an outline stage this year, and the decision to ring fence some of the funding for very early career researchers – current doctoral students and those who have just passed their PhD.  Perhaps the ESRC are now more confident in institutions’ ability to regulate their own submission behaviour, and I can see this scheme being a real test of this.  I know at the University of Nottingham we’re taking all this very seriously indeed, and grant writing is now neither a sprint nor a marathon but more like a steeplechase, and my impression from the ARMA conference is that we’re far from alone in this.  Balancing ‘demand management’ with a desire to encourage applications is a topic for another blog post.  As is the effect of all these calls with early Autumn deadlines – I’d argue it’s much harder to demand manage over the summer months when applicants, reviewers, and research managers are likely to be away on holiday and/or researching.

Something else mentioned in the ESRC email is a light touch review of the ESRC’s international co-investigator policy.  One of the findings was that

“…grant applications with international co-investigators are nearly twice as likely to be successful in responsive mode competitions as those without, strengthening the argument that international cooperation delivers better research.”

This is very interesting indeed.  My first reaction is to wonder whether all of that greater success can be explained by higher quality, or whether the extra value for money offered has made a difference.  Outside of the various international co-operation/bilateral schemes, the ESRC would generally expect only to pay directly incurred research costs for ICo-Is, such as travel, subsistence, transcription, and research assistance.  It won’t normally pay for investigator time and will never pay overheads, which represents a substantial saving compared with naming a UK-based Co-I.

While the added value for money argument will generally go in favour of the application, there are circumstances where it might make it technically ineligible.  When the ESRC abolished the small grants scheme and introduced the floor of £200k as the minimum to be applied for through the research grants scheme, the figure of £200k was considered to represent the minimum scale/scope/ambition that they were prepared to entertain.  But a project with a UK Co-I may sneak in just over £200k and be eligible, yet an identical project with an ICo-I would not be eligible, as it would not have salary costs or overheads to bump up the cost.  I did raise this with the ESRC a while back when I was supporting an application that would have been ineligible under the new rules, but we managed to submit it before the final deadline for Small Grants.  The issue did not arise for us then, but I’m sure it has arisen – or will – for others.

The ESRC has clarified the circumstances under which they will pay overseas co-investigator salary costs:

“….only in circumstances where payment of salaries is absolutely required for the research project to be conducted. For example, where the policy of the International Co-Investigator’s home institution requires researchers to obtain funding for their salaries for time spent on externally-funded research projects.

In instances where the research funding structure of the collaborating country is such that national research funding organisations equivalent to the ESRC do not normally provide salary costs, these costs will not be considered. Alternative arrangements to secure researcher time, such as teaching replacement costs, will be considered where these are required by the co-investigator’s home institution.”

This all seems fairly sensible, and would allow the participation of researchers involved in Institutes where they’re expected to bring in their own salary, and those where there isn’t a substantial research time allocation that could be straightforwardly used for the project.

While it would clearly be inadvisable to add on an ICo-I in the hope of boosting chances of success or for value for money alone, it’s good to know that applications with ICo-Is are doing well with the ESRC even outside of the formal collaborative schemes, and that we shouldn’t shy away from looking abroad for the very best people to work with.   Few would argue with the ESRC’s contention that

[m]any major issues requiring research evidence (eg the global economic crisis, climate change, security etc.) are international in scope, and therefore must be addressed with a global research response.

Posted in Application advice, Career Young Researchers, ESRC, Funding, Funding Policy, Research Costs, University culture | 1 Comment

The ARMA conference, social media, the future of this blog, and some downtime

The Association of Research Managers and Administrators conference was held in Southampton last week, and I’ve only got time to scribble a few words about it.  It’s a little frustrating, really – I’ve come back from the conference with various ideas and schemes for work, and a few for the blog, but I’m on annual leave until the end of July.  While I’ve always written this blog in my own time, I’m going to have a near-complete break (apart from perhaps a little Twitter lurking) so my reader will have to wait until July at the very earliest for the second instalment of my impact series.

I co-presented a session at ARMA on ‘Social Media in Research Support’ with Phil Ward of ‘Fundermentals’ and the University of Kent, Julie Northam (Bournemouth University Research blog), and David Young (Northumbria University Research blog).  Phil has written a concise summary of the plenary sessions, and our presentation can be found on the Northumbria blog.

I have a slight stammer that I’m told that most people don’t notice, so I’m not a ‘natural’ public speaker, but I’m very pleased with the way that the session went.  I’m very grateful to my three co-presenters for their efforts and for what really amounted to quite a lot of preparation time, including a meeting in London.  I’m also very grateful to the delegates who attended – I think I counted 50 or so, which for the final session of the conference and scheduled against a very strong line-up of parallel sessions, was pretty good.    It was a very warm afternoon, but energy and attention levels in the room felt high, and this helped enormously.  So if you made it, thank you for coming, thank you for your attention, and most importantly of all, thank you for laughing at our jokes.

David opened the session by asking about the audience’s experience with social media, and I was surprised at how much experience there was in the room.  We weren’t far short of 100% on Facebook, probably 20% or more on (or having used) Twitter, and four or five bloggers.  Perhaps it shouldn’t have been a surprise, as the title of the session would have particularly appealed to those with an interest or previous experience.  But it was good to have an idea of the level at which to pitch things.

The session consisted of a brief introduction and explanation of social media, followed by four case studies.  Phil and I talked about our motivations in setting up our own blogs, our experiences, lessons learnt, and benefits and challenges.  Julie and David talked about their experience in setting up institutional research blogs, and how they went about getting institutional acceptance and academic buy-in.  It was interesting to see that the Open University had a poster presentation about a research blog that they’ve set up, though that’s internal only at the moment.  ARMA itself is now on Twitter, and this was the first year that the conference had an official hashtag – #ARMA2012.  While there’s no need for an official one – sometimes they just emerge – it’s very helpful to have an element of coordination.  I don’t think blogging or social media are going away any time soon, and I can only see their usage increasing – though I do have reservations about scalability and sustainability.

As I said in the presentation, my motivations in setting up a blog were to try to join in a broader conversation with academics, funders, and people like me.  We get to do a lot of that at the annual ARMA conference, but it would be good to keep that going throughout the rest of the year too.  A secondary motivation was to learn by doing – I’m expected to help academics write their pathways to impact, which almost inevitably involve social media, and by getting involved myself I understand it in a way that I could never have understood as a mere bystander.

My blog is now a few weeks shy of its first birthday, an auspicious event marked by an invoice (rather than a birthday card) from my hosting provider, and a time for reflection.  I’ve managed reasonably well to hit an average of 2-3 posts per month – some reactions to news, some more detailed think pieces, and some lighter reflections on university culture and life.  That’s not too bad, but looking into the future I wonder whether I’ll be able to sustain this, and whether I’ll want to spend my own time writing about these things.  While I’m hopeful that I might be able to shift a little of the blog into my ‘day job’ (discussions on that to follow), one other option is to share the load, and I think the future for most blogs is multi-author.  Producing semi-regular, consistent quality content is a challenge, and I’m going to be soliciting guest posts in the future to feature alongside my own – whether that’s semi-regular or one off.  So, if you’d like to write occasionally but don’t want a whole blog, this might be a good opportunity.  Happy to discuss anything that’s a good fit with the overall theme of the blog.  Please drop me an email if you’re interested – I don’t bite.

One issue that came up in the questions (and afterwards on Twitter), was the question of the personal and the professional.  My sense was that a fair few people in the room had their own Twitter accounts already, but used them for personal purposes, rather than for professional purposes, and were concerned about mixing the two.  Probably there was little or no reference to their job in their bio, and they tweet about their interests and talk to family and friends.  This issue of the personal and the professional was something we touched on only very briefly in our talk, and mainly in reference to blogs rather than Twitter.  But it’s clearly something that concerns people, and may be an active barrier to more people getting involved in Twitter conversations.  Probably the one thing I’d do differently about the presentation would be to say more about this, and I’ve added it to my list of topics for blog posts for the future.

Unless anyone else wants to write it?

Posted in Research Impact, Social Media, University culture | 2 Comments

An Impact Statement: Part 1: Impact and the REF

If your research leads directly or indirectly to this, we'll be having words.....

Partly inspired by a twitter conversation and partly to try to bring some semblance of order to my own thoughts, I’m going to have a go at writing about impact.  Roughly, I’d argue that:

  • The impact agenda is – broadly – a good thing
  • Although there are areas of uncertainty and plenty of scope for collective learning, I think the whole area is much less opaque than many commentators seem to think
  • While the Research Councils and the REF have a common definition of ‘impact’, they’re looking at it from different ends of the telescope.

This post will come in three parts.  In part one, I’ll try to sketch a bit of background and say something about the position of impact in the REF.  In part two, I’ll turn to the Research Councils and think about how ‘impact’ differs from previous – different but related – agendas.  In part three, I’ll pose some questions that are puzzling me about impact and test my thinking with examples.

Why Impact?

What’s going on?  Where’s it come from?  What’s driving it?  I’d argue that to understand the impact agenda properly, it’s important to first understand the motivations.  Broadly speaking, I think there are two.

Firstly, I think it arises from a worry about a gap between academic research and those who might find it useful in some way.  How many valuable insights of various kinds from various disciplines have never got further than an academic journal or conference?  While some academics have always considered providing policy advice or writing for practitioner journals a key part of their role, I’m sure that’s not universally true.  I can imagine some of these researchers now complaining, like music obsessives, that they were into impact before anyone else, before it sold out and went all mainstream.  As I’ve argued previously, one advantage of the impact agenda is that it gives engaged academics some long overdue recognition, as well as a much greater incentive for others to become involved in impact-related activities.

Secondly, I think it’s about finding concrete, credible, and communicable evidence of the importance and value of academic research.  If we want to keep research funding at current levels, there’s a need to show return on investment and that the taxpayer is getting value for money.  Some will cringe at the reduction of the importance and value of research to such crude and instrumentalist terms, but we live in a crude and instrumentalist age.  There is an overwhelming case for the social and economic benefits of research, and that case must be made.  Whether we like it or not, no government of any likely hue is just going to keep signing the cheques.  The champions of research in policy circles do not intend to go naked into the conference chamber when they fight our corner.  To what extent the impact agenda comes directly from government, or whether it’s a pre-emptive move, I’m not quite sure.  But the effect is pretty much the same.

What’s Impact in the REF?

The REF definition of impact is as follows:

140. For the purposes of the REF, impact is defined as an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia (as set out in paragraph 143).
141. Impact includes, but is not limited to, an effect on, change or benefit to:
• the activity, attitude, awareness, behaviour, capacity, opportunity, performance, policy, practice, process or understanding
• of an audience, beneficiary, community, constituency, organisation or individuals
• in any geographic location whether locally, regionally, nationally or internationally.
142. Impact includes the reduction or prevention of harm, risk, cost or other negative effects.
(Assessment Framework and Guidance on Submissions, page 26)

Paragraph 143 goes on to rule out academic impact on the grounds that it’s assessed in the outputs and environment sections.  Fair enough.  More controversially, it goes on to state that “impacts on students, teaching, and other activities within the submitting HEI are excluded”.  But it’s possible to understand the reasoning.  If it were included, there’s a danger that far too many impact case studies would be about how research affects teaching – and while that’s important, I don’t think we’d want it to dominate.  There’s also an argument that the link between research and teaching ought to be so obvious that there’s no need to measure it for particular reward.  In practical terms, I think it would be hard to measure.  I might know how my new theory has changed how I teach my module on (say) organisational behaviour to undergraduates, but it would be hard to track that change across all UK business schools.  I’d also worry about the possible perverse incentives on the shape of the curriculum that allowing impact on teaching might create.

The Main Panel C (the panel for most social sciences) criteria state that:

The main panel acknowledges that impact within its remit may take many forms and occur in a wide range of spheres. These may include (but are not restricted to): creativity, culture and society; the economy, commerce or organisations; the environment; health and welfare; practitioners and professional services; public policy, law and services. The categories used to define spheres of impact, for the purpose of this document, inevitably overlap and should not be taken as restrictive. Case studies may describe impacts which have affected more than one sphere. (para 77, pg. 68)

There’s actually a lot of detail and some good illustrations of what forms impact might take, and I’d recommend having a read.  I wonder how many academics not directly involved in REF preparations have read this?  One difficulty is finding it – it’s not the easiest document to track down.  For my non-social science reader(s), the other panel working methods can be found here.  Helpfully, nothing on that page will tell you which panel is which, but (roughly) Panel A is health and life sciences; B is natural sciences, computers, maths and engineering; C is social science; and D humanities.  Each panel criteria document has a table with examples of impact.

What else do we know about the place of impact in the REF?  Well, we know that impact has to have occurred in the REF period (1 January 2008 to 31 July 2013) and that impact has to be underpinned by excellent research (at least 2*) produced at the submitting university at some point between 1 January 1993 and 31 December 2013.  It doesn’t matter if the researchers who produced the research are still at the institution – while publications move with the author, impact stays with the institution.  However, I can’t help wondering whether an excessive reliance on research undertaken by departed staff might look too much like trading on past glories.  It’s probably about getting the balance right.  The number of case studies required is approximately 1 per 8 FTE submitted, but see page 28 of the guidance document for a table.

Impact will have a weighting of 20%, with environment 15% and outputs (publications) 65%, and it looks likely that the weighting of impact will increase next time.  However, I wouldn’t be at all surprised if the actual contribution ends up being less than that.  If there’s a general trend for overall impact scores to be lower than those for (say) publications, then impact’s contribution to the overall result will end up being less than 20%.  My understanding is that in the last exercise, for some units of assessment, environment was consistently rated more highly, thus de facto increasing its weighting.  Unfortunately this is just a recollection of something I read years ago, and which I can’t now find.  But if this is right, and if impact does come in with lower marks overall, we neglect environment at our peril.
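To illustrate the arithmetic behind that worry, here’s a quick sketch. The weightings are the real ones (outputs 65%, impact 20%, environment 15%), but the profile scores are entirely invented for the sake of the example: if impact scores run lower than output scores, impact’s actual share of the overall result falls below its nominal 20%.

```python
# REF 2014 element weightings (real) applied to hypothetical profile
# scores on the 0-4 scale. The scores below are invented for illustration.
weights = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}
scores = {"outputs": 3.0, "impact": 2.0, "environment": 3.2}

# Overall weighted score for the unit of assessment.
overall = sum(weights[k] * scores[k] for k in weights)
print(f"overall: {overall:.2f}")  # 0.65*3.0 + 0.20*2.0 + 0.15*3.2 = 2.83

# Each element's actual share of the overall score.
for k in weights:
    share = weights[k] * scores[k] / overall
    print(f"{k}: nominal {weights[k]:.0%}, actual contribution {share:.1%}")
```

With these made-up numbers, impact contributes only about 14% of the overall score rather than its nominal 20%, while environment contributes about 17% rather than 15% – which is the sense in which lower marks de facto reduce an element’s weighting.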
