Best wishes for 2013, via the medium of my favourite university-related youtube clips of 2012….

Yes, I know I used the same picture last year. You can write to the usual address for your money back….

Hello everyone, and happy new year’s eve.  Or probably more likely by the time you’re reading this, happy first day back at work of 2013 and a prosperous new email backlog from people who had less time off over Christmas than you, and are anxious to demonstrate their productivity.  My last new year’s message was a bit of a whingeathon, so I’m going to be more positive this season and share some youtubes that I’ve enjoyed over the last year.  I know you’ve got a lot to do today, but why not leave this page open and watch the clips over lunch?

1. John Cleese, Jonathan Miller – Words… and things

This is a sketch from 1977 starring John Cleese and Jonathan Miller, which I think I’ve tweeted before with the title “Philosophers preparing their REF Impact statement”.  And while there’s a bit of that, what I like most about this is the superbly well observed and subtly exaggerated academic mannerisms.  Are those mannerisms a peculiar philosophy affectation, or are they more widespread?

2.  Armstrong and Miller Physics Special

In which Ben Miller (who I think has a PhD in physics) demonstrates how not to do public engagement/media work.  Watching this, it’s hard not to appreciate the effort that does go into communicating very complex science to the general public, particularly the efforts to explain the search for the Higgs via the medium of rap, when I suspect that the reality is pretty much as Miller’s character says.  Special hat tip on the science public engagement front to m’colleagues from the Periodic Videos team at the University of Nottingham’s School of Chemistry, though apparently they prefer Dubstep (whatever that is) to rap music.

3. A Very Peculiar Practice

I finally got round to watching this late 1980s TV series about a medical practice at a university.  It’s both very current (debates about research v. teaching; working with industry; student finances; university politics; the role of the university; the place of the arts/humanities) and very dated (haircuts; weird theme music and opening credits; accents – some weird London accents that have either died out or never existed at all).  On the down side, it does require the viewer to accept the premise that a university medical practice is a department of the university (was this ever the case anywhere?), and the overall tone and level of (sur)realism uneasily shifts between sitcom and comedy-drama.  On the up side, it’s an interesting view of 1980s campuses (Birmingham and Keele – my former stomping ground) and has a superb cast – Peter Davison, Barbara Flynn, John Bird, plus small early roles for Hugh Grant and Kathy Burke.  It’s worth a look – I’ve embedded the trailer for the DVD complete box set, though a fair bit of it is also on youtube if you want more of a taster before investing.

4. “Don’t wanna work in admin”

I’ve been a fan of Nick Helm’s brand of on-the-edge-of-a-breakdown stand-up and musical comedy since seeing him in Nottingham a few years back – hilarious and terrifying at the same time.  What I remember most about that performance was a song that will resonate with anyone who has or has had a basic admin job.  It’s very sweary and therefore not work safe, so I’m only going to link it rather than embed it.

Enjoy.  But probably not in the office.

ESRC success rates by discipline: what on earth is going on?

Update – read this post for the 2012/13 stats for success rates by discipline

The ESRC have recently published a set of ‘vital statistics’ which are “a detailed breakdown of research funding for the 2011/12 financial year” (see page 22).  While differences in success rates between academic disciplines are nothing new, this year’s figures show some really quite dramatic disparities which – in my view at least – require an explanation and action.

The overall success rate was 14% (779 applications, 108 funded) for the last tranche of responsive mode Small Grants and responsive mode Standard Grants (now Research Grants).  However, Business and Management researchers submitted 68 applications, of which 1 was funded.  One.  One single funded application.  In the whole year.  For the whole discipline.  Education fared little better with 2 successes out of 62.

Just pause for a moment to let that sink in.  Business and Management.  1 of 68.  Education.  2 of 62.

Others did worse still.  Nothing for Demographics (4 applications), Environmental Planning (8), Science and Technology Studies (4), Social Stats, Computing, Methods (11), and Social Work (10).  However, with a 14% success rate working out at about 1 in 7, low volumes of applications may explain this.  It’s rather harder to explain a total of 3 applications funded from 130.

Next least successful were ‘no lead discipline’ (4 of 43) and Human Geography (3 from 32).  No other subjects had success rates in single figures.  At the top end were Socio-Legal Studies (a stonking 39%, 7 of 18), and Social Anthropology (28%, 5 from 18), with Linguistics; Economics; and Economic and Social History also having hit rates over 20%.  Special mention for Psychology (185 applications, 30 funded, 16% success rate) which scored the highest number of projects – almost as many as Sociology and Economics (the second and third most funded) combined.
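Since a success rate is just awards divided by applications, the headline figures above are easy to check.  Here’s a quick sketch – the counts are as quoted in this post rather than pulled directly from the ESRC report, so treat it as illustration rather than gospel:

```python
# Success rates recomputed from the application/award counts quoted above.
figures = {
    "All disciplines (Small + Standard Grants)": (779, 108),
    "Business and Management": (68, 1),
    "Education": (62, 2),
    "Socio-Legal Studies": (18, 7),
    "Social Anthropology": (18, 5),
    "Psychology": (185, 30),
}

for discipline, (applications, funded) in figures.items():
    rate = 100 * funded / applications
    print(f"{discipline}: {funded} of {applications} = {rate:.0f}%")
```

Which reproduces the 14% overall rate, the stonking 39% for Socio-Legal Studies, and the grim 1% for Business and Management.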

Is this year unusual, or is there a worrying and peculiar trend developing?  Well, you can judge for yourself from this table on page 49 of last year’s annual report, which has success rates going back to the heady days of 06/07.  Three caveats, though, before you go haring off to see your own discipline’s stats.  One is that the reports refer to financial years, not academic years, which may (but probably doesn’t) make a difference.  The second is that the figures refer to Small and Standard Grants only (not Future Leaders/First Grants, Seminar Series, or specific targeted calls).  The third is that funded projects are categorised by lead discipline only, so the figures may not tell the full story as regards involvement in interdisciplinary research.

You can pick out your own highlights, but it looks to me as if this year is only a more extreme version of trends that have been going on for a while.  Last year’s Education success rate?  5%.  The years before?  8% and 14%.  Business and Management?  A heady 11%, compared to 10% and 7% for the preceding years.  And you’ve got to go all the way back to 09/10 to find the last time any projects were funded in Demography, Environmental Planning, or Social Work.  And Psychology has always been the most funded, and has always got about twice as many projects as the second and third subjects, albeit from a proportionately large number of applications.

When I have more time I’ll try to pull all the figures together in a single spreadsheet, but at first glance many of the trends seem similar.

So what’s going on here?  Well, there are a number of possibilities.  One is that our Socio-Legal Studies research in this country is tip top, and B&M and Education research are comparatively very weak.  Certainly I’ve heard it said that B&M research tends to suffer from poor research methodologies.  Another possibility is that some academic disciplines are very collegiate and supportive in nature, and scratch each other’s backs when it comes to funding, while other disciplines are more back-stabby than back-scratchy.

But are any or all of these possibilities sufficient to explain the difference in funding rates?  I really don’t think so.  So what’s going on?  Unconscious bias?  Snobbery?  Institutional bias?  Politics?  Hidden agendas?  All of the above?  Anyone know?

More pertinently, what do we do about it?  Personally, I’d like to see the appropriate disciplinary bodies putting a bit of pressure on the ESRC for some answers, some assurances, and the production of some kind of plan for addressing the imbalance.  While no-one would expect to see equal success rates for every subject, this year’s figures – in my view – are very troubling.

And something needs to be done about it, whether that’s a re-thinking of priorities, putting the knives away, addressing real disciplinary weaknesses where they exist, ring-fenced funding, or some combination of all of the above.  Over to greater minds than mine…..

The ARMA conference, social media, the future of this blog, and some downtime

The Association of Research Managers and Administrators conference was held in Southampton last week, and I’ve only got time to scribble a few words about it.  It’s a little frustrating, really – I’ve come back from the conference with various ideas and schemes for work, and a few for the blog, but I’m on annual leave until the end of July.  While I’ve always written this blog in my own time, I’m going to have a near-complete break (apart from perhaps a little Twitter lurking) so my reader will have to wait until July at the very earliest for the second instalment of my impact series.

I co-presented a session at ARMA on ‘Social Media in Research Support’ with Phil Ward of ‘Fundermentals’ and the University of Kent, Julie Northam (Bournemouth University Research blog), and David Young (Northumbria University Research blog).  Phil has written a concise summary of the plenary sessions, and our presentation can be found on the Northumbria blog.

I have a slight stammer that I’m told that most people don’t notice, so I’m not a ‘natural’ public speaker, but I’m very pleased with the way that the session went.  I’m very grateful to my three co-presenters for their efforts and for what really amounted to quite a lot of preparation time, including a meeting in London.  I’m also very grateful to the delegates who attended – I think I counted 50 or so, which for the final session of the conference and scheduled against a very strong line-up of parallel sessions, was pretty good.  It was a very warm afternoon, but energy and attention levels in the room felt high, and this helped enormously.  So if you made it, thank you for coming, thank you for your attention, and most importantly of all, thank you for laughing at our jokes.

David opened the session by asking about the audience’s experience with social media, and I was surprised at how much experience there was in the room.  We weren’t far short of 100% on Facebook, probably about 20% or more on or having used Twitter, and four or five bloggers.  Perhaps it shouldn’t have been a surprise, as the title of the session would have particularly appealed to those with an interest or previous experience.  But it was good to have an idea of the level to pitch things at.

The session consisted of a brief introduction and explanation of social media, followed by four case studies.  Phil and I talked about our motivations in setting up our own blogs, our experiences, lessons learnt, and benefits and challenges.  Julie and David talked about their experience in setting up institutional research blogs, and how they went about getting institutional acceptance and academic buy-in.  It was interesting to see that the Open University had a poster presentation about a research blog that they’ve set up, though that’s internal only at the moment.  ARMA itself is now on Twitter, and this was the first year that the conference had an official hashtag – #ARMA2012.  While there’s no need for an official one – sometimes they just emerge – it’s very helpful to have an element of coordination.  I don’t think blogging or social media are going away any time soon, and I can only see their usage increasing – though I do have reservations about scalability and sustainability.

As I said in the presentation, my motivations in setting up a blog were to try to join in a broader conversation with academics, funders, and people like me.  We get to do a lot of that at the annual ARMA conference, but it would be good to keep that going throughout the rest of the year too.  A secondary motivation was to learn by doing – I’m expected to help academics write their pathways to impact, which almost inevitably involve social media, and by getting involved myself I understand it in a way that I could never have understood as a mere bystander.

My blog is now a few weeks shy of its first birthday, an auspicious event marked by a birthday card invoice from my hosting provider, and a time for reflection.  I’ve managed reasonably well to hit an average of 2-3 posts per month – some reactions to news, some more detailed think pieces, and some lighter reflections on university culture and life.  That’s not too bad, but looking into the future I wonder whether I’ll be able to sustain this, and whether I’ll want to spend my own time writing about these things.  While I’m hopeful that I might be able to shift a little of the blog into my ‘day job’ (discussions on that to follow), one other option is to share the load, and I think the future for most blogs is multi-author.  Producing semi-regular, consistent quality content is a challenge, and I’m going to be soliciting guest posts in the future to feature alongside my own – whether that’s semi-regular or one-off.  So, if you’d like to write occasionally but don’t want a whole blog, this might be a good opportunity.  Happy to discuss anything that’s a good fit with the overall theme of the blog.  Please drop me an email if you’re interested – I don’t bite.

One issue that came up in the questions (and afterwards on Twitter) was the relationship between the personal and the professional.  My sense was that a fair few people in the room had their own Twitter accounts already, but used them for personal purposes, rather than for professional purposes, and were concerned about mixing the two.  Probably there was little or no reference to their job in their bio, and they tweet about their interests and talk to family and friends.  This issue of the personal and the professional was something we touched on only very briefly in our talk, and mainly in reference to blogs rather than Twitter.  But it’s clearly something that concerns people, and may be an active barrier to more people getting involved in Twitter conversations.  Probably the one thing I’d do differently about the presentation would be to say more about this, and I’ve added it to my list of topics for blog posts for the future.

Unless anyone else wants to write it?

An Impact Statement: Part 1: Impact and the REF

If your research leads directly or indirectly to this, we'll be having words.....

Partly inspired by a twitter conversation and partly to try to bring some semblance of order to my own thoughts, I’m going to have a go at writing about impact.  Roughly, I’d argue that:

  • The impact agenda is – broadly – a good thing
  • Although there are areas of uncertainty and plenty of scope for collective learning, I think the whole area is much less opaque than many commentators seem to think
  • While the Research Councils and the REF have a common definition of ‘impact’, they’re looking at it from different ends of the telescope.

This post will come in three parts.  In part one, I’ll try to sketch a bit of background and say something about the position of impact in the REF.  In part two, I’ll turn to the Research Councils and think about how ‘impact’ differs from previous different – but related – agendas.  In part three, I’ll pose some questions that are puzzling me about impact and test my thinking with examples.

Why Impact?

What’s going on?  Where’s it come from?  What’s driving it?  I’d argue that to understand the impact agenda properly, it’s important to first understand the motivations.  Broadly speaking, I think there are two.

Firstly, I think it arises from a worry about a gap between academic research and those who might find it useful in some way.  How many valuable insights of various kinds from various disciplines have never got further than an academic journal or conference?  While some academics have always considered providing policy advice or writing for practitioner journals as a key part of their role as academics, I’m sure that’s not universally true.  I can imagine some of these researchers now complaining like music obsessives that they were into impact before anyone else and it sold out and went all mainstream.  As I’ve argued previously, one advantage of the impact agenda is that it gives engaged academics some long overdue recognition, as well as a much greater incentive for others to become involved in impact related activities.

Secondly, I think it’s about finding concrete, credible, and communicable evidence of the importance and value of academic research.  If we want to keep research funding at current levels, there’s a need to show return on investment and that the taxpayer is getting value for money.  Some will cringe at the reduction of the importance and value of research to such crude and instrumentalist terms, but we live in a crude and instrumentalist age.  There is an overwhelming case for the social and economic benefits of research, and that case must be made.  Whether we like it or not, no government of any likely hue is just going to keep signing the cheques.  The champions of research in policy circles do not intend to go naked into the conference chamber when they fight our corner.  To what extent the impact agenda comes directly from government, or whether it’s a pre-emptive move, I’m not quite sure.  But the effect is pretty much the same.

What’s Impact in the REF?

The REF definition of impact is as follows:

140. For the purposes of the REF, impact is defined as an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia (as set out in paragraph 143).
141. Impact includes, but is not limited to, an effect on, change or benefit to:
• the activity, attitude, awareness, behaviour, capacity, opportunity, performance, policy, practice, process or understanding
• of an audience, beneficiary, community, constituency, organisation or individuals
• in any geographic location whether locally, regionally, nationally or internationally.
142. Impact includes the reduction or prevention of harm, risk, cost or other negative effects.
Assessment Framework and Guidance on Submissions, page 26.

Paragraph 143 goes on to rule out academic impact on the grounds that it’s assessed in the outputs and environment section.  Fair enough.  More controversially, it goes on to state that “impacts on students, teaching, and other activities within the submitting HEI are excluded”.  But it’s possible to understand the reasoning.  If it were included, there’s a danger that far too many impact case studies would be about how research affects teaching – and while that’s important, I don’t think we’d want it to dominate.  There’s also an argument that the link between research and teaching ought to be so obvious that there’s no need to measure it for particular reward.  In practical terms, I think it would be hard to measure.  I might know how my new theory has changed how I teach my module on (say) organisational behaviour to undergraduates, but it would be hard to track that change across all UK business schools.  I’d also worry about the possible perverse incentives on the shape of the curriculum that allowing impact on teaching might create.

The Main Panel C (the panel for most social sciences) criteria state that:

The main panel acknowledges that impact within its remit may take many forms and occur in a wide range of spheres. These may include (but are not restricted to): creativity, culture and society; the economy, commerce or organisations; the environment; health and welfare; practitioners and professional services; public policy, law and services. The categories used to define spheres of impact, for the purpose of this document, inevitably overlap and should not be taken as restrictive. Case studies may describe impacts which have affected more than one sphere. (para 77, pg. 68)

There’s actually a lot of detail and some good illustrations of what forms impact might take, and I’d recommend having a read.  I wonder how many academics not directly involved in REF preparations have read this?  One difficulty is finding it – it’s not the easiest document to track down.  For my non-social science reader(s), the other panel working methods can be found here.  Helpfully, nothing on that page will tell you which panel is which, but (roughly) Panel A is health and life sciences; B is natural sciences, computers, maths and engineering; C is social science; and D humanities.  Each panel criteria document has a table with examples of impact.

What else do we know about the place of impact in the REF?  Well, we know that impact has to have occurred in the REF period (1 January 2008 to 31 July 2013) and that impact has to be underpinned by excellent research (at least 2*) produced at the submitting university at some point between 1 January 1993 and 31 December 2013.  It doesn’t matter if the researchers producing the research are still at the institution – while publications move with the author, impact stays with the institution.  However, I can’t help wondering if an excessive reliance on research undertaken by departed staff won’t look too much like trading on past glories.  But probably it’s about getting the balance right.  The number of case studies required is approximately 1 per 8 FTE submitted, but see page 28 of the guidance document for a table.

Impact will have a weighting of 20%, with environment 15% and outputs (publications) 65%, and it looks likely that the weighting of impact will increase next time.  However, I wouldn’t be at all surprised if the actual contribution ends up being less than that.  If there’s a general trend that overall scores for impact are lower than that of (say) publications, then the contribution will end up being less than 20%.  My understanding is that for some units of assessment, environment was consistently rated more highly, thus de facto increasing the weighting.  Unfortunately this is just a recollection of something I read years ago, and which I can’t now find.  But if this is right, and if impact does come in with lower marks overall, we neglect environment at our peril.
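To see how that de facto re-weighting works, here’s a toy calculation.  The 65/20/15 weightings are the real ones; the average star ratings are entirely invented for the sake of illustration:

```python
# REF overall quality is a weighted average of the three element profiles.
weights = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}

# Invented average scores on the 0-4 star scale, with impact
# marked lower on average than the other elements:
scores = {"outputs": 3.0, "impact": 2.0, "environment": 3.2}

overall = sum(weights[e] * scores[e] for e in weights)
impact_share = weights["impact"] * scores["impact"] / overall

print(f"Overall grade point average: {overall:.2f}")
print(f"Impact's actual share of the total: {impact_share:.0%}")
```

With those made-up numbers, impact ends up contributing about 14% of the total rather than its nominal 20%, while the consistently higher-scoring environment element punches above its 15%.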

Are institutions over-reacting to impact?

Interesting article and leader in this week’s Times Higher on the topic of impact, both of which carry arguments that “university managers” have over-reacted to the impact agenda.  I’m not sure whether that’s true or not, but I suspect that it’s all a bit more complicated than either article makes it appear.

The article quotes James Ladyman, Professor of Philosophy at the University of Bristol, as saying that university managers had overreacted and created “an incentive structure and environment in which an ordinary academic who works on a relatively obscure area of research feels that what they are doing isn’t valued”.

If that’s happened anywhere, then obviously things have gone wrong.  However, I do think that this needs to be understood in the context of other groups and sub-groups of academics who likewise feel – or have felt – undervalued.  I can well understand why academics whose research does not lend itself to impact activities would feel alienated and threatened by the impact agenda, especially if it is wrongly presented (or perceived) as a compulsory activity for everyone – regardless of their area of research, skills, and comfort zone – and (wrongly) as a prerequisite for funding.

Another group of researchers who felt – and perhaps still feel – under-valued are those undertaking very applied research.  It’s very hard for them to get their stuff into highly rated (aka valued) journals.  Historically the RAE has not been kind to them.  University promotion criteria perhaps failed to sufficiently recognise public engagement and impact activity – and perhaps still do.  While all the plaudits go to their highly theoretical colleagues, the applied researchers feel looked down upon, and struggle to get academic recognition.  If we were to ask academics whose roles are mainly teaching (or teaching and admin) rather than research, I think we may find that they feel undervalued by a system which many of them feel is obsessed by research and sets little store by excellent (rather than merely adequate) teaching.  Doubtless increased fees will change this, and perhaps we will hear complaints of the subsequent under-valuing of research relative to teaching.

So if academics working in non-impact friendly (NIFs, from now on) areas of research are now feeling under-valued, they’re very far from alone.  It’s true that the impact agenda has brought about changes to how we do things, but I think it could be argued that it’s not that the NIFs are now under valued, but that other kinds of research and academic endeavour  – namely applied research and impact activities (ARIA from now on) – are now being valued to a greater degree than before.  Dare I say it, to an appropriate degree?  Problem is, ‘value’ and ‘valuing’ tends to be seen as a zero sum game – if I decide to place greater emphasis on apples, the oranges may feel that they have lost fruit bowl status and are no longer the, er, top banana.  Even if I love oranges just as much as before.

Exactly how institutions ‘value’ (whatever we mean by that) NIF research and ARIA is an interesting question.  It seems clear to me that an institution/school/manager/grant giving body/REF/whatever could err either way by undervaluing and under-rewarding either.  We need both.  And we need excellent teachers.  And – dare I say it – non-academic staff too.  Perhaps the challenge for institutions is getting the balance right and making everyone feel valued, and reflecting different academic activities fairly in recruitment and selection processes and promotion criteria.  Not easy, when any increased emphasis on one area seems to cause others to feel threatened.

Resource list for academics new to social media

(This didn't happen)
"You will make sure that your research methodology links with your research questions, you snivelling little maggot!"

This week I was asked to be involved in a Research Grant application ‘bootcamp’ to talk in particular about the use of social media in pathways to impact plans, and academic blogging in general.  I was quick to disclaim expertise in this area – I’ve been blogging for a while now, but I’m not an academic and I’m certainly not an expert on social media.  I’m also not sure about this use of the word ‘bootcamp’.  We already have ‘workshop’ and ‘surgery’ as workplace-based metaphors for types of activity, and I’m not sure we’re ready for ‘bootcamp’.  So unless the event turns out to involve buzzcuts, a ten mile run, and an assault course, I’ll be asking for my money back.

But I thought I’d try to put together a list of resources and examples that I was already aware of in time for the session, and then I wondered about ‘crowdsourcing’ (i.e. lazily asking my readers/twitter followers) some others that I might have missed.  Hopefully we’ll then end up with a general list of resources that everyone can use.  I’ve pasted some links below, along with a few observations of my own.  Please do chip in with your thoughts, experiences, tips, and recommendations for resources.

———————————–

Things I have learnt about using social media

Blogging

  • You must have a clear idea about your intended audience and what you hope to achieve.  Blogging for the sake of it or because it’s flavour of the month or because you think it is expected is unlikely to be sustainable or to achieve the desired results.
  • A good way to start is to search for people doing a similar thing and contact them asking if you can link to their blog.  Everyone likes being linked to, and this is a good way to start conversations.  Once established, support others in the same way.
  • You have to build something of a track record of posts and tweets to be credible as a consistent source of quality content – you’ve got to earn a following, and this takes time, work, and patience.  And even then, might not work.  Consider a ‘soft launch’ to build your track record, and then a second wave of more intensive effort to get noticed.
  • Posting quality comments on other people’s blogs, either in their comments section, or in a post on your blog, can be a good way to attract attention.
  • Illustrate blog posts with a picture (perhaps found through google images) – a lot of successful bloggers seem to do this.
  • Multi-author blogs and/or guest posts are a good way to share the load.
  • And consequently, offering guest posts or content to established blogs is a way to get noticed.
  • The underlying technology is now very straightforward.  Anyone who is reasonably computer literate will have little trouble learning the technical skills.  The editing frame I’m writing this in looks a lot like Word, and I’ve used precisely no programming/HTML stuff – that can all be automated now.

Twitter

  • The technology of @s and # is fairly straightforward to pick up – find some relevant/interesting people to follow and you’ll soon pick it up, or read one of the guides below.
  • A good way to reach people is to get “retweets” – essentially when someone else with a bigger following forwards your message.  You do this by addressing posts to them using the @ symbol.
  • Generally, people seem to retweet things they find interesting and which suit their message.  So… the ESRC retweeted my blog post linking to their regional visit presentation when that post said nice things about the visit and linked to their presentation.
  • Weird mix of personal and professional.  Some twitter accounts are uniquely professional, others uniquely personal, but many seem a mixture.  Some of the usual barriers seem not to apply, or apply only loosely.  Care needs to be taken here.

General

  • Social media is potentially a huge time sink – keep in mind costs in time versus benefits gained.
  • It can be a struggle if you’re naturally shy and attention seeking doesn’t come easily to you.

Resources and further reading:

Examples of individual UoN blogs:

Patter – Pat Thomson, School of Education http://patthomson.wordpress.com/
Political Apparitions – Steven Fielding, School of Politics http://stevenfielding.com/
Registrarism – Paul Greatrix, University Registrar  http://registrarism.wordpress.com/
Cash for Questions – Adam Golberg, NUBS  https://socialscienceresearchfunding.co.uk/

UoN Group/institutional/project blogs:

Bullets and Ballots – UoN School of Politics: http://nottspolitics.org/
China Policy Institute http://blogs.nottingham.ac.uk/chinapolicyinstitute/
Centre for Corporate Social Responsibility
http://blogs.nottingham.ac.uk/betterbusiness/

UoN blogs home http://blogs.nottingham.ac.uk/

Guides:

Twitter Guide – LSE Impact in Social Sciences
http://blogs.lse.ac.uk/impactofsocialsciences/2011/09/29/twitter-guide/

6 tips on blogging about research (Sarah Stewart (EdD Student, Otago University, NZ)
http://sarah-stewart.blogspot.co.uk/2012/04/my-top-6-tips-for-how-to-blog-about.html

Blogging about your research – first steps  (University of Warwick)
http://www2.warwick.ac.uk/services/library/researchexchange/topics/gd0007/

Is blogging or tweeting about research papers worth it? (Melissa Terras, UCL)
http://blogs.lse.ac.uk/impactofsocialsciences/2012/04/19/blog-tweeting-papers-worth-it/

A gentle introduction to twitter for the apprehensive academic, (Dorothy Bishop, University of Oxford)
http://deevybee.blogspot.co.uk/2011/06/gentle-introduction-to-twitter-for.html

Twitter accounts:

List of official University of Nottingham Twitter accounts
https://twitter.com/#!/UniofNottingham/uontwitteraccounts

Lists of academic twitter accounts (Curator: LSE Impact project team)
https://twitter.com/#!/LSEImpactBlog/soc-sci-academic-tweeters

https://twitter.com/#!/LSEImpactBlog/business-tweeters

https://twitter.com/#!/LSEImpactBlog/arts-academic-tweeters

https://twitter.com/#!/LSEImpactBlog/think-tanks

——————

Some of the links and choices of examples are more than a little University of Nottingham-centric, but then this was an internal event.  I’ve not checked with the authors of the various resources I’ve linked to, but have taken the liberty of assuming that they won’t mind the link and recognition.  I’m happy to remove any on request.

Any resources I’ve missed?  Any more thoughts and suggestions?  Please comment below….

A partial, qualified, cautious defence of the Research Excellence Framework (REF)

No hilarious visual puns on REF / Referees from me....

There’s been a constant stream of negative articles about the Research Excellence Framework (for non-UK readers, this is the “system for assessing the quality of research in UK higher education institutions”) over the last few months, and two more have appeared recently (from David Shaw, writing in the Times Higher, and from Peter Wells on the LSE Impact Blog)  which have prompted me to respond with something of a defence of the Research Excellence Framework.

One crucial fact that I left out of the description of the REF in the previous paragraph is that “funding bodies intend to use the assessment outcomes to inform the selective allocation of their research funding to HEIs, with effect from 2015-16”.  And I think this is a fact that’s also overlooked by some critics.  While a lot of talk is about prestige and ‘league tables’, what’s really driving the process is the need to have some mechanism for divvying out the cash for funding research – QR funding.  We could most likely do without a “system for assessing the quality of research” across every discipline and every UK university in a single exercise using common criteria, but we can’t do without a method of dividing up the cake as long as there’s still cake to share out.

In spite of the current spirit of perpetual revolution in the sector, money is still paid (via HEFCE) to universities for research, without much in the way of strings attached.  This basic, core funding is one half of the dual funding system for research in the UK – the other half being funding for individual research projects and other activities through the Research Councils.  What universities do with their QR funding varies, but I think typically a lot of it goes on staff salaries, so that the number of staff in any given discipline is partly a function of teaching income and research income.

I do have sympathy for some of the arguments against the REF, but I find myself returning to the same question – if not this way, then how? 

It’s unfair to expect anyone who objects to any aspect of the REF to furnish the reader with a fully worked up alternative, but constructive criticism must at least point the way.  One person who doesn’t fight shy of coming up with an alternative is Patrick Dunleavy, who has argued for a ‘digital census’ involving the use of citation data as a cheap, simple, and transparent replacement for the REF.  That’s not a debate I feel qualified to participate in, but my sense is that Dunleavy’s position on this is a minority one in UK academia.

In general, I think that criticisms of the REF tend to fall into the following broad categories.  I don’t claim to address decisively every last criticism made (hence the title), but for what it’s worth, here are the categories that I’ve identified, and what I think the arguments are.

1.  Criticism over details

The REF team have a difficult balancing act.  On the one hand, they need rules which are sensitive to the very real differences between academic disciplines.  On the other, fairness and efficiency call for as much similarity in approach, rules, and working methods as possible between panels.  The more differences between panels, the greater the chances of confusion and of mistakes being made in the process of planning and submitting REF returns – mistakes which could seriously affect both notional league table placing and cold hard cash.  The more complicated the process, the greater the transaction costs.  Which brings me onto the second balancing act.  On the one hand, it needs to be a rigorous and thorough process, with so much public money at stake.  On the other hand, it needs to be lean and efficient, minimising the demands on the time of institutions, researchers, and panel members.  This isn’t to say that the compromise reached on any given point between particularism and uniformity, and between rigour and efficiency, is necessarily the right one, of course.  But it’s not easy.

2.  Impact

The use of impact at all.  The relative weighting of impact.  The particular approach to impact.  The degree of uncertainty about impact.  It’s a step into the unknown for everyone, but I would have thought that the basic idea – some notion of impact, some expectation that where academic research can make a difference in the real world, we should ensure that it does – is hard to argue with.  I have much more sympathy for some academic disciplines than others as regards objections to the impact agenda.  Impact is really a subject for a blog post in itself, but for now, it’s worth noting that it would be inconsistent to argue both against the inclusion of impact in the REF and that it’s too narrow in terms of what it values and what it assesses.

3.  Encouraging game playing

While it’s true that the REF will encourage game playing in similar (though different) ways to its predecessors, I can’t help but think this is inevitable and would also be true of every possible alternative method of assessment.  And what some would regard as gaming, others would regard as just doing what is asked of them.

One particular ‘game’ that is played – or, if you prefer, strategic decision that is made – concerns where to set the threshold for submission.  It’s clear that there’s no incentive to include those whose outputs are likely to fall below the minimum threshold for attracting funding.  But it’s common for some institutions, in some disciplines, to set a threshold above this, with one eye not only on the QR funding, but also on league table position.  There are two arguments that can be made against this.  One is that QR funding shouldn’t be so heavily concentrated on the top-rated submissions, and/or that more funding should be available.  But that’s not an argument against the REF as such.  The other is that institutions should be obliged to submit everyone.  But the costs of doing so would be huge, and it’s not clear to me what the advantages would be – would we really get better or more accurate results with which to share out the funding?  Because ultimately the REF is not about individuals, but institutions.

4. Perverse incentives

David Shaw, in the Times Higher, sees a very dangerous incentive in the REF.

REF incentivises the dishonest attribution of authorship. If your boss asked you to add someone’s name to a paper because otherwise they wouldn’t be entered into the REF, it could be hard to refuse.

I don’t find this terribly convincing.  While I’m sure that there will be game playing around who should be credited with co-authored publications, I’d see that as acceptable in a way that the fraudulent activity Shaw fears (but stresses that he’s not experienced first-hand) just isn’t.  There are opportunities for – and temptations to commit – fraud, bad behaviour and misconduct in pretty much everything we do, from marking students’ work to reporting our student numbers and graduate destinations.  I’m not clear how that makes any of these activities ‘unethical’ in the way his article seems to argue.  Fraud is rare in our sector, and when anyone does commit it, it’s a huge scandal and heads roll.  It ruins careers and leaves a long shadow over institutions.  Even leaving aside the residual decency and professionalism that’s the norm in our sector, it would be a brave Machiavellian Research Director who risked attempting this kind of fraud.  To make it work, you’d need the cooperation and the silence of two academic researchers for every single publication.  Risk versus reward – it’s just not worth it.

Peter Wells, on the LSE blog, makes the point that the REF acts as an active disincentive for researchers to co-author papers with colleagues at their own institution, as only one can return the output to the REF.  That’s an oversimplification, but it’s certainly true that there’s active discouragement of the submission of the same output multiple times in the same return.  There’s no such problem if the co-author is at another institution, of course.  However, I’m not convinced that this theoretical disincentive makes a huge difference in practice.  Don’t academics co-author papers with the most appropriate colleague, whether internal or external?  How often – really – does a researcher choose to write something with a colleague at another institution rather than a colleague down the corridor, for REF reasons alone?  And might the REF incentive to include junior colleagues as co-authors that Shaw identifies work in the other direction, for genuinely co-authored pieces?

In general, proving the theoretical possibility of a perverse incentive is not sufficient to prove its impact in reality.

5.  Impact on morale

There’s no doubt that the REF causes stress and insecurity and can add significantly to the workload of those involved in leading on it.  There’s no doubt that it’s a worrying time, waiting for news of the outcome of the R&R paper that will get you over whatever line your institution has set for inclusion.  I’m sure it’s not pleasant being called in for a meeting with the Research Director to answer for your progress towards your REF targets, even with the most supportive regime.

However…. and please don’t hate me for this…. so what?  I’m not sure that the bare fact that something causes stress and insecurity is a decisive argument.  Sure, there’s a prima facie case for trying to make people’s lives better rather than worse, but that’s about it.  And again, what alternative system would be equally effective at dishing out the cash while being less stressful?  The fact is that every job – including university jobs – is sometimes stressful and has downsides as well as upsides.  Among academic staff, the number one stress factor I’m seeing at the moment is marking, not the REF.

6.  Effect on HE culture

I’ve got more time for this argument than for the stress argument, but I think a lot of the blame is misdirected.  Take Peter Wells’ rather utopian account of what might replace the REF:

For example, everybody should be included, as should all activities.  It is partly by virtue of the ‘teaching’ staff undertaking a higher teaching load that the research active staff can achieve their publications results; without academic admissions tutors working long hours to process student applications there would be nobody to receive research-led teaching, and insufficient funds to support the University.

What’s being described here is not in any sense a ‘Research Excellence Framework’.  It’s a much broader ‘Academic Excellence Framework’, and that doesn’t strike me as something that’s particularly easy to assess.  How on earth could we go about assessing absolutely everything that absolutely everyone does?  Why would we give out research cash according to how good an admissions tutor someone is?

I suspect that what underlies this – and some of David Shaw’s concerns as well – is a much deeper unease about the relative prestige and status attached to different academic roles: the research superstar; the old-fashioned teaching-and-research lecturer; those with heavy teaching and admin loads who are de facto teaching-only; and those who are de jure teaching-only.  There is certainly a strong sense that teaching is undervalued – in appointments, in promotions, in status, and in other ways.  Those with higher teaching and admin workloads do enable others to research in precisely the way that Wells argues, and respect and recognition for those tasks is certainly due.  And I think the advent of increased tuition fees is going to change things for the better in terms of the profile and status of excellent teaching.

But I’m not sure why any of these status problems are the fault of the REF.  The REF is about assessing research excellence and giving out the cash accordingly.  If the REF is allowed to drive everything, and non-inclusion is such a badge of dishonour that the contributions of academics in other areas are overlooked, well, that’s a serious problem.  But it’s an institutional one, and not one that follows inevitably from the REF.  We could completely change the way the REF works tomorrow, and it would make very little difference to the underlying status problem.

It’s not been my intention here to refute each and every argument against the REF, and I don’t think I’ve even addressed directly all of Shaw and Wells’ objections.  What I have tried to do is to stress the real purpose of the REF, the difficulty of the task facing the REF team, and make a few limited observations about the kinds of objections that have been put forward.  And all without a picture of Pierluigi Collina.

New year’s wishes….

The new calendar year is traditionally a time for reflection and for resolutions, but in a fit of hubris I’ve put together a list of resolutions I’d like to see for the sector, research funders, and university culture in general.  In short, for everyone but me.  But to show willing, I’ll join in too.

No more of the following, please….

1.  “Impactful”

Just…. no.  I don’t think of myself as a linguistic purist or a grammar-fascist, though I am a pedant for professional purposes.  I recognise that language changes and evolves over time, and I welcome changes that bring new colour and new descriptive power to our language.  While I accept that the ‘impact agenda’ is here to stay for the foreseeable future, the ‘impactful’ agenda need not be.  The technical case against this monstrosity of a word is outlined at Grammarist, but surely the aesthetic case is conclusive in itself.  I warn anyone using this word in my presence that I reserve the right to tell them precisely how annoyful they’re being.

2.  The ‘Einstein fallacy’

This is a mistaken and misguided delusion that a small but significant proportion of academics appear to be suffering from.  It runs a bit like this:
1) Einstein was a genius
2) Einstein was famously absent-minded and shambolic in his personal organisation
3) Conclusion:  If I am or pretend to be absent-minded and shambolic, either:
(3a) I will be a genius; or
(3b) People will think I am a genius; or
(3c) Both.

I accept that some academics are genuinely bad at administration and organisation.  In some cases it’s a lack of practice or experience, in others a lack of confidence, and I accept that for some this is just not where their interests and talents lie.  Fair enough.  But please stop being deliberately bad at it to try to impress people.  Oh, and you can only act like a prima donna if you have the singing skills to back it up…

3.  Lack of predictability in funding calls

Yes, I’m looking at you, ESRC.  Before the comprehensive spending review and all of the changes that followed from it, we had a fairly predictable annual cycle of calls, very few of which had very early autumn deadlines.  Now we’re into a new cycle which may or may not be predictable, and a lot of the deadlines seem to fall very early in the academic year.  Sure, let’s have one-off calls on particular topics, but let’s have a predictable annual cycle for everything else, with as much advance notice as possible.  It’ll help hugely with ‘demand management’, because it’ll be much easier to postpone applications that aren’t ready if we know there will be another call.  For example, I was aware of a couple of very strong seminar series ideas which needed further work and discussion within the relevant research and research-user communities.  My advice was to start that work now, using the existence of the current call as impetus, and to submit next year.  But we’ve taken a gamble, as we don’t know if there will be another call in the future, and you can’t tell me because apparently a decision has yet to be made.

4.  Lazy “please forward as appropriate” emails

Stuff sent to me from outside the Business School with the expectation that I’ll just send it on to everyone.  No.  Email overload is a real problem, and I write most of my emails with the expectation that I have ten seconds at most either to get the message across, or to earn an attention extension.  I mean, you’re not even reading this properly, are you?  You’re probably skim-reading this in case there’s a nugget of wit amongst the whinging.  Every email I send creates work for others, and every duff, dodgy, or irrelevant email I send reduces my e-credit rating.  I know for a fact that at least some former colleagues deleted everything I sent without reading it – there’s no other explanation I can think of for them missing two emails with headers including the magic words “sabbatical leave”.

So… will I be spending my e-credit telling my colleagues about your non-business school related event which will be of interest to no-one?  No, no, and most assuredly no.  I will forward it “as appropriate”, if by “appropriate” you mean my deleted items folder.

Sometimes, though, a handful of people might be interested.  Or quite a lot of people might be interested, but it’s not worth an individual email.  Maybe I’ll put it on the portal, or include it in one of my occasional news and updates emails.  Maybe.

If you’d like me to do that, though, how about sending me the message in a form I can forward easily and without embarrassment?  With a meaningful subject line, and a succinct and accurate summary in the opening two sentences?  So that I don’t have to do it for you before I feel I can send it on.  There’s a lovely internet abbreviation – TL;DR – which stands for Too Long; Didn’t Read.  I think its existence tells us something.

5.  People who are lucky enough to have interesting, rewarding and enjoyable jobs with an excellent employer and talented and supportive colleagues, who always manage to find some petty irritants to complain about, rather than counting their blessings.

 

Outstanding researcher or Outstanding grant writer?

"It's all the game, yo....."

The Times Higher has a report on Sir Paul Nurse‘s ‘Anniversary Day’ address to the Royal Society.  Although the Royal Society is a learned society in the natural rather than the social sciences, he makes an interesting distinction that seems to have – more or less unchallenged – become a piece of received wisdom across many if not all fields of research.

Here’s part of what Sir Paul had to say (my underline added)

Given this emphasis on the primacy of the individuals carrying out the research, decisions should be guided by the effectiveness of the researchers making the research proposal. The most useful criterion for effectiveness is immediate past progress. Those that have recently carried out high quality research are most likely to continue to do so. In coming to research funding decisions the objective is not to simply support those that write good quality grant proposals but those that will actually carry out good quality research. So more attention should be given to actual performance rather than planned activity. Obviously such an emphasis needs to be tempered for those who have only a limited recent past record, such as early career researchers or those with a break in their careers. In these cases making more use of face-to-face interviews can be very helpful in determining the quality of the researcher making the application.

I guess my first reaction to this is to wonder whether interviews are the best way of deciding research funding for early career researchers.  Apart from the cost, inconvenience and potential equal opportunities issues of holding interviews, I wonder if they’re even a particularly good way of making decisions.  When it comes to job interviews, I’ve seen many cases where interview performance seems to take undue priority over CV and experience.  And if the argument is that sometimes the best researchers aren’t the best communicators (which is fair), it’s not clear to me how an interview will help.

My second reaction is to wonder about the right balance between funding excellent research and funding excellent researchers.  And I think this is really the point that Sir Paul is making.  But that’s a subject for another entry, another time.  Coming soon!

My third reaction – and what this entry is about – is the increasingly common assumption that there is one tribe of researchers who can write outstanding applications, and another which actually does outstanding research.  One really good expression of this can be found in a cartoon at the ever-excellent Research Counselling.  Okay, so it’s only a cartoon, but it wouldn’t have made it there unless it was tapping into some deeper cultural assumptions.  This article from the Times Higher back at the start of November speaks of ‘Dr Plods’ – for whom getting funding is an aim in itself – and ‘Dr Sparks’ – the ones who deserve it – and there seems to be little challenge from readers in the comments section below.

But does this assumption have any basis in fact?  Are those who get funded mere journeymen and women researchers, mere average intellects, whose sole mark of distinction is their ability to toady effectively to remote and out-of-touch funding bodies?  To spot the research priority flavour-of-the-month from the latest Delivery Plan, and cynically twist their research plans to match it?  It’s a comforting thought for the increasingly large number of people who don’t get funding for their project.  We’d all like to be the brilliant-but-eccentric-misunderstood-radical-unappreciated genius who doesn’t play by the rules, cuts a few corners but gets the job done, and to hell with the pencil pushers at the DA’s office in city hall – sorry, at RCUK’s offices in downtown Swindon.  A weird kind of cross between Albert Einstein and Jimmy McNulty from ‘The Wire’.

While I don’t think anyone is seriously claiming that the Sparks-and-Plods picture should be taken literally, I’m not even sure how much truth there is in it as a parable or generalisation.  For one thing, I don’t see how anyone could realistically Plod their way very far from priority to priority as they change and still have a convincing track record for all of them.  I’m sure that a lot of deserving proposals don’t get funded, but I doubt very much that many undeserving proposals do get the green light.  The brute fact is that there are more good ideas than there is money to spend on funding them, and the chances of that changing in the near future are pretty much zero.  I think that’s one part of what’s powering this belief – if good stuff isn’t being funded, that must be because mediocre stuff is being funded.  Right?  Er, well…. probably not.  I think the reality is that it’s the Sparks who get funded – or at least those Sparks who are better able to communicate their ideas and make a convincing case for fit with funder or scheme priorities.  Plods, and their ‘incremental’ research (a term that damns with faint praise in some ESRC referees’ reports that I’ve seen), shouldn’t even be applying to the ESRC – or at least not to the standard Research Grants scheme.

Some of this Sparks/Plods view is probably caused by the impact agenda.  If impact is hard for the social sciences, it’s at least ten times as hard for basic research in many of the natural sciences.  I can understand why people don’t like the impact agenda, and I can understand why people are hostile.  However, my understanding of the impact agenda, as far as research funding applications are concerned, has always been that if a project has the potential for impact, it ought to pursue it, and there ought to be a good, solid, thought-through, realistic, and defensible plan for bringing it about.  If there genuinely is no potential for impact, argue the case in the impact statement.  Consider this, from the RCUK impact FAQ.

How do Pathways to Impact affect funding decisions within the peer review process?

The primary criterion within the peer review process for all Research Councils is excellent research. This has always been the case and remains unchanged. As such, problematic research with an excellent Pathways to Impact will not be funded. There are a number of other criteria that are assessed within research proposals, and Pathways to Impact is now one of those (along with e.g. management of the research and academic beneficiaries).

Of course, how this plays out in practice is another matter, but every indication I’ve had from the ESRC is that this is taken very seriously.  Research excellence comes first.  Impact (and other factors) second.  These may end up being used in tie-breakers, but if it’s not excellent, it won’t get funded.  Things may be different at the other Research Councils that I know less about, especially the EPSRC which is repositioning itself as a sponsor of research, and is busy dividing and subdividing and prioritising research areas for expansion or contraction in funding terms.

It’s worth recalling that it’s academics who make decisions on funding.  It’s not Suits in Swindon.  It’s academics.  Your peers.  I’d be willing to take seriously arguments that the form of peer review that we have can lead to conservatism and caution in funding decisions.  But I find it much harder to accept the argument that senior academics – researchers and achievers in their own right – are funding projects of mediocre quality but good impact stories ahead of genuinely innovative, ground-breaking research which could drive the relevant discipline forward.

But I guess my message to anyone reading this who considers herself to be more of a ‘Doctor Spark’ who is losing out to ‘Doctor Plod’ is to point out that it’s easier for Sparky to do what Ploddy does well than vice versa.  Ploddy will never match your genius, but you can get the help of academic colleagues and your friendly neighbourhood research officer – some of whom are uber-Plods, which in at least some cases is a large part of the reason why they’re doing their job rather than yours.

Want funding?  Maximise your chances of getting it.  Want to win?  Learn the rules of the game and play it better.  Might your impact plan be holding you back?  Take advantage of any support that your institution offers you – and if it does, be aware of the advantage that this gives you.  Might your problem be the art of grant writing?  Communicating your ideas to a non-specialised audience?  To reviewers and panel members from a cognate discipline?  To a referee not from your precise area?  Take advice.  Get others to read it.  Take their impressions and even their misunderstandings seriously.

Or you could write an application with little consideration for impact, with little concern for clarity of expression or the likely audience, and then if you’re unsuccessful, you can console yourself with the thought that it’s the system, not you, that’s at fault.

What would wholesale academic adoption of social media look like?

A large crowd of people
Crowded out?

Last week, the LSE Impact of Social Sciences blog was asking its readers and followers to nominate their favourite academic tweeters.  This got me thinking.  While that’s a sensible question to ask now, and one that could create a valuable resource, I wonder whether the question would make as much sense if asked in a few years’ time?

The drivers for academics (and arguably academic-related types like me) to start to use twitter and to contribute to a blog are many – brute self-promotion; desire to join a community or communities; to share ideas; to test ideas; to network and make new contacts; to satisfy the impact requirements of the research funder; and so on and so forth.  I think most current PhD students would be very well advised to take advantage of social media to start building themselves an online presence as an early investment in their search for a job (academic or otherwise).   I’d imagine that a social media strategy is now all-but-standard in most ESRC ‘Pathways to Impact’ documents.  Additionally, there are now many senior, credible, well-established academic bloggers and twitterers, many of whom are also advocates for the use of social media.

So, what would happen if there was a huge upsurge in the number of academics (and academic-relateds) using social media?  What if, say, participation rates reached 20% or so?  Would the utility of social media scale, or would the signal-to-noise ratio worsen to the point where its usefulness decreased?

This isn’t a rhetorical question – I’ve really no idea and I’m curious.  Anyone?  Any thoughts?

I guess that there’s a difference between different types of social media.  I have friends who are outside the academy and who have Twitter accounts for following and listening, rather than for leading or talking.  They follow the Brookers and the Frys and the Goldacres, and perhaps some news sources.  They use Twitter like a form of RSS feed, essentially.

But what about blogging, or using Twitter to transmit, rather than to receive?  If even 10% of academics have an active blog, will it still be possible or practical to keep track of everything relevant that’s written?  In my field, I think I’ve linked to pretty much every related blog (see links in the sidebar) in the UK, and one from Australia.  In certain academic fields it’s probably similarly straightforward to keep track of everyone significant and relevant.  If this blogging lark catches on, there will come a point at which it’s no longer possible for anyone to keep up with everything in any given field.  So, maybe people become more selective and we drop down to sub-specialisms, and it becomes sensible to ask for our favourite academic tweeters on non-linear economics, or something like that.

On the other hand, it might be that new entrants to the blogging market will be limited and inhibited by the number already present.  Or we might see more multi-author blogs, mergers and so on, until we re-invent the journal.  Or we might see strategies that involve attracting the attention and comment of influential bloggers and the academic twitterati (a little bit of me died inside typing that, I hope you’re happy….).  Might that be what happens?  That e-hierarchies form (arguably they already exist) which echo real-world hierarchies, and effectively squeeze out new entrants?  Although… I guess good content will always have a chance of ‘going viral’ within relevant communities.

Of course, it may well be that something else will happen.  That Twitter will end up in the same pile as MySpace.  Or that it simply won’t be widely adopted or become mainstream at all.  After all, most academics still don’t have much of a web 1.0 presence beyond a perfunctory page on their Department website.

That’s all a bit rambly and far longer than I meant it to be.  But as someone who is going to be recommending the greater use of social media to researchers, I’d like to have a sense of where all this might be going, and what the future might hold.  Would the usefulness of social media as an academic communication, information sharing, and networking tool effectively start to diminish once a certain point is reached?  Or would it scale?