Using Social Media to Support Research Management – ARMA training and development event

Last week I gave a brief presentation at a training and development event organised by ARMA (the Association of Research Managers and Administrators) entitled ‘Using Social Media to Support Research Management’. Also presenting were Professor Andy Miah of the University of Salford, Sierra Williams of the LSE Impact of Social Sciences blog, Terry Bucknell of Altmetric, and Phil Ward of Fundermentals and the University of Kent. A .pdf of my wibblings as inflicted can be found here.

I guess there are three things from the presentation and from the day as a whole that I’d pick out for particular comment.

Firstly, if you’re involved in research management/support/development/impact, then you should be familiar with social media, and by familiar I don’t mean just knowing the difference between Twitter and Friends Reunited – I mean actually using it. That’s not to say that everyone must or should dash off and start a blog – for one thing, I’m not sure I could handle the competition. But I do think you should have a professional presence on Twitter. And I think the same applies to any academics whose research interests involve social media in any way – I’ve spoken to researchers wanting to use Twitter data who are not themselves on Twitter. Call it a form of ethnography if you like (or, probably better, action research) – I think you only really understand social media by getting involved. You should “inhabit the ecosystem”, as Andy Miah put it in a quite brilliant presentation that you should definitely make time to watch.

I’ve listed some of the reasons for getting involved, and some of the advantages and challenges, in my presentation. But briefly, it’s only by using it and experiencing for yourself the challenge of finding people to follow, getting followers, getting attention for the messages you want to transmit, risking putting yourself and your views out there that you come to understand it. I used to just throw words like “blog” and “twitter” and “social media engagement” around like zeitgeisty confetti when talking to academic colleagues about their various project impact plans, without understanding any of it properly. Now I can talk about plans to get twitter followers, strategies to gain readers for the project blog, the way the project’s social media presence will be involved in networks and ecosystems relevant to the topic.

One misunderstanding that a lot of people have is that you have to tweet a lot of original content – in fact, it’s better not to. Andy mentioned a “70/30” rule – 70% other people’s stuff, 30% yours, as a rough rule of thumb. Even if your social media presence is just as a kind of curator – finding and retweeting interesting links and making occasional comments – you’re still contributing and you’re still part of the ecosystem, and if your interests overlap with mine, I’ll want to follow you because you’ll find things I miss. David Gauntlett wrote a really interesting article for the LSE impact blog on the value of “publish, then filter” systems for finding good content, which is well worth a read. Filtering is important work.

The second issue I’d like to draw out is an issue around personal and professional identity on Twitter. When Phil Ward, Julie Northam, David Young and I gave a presentation on social media at the ARMA conference in 2012, many delegates were already using Twitter in a personal capacity, but were nervous about mixing the personal and professional. I used to think this was much more of a problem/challenge than I do now. In last week’s presentation, I argued that there were essentially three kinds of Twitter account – the institutional, the personal, and what I called “Adam at work”. Institutional wears a shirt and tie and is impersonal and professional. Personal is sat in its pants on the sofa tweeting about football or television programmes or politics. Adam-at-work is more ‘smart casual’ and tweets about professional stuff, but without being so strait-laced as the institutional account.

Actually Adam-at-Work (and, for that matter, You-at-Work) are not difficult identities to work out and to stick to. We all manage it every day. We’re professional and focused and on-topic, but we also build relations with our office mates and co-workers, and some of that relationship building is through sharing weekend plans, holidays, interests etc. I want to try to find a way of explaining this without resorting to the words “water cooler” or (worse) “banter”, but I’m sure you know what I mean. Just as we need to show our human sides to bond with colleagues in everyday life, we need to do the same on Twitter. Essentially, if you wouldn’t lean over and tell it to the person at the desk next to you, don’t tweet about it. I think we’re all well capable of doing this, and we should trust ourselves to do it. By all means keep a separate personal Twitter account (because you don’t want your REF tweets to send your friends to sleep) and use that to shout at the television if you’d like to.

I think it’s easy to exaggerate the dangers of social media, not least because of regular stories about people doing or saying something ill-advised. But it’s worth remembering that a lot of those people are famous or noteworthy in some way, and so attract attention and provocation in a way that we just don’t. While a footballer might get tweeted all kinds of nonsense after a poor performance, I’m unlikely to get twitter-trolled by someone who disagrees with something I’ve written, or booed while catching a train. Though I do think a football crowd style crescendo of booing might be justified in the workplace for people who send mass emails without the intended attachment/with the incorrect date/both.

Having said all that… this is just my experience, and as a white male it may well be that I don’t attract that kind of negative attention on social media. I trust/hope that female colleagues have had similar positive experiences and I’ve no reason to think they haven’t, but I don’t want to pass off my experience as universal. (*polishes feminist badge*).

The third thing is to repeat an invitation which I’ve made before – if anyone would like to write a guest post for my blog on any topic relevant to its general themes, please do get in touch. And if anyone has any questions about Twitter, blogging, or social media that they think I might have a sporting chance of answering, please ask away.

Grant Writing Mistakes part 94: The “Star Wars”

Have you seen Star Wars?  Even if you haven’t, you might be aware of the iconic opening scene, and in particular the scrolling text that begins

“A long time ago, in a galaxy far, far away….”

(Incidentally, this means that the Star Wars films are set in the past, not the future. Which is a nice bit of trivia and the basis for a good pub quiz question).  What relevance does any of this have for research grant applications?  Patience, Padawan, and all will become clear.

What I’m calling the “Star Wars” error in grant writing is starting the main body of your proposal from the position of “A long time ago…”, before going on to review the literature at great length, quoting everything that calls for more research, and in general taking a lot of time and space to lay the groundwork and justify the research – all without yet telling the reader what the project is about, why it’s important, or why it’s you and your team who should do it.

This information about the present project will generally emerge in its own sweet time and space, but not until two thirds of the way through the available space.  What then follows is a rushed exposition with inadequate detail about the research questions and about the methods to be employed.  The reviewer is left with an encyclopaedic knowledge of all that went before it, of the academic origin story of the proposal, but precious little about the project for which funding is being requested.  And without a clear and compelling account of what the project is about, the chances of getting funded are pretty much zero.  Reviewers will not unreasonably want more detail, and may speculate that its absence is an indication that the applicants themselves aren’t clear what they want to do.

Yes, an application does need to locate itself in the literature, but this should be done quickly, succinctly, clearly, and economically as regards the space available. Depending on the nature of the funder, I’d suggest not starting with the background, and instead opening with what the present project is about, and then zooming out to locate it in the literature once the reader knows what it is that’s being located. Certainly if your background/literature review section takes up more than a quarter of the available space, it’s too long.

(Although I think “the Star Wars” is a defensible name for this grant application writing mistake, it’s only because of the words “A long time ago, in a galaxy far, far away….”. Actually the scrolling text is a really elegant, pared-down summary of what the viewer needs to know to make sense of what follows… and then we’re straight into planets, lasers, a fleeing spaceship and a huge Star Destroyer that seems to take forever to fly through the shot.)

In summary, if you want the best chance of getting funded, you should, er… restore balance to the force…. of your argument. Or something.

ESRC success rates 2013/2014

The ESRC Annual Report for 2013-14 has been out for quite a while now, and a quick summary and analysis from me is long overdue.

Although I was tempted to skip straight through all of the good news stories about ESRC successes and investments and dive straight in looking for success rates, I’m glad I took the time to at least skim read some of the earlier stuff.  When you’re involved in the minutiae of supporting research, it’s sometimes easy to miss the big picture of all the great stuff that’s being produced by social science researchers and supported by the ESRC.  Chapeau, everyone.

In terms of interesting policy stuff, it’s great to read that the “Urgency Grants” mechanism for rapid responses to “rare or unforeseen events” which I’ve blogged about before is being used, and has funded work “on the Philippines typhoon, UK floods, and the Syrian crisis”.  While I’ve not been involved in supporting an Urgency Grant application, it’s great to know that the mechanism is there, that it works, and that at least some projects have been funded.

The “demand management” agenda

This is what the report has to say on “demand management” – the concerted effort to reduce the number of applications submitted, so as to increase the success rates and (more importantly) reduce the wasted effort of writing and reviewing applications with little realistic chance of success.

Progress remains positive with an overall reduction in application numbers of 41 per cent, close to our target of 50 per cent. Success rates have also increased to 31 per cent, comparable with our RCUK partners. The overall quality of applications is up, whilst peer review requirements are down.

There are, however, signs that this positive momentum may be under threat as in certain schemes application volume is beginning to rise once again. For example, in the Research Grants scheme the proposal count has recently exceeded pre-demand management levels. It is critical that all HEIs continue to build upon early successes, maintaining the downward pressure on the submission of applications across all schemes.

It was always likely that “demand management” might be the victim of its own success – as success rates creep up again, getting a grant appears more likely and so researchers and research managers encourage and submit more applications. Other factors might also be involved – the stage of the REF cycle, for example. Or perhaps, now that talk of researcher or institutional sanctions has faded away, there’s less incentive for restraint.

Another possibility is that some universities haven’t yet got the message or don’t think it applies to them.  It’s also not hard to imagine that the kinds of internal review mechanisms that some of us have had for years and that we’re all now supposed to have are focusing on improving the quality of applications, rather than filtering out uncompetitive ideas.  But is anyone disgracing themselves?

Looking down the list of successes by institution (p. 41) it’s hard to pick out any obvious bad behaviour. Most of those who’ve submitted more than 10 applications have an above-average success rate. You’d only really pick out Leeds (10 applications, none funded), Edinburgh (8/1) and Southampton (14/2), and a clutch of institutions on 5/0 (including top-funded Essex, surprisingly), but in all those cases one or two more successes would change the picture. Similarly for the top performers – King’s College (7/3), King Leicester III (9/4), Oxford (14/6) – it’s hard to make much of a case for the excellence or inadequacy of internal peer review systems from these figures alone. What might be more interesting is a list of applications by institution which failed to reach the required minimum standard, but that’s not been made public to the best of my knowledge. And of course, all these figures only refer to the response mode Standard Grant applications in the financial year (not academic year) 2013-14.

Concentration of Funding

Another interesting stat (well, true for some values of “interesting”) concerns the level of concentration of funding.  The report records the expenditure levels for the top eleven (why 11, no idea…) institutions by research expenditure and by training expenditure.  Interesting question for you… what percentage of the total expenditure do the top 11 institutions get?  I could tell you, but if I tell you without making you guess first, it’ll just confirm what you already think about concentration of funding.  So I’m only going to tell you that (unsurprisingly) training expenditure is more concentrated than research funding.  The figures you can look up for yourself.  Go on, have a guess, go and check (p. 44) and see how close you are.

Research Funding by Discipline

This is on page 40, and it’s usually the most interesting/contentious section. Overall success rate was 25% – a little down from last year, but a huge improvement on 14% two years ago.

Big winners? History (4 from 6); Linguistics (5 from 9); Social Anthropology (4 from 9); Political and International Studies (9 from 22); and Psychology (26 from 88 – just under 30% of all grants funded were in psychology). Big losers? Education (1 from 27), Human Geography (1 from 19), Management and Business Studies (2 from 22).
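For anyone who likes to check the arithmetic behind these comparisons, the success rates implied by the figures above can be worked out in a few lines. This is just a sanity-check sketch using the funded/submitted counts quoted in this post, nothing more:

```python
# Funded vs. submitted counts for ESRC response-mode Standard Grants,
# 2013-14, as quoted above (figures from p. 40 of the annual report)
outcomes = {
    "History": (4, 6),
    "Linguistics": (5, 9),
    "Social Anthropology": (4, 9),
    "Political & International Studies": (9, 22),
    "Psychology": (26, 88),
    "Education": (1, 27),
    "Human Geography": (1, 19),
    "Management & Business Studies": (2, 22),
}

for discipline, (funded, submitted) in outcomes.items():
    rate = 100 * funded / submitted  # success rate as a percentage
    print(f"{discipline}: {funded}/{submitted} = {rate:.1f}%")

# Psychology's share of all funded grants in this list
total_funded = sum(funded for funded, _ in outcomes.values())
print(f"Psychology share of grants listed here: {100 * 26 / total_funded:.1f}%")
```

Note that the small denominators are exactly why it’s hard to read too much into single-year swings – one extra success moves History from 67% to 83%.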

Has this changed much from previous years?  Well, you can read what I said last year and the year before on this, but overall it’s hard to say because we’re talking about relatively small numbers for most subjects, and because some discipline classifications have changed over the last few years.  But, once again, for the third year in a row, Business and Management and Education do very, very poorly.

Human Geography has also had a below-average success rate for the last few years, but going from 3 from 14 to 1 from 19 probably isn’t that dramatic a collapse – though it’s certainly a bad year. I always make a point of trying to be nice about Human Geography, because I suspect they know where I live. Where all of us live. Oh, and Psychology gets a huge slice of the overall funding, albeit not a disproportionate one given the number of applications.

Which kind of brings us back to the same questions I asked in my most-read-ever piece – what on earth is going on with Education and Business and Management research, and why do they do so badly with the ESRC? I still don’t have an entirely satisfactory answer.

I’ve put together a table showing changes to disciplinary success rates over the last few years which I’m happy to share, but you’ll have to email me for a copy.  I’ve not uploaded it here because I need to check it again with fresh eyes before it’s used – fiddly, all those tables and numbers.

Pre-mortems: Tell me why your current grant application or research project will fail

I came across a really interesting idea the other week via the Freakonomics podcast – the idea of a project “pre-mortem” or “prospective hindsight”. They interviewed Gary Klein, who described it as follows:

KLEIN:  I need you to be in a relaxed state of mind.  So lean back in your chair. Get yourself calm and just a little bit dreamy. I don’t want any daydreaming but I just want you to be ready to be thinking about things. And I’m looking in a crystal ball. And uh, oh, gosh…the image in the crystal ball is a really ugly image. And this is a six-month effort. We are now three months into the effort and it’s clear that this project has failed. There’s no doubt about it. There’s no way that it’s going to succeed. Oh, and I’m looking at another scene a few months later, the project is over and we don’t even want to talk about it. And when we pass each other in the hall, we don’t even make eye contact. It’s that painful. OK. So this project has failed, no doubt about it [….] I want each of you to write down all the reasons why this project has failed. We know it failed. No doubts. Write down why it failed.

The thinking here is that such an approach to projects reduces overconfidence, and elsewhere the podcast discusses the problems of overconfidence, “go fever”, the Challenger shuttle disaster, and how cultural/organisational issues can make it difficult to bring up potential problems and obstacles.  The pre-mortem exercise might free people from that, and encourages people (as a team) to find reasons for failure and then respond to them.  I don’t do full justice to the arguments here, but you can listen to it for yourself (or read the transcript) at the link above.  It reminds me of some of the material covered in a MOOC I took which showed how very small changes in the way that questions are posed and framed can make surprisingly large differences to the decisions that people make, so perhaps this very subtle shift in mindset might be useful.

How might we use the idea of a pre-mortem in research development?  My first thought was about grant applications.  Would it help to get the applicants to undertake the pre-mortem exercise?  I’m not sure that overconfidence is often a huge problem among research teams (a kind of grumpy, passive-aggressive form of entitled pessimism is probably more common), so perhaps the kind of groupthink overconfidence/excessive positivity is less of an issue than in larger project teams where nobody wants to be the one to be negative.  But perhaps there’s value in asking the question anyway, and re-focusing applicants on the fact that they’re writing an application for reviewers and for a funding body, not for themselves.  A reminder that the views, priorities, and (mis)interpretations of others are crucial to their chances of success or failure.

Would it help to say to internal reviewers “assume this project wasn’t funded – tell me why”?  Possibly.  It might flush out issues that reviewers may be too polite or insufficiently assertive to raise otherwise, and again, focuses minds on the nature of the process as a competition.  It could also help reviewers identify where the biggest danger for the application lies.

Another way it could usefully be used is in helping applicants risk-assess their own project. Saying to them “you got funded, but didn’t achieve the objectives you set for yourself. Why not?” might be a good way of identifying project risks to minimise in the management plan, or risks to alleviate through better advance planning. It might prompt researchers to think more cautiously about the project timescale, especially around issues that are largely out of their control.

So… has anyone used anything like this before in research development?  Might it be a useful way of thinking?  Why will your current application fail?

MOOCing about: My experience of a massively open online course

I’ve just completed my first Massively Open Online Course (or MOOC), entitled ‘The mind is flat: the shocking shallowness of human psychology’, run via the Futurelearn platform. It was run by Professor Nick Chater and PhD student Jess Whittlestone of Warwick Business School, and this was the second iteration of the course, which I understand will be running again at some point. Although teaching and learning in general (and MOOCs in particular) are off topic for this blog, I thought it might be interesting to jot down a few thoughts about my very limited experience of being on the receiving end of a MOOCing. There’s been a lot of discussion of MOOCs which I’ve been following in a kind of half-hearted way, but I’ve not seen much (if anything) written from the student perspective.

“Alright dudes… I’m the future of higher education, apparently. Could be worse… could be HAL 9000”

I was going to explain my motivations for signing up for the course to add a bit of context, but one of the key themes of the MOOC has been the shallowness and instability of human reasons and motivations.  We can’t just reach back into our minds, it seems, and retrieve our thinking and decision making processes from a previous point in time.  Rather, the mind is an improviser, and can cobble together – on demand – all kinds of retrospective justifications and explanations for our actions which fit the known facts including our previous decisions and the things we like to think motivate us.

So my post-hoc rationalisation of my decision to sign up is probably three-fold. Firstly, I think a desire for lifelong learning and in particular an interest in (popular) psychology are things I ascribe to myself.  Hence an undergraduate subsidiary module in psychology and having read Stuart Sutherland’s wonderful book ‘Irrationality‘.  A second plausible explanation is that I work with behavioural economists in my current role, and this MOOC would help me understand them and their work better.  A third possibility is that I wanted to find out what MOOCs were all about and what it was like to do one, not least because of their alleged disruptive potential for higher education.

So… what does the course consist of? Well, it’s a six-week course requiring an estimated five hours of time per week. Each week-long chunk has a broad overarching theme, and consists of a round-up of themes arising from questions from the previous week, and then a series of short videos (generally between 4 and 20 minutes) either in a lecture/talking head format, or in an interview format. Interviewees have included other academics and industry figures. There are a few very short written sections to read, a few experiments to do to demonstrate some of the theories, a talking point, and finally a multiple choice test. Students are free to participate whenever they like, but there’s a definite steer towards trying to finish each week’s activities within that week, rather than falling behind or ploughing ahead. Each video or page provides the opportunity to add comments, and it’s possible for students to “like” each other’s comments and respond to them. In particular there’s usually one ‘question of the week’ where comment is particularly encouraged.

The structure means that it’s very easy to fit alongside work and other commitments – so far I’ve found myself watching course videos during half time in Champions League matches (though the half time analysis could have told its own story about the shallowness of human psychology and the desire to create narratives), last thing at night in lieu of bedtime reading, and when killing time between finishing work and heading off to meet friends.  The fact that the videos are short means that it’s not a case of finding an hour or more at a time for uninterrupted study. Having said that, this is a course which assumes “no special knowledge or previous experience of studying”, and I can well imagine that other MOOCs require a much greater commitment in terms of time and attention.

I’ve really enjoyed the course, and I’ve found myself actively looking forward to the start of a new week, and to carving out a free half hour to make some progress into the new material. As a commitment-light, convenient way of learning, it’s brilliant. The fact that it’s free helps. Whether I’d pay for it or not I’m not sure, not least because I’ve learnt that we’re terrible at working out absolute value, as our brains are programmed to compare. Once a market develops and gives me some options to compare, I’d be able to think about it. Once I had a few MOOCs under my belt, I’d certainly consider paying actual money for the right course on the right topic at the right level with the right structure. At the moment it’s possible to pay for exams (about £120, or £24 for a “statement of participation”) on some courses, but as they’re not credit bearing it’s hard to imagine there would be much uptake. A better option to offer might be a smaller fee for a self-printable .pdf record of courses completed, especially once people start racking up course completions.

One drawback is the multiple choice method of examining/testing, which doesn’t allow much sophistication or nuance in answers. A couple of the questions on the MOOC I completed were ambiguous or poorly phrased, and one in particular made very confusing use of “I” and “you” in a scenario question, and I’d still argue (sour grapes alert) that the official “correct” answer was wrong. I can see that multiple choice is the only really viable way of having tests at the moment (though one podcast I was listening to the other day mooted the possibility of machine text analysis marking short essays, based on marks given to a sample), but I think a lot more work needs to go into developing best (and better) practice around question setting. It’s difficult – as a research student I remember being asked to come up with some multiple choice questions about the philosophy of John Rawls for an undergraduate exam paper, and struggled with that. Though I did remove the one from the previous paper which asked how many principles of justice there were (answer: it depends how you count them).

But could it replace an undergraduate degree programme?  Could I imagine doing a mega-MOOC as my de facto full time job, watching video lectures, reading course notes and core materials, taking multiple choice questions and (presumably) writing essays?  I think probably not.  I think the lack of human interaction would probably drive me mad – and I say this as a confirmed introvert.  Granted, a degree level MOOC would probably have more opportunities for social interaction – skype tutorials, better comments systems, more interaction with course tutors, local networks to meet fellow students who live nearby – but I think the feeling of disconnection, isolation, and alienation would just be too strong.  Having said that, perhaps to digital natives this won’t be the case, and perhaps compared (as our brains are good at comparing) to the full university experience a significantly lighter price tag might be attractive.  And of course, for those in developing countries or unable or unwilling to relocate to a university campus (for whatever reason), it could be a serious alternative.

But I can certainly see a future that blends MOOC-style delivery with more traditional university approaches to teaching and learning.  Why not restructure lectures into shorter chunks and make them available online, at the students’ convenience?  There are real opportunities to bring in extra content with expert guest speakers, especially industry figures, world leading academic experts, and particularly gifted and engaging communicators.  It’s not hard to imagine current student portals (moodle, blackboard etc) becoming more and more MOOC-like in terms of content and interactivity.  In particular, I can imagine a future where MOOCs offer opportunities for extra credit, or for non-credit bearing courses for students to take alongside their main programme of study.  These could be career-related courses, courses that complement their ‘major’, or entirely hobby or interest based.

One thought that struck me was whether it was FE rather than HE that might be threatened by MOOCs. Or at least the Adult Ed/evening classes aspect of FE. But I think even there a motivation to – say – decide to learn Spanish, is only one motivation – another is often to meet new people and to learn together, and I don’t think that that’s an itch that MOOCs are entirely ready to scratch. But I can definitely see a future for MOOCs as the standard method of continuing professional development in any number of professional fields, whether these are university-led or not. This has already started to happen, with a course called ‘Discovering Business in Society‘ counting as an exemption towards one paper of an accounting qualification. I also understand that Futurelearn are interested in pilot schemes for the use of MOOCs with 16-19 year olds to support learning outcomes in schools.

It’s also a great opportunity for hobbyists and dabblers like me to try something new and pursue other intellectual interests.  I can certainly imagine a future in which huge numbers of people are undertaking a MOOC of one kind or another, with many going from MOOC to MOOC and building up quite a CV of virtual courses, whether for career reasons, personal interest, or a combination of both.  Should we see MOOCs as the next logical and interactive step from watching documentaries? Those who today watch Horizon and Timewatch and, well, most of BBC4, might in future carry that interest forward to MOOCs.

So perhaps rather than seeing MOOCs in terms of what they’re going to disrupt or displace or replace, we’re better off seeing them as something entirely new.

And I’m starting my next MOOC on Monday – Cooperation in the contemporary world: Unlocking International Politics led by Jamie Johnson of the University of Birmingham.  And there are several more that look tempting… How to read your boss from colleagues at the University of Nottingham, and England in the time of Richard III from – where else – the University of Leicester.