[I’m delighted to be able to publish the first guest post on the blog, written by Stephanie Harris, Contracts Manager at City, University of London. Look out for more of Stephanie’s work over the next weeks and months. Where my posts focus mainly on pre-award, Stephanie’s work in post-award brings important insights – once the funding has been won, what happens next? If anyone else is interested in writing guest posts/having their work hosted on my blog, I’d be delighted to hear from you. Unless you’re one of those affiliate/SEO spammers, in which case I won’t be – AG]
A version of this article first appeared in Funding Insight on 9th March 2018 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
My previous post posed a question about whether applying for research funding was worth it or not, and concluded with a list of questions to consider to work out the answer. This follow-up is a list of costs and benefits associated with applying for external research funding, whether successful or unsuccessful. Weirdly, my list appears to contain more costs than benefits for success and more benefits than costs for failure, but perhaps that’s just me being contrary…
If you’re successful:
Benefits…
You get to do the research you really want to do
In career terms, whether for moving institution or internal promotion, there’s a big tick in the box marked ‘external research funding’.
Your status in your institution and within your discipline is likely to rise. Bringing in funding via a competitive external process gives you greater external validation, and that changes perceptions – perhaps it marks you out as a leader in your field, perhaps it marks a shift from early career researcher to fulfilling your evident promise.
Success tends to beget success in terms of research funding. Deliver this project and any future application will look more credible for it.
Costs…
You’ve got to deliver on what you promised. That means all the areas of fudge or doubt or uncertainty about who-does-what need to be sorted out in practice. If you’ve under-costed any element of the project – your time, consumables, travel and subsistence – you’ll have to deal with it, and it might not be much fun.
Congratulations, you’ve just signed yourself up for a shedload of admin. Even with the best and most supportive post-award team, you’ll have project management to do. Financial monitoring; recruitment, selection, and line management of one or more research associates. And it doesn’t finish when the research finishes – thanks to the impact agenda, you’ll probably be reporting on your project via Researchfish for years to come.
Every time any comparable call comes round in the future, your colleagues will ask you to give a presentation about your application/sit on the internal sifting panel/undertake peer review. Once a funding agency has given you money, you can bet they’ll be asking you to peer review other applications. I’ve listed this as a cost for workload purposes, but there are also a lot of benefits to getting involved in peer reviewing applications, because it’ll improve your own too. Also, the chances are that you benefited from such support/advice from senior colleagues, so pay it forward. But be ready to pay.
You’ve just raised the bar for yourself. Don’t be surprised if certain people in research management start talking about your next project before this one is done as if it’s a given or an inevitability.
Unless you’re careful, you may not see as much recognition in your workload as you might have expected. Of course, your institution is obliged to make the time promised in the grant application available to you, but unless you’ve secured agreement in advance, you may find that much of this is taken out of your existing research allocation rather than out of teaching and admin. Especially as these days we no longer think of teaching as a chore to buy ourselves out from. Think very carefully about which elements of your workload you would like to lose if your application is successful.
The potential envy and enmity of colleagues who are picking up bits of what was your work.
If you’re unsuccessful…
Benefits…
The chances are that there’s plenty to be salvaged even from an unsuccessful application. Once you’ve gone through the appropriate stages of grief, there’s a good chance that there’s at least one paper (even if ‘only’ a literature review) in the work that you’ve done. If you and your academic colleagues and your stakeholders are still keen, the chances are that there’s something you can do together, even if it’s not what you ideally wanted to do.
Writing an application will force you to develop your research ideas. This is particularly the case for early career researchers, where the pursuit of one of those long-shot fellowships can be worth it if only to get proper support in developing your research agenda.
If you’ve submitted a credible, competitive application, you’ve at least shown willing in terms of grant-getting. No-one can say that you haven’t tried. Depending on the pressures/expectations you’re under, having had a credible attempt at it buys you some licence to concentrate on your papers for a bit.
If it’s your first application, you’ll have learnt a lot from the process, and you’ll be better prepared next time. Depending on your field, you could even add a credible unsuccessful application to a CV, or draw on it when a job application asks about grant-getting experience.
If your institution has an internal peer review panel or other selection process, you’ve put yourself and your research onto the radar of some senior people. You’ll be more visible, and this may well lead to further conversations with colleagues, especially outside your school. In the past I’ve recommended that people put forward internal expressions of interest even if they’re not sure they’re ready, for precisely this reason.
Costs…
You’ve just wasted your time – and quite a lot of time at that. And not just work time… often evenings and weekends too.
It’ll come as a disappointment, which may take some time to get over.
Even if you’ve kept it quiet, people in your institution will know that you’ve been unsuccessful.
I’ve written two longer pieces on what to do if your research grant application is unsuccessful, which can be found here and here.
In the wake of this week’s Association of Research Managers and Administrators’ conference in Birmingham, Research Professional has published an interesting article by Richard Bond, head of research administration at the University of the West of England. The article – From ARMA to avatars: expansion today, automation tomorrow? – speculates about the future of the research management/development profession given the likely advances of automation and artificial intelligence. Each successive ARMA conference is hailed as the largest ever, and ARMA’s membership has grown rapidly over recent years, probably reflecting increasing numbers of research support roles, increased professionalism, and an increased awareness of ARMA and the attractiveness of what it offers in terms of professional development. But might better, smarter computer systems reduce, and perhaps even eliminate, the need for some research development roles?
In many ways, the future is already here. In my darker moments I’ve wondered whether some colleagues might be replicants or cylons. But many universities already have (or are in the process of getting) some form of cradle-to-grave research management information system which has the potential to automate many research support tasks, both pre- and post-award. Although I wasn’t in the session where the future of JeS – the online grant submission system used by UKRI (formerly RCUK) – was discussed, tweets from the session indicate that JeS 2.0 is being seen as a “grant getting service” and a platform to do more than just process applications, which could well include distribution of funding opportunities. Who knows what else it might be able to do? Presumably it can link much better to costing tools and systems, allowing direct transfer of costings and other information to and from university systems.
A really good costing tool might be able to do a lot of things automatically. Staff costs are already relatively straightforward to calculate with the right tools – the complication largely comes from whether funders expect figures to include inflation and cost-of-living/salary increment pay rises or not. Greater uniformity across funders could help, and templates can be set up for individual funders, as many places already do. Non-pay costs are harder, but one could imagine a system that linked to travel and booking websites and calculated the average cost of travel from A to B. Standard costs could be available for computers and for consumables, again linking to suppliers’ catalogues. This could in principle allow the applicant (rather than a research administrator) to do the budget for the grant application, though I wonder how much appetite applicants would have for doing so. I also think there’s a role for the research costing administrator in terms of helping applicants flush out all of the likely costs – not all of which will occur to the PI – as well as dealing with the exceptions that the system doesn’t cover. But even if specialist human involvement is still required, giving people better tools to work smarter and more efficiently – especially if the system is able to populate the costings section of the application form directly without duplication – would reduce the number of humans required.
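To illustrate why staff costs are the relatively easy part, here’s a minimal sketch in Python. The on-costs rate and inflation rate are invented for illustration only; a real tool would pull salaries from HR systems and apply each funder’s own rules about which uplifts to include.

```python
def staff_cost(base_salary: float, fte: float, years: float,
               on_costs_rate: float = 0.30, inflation_rate: float = 0.02) -> float:
    """Estimate the cost of one member of staff on a project.

    base_salary    -- current annual salary
    fte            -- fraction of full time spent on the project
    years          -- duration of involvement in years
    on_costs_rate  -- employer's NI/pension on-costs (illustrative figure)
    inflation_rate -- assumed annual pay inflation (illustrative figure)
    """
    total = 0.0
    salary = base_salary * (1 + on_costs_rate)  # salary plus on-costs
    remaining = years
    while remaining > 0:
        fraction = min(1.0, remaining)  # handle a part-year at the end
        total += salary * fte * fraction
        salary *= 1 + inflation_rate    # uplift for the following year
        remaining -= fraction
    return total

# e.g. a researcher on £40,000, at 0.5 FTE for 3 years
print(f"£{staff_cost(40_000, 0.5, 3):,.0f}")
```

The arithmetic is trivial; the complication, as above, is that whether inflation and increments belong in the figures at all varies from funder to funder.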
While I don’t think we’re there yet, it’s not hard to imagine systems which could put the right funding opportunities in front of the right academics at the right time and in the right format. Research Professional has offered a customisable research funding alerts service for many years now, and there’s potential for research management systems to integrate this data, combine it with what’s known about the interests of individual researchers and research teams, and put that information in front of them automatically.
I say we’re not there yet, because I don’t think the information is arriving in the right format – in a quick and simple summary that allows researchers to make very quick decisions about whether to read on, or move on to the next of the twelvety-hundred-and-six unread emails. I also wonder whether the means of targeting the right academics are sufficiently nuanced. A ‘keywords’ approach might help if we could combine the research interest keyword sets used by funders, research intelligence systems, and academics. But we’d need a really sophisticated set of keywords, covering not just discipline and sub-discipline, but career stage, countries of interest, interdisciplinary grand challenges and problems, etc. Another problem is that I don’t think funders’ call summaries are – in general – particularly well written (though they are getting better), though we could perhaps imagine them being tailored for use in these kinds of systems in the future. A really good research intelligence system could also draw in data about previous bids to the scheme from the institution, data about success rates for previous calls, and access to previously successful applications (though their use is not without its drawbacks).
But even with all this in place, I still think there’s a role for human research development staff in getting opportunities out there. If all we’re doing is forwarding Research Professional emails, then we could and should be replaced. But if we’re adding value through our own analysis of the opportunity, and customising the email for the intended audience, we might be allowed to live. A research intelligence system inevitably just churns out emails that might be well targeted or poorly targeted. A human with detailed knowledge of the research interests, plans, and ambitions of individual researchers or groups can not only target much better, but can make a much more detailed, personalised, and context-sensitive analysis of the advantages and disadvantages of a possible application. I can get excited about a call and tell someone it’s ideal for them, and because of my existing relationship with them, that’ll carry weight… a computer can tell them that it’s got a 94.8% match.
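For what it’s worth, that kind of percentage match is often little more than keyword-set overlap. A minimal sketch, with invented keywords, of what might sit behind such a number:

```python
def match_score(researcher_keywords: set[str], call_keywords: set[str]) -> float:
    """Jaccard similarity between two keyword sets, as a percentage."""
    if not researcher_keywords or not call_keywords:
        return 0.0
    overlap = researcher_keywords & call_keywords
    union = researcher_keywords | call_keywords
    return 100 * len(overlap) / len(union)

profile = {"health economics", "ageing", "mixed methods", "early career"}
call = {"ageing", "health economics", "longitudinal studies"}
print(f"{match_score(profile, call):.1f}% match")  # 40.0% match
```

The number looks precise, but it knows nothing about context, ambition, or timing – which is rather the point.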
It’s rather harder to see automation replacing training researchers in grant writing skills or undertaking lay review of draft grant applications, not least because often the trick with lay review is spotting what’s not there rather than what is. But I’d be intrigued to learn what linguistic analysis tools might be able to do in terms of assessing the required reading level, perhaps making stylistic observations or recommendations, and perhaps flagging up things like the regularity with which certain terms appear in the application relative to the call etc. All this would need interpreting, of course, and even then may not be any use. But it would be interesting to see how things develop.
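As a taste of what even crude linguistic analysis can do, here’s a minimal sketch that flags frequent terms from a call document which never appear in a draft application. The texts, the tiny stopword list, and the thresholds are all invented for illustration; this is a sketch of the idea, not a real tool.

```python
import re
from collections import Counter

def term_frequencies(text: str) -> Counter:
    """Lower-cased word counts, ignoring punctuation."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def missing_call_terms(application: str, call: str, top_n: int = 10) -> list[str]:
    """The call's most frequent substantive terms absent from the application."""
    app_terms = term_frequencies(application)
    call_terms = term_frequencies(call)
    stopwords = {"the", "and", "of", "to", "a", "in", "for", "that", "is", "will", "be"}
    frequent = [t for t, _ in call_terms.most_common()
                if t not in stopwords and len(t) > 3]
    return [t for t in frequent if app_terms[t] == 0][:top_n]

call_text = "Proposals must demonstrate interdisciplinary collaboration and clear impact."
draft_text = "Our project will deliver excellent research outputs."
print(missing_call_terms(draft_text, call_text))
# ['proposals', 'must', 'demonstrate', 'interdisciplinary', 'collaboration', 'clear', 'impact']
```

As with the match score above, the output would still need a human to interpret it – an absent term isn’t necessarily a missing idea.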
Impact is perhaps another area where it’s hard to see humans being replaced. Sophisticated models of impact development probably could and should be turned into tools to help academics identify the key stakeholders, come up with appropriate strategies, and identify potential intermediaries within their own institution. But I think human insight and creativity could still add substantial value here.
Post-award isn’t really my area these days, but I’d imagine that project setup could become much easier and involve fewer pieces of paper and documents flying around. Even better and more intuitive financial tools would help PIs manage their project, but there are still accounting rules and procedures to be interpreted, and again, I think many PIs would prefer someone else to deal with the details.
Overall it’s hard to disagree with Bond’s view that a reduction in overall headcount across research administration and management (along with many other areas of work) is likely, and it’s not hard to imagine that some less research-intensive institutions might decide that the service automated systems can deliver is good enough for them. At more research-intensive institutions, better tools and systems will increase efficiency and enable human staff to work more effectively. I’d imagine that some of this extra capacity will be filled by people doing more, and some of it may lead to a reduction in headcount.
But overall, I’d say – and you can remind me of this when I’m out of a job and emailing you all begging for scraps of consultancy work, or mindlessly entering call details into a database – that I’m probably more excited by the possibilities of automation and better and more powerful tools than I am worried about being replaced by them.
I for one welcome our new research development AI overlords.
The thorny issue of “open access” – which I take to mean the question of how to make the fruits of publicly-funded research freely and openly available to the public – is one that’s way above my pay grade and therefore not one I’ll be resolving in this blog post. Sorry about that. I’ve been following the debates with some interest, though not, I confess, an interest which I’d call “keen” or “close”. No doubt some of the nuances and arguments have escaped me, and so I’ll be going to an internal event in a week or so to catch up. I expect it’ll be similar to this one helpfully written up by Phil Ward over at Fundermentals. Probably the best single overview of the history and arguments about open access is an article in this week’s Times Higher by Paul Jump – well worth a read.
I’ve been wondering about some of the consequences of open access that I haven’t seen discussed anywhere yet. This first post is about the needs of research users, and I’ll be following it up with a post about some consequences of open access for academics that may require more thought.
I wonder if enough consideration is being given to the needs and interests of potential readers and users of all this research which is to be liberated from paywalls and other restrictions. It seems to me that if Joe Public and Joanna Interested-Professional are going to be able to get their mitts on all this research, then this has very serious implications for academic research and academic writing. I’d go as far as to say it’s potentially revolutionary, and may require radical and permanent changes to the culture and practice of academic writing for publication in a number of research fields. I’m writing this to try to find out what thought has been given to this, amidst all the sound and fury about green and gold.
If I were reading an academic paper in a field I was unfamiliar with, I think there are two things I’d struggle with. One would be properly and fully understanding the article in itself, and the second would be understanding the article in the context of the broader literature and the state of knowledge in that area. By way of example, a few years back I was looking into buying a rebounder – a kind of indoor mini-trampoline. Many vendors made much of a study attributed to NASA which they interpreted as making dramatic claims about the efficacy of rebounder exercising compared to other kinds of exercise. Being of a sceptical nature and armed with campus access to academic papers that weren’t open access, I went and had a look myself. At the time, I concluded that these claims weren’t borne out by the study, which was really aimed at helping astronauts recover from spending time in weightlessness. I don’t have access to the article as I’m writing this, so I can’t re-check, but here’s the abstract. I see that this paper is over 30 years old, and that eight people is a very small sample size… so perhaps it has been superseded, and it wasn’t very highly powered. I think the final line of the abstract may back up my recollection (“… a finding that might help identify acceleration parameters needed for the design of remedial procedures to avert deconditioning in persons exposed to weightlessness”).
For the avoidance of doubt, I’m not imputing dishonesty or nefarious intent to rebounder vendors and advocates – I may be wrong in my interpretation, and even if I’m not, I expect this is more likely to be a case of misunderstanding a fairly opaque paper than deliberate distortion. In any case, my own experience with rebounders has been very positive, though I still don’t think they’re a miracle or magic-bullet exercise.
How would open access help me here? Well, obviously it would give me access to the paper. But it won’t help me understand it, won’t help me draw inferences from it, won’t help me place it in the context of the broader literature. Those numbers in that abstract look great, but I don’t have the first clue what they mean. Now granted, with full open access I can carry out my own literature search if I have the time, knowledge and inclination. But it’ll still be difficult for me to compare and contrast and form my own conclusions. And I imagine that it’ll be harder still for others without a university education and a degree of familiarity with academic papers, or who haven’t read Ben Goldacre’s excellent Bad Science.
I worry that open access will only make it easier for people with an agenda (to sell products, or to push a certain political position) to cherry-pick evidence and give it an ill-deserved veneer of respectability by linking to academic papers and presenting (or feigning to present) a summary of their contents and arguments. The intellectually dishonest are already doing this, and open access might make it easier.
I don’t present this as an argument against open access, and I don’t agree with a paternalist elitist view that holds that only those with sufficient letters after their name can be trusted to look at the precious research. Open access will make it easier to debunk the charlatans and the quacks, and that’s a good thing. But perhaps we need to think about how academics write papers from now on – they’re not writing just for each other and for their students, but for ordinary members of the public and/or research users of various kinds who might find (or be referred to) their paper online. Do we need to start thinking about a “lay summary” for each paper to go alongside the abstract, setting out what the conclusions are in clear terms, what it means, and what it doesn’t mean?
What do we do with papers that present evidence for a conclusion that further research demonstrates to be false? In cases of research misconduct, these can be formally withdrawn, but we wouldn’t want to do that with papers that have merely been superseded, not least because they might turn out to be correct after all, and they’re still a valid and important part of the debate. And of course, the current scientific consensus on any particular issue may not be clear, and it’s less clear still how the state of the debate can be impartially communicated to research users.
I’d argue that we need to think about a format or template for an “information for non-academic readers” or something similar. This would set out a lay summary of the research, its limitations, links to key previous studies, details of the publishing journal and evidence of its bona fides. Of course, it’s possible that what would be more useful would be regularly written and re-written evidence briefings on particular topics designed for research users. One source of lay reviews I particularly like is the NHS Behind the Headlines which comments on the accuracy (or otherwise) of media coverage of health research news. It’s nicely written, easily accessible, and isn’t afraid to criticise or praise media coverage when warranted. But even so, as the journals are the original source, some kind of standard boiler plate information section might be in order.
Has there been any discussion of these issues that I’ve missed? This all seems important to me, and I wouldn’t want us to be in a position of finally agreeing what colour our open access ought to be, only to find that next to no thought has been given to potential readers. I’ve talked mainly about health/exercise examples in this entry, but all this could apply just as well to pretty much any other field of research where non-academics might take an interest.
This week I was asked to be involved in a Research Grant application ‘bootcamp’ to talk in particular about the use of social media in pathways to impact plans, and academic blogging in general. I was quick to disclaim expertise in this area – I’ve been blogging for a while now, but I’m not an academic and I’m certainly not an expert on social media. I’m also not sure about this use of the word ‘bootcamp’. We already have ‘workshop’ and ‘surgery’ as workplace-based metaphors for types of activity, and I’m not sure we’re ready for ‘bootcamp’. So unless the event turns out to involve buzzcuts, a ten mile run, and an assault course, I’ll be asking for my money back.
But I thought I’d try to put together a list of resources and examples that I was already aware of in time for the session, and then I wondered about ‘crowdsourcing’ (i.e. lazily asking my readers/twitter followers) some others that I might have missed. Hopefully we’ll then end up with a general list of resources that everyone can use. I’ve pasted some links below, along with a few observations of my own. Please do chip in with your thoughts, experiences, tips, and recommendations for resources.
———————————–
Things I have learnt about using social media
Blogging
You must have a clear idea about your intended audience and what you hope to achieve. Blogging for the sake of it or because it’s flavour of the month or because you think it is expected is unlikely to be sustainable or to achieve the desired results.
A good way to start is to search for people doing a similar thing and contact them asking if you can link to their blog. Everyone likes being linked to, and this is a good way to start conversations. Once established, support others in the same way.
You have to build something of a track record of posts and tweets to be credible as a consistent source of quality content – you’ve got to earn a following, and this takes time, work, and patience. And even then, might not work. Consider a ‘soft launch’ to build your track record, and then a second wave of more intensive effort to get noticed.
Posting quality comments on other people’s blogs, either in their comments section, or in a post on your blog, can be a good way to attract attention.
Illustrate blog posts with a picture (perhaps found through Google Images) – a lot of successful bloggers seem to do this.
Multi-author blogs and/or guest posts are a good way to share the load.
And consequently, offering guest posts or content to established blogs is a way to get noticed.
The underlying technology is now very straightforward. Anyone who is reasonably computer literate will have little trouble learning the technical skills. The editing frame I’m writing this in looks a lot like Word, and I’ve used precisely no programming/HTML stuff – that can all be automated now.
Twitter
The technology of @s and #s is fairly straightforward to pick up – find some relevant/interesting people to follow and you’ll soon get the hang of it, or read one of the guides below.
A good way to reach people is to get “retweets” – essentially, when someone else with a bigger following forwards your message. You can encourage this by addressing posts to them using the @ symbol.
Generally, people seem to retweet when they find something interesting and it suits their message. So the ESRC retweeted my blog post about their regional visit because it said nice things about the visit and linked to their presentation.
Twitter is a weird mix of the personal and the professional. Some accounts are purely professional, others purely personal, but many are a mixture. Some of the usual barriers seem not to apply, or apply only loosely. Care needs to be taken here.
General
Social media is potentially a huge time sink – keep in mind the costs in time versus the benefits gained.
It can be a struggle if you’re naturally shy and attention-seeking doesn’t come easily to you.
Some of the links and choices of examples are more than a little University of Nottingham-centric, but then this was an internal event. I’ve not checked with the authors of the various resources I’ve linked to, and have taken the liberty of assuming that they won’t mind the link and recognition. But I’m happy to remove any on request.
Any resources I’ve missed? Any more thoughts and suggestions? Please comment below….