The rise of the machines – automation and the future of research development

"I've seen research ideas you people wouldn't believe. Impact plans on fire off the shoulder of Orion. I watched JeS-beams glitter in the dark near the Tannhäuser ResearchGate. All those proposals will be lost in time, like tears...in...rain. Time to revise and resubmit."
“I’ve seen first drafts you people wouldn’t believe. Impact plans on fire off the shoulder of Orion. I watched JeS beams glitter in the dark near the Tannhäuser ResearchGate. All those research proposals will be lost in time, like tears…in…rain. Time to resubmit.”

In the wake of this week’s Association of Research Managers and Administrators’ conference in Birmingham, Research Professional has published an interesting article by Richard Bond, head of research administration at the University of the West of England. The article – From ARMA to avatars: expansion today, automation tomorrow? – speculates about the future of the research management/development profession given the likely advances in automation and artificial intelligence. Each successive ARMA conference is hailed as the largest ever, and ARMA’s membership has grown rapidly over recent years, probably reflecting increasing numbers of research support roles, increasing professionalism, and growing awareness of ARMA and the attractiveness of what it offers in terms of professional development. But might better, smarter computer systems reduce, and perhaps even eliminate, the need for some research development roles?

In many ways, the future is already here. In my darker moments I’ve wondered whether some colleagues might be replicants or Cylons. But many universities already have (or are in the process of getting) some form of cradle-to-grave research management information system with the potential to automate many research support tasks, both pre- and post-award. Although I wasn’t in the session where the future of JeS – the online grant submission system used by RCUK (now UKRI) – was discussed, tweets from the session indicate that JeS 2.0 is being seen as a “grant getting service” and a platform to do more than just process applications, which could well include distribution of funding opportunities. Who knows what else it might be able to do? Presumably it could link much better to costing tools and systems, allowing direct transfer of costings and other information to and from university systems.

A really good costing tool might be able to do a lot of things automatically. Staff costs are already relatively straightforward to calculate with the right tools – the complication largely comes from whether or not funders expect figures to include inflation and cost-of-living/salary increment pay rises. Greater uniformity across funders would help, and templates can be set up for individual funders – in many places this is already done. Non-pay costs are harder, but one could imagine a system that linked to travel booking websites and calculated the average cost of travel from A to B, while standard costs for computers and consumables could be drawn from suppliers’ catalogues. This could in principle allow the applicant (rather than a research administrator) to put together the budget for the grant application, though I wonder how much appetite there is for that among applicants who don’t currently do it. I also think there’s a role for the research costing administrator in helping applicants flush out all of the likely costs – not all of which will occur to the PI – as well as in dealing with the exceptions the system doesn’t cover. But even if specialist human involvement is still required, giving people better tools to work smarter and more efficiently – especially if the system can populate the costings section of the application form directly, without duplication – would reduce the number of humans required.
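
By way of illustration, here’s a minimal sketch of the staff-costs arithmetic described above – all figures, rates, and names are hypothetical, and a real costing tool would pull salary scales, on-costs, and funder rules from institutional systems rather than hard-coding them:

```python
from dataclasses import dataclass

@dataclass
class StaffPost:
    """One staff line on a grant budget (all figures hypothetical)."""
    annual_salary: float   # current salary including on-costs
    fte_fraction: float    # proportion of time costed to the project
    years: int             # duration of the post on the project

def staff_cost(post: StaffPost, inflation: float = 0.0, increment: float = 0.0) -> float:
    """Sum year-by-year costs, optionally compounding inflation and increments.

    Some funders want costs at today's prices; others expect inflation and
    cost-of-living/increment rises built in - hence the two optional rates.
    """
    total, salary = 0.0, post.annual_salary
    for _ in range(post.years):
        total += salary * post.fte_fraction
        salary *= (1 + inflation) * (1 + increment)  # uplift for the next year
    return round(total, 2)

# A three-year, 50% FTE post costed flat, then with 2% inflation and 1.5% increments
post = StaffPost(annual_salary=45_000, fte_fraction=0.5, years=3)
print(staff_cost(post))                                   # 67500.0
print(staff_cost(post, inflation=0.02, increment=0.015))  # 69910.79
```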

While I don’t think we’re there yet, it’s not hard to imagine systems which could put the right funding opportunities in front of the right academics at the right time and in the right format. Research Professional has offered a customisable research funding alerts service for many years now, and there’s potential for research management systems to integrate this data, combine it with what’s known about the interests of individual researchers and research teams, and put that information in front of them automatically.

I say we’re not there yet, because I don’t think the information is arriving in the right format – in a quick and simple summary that allows researchers to make very quick decisions about whether to read on, or move on to the next of the twelvety-hundred-and-six unread emails. I also wonder whether the means of targeting the right academics are sufficiently nuanced. A ‘keywords’ approach might help if we could combine the research interest keyword sets used by funders, research intelligence systems, and academics. But we’d need a really sophisticated set of keywords, covering not just discipline and sub-discipline but career stage, countries of interest, interdisciplinary grand challenges and problems, and so on. Another problem is that funders’ call summaries are not – in general – particularly well written (though they are getting better), though we could perhaps imagine them being tailored for use in these kinds of systems in the future. A really good research intelligence system could also draw in data about previous bids to the scheme from the institution, data about success rates for previous calls, and access to previously successful applications (though their use is not without its drawbacks).
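
To make the keyword idea concrete, here’s a toy sketch of what such targeting might look like – the researchers, keyword sets, and 30% threshold are all invented, and a real system would need far richer, multi-dimensional vocabularies (discipline, career stage, countries, grand challenges) and smarter weighting than bare set overlap:

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two keyword sets: 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical call and researcher profiles
call_keywords = {"ageing", "health inequalities", "mixed methods", "early career"}
researchers = {
    "Dr A": {"ageing", "social care", "mixed methods"},
    "Dr B": {"macroeconomics", "econometrics"},
    "Dr C": {"health inequalities", "early career", "ethnography"},
}

# Alert anyone whose interests overlap the call above some threshold
for name, interests in researchers.items():
    score = jaccard(interests, call_keywords)
    if score >= 0.3:
        print(f"{name}: {score:.0%} match - worth a tailored alert")
```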

But even with all this in place, I still think there’s a role for human research development staff in getting opportunities out there. If all we’re doing is forwarding Research Professional emails, then we could and should be replaced. But if we’re adding value through our own analysis of the opportunity, and customising the email for the intended audience, we might be allowed to live. A research intelligence system inevitably just churns out emails that might be well or poorly targeted. A human with detailed knowledge of the research interests, plans, and ambitions of individual researchers or groups can not only target much better, but can offer a much more detailed, personalised, and context-sensitive analysis of the advantages and disadvantages of a possible application. I can get excited about a call and tell someone it’s ideal for them, and because of my existing relationship with them, that’ll carry weight… a computer can tell them that it’s got a 94.8% match.

It’s rather harder to see automation replacing the training of researchers in grant writing skills, or the lay review of draft grant applications – not least because the trick with lay review is often spotting what’s not there rather than what is. But I’d be intrigued to learn what linguistic analysis tools might be able to do in terms of assessing the required reading level, making stylistic observations or recommendations, and flagging things like how regularly certain terms appear in the application relative to the call. All this would need interpreting, of course, and even then may not be any use. But it would be interesting to see how things develop.
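
As a flavour of what even very crude linguistic analysis might offer, here’s a hedged sketch – the syllable counter is deliberately naive, the six-letter cut-off is arbitrary, and the example texts are invented – estimating a Flesch reading ease score and flagging call terms absent from a draft:

```python
import re

def syllables(word: str) -> int:
    """Crude syllable estimate: count runs of vowels, minimum one per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease: higher = easier; 60-70 is roughly plain English."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (sum(map(syllables, words)) / n)

def call_terms_missing(call: str, draft: str, min_len: int = 6) -> set[str]:
    """Flag longer words that appear in the call but never in the draft."""
    terms = lambda t: {w.lower() for w in re.findall(r"[A-Za-z]+", t) if len(w) >= min_len}
    return terms(call) - terms(draft)

call = "Proposals should demonstrate interdisciplinary collaboration and pathways to impact."
draft = "Our project studies collaboration across disciplines."
print(f"Reading ease of draft: {flesch_reading_ease(draft):.0f}")
print("Call terms absent from draft:", call_terms_missing(call, draft))
```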

Impact is perhaps another area where it’s hard to see humans being replaced. Sophisticated models of impact development probably could and should be turned into tools to help academics identify the key stakeholders, come up with appropriate strategies, and identify potential intermediaries within their own institution. But I think human insight and creativity could still add substantial value here.

Post-award isn’t really my area these days, but I’d imagine that project setup could become much easier and involve fewer pieces of paper and documents flying around. Even better and more intuitive financial tools would help PIs manage their project, but there are still accounting rules and procedures to be interpreted, and again, I think many PIs would prefer someone else to deal with the details.

Overall it’s hard to disagree with Bond’s view that a reduction in overall headcount across research administration and management (along with many other areas of work) is likely, and it’s not hard to imagine that some less research-intensive institutions might decide that the service automated systems can deliver is good enough for them. At more research-intensive institutions, better tools and systems will increase efficiency and enable human staff to work more effectively. I’d imagine that some of this extra capacity will be filled by people doing more, and some of it may lead to a reduction in headcount.

But overall, I’d say – and you can remind me of this when I’m out of a job and emailing you all begging for scraps of consultancy work, or mindlessly entering call details into a database – that I’m probably more excited by the possibilities of automation and better, more powerful tools than I am worried about being replaced by them.

I for one welcome our new research development AI overlords.

How useful is reading examples of successful grant applications?

This article is prompted by a couple of Twitter conversations around a Times Higher Education article which quotes Ross Mounce, founding editor of Research Ideas and Outcomes, who argues for open publication at every stage of the research process, including (successful and unsuccessful) grant applications. The article acknowledges that this is likely to be controversial, but it got a few of us thinking about the value of reading other people’s grant applications to improve one’s own.

I’m asked about this a lot by prospective grant applicants – “do you have any examples of successful applications that you can share?” – and while generally I will supply them if I have access to them, I also add substantial caveats and health warnings about their use.

The first and perhaps most obvious worry is that most schemes change and evolve over time, and what works for one call might not work for another. Even if the application form hasn’t changed substantially, funder priorities – both hard priorities and softer steers – may have changed. And even if neither has changed, competitive pressures and improved grant writing skills may well be raising the bar, and an application that got funded – say – three or four years ago might not get funded today. Not necessarily because the project is weaker, but because the exposition and argument would now need to be stronger. This is particularly the case for impact – it’s hard to imagine that many of the impact sections on RCUK applications written in the early days of impact would pass muster now.

The second, and more serious, worry is that potential applicants take the successful grant application far too seriously and far too literally. I’ve seen smart, sensible, sophisticated people become obsessed with a successful grant application and try to copy everything about it, whether relevant or not, as if there were some mystical secret encoded into the text, and any subtle deviation would prevent the magic from working. Things like… the exact balance of the application, the tables/diagrams used or not used (“but the successful application didn’t have diagrams!”), the referencing system, the font choice, the level of technical detail, the choice and exposition of methods, whether there are critical friends and/or a steering group, the number of Profs on the bid, the amount of RA time, the balance between academic and stakeholder impact.

It’s a bit like a locksmith borrowing someone else’s front door key, making as exact a replica as she can, and then expecting it to open her front door too. Or a bit like taking a recipe that you’ve successfully followed and using it to make a completely different dish by changing the ingredients while keeping the cooking processes the same. Is it a bit like cargo cult thinking? Attempting to replicate an observed success or desired outcome by copying everything around it as closely as possible, without sufficient reflection on cause and effect? It’s certainly generalising inappropriately from a very small sample size (often n=1).

But I think – subject to caveats and health warnings – it can be useful to look at previously successful applications from the same scheme. I think it can sometimes even be useful to look at unsuccessful applications. I’ve changed my thinking on this quite a bit over the last few years – I used to steer people away from them much more strongly. I think they can be useful in the following ways:

  1. Getting a sense of what’s required. It’s one thing seeing a blank application form and list of required annexes and additional documents, it’s another seeing the full beast. This will help potential applicants get a sense of the time and commitment that’s required, and make sensible, informed decisions about their workload and priorities and whether to apply or not.
  2. It also highlights all of the required sections, so no requirement of the application should come as a shock. Increasingly with the impact agenda it’s a case of getting your ducks in a row before you even think about applying, and it’s good to find that out early.
  3. It makes success feel real, and possible, especially if the grant winner is someone the applicant knows, or who works at the same institution. Low success rates can be demoralising, but it helps to know not only that someone, somewhere is successful, but that someone here and close by has been successful.
  4. It does set a benchmark in terms of the state of readiness, detail, thoroughness, and ducks-in-a-row-ness that the attentive potential applicant should aspire to at least equal, if not exceed. Early-draft and early-stage research applications often have larger or smaller pockets of vaguery and are often held together with a generous helping of fudge. Successful applications should show what’s needed in terms of clarity and detail, especially around methods.
  5. Writing skills. Writing grant applications is a very different skill to writing academic papers, which may go some way towards explaining why the Star Wars error in grant writing is so common. So it’s going to be useful to see examples of that skill used successfully… but having said that, I have a few examples in my library of successes which were clearly great ideas, but which were pretty mediocre as examples of how to craft a grant application.
  6. Concrete ideas and inspiration. Perhaps about how to use social media, or ways to engage stakeholders, or about data management, or other kinds of issues, questions, and challenges – if (and only if) they’re also relevant to the new proposal.

So on balance, I think reading funder- and scheme-relevant, recent, and highly rated (even if not successful) funding applications can help prospective applicants… provided that they remember that what they’re reading and drawing inspiration from is a different application from a different team to do different things for different reasons at a different time.

And not a mystical, magical, alchemical formula for funding success.

Getting research funding: the significance of significance

"So tell me, Highlander, what is peer review?"
“I’m Professor Connor Macleod of the Clan Macleod, and this is my research proposal!”

In an excellent recent blog post, Lachlan Smith wrote about the “who cares?” question that potential grant applicants ought to consider, and that research development staff ought to pose to applicants on a regular basis.

Why is this research important, and why should it be funded? And crucially, why should we fund this, rather than that? In a comment on a previous post on this blog, Jo VanEvery quoted some wise words from a Canadian research funding panel member: “it’s not a test, it’s a contest”. In other words, research funding is not an unlimited good like a driving test or a PhD viva, where there’s no limit to how many people can (in principle) succeed. Rather, it’s more like a job interview, qualification for the Olympic Games, or the film Highlander – not everyone can succeed. And sometimes, there can be only one.

I’ve recently been fortunate enough to serve on a funding panel myself, as a patient/public involvement representative for a health services research scheme. Assessing significance in the form of potential benefit for patients and carers is a vitally important part of the scheme, and while I’m limited in what I’m allowed to say about my experience, I don’t think I’m speaking out of turn when I say that significance – and demonstrating that significance – is key.

I think there’s a real danger when writing – and indeed supporting the writing of – research grant applications that the focus gets very narrow, and the process becomes almost inward looking. It becomes about improving the application internally, writing deeply for subject experts, rather than writing broadly for a panel of people with a range of expertise and experience. It almost goes without saying that the proposed project must convince the kinds of subject expert who will typically be asked to review it, but even then there’s no guarantee that reviewers will know as much as the applicant. In fact, it would be odd indeed if there were an application where the reviewers and panel members knew more about the topic than the applicant. I’d probably go as far as to say that if you think the referees and the reviewers know more than you, you probably shouldn’t be applying – though I’m open to persuasion about some early career schemes and some very specific calls on very narrow topics.

So I think it’s important to write broadly, to give background and context, and to seek to convince others of the importance and significance of the research question. To educate and inform and persuade – almost like a briefing. I’m always badgering colleagues for what I call “killer stats” – how big is the problem, how many people does it affect, by how much is it getting worse, how much is it costing the economy, how much is it costing individuals, what difference might a solution to this problem make? If there’s a gap in the literature or in human knowledge, make a case for the importance, or potential importance, of filling that gap.

For blue skies research it’s obviously harder, but even here there is scope for discussing the potential academic significance of the possible findings – academic impact – and what new avenues of research might be opened up, or closed off by a decisive negative finding that would allow effort to be refocused elsewhere. If all research is standing on the shoulders of giants, what could be seen by future researchers standing on the shoulders of your research?

It’s hugely frustrating for reviewers when applicants don’t do this – when they don’t give decision makers the background and information they need to draw informed conclusions about the proposed project. A motivated reviewer with a lighter workload and a role in introducing your proposal may have time to do her own research, but you shouldn’t expect this, and she shouldn’t have to. That’s your job.

It’s worth noting, by the way, that the existence of a gap in the literature is not in itself an argument for filling it, or at least not with large amounts of scarce research funding. There must be a near-infinite number of gaps, such as the one that used to exist about the effect of peanut butter on the rotation of the earth – we need more than the bare fact of a gap’s existence – or the fact that other researchers can be quoted as saying there’s a gap – to persuade.

Oh, and if you do want to claim there’s a gap, please check Google Scholar or similar first – reviewers and panel members (especially introducers) may very well do just that. And from my limited experience of sitting on a funding panel, there’s nothing like an introducer or panel member reeling off a list of studies on a topic where there’s supposedly a gap (and which aren’t referenced in the proposal) to finish off an application’s chances. I’ve not seen enthusiasm or support for a project sucked out of the room so completely and so quickly by any other means.

And sometimes, if there aren’t killer stats or facts and figures, or if a case for significance can’t be made, it may be best either to move on to another idea, or to find a different and cheaper way of addressing the challenge. However good the research idea, a key question before deciding to apply is whether the application is competitive on significance, given the likely competition, the scale of the award, the ambition sought by the funder, and the number of projects to be funded. Given the limits to available research funding, and its increasing concentration into larger grants, there really isn’t much funding for dull-but-worthy work which, taken together, leads to the aggregation of marginal gains to the sum of human knowledge. I think this is a real problem for research, but we are where we are.

Significance may well be the final decider in research funding schemes that are open to a range of research questions. There are many hurdles which must be cleared before this final decider, and while they’re not insignificant, they mainly come down to technical competence and feasibility. Is the methodology not only appropriate, but clearly explained and robustly justified? Does the team have the right mix of expertise? Are the project timescale and deliverables realistic? Are the research questions clearly outlined and consistent throughout? All of these things – and more – are important, but what they do is get you safely through into the final reckoning for funding.

Once all of the flawed or technically unfeasible or muddled or unpersuasive or unclear or non-novel proposals have been knocked out, perhaps at earlier stages, perhaps at the final funding panel stage, what’s left is a battle of significance. To stand the best chance of success, your application needs to convince and even inspire non-expert reviewers to support your project ahead of the competition.

But while this may be the last question, or the final decider between quality projects, it’s one that I’d argue potential grant applicants should consider first of all.

The significance of significance is that if you can’t persuasively demonstrate the significance of your proposed project, your grant application may turn out to be a significant waste of your time.

I’m running a marathon….

“The first rule of Running Club is that you DO NOT stop talking about running.”

It starts with the couch-to-5k running programme. This is a relatively gentle start to talking about running, with typical sessions involving only talking about running for a minute or so before resting for another minute while someone else talks about something else, before you continue to talk about running. A good way to start is to talk about all your new gear – your suspicion that “gait analysis” may have a slightly dodgy scientific basis and that the nice bloke at the shop might not be a fully qualified podiatrist, but that, having said that, your new shoes fit brilliantly and running now feels so much easier on your joints.

Once you’re a couch-to-5k graduate, you get to talk about Parkrun – free, weekly, inclusive 5k runs which take place all over the UK (and Ireland, and a few other places) on Saturday mornings. You can talk about how surprised you were at how supportive everyone was, perhaps about how you felt like a real runner for the first time, and about how they’re open to everyone from serious club runners to couch-to-5k graduates. After you’ve been a few times, you can start talking about “PBs” and how much time you’ve beaten your previous best by, and what your target is now. You can drop “building towards a sub-25” into your conversations.

So once you can run 5k without stopping, you can probably talk about running non-stop for a decent length of time. Attempting a 10k sounds daunting, as you’re doubling the duration of both running and talking about running. But the first 5k/30 minutes is the hardest, and after you’ve done that it’s easier than you’d think to build towards 10k by doing more of what you’ve been doing. By this time you might be (if you’re not already) a member of a local running club, or a lone wolf getting advice off the interweb. And you’ve got a whole lot more terms to sprinkle your running talk with… tempo runs, hill training, the LSR, interval training, fartleks. You might even be talking about being able to run “negative splits” on race day, though you should probably explain that that’s a good thing and not a terrible injury. And if you did join a running club, you’ve got all your new mates to talk about, as well as regional cross country or summer league races.

So things are going great – double it again, add interest, and you’re at the half marathon stage. At this stage, you must seriously advise anyone who’ll listen (and those who won’t) that a half marathon is not a half of anything, and although that’s logically and mathematically false, if you say it in a serious enough tone, no one will pick you up on it. At half marathon stage, you can litter your running talk with pacing strategies and “race day” strategies, carb loading, and about not wanting to be overtaken by a bloke dressed as a gorilla.

If you’re a bloke, you can regale your soon-to-be-former friends with tales of nipple chafing and associated micropore/vaseline dilemmas, and of course there’s runner’s trots (if you don’t know, don’t ask).

And this is the stage I’m at at the moment. I’ve run five half marathons and I’m going to run my first full marathon in Nottingham at the end of September. I can comfortably talk about running for at least three hours, but on race day I’m going to have to stretch that out to between 3:45 and 4:00 to go the full distance. My training is going really well, and I couldn’t be happier at the progress I’m making in turning into a monumental bore. I’m having to spend a full three hours every weekend out on my “long slow run”, talking about “nutrition”, and I’ve even caught myself referring to the question of which snacks to take with me as a “refuelling strategy”. Believe me, all this is turning me into a five star prick, and my only redeeming feature is that I don’t wear lycra for training or racing.

And that’s before we get started on requests for sponsorship. So far in my running career I’ve taken the view that it’s basically my leisure activity, and I shouldn’t ask people to donate their money to a charity of my choice whose work is clearly in my own interest. But this is a marathon… it’s a monumental challenge even for a semi-regular half-marathoner and underwhelming club runner like me, and to be honest I’m scared. So scared that I have to spend ages talking about it, seeking reassurance.

So, for the first and almost certainly last time, I’m asking for sponsorship.

If the excellent work that Crohn’s and Colitis UK do won’t motivate you to sponsor me, and if you’ve not got sufficient value out of my blog over the last few years to warrant even a small donation, then please consider the effect of all this on my ever-more-distant nearest and dearest. Won’t someone think of my colleagues, who dare not ask “how was your weekend?” in my hearing any more?

And if all that doesn’t move you, consider this… at least I’m not a cyclist. Cyclist bores are the worst.

ESRC success rates 2014/2015 – a quick and dirty commentary

"meep meep"
Success rates. Again.

The ESRC has issued its annual report and accounts for the financial year 2014/15, and they don’t make good reading. As predicted by Brian Lingley and Phil Ward back in January on the basis of the figures from the July open call, the success rate is well down – to 13% – from the 25% I commented on last year, the 27% of 2012/13, and the 14% of 2011/12.

Believe it or not, there is a straw-grasping positive way of looking at these figures… of which more later.

This Research Professional article has a nice overview to which I can’t add much, so read it first. Three caveats about these figures, though…

  • They’re for the standard open call research grant scheme, not for all calls/schemes
  • They relate to the financial year, not the academic year
  • It’s very difficult to compare year-on-year due to changes to the scheme rules, including substantial changes to the minimum and maximum funding thresholds.

In previous years I’ve focused on how different academic disciplines have got on, but there’s probably very little to add this year. You can read the figures for yourself (p. 38), but the report only bothers to calculate success rates for the disciplines with the highest numbers of applications – presumably beyond that there’s little statistical significance. I could claim that it’s been a bumper year for Education research – which for years bumped along at the bottom of the league table with Business and Management Studies in terms of success rates, but which this year received 3 awards from 22 applications, tracking the average success rate. Political Science and Socio-Legal Studies did well, as they always tend to do. But that’s generalising from small numbers.

As last year, there is also a table of success rates by institution. In an earlier section on demand management, the report states that the ESRC “are discussing ways of enhancing performance with those HEIs where application volume is high and quality is relatively weak”. But as with last year, it’s hard to see from the raw success rate figures which institutions these might be – though of course detailed institutional profiles showing the final scores for applications might tell a very different story. Last year I picked out Leeds (10 applications/0 awards), Edinburgh (8/1), and Southampton (14/2) as doing poorly, and King’s College (7/3), King Leicester III (9/4), and Oxford (14/6) as doing well – though again, one success more or less changes the picture.

This year, Leeds (8/1) and Edinburgh (6/1) have stats that look much better. Southampton (12/0) doesn’t look to have improved at all, and is one of the worst performers. Of those who did well last year, none did so well this year – King’s were down to 11/1, Leicester to 2/0, and Oxford to 11/2. Along with Southampton, this year’s poor performers were Durham (10/0), UCL (15/1), and Sheffield (11/0) – though all three had respectable enough scores last time. This year’s standout was Cambridge at 10/4. Perhaps someone with more time than me can combine success rates from the last two years – I’m sure someone at the ESRC already has…

So… on the basis of success rates alone, probably only Southampton jumps out as doing consistently poorly. But again, much depends on the quality profile of the applications being submitted – it’s entirely possible that they were very unlucky, and that small numbers mask much more slapdash grant submission behaviour from other institutions. And of course, these figures only relate to the lead institution as far as I know.

It’s worth noting that demand management has worked… after a fashion.

“We remain committed to managing application volume, with the aim of focusing sector-wide efforts on the submission of a fewer number of higher quality proposals with a genuine chance of funding. General progress is positive. Application volume is down by 48 per cent on pre-demand management levels – close to our target of 50 per cent. Quality is improving with the proportion of applications now in the ‘fundable range’ up by 13 per cent on pre-demand management levels, to 42 per cent.” (p. 21)

I remember the target of reducing the numbers of applications received by 50% as being regarded as very ambitious at the time, and even if some of it was achieved by changing scheme rules to increase the minimum value of a grant application and banning resubmissions, it’s still some achievement. Back in October 2011 I argued that the ESRC had started to talk optimistically about meeting that target after researcher sanctions (in some form) had started to look inevitable. And in November 2012 things looked nicely on track.

But reducing the brute number of applications is all very well; if only 42% of applications are within the “fundable range”, that’s still a problem, because it means that a lot of what’s being submitted isn’t good enough. This is where there’s cause for optimism – if less than half of the applications are fundable, then your own chances should be more than double the average success rate, assuming that your application is of “fundable” quality. So there’s your good news. The problem is, no-one applies who doesn’t think their application is fundable.
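
A back-of-the-envelope sketch of that arithmetic, assuming all funded projects come from the fundable pool and that awards are otherwise evenly spread within it:

```python
overall_success_rate = 0.13  # 2014/15 open-call success rate
fundable_fraction = 0.42     # proportion of applications in the 'fundable range'

# If only fundable applications can win, a fundable application's chance is
# the overall rate divided by the fundable fraction.
conditional_rate = overall_success_rate / fundable_fraction
print(f"{conditional_rate:.0%}")  # 31% - well over double the 13% headline rate
```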

Internal peer review/demand management processes are often framed in terms of improving the quality of what gets submitted, but perhaps not often enough as a filtering process. So we refine and we polish and we make 101 incremental improvements… but ultimately you can’t polish a sow’s ear. Or something.

Proper internal filtering is really, really hard to do – sometimes it’s just easier to let through applications from people who won’t be told, and see whether what happens is exactly what you think will happen (it always is). There’s also a fine line (though one I think can be held and defended) between preventing applications perceived as uncompetitive from being submitted and impinging on academic freedom. I don’t think telling someone they can’t submit a crap application is infringing their academic freedom, but any such decisions need to be taken with a great deal of care. There’s always the possibility of suspicion of ulterior motives – be they personal, subject- or methods-based prejudice, or senior people just overstepping the mark and inappropriately imposing their convictions (ideological, methodological etc.) on others. Like the external examiner who insists on “more of me” on the reading list…

The elephant in the room, of course, is the flat cash settlement – which is now really biting – and the fact that there’s nowhere near enough funding to go around for all of the quality social science research that’s badly needed. We can’t do much about that, but we can do something about the quality of the applications we’re submitting and allowing to be submitted.

I wrote something for Research Professional a few years back on how not to do demand management/filtering processes, and I think it still stands up reasonably well and is even quite funny in places (though I say so myself). So I’m going to link to it, as I seem to be linking to a disproportionate amount of my back catalogue in this post.

A combination of the new £350k minimum for the ESRC standard research grants scheme and the latest drop in success rates makes me think it’s worth writing a companion piece to this blog post, about what potential ESRC applicants need to consider before applying, and what I think is expected of a “fundable” application.

Hopefully something for the autumn…. a few other things to write about first.