Reflections on university research strategy

Science Advisor, Civilization II (Meier, 1996). If you know, you know.

Below are both parts of a short series of articles I wrote for Research Professional back in May 2025.

The first is ‘Research strategy: what the strategists get wrong’. It’s a summary of everything I’d like to have been able to articulate successfully in response to research strategy consultations, often led by people whose understanding of strategy and planning far exceeds their knowledge of research.

The second has the subtitle “Asking the right questions” and is a more positive look at what questions strategists should be asking when considering a new research strategy.

Combining the two into a single article does make it a longer read. But the two are companion pieces and are best read together, rather than published a few weeks apart as they originally were.

——

A version of this article first appeared in Funding Insight in May 2025 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com

Research strategy: what the strategists get wrong

Part one of two asking what a meaningful, positive research strategy might look like

Opening credits card from ‘Battlestar Galactica’ (Moore et al, 2004). This is how I feel whenever anyone talks about a new research strategy.

Cards on the table, folks—I struggle with the idea and purpose of research strategies. Likely it’s that I’ve seldom had the opportunity to breathe the rarified air of strategy development. I’ve seen a lot of research strategies at more than one institution and at various levels (school/faculty/institutional), so I’m not necessarily talking about my own current institution or its current strategies. In fact, I’m almost certainly not.

That’s not to say that I’ve only ever had operational concerns—I’ve been involved in development pipelines, idea generation, staff development, network and community building, and influencing research policies at various levels up to and including the national level. Research strategies, though… I’ve often sensed a disconnect between theory and practice, and aspiration and reality. That’s a nice strategy, but what do we do differently as a result? I’ve tended to be left somewhere between puzzled, underwhelmed and annoyed.

These two articles are probably best understood as thoughts from someone who doesn’t know much about writing strategies, but who does understand something about research funding. Alternatively, these articles can be considered as a list of all the points I wish I’d been able to make with colleagues who are expert in strategy but less so when it comes to research. Too often their confidence has interrupted my expertise or I’ve failed to make my point with sufficient clarity, brevity and/or rigour to have an impact. It’s l’esprit d’escalier, but for strategy consultations.

In this first article, I’m going to talk about common mistakes and misunderstandings. In part two, I’ll be more positive and set out some considerations that need to underpin successful strategies.

Mistaking funding complexity for abundance of opportunity

Research funding is limited and constrained. Just looking at the national picture—so in my case, the UK—the research funding landscape is complex. So much so that it took me two articles to give it the briefest overview I could manage. This complexity is sometimes mistaken for abundance by those who don’t understand it. The reality is that there are far more ideas that are worthy of funding than there is money to support them. There probably aren’t lots of hitherto untapped sources of potential income, and competition is fierce everywhere.

Funding is limited in at least two dimensions. Firstly, there isn’t an infinite range of funders and calls. For some ideas, there may be between zero and one suitable funder. While bottom-up, open-call-within-remit funding opportunities do abound, they’re not infinite in number or budget and usually don’t permit resubmissions.

Secondly, there isn’t an infinite source of funding, since research funding is a zero-sum game. There is no easy money being left on the table. No low-hanging fruit ripe for the plucking. For my institution to win more, others need to lose more. If we were selling chocolate bars, we could grow by increasing our market share, or by increasing the overall size of the market. But we’ve little or no direct control over funder budgets or the percentage of GDP devoted to research. There are two areas of exception, which I’ll discuss in the second article.

Research funding is lumpy. There are few medium-sized research grants. This is true regardless of discipline, and regardless of how much money counts as medium. Typically, there are more small grants, fewer large grants, and occasionally, massive grants. The more money on offer, the fewer the opportunities. I don’t want to commit to saying that it’s an exponential decrease, but I’m not saying that it’s not.

Not understanding this leads to another frequent mistake, which is to wrongly bemoan rather than celebrate the number of smaller grants held. Yes, smaller grants are expensive to bid for and to administer. But it’s a mistake to think that there’s a simple way to move up the value chain to medium-sized (if they even exist) and large grants. Just because an idea and an applicant are competitive for a smaller grant doesn’t mean that they would have been competitive for a larger one. Or even that the opportunity existed for a larger one.

I’m not saying that no funded researcher has ever sold themselves short or been under-ambitious, but I’m suspicious of sweeping, evidence-free assumptions that this is the case on a systematic basis. I’ve yet to see any evidence that lots of successful small grant ideas could have been retooled (successfully) for larger grants and/or on better terms. Anyone who believes this should be challenged to produce some evidence and examples.

Unrealistic, confused, contradictory or inconsistent targets

There are two research strategy mistakes that leave me with my head in my hands. The first is prophesying wildly ambitious increases in income, without any obvious reason for such optimism or any accompanying investment. Maybe this isn’t quite Underpants Gnome territory, but it’s not far off.

The second is any attempt to increase research income and research margin at the same time. Or flip-flopping between them as the preferred measure of success. Research income is what it sounds like, while research margin is about how much of the costs of research are covered. So, funders that pay investigator costs and/or contributions towards overheads lead to a better margin, compared to a charity funder, for example, who might only fund the direct costs.

If you want to increase research income, you’ll probably need to diversify funding sources. And that normally means diversifying from national funders and focusing more on research charities and quasi-charities. But if you do that successfully, you bring in more research funding on worse terms. Too often, targets are set to increase research income… and then criticism is made of the resulting fall in research margin. Pick one!

Pretending that all researchers are equally fundable

Something obviously true but rarely acknowledged is that not all researchers are equal in terms of their potential for generating large-scale, novel and fundable research ideas. I don’t subscribe to the “great man” view of science, I’m fully supportive of the team science agenda and yet… It does need a certain sort of research leader to be able to credibly bid for seven or eight-figure sums of research funding.

Strategy specialists who are newish to research funding are often surprised to find that a large percentage of research income comes from a small percentage of researchers. This is a feature, not a bug. It’s probably true at your institution, and it’s true nationally. A lot of funding is recorded in the principal investigator’s name, and there are only very few major awards to be won. The reality is that very few researchers have the track record to lead such investments anyway.

Being realistic, how many do you really have at your institution who could compete—today—for massive money?

Over-interpreting research funding data without context

The margins between success and failure can be razor thin, and the consequences are huge. Like offside decisions in football, extremely tight decisions going for or against you can have consequences out of all proportion to the margin of the decision.

Ever supported a research grant proposal for tens of millions, got to the final two, and lost out? I have. Congratulations-through-gritted-teeth to the successful team, but their proposal was not tens-of-millions better than ours, as I’ve no doubt they’d acknowledge.

It follows from all of this—margins of success and failure, the lack of medium sized grants—that income targets at individual researcher level are obviously a terrible idea, right?

So many things can skew year-on-year figures for research income: one big success, one big grant coming to an end, one themed call in an institutional sweet spot, one big-name staff arrival or departure… Unlike student recruitment, where we’re looking at the net result of a lot of individual decisions, with research income we’re looking at the net result of a far smaller number. It’s a much smaller sample size, and therefore much more prone to distortion.

Good data is useful, but without context, it can be worse than useless. Research funding data is a guide to what questions to ask, but it can’t answer them alone.


Research strategy: asking the right questions

Part two of two on building a meaningful, positive research funding strategy

Col. Kurtz (Marlon Brando) and Capt. Willard (Martin Sheen) in ‘Apocalypse Now’ (Coppola, 1979). Definitely not talking about research strategy.

I mostly went negative in my first article on research strategy, listing key mistakes that strategists inexperienced in research funding tend to make when setting about their work. Having got all that out the way, I’m going to attempt to be more positive and make some suggestions about sensible ways to approach strategy development.

No blank slates

Unless you’re strategizing for an academic unit that does negligible research or attracts negligible research funding and wants to change that, you’re not starting with a clean slate. It’s important to remember that.

My limited experience of strategy and change/transformation people is that they tend to start from the assumption that the status quo must, axiomatically, be awful, and that change must be an improvement. They’re right in the sense that if you do the same things in the same way with the same resources, you’ll likely get the same results. And if those results aren’t satisfactory, then something does indeed need to change.

But they’re wrong if they assume that just any change is axiomatically for the better (please see this wonderful Alex Norris cartoon). A need for change should be determined by evidence, not assumed from the start. Evolution should be the default option, unless there is very strong evidence that revolution is needed.

Furthermore, a failure to acknowledge and/or respect existing effort and expertise will alienate potential allies and undermine the credibility of any new strategy. Researchers have a lot of intrinsic motivation and incentives to bring in funding and to minimise wasted effort, and those supporting them are similarly motivated. People are already doing their best, not idling around waiting for strategic direction from visionary leaders.

I could say things about what evidence for change should be considered here, but it would probably take me a whole article to do so as it’s a complex topic. For now, I’ll just briefly note Goodhart’s law, which states that a measure that becomes a target ceases to be a good measure.

So with the blank slate approach rejected, and the discussion of statistics and indicators circumvented, what questions should be considered when aiming to create a new research strategy? Here’s a non-exhaustive list. Some require investment, but the good news is that some of them are relatively cheap.

Do we have a sensible talent strategy?

The best way to improve a professional football team is to recruit, develop, and retain better players. Yet many research strategies are silent on this, making the mistake I mentioned in the previous article of assuming that all researchers are equally fundable. If you want more research income, you need more researchers who are competitive (or have the potential to be competitive) in winning it. And you need to keep them if you possibly can.

Are funding opportunities being shared in a timely and accessible way?

I’ve written a two-part series on how to summarise a call, and how to disseminate opportunities. You can’t apply for what you don’t know about or don’t understand, and you’ll be less competitive if you get information late.

Are all of our proposals competitive?

Do they all have a realistic chance of success? Is support being provided to researchers (by research development managers or senior colleagues) to sense-check their ideas and look at eligibility, fit, and competitiveness at a very early stage? Do we have any misaligned incentives (perhaps around promotion) which inadvertently encourage applying more, regardless of quality or chances of success? Spamming funders with uncompetitive proposals is an easy way to look busy, but it’s a profoundly damaging waste of time and effort for everyone.

I should add that this doesn’t necessarily mean shying away from schemes with low success rates (though it might) or not submitting riskier, polarising proposals that a panel might hate. What it does mean is being aware of these issues and taking a clear-eyed decision about whether to apply or not.

Are we training our researchers in grant writing?

Grant writing is very different from academic writing. It’s possible to be an outstanding writer of academic papers and a terrible writer of grant proposals. Do researchers know how funding panels work? It’s particularly hard for early career researchers, because they’re suddenly writing for a very different audience, and those who most need the training often don’t know they need it. Many institutions can and do run their own training, but there’s also no shortage of external trainers (including me).

Is there a culture of openness and sharing?

Do researchers seek (and get) supportive and developmental feedback, either informally or through formal peer review processes, or both? Is there an acceptance that not every application succeeds, and do we celebrate the effort that goes into competitive applications? Some academic units are now running ‘festivals of failure’ (or similar) to normalise and celebrate and commiserate knock-backs to papers, funding proposals, or job applications.

How are support mechanisms perceived?

Is support for researchers experienced as helpful, developmental, and flexible? Or is it experienced as gatekeeping—a series of hoops to jump through? To put it another way, is the support prized and sought out, or is it evaded and engaged with only reluctantly, and sometimes bypassed? This is usually the litmus test.

If the former, great. If the latter, you have a problem. The problem might be in the visibility, tone, capacity, and flexibility of what’s on offer. But the problem might be in researcher perceptions themselves—do researchers understand what’s on offer, and why it could be valuable to them? Or it could be both problems at once.

Are we planting seeds and priming pumps?

There are a lot of small and relatively inexpensive initiatives that can serve to plant seeds for future success, though we have to accept that they won’t all grow. Inviting current (and potential) collaborators to seminars must be done strategically, and should be more than just inviting our mates to come and visit. Also consider seedcorn funding for data generation or other preliminary activities, and interdisciplinary events and networks. Such initiatives take time, money and effort. Non-pay costs are under intense pressure at the moment, but cuts may be a false economy.

Are we maximising time for research?

My sense is that in the conflicting demands of teaching and administration, research time is the most easily squeezed. That’s time for scholarship, for idea generation, for grant writing, for paper writing, for attending events. We should be waging constant marginal-gains style wars on inefficiencies which are leaching away our research time, not to mention damaging work-life balance.

Are we looking at opportunities outside traditional research funders?

In the first part, I said that there were two areas which are exceptions to the general rule that competition for research funding is a zero-sum game, in which others must lose more if you are to win more. The areas I had in mind are direct funding from industry and philanthropic donations from alumni or others. These are exceptions because they aren’t subject to a formal open peer-review process, but rather to private negotiation. These avenues are definitely worth exploring, even if neither is simple or straightforward. A serious expansion of either or both will almost certainly involve the recruitment of specialists to support the process.

Is there a plan for the future?

If you’re expecting to submit a lot more proposals, pre-award support needs scaling up as proposal numbers grow. If you’re winning more, your post-award capacity likewise needs to expand. I remember a strategy person looking astonished when I asked about plans for matching capacity growth, but this is a vital component of sustainable growth.

Similarly, are you going to review the strategy and process? Or are you just going to launch it, forget about it, and then come back in a few years’ time with another one, while decrying the failure of the previous one?
