The art of the sift

A version of this article first appeared in Funding Insight in February 2022 and is reproduced with kind permission of Research Professional. For more articles like this, visit Research Professional.

How to select bids when funders restrict the number that each university can submit

Nicolas Poussin (1594–1665), The Judgment of Solomon (1649), oil on canvas, 101 x 150 cm, Musée du Louvre, Paris. Wikimedia Commons.

One of the most awkward challenges in research development is responding to a ‘restricted’ funding call that only permits a limited number of applications per university. This requires an internal selection process, and I’m going to share some of the things I do when setting one up. I don’t have all the answers, and I’d be interested to hear what others do, via Twitter, email, or the comments.

This article refers primarily to funding schemes with a hard limit on application numbers. In the UK, that includes the Leverhulme Trust’s major calls and the Academy of Medical Sciences’ Springboard Awards. Some of the suggestions may also be relevant to panels for schemes with a ‘soft’ limit. These typically don’t set a formal cap on application numbers, but require universities to have a process to manage demand, submit only their most competitive applications, and not support the others. There are good arguments for saying that we should be doing this sift anyway, if only to prevent our researchers wasting time and effort.

Don’t shy away from considering radical options. The rest of this article assumes that we want a bottom-up, open-call approach to selecting the university’s candidates. But a university could adopt a top-down approach and pick winners instead: no burdensome selection processes, just choose your candidate and give them a clear run. I’m also going to assume that we want to participate in schemes that require a complex sift process at all. We could decide that some smaller and/or low-success-rate schemes aren’t worth our time and effort, or we could decide our candidate for lower-value awards using a lottery.

But I think a bottom-up approach is more likely to be successful in finding strong candidates and proposals, especially for major calls, and is a better fit for an open, democratic university with a positive research culture. So that’s the approach I’m going with.

Start the process before the call is out. However much time the funder allows between official call launch and deadline, it’s probably not enough. With a restricted call, you need: time for applicants to hone their ideas; time to run a selection process; and time to support the development of the chosen proposal.

Offer informal advice to applicants. Avoid wasted effort by having someone—perhaps a research development manager in a central team—serve as the call contact point. Anyone who’s interested should be able to talk through their ideas and get feedback on fit-to-scheme and internal competitiveness. Encouraging those who need encouraging, and redirecting the energies of those whose research doesn’t fit this opportunity, is time well spent.

Don’t decide too late. Most applicants, perfectly reasonably, will only put a finite amount of effort into their proposals in advance of the internal selection process. Make sure there’s plenty of time for onward development and further rounds of review and feedback after your candidate has been selected. If you publicise the call and the likely timeline, and offer informal advice in advance of the formal launch, you give yourself licence to set an earlier deadline and therefore reach an earlier decision. After all, potential applicants have been given plenty of notice, and should be working up their ideas in preparation.

Mirror the call… in miniature. The internal review documentation should resemble a cut-down version of the funder’s application form. Pick out the key sections, and ask applicants to submit only those that will help an internal panel reach a decision. If there are long ‘case for support’ sections, reduce the page limit and concentrate on key elements. If there are sections on career history and publications, ask for a short, targeted CV. Only interview if the funder does so as part of its selection process. Ask for an outline costing to ensure the bid fits the budget limits.

Select your panel members. It’s important to do this early so you can get a date in the diary. Try to get senior colleagues who are broadly representative of the disciplines that are likely to apply. Look for previous award winners, and those with experience of the funder. The aim is to have a balanced panel that has the right mix of expertise and will be regarded as fair by applicants.

It’s a good idea to look to your senior university research leadership for panel members. This sort of service ought to be part of their role. But I’ve also seen early/mid-career researchers make outstanding review panel contributions. If it’s a new responsibility, they tend to take it very seriously and it’s a good developmental opportunity to see how a review panel works from the inside.

Transparency about panel membership is an interesting issue. I’ve found that potential applicants want to satisfy themselves that their research will get a fair hearing. I’ll normally describe panel members in terms of job/role titles: a senior researcher in School X, or the director of research for Faculty Y, for example, without naming names. It’s not fully transparent, but it’s not opaque either. Applicants want a sense of what kinds of reviewers will be involved and what roles they’ll play, and that’s often enough to satisfy their inherent suspicion of any ‘central’ panel outside their own field.

Prime your panel. I’ve run short briefing sessions for panel members in advance of them having sight of proposals. I run through the funder and the scheme and what they’re looking for so that we’re all looking for the same things. I usually suggest one criterion: which proposal is most likely to be successful, given what we know of the funder and the scheme. This primes members to think in terms of competitiveness, not just their own personal view of merit.

Similarly, reviewers should be told to assume that fixable faults are fixed and that they are reviewing the best possible iteration of each proposal, given the limited time available for further development before the deadline. Be clear to panel members about their role, which should be to bring their given expertise to the table, not to be an advocate at the panel for School X.

Structure your panel meetings like funder panel meetings. Don’t leave your selection panel to talk through proposals one at a time: they’ll spend ages on the first few and then run out of time. I ask panel members to read and rank each application independently in advance and send me the rankings to collate. If you use Microsoft Forms to collect responses, it’ll do this automatically and generate neat infographics. It defaults to a linear weighting system for rankings (i.e. 3 points for first place, 2 for second, 1 for third), though, so you might prefer to give greater weight to higher-ranked proposals.
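If you collate the rankings yourself rather than relying on a survey tool, the arithmetic is simple enough to script. Here's a minimal sketch in Python: the proposal names and ballots are hypothetical, and the `weights` parameter lets you switch from linear (Borda-style) weighting to a steeper scheme that rewards first places more heavily.

```python
from collections import defaultdict

def collate_rankings(ballots, weights=None):
    """Collate independent reviewer rankings into points totals.

    ballots: one ranking per reviewer, each an ordered list of proposal
             names (first = most preferred).
    weights: points awarded per position. Defaults to linear weighting:
             n points for first place, n-1 for second, and so on.
    Returns proposals sorted by total points, highest first.
    """
    n = max(len(ballot) for ballot in ballots)
    if weights is None:
        weights = [n - i for i in range(n)]  # e.g. [3, 2, 1] for n = 3
    totals = defaultdict(int)
    for ballot in ballots:
        for position, proposal in enumerate(ballot):
            totals[proposal] += weights[position]
    # Highest total first: a natural agenda order for panel discussion.
    return sorted(totals.items(), key=lambda item: -item[1])

# Hypothetical ballots from three reviewers.
ballots = [
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "A", "C"],
]

print(collate_rankings(ballots))                     # linear weighting
print(collate_rankings(ballots, weights=[5, 2, 1]))  # steeper weighting
```

With the steeper weights, a proposal that two of three reviewers rank first pulls further ahead of one with consistent second places, which is the effect you'd want if first preferences should count for more than a linear scale allows.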

I share these rankings at the start of the meeting, putting the proposals that need most discussion at the top of the agenda. If we can only submit one proposal, there’s no point spending ages discussing the proposal that nobody ranked higher than third. If we can submit three or four bids and have unanimity about the best two, we should focus on the three others jostling for the remaining slots.

Independent rankings give equal weight to the views of all reviewers. They put all the cards on the table and show the mood of the meeting straight away. They also make it harder for any one panel member to aggressively champion their preferred proposal when a clear ranking shows a lack of broader support. Of course, reviewers may change their minds after discussion, but the initial rankings show a clear direction of travel, which focuses minds and discussion.

Pre-nominate one reviewer to ‘introduce’ each proposal and another to ‘second’. They’re responsible for leading on feedback and summarising the proposal for the panel.

Take extra care with feedback to unsuccessful applicants. Because this is an internal process, feedback is extra sensitive: institutionally, you have an ongoing duty to support these researchers. The first line of feedback should simply be that the panel thought other proposals had a better chance of success, and that this has as much to do with the proposals that were chosen as with those that were not. Appropriately edited reviewer comments should be shared, but framed as ‘this is what reviewers thought of your application’, not ‘this is why we didn’t pick your bid’. The aim should be to support development and, where appropriate, identify other funding opportunities.

Meet successful applicants. I’d usually send written comments and then try to arrange a meeting with a subset of the review panel. The aim of that meeting is to elaborate on the written feedback and to give the applicant(s) an opportunity to respond, ask questions, and further develop their proposal.

This is a long list, I realise. But take comfort in the fact that once you have concluded this process and your decision is made, you have nothing more to do. That is, apart from all the usual work that goes into supporting, developing and submitting a competitive grant application.
