Some application and assessment processes are for limited goods, and some are for unlimited goods, and it’s important to understand the difference. PhD vivas and driving tests are assessments for unlimited goods – there’s no limit on how many PhDs or driving licences can be issued. In principle, everyone could have one if they met the requirements. You’re not going to fail your driving test because there are better drivers than you. Other processes are for limited goods – there is (usually) only one job vacancy that you’re all competing for, only so many papers that a top journal can accept, and only so much grant money available.
You’d think this was a fairly obvious point to make. But when I talk to researchers who have been unsuccessful with a particular application, there’s sometimes more than a hint of hurt in their voices as they discuss it, and they talk in terms of their research being rejected, or not being judged good enough. They end up taking it rather personally. And given the amount of time and effort that researchers must put into their applications, that’s not surprising.
It reminds me of an unsuccessful job applicant whose opening gambit at a feedback meeting was to ask me why I didn’t think that she was good enough to do the job. Well, my answer was that I was very confident that she could do the job, it’s just that there was someone more qualified and only one post to fill. In this case, the unsuccessful applicant was simply unlucky – an exceptional applicant was offered the job, and nothing she could have said or done (short of assassination) would have made much difference. While I couldn’t give the applicant the job she wanted or make the disappointment go away, I could at least pass on the panel’s unanimous verdict on her appointability. My impression was that this restored some lost confidence, and did something to salve the hurt and disappointment. You did the best that you could. With better luck you’ll get the next one.
Of course, with grant applications, the chances are that you won’t get to speak to the chair of the panel who can explain the decision. You’ll get a letter with the decision and something about how oversubscribed the scheme was and how hard the decisions were, which may or may not be true. Your application might have missed out by a fraction, or been one of the first into the discard pile.
Some funders, like the ESRC, will pass on anonymised referees’ comments, but oddly, this isn’t always constructive and can even damage confidence in the quality of the peer review process. In my experience, every batch of referees’ comments will contain at least one weird, wrong-headed, careless, or downright bizarre comment, and sometimes several. Perhaps a claim about the current state of knowledge that’s just plain wrong, a misunderstanding that can only come from not reading the application properly, or a criticism on the spurious grounds that it’s not the project the referee would have done themselves. These apples are fine as far as they go, but they should really taste of oranges. I like oranges.
Don’t get me wrong – most referees’ reports that I see are careful, conscientious, and insightful, but it’s those misconceived criticisms that unsuccessful applicants will remember, even ahead of the valid ones. And sometimes they will conclude that it’s those wrong criticisms that are the reason for not getting funded. Everything else was positive, so that one negative review must be the reason, yes? Well, maybe not. It’s also possible that that bizarre comment was discounted by the panel too, and the reason your project wasn’t funded was simply that the money ran out before they reached it. But we don’t know. I really, really, really want to believe that that’s the case when referees write that a project is “too expensive” without explaining how or why. I hope the panel read our carefully constructed budget and our detailed justification for resources and treated that comment with the fECing contempt that it deserves.
Fortunately, the ESRC have announced changes to procedures which not only allow a right of reply to referees’ comments, but also communicate the final grade awarded. This should give a much stronger indication of whether an application was a near miss or miles off. Of course, the news that an application was miles off the required standard may come gift-wrapped with sanctions. So it’s not all good news.
But this is where we should be heading with feedback. Funders shouldn’t be shy about saying that an application was a no-hoper, and they should be giving as much detail as possible. Not so long ago, I was copied into a lovely rejection letter, if there’s any such thing. It passed on comments, included some platitudes, but also told the applicant what the overall ranking was (very close, but no cigar) and how many applications there were (many more than the team expected). At least one of the comments was surprising, but we knew the application had been taken seriously and given a thorough review. And that’s something…
So… in conclusion… just because your project wasn’t funded doesn’t (necessarily) mean that it wasn’t fundable. And don’t take it personally. It’s not personal. Just the business of research funding.
I was once at the same table as the then vice-president of programs for one of the Canadian councils. He described your initial point rather succinctly: “it’s not a test, it’s a contest”. I often use an Olympic metaphor (something that might be apt in England this year). Every single competitor at the Olympics is a world-class athlete, but only three in each event go home with a medal.
I agree that knowing your score or your ranking can be helpful. SSHRC offers this information, and while it is still disappointing to be the highest-ranked unfunded project, it is a pretty clear indication that the problem isn’t you. I think ranking is probably better information than a score, though, because of the competitive element – there is no score that guarantees funding in a competitive environment. Knowing whether you came in 4th or 40th is always useful.
I like that test-contest distinction – that’s it exactly, and I might well be flattering that particular line through imitation in the future.
I guess my only concern about ranking is the extra burden it could place on the reviewing panel if it ends up having to discuss the minutiae of rankings, and whether one largely hopeless application is better than another. But I guess it depends – I think some EU funding schemes are formally ‘marked’, which generates a league table and a cut-off line for funding, and I think they already release rankings. But for those who assess more holistically it’s a bit more of a challenge.
The SSHRC process works through rankings. There is one deadline. Each committee marks and ranks the applications. Then the top x applications are funded (depending on how much money is available). The bottom 35% are considered unfundable and don’t get told their specific ranking. Frankly, given that only the top 30% get funding, if you are that far down the list you have a lot of work to do to get into the fundable range.
There is currently a judicial review in process of the post-doctoral fellowships competition, precisely because there wasn’t enough detail in the comments. One specific issue in that case (which I only know about from what has been written publicly by Johannes Wheeldon, who brought it; there’s a recent piece in the HuffPo) is that the published scoring weightings for the different evaluation criteria are not used in the comments process.
It is also important to understand that bureaucratic institutions like funding councils only want appeals in extreme cases. This can affect how much detail is provided in comments. No one wants a situation where people are routinely challenging decisions in what is really a tough race.
My Olympic metaphor often extends to the fact that the differences between medalists and the next few finishers are often measured in hundredths of a second. When we’re timing a race, we have pretty fancy, accurate timepieces to make that determination. Judging grant applications is not that precise, but the differences between the funded applications and the next several in the pack are similarly small.
Thanks for a brilliant post, Adam.
I love the driving test / job application (test / contest) distinction.
Decades ago, the Australian Research Grants Committee visited all States and met with almost all applicants. This was run just like a job interview. At the appointed time, applicants would meet with a panel of three or four committee members, who would pose questions (often drawn from assessments) about their application. Most applicants were so nervous that I was surprised they could speak coherently, much less formulate sensible answers to the committee’s questions.
These days, the Australian Research Council passes on referees’ comments, with a right of reply, as part of the assessment process. This is an excellent way to do things, although it does lead to some odd reactions from applicants.
I find that you need to work closely with your applicants to help them to see, and understand, the criticisms. Some people can only see the positive comments, and brush aside anything negative. Others fixate on anything negative, particularly comments that are ill-informed or wrong, as evidence that the assessors all hate them/their work/their university/the universe. Almost everybody feels that they know who the assessors are from the text, the tone, or teasing clues in the assessments.
As a result, responses need to be drafted as carefully as applications. First drafts are often used to release some tension and are then set aside, where they spontaneously combust from the deadly mixture of vitriol and pleading contained within. Second drafts are often more reasoned and sensible. They are not constructed from letters cut out of newspapers, for instance.
It does shed a bit of light on the interior of the black box, though. At the very least, applicants come to understand (in a very real way) that their application was sent to other people who read it and responded.