Sometimes, it feels like applying for funding in science is a form of high-stakes gambling. You put in weeks of work assembling a grant application, making sure that it’s exciting and relevant and contains all the obnoxious buzzwords you’re supposed to use…and in the end, it gets approved or rejected for reasons that seem entirely out of your control.
What if, instead, you were actually gambling?
That’s the philosophy behind a 2016 proposal by Ferric Fang and Arturo Casadevall, recently summarized in an article on Vox by Kelsey Piper. The goal is to cut down on the time scientists waste applying for money from various government organizations (for them, the US’s National Institutes of Health) by making part of the process random. Applications would be reviewed to make sure they met a minimum standard, but past that point every grant would have an equal chance of getting funded. That way scientists wouldn’t spend so much time perfecting grant applications, and could focus on the actual science.
It’s an idea that seems, on its face, a bit too cute. Yes, grant applications are exhausting, but surely you still want some way to prioritize better ideas over worse ones? For all its flaws, one would hope the grant review process at least does that.
Well, maybe not. The Vox piece argues that, at least in medicine, grants are almost random already. Each grant is usually reviewed by multiple experts. Several studies cited in the piece looked at the variability between these experts: do they usually agree, or disagree? Measuring this in a variety of ways, they came to the same conclusion: there is almost no consistency among ratings by different experts. In effect, the NIH appears to already be using a lottery, one in which grants are randomly accepted or rejected depending on who reviews them.
What encourages me about these studies is that there really is a concrete question to ask. You could argue that physics shouldn’t suffer from the same problems as medicine, that grant review is really doing good work in our field. If you want to argue that, you can test it! Look at old reviews by different people, or get researchers to do “mock reviews”, and test statistical measures like inter-rater reliability. If there really is no consistency between reviews then we have a real problem in need of fixing.
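To make that concrete: one common inter-rater reliability statistic is Cohen’s kappa, which measures how often two reviewers agree beyond what chance alone would produce. Here’s a minimal sketch in Python, using made-up accept/reject verdicts (the data and variable names are purely illustrative, not from any real review):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the two raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater assigned labels independently
    # at their own marginal rates.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[label] * counts_b[label] for label in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical accept/reject verdicts from two reviewers on ten proposals.
a = ["accept", "accept", "reject", "accept", "reject",
     "reject", "accept", "reject", "accept", "reject"]
b = ["accept", "reject", "reject", "accept", "accept",
     "reject", "reject", "reject", "accept", "accept"]

print(round(cohens_kappa(a, b), 2))  # 0.2: barely better than chance
```

A kappa of 1 means perfect agreement; 0 means the reviewers might as well be flipping coins. The studies cited in the Vox piece found scores down near that low end.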
I genuinely don’t know what to expect from that kind of study in my field. But the way people talk about grants makes me suspicious. Everyone seems to feel like grant agencies are biased against their sub-field. Grant-writing advice is full of weird circumstantial tips. (“I heard so-and-so is reviewing this year, so don’t mention QCD!”) It could all be true…but it’s also the kind of superstition people come up with when they look for patterns in a random process. If all the grant-writing advice in the world boils down to “bet on red”, we might as well admit which game we’re playing.