NIH Grant Lottery

April 18, 2016

Fang and Casadevall have a new piece up that advocates turning NIH grant selection into a modified lottery. There is a lot of the usual dissection of the flaws of the NIH grant funding system here, dressed up as “the Case for,” but they never actually make any specific or compelling argument beyond “it’s broken, here’s our RealSolution”.

we instead suggest a two-stage system in which (i) meritorious applications are identified by peer review and (ii) funding decisions are made on the basis of a computer-generated lottery. The size of the meritorious pool could be adjusted according to the payline. For example, if the payline is 10%, then the size of the meritorious pool might be expected to include the top 20 to 30% of applications identified by peer review.

They envision eliminating the face-to-face discussion used to arrive at the qualified pool of applications:

Critiques would be issued only for grants that are considered nonmeritorious, eliminating the need for face-to-face study section meetings to argue over rankings,

Whoa, back up. Under current NIH review, critiques are not a product of the face-to-face meeting, and producing them is not the “need” that the meeting serves. They are misguided in a very severe and fundamental way about this. Discussion serves, ideally, to calibrate individual reviews, to catch errors, to harmonize disparate opinions, to refine the scoring…but in the majority of cases the written critiques are not changed a whole lot by the process, and the resume of the discussion is a minor outcome.

Still, this is a minor point of my concern with their argument.

Let us turn to the juxtaposition of the following two passages:

New investigators could compete in a separate lottery with a higher payline to ensure that a specific portion of funding is dedicated to this group or could be given increased representation in the regular lottery to improve their chances of funding.


we emphasize that the primary advantage of a modified lottery would be to make the system fairer by eliminating sources of bias. The proposed system should improve research workforce diversity, as any female or underrepresented minority applicant who submits a meritorious application will have an equal chance of being awarded funding.

Huh? If this lottery is going to magically eliminate bias against female or URM applicants, why is it going to fail to eliminate bias against new investigators? I smell a disingenuous appeal to fairness for the traditionally disadvantaged as a cynical ploy to get people on board with their lottery plan. The comment about new investigators shows that they know full well it will not actually address review bias.

Their plan uses a cutoff: 20%, 30%…something. No matter where that cutoff line lies, reviewers will know something about where it is. And they will review/score grants accordingly. As Zerhouni noted, when news of special ESI paylines got around, study sections immediately started giving ESI applications even worse scores. If there is a bias today that pushes the applications of new investigator, woman or URM PIs outside of the fundable range, there will be a bias tomorrow that keeps them disproportionately outside of the Fang/Casadevall lottery pool.

There is a part of their plan that I am really unclear on and it is critical to the intended outcome.

Applications that are not chosen would become eligible for the next drawing in 4 months, but individual researchers would be permitted to enter only one application per drawing, which would reduce the need to revise currently meritorious applications that are not funded and free scientists to do more research instead of rewriting grant applications.

This sounds suspiciously similar to a plan that I advanced some time ago. This post from 2008 was mostly responding to the revision-queuing behavior of study sections.

So this brings me back to my usual proposal of which I am increasingly fond. The ICs should set a “desired” funding target consistent with their historical performance, say 24% of applications, for each Council round. When they do not have enough budget to cover this many applications in a given round, they should roll the applications that missed the cut into the next round. Then starting the next Council round they should apportion some fraction of their grant pickups to the applications from the prior rounds that were sufficiently meritorious from a historical perspective. Perhaps half roll-over and half from the current round of submissions. That way, there would still be some room for really outstanding -01 apps to shoulder their way into funding.

The great part is that essentially nothing would change. The A2 app that is funded is not going to result in scientific conduct that differs in any substantial way from the science that would have resulted from the A1/15%ile app being funded. New apps will not be any more disadvantaged by sharing the funding pie with prior rounds than they currently are facing revision-status-bias at the point of study section review.

What I am unclear on in the Fang/Casadevall proposal is the limit of one application “per drawing”. Is this per Council round per IC? Per study section per Council round per IC? NIH-wide? Would the PI be able to stack up potentially-meritorious apps that go unfunded so that they get considered in series across many successive rounds of lotteries?

These questions address their underlying assumption that a lottery is “fair”. It boils down to the question of whether everyone is equally able to buy the same number of lottery tickets.

The authors also have to allow for some quite reasonable exceptions:

Furthermore, we note that program officers could still use selective pay mechanisms to fund individuals who consistently make the lottery but fail to receive funding or in the unlikely instance that important fields become underfunded due to the vagaries of luck.

So how is this any different from what we have now? Program Officers are already trusted to right the wrongs of the tyranny of peer review. Arguing for this lottery system implies that you think that PO flexibility on exception funding is either insufficient or part of the problem. So why let it back into the scheme?

Next, the authors stumble with a naked assertion

The proposed system would treat new and competing renewal applications in the same manner. Historically, competing applications have enjoyed higher success rates than new applications, for reasons including that these applications are from established investigators with a track record of productivity. However, we find no compelling reason to justify supporting established programs over new programs.

that is highly personal. I find many compelling reasons to justify supporting established programs. And many compelling reasons not to do so preferentially. And many compelling reasons to demand a higher standard, or to ban them entirely. I suspect many of the participants in the NIH system also favor one or the other of the different viewpoints on this issue. What I find to be unconvincing is nakedly asserting this “we find no compelling reason” as if there is not any reasonable discussion space on the issue. There most assuredly is.

Finally, the authors appeal to a historical example that is laughably bad for their argument:

we note that lotteries are already used by society to make difficult decisions. Historically, a lottery was used in the draft for service in the armed forces…If lotteries could be used to select those who served in Vietnam, they can certainly be used to choose proposals for funding.

As anyone who pays even the slightest attention realizes, the Vietnam era selective service lottery in the US was hugely biased and subverted by the better-off and more-powerful to keep their offspring safe. A higher burden was borne by the children of the lower classes, the unconnected and, as if we need to say it, ethnic minorities. Referring to this example may not be the best argument for your case, guys.