Well, well, well. How timely. We were just discussing the situation in which some ICs of the NIH fund some subset of their grant applications out of the order of initial peer review. And what should I stumble upon (thanks to writedit) but some actual data which bear on the matter.

The NIAID website has an interesting analysis up that compares productivity measures for R01 grants from FY01-FY04. It divides the grants into those that were funded after receiving a score within their operating payline(s) and those that were funded via “Select Pay”, the term for out-of-order, exception-funded proposals, colloquially known as “pickups”.

NIAID describes the approach as follows:

Here’s how we conducted the study.

To measure productivity, we analyzed the number of publications from 2,104 applications that ranked within the payline (the WP cohort) and from 122 select pay applications (the SP cohort) shown in Figure 1.

For each indicator, we show only the middle 80 percent of the distribution (we removed the top and bottom 10 percent to make the figures easier to read). The horizontal line within each box represents the median.

Numbers for total publications, impact factor, and citations were 16,389, 102,786 and 196,117, respectively, for the WP cohort, and 860, 5,407 and 11,158 for the SP cohort.

Each indicator was scored for six years; for example, grants issued in FY 2002 were scored from FY 2002 to FY 2007.

I’m not entirely sure what they are graphing here; in a typical box-and-whiskers plot the box describes the 25th-75th percentiles. The whiskers, however, can represent any number of descriptors. I guess the NIAID is putting the whiskers on the 20th and 80th percentiles…that leaves a lot of room between 20% and 25% and between 75% and 80% if this is the case. [Update: 10th and 90th percentiles, of course. On reflection, I guess I should be less worried about the distance between 10% and 25% and between 75% and 90%.]
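For what it’s worth, the whisker convention NIAID appears to be using (box at the quartiles, whiskers at the 10th and 90th percentiles, i.e., the middle 80 percent) is easy to reproduce. A minimal sketch with numpy, using made-up publication counts rather than NIAID’s actual per-grant data (which they don’t release):

```python
import numpy as np

# Hypothetical publication counts per grant; NOT NIAID's actual data.
rng = np.random.default_rng(0)
pubs = rng.poisson(lam=8, size=500)

# Box edges at the usual quartiles, median line inside the box...
q25, median, q75 = np.percentile(pubs, [25, 50, 75])

# ...but whiskers at the 10th and 90th percentiles, trimming the
# top and bottom 10 percent as NIAID describes.
p10, p90 = np.percentile(pubs, [10, 90])

print(f"box: {q25:.0f}-{q75:.0f}, median {median:.0f}, "
      f"whiskers: {p10:.0f}-{p90:.0f}")
```

(matplotlib will draw exactly this style if you pass `whis=(10, 90)` to `boxplot`, for anyone wanting to recreate the figures.)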

At any rate, the take-home message is “no difference” in number of publications. Same for Journal Impact Factor and the number of actual citations of the papers.

So far as we can take such objective measures of grant productivity as relevant* to a fuzzier concept of “excellent science” or “impactful project”, this confirms what many of us familiar with grant review insist. Within that zone of payline and near-payline scores, there is no way to say that one grant is going to be much better than another. Different, sure. But they are all going to be approximately as productive as each other, considering the groups as a whole.

Thus, the kvetching about how horrible it is that NIH ICs fund some subset of their awards out of the order emerging from peer review is not really well justified. The “performance” of the NIH’s funded extramural research** is unlikely to be negatively affected by doing this.

*Yes, I realize. But c’mon. Better something somewhat objective than continuing to shoot off our half-baked opinions without any evidence, no?

**Extrapolating from NIAID’s data, and with the same caveat about such measures of “performance”.