Initial outcome of limiting NIH apps to a single revision?
January 20, 2011
The latest Peer Review Notes issue [pdf] from the Center for Scientific Review of the US National Institutes of Health reports some initial data on their move to eliminate the second round of revision of grant applications.
Personally, I thought this was very likely only a partial fix to the problem. As I’ve discussed, I was no fan of the way that available rounds of revision led study sections to refuse to get serious about reviewing apps until they had returned from at least one round of review. So I think it is a good idea to try to break this particular time-wasting bit of study section culture.
However, I looked back to the prior event (in 1996) when revisions were first capped at two, down from no limit at all (-01A6 FTW! w00t Perrin, w00t Croce! It takes fortitude to go through the original plus six revisions to finally break through to funding!). Looking at the trends for apps that made it to funding unrevised versus on the first or second round of revision, I was unimpressed that the earlier two-revision cap did anything about the trend for unrevised applications.
An apparent bump in unrevised grants being funded is more likely due to the great NIH Doubling (now UnDoubled) interval, IMO, because the numbers for A1 and A2 revisions kept marching upward year by year. As soon as the budget flattened out, those numbers came back down towards the trendline. As you can see in the following figure, things got even worse for original submissions in the 2006-2008 interval. (N.b. the data in this figure are raw numbers, the data below are percentages.)
The aforementioned issue of Peer Review Notes includes the figure below, along with these observations:
About 10 years ago, 60 percent of the R01 applications NIH funded were A0 applications–those that went through the review process just once. In recent years, only 30 percent of all funded R01 applications were A0s. “Applicants with exceptional applications were frustrated,” said CSR Director Dr. Toni Scarpa. “It was like they had to get a ticket and wait in line for a year or two. Reviewers were frustrated as well . . . reviewing the same applications over and over again with little effect on the final results.”
NIH abolished second resubmissions (A2s) of NIH grant applications so it could fund the best applications sooner. Recent data suggest that the policy is working: NIH now funds more A0s.
Now, as the CSR newsletter notes, the ARRA funding probably had “some effect” on the 2009-2010 numbers. I’d argue it had a big effect, but whatever. The most important thing here is that the A2s are clearing the system. Fewer were being submitted, particularly for the FY2010 funding cycle, so they are dropping out of the pool of funded applications. But notice that the A1 numbers aren’t budging? Or at least not by much. Yet? One hopes. If that number doesn’t come down, I will be far less impressed that CSR has fixed the problem.
I think this shows that study sections are still nitpicking the original submissions and putting them into the queue instead of doing what I think they should: taking a “fish-or-cut-bait” approach of looking at the broad strokes and ignoring minor flaws that basically amount to grantsmithing issues, quibbles over empirical predictions, etc. If this had been fixed, the relative proportion of A1s would be dropping just like the A2s…maybe along a slightly shallower trendline, but dropping.
Of course I also want better information on the even bigger issue here. I assume that PIs who get a halfway decent, but not fundable, score on their A1 applications are going to turn that baby right around as a new submission. In fact I strongly recommend that all of my readers do exactly this. CSR has made a lot of noise about how they are going to stringently weed out thinly disguised A2s coming in as new submissions, but I doubt they can do this on a consistent basis.
We applicants are quite clever when put in a corner, you know.
So there will be “original” submissions coming in that have the benefit of most of their ideas having been previously reviewed. These should not count when the NIH brags about making the system more efficient!
I’d like to see them do some anonymous exit surveying of study sections, asking reviewers to report whether they evaluated any “original” submissions that were actually disguised A2s of applications previously reviewed in that section.