Your Grant In Review: “More than adequately revised…”. (Updated)
May 6, 2008
The NIH grant applications which will be reviewed Jun/Jul are going out to reviewers right about now. Poking through my pile of assignments I find that I have three R01 applications at the A2 stage (the second and “final” amendment of a brand new proposal). Looking over the list of application numbers for the entire panel this round, I see that we have about 15% of our applications on the A2 revision.
Oi. What a waste of everyone’s time. I anticipate many reviewers will be incorporating the usual smackdown-of-Program language. “This more than adequately revised application….”
I am not a fan of the NIH grant revision process, as readers will have noticed. Naturally my distaste is tied to the current era of tight budgets and expanding numbers of applications, but I think the principles generalize. My main problem is that review panels use the revision process as a way of triaging their workload. This has nothing to do with selecting the most meritorious applications for award and everything to do with making a difficult process easier.
The bias for revised applications is supported by funding data, round-after-round outcomes in my section, as well as supporting anecdotes from my colleagues who review. Start with CRISP and search for new applications (1R01% is a handy wildcard) gated by your favorite study section or three. Or gate by your usual funding IC. What you will quickly notice is that only about 10% of applications reviewed in normal CSR sections get funded without being revised. (If you do an IC search, results will be contaminated by SEPs and in-house study sections. There is no easy way to discriminate RFA-funded proposals which are “unrevised”. I use quotes because there are cases in which an RFA may be reissued a year later such that some of the responding applications are revisions of previously-reviewed applications.)

If you care to step back Fiscal Year by Fiscal Year in the CRISP search, you will notice the relative proportions of grants being funded at the unrevised (-01), A1 and A2 stages have trended toward more revising in concert with the budget flattening. I provide an example for a single study section here, but since the overall numbers are low (~20-30 grants funded each FY), you should really run the numbers for several study sections to get a feeling for broad trends. Another thing you will notice if you review a series of closely related study sections is that the relative “preference” for giving high scores to -01, A1 and A2 applications varies somewhat between sections. This analysis is perhaps unsurprising, but we should be very clear that this does not reflect some change in the merit or value of revising applications; this is putting good applications in a holding pattern.
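For readers who want to run this tally themselves on a CRISP export, the counting step can be sketched as below. The amendment suffix on a standard NIH application number (e.g. `1R01DA012345-01A2`) encodes the revision status; the grant numbers here are made up for illustration, not real awards.

```python
import re
from collections import Counter

# NIH grant numbers end in a two-digit support year and an optional
# amendment suffix: "...-01" (unrevised), "...-01A1", "...-01A2".
AMENDMENT = re.compile(r"-\d{2}(A\d)?$")

def revision_status(grant_number):
    """Return 'A1', 'A2', etc., or 'unrevised' for a -01 application."""
    m = AMENDMENT.search(grant_number)
    if m is None:
        raise ValueError(f"unrecognized grant number: {grant_number}")
    return m.group(1) or "unrevised"

# Invented example numbers standing in for one study section's
# funded R01s in a single fiscal year.
funded = [
    "1R01DA012345-01",
    "1R01DA023456-01A1",
    "1R01DA034567-01A2",
    "1R01DA045678-01A2",
]

counts = Counter(revision_status(g) for g in funded)
total = len(funded)
for status, n in sorted(counts.items()):
    print(f"{status}: {n}/{total} ({100 * n / total:.0f}%)")
```

Run this per study section and per fiscal year and the -01 / A1 / A2 proportions the post describes fall right out.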
[Update 05/07/08: I notice that writedit points to a powerpoint from the Great Zerhouni which includes (slide #57) a graph much like my example! Also note slide #54 which includes the GZ’s little brag about New Investigator awards which were headed for the toilet in FY2006 and then they restored something like historical norms for FY2007. Which they accomplished with Program pickups! What NIH officiousness seems to fail to comprehend is that the problem needs to be fixed at the study section level. And that their proud stripping of assistant professors from the study sections (Scarpa presentation, slide #42) is working against this! ]
Getting back to the applications assigned to the study section on which I serve for this round, I note that many of these A2 applications received a gray zone score the last time: percentile ranks in the neighborhood of 15%. While hard paylines have been running 8-10%ile recently, past rounds suggest the eventual funding rate will end up closer to 15% or more of all applications once late pickups are counted. So some of these were close but not picked up the last time. If history is any guide, unless the PIs have really screwed up, the reviewers will respond to this situation by assigning scores in the 1.2 range and writing critiques meant to communicate “Will you just fund this thing already?” to the Program staff. Program will pick them up and everyone is happy, right?
No, everyone is NOT happy. It takes a lot of effort to revise a grant application, even when essentially no substantive changes are made. It takes a lot of effort for three people to review a grant application. And it wastes the effort of the PI who has submitted her new R01, which now has essentially no chance of being considered closely in the study section discussion, nor of being funded.

The grant-revision holding pattern wastes a lot of real NIH dollars too, a consideration that never seems to be part of the debate. For many PIs, the time they spend on grant writing is time they are not spending on optimizing the output of their lab. No, it is not supposed to work like this; technically, NIH-funded effort is not supposed to be spent on grant writing. But this accounting is a scam. Of course, 100% time can be narrowly construed as 40 hrs per week, so anything over this you spend on grant writing is off the NIH clock. But c’mon. Lab resources are being maintained while awaiting funding too. Can you just fire your highly trained tech while revising your grant and hire back a replacement 18 mo later? Hell no, you make compromises wherever you can just to keep the lights on, so to speak. Shift the tech salary (not to mention PI effort) onto the other grant you hold until the one you are revising hits, which of course compromises the output of that grant. Maintaining mouse lines? Expensive research subjects like nonhuman primates? Access to institutional equipment and space (use it or lose it)? Check, check, check. There is a huge amount of taxpayer money being wasted while we natter about insisting that revised grants are “better”. We can improve on this.
So this brings me back to my usual proposal, of which I am increasingly fond. The ICs should set a “desired” funding target consistent with their historical performance, say 24% of applications, for each Council round. When they do not have enough budget to cover this many applications in a given round, they should roll the applications that missed the cut into the next round. Then, starting the next Council round, they should apportion some fraction of their grant pickups to the applications from the prior rounds that were sufficiently meritorious by the historical standard. Perhaps half roll-over and half from the current round of submissions. That way, there would still be some room for really outstanding -01 apps to shoulder their way into funding.
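A toy sketch of the rollover mechanics, for the sake of concreteness. The 24% target and the half-and-half split come from the proposal above; everything else (the scores, the budget, the function names) is invented for illustration and is not any actual IC procedure.

```python
# Toy model of the rollover proposal: each Council round, an IC defines
# "meritorious" by its historical target fraction; near-misses the budget
# cannot cover roll into the next round, where a set share of pickups is
# reserved for the rollover pool and the rest go to current submissions.

TARGET_FRACTION = 0.24  # historical "desired" funding rate
ROLLOVER_SHARE = 0.5    # share of pickups reserved for prior rounds

def run_round(current_scores, rollover, budget_slots):
    """current_scores / rollover: percentile scores (lower is better).
    Returns (funded, new_rollover) for one Council round."""
    target = round(TARGET_FRACTION * len(current_scores))
    # Applications meritorious by the historical target, best first.
    meritorious = sorted(current_scores)[:target]

    # Reserve roughly half the available slots for the rollover pool.
    from_rollover = min(len(rollover), round(ROLLOVER_SHARE * budget_slots))
    from_current = budget_slots - from_rollover

    funded = sorted(rollover)[:from_rollover] + meritorious[:from_current]
    # Meritorious-but-unfunded apps wait for the next round instead of
    # being sent back through revision and re-review.
    new_rollover = sorted(rollover)[from_rollover:] + meritorious[from_current:]
    return funded, new_rollover

# One hundred applications with percentile scores 1..100, budget for 10.
funded, pool = run_round(list(range(1, 101)), [], 10)
print(len(funded), len(pool))   # round 1: all 10 slots go to current apps
funded2, pool2 = run_round(list(range(1, 101)), pool, 10)
print(len(funded2), len(pool2)) # round 2: 5 slots rollover, 5 current
```

Note the obvious caveat the sketch makes visible: if the budget stays below the target, the rollover pool grows, so in practice the pool would need an expiration after a round or two.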
The great part is that essentially nothing would change. The A2 app that is funded is not going to result in scientific conduct that differs in any substantial way from the science that would have resulted from the A1/15%ile app being funded. New apps will not be any more disadvantaged by sharing the funding pie with prior rounds than they currently are facing revision-status-bias at the point of study section review.
Yet a great deal of time and effort would be saved.
…and maybe. Just maybe. This would let study section members back away from the revision bias abyss and get serious about ranking merit at the -01 stage. Over a few rounds, it might even be the case that prioritizing A2 applications for the very top scores wouldn’t be such an obsession. And ever so slooooowly, we might see the proportion of grants being funded at the -01 stage go back up. Personally I’d like to see as many as 70% of grants get funded unrevised.
UPDATE: PhysioProf supplies the long-term trends for all funded grants by revision status. Interesting to see how it developed over time. It reminds me of when the limit of a maximum of two revisions of a given application was put into place in Oct 1996. One major argument was the relatively small number of applications that got funded as A2s and the further rarity of applications getting funded on additional revision. Assessing current trends by that logic should mean a return of A3 and A4 revisions, shouldn’t it?