Repost: Preferential Funding for First Submissions of NIH Grants
November 6, 2009
I have a post I’m working on that references a topic I’ve been talking about on the blog for a long time. I was about to quote extensively from this one, but I figured I’d better just repost the whole thing. This originally appeared on 10 Sep 2007.
I’ve made reference a time or two to what I describe as “bias” for amended (revised) applications. In the lifecycle of the standard investigator-initiated research project grant (R01) application, it is initially submitted and reviewed; if not funded, it can be revised/amended once (the A1) or twice (the A2). (Thereafter the PI must submit a substantially new proposal.) First, the evidence that revised applications score better and are more likely to get funded relative to initial submissions is readily available.
The case is made first by reviewing the data provided in the CSR FY2004 databook (sadly, this is the only one available). If you look through Section III on Review Outcomes you can familiarize yourself with the respective and cumulative probabilities of getting funded at each submission stage. The case is perhaps more strongly made by performing a simple CRISP search on funded grants. For the latter, it helps to wildcard the Grant Number field on the first funded year of projects (e.g., “1R01%”) and perhaps gate by your favorite study section or funding Institute. (Gating by study section is preferred because if you don’t gate out the catchall Special Emphasis Panel designations, your results are contaminated by RFAs and other situations in which at least some grants will be funded unrevised by default. Unfortunately one can’t screen by type of SEP, so as to parse RFAs, Panel Conflicts and the like.)
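If you want to do the tallying yourself, here is a minimal sketch of what I mean, assuming you have pasted the resulting grant numbers into a plain text file (the file name is just a placeholder; the grant number format, e.g. 1R01DA012345-01A1, is the standard NIH one):

    import re
    from collections import Counter

    counts = Counter()
    with open("crisp_hits.txt") as fh:         # placeholder file: one full grant number per line
        for line in fh:
            num = line.strip()
            if not num.startswith("1R01"):     # Type 1 = new award; R01 = the mechanism in question
                continue
            m = re.search(r"-01(A\d)?$", num)  # first funded year, with an optional A1/A2/... suffix
            if m:
                counts[m.group(1) or "A0"] += 1   # no suffix at all = funded unamended ("A0")

    total = sum(counts.values()) or 1
    for status in sorted(counts):
        print(f"{status}: {counts[status]} ({100.0 * counts[status] / total:.0f}% of funded)")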
From the study sections that would be the most usual suspects in my core and shell areas (i.e., the brain, behavior and pharmacology ones), this type of analysis shows that at present only some 10% or so of funded grants got through on first submission (if you do your own review, comment on the outcome, eh?). Interestingly, if you go back in FY time, you will find that in recent history the A1s were the biggest proportion of funded grants, followed by the unamended grants, with A2s forming a lesser fraction. Right around 2004, when things started looking grim, the number of A2s funded started increasing while the unamended started declining. It is very difficult to conclude anything other than that this real-world outcome reflects the overall “problem”, i.e., increased submissions with lower/flatter NIH budgets. This is one of the stronger hints that there is nothing objective about the trend, i.e., that it is not about improving the science; see below.

In rough terms, if we assume the much-bandied hard fund line of 10% (even though the eventual success rate is a bit higher), this means that your application has to be in roughly the top 1% to get funded unrevised. I think this is stupid, and I think one good fix we could put in place would be to overtly bias in favor of un-amended applications. PhysioProf wants to know why.
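For the record, the back-of-the-envelope arithmetic behind that “top 1%” figure, using nothing but the rough numbers already quoted above:

    payline = 0.10           # the much-bandied hard fund line: roughly the top 10% of applications
    unamended_share = 0.10   # ~10% of funded grants got through on first submission (per the tally above)
    print(f"unamended awards ~= {payline * unamended_share:.0%} of submissions")
    # -> ~1%: in rough terms an application has to be in the top 1% to be funded unrevised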
First, let us review the problems with the revision process. It may help to assume for this exercise that we are talking about the pool of grants that will eventually be funded, but I think the general principle applies if you consider that most PIs will, one way or another, be funded eventually. It takes a lot of effort from the PI and the lab members to revise a grant, from generating additional preliminary data to writing the actual revision itself. With personnel costs eating up a majority of grant funds, this is a big expenditure of NIH money. Despite the “rules” that grant preparation is not supposed to be done on NIH grant time, well, c’mon. Let’s be real. There is only so much time in a work year. Let’s take the 2080 hours in a standard work-year (yes, I know you all work 60+ hrs a week). A mere 40 hrs is almost 2% of this, and that may be a bare minimum for a revision, never mind a fresh new proposal. So what’s 2%? A mere nothing, right? Except that you are putting out at least 3 of these a year and probably more. 5%? 10% of your time on revising grants? Did you ever look at what the big PIs are putting as their “effort” on grants? 10% or less as PI (gack!) and 5% or less as a major collaborating PI. Just sayin’.
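To put rough numbers on this, a quick sketch of the effort arithmetic (the 40 hrs per revision and the 1-5 submissions per year are just the loose estimates from above, not measured values):

    work_year_hrs = 2080       # standard 40 hr/week work-year
    hrs_per_revision = 40      # the bare-minimum estimate above
    for n in (1, 3, 5):        # one revision vs. the "at least 3 a year and probably more"
        print(f"{n} revision(s)/yr -> {n * hrs_per_revision / work_year_hrs:.1%} of the work-year")
    # prints 1.9%, 5.8% and 9.6% -- and that is before counting any fresh proposals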
There are efficiency issues. With the cycle for a given application taking 9 months from submission to first-available-funding date, a lot can happen in the lab in each cycle. Critical resources can disappear, including well-trained techs, postdocs, Institutional commitments, human subject pools and expensive animal models such as genetic colonies or nonhuman primates. If these have to be spooled up again, the essentially duplicative costs can be enormous. And what if the PI chooses to hold on to resources, underutilized, in anticipation? Well, generally these costs are going to be borne in some way by…you guessed it, the PI’s other NIH grant(s). So overall, from a business and taxpayer perspective, the NIH is burning a lot of money on grant revision.
Is it justified? The assumption in rebuttal to this position, and the subtext for revising grants in the first place, is that the process improves the grant application and, more critically, the eventual conduct of the science. Right? The only legitimate purpose here is not to generate a better application but to actually select for better science. From my perspective, looking at fellow investigators’ grantsmanship and to some extent the grants my section reviews, the presumption that the revision process changes, never mind improves, the eventual research conduct is false in most cases.
One presumption is that the reviewers are catching fatal flaws in the research plan that would otherwise go undetected, wasting much time and NIH money. Pshaw. The PI is a trained scientist with a lot of experience in what it takes to publish a paper in a given subfield, what it takes to meet the standards for a decent demonstration supporting hypotheses, what the appropriate “controls” are, etc. These applications are not the product of wet-behind-the-ears naifs, no, not even the New Investigator ones! So for the most part, the supposed flaws in the research plan boil down to empirical predictions (which can only be resolved in the doing), minor grantsmanship issues (it IS a limited application after all, not everything under the sun can be considered), methodological minutiae and other crap. All science is limited, and the beauty of the investigator-initiated mechanism is that each PI works on what s/he finds interesting, not on what the reviewers find interesting. The ultimate arbiter of most of this is the paper review process anyway. Again, in my experience, the number of cases in which revision of an essentially in-play (i.e., not complete junk) application changes the eventual scientific conduct is low. “Improves” the eventual science? Lower.
Reviewers are not stupid, so why has this emerged? Nobody likes it from the applicant standpoint. Well, this is a way to put applications in a holding pattern, waiting for funding. It eats away at the pool by sloughing off that fraction of PIs who just can’t sustain what it takes to go to the A2 stage (the CSR databook shows the New Investigator disadvantage here). There is some degree of the hazing mentality at play. And there are cyclical issues. The A2 of today needs to be prioritized to make up for the screwing it got at the -01 stage…because there were other A2s ahead of it “in line”. How to break the cycle?
How to come back from the brink by prioritizing funding for unrevised applications? Well, one way or another it is going to hurt. Whenever a change is put in place there will necessarily be some applications (such as the above) that get hosed. On the whole I think the benefits are worth it. Many more-established PIs have a mix of revised and unrevised grants under consideration at the same time anyway, so for these it will be relatively painless. And the NIH could sneak up on it by gradually shifting the fund-unrevised targets from round to round.
Of course, the NIH has since decided to eliminate the A2, limiting applicants to a single revision for applications initially submitted in early 2009 or later. For more history, the limit to the A2 (i.e., to two revisions) was put in place in 1996; before that you could keep revising endlessly. A CRISP wildcard search in the 1995 fiscal year turned up 25 %-01A4, 5 %-01A5 and 1 %-01A6 applications, in case you were wondering.
November 6, 2009 at 2:49 pm
See also the stunt NHLBI is pulling now with different paylines for different revision numbers.
http://www.nhlbi.nih.gov/funding/policies/operguid.htm
-A0 paid to 16.0 percent
-A1 paid to 9.0 percent
-A2 paid to 7.0 percent
November 6, 2009 at 2:55 pm
dammit whimple wtf did you think I was working on!!!!
November 6, 2009 at 3:05 pm
so what happens in this round when an A1 = A2, in that it is the final shot?
personally, I say screw the A2’s….they had their shot.
(obviously I have an A1 that doesn’t get another chance)
November 6, 2009 at 4:21 pm
Scooped! See what you get for presenting your preliminary data in an open forum… 🙂