Is it an outrage if your NIH grant score goes from nearly funded to ND after you revise it?
February 6, 2026
Yes, the answer is yes.
It is extremely painful to have your grant proposal just miss the cut for funding on one version and then to have the revised version end up way out of the race, or Not Discussed. This comes up with regularity in online discussions of NIH grant review. It starts, at root, with the issuance of review comments to the PI in the summary statement, along with the opportunity to revise (amend) the proposal and resubmit it for another round of review. It is compounded by the fact that reviewers of a revised version of a grant have access to the summary statement for the prior version.
It just makes sense, to the uninitiated, that in a Just World a grant revised in light of peers’ prior criticisms should be scored no worse, and probably better, than the original version. Right?
But the answer is also no. No, because the NIH has been trying for my entire time in this business to break peer reviewers of their impulses. To get peers to review revised grants without reference to how the prior version scored in study section.
I started writing NIH grants when the rule was that one could amend a proposal only twice (i.e., up to the A2 version), after which it had to be submitted as a “new” proposal. This followed an era in which A6 and A7 amended versions were sometimes funded. It was also an era in which the impact of the NIH budget doubling was forcing a grant holding pattern, in which one’s proposal seemingly was only going to get taken seriously at the A1 or A2 stage. (Oh, and btw, this was an era in which there was no ESI designation or funding policy. No R29 FIRST award set-aside for newbies either. Yeah.)

I lived through the NIH’s reduction of the amendment limit from A2 to A1, their attempt to ban resubmitting essentially the same proposal as a “new” grant, and the subsequent (and current) backdown. A charitable view might say the NIH was trying to restore a sort of “fish or cut bait” stance among reviewers on original submissions, in an attempt to speed funding to the scientists with the best ideas. A less charitable view might be that the NIH was just trying to juke its stats on time-to-funding from the original submission of an idea.
I have been trained on many study sections that we are not to benchmark the review of an amended (revised) proposal against the score / percentile / outcome of the prior version’s review. We are not supposed to indicate that we had reviewed the prior version of any such proposal. Any hint of benchmarking against a prior score often draws a correction from the SRO, and possibly some muttering from other reviewers as well.
When I was first invited to study section, the “Review Criteria Format Sheet” listed a series of headers, starting with Significance. The second header was Response to Previous Review (for revised applications). This matched the discussion as I remember it, in which the quality of the response to the prior version’s review was a primary point of comment. Eventually they buried the template box for commenting on the quality of the resubmission under Additional Review Criteria, down below Biohazards.
All of this was required because many of the people who do peer review of NIH grants are constitutionally and professionally inclined to be instructors and explainers who literally cannot overcome their prepotent desire to help the applicant do better next time. It is why we entered the long path into this business in the first place. It is part of our professional workaday behavior to help people improve their academic work product. In short, it is who we are. Relatedly, the NIH started inserting a box for Additional Comments to Applicant way down at the bottom of the scoring template, under Additional Review Considerations, which emphasized that reviewers “should not consider them in providing an overall impact/priority score”. This was supposed to be a sort of pressure-relief valve.
We are at another transition point, one in which the pain over “just-missed” scores, and the corresponding outrage when revised proposals score worse, is ramping up. The 2025 assault on the NIH included a multi-year funding plan, now continued into FY2026 because Congress failed to pare it back in the recent appropriations bill, which inevitably reduces the number of new grants that are funded. This means more “just-missed” applications, particularly from the historical perspective of what scores would have been funded, and more revised proposals coming back in for review. The CSR is in the middle of what is supposed to be only two rounds of review under enhanced triage procedures: about 70% of proposals will not be Discussed, compared with the prior ~50%. Time will tell if CSR decides to continue this; me, I suspect they will. That means not only scores going backwards, but probably many revised proposals ending up ND after being scored the first time.
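To make that triage arithmetic concrete, here is a minimal back-of-envelope sketch. The section load, the exact cutoffs, and the example percentile are all illustrative assumptions on my part, not CSR policy:

```python
# Back-of-envelope sketch of how a deeper triage cut turns a
# previously-Discussed proposal into an ND. All numbers are
# illustrative assumptions, not CSR policy.
apps_per_section = 60      # hypothetical study section load
old_nd_fraction = 0.50     # historically ~50% Not Discussed
new_nd_fraction = 0.70     # enhanced triage: ~70% Not Discussed

old_discussed = round(apps_per_section * (1 - old_nd_fraction))  # ~30 slots
new_discussed = round(apps_per_section * (1 - new_nd_fraction))  # ~18 slots

# A revised proposal whose prior version landed around the 35th
# percentile was Discussed under the old cut (top 50%) but falls
# below the new discussion line (top 30%), even at unchanged quality.
prior_percentile = 35
print(f"Discussed slots: {old_discussed} before, {new_discussed} now")
print("Discussed under old cut:", prior_percentile <= (1 - old_nd_fraction) * 100)
print("Discussed under new cut:", prior_percentile <= (1 - new_nd_fraction) * 100)
```

The point of the sketch: a revision can come back ND without getting any worse, simply because the discussion line moved.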
Cue more outrage.
I don’t know what the best path forward should be. As I repeatedly note, peer reviewers at the NIH are driven first and foremost by a sort of diffuse “fund / don’t fund” binary, and a lot of what is said in the summary statement is more a justification of that position than a sober quantitative tally of strengths and weaknesses.
I’ve had the pleasure of two low single-digit percentile scores in my career. Both were on revised proposals: an A2 scored at 2%ile and an A1 at 1.6%ile. The A2 followed a 21%ile A1, back in a time and at an IC where such a score was a strong “maybe” for exception pay. The A1 followed a 19%ile, ditto. I still assert that it would be very hard to show that my 2%ile and 1.6%ile proposals were objectively far superior to the prior versions. They were likely the top scores in their study sections for those rounds, but there is no way in hell they were “perfect” proposals. Nor were they objectively superior to a whole host of my other grant proposals over the years that got worse scores, from middlin’ within-payline (~8-9%ile at a certain time), to reach/stretch percentiles (hey, I’ve had pickups in those ranges), to NDs. Point being, the excellence of those scores reflects a set of reviewers saying “jeez, fund this thing already” to Program, and NOT them saying “this is objectively such an exquisitely crafted grant proposal that we cannot help but give it a fine score”.
Friends, we were already down to paylines (and inferred paylines) of 7-10%ile. You can check the funding data for the last few pre-chaos fiscal years for yourself. NCI said theirs was going to drop to the 4%ile range in FY2025 due to the multi-year funding requirement.
We will undoubtedly have an immense traffic holding pattern of previously reviewed grants stacking up.
Study sections simply cannot give them all within-payline scores to reward them for improving upon an already excellent “fund this!” proposal.