Whither the NIH Investigator criterion?
December 9, 2025
One of the major changes, perhaps the most significant change, to the review of NIH grants under the new Simplified Peer Review framework involves the Investigator criterion. In the past, Investigator was one of five allegedly co-equal criteria alongside Significance, Innovation, Approach and Environment. Significance and Innovation are now collapsed into Factor 1, Importance of the Research, and Approach is now called Factor 2, Rigor and Feasibility. Investigator and Environment are now in Factor 3, Expertise and Resources; however, the review of this factor is supposed to be a simple choice of Adequate or Inadequate.
The change to having peer reviewers assess the adequacy of investigator expertise and institutional resources as a binary choice is designed to make them evaluate Investigator and Environment with respect to the work proposed. It is intended to reduce the potential for general scientific reputation to have an undue influence.
“Undue” influence. Look, it was right there in the prior instructions to reviewers that Significance, Innovation, Investigator, Approach and Environment could be combined into an overall impact score in any weighting, including zero. There was no obligation that the reviewer actually use weightings, or combine the assessment of the five criteria in any specific way. There was no obligation for a reviewer to behave consistently from application to application when it came to integrating the criterion scores. It was in fact totally kosher by the rules to view a proposal as outstanding solely on the basis of an outstanding Investigator. No more, and no less, than viewing it as outstanding based on the Approach. Or Significance. Or Innovation.
Now of course the evolved culture of review is, in my experience, nowhere near this loose. And at least from the perspective of broad NIH-wide statistics, there were analyses suggesting Approach drives overall scores the most, followed by Significance and Innovation. On the other hand, it is a consistent frustration of first- or second-time reviewers that some established investigators seem to have scores awarded mostly on their status and reputation and less on the actual proposal. It is a consistent campfire-chatter topic amongst disappointed applicants (i.e., almost all applicants) as well.
The FAQ page elaborates the NIH perspective on the reputational bias issue:
NIH can’t entirely prevent reputational bias but a change to evaluating the investigator and environment in the narrow context of the work proposed will help to put applicants on equal footing. NIH will manage peer review to ensure that reviewers follow guidance for Factor 3 and focus on expertise and resources as it relates to the proposed science, not general accomplishments. NIH will also be alert to statements related to established reputation in written critiques. Both reviewers and SROs will receive training on this point, and SROs will intervene when such biases appear in written critiques or during discussions.
I am pondering the presentation of this change. The focus is now on “expertise” and not on “general accomplishments”. The mention of reputational bias is easy to integrate with these comments. I think (insert caveats) that my approach has generally been consistent with this over the years, particularly when it comes to early stage investigators.
Let us suppose that we are reviewing an R01 proposal from a brand new Assistant Professor, submitted within the first year of the appointment. The thing we have to go on for “expertise” is their record of publication as a grad student and as a postdoc, along with any interactions we may have had at academic meetings, etc. If I saw someone who had published first-author papers, and maybe some middle-author papers, using the techniques central to the R01 proposal, I basically viewed this as adequate evidence of “expertise”. Particularly if the grant proposal itself had preliminary data clearly identified as generated in the new lab. But my take is basically that by the time a young scientist gets into a grant-submitting position, they very likely have the “expertise” to continue doing the same kind of science as a lab head.
My approach is quite obviously related to previous review guidance that we were supposed to make allowances for career stage and lack of prior grant funding. Allowances for their lack of accomplishment, because they haven’t really had enough time to accomplish anything. I.e., to publish a lot of papers as senior author.
Now suppose roughly that same proposal is submitted in Year 3 of the person’s new appointment. Or Year 5.
Their level of expertise as demonstrated by the publication record has not changed.
However.
My take on early independent folks is a presumption that they did roughly the usual share of the work on their papers. And that they will, if given the chance, have the skills and drive to start leading a research program.
The trouble is that after a few years, there is new evidence bearing on that presumption. Evidence of whether they have indeed begun to lead a research program. Start a lab, recruit help if needed, generate publication-quality data… generate publications. And like it or not, this is where I start to focus on accomplishment. As do many other reviewers I have seen in action over the years.
And while I believe very strongly that good grant scores should not be a reward for past performance, an Assistant Professor in my fields of interest who has no publications by Year 5 raises an eyebrow about their ability to make progress on the proposed Aims.
It’s not just the more junior of us. A high rate of publication, a high rate of field-relevant work and a strong record of generating papers related to the Aims of prior awards underlie and drive scientific reputation. Someone with lower productivity and less scientific verve will have a lesser reputation.
Personally, my solution has always been to try very hard to assess accomplishments in the context of job type (teaching load? service load? etc.) and resources (i.e., prior grants and other sources of support for the lab). And to assess the Investigator criterion as a binary decision of “can they do the proposed work if they get the funding?”, as opposed to a scale of past productivity.
It seems reasonable to me to use suspiciously low accomplishment only as a caution, and not to use quantity of accomplishment to decide on the merits of a new proposal.
But the NIH doesn’t want us to do that any more. We are supposed to focus only on “demonstrated background, training, and expertise”. Which is basically invariant once one has demonstrated it for the first time. There is no sense that they mean “continued demonstration”, because that is undoubtedly review of accomplishment.
I don’t know where this leaves our evaluation of Investigator, even in the current framework. I am going to have a hard time re-calibrating myself to essentially ignore intervals of suspiciously low accomplishment given resources.
I think we all know I think about the process of review considerably more than the usual NIH peer-reviewing bear. It is going to be REALLY hard, I predict, for study sections to avoid contaminating their reviews with assessment of PI accomplishment.