As you are aware, Dear Reader, despite attempts by the NIH to focus the grant reviewer on the “Innovation” criterion, the available data show that the overall Impact score for an NIH grant application correlates best with Significance and Approach.

Jeremy Berg first posted data from NIGMS showing that Innovation was a distant third behind Significance and Approach. See Berg’s blogposts for the correlations with NIGMS grants alone and a follow-up post on NIH-wide data broken out for each IC. The latter emphasized how Approach is much more of a driver than any of the other criterion scores.

This brings me to a query recently directed to the blog which wanted to know if the commentariat here had any brilliant ideas on how to effectively focus reviewer attention on the Innovation criterion.

There is a discussion to be had about novel approaches supporting innovative research. I can see that the Overall Impact score correlates better with the Approach criterion score and not very well with the Innovation criterion score. This is the case even for funding mechanisms which are supposed to be targeting innovative research, including specific RFAs (i.e., not only the R21).

On one side, this is understandable because of reviewers’ concerns over the high risk associated with innovative research and the lack of solid preliminary data. But on the other side, risk is the very nature of innovative research and the application should not be criticized heavily for this supposed weakness. From my view, for innovative research, the overall score should correlate well with the Innovation score.

So, I am wondering whether the language for these existing review criteria should be revised, whether an additional review criterion instructing reviewers to appropriately evaluate innovation should be added, and how this might be accomplished. (N.b. heavily edited for anonymity and other reasons. Apologies to the original questioner for any inaccuracies this introduced -DM)

My take on NIH grant reviewer instruction is that the NIH should do a lot more of it, instead of issuing ill-considered platitudes and then wringing their hands about a lack of results. My experience suggests that reviewers are actually really good (on average) about trying to do a fair job of the task set in front of them. The variability and frustration that we see applicants express about significantly divergent reviews of their proposals reflects, I believe, differential reviewer interpretation of what the job is supposed to be. This is a direct reflection of the uncertainty of the instruction, and the degree to which the instruction cannot possibly fit the task.

With respect to the first point, Significance is an excellent example. What is “Significant” to a given reviewer? Well, there is wide latitude.

Does the project address an important problem or a critical barrier to progress in the field? If the aims of the project are achieved, how will scientific knowledge, technical capability, and/or clinical practice be improved? How will successful completion of the aims change the concepts, methods, technologies, treatments, services, or preventative interventions that drive this field?

Well? What is the reviewer to do with this? Is the ultimate pizza combo of “all of the above” the best? Is the reviewer’s pet “important problem” far more important than any sort of attempt to look at the field as a whole? For that matter, why should the field as a whole trump the Small Town Grocer interest…after all, the very diversity of research interests is what protects us from group-think harms. Is technical capability sufficient? Is health advance sufficient? Does the one trump the other? How the hell does anyone know what will prove to be a “critical” barrier and what will be a false summit?

To come back to my correspondent’s question, I don’t particularly want the NIH to get more focused on this criterion. I think any and all of the above CAN represent a highly significant aspect of a grant proposal. Reviewers (and applicants) should be allowed to wrangle over this. Perhaps even more important for today’s topic, the Significance recommendations from NIH seem to me to capture almost everything that a peer scientist might be looking for as “Significance”. It captures the natural distribution of what the extramural scientists feel is important in a grant proposal.

You may have noticed over the years that for me, “Significance” is the most important criterion. In particular, I would like to see Approach de-emphasized because I think this is the most kabuki-theatre-like aspect of review. (The short version is that I think nitpicking well-experienced* investigators’ description of what they plan to do is useless in affecting the eventual conduct of the science.)

Where I might improve reviewer instruction in this area is in trying to get reviewers to be clear about which of these suggested aspects of Significance are being addressed, and then encouraging them to state more clearly why these sub-criteria should or should not be viewed as strengths.

With respect to the second point raised by the correspondent, the Innovation criterion is a clear problem. One NIH site says this about the judgment of Innovation:

Does the application challenge and seek to shift current research or clinical practice paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions? Are the concepts, approaches or methodologies, instrumentation, or interventions novel to one field of research or novel in a broad sense? Is a refinement, improvement, or new application of theoretical concepts, approaches or methodologies, instrumentation, or interventions proposed?

The trouble is not a lack of reviewer instruction, however. The fact is that many of us extramural scientists simply do not buy into the idea that every valuable NIH Grant application has to be innovative. Nor do we think that mere Innovation (as reflected in the above questions) is the most important thing. This makes Innovation a different kind of problem: it sits co-equal with criteria whose very status as major criteria is not in debate.

I think a recognition of this disconnect would go a long way to addressing the NIH’s apparent goal of increasing innovation. The most effective thing that they could do, in my view, is to remove Innovation as one of the five general review criteria. This move could then be coupled to increased emphasis on FOA criteria and an issuance of Program Announcements and RFAs that were highly targeted to Innovation.

For an SEP convened in response to an RFA or PAR that emphasizes innovation….well, this should be relatively easy. The SRO simply needs to hammer relentlessly on the idea that the panel should prioritize Innovation as defined by…whatever. Use the existing verbiage quoted above, change it around a little….doesn’t really matter.

As I said above, I believe that reviewers are indeed capable of setting aside their own derived criteria** and using the criteria they are given. NIH just has to be willing to give very specific guidance. If the SRO and Chair of a study section make it clear that Innovation is to be prioritized over Approach, then it is easy during discussion to hammer down an “Approach” fan. Sure, it will not be perfect. But it would help a lot. I predict.

I’ll leave you with the key question though. If you were to try to get reviewers to focus on Innovation, how would you accomplish this goal?

___
*Asst Professor and above. By the time someone lands a professorial job in biomedicine they know how to conduct a dang research project. Furthermore, most of the objections to Approach in grant review are the proper province of manuscript review.

**When it comes to training a reviewer how to behave on study section, the first point of attack is the way that s/he has perceived the treatment of their own grant applications in the past***. The second bit of training is the first round or two of study section service. Every section has a cultural tone. It can even be explicit during discussion such as “Well, yes it is Significant and Innovative but we would never give a good score to such a crappy Approach section”. A comment like that makes it pretty clear to a new-ish reviewer on the panel that everything takes a back seat to Approach. Another panel might be positively obsessed with Innovation and care very little for the point-by-point detailing of experimental hypotheses and interpretations of various predicted outcomes.

***It is my belief that this is a significant root cause of “All those Assistant Professors on study section don’t know how to review! They are too nitpicky! They do not respect my awesome track record! What do you mean they question my productivity because I list three grants on each paper?” complaining.