Over at MWE&G we have additional comment on the impact of shorter NIH grant applications. There is a “proposal” being floated (aka “a done deal”) to reduce the length of the research plan section of the standard R01-type application from 25 to 15 pages. As outlined here, the thought is to focus review on significance and impact and to de-emphasize methodological critique. A second benefit imagined by the NIH is that this is a way to decrease the number of individuals needed for review. [As usual, a comprehensive understanding of real behavior is absent in this NIH-think. Shorter apps mean even more apps-per-investigator, thus leaving the review “burden” unchanged even if their rationale about shorter apps were correct, which it is not.] As MWE&G points out, an NIH survey found reviewers less than enthusiastic:

However, current reviewers weren’t raising their hands to take on more of these shorter applications, so the NIH will need to rely on expanding their reviewer pool – hopefully made easier by a reduced reading burden.

The problem is simple, and anyone who has done any sustained reviewing (8+ apps per round, 3 rounds per year, 4-year commitment for “charter members” of panels) can point this out. The major burden of review is not simply reading the pages. Ten additional pages take at most another 15-20 minutes to physically “read”. This is immaterial in the scheme of things. Understanding what is being conveyed in a grant is a synthetic process in which all major sections (Background, Preliminary Results, Research Plan) need to be integrated in the reviewer’s head. The major time commitment is the “figuring it out” process: how is this experiment supported by the preliminary data and background, what hole in the literature is being addressed, how does this address the Aims, etc. In fact, decreasing the length of the application is going to increase the reviewer burden in some cases, because the reviewer will have to bring her/himself up to speed on things that were previously laid out cleanly in a 25-page proposal.

The great unknown is whether reviewers will adapt their behavior to the new approach. They certainly can, but this is likely to be a long and very uneven process. The last few years have seen attempts to refocus review on “translational impact” and “significance” and “innovation”. It hasn’t worked in Your Humble Narrator’s study section. Some reviewers remain “old school”. Some espouse the newer “significance/innovation” approach. Some do both depending on the grant under discussion! Is this because people are unqualified or pernicious? No. It is because there are legitimate differences of opinion on various things that tie into the decision about what represents the “best possible science”. Unfortunately, there is essentially zero discussion in any official capacity or forum to navigate the intent of review. There are some published guidelines, but these don’t really get past the format of a review. This is likely due to an understandable reluctance on the part of the CSR to “contaminate” the independence of review by telling reviewers how to review a proposal. But it leads to additional variability in the process because there is no commonality of approach, and to a great deal of frustration on the part of the PI reading the summary statement. “Reviewer one says it is ‘highly significant in providing a clear test of hypotheses to resolve two major theoretical approaches to the field’. Reviewer two says it ‘lacks significance because the experiments do not address any significant question of public health’. AAAGGHHHH! What in the heck is ‘significance’ supposed to mean?”

What indeed.