Variability in NIH Grant review is a good feature, not a bug, not a terrible indictment of the system

March 13, 2018

You may see more dead-horse flogging than usual, folks. The commentariat is not as vigorous as I might like yet.

This emphasizes something I had to say about the Pier monstrosity purporting to study the reliability of NIH grant review.
Terry McGlynn says:

Absolutely. We do not want 100% fidelity in the evaluation of grant “merit”. If we had that, and review were approximately statistically representative of the funded population, we would all end up working on cancer.

Instead, we have 27 Institutes and Centers. These are broken into Divisions that have fairly distinct missions. There are Branches within the Divisions, and multiple POs who may have differing viewpoints. CSR fields a plethora of study sections, many of which have partially overlapping missions, meaning a given grant could be reviewed in any one of several different sections. A standing section might easily have 20-30 reviewers per meeting, and your grant might reasonably be assigned to any of a large number of permutations of three for primary assessment. Add to this the fact that reviewers change over time within a study section, even across the rounds to which you are submitting approximately the same proposal. There should be no wonder whatsoever that the review outcome for a given grant might vary a bit under differing review panels.
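
Just to put a rough number on the assignment piece alone, here’s a toy calculation (Python; the 25-reviewer panel is an invented figure from the 20-30 range above) of how many distinct reviewer assignments are possible for a single application:

    from math import comb, perm

    panel_size = 25  # hypothetical standing section size

    # Unordered three-reviewer teams for one application...
    print(comb(panel_size, 3))  # 2300 distinct teams
    # ...and ordered primary/secondary/tertiary assignments.
    print(perm(panel_size, 3))  # 13800 ordered assignments

Thousands of possible reviewer triads from one panel, before you even get to section choice or roster churn.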

Do you really want perfect fidelity?

Do you really want that 50% triage and another 30-40% scored-outside-the-payline to be your unchangeable fate?

Of course not.

You want the variability in NIH Grant review to work in your favor.

If a set of reviewers finds your proposal unmeritorious, do you give up* and start a whole ’nother research program? Eventually quitting your job to do something else when you don’t get funded after the first 5 or 10 tries?

Of course not. You conclude that the variability in the system went against you this time, and you come back for another try, hoping that this time the variability swings your way.

Anyway, I’d like to see more chit chat on the implicit question from the last post.

No “agreement”. “Subjectivity”. Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, “subjective”. Anyone who pretends this process is “objective” is an idiot. Underinformed. Willfully in denial. Review by humans is a “subjective” process by its very definition. That is what it means.

The only debate here is how much variability we expect there to be. How much precision do we expect in the process?

Well? How much reliability in the system do you want, Dear Reader?

__
*ok, maybe sometimes. but always?

13 Responses to “Variability in NIH Grant review is a good feature, not a bug, not a terrible indictment of the system”

  1. Ola Says:

    Never underestimate the stubbornness of academics. The spectrum looks something like this, in terms of responses to negative reviews…

    1) Oooh, thanks study section peeps, good idea, that makes a better proposal
    2) I don’t agree, but I’m gonna do what you ask anyway because you’re wiser, lick lick
    3) Not gonna do that, but here’s why, and here’s something even better
    4) Not gonna do that, no alternative, no explanation
    5) Not gonna do that ever, and BTW you guys are stupid
    6) OK I get that you didn’t like it the first 6 times, but pleeze fund me, JTFC you are stupid

    Guess which end of the spectrum gets funded? The stubborn end or the permissive end?

    Also don’t forget, we’re dealing here with academics ON STUDY SECTION who are also stubborn. If someone (esp. a standing member) has decided they are going to dig in on an issue such as using a tissue-specific knockout versus a global one, there ain’t no other way you’re getting out alive unless you do it. Of course, you don’t have to actually do it, just say you will and then use the money for something else!

    This is the true secret to the system – you don’t actually have to DO what you propose. You just have to convince 30 people you will do it, and provided the other thing you end up doing is interesting and publishable, they’ll keep funding you because now you’re productive. The number of whiny folks at the 4-5-6 end of the spectrum who don’t get this is staggering. “But I can’t do that” does not have to reflect reality. You just gather the consultants, convince the reviewers it’s do-able, get funded, then go do what you originally intended and who cares because now you got a fucking grant! I would heartily bet that the people on the 1-2-3 end of the spectrum are experts at this strategy.

  2. Philapodia Says:

    What Ola^ said. Get the money and then do good science, no matter what the reviewers say.

    I’ve never had a PO for an R01 give me crap because I didn’t follow the funded grant exactly. Most of the time I just send my annual reports into the eRA Commons abyss, alternative avenues and all, and hear nary a peep about it.

  3. Morgan Price Says:

    In the popular press, the peer review funding system at NIH and NSF is often described as a guarantee of scientific quality. I’ve even seen the term “gold standard” used. So I think the idea that this system is objective is part of its broader appeal. This idea appears in official NIH documentation too, e.g. “The SRO is … responsible for ensuring that each application receives an objective and fair initial peer review.”

  4. DNAman Says:

    The problem with NIH review is the false precision of the percentile score: 13%? Why not 13.45%?

    The final scores should be 1, 2, 3, or 4. All the 1s get funded. Then the program officers pick the 2s such that they fulfill program goals and don’t overlap with the 1s.
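
    To make that concrete, a minimal sketch (Python; the percentile cutoffs are invented, purely for illustration) of what the binning could look like:

        def tier(percentile: float) -> int:
            """Collapse a fine-grained percentile into a coarse 1-4 score.

            Cutoffs are invented for illustration only.
            """
            if percentile <= 10:
                return 1  # fund outright
            if percentile <= 25:
                return 2  # POs pick from these to fit program goals
            if percentile <= 50:
                return 3
            return 4

        # 13% vs. 13.45%: the false precision disappears inside tier 2.
        print(tier(13.0), tier(13.45))  # -> 2 2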

  5. AZF Says:

    I don’t have a problem with the way the grant peer-review system works right now, but I think the reaction you are observing is emotional, not rational. I think people take grant rejection very personally and think that it means they are bad scientists who do bad science. This urge to demonstrate that it’s not them, it’s the system that sucks, is (I think) the follow-up defensive mechanism. Add to that the observation that the same grant rejected by one study section or funding agency gets funded by another, and they conclude that the system is random, instead of considering that the sections or organizations may have different funding priorities.

    This emotional reaction can also cloud people’s ability to read the reviewer comments properly. They miss the forest for the trees. I’ll get calls from PIs who open with “Reviewer 2 is an idiot because he wants us to do assay A, when assay A would never work with these cells.” But when I look at the comments, all three reviewers listed some version of “lacks sufficient detail” as their biggest criticism. So the response to reviewers includes a half-page description of why assay A is wrong, but the research strategy goes in with the same scant detail as before.

  6. Jonathan Badger Says:

    Normally DM goes on and on about how various features of the funding environment are, in his opinion, broken (and how GenX scientists in particular got screwed), so it is somewhat unexpected that he’d come down on the “things are fine the way they are” side of this.

  7. DrugMonkey Says:

    Things are not “fine”, JB. They simply are not broken in the way that people (and the authors) interpret this study to mean.

  8. drugmonkey Says:

    DNAman - the new scoring system (1-9) was supposed to do something like what you suggest: create more ties and bigger %ile jumps, and make POs have to think rather than saying “179 is better than 181, merit!”.

  9. drugmonkey Says:

    This emotional reaction can also cloud people’s ability to read the reviewer comments properly

    Endorse! Another reason the Pier falsity is counterproductive for PIs. Rebuttal language that goes on and on about how a prior review comment is WRONGZ is usually less effective. Better to come up with an “assuming that might be true, here’s how we will address that eventuality” type of response.

  10. Grumpy Says:

    The great (and in some ways awful) thing about the NIH system is that there are so many opportunities to submit proposals, and the review is relatively fast and easy to track. So you can average out the “variability” by submitting a ton of proposals.
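
    A quick toy simulation (Python; every number here is invented) of what that averaging-out looks like:

        import random

        random.seed(0)

        # Invented numbers: suppose a solid proposal has a 15% chance of
        # landing under the payline on any single submission, with the
        # rest down to reviewer-to-reviewer noise.
        p_single, n_tries, trials = 0.15, 5, 100_000

        funded = sum(
            any(random.random() < p_single for _ in range(n_tries))
            for _ in range(trials)
        )
        print(f"funded within {n_tries} tries: {funded / trials:.2f}")
        # matches the closed form 1 - (1 - 0.15)**5, about 0.56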

    The downside is that this is a waste of time and favors quantity of proposals over quality.

    All in all, I’d say I still prefer it.

    IME there is a ton of noise (more than DM and others acknowledge), but the noise is similar at other agencies. And it beats the slow, tiny-budget NSF approach, the relationship-based approach of DoD/DoE, the winner-takes-all approach of DARPA/IARPA, etc.

  11. JL Says:

    @Grumpy, “relatively fast”, really? Maybe it is fast considering the size of the enterprise. But I have friends in other countries and their processes are so much faster. In some countries you can go from submission to spending the money in a couple of months. A friend of mine went from “I got an idea” to “I hired a postdoc” in 3 months.

  12. Emaderton3 Says:

    @ Grumpy

    The review is relatively fast? I suppose you mean from the study section meeting date until the summary statements are available?

    The process from submission to summary statement (to funding) is agonizing lol!

  13. Grumpy Says:

    I have twice been able to resubmit an A1 just 4 months after submitting the A0.

    That kind of turnaround is impossible at NSF (at least in the directorates I submit to), with once-a-year submission windows and a minimum of 6 months before seeing reviews.

    Granted the time from receiving scores to award letter at NIH has been highly variable for me, but I’ve had similar experience with other agencies.
