Perspectives on the allegedly broken peer review system of the NIH

July 21, 2011

NIH head of Extramural Research Sally Rockey has a post up defending peer review.

There has been much talk on the peer review process in our blog comments recently. We are extremely proud of our peer review system and its acceptance as one of the premier systems of peer review around the world. Our peer review system is based on a partnership between NIH staff and our reviewers. Each year we review about 80,000 applications with the help of tens of thousands of outside experts. I greatly appreciate those who have served and encourage everyone to participate in the process when you can.

The reason for this post seems to be one prolific commenter who has a bone to pick and who just keeps getting nuttier. The last exchange was the trigger:

I merely express my firm opinion, based on my own numerous experiences and without undermining the rules of the respected blog – that is why I am restricted from providing any specific examples. Should my respected opponent be interested in seeing these specific examples, I shall be very happy to share them in a private manner.

“Numerous experiences”. Yeah. So have we all. Had numerous experiences. Mine come from my *cough*cough* years of putting in anywhere from ~2-6 proposals per year, from a 4 year term of service on a study section (~60-100 apps per round), from local departmental colleagues with similar experiences and from writing a blog that fields many comments from other NIH-funded investigators.

I hesitate to suggest I have a full picture of NIH grant review, so I seek data from the broader NIH-wide perspective wherever possible to buttress my very limited personal experience. Rockey’s post says they review about 80,000 applications per year. I don’t think anyone’s personal experience as an applicant, ad hoc reviewer or even multi-term appointed reviewer is all that comprehensive.

– break – I’m going to return to this thought later –

9 Responses to “Perspectives on the allegedly broken peer review system of the NIH”

  1. Genomic Repairman Says:

    Dude, you realize he wimped out and got Rockey to remove his comments?

  2. Should my respected opponent be interested in seeing these specific examples, I shall be very happy to share them in a private manner.

    LOON ALERT!!!!!!!!!!!

  3. drugmonkey Says:

    Dude, you realize he wimped out and got Rockey to remove his comments?

    Well that must have happened right after I posted this, I guess. It was up earlier today…

  4. Genomic Repairman Says:

    They came down shortly after your post went live.

  5. whimple Says:

    Peer review evaluation is pretty simple. If Congress thinks NIH is doing a good job and providing good value for money, they’ll boost or maintain current funding levels, and we can conclude peer review is leading to the kind of progress the payers want. Conversely, if Congress thinks NIH is welfare for academics and not returning good value, they’ll cut the budget and we can conclude peer review has poor prioritization metrics and is not delivering the products the customer wants. It’s the American way!

  6. Dude, congress doesn’t “think” in the fantasyland manner you are positing.

  7. whimple Says:

    Holmes, congress wants to cut discretionary spending and not upset the voters too much. That’s your budget.

  8. drugmonkey Says:

    GR - maybe his Notice of Award showed up today?

  9. Replicative Says:

    One of my colleagues, commenting on a piece about NIH peer review, noted that some NIH Institutes are funding studies to replicate findings. I think that is a great idea and that it should be used as a review metric. It would be an excellent way to maximize the use of public funds in these harsh times of financial constraint. Established investigators who have run Research Grants, Program Projects and other funding mechanisms for more than 15 yrs ought to be given the opportunity to demonstrate a replicability factor, assessed by an independent body, rather than a citation-index rank. It might not be difficult to set up the initiative. For example, investigators in their 13th-14th yr of NIH funding could volunteer for replicability evaluation. At that point of funding, they would provide the materials and guidelines needed to replicate the main findings supporting their competitive renewals or new grants on related questions/hypotheses. NIH would establish replication evaluation bodies with infrastructure and personnel qualified for the experimentation/review, and would ensure their independence. A replicability factor is a more objective measurement than number of publications or citation rank.
