What about when Program does not interfere with the initial priority score of NIH applications?

January 10, 2011

NIGMS has released their annual update on the review outcome for NIH R01 applications directed their way for potential funding (see bottom of this post for prior Fiscal Year links). The most salient figure is the histogram of percentile ranks arising from the initial review, identified by whether they were selected for funding or not.
As far as I’ve ever seen, NIGMS is the only NIH Institute or Center that does this. As you can see from the figure, one of the most interesting features here is that we can identify how many “skips” and “exceptions” are in their pool of applications.
Skips refer to grant applications which appeared to score well within the apparent (or published) payline and did not get funded for one reason or other. Exceptions refer to those applications which did not score within the apparent payline but were selected for funding anyway. The latter are substantially more common than the former, of course. We’ve talked about these exceptions (i.e., “pickups”) before.


Fig 1: Competing R01 applications reviewed (open rectangles) and funded (solid bars) in Fiscal Year 2010. [source]

UPDATE: An astute reader noticed that NIGMS had originally posted the FY2008 figure in their blog entry instead of the FY2010. I grabbed it and reposted it, recapitulating the error. This post has now been corrected with the right figure, thanks to Director Berg.
Looks like an effective payline (i.e., everything gets funded) of the 16th %ile, a very good chance of funding from the 17th-20th %ile, and somewhere between 1-in-3 and 1-in-5 chances of being picked up in the 20-25th %ile range. Intriguing that you have a hope and a prayer all the way up to the 35th %ile, if my eyes do not deceive me.
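For anyone who wants to translate that eyeball reading into numbers, here is a minimal sketch of the per-bin arithmetic. The bin labels and counts are made-up placeholders standing in for whatever one reads off Fig 1, not actual NIGMS data.

```python
# Minimal sketch: funding rate by percentile bin, as one would read it off
# a histogram like Fig 1. All counts below are illustrative placeholders,
# NOT the real NIGMS FY2010 numbers.
reviewed = {"1-16": 120, "17-20": 40, "21-25": 55, "26-35": 90, "36+": 150}
funded   = {"1-16": 120, "17-20": 34, "21-25": 14, "26-35": 5,  "36+": 0}

for bin_label, n_reviewed in reviewed.items():
    rate = funded[bin_label] / n_reviewed if n_reviewed else 0.0
    print(f"{bin_label:>6} %ile: {funded[bin_label]:3d}/{n_reviewed:3d} funded ({rate:.0%})")
```

With these placeholder counts the 21-25th %ile bin comes out to about 1 in 4, which is the sort of ratio being eyeballed above.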
Now in the course of discussion some clown asserted that one of the NIH ICs does not take advantage of its prerogative to fund outside of the order of primary review.

My understanding from extensive discussion with numerous program staff at one particular IC is that for that IC, the curve would look *much* steeper than NIGMS’s, with essentially 100% of grants inside the payline getting funded, and then a precipitous drop in the success rate outside the payline, with much fewer “pick-ups”. This obviously represents a policy preference for study section rankings over program staff discretion.

One would imagine that if this is so (and not just a defensive, inaccurate PR message from the IC in question), then yes, Program at that IC really does not interfere with the priority order set by initial review.
And I have a problem with that. Program staff at ICs are quite obviously permitted to step outside the order created by initial peer review. They do so on a routine basis at other ICs; we can see this from the data presented by NIGMS, going back through the prior Fiscal Year figures.
This is a good thing. The problem with initial review, as presently constituted, is that it is an inherently conservative process. Those who have already been successful under the system are those who are brought in to opine on the quality of the next rounds of applications. People being what they are, they are going to prioritize scientific issues, questions and models that they like, understand, respect and find fascinating. Yes, there is a lot of diversity across reviewers, even those that review apps for a given IC. But still. There is going to be an inevitable similarity that changes only slightly, and only slowly, across time.
There is also the problem that individual reviewers are not in a position to see the broad portfolio issues, nor should they be, since they are supposed to take apps one at a time and assess merit, not attempt to balance the IC's portfolio.
So why the heck would a single Institute or Center abandon its responsibility to oversee the breadth of its portfolio, or to make sure that new trends (or old trends, for that matter) are appropriately represented and distributed in its extramural expenditures?


5 Responses to “What about when Program does not interfere with the initial priority score of NIH applications?”

  1. mikka Says:

    Hot damn, that histogram looks exactly like the 2008 one! Talk about reproducibility!


  2. mikka Says:

    Followup: seems to be their screwup. Compare the 2008 figure with the 2010 one.
    Makes me all the more curious to see the real 2010 one. The 2009 one had the Recovery Act effect depicted.


  3. DrugMonkey Says:

    Nice catch, mikka. Director Berg says he posted the wrong figure and has supplied a corrected version…


  4. Jeremy Berg Says:

    mikka: Let me join DrugMonkey in thanking you for your good catch.
    DrugMonkey: Thanks for correcting the figure. We will update our post soon.


  5. DK Says:

    I don’t get it: How is it even possible for a histogram of percentile scores to be so flat? Even if the distribution of the scores is hugely skewed (we know it is), the frequency should increase monotonically with increasing percentile. Here it is either flat or with a slight trend toward decrease. This seems to be the same for all other years. I say they are not true percentiles then.


