NIH Administrators Ignore the Advice of Peer Review Panels- OH NOES!!!!!

September 23, 2009

An article in the NYT [h/t: @salsb] is breathlessly aghast.

Managers at the National Institutes of Health are increasingly ignoring the advice of scientific review panels and giving hundreds of millions of dollars a year to scientists whose projects are deemed less scientifically worthy than those denied money.

The article gets a little better. It goes on to detail the NIH's defense against the charge (see the writedit link below), which boils down to "we're saving the new investigators". But it also maintains a skeptical tone, as though something were…wrong about Program re-shuffling the order of initial review when funding grants.
There is nothing wrong with this per se; in fact, it is a good thing to have a multi-layered decision process.


I have commented before on the role of NIH Program Staff in making funding decisions. See the repost here, because there is some foofraw about the differences between “payline” and “success rate” that plagued my original formulation. The problem is that study sections are not perfect and are subject to certain biases (entirely unintentional in many cases). This is the deal when human decision making is involved. So I endorse multi-layered decision processes.
In that post, I defended the general structure of NIH's Program interests:

Programmatic priorities dictate that something other than the "best possible science" gets funded all the time. An Institute may decide that any of a whole host of issues are underrepresented in their portfolio for various reasons, both internally scientific (e.g., Council recommendations, meeting or symposium discussions (Program attends meetings!), influential reviews, etc.) and external (e.g., big media splash on some issue, Congressional "interest" via inquiry, Congressional mandate, etc.). The Institute may decide that their portfolio is underrepresented with PIs of various gender, ethnic and geographic descriptions, under/overrepresented with grant mechanisms, New Investigators, etc. The Institute may decide that they "have an investment" in a given research program or resource and choose to keep it running. This really outrages people who fall just off the funding line and don't get their applications "picked up", as you can imagine.
Grow up. This is why the Institutes exist. The notion of pure investigator-initiated science is a good one but, much like "democracy", it can't be carried to the extreme. Scientists, and the scientific enterprise, exhibit well-discussed conservatism in many ways; see the Nature editorial about Nobel-Prize-destined work being passed over. This is unsurprising given that we are human. We have a tendency to understand best the scientific models and domains that relate to our own work. We have a tendency to stick to these models and domains, particularly as our scientific careers mature. This is natural. But it means that funding science according to the priorities of those doing the science leads to a suppression of innovation and novelty. Not to mention health domain coverage, the interest of the National Institutes of Health.

I concluded with a criticism:

With that said, there is a problem with Program's behavior in that it is almost perfectly opaque. There is very little way to determine how many grants have been "picked up" at all. Imprecision in the budgeting/prediction/score-outcome process means that the number of grants funded in perfect line with the priority scores can vary due to unexpectedly low numbers of high-scoring grants per round (percentiling is across three rounds), high-scoring grants that meet Program priorities, etc. In any case, Program is very loath to explain its "pick up" reasoning in specific terms, no doubt hoping to avoid lengthy debates, Congressional inquiry and even lawsuits from someone who didn't get funded. On balance this seems silly. If Program is going to assert a priority, do so honestly and forthrightly. Just say: we picked up X number of women PIs and Y number of New Investigators and Z grants between the Mississippi and the Sierra Nevada! And then explain why. If the reason is good enough to use, it is good enough to defend, no?

Well, later events provided a defense of the Program interest in funding New Investigator grants. I’m not a big fan of the “affirmative action” language but it is apt: New Investigators were getting screwed by study section and the NIH finally got serious about redressing the bias against New Investigators. In the words of the prior NIH head, the Great Zerhouni:

[program directors] came on board when NIH noticed a change in behavior by peer reviewers. Told about the quotas, study sections began “punishing the young investigators with bad scores,” says Zerhouni. That is, a previous slight gap in review scores for new grant applications from first-time and seasoned investigators widened in 2007 and 2008, Berg says. It revealed a bias against new investigators, Zerhouni says.

[Figure: first-time awardee data (Science, Nov. 2008). Credit: NIH]
The same argument is the NIH defense against the current foofraw laid out in the NYT article. Fortunately, the incomparable writedit already snooped out the background on this story and wrote a post on a Government Accountability Office (GAO) probe [pdf; go read] of the NIH behavior vis-à-vis funding "exceptions" to the priority score order.
[Figure: NIH pick-up statistics, re-plotted from the GAO report data (data source)]
Looking at the NIH's response to the GAO appended at the bottom of the report, I was struck by how ineffectual* their graph was at making their point. So I re-plotted the data. I think this version makes it much more intuitive that the majority of the 2007 OMG-WTF effect on the total number of out-of-order funded grants comes from New Investigator pickups. Those pickups were, I will note, a response to a decrease in New Investigator success at the point of primary review; see the first graph in this post.
I buy this defense against the charge that FY 2007 somehow saw a huge increase in the number of grants pulled out of line for funding. But the analysis from the GAO seems to ignore the dramatically changed funding (and therefore "payline") climate. There are more issues here to discuss. Many. And such issues would go a long way towards 'splaining whether the tone of the critique is warranted.
Still, I am a little suspicious of the NIH's position that they do not know, and do not care to know, more about their systematic processes and results for picking up apps out of line. That sort of thing, intentional ignorance of the actual function of your enterprise, is not cool. Maybe it's the data geek in me, but if you had all these data about how the units under your managerial responsibility perform, wouldn't you want to know?
__
*[added] It would also be of interest to mention these data showing that the NIH funds about 9,500-10,000 Research Project Grants (of which around 6,000 are R01s) every year. I'm not entirely sure we're talking about the same population (R01s versus RPGs versus R01-equivalents) as the denominator for the above-described exception data, so it would be nice to have the workup in the same place. What fraction of RPGs and/or R01s are within the payline, how many are skipped and how many are exceptions?
UPDATE: The director of NIGMS came by to point out that his Institute goes ahead and publishes its distribution of funded R01s by percentile rank of the initial priority score. I am so enthused about this! These are exactly the sort of workups of funding behavior that would form the basis of the type of oversight the GAO report is demanding. I don't see where there is any harm, and it would really focus the auditing eye. We drones don't necessarily need to know, but I'd think someone should be asking about those 3-5 grants being funded at ranks north of the 35%ile. There may be very good reasons, but they should be on the table.
FY 2008 data are below; click here for FY2007, FY2006 and FY2005.

[Figure 1: NIGMS R01 applications reviewed (white rectangles) and funded (black bars) in Fiscal Year 2008. All competing applications eligible for funding are included. (source)]

21 Responses to "NIH Administrators Ignore the Advice of Peer Review Panels- OH NOES!!!!!"

  1. Greg R. Says:

    Quite recently, a Program Officer called me to clarify something. Later we chatted a little about tangential topics (revolving around the quality of the peer review process) and the PO expressed frustration over review panels suppressing whole scientific disciplines. Perhaps such frustration has started translating into decisions, and ignoring scores is not tied only to promoting new investigators…



  2. This is a totally rational response on the part of the ICs to the pernicious combination of the tight paylines and the absurd idea that there exists any actual objective difference in the scientific merit of a 10%ile grant and a 20%ile grant.


  3. Pascale Says:

    Completely fucking agree with CPP above.


  4. whimple Says:

    I wonder whether the data above do or do not include the higher payline some institutes have for new investigators. Are the NI pickups being counted for NIs that missed not only the general payline but also the NI payline?


  5. DrugMonkey Says:

    Good question, whimple, but I seem to recall that this Zerhouni brag about their saving of NIs for 2007 actually predated the formal changes to NI paylines that the ICs started generating. My suspicion is that the only way the second dataset generated in response to the GAO report makes any sense is if we are talking about the general-purpose paylines.
    CPP and Pascale: yes, and this needs to be hammered over and over again. Not just for CongressCritters and beat writers, but also for our peer scientists who object to anyone other than themselves getting an exception funded.



  6. BTW, what about Institutes that don’t even have paylines? For example, here is NIMH’s RPG funding policy for 2009 (it has been like this for years):

    In general, NIMH assumes that research applications that fall below the 20th percentile are scientifically meritorious and that sufficient funds are available to support up to 80 percent of these new and competing research applications. Council and program staff may selectively recommend payment of grants that fall in this range, as well as beyond, based on: 1) Institute and division priorities; 2) balance in the existing research portfolio; 3) new investigator status (see below); and 4) availability of funds. Additional priorities include: first time grantees applying for their first renewal with the goal of avoiding serious attrition or closure of new laboratories; and, established grantees with insufficient other support with the goal of avoiding the loss of outstanding laboratories.

    Note that nowhere do they say that the actual %ile of a grant is a factor in deciding which of the grants inside the 20%ile to pay. This very strongly implies that NIMH has institutionally recognized that there is no fucking difference in scientific merit among any of the top-20%ile grants.


  7. DrugMonkey Says:

    I always assumed that ICs that don't publish their paylines actually had them; they just wanted to avoid the usual carping from people in the gray zone…
    Someone who served on Council might know something about that, if you happen to know anyone…



    Well, if NIMH has a real bright-line payline somewhere inside of 20%ile within which they fund nearly everything, and outside of which they fund almost nothing, but still uses that verbiage, it wouldn't be a stretch to say they lie. This is very different from simply not publishing a payline, which is what NIGMS does.


  9. DrugMonkey Says:

    A soft-line payline would be a handy point of reference, much as it works for the more public ICs. Why is this so complicated? They have a variable pool of applications, so they allow for some slop around the margins.
    My assumption is that this process of advocating for pickups works internally as well, with the line POs, or at least divisions, having to compete against each other to get more of the grants within their domains funded. It would be pretty unmanageable for them to compete against each other for each award. Far easier to set a basal level that is intentionally below what they know they can afford and then to only fight over the last little bit…


  10. DrugMonkey Says:

    Well, if NIMH has a real bright-line payline somewhere inside of 20%ile within which they fund nearly everything, and outside of which they fund almost nothing, but still uses that verbiage, it wouldn’t be a stretch to say they lie.
    Their failure to completely describe their criteria is hardly a lie.

    Council and program staff may selectively recommend payment of grants that fall in this range, as well as beyond

    The "may" is a pretty good tipoff that these are not their only considerations, and it strongly implies that they are not even their major considerations. I read this policy as their declaration of intent to modify, but not to totally ignore, the priority rankings.
    One campfire consideration is whether people have heard a lot of complaining about NIMH skips, i.e., failures to pick up grants in what is seemingly the funding zone. These should be about as common as the "phew, NIMH picked me up way out at the 20th %ile" cases if they are completely ignoring the ranks within the 20th %ile, no?


  11. Jeremy Berg Says:

    As noted, NIGMS does not use paylines (although the priority score/percentile is certainly a key factor in determining funding). For several years, we have been posting graphs showing the percentage of applications funded as a function of percentile or priority score. See http://www.nigms.nih.gov/research/application/trends for these graphs for fiscal year 2008.


  12. DrugMonkey Says:

    FANTASTIC, Director Berg, fantastic. Is this a common practice and I am just not searching hard enough on the various IC sites? Or are you the only IC that does this?


  13. Jeremy Berg Says:

    As far as I know, NIGMS is the only IC that posts this information.


  14. frog Says:

    I’ve wondered if some “randomization” might not improve results. To a certain extent, the success of a scientific endeavour is “unknowable”. To that extent, any process that depends on predicting success may not only be useless, but positively counter-productive.
    At all levels, folks are going to pick techniques, investigators and disciplines that, in the past, have proven successful. That is praiseworthy to the extent that the past is a good predictor. But we know that basic breakthroughs are NOT predicted by the past — who gets a second Nobel? Breaking the old dogmas and expectations is an important issue in science.
    So, some amount of randomization would be very good. A simple approach is to set aside, say, 10%, and allocate it randomly among, say, the top 50% of applications. You set some minimum standard, and then intentionally give funds to those that would not normally receive them. Something like the sketch below.
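    A very rough sketch (the function name, pool sizes and percentiles are all invented for illustration, not any real NIH process):

        import random

        def award_grants(apps, n_fundable, set_aside_frac=0.10):
            # apps: list of (application_id, percentile) tuples sorted by
            # percentile, lower being better. All numbers invented.
            n_lottery = int(n_fundable * set_aside_frac)
            n_by_rank = n_fundable - n_lottery

            # Fund the top of the rankings as usual.
            funded = [app for app, _ in apps[:n_by_rank]]

            # Lottery pool: top-50% applications that met the minimum
            # standard but missed the regular cut.
            pool = [app for app, pct in apps[n_by_rank:] if pct <= 50.0]
            funded += random.sample(pool, min(n_lottery, len(pool)))
            return funded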



  15. I was at one of those FixingPeerReview town hall type dealios a few years ago, and one of the attendees–a famous dude–stated that it would make sense to take twice the number of fundable grants from the top of the percentile rankings, and then flip a coin for each one.
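    In code, that amounts to a draw from an oversized shortlist. Note that a literal coin flip per grant would fund a variable number of them, so a fixed-budget version just samples exactly N (a sketch with invented names, not anyone's actual procedure):

        import random

        def coin_flip_awards(ranked_apps, n_fundable):
            # Shortlist twice the fundable number from the top of the
            # percentile rankings...
            shortlist = ranked_apps[:2 * n_fundable]
            # ...then fund a random half of the shortlist.
            return random.sample(shortlist, n_fundable)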


  16. frog Says:

    CPP: Exactly. You’re trying to search out a large landscape, so large that you don’t have a chance for full coverage.
    Where you’ve had “clues”, you put extra effort. But if you focus on that area, you’ll miss most of the landscape — you need to look at bad areas as well.
    It’s like the old joke. A man is searching intently at night in a parking lot under a lamp-post.
    A stranger asks, “What are you looking for?”
    “My wedding ring.”
    “How did you lose it?”
    “Oh, I probably dropped it while we were playing softball earlier today.”
    “Then why are you looking here rather than on the field?”
    “The light is better.”


  17. Omatix Says:

    Wait, why are there different numbers of grants scored at each percentile level in that last graph? Surely all the open boxes should reach precisely 1% of the total number of submitted grants, right?


  18. DrugMonkey Says:

    Omatix:
    Percentile ranks for each grant application are relative to a pool of reviewed applications that is independent of which NIH Institute or Center is assigned for potential funding (except in the case of a few in-house study sections maintained by an IC).
    In the generic case, the application's priority score from initial peer review is percentiled against a moving pool of the scores from that study section for the current round and the two previous rounds. In some cases the percentiling can be relative to the "CSR base", which I assume is all grants of that mechanism reviewed in CSR for that round and the two prior rounds.
    Since grants can be referred to a given IC from multiple study sections, there is no reason to expect perfect distributions.
    I’m sure there are a lot of complicated exceptions and whatnot but I believe that is the basic outline.
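    A toy version of that percentiling, assuming the simplest "what fraction of the pool scored better" definition (the actual NIH formula differs in its details, and the scores here are made up):

        # Hypothetical priority scores (lower is better) pooled from one
        # study section across the current and two prior rounds.
        pool = [15, 22, 31, 40, 18, 27, 35, 50, 24, 29]

        def percentile_rank(score, pool_scores):
            # Simplified stand-in for the real formula: the percentage
            # of pooled scores that are better (lower) than this one.
            better = sum(1 for s in pool_scores if s < score)
            return 100.0 * better / len(pool_scores)

        print(percentile_rank(24, pool))  # -> 30.0

    Because each application is percentiled against its own pool, an IC's stack of funded grants is drawn from many different pools and need not line up in neat 1% bins.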


  19. Omatix Says:

    Sounds fair enough. I hadn't considered that the percentile translation might be with respect to a group other than the one that was scored, but I'll let them get away with it. This time. 😉


  20. ap Says:

    The data shown on awards funded outside the payline are fascinating. Any idea how far outside the payline NCI funded R01s in 2009? It would certainly be interesting to know.


  21. foo Says:

    Crap. What suddenly happened at 21% in Figure 1? The success rate there is below one half, while it's above one half at 20 and 22%. Hope it's noise, because I just got a 21% on my summary statement.
    *head asplode*


