Your Grant in Review: IC Funding Caps

September 6, 2007

We’ve been hearing for some time now that the NIH ICs are using per-PI limits on grant awards as one of their strategies for dealing with the budget crunch. As always, the goal for the ICs is to keep as many of “their scientists” as possible minimally funded in these times of low funding success rates. This is just one of a litany of strategies that includes the across-the-board budget cut, the R56 “bridge” award for those who can’t get a fundable score and are dropping below a cut line of total NIH funding ($200K direct is the number I’ve heard), and the prioritizing of small grant mechanisms. The award-limit strategy is, or should be, harder to “take”.

The lamest situation would be a PI who receives a score clearly within the funding line yet Program says “Sorry, you already have 2 (or 4!) R01s from the NIH and we’re just not going to fund that one. Too bad.” I dimly recall a situation from a number of rounds back where a new grant likely got a very good score (“likely” because one knows the post-discussion recommendations but not the eventual full-panel average) and the grant hasn’t funded at last CRISP check. So it can happen in reality. The gray areas, of course, are when Program staff fail to do a “pickup” where they otherwise would likely do so. Examples include scores in the soft-fund zone between the published hard line and the eventual IC rate for that round, and POs who say “Well, I was going to put your current R21 app, with its unfundable score, up to Council because I think it is worth the argument. However, I see you have an R01 …-01A2 pending review with a decent prior score, so let’s see how that one does; if it’s a no-go I’ll for sure put this one up next round.” Yeah, I heard about one of these recently.

This brings me to the grants I’m reviewing. As most know, real-world considerations like how much funding the lab already has, or the overhead rate of the local institution, are not supposed to contaminate first-line review in the study section. Nor are questions of whether the IC will really fund the grant supposed to come into it (for example, NIDA grants that appear insufficiently anti-drug, NIMH grants that were too basic in the first Insel purge, etc.). Naturally, such considerations contaminate reviewer behavior all the time. Well, perhaps not “all the time” in the sense of every grant or every reviewer or every review round in a section. But it is frequent enough that if you know 4-5 people on different study sections you are going to hear of cases of people willing to make comments to this effect in open discussion. Never mind what others are thinking in private.

So suppose you’ve been beavering away like a good little PI, shooting out grants and revisions endlessly. Suppose you finally get some hits, renew that competing continuation and get that new R01 funded on the A2 at last. What happens to your other grants you’ve been working on? How does the reviewer deal with this application?

Well, as I’ve discussed, the grants in the reviewer’s pile are, in some sense, in competition with each other if the reviewer is doing the job right. The review panels are not supposed to turn over a bunch of identical scores to the ICs for them to sort out; we, your peers, are supposed to be doing the sorting. Suppose your now somewhat superfluous (in the eyes of the per-PI-capping ICs) grant is better than the next applicant’s. This grant, which really won’t be funded, then has the potential to push a lesser, but still meritorious, grant outside the funding line, or even outside the gray area, beyond the ability of Program to pull it up.

And what is the reviewer to do? After all, the “rules” are clear: just focus on the scientific merit. But we fail to do that all the time, particularly when it comes to the “A2” or the “competing continuation”. Should a reviewer be concerned that lock-step behavior is going to result in neither grant being funded? Or should reviewers be a little more, er, practical?

10 Responses to “Your Grant in Review: IC Funding Caps”

  1. writedit Says:

    NIMH has a stated policy on the number of awards a given PI can hold, but I’m not aware of formal public declarations at other ICs.

    But more importantly, your fixation on funding is worrisome. That “we” – meaning the reviewer herd – fail to concentrate on the scientific merit of each application (judged against itself, or against what it could be if prepared properly) rather than the funding milieu is no excuse for you to do the same. Stop it. Now.

  2. whimple Says:

    You, as a solitary reviewer, don’t have the perspective that Program has on the totality of grants they are receiving that would enable you to usefully game the system. Suppose, for example, that given the State of the Field, studies X, Y and Z are meritorious, and doable, and *obvious* (albeit still innovative relative to previously done studies). Program receives 3 different R01s from 3 different study sections that all propose X, Y and Z. All 3 of these grants get excellent scores, as they should. What should Program do, in these tight money times? Note that *uniqueness* is NOT one of the five study section evaluation criteria, nor does the study section even have the capacity to evaluate uniqueness. I’d argue that Program should pass on one or two of the duplicative R01s, even though they all have “fundable scores”. I’d go further and suggest that it would serve Program’s best interests to pass on the R01s from the otherwise better-funded labs, in order to preserve diversity of scientific thought in the pool of “their scientists”. Why not just evaluate the science and let Program do their job?

  3. drugmonkey Says:

    “your fixation on funding is worrisome… Stop it. Now.”
    and
    “Why not just evaluate the science and let Program do their job?”

    Indeed. I had some comments on this a while ago and you are both quite right, in a way.

    What I am trying to explore here is twofold. First, further subtle explication of why grant review is not the simple process that some people, who have never themselves been responsible for making the actual decisions on a pile of apps, seem to think it is. Second, my usual attempt to show why natural, common human behavior can work against the way the system is “supposed to work”.

    Nobody but nobody “just evaluates the science”. We are creatures of (nearly?) inescapable biases, and it shouldn’t just be those trained in experimental psych departments who know this by now! I submit that the bias for revised applications is severe enough that you can demonstrate it just by CRISP-ing funded grants. Participation on a study section only reinforces this impression. Second, everyone leans toward the funding line and toward a Gestalt “fundable/not fundable” binary on grants. The stacking of scores around the perceived funding line, and the continued mantra of SRAs to “flatten the distribution”, is again proof of this. Yet we are being told explicitly not to make funding judgments. Some are better at this than others and, no doubt, each person thinks they are better at it than they in fact are. Since the outcome of peer review is our only metric, well, obviously these are untestable hypotheses…

    writedit, specific point: Your contention that apps should be judged against themselves is just flat wrong. This is not what we are being asked to do; we are being asked to judge on a per-round basis. This puts grants in competition with each other. Funding at the IC is per-round, so ditto. Scores are not to be benchmarked against the prior score for a revision, so even if the application is improved the score can go backward. Again, people have a hard time overcoming score-benchmarking tendencies.

    whimple, specific point: Of course reviewers can “usefully game the system”. There are no guarantees, because you have to get the other reviewers and the panel to go along with you, but c’mon. A single reviewer can certainly torpedo an application’s chances. Harder, but a single reviewer can rescue one as well. Remember, it is not just the hard funding line we are talking about, but also making it possible or very difficult for Program to pull one up. Your example is a little confusing to me, first because it seems highly unlikely as a scenario and second because I don’t see how it applies to my point. We get grants that may be fairly different from each other and assigned to different ICs. They are still competing for the same percentile rankings.
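
    A minimal sketch, with made-up numbers, of what I mean by “the same percentile rankings” (assuming the old 100-500 priority-score scale, lower = better, and the commonly described 100*(rank - 0.5)/N percentile convention; illustrative only, not an official NIH spec):

        # Hypothetical priority scores from one panel's scoring pool (illustrative only).
        def percentile(my_score, pool_scores):
            """Rough percentile: rank within the pool, via 100 * (rank - 0.5) / N.

            The real NIH percentile base also folds in the panel's earlier rounds;
            this is only meant to show that very different grants end up ranked
            against the same pool.
            """
            ranked = sorted(pool_scores)           # lower priority score = better
            rank = ranked.index(my_score) + 1      # 1 = best in the pool
            return 100.0 * (rank - 0.5) / len(ranked)

        pool = [135, 142, 150, 155, 158, 163, 170, 181, 190, 205]  # made-up scores
        print(percentile(150, pool))  # 25.0 -- an unrelated app scoring 142 still outranks it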

  4. PhysioProf Says:

    “NIMH has a stated policy on the number of awards a given PI can hold, but I’m not aware of formal public declarations at other ICs.”

    NIGMS has an explicit policy for Council decisions on applications from “well-funded laboratories”, which they define as “those with over $750,000 in direct costs for research support, inclusive of the pending application”. This policy states that competing continuations “will receive normal consideration for funding”, but new applications will be funded “only when distinctively promising work will be pursued” as “[t]he Institute’s default position is to not pay such applications”.

    http://www.nigms.nih.gov/Research/Application/NAGMSCouncilGuidelines.htm

    I have been told independently by more than one Program Officer at NINDS that they have a “hard” payline, and that *all* applications within the payline, without exception, are funded. Whether this policy has been, or ever will be, reconsidered I don’t know.

  5. writedit Says:

    “we are being asked to judge on a per-round basis”

    This may be the case in the study section discussion, but the applications you are assigned to evaluate can only be assessed individually.

    All these musings about funding politics & trends outside the review process are fine. I just want to be reassured that when you sit down with an application – irrespective of whether it will likely be discussed & scored – you are considering those specific aims, the rationale for working toward achieving those aims (& the significance of achieving them), the progress already made, whether the methods proposed will actually achieve the stated aims, whether the investigative team has the expertise to achieve the stated aims, and whether the equipment and facilities to conduct this work are available to the investigative team. Your assessment of these factors is what is important to the PIs submitting these proposals and is why you are a valued resource to the scientific community.

  6. drugmonkey Says:

    “This may be the case in the study section discussion, but the applications you are assigned to evaluate can only be assessed individually.”

    Sorry, writedit, but you are quite incorrect on this score. At least as far as the instructions given repeatedly and firmly by our SRA go. Conversations with colleagues suggest that this is common elsewhere as well. Is this really what your reviewing PIs are telling you?

    Think about it for a minute. Doing the “pure” evaluation is very likely to produce a distribution of scores that is closer to a normal curve than to a flat distribution. The ICs want a flat distribution, which is why they press reviewers to spread scores out.
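
    A minimal sketch (simulated, made-up numbers) of why the shape of the distribution matters: if “pure” evaluation piles scores up near the perceived payline, neighboring applications end up separated by less than reviewer noise, while the same rank order spread over the full range gives Program something to discriminate with.

        import random

        random.seed(1)
        n = 60  # applications in a hypothetical round

        # "Pure" evaluation: scores bunch near a perceived payline (~180 on the
        # old 100-500 priority-score scale), roughly normally distributed.
        clustered = sorted(random.gauss(180, 12) for _ in range(n))

        # What the SRA asks for: the same number of apps spread across the range.
        spread = sorted(random.uniform(110, 400) for _ in range(n))

        def gap_at_rank(scores, k=20):
            """Score gap separating the k-th and (k+1)-th best applications."""
            return scores[k] - scores[k - 1]

        print("clustered gap near the line:", round(gap_at_rank(clustered), 1))
        print("spread gap near the line:   ", round(gap_at_rank(spread), 1))
        # The clustered gap is typically well inside scoring noise; the spread
        # one is several points, i.e., a rank order the IC can actually use.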

  7. writedit Says:

    I think we’re talking about different things. I’m referring to the actual review/critique, not the score.

  8. PhysioProf Says:

    “I’m referring to the actual review/critique, not the score.”

    I’m not a philosopher, but isn’t it the case that *all* judgments–aesthetic, scientific, moral, etc.–are, by necessity, made relative to some standard? And in the case of grant review, doesn’t the standard have to, by necessity, derive ultimately from consideration of other grants?

  9. drugmonkey Says:

    “I think we’re talking about different things. I’m referring to the actual review/critique, not the score.”

    Yes, that is a bit different. In this sense, yes, the benchmarking should be against the given reviewer’s sum total of experience with grant applications and professional judgment of the science. And yes, in theory a reviewer should give each and every application the fullest and most honest review possible.

    This doesn’t always happen. Many reviewers really short-change the review of apps they intend to recommend for streamlining. Or, as I’ve previously discussed, a reviewer is using the critique to communicate, and sometimes the audience is not just the applicant.

    We can then discuss whether this should indeed be the case. Should the reviewer try to “game the system” to produce a desired outcome (one interpretation)? Part of this gaming may lie in how the critique is written. One can choose what to emphasize, what to minimize, and what to ignore outright in reviewing a proposal.

    Or is it a matter of actually reviewing grants effectively: figuring out that convincing a panel of peers of the merits of a grant is not as simple as reading out a list of merits and demerits in the application as written? That it is necessary to speak the right language and to respect the outcome effects of certain behaviors?

  10. drugmonkey Says:

    Well, I’ll be. That “dimly remembered situation”? Looks like they finally picked it up, some 2-3 rounds after it should have been funded.

    I’ve occasionally proposed that they should put grants that just miss funding into a holding pattern and pick them up later, instead of churning through the revisions and all. I guess they sometimes do that.
