Some NIH grants should fail

May 17, 2010

We have another version of bash-the-R21 brewing; for previous posts from PhysioProf on the topic see here, here and here.
The discussion ended up touching on the paralytic meme that it is impossible to get an R01 funded without copious preliminary data testifying, specifically and empirically, that a large part of the proposal is or will be supported.

It doesn’t help me to say, “You should go for the R01” when I have what I think are great ideas, prelim data to show the ideas are feasible, but not enough to justify an R01 or defend against “fishing expedition” criticisms. Not to mention a publication track record. I don’t resent this — if I was giving a PI $500K I’d give it to the PI who has years of great publications and boatloads of preliminary data too.
But this is why I (in a basic science dept) am primarily applying for NSF and R21s for now, hopefully in 1-3 years I’ll have the data for the R01. Is this wrong?

See what I mean about “paralytic”?

The commenter, MBench, goes on to ask:
Should I be floating out an R01 just in case I get the review panel that decides my project is terrific despite my few-pubs-as-an-independent-PI-and-bare-bones-prelim-data?

Three out of three Internet blovinards agree…
pinus: yes
whimple: yes
YHN: yes
…but musing on this topic returns me to one of my usual themes: the underlying reason why the NIH/CSR review panels are so fixated on Preliminary Data. The best I can deduce is that it derives from a belief that part of the job of review is to weed out those projects which will “fail” from those which will “succeed”. And this belief is pervasive, let me quickly acknowledge. It is frequently at the root of review comments about investigator juniority/seniority, environmental support, past productivity and the like.
Sometimes the concept of failure is pretty explicit- as in “That experimental design will never provide any clear evidence bearing on the hypothesis” or “That person, in that environment, cannot possibly build, validate and deploy that research capacity in the requested time for the requested money”. But even where the criticism is reasonably explicit, it should be clear to us all that it is merely an empirical prediction. Sometimes the criticism comes with good supporting rationale, but sometimes it is supported only by lazy and unthinking rationale. Personally, I urge reviewers to focus on exactly what it is that they are predicting, and to be clear about whether they are confusing their experimental-outcome predictions with an evaluation of whether a good empirical test of various experimental-outcome predictions will be conducted. The latter is an appropriate area of concern; the former is far less frequently a valuable critique.
[Image: faceplant250.jpg. Caption: “If you aren’t crashing, you aren’t trying” (source)]

Nevertheless, one thing that I think has an insufficiently prominent role at the study section table is the consideration that failure can be a good thing! We rarely even consider what the notion of failure means. There are various failure modes in the NIH funded grant game, of course. Failure can mean that a junior investigator is awarded an R01 for 5 years, has trouble building a research program, the studies come out to crap, s/he never publishes any data and never seeks any subsequent research funding. Does that cover more or less a complete and utter failure of an NIH grant award? No data, no pubs, no subsequent awards based on promising work and no career. Pretty much a zero gain.
We move up from there. Perhaps parts of this scenario succeed and others fail. Maybe it is only that the papers are mostly negative. Or there are papers, but they are only methodological works because the intended investigations mostly turn out crappy, disappointing, frustrating or whatever. Or perhaps the award goes swimmingly with lots of specific pubs… but the investigator is subsequently lost forever to NIH-funded and/or public health related science. Sure, there are some gains associated with the award, but it is not a trifecta of papers, specific tests of the hypotheses and a continued productive research program.
Feel free to contrast this with some versions of the successful NIH grant award. A steady stream of publications which move, robotically, through a series of Specific Aims as originally proposed. Subsequent research awards which build on the work already completed. More awards, more papers and more Specific Aims.
Aims which are duplicated, more or less, in three other labs around the country working on nearly indistinguishable projects. All of them reference and cite each other, djinning up some decent cites and buzz. (Let us not be too snarky, we can admit that the general area is important to the mission of the NIH and to science, it is just that there is nothing uniquely awesome about any of the participating labs’ work and hypotheses…) The investigator goes on to acquire other awards, renews this award and generally becomes just another one of the gang. Is this a “success”? Well, yes it is. Science is indeed incremental- it is hard to predict what is going to later be seen as critical groundwork. Even the most similar of laboratories come up with a shining unique gem now and then. A gem that may distinguish their program forevermore.
But there is that little nagging problem of opportunity cost. What has been overlooked in the conservative rush to fund safe, unrisky me-too proposals that are well supported by the Preliminary Data?
The glorious? The paradigm shifting? The quantum leap in understanding perhaps?
To be honest, I think at the core of their being, many reviewers do recognize the need for a diversity of scientific investigations, from the safer/pedestrian to the risky/high-reward. When it comes to actually reviewing grants, however, I think the nebulous concept of a grant success is given too great a role.
If there is an NIH standard(ish) mechanism which begs for failed grants, the R21 is it. I wonder how many of them succeed and how many really turf out. How many lead to one great paper, how many lead to at least one subsequent grant award, and how many lead to a big fat goose egg?
Remember NIH, if you aren’t crashing some of the time then you aren’t really trying to get better.

12 Responses to “Some NIH grants should fail”

  1. Mike Says:

    I disagree vehemently that a successful grant requires a trifecta of results as you state: specific tests of hypotheses, papers, and continued research program. I agree with parts 1 and 2 of the trifecta, but see no reason at all that a continuing research program has any relationship to whether or not a grant was successful.


  2. whimple Says:

    Wow. I’m in about total agreement here, DM. I’d go a step further and assert that *all grants* should be allowed to fail, including R01s in particular.
    I keep mulling over the statistic that 2/3 of all R01s are new (1R01-x) and 1/3 of all R01s are newly renewed (2R01-x). In the steady state, only 50% of new R01s ever get renewed. Why is that? We know grant renewals have higher success rates than new applications, so it makes sense on the face of it to send in those renewals. Does that mean that half of the funded R01s “failed”, or did the projects just wind up going in different directions but still produce good science? If half of the funded R01s failed to deliver on the promised project outcomes but still led to good science, it doesn’t make much sense for reviewers to get too hung up trying to predict “success” up front, since they are provably not very good at this kind of prediction, and it doesn’t really matter anyway.
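
    A minimal sketch of the steady-state arithmetic behind that 50% figure, assuming the 2/3 : 1/3 split holds and counting only first (Type 2) renewals:

        newly renewed / new = (1/3) / (2/3) = 1/2

    so for every two new R01s entering the pool, only one ever comes back as a funded renewal.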


  3. becca Says:

    I think the ‘continued research program’, while directly appealing to those who have an interest in being career scientists, is only an indirect measure of the real goal. It seems to me that the real goal is to advance the forefront of knowledge furthest. You can get, in some respects, deeper and more penetrating insights into the world if you are building off of 40 years of experimental tools and expertise. Of course, it is a valid question whether there’s a tradeoff in the form of a relative loss of the sort of creativity that gets incubated in environments where the synergy of more superficial expertise from many areas comes together. Some eventually fruitful ideas never get tried if you know too much about the area you’re doing research in.
    I don’t think anyone has magic answers here on how to get ‘the most important knowledge’. But I think it’s silly to pretend all hypotheses and papers are equally valid/interesting/fruitful, which is kind of what you do when you say only tested hypotheses and published papers count as production.


  4. Jim Austin Says:

    You should check out the Science Careers series on Audacity in Science, especially parts 3 and 4 (on funding), and the commentary Eleftherios P. Diamandis wrote in response, Audacity is Over-Rated.
    Jim Austin (full disclosure: I’m the editor of Science Careers; tweeting as @SciCareerEditor).


  5. daedalus2u Says:

    What “high risk” means is that there is a 0.01% chance that reviewers might look foolish for awarding grant money to this proposal.


  6. DrugMonkey Says:

    I disagree vehemently that a successful grant requires a trifecta of results as you state
    I was not trying to say that any particular constellation of outcomes was *required* for a project to be viewed as successful. More that any of these three things could be considered evidence of success, and if you hit all of them, the project is considered very successful indeed.
    see no reason at all that a continuing research program has any relationship to whether or not a grant was successful.
    I don’t know how you can see a program built in the first competitive funding interval go on to produce a second strong interval and NOT score that as a success of the first interval of funding.


  7. MBench Says:

    Well put DM. But how much of the previous advice to, if I may paraphrase, damn the stock critiques and get the R01 out there, comes from this kind of (as I read it) plea for perspective, and how much is grounded in realistic expectation of what will happen at review? Or, to put it differently, how much of this thinking actually occurs in study sections?


  8. DrugMonkey Says:

    how much is grounded in realistic expectation of what will happen at review?
    I argue from at least three perspectives. First, by pointing to the longitudinal NIH stats showing that substantial numbers of investigators receive R01s as their first award. One example is here.
    https://drugmonkey.wordpress.com/2007/06/27/new-investigator-dont-cut-yourself-off-at-the-knees/
    Second, by indicating that I have personally received grant awards on the strength of applications that contain StockCritique bait.
    Third, by indicating that I have served on study sections from which grants were funded that, likewise, contained obvious StockCritique bait.
    If there is one thing I know for sure it is that the application that you do not submit has zero chance of being funded. If you send one in, there is a chance at getting an advocate or two to see past the bait and go to bat for your brilliant ideas.


  9. whimple Says:

    A new investigator really has no idea what their study section likes or dislikes. The only way to find out is to put in an application and let them tell you. Waiting to make your first grant “perfect” before submitting is problematic because you don’t know what the definition of “perfect” is in your particular context. For all you know, what you have is already perfect. You should also consider that your “fresh new investigator smell” is going to wear off pretty quickly, regardless of whether you still get to check the little box or not.


  10. nard Says:

    The purpose of every experiment is to DISPROVE the hypothesis. Unfortunately, if you are any good at this, you don’t last long in science.



  11. The purpose of every experiment is to DISPROVE the hypothesis.

    BZZZZZZTTTTT!!!!!!!!!! MOTHERFUCKING WRONGO!!!!!!
    Experiments are designed so that they are capable of disproving a hypothesis if the hypothesis turns out to have been false. However, it is not the purpose of an experiment to disprove the hypothesis. The experiment has still served its purpose if it fails to disprove the hypothesis, and thus leaves the hypothesis still in play as possibly correct.


  12. DrugMonkey Says:

    https://loop.nigms.nih.gov/index.php/2011/06/02/productivity-metrics-and-peer-review-scores/
    Notice the small number of grants with zero resulting pubs? Failed projects?


