Show me the money, Francis!

May 15, 2013

Later on Drugmonkey, we will be discussing this.


Later……

So this linked set of slides describes analysis of the DP1 award mechanism. Said DP1 was created to “address concerns that high risk, visionary research was not being supported due to the conservative nature of existing NIH funding mechanisms”. So, instead of gee, I dunno, FIXING this problem they did what they always do and created a new mechanism.

It was supposed to be a new kind of review:

Based on the premise that “Person Based” application and review processes would reward past creativity and encourage innovators to go in new directions

The DP1 was open to all career stages and required only a 5 page essay describing how awesome and visionary you are. There was no requirement to submit a budget and the awards were for $500k direct costs for 5 years.

This analysis compares the DP1 awardees 2004-2006 with 1) matched R01s (matched on PI stage and background, topic, local institution and time frame; combined budget was 50% of the DP1 Pioneers’), 2) random all-NIH portfolios of the same total costs (but not matched on other PI characteristics), 3) HHMI investigators from the 2005 competition ($600K direct costs; reappointment rate is 80% after 5 yrs and HHMI averages 15 yrs total duration) and 4) the DP1 applicants that weren’t selected.

As you will see from the slides, there is a tremendous degree of overlap in the distributions of the outcome measures they are using. Tremendous. So mean differences need to be taken with a huge dose of “yeah, but all is never held completely equal”. Still…

1) DP1s produce the same number of publications per dollar as matched R01s, in higher Scimago-ranked journals, and the awards have a higher h-index rating (DM- interesting to give a grant award an h-index, isn’t it?). “experts assess DP1 research as having more impact and innovation”.

Yeah. Big whoop. You select a group for innovation and awesomeness, take them off the cycle of grant churning by handing them a 2-3 grant award all at once (for the cost of a 5 page essay and a Biosketch, plus some big swanging reference letters) and they look incrementally better. See overlap, they aren’t awesomely better. Compared with labs unselected, fighting the regular grant wars and with half the money. Color me severely underimpressed by this analysis of the DP1 program. All it does is tell us to give more people the same damn deal. One might even suggest this deal approximates the one that many of our older colleagues had in effect for much of their careers, when success rates for competing continuations from established investigators were north of 45%.

2) Now match these Pioneers on R01 direct costs and there is no difference in pubs or cites. Impact factor is higher for the Pioneers but the h-index doesn’t differ. Experts assess Pioneers as higher impact and innovation.

These matched direct-costs PIs were in lower rank institutions (by some margin) and were longer past their terminal degrees. Since there was no matching on topic, institution or background characteristics I’m going to suggest that h-index really comes to the fore for this analysis. You simply cannot compare glamour chasing molecular eleventy labs with, say, clinical operations. There are too many differences in citation practices, GlamourHumping and what is conventionally viewed as “high impact”. h-index gives us a better approximation of real impact. So meh on this analysis too.
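For anyone wondering what an h-index for a grant award even means in practice: you pool the papers that acknowledge the award and apply the usual definition (the largest h such that h papers have at least h citations each). A minimal sketch, assuming made-up citation counts purely for illustration — this is not the NIH's actual code or data:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i  # the i-th ranked paper still has >= i citations
        else:
            break
    return h

# Hypothetical portfolio: citation counts for papers crediting one award.
grant_papers = [120, 45, 33, 12, 9, 7, 4, 2, 1, 0]
print(h_index(grant_papers))  # -> 6 (six papers with at least 6 citations each)
```

Note this is exactly the per-person metric applied to a different unit of analysis, which is why Ola's point below about citation lag bites: a grant's citations mostly arrive after the funding period ends.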

3) HHMI folks published more papers and had more citations, but not if one accounted for the direct-cost differential. HHMI folks published in higher impact journals but the h-index was the same. Experts assess the impact and innovation the same. Interestingly (to me), the HHMI folks were closer to their terminal degree than the mean of the other groups. The spread was tighter; this is by design of the HHMI, but I guess I was a little surprised the DP1 time-since-degree was so diverse. Institutional rankings did not differ between HHMI and DP1.

Snoooooore……

4) The only data reported on the loser-finalists was the institution rankings and the time-since-degree, which didn’t differ from the successful DP1 awardees.

Very frustrated on this one! An absolutely critical comparison group in my view. How did these folks do? Did they get other funding? Did they suck in terms of productivity? Did they get their money anyway and compete successfully?

The overall conclusion slide nails it in the first bullet point. We really don’t need to go on from:

It appears that higher funding leads to higher portfolio‐level impact.

Look, I’m not saying that other factors don’t contribute. But this is an “all else held equal” analysis. Or an attempt at it. If you match the PIs on their approximate fields, type of work, background, local institutions, etc and give them the same amount of research support, they do the same. Even the much-vaunted HHMI sinecure funding (at approximately 1 R21 or half an R01 greater value per year vs DP1), which can be expected to last for 15 years of programmatic support, doesn’t make a radical difference! Note that the DP1 doesn’t come with any such guarantee of longer-term funding for the PI. S/he knows full well that they are right back in the hunt 2-3 years down the road. So I guess it is worth hammering the final bullet point:

DP1 vs HHMI: likely not attributable to flexibility of research, or riskiness of ideas, but may be due to funding level and stability, differences in PIs, or differences in areas of science.


29 Responses to “Show me the money, Francis!”

  1. odyssey Says:

    “Experts assess DP1 research as having more impact and innovation”

    HAHAHAHAHAHAHAHAHAHAHAHAHAHA!!!!!!!!!!!!!


  2. fjordmaster Says:

    I would be interested to see a comparison of how the DP1 award affected subsequent funding. Maybe look at non-DP1 funding held by these investigators before and after the DP1 awards to determine if winning the DP1 provides an advantage in securing more traditional grants over the matched R01 group.


  3. DJMH Says:

    Taking it another way–if it’s no more (or less) successful to allot money based on a 5 page, personal-history app, why not just have everyone apply for all NIH grants like that? Seems like we’d all save time.


  4. Dave Says:

    Were the “expert reviewers” blinded to who was in each group?


  5. Dr. Noncoding Arenay Says:

    How does this compare to the DP2? Is the process similar?


  6. DrugMonkey Says:

    DJMH- I think that is where we’d need better attention paid to the fate of unsuccessful applicants, no?


  7. dsks Says:

    “Experts assess DP1 research as having more impact and innovation”

    HAHAHAHA…etc seconded.

    “These matched direct-costs PIs were in lower rank institutions (by some margin)”

    Alright, now it’s not even funny.

    When is the NIH going to submit the proposals for these little policy-directing studies for peer review before carrying them out? I mean peer review by us regular scientists out here in the cheap seats, not this ambiguous and oddly compliant group of “experts” I’m hearing about (I don’t know who they are, but I can’t help but picture The Smoking Man leading the discussion).

    What kind of whackaloon universe are we living in in which an institution tasked with funding good science is so perversely averse to applying good science in the guidance of policy and operating method?


  8. Jeremy Berg Says:

    As someone who was deeply involved with the Pioneer Award program, I want to offer a few comments.

    First, I think NIH is to be commended for taking on this evaluation. This sort of evaluation is quite challenging and I think NIH is making a sincere effort to do an unbiased job of determining the characteristics of the Pioneer program compared to other funding approaches.

    Second, the Pioneer program is intended to fund high risk-high reward research and the results of the program are consistent with that. Some of the Pioneer Awardees have done some truly ground-breaking work while others elected to work on hard and important problems and have not achieved their stated goals (although they may have done some solid work along the way). In this sense, I think the outlier papers (in terms of citations) in the slide set may be some of the most interesting and relevant items.

    Third, I think DM’s analysis is correct. Dollars produce productivity and it is difficult to find mechanisms that produce substantially different average results in terms of productivity per dollar.

    Lastly, many HHMI investigators also receive NIH support (sometimes substantial amounts). In this light, it is important to correct for total support for HHMI investigators.

    My two cents…


  9. Grumble Says:

    JB’s comment:

    “Dollars produce productivity and it is difficult to find mechanisms that produce substantially different average results in terms of productivity per dollar.”

    …suggests that what DJMH describes:

    “Taking it another way–if it’s no more (or less) successful to allot money based on a 5 page, personal-history app, why not just have everyone apply for all NIH grants like that?”

    …is a common-sense framework for radically changing how the NIH allocates money. What’s not to like?


  10. The Other Dave Says:

    I am with Jeremy. I too love it when NIH is willing to do some analysis and put it out there. Unfortunately, sounds like some ‘pioneers’ were better salesmen than scientists. I guess it’ll always be that way when funding is allocated ahead of time based on promises (proposals).

    So that got me thinking..

    DM: If you match the PIs on their approximate fields, type of work, background, local institutions, etc and give them the same amount of research support, they do the same.

    So PI details are sort of irrelevant?

    DJMH: Taking it another way–if it’s no more (or less) successful to allot money based on a 5 page, personal-history app, why not just have everyone apply for all NIH grants like that?

    But a 5-page Biosketch is enough?

    Putting it all together, it seems to me like NIH would basically get the same results if they scrapped the application process altogether, and just distributed wads of cash (via lottery, if there’s not enough to go around) to every qualified PI.

    Change your thinking. Don’t think of science as something driven by personalities. Think of scientific knowledge like any other publicly-funded infrastructure. Just allocate the money to whomever can get the job done, perhaps even the lowest bidder. Like highways and airports and soldiers. We are all just information grunts.

    Maybe we should all be paid after-the-fact for each accomplishment. And not before, based on proposal-writing skills. Institutions get reimbursed for the knowledge they create.


  11. Jeremy Berg Says:

    Grumble: The Pioneer process also involves interviews of the finalists (typically 15 minutes of presentation and 15 minutes of Q&A plus discussion among the reviewers). From my experience, this is a very important part of the process (some applicants moved up because of their performance during the interviews whereas, in other cases, ideas that seemed too good to be true seemed more so after the interview). This model is impossible to scale up for NIH-wide review.

    On a related point, DM comments that NIH should fix the overall review process rather than creating new mechanisms. While I agree that addressing the general review system is important, in my experience it is very difficult for most review groups to balance potentially high reward but risky research with very solid and important research (particularly with success rates where they are at present). By having separate processes, reviewers can rate competing applications and NIH can balance its portfolio appropriately.


  12. The Other Dave Says:

    @Grumble:

    I have reviewed a lot for European science agencies, and many of them allocate funds basically that way. I have *never* got to the end of one of those proposals and thought ‘Boy, I wish I had more information’. Five pages is definitely enough to separate the wheat from the chaff. Basically, what goes through my head is: 1) ‘Oh, I know this person, they’re good’ (or: Who the hell is that? And I look them up). 2) ‘This is a good idea, and they could do it!’ (or: WTF?). Then I score accordingly. It’s not really different from the way I review longer proposals here in the U.S., actually.


  13. Ola Says:

    DM, I like the idea of an h-index for a grant, but the problem is citations may not start flowing until long after the grant is finished. This would encourage holding out until the very last possible cycle before submitting a competing renewal (i.e., taking a long no-cost extension). Alternatively you could front-load the funding period with LPUs, since there’s no incentive to hold out for the big CNS paper that might not come out until a year after the grant is over. The only way I can see this working is if the grant h-index was calculated for -10 to -5 years (i.e. not the grant you just had, but the one before that).

    Or we could get our heads out of our collective asses regarding impact, and just get on with doing good science.


  14. Drugmonkey Says:

    So we should award grants to people “we already know”, TOD?


  15. The Other Dave Says:

    @DM: What would be different from the way grants are awarded now? What is the ‘Investigator’ criterion except a reworded version of ‘Do you know and respect this person’?

    And what’s the problem with that? Reviewers are supposed to be experts in the area. If experts in your area have no clue who you are, then there’s a problem. Even for new investigators, reviewers should at least recognize you by your work.

    But actually, I wasn’t advocating that in my comment. I was arguing for a post-hoc payment. In which case it doesn’t matter who you know, or who knows you. Only what you’ve produced.

    You put in a bid, get contracted for the work, and paid if you do the work. Just like everything else the government pays for.

    I don’t know whether it would work. But it’s worth thinking outside the box.


  16. Jonathan Says:

    @The Other Dave:

    “Just allocate the money to whomever can get the job done, perhaps even the lowest bidder.”

    This could be interesting, and would mean concentrating the money on the square states and flyover country where cost of living is low and salaries are too.


  17. Drugmonkey Says:

    TOD,

    The difference is in emphasis of factors. And the ability for reviewers to use their own judgment about said balance.


  18. Grumble Says:

    JB writes that “this model [interviews with grant applicants] is impossible to scale up for NIH-wide review.”

    Fine. But the data seems to argue that pioneer award recipients really only perform marginally better than the average R01 recipient. So what’s the benefit of the interview process?

    The idea should be to come up with the simplest, most streamlined way of determining, broadly, who is likely to be reasonably productive with the NIH’s money. To decide that, one needs neither interviews nor the current absurd write-10-to-get-1-and-then-actually-do-totally-different-experiments-anyway-when-you-finally-get-the-money system.

    I’m with TOD. A 5 page essay/biosketch is plenty of basis to determine whether someone is productive and deserving of $.


  19. Jeremy Berg Says:

    Two points: First, one has to be careful (as always) regarding the difference between the average performance and the distribution of performance within a group. One could argue that the distribution for Pioneer Awards should be bimodal with a subset (perhaps even a small subset) being very successful and another subset doing relatively poorly. This is because the applicants were supposed to be working on hard, risky, potentially high impact projects. Some of the data in the presentation are consistent with this. However, some of the Pioneer Awardees used the funds to work on less risky aspects of their project so that there are relatively few unproductive Pioneer Awards.

    Second, the interviews are important because the first phase reviewers are instructed to give the investigator the benefit of the doubt if they propose something, even if it is outside of their obvious experience and many of the reviewers do follow this instruction. The interview helps sort this out. In my experience, there were some Pioneer applicants for whom the committee would not have recommended funding even if infinite funds were available since it became clear in the interview that the applicant did not really have a handle on what they were up against.

    I would be concerned that reviewers would have difficulty with this with 5 page applications for general grants. They will not have enough information to judge and will have to either give folks the benefit of the doubt or assume that they would not be capable of doing anything complicated that they had not already done. I would also be concerned about the importance of old boys networks becoming even more impactful than they already are. I am proud of the fact that some of the most successful Pioneer Awardees were relatively young and unknown when they received the Awards. I am confident that some of these programs would have struggled tremendously through traditional mechanisms.


  20. drugmonkey Says:

    I am proud of the fact that some of the most successful Pioneer Awardees were relatively young and unknown when they received the Awards.

    I would be very (VERY) interested in the degree to which “unknown” investigators were selected. Meaning people without extensive ties to “known” investigators, of course. Training relationships, etc.

    some of the Pioneer Awardees used the funds to work on less risky aspects of their project so that there are relatively few unproductive Pioneer Awards.

    Kind of key to the HHMI (15 yr average) / DP1 (back in the fray on the usual cycle) difference, isn’t it?


  21. The Other Dave Says:

    I would be concerned that reviewers would have difficulty with this with 5 page applications for general grants. They will not have enough information to judge and will have to either give folks the benefit of the doubt or assume that they would not be capable of doing anything complicated that they had not already done.

    In my experience, longer grants shift reviewer focus from the idea to the feasibility. I think this is consistent with what you are saying.

    I would also be concerned about the importance of old boys networks becoming even more impactful than they already are.

    I think the opposite. As I said above, a short proposal focuses people’s attention on the idea. Old boy networks don’t have a monopoly on ideas. Of course, if there are questions of feasibility there won’t be room to address these. But then again, I definitely don’t think it helps new reviewers when senior applicants can fill the extra pages with stories about past successes and mountains of equipment they’ve accumulated over 20 years.


  22. The Other Dave Says:

    ‘New applicants’. I meant ‘new applicants’, not ‘new reviewers’.

    See what I’m saying? The overall perceived value of my idea was lessened substantially because I had the opportunity to write more than I needed — and blew it. Some of us like short proposals because they provide fewer opportunities for us to shoot ourselves in the foot.

    Maybe these comment boxes should be limited to 200 characters.


  23. Grumble Says:

    @JB:

    “They will not have enough information to judge and will have to either give folks the benefit of the doubt or assume that they would not be capable of doing anything complicated that they had not already done.”

    I think this misses the point. An important reason for very short applications is that only part – a small part – of the review is based on what the applicant actually proposes to do. In 5 pages, one can’t propose any details, and yes, what gets proposed would often be what the applicant dreams about, not what s/he can necessarily do. But if the applicant has a track record of being able to do that kind of work already, or at least very solid work that’s well respected and moves the field forward, you don’t need the interview to separate the real from the fake. You already know who’s real based on what they’ve done, not what they’ve written 5 pages of BS about.

    Of course, *all* the money can’t be allocated with short application/no interview system, because then only people with established track records would ever get funded. So there should be a separate track just for new/young investigators, which would be similar to the review system we have now (or even to the current DP system, with interviews).

    “I would also be concerned about the importance of old boys networks becoming even more impactful than they already are.”

    I don’t see why it would necessarily get any worse (or, I should say, if it could possibly get any worse).


  24. dsks Says:

    “One could argue that the distribution for Pioneer Awards should be bimodal with a subset (perhaps even a small subset) being very successful and another subset doing relatively poorly.”

    Indeed. And in fact, for this mechanism, it’s arguably how far one of the tails of the curve reaches along the Impact axis that really matters, not the humps. Whether 90% of the awardees produce work no better than their R01 counterparts is irrelevant; the award’s success really comes down to whether the top 1-3% are producing work of substantially greater impact than the top 1-3% of successful R01-funded scientists. Of course, it could take ten years or more before the true impact of the work done can be accurately evaluated, so it’s difficult to gauge the success of such a program in the short term.

    As it is, simply creating an award for risky science is not alone sufficient to remove the current trend of risk aversion, which is as much attributable to publication pressures as grant application pressures. But maybe it’s a good start; time will tell.


  25. Drugmonkey Says:

    That is utter and complete bollocks dsks. It is *explicitly* a mechanism to identify awesome eleventy stuff. 1-3% should *easily* get through a regular old system. OR, those ppl should be gertzin funded on x, doing y anyway and knocking off our socks.


  26. Cynric Says:

    Is it complete and utter bollocks?
    Surely that’s the point of this program – to fund high risks. Most risks won’t pay off, therefore you’d predict a very small number of spectacular successes, and a much larger number of failures (like funding startups), not a smear of variable-but-typical output. I also don’t see why hoping the 1-3% will make their breakthroughs by gaming the regular system is preferable…?


  27. DrugMonkey Says:

    Because 1-3% of R01 awards >>>> 1-3% of DP1 awards.

    If there is a systematic problem with identifying amazing stuff then fix the problem at the point of review.


  28. The Other Dave Says:

    C’mon, DM, you’ve sat on plenty of review panels, right? You know how it goes. High risk high payoff stuff is like apples to all the other oranges. It’s tough to compare the two. How do you rank a super duper eleventy project with a 50% chance of working out against something that will definitely for sure absolutely result in a solid but incremental advance?

    Both types of science are important, but the eleventy stuff usually gets shot down in panels. It’s important to have a different sort of mechanism where apples can compete only with other apples.

    Thus, I like that the DP1 thing exists.

    I just wish NIH wasn’t using their usual dumb methods to rank these proposals. Can we get a cognitive psychologist with expertise in bias somewhere in charge at NIH, please? We got all these guys there who think knowing how to pipette magically makes them competent in decision theory and management.


  29. drugmonkey Says:

    As it happens, TOD, I have argued many times in favor of grants which appeared to me to be high payoff in the face of opposition based on stupid kvetching about feasibility or minor grantsmithing “problems”.

    I have also argued that yet another proposal to look at Topic C or Topic H was low priority, no matter the chance of solid work emerging based on Professor OldGuye’s track record.

    (It is also undoubtedly true I’ve argued a traditionalist line for other proposals at times)


