Show me the money, Francis!
May 15, 2013
Later on Drugmonkey, we will be discussing this.
Later……
So this linked set of slides describes an analysis of the DP1 award mechanism. Said DP1 was created to "address concerns that high risk, visionary research was not being supported due to the conservative nature of existing NIH funding mechanisms". So, instead of, gee, I dunno, FIXING this problem, they did what they always do and created a new mechanism.
It was supposed to create a new kind of review:
Based on the premise that “Person Based” application and review processes would reward past creativity and encourage innovators to go in new directions
The DP1 was open to all career stages and took a 5-page essay to describe how awesome and visionary you are. There was no requirement to submit a budget and the awards were for $500K direct costs per year for 5 years.
This analysis compares the DP1 awardees from 2004-2006 with:
1) matched R01 PIs (matched on PI stage and background, topic, local institution and the same time frame; their combined budget was 50% of the DP1 Pioneers'),
2) random all-NIH portfolios of the same total costs (but not matched on other PI characteristics),
3) HHMI investigators from the 2005 competition ($600K direct costs; the reappointment rate is 80% after 5 years and HHMI tenure averages 15 years total duration) and
4) the DP1 applicants who weren't selected.
As you will see from the slides, there is a tremendous degree of overlap in the distributions of the outcome measures they are using. Tremendous. So mean differences need to be taken with a huge dose of "yeah, but all is never held completely equal". Still…
1) DP1s produce the same number of publications per dollar as matched R01s, in higher Scimago-ranked journals, and the awards have a higher h-index rating (DM: interesting to give a grant award an h-index, isn't it?). "Experts assess DP1 research as having more impact and innovation."
Yeah. Big whoop. You select a group for innovation and awesomeness, take them off the cycle of grant churning by handing them the equivalent of 2-3 grants all at once (for the cost of a 5-page essay and a Biosketch, plus some big swanging reference letters) and they look incrementally better. See the overlap: they aren't awesomely better. And that is compared with unselected labs fighting the regular grant wars with half the money. Color me severely underimpressed by this analysis of the DP1 program. All it does is tell us to give more people the same damn deal. One might even suggest this deal approximates the one many of our older colleagues had in effect for much of their careers, when success rates for competing continuations from established investigators were north of 45%.
2) Now, when you match these Pioneers to R01 PIs on direct costs, there is no difference in pubs or cites. Impact factor is higher for the Pioneers but the h-index doesn't differ. Experts assess Pioneers as higher in impact and innovation.
These matched direct-cost PIs were at lower-ranked institutions (by some margin) and were longer past their terminal degrees. Since there was no matching on topic, institution or background characteristics, I'm going to suggest that h-index really comes to the fore for this analysis. You simply cannot compare glamour-chasing molecular eleventy labs with, say, clinical operations. There are too many differences in citation practices, GlamourHumping and what is conventionally viewed as "high impact". h-index gives us a better approximation of real impact. So meh on this analysis too.
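For anyone who hasn't actually run one of these: the h-index is just the largest h such that h papers in the set have at least h citations each, and applying it to the papers attributed to a grant is the same calculation as applying it to a PI. A minimal sketch in Python (the citation counts are made up for illustration):

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# e.g. a grant portfolio whose papers drew these citation counts:
print(h_index([10, 8, 5, 2, 1]))  # -> 3

Note what this buys you over raw counts or journal rank: one mega-cited Glamour paper can't carry the whole portfolio, which is exactly why it travels better across fields with different citation practices.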
3) HHMI folks published more papers and had more citations, but not once the direct-cost differential was accounted for. HHMI folks published in higher impact journals but the h-index was the same. Experts assess the impact and innovation as the same. Interestingly (to me), the HHMI folks were closer to their terminal degree than the mean of the other groups. The spread was tighter; this is by design for HHMI, but I guess I was a little surprised the DP1 time-since-degree was so diverse. Institutional rankings did not differ between the HHMI and DP1 groups.
Snoooooore……
4) The only data reported on the loser-finalists was institution rankings and time-since-degree, neither of which differed from the successful DP1 awardees.
Very frustrated on this one! An absolutely critical comparison group in my view. How did these folks do? Did they get other funding? Did they suck in terms of productivity? Did they get their money anyway and compete successfully?
The overall conclusion slide nails it in the first bullet point. We really don't need to go any further than this:
It appears that higher funding leads to higher portfolio-level impact.
Look, I'm not saying that other factors don't contribute. But this is an "all else held equal" analysis. Or an attempt at one. If you match the PIs on their approximate fields, type of work, background, local institutions, etc., and give them the same amount of research support, they do the same. Even the much-vaunted HHMI sinecure funding (at approximately one R21, or half an R01, greater value per year vs the DP1; see the back-of-envelope sketch at the end), which can be expected to last for some 15 years of programmatic support, doesn't make a radical difference! Note that the DP1 doesn't come with any such guarantee of longer-term funding for the PI. S/he knows full well that they are right back in the hunt 2-3 years down the road. So I guess it is worth hammering the final bullet point:
DP1 vs HHMI: likely not attributable to flexibility of research, or riskiness of ideas, but may be due to funding level and stability, differences in PIs, or differences in areas of science.
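And the promised back-of-envelope on that HHMI-vs-DP1 differential, using the per-year direct costs from the slides plus commonly cited modular R01 and R21 budgets (those last two figures are my assumptions, not the slides'):

# Hypothetical sanity check on the funding differential.
hhmi = 600_000            # HHMI direct costs per year (from the slides)
dp1 = 500_000             # DP1 direct costs per year (from the slides)
r01_modular = 250_000     # full modular R01, direct per year (assumed)
r21_annual = 275_000 / 2  # R21 is ~$275K direct over two years (assumed)

diff = hhmi - dp1             # $100,000 per year
print(diff / r01_modular)     # 0.4  -> a bit under half an R01
print(diff / r21_annual)      # ~0.73 -> roughly one R21's annual value

So yes, "approximately one R21 or half an R01 greater value per year" is in the right ballpark.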