Jocelyn Kaiser at ScienceInsider has obtained data on PI numbers from the NIH.

[Graph: number of NIH PIs holding R01-equivalent grants, by fiscal year]

Nice.

I think this graph should be pinned up right next to Sally Rockey’s desk. It is absolutely essential to any attempts to understand and fix grant application success rates and submission churning.

UPDATE 03/12/14: I should have noted that this graph depicts PIs who hold R01-equivalent grants (R01, R23, R29, R37 with ARRA excluded). The Science piece has this to say about the differential from RPG:

NIH shared these data for two sets of grants: research project grants (RPGs), which include all research grants, and R01 equivalents, a slightly smaller category that includes the bread-and-butter R01 grants that support most independent labs.

But if you read carefully, they've posted the Excel files for both the R01-equivalent and RPG datasets. Woo-hoo! Let's get to graphing, shall we? There is nothing like a good comparison graph to make summary language a little more useful. Don't you think? I know I do….

[Graph: NIH PIs, RPG versus R01-equivalent, by fiscal year]

A “slightly smaller category” eh? Well, I spy some trends in this direct comparison. Let’s try another way to look at it. How about we express the difference between the number of RPG and R01-equivalent numbers to see how many folks have been supported on non-R01/equivalent Research Project Grants over the years…
[Graph: RPG minus R01-equivalent PI numbers, by fiscal year]

Well, I'll be hornswoggled. All this invention of DP-this and RC-that and RL-whatsit and all the various U-mechs and P01 (Center components seem to be excluded) in recent years seemingly has had an effect. Sure, the number of R01-equivalent PIs only slightly drifted down from the end of the doubling until now (relieved briefly by the stimulus). So those in NIH land could say "Look, we're not sacrificing R01s, our BreadNButter(TM) Mech!". But in the context of the growth of non-R01 RPG projects, well….hmmm.
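If you want to reproduce that differential graph from the posted spreadsheets yourself, here is a minimal sketch. The file and column names are placeholders of my own; the real spreadsheets' layout may well differ.

```python
# Minimal sketch of the differential graph, assuming the two NIH
# spreadsheets were downloaded locally. Filenames and column names
# ("fiscal_year", "pi_count") are my placeholders; the real files
# almost certainly differ.
import pandas as pd
import matplotlib.pyplot as plt

rpg = pd.read_excel("nih_rpg_pis.xlsx")        # hypothetical filename
r01eq = pd.read_excel("nih_r01eq_pis.xlsx")    # hypothetical filename

merged = rpg.merge(r01eq, on="fiscal_year", suffixes=("_rpg", "_r01eq"))
# PIs supported only on non-R01/equivalent Research Project Grants
merged["non_r01_pis"] = merged["pi_count_rpg"] - merged["pi_count_r01eq"]

merged.plot(x="fiscal_year", y="non_r01_pis", marker="o", legend=False)
plt.xlabel("Fiscal year")
plt.ylabel("RPG PIs minus R01-equivalent PIs")
plt.show()
```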

Jeremy Berg has a new President’s Message up at ASBMB Today. It looks into a topic of substantial interest to me, i.e., the fate of investigators funded by the NIH. This contrasts with our more-usual focus on the fate of applications.

With that said, the analysis does place the impact of the sequester in relatively sharp focus: There were about a thousand fewer investigators funded by these mechanisms in FY13 compared with FY12. This represents more than six times the number of investigators who lost this funding from FY11 to FY12 and a 3.8 percent drop in the R-mechanism-funded investigator cohort.

Another tidbit addresses the usual claim from NIHlandia that R-mechs, and R01s in particular, are always prioritized.

In her post, Rockey notes that the total funding for all research project grants, or RPGs, dropped from $15.92 billion in FY12 to $14.92 billion in FY13, a decrease of 6.3 percent. The total funding going to the R series awards that I examined (which makes up about 85 percent of the RPG pool) dropped by 8.9 percent.

What accounts for this difference? U01 awards comprise the largest remaining portion of the RPG pool…The funds devoted to U01 awards remained essentially constant from FY12 to FY13 at $1.57 billion.
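The arithmetic here is easy to check. A back-of-the-envelope sketch, with the single added assumption that the "about 85 percent" share applies to the FY12 pool:

```python
# Back-of-the-envelope check on the quoted figures. The one assumption
# I am adding is that the "about 85 percent" R-series share applies to
# the FY12 RPG pool.
rpg_fy12, rpg_fy13 = 15.92, 14.92     # $ billions, from Rockey's post
r_drop = 0.089                        # Berg's R-series decline

print(f"RPG drop: {(rpg_fy12 - rpg_fy13) / rpg_fy12:.1%}")  # ~6.3%

r_fy12 = 0.85 * rpg_fy12
non_r_fy12 = rpg_fy12 - r_fy12
non_r_fy13 = rpg_fy13 - r_fy12 * (1 - r_drop)
# The R series fell faster than the RPG pool as a whole, which is only
# possible if the non-R remainder (U01s and the like) held steady or grew.
print(f"Implied non-R-series change: {non_r_fy13 / non_r_fy12 - 1:+.1%}")
```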

Go read the whole thing.

This type of analysis really needs more attention at the NIH level. They’ve come a looooong way in recent years in terms of their willingness to focus on what they are actually doing in terms of applications, funding, etc. This is in no small part due to the efforts of Jeremy Berg, who used to be the Director of NIGMS. But tracking the fate of applications only goes so far, particularly when it is assessed only on a 1-2 year basis.

The demand on the NIH budget is related to the pool of PIs seeking funding. This pool is considerably less elastic than the submission of grant applications. PIs don't submit grant applications endlessly for fun, you know. They seek a certain level of funding. Once they reach that, they tend to stop submitting applications. A lot of the increase in application churn over the past decade or so has to do with the relative stability of funding. When the odds of continuing an ongoing project are high, a PI can just submit one or two apps every 5 years and all is well. Uncertainty is what makes her submit each and every year.

Similarly, when a PI is out of funding completely, the number of applications from this lab will rise dramatically….right up until one of them hits.
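That dynamic is easy to put numbers on. A toy sketch, under the (generous) assumption that each submission is an independent draw:

```python
# Toy model of churn for an unfunded lab: if each submission independently
# has probability p of being funded, attempts until the first hit follow a
# geometric distribution with mean 1/p. Independence is my simplifying
# assumption; real review outcomes are surely more correlated than this.
for p in (0.30, 0.20, 0.15, 0.10, 0.08):
    print(f"success rate {p:.0%}: ~{1 / p:.1f} applications until one hits")
```

Halve the success rate and you double the number of applications each unfunded lab pours into the queue. That is the churn, in a nutshell.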

I argue that if solutions to the application churn and the funding uncertainty (which decreases overall productivity of the NIH enterprise) are to be found, they will depend on a clear understanding of the dynamics of the PI population.

Berg has identified two years in which the PI turnover is very different. How do these numbers compare with historical trends? Which is the unusual one? Or is this the expected range?

Is the loss of 1,000 PIs a temporary situation or a permanent fixture? It is an open question how sequential years without NIH funding affect a PI. Do these individuals tend to regain funding in 2, 3 or 4 years' time? Do they tend to go away and never come back? More usefully, what proportion of the lost investigators will follow each of these fates?

The same questions arise for the other factoids Berg mentions. The rate at which R00 holders transition to other funding would seem incredibly important to know. But a one-year gap seems hardly worth discussing; that can easily happen under current conditions. If they are still not funded 2 or maybe 3 years after the R00 expires, though? That is of greater import.

Still, a welcome first step, Dr. Berg. Let’s hope Sally Rockey is listening.

A communication to the blog raised an issue that is worth exploring in a little more depth. The questioner wanted to know if I knew why an NIH Program Announcement had disappeared.

The Program Announcement (PA) is the most general of the NIH Funding Opportunity Announcements (FOAs). It is described with these key features:

  • Identifies areas of increased priority and/or emphasis on particular funding mechanisms for a specific area of science
  • Usually accepted on standard receipt (postmarked) dates on an on-going basis
  • Remains active for three years from date of release unless the announcement indicates a specific expiration date or the NIH Institute/Center (I/C) inactivates sooner

In my parlance, the PA means "Hey, we're interested in seeing some applications on topic X"….and that's about it. Admittedly, the study section reviewers are supposed to conduct review in accordance with the interests of the PA. Each application has to be submitted under one of the FOAs that are active. Sometimes, this can be as general as the omnibus R01 solicitation. That's pretty general. It could apply to any R01 submitted to any of the NIH Institutes or Centers (ICs). The PAs can offer a greater degree of topic specificity, of course. I recommend you go to the NIH Guide page and browse around. You should bookmark the current-week page and sign up for email alerts if you haven't already. (Yes, even grad students should do this.) Sometimes you will find a PA that seems to fit your work exceptionally well and, of course, you should use it. Just don't expect it to be a whole lot of help.

This brings us to the specific query that was sent to the blog, i.e., why did PA-14-106 go missing only a week or so after being posted?

Sometimes a PA expires and is either not replaced or you have happened across it in between expiration and re-issue of the next 3-year version. Those are the more-common reasons. I’d never seen one be pulled immediately after posting, however. But the NOT-DA-14-006 tells the tale:

This Notice is to inform the community that NIDA’s “Synthetic Psychoactive Drugs and Strategic Approaches to Counteract Their Deleterious Effects” Funding Opportunity Announcements (FOAs) (PA-14-104, PA-14-105, PA-14-106) have been reposted as PARs, to allow a Special Emphasis Panel to provide peer review of the applications. To make this change, NIDA has withdrawn PA-14-104, PA-14-105, PA-14-106, and has reposted these announcements as PAR-14-106, PAR-14-105, and PAR-14-104.

This brings us to the key difference between the PA and a PAR (or a PAS):

  • Special Types
    • PAR: A PA with special receipt, referral and/or review considerations, as described in the PAR announcement
    • PAS: A PA that includes specific set-aside funds as described in the PAS announcement

Applications submitted under a PA are going to be assigned to the usual Center for Scientific Review (CSR) panels and thrown in with all the other applications. This can mean that the special concerns of the PA do not really influence review. How so? Well, the NIDA has a generic-ish and long-running PA on the “Neuroscience Research on Drug Abuse“. This is really general. So general that several entire study sections of the CSR fit within it. Why bother reviewing in accordance with the PA when basically everything assigned to the section is, vaguely, in this sphere? And even on the more-specific ones (say, Sex-Differences in Drug Abuse or HIV/AIDS in Drug Abuse, that sort of thing) the general interest of the IC fades into the background. The panel is already more-or-less focused on those being important issues.  So the Significance evaluation on the part of the reviewers barely budges in response to a PA. I bet many reviewers don’t even bother to check the PA at all.

The PAR means, however, that the IC convenes its own Special Emphasis Panel specifically for that particular funding opportunity. So the review panel can be tailored to the announcement's goals, much in the way that a panel is tailored for a Request for Applications (RFA) FOA. The panel can have very specific expertise for both the PAR and for the applications that are received and, presumably, reviewers with a more than average appreciation for the topic of the PAR. There is no existing empaneled population of reviewers to limit choices. There is no distraction from the need to get reviewers who can handle applications on topics different from the PAR in question. An SEP brings focus. The mere fact of a SEP also tends to keep reviewers' minds on the announcement's goals. They don't have to juggle the goals of PA vs PA vs PA as they would in a general CSR panel.

As you know, Dear Reader, I have blogged about both synthetic cannabinoid drugs and the "bath salts" here on this blog now and again. So I can speculate a little bit about what happened here. These classes of recreational drugs hit the attention of regulatory authorities and scientists in the US around about 2009, and certainly by 2010. There has been a modest but growing number of papers published. I have attended several conference symposia themed around these drugs. And yet if you do some judicious searching on RePORTER you will find precious few grants dedicated to these compounds. It is no great leap of faith to figure that various PIs have been submitting grants on these topics and are not getting fundable scores. There are, of course, many possible reasons for this and some may have influenced NIDA's thinking on this PA/PAR.

It may be the case that NIDA felt that reviewers simply did not know that NIDA wanted to see some applications funded and were consequently not prioritizing the Significance of such applications. Or it may be that NIDA felt that its good PIs, the ones who would write competitive grants, were not interested in the topics. Either way, a PA would appear to be sufficient encouragement.

The replacement of a PA with a PAR, however, suggests that NIDA has concluded that the problem lies with study section reviewers and that a mere PA was not going to be sufficient* to focus minds.

As one general conclusion from this vignette, the PAR is substantially better than the PA when it comes to enhancing the chances for applications submitted to it. This holds in a case in which there is some doubt that the usual CSR study sections will find the goals to be Significant. The caveat is that when there is no such doubt, the PAR is worse because the applications on the topic will all be in direct competition with each other. The PAR essentially guarantees that some grants on the topic will be funded, but the PA potentially allows more of them to be funded.

I say "essentially" because the PAR does not come with set-aside funds, as the RFA and the PAS do. And I say "potentially" because this depends on there being many highly competitive applications distributed across several CSR sections under a PA.

__

*This is a direct validation of my position that the PA is a rather weak stimulus, btw.

As always when it comes to NIDA specifics, see Disclaimer.

In reflecting on the profound lack of association of grant percentile rank with the citations and quantity of the resulting papers, I am struck that it reinforces a point made by YHN about grant review.

I have never been a huge fan of the Approach criterion. Or, more accurately, of how it is reviewed in practice. Review of the specific research plan can bog down in many areas. A review is often derailed into critique of the applicant's failure to appropriately consider all the alternatives, into disagreement over predictions that can only be resolved empirically, into endless ticky-tack kvetching over buffer concentrations, into a desire for exacting specification of each and every control….. I am skeptical. I am skeptical that identifying these things plays any real role in the resulting science. First, because much of the criticism over the specifics of the approach vanishes when you consider that the PI is a highly trained scientist who will work out the real science during the conduct of same. Like we all do. For anticipated and unanticipated problems that arise. Second, because much of this Approach review is rightfully the domain of the peer review of scientific manuscripts.

I am particularly unimpressed by the shared delusion that the grant revision process by which the PI “responds appropriately” to the concerns of three reviewers alters the resulting science in a specific way either. Because of the above factors and because the grant is not a contract. The PI can feel free to change her application to meet reviewer comments and then, if funded, go on to do the science exactly how she proposed in the first place. Or, more likely, do the science as dictated by everything that occurs in the field in the years after the original study section critique was offered.

The Approach criterion score is the one that is most correlated with the eventual voted priority score, as we’ve seen in data offered up by the NIH in the past.

I would argue that a lot of the Approach criticism that I don’t like is an attempt to predict the future of the papers. To predict the impact and to predict the relative productivity. Criticism of the Approach often sounds to me like “This won’t be publishable unless they do X…..” or “this won’t be interpretable, unless they do Y instead….” or “nobody will cite this crap result unless they do this instead of that“.

It is a version of the deep motivator of review behavior. An unstated (or sometimes explicit) fear that the project described in the grant will fail, if the PI does not write different things in the application. The presumption is that if the PI does (or did) write the application a little bit differently in terms of the specific experiments and conditions, that all would be well.

So this also says that when Approach is given a congratulatory review, the panel members are predicting that the resulting papers will be of high impact…and plentiful.

The NHLBI data say this is utter nonsense.

Peer review of NIH grants is not good at predicting, within the historical fundable zone of about the top 35% of applications, the productivity and citation impact of the resulting science.

What the NHLBI data cannot address is a more subtle question. The peer review process decides which specific proposals get funded. Which subtopic domains, in what quantity, with which models and approaches… and there is no good way to assess the relative wisdom of this. For example, a grant on heroin may produce the same number of papers and citations as a grant on cocaine. A given program on cocaine using mouse models may produce approximately the same bibliometric outcome as one using humans. Yet the real world functional impact may be very different.

I don’t know how we could determine the “correct” balance but I think we can introspect that peer review can predict topic domain and the research models a lot better than it can predict citations and paper count. In my experience when a grant is on cocaine, the PI tends to spend most of her effort on cocaine, not heroin. When the grant is for human fMRI imaging, it is rare the PI pulls a switcheroo and works on fruit flies. These general research domain issues are a lot more predictable outcome than the impact of the resulting papers, in my estimation.

This leads to the inevitable conclusion that grant peer review should focus on the things that it can affect and not on the things that it cannot. Significance. Aka, “The Big Picture”. Peer review should wrestle over the relative merits of the overall topic domain, the research models and the general space of the experiments. It should de-emphasize the nitpicking of the experimental plan.

A reader pointed me to this News Focus in Science which referred to Danthi et al, 2014.

Danthi N, Wu CO, Shi P, Lauer M. Percentile ranking and citation impact of a large cohort of National Heart, Lung, and Blood Institute-funded cardiovascular R01 grants. Circ Res. 2014 Feb 14;114(4):600-6. doi: 10.1161/CIRCRESAHA.114.302656. Epub 2014 Jan 9.

[PubMed, Publisher]

I think Figure 2 makes the point, even without knowing much about the particulars:

[Figure 2 from Danthi et al. 2014]

and the last part of the Abstract makes it clear.

We found no association between percentile rankings and citation metrics; the absence of association persisted even after accounting for calendar time, grant duration, number of grants acknowledged per paper, number of authors per paper, early investigator status, human versus nonhuman focus, and institutional funding. An exploratory machine learning analysis suggested that grants with the best percentile rankings did yield more maximally cited papers.

The only thing surprising in all of this was a quote attributed to the senior author Michael Lauer in the News Focus piece.

“Peer review should be able to tell us what research projects will have the biggest impacts,” Lauer contends. “In fact, we explicitly tell scientists it’s one of the main criteria for review. But what we found is quite remarkable. Peer review is not predicting outcomes at all. And that’s quite disconcerting.”

Lauer is head of the Division of Cardiovascular Research at the NHLBI and has been there since 2007. Long enough to know what time it is. More than long enough.

The take-home message is exceptionally clear. It is a message that most scientists who have stopped to think about it for half a second have already arrived upon.


Science is unpredictable.

Addendum: I should probably point out, for those readers who are not familiar with the whole NIH Grant system, that the major unknown here is the fate of unfunded projects. It could very well be the case that the ones that manage to win funding do not differ much, but that the ones kept from funding would have failed miserably had they been funded. Obviously we can't know this until the NIH decides to do a study in which it randomly picks up grants across the entire distribution of priority scores. If I were a betting man I'd have to lay even odds on the upper and lower halves of the score distribution 1) not differing versus 2) the upper half doing better in terms of paper metrics. I really don't have a firm prediction; I could see it either way.

…or maybe it is.

One of the things that I try to emphasize in NIH grant writing strategy is to ensure you always submit a credible application. It is not that difficult to do.

You have to include all the basic components, not commit more than a few typographical errors and write in complete sentences. Justify the importance of the work. Put in a few pretty pictures and plenty of headers to create white space. Differentiate an Aim from a hypothesis from an Experiment.

Beyond that you are often constrained by the particulars of your situation and a specific proposal. So you are going to have to leave some glaring holes, now and again. This is okay! Maybe you are a noob and have little in the way of specific Preliminary Data. Or you have a project which is, very naturally, a bit of a fishing expedition…er, hypothesis-generating, exploratory work. Perhaps the Innovation isn't high or there is a long stretch to attach health relevance.

Very few grants I’ve read, including many that were funded, are even close to perfect. Even the highest scoring ones have aspects that could readily be criticized without anyone raising an eyebrow.

The thing is, you have to be able to look at your proposal dispassionately and see the holes. You should have a fair idea of where trouble may lie ahead and shore up the proposal as best you can.

No preliminary data? Then do a better job with the literature predictions and alternate considerations/pitfalls. Noob lab? Then write more methods and cite them more liberally. Low Innovation? Hammer down the Significance. Established investigator wanting to continue the same-old, same-old under new funding? Disguise that with an exciting hypothesis or newish-sounding Significance link. (Hint: testing the other person's hypothesis with your approaches can go great guns when you are in a major theoretical dogfight over years' worth of papers.)

What you absolutely cannot do is to leave the reviewers with nothing. You cannot leave gaping holes all over the application. That, my friends, is what drops you* below the “credible” threshold.

Don’t do that. It really does not make you any friends on the study section panel.

__
*This is one case where the noob is clearly advantaged. Many reviewers make allowances for a new or young-ish laboratory. There is much less sympathy for someone who has been awarded several grants in the past when the current proposal looks like a slice of Swiss cheese.

The Legislative Mandates have been issued for FY 2014.

The intent of this Notice is to provide information on the following statutory provisions that limit the use of funds on NIH grant, cooperative agreement, and contract awards for FY2014.

It contains the usual familiar stuff; of pointed interest are the prohibition against using grant funds to promote the legalization of Schedule I drugs and the one that prohibits any lobbying of the government. With respect to the Schedule I drugs issue, for a certain segment of my audience, I remind you of the critical exception:

(8) Limitation on Use of Funds for Promotion of Legalization of Controlled Substances (Section 509)
“(a) None of the funds made available in this Act may be used for any activity that promotes the legalization of any drug or other substance included in schedule I of the schedules of controlled substances established under section 202 of the Controlled Substances Act except for normal and recognized executive-congressional communications. (b)The limitation in subsection (a) shall not apply when there is significant medical evidence of a therapeutic advantage to the use of such drug or other substance or that federally sponsored clinical trials are being conducted to determine therapeutic advantage.”

I wouldn't like to find out the hard way, but I would presume this means that research into the medical benefits of marijuana, THC and/or other cannabinoid compounds is just fine. I seem to recall reading more than one paper listing NIH support that might be viewed in this light.

What I found more fascinating was a little clause that I had not previously noticed in the anti-lobbying section.

(3) Anti-Lobbying (Section 503)

(c) The prohibitions in subsections (a) and (b) shall include any activity to advocate or promote any proposed, pending or future Federal, State or local tax increase, or any proposed, pending, or future requirement or restriction on any legal consumer product, including its sale or marketing, including but not limited to the advocacy or promotion of gun control.”

There is also another stand-alone section in case you didn't get the point:

(2) Gun Control (Section 217)
“None of the funds made available in this title may be used, in whole or in part, to advocate or promote gun control.”

I was sufficiently curious to go back through the years and found out that this language did not appear in the Notice for FY 2011 and was inserted for FY 2012. This was part of the “FY 2012 the Consolidated Appropriations Act, 2012 (Public Law 112-74) signed into law on December 23, 2011“. I didn’t bother to go back through the legislative history and try to figure out when the gun control part was added but it looks like something similar that affected the CDC appropriation was put into place in 1996.

So I guess we should have expected the anti-gun-control forces to get around to it eventually?

Existing commitments will be honored but NOT-CA-14-023 makes it clear:

Effective immediately, no new nominations for NCI MERIT (R37) awards will be made.  In addition, NCI MERIT (R37) extensions will not be considered.

As a reminder, competing continuation R01s that score very well can be nominated for R37, which means that you get 10 years of non-competing instead of the usual limit of 5. That last bit in the quote refers to the fact that apparently these things can be extended even past the first 10 years.


A search on RePORTER shows that the NCI has about* 43 of these on the books at the moment.


*I didn’t screen for supplements or other dual entries.

This question is mostly for the more experienced of the PItariat in my audience. I’m curious as to whether you see your grant scores as being very similar over the long haul?

That is, do you believe that a given PI and research program is going to be mostly a “X %ile” grant proposer? Do your good ones always seem to be right around 15%ile? Or for that matter in the same relative position vis a vis the presumed payline at a given time?

Or do you move around? Sometimes getting 1-2%ile, sometimes midway to the payline, sometimes at the payline, etc?

This latter describes my funded grants better. A lot of relative score (i.e., percentile) diversity.

It strikes me today that this very experience may be what reinforces much of my belief about the random nature of grant review. Naturally, I think I put up more or less the same strength of proposal each time. And naturally, I think each and every one should be funded.
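A toy simulation makes the point. One PI, proposals of literally identical strength every time, plus a modest dose of review noise; every number below is an arbitrary assumption, purely for illustration:

```python
# Toy simulation: one PI, identical 'true' proposal strength every time,
# plus a modest dose of random review noise. Every number here is an
# arbitrary assumption, purely for illustration.
import random

random.seed(1)
true_strength = 0.80                                    # this PI's proposals
field = [random.gauss(0.5, 0.15) for _ in range(1000)]  # the competing pool

for submission in range(1, 9):
    observed = true_strength + random.gauss(0, 0.10)    # review noise
    pctile = 100 * sum(f > observed for f in field) / len(field)
    print(f"submission {submission}: {pctile:.0f} %ile")
```

On a typical run the very same proposal lands anywhere from the low single digits to the high teens or worse. Sound familiar?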

So I wonder how many people experience more similarity in their scores, particularly for their funded or near-miss applications. Are you *always* coming in right at the payline? Or are you *always* at X %ile?

In a way this goes to the question of whether certain types of grant applications are under greater stress when the paylines tighten. The hypothesis being that perhaps a certain type of proposal is never going to do better than about 15%ile. So in times past, no problem, these would be funded right along with the 1%ile AMAZING proposals. But in the current environment, a change in payline makes certain types of grants struggle more.

An RT from @betenoire1 was making the rounds of my Twitter feed today. It points to a Facebook polemic from one Leon Avery, PhD (CV; RePORTER). He says that he is Leaving Science.

I have decided, after 40 years as a lab scientist and 24 years running my own lab, to shut it down and leave. I write this to explain why, for those of my friends and colleagues who’d like to know. The short answer is that I’m tired of being a professor.

Okay, no problem. No problem whatsoever. Dude was appointed in 1990 and has been working his tail off for 24 years at the NIH funded extramural grant game. He’s burned out. I get this.

I have never liked being a boss. My happiest years as a scientist were when I was a student and then a postdoc. I knew I wouldn’t like running a lab, and I didn’t like it. This has always been true.

My immediate plans are to go back to school and get a degree in Mathematics. This too has been a passion of mine ever since high-school sophomore Geometry, when I first learned what math is really about. And my love of it has increased in recent years as I have learned more. It will be tremendous fun to go back and learn those things that I didn’t have the time or the money to study as an undergrad.

GREAT! This is awesome. You do one thing until you tire of it and then, apparently, you have the ability to retire into a life of the mind. This is FANTASTIC!

So what’s the problem? Well, he can’t resist taking a few swipes at NIH funded extramural science, even as he admits he was never cut out for this PI stuff from the beginning. And after a long and easy gig (more on that below) he is distressed by the NIH funding situation. And feels like his way of doing science is under specific attack.

For many years NIH was interested in funding basic research as well as research aimed directly at curing diseases. With the tightening funding has come a focus on so-called “translational research”. Now when we apply for funding we have to explain what diseases our work is going to cure.

Ok, actually, this is the “truthy” part that is launching a thousand discussions of the “real problem” at NIH. So I’m going to address this part to make it very clear to his fans and back thumpers what we are talking about. On RePORTER (link above) we find that Dr Avery had one grant for 22 years. Awarded in April of 1991 and his CV lists 1990 as his first appointment. So within 15 mo (but likely 9 mo given typical academic start dates from about July through Sept) he had R01 support that he maintained through his career. In the final 5 years, he was awarded the R37 which means he has ten years of non-competing renewal. I see another R21 and one more R01. This latter was awarded on the A1. So as far as we can tell, Professor Avery never had to work too hard for his NIH grant funding. I mean sure, maybe he was putting in three grants a round for 20 years and never managed to land anything more than what I have reviewed. Somehow I doubt this. I bet his difficulties getting the necessary grant funding to run his laboratory were not all that steep compared to most of the rest of us plebes.

And actually, his Facebook post backs it up a tiny bit.

And I’ve been lucky that the world was willing to pay me to do it. Now it is hard for me to explain the diseases my work will cure. It feels like selling snake oil. I don’t want to do it any more.

I think the people enthusiastically passing along this Fb post of his maybe should focus on the key bits about his personal desires and tolerance for the job. Instead of turning this into yet another round of: “successful scientist bashes the NIH system now that finally, after all this time of a sweet, sweet ride s/he experiences a bare minimal taster of what the rest of us have faced our entire careers”.

Final note on the title: Dude, by all means. Anyone who has had a nice little run with NIH funding and is no longer enthused….LEAVE. We'll keep citing you, don't worry. Leave the grants to those of us who still give a crap, though, eh?

UPDATE (comment from @boehninglab):

[Chart: NIH intramural funding distribution, from Berg's ASBMB Today column]

Jeremy Berg has a new column up at ASBMB Today which examines the distribution of NIH intramural funding. Among other things, he notes that you can play along at home by searching RePORTER using the ZIA activity code (i.e., in place of R01, R21, etc). At first blush you might think "WOWZA!". The intramural lab is pretty dang flush. If you think about the direct costs of an extramural R01 grant: the full modular is only $250K per year. So you would need three awards (ok, the third one could be an R21) just to clear the first bin. But there are interesting caveats sprinkled throughout Berg's comments and in the first comment on the piece. Note the "Total Costs"? Well, apparently there is an indirect costs rate within the IRPs, and Berg comments that it is so variable that it is hard to cite anything similar to a negotiated extramural IDC rate for the entire NIH Intramural program. The comment from an ex-IRP investigator points to more issues. There may be some shared costs inserted into a given PI's apparent budget that this PI has no control over. Whether this is part of the overhead or an overhead-like cost….or maybe a cost shared across one IC's IRP…who knows?

We also don't know what a given PI has to pay for out of his or her ZIA allocation. What are animal housing costs like? Are they subsidized for certain ICs' IRPs? For certain labs? Who is a PI and who is a staff scientist of some sort within the IRPs? Do these statuses differ? Are they comparable to extramural lab operations? I know for certain sure that people who are more or less the equivalent of an extramural Assistant/Associate Professor in a soft-money job category exist within the NIH IRPs without being considered a PI with their own ZIA allocation. So that means that a "PI" on the chart that Berg presents may in fact be equivalent to 2-3 PIs out here in extramural land. (And yes, I understand that some of the larger extramural labs similarly have several people who would otherwise be heading their own lab all subsumed within the grants awarded to one BigCheez PI.)

With that said, however, the IRP is supposed to be special. As Berg notes

The IRP mission statement asserts that the IRP should “conduct distinct, high-impact laboratory, clinical, and population-based research” and that it should support research that “cannot be readily funded or accomplished in traditional academia.”

So by one way of looking at it, we shouldn’t be comparing the IRP scientists to ourselves. They should be different.

Even if we think of IRP investigators as not much different from ourselves, I’m having difficulty making any sense of these numbers. It is nice to see them, but it is really hard to compare to what is going on with extramural grant funding.

Perhaps of greater value is the analysis Berg presents for whether NIH’s intramural research is feeling their fair share of the budgetary pain.

In 2003, when I became an NIH institute director, the overall NIH appropriation was $26.74 billion, while the overall intramural program consumed $2.56 billion, or 9.6 percent. In fiscal 2013, the overall NIH appropriation was $29.15 billion, and the intramural share had grown to $3.26 billion, or 11.2 percent.
 
Some of this growth is because of ongoing intramural activities, such as those involving the NIH Clinical Center, where, like at other hospitals, costs are very hard to contain below rates of inflation, or because of new activities, such as the NIH Chemical Genomics Center. The IRP is particularly expensive in terms of taxpayer dollars, because it is difficult to leverage the federal support to the IRP with funds from other sources as occurs in the extramural community.

So I guess that would be “no”. No the IRP, in aggregate, is not sharing the pain of the flatlined budget. There is no doubt that some of the individual components of the various IRPs are. It is inevitable. Previously flush budgets no doubt being reduced. Senior folk being pushed out. Mid and lower level employees being cashiered. I’m sure there are counter examples. But as a whole, it is clear that the IRP is being protected, inevitably at the expense of R-mech extramural awards.



New Grant Snooping

February 4, 2014

As usual, I like to keep an eye on RePORTER and SILK to see what the various ICs of my own dearest interest are up to with regard to grants that were supposed to fund Dec 1, 2013. Per usual, there was no budget and the more conservative ICs wait around to do anything. Some of the less-conservative ones do tend to start funding new grant awards in December and January, so there is always something to see on SILK.

I noticed something interesting. NIAID has 44 new R01s listed that were funded on the A1 revision and 19 that were funded on the "first" submission. RePORTER notes that 30 funded in Dec, 12 in Jan, and 17 on or after 2/1/2014 (not sure if I miscounted totals on SILK or RePORTER hasn't caught up or what).

My ICs of dearest concern are still waiting, only a bare handful of new R01s are listed.

NCI has 36 new R01 apps funded on A1, 21 on the A0. DK is running 15/13.

Scanning down the rest of the list of ICs, it looks like DK is about as close to even as it gets and that a 2:1 ratio of A1 to A0 being funded is not too far off the mean.
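Put as ratios (trivial arithmetic, but it makes the comparison concrete):

```python
# Quick arithmetic on the A1:A0 ratios among the newly funded R01s above.
funded = {"NIAID": (44, 19), "NCI": (36, 21), "DK": (15, 13)}  # (A1, A0)
for ic, (a1, a0) in funded.items():
    print(f"{ic}: {a1} A1 / {a0} A0 = {a1 / a0:.1f}:1 "
          f"(A0 share {a0 / (a0 + a1):.0%})")
```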


I still think we'd be a lot better off if something like two-thirds of grants were awarded on the first submission and the A1s were only about a third.

The R37/MERIT award is an interesting beast in NIH-land. It is typically (exclusively?) awarded upon a successful competing continuation (now called renewal) R01 application. Program then decides in some cases to extend the interval of non-competition for another 5 years*. This, my friends, is person-not-project based funding.

The R37 is a really good gig….if you can get it.

So, given that I’m blogging about award disparity this week….I took a look at the R37s currently on the books for one of my favorite ICs.

There are 25 of them.

The PIs include

1 transgender PI
4 female PIs
0 East Asian / East Asian-American PIs (that I could detect)
3 South Asian / South Asian-American PIs (that I could detect)
0 SubSaharan African / African-American PIs (that I could detect)
0 Latino PIs (that I could detect)

Hmmm, not that strong of a job. How about another of my favorite ICs?

23 awards (interesting, because this IC is half the size of the above-mentioned one)

12 female PIs.
0 East Asian / East Asian-American PIs (that I could detect)
1-2 South Asian / South Asian-American PIs (that I could detect)
0 SubSaharan African / African-American PIs (that I could detect)
3-4 Latino PIs (that I could detect)

Way better on the sex distribution. Whether this number of R37s reflects more than average good-old-folks clubbery, or the above represents less than average, I don't know. 25 at another large IC close to my interests. 95ish (I didn't parse for supplements) at another. Only 45ish at NCI. Clearly a big range relative to IC size.

Both of these ICs are doing really poorly on East Asian/Asian-American and African-American PIs. The first is pretty pathetic on Latino PIs as well.

On the other hand, good old white guys with grey hair or receding hairlines are doing quite well in the R37 stakes.

How are your favorite ICs doing, Dear Reader?

__
*The way I hear it. I have heard rumor that these can go beyond a total of 10 years of R37 but I’m not sure on that.

The takeaway message from the report of Ginther and colleagues (2011) on Race, Ethnicity and NIH Research Awards can be summed up by this passage from the end of the article:

Applications from black and Asian investigators were significantly less likely to receive R01 funding compared with whites for grants submitted once or twice. For grants submitted three or more times, we found no significant difference in award probability between blacks and whites; however, Asians remained almost 4 percentage points less likely to receive an R01 award (P < .05). Together, these data indicate that black and Asian investigators are less likely to be awarded an R01 on the first or second attempt, blacks and Hispanics are less likely to resubmit a revised application, and black investigators that do resubmit have to do so more often to receive an award.

Recall that these data reflect applications received for Fiscal Years 2000 to 2006.

Interestingly, we were just discussing the most recent funding data from the NIH with a particular focus on the triaged applications. A comment on the Rock Talk blog of the OER at NIH was key.

I received a table of data covering A0 R01s received between FY 2010 and FY2012 (ARRA funds and solicited applications were excluded). Overall at NIH, 2.3% of new R01s that were “not scored” as A0s were funded as A1s (range at different ICs was 0.0% to 8.4%), and 8.7% of renewals that were unscored as A0s were funded as A1s (range 0.0% to 25.7%).

I noted the following for a key distinction between new and competing-continuation applications.

The mean and selected ICs I checked tell the same tale, i.e., that Type 2 apps have a much better shot at getting funded after triage on the A0. NIDA is actually pretty extreme from what I can tell- 2.8% versus 15.2%. So if there is a difference in the A1 resubmission rate for Type 1 and Type 2 (and I bet Type 2 apps that get triaged on A0 are much more likely to be amended and resubmitted) apps, the above analysis doesn’t move the relative disadvantage around all that much. However for NIAAA the Type 1 and Type 2 numbers are closer- 4.7% versus 9.8%. So for NIAAA supplicants, a halving of the resubmission rate for Type 1 might bring the odds for Type 1 and Type 2 much closer.
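Note that these funded-after-triage percentages fold together two things: how often a triaged A0 gets amended and resubmitted at all, and how the A1 then fares. A sketch of that logic, with resubmission rates that are pure guesses on my part:

```python
# These funded-after-triage percentages presumably fold together (1) how
# often a triaged A0 is amended and resubmitted at all and (2) how the A1
# then fares. The resubmission rates below are pure guesses, assuming
# Type 2 apps are resubmitted twice as often as Type 1.
triaged_funded = {"NIDA": (0.028, 0.152), "NIAAA": (0.047, 0.098)}  # (Type 1, Type 2)
resub_rate = {"Type 1": 0.40, "Type 2": 0.80}                       # assumed

for ic, (t1, t2) in triaged_funded.items():
    print(f"{ic}: per-resubmission success, "
          f"Type 1 {t1 / resub_rate['Type 1']:.1%} vs "
          f"Type 2 {t2 / resub_rate['Type 2']:.1%}")
```

Under that guess the NIDA Type 1/Type 2 gap barely budges, while the NIAAA gap nearly closes, which is the point above.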

So look. If you were going to try to really screw over some category of investigators you would make sure they were more likely to be triaged and then make it really unlikely that a triaged application could be revised into the fundable range. You could stoke this by giving an extra boost to triaged applications that had already been funded for a prior interval….because your process has already screened your target population to decrease representation in the first place. It’s a feed-forward acceleration.

What else could you do? Oh yes. About those revisions, poorer chances on the first 1-2 attempts and the need for Asian and black PIs to submit more often to get funded. Hey I know, you could prevent everybody from submitting too many revised versions of the grant! That would provide another amplification of the screening procedure.

So yeah. The NIH halved the number of permitted revisions to previously unfunded applications for those submitted after January 25, 2009.

Think we're ever going to see an extension of the Ginther analysis to applications submitted from FY2007 onward? I mean, we're seeing evidence in this time of pronounced budgetary grimness that the NIH is slipping in its rather overt efforts to keep early stage investigators' success rates similar to experienced investigators' and to keep women's success rates similar to men's.

The odds are good that the plight of African-American and possibly even Asian/Asian-American applicants to the NIH has gotten even worse than it was for Fiscal Years 2000-2006.

Jeremy Berg made a comment

If you look at the data in the Ginther report, the biggest difference for African-American applicants is the percentage of “not discussed” applications. For African-Americans, 691/1149 =60.0% of the applications were not discussed whereas for Whites, 23,437/58,124 =40% were not discussed (see supplementary material to the paper). The actual funding curves (funding probability as a function of priority score) are quite similar (Supplementary Figure S1). If applications are not discussed, program has very little ability to make a case for funding, even if this were to be deemed good policy.

that irritated me because it sounds like yet another version of the feigned-helpless response of the NIH on this topic. It also made me take a look at some numbers and bench-race my proposal that the NIH should, right away, simply pick up enough applications from African-American PIs to equalize success rates. Just as they have so clearly done, historically, for Early Stage Investigators and very likely done for women PIs.

Here’s the S1 figure from Ginther et al, 2011:
[Figure S1 from Ginther et al. 2011: R01 award probability as a function of priority score, by race/ethnicity]

[In the below analysis I am eyeballing the probabilities for illustration's sake. If I'm off by a point or two, this is immaterial to the overall thrust of the argument.]

My knee-jerk response to Berg's comment is that there are plenty of African-American PIs' applications available for pickup. As in, far more than would be required to make up the aggregate success rate discrepancy (which was about 10% in award probability). So talking about the triage rate is a distraction (but see below for more on that).

There is a risk here of falling into Privilege-Thinking, i.e., that we cannot possibly countenance any redress of discrimination that, gasp, puts the previously underrepresented group above the well-represented groups even by the smallest smidge. But looking at Supplementary Figure S1 from Ginther, and keeping in mind that the African-American PI application number is only 2% of the White applications, we can figure out that a substantial effect on African-American PIs' award probability would cause only an imperceptible change in that for White PI applications. And there's an amazing sweetener….merit.

Looking at the award probability graph from S1 of Ginther, we note that some 15% of the African-American PIs' grants scoring in the 175 bin (old scoring method, youngsters) were not funded. About 55-56% of all ethnic/racial category grants in the next higher (worse) scoring bin were funded. So if Program picks up more of the better-scoring applications from African-American PIs (175 bin) at the expense of the worse-scoring applications of White PIs (200 bin), we have actually ENHANCED MERIT of the total population of funded grants. Right? Win/Win.

So if we were to follow my suggestion, what would be the relative impact? Well thanks to the 2% ratio of African-American to White PI apps, it works like this:

Take the 175 scoring bin, in which about 88% of White PIs and 85% of AA PIs were successful. Take a round number of 1,000 apps in that scoring bin (for didactic purposes, also ignoring the other ethnicities) and you get a 980/20 White/African-American PI ratio of apps. In that 175 bin we'd need 3 more African-American PI apps funded to get to 100%. In the next higher (worse) scoring bin (200 score), about 56% of White PI apps were funded. Taking three from this bin and awarding three more AA PI awards in the next better scoring bin would plunge the White PI award probability from 56% to 55.7%. Whoa, belt up cowboy.

Moving down the curve with the same logic, we find in the 200 score bin that there are about 9 AA PI applications needed to put the 200 score bin to 100%. Looking down to the next worse scoring bin (225) and pulling these 9 apps from white PIs we end up changing the award probability for these apps from 22% to ..wait for it….. 20.8%.

And so on.

(And actually, the percentage changes would be smaller in reality because there is typically not a flat distribution across these bins and there are very likely more applications in each worse-scoring bin compared to the next better-scoring bin. I assumed 1,000 in each bin for my example.)
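Here is that arithmetic in one place, as a minimal sketch. It bakes in the same didactic assumptions: 1,000 apps per scoring bin, a 2% AA share, and award probabilities eyeballed from Figure S1, so the outputs land within a point or so of the numbers above.

```python
# The pickup arithmetic in one place. Same didactic assumptions as the
# prose: 1,000 apps per scoring bin, a 2% AA share, and award
# probabilities eyeballed from Figure S1, so outputs land within a point
# or so of the numbers above.
n = 1000
n_aa, n_white = n * 0.02, n * 0.98

# (score bin, White award prob, AA award prob), best to worst
bins = [(175, 0.88, 0.85), (200, 0.56, 0.56), (225, 0.22, 0.22)]

for (score, _, p_aa), (worse_score, p_w_worse, _) in zip(bins, bins[1:]):
    extra_aa = round(n_aa * (1 - p_aa))  # AA pickups to reach 100% in this bin
    new_p = (n_white * p_w_worse - extra_aa) / n_white
    print(f"{score} bin: fund {extra_aa} more AA apps; White award "
          f"probability in the {worse_score} bin drops "
          f"{p_w_worse:.1%} -> {new_p:.1%}")
```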

Another way to look at this issue is to take Berg's triage numbers from above. To move to a 40% triage rate for the African-American PI applications, we need to shift 20% (230 applications) into the discussed pile. This represents a whopping 0.4% of the White PI apps being shifted onto the triage pile to keep the numbers discussed the same.
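And the triage arithmetic from Berg's numbers works out the same way:

```python
# Berg's triage numbers, quoted earlier: 691/1149 AA and 23,437/58,124
# White applications were not discussed.
aa_apps, white_apps = 1149, 58124
print(f"AA triage rate now: {691 / aa_apps:.0%}")                     # ~60%
shift = round(aa_apps * 0.20)            # move 20% of AA apps into 'discussed'
print(f"AA apps rescued from triage: {shift}")                        # ~230
print(f"White apps displaced in exchange: {shift / white_apps:.1%}")  # ~0.4%
```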

These are entirely trivial numbers in terms of the “hit” to the chances of White PIs and yet you could easily equalize the success rate or award probability for African-American PIs.

It is even more astounding that this could be done by picking up African-American PI applications that scored better than the White PI applications that would go unfunded to make up the difference.

Tell me how this is not a no-brainer for the NIH?