Jocelyn Kaiser at ScienceInsider has obtained data on PI numbers from the NIH.

NIH PIs Graphic

Nice.

I think this graph should be pinned up right next to Sally Rockey’s desk. It is absolutely essential to any attempts to understand and fix grant application success rates and submission churning.

UPDATE 03/12/14: I should have noted that this graph depicts PIs who hold R01-equivalent grants (R01, R23, R29, R37 with ARRA excluded). The Science piece has this to say about the differential from RPG:

NIH shared these data for two sets of grants: research project grants (RPGs), which include all research grants, and R01 equivalents, a slightly smaller category that includes the bread-and-butter R01 grants that support most independent labs.

NIH-PIs-RPG-R01eq

But if you read carefully, they’ve posted the Excel files for both the R01-equivalent and RPG datasets. Woo-hoo! Let’s get to graphing, shall we? There is nothing like a good comparison graph to make summary language a little more useful. Don’t you think? I know I do….

A “slightly smaller category” eh? Well, I spy some trends in this direct comparison. Let’s try another way to look at it. How about we express the difference between the RPG and R01-equivalent PI numbers to see how many folks have been supported on non-R01/equivalent Research Project Grants over the years…
NIHPI-RPGdifferential

Well, I’ll be hornswoggled. All this invention of DP-this and RC-that and RL-whatsit and all the various U-mechs and P01s (Center components seem to be excluded) in recent years seemingly has had an effect. Sure, the number of R01-equivalent PIs only drifted slightly downward from the end of the doubling until now (relieved briefly by the stimulus). So those in NIH land could say “Look, we’re not sacrificing R01s, our BreadNButter(TM) Mech!” But in the context of the growth of non-R01 RPG projects, well….hmmm.
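For anyone who wants to reproduce the differential graph, a minimal sketch of the calculation. The PI counts below are illustrative placeholders, not the actual NIH numbers; in practice you would read the two posted Excel files (e.g., with `pandas.read_excel`) and align them by fiscal year.

```python
# Placeholder PI counts by fiscal year -- NOT the real NIH figures.
# Substitute values parsed from the posted RPG and R01-equivalent spreadsheets.
rpg_pis = {2010: 27000, 2011: 26800, 2012: 26500, 2013: 26000}
r01eq_pis = {2010: 21800, 2011: 21500, 2012: 21300, 2013: 21000}

def non_r01_differential(rpg, r01eq):
    """PIs supported on non-R01/equivalent RPGs: RPG count minus R01-eq count, per year."""
    return {year: rpg[year] - r01eq[year] for year in sorted(rpg) if year in r01eq}

diff = non_r01_differential(rpg_pis, r01eq_pis)
for year, n in diff.items():
    print(year, n)
```

Plotting `diff` by year gives the trend line discussed above.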

Jeremy Berg has a new President’s Message up at ASBMB Today. It looks into a topic of substantial interest to me, i.e., the fate of investigators funded by the NIH. This contrasts with our more-usual focus on the fate of applications.

With that said, the analysis does place the impact of the sequester in relatively sharp focus: There were about a thousand fewer investigators funded by these mechanisms in FY13 compared with FY12. This represents more than six times the number of investigators who lost this funding from FY11 to FY12 and a 3.8 percent drop in the R-mechanism-funded investigator cohort.

Another tidbit addresses the usual claim from NIHlandia that R-mechs, and R01s in particular, are always prioritized.

In her post, Rockey notes that the total funding for all research project grants, or RPGs, dropped from $15.92 billion in FY12 to $14.92 billion in FY13, a decrease of 6.3 percent. The total funding going to the R series awards that I examined (which makes up about 85 percent of the RPG pool) dropped by 8.9 percent.

What accounts for this difference? U01 awards comprise the largest remaining portion of the RPG pool…The funds devoted to U01 awards remained essentially constant from FY12 to FY13 at $1.57 billion.
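The arithmetic behind the quoted funding drop is easy to sanity-check (dollar figures, in billions, taken from the quote above; the `pct_drop` helper is just for illustration):

```python
def pct_drop(before, after):
    """Percentage decrease from `before` to `after`."""
    return 100.0 * (before - after) / before

# Total RPG funding, FY12 -> FY13, in $ billions (figures from Rockey's post).
rpg_drop = pct_drop(15.92, 14.92)
print(round(rpg_drop, 1))  # -> 6.3, matching the quoted 6.3 percent
```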

Go read the whole thing.

This type of analysis really needs more attention at the NIH level. They’ve come a looooong way in recent years in terms of their willingness to focus on what they are actually doing in terms of applications, funding, etc. This is in no small part due to the efforts of Jeremy Berg, who used to be the Director of NIGMS. But tracking the fate of applications only goes so far, particularly when it is assessed only on a 1-2 year basis.

The demand on the NIH budget is related to the pool of PIs seeking funding. This pool is considerably less elastic than the submission of grant applications. PIs don’t submit grant applications endlessly for fun, you know. They seek a certain level of funding. Once they reach that, they tend to stop submitting applications. A lot of the increase in application churn over the past decade or so has to do with the relative stability of funding. When odds of continuing an ongoing project are high, a large number of PIs can just submit one or two apps every 5 years and all is well. Uncertainty is what makes her submit each and every year.

Similarly, when a PI is out of funding completely, the number of applications from this lab will rise dramatically….right up until one of them hits.

I argue that if solutions to the application churn and the funding uncertainty (which decreases overall productivity of the NIH enterprise) are to be found, they will depend on a clear understanding of the dynamics of the PI population.

Berg has identified two years in which the PI turnover is very different. How do these numbers compare with historical trends? Which is the unusual one? Or is this the expected range?

Can we see the 1,000-PI loss as a temporary situation or a permanent one? It is an open question how sequential years without NIH funding affect a PI. Do these individuals tend to regain funding in 2, 3 or 4 years’ time? Do they tend to go away and never come back? More usefully, what proportion of the lost investigators will follow each of these fates?

The same questions arise for the other factoids Berg mentions. The rate at which R00 awardees transition to other funding would seem incredibly important to know. But a one-year gap seems hardly worth discussing; that can easily happen under current conditions. If they are still not funded 2 or maybe 3 years after the R00 expires, however? That is of much greater import.

Still, a welcome first step, Dr. Berg. Let’s hope Sally Rockey is listening.

A communication to the blog raised an issue that is worth exploring in a little more depth. The questioner wanted to know if I knew why a NIH Program Announcement had disappeared.

The Program Announcement (PA) is the most general of the NIH Funding Opportunity Announcements (FOAs). It is described with these key features:

  • Identifies areas of increased priority and/or emphasis on particular funding mechanisms for a specific area of science
  • Usually accepted on standard receipt (postmarked) dates on an on-going basis
  • Remains active for three years from date of release unless the announcement indicates a specific expiration date or the NIH Institute/Center (I/C) inactivates sooner

In my parlance, the PA means “Hey, we’re interested in seeing some applications on topic X“….and that’s about it. Admittedly, the study section reviewers are supposed to conduct review in accordance with the interests of the PA. Each application has to be submitted under one of the FOAs that are active. Sometimes, this can be as general as the omnibus R01 solicitation. That’s pretty general. It could apply to any R01 submitted to any of the NIH Institutes or Centers (ICs). The PAs can offer a greater degree of topic specificity, of course. I recommend you go to the NIH Guide page and browse around. You should bookmark the current-week page and sign up for email alerts if you haven’t already. (Yes, even grad students should do this.) Sometimes you will find a PA that seems to fit your work exceptionally well and, of course, you should use it. Just don’t expect it to be a whole lot of help.

This brings us to the specific query that was sent to the blog, i.e., why did PA-14-106 go missing, only a week or so after being posted?

Sometimes a PA expires and is either not replaced or you have happened across it in between expiration and re-issue of the next 3-year version. Those are the more-common reasons. I’d never seen one be pulled immediately after posting, however. But the NOT-DA-14-006 tells the tale:

This Notice is to inform the community that NIDA’s “Synthetic Psychoactive Drugs and Strategic Approaches to Counteract Their Deleterious Effects” Funding Opportunity Announcements (FOAs) (PA-14-104, PA-14-105, PA-14-106) have been reposted as PARs, to allow a Special Emphasis Panel to provide peer review of the applications. To make this change, NIDA has withdrawn PA-14-104, PA-14-105, PA-14-106, and has reposted these announcements as PAR-14-106, PAR-14-105, and PAR-14-104.

This brings us to the key difference between the PA and a PAR (or a PAS):

  • Special Types
    • PAR: A PA with special receipt, referral and/or review considerations, as described in the PAR announcement
    • PAS: A PA that includes specific set-aside funds as described in the PAS announcement

Applications submitted under a PA are going to be assigned to the usual Center for Scientific Review (CSR) panels and thrown in with all the other applications. This can mean that the special concerns of the PA do not really influence review. How so? Well, the NIDA has a generic-ish and long-running PA on the “Neuroscience Research on Drug Abuse“. This is really general. So general that several entire study sections of the CSR fit within it. Why bother reviewing in accordance with the PA when basically everything assigned to the section is, vaguely, in this sphere? And even on the more-specific ones (say, Sex-Differences in Drug Abuse or HIV/AIDS in Drug Abuse, that sort of thing) the general interest of the IC fades into the background. The panel is already more-or-less focused on those being important issues.  So the Significance evaluation on the part of the reviewers barely budges in response to a PA. I bet many reviewers don’t even bother to check the PA at all.

The PAR means, however, that the IC convenes its own Special Emphasis Panel specifically for that particular funding opportunity. So the review panel can be tailored to the announcement’s goals, much in the way that a panel is tailored for a Request for Applications (RFA) FOA. The panel can have very specific expertise for both the PAR and for the applications that are received and, presumably, have reviewers with a more than average appreciation for the topic of the PAR. There is no existing empaneled population of reviewers to limit choices. There is no distraction from the need to get reviewers who can handle applications on topics different from the PAR in question. An SEP brings focus. The mere fact of an SEP also tends to keep reviewers’ minds on the announcement’s goals. They don’t have to juggle the goals of PA vs PA vs PA as they would in a general CSR panel.

As you know, Dear Reader, I have blogged about both synthetic cannabinoid drugs and the “bath salts” here on this blog now and again. So I can speculate a little bit about what happened here. These classes of recreational drugs hit the attention of regulatory authorities and scientists in the US around about 2009, and certainly by 2010. There have been a modest but growing number of papers published. I have attended several conference symposia themed around these drugs. And yet if you do some judicious searching on RePORTER you will find precious few grants dedicated to these compounds. It is no great leap of faith to figure that various PIs have been submitting grants on these topics and are not getting fundable scores. There are, of course, many possible reasons for this and some may have influenced NIDA’s thinking on this PA/PAR.

It may be the case that NIDA felt that reviewers simply did not know that it wanted to see some applications funded on these topics and were consequently not prioritizing their Significance. Or it may be that NIDA felt that its good PIs, the ones who would write competitive grants, were not interested in the topics. Either way, a PA would appear to be sufficient encouragement.

The replacement of a PA with a PAR, however, suggests that NIDA has concluded that the problem lies with study section reviewers and that a mere PA was not going to be sufficient* to focus minds.

As one general conclusion from this vignette, the PAR is substantially better than the PA when it comes to enhancing the chances for applications submitted to it. This holds in a case in which there is some doubt that the usual CSR study sections will find the goals to be Significant. The caveat is that when there is no such doubt, the PAR is worse because the applications on the topic will all be in direct competition with each other. The PAR essentially guarantees that some grants on the topic will be funded, but the PA potentially allows more of them to be funded.

I say “essentially” because the PAR does not come with set-aside funds as do the RFA and the PAS. And I say “potentially” because this depends on there being many highly competitive applications distributed across several CSR sections for a PA.

__

*This is a direct validation of my position that the PA is a rather weak stimulus, btw.

As always when it comes to NIDA specifics, see Disclaimer.

Congress is losing it.

February 27, 2014

Just after we noticed that Congress has seen fit to add a special prohibition on anything done with Federal grant funds that might suggest gun control is in order, there’s another late breaking Congressional mandate notice.

NOT-OD-14-062:

FY 2014 New Legislative Mandate

Restriction of Pornography on Computer Networks (Section 528)
“(a) None of the funds made available in this Act may be used to maintain or establish a computer network unless such network blocks the viewing, downloading, and exchanging of pornography.

(b) Nothing in subsection (a) shall limit the use of funds necessary for any Federal, State, tribal, or local law enforcement agency or any other entity carrying out criminal investigations, prosecution, or adjudication activities.”

Really guys? That was a top priority item?

Interesting though, isn’t it? Including indirect cost expenditures, this would seem to apply to a very large number of Universities in the US. And now Congress has demanded they adopt nanny pR0n filters.

I don’t see any exceptions for classwork here, either.

NIH Multi-PI Grant Proposals.

February 24, 2014

In my limited experience, the creation, roll-out and review of Multi-PI direction of a single NIH grant has been the smoothest GoodThing to happen in NIH supported extramural research.

I find it barely draws mention in review and deduce that my fellow scientists agree with me that it is a very good idea, long past due.

Discuss.

While I’m getting all irate about the pathetic non-response to the Ginther report, I have been neglecting to think about the intramural research at NIH.

From Biochemme Belle:

In reflecting on the profound lack of association of grant percentile rank with the citations and quantity of the resulting papers, I am struck that it reinforces a point made by YHN about grant review.

I have never been a huge fan of the Approach criterion. Or, more accurately, of how it is reviewed in practice. Review of the specific research plan can bog down in many areas. A review is often derailed into critique of the applicant’s failure to appropriately consider all the alternatives, into disagreement over predictions that can only be resolved empirically, into endless ticky-tack kvetching over buffer concentrations, into a desire for exacting specification of each and every control….. I am skeptical. I am skeptical that identifying these things plays any real role in the resulting science. First, because much of the criticism of the specifics of the approach vanishes when you consider that the PI is a highly trained scientist who will work out the real science during the conduct of same. Like we all do. For anticipated and unanticipated problems that arise. Second, because much of this Approach review is rightfully the domain of the peer review of scientific manuscripts.

I am particularly unimpressed by the shared delusion that the grant revision process by which the PI “responds appropriately” to the concerns of three reviewers alters the resulting science in a specific way either. Because of the above factors and because the grant is not a contract. The PI can feel free to change her application to meet reviewer comments and then, if funded, go on to do the science exactly how she proposed in the first place. Or, more likely, do the science as dictated by everything that occurs in the field in the years after the original study section critique was offered.

The Approach criterion score is the one that is most correlated with the eventual voted priority score, as we’ve seen in data offered up by the NIH in the past.

I would argue that a lot of the Approach criticism that I don’t like is an attempt to predict the future of the papers. To predict the impact and to predict the relative productivity. Criticism of the Approach often sounds to me like “This won’t be publishable unless they do X…..” or “this won’t be interpretable, unless they do Y instead….” or “nobody will cite this crap result unless they do this instead of that“.

It is a version of the deep motivator of review behavior. An unstated (or sometimes explicit) fear that the project described in the grant will fail, if the PI does not write different things in the application. The presumption is that if the PI does (or did) write the application a little bit differently in terms of the specific experiments and conditions, that all would be well.

So this also says that when Approach is given a congratulatory review, the panel members are predicting that the resulting papers will be of high impact…and plentiful.

The NHLBI data say this is utter nonsense.

Peer review of NIH grants is not good at predicting, within the historical fundable zone of about the top 35% of applications, the productivity and citation impact of the resulting science.

What the NHLBI data cannot address is a more subtle question. The peer review process decides which specific proposals get funded. Which subtopic domains, in what quantity, with which models and approaches… and there is no good way to assess the relative wisdom of this. For example, a grant on heroin may produce the same number of papers and citations as a grant on cocaine. A given program on cocaine using mouse models may produce approximately the same bibliometric outcome as one using humans. Yet the real world functional impact may be very different.

I don’t know how we could determine the “correct” balance but I think we can introspect that peer review can predict topic domain and the research models a lot better than it can predict citations and paper count. In my experience when a grant is on cocaine, the PI tends to spend most of her effort on cocaine, not heroin. When the grant is for human fMRI imaging, it is rare the PI pulls a switcheroo and works on fruit flies. These general research domain issues are a lot more predictable outcome than the impact of the resulting papers, in my estimation.

This leads to the inevitable conclusion that grant peer review should focus on the things that it can affect and not on the things that it cannot. Significance. Aka, “The Big Picture”. Peer review should wrestle over the relative merits of the overall topic domain, the research models and the general space of the experiments. It should de-emphasize the nitpicking of the experimental plan.

A reader pointed me to this News Focus in Science which referred to Danthi et al, 2014.

Danthi N, Wu CO, Shi P, Lauer M. Percentile ranking and citation impact of a large cohort of National Heart, Lung, and Blood Institute–funded cardiovascular R01 grants. Circ Res. 2014 Feb 14;114(4):600-6. doi: 10.1161/CIRCRESAHA.114.302656. Epub 2014 Jan 9.

[PubMed, Publisher]

I think Figure 2 makes the point, even without knowing much about the particulars
Danthi14-Fig2

and the last part of the Abstract makes it clear.

We found no association between percentile rankings and citation metrics; the absence of association persisted even after accounting for calendar time, grant duration, number of grants acknowledged per paper, number of authors per paper, early investigator status, human versus nonhuman focus, and institutional funding. An exploratory machine learning analysis suggested that grants with the best percentile rankings did yield more maximally cited papers.

The only thing surprising in all of this was a quote attributed to the senior author Michael Lauer in the News Focus piece.

“Peer review should be able to tell us what research projects will have the biggest impacts,” Lauer contends. “In fact, we explicitly tell scientists it’s one of the main criteria for review. But what we found is quite remarkable. Peer review is not predicting outcomes at all. And that’s quite disconcerting.”

Lauer is head of the Division of Cardiovascular Research at the NHLBI and has been there since 2007. Long enough to know what time it is. More than long enough.

The take home message is exceptionally clear. It is a message that most scientists who have stopped to think about it for half a second have already arrived at.


Science is unpredictable.

Addendum: I should probably point out, for those readers who are not familiar with the whole NIH Grant system, that the major unknown here is the fate of unfunded projects. It could very well be the case that the ones that manage to win funding do not differ much, while the ones that are kept from funding would have failed miserably had they been funded. Obviously we can’t know this until the NIH decides to do a study in which it randomly picks up grants across the entire distribution of priority scores. If I were a betting man I’d have to lay even odds on the upper and lower halves of the score distribution (1) not differing versus (2) the upper half doing better in terms of paper metrics. I really don’t have a firm prediction; I could see it either way.

…or maybe it is.

One of the things that I try to emphasize in NIH grant writing strategy is to ensure you always submit a credible application. It is not that difficult to do.

You have to include all the basic components, not commit more than a few typographical errors and write in complete sentences. Justify the importance of the work. Put in a few pretty pictures and plenty of headers to create white space. Differentiate an Aim from a hypothesis from an Experiment.

Beyond that you are often constrained by the particulars of your situation and a specific proposal. So you are going to have to leave some glaring holes, now and again. This is okay! Maybe you are a noob and have little in the way of specific Preliminary Data. Or you have a project which is, very naturally, a bit of a fishing expedition: hypothesis-generating, exploratory work. Perhaps the Innovation isn’t high, or it is a long stretch to attach health relevance.

Very few grants I’ve read, including many that were funded, are even close to perfect. Even the highest scoring ones have aspects that could readily be criticized without anyone raising an eyebrow.

The thing is, you have to be able to look at your proposal dispassionately and see the holes. You should have a fair idea of where trouble may lie ahead and shore up the proposal as best you can.

No preliminary data? Then do a better job with the literature predictions and alternate considerations/pitfalls. Noob lab? Then write more methods and cite them more liberally. Low Innovation? Hammer down the Significance. Established investigator wanting to continue the same-old, same-old under new funding? Disguise that with an exciting hypothesis or newish-sounding Significance link. (Hint: testing the other person’s hypothesis with your approaches can go over great guns when you are in a major theoretical dogfight over years’ worth of papers.)

What you absolutely cannot do is to leave the reviewers with nothing. You cannot leave gaping holes all over the application. That, my friends, is what drops you* below the “credible” threshold.

Don’t do that. It really does not make you any friends on the study section panel.

__
*This is one case where the noob is clearly advantaged. Many reviewers make allowances for a new or young-ish laboratory. There is much less sympathy for someone who has been awarded several grants in the past when the current proposal looks like a slice of Swiss cheese.

The Legislative Mandates have been issued for FY 2014.

The intent of this Notice is to provide information on the following statutory provisions that limit the use of funds on NIH grant, cooperative agreement, and contract awards for FY2014.

It contains the usual familiar stuff; of pointed interest are the prohibition against using grant funds to promote the legalization of Schedule I drugs and the one that prohibits any lobbying of the government. With respect to the Schedule I drugs issue, for a certain segment of my audience, I remind you of the critical exception:

(8) Limitation on Use of Funds for Promotion of Legalization of Controlled Substances (Section 509)
“(a) None of the funds made available in this Act may be used for any activity that promotes the legalization of any drug or other substance included in schedule I of the schedules of controlled substances established under section 202 of the Controlled Substances Act except for normal and recognized executive-congressional communications. (b)The limitation in subsection (a) shall not apply when there is significant medical evidence of a therapeutic advantage to the use of such drug or other substance or that federally sponsored clinical trials are being conducted to determine therapeutic advantage.”

I wouldn’t like to find out the hard way, but I would presume this means that research into the medical benefits of marijuana, THC and/or other cannabinoid compounds is just fine. I seem to recall reading more than one paper listing NIH support that might be viewed in this light.

What I found more fascinating was a little clause that I had not previously noticed in the anti-lobbying section.

(3) Anti-Lobbying (Section 503)

“(c) The prohibitions in subsections (a) and (b) shall include any activity to advocate or promote any proposed, pending or future Federal, State or local tax increase, or any proposed, pending, or future requirement or restriction on any legal consumer product, including its sale or marketing, including but not limited to the advocacy or promotion of gun control.”

There is also another stand-alone section in case you didn’t get the point:

(2) Gun Control (Section 217)
“None of the funds made available in this title may be used, in whole or in part, to advocate or promote gun control.”

I was sufficiently curious to go back through the years and found that this language did not appear in the Notice for FY 2011 and was inserted for FY 2012. This was part of the Consolidated Appropriations Act, 2012 (Public Law 112-74), signed into law on December 23, 2011. I didn’t bother to go back through the legislative history to figure out when the gun control part was added, but it looks like something similar affecting the CDC appropriation was put into place in 1996.

So I guess we should have expected the anti-gun-control forces to get around to it eventually?

Existing commitments will be honored but NOT-CA-14-023 makes it clear:

Effective immediately, no new nominations for NCI MERIT (R37) awards will be made.  In addition, NCI MERIT (R37) extensions will not be considered.

As a reminder, competing continuation R01s that score very well can be nominated for the R37, which means that you get 10 years of non-competing renewal instead of the usual limit of 5. That last bit in the quote refers to the fact that apparently these awards can be extended even past the first 10 years.

 

A search on RePORTER shows that the NCI has about* 43 of these on the books at the moment.

 

*I didn’t screen for supplements or other dual entries.

This question is mostly for the more experienced of the PItariat in my audience. I’m curious as to whether you see your grant scores as being very similar over the long haul.

That is, do you believe that a given PI and research program is going to be mostly a “X %ile” grant proposer? Do your good ones always seem to be right around 15%ile? Or for that matter in the same relative position vis a vis the presumed payline at a given time?

Or do you move around? Sometimes getting 1-2%ile, sometimes midway to the payline, sometimes at the payline, etc?

This latter describes my funded grants better. A lot of relative score (i.e., percentile) diversity.

It strikes me today that this very experience may be what reinforces much of my belief about the random nature of grant review. Naturally, I think I put up more or less the same strength of proposal each time. And naturally, I think each and every one should be funded.

So I wonder how many people experience more similarity in their scores, particularly for their funded or near-miss applications. Are you *always* coming in right at the payline? Or are you *always* at X %ile?

In a way this goes to the question of whether certain types of grant applications are under greater stress when the paylines tighten. The hypothesis being that perhaps a certain type of proposal is never going to do better than about 15%ile. So in times past, no problem, these would be funded right along with the 1%ile AMAZING proposals. But in the current environment, a change in payline makes certain types of grants struggle more.

I don’t. I just don’t. I cannot in any way understand scientists who are offended that they have to come up with some thin veneer of health relevance to justify the grant award they are seeking. The H in NIH stands for “Health”. The mission statement reads:

NIH’s mission is to seek fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to enhance health, lengthen life, and reduce illness and disability.

Yeah, sure, if you end at the seventh word, you can convince yourself that the NIH is about basic research. Maybe you get to continue on to the fifteenth. But this is a highly selective reading. I just don’t see where it is a burden to think for a minute or two about whether you are doing anything to address the second half of the statement.

After all, you are asking the taxpayers of the US to front you some serious cash. Millions of dollars for many of the PIs who are complaining about how hard it is to get basic research grants funded (BRAINI proponents, I’m looking at you). It really isn’t that much of an insult to ask you to pay something back on the matter of public health.

An RT Tweet from @betenoire1 was making the rounds of my Twitter feed today. It points to a Facebook polemic from one Leon Avery, PhD (CV; RePORTER). He says that he is Leaving Science.

I have decided, after 40 years as a lab scientist and 24 years running my own lab, to shut it down and leave. I write this to explain why, for those of my friends and colleagues who’d like to know. The short answer is that I’m tired of being a professor.

Okay, no problem. No problem whatsoever. Dude was appointed in 1990 and has been working his tail off for 24 years at the NIH funded extramural grant game. He’s burned out. I get this.

I have never liked being a boss. My happiest years as a scientist were when I was a student and then a postdoc. I knew I wouldn’t like running a lab, and I didn’t like it. This has always been true.

My immediate plans are to go back to school and get a degree in Mathematics. This too has been a passion of mine ever since high-school sophomore Geometry, when I first learned what math is really about. And my love of it has increased in recent years as I have learned more. It will be tremendous fun to go back and learn those things that I didn’t have the time or the money to study as an undergrad.

GREAT! This is awesome. You do one thing until you tire of it and then, apparently, you have the ability to retire into a life of the mind. This is FANTASTIC!

So what’s the problem? Well, he can’t resist taking a few swipes at NIH funded extramural science, even as he admits he was never cut out for this PI stuff from the beginning. And after a long and easy gig (more on that below) he is distressed by the NIH funding situation. And feels like his way of doing science is under specific attack.

For many years NIH was interested in funding basic research as well as research aimed directly at curing diseases. With the tightening funding has come a focus on so-called “translational research”. Now when we apply for funding we have to explain what diseases our work is going to cure.

Ok, actually, this is the "truthy" part that is launching a thousand discussions of the "real problem" at NIH. So I'm going to address this part to make it very clear to his fans and back thumpers what we are talking about. On RePORTER (link above) we find that Dr Avery had one grant for 22 years, awarded in April of 1991; his CV lists 1990 as his first appointment. So within 15 mo (but likely 9 mo, given typical academic start dates from about July through Sept) he had R01 support that he maintained throughout his career. In the final 5 years, he was awarded the R37, which means he had ten years without competing renewal. I see another R21 and one more R01. This latter was awarded on the A1. So as far as we can tell, Professor Avery never had to work too hard for his NIH grant funding. I mean sure, maybe he was putting in three grants a round for 20 years and never managed to land anything more than what I have reviewed. Somehow I doubt this. I bet his difficulties getting the necessary grant funding to run his laboratory were not all that steep compared to most of the rest of us plebes.

And actually, his Facebook post backs it up a tiny bit.

And I’ve been lucky that the world was willing to pay me to do it. Now it is hard for me to explain the diseases my work will cure. It feels like selling snake oil. I don’t want to do it any more.

I think the people enthusiastically passing along this Fb post of his maybe should focus on the key bits about his personal desires and tolerance for the job. Instead of turning this into yet another round of: “successful scientist bashes the NIH system now that finally, after all this time of a sweet, sweet ride s/he experiences a bare minimal taster of what the rest of us have faced our entire careers”.

Final note on the title: Dude, by all means. Anyone who has had a nice little run with NIH funding and is no longer enthused….LEAVE. We’ll keep citing you, don’t worry. Leave the grants to those of us who still give a crap, though, eh?

UPDATE (comment from @boehninglab):

Jeremy Berg has a new column up at ASBMB Today which examines the distribution of NIH intramural funding. Among other things, he notes that you can play along at home via searching RePORTER using the ZIA activity code (i.e., in place of R01, R21, etc). At first blush you might think “WOWZA!”. The intramural lab is pretty dang flush. If you think about the direct costs of an extramural R01 grant – the full modular is only $250K per year. So you would need three awards (ok, the third one could be an R21) just to clear the first bin. But there are interesting caveats sprinkled throughout Berg’s comments and in the first comment to the piece. Note the “Total Costs”? Well, apparently there is an indirect costs rate within the IRPs and Berg comments that it is so variable that it is hard to issue anything similar to a negotiated extramural IDC rate for the entire NIH Intramural program. The comment from an ex-IRP investigator points to more issues. There may be some shared costs inserted into a given PI’s apparent budget that this PI has no control over. Whether this is part of the overhead or an overhead-like cost….or maybe a cost shared across one IC’s IRP…who knows?

We also don’t know what a given PI has to pay for out of his or her ZIA allocation. What are animal housing costs like? Are they subsidized for certain ICs’ IRPs? For certain labs? Who is a PI and who is a staff scientist of some sort within the IRPs? Do these statuses differ? Are they comparable to extramural lab operations? I know for certain sure that people who are more or less the equivalent of an extramural Assistant/Associate Professor in a soft money job category exist within the NIH IRPs without being considered a PI with their own ZIA allocation. So that means that a “PI” on the chart that Berg presents may in fact be equivalent to 2-3 PIs out here in extramural land. (And yes, I understand that some of the larger extramural labs similarly have several people who would otherwise be heading their own lab all subsumed within the grants awarded to one BigCheez PI.)

With that said, however, the IRP is supposed to be special. As Berg notes:

The IRP mission statement asserts that the IRP should “conduct distinct, high-impact laboratory, clinical, and population-based research” and that it should support research that “cannot be readily funded or accomplished in traditional academia.”

So by one way of looking at it, we shouldn’t be comparing the IRP scientists to ourselves. They should be different.

Even if we think of IRP investigators as not much different from ourselves, I’m having difficulty making any sense of these numbers. It is nice to see them, but it is really hard to compare to what is going on with extramural grant funding.

Perhaps of greater value is the analysis Berg presents for whether NIH’s intramural research is feeling its fair share of the budgetary pain.

In 2003, when I became an NIH institute director, the overall NIH appropriation was $26.74 billion, while the overall intramural program consumed $2.56 billion, or 9.6 percent. In fiscal 2013, the overall NIH appropriation was $29.15 billion, and the intramural share had grown to $3.26 billion, or 11.2 percent.
 
Some of this growth is because of ongoing intramural activities, such as those involving the NIH Clinical Center, where, like at other hospitals, costs are very hard to contain below rates of inflation, or because of new activities, such as the NIH Chemical Genomics Center. The IRP is particularly expensive in terms of taxpayer dollars, because it is difficult to leverage the federal support to the IRP with funds from other sources as occurs in the extramural community.
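Berg's percentages check out, and comparing the growth rates makes the point even more starkly. A quick back-of-the-envelope sketch in Python, using only the numbers from the quote above (nominal dollars, not inflation-adjusted):

```python
# Berg's figures, in $billions (from the quoted passage above)
nih_2003, irp_2003 = 26.74, 2.56
nih_2013, irp_2013 = 29.15, 3.26

# Intramural share of the overall NIH appropriation
share_2003 = irp_2003 / nih_2003 * 100  # ~9.6%
share_2013 = irp_2013 / nih_2013 * 100  # ~11.2%

# Ten-year growth of each pot
nih_growth = (nih_2013 / nih_2003 - 1) * 100  # ~9.0%
irp_growth = (irp_2013 / irp_2003 - 1) * 100  # ~27.3%

print(f"IRP share: {share_2003:.1f}% -> {share_2013:.1f}%")
print(f"Growth 2003-2013: NIH overall {nih_growth:.1f}%, IRP {irp_growth:.1f}%")
```

In a decade when the overall appropriation grew about 9% in nominal dollars (i.e., shrank in real terms), the intramural program grew roughly three times as fast.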

So I guess that would be “no”. No, the IRP, in aggregate, is not sharing the pain of the flatlined budget. There is no doubt that some of the individual components of the various IRPs are. It is inevitable. Previously flush budgets are no doubt being reduced. Senior folk are being pushed out. Mid- and lower-level employees are being cashiered. I’m sure there are counter examples. But as a whole, it is clear that the IRP is being protected, inevitably at the expense of R-mech extramural awards.