Michael Eisen has an interesting post up today on a topic which comes up occasionally here on this blog. He blames peer review, but really it is an indictment of GlamourMag science: a criticism of the conflation of journal reputation with the quality of any article published therein.

One finger is pointed at the reviewer/editor demands for more data/studies/proof before a paper can be accepted. I agree with much of Eisen’s critique on this point.

What I am pondering today, however, is the tight NIH grant supply.

It strikes me that this is going to be a damn good thing if it stomps down on authors’ willingness to put up with unnecessary* reviewer demands for more work.

*the controls appropriate to evaluate the data as presented are fair game. “gee it would be cool if you also showed blahdeeblah…” are typically not.

No matter what esoteric small town grocer science you conduct in your laboratory, there will eventually be a FOA just for you.

No idea what this “MedCities News” is but they’ve tabulated the FY 2010 NIH awards by Institution and by State.


A ranking of the Top 100 institutions getting NIH grants, followed by a ranking of all 50 states, is below. Johns Hopkins is a runaway leader in getting NIH grants, followed next by the University of Pennsylvania ($577 million), University of Washington ($571 million), University of Michigan ($565 million) and University of California San Francisco ($538 million).

Go read for the full table of Top 100….

Abel Pharmboy pointed to a piece in The Scientist entitled “Losing Your Lab” which discusses the plight of the soft-money researcher who has run out of funding. Actually, the plight of one researcher in particular. The commentary is, however, getting interesting and I thought many of our readers might want to go play.
There are a couple things in the article however that seem a bit off-kilter to me.

Read the rest of this entry »

There is a recent commentary in Nature from Brian C. Martinson, one of those chaps funded to study the enterprise of science. Recent pubs from this author/group on ethical conduct in science are here, here and here. [Update: See a prior note on this work from writedit.]

Bait quotes to get you to read the commentary (emphasis all added, DM):

we should all be concerned about the negative effects this may have on the robustness of the research engine; by damping scientists’ willingness to pursue high-risk projects; by causing them to spend excessive time in pursuit of funding; or by causing talented individuals to shun research careers. Read the rest of this entry »

Jeremy Berg has a new column up at ASBMB Today which examines the distribution of NIH intramural funding. Among other things, he notes that you can play along at home via searching RePORTER using the ZIA activity code (i.e., in place of R01, R21, etc). At first blush you might think “WOWZA!”. The intramural lab is pretty dang flush. If you think about the direct costs of an extramural R01 grant – the full modular is only $250K per year. So you would need three awards (ok, the third one could be an R21) just to clear the first bin.

But there are interesting caveats sprinkled throughout Berg’s comments and in the first comment to the piece. Note the “Total Costs”? Well, apparently there is an indirect costs rate within the IRPs and Berg comments that it is so variable that it is hard to issue anything similar to a negotiated extramural IDC rate for the entire NIH Intramural program. The comment from an ex-IRP investigator points to more issues. There may be some shared costs inserted into a given PI’s apparent budget that this PI has no control over. Whether this is part of the overhead or an overhead-like cost….or maybe a cost shared across one IC’s IRP…who knows?

We also don’t know what a given PI has to pay for out of his or her ZIA allocation. What are animal housing costs like? Are they subsidized for certain ICs’ IRPs? For certain labs? Who is a PI and who is a staff scientist of some sort within the IRPs? Do these statuses differ? Are they comparable to extramural lab operations? I know for certain sure that people who are more or less the equivalent of an extramural Assistant/Associate Professor in a soft money job category exist within the NIH IRPs without being considered a PI with their own ZIA allocation. So that means that a “PI” on the chart that Berg presents may in fact be equivalent to 2-3 PIs out here in extramural land. (And yes, I understand that some of the larger extramural labs similarly have several people who would otherwise be heading their own lab all subsumed within the grants awarded to one BigCheez PI.)

With that said, however, the IRP is supposed to be special. As Berg notes

The IRP mission statement asserts that the IRP should “conduct distinct, high-impact laboratory, clinical, and population-based research” and that it should support research that “cannot be readily funded or accomplished in traditional academia.”

So by one way of looking at it, we shouldn’t be comparing the IRP scientists to ourselves. They should be different.

Even if we think of IRP investigators as not much different from ourselves, I’m having difficulty making any sense of these numbers. It is nice to see them, but it is really hard to compare to what is going on with extramural grant funding.

Perhaps of greater value is the analysis Berg presents on whether NIH’s intramural research is feeling its fair share of the budgetary pain.

In 2003, when I became an NIH institute director, the overall NIH appropriation was $26.74 billion, while the overall intramural program consumed $2.56 billion, or 9.6 percent. In fiscal 2013, the overall NIH appropriation was $29.15 billion, and the intramural share had grown to $3.26 billion, or 11.2 percent.
 
Some of this growth is because of ongoing intramural activities, such as those involving the NIH Clinical Center, where, like at other hospitals, costs are very hard to contain below rates of inflation, or because of new activities, such as the NIH Chemical Genomics Center. The IRP is particularly expensive in terms of taxpayer dollars, because it is difficult to leverage the federal support to the IRP with funds from other sources as occurs in the extramural community.

So I guess that would be “no”. No, the IRP, in aggregate, is not sharing the pain of the flatlined budget. There is no doubt that some of the individual components of the various IRPs are. It is inevitable. Previously flush budgets are no doubt being reduced. Senior folk are being pushed out. Mid and lower level employees are being cashiered. I’m sure there are counter examples. But as a whole, it is clear that the IRP is being protected, inevitably at the expense of R-mech extramural awards.
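The share figures in Berg’s quote are easy to verify with quick arithmetic. A minimal check, using only the dollar amounts from the quoted passage (in billions):

```python
# Intramural share of the overall NIH appropriation, per Berg's figures.
nih_2003, irp_2003 = 26.74, 2.56   # FY2003 appropriation and IRP spend, $B
nih_2013, irp_2013 = 29.15, 3.26   # FY2013 appropriation and IRP spend, $B

share_2003 = irp_2003 / nih_2003 * 100   # about 9.6 percent
share_2013 = irp_2013 / nih_2013 * 100   # about 11.2 percent
print(f"FY2003: {share_2003:.1f}%  FY2013: {share_2013:.1f}%")
```

The 1.6 percentage-point swing may look small, but against a ~$29B appropriation it is roughly half a billion dollars a year.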

Defunding the NIH

December 4, 2013

An article in the Pacific Standard magazine by Michael White provides an update on my prior post on The NIH Un-Doubling. The primary point in that post was a graph published in 2007 in

Heinig SJ, Krakower JY, Dickler HB, Korn D. Sustaining the engine of U.S. biomedical discovery. N Engl J Med. 2007 Sep 6;357(10):1042-7. [Publisher Link]

which presented the NIH budget allocations in dollar amounts adjusted for inflation* (expressed in 1998 dollars). The “undoubling” part reflected the 2007 allocation and 2008 Bush administration request in comparison with a trendline established from the early 1970s until the beginning of the doubling. It’s worth revisiting the graph from that article
[Figure: NIH appropriations trend graph from Heinig et al. 2007]

Figure 1. NIH Appropriations (Adjusted for Inflation in Biomedical Research) from 1965 through 2007, the President’s Request for 2008, and Projected Historical Trends through 2010.
All values have been adjusted according to the Biomedical Research and Development Price Index on the basis of a standard set of relevant goods and services (with 1998 as the base year). The trend line indicates average real annual growth between fiscal years 1971 and 1998 (3.34%), with projected growth (dashed line) at the same rate. The red square indicates the president’s proposed NIH budget for fiscal year 2008, also adjusted for inflation in biomedical research.

because the updated one, below, only starts in 1990.

This new article, How We’re Unintentionally Defunding the NIH, provides the update, now represented in 2011 dollars. I’m not immediately seeing whether Michael White made this graph himself or sourced it from somewhere else, but he does cite a Congressional Research Service report by John F. Sargent Jr. which is worth a read.

This is fascinating. We’ve discussed historical funding trends and success rates under NIH extramural grant awards in the past. One post I wrote is highly pertinent:


The red trace depicts success rates from 1962 to 2008 for R01 equivalents (R01, R23, R29, R37). Note that they are not broken down by experienced/new investigators status, nor are new applications distinguished from competing continuation applications*. The blue line shows total number of applications reviewed…which may or may not be of interest to you. [update 7/12/12: I forgot to mention that the data in the 60s are listed as “estimated” success rates.]

The bottom line here is that looking at the actual numbers can be handy when playing the latest round of “We had it tougher than you did” at the w(h)ine and cheese hour after departmental seminar…Things are worse than they’ve ever been and these dismal patterns have been sustained for much longer. … Anyone who tries to tell you they had it as hard or harder at any time in the past versus now is high as a kite. Period.

One key takeaway from this new graph is a consideration for those who insist that the NIH doubling interval was a poisoned gift. There are those who claim that our current woes arose because research Universities and Medical Schools built up tremendous amounts of new infrastructure and personnel during the doubling, with the expectation that that rate of NIH budget escalation would continue. The thinking is that we experienced a bubble, and that this is the only reason we now have problems with dismal success rates (during this extended interval of budget flatlining and therefore slipping purchasing power**). Too many mouths at the trough, is the way I put the situation, even if I don’t specifically blame the doubling interval for this.

This new graph makes it very clear that we have not just returned to the 3.3% growth trendline for the NIH budget. We have fallen off that line. Furthermore, the stimulus funding and the modest increases the Obama Administration has bruited as an initial budget offering are insufficient to change this divergence. It is absolutely clear that the NIH purchasing power is shrinking. Shrinking below the trends established from 1971 to 1998.

This is not a contraction relative to the doubling interval anymore! We’re way beyond that. We look to be as far below the historical trendline as we were above the line at the peak (end) of the doubling interval. We’re something on the order of $8-$10 Billion in the hole, something around 75% of where the historical trendline would have taken us. That seems like a lot of money until you realize
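The trendline comparison here is just compound growth at the 1971-1998 average real rate of 3.34 percent. A minimal sketch of that arithmetic; the base value is purely illustrative, not the actual BRDPI-adjusted appropriation:

```python
# Project a budget forward at a constant real annual growth rate.
def trendline(base, years, rate=0.0334):
    """Compound `base` forward by `years` at `rate` (3.34% default)."""
    return base * (1 + rate) ** years

# Fifteen years of 3.34% real growth multiplies the base by roughly 1.6x,
# which is why a flat nominal budget falls so far below the line.
growth_factor = trendline(1.0, 15)
```

That compounding is what makes a budget that merely holds steady in nominal dollars slide ever further below the historical projection.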

__
*from here:
[Figure: R01-equivalent success rates and application counts by fiscal year]
source

**using BRDPI (Biomedical Research and Development Price Index)

A reader question over at writedit’s epic thread on NIH Paylines & Resources caught my eye. Tom asks:

Just wondering if a 5 yr R01 grant is harder to get approved than a 4 yr R01…. Please advise.

My short answer is:
No.
Furthermore, your default stance should be a 5 year plan at the modular budget limit of $250,000 in direct costs per year.
Long answer after the jump.

Read the rest of this entry »

I have a post I’m working on that references a topic I’ve been talking about on the blog for a long time. I was about to quote extensively from this one but I figured I’d just better repost the whole thing. This originally appeared on 10 Sep 2007.


I’ve made reference a time or two to what I describe as “bias” for amended (revised) applications. In the lifecycle of the standard, investigator initiated research project grant (the R01) application, it is initially submitted and reviewed and if not funded, the application can be revised/amended one (called the A1) or two (A2) times. (Thereafter the PI must submit a substantially new proposal.) First, the evidence that revised applications score better and are more likely to get funded relative to initial submissions is readily available.

Read the rest of this entry »

So one thing you can request of your Senator or Congress person is quite simple. Does your delegate believe in these three principles? Has s/he signed the petition yet?


The Pro-Test Petition
We the undersigned believe:

  1. That animal research has contributed and continues to contribute to major advances in the length and quality of our lives. It remains vital to understanding basic biological processes and for the development of new treatments and therapies such as antibiotics, vaccines, organ transplants, and cancer medicines.
  2. That animal research is morally justifiable provided animal welfare remains a high priority and no valid non-animal alternatives are available.
  3. That violence, intimidation and harassment of scientists and others involved in animal research is neither a legitimate means of protest, nor morally justified.


MWE&G notes that NIAID is particularly upfront about funding strategies, in substantial contrast to most ICs. I don’t like the opacity of most of the ICs on funding strategies either. But one reason they do it is to minimize certain study section behavior. There is a natural and perhaps inescapable psychology to grant review in which the reviewer is, at some level, thinking “fund it” or “don’t fund it”. This results in scores clustering around the “perceived” funding line.

ICs don’t like this because they want a nice flat distribution of scores so that no matter where the funding line is drawn, there are not a ton of “hard calls” to make. The more applications with the same score, the harder the decision. (Actually applicants should favor this approach too because in theory it decreases arbitrary IC behavior with regard to selecting apps for funding.)

Fortunately, from the IC perspective, there is some lack of calibration in the “perceived funding line” in the typical study section. (Also, SRAs are tasked with fighting this tendency by urging reviewers to distribute their scores across the entire available range.) This introduces variance into the result of the same psychological process, namely funding line seeking, in reviewers. I think that if all Institutes were highly vocal about the funding lines, hard and soft alike, the problem of score clustering would increase. I think you would also start to see mean scores for Institutes start to move around to match the funding line. “Oh, NIMH is at 135 and NIAAA is at 140? Well, I can assign a 130 to this one, a 140 to that one and the SRA can’t say I’m not spreading scores!” Over the tens of thousands of apps I think you would start to see effects.

Then the ICs would have to cycle back on the funding line by saying “well, our grants average 5 pts higher so our cut line is going up”. So the process would cycle around recursively. Not to mention that ICs do compare on things like scores and percentiles, I have no doubt. So they aren’t really interested in doing things that might put their scores at a disadvantage relative to other ICs, because their percentiles would start rising, creating the impression that they fund substandard science.

It gets complicated.

To return to the applicant: unfortunately, from the individual perspective, variance in the perceived funding line can introduce categorical problems. Often a reviewer who is less experienced or knowledgeable may assign a “good” score that is in fact not a “good” score at the present time. So the actual intent of the reviewer is not realized, because s/he thinks a 170 is a great score, which it might have been five years ago. So you might get hosed because you were, essentially randomly, assigned a reviewer who is less calibrated than those on another application.

Jocelyn Kaiser at ScienceInsider has obtained data on PI numbers from the NIH.

[Graph: number of NIH-funded PIs by fiscal year]

Nice.

I think this graph should be pinned up right next to Sally Rockey’s desk. It is absolutely essential to any attempts to understand and fix grant application success rates and submission churning.

UPDATE 03/12/14: I should have noted that this graph depicts PIs who hold R01-equivalent grants (R01, R23, R29, R37 with ARRA excluded). The Science piece has this to say about the differential from RPG:

NIH shared these data for two sets of grants: research project grants (RPGs), which include all research grants, and R01 equivalents, a slightly smaller category that includes the bread-and-butter R01 grants that support most independent labs.

But if you read carefully, they’ve posted the Excel files for both the R01-equivalent and RPG datasets. Woo-hoo! Let’s get to graphing, shall we? There is nothing like a good comparison graph to make summary language a little more useful. Don’t you think? I know I do….

A “slightly smaller category”, eh? Well, I spy some trends in this direct comparison. Let’s try another way to look at it. How about we express the difference between the RPG and R01-equivalent PI counts to see how many folks have been supported on non-R01/equivalent Research Project Grants over the years…
Well, I’ll be hornswaggled. All this invention of DP-this and RC-that and RL-whatsit and all the various U-mechs and P01s (Center components seem to be excluded) in recent years seemingly has had an effect. Sure, the number of R01-equivalent PIs has only slightly drifted down from the end of the doubling until now (relieved briefly by the stimulus). So those in NIH land could say “Look, we’re not sacrificing R01s, our BreadNButter(TM) Mech!”. But in the context of the growth of non-R01 RPG projects, well….hmmm.
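The differential graphed above is simple subtraction, fiscal year by fiscal year. A sketch of the computation; the PI counts below are made-up placeholders for illustration, not values from the NIH spreadsheets:

```python
# Non-R01 RPG PIs = (all RPG PIs) minus (R01-equivalent PIs), per year.
# These counts are hypothetical placeholders, not the actual NIH data.
rpg_pis = {2007: 26500, 2010: 28000, 2013: 27000}
r01eq_pis = {2007: 22000, 2010: 22800, 2013: 21500}

non_r01_pis = {yr: rpg_pis[yr] - r01eq_pis[yr] for yr in rpg_pis}
```

Plot that differential over time and a stable R01-equivalent line can coexist with substantial growth in the non-R01 RPG population, which is the point of the graph.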

Jeremy Berg has a new President’s Message up at ASBMB Today. It looks into a topic of substantial interest to me, i.e., the fate of investigators funded by the NIH. This contrasts with our more-usual focus on the fate of applications.

With that said, the analysis does place the impact of the sequester in relatively sharp focus: There were about a thousand fewer investigators funded by these mechanisms in FY13 compared with FY12. This represents more than six times the number of investigators who lost this funding from FY11 to FY12 and a 3.8 percent drop in the R-mechanism-funded investigator cohort.

Another tidbit addresses the usual claim from NIHlandia that R-mechs, and R01s in particular, are always prioritized.

In her post, Rockey notes that the total funding for all research project grants, or RPGs, dropped from $15.92 billion in FY12 to $14.92 billion in FY13, a decrease of 6.3 percent. The total funding going to the R series awards that I examined (which makes up about 85 percent of the RPG pool) dropped by 8.9 percent.

What accounts for this difference? U01 awards comprise the largest remaining portion of the RPG pool…The funds devoted to U01 awards remained essentially constant from FY12 to FY13 at $1.57 billion.

Go read the whole thing.
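The percentages in the quoted passage check out against the dollar figures. A quick sketch, using the amounts from the quote (in billions):

```python
# RPG funding drop from FY12 to FY13, per the figures Rockey cites.
rpg_fy12, rpg_fy13 = 15.92, 14.92   # total RPG funding, $B

pct_drop = (rpg_fy12 - rpg_fy13) / rpg_fy12 * 100   # about 6.3 percent
```

The R-series pool Berg examined fell 8.9 percent over the same interval, so the gap between the two numbers is what the flat U01 line has to absorb.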

This type of analysis really needs more attention at the NIH level. They’ve come a looooong way in recent years in terms of their willingness to focus on what they are actually doing in terms of applications, funding, etc. This is in no small part due to the efforts of Jeremy Berg, who used to be the Director of NIGMS. But tracking the fate of applications only goes so far, particularly when it is assessed only on a 1-2 year basis.

The demand on the NIH budget is related to the pool of PIs seeking funding. This pool is considerably less elastic than the submission of grant applications. PIs don’t submit grant applications endlessly for fun, you know. They seek a certain level of funding. Once they reach that, they tend to stop submitting applications. A lot of the increase in application churn over the past decade or so has to do with the relative instability of funding. When the odds of continuing an ongoing project are high, a large number of PIs can just submit one or two apps every 5 years and all is well. Uncertainty is what makes a PI submit each and every year.

Similarly, when a PI is out of funding completely, the number of applications from this lab will rise dramatically….right up until one of them hits.

I argue that if solutions to the application churn and the funding uncertainty (which decreases overall productivity of the NIH enterprise) are to be found, they will depend on a clear understanding of the dynamics of the PI population.

Berg has identified two years in which the PI turnover is very different. How do these numbers compare with historical trends? Which is the unusual one? Or is this the expected range?

Can we see the 1,000-PI loss as a temporary situation or a permanent one? It is an open question how sequential years without NIH funding will affect a PI. Do these individuals tend to regain funding in 2, 3 or 4 years’ time? Do they tend to go away and never come back? More usefully, what proportion of the lost investigators will follow each of these fates?

The same questions arise for the other factoids Berg mentions. The rate of transition from the R00 to other funding would seem to be incredibly important to know. But a one-year gap seems hardly worth discussing. This can easily happen under the current conditions. But if they are not getting funded 2 or maybe 3 years after the R00 expires? This is of greater impact.

Still, a welcome first step, Dr. Berg. Let’s hope Sally Rockey is listening.

A communication to the blog raised an issue that is worth exploring in a little more depth. The questioner wanted to know if I knew why a NIH Program Announcement had disappeared.

The Program Announcement (PA) is the most general of the NIH Funding Opportunity Announcements (FOAs). It is described with these key features:

  • Identifies areas of increased priority and/or emphasis on particular funding mechanisms for a specific area of science
  • Usually accepted on standard receipt (postmarked) dates on an on-going basis
  • Remains active for three years from date of release unless the announcement indicates a specific expiration date or the NIH Institute/Center (I/C) inactivates sooner

In my parlance, the PA means “Hey, we’re interested in seeing some applications on topic X”…and that’s about it. Admittedly, the study section reviewers are supposed to conduct review in accordance with the interests of the PA. Each application has to be submitted under one of the FOAs that are active. Sometimes, this can be as general as the omnibus R01 solicitation. That’s pretty general. It could apply to any R01 submitted to any of the NIH Institutes or Centers (ICs). The PAs can offer a greater degree of topic specificity, of course.

I recommend you go to the NIH Guide page and browse around. You should bookmark the current-week page and sign up for email alerts if you haven’t already. (Yes, even grad students should do this.) Sometimes you will find a PA that seems to fit your work exceptionally well and, of course, you should use it. Just don’t expect it to be a whole lot of help.

This brings us to the specific query that was sent to the blog, i.e., why did the PA DA-14-106 go missing, only a week or so after being posted?

Sometimes a PA expires and is either not replaced or you have happened across it in between expiration and re-issue of the next 3-year version. Those are the more-common reasons. I’d never seen one be pulled immediately after posting, however. But the NOT-DA-14-006 tells the tale:

This Notice is to inform the community that NIDA’s “Synthetic Psychoactive Drugs and Strategic Approaches to Counteract Their Deleterious Effects” Funding Opportunity Announcements (FOAs) (PA-14-104, PA-14-105, PA-14-106) have been reposted as PARs, to allow a Special Emphasis Panel to provide peer review of the applications. To make this change, NIDA has withdrawn PA-14-104, PA-14-105, PA-14-106, and has reposted these announcements as PAR-14-106, PAR-14-105, and PAR-14-104.

This brings us to the key difference between the PA and a PAR (or a PAS):

  • Special Types
    • PAR: A PA with special receipt, referral and/or review considerations, as described in the PAR announcement
    • PAS: A PA that includes specific set-aside funds as described in the PAS announcement

Applications submitted under a PA are going to be assigned to the usual Center for Scientific Review (CSR) panels and thrown in with all the other applications. This can mean that the special concerns of the PA do not really influence review. How so? Well, the NIDA has a generic-ish and long-running PA on the “Neuroscience Research on Drug Abuse“. This is really general. So general that several entire study sections of the CSR fit within it. Why bother reviewing in accordance with the PA when basically everything assigned to the section is, vaguely, in this sphere? And even on the more-specific ones (say, Sex-Differences in Drug Abuse or HIV/AIDS in Drug Abuse, that sort of thing) the general interest of the IC fades into the background. The panel is already more-or-less focused on those being important issues.  So the Significance evaluation on the part of the reviewers barely budges in response to a PA. I bet many reviewers don’t even bother to check the PA at all.

The PAR means, however, that the IC convenes their own Special Emphasis Panel specifically for that particular funding opportunity. So the review panel can be tailored to the announcement’s goals, much in the way that a panel is tailored for a Request for Applications (RFA) FOA. The panel can have very specific expertise for both the PAR and for the applications that are received and, presumably, have reviewers with a more than average appreciation for the topic of the PAR. There is no existing empaneled population of reviewers to limit choices. There is no distraction from the need to get reviewers who can handle applications that are on topics different from the PAR in question. An SEP brings focus. The mere fact of a SEP also tends to keep the reviewer’s mind on the announcement’s goals. They don’t have to juggle the goals of PA vs PA vs PA as they would in a general CSR panel.

As you know, Dear Reader, I have blogged about both synthetic cannabinoid drugs and the “bath salts” here on this blog now and again. So I can speculate a little bit about what happened here. These classes of recreational drugs hit the attention of regulatory authorities and scientists in the US around about 2009, and certainly by 2010. There have been a modest but growing number of papers published. I have attended several conference symposia themed around these drugs. And yet if you do some judicious searching on RePORTER you will find precious few grants dedicated to these compounds. It is no great leap of faith to figure that various PIs have been submitting grants on these topics and are not getting fundable scores. There are, of course, many possible reasons for this and some may have influenced NIDA’s thinking on this PA/PAR.

It may be the case that NIDA felt that reviewers simply did not know that the Institute wanted to see some applications funded on these topics, and were consequently not prioritizing the Significance of such applications. Or it may be that NIDA felt that its good PIs, the ones who would write competitive grants, were not interested in the topics. Either way, a PA would appear to be sufficient encouragement.

The replacement of a PA with a PAR, however, suggests that NIDA has concluded that the problem lies with study section reviewers and that a mere PA was not going to be sufficient* to focus minds.

As one general conclusion from this vignette, the PAR is substantially better than the PA when it comes to enhancing the chances for applications submitted to it. This holds in a case in which there is some doubt that the usual CSR study sections will find the goals to be Significant. The caveat is that when there is no such doubt, the PAR is worse because the applications on the topic will all be in direct competition with each other. The PAR essentially guarantees that some grants on the topic will be funded, but the PA potentially allows more of them to be funded.

It is only “essentially” because the PAR does not come with set-aside funds as does the RFA or the PAS. And I say “potentially” because this depends on there being many highly competitive applications which are distributed across several CSR sections for a PA.

__

*This is a direct validation of my position that the PA is a rather weak stimulus, btw.

As always when it comes to NIDA specifics, see Disclaimer.