There was a little twitter discussion yesterday about the distribution of NIH funds, triggered in no small part by a couple of people tweeting about how the NIH modular limit ($250,000 direct per year) hasn’t kept up with inflation. I, of course, snarked about welcome-to-the-party since we’ve been talking about this on the blog for some time.

This then had me meandering across some of my favorite thoughts on the topic. Including the fact that a report covering the first few rounds after modular budgeting was introduced found that 92.4% of grants were submitted under the limit and that 41% requested either $175,000 or $200,000 in annual direct costs. The lower number was the mode, but only just barely. There’s more interesting stuff linked at this NIH page evaluating the modular budget process. Even better, there are some “Historical Documents” linked here. The update document [PDF] after two submission cycles contains this gem:

NIH data indicate that almost 90 percent of competing individual research project grant (R01) applications request $250,000 or less in direct costs. On the basis of this experience, the size of the modules and the maximum of $250,000 were selected.

It could not be any clearer. The NIH’s intent in selecting the cap that they did was to capture the vast majority of proposals. And in the first several rounds they did, well above 90%. Furthermore, the plurality of grants did not ask for the cap amount. This has changed as of 2018.

The declining purchasing power of $250K due to inflation has wrought a number of trends. More grants coming in as traditional budgets above the cap. More grants under the cap asking for the full amount. And, as we know from the famous mythbusting post from the previous OER honcho Sally Rockey, there was a small increase from 1986 to 2009 in PIs holding 2 or 3 concurrent awards; this was most pronounced in the top 20% best-funded investigators, who now felt it necessary to secure 3 concurrent awards.
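Just to put a rough number on that erosion, here is a back-of-the-envelope sketch in Python. It assumes the cap dates to 1999 and a flat 2.5% average annual inflation rate, both stand-ins rather than official CPI or BRDPI figures:

```python
# Back-of-the-envelope only: assumes the modular cap was set in 1999 and an
# average inflation rate of 2.5%/year (illustrative, not an official CPI or
# BRDPI series).
CAP = 250_000            # nominal annual direct-cost cap
START_YEAR = 1999
ASSUMED_RATE = 0.025     # assumed average annual inflation

for year in (2009, 2018):
    elapsed = year - START_YEAR
    # What the fixed nominal cap is worth, expressed in START_YEAR dollars
    real_value = CAP / (1 + ASSUMED_RATE) ** elapsed
    print(f"{year}: the $250K cap buys what ~${real_value:,.0f} bought in {START_YEAR}")
```

By that crude arithmetic the fixed $250K cap buys on the order of a third less than it did when it was set, which is the whole point of the twitter griping.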

Stability. This is the concept that NIH simply could not grasp for the longest time, at least from an official top-down stop-the-bleeding perspective. They wrung their hands over success rates and cut awards to permit making more of them. They limited amended/revised grant submissions to try to cover up the real success rate. They refused to budge the modular cap. They started talking about limiting the number of awards or number of dollars awarded. They started trying to claim “efficiency” based on pubs per direct cost.

All to naught, I would suggest, because of a refusal to start with a basic understanding of their grant-funded PI workforce. Their first mistake was clinging to the idea that extramural researchers are not “their” workforce, which is technically correct but in defiance of de facto reality.

Here’s the reality. Vast swaths of PIs seeking funding from the NIH operate in job environments where they feel they must maintain a certain lab size, a certain lab vigor and a (directly or indirectly) related level of NIH grant dollars. This amount varies across individuals, job categories and scientific subfields. Yes. But I would argue the constant is that a given PI has a relatively fixed idea of how much purchasing power she would prefer to have under her control, as NIH funds, more or less all of the time. Constantly, consistently, with some assurance of continuation. If your job depends in large part on sustained funding from the NIH, you work for them.

That is what guides and drives most grant-seeking behavior, I assert. People are not “greedy”. They are not seeking more and more grants as some sort of detached score-keeping game. They are trying to survive. They would like to be awarded tenure. They would like very much to “be a scientist”, which means conducting and publishing research in their field. They would like to make Full Professor one day and they would like their trainees to feel like they got good training and good launches to their careers.

This is not crazy-town stuff.

And being reasonably smart, motivated and professional people, these PIs are going to fight hard to try to bring in the consistent funding that is required by their chosen career path.

Since NIH has so steadfastly refused to start from this understanding, their hapless attempts to do things to make their time-to-first-R01 and success rate stats look better do not succeed. When you squeeze down the purchasing power of an award, and make the probability of award for any given application less certain, you push PIs to submit more and more and more applications. [This also affects applicant institutions, of course, but this is a bit more diffuse and I’m not going to get into it today.]

The twitter discussion about the modular cap inevitably crept toward the supposition that increasing the cap to match inflation would inevitably result in fewer grants awarded (true) and therefore decreased success rates. I think the latter claim is not quite so simple. This is the kind of thing that NIH should have spent a lot of time and effort modeling out with some fancy math and data mining. When grant-seeking PIs feel comfortable with the amount of grant support they have, THEY STOP SUBMITTING NEW APPLICATIONS! When they feel in need of shoring up their evaporating funding and uncertain of the award of any new proposal (and in particular their continuation applications), they flog those applications out like crazy. Occasionally they overshoot in the “more than intended” direction. I think the vast majority of PIs would much rather do that than occasionally undershoot and experience a funding gap. Datahound had some posts on the chances of getting re-funded after a gap and on the annual PI churn / turnover rate that I think point to another unfortunate NIH blind spot. It was really late in the game before we started seeing NIH OER officialdom talking about per-investigator success rates. And they still give very little evidence of wanting to proactively grapple with the issues that motivate the PIs who are applying and figure out how to make their award process a little less hellish.
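For what it’s worth, even a toy model makes the point. This is my own sketch, not anything NIH has published, and every number in it (target funding level, award length, hit rate) is an invented assumption:

```python
import math

def applications_per_year(target_dollars, award_size, hit_rate, award_length=5):
    """Expected applications a PI submits per year to hold `target_dollars`
    of annual direct costs, given awards of `award_size` that must be
    re-competed every `award_length` years and a per-application success
    probability of `hit_rate`. All inputs are illustrative assumptions."""
    awards_needed = math.ceil(target_dollars / award_size)
    competitions_per_year = awards_needed / award_length
    # In expectation, each successful competition takes 1/hit_rate submissions.
    return competitions_per_year / hit_rate

# Same assumed PI target (~$750K/yr direct), same per-application odds,
# two cap scenarios.
for cap in (250_000, 400_000):
    apps = applications_per_year(target_dollars=750_000, award_size=cap, hit_rate=0.2)
    print(f"cap ${cap:,}: ~{apps:.1f} applications/year per PI")
```

Under those made-up numbers, raising the award size toward the PI’s target cuts per-PI application pressure by about a third, so fewer awards made does not automatically translate into a proportionally worse success rate; the denominator shrinks too.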

Look, at some level they know. They know that MERIT and PECASE extensions are good things. The NIGMS invented the MIRA approach as a semi-recognition that people would trade grant over-shoot for stability. They started wringing their hands recently about giving yet more affirmative action help to the ESI applicants who were struggling with renewals and second grants. But they do not appear to want to broaden these concepts. MERIT, PECASE and MIRA are very rare. They are not awarded in anything near the numbers required to have a stabilizing effect on enough applicants to dampen the grant churn. To my knowledge, competing continuation applications are not doing better; they are doing worse than ever before.

I may take this up in another post, but I want to defang the immediate criticism. Stability of funding exists in tension with diversity. Diversity of PIs and diversity of research topics and approaches. The more you let those who get in stay in with less competition, the more you keep out newcomers. In theory…..

I say in theory because, in point of fact, the NIH system is already hugely biased in favor of those already inside. And it does only a middling job of opening itself up to newcomers and new ideas that are not just like the existing population of funded PIs and funded projects. I just feel as though the NIH must be able to do more modeling and data mining to get a better idea of their real turnover now, in the past, etc. I feel that they should be able to do a better job of producing the same result with less grant churn. This should be achievable with a better proactive understanding of sustainable lab/funding size, funding gaps, the population of approximately continually-funded PIs, PI turnover, etc.