The post from NIGMS Director Lorsch on “shared responsibility” (see my earlier blog post) has been racking up the comments, which you should go read.

As a spoiler, it is mostly a lot of the usual, i.e., Do it to Julia!

But two of the comments are fantastic. This one from anonymous really nails it down to the floor.

More efficient? My NOA for my R01 came in a few weeks ago for this year, and as usual, it has been cut. I will get ~$181,000 this year. Let’s break down the costs of running a typical (my) lab to illustrate that which is not being considered. I have a fairly normal sized animal colony for my field, because in immunology, nothing gets published well without knockouts and such. That’s $75,000 a year in cage per diem costs. Let’s cover 20% of my salary (with fringe, at 28.5%), one student, and one postdoc (2.20 FTE total). Total salary costs are then $119,800. See, I haven’t done a single experiment and my R01 is gone. How MORE efficient could I possibly be? Even if we cut the animals in half, I have only about $20,000 for the entire year for my reagents. Oh no, you need a single ELISA kit? That’s $800. That doesn’t include plates? Hell, that’s another $300. You need magnetic beads to sort cells, that’s $800 for ONE vial of beads. Wait, that doesn’t include the separation tubes? Another $700 for a pack. You need FACS sort time? That’s $100 an hour. Oh no, it takes 4 hours to sort cells for a single experiment? Another $400. It’s easy to spend $1500 on a single experiment given the extreme costs of technology and reagents, especially when using mice. Then, after 4 years of work, you submit your study (packed into a single manuscript) for publication and the reviewers complain that you didn’t ALSO use these 4 other knockout mice, and that the study just isn’t complete enough for their beloved journal. And you (the NIH) want me to be MORE efficient? I can’t do much of anything as it is.
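The commenter’s arithmetic is worth tallying explicitly. A minimal sketch using only the figures given in the comment (the reagent remainder assumes the animal colony is cut in half, as the commenter proposes):

```python
# Budget figures quoted in the comment above (no other assumptions).
award = 181_000      # this year's direct costs, after the cut
animals = 75_000     # annual cage per diem costs
salaries = 119_800   # 20% PI effort (28.5% fringe) + student + postdoc, 2.20 FTE

committed = animals + salaries
print(f"Committed before a single experiment: ${committed:,}")           # $194,800
print(f"Shortfall against the award:          ${committed - award:,}")   # $13,800

# Even cutting the animal colony in half, per the comment:
remainder = award - (animals // 2 + salaries)
print(f"Left for reagents, half-colony case:  ${remainder:,}")           # $23,700
print(f"Experiments at ~$1,500 apiece:        {remainder // 1_500}")     # 15
```

About fifteen $1,500 experiments a year for the entire award, which squares with the commenter’s “about $20,000 for the entire year.”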

Anyone running an academic research laboratory should laugh (or vomit) at the mere suggestion that most are not already stretching every penny to its breaking point and beyond.

This is why Lorsch’s concentration on the number of grants a PI holds is so phenomenally out of touch. Most of us play in the full-modular space. Even people with multiple grants who managed to get one funded substantially above the full-modular level are going to have their others at the modular limit. And the above-modular grants often take extra cuts compared with the reductions applied to the modular-limit awards.

The full-modular cap has not been adjusted for inflation, and its purchasing power is substantially eroded compared with a mere 15 years ago, when the modular budgeting approach was introduced.


[This graph depicts the erosion of purchasing power of the $250K/yr full-modular award (red) against the amount necessary to maintain purchasing power (black). Inflation adjustment used was the BRDPI.]
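The erosion in that graph can be approximated with simple compounding. A minimal sketch, assuming a flat 3%/yr rate as a stand-in for the actual year-by-year BRDPI values (which vary, so treat the outputs as illustrative only):

```python
# ASSUMPTION: a flat 3%/yr inflation rate stands in for actual BRDPI values.
CAP = 250_000   # full-modular direct-cost cap, unchanged since it was set
rate = 0.03
years = 15

# What the fixed $250K cap buys today, expressed in start-year dollars:
real_value = CAP / (1 + rate) ** years
# What an award would need to be today to match the cap's original power:
needed = CAP * (1 + rate) ** years

print(f"Real value of the $250K cap after {years} yr: ${real_value:,.0f}")
print(f"Needed to maintain purchasing power:          ${needed:,.0f}")
```

Under that assumed rate, the frozen cap buys roughly two-thirds of what it once did, which is the gap between the red and black lines.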

Commenter Pinko Punko also has a great observation for Director Lorsch.

The greatest and most massive inefficiency in the system is the high probability of a funding gap for NIGMS (and all other Institute) PIs. Given that gaps in funding almost always necessitate laying off staff, and prevent long-term retention of expertise, the great inefficiency here is that expertise cannot possibly be “on demand”. I know that you are also aware that given inflation, the NIH budget never actually doubled. There has likely been a PI bubble, but it is massively deflating with a huge cost.

The lowest quantum for funding units in labs is 1. Paylines are so low, it seems the only way to attempt to prevent a gap in funding is to have an overlap at some point, because going to zero is a massive hit when research needs to grind to a halt. It is difficult to imagine that there is a large number of excessively funded labs.

While I try to put a positive spin on the Datahound analysis showing the probability of a PI becoming re-funded after losing her last NIH award, the fact is that 60% of PIs do not return to funding. A prior DataHound post showed that something between 14-30% of PIs are approximately continuously-funded (extrapolating generously here from only 8 years of data). Within these two charts there is a HUGE story of the inefficiency of trying to maintain that funding for the people who will, in the career-long run, fall into that “continuously funded” category.

This brings me to the Discussion point of the day. Lorsch’s blog post is obsessed with efficiency, which he asserts comes with modestly sized research operations, indexed approximately by the number of grant awards. Three R01s is his stated threshold for excessive grant support, even though he cites data showing that $700K per year in direct costs is the most productive* amount of funding, i.e., three full-modular grants at a minimum.

I have a tale for you, Dear Reader. The greatly abridged version, anyway.

Once upon a time the Program Staff of ICx decided that they were interested in studies on Topic Y and so they funded some grants. Two were R01s obtained without revision. They sailed on for their full duration of funding. To my eye, there was not one single paper that resulted that was specific to the goals of Topic Y and damn little published at all. Interestingly there were other projects also funded on Topic Y. One of them required a total of 5 grant applications and was awarded a starter grant, followed by R01 level funding. This latter project actually produced papers directly relevant to Topic Y.

Which was efficient, Director Lorsch?
How could this process have been made more efficient?

Could we predict PI #3 was the one who was going to come through with the goods? Maybe we should have loaded her up with the cash and screwed the other two? Could we really argue that funding all three on a shoestring was more efficient? What if the reason the first two failed is that they just didn’t have enough cash at the start to make a good effort on what was, obviously, a far from easy problem to attack?

Would it be efficient to take this scenario and give PI #3 a bunch of “people-not-projects” largesse at this point in time because she’s proved able to move the scientific ball on this? Do we look at the overall picture and say “in for a penny, in for a pound”? Or do we fail to learn a damn thing and let the productive PI keep fighting against the funding cycles, the triage line and what not to keep the program going under our current approaches?

It may sound like I am leaning in one direction on this but really, I’m not. I don’t know what the answer is. The distribution of success/failure across these three PIs could have been entirely different. As it happens, all three are pretty dang decent scientists. The degree to which they looked like they could kick butt on Topic Y at the point their respective projects were funded definitely didn’t support the supremacy of PI #3 in the final analysis. But noobs can fail too. Sometimes spectacularly. Sometimes, as may have been the case in this little tale, people fail because they simply haven’t grown their lab operations large enough, fast enough to sustain a real program, particularly when one of the projects is difficult.

I assume, as usual, that this narrow little anecdote is worth relating because these are typical scenarios. Maybe not hugely common but not all that rare either. Common enough that a Director of an IC should be well aware.

When you have an unhealthy interest in the grant game, as do I, you notice this stuff. You can see it play out in RePORTER and PubMed. You can see it play out as you try to review competing-continuation proposals on study section. You see it play out in your sub-fields of interest and with your closer colleagues.

It makes you shake your head in dismay when someone makes assertions that they know how to make the NIH-funded research enterprise more efficient.

UPDATE: I really should say that the third project required at least five applications, since I’m going by the amended status of the *funded* awards. It is unknown whether there were unfunded apps submitted. It is also unknown whether either of the first two PIs tried to renew the awards and couldn’t get a fundable score. I should also note that the third project was awarded funding in a context that triggers at least three of the major “this is the real problem” complaints being bandied about in Lorsch’s comments section. The project that produced nothing at all, relevant or not, was awarded in a context that I think would align with these complainants’ “good” prescription. FWIW.

__
*there are huge problems even with this assessment of productivity but we’ll let that slide for now.

I’ve said it repeatedly on this blog and it is true, true, true, people.

In NIH grant review, the worm turns very rapidly.

The pool of individual PIs who are appropriate to apply for, and review, NIH grants in a narrow subfield is a lot smaller than most people seem to think. Or maybe this is just my field.

My guiding belief is that the reviewer of a given grant is going to have one of her own grants reviewed by the PI of the proposal she just reviewed, in very short order. Or maybe it takes half a decade, even more. But it will happen.

And PIs do not take kindly to jackholish reviews of their proposals.

As we all know, in this day and age it takes very little in the way of reviewer behavior to totally torpedo a grant’s chances. You don’t even have to be obvious about it*.

This is why I try as hard as I possibly can to ground my grant reviewing in concrete reasons for criticism.

Because I want the reviewers of my proposals to do the same. And it is the right thing to do.

We have a system of grant review that is at all times precariously balanced on a knife’s edge that could slide off into Mutually Assured Destruction cycles of retaliation** at any time. And I am sure it happens in some study sections and amongst some reviewers.

Mutual Professional Respect is better. It is supported one review at a time by engaging our firmest professionalism to override the biases that we cannot help but have.


__

*This is very likely the second hardest decision I have to make about registering a Conflict of Interest in reviewing grants. I have reviewed a lot of grants of PIs who have been on the study section panels reviewing my grants. I am pretty confident this is the case for just about anyone who has served a full term appointed on a study section and probably anyone who has reviewed with full loads in over about 3 panels as ad hoc. This in and of itself cannot be a reason to recuse yourself or they would never get anything reviewed. And as my Readers know, I am very firm in my belief that it is a fool’s errand to try to game out which reviewers were on your proposals and which ones were…critical.

**And, gods above, pre-emptive counter-striking.