
The NIH FOAs come in many flavors of specificity. Some, usually Program Announcements, are very broad and appear to permit a wide range of applications to fit within them. My favorite example of this is NIDA’s “Neuroscience Research on Drug Abuse” PA.

They also come in highly specific varieties, generally as RFAs.

The targeted FOA is my topic for the day because these can be frustrating in the extreme. No matter how finely described for the type, these FOAs are inevitably too broad to let each and every interested PI know exactly how to craft her application. Or, more importantly, whether to bother. There is always a scientific contact, a Program Officer, listed, so the first thing to do is email or call this person. This can also be frustrating. Sometimes one gets great advice; sometimes it is perplexing.

As always, I can only offer up the way I look at these things.

As an applicant PI facing an FOA that seems vaguely of interest to me, I have several variables at play. First, despite the fact that Program may have written the FOA in a particular way, this doesn't mean that they really know what they want. The FOA language may be a committee result, or it may simply be that nobody thought a highly specific description of the desired proposals was necessary to satisfy whatever goals and motivations existed.

Second, even if they do know what they want in Programville, peer review is always the primary driver. If you can't escape triage, it is highly unlikely that Program will fund your application, even if it fits their intent to a T. So as the applicant PI, I have to consider how peers are likely to interpret the FOA and how they are likely to apply it to my application. It is not impossible that the advice and perspective given to the prospective PI by the contact PO flies rather severely in the face of that PI's best estimate of what is likely to occur during peer review. This leaves a conundrum.

How best to navigate peer review while also serving up a proposal that is attractive to Program, in case they are looking to reach down out of the order of review for a proposal that matches what they want?

Finally, as I mention now and again, there is an advocacy role for the PI when applying for NIH funding. It is part and parcel of the job of the PI to tell Program what they should be funding. By, of course, serving up such a brilliantly argued application that they see that your take on their FOA is the best take. Even if this may not have been their intent in the first place. This also, btw, applies to the study section members. Your job is in part to convince them, not to meet whatever their preconceptions or reading of the FOA might be.

Somehow, the PI has to stew all of these considerations together and come up with a plan for the best possible proposal. Unfortunately, you can miss the mark. Not because your application is necessarily weak or your work doesn’t fit the FOA in some objective sense. Merely because you have decided to make choices, gambles and interpretations that have led you in a particular direction, which may very well be the “wrong” direction.

Most severely, you might be rejected without review. This can happen. If you do not meet the PO’s idea of being within the necessary scope of what they would ever plan to fund, no matter the score, you could have your application prevented from being routed to the study section.

Alternately, you might get triaged by a panel that just doesn't see it your way. That wonders if you, the idiot PI, were reading the same FOA that they were. It happens.

Finally, you might get a good score and Program may decide to skip over it for lack of responsiveness to their intent. Or you may be in the grey zone and fail to get a pickup because other grants scoring below yours are deemed closer to what they want to fund.

My point for today is that I think this is a necessary error in the system. It is not evidence of a wholesale problem with the NIH FOA approach if you shoot wide to the left. If you fail to really understand the intent of the FOA as written. Or if you come away from your initial chat with the PO with a misguided understanding. Or even if you run into the buzzsaw of a review panel that rebels against the FOA.

Personally, I think you just have to take your chances. Arrive at your best understanding of what the FOA intends and how the POs are going to interpret various proposals. Sure. And craft your application accordingly. But you have to realize that you may be missing the point entirely. You may fail to convince anyone of your brilliant take on the FOA’s stated goals. This doesn’t mean the system is broken.

So take your shots. Offer up your best interpretation on how to address the goals. And then bear down and find the next FOA and work on that. In case your first shot sails over the crossbar.

__
It always fascinates me how fairly wide-flung experiences with NIH funding sometimes coalesce around the same issue. This particular post was motivated by no less than three situations being brought to my attention in the past week. Different ICs, different FOAs, different mechanisms and vastly different topics and IC intentions. But to me, the answers are the same.

WOW. This comment from dsks absolutely nails it to the wall.

The NIH is supposed to be taking on a major component of the risk in scientific research by playing the role of investor; instead, it seems to operate more as a consumer, treating projects like products to be purchased only when complete and deemed sufficiently impactful. In addition to implicitly encouraging investigators to flout rules like that above, this shifts most of the risk onto the shoulders of the investigator, who must use her existing funds to spin the roulette wheel and hope that the projects her lab is engaged in will be both successful and yield interesting answers. If she strikes it lucky, there's a chance of recouping the cost from the NIH. However, if the project is unsuccessful, or successful but produces one of the many not-so-pizzazz-wow answers, the PI's investment is lost, and at a potentially considerable cost to her career if she's a new investigator.

Of course one might lessen the charge slightly by observing that it is really the University that is somehow investing in the exploratory work that may eventually become of interest to the buyer. Whether the University then shifts the risk onto the lowly PI is a huge concern, but not inevitable. They could continue to provide seed money, salary, etc to a professor who does not manage to write a funded grant application.

Nevertheless, this is absolutely the right way to look at the ever-growing obligation for highly specific Preliminary Data to support any successful grant application. It is also the way to look at a study section culture that is motivated in large part by perceived "riskiness" (which underlies a large part of the failure to reward untried investigators from unknown Universities compared with established PIs from coastal elite institutions).

NIH isn’t investing in risky science. It is purchasing science once it looks like most of the real risk has been avoided.

I have never seen this so clearly, so thanks to dsks for expressing it.

Repost: Keep the ball in play

September 21, 2016

This was originally posted 16 September, 2014.


We’re at the point of the fiscal year where things can get really exciting. The NIH budget year ends Sept 30 and the various Institutes and Centers need to balance up their books. They have been funding grants throughout the year on the basis of the shifting sands of peer review with an attempt to use up all of their annual allocation on the best possible science.

Throughout the prior two Council rounds of the year, they necessarily have to be a bit conservative. After all, they don't know in the first Round if maybe they will have a whole bunch of stellar scores come in during the third Round. Some one-off funding opportunities are perhaps scheduled for consideration only during the final Round. Etc.

Also, the amount of funding requested for each grant varies. So maybe they have a bunch of high scoring proposals that are all very inexpensive? Or maybe they have many in the early rounds of the year that are unusually large?

This means that come September, the ICs are sometimes sitting on unexpended funds and need to start picking up proposals that weren't originally slated to fund. Maybe it is a supplement, maybe it is a small mechanism like an R03 or R21. Maybe they will offer you 2 years of funding of an R01 proposed for 5. Maybe they will offer you half the budget you requested. Maybe they have all of a sudden discovered a brand new funding priority and the quickest way to hit the ground running is to pick something up with end-of-year funds.

Now obviously, you cannot game this out for yourself. There is no way to rush in a proposal at the end of the year (save for certain administrative supplements). There is no way for you to predict what your favorite IC is going to be doing in September; maybe they have exquisite prediction and always play it straight up by priority score right to the end, sticking within the lines of the Council rounds. And of course, you cannot assume lobbying some lowly PO for a pickup is going to work out for you.

There is one thing you can do, Dear Reader.

It is pretty simple. You cannot receive one of these end-of-year unexpected grant awards unless you have a proposal on the books and in play. That means, mostly, a score and not a triage outcome. It means, in a practical sense, that you had better have your JIT information all squared away because this can affect things. It means, so I hear, that this is FINALLY the time when your IC will quite explicitly look at overhead rates to see about total costs and screw over those evil bastiges at high overhead Universities that you keep ranting about on the internet. You can make sure you have not just an R01 hanging around but also a smaller mech like an R03 or R21.

It happens*. I know lots and lots of people who have received end-of-the-FY largesse that they were not expecting. I have received this type of benefit myself. It happens because you have *tried* earlier in the year to get funding and have managed to get something sitting on the books, just waiting for the spotlight of attention to fall upon you.

So keep that ball in play, my friends. Keep submitting credible apps. Keep your Commons list topped off with scored apps.

__
*As we move into October, you can peruse SILK and RePORTER to see which proposals have a start date of Sep 30. Those are the end-of-year pickups.

h/t: some Reader who may or may not choose to self-identify 🙂

The R21 Mechanism is called the Exploratory/Developmental mechanism. Says so right in the title.

NIH Exploratory/Developmental Research Grant Program (Parent R21)

In the real world of NIH grant review, however, the “Developmental” part is entirely ignored in most cases. If you want a more accurate title, it should be:

NIH High Risk/High Reward Research Grant Program (Parent R21)

This is what reviewers favor, in my experience sitting on panels and occasionally submitting an R21 app. Mine are usually more along the lines of developing a new line of research that I think is important rather than being truly "high risk/high reward".

And, as we all know, the R01 application (5 years, full modular at $250K per annum direct costs if you please) absolutely requires a ton of highly specific Preliminary Data.

So how are you supposed to Develop an idea into this highly specific Preliminary Data? Well, there’s the R21, right? Says right in the title that it is Developmental.

But….it doesn’t work in practice.

So the R01 is an alternative. After all, it is the most flexible mechanism. You could submit an R01 for $25K direct costs for one year. You'd be nuts, but you could. Actually, you could submit an R03 or R21 for one $25K module too, but with the R01 you would then have the option to put in a competitive renewal to continue the project along.

The only thing stopping this from being a thing is the study section culture that won't accept it. Me, I see a lot of advantages to using shorter (and likely smaller) R01 proposals to develop a new line of work. It is less risky than a 5-year R01, for those that focus on risk/$. It has an obvious path of continuation as a genuinely Developmental attempt. It is more flexible in scope and timing; perhaps what you really need is $100K per year for 3 years (like the old R21) for your particular type of research or job type. It doesn't come laden with quite the same "high risk, high reward" approach to R21 review that biases for flash over solid workmanlike substance.
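To make the risk/$ comparison concrete, here's a minimal sketch (the dollar figures are the examples used in this post, not official program caps):

```python
# A toy comparison of the total direct costs at stake under the options
# discussed above. Dollar figures are this post's examples (full modular
# budget, the old-R21-style budget, a single module), not official limits.

scenarios = {
    "Full modular R01 (5 yr @ $250K/yr)": 5 * 250_000,
    "Short/small R01 (3 yr @ $100K/yr, like the old R21)": 3 * 100_000,
    "Minimal R01 (1 yr, one $25K module)": 1 * 25_000,
}

for name, total_direct in scenarios.items():
    print(f"{name}: ${total_direct:,} total direct costs")
```

By this arithmetic, the 3-year/$100K version puts roughly a quarter of the full modular commitment at risk, while keeping the competitive renewal path that the R21 lacks.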

The only way I see this working is to try it. Repeatedly. Settle in for the long haul. Craft your Specific Aims opening to explain why you are taking this approach. Take the Future Directions blurb and make it really sparkle. Think about using milestones and decision points to convince the reviewers you will cut this off at the end if it isn’t turning out to be that productive. Show why your particular science, job category, institute or resources match up to this idea.

Or you could always just shout aimlessly into the ether of social media.

As I’ve noted on these pages before, my sole detectable talent for this career is the ability to take a punch.

There are a lot of punches in academic science. There is a lot of rejection, and the congratulations for a job well done are few and far between. Nobody ever tells you that you are doing enough.

“Looking good, Assistant Professor! Just keep this up, maybe even chill a little now and then, and tenure will be no problem!” – said no Chair ever.

My concern is that resilience in the face of constant rejection, belittling, and unkind comparisons of your science to the true rock stars (in a Lake Wobegon approach) can have a selection effect. Only certain personality types can stand this.

I happen to have one of these personality types but it is not something of any particular credit. I was born and/or made this way by my upbringing. I cannot say anyone helped to train me in this way as an academic scientist*.

So I am at a complete loss as to how to help my trainees with this.

Have you any insights Dear Reader? From your own development as a scientist or as a supervisor of other scientists?

Related Reading: Tales of postdocs past: what did I learn?
__
*well maybe indirectly. And not in a way I care to extend to any trainee of mine thankyewveerymuch.

Commenter jmz4 made a fascinating comment on a prior post:


It is not the journal's responsibility to mete out retractions as a form of punishment(&). Only someone that buys into papers as career accolades would accept that. The journal is there to disseminate accurate scientific information. If the journal has evidence that, despite the complaint, this information is accurate,(%) then it *absolutely* should take that into account when deciding to keep a paper out there.

(&) Otherwise we would retract papers from leches and embezzlers. We don’t.

That prior post was focused on data fraud, but this set of comments suggests something a little broader.

I.e., that facts are facts and it doesn't matter how we have obtained them.

This, of course, brings up the little nagging matter of the treatment of research subjects. As you are mostly aware, Dear Readers, the conduct of biomedical experimentation that involves human or nonhuman animal subjects requires an approval process. Boards of people external to the immediate interests of the laboratory in question must review research protocols in advance and approve the use of human (Institutional Review Board; IRB) or nonhuman animal (Institutional Animal Care and Use Committee; IACUC) subjects.

The vast majority of (ok, all) journals of my acquaintance require authors to assert that they have indeed conducted their research under approvals provided by an IRB or IACUC as appropriate.

So what happens when and if it is determined that experiments have been conducted outside of IRB or IACUC approval?

The position expressed by jmz4 is that it shouldn’t matter. The facts are as they are, the data have been collected so too bad, nothing to be done here. We may tut-tut quietly but the papers should not be retracted.

I say this is outrageous and nonsense. Of course we should apply punitive sanctions, including retracting the paper in question, if anyone is caught trying to publish research that was not collected under proper ethical approvals and procedures.

In making this decision, the evidence for whether the conclusions are likely to be correct or incorrect plays no role. The journal should retract the paper to remove the rewards and motivations for operating outside of the rules. Absolutely. Publishers are an integral part of the integrity of science.

The idea that journals are just there to report the facts as they become known is dangerous and wrong.

__
Additional Reading: The whole board of Sweden’s top-ranked university was just sacked because of the Macchiarini scandal

Via the usual relentless trolling of YHN from Comrade PhysioProffe, a note on a fraud investigation from the editors of Cell.

We, the editors of Cell, published an Editorial Expression of Concern (http://dx.doi.org/10.1016/j.cell.2016.03.038) earlier this year regarding issues raised about Figures 2F, 2H, and 3G of the above article.

two labs have now completed their experiments, and their data largely confirm the central conclusions drawn from the original figures. Although this does not resolve the conflicting claims, based on the information available to us at this time, we will take no further action. We would like to thank the independent labs who invested significant time and effort in ensuring the accuracy of the scientific record.

Bad Cell. BAD!

We see this all the time, although usually it is the original authors aided and abetted by the journal Editors, rather than the journal itself, making this claim. No matter if it is a claim to replace an “erroneous placeholder figure”, or a full on retraction by the “good” authors for fraud perpetrated by some [nonWestern] postdoc who cannot be located anymore, we see an attempt to maintain the priority claim. “Several labs have replicated and extended our work”, is how it goes if the paper is an old one. “We’ve replicated the bad [nonWestern, can’t be located] postdoc’s work” if the paper is newer.

I say “aided and abetted” because the Editors have to approve the language of the authors’ erratum, corrigendum or retraction notice. They permit this. Why? Well obviously because just as the authors need to protect their reputation, so does the journal.

So everyone plays this game that somehow proving the original claims were correct, reliable or true means that the original offense is lesser. And that the remaining “good” authors and the journal should get credited for publishing it.

I say this is wrong. If the data were faked, the finding was not supported. Or not supported to the degree that it would have been accepted for publication in that particular journal. And therefore there should be no credit for the work.

We all know that there is a priority and Impact Factor chase in certain types of science. Anything published in Cell quite obviously qualifies for the most cutthroat aspects of this particular game. Authors and editors alike are complicit.

If something is perceived to be hott stuff, both parties are motivated to get the finding published. First. Before those other guys. So…corners are occasionally cut. Authors and Editors both do this.

Rewarding the high risk behavior that leads to such retractions and frauds is not a good thing. While I think punishing proven fraudsters is important, it does not by any means go far enough.

We need to remove the positive reward environment. Look at it this way. If you intentionally fake data, or more likely subsets of the data, to get past that final review hurdle into a Cell acceptance, you are probably not very likely to get caught. If you are detected, it will often take years for this to come to light, particularly when it comes to a proven-beyond-doubt standard. In the meantime, you have enjoyed all the career benefits of that Glamour paper. Job offers for the postdocs. Grant awards for the PIs. Promotions. High $$ recruitment or retention packages. And generated even more Glam studies. So in the somewhat unlikely case of being busted for the original fake, many of the beneficiaries, save the poor sucker nonWestern postdoc (who cannot be located), are able to defend and evade based on stature.

This gentleman’s agreement to view faked results that happen to replicate as no-harm, no-foul is part of this process. It encourages faking and fraud. It should be stopped.

One more interesting part of this case. It was actually raised by the self-confessed cheater!

Yao-Yun Liang of the above article informed us, the Cell editors, that he manipulated the experiments to achieve predetermined results in Figures 2F, 2H, and 3G. The corresponding author of the paper, Xin-Hua Feng, has refuted the validity of Liang’s claims, citing concerns about Liang’s motives and credibility. In a continuing process, we have consulted with the authors, the corresponding author’s institution, and the Committee on Publication Ethics (COPE), and we have evaluated the available original data. The Committee on Scientific Integrity at the corresponding author’s institution, Baylor College of Medicine, conducted a preliminary inquiry that was inconclusive and recommended no further action. As the institution’s inquiry was inconclusive and it has been difficult to adjudicate the conflicting claims, we have provided the corresponding author an opportunity to arrange repetition of the experiments in question by independent labs.

Kind of reminiscent of the recent case where the trainee and lab head had counterclaims against each other for a bit of fraudulent data, eh? I wonder if Liang was making a similar assertion to that of Dr. Cohn in the Mt. Sinai case, i.e., that the lab head created a culture of fraud or directly requested the fake? In the latter case, it looked like they probably only came down on the PI because of a smoking-gun email and the perceived credibility of the witnesses. Remember that ORI refused to take up the case, so there probably was very little hard evidence on which to proceed. I'd bet that an inability to get beyond "he-said/he-said" is at the root of Baylor's "inconclusive" preliminary inquiry result for this Liang/Feng dispute.

NPR had a good segment on this today: The Difficulty Of Enforcing Laws Against Driving While High. Definitely well worth a listen.

I had a few reactions in a comment that ended up being post-length, so here you go.

The major discussion of the segment was twofold and, I think, illustrates where policy based on the science can be helpful, even if only to point to what we need to know but do not at present.

The first point was that THC hangs around in the body for a very long time post-consumption, particularly in comparison with alcohol. Someone who is a long term chronic user can have blood THC levels that are…appreciable (no matter the particular threshold for presumed impairment, this is relevant). Some of the best data on this are from the laboratory of Marilyn Huestis when she was, gasp, an intramural investigator at NIDA! There are some attempts in the Huestis work to compare THC and metabolite ratios to determine recency of consumption; that's a good direction, IMO.

The second argument was about behavioral tolerance. One of the scientists interviewed was quoted along the lines of saying the relationship between blood levels, repetitive use and actual impairment was more linear for alcohol than for THC. Pretty much. There is some evidence for substantial behavioral tolerance, meaning even when acutely intoxicated, the chronic user may have relatively preserved performance versus the noob. There's a laboratory study here that makes the point fairly succinctly, even if the behavior itself isn't that complex. As a counterpoint, this recent human study fails to confirm behavioral tolerance in an acute dosing study (see Fig 4A for baseline THC by frequency of use, btw). As that NPR piece noted, it would be very valuable to get some rapid field screen for THC/driving-relevant impairment on a tablet.

Pot Ponder

September 6, 2016

Five states have recreational marijuana legalization on the ballot this fall, if I heard correctly.

I feel as though we should probably talk about this over the next couple of months. 

ETA:
Arizona

California

Maine

Massachusetts

Nevada

As most of you are aware, these follow successful recreational legalization initiatives in Washington (2012), Colorado (2012), Oregon (2014), Alaska (2014) and the District of Columbia (2014).

ScienceHound has posted a new analysis related to the NIH budget and award policy. He's been beavering away with mathematical models lately that are generally going to be beyond my ability to understand. In a tweet, however, he made it pretty clear.

As expanded in his blog post:

The largest difference between the curves occurs at the beginning of the doubling period (1998-2003) where the model predicts a large increase in the number of grants that was not observed. This is due to the fact that NIH initiated a number of larger non–RPG-based programs when substantial new funding was available rather than simply funding more RPGs (although they did this to some extent). For example, in 1998, NIH invested $17 million through the Specialized Center–Cooperative Agreements (U54) mechanism. This grew to $146 million in 1999, $188 million in 2000, $298 million in 2001, $336 million in 2002, and $396 million in 2003. Note that the change each year matters for the number of new and competing grants that can be made because, for a given year, it does not matter whether funds have been previously committed to RPGs or to other mechanisms.
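His point that the change each year is what matters can be made concrete with a few lines of arithmetic (a minimal sketch using only the figures quoted above):

```python
# U54 spending by fiscal year, in $ millions (the figures quoted above).
u54 = {1998: 17, 1999: 146, 2000: 188, 2001: 298, 2002: 336, 2003: 396}

# Only the year-over-year increment competes with new and competing RPG
# awards; funds committed in prior years are spoken for either way.
years = sorted(u54)
for prev, curr in zip(years, years[1:]):
    print(f"FY{curr}: +${u54[curr] - u54[prev]}M in new U54 commitments")
```

Each of those increments is money that, in that particular year, was not available to fund new and competing RPGs.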

This interval of time, in my view, is right around when the first of the GenXers were getting (or should have been getting) appointed Assistant Professor. Certainly, YHN was appointed in this interval.

Let us recall a couple of graphs. First, this one:

The red trace depicts success rates from 1962 to 2008 for R01 equivalents (R01, R23, R29, R37). Note that they are not broken down by experienced/new investigators status, nor are new applications distinguished from competing continuation applications. The blue line shows total number of applications reviewed…which may or may not be of interest to you. [update 7/12/12: I forgot to mention that the data in the 60s are listed as “estimated” success rates.]

Ok, Ok, Not much to see here, right? The 30% success rate was about the same in the doubling period as it was in the 80s. Now view this broken down by noobs and experienced investigators.
[Figure: RPGsuccessbyYear.png, RPG success rates by year for New vs. Experienced investigators (source)]

As we know from prior posts, career-stage differences matter a LOT. In the 80s when the overall success rate was 30%, you can see that newcomers were at about 20% and established investigators were enjoying at least a 17%age point advantage. (I think these data also conflate competing continuations with new applications, so there's another important factor buried in the "Experienced" trace.) Nevertheless, since the Experienced/New gap was similar from 1980 to 2006, we can probably assume it held true prior to that interval as well.

Again, first time applicants had about the same lack of success in the 80s as they did in the early stages of the doubling (ok, actually a few points higher in the 80s). About 20%. Things didn't go severely into the tank for the noobs until the end of the doubling around 2004. But think of the career arc. A person who started in the 80s with their first grant jumped up to enjoy 30% success rates and a climbing trend. Someone who managed to land a five year R01 in 2000, conversely, faced steeply declining success rates just when they were ready to get their next grant 4-5 years later.

This is for Research Project Grants (R01, R03, R15, R21, R22, R23, R29, R33, R34, R35, R36, R37, R55, R56, RC1, P01, P42, PN1, U01, U19, UC1) and does not refer to the Centers or U54 that ScienceHound discussed. Putting his analysis and insider explanation (if you don’t know, ScienceHound was NIGMS Director from 2003-2010) to work, we can assume that these RPG or R01-equiv success rates would have been much higher during the doubling, save for the choice of NIH not to devote the full largesse to RPGs.

So. Instead of restoring experienced investigator success to where it had been during the early 80s, and instead of finally (finally) doing something about noob-investigator success rates that had resulted in handwringing since literally the start of the NIH (ok, the 60s anyway), the NIH decided to spend money on boondoggles.

The NIH decided to assign a disproportionate share of the doubling to the very best funded institutions and scientists using mechanisms that were mostly peer reviewed by….the best funded scientists from the best-funded institutions. One of the CSR rules, after all, is that apps for a given mechanism should be reviewed mostly by those who have obtained such a mechanism. You have to have an R01 to be in a regular R01-reviewing panel and P50/P60/P01 are reviewed mostly by those who have been funded by such mechanisms.

One way to look at this is that a lot of the doubling was sequestered from the riff-raff by design.

This is part of the reason that Gen X will never live up to its scientific potential. The full benefit of the doubling was never made available to us in a competitive manner. Large-mech projects under the elite, older generation kept us shadowed. Maybe a couple of us* shared in the Big-Mechanism wealth in minor form but we were by no means ready to make a play to lead them and get the full benefit. Meantime, our measly R01 applications were being beat up mercilessly by the established and compared unfavorably to Senior PI apps supported by their multi-R01 and BigMech labs.

The story is not over.

Given that I grew up as a scientist in this era, and given that like most of us I was pretty ignorant of longitudinal funding trends, etc, my perception was that a Big Mech was…expected. As in, eventually we were supposed to get to the point where not just the very tippy-top best of us, but basically anyone with maybe top-25% verve and energy, could land a BigMech. Maybe a P01 Program Project, maybe a Center. The Late-Boomers felt it too. I saw several of the late Boomers get into this mode right as the badness struck. They were semi-outraged, let me tell you, when the nearly universal Program Officer response was "We're not funding P01s anymore. We suggest you don't submit one."

AYFK? For people who were used to hearing POs say "We advise you to revise and resubmit" at the drop of a hat, and who had never been told by a PO not to try (with a half-decent idea), this was quite surprising. Especially when they looked at the lucky ducks who had put their Big Mechs together just a few years before…well, there was a lot of screaming about bias and unfairness at first.

P01s are relatively easy for Program to shut down. As always, YMMV when it comes to NIH matters. But in general, I'd say that P01s tended to be a lot more fluid** than Centers (P50/P60). Once a Big Hitter group got a-hold of a Center award, they tended to stay funded. For decades. IME, anyway. Or in my perception, more accurately.

Take a look at the history of Program Projects versus Centers in your field / favorite ICs, DearReader and report back, eh?

Don’t get me wrong. There is much to like about Program Projects and Centers. Done right, they can be very good at shepherding the careers of transitioning / new scientists. But they are profoundly undemocratic and tend to consolidate NIH funding in the hands of the few elite of the IC in question. Often times they appear to be less productive than those of us not directly in them would calculate “should” happen for the the same expenditure on R01s. Such complaints are both right and wrong and often simultaneously when it comes to the same Center award. It is something that depends on your perspective and what you value and/or predict as outcome.

I can think of precisely one GenX Center Director in the stable of my favorite ICs at the moment. No doubt there are more because I don’t do exhaustive review and I don’t recognize every name to put to a face right off if I were to go RePORTERing. But still. I can rattle off tons of Boomer and pre-Boomer Center Directors.

It goes back to a point I made in a prior post. Gen X scientists were not just severely filtered. Even the ones that managed to transition to faculty appointments were delayed at every step. Funding came harder and at a delay. Real purchasing power was reduced. Publication expectations went up. We were not ready and able to take up the reins of larger efforts to anywhere near the same extent when we approached mid career. We could not rely upon clockwork schedules of grant renewal. We could not expect that a high percentage of our new proposals would be funded. We did not have as extensive a run of successful individual productivity on which to base a stretch for BigMech science.

And this comes back to a phenomenon ScienceHound identifies. The NIH decided*** to put a disproportionate share of the doubling monies into Centers rather than R01s for the struggling new PIs. This had a very long tail of lasting effects.

__
*I certainly did.

**Note: The P01 is considered an RPG with the R01s, etc, but Centers are not. There is some floofraw about these being “different pots of money” from an appropriation standpoint. They are not directly substitutable in immediate priority, the way I hear it.

***Any NIH insiders that start in on how Congress tied their hands can stop before starting. Appropriations language involved back and forth with NIH, believe me.