You are familiar with the #GintherGap, the disparity of grant award at NIH that leaves applications with Black PIs at a substantial disadvantage. Many have said from the start that this is unlikely to be unique to the NIH and that we only await similar analyses to verify that supposition.

Curiously the NSF has not, to my awareness, done any such study and released it for public consumption.

Well, a group of scientists have recently posted a preprint:

Chen, C. Y., Kahanamoku, S. S., Tripati, A., Alegado, R. A., Morris, V. R., Andrade, K., & Hosbey, J. (2022, July 1). Decades of systemic racial disparities in funding rates at the National Science Foundation. OSF Preprints. doi:10.31219/osf.io/xb57u.

It reviews National Science Foundation awards from 1996-2019 and uses demographics provided voluntarily by PIs. They found that the applicant PIs were 66% white, 3% Black, 29% Asian and below 1% for each of the American Indian/Alaska Native and Native Hawaiian/Pacific Islander groups. They also found that across the reviewed years the overall funding rate varied from 22%-34%, so the data were represented as the rate for each group relative to the average for each year. In Figure 1, reproduced below, you can see that applications with white PIs enjoy a nice consistent advantage relative to other groups and that applications with Asian PIs suffer a consistent disadvantage. The applications with Black PIs are more variable year over year but are mostly below average, except for 5 years when they are right at the average. The authors note this means that in 2019 there were 798 more awards with white PIs than the overall rate would predict, and 460 fewer awards than predicted with Asian PIs (a back-of-the-envelope version of this expected-value calculation is sketched below the figure). The size of the disparity differs slightly across the directorates of the NSF (there are seven, broken down by discipline such as Biological Sciences, Engineering, Math and Physical Sciences, Education and Human Resources, etc) but the same dis/advantage based on PI race remains.

Fig 1B from Chen et al. 2022 preprint
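To make the expected-value arithmetic concrete, here is a minimal sketch of the calculation as I understand it from the preprint: a group's expected award count is its application count multiplied by the overall funding rate, and the disparity is actual minus expected. The numbers below are hypothetical placeholders, not the preprint's data.

```python
# Sketch of the expected-value calculation (hypothetical numbers,
# NOT the Chen et al. data).
apps = {"white": 30000, "Asian": 13000, "Black": 1400}   # applications per group
awards = {"white": 8300, "Asian": 2700, "Black": 310}    # awards per group

overall_rate = sum(awards.values()) / sum(apps.values())

for group in apps:
    expected = apps[group] * overall_rate      # awards if funded at the average rate
    surplus = awards[group] - expected         # positive = more awards than expected
    rel_rate = (awards[group] / apps[group]) / overall_rate  # rate relative to average
    print(f"{group}: {surplus:+.0f} awards vs expected; relative rate {rel_rate:.2f}")
```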

It gets worse. It turns out that these numbers include both Research and Non-Research (conference, training, equipment, instrumentation, exploratory) awards, which represent 82% and 18% of awards respectively, with the latter generally being awarded at 1.4-1.9 times the rate for Research awards in a given year. For white PI applications both types are funded at higher than the average rate; however, significant differences emerge for Black and Asian PIs, with Research awards having the lower probability of success.

Fig 3 from Chen et al 2022 preprint, FY13-19;
open = Non-Research, closed = Research

So why is this the case? Well, the white PI applications get better scores from extramural reviewers. Here I am no expert in how NSF works. A mewling newbie really. But they solicit peer reviewers who assign merit scores from 1 (Poor) to 5 (Excellent). The preprint shows the distributions of scores for FY15 and FY16 Research applications, by PI race, in Figure 5. Unsurprisingly there is a lot of overlap, but the average score for white PI apps is superior to that for either Black or Asian PI apps. Interestingly, average scores are worse for Black PI apps than for Asian PI apps. Interesting because the funding disparity is larger for Asian PIs than for Black PIs. And as you can imagine, there is a relationship between score and chances of being funded, but it is variable. Kind of like a Programmatic decision on exception pay or the grey zone function in NIH land.

Not sure exactly how this matches up over at NSF, but the first author of the preprint put me onto an FY2015 report on the Merit Review Process that addresses this. Page 74 of the PDF (NSB-AO-206-11) has a Figure 3.2 showing the success rates by average review score and PI race. As anticipated, proposals in the 4.75 (bin midpoint) bin are funded at rates of 80% or better. About 60% for the 4.25 bin, 30% for the 3.75 bin and under 10% for the 3.25 bin. Interestingly, the success rates for Black PI applications are higher than for white PI applications at the same score. The Asian PI success rates are closer to the white PI success rates but still a little bit higher, at comparable scores. So clearly something is going on with funding decision making at NSF to partially counter the poorer scores, on average, from the reviewers. The Asian PI proposals do not enjoy as much of this advantage. This explains why the overall success rates for Black PI applications are closer to the average than the Asian PI apps, despite worse average scores.

Fig 5 from Chen et al 2022 preprint

One more curious factor popped out of this study. The authors, obviously, could only use the applications for which a PI had specified their race. This was about 96% in 1999-2000, when they were able to include these data. However, it was down to 90% in 2009, 86% in 2016, and then took a sharp plunge in successive years to land at 76% in 2019. The first author indicated on Twitter that this was down to 70% in 2020, the largest one-year decrement. This is very curious to me. It seems obvious that PIs are doing whatever they think is going to help them get funded. For the percentage to be this large it simply has to involve large numbers of white PIs, and likely Asian PIs as well. It cannot simply be Black PIs worried that racial identification will disadvantage them (a reasonable fear, given the NIH data reported in Ginther et al.). I suspect a certain type of white academic who has convinced himself (it’s usually a he) that white men are discriminated against, that the URM PIs have an easy ride to funding, and that the best thing for him to do is not to declare himself white. Also another variation on the theme, the “we shouldn’t see color so I won’t give em color” type.

It is hard not to note that the US has been having a more intensive discussion about systemic racial discrimination, starting somewhere around 2014 with the shooting of Michael Brown in Ferguson, MO. This amped up in 2020 with the strangulation murder of George Floyd in Minneapolis. Somewhere in here, scientists finally started paying attention to the Ginther Gap. News started getting around. I think all of this is probably causally related to the sharp decreases in the self-identification of race on NSF applications. Perhaps not for all the same reasons for every person or demographic. But if it is not an artifact of the grant submission system, this is the most obvious conclusion.

There is a ton of additional analysis in the preprint. Go read it. Study. Think about it.

Additional: Ginther et al. (2011) Race, ethnicity, and NIH research awards. Science, 2011 Aug 19; 333(6045):1015-9. [PubMed]

The latest blog post over at Open Mike, from the NIH honcho of extramural grant award Mike Lauer, addresses “Discussion Rate”. This is, in his formulation, the percent of applicants (in a given Fiscal Year, FY21 in this case) who are PI on at least one application that reaches discussion, i.e., is not triaged. The post presents three Tables, with this Discussion rate (and Funding rate) broken down by the sex of the PI, by race (Asian, Black, White only) or by ethnicity (Hispanic or Latino vs non-Hispanic only). The tables further present these breakdowns by Early Stage Investigator, New Investigator, At Risk and Established. At Risk is a category of “researchers that received a prior substantial NIH award but, as best we can tell, will have no funding the following fiscal year if they are not successful in securing a competing award this year.” At this point you may wish to revisit an old blog post by DataHound called “Mind the Gap” which addresses the chances of regaining funding once a PI has lost all NIH grants.

I took the liberty of graphing the By-Race/Ethnicity Discussion rates, because I am a visual thinker.

There seem to be two main things that pop out. First, in the ESI category, the Discussion rate for Black PI apps is a lot lower. Which is interesting. The 60% rate for ESI might seem a little odd until you remember that the burden of triage may not fall as heavily on ESI applications. At least 50% of applications have to be discussed in each study section, small numbers in a study section probably mean that on average it is more than half, and this is NIH-wide data for FY21 (5,410 ESI PIs total). Second, the NI category (New, Not Early on the chart) seems to suffer relative to the other categories.

Then I thought a bit about this per-PI Discussion rate being north of 50% for most categories. And that seemed odd to me. Then I looked at another critical column on the tables in the blog post.

The median number of applications per applicant was…. 1. Since every applicant in the table submitted at least one application, a median of 1 means at least half of applicants submitted exactly one. That means the mode is 1.

Wow. Just….wow.
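To see why a per-PI Discussion rate north of 50%, combined with a median of one application per applicant, is striking, here is a toy model. It assumes every application independently reaches discussion with the same probability, which is surely wrong in detail (one PI's applications are correlated), but it shows the direction of the effect.

```python
# Toy model: per-PI discussion rate as a function of applications submitted.
# Assumes each app independently reaches discussion with probability p
# (an oversimplification; one PI's apps are surely correlated).
p = 0.45  # hypothetical per-application discussion rate, roughly half

for k in (1, 2, 3, 4):
    per_pi_rate = 1 - (1 - p) ** k  # chance at least one of k apps is discussed
    print(f"{k} application(s): per-PI discussion rate = {per_pi_rate:.0%}")

# With a median of one application per applicant, the per-PI rate for most
# applicants simply IS the per-application rate. Multiple submissions cannot
# be what pushes the per-PI numbers above 50%.
```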

I can maybe understand this for ESI applicants, since for many of them this will be their first grant ever submitted.

But for “At Risk”? An investigator who has experience as a PI with NIH funding, who is about to have no NIH funding if a grant does not hit, and they are submitting ONE grant application per fiscal year?

I am intensely curious how this stat breaks down by deciles. How many at risk PIs are submitting only one grant proposal? Is it only about half? Two-thirds? More?

As you know, my perspective on the NIH grant getting system is that if you have only put in one grant you are not really trying. The associated implication is that any solutions to the various problems that the NIH grant award system might have that are based on someone not getting their grant after only one try are not likely to be that useful.

I just cannot make this make sense to me.

It is slightly concerning that the NIH is now reporting on this category of investigator. Don’t get me wrong. I believe the NIH system should support a greater expectation of approximately continual funding for investigators who are funded PIs. But it absolutely cannot be 100%. What should it be? I don’t know. It’s debatable. Perhaps more importantly, who should be saved? Because after all, what is the purpose of NIH reporting on this category if they do not plan to DO SOMETHING about it? By, presumably, using some sort of exception pay or policy to prevent these at-risk PIs from going unfunded.

There was just such a plan bruited about for PIs funded with the ESI designation who were unable to renew or get another grant. They called them Early Established Investigators and described their plans to prioritize these apps in NOT-OD-17-101. This was shelved (NOT-OD-18-214) because “NIH’s strategy for achieving these goals has evolved based on on-going work by an Advisory Committee to the Director (ACD) Next Generation Researchers Initiative Working Group and other stakeholder feedback” and yet asserted “NIH..will use an interim strategy to consider “at risk investigators”..in its funding strategies”. In other words, people screamed bloody murder about how it was not fair to consider “at risk” only those who happened demographically to benefit from the ESI policy.

It is unclear how these “consider” decisions have been made in the subsequent interval. In a way, Program has always “considered” at-risk investigators, so it is particularly unclear how this language changes anything. In the early days I was told directly by POs that my pleas for an exception pay were not as important because “we have to take care of our long funded investigators who will otherwise be out of funding”. This sort of thing came up in study section more than once in my hearing, voiced variously as “this is the last chance for this PI’s one grant” or even “the PI will be out of funding if…”. As you can imagine, at the time I was new and full of beans and found that objectionable. Now….well, I’d be happy to have those sentiments applied to me.

There is a new version of this “at risk” consideration that is tied to the new PAR-22-181 on promoting diversity. In case you are wondering why this differs from the famously rescinded NINDS NOSI, well, NIH has managed to find themselves a lawyered excuse.

Section 404M of the Public Health Service Act (added by Section 2021 in Title II, Subtitle C, of the 21st Century Cures Act, P.L. 114-255, enacted December 13, 2016), entitled, “Investing in the Next Generation of Researchers,” established the Next Generation Researchers Initiative within the Office of the NIH Director.  This initiative is intended to promote and provide opportunities for new researchers and earlier research independence, and to maintain the careers of at-risk investigators.  In particular, subsection (b) requires the Director to “Develop, modify, or prioritize policies, as needed, within the National Institutes of Health to promote opportunities for new researchers and earlier research independence, such as policies to increase opportunities for new researchers to receive funding, enhance training and mentorship programs for researchers, and enhance workforce diversity;

“enacted December 13, 2016”. So yeah, the NOSI was issued after this and they could very well have used this for cover. The NIH chose not to. Now, the NIH chooses to use this aspect of the appropriations language. And keep in mind that when Congress includes something like this NGRI in the appropriations language, NIH has requested it, or accepted it, or contributed to exactly how it is construed and written. So this is yet more evidence that their prior stance that the “law” or “Congress” was preventing them from acting to close the Ginther Gap was utter horseshit.

Let’s get back to “at risk” as a more explicitly expressed concern of the NIH. What will these policies mean? Well, we do know that none of this comes with any concrete detail like set aside funds (the PAR is not a PAS) or ESI-style relaxation of paylines. We do know that they do this all the damn time, under the radar. So what gives? Who is being empowered by making this “consideration” of at-risk PI applications more explicit? Who will receive exception pay grants purely because they are at risk? How many? Will it be in accordance with distance from payline? How will these “to enhance diversity” considerations be applied? How will these be balanced against regular old “our long term funded majoritarian investigator is at risk omg” sentiments in the Branches and Divisions?

This is one of the reasons I like the aforementioned Datahound analysis, because at least it gave a baseline of actual data for discussion purposes. A framework a given I or C could follow in starting to make intelligent decisions.

What is the best policy for where, who, what to pick up?

NIDA, NIMH, and NINDS have issued a Program Announcement (PAR-22-181) to provide Research Opportunities for New and “At-Risk” Investigators with the intent to Promote Workforce Diversity.

This is issued as a PAR, which is presumably to allow Special Emphasis Panels to be convened. It is not a PAS; however, the announcement includes set-aside funding language familiar from PAS and RFA Funding Opportunity Announcements (FOAs).

Funds Available and Anticipated Number of Awards

The following NIH components intend to commit the following amounts for the duration of this PAR: NINDS intends to commit up to $10 million per fiscal year, approximately 25 awards, dependent on award amounts; NIDA intends to commit up to $5 million per fiscal year, 12-15 awards, dependent on award amounts; NIMH intends to commit up to $5 million per fiscal year, 12-15 awards, dependent on award amounts. Future year amounts will depend on annual appropriations.

This is a PA-typical 3-year FOA which expires June 7, 2025. Receipt dates are one month ahead of standard, i.e., Sept (new R01) / Oct (Resub, Rev, Renew); Jan/Feb; May/Jun for the respective Cycles.

Eligibility is in the standard categories of concern including A) Underrepresented Racial/Ethnic groups, B) Disability, C) economic disadvantage and D) women. Topics of proposal have to be within the usual scope of the participating ICs. Eligibility of PIs is for the familiar New Investigators (“has not competed successfully for substantial, NIH (sic) independent funding from NIH“) and a relatively new “at risk” category.

At risk is defined as “has had prior support as a Principal Investigator on a substantial independent research award and, unless successful in securing a substantial research grant award in the current fiscal year, will have no substantial research grant funding in the following fiscal year.”

So. We have an offset deadline (at least for new proposals), set aside funds, SEPs for review and inclusion of NI (instead of merely ESI) and the potential for the more experienced investigator who is out of funding to get help as well. Pretty good! Thumbs up. Can’t wait to see other ICs jump on board this one.

To answer your first question, no, I have no idea how this differs from the NINDS/NIDA/NIAAA NOSI debacle. As a reminder:

Notice NOT-NS-21-049 Notice of Special Interest (NOSI): NIH Research Project Grant (R01) Applications from Individuals from Diverse Backgrounds, Including Under-Represented Minorities was released on May 3, 2021.

The “debacle” part is that right after NIDA and NIAAA joined NINDS in this NOSI, the Office of the Director put it about that no more ICs could join in and forced a rescinding of the NOSI on October 25, 2021 while claiming that their standard issue statement on diversity accomplished the same goals.

I see nothing in this new PAR that addresses either of the two real reasons that may have prompted the Office of the Director to rescind the original NOSI. The first and most likely reason is NIH’s fear of right wing, anti-affirmative action, pro-white supremacy forces in Congress attacking them. The second reason would be people in high places* in the NIH who are themselves right wing, anti-affirmative action and pro-white supremacy. If anything, the NOSI was much less triggering since it came with no specific plans of action or guarantees of funding. The PAR, with its notification of intended awards, is much more specific and would seemingly be even more offensive to right wingers.

I do have two concerns with this approach, as much as I like the idea.

First, URM-only opportunities have a tendency to put minority applicants in competition with each other. Conceptually, suppose there is an excellent URM-qualified proposal that gets really high priority scores from study section, and presume it would have also done so in an open, representation-blind study section. This one now displaces another URM proposal in the special call and *fails to displace* a lesser proposal from a (statistically probable) majoritarian PI. That’s less good than fixing the bias in the first place so that all open competitions are actually open and fair. I mentioned this before:

These special FOA have the tendency to put all the URM in competition with each other. This is true whether they would be competitive against the biased review of the regular FOA or, more subtly, whether they would be competitive for funding in a regular FOA review that had been made bias-free(r). […] The extreme example here is the highly competitive K99 application from a URM postdoc. If it goes in to the regular competition, it is so good that it wins an award and displaces, statistically, a less-meritorious one that happens to have a white PI. If it goes in to the MOSAIC competition, it also gets selected, but in this case by displacing a less-meritorious one that happens to have a URM PI. Guaranteed.

The second concern is one I’ve also described before.

In a news piece by Jocelyn Kaiser, the prior NIH Director Elias Zerhouni was quoted saying that study sections responded to his 2006/2007 ESI push by “punishing the young investigators with bad scores”. As I have tried to explain numerous times, phrasing this as a matter of malign intent on the part of study section members is a mistake. While it may be true that many reviewers opposed the idea that ESI applicants should get special breaks, adjusting scores to keep the ESI application at the same chances as before Zerhouni’s policies took effect is just a special case of a more general phenomenon.

So, while this PAR is a great tactical act, we must be very vigilant for the strategic, long term concerns. It seems to me very unlikely that there will be enthusiasm for enshrining this approach for decades (forever?) like the ESI breaks on merit scores/percentiles/paylines. And this approach means it will not be applied by default to all qualifying applications, as is the case for ESI.

Then we get to the Oppression Olympics, an unfortunate pitting of the crabs in the barrel against each other. The A-D categories of under-representation and diversity span quite a range of PIs. People in each category, or those who are concerned about specific categories, are going to have different views on who should be prioritized. As you are well aware, Dear Reader, my primary concern is with the Ginther gap. As you are also aware, the “antis” and some pro-diversity types are very concerned to establish that a specific person who identifies as African-American has been discriminated against, and are vewwwwy angweee to see any help being extended to anyone of apparent socio-economic privilege who just so happens to be Black. Such as the Obama daughters. None of us are clean on this. Take Category C. I have relatively recently realized that I qualify under Category C since I tick three of the elements; only two are required. I do not think that there is any possible way that my qualification on these three items affects my grant success in the least. To conclude that it does would require a lot of supposing and handwaving. I don’t personally think that anyone like me who qualifies technically under Category C should be prioritized ahead of, say, the demonstrated issue of the Ginther gap. These are but examples of the sort of “who is most disadvantaged and therefore most deserving” disagreement that I think may be a problem for this approach.

Why? Because reviewers will know that this is the FOA they are reviewing under. Opinions on the relative representation of categories A-D, the Oppression Olympics and the pernicious stanning of “two-fers” will be front and center. Probably explicit in some reviews. And I think this is a problem for the broader goals of improving equity of opportunity and for playing for robust retention of individuals in the NIH-funded research game.

__

*This is going to have really ugly implications for the prior head of the NIH, Francis Collins, if the PAR is not rescinded from the top and the only obvious difference here is his departure from NIH.

In a prior post, A pants leg can only accommodate so many Jack Russells, I had elucidated my affection for applying Vince Lombardi’s advice to science careers.

Run to Daylight.

Seek out ways to decrease the competition, not to increase it, if you want to have an easier career path in academic science. Take your considerable skills to a place where they are not just expected value, but represent near miraculous advance. This can be in topic, in geography, in institution type or in any other dimension. Work in an area where there are fewer of you.

This came up today in a discussion of “scooping” and whether it is more or less your own fault if you are continually scooped, scientifically speaking.

He’s not wrong. I, obviously, was talking a similar line in that prior post. It is advisable, in a career environment where things like independence, creativity, discovery, novelty and the like are valued, for you NOT to work on topics that lots and lots of other people are working on. In the extreme, if you are the only one working on some topic that others who sit in evaluation of you see as valuable, this is awesome! You are doing highly novel and creative stuff.

The trouble is that, despite the conceits of study section review, the NIH system does NOT tend to reward investigators who are highly novel solo artists. It is seemingly obligatory for Nobel Laureates to complain about how some study section panel or other passed on the grant which described their plans to pursue what became the Nobel-worthy work. Year after year a lot of me-too grants get funded while genuinely new stuff flounders. The NIH has a whole system (RFAs, PAs, now NOSI) set up to beg investigators to submit proposals on topics that are seemingly important but on which nobody can get fundable scores.

In 2019 the Hoppe et al. study put a finer and more quantitatively backed point on this. One of the main messages was the degree to which grant proposals on some topics had a higher success rate while those on other topics had lower success rates. You can focus on the trees if you want, but the forest is all-critical. This has pointed a spotlight on what I have taken to calling the inherent structural conservatism of NIH grant review. The peers are making entirely subjective decisions, particularly right at the might-fund/might-not-fund threshold of scoring, based on gut feelings. Those peers are selected from the ranks of the already-successful when it comes to getting grants. Their subjective judgments, therefore, tend to reinforce the prior subjective judgments. And of course, tend to reinforce an orthodoxy at any present time.

NIH grant review has many pseudo-objective components to it which do play into the peer review outcome. There is a sense of fair-play, sauce for the goose logic which can come into effect. Seemingly objective evaluative comments are often used selectively to shore up subjective, Gestalt reviewer opinions, but this is in part because doing so has credibility when an assigned reviewer is trying to convince the other panelists of their judgment. One of these areas of seemingly objective evaluation is the PI’s scientific productivity, impact and influence, which often touches on publication metrics. Directly or indirectly. Descriptions of productivity of the investigator. Evidence of the “impact” of the journals they publish in. The resulting impact on the field. Citations of key papers….yeah it happens.

Consideration of the Hoppe results and the Lauer et al. (2021) description of the NIH “funding ecology”, in the light of the original Ginther et al. (2011, 2018) investigation into the relationship of PI publication metrics to funding, is relevant here.

Publication metrics are a function of funding. The number of publications a lab generates depends on having grant support. More papers is generally considered better, fewer papers worse. More funding means an investigator has the freedom to make papers meatier. Bigger in scope or deeper in converging evidence. More papers means, at the very least, a trickle of self-cites to those papers. More funding means more collaborations with other labs…which leads to them citing both of you at once. More funding means more trainees who write papers, write reviews (great for h-index and total cites) and eventually go off to start their own publication records…and cite their trainee papers with the PI.

So when the NIH-generated publications say that publication metrics “explain” a gap in application success rates, they are wrong. They use this language, generally, in a way that says Black PIs (the topic of most of the reports, but this generalizes) have inferior publication metrics and this causes a lower success rate. With the further implication that this is a justified outcome. This totally ignores the inherent circularity of grant funding and publication measures of awesomeness. Donna Ginther has written a recent reflection on her work on NIH grant funding disparity, which doubles down on her lack of understanding of this issue.

Publication metrics are also a function of funding to the related sub-field. If a lot of people are working on the same topic, they tend to generate a lot of publications with a lot of available citations. Citations which buoy up the metrics of investigators who happen to work in those fields. Did you know, my biomedical friends, that a JIF of 1.0 is awesome in some fields of science? This is where the Hoppe and Lauer papers are critical. They show that not all fields get the same amount of NIH funding, and do not get that funding as easily. This affects the available pool of citations. It affects the JIF of journals in those fields. It affects the competition for limited space in the “best” journals. It affects the perceived authority of some individuals in the field to prosecute their personal opinions about the “most impactful” science.

That funding to a sub-field, or to certain approaches (technical, theoretical, model, etc, etc) has a very broad and lasting impact on what is funded, what is viewed as the best science, etc.

So is it good advice to “Run to daylight”? If you are getting “scooped” on the regular is it your fault for wanting to work in a crowded subfield?

It really isn’t. I wish it were so but it is bad advice.

Better advice is to work in areas that are well populated and well-funded, using methods and approaches and theoretical structures that everyone else prefers and bray as hard as you can that your tiny incremental twist is “novel”.

I was recently describing Notice NOT-NS-21-049 Notice of Special Interest (NOSI): NIH Research Project Grant (R01) Applications from Individuals from Diverse Backgrounds, Including Under-Represented Minorities in the context of the prior NIH Director’s comments that ESI scores got worse after the news got around about relaxed paylines.

One thing that I had not originally appreciated was the fact that you are only allowed to put one NOSI in Box 4b.

Which means that you have to choose. If you qualify as an individual from diverse backgrounds you could use this, sure. But that means you cannot use a NOSI that is specific to the topic you are proposing.

This is the usual NIH blunder of stepping on their own junk. How many ways can I count?

Look, the benefit of NOSI (and the predecessor, the Program Announcement) is uncertain. It seemingly only comes into play when some element of Program wishes to fund an award out of the order of review. Wait, you say, can’t they just do that anyway for whatever priority appears in the NOSI? Yes, yes they can….when it comes to the topic of the grant. So why do NOSI exist at all?

Well…one presumes it is because elements of Program do not always agree on what should be funded out of order of review. And one presumes there is some sort of conflict resolution process. During which the argument that one grant is related to the Programmatic Interest formally expressed in the NOSI has some weight or currency. Prioritizing that grant’s selection for funding over the identically-percentiled grant that does not mention a NOSI.

One still might wonder about a topic that fits the NOSI but doesn’t mention the NOSI directly. Well, the threat language at the bottom of some of those NOSI, such as oh I don’t know this one, is pretty clear to me.

  • For funding consideration, applicants must include “NOT-DA-21-006” (without quotation marks) in the Agency Routing Identifier field (box 4B) of the SF424 R&R form. Applications without this information in box 4B will not be considered for this initiative.

Applications nonresponsive to terms of this NOSI will not be considered for the NOSI initiative.

So what is a PI to do? Presumably the NOSI has some non-negligible value and everyone is motivated to use one if possible. Maybe it will be the difference between a grey zone pickup and not, right? If your ideas for this particular grant proposal fit with something that your favorite IC has gone to the trouble of mentioning in a NOSI…well….dang it….you want to get noticed for that!

So what can you do if you are an underrepresented person who qualifies for the NOSI NOT-NS-21-049? The value of this one is uncertain. The value of any other NOSI for your particular application is likewise uncertain. We know perfectly well the NIH as a whole is running scared of right wing political forces when it comes to picking up grants. We know that this NOSI may be related to well-meaning IC staff having difficulty getting PI demographic information, and could simply be a data collection strategy for them.

Cynical, you say? Well, I had a few exchanges with a fairly high up Program person who suggested to me that perhaps the strategy was to sneak the “extra” NOSI into the Abstract of the proposal. This would somehow get it in front of Program eyes. But….but….there’s the “will not be considered” boilerplate. Right? What does this mean? It is absolutely maddening for PIs who might like to take advantage of this new NOSI, which one might think would be used to fix the Ginther Gap. It is generally enraging for anyone who wants to see the Ginther Gap addressed.

It makes me positively incandescent to contemplate the possibility that the mere announcing of this NOSI will lead to study sections giving even worse scores to those applications, without any real assistance coming from Program.

A couple more thoughts. This doesn’t apply to anything other than R01 applications, which is nuts. Why not apply it to all investigator-initiated mechanisms? Trust me, underrepresented folks would like a leg up on R21 and R03 apps as well. These very likely help with later R01 getting, on an NIH-wide statistical basis. You know, the basis of the Ginther Gap. So why not include other mechs?

And did you notice that no other ICs have joined? NINDS issued the NOT on May 3 and they were rapidly joined by NIDA (May 6) and NIAAA (May 11). All in good time for the June 5 and July 5 submission rounds. Since then….crickets. No other ICs have joined in. Weird, right?

I was on a Zoom thing a while back where a highly authoritative Program person claimed that the Office of the Director (read: Francis Collins) had put a hold on any more ICs joining the NINDS NOSI.

Why? Allegedly because there was a plan to make this more general, more NIH-wide, all in one fell swoop.

Why? Who the heck knows. To cover up the reluctance of some of the ICs that would not be joining the NOSI if left to their own devices? If so, this is HORRENDOUS, especially given the above-mentioned considerations for choosing only one NOSI for Box 4b. Right? If they do extend this across all of NIH, how would any PI know that their particular IC has no intention whatsoever of using this NOSI to fund some grants? So maybe they choose to use it, for no help, while bypassing another NOSI that might have been of use to them.

Notice NOT-NS-21-049 Notice of Special Interest (NOSI): NIH Research Project Grant (R01) Applications from Individuals from Diverse Backgrounds, Including Under-Represented Minorities was released on May 3, 2021.

The NOSI is the new Program Announcement, for those who haven’t been keeping track. As with the old PA the direct benefit is not obvious. There is no set aside funding or promise to fund any applications at all. In the context of Ginther et al 2011, Hoppe et al 2019, the discussions of 2020 and the overall change in tone from the NIH on diversity matters, it is pretty obvious that this is designed merely to be the excuse. This PA, sorry NOSI, is what will permit Program to pick up grants on the (partial?) basis of the PI’s identity.

What identities? Well the NOSI re-states the categories A, B and C that are familiar from other similar documents.

A. Individuals from racial and ethnic groups that have been shown by the National Science Foundation to be underrepresented in health-related sciences on a national basis…

B. Individuals with disabilities, who are defined as those with a physical or mental impairment that substantially limits one or more major life activities

C. Individuals from disadvantaged backgrounds, defined as those who meet two or more of the following criteria…

And then there is a statement about gender intersectionality to close out.

GREAT! Right?

Yeah, it is. To the extent this is used to figure out a way to start applying, to the NIH funding disparity identified in Ginther and Hoppe, the naked, quota-based, top-down, heavy-handed affirmative action that has been benefiting ESI applicants since 2007, this is a win. From the very start of hearing about Ginther I’ve been talking about exception-pay rapid solutions, and this only heated up with the disingenuous claim in Hoppe et al. that exception pay was not accelerating the disparity. The NOSI allows pickups/exception pay, for sure, under the “special interest” idea. I don’t know if this will end up generating explicit payline benefits on a universal categorical basis as has been done at many ICs for ESI applications. The difference, of course, is that the former is much more variable and subject to biases that are expressed by Program Officers individually and collectively. Explicit rules would be better…..ish. It’s complicated.

In a news piece by Jocelyn Kaiser, the prior NIH Director Elias Zerhouni was quoted saying that study sections responded to his 2006/2007 ESI push by “punishing the young investigators with bad scores”. As I have tried to explain numerous times, phrasing this as a matter of malign intent on the part of study section members is a mistake. While it may be true that many reviewers opposed the idea that ESI applicants should get special breaks, adjusting scores to keep the ESI application at the same chances as before Zerhouni’s policies took effect is just a special case of a more general phenomenon.

NIH grant reviewers have a pronounced tendency to review grant proposals with an eye to “fund it” versus “don’t fund it”. Continual exhortations from SROs that panels do not make funding decisions, that they should review merit on a mostly continuous scale and that they should spread scores have minimal impact on this. Reviewers have a general idea of what scores will result in funding and what will not, and they score accordingly*. I have mentioned that when I first started on study section the SRO would actually send us score distributions as part of the effort to get us to spread scores. INEVITABLY the scores would stack up around the perceived funding line. Across a couple of years one could even see this move in tandem (with a round or two lag, obv) with what was funding at the 2-3 ICs that most grants were assigned to.

One interpretation of the “punishing” phenomenon is simply that panels were doing what they always do (as I assert anyway) in matching scoring to their perception of the payline and their gut feeling about whether a given app was deserving. What this assumes, of course, is that whatever biases were keeping the applications of the ESI-qualifying individuals from getting good scores in the past were still present and the reviewers were simply continuing the same behavior.

My concern with the new NOSI is, of course, that something similar will happen with the applications that qualify for this NOSI. There is the potential for a growing general suspicion (assumption? bias?) among study section reviewers that “oh, those URM PIs get special breaks” and then the scores will get even worse than they were before this idea started to percolate around the culture. It might be accelerated if the ICs generate explicit statements of relaxed paylines…but the campfire chatter about how the NOSI is being used will be sufficient.

Vigilance!

Vigilance is the thing. NIH cannot be permitted to put this in place, pay no attention to the results and “suddenly realize” five or ten years later that things are not working according to design.

__

*generally. Reviewers can be mistaken about paylines. They can be miscalibrated about scores and percentiles. They have a limited picture which only reflects their own knowledge of what is being funded. But still, there is an aggregate effect.

The recent NOT-OD-21-073 Upcoming Changes to the Biographical Sketch and Other Support Format Page for Due Dates on or after May 25, 2021 indicates one planned change to the Biosketch which is both amusing and of considerable interest to us “process of NIH” fans.

For the non-Fellowship Biosketch, Section D. has been removed. … As applicable, all applicants may include details on ongoing and completed research projects from the past three years that they want to draw attention to within the personal statement, Section A.

Section D is “Additional Information: Research Support and/or Scholastic Performance”. The prior set of instructions read:

List ongoing and completed research projects from the past three years that you want to draw attention to. Briefly indicate the overall goals of the projects and your responsibilities. Do not include the number of person months or direct costs.

And if the part about “want to draw attention to” was not clear enough they also added:

Do not confuse “Research Support” with “Other Support.” Other Support information is not collected at the time of application submission.

Don’t answer yet, there’s more!

Research Support: As part of the Biosketch section of the application, “Research Support” highlights your accomplishments, and those of your colleagues, as scientists. This information will be used by the reviewers in the assessment of each of your qualifications for a specific role in the proposed project, as well as to evaluate the overall qualifications of the research team.

This is one of those areas where the NIH intent has been fought bitterly by the culture of peer review, in my experience (meaning in my ~two decades of being an applicant and slightly less time as a reviewer). These policy positions, instructions, etc, and the segregation of the dollars and total research funding into the Other Support documentation, make it very clear to the naive reader that the NIH does not want reviewers contaminating their assessment of the merit of a proposal with their own ideas about whether the PI (or other investigators) have too much other funding. They do not want this at all. It is VERY clear, and this new update to the Biosketch enhances this by deleting any obligatory spot where funding information seemingly has to go.

But they are paddling upstream against a rushing, spring-flood, Category V rapids river. Good luck, say I.

Whenever this has come up, I think I’ve usually reiterated the reasons why a person might be motivated to omit certain funding from their Biosketch. Perhaps you had an unfortunate period of funding that was simply not very productive, for any of a thousand reasons. Perhaps you do have what looks to some eyes like “too much funding” for your age, tenure, institution type, sex or race. Or for your overall productivity level. Perhaps you have some funding that looks like it might overlap with the current proposal. Or maybe even funding from some source that some folks might find controversial. The NIH has always (i.e., during my time in the system) endorsed your ability to do so, and the notion that these considerations should not influence the assessment of merit.

I have also, I hope consistently, warned folks not to ever, ever try to omit funding (within the past three years) from their Biosketch, particularly if it can be found in any way on the internet. This includes those foundation sites bragging about their awards, your own lab website and your institutional PR machine which puts out brags on you. The reason is that reviewers just can’t help themselves. You know this. How many discussions have we had on science blogs and now science twitter that revolve around “solutions” to NIH funding stresses that boil down to “those guys over there have too much money and if we just limit them, all will be better”? Scores.

Believe me, all the attitudes and biases that come out in our little chats are also present in the heads of study section members. We have all sorts of ideas about who “deserves” funding. Sometimes these notions emerge during study section discussion or in the comments. Yeah, reviewers know they aren’t supposed to be judging this so it often comes up obliquely. Amount of time committed to this project. Productivity, either in general or associated with specific other awards. Even ones that have nothing to do with the current proposal.

My most hilariously vicious personal attack summary statement critique ever was clearly motivated by the notion that I had “too much money”. One of the more disgusting aspects of what this person did was to assume, incorrectly, that I had a tap on resources associated with a Center in my department. Despite no indication anywhere that I had access to substantial funds from that source. A long time later I also grasped an even more hilarious part of this. The Center in question was basically a NIH-funded Center with minimal other dollars involved. However, this Center has what appear to be peer Centers elsewhere that are different beasts entirely. These are Centers that have a huge non-federal warchest involving local income and an endowment built over decades. With incomes that put R21 and even R01 money into individual laboratories that are involved in the Center. There was no evidence anywhere that I had these sorts of covert resources, and I did not. Yet this reviewer felt fully comfortable teeing off on me for “productivity” in a way that was tied to the assumption that I had more resources than were represented by my NIH grants.

Note that I am not saying many other reviews of my grant applications have not been contaminated by notions that I have “too much”. At times I am certain they were. Based on my age at first. Based on my institution and job type, certainly. And on perceptions of my productivity, of course. And now in the post-Hoppe analysis….on my race? Who the fuck knows. Probably.

But the evidence is not usually clear.

What IS clear is that reviewers, who are your peers with the same attitudes they express around the water cooler, on average have strong notions about whether PIs “deserve” more funding based on the funding they currently have and have had in the past.

NIH is asking, yet again, for reviewers to please stop doing this. To please stop assessing merit in a way that is contaminated by other funding.

I look forward with fascination to see if NIH can manage to get this ship turned around with this latest gambit.

The very first evidence will be to monitor Biosketches in review to see if our peers are sticking with the old dictum of “for God’s sake don’t look like you are hiding anything” or if they will take the leap of faith that the new rules will be followed in spirit and nobody will go snooping around on RePORTER and Google to see if the PI has “too much funding”.

There is a new review by Shansky and Murphy out this month which addresses the NIH policy on considering sex as a biological variable (SABV).

Shansky, RM and Murphy, AZ. Considering sex as a biological variable will require a global shift in science culture. Nat Neurosci, 2021 Mar 1. doi: 10.1038/s41593-021-00806-8. Online ahead of print.

To get this out of the way, score me as one who is generally on board with the sentiments behind SABV and one who started trying to change my own approach when this first started being discussed. I even started trying to address this in my grant proposals several cycles before it became obligatory. I have now, as it happens, published papers involving both male and female subjects and continue to do so. We currently have experiments being conducted that involve both male and female subjects and my plan is to continue to do so. Also, I have had many exchanges with Dr. Shansky over the years about these issues and have learned much from her views and suggestions. This post is going to address where I object to things in this new review, for the most part, so I thought I should make these declarations, for what they are worth.

In Box 1, the review addresses a scientist who claims that s/he will first do the work in males and then followup in females as follows:

“We started this work in males, so it makes sense to keep going in males. We will follow up with females when this project is finished.” Be honest, when is a project ever truly finished? There is always another level of ‘mechanistic insight’ one can claim to need. Playing catch-up can be daunting, but it is better to do as much work in both sexes at the same time, rather than a streamlined follow-up study in females years after the original male work was published. This latter approach risks framing the female work as a lower-impact ‘replication study’ instead of equally valuable to scientific knowledge.

This then dovetails with a comment in Box 2 about the proper way to conduct our research going forward:

At the bare minimum, adhering to SABV means using experimental cohorts that include both males and females in every experiment, without necessarily analyzing data by sex.

I still can’t get past this. I understand that this is the place that the NIH policy on SABV has landed. I do. We should run 50/50 cohorts for every study, as Shansky and Murphy are suggesting here. I cannot for the life of me see the logic in this. I can’t. In my work, behavioral work with rats for the most part, there is so much variability that I am loath to even run half-size pilot studies. In a lot of the work that I do, N=8 is a pretty good starting size for the minimal ability to conclude much of anything. N=4? Tough, especially as a starting size for groups (a rough power sketch follows).
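To put rough numbers on the N=8 versus N=4 worry, here is a minimal power sketch, assuming a simple two-group comparison and a generously large effect size. The effect size and alpha are my assumptions for illustration, not anything from the review.

```python
# Rough power comparison: n=8 vs n=4 per group in a two-sample t-test.
# Effect size and alpha are assumptions for illustration only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
d = 1.0       # assumed (large) standardized effect size
alpha = 0.05  # conventional two-sided threshold

for n in (8, 4):
    power = analysis.power(effect_size=d, nobs1=n, alpha=alpha, ratio=1.0)
    print(f"n={n} per group: power = {power:.2f}")

# Under these assumptions n=8 per group is already underpowered (well below
# the conventional 0.8), and halving to n=4 roughly halves power again --
# even a large effect is likely to be missed in half-size groups.
```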

The piece eventually gets around to the notion of how we enforce the NIH SABV policy. As I have pointed out before, and as is a central component of this review, we are moving rapidly into a time when the laboratories that claim NIH support for their studies are referencing grant proposals that were submitted under SABV rules.

NOT-OD-15-102 appeared in June of 2015 and warned that the SABV policy “will take effect for applications submitted for the January 25, 2016, due date, and thereafter”. Which means grants to be reviewed in summer 2016, considered at Council in the Fall rounds and potentially funded Dec 1, 2016. This means, with the usual problems with Dec 1 funding dates, that we are finishing up year 4 of some of these initial awards.

One of the main things that Shansky and Murphy address is in the section “Moving forward-who is responsible?“.

whether they have [addressed SABV] in their actual research remains to be seen. NIH grants are nonbinding, meaning that awardees are not required to conduct the exact experiments they propose. Moreover, there is no explicit language from NIH stating that SABV adherence will be enforced once the funds are awarded. Without accountability measures in place, no one is prevented from exclusively using male subjects in research funded under SABV policies.

Right? It is a central issue if we wish to budge the needle on considering sex as a biological variable. And the primary mechanism of enforcement is, well, us. The peers who are reviewing the subsequent grant applications from investigators who have been funded in the SABV era. The authors sort of mention this: “Researchers should be held accountable by making documentation of SABV compliance mandatory in yearly progress reports and by using compliance as a contingency for grant renewals (both noncompetitive and competitive).” Actually, the way this is structured, combined with the following sentence about manuscript review, almost sidesteps the critical issue. I will not sidestep in this post.

We, the peer scientists who are reviewing the grant proposals, are the ones who must take primary responsibility for assessing whether a PI and associated Investigators have made a good faith attempt to follow SABV policy or not. Leaving this in the hands of Program to sort out, based on tepid review comments, is a dereliction of duty and will result in the frustrating variability of review that we all hate. So….we are the ones who will either let PIs off the hook, thereby undercutting everything NIH has tried to accomplish, or we will assist NIH by awarding poor scores to applications from a team that has not demonstrably taken SABV seriously. We are at a critical and tenuous point. Will PIs believe that their grants will still be funded with a carefully crafted SABV statement, regardless of whether they have followed through? Or will PIs believe that their grant getting is in serious jeopardy if they do not take the spirit of the SABV policy to heart? The only way this is decided is if the peer review scores reward those who take it seriously and punish those who do not.

So now we are back to the main point of this post which is how we are to assess good-faith efforts. I absolutely agree with Shansky and Murphy that an application (competing or not) that basically says “we’re going to follow up in the females later“, where later means “Oh we didn’t do it yet, but we pinky swear we will do it in this next interval of funding” should not be let off the hook.

However. What about a strategy that falls short of the “bare minimum”, as the authors insist on in Box 2, of including males and females in 50/50 proportion in every experiment, not powered to really confirm any potential sex difference?

I believe we need a little more flexibility in our consideration of whether the research of the PI is making a good faith effort or not. What I would like to see is simply that male and female studies are conducted within the same general research program. Sure, it can be the 50/50 group design. But it can also be that sometimes experiments are in males, sometimes in females. Particularly if there is no particular sense that one sex is always run first and the other is trivially “checked”, or that one sex dominates the experimental rationale. Pubs might include both sexes within one paper, that’s the easiest call, but they might also appear as two separate publications. I think this can often be the right approach, personally.

This will require additional advocacy, thinking, pushback, etc, on one of the fundamental principles that many investigators have struggled with in the SABV era. As is detailed in Boxes 1 and 2 of the review, SABV does not mean that each study is a direct study of sex differences, nor that every study in female mammals becomes a study of estrous cycle / ovarian hormones. My experience, as both an applicant and a reviewer, is that NIH study section members often have trouble with this notion. There has not been, in my experience on panels, a loud and general chorus rebutting any such notions during discussion either; we have much ground still to cover.

So we will definitely have to achieve greater agreement on what represents a good faith effort on SABV, I would argue, if we are to advocate strongly for NIH study sections to police SABV with the firm hand that it will require.

I object to what might be an obvious take-away from Shansky and Murphy, i.e., that the 50/50 sample approach is the essential minimum. I believe that other strategies and approaches to SABV can be adopted which both involve full single-sex sample sizes and do not require every study to be a direct contrast of the sexes in an experimentally clean manner.

The Director of the NIH, in the wake of a presentation to the Advisory Committee to the Director meeting, has issued a statement of NIH’s commitment to dismantle structural racism.

Toward that end, NIH has launched an effort to end structural racism and racial inequities in biomedical research through a new initiative called UNITE, which has already begun to identify short-term and long-term actions. The UNITE initiative’s efforts are being informed by five committees with experts across all 27 NIH institutes and centers who are passionate about racial diversity, equity, and inclusion. NIH also is seeking advice and guidance from outside of the agency through the Advisory Committee to the Director (ACD), informed by the ACD Working Group on Diversity, and through a Request for Information (RFI) issued today seeking input from the public and stakeholder organizations. The RFI is open through April 9, 2021, and responses to the RFI will be made publicly available. You can learn more about NIH’s efforts, actions, policies, and procedures via a newly launched NIH webpage on Ending Structural Racism aimed at increasing our transparency on this important issue.

This is very much welcome, coming along as it does, a decade after Ginther and colleagues showed that Black PIs faced a huge disadvantage in getting their NIH grants funded. R01 applications with Black PIs were funded at only 58% of the rate that applications with white PIs were funded.

Many people in the intervening years, accelerated after the publication of Hoppe et al 2019 and even further in the wake of the murder of George Floyd at the hands of the Minneapolis police in 2020, have wondered why the NIH does not simply adopt the same solution they applied to the ESI problem. In 2006/2007 the then-Director of NIH, Elias Zerhouni, dictated that the NIH would practice affirmative action to fund the grants of Early Stage Investigators. As detailed in Science by Jocelyn Kaiser

Instead of relying solely on peer review to apportion grants, [Zerhouni] set a floor—a numerical quota—for the number of awards made to new investigators in 2007 and 2008.

A quota. The Big Bad word of anti-equity warriors since forever. Gawd forbid we should use quotas. And in case that wasn’t clear enough:

The notice states that NIH “intends to support new investigators at success rates comparable to those for established investigators submitting new applications.” In 2009, that will mean at least 1650 awards to new investigators for R01s, NIH’s most common research grant.

As we saw from Hoppe et al, the NIH funded 256 R01s with Black PIs in the interval from 2011-2015, or 51 per year. In a prior blog post I detailed how some 119 awards to poorer-scoring applications with white PIs could have been devoted to better-scoring proposals with Black PIs. I also mentioned how doing so would have moved the success rate for applications with Black PIs from 10.7% to 15.6%, whereas the white PI success rate would decline from 17.7% to 17.56%. Even funding every single discussed application with a Black PI (44% of the submissions) by subtracting those 1057 applications from the pool awarded with white PIs would reduce the latter applications’ hit rate only to 16.7%, which is still a 56% higher rate than the 10.7% rate that the applications with Black PIs actually experienced.
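Since this arithmetic trips people up, here is a minimal sketch of it in Python. The Black PI application count is back-calculated from the 256 awards and the 10.7% success rate; the white PI application count is my assumption, chosen only to be consistent with the 16.7% figure, since the exact denominator is not reproduced here.

```python
# Minimal sketch of the award-reallocation arithmetic (Hoppe et al. 2019
# rates). The white PI application count is an assumed denominator.

black_apps = 256 / 0.107           # ~2,393 applications with Black PIs
white_apps = 105_700               # assumption; implied by the 16.7% figure
white_awards = 0.177 * white_apps  # 17.7% success rate

# Move 119 awards from poorer-scoring white PI applications to
# better-scoring Black PI applications:
print((256 + 119) / black_apps)            # ~0.157 -> ~15.6% success
print((white_awards - 119) / white_apps)   # ~0.176 -> barely budges

# Fund every discussed application with a Black PI (44% of submissions):
discussed = 0.44 * black_apps              # ~1,053 applications
print((white_awards - discussed) / white_apps)  # ~0.167
print(0.167 / 0.107)                       # ~1.56 -> still a 56% higher rate
```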

I have been, and continue to be, an advocate for stop-gap measures that immediately redress the funding rate disparity by mandating at least equivalent success rates, just as Zerhouni mandated for ESI proposals. But we need to draw a key lesson from that episode. As detailed in the Science piece:

Some program directors grumbled at first, NIH officials say, but came on board when NIH noticed a change in behavior by peer reviewers. Told about the quotas, study sections began “punishing the young investigators with bad scores,” says Zerhouni. That is, a previous slight gap in review scores for new grant applications from first-time and seasoned investigators widened in 2007 and 2008, [then NIGMS Director Jeremy] Berg says. It revealed a bias against new investigators, Zerhouni says.

I don’t know for sure that this continued, but the FY 2012 funding data published by Kienholz and Berg certainly suggest that several NIH ICs continued to fund ESI applications at much lower priority scores/percentiles than were generally required for non-ESI applications to receive awards. And if you examine the NIH ICs’ pages that publish their funding strategy each year [see the writedit blog for current policy links], you will see that they continue to use a lower payline for ESIs. So. From 2007 to 2021 that is a long interval of policy which is not “affirmative action”, just, in the words of Zerhouni, “leveling the playing field”.

The important point here is that the NIH has never done anything to get to the “real reason” for the fact that early stage investigators’ proposals were being scored by peer review at lower priority than they, NIH, desired. They have not undergone spasms of reviewer “implicit bias” training. They have not masked the identity of the PI or done anything suggesting they think they can “fix” the review outcome for ESI PIs.

They have accepted the fact that they just need to counter the bias.

NIH will likewise need to accept that they will need to fund Black PI applications with a different payline for a very long time. They need to accept that study sections will “punish*” those applications with even worse scores. They will even punish those applications with higher ND (Not Discussed) rates. And to some extent, merely by talking about it, this horse has left the stall and cannot be easily recalled. We exist in a world where, despite all evidence, white men regularly assert with great confidence that women or minority individuals have all the advantages in hiring and career success.

So all of this entirely predictable behavior needs to be accounted for, expected and tolerated.

__

*I don’t happen to see it as “punishing” ESI apps even further than whatever the base rate is. I think reviewers are very sensitive to perceived funding lines and to reviewing grants from a sort of binary “fund/don’t fund” mindset. Broadcasting a relaxed payline for ESI applications almost inevitably led to reviewers moving their perceived payline for those applications.

Rule Followers

February 12, 2021

As always Dear Reader, I start with personal confession so you know how to read my biases appropriately.

I am a lifetime Rule Follower.

I am also a lifetime Self-Appointed Punisher of Those Who Think the Rules Do Not Apply to Them.

What does “Rule Follower” mean to me? No, not some sort of retentive allegiance to any possible guideline or rule, explicit or implicit. I’ve been known to speed once in a while. It doesn’t even mean that rule followers are going to agree with, and follow, every rule imaginable for any scenario. It is just the orientation of a person who believes there are such things as rules of behavior, that these rules are good things as a social or community compact, and that it is a good idea to adhere to them as a general rule. It is a good idea to work within the rules; this is what is best for society, but also for the self.

The other kind of person, the “Rules Don’t Apply to ME” type, is not necessarily a complete sociopath*. And, in fact, such people may actually be a Rule Follower when it comes to the really big, obvious and Important (in their view) rules. But these are people that do not agree that all of the implicit social rules that Rule Followers follow actually exist. They do not believe that these rules apply to them, and often extend that to the misdemeanor sort of actual formal Rules, aka The Law.

Let’s talk rules of the road- these are the people who routinely speed, California Stop right on reds, and arc into the far lane when making a turn on a multi-lane road. These are the people that bypass a line of patiently waiting traffic and then expect to squeeze into the front of the line with an airy “oops, my badeee, thanks!” smile and wave. They are the ones that cause all sorts of merging havoc because they can’t be arsed to simply go down to the next street or exit to recover from their failure to plan ahead. These are often the people who, despite living in a State with very well defined rules of the road for bicycle traffic, self-righteously violate those rules as a car driver and complain about how the law-compliant bicycle rider is the one in the wrong.

But above all else, these people feel entitled to their behavior. It is an EXTREME OUTRAGE whenever they are disciplined in any way for their selfish and rude behavior that is designed to advantage themselves at the cost to (many) others.

If you don’t let them in the traffic line, you are the asshole. When you make the left turn into lane 2 and they barely manage to keep from hitting you as they fail to arc their own turn properly..you are the asshole. When they walk at you three abreast on the sidewalk and you eyeball the muppethugger trying to edge you off your single lane coming the other way and give every indication you are willing to bodycheck their selfish ass until they finally grudgingly rack it the fuck in…YOU are the asshole.

When they finally get a minor traffic citation for their speeding or failing to stop on a right on red… Oh, sister. It’s forty minutes of complaining rationalization about how unfair this is and why are those cops not solving real crimes and oh woe is me for a ticket they can easily pay. Back in the day when it was still illegal, this was the person caught for a minor weed possession citation who didn’t just pay it but had to go on at length about how outrageous it was to get penalized for their obvious violation of the rules. Don’t even get me started about how these people react to a citation for riding their bicycle on the sidewalk (illegal!) instead of in the street (the law they personally disagree with).

Back before Covid you could identify these two types by hanging around the bulk food bin at your local hippy grocery store. Rule Followers do not sample the items before paying and exiting the store. Those other people…..

Hopefully I’ve chosen examples that get you into the proper mindset of a complex interplay of formal rules that not everyone follows and informal rules of conduct that not everyone follows. I shouldn’t have to draw your attention to how the “Rules Don’t Apply to Me” sail along with convenient interpretations, feigned ignorances and post-hoc everyone-does-it rationales to make their lives a lot easier. That’s right, it’s convenient to not follow the rules, it gets them ahead and frankly those Rule Followers are beta luser cucks for not living a life of personal freedom!

We’re actually in the midst of one of these scenarios right now.

Covid vaccination

As you are aware, there are formal “tiers” being promulgated for who gets scheduled for vaccines at which particular time. You know the age cutoffs- we started with 75+ and are now at 65+ in most locations. Then there are the job categories. Health care workers are up first, and then we are working down a cascade of importance by occupation. Well, in my environment we had a moment in which “lab workers” were greenlit and Oh, the science lab types rushed to make their appointments. After a short interval, the hammer came down because “lab” meant “lab actually dealing with clinical care and health assessment samples” and not just “any goofaloon who says they work in a lab”.

Trust me, those at the head of that rush (or those pushing as the lab head or institution head) were not the Rule Followers. It was, rather, those types of people who are keen to conveniently define some situation to their own advantage and never consider for a second if they are breaking the Rules.

Then there have been some vaccine situations that are even murkier. We’ve seen on biomedical science tweeter that many lab head prof types have had the opportunity to get vaccinated out of their apparent tier. It seemed, especially in the earlier days prior to vaccine super centers, that a University associated health system would reach the end of their scheduled patients for the day and have extra vaccine.

[ In case anyone has been hiding under a rock, the first vaccines are fragile. They have to be frozen for storage in many cases and thus thawed out. They may not be stable overnight once the vial in question has been opened. In some cases the stored version may need to be “made up” with vehicles or adjuvants or whatever additional components. ]

“Extra” vaccine in the sense of active doses that would otherwise be lost / disposed of if there was no arm to stick it in. Employees who are on campus or close by, can readily be rounded up on short notice, and have no reason to complain if they can’t get vaccinated that particular day, make up this population of arms.

Some Rule Followers were uncomfortable with this.

You will recognize those other types. They were the ones triumphantly posting their good luck on the internet.

In my region, we next started to have vaccine “super centers”. These centers recruited lay volunteers to help out, keep an eye on patients, assist with traffic flow, run to the gloves/syringe depot, etc. And, as with the original health center scenario, there were excess doses available at the end of the day which were offered to the volunteers.

Again, some Rule Followers were uncomfortable with this. Especially because in the early days it was totally on the DL. The charge nurse closest to you would pull a volunteer aside and quietly suggest waiting around at the end of the day just “in case”. It was all pretty sketchy sounding….. to a Rule Follower. The other type of person? NO PROBLEM! They were right there on day one, baby! Vacc’d!

Eventually the volunteer offer policy became somewhat formalized in my location. Let me tell you, this was a slight relief to a Rule Follower. It for sure decreases the discomfort over admitting one’s good fortune on the intertoobs.

But! It’s not over yet! I mean, these are not formalized processes and the whole vaccine super-center is already chaos just running the patients through. So again, the Rules Don’t Need To Be Followed types are most likely to do the self-advocacy necessary to get that shot in their arm as quickly and assuredly as possible. Remember, it’s only the excess doses that might be available. And you have to keep your head up on what the (rapidly shifting and evolving) procedure might be at your location if you want to be offered vaccine.

Fam, I’m not going to lie. I leaned in hard on anyone I think of as a Rule Follower when I was relating the advantages of volunteering** at one of our vaccine super-centers. I know what we are like. I tell them as much about the chaotic process as I know so as to prepare them for self-advocacy, instead of their native reticence to act without clear understanding of rules that entitle them to get stuck with mRNA.

Still with me?

NIH has been cracking down on URLs in grant applications lately. I don’t know why and maybe it has to do with their recent hoopla about “integrity of review” and people supposedly sharing review materials with outside parties (in clear violation of the review confidentiality RULES, I will note). Anyway, ever since forever you are not supposed to put URL links in your grant applications and reviewers are exhorted never ever to click on a link in a grant. It’s always been explained to me in the context of IP address tracking and identifying the specific reviewers on a panel that might be assigned to a particular application. Whatever. It always seemed a little paranoid to me. But the Rules were exceptionally clear. This was even reinforced with the new Biosketch format that motivated some sort of easy link to one’s fuller set of publications. NIH permits PubMed links and even invented up this whole MyBibliography dealio at MyNCBI to serve this purpose.

Anyway, there have been a few kerfuffles of EXTREME ANGER on Science Twitter from applicants who had their proposals rejected prior to review for including URLs. It is an OUTRAGE, you see, that they should be busted for this clear violation of the rules. Which allegedly, according to Those To Whom Rules Do Not Apply, were incredibly arcane rules that they could not possibly be expected to know and waaah, the last three proposals had the same link and weren’t rejected and it isn’t FAAAAAIIIIR!

My gut reaction is really no different than the one I have turning left in a two lane turn or walking at sidewalk hogs. Or the one I have when a habitual traffic law violator finally has to pay a minor fine. Ya fucked around and found out. As the kids say these days.

For some additional perspective, I’ve been reviewing NIH grants since the days when paper hard copies were submitted by the applicant and delivered to the reviewers as such. Pages could be missing if the copier effed up- there was no opportunity to fix this once a reviewer noticed it one week prior to the meeting. Font size shenanigans were seemingly more readily played. And even in the days since, as we’ve moved to electronic documents, there are oodles and oodles of rules for constructing the application. No “in prep” citations in the old Biosketch….people did it anyway. No substituting key methods in the Vertebrate Animals section…..people still do it anyway. Fonts and font size, okay, but what about vertical line spacing….people fudge that anyway. Expand figure “legends” (where font size can be smaller) to incorporate stuff that (maybe?) should really be in the font-controlled parts of the text. Etc, etc, etc.

And I am here to tell you that in many of these cases there was no formal enforcement mechanism. Ask the SRO about a flagrant violation and you’d get some sort of pablum about “well, you are not obliged to consider that material..”. Font size? “well…..I guess that’s up to the panel”. Which is enraging to a Rule Follower. Because even if you want to enforce the rules, how do you do it? How do you “ignore” that manuscript described as in prep, or make sure the other reviewers do? How do you fight with other reviewers about how key methods are “missing” when they are free to give good scores even if that material didn’t appear anywhere in figure legend, Vertebrate Animals or, ISYN, a 25%-of-the-page “footnote” in microfont? Or how do you respond if they say “well, I’m confident this investigator can work it out”?

If, in the old days, you gave a crappy score to a proposal that everyone loved by saying “I put a ruler on the vertical and they’ve cheated” the panel would side eye you, vote a fundable score and fuck over any of your subsequent proposals that they read.

Or such might be your concern if your instinct was to Enforce the Rules.

Anyway, I’m happy to see CSR Receipt and Referral enforce rules of the road. I don’t think it an outrage at all. The greater outrage is all the people who have been able to skirt or ignore the rules and advantage themselves against those of us who do follow the rules***.

__

*Some of my best friends are habitual non-followers-of-rules.

**I recommend volunteering at a vaccine super station if you have the opportunity. It is pretty cool just to see how your health care community is reacting in this highly unusual once-in-a-generation crisis. And it’s cool, for those of us with zero relevant skills, to have at least a tiny chance to help out. Those are the Rules, you know? 🙂

***Cue Non-Followers-of-Rules who, Trumplipublican- and bothsiders-media-like, are absolutely insistent that when they manage to catch a habitual Rule Follower in some violation it proves that we’re all the same. That their flagrant and continual behavior is somehow balanced by one transgression of someone else.

I have long-standing doubts about certain aspects of funding mechanisms that are targeted to underrepresented individuals. This has almost always come up in the past in the context of graduate or postdoctoral fellowships, when there is a FOA open to all and a related or parallel FOA that is directed explicitly at underrepresented individuals. For example, see the NINDS F31 and K99/R00, the NIGMS K99/R00 initiatives, and there is actually an NIH parent F32-diversity as well.

At first blush, this looks awesome! Targeted opportunity, presumably grant panel review that gives some minimal attention to the merits of the FOA and, again presumably, some Program traction to fund at least a few.

My Grinchy old heart is, however, suspicious about the real opportunity here. Perhaps more importantly, I am concerned about the real opportunity versus the opportunity that might be provided by eliminating any disparity of review that exists for the review of applications that come in via the un-targeted FOA. No matter the FOA, the review of NIH grants is competitive and zero sum. Sure, pools of money can be shifted from one program to another (say from the regular F31 to the F31-diversity) but it is rarely the case there is any new money coming in. Arguing about the degree to which funding is targeted by decision of Congress, of the NIH Director, of IC Directors or any associated Advisory Councils is a distraction. Sure NIGMS gets a PR hit from announcing and funding some MOSAIC K99/R00 awards…but they could just use those moneys to fund the apps coming in through their existing call that happen to have PIs who are underrepresented in science.

The extreme example here is the highly competitive K99 application from a URM postdoc. If it goes in to the regular competition, it is so good that it wins an award and displaces, statistically, a less-meritorious one that happens to have a white PI. If it goes in to the MOSAIC competition, it also gets selected, but in this case by displacing a less-meritorious one that happens to have a URM PI. Guaranteed.

These special FOAs have the tendency to put all the URM applicants in competition with each other. This is true whether they would be competitive under the biased review of the regular FOA or, more subtly, whether they would be competitive for funding in a regular FOA review that had been made bias-free(r).

I was listening to a presentation from Professor Nick Gilpin today on his thoughts on the whole Ginther/Hoppe situation (see his Feature at eLife with Mike Taffe) and was struck by comments on the Lauer pre-print. Mike Lauer, head of NIH’s Office of Extramural Research, blogged and pre-printed an analysis of how the success rates at various NIH ICs may influence the funding rate for AA/B PIs. It will not surprise you that this was yet another attempt to suggest it was AA/B PIs’ fault that they suffer a funding disparity. For the sample of grants reviewed by Lauer (from the Hoppe sample), 2% were submitted with AA/B PIs, NIH-wide. The percentage submitted to the 19 individual funding ICs he covered ranged from 0.73% to 14.7%. That high end belonged to the National Institute on Minority Health and Health Disparities (NIMHD). Other notable ICs of disproportionate relevance to the grants submitted with AA/B PIs include NINR (4.6% AA/B applications) and NICHD (3%).

So what struck me, as I listened to Nick’s take on these data, is that this is the IC assignment version of the targeted FOA. It puts applications with AA/B investigators in higher competition with each other. “Yeahbutt”, you say. It is not comparable. Because there is no open competition version of the IC assignment.

Oh no? Of course there is, particularly when it comes to NIMHD. Because these grants will very often look like grants right down the center of those of interest to the larger, topic-focused ICs….save that they are relevant to a population considered to be minority or suffering a health disparity. Seriously, go to RePORTER and look at new NIMHD R01s. Or heck, NIMHD is small enough you can look at the out-year NIMHD R01s without breaking your brain, since NIMHD only gets about 0.8% of the NIH budget allocation. With a judicious eye to topics, some related searches across ICs, and some clicking on the PI names to see what else they may have as funded grants, you can quickly convince yourself that plenty of NIMHD awards could easily be funded by a related I or C with their much larger budgets*. Perhaps the contrary is also true, grants funded by the parent / topic IC which you might also argue would fit at NIMHD, but I bet the relative percentage goes the first way.

If I am right in my suspicions, the existence of NIMHD does not necessarily put more aggregate money into health disparities research. That is, more than that which could just as easily come out of the “regular” budget. The existence of NIMHD means that the parent IC can shrug off their responsibility for minority health issues or disparity issues within their primary domains of drug abuse, cancer, mental health, alcoholism or what have you. Which means they are likewise shrugging off the AA/B investigators who are disproportionately submitting applications with those NIMHD-relevant topics and being put in sharp competition with each other. Competition not just within a health domain, but across all health domains covered by the NIH.

It just seems to me that putting the applications with Black PIs preferentially in competition with themselves, as opposed to making it a fair competition for the entire pool of money allocated to the purpose, is suboptimal.

__

*Check out the descriptions for MD010362 and CA224537 for some idea of what I mean. The entire NIMHD budget is 5% as large as the NCI budget. Why, you might ask, is NCI not picking up this one as well?

This is not news to this audience but it bears addressing in as many ways as possible, in the context of the Hoppe et al 2019 and Ginther et al 2011 findings. Behind most of the resistance to doing anything about the funding disparity for investigators and, as we’re now finding out, topics is still some lingering idea that the NIH grant selection process is mostly about merit.

Objective merit. Sure, we sort of nod that we understand there is some wiggle room, but it is difficult to find anyone who appears to understand, in a deep way, just how much wiggle there is.

“Merit” of NIH grants is untethered to anything objective. It relies on the opinion of the peer reviewers. The ~3 reviewers who are assigned to do deep review and the members of the panel (which can be 20-30ish folks) who vote scores after the discussion.

This particular dumb twitter poll shows that 77% of experienced reviewers either occasionally or regularly have the experience of thinking a grant that should not receive funding is very likely to do so.

and this other dumb little twitter poll shows that 88% of experienced NIH grant reviewers either occasionally or frequently experience a panel voting a non-fundable score for a grant they think deserves funding.

It will not escape you that individual reviewers tend to think a lot more grants should be funded than can be funded. And this shows up in the polls to some extent.

This is not high falutin’ science and it is possible we have some joker contamination here from people who are not in fact NIH review experienced.

But with that caveat, it tends to support the idea that the mere chance of which individuals are assigned to review a grant can have a major effect on merit. After all, the post-discussion scores of these individuals tend to significantly constrain the voting. But the voting is important too, since panel members can go outside the range or decide en masse to side with one or the other end of the post-discussion range.
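To put a toy number on that chance, here is a small simulation. It is not an analysis of real review data; every parameter (the merit spread, the reviewer noise, the 18% payline) is invented for illustration. Each application gets a latent merit, each of three assigned reviewers adds idiosyncratic noise, and we ask how often re-drawing the three reviewers flips an application across the payline.

```python
# Toy Monte Carlo: how often does swapping the three assigned reviewers
# flip an application across a payline? All parameters are hypothetical.
import random

random.seed(1)
N = 2000        # applications
SIGMA = 0.7     # reviewer idiosyncrasy relative to the merit spread (assumed)

def panel_score(m):
    """Mean of three reviewers' scores, each = merit + personal noise."""
    return sum(m + random.gauss(0, SIGMA) for _ in range(3)) / 3

merit = [random.gauss(0, 1) for _ in range(N)]
s1 = [panel_score(m) for m in merit]   # one draw of assigned reviewers
s2 = [panel_score(m) for m in merit]   # same grants, different reviewers

cut1 = sorted(s1)[int(0.82 * N)]       # top ~18% funded in each world
cut2 = sorted(s2)[int(0.82 * N)]
flipped = sum((a >= cut1) != (b >= cut2) for a, b in zip(s1, s2))
print(flipped / N)   # ~0.1 in this toy: about one in ten fates flip
```

Tune the (invented) reviewer-noise parameter up or down and the flip rate follows; the point is only that any nonzero reviewer idiosyncrasy guarantees that some portion of the fundable scores is a draw of the reviewer lottery.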

Swap out the assigned reviewers for a different set of three individuals and the outcomes are likely to be very different. Swap out one panel for another and the tendencies could be totally different. Is your panel heavy in those interested in sex differences and/or folks heavily on board with SABV? Or is it dominated by SABV resisters?

Is the panel super interested in the health effects of cannabis and couldn’t give a fig about methamphetamine? What do YOU think is going to come out of that panel with fundable scores?

Does the panel think any non-mammalian species is horrible for modeling human health and should really never be funded? Does the panel geek away at tractable systems and adore anything fly or worm driven and complain about the lack of manipulability available in a rat?

Of course you know this. These kinds of whines and complaints are endemic to fireside chats whenever two or more NIH grant-seeking investigators are present!

But somehow when it is a disparity of race or of topics of interest to minority communities in the US, such as from Hoppe et al 2019, then nobody is concerned. Even when there are actual data on the table showing a funding disparity. And everyone asks their “yeahbutwhatabout” questions, springing right back into the mindset that at the very core the review and selection of grants is about merit. The fact their worm grant didn’t get selected is clear evidence of a terrible bias in the NIH approach. The fact African-American PIs face a payline far lower than they do…..snore.

Because in that case it is about objective merit.

And not about the coincidence of whoever the SRO has decided should review that grant.

The NIH has launched a new FOA called the Stephen I. Katz Early Stage Investigator Research Project Grant (Open Mike blog post). PAR-21-038 is the one for pre-clinical, PAR-21-039 is the one for clinical work. These are for Early Stage Investigators only and have special receipt dates (e.g. January 26, 2021; May 26, 2021; September 28, 2021). Details appear to be those of a normal R01: up to 5 years and any budget you want to try (of course, over $500k per year requires permission).

The novelty here appears to be entirely this:

For this FOA, applications including preliminary data will be considered noncompliant with the FOA instructions and will be withdrawn. Preliminary data are defined as data not yet published. Existence of preliminary data is an indication that the proposed project has advanced beyond the scope defined by this program and makes the application unsuitable for this funding opportunity. Publication in the proposed new research direction is an indication that the proposed work may not be in a new research direction for the ESI.

This will be fascinating. A little bit more specification that the scientific justification has to rest on published (or pre-printed) work only:

The logical basis and premise for the proposed work should be supported by published data or data from preprints that have a Digital Object Identifier (DOI). These data must be labeled and cited adjacent to each occurrence within the application and must be presented unmodified from the original published format. Figures and tables containing data must include citation(s) within the legend. The data should be unambiguously identified as published through citation that includes the DOI (see Section IV.2). References and data that do not have an associated DOI are not allowed in any section of the application. Prospective applicants are reminded that NIH instructions do not allow URLs or hyperlinks to websites or documents that contain data in any part of the application

So how is this going to work in practice for the intrepid ESI looking to apply for this?

First, there is no reason you have to put the preliminary data you have available in the application. One very hot comment over at the Open Mike blog post, fretting that the proposals will be unsupported and the projects therefore doomed to failure, totally misses this point. PIs are not stupid. They aren’t going to throw up stupid ideas; they are going to propose their good ideas that can be portrayed as being unsupported by preliminary data.

Twill be interesting to see how this is interpreted vis a vis meeting presentations, seminars and (hello!) job talks. What is a reviewer expected to do if they see an application without any preliminary data per the FOA, but have just seen a relevant presentation from the applicant which shows that Aim 1 is already completed? Will they wave a flag? See above, the FOA says the “existence” of preliminary data, not the “inclusion” of preliminary data will make the app non-compliant.

But there is an aspect of normal NIH grant review that is not supposed to depend on “secret” knowledge, i.e., information available only to the reviewer and not published. So it is frowned upon for a reviewer to say “well the applicant gave a seminar last month at our department and showed that this thing will work”. It’s special knowledge only available to that particular reviewer on the panel. Unverifiable.

This would be similar, no?

Or is this more like individual knowledge that the PI had faked data? In such cases the reviewers are obligated to report that to the SRO in private but not to bring it up during the review.

If they ARE going to enforce the “existence” of relevant preliminary data, how will it be possible to make this fair? It will be entirely unfair. Some applicants will be unlucky enough to have knowledgeable whistle blowers on the panel and some will evade that fate by chance. Reviewers, being what they are, will respond only variably to this charge to enforce the preliminary data rule, even if semi-obligated. After all, what is the threshold for the data being specifically supportive of the proposal at hand?

Strategy-wise, of course I endorse ESI taking advantage of this. The FOAs list almost all of the ICs with relevant funding authority if I counted correctly (fewer for the human-subjects one, of course). There is an offset receipt date, so it keeps the regular submission dates clear. You can put one in, keep working on it and if the prelim data look good, put a revised version in for a regular FOA next time. Or, if you can’t work on it or the data aren’t going well, you can resubmit: “For Resubmissions, the committee will evaluate the application as now presented, taking into consideration the responses to comments from the previous scientific review group and changes made to the project.” Win-win.

Second strategy thing. This is a PAR and the intent is to convene panels for this mechanism. This means that your relative ESI advantage at the point of review disappears. You are competing only against other ESI. Now, how each IC chooses to prioritize these is unknown. But once you get a score, you are presumably just within whatever ESI policy a given IC has set for itself.

I’m confused by the comments over at Open Mike. They seem sort of negative about this whole thing. It’s just another FOA, folks. It doesn’t remove opportunities like the R15. No it doesn’t magically fix every woe related to review. It is an interesting attempt to fix what I see as a major flaw in the evolved culture of NIH grant review and award. Personally I’d like to see this expanded to all applicants but this is a good place to start.

One of the potential takeaway messages from the Hoppe et al 2019 finding, and the Open Mike blogpost, is that if Black PIs want to have a better success rate for their applications, perhaps they should work on the right topics. The “right” topics, meaning the ones that enjoy the highest success rates. After all, the messaging around the release of Hoppe was more or less this: Black PI apps are only discriminated against because they are disproportionately proposed on topics that are discriminated against.

(Never mind that right in the Abstract of Hoppe they admit this only explains some 20% of the funding gap.)

We find, however, a curious counter to this message buried in the Supplement. I mentioned this in a prior post but it bears posting the data for a more memorable impact.

The left side of Figure S6 in Hoppe et al. shows the percent of applications within both the African-American/Black and white distributions that landed in topic clusters across the success rate quintiles. The 1st is the best, i.e., the most readily funded topic clusters. We can see from this that while applications with white PIs are more or less evenly distributed, the applications with Black PIs are less frequently landing in the best funded topic clusters and more frequently landing in the lowest funded topic clusters. Ok, fine, this is the distributional description that underlies much of the takeaway messaging. On the right side of Figure S6, there is a Table. No idea why they chose that instead of a graph but it has the tendency to obscure a critical point.

Here I have graphed the data, which is the success rate for applications which fall into the topic-success quintiles by the race of the PI. This, first of all, emphasizes the subvocalized admission that even in the least-fundable topic clusters, applications with white PIs enjoyed a success advantage. Actually this main effect was present in each quintile, unsurprisingly. What also immediately pops out, in a way it does not with the table representation, is that in the best funded topic area the advantage of applications with white PIs is the greatest. Another way to represent this is by calculating the degree to which applications with Black PIs are disadvantaged within each quintile.

This represents the success for applications with Black PIs as a percentage of the success for applications with white PIs. As you can see, the biggest hit is at the first and fifth quintiles, with the applications faring best at the middle topic-success quintile. Why? Well, one could imagine all kinds of factors having to do with the review of applications in those topic domains. The OpenMike blog post on ICs with lower funding rates (because they have tiny budgets, in part) may explain the fifth quintile but it wouldn’t apply to the top quintile. In fact quite the contrary. Ok, this depiction speaks to the relative hit to success rates within quintile. But the applicant might be a little more interested in the raw percentage-point hit, especially given the cumulative probability distributions we were discussing yesterday. Recall, the average difference was 7 percentage points (17.7% for white PI apps vs 10.7% for Black PI apps).

The disparity is largest in the first quintile: a hit of 10.8 percentage points, as opposed to the all-apps average hit of 7.0 points.
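For the record, the two ways of expressing the hit are simple to compute. A trivial sketch; the per-quintile inputs would be read off the Figure S6 table, so only the overall numbers are demonstrated here:

```python
# Two ways to express the gap: Black PI success as a percentage of white
# PI success (the relative hit) and the absolute gap in percentage points.

def disparity(white_rate, black_rate):
    relative = 100 * black_rate / white_rate  # Black success as % of white
    gap = white_rate - black_rate             # hit in percentage points
    return relative, gap

print(disparity(17.7, 10.7))  # (~60.5, ~7.0): the all-apps average
```

Applied quintile by quintile, these are exactly the two depictions graphed above: the relative hit is worst at the first and fifth quintiles, while the percentage-point hit peaks in the first.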

Obviously we cannot draw much more from the available data. But it certainly cautions us that pushing Black applicants to work on the “right” topics is not a clear solution and may even be counterproductive. This is on the acute level of a current PI deciding what to propose in an application, and what to pursue with a multi-application strategy over time. But it also, importantly, serves up a caution for pipeline solutions that try to get more Black trainees into the right labs so that they will work on the right topics using, in Dr. Collins’ parlance, “upgraded” “methodologies”. If this topic/race disparity is not resolved by the time these new trainees hit the Assistant Professor stage, we are going to push more Black Professors into research domains that are even harder to succeed in.

We may eventually get some more insight. CSR promised this summer to start looking into study section behavior more closely. It may be that the scarcity of applications from Black PIs in the most successful topic domains is due to disproportionately fewer Black PIs in those fields, which leads to fewer of them on the relevant study sections. Even absent that factor, a lower presence in the fields of interest may drive more implicit or explicit bias against the ones that do choose those fields. We just can’t tell without more information about the constitution of study sections and the success rates that emerge from them. Oh, and the exception pay behavior of the Branches and Divisions within each IC. That’s also important to examine as it may relate to topic domain.

It is hard to overstate the problem that plummeting success rates at the NIH have caused for biomedical science careers. We have expectations for junior faculty that were developed in the 1980s and maybe into the 90s. Attitudes that are firmly entrenched in our senior faculty who got their first awards in the 1980s or even the 1970s…and then were poised to really rake it in during the doubling interval (since undoubled). Time for a trip down memory lane.

The red trace depicts success rates from 1962 to 2008 for R01 equivalents (R01, R23, R29, R37). These are not broken down by experienced/new investigators status, nor are new applications distinguished from competing continuation applications. The blue line shows total number of applications reviewed and the data in the 60s are listed as “estimated” success rates. (source)

The extension of these data into more recent FY can be found over at RePORTER. I like to keep my old graph because NIH has this nasty tendency to disappear the good old days so we’ll forget about how bad things really are now. From 2011 to 2017 success rates hovered between 17 and 19%, and in the past two years we’ve seen 21-22% success.

In the historical trends from about 1980 to the end of the doubling in 2002 we see that 30% success rates ruled the day as expected average. Deviations were viewed as disaster. In fact the doubling of the NIH budget over a decade was triggered by the success rates falling down into the 25% range and everyone screaming at Congress for help. For what it is worth, the greybeards when I was early career were still complaining about funding rates in the early 1980s. Was it because they were used to the 40% success years right before that dropping down to 30%? Likely. When they were telling us “it’s all cyclical, we’ve seen this before on a decade cycle” during the post-doubling declines….well it was good to see these sorts of data to head off the gaslighting, I can tell you.

Anyway, the point of the day is that folks who had a nice long run of 30% success rates (overall; it was higher once you were established, aka had landed one grant) are the ones who set, and are setting, current expectations. Today’s little exercise in cumulative probability of grant award had me thinking. What does this analysis look like in historical perspective?

I’m using the same 17.7% success rate for applications with white PIs reported in Hoppe et al and 30% as a sort of historical perspective number. Relevant to tenure expectations, we can see that the kids these days have to work harder. Back in the day, applicants had an 83.2% cumulative probability of award with just 5 applications submitted. Seems quaint, doesn’t it? Nowadays a white PI would have to submit 9 applications to get to roughly that same chance of funding.
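The cumulative probability here is just one minus the chance of striking out on every try, under the simplifying assumption that each submission is an independent draw at a constant per-application success rate. A quick sketch:

```python
from math import ceil, log

def cumulative_award_prob(p, n):
    """Chance of at least one award in n independent submissions,
    each with per-application success rate p."""
    return 1 - (1 - p) ** n

print(cumulative_award_prob(0.30, 5))    # ~0.832: five tries, back in the day
print(cumulative_award_prob(0.177, 9))   # ~0.827: nine tries at today's rate

def apps_needed(p, target=0.832):
    """Submissions needed to strictly exceed the target probability."""
    return ceil(log(1 - target) / log(1 - p))

print(apps_needed(0.177))   # 10 to strictly exceed; 9 gets you to ~0.827
print(apps_needed(0.107))   # 16 at the 10.7% rate for Black PI applications
```

That last line is my extension of the exercise: at the Hoppe et al success rate for applications with Black PIs, reaching the same cumulative chance takes roughly sixteen submissions.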

How does that square with usual career advice? Well, of course newbs should not submit R01 in the first year. Get the lab up and running on startup, maybe get a paper, certainly get some solid preliminary data. Put the grant in October in year 2 (triaged), wait past a round to do a serious revision, put it in for July. Triaged again in October of Year 3. Two grants in, starting Year 3. Well now maybe things are clicking a bit so the PI manages to get two new proposals together for Oct and/or Feb and if the early submission gets in, another revision for July. So in Fall of Year 4 we’re looking at four or five submissions with a fairly good amount of effort and urgency. This could easily stretch into late Year 4.

Where do the kids these days fit in four more applications?