The Endgame

June 16, 2020

Hoppe et al 2019 reported that R01 applications submitted by Black PIs for possible funding in Fiscal Years 2011-2015 were awarded at a rate of 10.7%. At the same time, R01 applications submitted by white PIs enjoyed an award rate of 17.7%.

Black PIs submitted 2,403 R01 applications in total; of the applications submitted by white PIs, 18,315 were funded.

If you took the unawarded applications submitted by Black PIs (2,147 of them) and funded them in place of applications awarded to white PIs, this would reduce the funding rate of the applications submitted by white PIs to…

15.6%

Which is still 46% higher than the award rate the applications from Black PIs actually achieved.
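If you want to check my arithmetic, here is a minimal sketch in Python. The counts (256 funded Black PI applications, 103,620 total white PI applications) are from Hoppe et al.'s Table S1 and come up again later in this post.

```python
# Minimal check of the swap arithmetic, using counts from Hoppe et al. Table S1.
black_total, black_funded = 2403, 256        # 10.7% award rate
white_total, white_funded = 103620, 18315    # 17.7% award rate

unawarded_black = black_total - black_funded        # 2,147 apps
new_white_funded = white_funded - unawarded_black   # fund them all at the white PI pool's expense
new_white_rate = new_white_funded / white_total
black_rate = black_funded / black_total

print(f"unawarded Black PI apps: {unawarded_black}")
print(f"white PI rate after swap: {new_white_rate:.1%}")                                 # ~15.6%
print(f"still higher than the Black PI rate by {new_white_rate / black_rate - 1:.0%}")   # ~46%
```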

I want you to really think deeply about fairness.

The NIH rules on empaneling reviewers on study sections say right at the top under General Requirements:

There must be diversity with respect to the geographic distribution, gender, race, and ethnicity of the membership.

You will notice that it does not set any specific diversity targets. One handy older report that I had long ago lost and then found again is called the CSR Data Book, FY 2004, and it is dated 5/23/2005. Among other details, Table 16 shows that from 2000-2004 the percentage of female reviewers appointed to panels went 27.0%, 25.8%, 28.2%, 31.1%, 32.9%. The percentage of female non-standing reviewers (ad hocs and SEP participants) went 24.5%, 25.7%, 25.2%, 24.4%, 24.9%. That’s good enough for now; feel free to chase down any more recent stats, I’m sure they are on the NIH site somewhere.

My dumb little Twitter poll showed that 35.3% of people who had an opinion thought that the NIH’s apparent female reviewer target was about right. I assert that CSR probably arrives at its target based on what it thinks is the fraction of some target population (STEM profs? Biomed profs? NIH applicants?). Who knows, but I bet whatever it is, it is below the population representation. Some 59.9% of those who offered an opinion thought that the ~population target was about right.

It isn’t in that older document, but Hoppe et al do report in Table S10 that 2.4% of reviewers for all study sections that evaluated R01s were African-American while 77.8% were white. As a reminder, about 14% of Americans are Black if you include those that check other boxes as multi-racial, 12.4% if you do not.

We can see from this that of the responses offered, 12.7% thought there should be fewer Black reviewers than there are (or roughly the same number), some 19% thought the proportion should be about that of Black professors in STEM fields, and 68.3% thought it should more or less match the population level.

There is a serious disconnect between the opinions expressed in my dumb little poll of those who follow me on Twitter and what CSR is targeting as being “diverse with respect to…gender, race”.

Now, admittedly I have been preparing the field of battle for two weeks at this point, years by some reckonings. Softening them up. Carpet bombing with Ginther napalm and Hoppe munitions. So this is by no means a random sample. This is a sample groomed to be at least aware of NIH funding disparity and a sample subjected to an awful lot of my viewpoint that this is a massive failure of the NIH that needs to be corrected.

But still, I think some direct questions are in order. So next time you are talking to your favorite SRO, maybe ask them about this.

See if you can get them to admit to the targets that are discussed inside CSR.

Offer your own opinion on what target they should be using.

One interesting little point. I posted these polls only an hour apart and flogged both of them a couple of times later in the day. I actually pinned the second one which should give it slightly more visibility, if anything.

405 people offered an opinion on the question about African-American reviewers and 689 on the second one. The gender one got 4 RTs (which might boost reach) and the racial one got 2. The “no opinion” vote was 98 for the racial question and 107 for the gender poll so apparently the looky-loo portion of the samples is ~the same number of people.

I find this to be pertinent to the miasma of institutional injustice that we are discussing of late.

I’ve been tweeting a lot of stuff lately that is related to Ginther et al. 2011, Ginther et al. 2018 and most especially the Hoppe et al. 2019 publication. This has been somewhat related to the national conversation we’re having about racial disparity in the wake of the white woman dogwalker’s attempted murder-by-cop, the George Floyd murder by cop, and the peaceful protests and cop counter-protest violence that ensued.

I’ve had quite a bit to say about the original Ginther and the dismal NIH response to it. I was particularly unhappy with the NIH’s (ok, Director Francis Collins’s) response to the Hoppe et al paper.

These papers and findings are at the top of my mind, especially as I have been fielding reactions both direct and indirect from my colleagues. Everybody is really dismayed by the George Floyd murder. Everybody is taking a moment, maybe because they are home with the coronavirus restrictions, but taking a moment to be really bothered. And really keen to UNDERSTAND. And to DO something. Well, doing things kinda starts in our own house, eh?

The Hoppe paper is mostly about topic words and the way that the types of research interests Black PIs have may set them at a disadvantage. Never mind the fact that even within topic word clusters Black PIs are still at a disadvantage; the NIH is really keen to discuss the glass being half not-racist instead of the fact it’s also still half racist. But for me this was an opportunity to grapple with the numbers and revisit my old topics about how few grants it would actually take to even up the hit rate for Black PIs. This is because Black applicants are only in the low single digits in terms of percentages. The Hoppe data look at R01 applications submitted for FY2011-2015, taking only the ones with identified Black or white PIs. We’re going to jump into the middle a bit here so that I can download my recent tweet storm into a post. First, a poll I put up.

The question came from my thinking about Hoppe, but I waited to see the votes before returning to a theme I’d been on before. It will help you to open both Hoppe and the Supplement and look at Figure 1 from the former and Table S1 of the latter. Figure 1 conflates applicants (left side) with applications (top right), so it can be good to refer to Table S1.

There were 2403 R01 applications from Black PIs. 1346 (or 56%) were triaged and 1057 (44%) were discussed. Of the discussed applications, 256 were funded (10.7% of all 2403) and 801 were not. (Note there’s some rounding error here, so don’t hold me to one app one way or the other. That 10.7% is rounded; 10.7% of 2403 is 257, not 256.) This was for applications submitted across five Fiscal Years, so we’re talking ~269 apps triaged (not discussed) per year and ~160 discussed but not funded per year. There are 25 NIH ICs that fund grants, if I have it right. (I’m pulling the relative allocation per IC below from a spreadsheet that lists 25.)
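Here is that breakdown restated as arithmetic, with the per-year and per-IC division done explicitly (the 25-IC count is this post’s working assumption):

```python
# Restating the Table S1 breakdown for Black PI applications as arithmetic.
black_apps = 2403
triaged, discussed = 1346, 1057            # 56% / 44% of 2403
funded = 256
discussed_not_funded = discussed - funded  # 801

years, ics = 5, 25                         # FY2011-2015; ~25 grant-funding ICs
print(f"triaged per year: {triaged / years:.0f}")                            # ~269
print(f"discussed-not-funded per year: {discussed_not_funded / years:.0f}")  # ~160
print(f"per IC per year: {triaged / years / ics:.0f} triaged, "
      f"{discussed_not_funded / years / ics:.0f} discussed-not-funded")      # ~11 and ~6
```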

So that’s 11 (triaged) and 6 (discussed) Black PI applications per year per IC that do not get funded. For reference, NIMH (the 9th biggest IC by budget) has 256 new R01s and 37 Type 2 renewal R01s on the books right now. That’s right, you say, ICs differ in size and so we need to adjust the unfunded applications from Black PIs to the size of the IC. Yes, I realize we probably have large differences in % Black PIs seeking funding across the ICs, but it’s all we have to go on without better information. OK, so let’s look at the unfunded apps by IC share. The analysis to follow covers selected ICs.

The biggest NIH institute, NCI, receives 15.5% of the entire NIH allocation (which is $41.64 billion). If we allocate the unfunded applications from Black PIs proportionally, then NCI applications account for 42 NDs and 25 discussed-unfunded per year. But that institute is so large it is hard to really grasp. Let’s look at NIGMS (5th by $): 19 NDs and 11 unfunded. MH? 13/8; DA? 10/6; AA? 4/2. And I’m rounding up for the last two ICs. So, what percentage of their funded (Type 1, Type 2) grants would this be? I’m basing this off current FY Type 1 and 2 counts because we’re talking forward policy. If these ICs picked up the discussed-not-funded by %NIH$ share? NIGMS: 2%, NIMH: 2.7%, NIDA: 5.2%, NIAAA: 2.5%.

For completeness, the share of the triaged/ND apps would be: NIGMS: 3.3%, NIMH: 4.5%, NIDA: 8.7%, NIAAA: 4.2%, again as a fraction of their current new grants. I mention this because one of the consistent findings of Ginther et al 2011 and Hoppe et al. 2019 is that applications from Black PIs are more likely to be triaged. The difference in the Hoppe data set was 56% of applications from Black PIs went un-discussed versus only 42.6% of white PI applications.
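Here’s a rough re-creation of that per-IC allocation. Only the NCI budget share (15.5%) is stated above; the other shares are back-inferred from the per-IC counts in this post, so treat them as illustrative rather than official figures.

```python
# Per-IC allocation sketch. NCI's 15.5% share is from the post; the other
# shares are back-inferred from the post's own counts (illustrative only).
nd_per_year, dnf_per_year = 269, 160   # triaged and discussed-not-funded, per year

shares = {"NCI": 0.155, "NIGMS": 0.070, "NIMH": 0.048, "NIDA": 0.037, "NIAAA": 0.015}
for ic, share in shares.items():
    print(f"{ic}: ~{share * nd_per_year:.0f} ND, "
          f"~{share * dnf_per_year:.0f} discussed-unfunded per year")

# Portfolio check for NIMH: 256 Type 1 + 37 Type 2 R01s currently on the books.
nimh_new_grants = 256 + 37
print(f"NIMH discussed-unfunded as % of new grants: {8 / nimh_new_grants:.1%}")  # ~2.7%
print(f"NIMH ND share as % of new grants: {13 / nimh_new_grants:.1%}")           # ~4.4%
```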

So. Those numbers of discussed-but-unfunded applications from Black PIs are low, but they seem high enough to be relevant. A couple to five percent of the portfolio for a year? This is not unimportant to the IC portfolio. But to YOU, my friend… remember the population size. If we took those 801 apps from the Hoppe data set and funded them, while subtracting 801 apps funded to white PIs (remember, they ignored all other categories of PI race), this would take the success rate for white PI applications from 17.7% to…wait for it…16.9%. Recall, the funding rate for Black PI applications was 10.7%. So yes, funding all of the discussed applications would push the success rate for Black PI applications to 44%. Which sounds totally unfair. But before you get too amped about that, recall your history.

Those people we think of as the current luminaries spent a good chunk of the middle of their careers enjoying >30% success rates. Look at those rates in the 1980s…you may not be aware of this, but the early 80s was a time remembered as simply terrible for grant-getting. Oh, the older folks would tell me tales of their woes even in the mid 2000s. Well, I eventually realized why. Some of them had a few years in there, prior to the 1980s, of 40% or better. And this particular data set (it’s RPG, not just R01, btw) isn’t even broken out by established/new PI or continuation/new grant! So I’m sure the hit rate for established PI applications was higher, as was the rate for competing renewal applications.

Why yes, we ARE coming back to the establishment of generational accumulated wealth. From a certain point of view. But not right now. We’re not ready to talk about the R word.

Instead, let’s come at this the other way. We kinda got into this a few days ago, talking about the white PI grants that were funded at worse scores than *any* funded app with a Black PI (this is in Table 1 of Hoppe et al). There were 2403 Black PI applications in the dataset used in Hoppe et al. 17.7% of this is 425. Subtract the 256 that were funded and we are at 169 additional applications (as a reminder, this is NIH-wide, over 5 years) needed to reach parity with the white PI rate. Of course, subtracting those 169 from the white PI pool would plunge their success. *plunge* I tell you.

From 17.7% to…17.5%. Which would obviously be totally unfair, so I’ll let you do the math to get them to meet in the middle. Just remember, NIH prefers it if the Black PI apps are juuuuuust under. Statistically indistinguishable, tho. Like for gender. Getting them to meet in the middle means that something less than a 0.2 percentage point change in the success rate of grants submitted by white PIs would fix the 7.0 percentage point deficit in success rates that applications from Black PIs suffer.
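Here is that parity arithmetic, spelled out:

```python
# Parity arithmetic: awards needed to match the white PI success rate.
black_total, black_funded = 2403, 256
white_total, white_funded = 103620, 18315
white_rate = white_funded / white_total           # ~17.7%

parity_awards = round(white_rate * black_total)   # ~425
extra_needed = parity_awards - black_funded       # ~169, NIH-wide, over 5 years
new_white_rate = (white_funded - extra_needed) / white_total

print(f"extra Black PI awards needed: {extra_needed}")
print(f"white PI rate after the 'plunge': {new_white_rate:.1%}")   # ~17.5%
```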

If, instead of just matching success rates, NIH were to fund every single discussed application submitted by Black PIs, this would only change the white PI success rate by 0.8 percentage points, down from 17.7% to 16.9% as outlined above. Again, we need to compare that 0.8 point drop to the 7 point deficit suffered by applications with Black PIs, which is currently NBD according to the NIH. And many of our science peers.
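And the fund-every-discussed-application scenario, checked the same way:

```python
# The "fund every discussed Black PI application" scenario, checked.
black_total, black_funded, discussed_unfunded = 2403, 256, 801
white_total, white_funded = 103620, 18315

new_black_rate = (black_funded + discussed_unfunded) / black_total  # every discussed app funded
new_white_rate = (white_funded - discussed_unfunded) / white_total  # paid for out of the white PI pool

print(f"Black PI rate: {new_black_rate:.0%}")   # ~44%
print(f"white PI rate: {new_white_rate:.1%}")   # ~16.9%, a 0.8 point drop vs the 7.0 point deficit
```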

I feel confident there are many who are contemplating these analyses and the implied questions thinking “wait, I’m not exchanging my grant for their grant“. But that’s not the right way to think about this. You would be exchanging your current 17.7% success rate for a 17.5% success rate.

I was just noticing something that I hadn’t really focused on before in the Hoppe et al 2019 report on the success of grant applications based on topic choices. This is on me, because I’d done an entire blog post on a similar feature of this situation back when Ginther et al 2011 emerged. The earlier blog post focused on the quite well established reality that almost all apps are funded up to a payline (or a virtual payline for ICs that claim, disingenuously, that they don’t have one) and that the further a score falls from that payline (worse scoring), the lower the odds of being funded. Supplemental Figure S1 in Ginther showed that these general trends were true for all racial groups.

My blog post was essentially focused on the idea that some apps from African-American PIs were not being funded at a given near-miss score while some apps from white PIs were being funded at worse scores.

It’s worth taking a look at this in Hoppe et al. because it is a more recent dataset from applications scored using the new nine point scale.

I was alerted to Table 1 of Hoppe et al., which shows the percentage of applications from Black and white PIs funded within each bin of voted percentile rank, binned into 5-percentile ranges (0-4 is good, 85-89 is bad).

As you would expect, almost all applications in the top two bins (0-9%ile) were funded regardless of PI race. And the chances of an app being funded at a given percentile bin decrease the further the bin is from the very top scores. Where it gets interesting is after the 34%ile mark, where no Black PI apps were funded. In any score bin. And there was at least one Black PI application in each bin save for 65-69, 75-79 and 80-84, which are not worth talking about anyway.

The pinch is the observation that at least some applications from white PIs were funded from the 35th to 59th percentile. I.e., at scores that are worse than the score of any funded app with a Black PI. On Twitter I originally screwed up the count because I stupidly applied the bin percentages to the entire population of funded awards. Not so. In fact I need to calculate it per bin.

Now, if my current thinking is right, and it may not be, those bonus bins for white PIs represent 25% of the distribution (5 bins, 5 percentile points per bin). The supplement’s Table S1 tells us there were 103,620 applications submitted by white PIs, so that puts 25,905 applications in those bins, 5,181 in each.

This is very rough.

Percentiling of applications is within a rolling three rounds of each standing study section. Special Emphasis Panels are variously percentiled: sometimes against an associated parent study section, sometimes against the total CSR pool.

But let’s take this as the aggregate for discussion.

Multiplying each bin’s success rate by the applications per bin, I end up with a total of 119 applications from white PIs funded in the 35th-59th percentile range. A score range at which ZERO applications from Black PIs were funded.
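For the record, here is the shape of that per-bin calculation. The per-bin funding rates below are placeholders of my own invention, not the actual values from Table 1 of Hoppe et al.; they are chosen only to show how five bins of 5,181 applications each can produce roughly 119 funded apps.

```python
# Sketch of the per-bin calculation. The bin funding rates here are
# HYPOTHETICAL placeholders, not the actual Table 1 values.
white_apps = 103620
n_bins = 5                                 # 35-39, 40-44, 45-49, 50-54, 55-59
apps_per_bin = white_apps * 0.25 / n_bins  # 5,181 per bin, assuming a uniform
                                           # spread over the percentile range
hypothetical_rates = [0.012, 0.006, 0.003, 0.002, 0.000]
funded = sum(rate * apps_per_bin for rate in hypothetical_rates)
print(f"white PI apps funded past the 34th percentile: ~{funded:.0f}")  # ~119 with these rates
```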

So, in essence, you could replace all of those applications funded to white PIs with more meritorious (well? that’s how they use the rankings. percentile = merit) unfunded applications submitted by Black PIs. Even by some distance, as only 74% of the 10-14%ile applications from Black PIs were funded, for example.

I was curious why Hoppe et al included the Table and what use they made of it. I could find only one mention of Table 1 and it was in the section titled “IC decisions do not contribute to funding gap“.

However, below the 15th percentile, there was no difference in the average rate at which ICs funded each group (Table 1); applications from AA/B and WH scientists that scored in the 15th to 24th percentile range, which was just above the nominal payline for FY 2011–2015, were funded at similar rates (AA/B 25.2% versus WH 26.6%, P = 0.76; Table 1). The differences we observe at narrower percentile ranges (15 to 19, 20 to 24, 25 to 29, and 30 to 34) slightly favored either AA/B or WH applicants alternately but were in no case statistically significant (P ≥ 0.13 for all ranges). These results suggest that final funding decisions by ICs, whether based on impact scores or discretionary funding decisions, do not contribute to the funding gap.

This is more than a little annoying. Sure, they sliced and diced the analysis down to where it is not statistically resolvable as a difference. But in the real world? Is it not a matter of constant anger for any PI who has a near-miss score and gets wind of anyone being funded at a worse score? Sure it is.

And that last statement is just plain false. 119 white PI applications funded at worse scores is 46.5% of the total number of applications funded to Black PIs. If all of those discretionary funding decisions had gone to Black PIs, that would raise the Black PI hit rate from 10.7% to 15.6%. Whereas the white PI hit rate would plunge from 17.7% to…17.56%.
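Checking those three numbers:

```python
# The discretionary-award replacement scenario, checked.
black_total, black_funded = 2403, 256
white_total, white_funded = 103620, 18315
discretionary = 119    # white PI awards past the 34th percentile

print(f"share of all Black PI awards: {discretionary / black_funded:.1%}")                  # ~46.5%
print(f"Black PI rate after transfer: {(black_funded + discretionary) / black_total:.1%}")  # ~15.6%
print(f"white PI rate after transfer: {(white_funded - discretionary) / white_total:.2%}")  # ~17.56%
```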

So this analysis they are referring to supports quite the opposite conclusion. Discretionary funding decisions, i.e. outside of percentile ranks where nearly every application is funded, do in fact contribute substantially to the disparity.

And correcting this to give Black PIs a fair hit rate, by selecting applications of HIGHER MERIT, would cause an entirely imperceptible change in the chances for white PIs.

One of my NIH grant game survival mantras is that you should never let the lack of one particular bit of preliminary data prevent you from submitting a grant application.

There is occasionally a comment on the grant game that suggests one needs to have the data that support the main hypotheses in the proposal before one can get a fundable score. This may be snark or it may be heartfelt advice. Either way, I disagree.

I believe that preliminary data serve many purposes and sometimes all you need are feasibility data. Data that show you can perform the types of experiments or assays that you are proposing. That you have a fairly good handle on the troubleshooting space and know how to interpret your results.

Sometimes your data are beyond mere feasibility but are somewhat indirectly related to the topic at hand. This is also an area where you do not require overwhelmingly closely-related preliminary data.

I understand that you may, particularly if you are less experienced in the game, have a series of disappointing summary statements that appear to show you will never ever get funded until you have a grant’s worth of preliminary data that support all the hypotheses and only need to be written up once the grant funds. I am willing to believe that in limited cases there may be study sections where this is true. But I suspect that even for noobs it is not universally true, and the best strategy is to keep the proposals flying out the door and into different study sections.

The reason is that you will never, ever lawyer a grant into funding by having just exactly the right Goldilocks-approved combination of preliminary data. Preliminary data criticisms are often mere StockCritiques, deployed or ignored depending on the reviewer’s gestalt impression. If the reviewer is basically on board, you only need enough preliminary data to beat back the most trivial of complaints about whether you have a pulse as a lab. And if the reviewer is not convinced by the larger picture of the proposal, you will never make them give it a 1 score just because the preliminary data are so pretty.

A recent anecdote: I had a grant come back with one reviewer saying “there is not enough preliminary data to show that this one specific thing will ever work”. Naturally, getting “this one thing” to work was in fact the purpose of the proposal, and it was stated all throughout why it needed work thrown at it and why it was a good idea to throw that work at the question. The second reviewer said “The preliminary data show that this one specific thing is basically all done so there is no need to put any funds into it”.

This, I surmise, is what happens when you hit that perfect sweet spot with the preliminary data. It is all down to the eyes of the beholder as to whether the data are supportive of the proposed work, or so complete that they call into question the need to do any more work.

When I have formulated these views in the past I have apparently managed to screw up and fail to communicate. A certain potty-mouthed ex-blogger at this site used to say something like “a given bit of preliminary data supports an infinity of potential grant proposals”. And I would bang on about not waiting for some perfect figure, but hitting your deadlines with whatever you happened to have around.

It has recently come to my attention that I have not been clarifying this enough. There is a subtle difference, I guess, in how one assembles a grant from the available preliminary data. One approach is to have a firm idea of what you want to propose and then to search your lab books and hard drive for data that seemingly support that idea. And I do think this is okay to do, as part of a diversified grant writing strategy. But what I also meant to convey, and didn’t, is that one should be taking a look at the data one is generating, or has in hand, and asking oneself “what is the best proposal that arises from these data?” In retrospect, I meant the latter to a large degree.

Look, presumably you conducted those experiments or collected those data for a purpose. They have a place in the world. Work from that. What were you thinking? As a second level, think about that other stuff you have in hand and where it might help, once a handful of your data has started telling a particular story.

The bottom line is that when I say one should use a given bit of preliminary data in many proposals, I don’t mean that you should stick it in just any old place. A grant proposal has to tell a story. And part of that story is being written by the data you have in hand.

Sorry if I was never clear on this distinction.

When you start signing your reviews, you are confessing that you think you have reached the point where your reputation is more persuasive than the quality of your ideas.


Thought of the day

October 14, 2016

Please explain to me why we are supposed to coddle the supposedly normal or centrist Republicans at this point. And pat them soothingly and give them cookies because finally, at this late date, they have discovered Trumpism is horrible.

What is to be gained here?

As I’ve noted on these pages before, my sole detectable talent for this career is the ability to take a punch.

There are a lot of punches in academic science. A lot of rejection and the congratulations for a job well done are few and far between. Nobody ever tells you that you are doing enough.

“Looking good, Assistant Professor! Just keep this up, maybe even chill a little now and then, and tenure will be no problem!” – said no Chair ever.

My concern is that demanding resilience in the face of constant rejection, belittling, and unkind Lake Wobegon-style comparisons of your science to the true rock stars can have a selection effect. Only certain personality types can stand this.

I happen to have one of these personality types but it is not something of any particular credit. I was born and/or made this way by my upbringing. I cannot say anyone helped to train me in this way as an academic scientist*.

So I am at a complete loss as to how to help my trainees with this.

Have you any insights Dear Reader? From your own development as a scientist or as a supervisor of other scientists?

Related Reading: Tales of postdocs past: what did I learn?
__
*well maybe indirectly. And not in a way I care to extend to any trainee of mine thankyewveerymuch.

A question and complaint from commenter musclestumbler on a prior thread introduces the issue.

So much oxygen is sucked up by the R01s, the med schools, etc. that it tends to screw over reviews for the other mechanisms. I look at these rosters, then look at the comments on my proposals, and it’s obvious that the idea of doing work without a stable of postdocs and a pool of exploitable Ph.D. students is completely alien and foreign to them.

and extends:

I personally go after R15 and R03 mechanisms because that’s all that can be reasonably obtained at my university. … Postdocs are few and far between. So we run labs with undergrads and Masters students. Given the workload expectations that we have in the classroom as well as the laboratory, the R15 and R03 mechanisms support research at my school. Competing for an R01 is simply not in the cards for the productivity level that we can reasonably pursue…

This isn’t simply fatalism, this is actual advice given by multiple program officers and at workshops. These mechanisms are in place to facilitate and foster our research. Unfortunately, these are considered and reviewed by the same panels that review R01s. We are not asking that they create an SEP for these mechanisms – a “little kids table” if you will – but that the panels have people with these similar institutions on them. I consider it a point of pride that my R15 is considered by the same reviewers that see the R01s, and successfully funded as well.

The point is that, the overwhelming perception and unfortunate reality is that many, many, many of the panelists have zero concept of the type of workload model under which I am employed. And the SROs have a demonstrably poor track record of encouraging institutional diversity. Sure, my panel is diverse- they have people from a medical school, an Ivy League school, and an endowed research institution on the West Coast. They have Country, and Western!

I noted the CSR webpage on study section selection says:

Unique characteristics of study sections must be factored into selection of members. The breadth of science, the multidisciplinary or interdisciplinary nature of the applications, and the types of applications or grant mechanisms being reviewed play a large role in the selection of appropriate members.

It seems very much the case to me that if R15s are habitually being reviewed in sections without participation of any reviewers from R15-eligible institutions, this is a violation of the spirit of this clause.

I suggested that this person should bring this up with their favorite SROs and see what they have to say. I note that now that there is a form for requesting “appropriate expertise” when you submit your NIH grant, it may also be useful to use this to say something about R15-eligible reviewers.

But ultimately we come to the “mercy of the court” aspect of this issue. It is my belief that while yes, the study section is under very serious constraints these days, it is still a human process that occasionally lets real humans make rational decisions. Sometimes, reviewers may go for something that is outside of the norm. Outside of the stereotype of what “has” to be in the proposal of this type. Sometimes, reviewers may be convinced by the peculiarities of a given situation to, gasp, give you a break. So I suggested the following for this person, who had just indicated that his/her R15s do perfectly well in a study section that they think would laugh off their R01 application.

I think this person should try a trimmed-down R01 in this situation. Remember, the R01 is the most flexible in terms of scope: there is no reason you cannot match it to the budget size of any of the other awards. The upside is that it is for up to five years, better than the AREA/R15 (3 y) or R03 (2 y). It is competitively renewable, which may offer advantages. And it is an R01, which, as we are discussing in that other thread, may be the key to getting treated like a big kid when it comes to study section empanelment.

The comments from musclestumbler make it sound as if the panels can actually understand the institutional situation, just so long as they are focused on it by the mechanism (R15). The R15 is $100K direct for three years, no? So why not propose an R01 for $100K direct for five years? Or, if you, Dear Reader, are operating at an R03 level, ask for $50K or $75K direct. And I would suggest that you don’t just leave this hidden in the budget; sprinkle wording throughout that refers to this being a go-slow but very inexpensive (compared to full-mod) project.

Be very clear about your time commitment (summers only? fine, just make it clear) and the use of undergrads (predict the timeline and research pace) in much the same way you do for an R15, but make the argument for a longer-term, renewable R01. Explain why you need it for the project, why it is justified, and why a funded version will be productive, albeit at a reduced pace. See if any reviewers buy it. I would.

Sometimes you have to experiment a little with the NIH system. You’d be surprised how many times it works in ways that are not exactly the stereotypical and formal way things are supposed to work.

Potnia has some thoughts on how not to manage trainees in your lab if they have different compensation levels.

This may be important come December if your University decides to create a time-clocked, 40-hours-per-week category of postdoc. Dealing with the new overtime rules may induce some Universities to try to make that work.

Potnia points out that effective management will be needed for the different classes of trainees at the same nominal level.

Go Read.

via sbnation:

With the win, Manuel became the first black woman in Olympic history to earn an individual swimming gold medal and the first African-American woman to win an individual medal.

Is the RealSolution to the stresses of the NIH grant system best described by the tone of the RNC or the DNC convention?

A News piece in Science by Jeffrey Mervis details the latest attempt of the NIH to kick the Ginther can down the road.

Armed with new data showing black applicants suffer a 35% lower chance of having a grant proposal funded than their white counterparts, NIH officials are gearing up to test whether reviewers in its study sections give lower scores to proposals from African-American applicants. They say it’s one of several possible explanations for a disparity in success rates first documented in a 2011 report by a team led by economist Donna Ginther of the University of Kansas, Lawrence.

Huh. 35%? I thought Ginther estimated more like a 13% difference? Oh wait. That’s the award probability difference: about 16% versus 29% for white applicants, which would be about a 45% lower chance. And this shows “78-90% the rate of white…applicants”. And there was Nakamura quoted in another piece in Science:

At NIH, African-American researchers “receive awards at 55% to 60% the rate of white applicants,” Nakamura said. “That’s a huge disparity that we have not yet been able to seriously budge,” despite special mentoring and networking programs, as well as an effort to boost the number of scientists from underrepresented minorities who evaluate proposals.

Difference vs rate vs lower chance…. Ugh. My head hurts. Any way you spin it, African-American applicants are screwed. Substantially so.
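To keep the three framings straight, here is a quick sketch using the approximate Ginther-era award probabilities quoted above (16% for Black applicants, 29% for white applicants):

```python
# Disentangling "difference" vs "rate" vs "lower chance" for the same two numbers.
black_rate, white_rate = 0.16, 0.29   # approximate Ginther award probabilities

print(f"difference: {(white_rate - black_rate) * 100:.0f} percentage points")  # ~13
print(f"rate relative to white applicants: {black_rate / white_rate:.0%}")     # ~55%
print(f"lower chance of funding: {1 - black_rate / white_rate:.0%}")           # ~45%
```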

Back to the Mervis piece for some factoids.

Ginther..noted…black researchers are more likely to have their applications for an R01 grant—the bread-and-butter NIH award that sustains academic labs—thrown out without any discussion…black scientists are less likely to resubmit a revised proposal …whites submit at a higher rate than blacks…

So, what is CSR doing about it now? OK HOLD UP. LET ME REMIND YOU IT IS FIVE YEARS LATER. FIFTEEN FUNDING ROUNDS POST-GINTHER. Ahem.

The bias study would draw from a pool of recently rejected grant applications that have been anonymized to remove any hint of the applicant’s race, home institution, and training. Reviewers would be asked to score them on a one-to-nine scale using NIH’s normal rating system.

It’s a start. Of course, this is unlikely to find anything. Why? Because the bias at grant review is a bias of identity. It isn’t that reviewers are biased against black applicants, necessarily. It is that they are biased for white applicants. Or at the very least they are biased in favor of a category of PI (“established, very important”) that just so happens to be disproportionately white. Also, there was an interesting simulation by Eugene Day showing that a bias smaller than the non-biased variability in a measurement can have large effects on something like a grant funding system [JournalLink].
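I don’t have Day’s actual model in front of me, but the core point is easy to illustrate with a toy Monte Carlo: give one group a score handicap much smaller than the reviewer noise, impose a payline, and look at the award rates. All parameters below are mine, invented for illustration, not Day’s.

```python
# Toy Monte Carlo: a bias much smaller than scoring noise still produces
# a large funding-rate gap at a payline. Parameters are invented for
# illustration; they are NOT from the Day simulation.
import random

random.seed(42)
N = 200_000       # applications per group
BIAS = 0.2        # handicap of 0.2 standard deviations of review noise
PAYLINE = 0.915   # z-score cutoff that funds roughly the top 18%

unbiased = sum(random.gauss(0.0, 1.0) > PAYLINE for _ in range(N)) / N
biased = sum(random.gauss(-BIAS, 1.0) > PAYLINE for _ in range(N)) / N

print(f"unbiased group funded: {unbiased:.1%}")   # ~18%
print(f"biased group funded:   {biased:.1%}")     # ~13%
print(f"lower chance from a 0.2 SD bias: {1 - biased / unbiased:.0%}")  # roughly 25-30%
```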

Ok, so what else are they doing?

NIH continues to wrestle with the implications of the Ginther report. In 2014, in the first round of what NIH Director Francis Collins touted as a 10-year, $500 million initiative to increase the diversity of the scientific workforce, NIH gave out 5-year, $25 million awards to 10 institutions that enroll large numbers of minority students and created a national research mentoring network.

As you know, I am not a fan of these pipeline-enhancing responses. They say, in essence, that the current population of black applicant PIs is the problem. That they are inferior and deserve to get worse scores at peer review. Because what else does it mean to say the big money response of the NIH is to drum up more black PIs in the future by loading up the trainee cannon now?

This is Exhibit A of the case that the NIH officialdom simply cannot admit that there might be unfair biases at play that caused the disparity identified in Ginther and reinforced by the other mentioned analyses. They are bound and determined to prove that their system is working fine, nothing to see here.

So…what else?

A second intervention starting later this year will tap that fledgling mentoring network to tutor two dozen minority scientists whose R01 applications were recently rejected. The goal of the intervention, which will last several months, is to prepare the scientists to have greater success on their next application. A third intervention will educate minority scientists on the importance of resubmitting a rejected proposal, because resubmitted proposals are three times more likely to be funded than a de novo application from a researcher who has never been funded by NIH.

Oh ff….. More of the same. Fix the victims.

Ah, here we go. Mervis finally gets around to explaining that 35% number:

NIH officials recently updated the Ginther study, which examined a 2000–2006 cohort of applicants, and found that the racial disparity persists. The 35% lower chance of being funded comes from tracking the success rates of 1054 matched pairs of white and black applicants from 2008 to 2014. Black applicants continue to do less well at each stage of the process.

I wonder if they will be publishing that anywhere we can see it?

But here’s the kicker. Even faced with the clear evidence from their own studies, the highest honchos still can’t see it.

One issue that hung in the air was whether any of the disparity was self-inflicted. Specifically, council members and NIH officials pondered the tendency of African-American researchers to favor certain research areas, such as health disparities, women’s health, or hypertension and diabetes among minority populations, and wondered whether study sections might view the research questions in those areas as less compelling. Valantine called it a propensity “to work on issues that resonate with their core values.” At the same time, she said the data show minorities also do less well in competition with their white peers in those fields.

Collins offered another possibility. “I’ve heard stories that they might have been mentored to go into those areas as a better way to win funding,” he said. “The question is, to what extent is it their intrinsic interest in a topic, and to what extent have they been encouraged to go in that direction?”

Look, Ginther included a huge host of covariate analyses that they conducted to try to make the disparity go away. Now they’ve done a study with matched pairs of investigators. Valantine’s quote may refer to this or to some other analysis, I don’t know, but obviously the data are there. And Collins is STILL throwing up blame-the-victim chaff.

Dude, I have to say, this kind of denialist / crank behavior has a certain stench to it. The data are very clear and very consistent. There is a funding disparity.

This is a great time to remind everyone that the last time a major funding disparity came to the attention of the NIH, it was the fate of the early career investigators. The NIH invented the ESI designation, to distinguish them from the well established New Investigator population, and immediately started picking up grants out of the order of review. Establishing special quotas and paylines to redress the disparity. There was no talk of “real causes”. There was no talk of strengthening the pipeline with better trainees so that one day, far off, they magically could better compete with the established. Oh no. They just picked up grants. And a LOT of them.

I wonder what it would take to fix the African-American PI disparity…

Ironically, because the pool of black applicants is so small, it wouldn’t take much to eliminate the disparity: Only 23 more R01 applications from black researchers would need to be funded each year to bring them to parity.

Are you KIDDING me? That’s it?????

Oh right. I already figured this one out for them. And I didn’t even have the real numbers.

In that 175 bin we’d need 3 more African-American PI apps funded to get to 100%. In the next higher (worse) scoring bin (200 score), about 56% of white PI apps were funded. Taking three from this bin and awarding three more AA PI awards in the next better scoring bin would plunge the white PI award probability from 56% to 55.7%. Whoa, belt up cowboy.

Moving down the curve with the same logic, we find in the 200 score bin that about 9 more AA PI applications are needed to put that bin at 100%. Looking down to the next worse scoring bin (225) and pulling these 9 apps from white PIs, we end up changing the award probability for those apps from 22% to…wait for it…20.8%.
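For completeness, the arithmetic behind those two before/after numbers. The bin sizes here are my own back-derived guesses (whatever makes the stated rates work out), not actual counts from Ginther:

```python
# Arithmetic behind the bin-swap estimates. Bin sizes are back-derived
# guesses that reproduce the stated rates, NOT actual Ginther counts.
def rate_after_removal(old_rate, n_apps, n_removed):
    """Award rate in a bin after removing n_removed funded apps."""
    return (old_rate * n_apps - n_removed) / n_apps

print(f"200 bin: {rate_after_removal(0.56, 1000, 3):.1%}")   # 56% -> ~55.7%
print(f"225 bin: {rate_after_removal(0.22, 750, 9):.1%}")    # 22% -> ~20.8%
```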

Mere handfuls. I had probably overestimated how many black PIs were seeking funding. If this Mervis piece is to be trusted and it would only take 23 pickups across the entire NIH to fix the problem….

I DON’T UNDERSTAND WHAT FRANCIS COLLINS’ PROBLEM IS.

Twenty-three grants is practically rounding error. This is going to shake out to one to maybe three grants per year per IC, depending on size and whatnot.

Heck, I bet they fund this many grants every year by mistake. It’s a big system. You think they don’t have a few whoopsies sneak by every now and again? Of course they do.

But god forbid they should pick up 23 measly R01s to fix the funding disparity.

Higher education in the US weaves, for many students, a fantastical dream. 

You can do what you want and people will pay you for it!

Any intellectual pursuit that interests your young brain will end up as a paying career! 

This explains why there are so many English majors who can’t get jobs upon graduation. I know, an easy target. Also see Comm majors. 

But we academic scientists are the absolute worst at this.

It results in a pool of postdoc scientist PhDs who are morally outraged to find out the world doesn’t actually work that way.

Yes. High JIF pubs and copious grant funding are viewed as more important than excellent teaching reviews and six-sigma chili peppers or wtfever.

In another context, yeah, maybe translational research is a tiny bit easier to fund than your obsession with esoteric basic research questions.