You are familiar with the #GintherGap, the disparity in grant awards at the NIH that leaves applications with Black PIs at a substantial disadvantage. Many have said from the start that it is unlikely this is unique to the NIH, and that we only await similar analyses to verify that supposition.

Curiously the NSF has not, to my awareness, done any such study and released it for public consumption.

Well, a group of scientists have recently posted a preprint:

Chen, C. Y., Kahanamoku, S. S., Tripati, A., Alegado, R. A., Morris, V. R., Andrade, K., & Hosbey, J. (2022, July 1). Decades of systemic racial disparities in funding rates at the National Science Foundation. OSF Preprints. doi:10.31219/osf.io/xb57u

It reviews National Science Foundation awards from 1996 to 2019 and uses demographics provided voluntarily by PIs. The applicant PIs were 66% white, 3% Black, 29% Asian, and below 1% for each of the American Indian/Alaska Native and Native Hawaiian/Pacific Islander groups. Across the reviewed years the overall funding rate varied from 22% to 34%, so the data are presented as the rate for each group relative to the average for each year. In Figure 1, reproduced below, you can see that applications with white PIs enjoy a consistent advantage relative to other groups, while applications with Asian PIs suffer a consistent disadvantage. Applications with Black PIs are more variable year over year but are mostly below average, except for five years when they are right at the average. The authors note that this means that in 2019 there were 798 more awards to white PIs than expected, and 460 fewer than expected to Asian PIs. The size of the disparity differs slightly across the directorates of the NSF (there are seven, broken down by discipline, such as Biological Sciences, Engineering, Mathematical and Physical Sciences, Education and Human Resources, etc.), but the same dis/advantage based on PI race remains.

Fig 1B from Chen et al. 2022 preprint
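To make the relative-rate arithmetic concrete, here is a minimal sketch (not the authors' code) of the calculation described above. It assumes a hypothetical pandas table with one row per year and PI-race group and columns named year, race, applications, and awards; those column names are my invention, not the preprint's.

```python
import pandas as pd

def relative_rates(df: pd.DataFrame) -> pd.DataFrame:
    """Funding rate of each group relative to that year's overall rate."""
    out = df.copy()

    # Year totals (all groups pooled), aligned back to each row
    totals = out.groupby("year")[["applications", "awards"]].transform("sum")
    out["overall_rate"] = totals["awards"] / totals["applications"]

    # Group-specific rate and its ratio to the yearly average
    out["group_rate"] = out["awards"] / out["applications"]
    out["relative_rate"] = out["group_rate"] / out["overall_rate"]

    # Awards above/below what the overall rate would predict for the group
    # (the same kind of figure as the +798 white-PI / -460 Asian-PI numbers for 2019)
    out["expected_awards"] = out["applications"] * out["overall_rate"]
    out["surplus_awards"] = out["awards"] - out["expected_awards"]
    return out
```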

It gets worse. It turns out that these numbers include both Research and Non-Research (conference, training, equipment, instrumentation, exploratory) awards, which represent 82% and 18% of awards respectively, with the latter generally funded at 1.4-1.9 times the rate of Research awards in a given year. For white PI applications both types are funded at higher than the average rate; however, significant differences emerge for Black and Asian PIs, with Research awards having the lower probability of success.

Fig 3 from Chen et al. 2022 preprint, FY13-19; open = Non-Research, closed = Research

So why is this the case? Well, the white PI applications get better scores from extramural reviewers. Here I am not an expert in how the NSF works; a mewling newbie, really. But they solicit peer reviewers who assign merit scores from 1 (Poor) to 5 (Excellent). The preprint shows the distributions of scores for FY15 and FY16 Research applications, by PI race, in Figure 5. Unsurprisingly there is a lot of overlap, but the average score for white PI apps is superior to that for either Black or Asian PI apps. Interestingly, average scores are worse for Black PI apps than for Asian PI apps. Interesting because the funding disparity is larger for Asian PIs than for Black PIs.

As you can imagine, there is a relationship between score and the chances of being funded, but it is variable. Kind of like a Programmatic decision on exception pay or the grey zone function in NIH land. I am not sure exactly how this matches up over at NSF, but the first author of the preprint put me onto an FY2015 report on the Merit Review Process that addresses this. Page 74 of the PDF (NSB-AO-206-11) has a Figure 3.2 showing success rates by average review score and PI race. As anticipated, proposals in the bin with a 4.75 score midpoint are funded at rates of 80% or better, about 60% for the 4.25 bin, 30% for the 3.75 bin and under 10% for the 3.25 bin.

Interestingly, the success rates for Black PI applications are higher than for white PI applications at the same score. The Asian PI success rates are closer to the white PI success rates, but still a little bit higher at comparable scores. So clearly something is going on with funding decision making at NSF to partially counter the poorer scores, on average, from the reviewers. The Asian PI proposals do not enjoy as much of this advantage. This explains why the overall success rates for Black PI applications are closer to the average than those for Asian PI apps, despite worse average scores.

Fig 5 from Chen et al 2022 preprint
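For the score-to-funding relationship, here is a hedged sketch of how one might tabulate a Figure 3.2-style breakdown from application-level data. It assumes a hypothetical table with an average review score, PI race, and a 0/1 funded flag per application; none of these column names come from the preprint or the NSB report.

```python
import pandas as pd

def success_by_score_bin(apps: pd.DataFrame, bin_width: float = 0.5) -> pd.DataFrame:
    """Funding rate by average-review-score bin and PI race."""
    out = apps.copy()

    # Bin average merit scores (1 = Poor ... 5 = Excellent) into half-point
    # bins labelled by their midpoints (..., 3.25, 3.75, 4.25, 4.75)
    edges = [1.0 + bin_width * i for i in range(int(4 / bin_width) + 1)]
    mids = [(lo + hi) / 2 for lo, hi in zip(edges[:-1], edges[1:])]
    out["score_bin"] = pd.cut(out["avg_score"], bins=edges,
                              labels=mids, include_lowest=True)

    # Mean of a 0/1 funded flag within each bin x race cell = success rate
    return (out.groupby(["score_bin", "race"], observed=True)["funded"]
               .mean()
               .rename("success_rate")
               .reset_index())
```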

One more curious factor popped out of this study. The authors, obviously, could only use applications for which the PI had specified their race. Self-identification ran at about 96% in 1999-2000, but was down to 90% in 2009 and 86% in 2016, and then took a sharp plunge in successive years to land at 76% in 2019. The first author indicated on Twitter that this was down to 70% in 2020, the largest one-year decrement. This is very curious to me. It seems obvious that PIs are doing whatever they think is going to help them get funded. For the percentage to be this large it simply has to involve large numbers of white PIs, and likely Asian PIs as well. It cannot simply be Black PIs worried that racial identification will disadvantage them (a reasonable fear, given the NIH data reported in Ginther et al.).

I suspect a certain type of white academic who has convinced himself (it's usually a he) that white men are discriminated against, that the URM PIs have an easy ride to funding, and that the best thing for him to do is not to declare himself white. There is also another variation on the theme, the "we shouldn't see color so I won't give em color" type. It is hard not to note that the US has been having a more intensive discussion about systemic racial discrimination, starting somewhere around 2014 with the shooting of Michael Brown in Ferguson, MO. This amped up in 2020 with the strangulation murder of George Floyd in Minneapolis. Somewhere in here, scientists finally started paying attention to the Ginther Gap. News started getting around. I think all of this is probably causally related to the sharp decreases in the self-identification of race on NSF applications. Perhaps not for all the same reasons for every person or demographic. But if it is not an artifact of the grant submission system, this is the most obvious conclusion.

There is a ton of additional analysis in the preprint. Go read it. Study. Think about it.

Additional: Ginther DK et al. (2011) Race, ethnicity, and NIH research awards. Science 333(6045):1015-9. [PubMed]

This is funny
Right-wing anti-science nuts in Congress are not going to stop attacking research grants just because the Abstracts are expressed in less technical language. Their political agenda is at work, and a poor understanding of the project has nothing whatever to do with their motivations.

I don’t know what started the round of “I only got paid X when I was a trainee” on the twitts, but I noticed nobody was adjusting for inflation.

Using the US Dept of Labor calculator, I came up with the following.

For an initial frame of general reference, $30K in 2012 is equal to $22K in 2000, $17K in 1990, $11K in 1980 and $5K in 1970.

The grad stipend when I started graduate school was equal to $15.6K in 2012 adjusted dollars. For us, the NSF fellowship was a considerable upgrade, and the NSF graduate fellowship from that time is equivalent to $22.6K in 2012 dollars.

Interesting. So how are today’s trainees doing?

The current NSF stipend is apparently $30K, a 33% increase in adjusted dollars compared to what it was when I was a graduate student. Looking at my old training department, they are offering a 35% increase in stipend over what they were offering when I started, again, in constant dollars.

I also happened to spend some time on NIH training grant funds so I can also report that my starting postdoc salary was $28.6K in 2012 dollars. The current NRSA base is $39.3K, which represents a 37% increase.
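For anyone who wants to redo this arithmetic with their own numbers, a minimal sketch of the constant-dollar comparison is below. The CPI multipliers are placeholders I filled in to roughly match the reference frame above; in practice you would take them from the BLS/Department of Labor inflation calculator.

```python
# Approximate multipliers to convert nominal dollars in a given year to 2012
# dollars; illustrative values only, chosen to match the rough frame above
# ($5K in 1970, $11K in 1980, $17K in 1990, $22K in 2000 ~= $30K in 2012).
CPI_TO_2012 = {1970: 5.9, 1980: 2.8, 1990: 1.76, 2000: 1.35, 2012: 1.0}

def in_2012_dollars(amount: float, year: int) -> float:
    """Convert a nominal stipend from `year` into 2012 dollars."""
    return amount * CPI_TO_2012[year]

def percent_increase(old_2012: float, new_2012: float) -> float:
    """Percent change between two stipends already expressed in 2012 dollars."""
    return 100 * (new_2012 - old_2012) / old_2012

# Example: a $22.6K (2012-dollar) NSF fellowship then vs. a $30K stipend now
# gives percent_increase(22_600, 30_000) ~= 33%.
```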

The bottom line is this. We’re in crap economic times, and graduate students and postdocs are getting paid at least 33% more than I was, even going by inflation-adjusted dollars.

Stop whining about your salary.

In NIH land (and apparently at NSF) the annual Progress Report functions as the application for the next non-competing interval of support. The NIH ones are short, 2 pages, and you have to squeeze in comments about progress on the project goals and the significance of the findings. So there isn’t a lot of room for all the data you have generated.

Science Professor indicates that she involves trainees in the preparation of progress reports.

I was asked to do this when I was a postdoc and I have continued the tradition with my postdocs. As you will surmise, I always think it a good idea to train postdocs in the grant-game. How much were/are you involved with progress reporting as a postdoc, DearReader?

Prof-like Substance’s post was asking how seriously to take the NSF progress report. I have always taken my NIH ones pretty seriously and tried to summarize the grant progress as best I can. (Yes, I rewrite the drafts provided by the postdocs; this is training, after all.) One benefit is that when it comes time to write the competing renewal application you have a starting point all ready to go.

For the noob PIs… Don’t sweat it. I’ve only once had a PO so much as comment on the Progress Report. In that case this person was, IMO, clearly out of line since we were right on target with the grant plan. More so than usual for me. And the PO also was misunderstanding the science in a way that was a little concerning for that little subarea of the IC…but whatevs. I made a response, the PO backed down and the project went on without further kvetching from this person.

So how about it? Do you involve your trainees in writing Progress Reports? Have you had any responses from POs on these? How seriously do you take them?

Crossposting from Scientopia.



There is a long tradition of Congressional members trying to whip up a little support from their base by going after federally funded extramural research projects of the NIH. I have described some of this here and here.

You will note the trend, this has by and large been an effort of socially conservative Republican Congress Critters to attack projects that focus on issues of sexual behavior, drug taking, gender identity, homosexuality, etc. We know this is their focus because despite talking about “waste” of federal money they make no effort to realistically grapple with cost/benefit. No doubt because in their view the only necessary solution to behavioral health issues is “Stop it! If you can’t then you must be morally inferior and do not deserve any public concern”.

You will also note that they don’t really mean it in many cases. You’ll see this blather when they know they have no chance of getting the votes. In a prior case I reviewed, the complainers identified cancer as being a “real” concern worthy of funding, and then picked on a cancer-related project. A long while back when I first got interested (and I can’t remember the specific details; it was a psychology-type grant on beautifying dorm rooms or something), the Congress Critter’s amendment specified an existing specific grant year, and there was no way that I could see that the funds could be retrieved in such a situation. So you could see where much of this is just naked political posturing with no intent of actually doing anything. But still… it continues the anti-science environment and political memery. So we should address it.

Cackle of Rad has tipped us to a new effort by Rep. Eric Cantor (R-VA) and Rep. Adrian Smith (R-NE) to invite you, the public, to identify NSF projects that irritate you. One assumes they think the public should be allowed to vote the projects out of funding.

Now, admittedly, I find the specific examples to be refreshing and new:

Recently, however NSF has funded some more questionable projects – $750,000 to develop computer models to analyze the on-field contributions of soccer players and $1.2 million to model the sound of objects breaking for use by the video game industry.

Not a sign of a social issue and, gasp, are they really criticizing corporate pork? Admittedly the video game industry is not traditionally an ally of social conservatives (Grand Theft Auto anyone? hmm, maybe this requires some additional thought) but still.

Okay, so what are my two biggest objections to this practice?

First, the basic-science issue. It has been discussed before extensively on blogs. All clinical applications, medical devices, drugs, etc., are rooted in prior basic science that stretches back for decades and in some cases centuries. We cannot get to new treatments in the future without laying the groundwork of basic understanding of healthy and diseased function of the human, the mammal, the vertebrate, the animal, the alive, the Earth-ian. Therefore the application of much of the present basic science work cannot be confidently asserted at the time it is being conducted. Sure, we pursue a general idea and can make some predictions about where it might apply, but history suggests that it is often a fortuitous inference, surprising connection or unlooked-for application of existing knowledge that creates a new therapy.

Non-biological research and design differs very little in this regard. Many new products and applications are built on the discoveries and innovations that came from basic (and applied, admittedly) science that came before.

It is a big mistake to allow persons who do not understand this to make the tactical decisions on what should and should not be funded. By tactical, of course, I mean the specific projects. I have less problem with Congress weighing in on general priorities, such as swings from focus on breast cancer to AIDS to Alzheimer’s to diabetes or whatever. We have to accept, in the sciences, that there will be some degree of this prioritization that will not respect each of our own parochial research interests.

Just so long as we don’t have wholesale prevention of research into major categories of health concern, that is…

My second objection to the democratic approach is the cost/benefit analysis objection. It is not my role to do such a cost/benefit analysis, but the system as a whole should be sensitive to it; to the rational knowledge that, for example, if we create a new drug that lets an Alzheimer’s patient live at home for 9 months longer, stave off the need for in-home professional help for 12 months and/or transition to low-intensity hospice later… well, this is going to save a lot of money on a population basis. Not least because then they might, statistically, die of a stroke or heart attack or some other normal condition more frequently before they go into the intensive phase of managing end-stage Alzheimer’s.

The argument for corporate welfare for new products of a non-health nature is really no different. Spend money now to reap bigger savings later.

It’s called “investment”, yo!

And I don’t really see where little ‘d’ democracy at a tactical level helps out with deciding what to invest in for the future.

Prof-like has a pair of questions up.

1) should unfunded PIs be included on panels or study sections?

2) Should postdocs (if funded by Federal funds) be included on panels or study sections?

The implication here seems to be that the NSF allows professorial-rank people without NSF (or other major governmental?) awards to review proposals, and perhaps also postdocs (or research scientists?) who do have funding. If so, this is most unlike the NIH, where the vast majority of reviewers have to be of Associate Professor status or higher, and where the expectation is that reviewers have been awarded a grant similar to those they are reviewing. My answer got a little long so I thought I’d pop it up as a post.

I’m on record in favor of PIs who are not yet funded by the NIH being represented on review panels. So Yes on #1. I throw out a “maybe even some senior postdocs as well” but I always figured that was an extreme Overton shifting position. Are you telling me that NSF lets postdocs review research grants? Interesting.

I’m in favor of this because it seems like basic fairness, one, and the only way to combat biases, two.

Look at it this way: the NIH has explicit rules that study sections must have diversity. Check this link:

There must be diversity with respect to the geographic distribution, gender, race and ethnicity of the membership.

In my experience this seems to be taken quite seriously. Ethnic minorities would appear to be well represented on the panels on which I’ve served. Women run about 35-40%; I think at one point I checked this for my most frequent section against the CSR overall numbers. Through conversations suggesting reviewers to an SRO, or discussing why so-and-so had been ad hoc’ing for two years and not appointed, it became clear to me that the geographic distribution is a pretty hard line.

Notice anything missing in this “diversity” statement? No? Well how about this comment…
There is a need for balance in the level of seniority represented among members of a study section. Too many senior-level reviewers are just as problematic as too few.

Right on. And too few junior reviewers are as problematic as too many….what? Where is that statement? Not to be seen…

So why do we have diversity requirements if not to make things *fairer* for all applicants? What would be the point of requiring a diversity of reviewer backgrounds, perspectives, seniority, geography, etc, if not to ensure fairness through the competition of biases? hmm?

So why would one suspect class of applicant be overtly and intentionally excluded? The NIH made a lot of noise recently about purging assistant professors off the panels. Their justification for doing so was almost entirely unstated and for damn sure free of any backing data.

ScienceInsider has published a letter from Harvard Dean of the Faculty of Arts and Sciences, Michael Smith, addressed to his faculty.

it is with great sadness that I confirm that Professor Marc Hauser was found solely responsible, after a thorough investigation by a faculty investigating committee, for eight instances of scientific misconduct under FAS [Faculty of Arts and Sciences] standards.

The dean notes that their internal inquiry is over but that there are ongoing investigations from the NIH and NSF. So my curiosity turns to Hauser’s NIH support- I took a little stroll over to RePORTER.

From 1997 to 2009 there are nine projects listed under the P51RR000168 award, which is the backbone funding for the New England Primate Research Center, one of the few places in which the highly endangered cotton-top tamarin is maintained for research purposes. The majority of the projects are titled “CONCEPTUAL KNOWLEDGE AND PERCEPTION IN TAMARINS”. RePORTER is new, and the prior system, CRISP, did not link the amounts, but you can tell from the most recent two years that these are small projects amounting to $50-60K.

Hauser appears to have only had a single R01 “Mechanisms of Vocal Communication” (2003-2006).

Of course we do not know how many applications he may have submitted that were not selected for funding and, of course, ORI considers applications that have been submitted when judging misconduct and fraud, not just the funded ones. One of the papers that has been retracted was published in 2002 so the timing is certainly such that there could have been bad data included in the application.

The P51 awards offer a slight twist. I’m not totally familiar with the system but it would not surprise me if this backbone award to the Center, reviewed every 5 years, only specified a process by which smaller research grants would be selected by a non-NIH peer review process. Perhaps it is splitting hairs but it is possible that Hauser’s subprojects were not reviewed by the NIH. There may be some loopholes here.

Wandering over to NSF’s Fastlane search I located 10 projects on which Hauser was PI or Co-PI. This is where his big funding has been coming from, apparently. So yup, I bet NSF will have some work to do in evaluating his applications to them as well.

This announcement from the Harvard Dean is just the beginning.

I mean really. You just can’t make this up.


The goals of this forum do not include supporting to harassment of the unfunded by the funded. If you have received funding, or if you don’t see any problems with NSF’s current practices, I honestly offer you my congratulations. Please feel free to leave constructive comments and observations that lay out your views.
But I’ve seen too many internet forums taken over by smug minorities to allow that to happen to this one.
All posts of a judgemental or mocking nature will be deleted as soon as I see them.

I don’t know why I am so fascinated by this person’s quixotic campaign of complaint about how the NSF is broken because s/he hasn’t yet managed to get funded.
I suppose it is because we have the exact same phenotype of person complaining about the NIH. I guess I feel like hearing the perceptions (accurate or not) that are out there helps me to craft my message to the faculty I run across who are outraged about the NIH.
I always wonder in a case like this just how hard the person is working to get funded. It is one of the things we don’t talk about much: how many apps have you put in to get X number of grants? Me, I’d have to go to Commons. I can’t possibly remember. It isn’t like I could do more than ballpark it if some junior faculty member asked me.
__
Update: Thoughts from Odyssey at Pondering Blather on targeting your application to specific reviewers motivated by commentary at the “NSF is broken” forum.


Wheee!
Prof-like Substance has found a new forum:

Funding for science is tight right now. No one knows that more than I and the stack of rejected grant proposals I have on my desk. For a lot of people the shifting climate sucks and for new people it can be painful to get one’s foot in the door. But, is this in itself proof positive that The System is broken? Aureliano Buendia* thinks so.
This morning I was sent a link to a new forum for discussing the “problems” with NSF and what can be done to fix it. Specifically, the creator of the forum states its purpose as discussing “What problems have you had with NSF? What creative solutions have you come up with to these problems? The forum is designed to address such issues. Let’s bring out our best ideas, and hope that NSF pays attention.”

Head over to Prof-like’s place for the link to the (currently) open forum of NSF whingery.