NIH always jukes the stats in their favor
October 4, 2016
DataHound requested information from NIGMS on submissions and awards for the baby MIRA program. His first post noted what he considered to be a surprising number of applications rejected prior to review. The second post identifies what appears to be a disparity in success for applicants who identify as Asian* compared with those who identify as white.
The differences between the White and Asian results are striking. The difference between the success rates (33.8% versus 18.4%) is statistically significant with a p value of 0.006. The difference between the all applications success rates (29.4% versus 13.2%) is also statistically significant with a p value of 0.0008. Finally, the difference between the probabilities of administrative rejection (15.4% versus 28.1%) is statistically significant with p = 0.007.
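For anyone trying to keep the three denominators straight, here is a minimal sketch of how I read DataHound's metrics. The definitions are my inference rather than anything spelled out in the quoted passage (they are consistent with the gender numbers further down), and the overall counts (82 awards, 218 reviewed-but-rejected applications, 403 submissions) are pulled from later in this post and from DataHound's first post.

```python
# A minimal sketch of the three metrics as I read them (my assumption, not a
# definition NIGMS or DataHound gives in the text quoted here).
funded = 82               # MIRA awards (count given later in this post)
reviewed_rejected = 218   # applications reviewed but not funded (later in this post)
submitted = 403           # total applications (per DataHound's first post)
admin_rejected = submitted - funded - reviewed_rejected   # 103 bounced before review

success_rate = funded / (funded + reviewed_rejected)      # awards / reviewed applications
all_apps_success_rate = funded / submitted                # awards / everything submitted
prob_admin_rejection = admin_rejected / submitted         # rejected before review / submitted

print(f"Overall success rate:            {success_rate:.1%}")           # ~27.3%
print(f"Overall all-applications rate:   {all_apps_success_rate:.1%}")  # ~20.3%
print(f"Probability of admin rejection:  {prob_admin_rejection:.1%}")   # ~25.6%
```

The group comparisons in this post use these same three denominators, just restricted to the applicants in the group in question.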
There was also a potential sign of a disparity for applicants who identify as female versus male.
Male: Success rate = 28.9%, Probability of administrative rejection = 21.0%, All applications success rate = 22.8%
Female: Success rate = 23.2%, Probability of administrative rejection = 21.1%, All applications success rate = 18.3%
Although these results are not statistically significant, the success rate and the all applications success rate trend in favor of males over females (the probabilities of administrative rejection are essentially identical). If these differences persisted at larger sample sizes, they could become significant.
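If you want to check that "not statistically significant" statement yourself, here is a minimal sketch assuming a Fisher's exact test on the success-rate comparison (DataHound does not say which test was used). The counts come from figures quoted further down in this post: 19 of 82 awards went to women, and 63 of 218 reviewed-but-rejected applications came from women.

```python
# A minimal sketch, assuming a Fisher's exact test; DataHound's actual test is not stated.
from scipy.stats import fisher_exact

# Counts quoted further down in this post
female_funded, female_reviewed_rejected = 19, 63
male_funded = 82 - female_funded                          # 63
male_reviewed_rejected = 218 - female_reviewed_rejected   # 155

# Success rate = awards / (awards + reviewed-but-rejected)
female_success = female_funded / (female_funded + female_reviewed_rejected)  # ~23.2%
male_success = male_funded / (male_funded + male_reviewed_rejected)          # ~28.9%

# 2x2 table: rows = gender, columns = (funded, reviewed-but-rejected)
table = [[female_funded, female_reviewed_rejected],
         [male_funded, male_reviewed_rejected]]
odds_ratio, p_value = fisher_exact(table)

print(f"Female success rate: {female_success:.1%}")
print(f"Male success rate:   {male_success:.1%}")
print(f"Fisher exact p = {p_value:.2f}")   # comes out well above 0.05
```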
Same old, same old. Right? No matter what aspect of the NIH grant award process we are talking about, men and white people always do better than women and non-white people.
The man-bites-dog part of the tale involves what NIGMS published on their blog about this.
Basson, Preuss and Lorsch report in the Feedback Loop blog entry dated 9/30/2016 that:
One step in this effort is to make sure that existing skews in the system are not exacerbated during the MIRA selection process. To assess this, we compared the gender, race/ethnicity and age of those MIRA applicants who received an award with those of the applicants who did not receive an award
…
We did not observe any significant differences in the gender or race/ethnicity distributions of the MIRA grantees as compared to the MIRA applicants who did not receive an award. Both groups were roughly 25% female and included ≤10% of underrepresented racial/ethnic groups. These proportions were also not significantly different from those of the new and early stage R01 grantees. Thus although the MIRA selection process did not yet enhance these aspects of the diversity of the awardee pool relative to the other groups of grantees, it also did not exacerbate the existing skewed distribution.
Hard to reconcile with DataHound's report, which comes from data requested under FOIA, so I presume it is accurate. Oh, and despite small numbers of “Others”*, DataHound also noted:
The differences between the White and Other category results are less pronounced but also favor White applicants. The difference between the success rates (33.8% versus 21.1%) is not statistically significant, although it is close with a p value of 0.066. The difference between the all applications success rates (29.4% versus 16.2%) is statistically significant with a p value of 0.004. Finally, the difference between the probabilities of administrative rejection (15.4% versus 28.1%) is not statistically significant with p = 0.14, although the trend favors White applicants.
Not sure how NIGMS will choose to weasel out of being caught in a functional falsehood. Perhaps “did not observe” means “we took a cursory look and decided it was close enough for government work”. Perhaps they are relying on the fact that the gender effects were not statistically significant, as DataHound noted. Women PIs accounted for 19 of the 82 funded applications (23.2%) and 63 of the 218 reviewed-but-rejected applications (28.9%). This is not the way DataHound calculated success rate, I believe, but because by chance the number of female apps that were reviewed but rejected (63) equals the number of male apps that were awarded funding (63), the percentages work out the same.
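To spell out that coincidence with the same counts: the female share of awards equals the female success rate only because the number of male awards (82 minus 19, or 63) happens to equal the number of female reviewed-but-rejected applications (63), so each pair of ratios ends up with an identical numerator and denominator. A quick arithmetic check:

```python
# Why the "share of awardees" framing and the success-rate framing give the
# same percentages here (counts as reported above from the FOIA data).
awards_total, reviewed_rejected_total = 82, 218
female_awards, female_reviewed_rejected = 19, 63
male_awards = awards_total - female_awards                                   # 63
male_reviewed_rejected = reviewed_rejected_total - female_reviewed_rejected  # 155

female_share_of_awards = female_awards / awards_total                             # 19/82  = 23.2%
female_success_rate = female_awards / (female_awards + female_reviewed_rejected)  # 19/82  = 23.2%

female_share_of_rejected = female_reviewed_rejected / reviewed_rejected_total     # 63/218 = 28.9%
male_success_rate = male_awards / (male_awards + male_reviewed_rejected)          # 63/218 = 28.9%

# The pairs match only because male_awards == female_reviewed_rejected == 63,
# which makes the numerator and denominator of each pair identical.
print(female_share_of_awards == female_success_rate)   # True
print(female_share_of_rejected == male_success_rate)   # True
```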
There appears to be no excuse whatever for the NIGMS team missing the disparity for Asian PIs.
The probability of administrative rejection really requires some investigation on the part of NIGMS, because this would appear to be a huge miscommunication, even if we do not know where to place the blame for the breakdown. If I were NIGMS honchodom, I'd be moving mountains to make sure that POs were communicating the goals of various FOAs fairly and equivalently to every PI who contacted them.
Related Reading.
__
*The small number of applications for this program (403 were submitted, per DataHound's first post) means that there were insufficient numbers of applicants from other racial/ethnic categories to get much in the way of specific numbers. The NIH has rules (or possibly these are general FOIA rules) about reporting on cells that contain too few PIs…something about being able to identify them too directly.