Disparity of NSF funding
July 22, 2022
You are familiar with the #GintherGap, the disparity in grant awards at NIH that leaves applications with Black PIs at a substantial disadvantage. Many have said from the start that this is unlikely to be unique to the NIH and that we only await similar analyses to verify that supposition.
Curiously, the NSF has not, to my awareness, done any such study and released it for public consumption.
Well, a group of scientists have recently posted a preprint:
Chen, C. Y., Kahanamoku, S. S., Tripati, A., Alegado, R. A., Morris, V. R., Andrade, K., & Hosbey, J. (2022, July 1). Decades of systemic racial disparities in funding rates at the National Science Foundation. OSF Preprints. doi:10.31219/osf.io/xb57u
It reviews National Science Foundation awards (from 1996-2019) and uses demographics provided voluntarily by PIs. They found that the applicant PIs were 66% white, 3% Black, 29% Asian and below 1% each for the American Indian/Alaska Native and Native Hawaiian/Pacific Islander groups. They also found that across the reviewed years the overall funding rate varied from 22%-34%, so the data were represented as the rate for each group relative to the average for each year. In Figure 1, reproduced below, you can see that applications with white PIs enjoy a nice consistent advantage relative to other groups and the applications with Asian PIs suffer a consistent disadvantage. The applications with Black PIs are more variable year over year but are mostly below average, except for five years when they are right at the average. The authors note that in 2019 this amounted to 798 more awards to white PIs than expected, and 460 fewer than expected to Asian PIs. The size of the disparity differs slightly across the directorates of the NSF (there are seven, broken down by discipline such as Biological Sciences, Engineering, Math and Physical Sciences, Education and Human Resources, etc.) but the same dis/advantage based on PI race remains.
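To make the "relative rate" representation concrete, here is a minimal sketch of the calculation. The award and proposal counts are invented for demonstration, not the preprint's data:

```python
# Illustrative sketch of a "relative funding rate" calculation.
# Counts below are made up for demonstration only.
awards = {"white": 8000, "Asian": 2200, "Black": 210}
proposals = {"white": 28000, "Asian": 10500, "Black": 900}

overall_rate = sum(awards.values()) / sum(proposals.values())

for group, n_awarded in awards.items():
    rate = n_awarded / proposals[group]
    relative = rate / overall_rate  # >1.0 means funded above the yearly average
    # "Surplus": awards above (or below) what the average rate would predict
    surplus = n_awarded - overall_rate * proposals[group]
    print(f"{group}: rate={rate:.1%}, relative={relative:.2f}, surplus={surplus:+.0f}")
```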

It gets worse. It turns out that these numbers include both Research and Non-Research (conference, training, equipment, instrumentation, exploratory) awards, which represent 82% and 18% of awards respectively, with the latter generally being awarded at 1.4-1.9 times the rate for Research awards in a given year. For white PI applications both types are funded at higher than the average rate; however, significant differences emerge for Black and Asian PIs, with Research awards having the lower probability of success.
So why is this the case? Well, the white PI applications get better scores from extramural reviewers. Here I am no expert in how NSF works, a mewling newbie really, but they solicit peer reviewers who assign merit scores from 1 (Poor) to 5 (Excellent). The preprint shows the distributions of scores for FY15 and FY16 Research applications, by PI race, in Figure 5. Unsurprisingly there is a lot of overlap, but the average score for white PI apps is superior to that for either Black or Asian PI apps. Interestingly, average scores are worse for Black PI apps than for Asian PI apps. Interesting because the funding disparity is larger for Asian PIs than for Black PIs. And as you can imagine, there is a relationship between score and chances of being funded, but it is variable. Kind of like a Programmatic decision on exception pay or the grey zone function in NIH land.

Not sure exactly how this matches up over at NSF, but the first author of the preprint put me onto a 2015 FY report on the Merit Review Process that addresses this. Page 74 of the PDF (NSB-AO-206-11) has a Figure 3.2 showing the success rates by average review score and PI race. As anticipated, proposals in the 4.75 (score midpoint) bin are funded at rates of 80% or better. About 60% for the 4.25 bin, 30% for the 3.75 bin and under 10% for the 3.25 bin. Interestingly, the success rates for Black PI applications are higher than for white PI applications at the same score. The Asian PI success rates are closer to the white PI success rates, but still a little bit higher at comparable scores. So clearly something is going on with funding decision making at NSF to partially counter the poorer scores, on average, from the reviewers. The Asian PI proposals do not get as much of this advantage. This explains why the overall success rates for Black PI applications are closer to the average than the Asian PI apps, despite worse average scores.
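Those binned rates amount to a simple score-to-probability lookup. A toy sketch; the percentages are my rough reads of Figure 3.2 as described above, not exact values from the report:

```python
# Approximate success rate by average-reviewer-score bin, per the description
# of Figure 3.2 in NSB-AO-206-11. Values are illustrative reads of the figure.
success_by_bin = {4.75: 0.80, 4.25: 0.60, 3.75: 0.30, 3.25: 0.08}

def funding_odds(avg_score):
    # Snap a proposal's average reviewer score to the closest bin midpoint.
    nearest = min(success_by_bin, key=lambda b: abs(b - avg_score))
    return success_by_bin[nearest]

print(funding_odds(4.6))  # ~0.80: high scorers are nearly always funded
print(funding_odds(3.4))  # ~0.08: low scorers are only occasionally rescued
```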

One more curious factor popped out of this study. The authors, obviously, could use only the applications for which a PI had specified their race. This was about 96% in 1999-2000. However, it was down to 90% in 2009, 86% in 2016 and then took a sharp plunge in successive years to land at 76% in 2019. The first author indicated on Twitter that this was down to 70% in 2020, the largest one-year decrement. This is very curious to me. It seems obvious that PIs are doing whatever they think is going to help them get funded. For the percentage to be this large it simply has to involve large numbers of white PIs, and likely Asian PIs as well. It cannot simply be Black PIs worried that racial identification will disadvantage them (a reasonable fear, given the NIH data reported in Ginther et al.). I suspect a certain type of white academic who has convinced himself (it's usually a he) that white men are discriminated against, that the URM PIs have an easy ride to funding, and that the best thing for him to do is not to declare himself white. Also another variation on the theme, the "we shouldn't see color so I won't give em color" type. It is hard not to note that the US has been having a more intensive discussion about systemic racial discrimination, starting somewhere around 2014 with the shooting of Michael Brown in Ferguson, MO. This amped up in 2020 with the strangulation murder of George Floyd in Minneapolis. Somewhere in here, scientists finally started paying attention to the Ginther Gap. News started getting around. I think all of this is probably causally related to the sharp decreases in self-identification of race on NSF applications. Perhaps not for all the same reasons for every person or demographic. But if it is not an artifact of the grant submission system, this is the most obvious conclusion.
There is a ton of additional analysis in the preprint. Go read it. Study. Think about it.
Additional: Ginther et al. (2011) Race, ethnicity, and NIH research awards. Science, 333(6045):1015-9. [PubMed]
Reconsidering “Run to Daylight” in the Context of Hoppe et al.
January 7, 2022
In a prior post, A pants leg can only accommodate so many Jack Russells, I explained my affection for applying Vince Lombardi's advice to science careers.
Run to Daylight.
Seek out ways to decrease the competition, not to increase it, if you want to have an easier career path in academic science. Take your considerable skills to a place where they are not just expected value, but represent near miraculous advance. This can be in topic, in geography, in institution type or in any other dimension. Work in an area where there are fewer of you.
This came up today in a discussion of “scooping” and whether it is more or less your own fault if you are continually scooped, scientifically speaking.
The trouble is that, despite the conceits of study section review, the NIH system does NOT tend to reward investigators who are highly novel solo artists. It is seemingly obligatory for Nobel Laureates to complain about how some study section panel or other passed on the grant which described their plans to pursue what became the Nobel-worthy work. Year after year a lot of me-too grants get funded while genuinely new stuff flounders. The NIH has a whole system (RFAs, PAs, now NOSI) set up to beg investigators to submit proposals on topics that are seemingly important but that nobody can get fundable scores to work on.
In 2019 the Hoppe et al. study put a finer and more quantitatively backed point on this. One of the main messages was the degree to which grant proposals on some topics had a higher success rate and some on other topics had lower success rates. You can focus on the trees if you want, but the forest is all-critical. This has pointed a spotlight on what I have taken to calling the inherent structural conservatism of NIH grant review. The peers are making entirely subjective decisions, particularly right at the might-fund/might-not-fund threshold of scoring, based on gut feelings. Those peers are selected from the ranks of the already-successful when it comes to getting grants. Their subjective judgments, therefore, tend to reinforce the prior subjective judgments. And of course, tend to reinforce an orthodoxy at any present time.
NIH grant review has many pseudo-objective components to it which do play into the peer review outcome. There is a sense of fair-play, sauce for the goose logic which can come into effect. Seemingly objective evaluative comments are often used selectively to shore up subjective, Gestalt reviewer opinions, but this is in part because doing so has credibility when an assigned reviewer is trying to convince the other panelists of their judgment. One of these areas of seemingly objective evaluation is the PI’s scientific productivity, impact and influence, which often touches on publication metrics. Directly or indirectly. Descriptions of productivity of the investigator. Evidence of the “impact” of the journals they publish in. The resulting impact on the field. Citations of key papers….yeah it happens.
Considering the Hoppe results and the Lauer et al. (2021) description of the NIH "funding ecology" in light of the original Ginther et al. (2011, 2018) investigation of PI publication metrics is relevant here.
Publication metrics are a function of funding. The number of publications a lab generates depend on having grant support. More papers is generally considered better, fewer papers worse. More funding means an investigator has the freedom to make papers meatier. Bigger in scope or deeper in converging evidence. More papers means, at the very least, a trickle of self-cites to those papers. More funding means more collaborations with other labs…which leads to them citing both of you at once. More funding means more trainees who write papers, write reviews (great for h-index and total cites) and eventually go off to start their own publication records…and cite their trainee papers with the PI.
So when the NIH-generated publications say that publication metrics "explain" a gap in application success rates, they are wrong. They use this language, generally, in a way that says Black PIs (the topic of most of the reports, but this generalizes) have inferior publication metrics and this causes a lower success rate. With the further implication that this is a justified outcome. This totally ignores the inherent circularity of grant funding and publication measures of awesomeness. Donna Ginther has written a recent reflection on her work on NIH grant funding disparity which doubles down on her lack of understanding of this issue.
Publication metrics are also a function of funding to the related sub-field. If a lot of people are working on the same topic, they tend to generate a lot of publications with a lot of available citations. Citations which buoy up the metrics of investigators who happen to work in those fields. Did you know, my biomedical friends, that a JIF of 1.0 is awesome in some fields of science? This is where the Hoppe and Lauer papers are critical. They show that not all fields get the same amount of NIH funding, and do not get that funding as easily. This affects the available pool of citations. It affects the JIF of journals in those fields. It affects the competition for limited space in the “best” journals. It affects the perceived authority of some individuals in the field to prosecute their personal opinions about the “most impactful” science.
That funding to a sub-field, or to certain approaches (technical, theoretical, model, etc, etc) has a very broad and lasting impact on what is funded, what is viewed as the best science, etc.
So is it good advice to “Run to daylight”? If you are getting “scooped” on the regular is it your fault for wanting to work in a crowded subfield?
It really isn’t. I wish it were so but it is bad advice.
Better advice is to work in areas that are well populated and well-funded, using methods and approaches and theoretical structures that everyone else prefers and bray as hard as you can that your tiny incremental twist is “novel”.
Preprints and NIH Study Section Behavior
March 11, 2019
An interesting pre-print discussion emerged on Twitter today in the wake of an observation that members of study sections apparently are not up to speed on the NIH policy encouraging the use of pre-prints and permitting them to be cited in NIH grant applications. The relevant Notice [NOT-OD-17-050] was issued in March of 2017 and it is long past time for most reviewers to be aware of what pre-prints are, what they are not, and to understand that NIH has issued the above referenced Notice.
Now, the ensuing Twitscussion diverted off into several related topics, but the part I find worth addressing is a tone suggesting that not only should NIH grant reviewers understand what a pre-print is, but that they should view them in some particular way. Typically this is expressed as outrage that reviewers do not view pre-prints favorably and essentially just like a published paper. On this I do not agree and will push a different agenda. NIH reviewers were not told how to view pre-prints in the context of grant review by the NIH, as far as I know. Or, to the extent the NIH issued instructions, it was essentially to put pre-prints below peer reviewed work.
The NIH agrees that interim research products offer lower quality information than peer-reviewed products. This policy is not intended to replace peer-review, nor peer-reviewed journals …
Further, the NIH is instructing awardees to explicitly state in preprints text that the work is not peer-reviewed. These two practices should help reviewers easily identify interim products. The NIH will offer explicit guidance to reviewers reminding them that interim research products are not peer-reviewed. Further, since interim products are new to so many biomedical disciplines, the NIH hopes that these conventions will become the norm for all interim products, and will help the media and the public understand that interim products have undergone less review than peer-reviewed articles.
https://grants.nih.gov/grants/guide/notice-files/NOT-OD-17-050.html
Given this, I would suggest that NIH reviewers are quite free to discount pre-prints entirely, to view them as preliminary data (and be grumpy about this as an effort to evade the page limits of the NIH application)…..or to treat them as fully equivalent to a peer reviewed paper because they disagree with the NIH's tone / take on this. Reviewers get to decide. And as is typical, if reviewers on the same panel disagree they are free to hammer this disagreement out during the Discussion of applications.
I believe that pre-print fans should understand that they have to advocate and discuss their views on pre-prints, and also understand that merely whingeing about how reviewers must be violating the integrity of review, or some such, if they do not agree with the most fervent pre-print fans is not helpful. We advocate first by using pre-prints with regularity ourselves. We advocate next by taking advantage of the NIH policy and citing our pre-prints in our grant applications, identified as such. Then, if we happen to be invited to serve on study sections, we can access a more direct lever: the Discussion of proposals. (Actually, just writing something in the critique about how citing pre-prints is admirable would be helpful as well. NIH seems to suggest in their Notice that perhaps this would go under Rigor.)
NIH policy on A2 as A0 that I didn’t really appreciate.
July 26, 2018
The NOT-OD-18-197 this week seeks to summarize policy on the submission of revised grant applications that has been spread across multiple prior notices. Part of this deals with the evolved compromise where applicants are only allowed to submit a single formal revision (the -xxA1 version) but are not prohibited from submitting a new (-01, aka another A0 version) one with identical content, Aims, etc.
Addendum A emphasizes rules for compliance with Requirements for New Applications. The first one is easy. You are not allowed an extra Introduction page. Sure. That is what distinguishes the A1, the extra sheet for replying.
After that it gets into the weeds. Honestly I would have thought this stuff all completely legal and might have tried using it, if the necessity ever came up.
The following content is NOT allowed anywhere in a New A0 Application or its associated components (e.g., the appendix, letters of support, other attachments):
Introduction page(s) to respond to critiques from a previous review
Mention of previous overall or criterion scores or percentile
Mention of comments made by previous reviewers
Mention of how the application or project has been modified since its last submission
Marks in the application to indicate where the application has been modified since its last submission
Progress Report
I think I might be most tempted to include the prior review outcome? Not really sure, and I've never done this to my recollection. Mention of prior comments? I mean, I think I've seen this before in grants. Maybe? Some sort of comment about a prior review that did not mention the revision series.
Obviously you can accomplish most of this stuff within the letter of the law by not making explicit mention or marking of revision or of prior comments. You just address the criticisms and if necessary say something about “one might criticize this for…but we have proposed….”.
The Progress Report prohibition is a real head scratcher. The Progress Report is included as a formal requirement with a competing continuation (renewal in modern parlance) application. But it has to fit within the page limits, unlike either an Introduction or a List of Publications Resulting (also an obligation of renewal apps) which gets you extra pages.
But the vast majority of NIH R01s include a report on the progress made so far. This is what is known as Preliminary Data! In the 25 page days, I tended to put Preliminary Data in a subsection with a header. Many other applications that I reviewed did something similar. It might as well have been called the Progress Report. Now, I sort of spread Preliminary Data around the proposal but there is a degree to which the Significance and Innovation sections do more or less form a report on progress to date.
There are at least two scenarios where grant writing behavior that I’ve seen might run afoul of this rule.
There is a style of grant writer that loves to place the proposal in the context of their long, ongoing research program. “We discovered… so now we want to explore….”. or “Our lab focuses on the connectivity of the Physio-Whimple nucleus and so now we are going to examine…”. The point being that their style almost inevitably requires a narrative that is drawn from the lab as a whole rather than any specific prior interval of funding. But it still reads like a Progress Report.
The second scenario is a tactical one in which a PI is nearing the end of a project and chooses to continue work on the topic area with a new proposal rather than a renewal application. Maybe there is a really big jump in Aims. Maybe it hasn’t been productive on the previously proposed Aims. Maybe they just can’t trust the timing and surety of the NIH renewal proposal process and need to get a jump on the submission date. Given that this new proposal will have some connection to the ongoing work under a prior award, the PI may worry that the review panel will balk at overlap. Or at anticipated overlap because they might assume the PI will also be submitting a renewal application for that existing funding. In the old days you could get 2 or 3 R01 more or less on the same topic (dopamine and stimulant self-administration, anyone?) but I think review panels are unkeen on that these days. They are alert to signs of multiple awards on too-closely-related topics. IME anyway. So the PI might try to navigate the lack of overlap and/or assure the reviewers that there is not going to be a renewal of the other one in some sort of modestly subtle way. This could take the form of a Progress Report. “We made the following progress under our existing R01 but now it is too far from the original Aims and so we are proposing this as a new project..” is something I could totally imagine writing.
But as we know, what makes sense to me for NIH grant applications is entirely beside the point. The NOT clarifies the rules. Adhere to them.
Endnote
February 10, 2018
Nobody who is younger than me in the scientific generation sense should ever be manually entering references in manuscripts or grant applications.
Ever.
SABV in NIH Grant Review
February 8, 2018
We’re several rounds of grant submission/review past the NIH’s demand that applications consider Sex As a Biological Variable (SABV). I have reviewed grants from the first round of this obligation until just recently and have observed a few things coming into focus. There’s still a lot of wiggle and uncertainty but I am seeing a few things emerge in my domains of grants that include vertebrate animals (mostly rodent models).
1) It is unwise to ignore SABV.
2) Inclusion of both sexes has to be done judiciously. If you put a sex comparison in the Aim, or too prominently as a point of hypothesis testing, you are going to get the full blast of sex-comparisons review. Which you want to avoid, because you will get killed on the usual: power, estrous cycle effects that "must" be there, various caveats about why male and female rats aren't the same (behaviorally, pharmacokinetically, etc, etc) regardless of what your preliminary data show.
3) The key is to include both sexes and say you will look at the data to see if there appears to be any difference. Then say the full examination will be a future direction or slightly modify the subsequent experiments.
4) Nobody seems to be fully embracing the SABV concept as it comes across in the formal pronouncements, i.e., that you use sample sizes that are half males and half females into perpetuity if you don't see a difference. I am not surprised. This is the hardest thing for me to accept personally, and I know for certain sure manuscript reviewers won't go for it either.
Then there comes the biggest categorical split in approach that I have noticed so far.
5a) Some people appear to use a few targeted female-including (yes, the vast majority still propose males as default and females as the SABV-satisfying extra) experiments to check main findings.
5b) The other take is just to basically double everything up and say “we’ll run full groups of males and females”. This is where it gets entertaining.
I have been talking about the fact that the R01 doesn’t pay for itself for some time now.
A full modular, $250K per year NIH grant doesn’t actually pay for itself.
the $250K full modular grant does not pay for itself. In the sense that there is a certain expectation of productivity, progress, etc on the part of study sections and Program that requires more contribution than can be afforded (especially when you put it in terms of 40 hr work weeks) within the budget.
The R01 still doesn’t pay for itself and reviewers are getting worse
I have reviewed multiple proposals recently that cannot be done. Literally. They cannot be accomplished for the price of the budget proposed. Nobody blinks an eye about this. They might talk about “feasibility” in the sense of scientific outcomes or preliminary data or, occasionally, some perceived deficit of the investigators/environment. But I have not heard a reviewer say “nice but there is no way this can be accomplished for $250K direct”.
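To make that claim concrete, here is a back-of-envelope sketch of a full-modular year. Every salary, fringe rate, and supply figure below is my assumption for illustration, not a citation of any actual budget:

```python
# Back-of-envelope feasibility check for a full-modular R01 budget year.
# All salary, fringe, and supply figures are assumptions for illustration.
direct_per_year = 250_000

personnel = {
    # (annual salary, fringe rate, effort fraction charged to this grant)
    "PI": (150_000, 0.30, 0.20),
    "postdoc": (56_000, 0.25, 1.00),
    "technician": (45_000, 0.35, 1.00),
    "grad student (stipend + tuition)": (45_000, 0.0, 1.00),
}

personnel_cost = sum(sal * (1 + fringe) * effort
                     for sal, fringe, effort in personnel.values())
animals_and_supplies = 60_000  # assumed; per diems alone can eat this

total = personnel_cost + animals_and_supplies
print(f"personnel: ${personnel_cost:,.0f}")
print(f"total:     ${total:,.0f} vs ${direct_per_year:,} available "
      f"({total - direct_per_year:+,.0f})")
```

Even this modest lab configuration overshoots the cap before a single extra experimental group is added.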
Well, “we’re going to duplicate everything in females” as a response to the SABV initiative just administered the equivalent of HGH to this trend. There is approximately zero real world dealing with this in the majority of grants that slap in the females and from what I have seen no comment whatever from reviewers on feasibility. We are just entirely ignoring this.
What I am really looking forward to is the review of grants in about 3 years time. At that point we are going to start seeing competing continuation applications where the original promised to address SABV. In a more general sense, any app from a PI who has been funded in the post-SABV-requirement interval will also face a simple question.
Has the PI addressed SABV in his or her work? Have they taken it seriously, conducted the studies (prelim data?) and hopefully published some things (yes, even negative sex-comparisons)?
If not, we should, as reviewers, drop the hammer. No more vague hand wavy stuff like I am seeing in proposals now. The PI had better show some evidence of having tried.
What I predict, however, is more excuse making and more bad faith claims to look at females in the next funding interval.
Please prove me wrong, scientists in my fields of study.
__
Additional Reading:
NIH’s OER blog Open Mike on the SABV policies.
NIH Reviewer Guidance [PDF]
Your Grant in Review: Power analysis and the Vertebrate Animals Section
February 11, 2016
As a reminder, the NIH issued a warning about the upcoming Simplification of the Vertebrate Animals Section of NIH Grant Applications and Contract Proposals.
Simplification! Cool, right?
There’s a landmine here.
For years the statistical power analysis was something that I included in the General Methods at the end of my Research Strategy section. In more recent times, a growing insistence on the part of the OLAW that a proper Vertebrate Animals Section include the power analysis has influenced me to drop the power analysis from the Research Strategy. It became a word for word duplication so it seemed worth the risk to regain the page space.
The notice says:
Summary of Changes
The VAS criteria are simplified by the following changes:
- A description of veterinary care is no longer required.
- Justification for the number of animals has been eliminated.
- A description of the method of euthanasia is required only if the method is not consistent with AVMA guidelines.
This means that if I continue with my current strategy, I'm going to start seeing complaints about "where is the power analysis" and "hey buddy, stop trying to evade page limits by putting it in the VAS".
So back to the old way we must go. Leave space for your power analysis, folks.
__
If you don’t know much about doing a power analysis, this website is helpful: http://homepage.stat.uiowa.edu/~rlenth/Power/
Your Grant In Review: Errors of fact from incompetent reviewers
December 3, 2015
Bjoern Brembs has posted a lengthy complaint about the errors of fact made by incompetent reviewers of his grant application.
I get it. I really do. I could write a similar penetrating expose of the incompetence of reviewers on at least half of my summary statements.
And I will admit that I probably have these thoughts running through my mind on the first six or seven reads of the summary statements for my proposals.
But I’m telling you. You have to let that stuff eventually roll off you like water off the proverbial duck’s back. Believe me*.
Brembs:
Had Reviewer #1 been an expert in the field, they would have recognized that in this publication there are several crucial control experiments missing, both genetic and behavioral, to draw such firm conclusions about the role of FoxP.
…
These issues are not discussed in the proposal, as we expect the reviewers to be expert peers.
Speaking for the NIH grant system only: you are an idiot if you expect this level of "expert peer" among the assigned reviewers for each and every one of your applications. I am not going to pretend to be an expert in this issue, but even I can suspect that the body of work in this area does not lead each and every person who is "expert" to the same conclusion. And therefore even an expert might disagree with Brembs on what reviewers should "recognize". A less-than-expert is going to be relying on a cursory or rapid reading of the related literature or, perhaps, an incomplete understanding from a prior episode of attending to the issue.
As a grant applicant, I’m sorry, but it is your job to make your interpretations clear, particularly if you know there are papers pointing in different directions in the literature.
More ‘tude from the Brembster:
For the non-expert, these issues are mentioned both in our own FoxP publication and in more detail in a related blog post.
…
These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.
These are repeated several times triumphantly as if they are some excellent sick burn. Don't think like this. First, NIH reviewers are not expected to do a lot of outside research, reading your papers (or others'), to apprehend the critical information needed to appreciate your proposal. Second, NIH reviewers are explicitly cautioned not to follow links to sites controlled by the applicant. DO. NOT. EXPECT. REVIEWERS. TO. READ. YOUR. BLOG! …or your papers.
With respect to “graduate student level”, it will be better for you to keep in mind that many peers who do not work directly in the narrow topic you are proposing to study have essentially a graduate student level acquaintance with your topic. Write your proposal accordingly. Draw the reader through it by the hand.
__
*Trump voice
Thought of the Day
October 16, 2015
I know this NIH grant game sucks.
I do.
And I feel really pained each time I get email or Twitter messages from one of my Readers (and there are many of you, so this isn’t as personal as it may seem to any given Reader) who are desperate to find the sekrit button that will make the grant dollars fall out of the hopper.
I spend soooooo much of my discussion on this blog trying to explain that NOBODY CAN TELL YOU WHERE THE SEKRIT BUTTON IS BECAUSE IT DOESN’T EXIST!!!!!!!!!!!!
Really. I believe this down to the core of my professional being.
Sometimes I think that the problem here is the just-world fallacy at work. It is just so dang difficult to give up on the notion that if you just do your job, the world will be fair. If you do good work, you will eventually get the grant funding to support it. That’s what all the people you trained around seemed to experience and you are at least as good as them, better in many cases, so obviously the world owes you the same sort of outcome.
I mean yeah, we all recognize things are terrible with the budget and we expect it to be harder but…..maybe not quite this hard?
I feel it too.
Believing in a just-world is really hard to shed.
Repost: Don’t tense up
August 7, 2015
I’ve been in need of this reminder myself in the past year or so. This originally went up on the blog 25 September, 2011.
If you’ve been going through a run of disappointing grant reviews punctuated by nasty Third Reviewer comments, you tend to tense up.
Your next proposals are stiff…and jam-packed with what is supposed to be ammunition to ward off the criticisms you've been receiving lately. Excessive citation of the lit to defend your hypotheses…and your buffer concentrations. Review-paper level exposition of your logical chain. A kitchen sink of preliminary data. Exhaustive detail of your alternate approaches.
The trouble is, then your grant is wall to wall text and nearly unreadable.
Also, all that nitpicky stuff? Sometimes it is just post hoc justification by reviewers who don’t like the whole thing for reasons only tangentially related to the nits they are picking.
So your defensive crouch isn’t actually helping. If you hook the reviewer hard with your big picture stuff they will often put up with a lot of seeming StockCritique bait.
Thought of the Day
June 22, 2015
I cannot tell you how comforting it is to know that no matter the depths and pedantry of my grant geekery, there is always a certain person to be found digging away furiously below me.
The NIH has recently issued the first round of guidance on inclusion of Sex as a Biological Variable in future NIH research grants. I am completely behind the spirit of the initiative but I have concerns about how well this is going to work in practice. I wrote a post in 2008 that detailed some of the reasons that have brought us to the situation where the Director of the NIH felt he had to coauthor an OpEd on this topic. I believe these issues are still present, will not be magically removed with new instructions to reviewers and need to be faced head-on if the NIH is to make any actual progress on ensuring SABV is considered appropriately going forward.
The post originally appeared December 2, 2008.
The title quote came from one of my early, and highly formative, experiences on study section. In the course of discussing a revised application it emerged that the prior version of the application had included a sex comparison. The PI had chosen to delete that part of the design in the revised application, prompting one of the experienced members of the panel to ask, quite rhetorically, “Why do they always drop the females?”
I was reminded of this when reading over Dr. Isis’ excellent post [Update: Original Sb post lost, I think the repost can be found here] on the, shall we say less pernicious, ways that the course of science is slanted toward doing male-based research. Really, go read that post before you continue here, it is a fantastic description.
What really motivated me, however, was a comment from the always insightful Stephanie Z:
Thank you. That’s the first time I’ve seen someone address the reasons behind ongoing gender disparities in health research. I still can’t say as it thrills me (or you, obviously), but I understand a bit better now.
Did somebody ring?
As I pointed out explicitly at least once ([Update: Original 2007 post]), research funding has a huge role in what science actually gets conducted. Huge. In my book this means that if one feels that an area of science is being systematically overlooked or minimized, one might want to take a close look at the manner by which science is funded and the way by which science careers are sustained as potential avenues for systematic remedy.
Funding
There are a couple of ways in which the generalized problems with NIH grant review lead to the rhetorical comment with which I opened the post. One very common StockCritique of NIH grant review is the "over ambitious" research plan. As nicely detailed in Isis' post, the inclusion of a sex comparison doubles the groups right off the bat, but even more to the point, it requires the inclusion of various hormonal cycling considerations. This can be as simple as requiring female subjects to be assessed at multiple points of the estrous cycle. It can be considerably more complicated, often requiring gonadectomy (at various developmental timepoints) and hormonal replacement (with dose-response designs, please) including all of the appropriate control groups / observations. Novel hormonal antagonists? Whoops, the model is not "well established" and needs to be "compared to the standard gonadectomy models", LOL >sigh<.
Grant reviewers prefer simplicity. Keep in mind, if you will, that there is always a more fundamental comparison or question at the root of the project, such as "does this drug compound ameliorate cocaine addiction?" So all the gender comparisons, designs and groups need to be multiplied against the cocaine addiction/treatment conditions. Suppose it is one of those cocaine models that requires a month or more of training per group? Who is going to run all those animals? How many operant boxes / hours are available? And at what cost? Trust me, the grant proposal is going to take fire for "scope of the project", as the tabulation below suggests.
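The group-count arithmetic is easy to make concrete. A toy sketch, with all factor levels and group sizes assumed for illustration:

```python
# Toy tabulation of how a sex comparison multiplies an experimental design.
# Factor levels and n per group are assumptions for illustration.
from math import prod

base_factors = {"drug dose": 4, "cocaine condition": 2}  # assumed levels
n_per_group = 10       # assumed from a power analysis
estrous_phases = 4     # females assessed at each phase of the cycle

base_groups = prod(base_factors.values())              # 8 groups, males only
male_n = base_groups * n_per_group                     # 80 animals
female_n = base_groups * estrous_phases * n_per_group  # 320 animals
print(f"males-only design: {male_n}; with sex comparison: {male_n + female_n}")
# 80 vs 400 animals, before any gonadectomy/replacement arms are added
```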
Another StockCritique to blame is "feasibility". Two points here, really. First is the question of Preliminary Data: of course, if you have to run more experimental conditions to establish that you might have a meritorious hypothesis, you are less likely to do it with a fixed amount of pilot/startup/leftover money. Better to work on preliminary data for two or three distinct applications over just one, if you have the funds. The second aspect has to do with a given PI's experience with the models in question. More opportunity to say "The PI has no idea what s/he is doing methodologically" if s/he has no prior background with the experimental conditions, which are almost always the female-related ones. As we all know, it matters little that the hormonal assays or gonadectomy or whatever procedures have been published endlessly if you don't have direct evidence that you can do it. Of course, more latitude is extended to the more-experienced investigator….but then s/he is less likely to jump into gender-comparisons in a sustained way, in contrast to a newly minted PI.
Then there are the various things under grantspersonship. You have limited space in a given type of grant application. The more groups and comparisons, the more you have to squeeze in with respect to basic designs, methods and the interpretation/alternative approaches part. So of course you leave big windows for critiques of “hasn’t fully considered….” and “it is not entirely clear how the PI will do…” and “how the hypothesis will be evaluated has not been sufficiently detailed…”.
Career
Although research funding plays a huge role in career success, it is only part of the puzzle. Another critical factor is what we consider to be “great” or “exciting” science in our respective fields.
The little people can fill in the details. This is basically the approach of GlamourMagz science. (This is a paraphrase of something the most successful GlamourMagz PI I know actually says.) Cool, fast and hot is not compatible with the metastasizing of experimental conditions that is an inevitable feature of gender-comparison science. Trouble is, this approach tends to trickle down in various guises. Lower (than GlamourMag) impact factor journals sometimes try to upgrade by becoming more NS-like (Hi, J Neuro!). Meticulous science and exacting experimental designs are only respected (if at all) after the fact. Late(r) in someone’s career they start getting props on their grant reviews for this. Early? Well the person hasn’t yet shown the necessity and profit for the exhaustive designs and instead they just look…unproductive. Like they haven’t really shown anything yet.
As we all know splashy CNS pubs on the CV trump a sustained area of contribution in lower journals six ways to Sunday. This is not to say that nobody will appreciate the meticulous approach, they will. Just to say that high IF journal pubs will trump. Always.
So the smart young PI is going to stay away from those messy sex-differences studies. Everything tells her she should. If she does dip a toe, she's more likely to pay a nasty career price.
This is why NIH efforts to promote sex-comparison studies are necessary. Special funding opportunities are the only way to tip the equation even slightly toward the sex-differences side. The lure of the RFA is enough to persuade the experienced PI to write in the female groups. To convince the new PI that she might just risk it this one time.
My suspicion is that it is not enough. Beyond the simple need to take a stepwise approach to the science as detailed by Isis, the career and funding pressures are irresistible forces.
We spend a fair amount of time talking about grant strategy on this blog. Presumably, this is a reflection of an internal process many of us go through trying to decide how to distribute our grant writing effort so as to maximize our chances of getting funded. After all we have better things to do than to write grants.
So we scrutinize success rates for various ICs, various mechanisms, FOAs, etc as best we are able. We flog RePORTER for evidence of which study sections will be most sympathetic to our proposals and how to cast our applications so as to be attractive. We worry about how to construct our Biosketch and who to include as consultants or collaborators. We obsess over how much preliminary data is enough (and too much*).
This is all well and good and maybe, maybe….perhaps….it helps.
But at some level, you have to follow your gut, too. Even when the odds seem overwhelmingly bad, there are going to be times when dang it, you just feel like this is the right thing to do.
Submitting an R01 on very thin preliminary data because it just doesn’t work as an R21 perhaps.
Proposing an R03 scope project even if the relevant study section has only one** of them funded on the RePORTER books.
Submitting your proposal when the PO who will likely be handling it has already told you she hates your Aims***.
Revising that application that has been triaged twice**** and sending it back in as an A2-as-A0 proposal.
I would just advise that you take a balanced approach. Make your riskier attempts, sure, but balance those with some less risky applications too.
I view it as….experimenting.
__
*Just got a question about presenting too much preliminary data the other day.
**of course you want to make sure there is not a structural issue at work, such as the section stopped reviewing this mechanism two years ago.
***1-2%ile scores have a way of softening the stony cold heart of a Program Officer. Within-payline skips are very, very rare beasts.
****one of my least strategic behaviors may be in revising grants that have been triaged. Not sure I’ve ever had one funded after initial triage and yet I persist. Less so now than I used to but…..I have a tendency. Hard headed and stupid, maybe.
It is one of the most perplexing things of my career and I still don’t completely understand why this is the case. But it is important for PIs, especially those who have not yet experienced study section, to understand a simple fact of life.
The NIH Program Officers do not completely understand what contributes to the review and scoring of your grant application.
My examples are legion and I have mentioned some of them in prior blog posts over the years.
The recent advice from NIAID on how to get your grant to fit within a modular budget limit.
The advice from a PO that PIs (such as myself) just needed to “write better grants” when I was already through a stint on study section and had read many, many crappy and yet funded grants from more established investigators.
The observation that transitioning investigators “shouldn’t take that job” because it was soft money and K grants were figuring heavily in the person’s transition/launch plans.
Apparently honest wonder that reviewers do not read their precious Program Announcements and automatically award excellent scores to applications just because they align with the goals of the PA.
Ignorance of the revision queuing that was particularly endemic during the early part of my career (and pretend? ignorance that limiting applications to one revision round made no functional difference in this).
The “sudden discovery” that all of the New Investigator grants during the checkbox era were going to well-established investigators who simply happened not to have NIH funding before, instead of boosting the young / recently appointed investigators.
An almost comically naive belief that study section outcome for grants really is an unbiased reflection of grant merit.
I could go on.
The reason this is so perplexing to me is that this is their job. POs [eta: used to] sit in on study section meetings or listen in on the phone. At least three times a year but probably more often given various special emphasis panels and the assignment of grants that might be reviewed in any of several study sections. They even take notes and are supposed to give feedback to the applicant with respect to the tenor of the discussion. They read any and all summary statements that they care to. They read (or can read) a nearly dizzying array of successful and unsuccessful applications.
And yet they mostly seem so ignorant of dynamics that were apparent to me after one, two or at the most three study section meetings.
It is weird.
The takeaway message for less NIH-experienced applicants is that the PO doesn’t know everything. I’m not saying they are never helpful….they are. Occasionally very helpful. Difference between funded and not-funded helpful. So I fully endorse the usual advice to talk to your POs early and often.
Do not take the PO word for gospel, however. Take it under advisement and integrate it with all of your other sources of information to try to decide how to advance your funding strategy.
Crystal clear grant advice from NIAID
May 21, 2015
from this Advice Corner on modular budgeting:
As you design your research proposal, tabulate a rough cost estimate. If you are above but near the $250,000 annual direct cost threshold, consider ways to lessen your expenses. Maybe you have a low-priority Specific Aim that can be dropped or a piece of equipment you could rent rather than buy new.
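That advice amounts to simple arithmetic. A sketch of such a tabulation, with every line item and amount invented for illustration:

```python
# Rough direct-cost tabulation per the NIAID advice: estimate, then check
# against the $250K/year modular cap. Line items are assumptions.
MODULE = 25_000
CAP = 250_000

line_items = {
    "personnel (salaries + fringe)": 195_000,
    "animal purchase and per diems": 40_000,
    "supplies / assays": 25_000,
    "equipment (could rent instead?)": 15_000,
}

total = sum(line_items.values())
modules = -(-total // MODULE)  # round up to whole $25K modules
print(f"total ${total:,} -> {modules} modules; over cap by ${max(0, total - CAP):,}")
# If over: drop a low-priority Aim or rent the equipment, per the advice.
```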
H/t: PhysioProf
Related Reading:
NIAID
Sample Grants