End of Year Funds in 2020
October 1, 2020
As you know, the end of the Federal fiscal year can be a fun time for hopeful NIH grant applicants. This is when your favorite ICs are counting up the beans and making sure they use all of their appropriated money.
This means that they often pick up some grants with scores that otherwise looked like they were not going to fund.
This is great if you are one of the lucky ones. I have had 9/30 grant starts in the past. It feels awesome.
This year however….this year.
I am thinking about the Hoppe finding and the original Ginther report. I am thinking as always about the NIH’s complete and utter failure to address this issue. And I am hopeful. Always. That IC Directors will take it upon themselves to follow the advice I had right from the start.
Just FIX this. It won’t take much. Just a few extra grant pickups that happen to have Black PIs. End of year is a great time to slip one or three or five into the portfolio. Nobody can complain about these decisions.
So… pull up RePORTER. Set the start date to 9/15/2020, enter the two-letter codes for your favorite ICs and start searching.
See any of your Black colleagues getting grants?
NIH grant application topics by IC
August 13, 2020
As you will recall, the Hoppe et al. 2019 report [blogpost] replicated Ginther et al. 2011 with a subsequent slice of grant applications, demonstrating that after the news of Ginther, with a change in scoring procedures and changes in permissible revisions, applications with Black PIs still suffered a huge funding disparity. Applications with white PIs were 1.7 times more likely to be funded. Hoppe et al. also identified a new culprit for the funding disparity to applications with African-American / Black PIs. TOPIC! “Aha”, they crowed, “it isn’t that applications with Black PIs are discriminated against on that basis, no. It’s that the applications with Black PIs just so happen to be disproportionately focused on topics that just so happen to have lower funding / success rates”. Of course it also was admitted very quietly by Hoppe et al. that:
WH applicants also experienced lower award rates in these clusters, but the disparate outcomes between AA/B and WH applicants remained, regardless of whether the topic was among the higher- or lower-success clusters (fig. S6).
Hoppe et al., Science Advances, 2019 Oct 9;5(10):eaaw7238. doi: 10.1126/sciadv.aaw7238
If you go to the Supplement Figure S6 you can see that for each of the five quintiles of topic clusters (ranked by award rates) applications with Black PIs fare worse than applications with white PIs. In fact, in the least-awarded quintile, which has the highest proportion of the applications with Black PIs, the white PI apps enjoy a 1.87-fold advantage, higher than the overall mean of the 1.65-fold advantage.
Record scratch: As usual I find something new every time I go back to one of these reports on the NIH funding disparity. The overall award rate disparity was 10.7% for applications with Black PIs versus 17.7% for those with white PIs. The takeaway from Hoppe et al. 2019 is reflected in the left side of Figure S6, which shows that the percentage of applications with Black PIs is lowest (<10%) in the topic domains with the highest award rates and highest (~28%) in the domains with the lowest award rates. The percentages are more similar for apps with white PIs, approximately 20% per quintile. But the right side lists the award rates by quintile. And here we see that in the second highest award-rate topic quintile, the disparity is similar to the mean (12.6% vs 18.9%) but in the top quintile it is greater (13.4% vs 24.2%, a 10.8 percentage point gap vs the 7 point gap overall). So if Black PIs followed Director Collins’ suggestion that they work on the right topics with the right methodologies, they would fare even worse, due to the 1.81-fold advantage for applications with white PIs in the top most-awarded topic quintile!
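The fold advantages and percentage-point gaps quoted above all follow from the award rates by simple division and subtraction. A minimal sketch, using only the rates as quoted from Hoppe et al. and Figure S6 (no other data assumed):

```python
# Award rates (%) for applications with white (WH) and Black (AA/B) PIs,
# as quoted above from Hoppe et al. 2019 and its Figure S6.
rates = {
    "overall": {"wh": 17.7, "aab": 10.7},
    "second_highest_quintile": {"wh": 18.9, "aab": 12.6},
    "top_quintile": {"wh": 24.2, "aab": 13.4},
}

for cluster, r in rates.items():
    fold = r["wh"] / r["aab"]   # relative (fold) advantage for WH apps
    gap = r["wh"] - r["aab"]    # absolute percentage-point gap
    print(f"{cluster}: {fold:.2f}-fold advantage, {gap:.1f} point gap")
```

The top quintile comes out worst on both measures, which is the point: chasing the “right” topics does not close the gap.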
Okay but what I really started out to discuss today was a new tiny tidbit provided by a blog post on the Open Mike blog. It reports the topic clusters by IC. This is cool to see since the word clusters presented in Hoppe (Figure 4) don’t map cleanly onto any sort of IC assumptions.

All we are really concerned with here is the ranking along the X axis. From the blog post:
…17 topics (out of 148), representing 40,307 R01 applications, accounted for 50% of the submissions from African American and Black (AAB) PIs. We refer to these topics as “AAB disproportionate” as these are topics to which AAB PIs disproportionately apply.
Note the extreme outliers. One (MD) is the National Institute on Minority Health and Health Disparities. I mean… seriously. The other (NR) is the National Institute of Nursing Research, which is also really interesting. Did I mention that these two ICs get 0.8% and 0.4% of the NIH budget, respectively? The NIH mission statement reads: “NIH’s mission is to seek fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to enhance health, lengthen life, and reduce illness and disability.” Emphasis added. The next one (TW) is the Fogarty International Center, which focuses on global health issues (hello global pandemics!) and gets 0.2% of the NIH budget.
Then we get into the real meat. At numbers 4-6 on the AAB Disproportionate list of ICs we reach the National Institute of Child Health and Human Development (HD, 3.7% of the budget), NIDA (DA, 3.5%) and NIAAA (AA, 1.3%). And clocking in at 7 and 9 we have the National Institute on Aging (AG, 8.5%) and the NIMH (MH, 4.9%).
These are a lot of NIH dollars being expended in ICs of central interest to me and a lot of my audience. We could have made some guesses based on the word clusters in Hoppe et al 2019 but this gets us closer.
Yes, we now need to get deeper and more specific. What is the award disparity for applications with Black vs white PIs within each of these ICs? How much of that disparity, if it exists, is accounted for by the topic choices within each IC?
And let’s consider the upside. If, by some miracle, a given IC is doing particularly well with respect to funding applications with Black PIs fairly… how are they accomplishing this variance from the NIH average? What can the NIH adopt from such an IC to improve things?
Oh, and NINR and NIMHD really need a boost to their budgets. Maybe NIH Director Collins could apply a 10% cut prior to award to the other ICs to improve investment in the applying-knowledge-to-enhance-health goals of the mission statement?
Sally Amero, Ph.D., NIH’s Review Policy Officer and Extramural Research Integrity Liaison Officer, has posted a new entry on the Open Mike blog addressing reviewer guidance in the Time of Corona. It lists a number of things that are now supposed not to affect scoring. The list includes:
- Some key personnel on grant applications may be called up to serve in patient testing or patient care roles, diverting effort from the proposed research
- Feasibility of the proposed approach may be affected, for example if direct patient contact is required
- The environment may not be functional or accessible
- Additional human subjects protections may be in order, for example if the application was submitted prior to the viral outbreak
- Animal welfare may be affected, if institutions are closed temporarily
- Biohazards may include insufficient protections for research personnel
- Recruitment plans and inclusion plans may be delayed, if certain patient populations are affected by the viral outbreak
- Travel for key personnel or trainees to attend scientific conferences, meetings of consortium leadership, etc., may be postponed temporarily
- Curricula proposed in training grant applications may have to be converted to online formats temporarily
- Conferences proposed in R13/U13 applications may be cancelled or postponed.
Honestly, I’m not seeing how this comes into consideration at all. Nothing moves quickly enough with respect to grant proposals for future work. I mean, any applicant should be optimistic and act like everything will be back to normal for grants submitted this round, for first possible funding, ah, NEXT APRIL. Grants headed for the upcoming June/July study sections were for the most part submitted before this shutdown happened, so likewise there is no reason they would have had call to mention the Corona Crisis. That part is totally perplexing.
The next bit, however, is a real punch in the gut.
We have also had many questions from applicants asking what they should do if they don’t have enough preliminary data for the application they had planned to submit. While it may not be the most popular answer, we always recommend that applicants submit the best application possible. If preliminary data is lacking, consider waiting to submit a stronger application for a later due date.
Aka “Screw you”.
I will admit this was entirely predictable.
There is no guarantee that grant review in the coming rounds will take Corona-related excuses seriously. And even if it does, this is still competition. A competition where, if you’ve happened to be more productive than the next person, your chances are better. Are the preliminary data supportive? Is your productivity coming along? Well, the next PI looks fine and you look bad so…. so sorry, ND. Nobody can have confidence that where they were when they shut down for corona will be enough to get them their next bit of funding.
I don’t see any way for the NIH to navigate this. Sure, they could give out supplements to existing grants. But, that only benefits the currently funded. Bridge awards for those that had near-miss scores? Sure, but how many can they afford? What impact would this have on new grants? After all, the NIH shows no signs yet of shutting down receipt and review or of funding per Council round as normal. But if we are relying on this, then we are under huge pressure to keep submitting grants as normal. Which would be helped by new Preliminary Data. And more publications.
So we PIs are hugely, hugely still motivated to work as normal. To seek any excuse as to why our ongoing studies are absolutely essential. To keep valuable stuff going, by hook or by crook…. Among other reasons, WE DON’T KNOW THE END DATE!
I hate being right when it comes to my cynical views on how the NIH behaves. But it is very clear. They are positively encouraging the scofflaws to keep on working, to keep pressing their people to come to work and to tell their administration whatever is necessary to keep it rolling. The NIH is positively underlining the word Essential for our employees. If you don’t keep generating data, the lab’s chances of getting funded go down, relative to the labs that keep on working. Same thing for fellowships, trainees. That other person gunning for the rare K99 in your cohort is working so…..
Here’s the weird thing. These people at the NIH have to know that their exhortations to reviewers to do this, that or the other generally do not work. Look how the early stage / young investigator thing has played out across four or five decades. Look at the whole SABV initiative. Look at the remarks we’ve seen where grant reviewers refuse to accept that pre-prints are meaningful.
All they would have had to do is put in some meaningless pablum about how they were going to remind reviewers that issues resulting from the coronavirus pandemic should not affect scores, and include “Preliminary data may not be as strong as in other times” in the above bullet point list.
Grant seeking in the Time of Corona
April 14, 2020
The NIH has been responding to the coronavirus epidemic/crisis by shooting out various funding opportunity announcements to encourage new research on the issue. They will fund supplements, administrative and competing, as well as new grants and contracts. This is what NIH does in the face of perceived new health issues.
This response is perhaps more rapid than usual, but it is not very different from responses to other perceived crises such as the HIV/AIDS one, the SARS one, the Ebola one, the opioid one, etc. It’s not all that different from sudden political support for things such as the ongoing War on Cancer or the BRAINI scam.
As usual this sparks some minor debate in the ranks of the NIH funded science community. Is it some sort of outrage that individuals seek to create some sort of artificial, Frankenstein’s monster type of research program to respond to such funding opportunities? Is it distastefully mercenary? Will it just end up funding poorly considered, crap science?
Some seem to be arguing this line with respect to the coronavirus crisis.
I am one who shakes my head ruefully and says “well, that’s how this system works”. On a tactical level my advice to grant funded PIs is to just say “well, two Aims for them, one Aim for me and let’s call it a day”.
Meaning sure, try to meet the intentions of the FOA by marrying what you already do to the interest of the day. Do some credible work on their interests – after all, at some level we do in fact work for NIH priorities, which means national taxpayer priorities. We are fortunate to live in an investigator-initiated environment for the most part, so is it really so terrible that once in a while we’re sort of told what to do? I say no. Especially since we’re able to invent up the boundaries of what we’re being told to do. But this is also the opportunity to get some funding for what really interests you. To the tune of at least an Aim, probably more. How is that not a good thing?
I went through some of this as an observer and a participant PI during the HIV/AIDS version of this. Congress pushed a bunch of money at NIH for HIV/AIDS research and, the way I understand it, instructed NIH on who was going to be in charge of how much. Well, a lot of money ended up in the hands of NIDA. I can’t recall all the whys on that—those decisions were made before I was aware of this situation.
But, I very much was aware during a time when grants were supposed to come in with basically four groups or manipulations or what have you: Control, Immunodeficiency Virus Related, Drug Related, Virus + Drug. Another way to put it is: “How does Drug X affect pathogenicity in your immunodeficiency virus model?”
This is pretty specific but I think it generalizes to corona, where there will be a lot of objection to people marrying The Real, Important, Critical Work on Corona Virus to Whatever They Happen To Do.
I didn’t think I was going to have an angle on coronavirus at all. It shocked me to find that NIDA was actually out front. Why? Remember all that speculation back in the earlier days that Chinese men were perhaps more at risk than Chinese women due to cigarette smoking rates? And then there was some loose association of that with vaping in Scientific American (I think) speculation and boom, off to the races.
NIDA published one of the first FOAs that I saw. NOT-DA-20-047, “Notice of Special Interest (NOSI) regarding the Availability of Administrative Supplements and Urgent Competitive Revisions for Research on the 2019 Novel Coronavirus”, appeared March 19.
It’s a very broad one. Not just about smoking. They are ON it.
In order to rapidly improve our understanding of the risks, prevalence, and available control measures for 2019-nCoV in substance using or HIV-affected populations, NIDA is encouraging the submission of applications for Competitive Revisions to active grants to address the following research areas of interest:
Research to determine whether substance use (especially smoking tobacco or marijuana, vaping, opioids and other drug use) is a risk factor for the onset and progression of COVID-19.
Research on how HIV among persons who use substances may impact the onset and progression of COVID-19.
Research to understand system-level responses to COVID-19 prevention and risk mitigation in secure settings such as prisons and jails, with a particular emphasis on detainees with substance use disorder (SUD). For example:
Research to understand the respiratory effects of SARS-CoV-2 infection among individuals with substance use disorders (SUD); in particular those with nicotine, marijuana, opioid, and methamphetamine use disorders.
Research to understand how the respiratory effects of COVID-19 influence the rate of opioid overdoses both in pain patients as well as patients with an opioid use disorder, and also to assess how it influences the outcomes for naloxone interventions for overdose reversal.
Research to develop therapeutic approaches for comorbid SARS-CoV-2 infection and SUDs.
Research to evaluate drug-drug interaction of medications to treat SARS-CoV-2 and substances of abuse or medications to treat SUDs.
Research to understand system- or organizational-level responses to identify, prevent, or mitigate the impact of COVID-19 in service settings that serve vulnerable populations, including people who are homeless or unstably housed.
Research to understand and mitigate the impact of COVID-19 in methadone treatment programs and syringe exchange services.
Research on how potential overcrowding of emergency departments and health services will impact the treatment of opioid overdoses and of opioid use disorder.
Research using ongoing studies to understand the broad impacts of COVID-19 (e.g., school closures, food insecurity, anxiety, social isolation, family loss) on neurodevelopment, substance use, substance use disorders, and access to addiction treatment.
“COVID-19: Potential Implications for Individuals with Substance Use Disorders” is a webpage with more of their thinking on this.
So, is it terrible if I were to respond to this by firing the lab back up? By turning over stones at my University until I found someone with a decent rodent-related set of expertise in coronaviruses? By starting to plot an attack on funding?
I am being ASKED to do so by the NIH. Encouraged to get in the game. And that means, you guessed it, putting the lab to work on this.
Corruption of NIH Peer Review
April 13, 2020
The Office of the Inspector General at the HHS (NIH’s government organization parent) has recently issued a report [PDF] which throws some light on the mutterings that we’ve been hearing. Up to this point it has mostly been veiled “reminders” about the integrity of peer review at NIH and how we’re supposed to be ethical reviewers and what not.
As usual when upstanding citizens such as ourselves hear such things we are curious. As reviewers, we think we are trying our best to review ethically as we have been instructed. As applicants, of course, we are curious about just what manner of screwing we’ve suffered at the hands of NIH’s peer review now. After all, we all know that we’re being screwed, right?
“NIH isn’t funding [X] anymore“, we cry. X can be clinical, translational, model system, basic…. you name it. X can be our specific subarea within our favorite IC. X can be our model system or analytical approach or level of analysis. X can be our home institution’s ZIP code, or prestige, or type within the academic landscape.
And of course, our study section isn’t giving us a good score because of a conspiracy. Against X or against ourselves, specifically. It’s a good old insider club, doncha see, and we are on the outside. They just give good scores to applications of the right X or from the right people who inhabit the club. The boys. The white people. The Ivy League. The R1s. Those who trained with Bobs. Glam labs. Nobel club.
Well, well, well, the OIG / HHS report has verified all of your deepest fears.
NIH Has Acted To Protect Confidential Information Handled by Peer Reviewers, But It Could Do More [OEI-05-19-00240; March 2020; oig.hhs.gov; Susanne Murrin, Deputy Inspector General for Evaluation and Inspections].
Let’s dig right in.
In his August 2018 statement on protecting the integrity of U.S. biomedical research, NIH Director Dr. Francis Collins expressed concern about the inappropriate sharing of confidential information by peer reviewers with others, including foreign entities.5 At the same time, Dr. Collins wrote to NIH grantee institutions to alert them to these foreign threats, noting that “foreign entities have mounted systematic programs to influence NIH researchers and peer reviewers.”6 As an example of these programs, NIH’s Advisory Committee to the Director warned NIH of China’s Thousand Talents plan, which is intended to attract talented scientists while facilitating access to intellectual property
…
Additionally, congressional committees have expressed concerns and requested information about potential threats to the integrity of taxpayer-funded research, including the theft of intellectual property and its diversion to foreign entities.8, 9 In a June 2019 Senate hearing, NIH Principal Deputy Director Dr. Lawrence Tabak testified that NIH was “aware that a few foreign governments have initiated systematic programs to capitalize on the collaborative nature of biomedical research and unduly influence U.S.-based researchers
So the rumors are true. It’s about the Chinese. One of the reasons I’ve been holding off blogging about this during the whispers and hints era was this. This may be why NIH itself has been so circumspect. Nobody wants to conflate what looks like racism along with what appears to be state-sponsored activity to take advantage of our relatively open scientific system. Many academic scientists love to bleat about the wonderful international nature of the scientific endeavor. I like it myself and occasionally reference this. I wish it was not inevitably and ultimately wrapped up in geo-politics and what not. But it is. Science influences economic activity and therefore power.
I am on the record as a protectionist when it comes to academic employment in the public and public-funded sectors. I don’t think we need hard absolute walls but I also think in hard times we raise serious and very high barriers to funding NIH grants to foreign applicant institutions. I think, of course, that we need to take a harder look at employment politics. As in any other sector, immigrant postdocs and graduate students often devalue the labor market for domestic employees. I’d like to see a little more regulation on that to keep opportunities for US citizens prioritized.
But I also appreciate that we are an immigrant nation founded on the hard work of immigrants who often ARE more eager to work hard than native born folks (of which I am one, people. I’m including myself in the lazy sack category here). Hard. So we need to have some academic science immigration, of course. And I am not that keen on traditional lines of white supremacy dictating who gets to immigrate here to do science.
So, when I started getting the feeling this was directed specifically at the Chinese, let’s just say the hairs on my neck went up.
But, this report makes it pretty clear this is the problem. They are targeting this “Thousand Talents” effort of China very specifically and are going after US-employed scientists who do not report financial conflicts….from China. And other sources, but…the picture in this report is sharp.
I have heard of more than one local investigator who had a Chinese lab or company which they were not reporting appropriately. They also held NIH funds and so were disciplined. Grants were pulled. At least one person has disappeared back to China. At least one person is apparently under some sort of NIH suspension but the grants are still running out the clock on the current fiscal year so I can’t quite validate the rumors. A multi-year suspension from grant seeking is being whispered around the campfire.
So what about the reviewers? Where does this come in?
As of November 2019, NIH had flagged 77 peer reviewers across both CSR- and IC-organized study sections as Do Not Use because of a breach of peer review confidentiality. A reviewer who is flagged as Do Not Use may not participate in further study section meetings or review future applications until the flag is removed
…Between February 2018 and November 2019, NIH terminated the service of 10 peer reviewers who not only had undisclosed foreign affiliations, but had also disclosed confidential information from grant applications. For example, some of these reviewers shared critiques of grant applications with colleagues or shared their NIH account passwords with colleagues.
There is a bunch more of this talk in bullet points about reviewers being suspended or under investigation for both violations of peer review and undisclosed foreign conflicts of interest. It could be companies or funding, although this is not clearly specified. Then….. the doozy:
As of November 2019, NIH dissolved two study sections because of evidence of systemic collusion among the reviewers in the section. At least one instance involved the disclosure of confidential information. NIH dissolved the first study section in 2017 and the second in 2018. All grant applications that the study sections reviewed were reassessed by different reviewers.
AHA! There IS a conspiracy against your grants. Look, this is bad. I’m trying to maintain some humor here, but the fact is that this would be relatively easy to pull off, so long as the conspirators were all on board and nobody ratted. What would you need? A third of a study section? A quarter? Half? I dunno, but it isn’t *that* many people. Some are in on the main conspiracy (puppeted by a foreign government?), some are willing pawns because their own grants do well, some are just plain convinced by their buddies that this is how it actually works here. And if they are all in contact, how long would it take? Five-minute phone conversations about how they need to support applications from A, B and C and run down those likely looking top-scoring apps from X, Y and Z?
I don’t know how they caught these conspiracies but there were probably emails to go along with the forensic evidence on their foreign conflicts of employment, affiliation and funding. Oh wait, the report tells us:
One way NIH learns about instances of possible undue foreign influence is through its national security partners. Since 2017, NIH has increasingly worked with the FBI on emerging foreign threats to NIH-funded research. NIH reported that in 2018, the FBI provided it with referrals of researchers—some of whom were also peer reviewers—who had NIH grants and were alleged to have undisclosed foreign affiliations.
It also says that program staff may have noticed papers that cited funding that has not been disclosed properly (on the Other Support that PIs have to file prior to funding, I presume).
As of November 2019, NIH determined that allegations against 207 researchers were potentially substantiated. Of those 207 researchers, NIH determined that 129 had served as peer reviewers in 2018 and/or 2019. NIH designated 47 of these 129 peer reviewers as Do Not Use. When OIG asked NIH about the remaining 82 peer reviewers—i.e., those who had potentially substantiated allegations but who had not been designated as Do Not Use—NIH did not respond.
What the heck? Why not? This is the IG ffs. How do they “not respond”?
Between February 2018 and November 2019, NIH confirmed 10 cases involving peer reviewers who were stealing or disclosing confidential information from grant applications or related materials and who also had undisclosed foreign affiliations. Two of these 10 cases involved peer reviewers who were selected for China’s Thousand Talents program. The breaches of confidentiality included disclosing scoring information, sharing study section critiques, and forwarding grant application information to third parties. In some of these instances, reviewers shared confidential information with foreign entities.…In two cases, NIH dissolved a study section
So the worst of the worst. How long had this been going on? How many proposals were affected? How many ill-gotten grant awards aced out more legitimate competitors? Were those PIs made whole? (hahaha. Of course not.) For the dissolved study sections, just how bad WAS it?
Look, I’m glad they caught this stuff. But I have no confidence that we are getting anything even remotely like a full picture here. The tone seems to be that this was sparked by some pretty egregious violations of Other Support declarations, leading to scrutiny of those PIs who happened to review grants. The NIH then managed to find evidence (confessions?) of violations of peer review rules. The description of the actual peer review violations leans heavily on inappropriate disclosure of confidential information. Showing critiques and grants to people who have no right to see them. Is this all it was? This is what led to a study section dissolution? Or, as I would suspect, was a lot more going on with grant-score-deciding behavior? That is what should lead to dissolution of a section, but it is a lot harder to prove than “clearly you gave your password to someone who is logging in from half a world away two hours after you logged in from the US”. I want answers to these harder questions: how are these conspiracies and conflicts leading to funding for those inside the conspiracy and the loss of funding for those who are not?
NIH is highly motivated to soft-pedal that part. Because they are really, really, REALLY motivated to pretend their system of grant selection works to fund the most meritorious science. Probing into how easy it would be to suborn this process, as a single rogue reviewer OR as a conspiracy, is likely to lead to very awkward information.
I never feel that NIH takes participant confidence in their system of review and grant award seriously enough. I don’t think they do enough to reassure the rank and file that yes, it IS their intent to be fair and to select grants on merit. Too many participants in extramural grant review, as applicants and as reviewers, continue to talk with great confidence and authority about what a racket it is and how there are all these specific unfairnesses I alluded to above.
Well, what happens if reviewers believe that stuff?
“Everybody is doing it” is the most frequent response when scientists are caught faking data, right? Well….
A loss of confidence in the integrity of NIH review is going to further excuse future misdeeds in the minds of reviewers themselves. If the system is biased against model systems, it’s okay for me, Captain Defender of Model Systems Neuroscience, to give great scores to fly grants, right? I’m just making up for the bias, not introducing one of my own. If the system is clearly biased in favor of those soft-money, high-indirect-cost professional grant writers, then hey, it is totally fair for me, Professor of Heavy Teaching Load Purity, to do down their grants and favor those of people like me, right? It’s just balancing the scales.
Because everyone knows the system is stacked against me.
Do it to Julia, not me, Julia!
I think the NIH needs to do far more than blame the dissolution of two study sections on foreign influence and call it a day. I think they need to admit how easy it was for such efforts to corrupt review and to tell us how they can put processes in place to keep review cartel behavior, explicit OR IMPLICIT, from biasing the selection of grants for funding.
They need to restore confidence.
There’s a podcast (mp3) on this linked to the Open Mike blog here.
The NIH is going to reduce its use of Program Announcements to advertise its interest in receiving applications on scientific topics. Previously, various stripes of PA, including the PAS (set-aside funds, like an RFA) and PAR (special emphasis panel convened for review), were published to solicit grant applications on specific topics. The above-linked podcast indicates that now these interests will be advertised with a Notice of Special Interest (NOSI). The NOSI are published as a Notice (e.g., NOT-DA-20-039) and they point to existing FOAs, like the parent R01, R21, etc., as the FOA you actually will apply under.
One super key point is that you have to indicate the specific NOSI as follows in your submission:
For funding consideration, applicants must include “NOT-XX-20-0xy” (without quotation marks) in the Agency Routing Identifier field (box 4B) of the SF424 R&R form. Applications without this information in box 4B will not be considered for this initiative.
As the podcast indicates, normally you wouldn’t put anything there. And it also warns that your application may not be considered under the NOSI interests if you forget. “May not”. Yeah.
Note that as with the specific PAs, these NOSI may have different key dates, other submission instructions or variations on the usual mechanism themes. Total duration of an R01 might be limited to 3 years, for example. So read carefully.
You may want to know, as I do, why the NIH is doing this. Well, the only thing repeated in the podcast that made any sense was the speed of getting these approved. Dr. Jodi Black, Deputy Director of NIH’s OER, said on the podcast that they can get the information out quicker. She claimed that the old way might require up to a year to get a PA approved and published, whereas they can get these out in 4-5 weeks, since the FOA itself (i.e., the parent R01 FOA) is already approved.
Ok, sounds great. Everything is faster.
Does it get you anything? Is it worth it to pay attention to the weekly NIH guide and scrutinize it for NOSI that might work for you? Should you use NOSI?
Yes. Ish.
Dr. Black seemed to be saying that the prior PA purpose was, and the NOSI purpose is, to advertise NIH interests. A naive PI, as I once was, might assume that if you have work that fits really, really well with the PA that this should get you some extra benefit. Like, some credit at review for meeting the goals of the PA.
A naive PI, as I once was, might likewise assume that meeting the goals of the PA was one of the only ways to get Program on your side for a pickup of a borderline score.
When I got on study section I was quickly disabused of the notion that meeting the goals of the PA did much for you during initial review in a standing panel. Sure, the diligent reviewers would notice if the application was under a PA that described a scientific interest. Especially if they were favorably disposed towards the grant already. But I can’t recall a single case where it seemed to make much difference. The grants were reviewed on the grounds typical for the section. “Significance” was reviewed based on the reviewer’s own view of the importance, not that one of the NIH ICs had stated what was important to them. Occasionally I have even heard what amounted to jury nullification or revolt, where the reviewer was essentially in disagreement with the IC’s expression of importance for the topic!
BTW, sex differences research topics were often the subject of focal PA from my ICs of closest interest. We all know how that went.
As I gained more experience with the murky view on Program decision making and pickups, I came to the conclusion that PAs weren’t all that much help with borderline pickups either. Unless it was the very specific case of a PAS or RFA with funds already committed, there was never any guarantee (*actually not even with an RFA, but you know what I mean) that any grants would be funded for a given round. Program would certainly pick up a parent R01 proposal over a specific PA/RFA proposal if it met their other suite of interests (e.g., “our long term funded investigator, no not you DM, is running out of money”).
I still submit apps relevant to focal PAs and I now submit apps under NOSI. Why not, right? It can’t hurt, is my thought. But I have a much, much lower estimate of how much it might help to get my grants funded.
On a recent case, for example, I was bouncing emails around with some Program Officers and eventually one of them let it slip that the IC in question may be listed on the NOSI but they really didn’t expect to put any money into it. I interpreted this as “meh, if there’s a REALLY good one from our buddy PI who we (meaning the IC brass / powers that be) luuurv to pieces then maaaaaybe it will get some special consideration”. I ended up not doing an app for that particular “special interest”.
(Yes of COURSE my lab is the best possible lab for that particular interest…but without POs being on board with the topic in any real, hard dollars kind of way…it’s a waste of time.)
But since you never know when a NOSI is going to tip the difference on an application that just missed the payline by a smidge, I’m going to keep paying attention to NOSI announcements.
It is that time of year when NIH issues a notice covering some long-standing prohibitions against spending their grant money on certain topics. NOT-OD-20-066 reads in part:
(a) No part of any appropriation contained in this Act or transferred pursuant to section 4002 of Public Law 111– 148 shall be used, other than for normal and recognized executive legislative relationships, for publicity or propaganda purposes, for the preparation, distribution, or use of any kit, pamphlet, booklet, publication, electronic communication, radio, television, or video presentation designed to support or defeat the enactment of legislation before the Congress or any State or local legislature or legislative body, except in presentation to the Congress or any State or local legislature itself, or designed to support or defeat any proposed or pending regulation, administrative action, or order issued by the executive branch of any State or local government, except in presentation to the executive branch of any State or local government itself.
(b) No part of any appropriation contained in this Act or transferred pursuant to section 4002 of Public Law 111–148 shall be used to pay the salary or expenses of any grant or contract recipient, or agent acting for such recipient, related to any activity designed to influence the enactment of legislation, appropriations, regulation, administrative action, or Executive order proposed or pending before the Congress or any State government, State legislature or local legislature or legislative body, other than for normal and recognized executive-legislative relationships or participation by an agency or officer of a State, local or tribal government in policy making and administrative processes within the executive branch of that government
You can see the weasel words, of course. The deployment of “designed to” could mean a whole host of things when it comes to “publication” or “electronic communication”. But still, the message one tends to receive here is that if some Congress Critter gets all up in a snit about it, you could be in trouble for publishing any studies or reviews or opinion pieces that tend to have political/public policy implications.
Gotta be honest folks, I think the vast majority of what I do could possibly have public policy implications. Now, of course, most of what I do falls on the seemingly good side of one of the specific issues of concern to Congress about what I publish.
None of the funds made available in this Act may be used for any activity that promotes the legalization of any drug or other substance included in schedule I of the schedules of controlled substances established under section 202 of the Controlled Substances Act except for normal and recognized executive congressional communications.
But you can see where someone might get nervous about whether or not some aspect of their study of, oh, cannabis or THC or cannabidiol (CBD), just picking one out of the hat not really, might be viewed as “promoting legalization”. Especially if some advocates happened upon some result or other and started using your paper as their Exhibit A…. You can also see where the cannabis proponents (especially the medical advocates) might view this as the root of the conspiracy to scientifically demonize their favorite plant. And maybe it is, maybe it is… Congress will argue that they’ve thought this all through!
(b) The limitation in subsection (a) shall not apply when there is significant medical evidence of a therapeutic advantage to the use of such drug or other substance or that federally sponsored clinical trials are being conducted to determine therapeutic advantage. “
…..but this doesn’t help, right? The whole point of doing basic and pre-clinical and even clinical research (not trials, research) is to determine if there even IS any “significant medical evidence of a therapeutic advantage”, right? This escape clause reads like you have to pull that medical evidence out of a non-Federally-funded hat before you can then do more research which might tend to “promote the legalization” of, e.g., cannabis.
But I digress. Oh, look SQUIRREL!
(2) Gun Control (Section 210)
“None of the funds made available in this title may be used, in whole or in part, to advocate or promote gun control.”
Yeah, that’s still in there. Notwithstanding Republican protestations that there really isn’t a ban on research on gun-related harms every time they get put in the crosshairs (oops) by the press in the wake of a mass shooting.
But I digress. Again. The thing I really wanted to discuss is:
(3) Anti-Lobbying (Section 503)
“ (a) No part of any appropriation contained in this Act or transferred pursuant to section 4002 of Public Law 111– 148 shall be used, other than for normal and recognized executive legislative relationships, for publicity or propaganda purposes, for the preparation, distribution, or use of any kit, pamphlet, booklet, publication, electronic communication, radio, television, or video presentation designed to support or defeat the enactment of legislation before the Congress or any State or local legislature or legislative body,
….
(c) The prohibitions in subsections (a) and (b) shall include any activity to advocate or promote any proposed, pending or future Federal, State or local tax increase, or any proposed, pending, or future requirement or restriction on any legal consumer product, including its sale or marketing, including but not limited to the advocacy or promotion of gun control.”
“any legal consumer product”. WOWIEE. Yes guns, but at present this includes all kinds of barely-regulated supplements and quack remedies (hi CBD!), cigarettes, e-cigarettes, organic and GMO/anti-GMO foodstuffs…. the list goes on and on. And I don’t know how “services” might be distinguished from “product” but this might include chiropractic and aromatherapy and meditation and hot yoga and who knows what else that falls into the probably-woo camp.
Maybe this was always in this anti-lobbying section and I just never noticed or realized the full implications, as written.
My concern is not really that the NIH will come after my or my institution for a refund should any Congress Critter decide to make hay against one of my papers under this prohibition.
It’s that NIH ICs will run scared before this and be highly conservative in terms of what they fund, lest it run afoul of Congress.
Yeah, I had a ringside seat at one of these in the past so it’s not a theoretical concern. It’s something that should concern all of us.
NIH Discontinues Continuous Submission for Frequent Service With A Gaslighting Excuse
January 28, 2020
The Notice NOT-ED-20-006 rescinds the continuous submission privilege for the “recent substantial service” category that has been in place since 2009 (NOT-OD-09-155). This extended the privilege that had been given to people who were serving an appointed term on a study section (NOT-OD-08-026). The qualification for “recent substantial service” meant serving as a study section member six times in an 18 month interval. In comparison, an appointed member of a study section serves in 3 meetings per year maximum, with the conversion to 6 year options entailing only two rounds per year. As a reminder, the stated goal for this extension was “to recognize outstanding review and advisory service, and to minimize disincentives to such service”. This is why it is so weird that the latest notice rescinding the policy for “substantial service” seems to blame these people for having some sort of malign influence. To wit: “prior policy had unintended consequences, among them encouraging excessive review service and thus disproportionate influence by some.”
Something smells. Really, really badly.
There is a new CSR blogpost up on Review Matters, by the current Director of CSR Noni Byrnes, which further adds to the miasma. She starts off with stuff that I agree with 1,000 percent.
The scientific peer review process benefits greatly when the study section reviewers bring not only strong scientific qualifications and expertise, but also a broad range of backgrounds and varying scientific perspectives. Bringing new viewpoints into the process replenishes and refreshes the study section, enhancing the quality of its output.
I have blogged many a word that addresses this topic in various ways. From my comments opposing the Grande Purge of Assistant Professors started by Toni Scarpa, to my comments generally on the virtues of the competition of biases to address inevitable implicit bias to my pointed comments on the Ginther finding and NIH’s dismal response to same. I agree that broadening the participation in NIH peer review is a good goal. And I welcome this post because it gives us some interesting data, new to my eyes.
As of January 1, 2020, there were 22,608 individuals with active R01 funding. Of these, 30% (6715) have served one to five times, and 18% (4074) have never served as a reviewer in the last 12 years. Of those who have served only one to five times over 12 years, 26% are assistant professors and 34% are associate professors.
Cool, cool. At least it is a starting point for discussion. Should they be trying to reduce that 18% number? Heck yes. To what? I don’t know. Some of this is structural in the sense that someone just awarded their first R01 is probably less likely to have a service record within the next 3 months. Right? So…5%? The question is how to do this, why are the 18% being overlooked, etc.

Well, if you are the head of CSR you know in your bones that peer review service is opt-in…but only opt-in upon request from a CSR (or in limited cases IC-associated) SRO. So the 18% needs to be parsed into those who have never been asked and those who have refused. Those that have been asked several times (3+ over time?) and those that have only been asked once (I mean, stuff happens and you aren’t always available when requested). And Director Byrnes is sorta, half-heartedly, putting the blame where it belongs, pending data on refusal rates, on the SROs: “In an effort to facilitate broader participation in review, we are making these data available to SROs and encouraging them to identify qualified and scientifically appropriate reviewers, who may not have been on their radar previously.”

“Encouraging”. Gee, for some reason, the SROs I talked to during the Scarpa Grande Purge suggested that he was doing a lot more than mere “encouragement” to get rid of Assistant Professors. And in full disclosure more than one SRO alluded to fighting back and slow-walking since they disagreed with Scarpa’s agenda. But both of these things suggest that Byrnes is going to have to do more than just show her SROs the data and ask them nicely to do better.
Then the blog post goes into a topic I think I’ve planned to blog, and failed to do so, for years. Disproportionate influence of a given reviewer, by virtue of constant and substantial participation in peer review of grants. This is a tricky topic. As I said, the system is opt-in upon request. So a given reviewer is at least partially to blame for the number of panels he or she or they serve on. The blog post has a nice little graph of distribution of the 12 year service history of anyone who has been on a CSR panel in the past two years.
However, one aspect of broadening the pool of reviewers is to avoid excessive review service by a small fraction of people, which can lead them to have a disproportionate effect on review outcomes. We are looking into the issue of undue influence, or the “gatekeeper” phenomenon, where a reviewer has participated in the NIH peer review process at a rate much higher than their peers, and thus has had a disproportionate effect on review outcomes in a given field.
Look, Dear Reader, by now you know what the primary analysis from the peanut gallery will be. If you think a given reviewer hates your work, your approaches, you, your pubs, etc, you think they are having undue influence on the study section to which you are submitting your grants. It is particularly inflaming when you can’t seem to escape them because no matter whether you send stuff to your best fit study section, various SEPs, try a different mechanism, etc….up they pop. Professor Antagonist the Perma-Reviewer. On the other hand, if a reviewer that you think is sympathetic to your proposals keeps showing up, heck you wouldn’t complain if that continued on 75% of the sections your grants are reviewed in for decades. Right?
Director Byrnes drew her first line on the chart at 1-36 meetings per 12 year interval. Now me, I think I want to see something a little bit closer and more segmented on that. Three meetings, year in, year out for 12 years does seem like a fairly substantial and outsized influence. One per year does not. One 12 round interval of service (three rounds per year in 4 years or two in 6) as an appointed reviewer seems okay to me. The chart then shows quite a number of people in the 37-72 meeting range (5% of the sample) and even some folks in the 73+ range (1%ish). The way they are talking about undue influence it seems like they should be in the low single digits, right?
But they are not. The minimum standard was 6 review panels per 5 rounds. This is one more than is standard for an empaneled reviewer. And she or he could always just pick up an extra one, right? And I went to view the video of the Advisory Council meeting linked in the blog post and there is a suggestion that the real problem is reviewers begging SROs for an assignment at the last minute to keep their eligibility. Right? So they are for sure pointing the finger at people who meet the bare minimum. Which is not much different from that “influence” wielded by a term of service.
And probably even less. Why? Because if you are cobbling your 6 out of 5 rounds from ad hoc requests it very likely entails a smaller load. SEP service, in my experience, means a smaller review load per panel. So does ad hoc service on established panels, frequently, because the SRO is trying not to annoy the ad hocs and the empaneled folks have buy-in. The Advisory Council discussion came oh so close to covering this but veered away into distraction, in part because Director Byrnes started talking about voting scores as being more important than reviews written…but this is also correlated. For the SEPs, fewer items per reviewer often comes with a smaller overall panel load compared with a standing panel. I’ve been on established study sections that routinely have anywhere from 60-90 apps per round. Rarely, if ever, have I been on a SEP with more than about 30.
I really don’t understand the CSR logic here.
The only thing that makes any sense is that they are tired of having to route so many last-minute applications once SROs of standing panels have started trying to assign apps and recruit ad hoc members. And maybe tired of having to convene 5-15 app SEPs to deal with the overflow. Certainly my personal experience has been that in the past few years my continuous submissions go to SEPs and are refused by the standing panel SROs. This never used to happen to me in the first years of this policy.
But who knows.
Neat new feature in the NIH Data book lets you search funding rates by priority score for R01s
January 10, 2020
The NIH has now automated, at least for R01/R37, the charts for grants funded / unfunded by priority score in this new part of the RePORT page. You can select all of NIH or go IC by IC using the dropdown menu. Fiscal Years 2014-2018 are currently available. As a reminder, this is a long-delayed followup to analyses that Jeremy Berg pioneered when he was the Director of NIGMS. They looked like this:

When Berg and Michelle Kienholz (aka writedit) published their “How the NIH can Help You Get Funded” book in 2014, they managed to extract similar data from a larger subset of the ICs, which looked like this:

There is not much new under the sun, so you will find that playing around with the FY2018 data on the website pretty much tells the same sort of tale. ICs such as NINDS continue to have a relatively strict payline policy where almost everything under a certain number is funded and relatively little above that line gets funded (the bump is most likely the ESI policy). Other ICs (NIDA, NIMH) have a more graded approach. There’s an apparent payline under which almost everything gets funded, a point above which very little gets funded and a “grey zone” in between. The slopes in the above grey zones show some differences in their breadth and, again, bumps that appear to represent ESI funding policy do occur (NIA in particular).
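If you want to build this sort of chart for your own IC rather than relying on the dropdown, here is a minimal sketch of the underlying arithmetic. It assumes you have exported application records as (priority score, funded?) pairs; the function name and the toy data are mine, not anything RePORTER provides.

```python
from collections import defaultdict

def award_rate_by_score_bin(records, bin_width=10):
    """Group applications into priority-score bins (10-19, 20-29, ...)
    and compute the fraction funded in each bin.

    records: iterable of (priority_score, funded) pairs, where
    priority_score is the 10-90 NIH impact score and funded is a bool.
    """
    counts = defaultdict(lambda: [0, 0])  # bin start -> [funded, total]
    for score, funded in records:
        b = (score // bin_width) * bin_width
        counts[b][1] += 1
        if funded:
            counts[b][0] += 1
    return {b: funded / total for b, (funded, total) in sorted(counts.items())}

# Toy data illustrating a hard-payline IC: everything well under the line
# funded, a grey zone in the middle, nothing at the high (bad) end.
toy = [(12, True), (18, True), (25, True), (28, False),
       (33, True), (37, False), (45, False), (52, False)]
rates = award_rate_by_score_bin(toy)
# rates -> {10: 1.0, 20: 0.5, 30: 0.5, 40: 0.0, 50: 0.0}
```

A strict-payline IC like NINDS would show rates near 1.0 up to some bin and near 0 thereafter; the graded ICs show a sloping grey zone instead.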
Diversity and Disadvantage
December 9, 2019
Mike Lauer, head of NIH’s Office of Extramural Research, has a blog post up which points to new and expanded “diversity” criteria for Administrative Supplements and other purposes. The Notice is: NOT-OD-20-031. The blog post includes the fact that fewer than 1% of the diversity supplements they awarded in 2018 were for the “disadvantaged background” criterion. It also shows that the vast majority of applications were under Hispanic or African-American categories (and the success rates for those were 70% and 62%, respectively).
The old “disadvantaged” criteria were:
Individuals who come from a family with an annual income below established low-income thresholds
Individuals who come from an educational environment such as that found in certain rural or inner-city environments that has demonstrably and directly inhibited the individual from obtaining the knowledge, skills, and abilities necessary to develop and participate in a research career.
The second one is almost laughably imprecise and amorphous, and apparently, instead of resulting in a deluge of applications, it resulted in very few. So they’ve decided to expand and elaborate:
Were or currently are homeless, as defined by the McKinney-Vento Homeless Assistance Act
Were or currently are in the foster care system, as defined by the Administration for Children and Families;
Were eligible for the Federal Free and Reduced Lunch Program for two or more years;
Have/had no parents or legal guardians who completed a bachelor’s degree (see the U.S. Department of Education);
Were or currently are eligible for Federal Pell grants;
Received support from the Special Supplemental Nutrition Program for Women, Infants and Children as a parent or child;
Grew up in one of the following areas: a) a U.S. rural area, as designated by the Health Resources and Services Administration Rural Health Grants Eligibility Analyzer, or b) a Centers for Medicare and Medicaid Services-designated Low-Income and Health Professional Shortage Areas (qualifying zip codes are included in the file). Only one of the two possibilities in #7 can be used as a criterion for the disadvantaged background
So there you have it. More opportunities for those who are at disadvantage in the sciences to get support. Most pointedly, these individuals will qualify for Research Supplements to Promote Diversity in Health-Related Research (PA-18-586). These are administrative supplements, meaning any PI of a host of research grant mechanisms can request additional funds to support staff at any level ranging from high school students to investigators. No kidding!
My main purpose here is advertising/PR/education to the PI and to prospective candidates, as per usual. If you are, or know of, a candidate that fits, it may be worth trying this mechanism to get support. These new expanded definitions of socio-economic disadvantage may make it easier to determine who fits, relative to the prior criteria.
Do note that if you are a prospective candidate, you may have to self-identify to a PI. I mean, this is also the case for racial / ethnic qualifications, of course. But that’s hard enough for the PI to parse. Believe me, “say, are you some sort of minority that qualifies” is not an easy conversation to have with prospective trainees as it is. People of majoritarian presentation who may have no particular expression of their childhood disadvantage are even less likely than those with certain surnames or apparent skin tone to trigger an inquiry.
Moving on to the editorial part……you knew there would be a “but”……
I have always been a bit suspicious of efforts to add socio-economic considerations to affirmative action / diversity efforts. These come in, I have seen, whenever an institution appears to be under assault from anti-affirmative action positions that are mostly against giving opportunities to African-American, Hispanic and Native-American individuals. It isn’t that I don’t think socio-economic disadvantage is bad for the academy; I do. And in the best of worlds I would love it if we added this as an “also”. Which, given the less than 1% stats reported by Lauer, the NIH program has been until this point. It does not appear to have chipped away at the awards to Hispanic or African-American individuals in any large numbers. So….great. That’s the tactical angle, and I will be looking to see if Lauer updates us over time as to how these proportions are changing with the re-defined language.
There’s also a strategic angle: the recasting of affirmative action as a means to redress individual, personal disparity. This has been pursued by anti-affirmative action voices and has been a matter of craven capitulation from those who should know better.
Affirmative action, done right, is to address the systematic problems. A given University, say, that lacks a diverse faculty body, isn’t concerned with specific individuals. It is concerned with increasing the diversity of its faculty overall and it can’t expect this to be precise. It isn’t trying to be fair to Joe Smith who somehow deserves a position at that particular University.
The idea of enhancing diversity of the faculty is to enhance the diversity of the instruction and scholarship and other perspectives embodied by the professors. Should any one person be obliged to cover all the bases at once? Is the ideal candidate for diversity poor, LGBTQ+, female, of color, disabled etc? Of course not, that’s a Bill Maher bit.
And this is the slope that we start down when we include socio-economic disparity in the diversity sphere. Combined with the aforementioned misdirection that this is about personal fairness, we open the door to the idea that the only legitimate diversity hire is the one where you can prove individual suffering from socio-economic disparity. It doesn’t matter that the person may face systemic discrimination and bias relative to others with their own background, you see. It doesn’t matter what perspectives they can bring to bear. Because we’re in the Oppression Olympics now, baby. And we’ve now moved to arguing that only by demonstrating individual adversity relative to everyone else have we achieved true progress towards identifying individuals who deserve diversity of opportunity.
This is a mistake.
And I will be keeping my weather eye on the NIH to see how they behave with this newly expanded definition.
NIH/CSR reverts to random-order grant review
September 13, 2019
A tweet from Potnia Theron alerts us to a change in the way the CSR study sections at NIH will review grants for this round and into the future. A tweet from our good blog friend @boehninglab confirms a similar story from the June rounds. I am very pleased.
When I first started reviewing NIH grants, the grants were ordered for discussion in what appeared to be clusters of grants assigned to the same Program Officer or at least a given Program Branch or Division. This was back when a substantial number of the POs would attend the meeting in person to follow the discussion of the grants which might be of interest to them to fund. Quite obviously, it would be most efficient and reasonable for a SRO to be able to tell a PO that their grants would all be discussed in, e.g., a two hour contiguous and limited time interval instead of scattered randomly across a two day meeting interval.
Importantly, this meant that grants were not reviewed in any particular order with respect to pre-meeting scores and the grants slated for triage were ordered along with everything that was slated to be discussed.
When we shifted to reviewing grants in ascending order of preliminary score (i.e., best to worst) I noticed some things that were entirely predictable*. These things had a quelling effect on score movement through the discussion process for various reasons. Now I do say “noticed”. I have not seen any data from the CSR on this and would be very interested to see some before / after for the prior change and for the current reversion. So I cannot assert any strong position that indeed my perceptions are valid.
This had the tendency to harden the very best scores. Which, btw, were the ones almost guaranteed to fund since this came along during a time of fixed budget and plummeting paylines. Still, the initial few projects were not as subject to…calibration…as they may have been before. When you are facing the first two proposals in the round, it’s easy for everyone to nod along with the reviewers who are throwing 2s and saying the grant is essentially perfect. When you get such a beast on day 2, when you’ve already battled through a range of issues…it’s more likely someone is going to say “yeah but whattabout….?”
It’s axiomatic that there is no such thing as an unassailable “perfect” grant proposal. Great scores arise not because the reviewers can find no flaws but because they have chosen to overlook or downplay flaws that might have been a critical point of discussion for another proposal. The way the NIH review works, there is no re-visitation of prior discussions just because someone realizes that the issue being used to beat the heck out of the current application also applied to the one discussed five grants ago that was entirely downplayed or ignored. This is why, fair or not, discussion tends to get more critical as the meeting goes on. So in the old semi-random order, apps that had good and bad preliminary scores were equally subject to this factor. In the score-ordered era, the apps with the best preliminary scores were spared this effect.
Another factor which contributed to this hardening of the preliminary score order is the “why bother?” factor. Reviewers are, after all, applicants and they are sensitive to the perceived funding line as it pertains to the scores. They have some notion of whether the range of scores under current discussion means “this thing is going to fund unless the world explodes“, “this thing is going to be a strong maybe and is in the hunt for gray zone pickup” or “no way, no how is this going to fund unless there is some special back scratching going on“. And believe you me they score accordingly despite constant admonishment to use the entire range and that reviewers do not make funding decisions™.
When I was first on study section the SRO sent out scoring distribution data for the prior several rounds and it was awesome to see. The score distribution would flatten out (aka cluster) right around the operative perceived score line at the time. The discussions would be particularly fierce around that line. But since an app at any given score range could appear throughout the meeting there was motivation to stay on target, right through to the last app discussed at times. With the ordered review, pretty much nothing was going to matter after lunch on the first day. Reviewers were not making distinctions that would be categorically relevant after that point. Why bother fighting over precisely which variety of unfundable score this app receives? So I argue that exhaustion was more likely to amplify score hardening.
I don’t have any data for that but I bet the CSR does if they would care to look.
These two factors hit the triage list in a double whammy.
To recap, anyone on the panel (and not in conflict) can request that a grant slated not to be discussed be raised for discussion. For any reason.
In the older way of doing things, the review order would include grants scheduled for triage; the Chair would come to one, note that it was triaged and ask if anyone wanted to discuss it. Mostly everyone just entered ND on the form and moved on to the next one. Sometimes, however, a person wanted to bring an application up out of triage and discuss it.
You can see that if such an application came up as, say, the third proposal on the first day, the psychology of pulling it up would be quite different than if it were scheduled last in the meeting on day 2, when everyone is eager to rush to the airport.
In the score-ordered way of doing things, this all came at the end, when reviewers’ minds were already on early flights and they had sat through many hours of “why are we discussing this one when it can’t possibly fund?”. The pressure not to pull up any more grants for discussion was severe. My perception is that the odds of an application being pulled up for discussion went way, way, way down. I bet CSR has data on that. I’d like to see it.
I don’t have full details on whether the new review-order policy will include triaged apps or be some sort of hybrid. But I hope it returns to scheduling the triaged apps right along with everything else so that they have a fairer chance to be pulled up for discussion.
__
*and perhaps even intentional. There were signs from Scarpa, the CSR Director at the time, that he was trying to reduce the number of in-person meetings (which are very expensive). If scores did not change much (and if the grants selected for funding did not change) between the pre-meeting average of three people and the eventual voted score, then meetings were not a good thing to have. Right? So a suspicious person like myself immediately suspected that the entire goal of reviewing grants in order of initial priority score was to “prove” that meetings added little value by taking structural steps to reduce score movement.
The NIH’s extramural workforce craves stability
June 5, 2019
There was a little twitter discussion yesterday about the distribution of NIH funds, triggered in no small part by a couple of people tweeting about how the NIH modular limit ($250,000 direct per year) hasn’t kept up with inflation. I, of course, snarked about welcome-to-the-party since we’ve been talking about this on the blog for some time.

This then had me meandering across some of my favorite thoughts on the topic. Including the fact that a report from the initial few rounds after introducing modular budgeting found that 92.4% of grants were submitted under the limit and that 41% requested either $175,000 or $200,000 in annual direct costs. The lower number was the mode but only just barely. There’s more interesting stuff linked at this NIH page evaluating the modular budget process. Even better, there are some “Historical Documents” linked here. The update document [PDF] after two submission cycles contains this gem:
NIH data indicate that almost 90 percent of competing individual research project grant (R01) applications request $250,000 or less in direct costs. On the basis of this experience, the size of the modules and the maximum of $250,000 were selected.
It could not be any clearer. The intent of the NIH selecting the cap that they did was to capture the vast majority of proposals. And in the first several rounds they did, well above 90%. Furthermore the plurality of grants did not ask for the cap amount. This has changed as of 2018.
The declining purchasing power of $250K due to inflation has driven a number of trends. More grants coming in as traditional budgets above the cap. More grants under the cap asking for the full amount. And, as we know from the famous mythbusting post from the previous OER honcho Sally Rockey, there was a small increase from 1986 to 2009 in the proportion of PIs holding 2 or 3 concurrent awards; this was most pronounced among the top 20% best-funded investigators, who now felt it necessary to secure 3 concurrent awards.
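To make the inflation point concrete, here is a rough back-of-envelope sketch. The 2.2% average annual rate and the 1999–2019 window are my illustrative assumptions, not an official CPI series; the point is only that a static cap compounds into a large purchasing-power gap.

```python
# Sketch: what would a 1999-era $250,000 modular cap need to be in a later
# year to preserve the same purchasing power? The inflation rate here is an
# assumed illustrative average, NOT official CPI data.

def inflation_adjusted_cap(base_cap=250_000, start_year=1999,
                           end_year=2019, annual_rate=0.022):
    """Compound the cap forward at a fixed assumed inflation rate."""
    years = end_year - start_year
    return base_cap * (1 + annual_rate) ** years

adjusted = inflation_adjusted_cap()
print(f"${adjusted:,.0f}")  # well above the never-adjusted $250,000 cap
```

Even at a modest assumed rate, twenty years of compounding leaves the unchanged cap buying substantially less science per award.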
Stability. This is the concept that NIH simply could not grasp for the longest time, at least from an official top-down stop-the-bleeding perspective. They wrung hands over success rates and cut awards to permit making more of them. They limited amended/revised grant submission to try to cover up the real success rate. They refused to budge the modular cap. They started talking about limiting the number of awards or number of dollars awarded. They started trying to claim “efficiency” based on pubs per direct cost.
All to naught, I would suggest, because of a refusal to start with a basic understanding of their grant-funded PI workforce. Their first mistake was clinging to the idea that extramural researchers are not “their” workforce, which is technically correct but in defiance of de facto reality.
Here’s the reality. Vast swaths of PIs seeking funding from the NIH operate in job environments where they feel they must maintain a certain lab size, a certain lab vigor and a related (directly or indirectly) certain number of NIH grant dollars. This amount varies across individuals, job categories and scientific subfields. Yes. But I would argue the constant is that a given PI has a relatively fixed idea of how much purchasing power she would prefer to have under her control, as NIH funds, more or less all of the time. Constantly, consistently, with some assurance of continuation. If your job depends in large part on sustained funding from the NIH, you work for them.
That is what guides and drives most grant seeking behavior, I assert. People are not “greedy”. They are not seeking more and more grants as some sort of detached score keeping game. They are trying to survive. They would like to be awarded tenure. They would like very much to “be a scientist” which means conducting and publishing research in their field. They would like to make Full Professor one day and they would like their trainees to feel like they got good training and good launches to their careers.
This is not crazy-town stuff.
And being reasonably smart, motivated and professional people these PIs are going to fight hard to try to bring in the consistent funding that is required by their chosen career path.
Since NIH has so steadfastly refused to start at this understanding, their hapless attempts to do things to make their time-to-first-R01 and success rate stats look better do not succeed. When you squeeze down the purchasing power of an award, and make the probability of award for any given application less certain, you push PIs to submit more and more and more applications. [This also affects applicant institutions, of course, but this is a bit more diffuse and I’m not going to get into it today.]
The twitter discussion about the modular cap inevitably crept toward the supposition that increasing the modular cap to match inflation would inevitably result in fewer grants awarded (true) and therefore decreased success rates. I think the latter claim is not quite so simple. This is the kind of thing that NIH should have spent a lot of time and effort modeling out with some fancy math and data mining. When grant-seeking PIs feel comfortable with the amount of grant support they have, THEY STOP SUBMITTING NEW APPLICATIONS! When they feel in need of shoring up their evaporating funding and uncertain of the award of any new proposal (and in particular their continuation applications), they flog those applications out like crazy. Occasionally they overshoot in the “more than intended” direction. I think the vast majority of PIs would much rather do this than occasionally undershoot and experience a funding gap. Datahound had some posts on the chances of getting refunded after a gap and on the annual PI churn / turnover rate that I think point to another unfortunate NIH blindspot. It was really late in the game before we started seeing NIH OER officialdom talking about per-investigator success rates. And they still give very little evidence of wanting to pro-actively grapple with these issues that motivate the PIs who are applying, and figure out how to make their award process a little less hellish.
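The kind of modeling I mean could start very simply. Here is a hedged toy sketch, with every parameter invented for illustration rather than drawn from NIH data: a PI with a fixed purchasing-power target keeps submitting until funded awards are expected to cover that target, so a shrinking real award size and a falling per-application success rate multiply each other in the submission count.

```python
import math

def expected_submissions(target_budget, award_size, success_rate):
    """Toy model: applications a PI must submit, in expectation, so that
    funded awards cover a fixed purchasing-power target.
    All parameters are illustrative assumptions, not NIH data."""
    awards_needed = math.ceil(target_budget / award_size)
    return awards_needed / success_rate

# Comfortable era: one award covers the lab, decent hit rate.
flush = expected_submissions(target_budget=250_000,
                             award_size=250_000, success_rate=0.30)

# Squeezed era: same target, the award buys less, success rate is lower.
squeezed = expected_submissions(target_budget=250_000,
                                award_size=150_000, success_rate=0.15)

print(round(flush, 1), round(squeezed, 1))  # e.g. 3.3 vs 13.3
```

Even this crude sketch shows the churn dynamic: modest erosion in award size and success rate roughly quadruples the applications a PI must flog out to stand still, which is exactly the behavior the success-rate statistics then record.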
Look, at some level they know. They know that MERIT and PECASE extensions are good things. The NIGMS invented the MIRA approach as a semi-recognition that people would trade away grant overshoot for stability. They started wringing their hands recently about giving yet more affirmative-action help to the ESI applicants who were struggling with renewals and second grants. But they do not appear to want to broaden these concepts. MERIT, PECASE and MIRA are very rare. They are not awarded in anything near the numbers required to have a stabilizing effect on enough applicants to dampen the grant churn. To my knowledge, competing continuation applications are not doing better; they are doing worse than ever before.
I may take this up in another post, but I want to defang the immediate criticism. Stability of funding exists in tension with diversity. Diversity of PIs and diversity of research topics and approaches. The more you let those that get in stay in with lesser competition, the more you keep out newcomers. In theory…..
I say in theory because, in point of fact, the NIH system is already hugely biased in favor of those already inside. And it does only a middling job at opening itself up to newcomers and new ideas that are not just like the existing population of funded PIs and funded projects. I just feel as though the NIH must be able to do more modeling and data mining to get a better idea of their real turnover now, in the past, etc. I feel that they should be able to do a better job of producing the same result with less grant churn. This should be achievable with a better proactive understanding of sustainable lab/funding size, funding gaps, the population of approximately continually-funded PIs, PI turnover, etc.
The current version of the NIH Biosketch includes a space for a Personal Statement. As the Instructions say, this is to
Briefly describe why you are well-suited for your role(s) in this project. Relevant factors may include: aspects of your training; your previous experimental work on this specific topic or related topics; your technical expertise; your collaborators or scientific environment; and/or your past performance in this or related fields.
This part is pretty obvious. As you are aware, the Investigator criterion is one of five allegedly co-equal criteria on which the merit of your NIH application is supposed to be assessed. But this could also be approximately deduced from the old version of the Biosketch; all this does is enhance your ability to spin a tale for easy apprehension. The new Personal Statement of the Biosketch, however, allows something that wasn’t allowed before.
Note the following additional instructions for ALL applicants/candidates:
If you wish to explain factors that affected your past productivity, such as family care responsibilities, illness, disability, or military service, you may address them in this “A. Personal Statement” section.
This was a significant advance, in my view. For better or for worse, one of the key facts about you as an investigator that is of interest to reviewers of your application is your scientific productivity. The thinking goes that if you have been a productive investigator in the past then you will be a productive investigator in the future and are therefore, as they say, a strength of the proposal. Conversely, if you have not produced very well or have suspicious gaps in your productivity this is a weakness- perhaps it predicts that you are not assured to be productive in the future.
Now, my view is that gaps in productivity or periods of unexpectedly low productivity are not a death knell. At least when I have been in the room for discussion of grants, I find that reviewers have a nonzero probability of giving good scores despite some evidence of poor productivity of the PI. The key is that they need to have a reason for why the productivity was low. In ye olden dayes, the applicant had to just suffer the bad score on the first version of the application and then supply his or her explanation in the Intro to the revised (amended; A1) application. So it is an advantage to be able to pre-empt this whole cycle and provide a reason for the appearance of a slow period in the PI’s history.
It is not, of course, some sort of trump or get out of jail free card. Reviewers are still free to view your productivity however they like, fairly or not. They are free to view the explanation that you offer however they like as well. But the advantage is that they can evaluate the explanation. And the favorably disposed reviewer can use that information to argue against the criticisms of the disfavorable reviewer. It gives the applicant a chance, where before there was none.
You will notice that I use the term explanation and not the term excuse. It is not an excuse. This is not a good way to view it. Not good on the part of the applicant or on the part of the reviewer(s). Grant evaluation is not a reward or a punishment for past behavior. Grant evaluation is a prediction about the future, given that the grant is funded. When it comes to PI productivity, past performance is only properly used to try to predict (imperfectly) future performance. If the PI got in a bad car wreck and was in intensive care for two months and basically invalided for another nine months, well, this says something about the prediction validity of that corresponding gap in publications. Right? And you’d have to be a real jerk to think that this PI deserved to be somehow punished (with a bad grant score) for getting in a car wreck.
This was triggered by a tweet that seemed to be saying that life is hard for everyone, why should we buy anyone’s excuse. I thought the tone was a bit punitive. And that it might scare people out of using the Personal Statement as it was intended to be used by applicants and how, in my view, it should be used by reviewers. As I said above, there is no formal obligation for reviewers to “buy” an explanation that is proffered. And my personal view on what represents a jerky reviewer stance on a given explanation for a gap in productivity cannot possibly extend to all situations. But I do think that all reviewers should probably understand that there is a very explicit reason why the NIH allows this content in the Personal Statement. And should not view someone taking advantage of that as some sort of demerit in and of itself.
Program Officer Gatekeeping
May 21, 2019
Everyone who is an applicant to the NIH for funding hears, sooner or later, that they are supposed to contact one or more Program Officers for advice. I give that advice myself, even on this blog. That is what they are there for. To discuss your application plans and to try to help you propose something that is of interest to them, as a representative of the Program interests of a given Institute or Center of the NIH.
You, I suggest, should be familiar with who inhabits the Divisions and Branches of your closest interests. You should check who is listed as the scientific contact for a Funding Opportunity Announcement that is of interest to you. You are supposed to get in touch, make a phone call time and/or send them your Specific Aims.
I also tell you that the Program Officer’s opinion is but one of many considerations about whether you submit a proposal or not. Because they are just one scientist. And as with any one scientist, the PO comes with biases, preferences and blindspots. Who are they to tell you not to try your hand competing for a good score in a study section?
Well, do recall that Program does not have to fund anything; even if your grant proposal gets a 1% score it can be skipped. They can, and do, skip grants that fall within their paylines, published or virtual. Every bit of percentile/funding data that I’ve seen has at least one apparent skip. So it could be that the PO is telling you that, no matter what, they will argue that your proposal does not fit and should not be funded. So you have to listen to what they are saying very carefully.
The other, larger side of this consideration is that the PO is trying to tell you that, in their estimation, your proposal does not sound like one that will score very well. And here it is tricky. They have a lot of experience with study sections and with applications being scored. They have access to a lot of knowledge that you do not. And you could be barking up a ridiculously out-of-position tree. This kind of interaction saves you the time and effort…no small thing.
The problem is that nobody can predict review outcomes very well, particularly from your general outline or Aims page. POs know generally what the population of reviewers looks like but cannot (unless there is illegit SRO/PO collusion) know who will be assigned to your application. Maybe for some reason your proposal will resonate with the reviewers. And a PO faced with a 1%ile score has a tendency not to fight so hard about how it didn’t seem like a good idea when you chatted several months ago.
I’m sorry that I do not have hard and fast answers for you. PO advice can be heartfelt and still totally misplaced. I am, to this day, astonished by the degree to which POs express apparent ignorance of how grant review really goes down, despite long experience watching review play out. PO scientific preferences can lead them, explicitly or implicitly, to discourage applications featuring ideas, models or people that they don’t favor and to encourage ones that they like. They may have a very strong idea of who they would like a highly targeted FOA to end up funding….while peer reviewers might think some totally different approaches are a better way to advance the topic as they understand it.
So my advice is generally that if you really like your proposal and have a strong argument as to why it fits with the FOA ideas…..submit it anyway. Even if the PO has been fairly discouraging to you. Let peer review tell you that it doesn’t fit.
People worry about making the PO mad by going against their advice. I don’t know how to view that and it is another unknowable risk. Sure, it might very well happen. There are unprofessional people in this world. Someone could be having a bad day. Stuff happens. But…..this grant getting stuff is too critical. If you have a good idea and you think a panel of peer reviewers might go for it……worrying about a PO trying to spike your within-payline proposal because you submitted against their advice is a small consideration (imo).
Most of the time it won’t be this direct anyway. You’ll get unenthusiastic responses and mild opposition far more often than a flat “do not submit that”. IME. So if you’ve tweaked things a bit since your conversation and the score is good enough to discuss, how can the PO get too frosty with you?
The tl;dr version of a response to a NIH IC’s RFI
May 20, 2019
Request for Information. NIH uses these to try to djinn up support for some action they wish to take. It is not clear to me why they bother but you can tell when you see no public response to a set of RFI questions that you know aren’t going to go their way. Or when you see a very restricted and upbeat set of sample responses published when you know you and others sent in critical ones.
But I’m cynical and, who knows, maybe sometimes the ICs really are trying to learn something. A recent one from NIDA caught my eye since it is right up my professional alley.
RFI: Inviting Comments on Non-Human Animal Models of Substance Use Disorders (NOT-DA-19-036) asks just what you might expect. Here are their four questions and my short and sweet answers to them. These will serve to represent the typical applicant PI response to such things.
NIDA seeks input from the scientific community on the following topics:
Current animal behavioral procedure(s)/model(s) that BEST recapitulate human substance use/SUD, including the aspect(s) of substance use/SUD (initiation of drug use, drug maintenance, pathological drug use, relapse) targeted by the/these procedure(s)/model(s)
Super easy to answer. The ones that I use. Obviously the gold standard and the best.
Animal procedures/models of SUDs that best balance the inherent trade-offs between resources (time, cost, etc.) and complexity/ecological validity
Mine. Man this is easy. I have the perfect tradeoff….but oh, let me tell you that what you really need to do is give me even more money because then the extra resources will really pay off.
Animal procedures/models of SUDs whose translational value are frequently misrepresented or overrepresented by the scientific community
Those guys’ procedures and models. Over there. The ones that I don’t use are totally bogus and a complete waste of time. Unfund them as soon as possible.
Aspects of substance use/SUD (e.g., specific DSM criteria for SUD) that are NOT currently being modelled in animals and how current procedures/models could be adapted to overcome technical/logistic challenges and address this gap in the field
Weeeellllll, we DO happen to have this new model that we’ve been struggling to get funded properly and by complete and utter coincidence I can manhandle at least one DSM criteria to fit what we are trying to do. FUND ME!
I’m telling you, this is a good 89.54% of the honest responses to any such RFI.