Reconsidering “Run to Daylight” in the Context of Hoppe et al.

January 7, 2022

In a prior post, “A pants leg can only accommodate so many Jack Russells,” I explained my affection for applying Vince Lombardi’s advice to science careers.

Run to Daylight.

Seek out ways to decrease the competition, not to increase it, if you want to have an easier career path in academic science. Take your considerable skills to a place where they are not just expected value, but represent a near-miraculous advance. This can be in topic, in geography, in institution type or in any other dimension. Work in an area where there are fewer of you.

This came up today in a discussion of “scooping” and whether it is more or less your own fault if you are continually scooped, scientifically speaking.

He’s not wrong. I, obviously, was taking a similar line in that prior post. It is advisable, in a career environment where things like independence, creativity, discovery, novelty and the like are valued, for you NOT to work on topics that lots and lots of other people are working on. In the extreme, if you are the only one working on some topic that others who sit in evaluation of you see as valuable, this is awesome! You are doing highly novel and creative stuff.

The trouble is that, despite the conceits of study section review, the NIH system does NOT tend to reward investigators who are highly novel solo artists. It is seemingly obligatory for Nobel Laureates to complain about how some study section panel or other passed on the grant that described their plans to pursue what became the Nobel-worthy work. Year after year a lot of me-too grants get funded while genuinely new stuff flounders. The NIH has a whole system (RFAs, PAs, now NOSI) set up to beg investigators to submit proposals on topics that are seemingly important but that nobody can get fundable scores to work on.

In 2019 the Hoppe et al. study put a finer, more quantitatively backed point on this. One of its main messages was the degree to which grant proposals on some topics enjoyed higher success rates while proposals on other topics suffered lower ones. You can focus on the trees if you want, but the forest is all-critical. This has put a spotlight on what I have taken to calling the inherent structural conservatism of NIH grant review. The peers are making entirely subjective decisions, particularly right at the might-fund/might-not-fund threshold of scoring, based on gut feelings. Those peers are selected from the ranks of the already-successful when it comes to getting grants. Their subjective judgments, therefore, tend to reinforce the prior subjective judgments and, of course, tend to reinforce the orthodoxy of any present moment.

NIH grant review has many pseudo-objective components which do play into the peer review outcome. There is a sense of fair play, a sauce-for-the-goose logic, which can come into effect. Seemingly objective evaluative comments are often used selectively to shore up subjective, Gestalt reviewer opinions, in part because doing so has credibility when an assigned reviewer is trying to convince the other panelists of their judgment. One of these areas of seemingly objective evaluation is the PI’s scientific productivity, impact and influence, which often touches on publication metrics, directly or indirectly: descriptions of the investigator’s productivity, evidence of the “impact” of the journals they publish in, the resulting impact on the field, citations of key papers….yeah, it happens.

Considering the Hoppe results and the Lauer et al. (2021) description of the NIH “funding ecology” in the light of the original Ginther et al. (2011, 2018) investigations into the relationship of PI publication metrics to funding outcomes is relevant here.

Publication metrics are a function of funding. The number of publications a lab generates depends on having grant support. More papers is generally considered better, fewer papers worse. More funding means an investigator has the freedom to make papers meatier: bigger in scope or deeper in converging evidence. More papers means, at the very least, a trickle of self-cites to those papers. More funding means more collaborations with other labs…which leads to them citing both of you at once. More funding means more trainees who write papers, write reviews (great for h-index and total cites) and eventually go off to start their own publication records…and cite their trainee papers with the PI.
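To make the mechanism concrete, here is a minimal sketch of the h-index calculation (the largest h such that h of your papers have at least h citations each). The two citation lists below are invented purely for illustration; the point is simply that more funded output means more citable targets, which mechanically lifts the metric.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still meets or exceeds its 1-based rank
        else:
            break
    return h

# Hypothetical labs doing equally good work at different funding levels.
lean_lab = [12, 8, 5, 4]                    # fewer papers, fewer collab/self-cites
funded_lab = [12, 10, 8, 8, 6, 5, 5, 4, 3]  # more papers -> more citable targets
print(h_index(lean_lab))    # 4
print(h_index(funded_lab))  # 5
```

Same per-paper quality in both lists; the better-funded lab wins on the metric simply by having more entries.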

So when the NIH-generated publications say that publication metrics “explain” a gap in application success rates, they are wrong. They use this language, generally, in a way that says Black PIs (the topic of most of the reports, but this generalizes) have inferior publication metrics, so this causes a lower success rate, with the further implication that this is a justified outcome. This totally ignores the inherent circularity of grant funding and publication-based measures of awesomeness. Donna Ginther has written a recent reflection on her work on NIH grant funding disparity which doubles down on her lack of understanding of this issue.

Publication metrics are also a function of funding to the related sub-field. If a lot of people are working on the same topic, they tend to generate a lot of publications with a lot of available citations, citations which buoy up the metrics of investigators who happen to work in those fields. Did you know, my biomedical friends, that a JIF of 1.0 is awesome in some fields of science? This is where the Hoppe and Lauer papers are critical. They show that not all fields get the same amount of NIH funding, nor do they get that funding as easily. This affects the available pool of citations. It affects the JIF of journals in those fields. It affects the competition for limited space in the “best” journals. It affects the perceived authority of some individuals in the field to prosecute their personal opinions about the “most impactful” science.
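A toy calculation shows how field size alone moves JIF. The standard two-year JIF is the count of citations received in year Y to items published in years Y-1 and Y-2, divided by the number of citable items in that window; every number below is invented for illustration.

```python
def two_year_jif(cites_to_prior_two_years: int, citable_items: int) -> float:
    """Standard two-year impact factor: citations in year Y to items
    published in years Y-1 and Y-2, divided by the citable item count."""
    return cites_to_prior_two_years / citable_items

# Invented numbers: a journal in a large, well-funded subfield draws far
# more citations simply because more funded labs publish (and cite) there.
crowded_field = two_year_jif(cites_to_prior_two_years=2400, citable_items=300)
sparse_field = two_year_jif(cites_to_prior_two_years=240, citable_items=220)
print(round(crowded_field, 1))  # 8.0
print(round(sparse_field, 1))   # 1.1
```

Same formula, comparable per-paper effort; the only difference is the size of the citing pool.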

That funding to a sub-field, or to certain approaches (technical, theoretical, model, etc.), has a very broad and lasting impact on what is funded, what is viewed as the best science, and so on.

So is it good advice to “Run to daylight”? If you are getting “scooped” on the regular is it your fault for wanting to work in a crowded subfield?

It really isn’t, and no, it isn’t simply your fault. I wish “run to daylight” were good advice, but in the NIH funding ecology it is bad advice.

Better advice is to work in areas that are well populated and well funded, using the methods, approaches and theoretical structures that everyone else prefers, and to bray as hard as you can that your tiny incremental twist is “novel”.

One Response to “Reconsidering “Run to Daylight” in the Context of Hoppe et al.”

  1. Science Geek Says:

    Your point is spot on. NIH leadership, especially the crowd in extramural, only understands metrics. The qualitative aspect is never appreciated, and it never will be. The people running the “show” are detached from the community and from the realities and challenges of new PIs, especially women and minorities. They are in charge because they know how to work the system and move up. Then they publish reports telling us we are wrong and everything is working as it should be.


