In a prior post, “A pants leg can only accommodate so many Jack Russells,” I described my affection for applying Vince Lombardi’s advice to science careers.

Run to Daylight.

Seek out ways to decrease the competition, not increase it, if you want an easier career path in academic science. Take your considerable skills to a place where they represent not just expected value but a near-miraculous advance. This can be in topic, in geography, in institution type or in any other dimension. Work in an area where there are fewer of you.

This came up today in a discussion of “scooping” and whether it is more or less your own fault if you are continually scooped, scientifically speaking.

He’s not wrong. I, obviously, was taking a similar line in that prior post. It is advisable, in a career environment where things like independence, creativity, discovery, novelty and the like are valued, for you NOT to work on topics that lots and lots of other people are working on. In the extreme, if you are the only one working on some topic that others who sit in evaluation of you see as valuable, this is awesome! You are doing highly novel and creative stuff.

The trouble is that, despite the conceits in study section review, the NIH system does NOT tend to reward investigators who are highly novel solo artists. It is seemingly obligatory for Nobel Laureates to complain about how some study section panel or other passed on their grant which described the plans to pursue what became the Nobel-worthy work. Year after year a lot of me-too grants get funded while genuinely new stuff founders. The NIH has a whole system (RFAs, PAs, now NOSI) set up to beg investigators to submit proposals on topics that are seemingly important but nobody can get fundable scores to work on.

In 2019 the Hoppe et al. study put a finer and more quantitatively backed point on this. One of the main messages was the degree to which grant proposals on some topics had higher success rates while proposals on other topics had lower ones. You can focus on the trees if you want, but the forest is all-critical. This has pointed a spotlight on what I have taken to calling the inherent structural conservatism of NIH grant review. The peers are making entirely subjective decisions, particularly right at the might-fund/might-not-fund threshold of scoring, based on gut feelings. Those peers are selected from the ranks of the already-successful when it comes to getting grants. Their subjective judgments, therefore, tend to reinforce the prior subjective judgments. And, of course, tend to reinforce an orthodoxy at any present time.

NIH grant review has many pseudo-objective components to it which do play into the peer review outcome. There is a sense of fair-play, sauce for the goose logic which can come into effect. Seemingly objective evaluative comments are often used selectively to shore up subjective, Gestalt reviewer opinions, but this is in part because doing so has credibility when an assigned reviewer is trying to convince the other panelists of their judgment. One of these areas of seemingly objective evaluation is the PI’s scientific productivity, impact and influence, which often touches on publication metrics. Directly or indirectly. Descriptions of productivity of the investigator. Evidence of the “impact” of the journals they publish in. The resulting impact on the field. Citations of key papers….yeah it happens.

Consideration of the Hoppe results and the Lauer et al. (2021) description of the NIH “funding ecology”, in the light of the original Ginther et al. (2011, 2018) investigations into PI publication metrics, is relevant here.

Publication metrics are a function of funding. The number of publications a lab generates depends on having grant support. More papers is generally considered better, fewer papers worse. More funding means an investigator has the freedom to make papers meatier. Bigger in scope or deeper in converging evidence. More papers means, at the very least, a trickle of self-cites to those papers. More funding means more collaborations with other labs…which leads to them citing both of you at once. More funding means more trainees who write papers, write reviews (great for h-index and total cites) and eventually go off to start their own publication records…and cite their trainee papers with the PI.

So when the NIH-generated publications say that publication metrics “explain” a gap in application success rates, they are wrong. They use this language, generally, in a way that says Black PIs (the topic of most of the reports, but this generalizes) have inferior publication metrics so this causes a lower success rate. With the further implication that this is a justified outcome. This totally ignores the inherent circularity of grant funding and publication measures of awesomeness. Donna Ginther has written a recent reflection on her work on NIH grant funding disparity, which doubles down on her lack of understanding of this issue.

Publication metrics are also a function of funding to the related sub-field. If a lot of people are working on the same topic, they tend to generate a lot of publications with a lot of available citations. Citations which buoy up the metrics of investigators who happen to work in those fields. Did you know, my biomedical friends, that a JIF of 1.0 is awesome in some fields of science? This is where the Hoppe and Lauer papers are critical. They show that not all fields get the same amount of NIH funding, and do not get that funding as easily. This affects the available pool of citations. It affects the JIF of journals in those fields. It affects the competition for limited space in the “best” journals. It affects the perceived authority of some individuals in the field to prosecute their personal opinions about the “most impactful” science.

That funding to a sub-field, or to certain approaches (technical, theoretical, model, etc, etc) has a very broad and lasting impact on what is funded, what is viewed as the best science, etc.

So is it good advice to “Run to daylight”? If you are getting “scooped” on the regular is it your fault for wanting to work in a crowded subfield?

It really isn’t. I wish it were so but it is bad advice.

Better advice is to work in areas that are well populated and well-funded, using methods and approaches and theoretical structures that everyone else prefers and bray as hard as you can that your tiny incremental twist is “novel”.

Biased objective metrics

October 19, 2021

As you know, Dear Reader, one of the things that annoys me the most is being put in the position of having to actually defend Glam, no matter how tangentially. So I’m irritated.

Today’s annoyance is related to the perennial discussion of using metrics such as the Journal Impact Factor of journals in which a professorial candidate’s papers are published as a way to prioritize them for a job search. You can add h-index and citations of the candidate’s papers on an individual basis on this heap if you like.

The Savvy Scientist in these discussions is very sure that since these measures, ostensibly objective, are in fact subject to “bias”, this renders them risible as useful decision criteria.

We then typically downshift to someone yelling about how the one true way to evaluate a scientist is to READ HER PAPERS and make your decisions accordingly. About “merit”. About who is better and who is worse as a scientist. About who should make the short list. About who should be offered employment.

The Savvy Scientist may even demonstrate that they are a Savvy Woke Scientist by yelling about how the clear biases in objective metrics of scientific ability and accomplishment work to the disfavor of non-majoritarians. To hinder the advancement of diversity goals by under-counting the qualities of URM, women, those of less famous training pedigree, etc.

So obviously all decisions should be made by a handful of people on a hiring committee reading papers deeply and meaningfully offering their informed view on merit. Because the only possible reason that academic science uses those silly, risibly useless, so called objective measures is because everyone is too lazy to do the hard work.

What gets lost in all of this is any thinking about WHY we have reason to use objective measures in the first place.

Nobody, in their Savvy Scientist ranting, ever seems to consider this. They fail to consider the incredibly biased subjectivity of a handful of profs reading papers and deciding if they are good, impactful, important, creative, etc, etc.

Even before we get to the vagaries of scientific interests, there are hugely unjustified interpersonal biases in evaluating work products. We know this from the studies where legal briefs were de/misidentified. We can infer this from various resume-call back studies. We can infer this from citation homophily studies. Have you not ever heard fellow scientists say stuff like “well, I just don’t trust the work from that lab”? or “nobody can replicate their work”? I sure have. From people who should know better. And whenever I challenge them as to why….let us just say the reasons are not objective. And don’t even get me started about the “replication crisis” and how it applies to such statements.

Then, even absent any sort of interpersonal bias, we get to the vast array of scientific biases that are dressed up as objective merit evaluations but really just boil down to “I say this is good because it is what I am interested in” or “because they do things like I do”.

Citation metrics are an attempt to crowd-source that quality evaluation so as to minimize the input of any particular bias.

That, for the slower members of the group, is a VERY GOOD THING!

The proper response to an objective measure that is subject to (known) biases is not to throw the baby out onto the midden heap of completely subjective “merit” evaluation.

The proper response is to account for the (known) biases.

The recent NOT-OD-21-073 Upcoming Changes to the Biographical Sketch and Other Support Format Page for Due Dates on or after May 25, 2021 indicates one planned change to the Biosketch which is both amusing and of considerable interest to us “process of NIH” fans.

For the non-Fellowship Biosketch, Section D. has been removed. … As applicable, all applicants may include details on ongoing and completed research projects from the past three years that they want to draw attention to within the personal statement, Section A.

Section D is “Additional Information: Research Support and/or Scholastic Performance“. The prior set of instructions read:

List ongoing and completed research projects from the past three years that you want to draw attention to. Briefly indicate the overall goals of the projects and your responsibilities. Do not include the number of person months or direct costs.

And if the part about “want to draw attention to” was not clear enough they also added:

Do not confuse “Research Support” with “Other Support.” Other Support information is not collected at the time of application submission.

Don’t answer yet, there’s more!

Research Support: As part of the Biosketch section of the application, “Research Support” highlights your accomplishments, and those of your colleagues, as scientists. This information will be used by the reviewers in the assessment of each of your qualifications for a specific role in the proposed project, as well as to evaluate the overall qualifications of the research team.

This is one of those areas where the NIH intent has been fought bitterly by the culture of peer review, in my experience (meaning my ~two decades as an applicant and slightly less time as a reviewer). These policy positions and instructions, together with the segregation of dollar amounts and total research funding into the Other Support documentation, make it very clear to the naive reader that the NIH does not want reviewers contaminating their assessment of the merit of a proposal with their own ideas about whether the PI (or other investigators) have too much other funding. They do not want this at all. It is VERY clear, and this new update to the Biosketch enhances this by deleting any obligatory spot where funding information seemingly has to go.

But they are paddling upstream through rushing, spring-flood, Class V rapids. Good luck, say I.

Whenever this has come up, I think I’ve usually reiterated the reasons why a person might be motivated to omit certain funding from their Biosketch. Perhaps you had an unfortunate period of funding that was simply not very productive for any of a thousand reasons. Perhaps you do have what looks to some eyes like “too much funding” for your age, tenure, institution type, sex or race. Or for your overall productivity level. Perhaps you have some funding that looks like it might overlap with the current proposal. Or maybe even funding from some source that some folks might find controversial. The NIH has always (i.e., during my time in the system) endorsed your ability to omit such funding and the notion that these considerations should not influence the assessment of merit.

I have also, I hope consistently, warned folks not to ever, ever try to omit funding (within the past three years) from their Biosketch, particularly if it can be found in any way on the internet. This includes those foundation sites bragging about their awards, your own lab website and your institutional PR game which put out a brag on you. The reason is that reviewers just can’t help themselves. You know this. How many discussions have we had on science blogs and now science twitter that revolve around “solutions” to NIH funding stresses that boil down to “those guys over there have too much money and if we just limit them, all will be better”? Scores.

Believe me, all the attitudes and biases that come out in our little chats are also present in the heads of study section members. We have all sorts of ideas about who “deserves” funding. Sometimes these notions emerge during study section discussion or in the comments. Yeah, reviewers know they aren’t supposed to be judging this so it often comes up obliquely. Amount of time committed to this project. Productivity, either in general or associated with specific other awards. Even ones that have nothing to do with the current proposal.

My most hilariously vicious personal attack summary statement critique ever was clearly motivated by the notion that I had “too much money”. One of the more disgusting aspects of what this person did was to assume incorrectly that I had a tap on resources associated with a Center in my department. Despite no indication anywhere that I had access to substantial funds from that source. A long time later I also grasped an even more hilarious part of this. The Center in question was basically a NIH funded Center with minimal other dollars involved. However, this Center has what appear to be peer Centers elsewhere that are different beasts entirely. These are Centers that have a huge non-federal warchest involving more local income and an endowment built over decades. With incomes that put R21 and even R01 money into individual laboratories that are involved in the Center. There was no evidence anywhere that I had these sorts of covert resources, and I did not. Yet this reviewer felt fully comfortable teeing off on me for “productivity” in a way that was tied to the assumption I had more resources than were represented by my NIH grants.

Note that I am not saying many other reviews of my grant applications have not been contaminated by notions that I have “too much”. At times I am certain they were. Based on my age at first. Based on my institution and job type, certainly. And on perceptions of my productivity, of course. And now in the post-Hoppe analysis….on my race? Who the fuck knows. Probably.

But the evidence is not usually clear.

What IS clear is that reviewers, who are your peers with the same attitudes they express around the water cooler, on average have strong notions about whether PIs “deserve” more funding based on the funding they currently have and have had in the past.

NIH is asking, yet again, for reviewers to please stop doing this. To please stop assessing merit in a way that is contaminated by other funding.

I look forward with fascination to see whether NIH can manage to get this ship turned around with this latest gambit.

The very first evidence will be to monitor Biosketches in review to see if our peers are sticking with the old dictum of “for God’s sake don’t look like you are hiding anything” or if they will take the leap of faith that the new rules will be followed in spirit and nobody will go snooping around on RePORTER and Google to see if the PI has “too much funding”.

The Director of the NIH, in the wake of a presentation to the Advisory Committee to the Director meeting, has issued a statement of NIH’s commitment to dismantle structural racism.

Toward that end, NIH has launched an effort to end structural racism and racial inequities in biomedical research through a new initiative called UNITE, which has already begun to identify short-term and long-term actions. The UNITE initiative’s efforts are being informed by five committees with experts across all 27 NIH institutes and centers who are passionate about racial diversity, equity, and inclusion. NIH also is seeking advice and guidance from outside of the agency through the Advisory Committee to the Director (ACD), informed by the ACD Working Group on Diversity, and through a Request for Information (RFI) issued today seeking input from the public and stakeholder organizations. The RFI is open through April 9, 2021, and responses to the RFI will be made publicly available. You can learn more about NIH’s efforts, actions, policies, and procedures via a newly launched NIH webpage on Ending Structural Racism aimed at increasing our transparency on this important issue.

This is very much welcome, coming along as it does, a decade after Ginther and colleagues showed that Black PIs faced a huge disadvantage in getting their NIH grants funded. R01 applications with Black PIs were funded at only 58% of the rate that applications with white PIs were funded.

Many people in the intervening years, accelerated after the publication of Hoppe et al 2019 and even further in the wake of the murder of George Floyd at the hands of the Minneapolis police in 2020, have wondered why the NIH does not simply adopt the same solution they applied to the ESI problem. In 2006/2007 the then-Director of NIH, Elias Zerhouni, dictated that the NIH would practice affirmative action to fund the grants of Early Stage Investigators. As detailed in Science by Jocelyn Kaiser:

Instead of relying solely on peer review to apportion grants, [Zerhouni] set a floor—a numerical quota—for the number of awards made to new investigators in 2007 and 2008.

A quota. The Big Bad word of anti-equity warriors since forever. Gawd forbid we should use quotas. And in case that wasn’t clear enough:

The notice states that NIH “intends to support new investigators at success rates comparable to those for established investigators submitting new applications.” In 2009, that will mean at least 1650 awards to new investigators for R01s, NIH’s most common research grant.

As we saw from Hoppe et al, the NIH funded 256 R01s with Black PIs in the interval from 2011-2015, or 51 per year. In a prior blog post I detailed how some 119 awards to poorer-scoring applications with white PIs could have been devoted to better-scoring proposals with Black PIs. I also mentioned how doing so would have moved the success rate for applications with Black PIs from 10.7% to 15.6% whereas the white PI success rate would decline from 17.7% to 17.56%. Even funding every single discussed application with a Black PI (44% of the submissions) by subtracting those 1057 applications from the pool awarded with white PIs would reduce the latter applications’ hit rate only to 16.7% which is still a 56% higher rate than the 10.7% rate that the applications with Black PIs actually experienced.
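That reallocation arithmetic can be checked with a quick back-of-envelope script. The Black-PI application count (2403) is from Hoppe et al. (2019); the white-PI pool size is not given here, so the figure below is backed out from the quoted 17.7% success rate and should be treated as a rough estimate, not a number from the paper:

```python
# Back-of-envelope check of the reallocation numbers above.
black_apps = 2403        # R01 applications with Black PIs, 2011-2015 (Hoppe et al.)
black_awards = 256       # funded R01s with Black PIs
white_rate = 0.177       # quoted success rate for white-PI applications
white_apps = 85_000      # ASSUMED pool size, inferred from the quoted rates
white_awards = round(white_apps * white_rate)   # ~15,045

moved = 119              # better-scoring Black-PI apps funded instead

black_new = (black_awards + moved) / black_apps    # rises to ~15.6%
white_new = (white_awards - moved) / white_apps    # falls to ~17.56%

print(f"Black-PI rate: {black_awards / black_apps:.1%} -> {black_new:.1%}")
print(f"White-PI rate: {white_rate:.1%} -> {white_new:.2%}")
```

The point of the exercise: moving 119 awards is a rounding error against the white-PI pool but a nearly 50% relative improvement for the Black-PI pool.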

I have been, and continue to be, an advocate for stop-gap measures that immediately redress the funding rate disparity by mandating at least equivalent success rates, just as Zerhouni mandated for ESI proposals. But we need to draw a key lesson from that episode. As detailed in the Science piece:

Some program directors grumbled at first, NIH officials say, but came on board when NIH noticed a change in behavior by peer reviewers. Told about the quotas, study sections began “punishing the young investigators with bad scores,” says Zerhouni. That is, a previous slight gap in review scores for new grant applications from first-time and seasoned investigators widened in 2007 and 2008, [then NIGMS Director Jeremy] Berg says. It revealed a bias against new investigators, Zerhouni says.

I don’t know for sure that this continued, but the FY 2012 funding data published by Kienholz and Berg certainly suggest that several NIH ICs continued to fund ESI applications at much lower priority scores/percentiles than were generally required for non-ESI applications to receive awards. And if you examine those NIH ICs pages that publish their funding strategy each year [see the writedit blog for current policy links], you will see that they continue to use a lower payline for ESIs. So. From 2007 to 2021 that is a long interval of policy which is not “affirmative action”, just, in the words of Zerhouni, “leveling the playing field”.

The important point here is that the NIH has never done anything to get to the “real reason” for the fact that early stage investigators’ proposals were being scored by peer review at lower priority than they, NIH, desired. They have not undergone spasms of reviewer “implicit bias” training. They have not masked the identity of the PI or done anything suggesting they think they can “fix” the review outcome for ESI PIs.

They have accepted the fact that they just need to counter the bias.

NIH will likewise need to accept that they will need to fund Black PI applications with a different payline for a very long time. They need to accept that study sections will “punish*” those applications with even worse scores. They will even punish those applications with higher ND (not discussed) rates. And to some extent, merely by talking about it, this horse has left the stall and cannot be easily recalled. We exist in a world where, despite all evidence, white men regularly assert with great confidence that women or minority individuals have all the advantages in hiring and career success.

So all of this entirely predictable behavior needs to be accounted for, expected and tolerated.

__

*I don’t happen to see it as “punishing” ESI apps even further than whatever the base rate is. I think reviewers are very sensitive to perceived funding lines and to reviewing grants from a sort of binary “fund/don’t fund” mindset. Broadcasting a relaxed payline for ESI applications almost inevitably led to reviewers moving their perceived payline for those applications.

This is not news to this audience but it bears addressing in as many ways as possible, in the context of the Hoppe et al 2019 and Ginther et al 2011 findings. Behind most of the resistance to doing anything about the funding disparity for investigators (and, as we’re now finding out, for topics) is still some lingering idea that the NIH grant selection process is mostly about merit.

Objective merit. Sure, we sort of nod that we understand there is some wiggle room, but overall it is difficult to find anyone who appears to understand this in a deep way.

“Merit” of NIH grants is untethered to anything objective. It relies on the opinion of the peer reviewers. The ~3 reviewers who are assigned to do deep review and the members of the panel (which can be 20-30ish folks) who vote scores after the discussion.

This particular dumb twitter poll shows that 77% of experienced reviewers either occasionally or regularly have the experience of thinking a grant that should not receive funding is very likely to do so.

and this other dumb little twitter poll shows that 88% of experienced NIH grant reviewers either occasionally or frequently experience a panel voting a non-fundable score for a grant they think deserves funding.

It will not escape you that individual reviewers tend to think a lot more grants should be funded than can be funded. And this shows up in the polls to some extent.

This is not high falutin’ science and it is possible we have some joker contamination here from people who are not in fact experienced NIH reviewers.

But with that caveat, it tends to support the idea that the mere chance of which individuals are assigned to review a grant can have a major effect on merit. After all, the post-discussion scores of these individuals tend to significantly constrain the voting. But the voting is important too, since panel members can go outside the range or decide en masse to side with one or the other ends of the post-discussion range.

Swap out the assigned reviewers for a different set of three individuals and the outcomes are likely to be very different. Swap out one panel for another and the tendencies could be totally different. Is your panel heavy in those interested in sex differences and/or folks heavily on board with SABV? Or is it dominated by SABV resisters?

Is the panel super interested in the health effects of cannabis and couldn’t give a fig about methamphetamine? What do YOU think is going to come out of that panel with fundable scores?

Does the panel think any non-mammalian species is horrible for modeling human health and should really never be funded? Does the panel geek away at tractable systems and adore anything fly or worm driven and complain about the lack of manipulability available in a rat?

Of course you know this. These kinds of whines and complaints are endemic to fireside chats whenever two or more NIH grant-seeking investigators are present!

But somehow when it is a disparity of race or of topics of interest to minority communities in the US, such as from Hoppe et al 2019, then nobody is concerned. Even when there are actual data on the table showing a funding disparity. And everyone asks their “yeahbutwhatabout” questions, springing right back into the mindset that at the very core the review and selection of grants is about merit. The fact their worm grant didn’t get selected is clear evidence of a terrible bias in the NIH approach. The fact African-American PIs face a payline far lower than they do…..snore.

Because in that case it is about objective merit.

And not about the coincidence of whomever the SRO has decided should review that grant.

The NIH has launched a new FOA called the Stephen I. Katz Early Stage Investigator Research Project Grant (Open Mike blog post). PAR-21-038 is the one for pre-clinical work, PAR-21-039 is the one for clinical work. These are for Early Stage Investigators only and have special receipt dates (e.g. January 26, 2021; May 26, 2021; September 28, 2021). Details appear to be a normal R01: up to 5 years and any budget you want to try (of course over $500k per year requires permission).

The novelty here appears to be entirely this:

For this FOA, applications including preliminary data will be considered noncompliant with the FOA instructions and will be withdrawn. Preliminary data are defined as data not yet published. Existence of preliminary data is an indication that the proposed project has advanced beyond the scope defined by this program and makes the application unsuitable for this funding opportunity. Publication in the proposed new research direction is an indication that the proposed work may not be in a new research direction for the ESI.

This will be fascinating. A little bit more specification that the scientific justification has to rest on published (or pre-printed) work only:

The logical basis and premise for the proposed work should be supported by published data or data from preprints that have a Digital Object Identifier (DOI). These data must be labeled and cited adjacent to each occurrence within the application and must be presented unmodified from the original published format. Figures and tables containing data must include citation(s) within the legend. The data should be unambiguously identified as published through citation that includes the DOI (see Section IV.2). References and data that do not have an associated DOI are not allowed in any section of the application. Prospective applicants are reminded that NIH instructions do not allow URLs or hyperlinks to websites or documents that contain data in any part of the application

So how is this going to work in practice for the intrepid ESI looking to apply for this?

First, there is no reason you have to put the preliminary data you have available in the application. One very hot comment over at the Open Mike blog post, about the proposals being unsupported and the projects therefore doomed to failure, totally misses this point. PIs are not stupid. They aren’t going to throw up stupid ideas; they are going to propose their good ideas that can be portrayed as being unsupported by preliminary data.

’Twill be interesting to see how this is interpreted vis-à-vis meeting presentations, seminars and (hello!) job talks. What is a reviewer expected to do if they see an application without any preliminary data per the FOA, but have just seen a relevant presentation from the applicant which shows that Aim 1 is already completed? Will they wave a flag? See above, the FOA says the “existence” of preliminary data, not the “inclusion” of preliminary data, will make the app non-compliant.

But there is an aspect of normal NIH grant review that is not supposed to depend on “secret” knowledge, i.e., that available only to the reviewer, not published. So it is frowned upon for a reviewer to say “well the applicant gave a seminar last month at our department and showed that this thing will work”. It’s special knowledge only available to that particular reviewer on the panel. Unverifiable.

This would be similar, no?

Or is this more like individual knowledge that the PI had faked data? In such cases the reviewers are obligated to report that to the SRO in private but not to bring it up during the review.

If they ARE going to enforce the “existence” of relevant preliminary data, how will it be possible to make this fair? It will be entirely unfair. Some applicants will be unlucky enough to have knowledgeable whistle blowers on the panel and some will evade that fate by chance. Reviewers being what they are, will only variably respond to this charge to enforce the preliminary data thing, even if semi-obligated. After all, what is the threshold for the data being specifically supportive of the proposal at hand?

Strategy-wise, of course I endorse ESI taking advantage of this. The FOAs list almost all of the ICs with relevant funding authority if I counted correctly (fewer for the human-subjects one, of course). There is an offset receipt date, so it keeps the regular submission dates clear. You can put one in, keep working on it and if the prelim data look good, put a revised version in for a regular FOA next time. Or, if you can’t work on it or the data aren’t going well, you can resubmit “For Resubmissions, the committee will evaluate the application as now presented, taking into consideration the responses to comments from the previous scientific review group and changes made to the project.” Win-win.

Second strategy thing. This is a PAR and the intent is to convene panels for this mechanism. This means that your relative ESI advantage at the point of review disappears. You are competing only against other ESI. Now, how each IC chooses to prioritize these is unknown. But once you get a score, you are presumably just within whatever ESI policy a given IC has set for itself.

I’m confused by the comments over at Open Mike. They seem sort of negative about this whole thing. It’s just another FOA, folks. It doesn’t remove opportunities like the R15. No it doesn’t magically fix every woe related to review. It is an interesting attempt to fix what I see as a major flaw in the evolved culture of NIH grant review and award. Personally I’d like to see this expanded to all applicants but this is a good place to start.

Why indeed.

I have several motivations, deployed variably, and therefore my answers to his question about a journal-less world vary.

First and foremost I review manuscripts as a reciprocal professional obligation, motivated by the desire I have to get my papers published. It is distasteful free-rider behavior to not review at least as often as you require the field to review for you. That is, approximately 3 times your number of unique-journal submissions. Should we ever move to a point where I do not expect any such review of my work to be necessary, then this prime motivator goes to zero. So, “none”.

The least palatable (to me) motivation is the gatekeeper motivation. I do hope this is the rarest of reviews that I write. Gatekeeper motivation leads to reviews that try really hard to get the editor to reject the manuscript or to persuade the authors that this really should not be presented to the public in anything conceivably related to current form. In my recollection, this is because it is too slim for even my rather expansive views on “least publishable unit” or because there is some really bad interpretation or experimental design going on. In a world where these works appeared in pre-print, I think I would be mostly unmotivated to supply my thoughts in public. Mostly because I think this would just be obvious to anyone in the field and therefore what is the point of me posturing around on some biorxiv comment field about how smart I was to notice it.

In the middle of this space I have the motivation to try to improve the presentation of work that I have an interest in. The most fun papers to review for me are, of course, the ones directly related to my professional interests. For the most part, I am motivated to see at least some part of the work in print. I hope my critical comments are mostly in the nature of “you need to rein back your expansive claims” and only much less often in the vein of “you need to do more work on what I would wish to see next”. I hate those when we get them and I hope I only rarely make them.

This latter motivation is, I expect, the one that would most drive me to make comments in a journal-less world. I am not sure that I would do much of this and the entirely obvious sources of bias in go/no-go make it even more likely that I wouldn’t comment. Look, there isn’t much value in a bunch of congratulatory comments on a scientific paper. The value is in critique and in drawing together a series of implications for our knowledge on the topic at hand. This latter is what review articles are for, and I am not personally big into those. So that wouldn’t motivate me. Critique? What’s the value? In pre-publication review there is some chance that this critique will result in changes where it counts. Data re-analysis, maybe some more studies added, a more focused interpretation narrative, better contextualization of the work…etc. In post-publication review, it is much less likely to result in any changes. Maybe a few readers will notice something that they didn’t already come up with for themselves. Maybe. I don’t have the sort of arrogance that thinks I’m some sort of brilliant reader of the paper. I think people who envision some new world order where the unique brilliance of their critical reviews is made public have serious narcissism issues, frankly. I’m open to discussion on that but it is my gut response.

On the flip side of this is cost. If you don’t think the process of peer review in subfields is already fraught with tit-for-tat vengeance seeking even when it is single-blind, well, I have a Covid cure to sell you. This will motivate people not to post public, unblinded critical comments on their peers’ papers. Because they don’t want to trigger revenge behaviors. It won’t just be a tit-for-tat waged in these “overlay” journals of the future or in the comment fields of pre-print servers. Oh no. It will bleed over into all of the areas of academic science including grant review, assistant professor hiring, promotion letters, etc, etc. I appreciate that Professor Eisen has an optimistic view of human nature and believes these issues to be minor. I do not have an optimistic view of human nature and I believe these issues to be hugely motivational.

We’ve had various attempts at online, post-publication commentary of the journal-club nature crash and burn over the years. Decades by now. The efforts die because of a lack of use. Always. People in science just don’t make public review type comments, despite the means being readily available and simple. I assure you it is not because they do not have interesting and productive views on published work. It is because they see very little positive value and a whole lot of potential harm for their careers.

How do we change this, I feel sure Professor Eisen would challenge me.

I submit to you that we first start with looking at those who are already keen to take up such commentary. Who drop their opinions on the work of colleagues at the drop of a hat with nary a care about how it will be perceived. Why do they do it?

I mean yes, narcissistic assholes, sure but that’s not the general point.

It is those who feel themselves unassailable. Those who do not fear* any real risk of their opinions triggering revenge behavior.

In short, the empowered. Tenured. HHMI funded.

So, in order to move into a glorious new world of public post-publication review of scientific works, you have to make everyone feel unassailable. As if their opinion does not have to be filtered, modulated or squelched because of potential career blow-back.

__

*Sure, there are those dumbasses who know they are at risk of revenge behavior but can’t stfu with their opinions. I don’t recommend this as an approach, based on long personal experience.

I last updated this topic in mid 2018 using finalized BRDPI inflation adjustment numbers from 2016 and projections out to 2018. The latest numbers get us finalized values to 2019 and projections beyond that. There have been some minor changes from the last set of projections so it’s worth doing another update.

Biomedical Research and Development Price Index adjustments to the NIH R01 Modular limit ($250,000 per year direct costs). Red bars indicate the constant 2001 dollar valuation and black bars indicate the current-year dollars needed to match the limit in 2001.

As you can see, the unrelenting march of inflation means that the spending power of the $250K NIH modular budget limit is now projected to be $138,678 for Fiscal Year 2021. This translates to 55.5% of the value in 2001. Looking at this another way, it takes $442,457 in 2021 dollars to equal the spending power of $250,000 in 2001.
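The arithmetic behind the percentage is simple division; here is a minimal sketch using the post's BRDPI-derived figures (the dollar values are the projections cited above, not independently computed):

```python
# Spending power of the $250K modular cap, per the post's BRDPI figures
modular_cap = 250_000   # direct costs per year; limit set in 2001
value_2021 = 138_678    # projected FY2021 value in constant 2001 dollars

share = value_2021 / modular_cap
print(f"{share:.1%}")   # → 55.5% of the 2001 spending power
```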

So when you start demanding changes in the Modular limit at NIH, the proper value to lobby for is $450,000 per year in direct costs.

This is also critical for scientists who are getting their start now to understand when receiving career advice on grant strategy from colleagues and mentors who were in mid career in 2001. Their concepts of what you should be able to accomplish with “one R01 NIH grant” were established under far different conditions. It is unlikely that they have fully adjusted their thinking. They may need to be educated on these specific numbers.

Of course, the NIH is fully aware of this situation and has rejected multiple internal proposals to adjust the modular limit in the past. I’ve seen slide decks. As you can anticipate, the reason is to keep funding as many grants as possible so as to juke the success rate stats and pump up award numbers. This is also why across-the-board 10% cuts come down in times of budget stress- cut a module off of 9 awards and you get the 10th one free.
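The "cut a module off of 9 awards" trick checks out arithmetically: trimming one $25K module from nine awards saves exactly the cost of a tenth award that is itself trimmed to nine modules. A quick sketch (the $25K module size is the standard modular increment):

```python
module = 25_000                   # one modular increment of direct costs
full_award = 10 * module          # the $250K modular cap
cut_award = full_award - module   # an award trimmed by one module: $225K

savings = 9 * module              # trim one module from each of 9 awards
assert savings == cut_award       # exactly funds a 10th (also trimmed) award
print(savings)  # → 225000
```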

Note that this reality means that it now takes two R01 grants to have a lab running at the production level that one R01 would cover in 2001. And as we know, the odds of getting funded for any given grant submission are worse. I really don’t want to re-calculate the cumulative probability of now at least two grants, given X number of submissions. It would be too depressing. [ok, one quick one. The probability of 1 award in 10 tries when the hit rate is 17.7% is 85.7%, as mentioned in that prior post. This drops to 55.1% for the probability of at least two awards in 10 tries. ]
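The bracketed numbers follow from the binomial distribution; a short sketch reproduces them (the 17.7% hit rate and 10 submissions are the post's figures):

```python
from math import comb

def p_at_least(k, n, p):
    """Probability of at least k successes in n independent tries,
    summing the binomial probabilities for k..n successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

hit_rate, tries = 0.177, 10
print(f"{p_at_least(1, tries, hit_rate):.1%}")  # → 85.7% chance of 1+ awards
print(f"{p_at_least(2, tries, hit_rate):.1%}")  # → 55.1% chance of 2+ awards
```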

A quick google search turns up this definition of prescriptive: “relating to the imposition or enforcement of a rule or method.” Another one brings up this definition, and refinement, for descriptive: “describing or classifying in an objective and nonjudgmental way….. describing accents, forms, structures, and usage without making value judgments.”

We have trod this duality a time or two on this blog. Back in the salad days of science blogging, it led to many a blog war.

In our typical fights, I or PP would offer comments describing the state of the grant-funded, academic biomedical science career as we see it. This would usually be in the course of offering what we saw as some of the best strategies and approaches for the individual who is operating within this professional sphere. Right now, as is, as it is found. Etc. For them to succeed.

Inevitably, despite all evidence, someone would come along and get all red about such comments as if we were prescribing, instead of describing, whatever specific or general reality we were describing.

Pick your issue. I don’t like writing a million grants to get the barest hope of winning one. I think this is a stupid way for the NIH to behave and a huge waste of time and taxpayer resources. So when I tell jr and not so jr faculty to submit a ton of grants this is not an endorsement of the NIH system as I see it. It is advice to help the individual to succeed despite the problems with the system. I tee off on Glam all the time….but would never tell a new PI not to seek JIF points wherever possible. There are many things I say about how NIH grant review should go, that might seem to contrast with my actual reviewer behavior for anyone who has been on study section with YHN. (For those who are wondering, this has mostly to do with my overarching belief that NIH grant review should be fair. Even if one objects to some of the structural aspects of review, one should not blow it all up at the expense of the applications that are in front of a given reviewer.) The fact that I bang on about first and senior authorship strategy for respective career stages doesn’t mean that I believe that chronic middle-author contributions shouldn’t be better recognized.

I can walk and chew gum.

Twitter has erupted in the past few days. There are many who are very angered by a piece published in Nature Communications by AlShebli et al which can be summarized by this sentence in the Abstract: “We also find that increasing the proportion of female mentors is associated not only with a reduction in post-mentorship impact of female protégés, but also a reduction in the gain of female mentors.” This was recently followed, in grand old rump sniffing (demi)Glam Mag tradition, by an article by Sterling et al. in PNAS. The key Abstract sentence for this one was “we find women earn less than men, net of human capital factors like engineering degree and grade point average, and that the influence of gender on starting salaries is associated with self-efficacy”. In context, “self-efficacy” means “self-confidence”.

For the most part, these articles are descriptive. The authors of the first analyze citation metrics, i.e. “We analyze 215 million scientists and 222 million papers taken from the Microsoft Academic Graph (MAG) dataset42, which contains detailed records of scientific publications and their citation network”. The authors of the second conducted a survey investigation: “To assess individual beliefs about one’s technical ability we measure ESE, a five-item validated measure on a five-point scale (0 = “not confident” to 4 = “extremely confident,” alpha = 0.87; SI Appendix, section S1). Participants were asked, “How confident are you in your ability to do each of the following at this time?”:”

Quite naturally, the problem comes in where the descriptive is blurred with the prescriptive. First, because it can appear as if any suggestion of optimized behavior within the constraints of the reality that is being described, is in fact a defense of that reality. Intentional or unintentional. Second, because prescribing a course of action that accords with the reality that is being described, almost inevitably contributes to perpetuation of the system that is being described. Each of these articles is a mixed bag, of course. A key sentence or two can be all the evidence that is needed to launch a thousand outraged tweets. I once famously described the NSF (in contrast to the NIH) as being a grant funding system designed for amateur scientists. You can imagine how many people failed to note the “designed for” and accused me of calling what I saw as the victims of this old fashioned, un-updated approach “amateurs”. It did not go well then.

The first set of authors’ suggestions are being interpreted as saying that nobody should train with female PIs because it will be terrible for their careers, broadly writ. The war against the second set of authors is just getting fully engaged, but I suspect it will fall mostly along the lines of the descriptive being conflated with the prescriptive, i.e., that it is okay to screw over the less-overconfident person.

You will see these issues being argued and conflated and parsed in the Twitter storm. As you are well aware, Dear Reader, I believe such imprecise and loaded and miscommunicated and angry discussion is the key to working through all of the issues. People do some of their best work when they are mad as all get out.

but…….

We’ve been through these arguments before. Frequently, in my recollection. And I would say that the most angry disputes come around because of people who are not so good at distinguishing the prescriptive from the descriptive. And who are very, very keen to first kill the messenger.

It is time. Well past time, in fact.

Time for the Acknowledgements sections of academic papers to report on a source of funding that is all too often forgotten.

In fact, I cannot remember once seeing a paper or manuscript I have received to review mention it.

It’s not weird. Most academic journals I am familiar with do demand that authors report the source of funding. Sometimes there is an extra declaration that we have reported all sources. It’s traditional. Grants for certain sure. Gifts in kind from companies are supposed to be included as well (although I don’t know if people include special discounts on key equipment or reagents, tut, tut).

In recent times we’ve seen the NIH get all astir precisely because some individuals were not reporting funding to them that did appear in manuscripts and publications.

The statements about funding often come with some sort of comment that the funding agency or entity had no input on the content of the study or the decision to publish, or not publish, the data.

The uses of these declarations are several. Readers want to know where there are potential sources of bias, even if the authors have just asserted no such thing exists. Funding bodies rightfully want credit for what they have paid hard cash to create.

Grant peer reviewers want to know how “productive” a given award has been, for better or worse and whether they are being asked to review that information or not.

It’s common stuff.

We put in both the grants that paid for the research costs and any individual fellowships or traineeships that supported any postdocs or graduate students. We assume, of course, that any technicians have been paid a salary and are not donating their time. We assume the professor types likewise had their salary covered during the time they were working on the paper. There can be small variances but these assumptions are, for the most part, valid.

What we cannot assume is the compensation, if any, provided to any undergraduate or secondary school authors. That is because this is a much more varied reality, in my experience.

Undergraduates could be on traineeships or fellowships, just like graduate students and postdocs. Summer research programs are often compensated with a stipend and housing. There are other fellowships active during the academic year. Some students are on work-study and are paid a wage as part of school-related financial aid…in a good lab this can be something more advanced than mere dishwasher or cage changer.

Some students receive course credit, as their lab work is considered a part of the education that they are paying the University to receive.

Sometimes this course credit is an optional choice- something that someone can choose to do but is not absolutely required. Other times this lab work is a requirement of a Major course of study and is therefore something other than optional.

And sometimes…..

…sometimes that lab work is compensated with only the “work experience” itself. Perhaps with a letter or a verbal recommendation from a lab head.

I believe journals should extend their requirement to Acknowledge all sources of funding to the participation of any trainees who are not being compensated from a traditionally cited source, such as a traineeship. There should be lines such as:

Author JOB participated in this research as part of an undergraduate course fulfilling requirements for a Major in Psychology.

Author KRN volunteered in the lab for ~ 10 h a week during the 2020-2021 academic year to gain research experience.

Author TAD volunteered in the lab as part of a high school science fair project supported by his dad’s colleague.

Etc.

I’m not going to go into a long song and dance as to why…I think when you consider what we do traditionally include, the onus is quickly upon us to explain why we do NOT already do this.

Can anyone think of an objection to stating the nature of the participation of students prior to the graduate school level?

There was a thread on the Twitters today complaining about graduate students being called trainees.

The conversation went in all of the usual directions.

Because, of course, the “hot take” is correct. We have increased the number of post-graduate trainees in doctoral granting programs so as to obtain cut-rate labor to service our biomedical science research laboratory work. Yes. Absolutely.

To service the work that our federal government is asking us to do, and paying us to do, via the NIH, NSF and a few other major grant-making entities.

Grants to not-for-profit Universities and Research Institutes are, of course, a way for the US federal government to try to get cut-rate labor to service its goals. By leveraging the power of calling middle management “Professors” to justify underpaying us for the job we are doing. (“Underpaying” is a concept I have on good authority from practically every academic I’ve spoken with about their satisfaction with their compensation.)

Getting back to the pre-doctoral exploit, however, there is this notion of a valuable credential being dangled as the additional compensation. The award of the PhD (and the presumed training that comes with it) is supposed to make up for any perceived deficiencies in month to month paychecks. And it does have value. This credential is necessary for many subsequent job categories that are perceived as being desired. Or at least more desired than the jobs that are available, or the compensation that is available, for those without this particular credential.

My question for today is, would things be better in academic science if, instead of the credential model, we operated on the performance-based, resume-building model?

Everyone enters this pipeline as a fresh faced bachelor’s degree recipient and gets paid as a real employee on technician wages. Just like our current tech class. From there on, advance to the first supervisory step (like the current postdoc stage) depends merely on performance, opportunity and drive. If you just put in your time, you stay a tech. And move up on that trajectory. If you take an interest in the broader science issues and do more than just put in your hours under direction of the higher-ups, more like what we expect out of current graduate students, well, at some point you are competitive for the entry level manager position. And you get some techs to direct.

Then again, if you want to move up to the next level, junior faculty-ish we can say, you have to produce. You have to produce and show you can “run a team and act in all ways like a PI save name” and….boom. You get to be PI.

From there, if you take the extra time to also teach classes, since we’re going to have the adjunctification of traditional teaching duties rolled into this re-alignment of course, maybe you eventually earn the title of Professor. If we still have that.

At every stage, the key is that you are more or less expected to be able to make a career at that stage if that is what fits you. Techs can remain techs. Job longevity. Steady raises. Benefits. Low level managers…ditto.

Look you still have to perform. Every workplace has turnover for competence and for fit. But then again I see checkout folks at my local Costco that I’ve seen there for well over two decades. Same job, presumably with incremental raises. No need to constantly run upward merely to stay in your job.

And I assume there are those who I saw two decades ago who have moved up in managerial tracks either within Costco or in some other retail business.

What would it look like if we de-credentialed academic science?

The first (I think) season of The Expanse space drama teevee show had a small sideline of something that I think I recall enough from similar fiction to be a trope. The show has the class element built into it, especially in that first season. There are rich people on the space station and poor people. The masses struggle to survive, live dirtily and envy the people living above them who they have to knuckle under to.

Resources are finite (unlike a Star Trek type of show, or even Star Wars) and of course the control of these resources is used to further oppress the masses and bend them to the will of the elite.

Well, one of those resources in this particular show is oxygen.

A pretty big deal, of course, when the only known place with an excess of this element that is essential to life is back on Earth. Now, perhaps in some space drama situations the supply of air IS like the trolley problem. A very direct sort of “If I get the oxygen, you die and vice versa” does pop up but usually this is within the trope of self-sacrifice. Like Jack and Rose and that damn door in the Titanic blockbuster.

But the better situation is the one in The Expanse where the rich people could just sort of lean out the oxygen for the poor people. Yeah, I’m not entirely sure how that sort of thing is pulled off in a space station but whatever. Go with it. The show sets it up as a class control issue I seem to recall, but it could very well be one of limited resources. On the space station, what happens if oxygen becomes a truly limited resource? Are the powerful going to keep themselves in normal operation even if they have to starve the powerless to do it?

Have you ever lived at altitude for a few days? The mile high in Denver sucks badly enough. When you first get there, you are…weak. You aren’t dying or anything. You can live and do stuff. You just suck at it. You can’t walk briskly up a staircase like you could at sea level. You can’t even think all that well, frankly. And if you have a condition that further compromises your oxygen uptake? Yikes. Could be very painful and nauseating.

Or maybe you went to Denver on the plane and then drove straight up to the mountains and had to operate at two miles up. Say, at a Keystone meeting or a Winter Brain, right peeps? At this point, shit is getting real. And many otherwise healthy folks are feeling really, really bad.

They don’t die though. For the most part. Unless there are exacerbating health conditions. If you sent some people to Keystone Colorado to live for a while, so that you and your buddies could have the same amount of oxygen you are used to at sea level, it’s not like you are pulling the lever on the trolley track switch to make it run over someone else. They aren’t going to die.

You are just, well, making it suck for them.

Getting back to the space station drama, we reach another nasty little consideration. What if, just suppose, what IF, the rich people were knowingly overpopulating the space station so they’d have plenty of workers competing for scraps. Then nobody would be comfortable enough to come after the rich, they spend all their time just surviving. And the middle class is kept in a precarious enough situation that they don’t want to rock the boat lest their subsistence, semi-comfortable amount of oxygen might be scaled back. I mean that happens right? It happened just last month and I’m sure it was all a mistake and the Governor of the whole place wasn’t really trying to rein in the feisty Professors…

Whoops.

Did I give away the game?

Damn.

US Doctorates Awarded

April 29, 2020

One thing that I usually forget and Michael Hendricks has to remind me of is that you cannot explain the Boomer hegemony on the basis of the population size of the US Baby Boom relative to GenX.

I was struck by this type of graph back a few years ago when the total number of births in the US, not the rate- the raw number, finally exceeded the peak year (1961 I believe) of the baby boom (this isn’t extending forward enough in this particular version but I posted it a while ago so there you are). It makes it pretty clear why GenX always feels squeezed between the Boomers and their children, the Millennials, here termed Gen Y.

I used to think about this when I was grousing about the way GenX has been treated in academia, academic careers and grant funded science. But the issue is even worse because, as Professor Hendricks points out, GenX actually has MORE PhDs than do the Boomers. Not proportionally…total.

Roughly speaking the Boomers started exiting graduate school around 1972. They continued to be the majority of doctorates earned until the early 1990s when the first of the GenXers started to exit graduate school. You can see the year over year stability of doctorate production for the Boomers, followed by the acceleration in numbers when GenX hit their mid 20s. A smaller population of people, but more PhDs being generated, year after year.

Why? Well it’s complicated. The late 80s were still dicey economic times, prior to the Clinton era tech boom, and we earlier GenX were thinking grad school was a decent place to park ourselves for a while. The doubling hit the NIH and more money was available for graduate stipends. There was the traditional loose talk about massive older faculty retirements, but this time coupled with data! I.e., it was presented along with the anticipated need for higher education for the Boomer echo (aka their kids, aka the Millennials) that was already obvious to demographers. And, as mentioned in a prior post, everyone was either ignoring, or not realizing the impact of, the end of mandatory retirement policies. Apparently nobody realized the great investment in higher education during the 50s and 60s was not in fact merely a temporary casualty of the Reagan “recovery” strategy. They didn’t realize the taxpayers weren’t going to come back to the table once economic times were better (and they were, during the early 90s onward until Bush’s wars messed everything up). They didn’t realize the adjunctification of education would be the outcome of Reaganist policy.

So, as we know, the NIH doubling did not result in the funding of new labs for younger folks, the GenXers that were exiting grad school during the doubling. It resulted in the expansion of labs under one existing PI. It resulted, to a lesser degree, in the expansion of funding to existing professors in institutions or departments that mayhap previously did not seek grants from the NIH as assiduously. It facilitated Deans who were responding to the gradual pull-back of public, State level funds from the Universities with a demand that their faculty secure more and more external funding. Which required…warm bodies.

The growing labs needed more graduate students to do the work. And then, of course, mid level management and higher skilled labor..enter the postdoc! It was a tremendous time for Boomer faculty (and pre-Boomer, let’s not let them off the hook). They were the ones that invented up reasons why graduate school now had to take 5, 6 or more years for their students to “be competitive”. (In an entirely made up game of scarcity directed to the benefit of…you guessed it). Then it was “oh but you definitely need some postdoctoral training to be competitive”. Never mind that they themselves barely did 2 years postdoc, if any, statistically speaking before they landed their jobs starting back in the early 1970s. But it was awesome! The NIH budget doubling meant it was easy to get and keep funding. Easy for the more energetic to get more and more funding and keep on growing their labs. More and more hands under each Professor’s direction and supervision. This is why nowadays when I ask people what they think of as a medium sized labs they settle on numbers around 7. “Medium”. Look at the “lab picture” page on your average faculty website these days. It coincides with this sort of interpretation of what people think of as “a lab” in academic, NIH grant funded biomedical science.

Anyway.

The main point is that GenX is not squeezed merely because we are fewer in (population) number relative to the Boomers. There are actually more of us with PhDs. We’ve just been kept away from the levers of power in disproportion to our PhD numbers.

The concept that we are “eating our seed corn” if we don’t do X, Y or Z to support junior scientists is completely misinformed, inapplicable and wrong.

This was super popular back when the ESI issues were being debated and the NIH was trying to justify giving special consideration to fund the applications of newcomers to the system. I do happen to support continued efforts to help those who are on the short end of the NIH grant award stick, but this is mostly about the concept and how it leads to bad thinking.

“Eating our seed corn” has raised its misguided head in the Time of Corona as we are discussing University polices that have, apparently, started to slow walk new hires, pull back startup funds of recent hires, etc. There was even a little hint of graduate programs pulling offers of graduate stipends if the candidates had not responded to an offer yet, despite the deadline for response being in the future.

This is bad. Yes, it’s devastating for those individuals who are in the transition zones right now and are being denied opportunities that were in front of them. It’s devastating for departments and laboratories that were very much looking forward to securing new contributors. What it is NOT is “eating our seed corn”.

For those that have never so much as planted a food garden…. I am going to risk insulting your intelligence and point out the obvious. “Seed corn” concerns were from a time of agriculture when a person hoping to grow a crop couldn’t just run down to the feed store and buy seeds whenever they wanted to. It comes from a time when you had to save your own seeds from the harvest so that you could use them, about six months later after the winter snows had cleared, to grow next year’s crop(s). No problem right? Millennia of agriculture agrees- set aside enough seeds from the harvest to plant for next year. Easy peasy.

But…sometimes there wasn’t enough food to get through the winter. Seeds, of course, are also food. The corn kernels that we eat are those same seeds that can be planted to grow next year’s crops. And if you eat all your seeds to make it through this winter, you are going to have no corn crop next year. Or the year after that. Or ever. Until someone takes pity on you and gives you some of their seed corn.

You can’t just make new seeds after you’ve eaten them.

New scientists are not like this. At all.

We CAN make new ones whenever we want, even if we’ve skipped several cycles. As I’ve noted in another context, if we have a department that literally closes its graduate program admissions for five years…it can start right back up in year six with essentially zero headaches. The same professors suddenly forgot how to train graduate students? Please.

That’s because the proper analogy is more like acorns. Graduate student production is a perennial, not an annual, crop. If you have a big old oak tree on your property, it’s gonna grow acorns. Every year. We don’t chop it down to eat the tree when we get really hungry in the winter, right? It’s not edible. So next year, it’s gonna grow more acorns. And the cycle of health for that tree is really, really long. It doesn’t care if you ate 25% or 100% of the acorns it grew last year, it’s going to produce more next year. And the year after that. And after that.

If growing conditions are terrible, sure, many perennial agricultural producers may have low output that season. Some may even fail to produce anything edible that season at all. But as soon as conditions return to favorable, that plant produces another crop. It takes a really, really bad set of conditions, likely sustained for years, to kill an oak tree. Short of devastating trauma, that is.

Sticking with the agricultural references, we are facing a water shortage and not a wildfire. We are not Little House on the Prairie where we have only ourselves on which to rely for seeds. We are most certainly not solely dependent on annual food crops. The enterprise of scientific research in the US is a perennial. It has persistence and resilience.

The ESI debate was no different. We were not then, and are not now, talking about the sort of existential emergency that is described by “eating our seed corn”. We are talking about priorities of how many plants and in what variety we can support, given a water supply that is rationed by external forces. We’re only getting so many acre-feet this year. And it looks to be less than we got last year.

The point is that we need to make rational, thinking choices about what we are going to prioritize and support. We should not panic, running all about screaming that every crop will be gone forever if we don’t water it just like it was watered last year.

Despite evidence to the contrary on this blog, some people who don’t like to write have occasionally said things in the vein of “oh, but you are such a good writer”. Sometimes this is by way of trying to get me to do some writing for them in the non-professional setting. Sometimes this is a sort of suggestion that somehow it is easier for me to write than it is for them to write, in the professional setting.

I don’t know. I certainly used to be a better writer, and my dubious blogging hobby has certainly contributed to making my written product worse. Maybe I’m just getting that Agatha Christie thing early (her vocabulary narrowed toward her final books; some suggest that was evidence of dementia).

But for decades now, I have viewed my primary job as a writing job. When it comes right down to the essentials, an academic scientist is supposed to publish papers. This requires that someone write papers. I view this as the job of the PI, as much as anyone else. I even view it as the primary responsibility of the PI over everyone else, because the PI is where the buck stops. My personnel justification blurb in every one of my grants says so: that I’ll take responsibility for publishing the results. Postdocs are described as assisting me with that task. (Come to think of it, I can’t remember exactly how most people handle this in grants that I’ve reviewed.)

Opinions and practices vary on this. Some would assert that no PI should ever be writing a primary draft of a research paper and only rarely a review. Editing only, in the service of training other academic persons in the laboratory to, well, write. Some would kvetch about the relative ratio of writing effort of the PI versus other people in the laboratory. Certainly, when my spouse would prefer I was doing something other than writing, I get an earful about how in labs X, Y and Z the PI never writes and the awesome postdocs basically just hand over submit-ready drafts, and why isn’t my lab like that. But I digress.

I also have similar views on grant writing, namely that in order to publish papers one must have data to draw upon, and that requires funds. To generate the data, therefore, someone has to write grant proposals. This is, in my view, a necessary job. And once again, the buck stops with the PI. Once again, practices vary in terms of who is doing the writing. Once again, strategies for writing those grants vary. A lot. But what doesn’t vary is that someone has to do a fair bit of writing.

I like writing papers. The process itself isn’t always smooth and it isn’t always super enjoyable. But all things being equal, I feel LIKE I AM DOING MY JOB when I am sitting at my keyboard, working to move a manuscript closer to publication. Original drafting, hard-core text writing, editing, drawing figures and doing analysis iteratively as you realize your writing has brought you to that necessity…I enjoy this. And I don’t need a lot of interruption (sorry, “social interaction”) when I am doing so.

In the past year or so, my work/life situation has evolved to where I spend 1-2 evenings a week in my office, up until about 11 or 12 after dinner, just writing. I duck out for dinner so that my postdocs have no reason to stick around, and then I come back in when the coast is clear.

I’m finding life in the time of Corona to simply push those intervals of quiet writing time earlier in the day. I have a houseful of chronologically shifted teens, which is awesome. They often don’t emerge from their rooms until noon…or later. Only my youngest needs much of my input on breakfast and even that is more a vague feeling of lingering responsibility than actual need. Sorry, not trying to rub it in for those of you with younger children. Just acknowledging that this is not a bad time in parenthood for me.

So I get to write. It’s the most productive thing I have to do these days. Push manuscripts closer and closer to being published.

It’s my job. We have datasets. We have things that should and will be papers eventually.

So on a daily and tactical level, things are not too bad for me.