Good Mentoring

July 1, 2021

One of the recurring discussions / rants in academic circles is the crediting of good mentoring to the Professor. I’m not going to tag the stimulus of the day because it generalizes and because I have no idea what motivates any particular person on any particular day.

There does seem to be a common theme. A Professor, usually female and usually more-junior, is upset that her considerable efforts to mentor students or postdocs do not seem to get as much credit as they should. This is typically contextualized by oblique or specific reference to the fact that some other professors do not put in the effort to “mentor” their trainees as well, and that this is not penalized. Furthermore, there is usually some recognition that a Professor’s time is limited and that the shoddiness of the mentoring by those other peers lets them work on what “really counts”, i.e., papers and grants, to an advantage over the self-identified good mentor.

Still with me?

There is often a further accusation, implicit or explicit, that those other peer Professors are not just advantaged by not spending time on “mentoring” but are also advantaged by doing anti-mentoring bad things to their trainees to drive them to high scientific/academic output, which further advantages the bad-mentor colleagues against our intrepid hero, Good Mentor.

Over on Twitter I’ve been pursuing one of my usual conundrums as I try to understand the nature of any possible fixes we might put in place with regard to “good” and “bad” academic mentoring, i.e., the role of career outcome in influencing how the mentee and evaluating bodies might view the quality of mentoring practices. My point is that I’ve seen a lot of situations where the same PI is viewed as providing both a terrible and a good-to-excellent mentoring environment by different trainees. And the views often align with whether the trainee is satisfied or dissatisfied with their career outcomes, and align less well with any particular behaviors of the PI.

Here, I want to take up the nature of tradeoffs any given Professor might have, in the context of trying to mentor more than one academic trainee, yes concurrently, but also in series.

My assertion is that “good mentoring” takes time, it takes the expenditure of grant and other funds, and it takes the expenditure of social capital, in the sense of opportunities. For most of the Professoriate, these are all limited resources.

Let us suppose we have two graduate students nearing completion, in candidacy and up against a program requirement for, e.g., three published papers. Getting to the three published papers, assuming all else equal between the two trainees, can be greatly affected by PI throw down. Provision of assistance with drafting the manuscript, versus terse, delayed “markup” activities? Insisting the paper needs to get into a certain high-JIF journal, versus a strategy of hitting solid society journals? Putting research dollars into the resources, capital or personnel, that are going to speed progress to the end, versus expecting the trainee to just put in more hours themselves?

A PI should be “fair”, right? Treat everyone exactly the same, right? Well…it is never that simple. Research programs have a tendency not to go to plan. Projects can appear to level themselves up or down after each experiment. Peer review demands vary *tremendously* and not only by journal JIF.

Let us suppose we have two postdocs nearing a push for an Assistant Professor job. This is where the opportunities can come into play. Suggesting a fill-in for a conference presentation. Proposing conference symposia and deciding which trainee’s story to include. Choosing which project to talk about when the PI is herself invited. Pushing collaborations. Manuscript review participation with a note to the Associate Editor. Sure, it could be “fair”. But this is a game of competitive excellence and tiny incremental improvements to the odds of landing a faculty position. Is it “good mentoring” if taking a Harrison Bergeron approach means you never seem to land any postdocs from the laboratory in the plummiest of positions? When a focal throwdown on one would mean they have a good chance but divide-and-conquer fails to advance anyone?

More importantly, the PIs themselves have demands on their own careers. “Aha”, you cry, “this selfishness is what I’m ON about.” Well yes…..but how good is the mentoring if the PI doesn’t get tenure while the grad student is in year 3? Personal experience on that one, folks. “Not good” is the answer. Perhaps more subtly, how is the mentoring going to be for the next grad student who enters the laboratory when the PI has published prior trainees “fairly” but has not generated the glamourous publications needed to land that next grant? How much better is it for a postdoc entering the job market when the PI has already placed several Assistant Professors before them?

Or, less catastrophically, what if the PI has expended all of the grant money on the prior student’s projects, which that student constructed and which just happened to be highly expensive (“my mentor supports my research (1-5)”)? Is that good mentoring? Well yeah, for the lucky trainee, but it isn’t fair in a serial sense, is it?

Another common theme in the “good mentor” debate is extending “freedom” to the trainee. Freedom to work on “their ideas”. This is a tough one. A PI’s best advice on how to successfully advance a science career is going to be colored, in many cases, by the practicality of making reasonable and predictable forward progress. I recently remarked that the primary goal of a thesis-proposal committee is to say “gee that’s nice, now pick one quarter of what you’ve proposed and do that for your dissertation/defense”. Free range scientists often have much, much larger ideas than can fit into a single unit of productivity. This is going to almost inevitably mean the PI is reining in the “freedom” of the trainee. Also see: finite resources of the laboratory. Another common PI mentoring pinch point on “freedom” has to do with getting manuscripts actually published. The PI has tremendous experience in what it takes to get a paper into a particular journal. They may feel it necessary to push the trainee to do / not do specific experiments that will assist with this. They may feel it necessary to edit the hell out of the trainee’s soaring rhetoric which goes on for three times the length of the usual Intro or Discussion material. …..this does come at a cost to creativity and novelty.

If the “freedom” pays off, without undue cost to the trainee or PI or other lab members…fantastic! “Good mentoring!”

If that “freedom” does not pay off (grad student without a defendable project or publishable data, significant expenditure of laboratory resources wasted for no return), well, this is “Bad mentoring!”

A different outcome means that the very same behaviors on the part of the PI are evaluated as polar opposites.

A quick google search turns up this definition of prescriptive: “relating to the imposition or enforcement of a rule or method.” Another brings up this definition, and refinement, for descriptive: “describing or classifying in an objective and nonjudgmental way….. describing accents, forms, structures, and usage without making value judgments.”

We have trod this duality a time or two on this blog. Back in the salad days of science blogging, it led to many a blog war.

In our typical fights, I or PP would offer comments describing the state of the grant-funded, academic biomedical science career as we see it. This would usually be in the course of offering what we saw as some of the best strategies and approaches for the individual who is operating within this professional sphere. Right now, as is, as it is found. Etc. For them to succeed.

Inevitably, despite all evidence, someone would come along and get all red about such comments, as if we were prescribing, rather than merely describing, whatever specific or general reality was at issue.

Pick your issue. I don’t like writing a million grants to get the barest hope of winning one. I think this is a stupid way for the NIH to behave and a huge waste of time and taxpayer resources. So when I tell jr and not so jr faculty to submit a ton of grants this is not an endorsement of the NIH system as I see it. It is advice to help the individual succeed despite the problems with the system. I tee off on Glam all the time….but would never tell a new PI not to seek JIF points wherever possible. There are many things I say about how NIH grant review should go that might seem to contrast with my actual reviewer behavior for anyone who has been on study section with YHN. (For those who are wondering, this has mostly to do with my overarching belief that NIH grant review should be fair. Even if one objects to some of the structural aspects of review, one should not blow it all up at the expense of the applications that are in front of a given reviewer.) The fact that I bang on about first and senior authorship strategy for the respective career stages doesn’t mean that I believe chronic middle-author contributions shouldn’t be better recognized.

I can walk and chew gum.

Twitter has erupted in the past few days. There are many who are very angered by a piece published in Nature Communications by AlShebli et al which can be summarized by this sentence in the Abstract: “We also find that increasing the proportion of female mentors is associated not only with a reduction in post-mentorship impact of female protégés, but also a reduction in the gain of female mentors.” This was recently followed, in grand old rump sniffing (demi)Glam Mag tradition, by an article by Sterling et al. in PNAS. The key Abstract sentence for this one was “we find women earn less than men, net of human capital factors like engineering degree and grade point average, and that the influence of gender on starting salaries is associated with self-efficacy”. In context, “self-efficacy” means “self-confidence”.

For the most part, these articles are descriptive. The authors of the first analyze citation metrics, i.e., “We analyze 215 million scientists and 222 million papers taken from the Microsoft Academic Graph (MAG) dataset [42], which contains detailed records of scientific publications and their citation network”. The authors of the second conducted a survey investigation: “To assess individual beliefs about one’s technical ability we measure ESE, a five-item validated measure on a five-point scale (0 = “not confident” to 4 = “extremely confident,” alpha = 0.87; SI Appendix, section S1). Participants were asked, “How confident are you in your ability to do each of the following at this time?””
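(For anyone unfamiliar with that “alpha = 0.87”: it refers to Cronbach’s alpha, a standard internal-consistency statistic for multi-item scales. The following is not the authors’ code, just a minimal sketch of the computation on made-up responses, in case it helps demystify the quoted sentence.)

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up data: 8 respondents x 5 items, each scored 0-4
# (0 = "not confident" ... 4 = "extremely confident")
responses = np.array([
    [3, 4, 3, 3, 4],
    [1, 1, 2, 1, 1],
    [2, 3, 2, 2, 3],
    [4, 4, 4, 3, 4],
    [0, 1, 0, 1, 0],
    [2, 2, 3, 2, 2],
    [3, 3, 3, 4, 3],
    [1, 2, 1, 1, 2],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # items that move together give a high alpha
```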

Quite naturally, the problem comes in where the descriptive is blurred with the prescriptive. First, because any suggestion of optimized behavior within the constraints of the reality that is being described can appear to be, in fact, a defense of that reality. Intentional or unintentional. Second, because prescribing a course of action that accords with the reality that is being described almost inevitably contributes to perpetuation of the system that is being described. Each of these articles is a mixed bag, of course. A key sentence or two can be all the evidence that is needed to launch a thousand outraged tweets. I once famously described the NSF (in contrast to the NIH) as being a grant funding system designed for amateur scientists. You can imagine how many people failed to note the “designed for” and accused me of calling what I saw as the victims of this old fashioned, un-updated approach “amateurs”. It did not go well then.

The first set of authors’ suggestions are being interpreted as saying that nobody should train with female PIs because it will be terrible for their careers, broadly writ. The war against the second set of authors is just getting fully engaged, but I suspect it will fall mostly along the lines of the descriptive being conflated with the prescriptive, i.e., that it is okay to screw over the less-overconfident person.

You will see these issues being argued and conflated and parsed in the Twitter storm. As you are well aware, Dear Reader, I believe such imprecise and loaded and miscommunicated and angry discussion is the key to working through all of the issues. People do some of their best work when they are mad as all get out.

but…….

We’ve been through these arguments before. Frequently, in my recollection. And I would say that the most angry disputes come around because of people who are not so good at distinguishing the prescriptive from the descriptive. And who are very, very keen to first kill the messenger.

Career Jealousy

August 26, 2019

As most of us will have experienced at one time or another, it is totally unfair that that person, over there, got this good thing that we did not get. They are, after all, no better or smarter than us, they were just lucky.

In the right place, at the right time. Anyone could have fallen into that luck. And we, ourselves, have not had such good fortune in our careers and of course that is unfair.

I have had my misfortunes in this career. I have also had several great bits of good fortune. I have most definitely felt the monster of jealousy bite over the observed good fortunes of other people who are my approximate peers at a given stage. In some cases I have felt that those folks, over there, are not just lucky but are super well-deserving. This is usually because I think they are brilliant and/or highly productive scientists, independent of the luck they enjoyed. In other cases I ground my teeth that such a dumbass incompetent lucked into that particular area of fortune while I, clearly more deserving, struggled.

I surmise, somewhat indirectly, that trainees these days are no different than I was. And they can be jealous of various things that seem to be falling into place for their peers, but not for them. It is, of course, amusing that their lucky peers often seem to be equally resentful of the luck of the person feeling sorry for themselves. It is a cycle.

As you know, Dear Reader, I am doing an okay job of surviving in this career. So far, at least. It may crater any day, it may continue until I drop dead. But I’ve been doing this long enough to see a broad arc of what happens with people and their careers. And you know what? The distribution is an iron law of life.

Maybe this is just me, but I think the most profound effect that scientific training and participation has had on my mindset is that I think of just about everything in terms of distributions. In the distribution of career fortune, sure, there are going to be those that always get the sunny side of life. Surely a few people, just a few, have literally everything go their way. And conversely (and more sadly), some few people will have literally everything related to chance events fall against them. But the vast majority are in the middle. Where sometimes we get the good stuff happening and sometimes we get the bad stuff happening.

It’s only possible to see this in the career arc by living it for a while.

At least one fellow graduate student that I thought achieved very, very high profile papers through no virtue of themselves, personally, ended up struggling just to earn the PhD and quickly exited science.

Postdocs that were much more productive in hotter fields than mine ended up out of science.

Then there are the faculty. Oh, yes, the faculty arcs. When I first started there was a fairly restricted number of individuals who I compared myself with. People in either approximately similar spheres of research or individuals in my own institution working under similar contingencies, albeit in strikingly different fields. Limiting myself to the first type, oh boy, you better believe I was slightly envious of the ones that seemed like shooting stars. Super productive or grant-laden or just ones that seemed to enjoy a better reputation as scientists. Some were viewed by me as highly deserving but one still gets a bit jealous, eh?

Well, shit happens. Maybe I had huge career and/or personal hurdles but eventually so did my peers. Because life happens. Some hurdles were run of the mill and some were truly life-changingly horrific. Some folks survived, some recovered and eventually thrived, and some said good-bye to academic science. Many folks just kind of faded away and I don’t know them well enough to know why. With some other folks it is clear that we only see part of the picture in public and there’s some weird shit going on somewhere. Not my circus, not my peanuts, but it is good to appreciate the impact of both good and bad circumstances even if you don’t quite know what they might be.

I keep learning about bullshit some peer or other had to put up with at various stages of his or her career. Even ones that seemed like they had it all. E.g., my previous institution was particularly uneven in terms of the insider club and the benefits they enjoyed relative to the rest of us. But eventually you realize there have been gradations of treatment within the ranks of the Anointed Ones. Within my fields of study, there are peers that seem like they were in the right research groups/departments/collaborations at the right time…but it turns out that in reality they were in a living hell.

Much of this information about other people’s careers has come to me long after I’ve made peace with my notion of the distribution of fortune. So I mostly just feel sorry for them and I lament the effect on their careers almost as much as I resent the effects of ill fortune on my own.

But I don’t know what to tell trainees. I just do this grampa thing of relating anecdotes akin to the ones above and telling them the pendulum swings back and forth. Their peers who seem to be riding high will eventually be hit by misfortune. And anyone thinking they themselves lack “luck” will eventually look back and admit some good fortune came their way. I’m sure it doesn’t help much.

A semi-thread from frustrated bioinformaticians emerged on Twitter recently. In it they take shots at their (presumed) collaborators who do not take their requests for carefully curated and formatted data to heart.

Naturally this led me to taunt the data leech OpenScienceEleventy waccaloons for a little bit. The context is probably a little different (i.e., it seems to reference established collaborations between data-generating and data-analyzing folks) but the idea taps on one of my problems with the OpenScience folks. They inevitably don’t just mean they want access to the data that went into your paper but ALL of your data related to it. Down to the least little recorded unit (someone in the fighty thread said he wanted raw electrophysiological recordings to test out his own scoring algorithm or some such). And of course they always mean that it should be nicely formatted in their favorite way, curated for easy understanding by computer (preferably) and, in all ways, the burden should be on the data-generating side to facilitate easy computational analysis. This is one of the main parts that I object to in their cult/movement: data curation in this way comes with a not-insubstantial cost, expended to the benefit of some internet random. I also object on the basis of the ownership issues, bad actors (think: anti-science extremists of various stripes including right wing “think tanks” and left wing animal rights terrorists), academic credit, and opportunity loss, among other factors.

However, the thought of the day is about data curation and how it affects the laboratory business and my mentoring of science trainees. I will declare that consistent data collation, curation, archiving and notation is a good thing for me and for my lab. It helps the science advance. However, these things come at a cost. And above all else when we consider these things, we have to remember that not every data point collected enters a scientific manuscript or is of much value five or ten years down the line. Which means that we are not just talking about the efficient expenditure of effort on the most eventually useful data, we’re talking about everything. Does every single study get the full data analysis, graphical depiction and writeup? Not in my lab. Data are used at need. Data are curated to the extent that it makes sense and sometimes that is less than complete.

Data are collected in slightly different ways over time. Maybe we changed the collection software. Maybe our experiments are similar, but have a bit of a tweak to them. Maybe the analyses that we didn’t think up until later might be profitably applied to earlier datasets but…..the upside isn’t huge compared to other tasks. Does this mean we have to go back and re-do the prior analyses with the current approach? If we want to, this sometimes requires that third and fourth techniques (programs, analysis strategies, etc.) be created and applied. This comes with additional effort costs. So why would we expend those efforts? If there was interest or need on the part of some member of the laboratory, sure. If a collaborator “needs” that analysis, well, this is going to be case by case on the basis of what it gains us, the collaboration or maybe the funded projects. Because it all costs. Time, which is money, and the opportunity cost of those staff members (and me) not doing other tasks.

Staff members. Ah, yes, the trainees. I am totally supportive of academic trainees who want to analyze data and come up with new ways to work with our various stock-in-trade data sets and archive of files. This, btw, is what I did at one of my postdoctoral stops. I was working with a model where we were somewhat captive to the rudimentary data analyses provided by the vendor’s software. The data files were essentially undocumented, save for the configuration data, dates and subject identifiers. I was interested in parsing the data in some new ways so I spent a lot of time making it possible to do so. For the current files I was collecting and for the archive of data collected prior to my arrival and for the data being collected by my fellow trainees. In short, I faced the kind of database that OpenData people claim is all they are asking for. Oh, just give us whatever you have, it’s better to have anything even if not annotated, they will claim. (Seriously). Well, I did the work. I was able to figure out the data structure in the un-annotated files. This was only possible because I knew how the programs were working, how the variables could be set for different things, what the animals were doing in a general sense in terms of possible responses and patterns, how the vendor’s superficial analysis was working (for validation), what errors or truncated files might exist, etc. I wrote some code to create the slightly-more-sophisticated analyses that I happened to dream up at the time. I then started on the task of porting my analysis to the rest of the lab. So that everyone from tech to postdoc was doing initial analysis using my programs, not the vendor ones. And then working that into the spreadsheet and graphing part of the data curation. And THEN, I started working my way back through the historical database from the laboratory.

It was a lot of work. A lot. Luckily my PI at the time was okay with it and seemed to think I was being productive. Some of the new stuff that I was doing with our data stream ended up being included by default in most of our publications thereafter. Some of it ended up in its own publication, albeit some 12 years after I had completed the initial data mining. (This latter paper has barely ever been cited but I still think the result is super cool.) The data mining of files from experiments that were run before I entered the laboratory required a second bit of work, as you might readily imagine. I had to parse back through the lab books to find out which subject numbers belonged together as cohorts or experiments. I had to separate training data from baseline / maintenance studies, from experimental manipulations of acute or longitudinal variety. And examine these new data extractions in the context of the actual experiment. None of this was annotated in the files themselves. There wasn’t really a way to even do it beyond 8-character file names. But even if it had been slightly better curated, I’m just not seeing how it would be useful without the lab books and probably some access to the research team’s memory.
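For flavor, the kind of parsing work that involved looked something like the sketch below. Everything here is invented (vendor format, field layout, the lot), but un-annotated fixed-layout binary files of that era generally yield to this sort of approach once you know how the programs set their variables:

```python
import struct
from pathlib import Path

# Hypothetical layout, reverse-engineered from un-annotated vendor files:
# a fixed header (subject ID, session date, raw config bytes), then
# repeating (seconds-into-session, event-code) records to end of file.
HEADER_FMT = "<8sI16s"   # subject ID, date as YYYYMMDD integer, config bytes
RECORD_FMT = "<fH"       # event time (float seconds), event code
HEADER_SIZE = struct.calcsize(HEADER_FMT)
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def parse_session(path: Path) -> dict:
    raw = path.read_bytes()
    subject, date, config = struct.unpack_from(HEADER_FMT, raw, 0)
    events = [
        struct.unpack_from(RECORD_FMT, raw, offset)
        for offset in range(HEADER_SIZE, len(raw) - RECORD_SIZE + 1, RECORD_SIZE)
    ]
    return {
        "subject": subject.rstrip(b"\x00").decode("ascii"),
        "date": date,
        "config": config,
        "events": events,  # [(t_seconds, event_code), ...]
    }

# The crucial step, per the above: validate the totals against the
# vendor's own superficial summary for the same session before
# trusting anything the new parser produces.
```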

Snapping forward to me as a PI, we have a somewhat similar situation in my lab. We have a behavioral assay or two run by proprietary commercial software that generate data files that could, in theory, be mined by anyone who was interested* in some aspect of the behavior that struck their fancy. It would still take a lot of work and at least some access to the superordinate knowledge about the studies a given subject/date-stamped file related to. I am happy for trainees in my lab to play with the data files, present and past. I’m happy for them to even replace analysis and reporting strategies that I have developed with their own, so long as they can translate this to other people in the lab. I.e., I am distinctly unkeen on the analysis of data being locked up in the proprietary code or software on a single trainee’s laptop. If they want to do that, fine, but we are going to belt-and-suspenders it. There is much value in keeping a set of data analysis structures more or less consistent over time. Sometimes the most rudimentary output from a single data file (say, how many pellets that rat earned) is all that we need to know, but we need to know that value has been computed consistently across years of my work.
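What “belt-and-suspenders” means in practice is something like the sketch below: one canonical routine, shared across the lab, for any value we care about over the long haul. (The file format and function here are hypothetical, not my lab’s actual code.)

```python
from pathlib import Path

def pellets_earned(session_file: Path) -> int:
    """Canonical pellet count for one session file.

    Every analysis in the lab calls this one function, so "pellets
    earned" means the same thing in a 2012 dataset and a 2021 dataset,
    whatever a trainee does downstream with the number.
    """
    # Assumed vendor export: comma-separated lines, with pellet
    # deliveries logged as "EVENT,P,<timestamp>".
    count = 0
    for line in session_file.read_text().splitlines():
        fields = line.split(",")
        if len(fields) >= 2 and fields[0] == "EVENT" and fields[1].strip() == "P":
            count += 1
    return count
```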

I have at least two interests when it comes to data curation in my lab. I need some consistency and I need to be able to understand, as the PI, what I am looking at. I need to be able to go back to some half-remembered experiment and quickly whip up a preliminary data or slide figure. This leans towards more orthodoxy of analysis. Towards orthodoxy of data structures and formats. Towards orthodoxy in the graphs, for pete’s sake. My attempts to manage this into reality have had mixed results, I will note. At the level of an individual staffer, satisfying some data curation goal of the PI (or anyone else, really) can seem like make-work. And it is definitely work to the ends of someone else, I just happen to be the PI and am more equal than anyone else. But it is work. And this means that short cuts are taken. Often. And then it is down to the effort of someone to bring things back up to curation standard. Sure it may seem to be “just as easy” for the person to do it the way I want it, but whaddayaknow, they don’t always see it that way. Or they are rushed. Or mean to get to it at the end of the study but then forget. Tomorrow. When it is really needed.

I get this. It is a simple human reality.

In my lab, I am the boss. I get to tell staff members what to do and if they won’t do it, eventually, I can fire them. Their personal efforts (and mine for that matter) are supposed to be directed towards the lab good, first, and the institutional good second. The NIH good is in there somewhere but we all know that since a grant is not a contract, this is a very undefined concept.

There is very little that suggests that the effort of my laboratory staff has to be devoted to the good of some other person who wants access to our data in a way that is useful to them. In fact, I am pretty sure in the extreme case that if I paid a tech or trainee from my grant to work substantial amounts of time on a data analysis/curation project demanded of us by a private for-profit company solely for their own ends, this would violate the rules. There would probably be a technical violation if we did the same for a grant project funded to another researcher if the work had nothing whatever to do with the funded aims in my own lab that were paying the staff member’s salary.

Data curation for others’ ends costs. It costs time and that means that it costs money. It is not trivial. Even setting up your data stream within the lab so that it could possibly be easier to share with external data miners costs. And the costs apply to all of the data collected, not just the data that is eventually, one day, requested of you and ends up in a paper.

__

*As it happens, we just fielded a request, but this person asked us to collaborate, rightfully so.

The best career advice

January 31, 2019

For some reason the world likes to promote the career advice of those who have been super successful. The Art of the Deal is one of a related genre of published vanity pieces in which the supposedly successful person tells you how they did it. We ask stars of all careers to give advice to the youngsters. Academic science is no different. Famous people are published and promoted whenever they opine about the way the kids these days should behave in careers. We give awesome scores to NIH “training” applications that involve the most successful scientists as training supervisors….and poor scores to ones that involve less-accomplished mentors.

And yet when you ask real academic science folks about this, they seem to recognize that the best advice may not come from the haut monde.

As mentioned in Science, a new report from the US Academies of Sciences, Engineering, and Medicine has deduced that we have a problem with too many PhDs and not enough of the jobs that they want.

The report responds to many years of warning signs that the U.S. biomedical enterprise may be calcifying in ways that create barriers for the incoming generation of researchers. One of the biggest challenges is the gulf between the growing number of young scientists who are qualified for and interested in becoming academic researchers and the limited number of tenure-track research positions available. Many new Ph.D.s spend long periods in postdoctoral positions with low salaries, inadequate training, and little opportunity for independent research. Many postdocs pursue training experiences expecting that they will later secure an academic position, rather than pursuing a training experience that helps them compete for the range of independent careers available outside of academia, where the majority will be employed. As of 2016, for those researchers who do transition into independent research positions, the average age for securing their first major NIH independent grant is 43 years old, compared to 36 years old in 1980.

No mention (in the executive summary / PR blurb) that the age of first R01 has been essentially unchanged for a decade despite the NIH ESI policy and the invention of the K99 which is limited by years-since-PhD.

No mention of the reason that we have so many postdocs, which is the uncontrolled production of ever more PhDs.

On to the actionable bullet points that interest me.

Work with the National Institutes of Health to increase the number of individuals in staff scientist positions to provide more stable, non-faculty research opportunities for the next generation of researchers. Individuals on a staff scientist track should receive a salary and benefits commensurate with their experience and responsibilities.

This is a recommendation for research institutions but we all need to think about this. The NCI launched the R50 mechanism in 2016 and they have 49 of them on the books at the moment. I had some thoughts on why this is a good idea here and here. The question now, especially for those in the know with cancer research, is whether this R50 is being used to gain stability and independence for the needy awardee or whether it is just further larding up the labs of Very Important Cancer PIs.

Expand existing awards or create new competitive awards for postdoctoral researchers to advance their own independent research and support professional development toward an independent research career. By July 1, 2023, there should be a fivefold increase in the number of individual research fellowship awards and career development awards for postdoctoral researchers granted by NIH.

As we know, the number of NIH fellowships has remained essentially fixed despite the huge escalation of “postdocs” funded on research grant mechanisms. We really don’t know the degree to which independent fellowships simply anoint the chosen (population-wise) versus aid the most worthy and deserving candidates to stand out. Will quintupling the F32s magically make more faculty slots available? I tend to think not.

As we know, if you really want to grease the skids to faculty appointment the route is the K99/R00 or basically anything that means the prospective hire “comes with money”. Work on that, NIH. Quintuple the K99s, not the F32s. And hand out more R03 or R21 or invent up some other R-mechanism that prospective faculty can apply for in place of “mentored” K awards. I just had this brainstorm. R-mechs (any, really) that get some cutesy acronym (like B-START) and can be applied for by basically any non-faculty person from anywhere. Catch is, it works like the R00 part of the K99/R00: only awarded upon successful competition for a faculty job and the offer of a competitive startup.

Ensure that the duration of all R01 research grants supporting early-stage investigators is no less than five years to enable the establishment of resilient independent research programs.

Sure. And invent up some “next award” special treatment for current ESIs. And then a third-award one. And so on.

Or, you know, fix the problem for everyone which is that too many mouths at the trough have ruined the cakewalk that experienced investigators had during the eighties.

Phase in a cap – three years suggested – on salary support for all postdoctoral researchers funded by NIH research project grants (RPGs). The phase-in should occur only after NIH undertakes a robust pilot study of sufficient size and duration to assess the feasibility of this policy and provide opportunities to revise it. The pilot study should be coupled to action on the previous recommendation for an increase in individual awards.

This one got the newbie faculty all het up on the twitters.


They are, of course, upset about two things.

First, “the person like me”. Which of course is what drives all of our anger about this whole garbage fire of a career situation that has developed. You can call it survivor guilt, self-love, arrogance, whatever. But it is perfectly reasonable that we don’t like the Man doing things that mean people just like us would have washed out. So people who were not super stars in 3 years of postdoc’ing are mad.

Second, there’s a hint of “don’t stop the gravy train just as I passed your damn insurmountable hurdle”. If you are newb faculty and read this and get all angree and start telling me how terrible I am…..you need to sit down and introspect a bit, friend. I can wait.

New faculty are almost universally against my suggestion that we all need to do our part and stop training graduate students. Less universally, but still frequently, against the idea that they should start structuring their career plans for a technician-heavy, trainee-light arrangement. With permanent career employees that do not get changed out for new ones every 3-5 years like leased Priuses either.

Our last little stupid poll confirmed that everyone thinks 3-5 concurrent postdocs is just peachy for even the newest lab and gee whillikers where are they to come from?

Aaaanyway.
This new report will go nowhere, just like all the previous ones that reach essentially the same conclusion and make similar recommendations. Because it is all about the

1) Mouths at the trough.
and
2) Available slops.

We continue to breed more mouths, er, PhDs.

And the American taxpayers, via their duly appointed representatives in Congress, show no interest in radically increasing the budget for slops, er, science.

And even if Congress trebled or quintupled the NIH budget, all evidence suggests we’d just do the same thing all over again. Mint more PhDs like crazee and wonder in another 10-15 years why careers still suck.

It started off with a tweet suggesting the NIH game is rigged (bigly) against a “solo theoretician”…
https://twitter.com/cryptogenomicon/status/789118235182526465

interesting. Then there was a perfectly valid observation about the way “productivity” is assessed without the all-important denominators of either people or grant funding:
https://twitter.com/cryptogenomicon/status/789124227706281984

good point. Then there was the reveal:
https://twitter.com/cryptogenomicon/status/789177870027329536

“It’s her first NIH application”.

HAHHAHHHAAA. AYFK? Are you new here? Yes. Noobs get hammered occasionally. They even get hammered with stock critique types of comments. But for goodness sake we cannot possibly draw conclusions about whether “NIH grant review can handle a solo theoretician” from one bloody review!

This guy doubled down:
https://twitter.com/cryptogenomicon/status/789196033653780481

Right? A disappointing first grant review is going to “drive a talented theoretical physicist out of biology”. You couldn’t make this stuff up if you tried.

and tripled down:
https://twitter.com/cryptogenomicon/status/789208927904862209

See, it’s really, really special, this flower. And a given “line of critique” (aka, StockCritique of subfieldX or situationY) is totes only a problem in this one situation.

News friggin flash. The NIH grant getting game is not for the dilettante or the faint of heart. It takes work and it takes stamina. It takes a thick hide.

If you happen to get lucky with your first proposal, or if you bat higher than average in success rate, hey, bully for you. But this is not the average expected value across the breadth of the NIH.

And going around acting like you (or your buddies or mentees or departmentmates or collaborators) are special, and acting as though it is a particular outrage and evidence of a broken system if you are not immediately awarded a grant on the first try, well……it is kind of dickish.

There is a more important issue here and it is the mentoring of people that you wish to help become successful at winning NIH grant support. Especially when you know that what they do is perhaps a little outside of the mainstream for a given IC or any IC. Or for any study section that you are aware of.

In my opinion it is mentoring malpractice to stomp about agreeing that this shows the system is awful and that it will never fund them. Such a response actually encourages them to drop out because it makes the future seem hopeless. My opinion is that proper mentoring involves giving the noobs a realistic view of the system and a realistic view of how hard it is going to be to secure funding. And my view is that proper mentoring is encouraging them to take the right steps forward to enhance their chances. Read between the summary statement lines. Don’t get distracted by the StockCritiques that so infuriate you. Don’t use this one exemplar to go all nonlinear about the ErrorZ OF FACT and INCompETENtz reviewers and whatnot. Show the newcomer how to search RePORTER to find the closest funded stuff. Talk about study sections and FOAs and Program Officers. Work the dang steps!
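Searching RePORTER doesn’t even require pointing and clicking these days; NIH exposes a JSON search API for it. Below is the general shape of a query. The endpoint and payload fields follow the public v2 RePORTER API documentation as I understand it, so verify them against https://api.reporter.nih.gov before relying on this; the search terms and fiscal years are placeholders.

```python
import requests

URL = "https://api.reporter.nih.gov/v2/projects/search"

payload = {
    "criteria": {
        "advanced_text_search": {
            "operator": "and",
            "search_field": "projecttitle,terms,abstracttext",
            "search_text": "theoretical model neural circuits",  # placeholder topic
        },
        "fiscal_years": [2015, 2016],  # placeholder years
    },
    "include_fields": ["ProjectTitle", "AgencyIcAdmin", "PrincipalInvestigators"],
    "limit": 25,
}

resp = requests.post(URL, json=payload, timeout=30)
resp.raise_for_status()
# Print the administering IC and title for each funded project found,
# i.e., "the closest funded stuff" to hand your noob.
for hit in resp.json().get("results", []):
    ic = hit.get("agency_ic_admin", {}).get("abbreviation", "?")
    print(ic, "-", hit.get("project_title"))
```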

Potnia Theron was a lot nicer about this than I was.

That post also got me wandering back to an older post by boehninglab about being a Working Class Scientist. Which is an excellent read.

An interesting retraction of an Editorial expression of concern hit the Twitts:
https://twitter.com/schneiderleonid/status/712278127519645696

The Editors and publisher have withdrawn an Expression of Concern previously contributed by noted neuroscientist David Amaral, with his agreement.

The original version of this Comment ‘Expression of Concern’ published by D. Amaral has been withdrawn by the Publisher in relation to the paper: ‘Organization of connections of the basal and accessory basal nuclei in the monkey amygdala’ by Eva Bonda, published in Volume 12, pp. 1971-1992 (doi: 10.1046/j.1460-9568.2000.00082.x). The review carried out at the University of California at Davis in December 2001 (brought to the publisher’s attention in February 2016) concluded that the allegation against Eva Bonda described in the commentary ‘Expression of Concern’ by D. Amaral did not meet The Office of Research Integrity’s definition of research misconduct, and was not pursued further.

That November 2000 Expression of Concern read, in part:

It has recently come to my attention that Eva Bonda has published a paper in the European Journal of Neuroscience entitled, ‘Organization of connections of the basal and accessory basal nuclei in the monkey amygdala’ ( Bonda, 2000). The data described in this paper were produced by my students and me at the University of California, Davis. Support for carrying out the experiments that produced these data was provided by the National Institute of Mental Health, through grant MH 41479 for which I am the Principal Investigator.
…The publication of this single-authored paper was totally unauthorized. Eva Bonda was a postdoctoral fellow in our laboratory.

Ok, so PI asserts ownership of data collected in his lab. Fine, fine… Typical story of a postdoc who thinks that she owns and controls her data? And the PI was blocking publication for reasons unknown? We all have been down the various roads of he said/she said often enough to imagine a variety of scenarios where we might alternately side with the trainee or the PI.

Intriguing!

She had access to the preparations that were described in the paper. However, she did not carry out any of the experimental procedures involved in making the tracer injections reported in this paper. These injections were made by other students in the laboratory and by me. Moreover, other than processing the tissue from a small minority of the reported cases, it was the technical staff of our laboratory rather than Eva Bonda that carried out the histological processing of the reported experiments.

Ah. Well that sounds bad. This suggests it is a little more like theft of credit from more people than just the PI. I happen to disagree with the not-infrequent pose of postdocs on the internet that they own and control “their” data that they generated in the laboratory of a given PI. But that is a much more arguable position than taking data generated by many people other than oneself and asserting control/ownership when one is not even the PI.

Amaral finishes by making the charge of academic misconduct against Bonda very explicit:

In my view, the appropriation and publication of these data is a serious breach of scientific ethics. I have asked the Editor of the European Journal of Neuroscience to take appropriate action including publication of this Expression of Concern. Upon consultation with the Office of Research Integrity, Public Health Service, US Department of Health and Human Services, the agency responsible for protecting the integrity of NIH funded research programs, the UC Davis campus has agreed to initiate a review of the allegations of research misconduct. Based on the outcome of this review, further actions, including request for full retraction, may be taken concerning this.

Of course, the recent retraction of the Expression of Concern indicates that Bonda, the postdoc, was exonerated of misconduct charges in 2001!

Wow. Why did it take Amaral 15 years to retract his accusations? This seems spectacularly dickish to me.

And given the fact that the postdoc was not found guilty of misconduct by the University, it really calls into question the factual basis of his assertions in the original Expression of Concern. If I were the postdoc in question, I might have launched a counter-accusation of professional misconduct. Depending, of course, on the details of the inquiry and what each party did and did not do. The exoneration of the postdoc may simply reflect a lack of proof of intent, rather than any disagreement over the facts.

I notice, however, an interesting poll put up by an individual who both RTed the tweets that alerted me to this situation and has apparently co-published with Amaral.

https://twitter.com/mrhunsaker/status/712639543531319296

Gee, I wonder what the nature of the dispute was between Amaral and Bonda?

The subject of this poll is the juxtaposition of “good data” with “high quality standards” of the PI. Given what Amaral does, I’m going out on a limb and assuming we are talking about how pretty the immunohistochemical images are or are not (the Bonda paper is nearly all immuno-staining pictures).

On whitening the CV

March 18, 2016

I heard yet another news story* recently about the beneficial effects of whitening the resume for job seekers.

I wasn’t paying close attention so I don’t know the specific context. 

But suffice it to say, minority job applicants have been found (in studies) to get more call-backs for job interviews when the evidence of their non-whiteness on their resume is minimized, concealed or eradicated. 

Should academic trainees and job seekers do the same?

It goes beyond using only your initials if your first name is stereotypically associated with, for example, being African-American. Or using an Americanized nickname to try to communicate that you are a highly assimilated Asian-American.

The CV usually includes awards, listed by foundation or specific award title. “Ford Foundation” or “travel award for minority scholars” or similar can give a pretty good clue. But you cannot omit those! The awards, particularly the all-important “evidence of being competitively funded”, are a key part of a trainee’s CV. 

I don’t know how common it is, but I do have one colleague (i.e., of professorial rank at this point) for whom a couple of those training awards were the only clear evidence on the CV of being nonwhite. This person stopped listing these items and/or changed how they were listed to minimize detection. So it happens.

Here’s the rub. 

I come at this from the perspective of one who doesn’t think he is biased against minority trainees and wants to know if prospective postdocs, graduate students or undergrads are of Federally recognized underrepresented status.

Why? 

Because it changes the ability of my lab to afford them. NIH has this supplement program to fund underrepresented trainees. There are other sources of support as well. 

This changes whether I can take someone into my lab. So if I’m full up and I get an unsolicited email+CV I’m more likely to look at it if it is from an individual that qualifies for a novel funding source. 

Naturally, the applicant can’t know in any given situation** if they are facing implicit bias against, or my explicit bias for, their underrepresentedness. 

So I can’t say I have any clear advice on whitening up the academic CV. 

__
*probably Kang et al.

**Kang et al caution that institutional pro-diversity statements are not associated with increased call-backs or any minimization of the bias.

Passed along by a very kind soul….I think there is a little something here for all of us.

Scene: Laboratory of Hibernation Studies

PI: “We need to discuss your thesis plans…what have you come up with so far?”

Grad Student: “Bears”

PI: “What? Dude, we have a sweet ground squirrel model all ready to go. What do you want to use it for?”

GS: “I want to start up a bear lab. It’ll be great.”

PI: -Dead Stare-

GS: “Bears! Hibernation! …..get it?”

……

GS: “Meanie”

Blooding the trainees

March 3, 2016

In that most English of pastimes, fox hunting, the noobs are smeared about the face with the blood of the poor unfortunate fox after dismembering by hound has been achieved.

I surmise the goal is to get the noob used to the less palatable aspects of their chosen sporting endeavor. 

Anyway, speaking of manuscript review and eventual publication, do you plan a course for new trainees in the lab?

I’m wondering if you have any explicit goals for them: should a mentor try to get new postdocs or grads a pub, any pub, as quickly and easily as possible?

Or should they be thrown into a multi-journal fight so as to fully experience the joys of desk rejection, ultimate denial after four rounds of review somewhere and the final relief of just dumping that Frankensteinian monster of a paper in a lowly journal and being done. 

Do you plan any of this out for your newest trainees?

Have you ever been in a lab with a golden-child trainee?

Was it you?

The post-triage stage

November 18, 2015

Holdworth Cheesington III Endowed Chair Professor K. Kristofferson has a few thoughts for you, as well.

Advice for faculty

November 17, 2015

From Holdworth Cheesington III Endowed Chair Professor K. Kristofferson: