Tracking sex bias in neuroscience conferences
August 31, 2015
A Tweep directed my attention to biaswatchneuro.com of which the About page says:
The progress of science is best served when conferences include a panel of speakers that is representative of the field. Male-dominated conference programs are generally not representing their field, missing out on important scientific findings, and are one important factor contributing to the “brain-drain” of talented female scientists from the scientific workforce. As a group, BiasWatchNeuro has formed to encourage conference organizers to make every effort to have their program reflect the composition of their field.
Send information about conferences, seminar series or other scientific programs to biaswatchneuro@gmail.com
Check it out.
Grantsmack: The logic of hypothesis testing
August 26, 2015
NIH grant review obsesses over testing hypotheses. Everyone knows this.
If there is a Stock Critique that is a more reliable way to kill a grant’s chances than “There is no discernible hypothesis under investigation in this fishing expedition“, I’d like to know what it is.
The trouble, of course, is that once you’ve been lured into committing to a hypothesis then your grant can be attacked for whether your hypothesis is likely to be valid or not.
A special case of this is when some aspect of the preliminary data that you have included even dares to suggest that perhaps your hypothesis is wrong.
Here’s what bothers me. It is one thing if you have Preliminary Data suggesting some major methodological approach won’t work. That is, that your planned experiment cannot result in anything like interpretable data that bears on the ability to falsify the hypothesis. This I would agree is a serious problem for funding a grant.
But any decent research plan will have experiments that converge to provide different levels and aspects of testing for the hypothesis. It shouldn’t rest on one single experiment or it is a prediction, not a real hypothesis. Some data may tend to support and some other data may tend to falsify the hypothesis. Generally speaking, in science you are not going to get really clean answers every time for every single experiment. If you do…..well, let’s just say those Golden Scientist types have a disproportionate rate of being busted for faking data.
So.
If you have one little bit of Preliminary Data in your NIH Grant application that maybe, perhaps is tending to reject your hypothesis, why is this of any different value than if it had happened to support your hypothesis?
What influence should this have on whether it is a good idea to do the experiments to fully test the hypothesis that has been advanced?
Because that is what grant review should be deciding, correct? Whether it is a good idea to do the experiments. Not whether or not the outcome is likely to be A or B. Because we cannot predict that.
If we could, it wouldn’t be science.
Grantsmack: Overambitious
August 25, 2015
If we are entering a period of enthusiasm for “person, not project” style review of NIH grants, then it is time to retire the criticism of “the research plan is overambitious”.
Updated:
There was a comment on the Twitters to the effect that this Stock Critique of "overambitious" is a lazy dismissal of an application. This deserves some breakdown, because simply dismissing stock criticisms as "lazy" review fails to address the real problem at hand.
First, it is always better to think of Stock Critique statements as shorthand rather than lazy.
Using the term "lazy" seems to imply that the applicant thinks that his or her grant application deserves a full and meticulous point-by-point review, whether the reviewer is inclined to award it a clearly-triagable, clearly-borderline or clearly-fundable score. Not so.
The primary job of the NIH Grant panel reviewer is most emphatically not to help the PI to funding nor to improve the science. The reviewer's job is to assist the Program staff of the I or C to which the application has been assigned in deciding whether or not to fund this particular application. Consequently, if the reviewer is able to succinctly communicate the strengths and weaknesses of the application to the other reviewers, and eventually to Program staff, this is efficiency, not laziness.
The applicant is not owed a meticulous review.
With this understood, we move on to my second point. The use of a Stock Criticism is an efficient communicative tool when the majority of the review panel agrees that the substance underlying this review consideration is valid. That is, that the notion of a grant application being overambitious is relevant and, most typically, a deficiency in the application. This is, to my understanding, a point of substantial agreement on NIH review panels.
Note: This is entirely orthogonal to whether or not “overambitious” is being applied fairly to a given application. So you need to be clear about what you see as the real problem at hand that needs to be addressed.
Is it the notion of over-ambition being any sort of demerit? Or is your complaint about the idea that your specific plan is in fact over-ambitious?
Or are you concerned that it is unfair if the exact same plan is considered “over-ambitious” for you and “amazingly comprehensive vertically ascending and exciting” when someone else’s name is in the PI slot?
Relatedly, are you concerned that this Stock Critique is being applied unjustifiably to certain suspect classes of PI?
Personally, I think "over-ambitious" is a valid critique, given my pronounced affection for the NIH system as project-based, not person-based. In this I am less concerned about whether everything the applicant has poured into this application will actually get done. I trust PIs (and more importantly, I trust the contingencies at work upon a PI) of any stage/age to do interesting science and publish some results. If you like all of it, and would give a favorable score to a subset that does not trigger the Stock Critique, who cares that only a subset will be accomplished*?
The concerning issue is that a reviewer cannot easily tell what is going to get done. And, circling back to the project-based idea, if you cannot determine what will be done as a subset of the overambitious plan, you can’t really determine what the project is about. And in my experience, for any given application, there are going to usually be parts that really enthuse you as a reviewer and parts that leave you cold.
So what does that mean in terms of my review being influenced by these considerations? Well, I suppose the more a plan conveys a sense of priorities and choice points, the less concern I will have. Likewise, the more of the described experiments excite me, the less concern I will have- if only 50% of this is actually going to happen, the odds are still good when I am fired up about 90% of what has been described.
*Now, what about those grants where the whole thing needs to be accomplished or the entire point is lost? Yes, I recognize those exist. Human patient studies where you need to get enough subjects in all the groups to have any shot at any result would be one example. If you just can’t collect and run that many subjects within the scope of time/$$ requested, well…..sorry. But these are only a small subset of the applications that trigger the “overambitious” criticism.
Completely uncontroversial graph preferences
August 24, 2015
I am sure that nobody has any opinions whatsoever on using the placement of significance symbols to…err….emphasize….. the magnitude of the effect.
Tales from the search committee
August 21, 2015
Prof Booty has written about chairing a recent search committee.
Starting a little over a year ago, I served as chair of my department’s search committee, which concluded in the spring with a successful hire. With that experience still relatively fresh, I hope I can share some important insights into how our top candidates caught our eye, as well as the behind-the-scenes process of selecting those candidates.
NIH grant applications are not competing with the reviewers!
August 18, 2015
So misguided. Understandable frustration…but misguided.
Think of it this way- do you dismiss Olympic judging of diving or figure skating because the judges can’t do that themselves? What about the scoring of boxing?
Your competition is not the judge. It is the other participants in the event that stand between you and glory.
In NIH grant review, that means the other applications that have been submitted.
Brief thought on GenX scientists
August 18, 2015
I detailed some of the ways that my generation of scientists had been screwed in a well-received prior post.
Today I thought about another factor. The scientific impact of a scientist is captured by paper citations, which is related to the number of people working within a sphere of investigation. A given scientist's reputation can be burnished by the number of publishing scientists who respect him or her and view him or her as a thought leader.
Scientific progeny are a key factor. The trainees that exit our labs, gain faculty positions and start up vigorous publication trains very frequently boost our own reputations.
When the odds of trainees becoming traditional, independent, academic research scientists are lower for a generation of mentoring scientists, this will cripple the apparent importance and influence of that generation.
How convenient for the Boomers.
Runts of the Litter
August 18, 2015
Sometimes, I page back through my Web of Science list of pubs to the minimal citations range.
I love all of my papers of course, and feel a little sorry for the ones that never garnered much appreciation.
Seriously? Payment for citations?
August 14, 2015
A Reader submitted this gem of a spam email:
We are giving away $100 or more in rewards for citing us in your publication! Earn $100 or more based on the journal’s impact factor (IF). This voucher can be redeemed your next order at [Company] and can be used in conjunction with our ongoing promotions!
How do we determine your reward?
If you published a paper in Science (IF = 30) and cite [Company], you will be entitled to a voucher with a face value of $3,000 upon notification of the publication (PMID).
This is a new one on me.
Repost: An Honor Code's Second Component and Research Science
August 12, 2015
This was originally posted October 4, 2007.
Many academic honor codes boil down to two essential statements, namely "I will not cheat and I will not tolerate those who do". For "cheat" you may read any number of disreputable activities including plagiarism and research fraud. My alma mater had this sort of thing; I know the US military academies have this. Interestingly, a random Google brings up some which include both components (Davidson College, Notre Dame, Florida State Univ (which has been in the academic cheating news lately)), and some which do not (CU Boulder, Baylor); the Wikipedia entry has a bunch of snippet Honor Codes. The first component, i.e. "don't cheat", is easily comprehended and followed. The second component, the "I will not tolerate those who do" part, is the tricky one. Read the rest of this entry »
This entry was originally posted 2/9/2011.
For a highly related topic I recommend you re-read my old post Routes to Independence: Beyond Ye Olde Skool Tenure Track Assistant Professorships (original).
To distill it to a few simple points for the current discussion:
- The University (or Research Institution, company, etc) submits the grant to the NIH and receives the award from the NIH.
- Anyone who the submitting institution deems to be a PI can serve as the PI. Job title or status is immaterial as far as the NIH is concerned.
- Postdocs, Research Scientists, Staff Scientists, etc can be the listed PI on most broad NIH mechanisms (there may be the occasional special case like MD-required or something).
- The submitting institutions, for the most part, permit anyone of tenure-track professorial appointment to prepare NIH grants for them to submit, but it gets highly variable after that (across institutions, across their respective non-professorial and/or non-tenure-track job categories…and across time).
- The question of how study sections view applications submitted by those of other than tenure track professorial rank is a whole ‘nother question, but you would be making a mistake to think there are hard and fast exclusive principles.
The second issue has to do with moving the award to another institution, given that a PI on an NIH award decides to go somewhere else. Although technically the University owns the award, in the vast majority of cases that institution will relinquish the award and permit it to travel with the PI. Likewise, in the vast majority of cases, the NIH will permit the move. In all cases I am aware of, this move will occur at the anniversary of funding. That is because the award is in yearly increments (maximum of 5 unless you win a PECASE or MERIT extension* of the non-competing interval). Each progress report you submit? That's the "application" for the next year of funding. Noncompeting application, of course, because it does not go back to study section for review. At any rate, it makes it less painful for all concerned to do the accounting if the move is at the anniversary.
Soooooo…..
Point being that if you are a postdoc or non tenure track scientist who wants to write and submit a grant, you need to start snooping around your local University about their policies. Sometimes they will only let you put in a R21 or R03 or some other nonrenewable mechanism. Sometimes they’ll let you throw down the R01. Just depends. Most of the time it will require a letter of exception to be generated within the University- Chair or Dean level stuff. Which requires the approval of your current lab head or supervisor, generally. You need to start talking to all these people.
Since these types of deals are frequently case-by-case and the rules are unwritten, don’t assume that everyone (i.e., your PI) knows about them. Snoop around on RePORTER for awards to your institution and see if anyone with non-TT professorial appointment has ever received an award from the NIH. Follow up on that rumour that Research Scientist Lee once had an award.
If you are really eager, be prepared to push the envelope and ask the Chair/Dean type person “Well why not? University of State1 and State University2 and IvyUni3 and Research Institute4 all permit it, why can’t we?”. This may require doing some background surveying of your best buddies spread around the country/world.
Final point:
Obviously I wouldn’t be bringing up these theoretical possibilities if I hadn’t seen it work, and with some frequency. As a reviewer on a study section I saw several applications come through from people who had the title of something below tenure track assistant professor. Instructor, Research Scientist and yes, even Postdoc. I myself submitted at least two R01 applications prior to being able to include the word “Professor” on my Biosketch. I have many peers that were in a similar circumstance at their early stage of grant writing/submitting and, yes, winning.
No, you will not be treated just like an Assistant Professor by the study sections. You will be beat up for Independence issues and with doubts about whether this is just the BigCheeze trying to evade perceptions of overfunding. You will have “helpful” reviewers busting on your appointment as evidence of a lack of institutional commitment that the reviewer really thinks will get the Dean or Chair to cough up a better title**.
In all of this however there is a chance. A chance that you will receive an award. This would have very good implications for your transition. (Assuming, of course, that you manage to get the grant written and submitted without too big of a hit to your scientific productivity, never forget that part.) And even if you do not manage to obtain a fundable score, I argue that you get valuable experience. In preparing and submitting a half-decent proposal. In getting some degree of study section feedback. In taking a shot across the bow of the study section that you have ideas and you plan to have them review them in the coming few years. In getting the PO familiar with your name. In wrangling local bureaucracy.
All of this without your own tenure clock running.
__
*there may be other extensions I am unaware of.
**One of the first questions I asked an experienced reviewer about after joining a study section. Sigh.
Repost: Peer Review- Advocates and Detractors Redux
August 10, 2015
This post originally went up on the blog 20 Aug 2014.
A comment on a recent post from Grumble is a bit of key advice for those seeking funding from the NIH.
It’s probably impossible to eliminate all Stock Critique bait from an application. But you need to come close, because if you don’t, even a reviewer who likes everything else about your application is going to say to herself, “there’s no way I can defend this in front of the committee because the other reviewers are going to bring up all these annoying flaws.” So she won’t even bother trying. She’ll hold her fire and go all out to promote/defend the one application that hits on most cylinders and proposes something she’s really excited about.
This is something that I present as an “advocates and detractors” heuristic to improving your grant writing, surely, but it applies to paper writing/revising and general career management as well. I first posted comments on Peer Review: Friends and Enemies in 2007 and reposted in 2009.
The heuristic is this. In situations of scientific evaluation, whether this be manuscript peer-review, grant application review, job application or the tenure decision, one is going to have a set of advocates in favor of one’s case and detractors who are against. The usual caveats apply to such a strict polarization. Sometimes you will have no advocates, in which case you are sunk anyway so that case isn’t worth discussing. The same reviewer can simultaneously express pro and con views but as we’ll discuss this is just a special case.
The next bit in my original phrasing is what Grumble is getting at in the referenced comment.
Give your advocates what they need to go to bat for you.
This is the biggie. In all things you have to give the advocate something to work with. It does not have to be overwhelming evidence, just something. Let’s face it, how many times are you really in position in science to overwhelm objections with the stupendous power of your argument and data to the point where the most confirmed critic cries “Uncle”. Right. Never happens.
The point here is that you need not put together a perfect grant, nor need you “wait” until you have X, Y or Z bit of Preliminary Data lined up. You just have to come up with something that your advocates can work with. As Grumble was pointing out, if you give your advocate a grant filled with StockCritique bait then this advocate realizes it is a sunk cause and abandons it. Why fight with both hands and legs trussed up like a Thanksgiving turkey?
Let’s take some stock critiques as examples.
“Productivity”. The goal here is not to somehow rush 8 first author papers into press. Not at all. Just give them one or two more papers, that’s enough. Sometimes reiterating the difficulty of the model or the longitudinal nature of the study might be enough.
“Independence of untried PI with NonTenureTrackSoundin’ title”. Yes, you are still in the BigPIs lab, nothing to be done about that. But emphasize your role in supervising whole projects, running aspects of the program, etc. It doesn’t have to be meticulously documented, just state it and show some sort of evidence. Like your string of first and second authorships on the papers from that part of the program.
“Not hypothesis driven”. Sure, well sometimes we propose methodological experiments, sometimes the outcome is truly a matter of empirical description and sometimes the results will be useful no matter how it comes out so why bother with some bogus bet on a hypothesis? Because if you state one, this stock critique is de-fanged, it is much easier to argue the merits of a given hypothesis than it is the merits of the lack of a hypothesis.
Instead of railing against the dark of StockCriticism, light a tiny candle. I know. As a struggling newb it is really hard to trust the more-senior colleagues who insist that their experiences on various study sections have shown that reviewers often do go to bat for untried investigators. But….they do. Trust me.
There’s a closely related reason to brush up your application to avoid as many obvious pitfalls as possible. Because it takes ammunition away from your detractors, which makes the advocates job easier.
Deny your detractors grist for their mill.
Should be simple, but isn't. Particularly when the critique is basically a reviewer trying to tell you to conduct the science the way s/he would if s/he were the PI. (An all too common and inappropriate approach, in my view.) If someone wants you to cut something minor out, for no apparent reason (like, say, the marginal cost of doing that particular experiment is low), just do it. Add that extra control condition. Respond to all of their critiques with something, even if it is not exactly what the reviewer is suggesting; again, your ultimate audience is the advocate, not the detractor. Don't ignore anything major. This way, they can't say you "didn't respond to critique". They may not like the quality of the response you provide, but arguing about this is tougher in the face of your advocating reviewer.
This may actually be closest to the core of what Grumble was commenting on.
I made some other comments about the fact that a detractor can be converted to an advocate in the original post. The broader point is that an entire study section can be gradually converted. No joke that with enough applications from you, you can often turn the tide. Either because you have argued enough of them (different reviewers might be assigned over time to your many applications) into seeing science your way or because they just think you should be funded for something already. It happens. There is a “getting to know you” factor that comes into play. Guess what? The more credible apps you send to a study section, the more they get to know you.
Ok, there is a final bit for those of you who aren’t even faculty yet. Yes, you. Things you do as a graduate student or as a postdoc will come in handy, or hurt you, when it comes time to apply for grants as faculty. This is why I say everyone needs to start thinking about the grant process early. This is why I say you need to start talking with NIH Program staff as a grad student or postdoc.
Plan ahead
Although the examples I use are from the grant review process, the application to paper review and job hunts is obvious with a little thought. This brings me to the use of this heuristic in advance to shape your choices.
Postdocs, for example, often feel they don't have to think about grant writing because they aren't allowed to at present, may never get that job and if they do they can deal with it later. This is an error. The advocate/detractor heuristic suggests that postdocs make choices to expend some effort in a broad range of areas. It suggests that it is a bad idea to gamble on the BIG PAPER approach if this means that you are not going to publish anything else. An advocate on a job search committee can work much more easily with a dearth of Science papers than s/he can with a dearth of any pubs whatsoever!
The heuristic suggests that going to the effort of teaching just one or two courses can pay off- you never know if you’ll be seeking a primarily-teaching job after all. Nor when “some evidence of teaching ability” will be the difference between you and the next applicant for a job. Take on that series of time-depleting undergraduate interns in the lab so that you can later describe your supervisory roles in the laboratory.
This latter bit falls under the general category of managing your CV and what it will look like for future purposes.
Despite what we would like to be the case, despite what should be the case, despite what is still the case in some cozy corners of a biomedical science career….let us face some facts.
- The essential currency for determining your worth and status as a scientist is your list of published, peer reviewed contributions to the scientific literature.
- The argument over your qualities between advocates and detractors in your job search, promotions, grant review, etc. is going to boil down to pseudo-quantification of your CV at some point.
- Quantification means analyzing your first author / senior author /contributing author pub numbers. Determining the impact factor of the journals in which you publish. Examining the consistency of your output and looking for (bad) trends. Viewing the citation numbers for your papers.
- You can argue to some extent for extenuating circumstances, the difficulty of the model, the bad PI, etc but it comes down to this: Nobody Cares.
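Since the bullet points above amount to an algorithm that evaluators run in their heads, here is a minimal, purely illustrative Python sketch of that pseudo-quantification. All of the publication data below is hypothetical, and real reviewers obviously weigh far more than these crude counts; the point is just that your CV reduces to numbers like these.

```python
# Illustrative sketch of CV pseudo-quantification. All data is made up.

def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Each entry: (author position, total authors, citation count) -- hypothetical.
pubs = [
    (1, 4, 25),  # first author
    (2, 5, 10),  # middle author
    (1, 3, 40),  # first author
    (5, 5, 8),   # last (senior) author
]

first = sum(1 for pos, n, _ in pubs if pos == 1)
senior = sum(1 for pos, n, _ in pubs if pos == n)
print(first, senior, h_index([c for _, _, c in pubs]))  # → 2 1 4
```

The consistency-of-output and journal-impact checks mentioned above would just be more columns in the same tally, which is exactly why "Nobody Cares" about extenuating circumstances: the numbers are what travel.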
My suggestion is, if you expect to have a career you had better have a good idea of what the standards are. So do the research. Do compare your CV with those of other scientists. What are the minimum criteria for getting a job / grant / promotion / tenure in your area? What are you going to do about it? What can you do about it?
This echoes a pair of points Odyssey made on the Twitts today; both are true for your subfield stage as well as your University stage of performance.
Repost: Don’t tense up
August 7, 2015
I’ve been in need of this reminder myself in the past year or so. This originally went up on the blog 25 September, 2011.
If you’ve been going through a run of disappointing grant reviews punctuated by nasty Third Reviewer comments, you tend to tense up.
Your next proposals are stiff…and jam packed with what is supposed to be ammunition to ward off the criticisms you’ve been receiving lately. Excessive citation of the lit to defend your hypotheses…and buffer concentrations. Review paper level exposition of your logical chain. Kitchen sink of preliminary data. Exhaustive detail of your alternate approaches.
The trouble is, then your grant is wall to wall text and nearly unreadable.
Also, all that nitpicky stuff? Sometimes it is just post hoc justification by reviewers who don’t like the whole thing for reasons only tangentially related to the nits they are picking.
So your defensive crouch isn’t actually helping. If you hook the reviewer hard with your big picture stuff they will often put up with a lot of seeming StockCritique bait.
This originally appeared 16 Apr, 2013
One duffymeg at Dynamic Ecology blog has written a post in which it is wondered:
How do you decide which manuscripts to work on first? Has that changed over time? How much data do you have sitting around waiting to be published? Do you think that amount is likely to decrease at any point? How big a problem do you think the file drawer effect is?
This was set within the background of having conducted too many studies and not finding enough time to write them all up. I certainly concur that by the time one has been rolling as a laboratory for many years, the unpublished data does have a tendency to stack up, despite our best intentions. This is not ideal but it is reality. I get it. My prior comments about not letting data go unpublished were addressing that situation where someone (usually a trainee) wanted to write up and submit the work but someone else (usually the PI) was blocking it.
To the extent that I can analyze my de facto priority, I guess the first priority is my interest of the moment. If I have a few thoughts or new references to integrate with a project that is in my head…sure I might open up the file and work on it for a few hours. (Sometimes I have been pleasantly surprised to find a manuscript is a lot closer to submitting than I had remembered.) This is far from ideal and can hardly be described as a priority. It is my reality though. And I cling to it because dangit…shouldn’t this be the primary motivation?
Second, I prioritize things by the grant cycle. This is a constant. If there is a chance of submitting a manuscript now, and it will have some influence on the grant game, this is a motivator for me. It may be because I am trying to get it accepted before the next grant deadline. Maybe before the 30 day lead time before grant review when updating news of an accepted manuscript is permitted. Perhaps because I am anticipating the Progress Report section for a competing continuation. Perhaps I just need to lay down published evidence that we can do Technique Y.
Third, I prioritize the trainees. For various reasons I take a firm interest in making sure that trainees in the laboratory get on publications as an author. Middle author is fine but I want to chart a clear course to the minimum of this. The next step is prioritizing first author papers…this is most important for the postdocs, of course, and not strictly necessary for the rotation students. It’s a continuum. In times past I may have had more affection for the notion of trainees coming in and working on their “own project” from more or less scratch until they got to the point of a substantial first-author effort. That’s fine and all but I’ve come to the conclusion I need to do better than this. Luckily, this dovetails with the point raised by duffymeg, i.e., that we tend to have data stacking up that we haven’t written up yet. If I have something like this, I’ll encourage trainees to pick it up and massage it into a paper.
Finally, I will cop to being motivated by short term rewards. The closer a manuscript gets to the submittable stage, the more I am engaged. As I’ve mentioned before, this tendency is a potential explanation for a particular trainee complaint. A comment from Arne illustrates the point.
on one side I more and more hear fellow Postdocs complaining of having difficulties writing papers (and tellingly the number of writing skill courses etc offered to Postdocs is steadily increasing at any University I look at) and on the other hand, I hear PIs complaining about the slowliness or incapabability of their students or Postdocs in writing papers. But then, often PIs don’t let their students and Postdocs write papers because they think they should be in the lab making data (data that might not get published as your post and the comments show) and because they are so slow in writing.
It drives me mad when trainees are supposed to be working on a manuscript and nothing occurs for weeks and weeks. Sure, I do this too. (And perhaps my trainees are bitching about how I’m never furthering manuscripts I said I’d take a look at.) But from my perspective grad students and postdocs are on a much shorter time clock and they are the ones who most need to move their CV along. Each manuscript (especially first author) should loom large for them. So yes, perceptions of lack of progress on writing (whether due to incompetence*, laziness or whatever) are a complaint of PIs. And as I’ve said before it interacts with his or her motivation to work on your draft. I don’t mind if it looks like a lot of work needs to be done but I HATE it when nothing seems to change following our interactions and my editorial advice. I expect the trainees to progress in their writing. I expect them to learn both from my advice and from the evidence of their own experiences with peer review. I expect the manuscript to gradually edge towards greater completion.
One of the insights that I gained from my own first few papers is that I was really hesitant to give the lab head anything short of what I considered to be a very complete manuscript. I did so and I think it went over well on that front. But it definitely slowed my process down. Now that I have no concerns about my ability to string together a coherent manuscript in the end, I am a firm advocate of throwing half-baked Introduction and Discussion sections around in the group. I beg my trainees to do this and to work incrementally forward from notes, drafts, half-baked sentences and paragraphs. I have only limited success getting them to do it, I suspect because of the same problem that I had. I didn’t want to look stupid and this kept me from bouncing drafts off my PI as a trainee.
Now that I think the goal is just to get the damn data in press, I am less concerned about the blah-de-blah in the Intro and Discussion sections.
But as I often remind myself, when it is their first few papers, the trainees want their words in press. The way they wrote them.
__
*this stuff is not Shakespeare, I reject this out of hand
Repost: I’ll let you know when I stop ruining my career…
August 5, 2015
This originally went up 28 Sept, 2009.
Female Science Professor related a tale of a scientist directing inter-laboratory rivalry in a remarkably petty direction:
Now consider a different situation – one in which a faculty member in Research Group 1 tells a recent PhD graduate of Research Group 2 that the student made a huge mistake in choice of adviser and had probably ruined his/her career by working with this person.
FSP has a nice dissection of laboratory conflict going but I was struck by a simple thought.
I must’ve ruined my career a half a dozen times…so far.
I can date my self-defeating, science-career-ruining behavior back at least to the selection of an undergraduate institution which didn’t have a research focus. It was a small school where the professors were expected primarily to teach, shouldered a heavy teaching load at that and only rarely engaged in primary research. I then picked an unfortunate major, considering the eventual direction of my career.
I wasn’t done.
I picked, by many measures, a disastrous laboratory in which to conduct graduate studies…and went on to some additional mistakes and choices of the career-ruining nature later.
To all appearances I still have a career.
Don’t get me wrong- I don’t recommend that anyone do things the way I have. I believe that I’ve survived in my career more by dumb luck than by making the right moves. There is little doubt that many things would have gone (and be going) better for me had I avoided some career-ruining mistakes. Lessons learned the hard way seems to be my stock in trade.
Nevertheless there is a positive lesson which is that it is frequently possible to overcome such obvious career-ending errors such as training in the wrong lab. Frequently. And I expect that the comments will contain some allusions to supposed career-ruining moves made by the commentariat.
When someone says that a person has ruined their career, particularly to a newly-minted PhD, they are full of petty vindictive crap.