There is a cautionary tale in the allegations against three Dartmouth Professors who are under investigation (one has retired after a Dean recommended that he be fired) for sexual harassment, assault and/or discrimination. From The Dartmouth:

several students in the PBS department described what they called an uncomfortable workplace culture that blurred the line between professional and personal relationships.

Oh, hai, buzzkill! I mean it’s just normal socializing. If you don’t like it nobody is forcing you to do it man. Why do you object to the rest of us party hounds having a little fun?

They said they often felt pressured to drink at social events in order to further their professional careers, a dynamic that they allege promoted favoritism and at times inappropriate behavior.

The answer is that this potential for nastiness is always lurking in these situations. There are biases within the laboratory that can have very lasting consequences for the trainees. Who gets put on what projects. Who gets preferential resources. Who is selected to attend a fancy meeting with a low trainee/PI ratio? Who is introduced around as the amazing talented postdoc and who is ignored? This happens all the time to some extent, but why should willingness (and ability; many folks have family responsibilities after normal working hours) to socialize with the lab affect any of this?

Oh, come on, buzzkill! It’s just an occasional celebration of a paper getting accepted.

Several students who spoke to The Dartmouth said that Kelley encouraged his lab members to drink and socialize at least weekly, often on weeknights and at times during business hours, noting that Whalen occasionally joined Kelley for events off-campus.

Or, you know, constantly. Seriously? At the very least the PI has a drinking problem* and is covering it up with invented “lab” reasons to consume alcohol. But all too often it turns sinister and you can see the true slimy purpose revealed.

At certain social events, the second student said she sometimes refused drinks, only to find another drink in her hand, purchased or provided by one of the professors under the premise of being “a good host.”

Yeah, and now we get into the area of attempted drug-assisted sexual assault. Now sure, it could just be the PI thinking the grad student or postdoc can’t afford the drinks and wants to be a good chap. It could be. But then…..

She described an incident at a social event with members of the department, at which she said everyone was drinking, and one of the professors put his arm around her. She said his arm slid lower, to the point that she was uncomfortable and “very aware of where his hand [was] on [her] body,” and she said she felt like she was being tested.

Ugh. The full reveal of the behavior.

Look, as always, there is a spectrum here. The occasional lab celebration that involves the consumption of alcohol, and the society meeting social event that involves consumption of alcohol, can be just fine. Can be. But these traditions in the academic workplace are often co-opted by the creeper to his own ends. So you can end up with that hard-partying PI who is apparently just treating his lab like “friends” or “family” and believes that “everyone needs to blow off steam” to “build teamwork” and this lets everyone pull together….but then the allegations of harassment start to surface. All of the “buddies” who haven’t been affected (or, more sinisterly, have been affected for the good) circle the wagons.
Bro 1: Oh, he’s such a good guy.
Bro 2: Why are you being a buzzkill?
Bro 3: Don’t you think they are misinterpreting?

He isn’t, because people are being harmed and no, the victims are not “misinterpreting” the wandering arm/hand.

Keep a tight rein on the lab-based socializing, PIs. It leads to bad places if you do not.

__
*And that needs to be considered even when there is not the vaguest shred of sexual assault or harassment in evidence.

There has been a case of sexual harassment, assault and/or workplace misconduct at Dartmouth College that has been in the news this past year.

In allegations that span multiple generations of graduate students, four students in Dartmouth’s department of psychological and brain sciences told The Dartmouth this week that three professors now under investigation by the College and state prosecutors created a hostile academic environment that they allege included excessive drinking, favoritism and behaviors that they considered to be sexual harassment.

It was always a little bit unusual because three Professors from the same department (Psychological and Brain Sciences) were seemingly under simultaneous investigation and the NH State AG launched an investigation at the same time. It is not at all clear to me yet, but it seems to be a situation in which the triggering behaviors are not necessarily linked.

The news of the day (via Valley News) is that one of the professors under investigation has retired, “effective immediately”.

Professor Todd Heatherton has retired, effective immediately, following a recommendation by the dean of the faculty of arts and sciences, Elizabeth Smith, that his tenure be revoked and that he be terminated, Hanlon said in the email.

“In light of the findings of the investigation and the dean’s recommendation, Heatherton will continue to be prohibited from entering campus property or from attending any Dartmouth-sponsored events, no matter where they are held,” Hanlon wrote.

This comes hard on the heels of Inder Verma retiring from the Salk Institute just before their institutional inquiry was set to conclude.

I understand the role of plea bargains in normal legal proceedings. I am not sure I understand the logic of the approach when it comes to busting sexual harasser/discriminator individuals in academia. I mean sure, it may avoid a protracted legal fight between the alleged perpetrator and the University or Institute as the former fights to retain a shred of dignity, membership in the NAS or perhaps retirement benefits. But for the University or Institute, in this day and age of highly public attention, it just looks like they are, yet again, letting a perp off the hook*. So any fine statements they may have made about taking sexual discrimination seriously and having zero tolerance ring hollow. I am mindful that what we’ve seen in the past is that the Universities and Institutes are fully willing to deploy their administrative and legal apparatus to defend an accused perpetrator, often for years and across repeated incidents, when they think it in their interest to do so. So saving money can’t really be the reason. It really does seem to be further institutional protection: they cannot be accused of having admitted to defending and harboring the perp over the past years or decades of his harassing behavior.

It is all very sad for the victims. The victims are left with very little. There is no formal finding of guilt to support their allegations. There is no obvious punishment in a guy who should probably have long since retired (Verma is 70) simply retiring. There is not even any indirect apology from the University or Institution. I wish we could do better.

__
*At least in the Verma case, the news reporting made it very clear that the Salk Board of Trustees formally accepted Verma’s tender of resignation which apparently then halted any further consideration of the case. They could have chosen not to accept it, one presumes.

Self-plagiarism

June 8, 2018

A journal has recently retracted an article for self-plagiarism:

Just going by the titles, this may appear to be the case where review or theory material is published over and over in multiple venues.

I may have complained on the blog once or twice about people in my fields of interest that publish review after thinly updated review year after year.

I’ve seen one or two people use this strategy, in addition to a high rate of primary research articles, to blanket the world with their theoretical orientations.

I’ve seen a small cottage industry do the “more reviews than data articles” strategy for decades in an attempt to budge the needle on a therapeutic modality that shows promise but lacks full financial support from, eg NIH.

I still don’t believe “self-plagiarism” is a thing. To me plagiarism is stealing someone else’s ideas or work and passing them off as one’s own. When art critics see themes from prior work being perfected or included or echoed in the masterpiece, do they scream “plagiarism”? No. But if someone else does it, that is viewed as copying. And lesser. I see academic theoretical and even interpretive work in this vein*.

To my mind the publishing industry has a financial interest in this conflation because they are interested in novel contributions that will presumably garner attention and citations. Work that is duplicative may be seen as lesser because it divides up citation to the core ideas across multiple reviews. Given how the scientific publishing industry leeches off content providers, my sympathies are…..limited.

The complaint from within the house of science, I suspect, derives from a position of publishing fairness? That some dude shouldn’t benefit from constantly recycling the same arguments over and over? I’m sort of sympathetic to this.

But I think it is a mistake to give in to the slippery slope of letting the publishing industry establish this concept of “self-plagiarism”. The risks for normal science pubs that repeat methods are too high. The risks for “replication crisis” solutions are too high: after all, a substantial replication study would require duplicative Introductory and interpretive comment, would it not?

__

*although “copying” is perhaps unfair and inaccurate when it comes to the incremental building of scientific knowledge as a collaborative endeavor.

MeToo STEM

June 4, 2018

There is a new blog at MeTooSTEM.wordpress.com that seeks to give voice to people in STEM disciplines and fields of work that have experienced sexual harassment.

Such as Jen:

The men in the lab would read the Victoria’s Secret catalog at lunch in the break room. I could only wear baggy sweatshirts and turtlenecks to lab because when I leaned over my bench, the men would try to look down my shirt. Then came the targeted verbal harassment of the most crude nature

or Sam:

I’ve been the victim of retaliation by my university and a member of the faculty who was ‘that guy’ – the ‘harmless’ one who ‘loved women’. The one who sexually harassed trainees and colleagues.

or Anne:

a scientist at a company I wanted to work for expressed interest in my research at a conference. … When I got to the restaurant, he was 100% drunk and not interested in talking about anything substantive but instead asked personal questions, making me so uncomfortable I couldn’t network with his colleagues. I left after only a few minutes, humiliated and angry that he misled about his intentions and that I missed the chance to network with people actually interested in my work

Go Read.

Nature relates a case of a convicted science cheat attempting to rehabilitate himself.

last August, the University of Tokyo announced that five of Watanabe’s papers contained manipulated images and improperly merged data sets that amounted to scientific misconduct. One of those papers has since been retracted and two have been corrected. Two others have corrections under consideration, according to Watanabe. Another university investigation into nine other papers found no evidence of misconduct.

ok, pretty standard stuff. Dude busted for manipulating images. Five papers involved so it isn’t just a one time oopsie.

Watanabe says that the university’s investigation made him aware of “issues concerning contrast in pictures and checking original imaging files”. He says, however, that he did not intend to deceive and that the issues did not affect the main conclusions of the papers.

They always claim that. Oh, it doesn’t change the results so it isn’t fraud. Oh? Well if you needed that to get the paper accepted (and by definition you did) then it was fraud. Whether it changes the overall conclusions or whether (as is claimed in other cases) the data can be legitimately re-created is immaterial to the fraud.

Julia Cooper, a molecular biologist at the US National Cancer Institute in Bethesda, Maryland, says that data manipulation is never acceptable. But she thinks the sanctions were too harsh and incommensurate with the degree of wrongdoing. “Yoshinori absolutely deserves a second chance,” she says.

This is, of course, the central question for today’s discussion. Should we let science cheats re-enter science? Can they be “rehabilitated”? Should they be?

Uhlmann is unsure whether it will make a difference. He commends Watanabe’s willingness to engage with his retraining, but says “we will only know at the end of it whether his heart is where his mouth is”.

Watanabe emphasizes that his willingness to embark on the training and acknowledgement that he made errors is evidence that he will change his ways.

Fascinating, right? Watanabe says the investigation brought it to his attention that he was doing something wrong and he claims it as an “error” rather than saying “yeah, man, I faked data and I got caught”. Which one of these attitudes do you think predicts a successful rehabilitation?

and, where should such a person receive their rehabilitation?

[Watanabe is] embarking on an intensive retraining programme with Nobel prizewinner Paul Nurse in London.

Nurse, who mentored Watanabe when he was a postdoctoral researcher in the 1990s, thinks that the biologist deserves the opportunity to redeem himself. “The research community and institutions need to think more about how to handle rehabilitation in cases like this,” says Nurse, a cell biologist and director of the Francis Crick Institute in London. Nurse declined to comment further on the retraining.

So. He’s going to be “rehabilitated” by the guy who trained him as a postdoc and this supervisor refuses to comment on how this rehabilitation is to be conducted or, critically, evaluated for success.

Interesting.

__
H/t a certain notorious troll

On a recent post, DNAMan asks:

If you were reviewing an NIH proposal from a PI who was a known (or widely rumored) sexual harasser, would you take that into account? How?

My immediate answer was:

I don’t know about “widely rumored”. But if I was convinced someone was a sexual harasser this would render me unable to fairly judge the application. So I would recuse myself and tell the SRO why I was doing so. As one is expected to do for any conflicts that one recognizes about the proposal.

I’m promoting this to a new post because this also came up in the Twitter discussion of Lander’s toast of Jim Watson. Apparently this is not obvious to everyone.

One is supposed to refuse to review grant proposals, and manuscripts submitted for publication, if one feels that one has a conflict of interest that renders the review biased. This is very clear. Formal guidelines tend to concentrate on personal financial benefits (i.e. standing to gain from a company in which one has ownership or other financial interest), institutional benefits (i.e., you cannot review NIH grants submitted from your University since the University is, after all, the applicant and you are an agent of that University) and mentoring / collaborating interests (typically expressed as co-publication or mentoring formally in past three years). Nevertheless there is a clear expectation, spelled out in some cases, that you should refuse to take a review assignment if you feel that you cannot be unbiased.

This is beyond any formal guidelines. A general ethical principle.

There is a LOT of grey area.

As I frequently relate, in my early years when a famous Editor asked me to review a manuscript from one of my tighter science homies and I pointed out this relationship, I was told “If I had to use that standard as the Editor I would never get anything reviewed. Just do it. I know you are friends.”

I may also have mentioned that when first on study section I queried an SRO about doing reviews for PIs who were scientifically sort of close to my work. I was told a similar thing about how reviews would never get done if vaguely working in the same areas and maybe one day competing on some topic were the standard for COI recusal.

So we are, for the most part, left up to our own devices and ethics about when we identify a bias in ourselves and refuse to do peer review because of this conflict.

I have occasionally refused to review an NIH grant because the PI was simply too good of a friend. I can’t recall being asked to review a grant proposal from anyone I dislike personally or professionally enough to trigger my personal threshold.

I am convinced, however, that I would recuse myself from the review of proposals or manuscripts from any person that I know to be a sexual harasser, a retaliator and/or a bigot against women, underrepresented groups generally, LGBTQ, and the like.

There is a flavor of apologist for Jim Watson (et rascalia) that wants to pursue a “slippery slope” argument. Just Asking the Questions. You know the type. One or two of these popped up on twitter over the weekend but I’m too lazy to go back and find the thread.

The JAQ-off response is along the lines of “What about people who have politics you don’t like? Would you recuse yourself from a Trump voter?”.

The answer is no.

Now sure, the topic of implicit or unconscious bias came up and it is problematic for sure. We cannot recuse ourselves when we do not recognize our bias. But I would argue that this does not in any way suggest that we shouldn’t recuse ourselves when we DO recognize our biases. There is a severity factor here. I may have implicit bias against someone in my field that I know to be a Republican. Or I may not. And when there is a clear and explicit bias, we should recuse.

I do not believe that people who have proven themselves to be sexual harassers or bigots on the scale of Jim Watson deserve NIH grant funding. I do not believe their science is going to be so much superior to all of the other applicants that it needs to be funded. And so if the NIH disagrees with me, by letting them participate in peer review, clearly I cannot do an unbiased job of what NIH is asking me to do.

The manuscript review issue is a bit different. It is not zero-sum and I never review that way, even for the supposedly most-selective journals that ask me to review. There is no particular reason to spread scoring, so to speak, as it would be done for grant application review. But I think it boils down to essentially the same thing. The Editor has decided that the paper should go out for review and it is likely that I will be more critical than otherwise.

So….can anyone see any huge problems here? Peer review of grants and manuscripts is opt-in. Nobody is really obliged to participate at all. And we are expected to manage the most obvious of biases by recusal.

If the lab head tells the trainees or techs that a specific experimental outcome* must be generated by them, this is scientific misconduct.

If the lab head says a specific experimental outcome is necessary to publish the paper, this may be very close to misconduct or it may be completely aboveboard, depending on context. The best context to set is a constant mantra that any outcome teaches us more about reality and that is the real goal.


*no we are not talking about assay validation and similar technical development stuff.

Commenter jmz4 made a fascinating comment on a prior post:


It is not the journals responsibility to mete out retractions as a form of punishment(&). Only someone that buys into papers as career accolades would accept that. The journal is there to disseminate accurate scientific information. If the journal has evidence that, despite the complaint, this information is accurate,(%) then it *absolutely* should take that into account when deciding to keep a paper out there.

(&) Otherwise we would retract papers from leches and embezzlers. We don’t.

That prior post was focused on data fraud, but this set of comments suggests something a little broader.

I.e., that facts are facts and it doesn’t matter how we have obtained them.

This, of course, brings up the little nagging matter of the treatment of research subjects. As you are mostly aware, Dear Readers, the conduct of biomedical experimentation that involves human or nonhuman animal subjects requires an approval process. Boards of people external to the immediate interests of the laboratory in question must review research protocols in advance and approve the use of human (Institutional Review Board; IRB) or nonhuman animal (Institutional Animal Care and Use Committee; IACUC) subjects.

The vast majority (ok, all) of the journals of my acquaintance require authors to assert that they have indeed conducted their research under approvals provided by IRB or IACUC as appropriate.

So what happens when and if it is determined that experiments have been conducted outside of IRB or IACUC approval?

The position expressed by jmz4 is that it shouldn’t matter. The facts are as they are, the data have been collected so too bad, nothing to be done here. We may tut-tut quietly but the papers should not be retracted.

I say this is outrageous and nonsense. Of course we should apply punitive sanctions, including retracting the paper in question, if anyone is caught trying to publish research that was not collected under proper ethical approvals and procedures.

In making this decision, the evidence for whether the conclusions are likely to be correct or incorrect plays no role. The journal should retract the paper to remove the rewards and motivations for operating outside of the rules. Absolutely. Publishers are an integral part of the integrity of science.

The idea that journals are just there to report the facts as they become known is dangerous and wrong.

__
Additional Reading: The whole board of Sweden’s top-ranked university was just sacked because of the Macchiarini scandal

Via the usual relentless trolling of YHN from Comrade PhysioProffe, a note on a fraud investigation from the editors of Cell.

We, the editors of Cell, published an Editorial Expression of Concern (http://dx.doi.org/10.1016/j.cell.2016.03.038) earlier this year regarding issues raised about Figures 2F, 2H, and 3G of the above article.

two labs have now completed their experiments, and their data largely confirm the central conclusions drawn from the original figures. Although this does not resolve the conflicting claims, based on the information available to us at this time, we will take no further action. We would like to thank the independent labs who invested significant time and effort in ensuring the accuracy of the scientific record.

Bad Cell. BAD!

We see this all the time, although usually it is the original authors aided and abetted by the journal Editors, rather than the journal itself, making this claim. No matter if it is a claim to replace an “erroneous placeholder figure”, or a full-on retraction by the “good” authors for fraud perpetrated by some [nonWestern] postdoc who cannot be located anymore, we see an attempt to maintain the priority claim. “Several labs have replicated and extended our work”, is how it goes if the paper is an old one. “We’ve replicated the bad [nonWestern, can’t be located] postdoc’s work” if the paper is newer.

I say “aided and abetted” because the Editors have to approve the language of the authors’ erratum, corrigendum or retraction notice. They permit this. Why? Well obviously because just as the authors need to protect their reputation, so does the journal.

So everyone plays this game that somehow proving the original claims were correct, reliable or true means that the original offense is lesser. And that the remaining “good” authors and the journal should get credited for publishing it.

I say this is wrong. If the data were faked, the finding was not supported. Or not supported to the degree that it would have been accepted for publication in that particular journal. And therefore there should be no credit for the work.

We all know that there is a priority and Impact Factor chase in certain types of science. Anything published in Cell quite obviously qualifies for the most cutthroat aspects of this particular game. Authors and editors alike are complicit.

If something is perceived to be hott stuff, both parties are motivated to get the finding published. First. Before those other guys. So…corners are occasionally cut. Authors and Editors both do this.

Rewarding the high risk behavior that leads to such retractions and frauds is not a good thing. While I think punishing proven fraudsters is important, it does not by any means go far enough.

We need to remove the positive reward environment. Look at it this way. If you intentionally fake data, or more likely subsets of the data, to get past that final review hurdle into a Cell acceptance, you are probably not very likely to get caught. If you are detected, it will often take years for this to come to light, particularly when it comes to a proven-beyond-doubt standard. In the meantime, you have enjoyed all the career benefits of that Glamour paper. Job offers for the postdocs. Grant awards for the PIs. Promotions. High $$ recruitment or retention packages. And generated even more Glam studies. So in the somewhat unlikely case of being busted for the original fake, many of the beneficiaries, save the poor sucker nonWestern postdoc (who cannot be located), are able to defend and evade based on stature.

This gentleman’s agreement to view faked results that happen to replicate as no-harm, no-foul is part of this process. It encourages faking and fraud. It should be stopped.

One more interesting part of this case. It was actually raised by the self-confessed cheater!

Yao-Yun Liang of the above article informed us, the Cell editors, that he manipulated the experiments to achieve predetermined results in Figures 2F, 2H, and 3G. The corresponding author of the paper, Xin-Hua Feng, has refuted the validity of Liang’s claims, citing concerns about Liang’s motives and credibility. In a continuing process, we have consulted with the authors, the corresponding author’s institution, and the Committee on Publication Ethics (COPE), and we have evaluated the available original data. The Committee on Scientific Integrity at the corresponding author’s institution, Baylor College of Medicine, conducted a preliminary inquiry that was inconclusive and recommended no further action. As the institution’s inquiry was inconclusive and it has been difficult to adjudicate the conflicting claims, we have provided the corresponding author an opportunity to arrange repetition of the experiments in question by independent labs.

Kind of reminiscent of the recent case where the trainee and lab head had counter claims against each other for a bit of fraudulent data, eh? I wonder if Liang was making a similar assertion to that of Dr. Cohn in the Mt. Sinai case, i.e., that the lab head created a culture of fraud or directly requested the fake? In the latter case, it looked like they probably only came down on the PI because of a smoking-gun email and the perceived credibility of the witnesses. Remember that ORI refused to take up the case so there probably was very little hard evidence on which to proceed. I’d bet that an inability to get beyond “he-said/he-said” is probably at the root of Baylor’s “inconclusive” preliminary inquiry result for this Liang/Feng dispute.

From the NYT account of the shooting of Dennis Charney:

A former faculty member at the Mount Sinai School of Medicine… , Hengjun Chao, 49, of Tuckahoe, N.Y., was charged with attempted second-degree murder after he allegedly fired a shotgun and hit two men

why? Presumably revenge for :

In October 2002, Mr. Chao joined Mount Sinai as a research assistant professor. He stayed at Mount Sinai until May 2009, when he received a letter of termination from Dr. Charney for “research misconduct,” according to a lawsuit that Mr. Chao filed against the hospital and Dr. Charney, among other parties, in 2010. He went through an appeals process, and was officially terminated in March 2010.

As you might expect, the retraction watch blog has some more fascinating information on this case. One notable bit is the fact that ORI declined to pursue charges against Dr. Chao.

The Office of Research Integrity (ORI) decided not to pursue findings of research misconduct, according to material filed in the case and mentioned in a judge’s opinion on whether Chao could claim defamation by Mount Sinai. Part of Chao’s defamation claim was based on a letter from former ORI investigator Alan Price calling Mount Sinai’s investigation report “inadequate, seriously flawed and grossly unfair in dealing with Dr. Chao.”

Interesting! The institution goes to the effort of firing the guy and manages to fight off a counter suit and ORI still doesn’t have enough to go on? Retraction watch posted the report on the Mount Sinai misconduct investigation [PDF]. It makes the case a little more clear.

To briefly summarize: Dr. Chao first alleged that a postdoc, Dr. Cohn, fabricated research data. An investigation failed to support the charge and Dr. Chao withdrew his complaint. Perhaps (?) as part of that review, Dr. Cohn submitted an allegation that Dr. Chao had directed her to falsify data; this was supported by an email and third-party testimony from a colleague. Mount Sinai mounted an investigation and interviewed a bunch of people with Dr. titles, some of whom are co-authors with Dr. Chao according to PubMed.

The case is said to hinge on the credibility of the interviewees. “There was no ‘smoking gun’ direct evidence… the allegations… represent the classic ‘he-said, she-said’ dispute.” The report notes that only the above mentioned email trail supports any of the allegations with hard evidence.

Ok, so that might be why ORI declined to pursue the case against Dr. Chao.

The panel found him to be “defensive, remarkably ignorant about the details of his protocol and the specifics of his raw data, and cavalier with his selective memory… he made several overbroad and speculative allegations of misconduct against Dr. Cohn without any substantiation.”

One witness testified that Dr. Chao had said “[Dr. Cohn] is a young scientist [and] doesn’t know how the experiments should come out, and I in my heart know how it should be.”

This is kind of a classic sign of a PI who creates a lab culture that encourages data faking and fraud, if you ask me. Skip down to the end for more on this.

There are a number of other allegations of a specific nature. Dropping later timepoints of a study because they were counter to the hypothesis. Publishing data that dropped some of the mice for no apparent reason. Defending low-n (2!) data by saying he was never trained in statistics, but his postdoc mentor contradicted this claim. And finally, the committee decided that Dr. Chao’s original complaint filed against Dr. Cohn was a retaliatory action stemming from an ongoing dispute over science, authorship, etc.

The final conclusion in the recommendations section deserves special attention:

“[Dr. Chao] promoted a laboratory culture of misconduct and authoritarianism by rewarding results consistent with his theories and berating his staff if the results were inconsistent with his expectations.”

This, my friends, is the final frontier. Every time I see an underling in a lab busted for serial faking, I wonder about this. Sure, any lab can be penetrated by a data-faking sleaze. And it is very hard to both run a trusting collaborative scientific environment and still be 100 percent sure of preventing the committed scofflaws. But…but….. I am here to tell you: a lot of data fraud flows from PIs of just exactly this description.

If the PI does it right, their hands are entirely clean. Heck, in some cases they may have no idea whatsoever that they are encouraging their lab to fake data.

But the PI is still the one at fault.

I’d hope that every misconduct investigation against anyone below the PI level looks very hard into the culture that is encouraged and/or perpetrated by the PI of the lab in question.

From RetractionWatch:

After the University of Texas postponed a hearing to determine whether it should revoke a chemist’s PhD, her lawyer has filed a motion to stop the proceedings, and requested the school pay her $95,099 in lawyer fees and expenses.

We have discussed individuals convicted of scientific fraud in the course of doctoral studies before and wondered if a University could or would attempt to retract the doctoral award. Well, looks like this is one of those cases.

The Austin Statesman reports:

In Orr’s case, UT administrators moved to revoke her degree after finding that “scientific misconduct occurred in the production of your dissertation,” according to a letter to Orr from Judith Langlois, senior vice provost and dean of graduate studies.

The dissertation committee concluded that work related to "falsified and misreported data cannot be included in a dissertation and that the remaining work described in the dissertation is insufficient to support the award" of a Ph.D., Langlois wrote. Orr was invited to submit a new thesis summarizing other work to earn a master’s degree.

This is interesting because the justification is not that she is being punished for being a faker; otherwise, why would they invite her to submit a master’s thesis? The justification is that ignoring the allegedly falsified work leaves her short of a minimum qualification for the doctorate. Given the flexibility involved in doctoral committee requirements and the sheer scope of data usually involved in a thesis, my eyebrows are rising at this. Back to the RetractionWatch piece:

The motion for final summary judgment includes an affidavit from Phillip Magnus, a chemistry professor at UT, who argues that…

her dissertation consisted of two branches of work towards alkaloid natural products and a methodology project to generate novel structures. She characterized about 100 organic compounds in her dissertation. Even without completed syntheses of natural products, her research towards the natural products was significant, and provided her the training to become a skillful and passionate scientist. Being correct or incorrect is part of scientific research. Being correct, or synthesizing a particular molecule are not requirements for passing a course at the University, or obtaining a Ph D degree. Furthermore, the possibility of being wrong is not a justifiable reason to rescind a former student’s degree.

Yeah, this certainly points to a usual sticking point between the RetractionWatch types and me.

It is ESSENTIAL to differentiate between merely being wrong or mistaken (or even sloppy) and intentional fraud.

The Austin Statesman piece goes on to detail how the supervising PI and a subsequent postdoc wanted to build on Dr. Orr’s work, and she told them to re-do certain experiments. They didn’t; they published a paper (with her as an author), and it was subsequently retracted because a chemical step was not reproducible. Was her warning due to knowing she’d faked some results? Or was it due to her gut feeling that it just wasn’t as nailed down as some other results and she’d like to see it replicated before publishing? Did her own subsequent work cast doubt on her prior (valid but perhaps mistaken) work? Etc.

Priority

February 19, 2016

I am working up a serious antipathy to the notion of scientific priority, spurred most recently by the #ASAPbio conference and the associated fervent promotion of pre-print deposit of scientific manuscripts.

In science, the concept of priority refers to the fact that we think of the first person to [think up, discover, demonstrate, support, prove, find, establish] something as somehow special and deserving of credit.

For example, the first paleontologist to show that this odd collection of fossils over here belonged to a species of Megatyrannoteethdeath* not previously known to us gets a lot of street cred for a new discovery.

Watson and Crick, similarly, are famed for working out the double helical structure of DNA** because they provided the scientific community with convincing amounts of rationale and evidence first.

Etc.

Typically the most special thing about the scientists being respected is that they got there first. Someone else could have stumbled across the right bits of fossil. Many someones were hotly trying to determine how DNA was structured and how it worked.

This is the case for much of modern bioscience. There are typically many someones that have at least thought about a given issue, problem or puzzle. Many who have spent more than just a tiny bit of thought on it. Sometimes multiple scientists (or scientific groups, typically) are independently working on a given idea, concept, biological system, puzzle or whathaveyou.

As in much of life, to the victor go the spoils. Meaning the Nobel Prize in some cases. Meaning critical grant funding in other cases: funding that not only pays the salary of the scientists with priority but that goes to support their pursuit of other "first" discoveries. Remember in the Jurassic Park movies how the sober paleontology work was so desperately in need of research funds? That. In addition, the priority of a finding might dictate which junior scientists get Professorial-rank jobs, the all-important credit for publication in a desired rank of scientific journal and, ultimately, the incremental accumulation of citations to that paper. Finally, if there ends up being a commercial-value angle, the ones who have this priority may profit from that fact.

It’s all very American, right? Get there first, do something someone else has not done and you should profit from that accomplishment. yeeehaw***.

Problem is……****

The pursuit of priority holds back the progress of science in many ways. It keeps people from working on a topic because they figure that some other lab is way ahead of them and will beat them to the punch (science can always use a different take; no two labs come up with the exact same constellation of evidence). It unfairly keeps people from being rewarded for their work (in a multi-year, multi-person, expensive pursuit of the same thing, does it make sense that a two-week difference in when a manuscript is submitted is all-critical to the credit?). It keeps people from collaborating or sharing their ideas lest someone else swoop in and score the credit by publishing first. It can fuel the inability to replicate findings (what if the group with priority was wrong and nobody else bothered to put the effort in because they couldn’t get enough credit?).

These are the things I am pondering as we rush forward with the idea that pre-publication manuscripts should be publicized in a pre-print archive. One of the universally promoted reasons for this need is, in fact, scientific priority. Which has a very, very large downside to it.
__
*I made that Genus up but if anyone wants to use it, feel free

**no, not for being dicks. that came later.

***NSFW

****NSFW

Story boarding

June 23, 2015

When you “storyboard” the way a figure or figures for a scientific manuscript should look, or need to look, to make your point, you are on a very slippery slope.

It sets up a situation where you need the data to come out a particular way to fit the story you want to tell.

This leads to all kinds of bad shenanigans, from outright fakery to re-running experiments until you get them to look the way you want.

Story boarding is for telling fictional stories. 

Science is for telling non-fiction stories. 

These are created after the fact. After the data are collected. With no need for storyboarding the narrative in advance.

Placeholder figures

June 23, 2015

Honest scientists do not use “placeholder” images when creating manuscript figures. Period. 

See this nonsense from Cell

An article by Dan Vergano at Buzzfeed alerts us:

Electric shocks, brain surgery, amputations — these are just some of the medical experiments widely performed on American slaves in the mid-1800s, according to a new survey of medical journals published before the Civil War.

Previous work by historians had uncovered a handful of rogue physicians conducting medical experiments on slaves. But the new report, published in the latest issue of the journal Endeavour, suggests that a widespread network of medical colleges and doctors across the American South carried out and published slave experiments, for decades.

Savitt first reported in the 1970s that medical schools in Virginia had trafficked in slaves prior to the Civil War. But historians had seen medical experiments on slaves as a practice isolated to a few physicians — until now.

to the following paper.

Kenny, S.C. Power, opportunism, racism: Human experiments under American slavery. Endeavour, Volume 39, Issue 1, March 2015, Pages 10–20. [Publisher Link]

Kenny writes:

Medical science played a key role in manufacturing and deepening societal myths of racial difference from the earliest years of North American colonisation. Reflecting the practice of anatomists and natural historians throughout the Atlantic world, North American physicians framed and inscribed the bodies, minds and behaviours of black subjects with scientific and medical notions of fundamental and inherent racial difference. These medical ideas racialised skin, bones, blood, diseases, with some theories specifically designed to justify and defend the institution of racial slavery, but they also manifested materially as differential treatment – seen in medical education, practice and research.

I dunno. Have we changed all that much?