Per ORI, one Michael W Miller, most recently Chair of Neuroscience and Physiology at SUNY Upstate Medical University, is a data faker.

ORI finds that the Respondent engaged in research misconduct by falsifying and/or fabricating data that were included in grant applications R01 AA07568-18, R01 AA07568-18A1, R01 AA006916-25, and P50 AA017823-01 and in the following:

The “following” included some paper retractions detailed over at the RetractionWatch blog.

One of the interesting things is that this guy published in decidedly normal journals. There was something in the ORI finding about a PNAS submission, but that seemed to be the high IF watermark for Miller. I make special note of this since I am one of those fond of pointing out the positive correlation between journal IF and retractions.

You will be unsurprised that my attention is drawn in this case to the grant support. That -18 renewal application mentioned above? It got converted into an R37 MERIT (10 years of non-competing renewals instead of the usual 5) in the A1 version. Which means it was scored very highly, from what I deduce about the R37 process. The P50 is, of course, a Center.

Big monetary commitment for NIAAA and very prestigious for Professor Miller.

Boo, hiss, fraudster bad guy….

Except think about those folks who didn’t get something because of this guy. The Chair position was an external hire. The P50 took the place of another one, and it isn’t just the PI/PD. Each competing Center that didn’t get funded probably also had a handful of Component PIs who put in a lot of hard work and had a lot of great science. The R37? Well, it probably counts at least double because of the 10-year interval. And of course some other worthy mid-career NIAAA scientist didn’t receive this honor for her work.

I’m irritated on behalf of anyone who applied to NIAAA for grant support and didn’t get the award during the interval in which NIAAA was supporting this Miller fraudster.

Final note. He was supposedly ratted out by someone in his lab. Since the offense seemed to be making up bar graphs rather than the all-too-typical duplicated gel/blot/image, there really would have been no other way to nail him. So the reviewers really can’t be blamed for missing anything.

The BMJ policy on the criteria for deserving an authorship on a scientific paper raised its ugly head today on the Twitts.

For additional edification and background, one @mattjhodgkinson provided a link to his editorial and blog post on the topic. You may also wish to review the International Committee of Medical Journal Editors standards.

My take on these standards is twofold.

First, the exclusion of those who “merely” collect data strikes me as stupid. I’m not going to go into chapter and verse but in my lab, anyway, there is a LOT of ongoing troubleshooting and refining of the methods in any study. It is very rare that I would have a paper’s worth of data generated by my techs or trainees and that they would have zero intellectual contribution. Given this, the asymmetry in the BMJ position is unfair. In essence it permits a lab head to be an author using data which s/he did not collect and maybe could not collect, but excludes the technician who didn’t happen to contribute to the drafting of the manuscript. That doesn’t make sense to me. The paper wouldn’t have happened without both of the contributions.

Second, and the real topic for today, is the notion of “courtesy authorships”. It is a not infrequent punching bag. What I want to know is, where’s the evidence of a problem? What is the nature of the problem? What is going to be solved by this that justifies the denial of credit to the deserving (see above)?

@mattjhodgkinson offered:

Authors – inc. gift authors – take responsibility for papers. The gift may be poisoned.

Yes…but this is the case for any author on a multi-contributor paper. So I’m not seeing where this specifically affects so-called courtesy authors.

How about you Reader? Can you describe for me why gift authors are a systematic problem?

How frequent are genuine, totally noncontributing “gifts”? Are you sure you are not just whinging about the degree of contribution? Have you never had an offhand comment made in a discussion absolutely crystallize your thinking?

Assuming that we are not talking about pushing someone else meaningfully* out of deserved credit, where lies the harm even if it is a total gift?

Who is hurt? How are they damaged?

*by pushing them off the paper entirely or out of first-author or last-author position. Adding a 7th in the middle of the authorship list doesn’t affect jack squat, folks.

crossposting from Scienceblogs.

I’ve been having a little Twitt discussion with Retraction Watch honcho @ivanoransky over a recent post in which they discuss whether a failure to replicate a result justifies a retraction.

Now, Ivan Oransky seemed to take great umbrage at my suggestion in a comment that intentionally conflating a failure to replicate with intentional fraud was a dereliction of their duty to science. Per usual, we boiled it down to a fundamental disagreement over connotation: what it means to the average person to see that a paper has been retracted.

I rely upon my usual solution, DearReader. Select all choices that apply when you see a retraction or that you think should induce a retraction.


Direct link to the poll in case you can’t see it.

My position can be found after the jump….

Namnezia is sorely provoking YHN.

I do the best imitation of myself

So why is it plagiarism? Because you are copying text of something that already has been published. And since most journals own the copyright to your manuscripts, re-using your own text verbatim is likely a copyright violation. It’s a bit silly, but apparently that’s the way it is and according to the article in The Scientist, papers have been retracted by journals because of this. My approach that I tell people in my lab is that it’s OK to take the old methods and change them around a bit, but that the introduction should be written from scratch. They can read an old introduction and then replicate it by memory, and this is usually enough to make the two texts sufficiently different, but they should never cut and paste text from their old papers.

No, no, NO! This is NOT plagiarism. There is no intent to deceive and academic papers are not supposed to contain lyrical text of overwhelming genius and originality. NOT. The text is there to service understanding the data, how it was collected and to advance the scientific interpretations. Which might be repeated over and over again.

“…which all supports Darwin’s conception of the Origin of Species”. This is science. We all create unique studies to address issues of common interest. We use common techniques. For common reasons. There are only so many ways to say “dopamine overflow in the nucleus accumbens” for chrissakes! Or to say “the rat presses the lever to get an intravenous infusion of drug”.

On Expanding Diversity

For the most part, in these programs the definition of enhancing diversity in sciences and other academic fields means increasing participation of minorities (and women) within the sciences. However, I’ve started to think that this is a somewhat narrow view of what diversity should mean. In my view, the current rationale for providing programs to help minority students is that these students traditionally don’t have access to the same type of educational resources as non-minorities do. This is due in part to the fact that many come from socioeconomically disadvantaged areas which simply do not have the same resources and academic support networks. Yet, there are several other people who also do not have access to these resources because they also come from socioeconomically disadvantaged areas, but just do not happen to be from an underrepresented racial or ethnic minority group.

Straw-man. I’m sorry but if your University has not clued into the need to include first-generation college students and other indicators of impoverished background into the diversity effort, it has been living under a policy rock for at least 5-8 years, maybe more.

Take the wording of the NIH program traditionally shorthanded as “Minority Supplements”.

Individuals from disadvantaged backgrounds which are defined as:

1. Individuals who come from a family with an annual income below established low-income thresholds. These thresholds are based on family size; published by the U.S. Bureau of the Census; adjusted annually for changes in the Consumer Price Index; and adjusted by the Secretary for use in all health professions programs. The Secretary periodically publishes these income levels at . For individuals from low income backgrounds, the institution must be able to demonstrate that such candidates have qualified for Federal disadvantaged assistance or they have received any of the following student loans: Health Professions Student Loans (HPSL), Loans for Disadvantaged Student Program, or they have received scholarships from the U.S. Department of Health and Human Services under the Scholarship for Individuals with Exceptional Financial Need.

2. Come from a social, cultural, or educational environment such as that found in certain rural or inner-city environments that have demonstrably and recently directly inhibited the individual from obtaining the knowledge, skills, and abilities necessary to develop and participate in a research career. Eligibility related to a disadvantaged background is most applicable to high school and perhaps to undergraduate candidates, but would be more difficult to justify for individuals beyond that level of academic achievement.

That’s the 2005 Notice which replaced the older 2001 Notice, which did not contain this sort of language and was apparently exclusive to ethnic minorities. So ever since 2005 even the NIH has been on board with this. I am aware of University-level revisions of diversity language that date to at least five years before that. The times have changed, so people who keep banging on about how this is needed are a bit out of step. If your local University hasn’t adapted, it needs to get with the program pronto.

Given my understanding of this change in the reality of diversity efforts in modern academia, it rings quite jarringly in my ears to bang on with the “what about the poor white folks” line. That, my friend, is falling right into anti-diversity talking points. Are you sure you want to align yourself with those folks?
Grrr, Namnezia-Goat Gruff. Grr.

A Finding of Misconduct Notice in the NIH Guide today (NOT-OD-10-130) holds more than the usual interest, Dear Reader.

Elizabeth Goodwin, PhD, University of Wisconsin-Madison: Based on the report of an investigation conducted by the University of Wisconsin-Madison (UW-M) and additional analysis conducted by ORI in its oversight review, the U.S. Public Health Service (PHS) found that Elizabeth Goodwin, PhD, former associate professor of genetics and medical genetics, UW-M, engaged in scientific misconduct while her research was supported by National Institute of General Medical Sciences (NIGMS), National Institutes of Health (NIH), grants R01 GM051836 and R01 GM073183. PHS found that the Respondent engaged in misconduct in science by falsifying and fabricating data that she included in grant applications 2 R01 GM051836-13 and 1 R01 GM073183-01.

The recent Marc Hauser misconduct case has been widely reported to have depended on or been triggered by whistleblowers from within his own lab. Comments in several places have praised the laboratory members for their bravery and willingness to take the inevitable career hit (possibly irrecoverable hit) in the service of correcting the scientific record.

Remember the profile in Science a number of years ago which described a group of trainees who blew the whistle on Elizabeth B. Goodwin?

Chantal Ly, 32, had already waded through 7 years of a Ph.D. program at the University of Wisconsin (UW), Madison. Turning in her mentor, Ly was certain, meant that “something bad was going to happen to the lab.” Another of the six students felt that their adviser, geneticist Elizabeth Goodwin, deserved a second chance and wasn’t certain the university would provide it. A third was unable for weeks to believe Goodwin had done anything wrong and was so distressed by the possibility that she refused to examine available evidence.

Two days before winter break, as the moral compass of all six swung in the same direction, they shared their concerns with a university administrator. In late May, a UW investigation reported data falsification in Goodwin’s past grant applications and raised questions about some of her papers. The case has since been referred to the federal Office of Research Integrity (ORI) in Washington, D.C. Goodwin, maintaining her innocence, resigned from the university at the end of February.

2006. My, how time has flown. There was a brief note from someone in the students’ department indicating that other laboratories had helped them to bring at least one paper to press.

Most noteworthy are the young scientists who worked so hard on the paper at early stages of their careers–because they are victims of this unfortunate situation and are doubly victimized if the conclusion the scientific community reaches is that this paper has no merit. Although the scientific results are the most important component of the vindication of the work, I feel strongly that we owe it to our young scientists to draw attention to the verification.

Hmm, that appears to be the last paper published by O. Lakiza.

An update, which I missed, in Science from June of this year gave us a preview of the Notice.

Four years after a group of graduate students faced the agonizing experience of turning in their mentor for apparently falsifying scientific data, she has pleaded guilty to a criminal charge in the case. Elizabeth Goodwin, who was a biologist at the University of Wisconsin (UW), Madison, until resigning in February 2006, admitted “that she included manipulated data” in a grant progress report “to convince reviewers that more scientific progress had been made with her research than was actually the case,”

It also says this about the fate of the trainees.

But the outcome for several students, who were told they had to essentially start over, was unenviable. One, Chantal Ly, had gone through 7 years of graduate school and was told that much of her work was not useable and that she had to start a new project for her Ph.D. (The reason wasn’t necessarily because of falsified data but rather, Ly and the others thought, because Goodwin stuck by results that were questionable.) Along with two of the others, she quit graduate school. Allen moved to a school in Colorado. Just two students chose to stay at UW.

One hopes the outcome is slightly better for the Hauser trainees…

Another interesting thing that popped up in the Hauser affair was the mention of involvement from the US Attorney’s office. Maybe I’m so focused on the misconduct that I usually ignore any mention of legal penalties. But the Science bit on Goodwin emphasizes that the Department of Justice press release (pdf) indicates that a $50,000 fine payable to HHS has already been issued. Furthermore:

Goodwin will be sentenced on 3 September on the charge of making false statements and faces up to 1 year in jail and a $100,000 fine.

Again, I can’t recall seeing these before but I may just have missed it. I’m wondering if this represents a new stance in these prosecutions, or perhaps if the PIs in question were just so egregious in their misconduct and obstinate in their defense that the BigGuns were brought to bear.

An ethical scenario was forwarded to the blog today with a request for the wisdom of the crowd. I can but oblige. The query has been lightly modified for anonymity purposes.

A member of my department informed me that a collaborator, another faculty member in our department, gave, without this person’s knowledge or permission, quantities of unique compounds synthesized by that person’s lab to an investigator outside our university. That external researcher has recently published work based upon those compounds, and included in that paper an acknowledgement of the collaborator as the source of those materials.

The rest of the note indicates that the person who had synthesized the compounds was not informed by the departmental collaborator or the external investigator. This person only learned about it through a roundabout way that boiled down to “hey, have you seen this paper about this stuff you are working with?”

This is entirely simple, as depicted. Nowadays it is nearly impossible that an investigator would not know that anything sent to a collaborator external to the University requires a Materials Transfer Agreement. Everything. Compounds, reagents, mice, tissues. Everything.

Now true, many times people sort of overlook this for the small stuff. Or overlook it until something actually works out and it looks like a publication is ahead. But c’mon. How can you not know?

Furthermore, everyone knows that you don’t get to screw a collaborator, and doing so in your own department is incredibly stupid. It will come out and you will look like a jerk. Particularly when you have failed to file the right MTA paperwork. And, depending on your University policies, you may be in a world of local hurt for letting intellectual property that belongs (formally speaking) to the University into a competing institution’s hands without protecting the intellectual property rights.

(Look, I don’t make the rules and I actually think they are bad for science. But the roots of this go back a long, long ways. Universities have a structural stance toward intellectual property that is highly corporate.)

My view is that the fault here is almost exclusively with the in-house collaborating investigator because s/he could have told the external collaborator that it was all coolio and conveniently neglected to mention that a third lab had actually made the compound in question. Maybe a *slight* possibility that the external collaborator had proceeded to publication without appropriate notification of the in-house collaborator who provided the (third lab’s) compound.

What do you think, DearReader? Straightforward? Or am I missing something?

A comment over at the Sb blog raises an important point in the Marc Hauser misconduct debacle. Allison wrote:

Um, it would totally suck to be the poor grad student /post doc IN HIS LAB!!!!

He’s RUINED them. None of what they thought was true was true; every time they had an experiment that didn’t work, they probably junked it, or got terribly discouraged.

This is relevant to the accusation published in the Chronicle of Higher Education:

the experiment was a bust.

But Mr. (sic) Hauser’s coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success.

The second research assistant was bothered by the discrepancy. How could two researchers watching the same videotapes arrive at such different conclusions? He suggested to Mr. Hauser that a third researcher should code the results. In an e-mail message to Mr. Hauser, a copy of which was provided to The Chronicle, the research assistant who analyzed the numbers explained his concern. “I don’t feel comfortable analyzing results/publishing data with that kind of skew until we can verify that with a third coder,” he wrote.

A graduate student agreed with the research assistant and joined him in pressing Mr. Hauser to allow the results to be checked, the document given to The Chronicle indicates. But Mr. Hauser resisted, repeatedly arguing against having a third researcher code the videotapes and writing that they should simply go with the data as he had already coded it.

So far as we’ve been able to tell from various reports, the misconduct charges are related to making up positive results. This is common…it sounds quite similar to a whole lot of other scientific misconduct cases. Making up “findings” which are in fact not supported by a positive experimental result.

The point about graduate students and postdocs that Allison raised, however, pushes me in another direction. What about when a PI sits on, or disparages, perfectly good data because it does not agree with his or her pet hypothesis? “Are you sure?”, the PI asks, “Maybe you better repeat it a few more times…and change the buffer concentrations to this while you are at it, I remember from my days at the bench 20 yrs ago that this works better”. For video coding of a behavioral observation study, well, there are all kinds of objections to be raised. Starting with the design, moving on to the data collection phase (there is next to nothing that is totally consistent and repeatable in a working animal research vivarium across many days or months) and ending with the data analysis.
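
As an aside, the coder disagreement at the heart of these accusations is exactly what inter-rater reliability statistics are meant to flag before anything gets published. Here is a minimal sketch of Cohen’s kappa; the labels and data are invented for illustration and have nothing to do with the actual Hauser tapes:

```python
# Illustrative only: quantifying agreement between two coders of the
# same videotapes with Cohen's kappa. Labels are hypothetical.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length sequences of categorical labels."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of trials on which the coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's label frequencies
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Two coders who largely disagree about whether the monkey responded:
a = ["turned", "turned", "turned", "no_response", "turned", "turned"]
b = ["no_response", "no_response", "turned", "no_response", "no_response", "no_response"]
print(round(cohens_kappa(a, b), 2))  # → 0.08
```

A kappa near 1 means the coders agree well beyond chance; a value near zero, as in this invented example, is precisely the situation in which insisting on a third coder is the reasonable move.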

Pretty easy to question the results of the new trainee and claim that “Well, Dr. Joe Blow, our last post-doc didn’t have any trouble with the model, perhaps you did something wrong”.

Is this misconduct? The scientist usually has plenty of work that could have ended up published, but circumstances have decided otherwise. Maybe it just isn’t that exciting. Maybe the project got partially scooped and the lab abandoned a half a paper’s worth of work. Perhaps the results are just odd, the scientist doesn’t know what to make of it and cannot sustain the effort to run eight more control studies that are needed to make sense of it.

None of this is misconduct in my view. This is the life of a scientist who has limited time and resources and is looking to publish something that is exciting and cool instead of something that appears to be pedestrian or derivative.

I think it would be pretty hard to make a misconduct case over quashing experiments. Much easier to make the case over positive fraud than it is to make the case over negative fraud.

As you know, this is something that regulatory authorities are trying to address with human clinical trials by requiring the formal recording of each one. Might it bring nonclinical research to a crashing halt if every study had to be reported/recorded in some formal way for public access? Even if this amounted to making available lab books and raw, unanalyzed data I can see where this would have a horrible effect on the conduct of research. And really, the very rarity of misconduct does not justify such procedures. But I do wonder if University committees tasked with investigating fraud even bother to consider the negative side of the equation.

I wonder if anyone would ever be convicted of fraud for not publishing a study.

The ScienceInsider blog published a letter from Harvard’s Dean of Faculty of Arts and Sciences which states that Marc Hauser was indeed found guilty of scientific misconduct under their investigation process.

it is with great sadness that I confirm that Professor Marc Hauser was found solely responsible, after a thorough investigation by a faculty investigating committee, for eight instances of scientific misconduct

None of this pushing it off on the hapless trainee anymore. He was to blame.

The Chronicle of Higher Ed published an accusation supposedly from a former lab member.

The research assistant who analyzed the data and the graduate student decided to review the tapes themselves, without Mr. Hauser’s permission, the document says. They each coded the results independently. Their findings concurred with the conclusion that the experiment had failed: The monkeys didn’t appear to react to the change in patterns.

They then reviewed Mr. Hauser’s coding and, according to the research assistant’s statement, discovered that what he had written down bore little relation to what they had actually observed on the videotapes. He would, for instance, mark that a monkey had turned its head when the monkey didn’t so much as flinch. It wasn’t simply a case of differing interpretations, they believed: His data were just completely wrong.

Certainly the dude was charismatic. And had a good media reputation. And Greg Laden thinks he’s a great guy.

But he was also willing to fake data. The odds are good that he is a case of pushing a little too hard to demonstrate what he just “knew” a priori to be true. I saw a comment on him somewhere or other that referred to him as a master experimentalist. He was just sooooo skilled at putting together the experimental conditions in the right way to demonstrate…something. The assumption has to be that this is in contrast to others in his field who had more, shall we say, difficulty. Well, perhaps the reason his experiments were seemingly so brilliant, effortless and beyond the reach of mere mortal primatologists was because Hauser was making up data? Fudging it the whole way?

ScienceInsider has published a letter from Harvard Dean of the Faculty of Arts and Sciences, Michael Smith, addressed to his faculty.

it is with great sadness that I confirm that Professor Marc Hauser was found solely responsible, after a thorough investigation by a faculty investigating committee, for eight instances of scientific misconduct under FAS [Faculty of Arts and Sciences] standards.

The dean notes that their internal inquiry is over but that there are ongoing investigations from the NIH and NSF. So my curiosity turns to Hauser’s NIH support: I took a little stroll over to RePORTER.

From 1997 to 2009 there are nine projects listed under the P51RR000168 award which is the backbone funding for the New England Primate Research Center, one of the few places in which the highly endangered cotton-top tamarin is maintained for research purposes. The majority of the projects are titled “CONCEPTUAL KNOWLEDGE AND PERCEPTION IN TAMARINS”. RePORTER is new and the prior system, CRISP, did not link the amounts but you can tell from the most recent two years that these are small projects amounting to $50-60K.

Hauser appears to have only had a single R01 “Mechanisms of Vocal Communication” (2003-2006).

Of course we do not know how many applications he may have submitted that were not selected for funding and, of course, ORI considers applications that have been submitted when judging misconduct and fraud, not just the funded ones. One of the papers that has been retracted was published in 2002 so the timing is certainly such that there could have been bad data included in the application.

The P51 awards offer a slight twist. I’m not totally familiar with the system but it would not surprise me if this backbone award to the Center, reviewed every 5 years, only specified a process by which smaller research grants would be selected by a non-NIH peer review process. Perhaps it is splitting hairs but it is possible that Hauser’s subprojects were not reviewed by the NIH. There may be some loopholes here.

Wandering over to NSF’s Fastlane search I located 10 projects on which Hauser was PI or Co-PI. This is where his big funding has been coming from, apparently. So yup, I bet NSF will have some work to do in evaluating his applications to them as well.

This announcement from the Harvard Dean is just the beginning.

Nice one in PNAS today:

Retraction for “HOS10 encodes an R2R3-type MYB transcription factor essential for cold acclimation in plants” by Jianhua Zhu, Paul E. Verslues, Xianwu Zheng, Byeong-ha Lee, Xiangqiang Zhan, Yuzuki Manabe, Irina Sokolchik, Yanmei Zhu, Chun-Hai Dong, Jian-Kang Zhu, Paul M. Hasegawa, and Ray A. Bressan, which appeared in issue 28, July 12, 2005, of Proc Natl Acad Sci USA (102:9966–9971; first published online July 1, 2005; 10.1073/pnas.0503960102).

The authors wish to note the following: “The locus AT1g35515 that was claimed to be responsible for the cold sensitive phenotype of the HOS10 mutant was misidentified. The likely cause of the error was an inaccurate tail PCR product coupled with the ability of HOS10 mutants to spontaneously revert to wild type, appearing as complemented phenotypes. The SALK alleles of AT1g35515 in ecotype Columbia could not be confirmed by the more reliable necrosis assay. Therefore, the locus responsible for the HOS10 phenotypes reported in ecotype C24 remains unknown. The other data reported were confirmed with the exception of altered expression of AT1g35515, which appears reduced but not to the extent shown in Zhu et al. The authors regrettably retract the article.” [Emphasis added]

Sounds like these fuckers were–at best–too happy to see the complementation support their hypothesis, and thus didn’t do appropriate fucken controls, which would have revealed that the rate of complementation was exactly the same as the rate of spontaneous reversion. Or worse, there was some cherry picking of data going on. Also, it is pretty suspicious that–in addition to the bogus complementation data–there was also, coincidentally, altered expression of the same locus that was not confirmable after publication. Again, sounds like some cherry picking may have been going on.

Worst case scenario, all this shit was totally cherry picked data within the normal range of variability and ginned up into a totally fake fucken story.
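
For what it’s worth, the control this retraction implies was missing, checking whether an apparent complementation rate actually exceeds the spontaneous reversion rate, is a plain comparison of two proportions. A minimal sketch with a one-sided Fisher exact test; every count below is invented for illustration and has nothing to do with the actual HOS10 data:

```python
# Hedged sketch: comparing an apparent "complementation" rate against a
# spontaneous reversion rate with a one-sided Fisher exact test.
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]],
    testing whether the first row's success rate exceeds the second's."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    # Sum hypergeometric probabilities P(X >= a) under the null of equal rates
    p = 0.0
    for x in range(a, min(row1, col1) + 1):
        p += comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    return p

# Hypothetical counts: 8/20 "complemented" plants vs 7/20 spontaneous revertants.
p = fisher_one_sided(8, 12, 7, 13)
print(p > 0.05)  # prints True: the two rates are statistically indistinguishable
```

With counts like these, the “complementation” is indistinguishable from background reversion, which is exactly the scenario the notice describes as the likely cause of the error.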

Inselgate linkage

June 9, 2010

Additional reading:
Carlat blog
The Great Beyond Blog
Grassley throws down with the Inspector General of HHS and University of Miami
Healthcare Renewal Blog (from a sustained critic of Nemeroff, fwiw)
Public Trust at NIMH
Happy Times at NIMH Part 3

At all.
ScienceInsider overviewed a dismal story being reported by The Chronicle of Higher Education. It involves a tale I’ve discussed before with a new twist. ScienceInsider:

In 2008, a Senate investigation found that Nemeroff failed to report at least $1.2 million of more than $2.4 million that he had received for consulting for drug companies. NIH suspended one of Nemeroff’s grants, and in December 2008, Emory announced that it would not allow Nemeroff to apply for NIH grants for 2 years.

As I was just saying, this is the scope of the real problem. Changing the reporting rules from $10K per year to $5K per year does absolutely nothing about a guy who fails to report some or all of his outside activity.

Still, a 2-year suspension sounds like something, doesn’t it?


I first saw the story break in a retraction notice published in PNAS.

The authors wish to note the following: “After a re-examination of key findings underlying the reported conclusions that B7-DCXAb is an immune modulatory reagent, we no longer believe this is the case. Using blinded protocols we re-examined experiments purported to demonstrate the activation of dendritic cells, activation of cytotoxic T cells, induction of tumor immunity, modulation of allergic responses, breaking tolerance in the RIP-OVA diabetes model, and the reprogramming of Th2 and T regulatory cells. Some of these repeated studies were direct attempts to reproduce key findings in the manuscript cited above. In no case did these repeat studies reveal any evidence that the B7-DCXAb reagent had the previously reported activity. In the course of this re-examination, we were able to study all the antibodies used in the various phases of our work spanning the last 10 years. None of these antibodies appears to be active in any of our repeat assays. We do not believe something has happened recently to the reagent changing its potency. Therefore, the authors seek to retract this work.”

Although curious as to who the bad apple was, given that all authors signed the PNAS retraction, I have to admit that the “10 years” thing really got my attention. I have been waiting for the other shoe to drop…turns out it was a closet full of shoes.


Something that arose in the comments after my last post on the Brodie affair was underlined in the newspaper report from 2007.

Oddly enough, Brodie’s conclusions were found to be correct and supported by later research, said UW professor Lawrence Corey, head of the UW’s virology division in the Department of Laboratory Medicine, in The Seattle Times. Brodie worked in Corey’s retrovirus laboratory.

“Did he set back crucial research? The answer is no,” Corey said in the Times article.

Corey, btw, was the substitute PI for one year of one of the Brodie NIH grants.

And it isn’t just this case either. This theme that the faked data supported conclusions that were correct anyway can be seen elsewhere. The Linda Buck laboratory retraction that PhysioProf described long ago featured this, as the author suspected of data fakery claimed to be working on replacement data that would prove he was right. There are several cases of errata and even retractions being followed up with replacement figures or papers showing the original purported data could be replicated.

I smell an implication in these situations that we are supposed to modulate our ire at the original data faking simply because the authors’ conclusions were supportable by later investigations.

Bah, I say. Bah.