Eight retractions…so far.

May 19, 2010

I first saw the story break in a retraction notice published in PNAS.

The authors wish to note the following: “After a re-examination of key findings underlying the reported conclusions that B7-DCXAb is an immune modulatory reagent, we no longer believe this is the case. Using blinded protocols we re-examined experiments purported to demonstrate the activation of dendritic cells, activation of cytotoxic T cells, induction of tumor immunity, modulation of allergic responses, breaking tolerance in the RIP-OVA diabetes model, and the reprogramming of Th2 and T regulatory cells. Some of these repeated studies were direct attempts to reproduce key findings in the manuscript cited above. In no case did these repeat studies reveal any evidence that the B7-DCXAb reagent had the previously reported activity. In the course of this re-examination, we were able to study all the antibodies used in the various phases of our work spanning the last 10 years. None of these antibodies appears to be active in any of our repeat assays. We do not believe something has happened recently to the reagent changing its potency. Therefore, the authors seek to retract this work.”

Although I was curious about who the bad apple was, given that all authors signed the PNAS retraction, I have to admit that the “10 years” thing really got my attention. I have been waiting for the other shoe to drop…turns out it was a closet full of shoes.


Today I got alerted on Twitter to six retractions in the Journal of Immunology. The first one has this to say:

In the course of investigating suspicious patterns of experimental results in the laboratory, a systematic and in-depth study of key findings in this article was carried out using blinded protocols. In these repeat studies, no evidence was found to support our original conclusions that B7-DC XAb modulates dendritic cell functions. We do not believe our failure to reproduce our earlier findings is the result of a technical problem. A member of the B7-DC XAb investigative team, Dr. Suresh Radhakrishnan, who was involved in or had access to all the work on this subject, was found in a formal investigation to have engaged in scientific misconduct in unpublished experiments involving the B7-DC XAb reagent. This finding of misconduct and our inability to reproduce key findings using blinded protocols has undermined our confidence in our published report. We seek, therefore, to retract this body of work.

Retraction number 8 is in PLoS One. It tells a similar story.

An investigation by the Mayo Clinic has determined that one of the researchers in Professor Pease’s laboratory at the Mayo Clinic, Dr. Suresh Radhakrishnan, tampered with another investigator’s experiment with the intent to mislead toward the conclusion that the B7-DCXAb reagent has cell activating properties. Using blinded protocols, experiments were done to see if the results based on this reagent could be replicated. Specifically, the repeat experiments examined the activation of dendritic cells, activation of cytotoxic T cells, induction of tumor immunity, modulation of allergic responses, breaking tolerance in the RIP-OVA diabetes model, and the reprogramming of Th2 and T regulatory cells. In no case did these repeat studies reveal any evidence that the B7-DCXAb reagent had the previously reported activity. The authors of this paper therefore wish to retract this paper because of the inability to reproduce key aspects of the studies and hence the results in them cannot be considered reliable.

It all boils down to one fake assay, right? That’s what I’d assume. One scientist had the useful assay in the lab and was the go-to dude to add this to any story at need. Why would anyone else try to repeat his studies, he was da man.
Update: via writedit, Radhakrishnan’s patents.
Update 2: As you might expect, my next question was about the grants. Let’s check on the PI, shall we? A RePORTER search for Pease, Larry turns up:
POTENTIATING DC FUNCTION THROUGH B7-DC (Aug 03 – Jul 06)
PROMOTING TUMOR IMMUNITY BY CROSS-LINKING B7-DC (Feb 04 – Jan 08). R56 Bridge (2008), Competing continuation (ends Nov 13).
BLOCKING AIRWAY INFLAMMATION WITH B7-DC CROSS-LINKING AB (Jan 06 – Dec 09)
A RePORTER search for “B7-DCXAb” pulls up 25 projects in Fiscal Year 2010.
Okay, okay, I’m starting to get the picture. The statement in PNAS

“…that B7-DCXAb is an immune modulatory reagent, we no longer believe this is the case. … In no case did these repeat studies reveal any evidence that the B7-DCXAb reagent had the previously reported activity.”

is a big, big, big thing.
Update 3: Number nine…
http://jem.rupress.org/content/early/2010/04/02/jem.2002146632610r.short?rss=1&ssource=mfc

29 Responses to “Eight retractions…so far.”

  1. Gummibears Says:

    This may be at least partially attributable to the crazy rush to patent everything that can be patented (with the addition of things that are obviously non-patentable). The rush compromises the quality of the work, and the (at least partial) secrecy makes repeating the studies by others difficult. I have quickly typed a few words in Google, and here it is:
    http://www.wipo.int/pctdb/ja/wo.jsp?WO=2008067071&IA=US2007081975&DISPLAY=DESC



  2. It all boils down to one fake assay, right? That’s what I’d assume. One scientist had the useful assay in the lab and was the go-to dude to add this to any story at need. Why would anyone else try to repeat his studies, he was da man.

    Maybe. But then how did this bag of fuck always know what was the “right” answer when someone gave him some shit to assay?


  3. neurolover Says:

    “Maybe. But then how did this bag of fuck always know what was the “right” answer when someone gave him some shit to assay?”
    Why indeed. Perhaps a king asked to be rid of a turbulent data set (or to produce one). I firmly believe that if incentives reward cheating (even in the short term), cheating will occur, and that no amount of ethics training and “messaging” will change the behavior.
    The retraction will take down the offender, the guy who actually faked stuff. But there probably won’t be any consequences for the big guys in charge, which will still incent them not to look too hard for this kind of cheating. I’m getting pretty cynical in my old age.


  4. LadyDay Says:

    Here’s another famous falsification of data, in case you hadn’t heard (sorry, don’t have time to check the archives right now).
    http://www.nature.com/news/2009/091222/full/462970a.html
    Pretty big.


  5. DrugMonkey Says:

    Not so fast neurolover, science is full of failure so undetected cheating is always incentivized. Need to focus more on the chances of being detected and the consequences.


  6. druggie Says:

    DM,
    1. Your post is not clear to me. Would you please re-word it ?.
    2. Neurolover is pointing to the fact that small guys are the ones for whom there are consequences. Big guys almost always escape. This is more frequent in science than it is in other social spheres.


  7. Namnezia Says:

    I actually think that cheating is more likely to occur in a big powerhouse lab than in a small lab, and that it is almost always driven by individual lab members. In a big lab:
    1. You have the implicit authority which comes with a bigshot PI – “This must be right if it comes from this lab, why question it?” Plus, publications are more likely to be in top-tier journals.
    2. Cheating by an individual lab member (e.g. faking an assay) is more likely to go undetected by the PI when there are so many projects/techniques going on in the lab, some of which the PI doesn’t know the first thing about evaluating (see post from a few weeks ago about this).
    3. The large number of postdocs and students fosters competition between lab members, incentivizing cheating.
    4. Outside labs are more likely to collaborate with the bigshot lab to include its famous assay in their study than to try to replicate it themselves.
    I’ve known several people who have quit grad school/postdoc after spending three years trying and failing to replicate bogus findings from bigshot labs. I’ve also known people in bigshot labs who have reported that cheating was fairly common in their lab, but that the PI had turned a blind eye, either knowingly or unknowingly.



  8. Your post is not clear to me. Would you please re-word it ?.

    AHAHAHAHAHAHAHAH!!!!!!!!!!!!!!!! Accurate perception of motherfucking reality FAIL!


  9. neurolover Says:

    “Not so fast neurolover, science is full of failure so undetected cheating is always incentivized. Need to focus more on the chances of being detected and the consequences.”
    Not sure why the frequency of failure incentivizes undetected cheating? Just ’cause people like to succeed?
    I don’t actually have an opinion on whether cheating is more likely to occur in a powerhouse lab. There’s a good argument that it’s more likely to be detected (because they’re doing more visible work), even if it’s not more common. The difference is that a small lab with significant cheating is likely to be out of business, while the big lab can survive cheaters in its midst. A supersized lab can afford to retract 8 papers and still continue, while the little lab will fail.
    I think I’m actually focusing on the chance of being detected and the consequences. I think that as long as the big structures in which the cheating occurs (i.e., departments, big labs, etc.) remain stable in the face of cheating, detecting the individual cheaters and punishing them will have little effect in preventing the kinds of cheating that we’re hearing about, when it’s possible.
    I think we’ve reached the point in science where we need to think about how to make it less possible, rather than relying on ethics and honesty and self-policing and failure of replication as the safeguards. An example might be the blind assay that they’ve suggested here. If the individual doing the assay doesn’t know, for example, which sample is the control, they can’t cheat. Perhaps more of these types of experiments need to be structured in ways where the person can’t cheat.
    I’m not sure which recent big fraud cases this kind of thing would prevent, but it might help with some. Maybe we’ve reached the stage where we have to blind the data before people do analysis, and then have a reveal, wherever that’s possible, and work towards structuring experiments that way.
    The question of whether this is a *big* problem or not? Well, I don’t know. One could assume that the cases we hear about are the fraud cases, that, in general, they get detected and fixed in the literature. That’s the premise of replication being the gold standard in science. Ultimately you can’t fraudulently maintain a scientific idea that’s contrary to reality. That’s the beauty of science.
    But you can create 8-year digressions, let fraudulent but snappy science drive out the true but merely good, destroy careers. And there’s the realm of work in which our knowledge is broad and flat enough that replication is only a theoretical desire, not a practical reality. How long will it take us to uncover the truth in those, after fraud has seeded itself?
    Right now, the fraud that’s driving me crazy is the CDC report on lead level tests in children in DC who had been drinking water with high levels of lead. The report suggests classic data-massaging science (ignore data sets you can’t find and assume they work in your favor; disregard some explanations for data sets at the expense of your favored ones), and its biggest effect is going to be to devalue all the science done by the CDC, buttressing the woo at the expense of real science.


  10. DrugMonkey Says:

    Not sure why the frequency of failure incentivizes undetected cheating? Just ’cause people like to succeed?
    If the job is to pick up this pile of rocks and move it over there, success is not variable. The job lies in the doing.
    If the job is to run 100 meters down a track every other month, success is available to a large part of the population.
    If the job is to swim across the pool *faster than everyone else* then success is variable.
    In science, the job is not just to conduct reasonable-seeming experiments with a moderate degree of proficiency. The job is to generate results that conform to a particular outcome (i.e., reaching the all sainted p-less-than-oh-point-oh-five standard) and that hang together into a coherent story about something or other. In many cases, the job is to beat a bunch of other people (within or between labs) to the punch. The job is to find out new things that nobody else has figured out before. As I said, there are a lot of ways to fail.
    This incentivizes cheating in a way that some other categories of profession do not.
    Obviously, for most of us, the motivations of finding stuff out that is actually likely to be true far, far outstrip the motivation to get a “successful” outcome even if it is fake.
    The fascinating question is always why this superiority of the real finding changes* in the cheaters.
    *I am one who does not believe the population of fraudsters is populated only by dishonest fakers who would cheat no matter what the situation. I believe that we ignore environmental contributions to data falsification at our peril as an industry.


  11. DK Says:

    @Namnezia Re: cheating is more common in bigshot labs.
    My experience agrees well with your points. Still, not sure about “more common”. Could very well be a selection bias – the stuff coming from bigshots draws more attention. It is entirely possible that a lot of cheating goes on in small labs where the PI is desperate for tenure. Good stats on this are very difficult to get!


  12. Suresh Radhakrishnan Says:

    This is indeed the saddest time. During this whole process, irreparable damage has been done to both the professional and personal reputations of Dr. Pease, Pease laboratory members, and me. The investigation committee’s conclusion is based on multiple pieces of guilt-by-association and corroborative evidence. Because of the lack of significant alternative explanations, this conclusion seems appropriate.
    The analysis of the risk-to-benefit ratio associated with falsification of data would have discouraged me from performing such actions, both in the past and in the future. Moreover, the alleged modus operandi of the falsification process excludes all forms of rationalization. For instance, I am aware that the presence of laboratory personnel, including me, can be monitored by a video camera, as demonstrated by Dr. Pease in May 2009. The alleged contamination act performed by me was in July 2009. The lab technician was present in the laboratory at the time of my alleged addition of the chemical. Because of the pure fear of being observed and therefore getting caught, as Dr. Pease might have a camera in Mike’s laboratory, I would not have had the courage to perform the alleged action. Moreover, the presence of Mike in close proximity would have again caused enormous fear in me of getting caught red-handed. Therefore, I would have avoided such actions.
    It should be emphasized that the invaluable interactions with Dr. Pease made me always believe that data will take care of itself and that there exists a mechanism that causes self-correction: if it is true, it shall withstand the test of time; if it is not, it will not stand the test of time. Extraordinary mentoring by Dr. Pease has always made me follow the highest standards regarding the performance of experiments.
    In this context, I wish to state two examples that underscore the conceptual verification of our findings. First, we observed that T-bet, a transcription factor expressed in mouse dendritic cells, was required for the conversion of mouse T helper cell phenotype, which was outside the paradigm at that time. In support of this idea, a recent article in the Journal of Immunology describes the effect of expression of T-bet in human dendritic cells and the requirement of that expression for conversion of human T helper cell phenotype. Second, we had compelling data to conclude that the T cell repertoire is shaped by the selecting MHC molecule, which was again an outside-the-box finding. However, recent inconsistencies observed with respect to the antibody functions precluded the finding from being published in a peer-reviewed journal, including Nature or Science. Recently, Ted Hansen’s group from Washington University School of Medicine published an observation similar to ours in the journal Science. These examples highlight the existence of a self-correction process for data. Moreover, it would have required supernatural abilities on my part to see the future in an attempt to falsify the outcome, as there was no existing precedent for the observation. However, the recent observations of functional inconsistencies exhibited by the antibody are a matter of grave concern.
    However, I do not have an explanation for the current loss of effect of the antibody preparation. It could be due to a combination of the factors described below:
    The concomitant presence of mutant species that fail to bind, bind inappropriately, or block the binding of the agonist version of the antibody species present in the serum-derived antibodies. This was shown not to be the case, as the serum derived from the original bleed failed to cause detectable functional effects on the dendritic cells. However, this could be vaguely attributed to the shelf life of the antibody preparation. The shelf-life prediction we made, based on the interpretation of data from the experiments performed initially, may not accurately reflect the actual shelf life. Identification of detectable proteins in the serum by biochemical methods does not always positively correlate with the functional ability of the antibody. This might be a potential reason for the precipitous drop in the functional consequences of antibody binding on dendritic cells.
    As most of the findings were based on initial observations made by me, it shall be in my best interest if I can manipulate the antibody in such a manner that it induces positive effects, including phosphorylation events, consistently, regardless of the person performing the experiment or the nature of the dendritic cell culture. Therefore, it shall be highly prudent, and extremely comfortable, on my part to adulterate the original antibody stock vials present in the freezer during the wee hours of any morning in general, and weekends in particular. After all, I am “da man”. HOWEVER, I DID NOT COMMIT THE ACT.
    In short, if I were instrumental in the falsification of the data, it would be my responsibility to ensure that EVEN THE FALSIFICATION occurred in a reproducible manner, without any intra- or interpersonal variation. Therefore, I would have adopted the “whatever-it-takes” approach to my alleged corrupt actions in a manner that safeguards my vested interests. HOWEVER, I DID NOT.
    Summary:
    Taken together, I would like to restate my innocence with regard to the alleged claim. I would not have nor will I carry out an act that will cause irreversible damage to the pride and prestige of the following people and Institutions: A: My SON B: Dr. PEASE. C: Pease laboratory Members. D: the Mayo Clinic. E: The United States including NIH. F: My mother land, India G: MY FATHER H: my family members. I: My teachers


  13. jojo Says:

    Step #1: Uncover fraud
    Step #2: Take back grant money, fire cheaters, and otherwise punish cheaters
    Step #3: Make sure everyone who might stumble upon them knows which articles are bogus and why, and make this information available permanently (watermark .pdfs, post retraction notices on pubmed abstracts and other search sites, etc).
    #3 is as important as steps #1 and #2 because otherwise how will young scientists trying to learn a field know? Maybe they have a big-wig boss who knows the story, but that’s not always the case. Sometimes you have to learn a field from the lit, and if retractions aren’t obvious (see links posted by Kirmelic above), you are going to believe fake data is real (do you google the name of every author of every paper you read to make sure there’s no fraud associated with anyone? I didn’t think so). The costs to science in wasted effort won’t stop with the announcement of the fraud.


  14. DrugMonkey Says:

    There is a bit of coverage on this story up at The Scientist today-
    http://www.the-scientist.com/blog/display/57449/
    and additional comment from MsPHD


  15. Kent Says:

    Yes, science fraud is common. It happens in some of the best labs, and mostly NIH’s ORI tries to do nothing. Since there is a 6 year statute of limitations (which technically restarts any time the original author(s) cite the work again) ORI has an excellent excuse. So such things never come to light unless the head of the lab does it.
    When the head of the lab does not, and gives every indication of approving of such things, then you have a real problem. I have heard the name of a highly placed chair who had a sworn complaint filed, but ORI did nothing. That postdoc who manufactured the data got a plum position as a professor of immunology. I will see if I can find the text of the complaint with papers and names.


  16. DrugMonkey Says:

    I will see if I can find the text of the complaint with papers and names.
    I want to make it clear that we try to stick to externally verifiable stuff around here if names are going to be named. Newspaper accounts, published ORI findings, that sort of thing. In lieu of that, I’m willing to tolerate “Fig 3 in paper Blah et al is a rotation of Fig 4 in Tweedledee and Tweedledum” kinds of things, as long as it is specific enough that the informed readership can make the comparison for themselves.
    long way of saying I may unpublish comments that are accusatory with minimal backing evidence.


  17. Anonymous Says:

    Ouch:

    But despite the scope of the retractions, the impact on the field is likely to be minimal, researchers said. The loss will be “not too significant because it was a unique reagent with a unique proposed mechanism of action,” Freeman said. “The damaging effect is clear but as far as I know the clinical projects derived were not of key importance for our understanding on how the immune system works,” immunologist Ignacio Melero of the University of Navarra in Spain agreed in an email.


  18. Neuro-conservative Says:

    ^^^Whoops — forgot to sign in — #18 above was me.
    Talk about adding insult to injury — that paragraph was harsh.


  19. msphd Says:

    Having read Suresh’s comment, I have to say, this is pretty scary. I’ve worked with antibodies, and I can completely understand the situation if it really is a case of an antibody that seemed to work in the assays in which it was used – maybe recognized some complex epitope(s) – and then went bad.
    If that’s the case, one has to wonder about the details. Are we seeing evil where there was only ineptitude?
    What were the assays? How was the antibody stored? Fridge or freezer; with or without additives or dilution? Were there multiple batches? For how long was it working before it went bad? Years?
    I don’t know enough about the case in question, but maybe those of us who haven’t actually read all the articles shouldn’t be so quick to judge? Glass houses, anyone?
    I think we’ve all had the bad experience of having reagents go bad, sometimes suddenly, or of giving other people our reagents & protocols only to have them not follow the protocols and then claim that our data were not reproducible.
    This is definitely a nightmare scenario, regardless of whether the original results were in error – reproducibly or otherwise – or whether Suresh’s explanations are correct.
    I’m not sure if I understand how deliberate falsification (as opposed to reproducible artifact or sudden reagent death) can be proven, unless the data can be shown to be manipulated?
    I have to wonder about a lab where the PI feels it’s necessary to videotape the lab members at work? Or is this actually really common? (???)
    Of course, it’s rare that the person who falsified the data will admit to it, even in cases where the evidence is irrefutable and can’t be explained any other way.
    Still, some famous person said something about how a civilization should be judged by how it treats the weakest members.
    I agree with neurolover – we’re beyond relying on ethics and honesty. We need better systems for preventing these kinds of messes in the first place.



  20. RE: Good thinking MsPhD!
    That’s exactly what I had in my mind when I first read and inquired of the Mayo case report here: “Promising therapy scuttled by alleged misconduct — RE: Fraudulent scientific research and misconduct at Mayo!?” (NatureBlogsUK; May 26); and in the DrugMonkey herein above; and elsewhere, including yours as well.
    Best wishes, Mong 5/28/10usct2:41p; practical science-philosophy critic; author “Decoding Scientism” and “Consciousness & the Subconscious” (works in progress since July 2007), Gods, Genes, Conscience (iUniverse; 2006) and Gods, Genes, Conscience: Global Dialogues Now (blogging avidly since 2006).


  21. science Says:

    Though Suresh’s crocodile tears are meant to justify his stance, people who know him personally will swear to his penchant for manipulating data. He is definitely not the weakest member of civilization. And this is not the first time he has been caught manipulating data. The last time, he escaped without much ado because it was caught before being published. For the likes of Suresh, anything is OK, as long as they can benefit from it.
    Actually it is scary to have such colleagues.
    What strikes me as odd is that over 10 years and 15 papers, not once did he stop to think about the impact on the careers of colleagues, the PI, the institute, etc. Even if there were no penalty in the scientific field for falsifying data, what happened to personal work ethics???


  22. Suresh Says:

    I agree. More regulations are necessary. However, as rightly noted by “Science”, unless the individual’s good conscience and morality determine his or her actions, there is only so much that can be done.
    Suresh.


  23. Anonymous Says:

    I reiterate my sincere gratitude and respect towards Dr. Pease, the Mayo Clinic, and the members of the Pease lab, and it shall be unabated.


  24. Hopeful Says:

    Fraudulent scientists are increasing in number. They only consider or utilize data that supports their hypothesis and hide the data that shows they’re wrong. This is happening at an alarming rate because it’s the big stories that sell, and fraudulent scientists will go to great lengths to make sure that select pieces of data don’t get in the way of the desired story line. Data gets buried all the time. Unfortunately, many people are getting away with it and are also getting jobs in their respective fields. After all, it takes many, many years for a case of misconduct to be concluded (particularly in the US), and by that time the individual already has an appointment. If nobody investigates the authenticity of the data, then the culprit is off the hook and lives happily ever after. Gotta feel for the honest scientist who lost their dream appointment/job to a fraudulent scientist.
    If these deemed fraudulent scientists really want to help they need to come clean. Explain what was done. Why it was done. Why the “system” was unable to detect it for so long. How they found a way to exploit the system. Why the reviewers didn’t stand a chance to unearth the truth during the peer-review process. And to comment on the stresses and hardships (both financial and emotional) they’ve faced to push them to do these things. Otherwise, history will repeat itself and there will be another Suresh tomorrow.
    In the above case, I also am shocked that the co-authors and PIs on these retracted papers didn’t know. Yah sure they didn’t. Years of federal funding, patents, publicity, clinical trials being planned… and not one of the authors nor the PI sensed anything?
    That says something about this particular lab and its leadership. Time for a wholesale change, and hopefully others will be caught and banned from science.


  25. rhiggs Says:

    If the above commenter #13 is indeed Suresh, his response to this is quite strange. It comes across as if…
    – He wishes he could have falsified the data, but he didn’t
    – He knows exactly how he could have falsified the data, but he didn’t
    – Even if he had falsified the data, which he didn’t, the self-correcting nature of science would have sorted it all out, so it’s not a problem
    Very strange!
    Also, the shelf-life excuse is weak. Very weak. Anyone who works with antibodies has problems with shelf life. Some are more reliable than others, but antibodies regularly ‘go off’ without any explanation. I seriously doubt whether you would retract 10+ papers based on a series of experiments without first establishing whether the lack of repeatability was simply due to the shelf life of the antibody.


  26. SG Says:

    I am not sure why The Scientist published this “Cheap-Damaged-Scientist Available for Work” ad as a commentary.
    http://www.the-scientist.com/news/display/57557/


  27. Joe Says:

    Indeed. Because it’s not some unstable reagent, or some test or experiment that he ran, that is the sole reason for his dismissal:
    http://blogs.nature.com/news/thegreatbeyond/2010/05/promising_therapy_scuttled_by.html
    “According to Robert Nellis, a Mayo Clinic spokesperson, researchers in Pease’s lab began to get different findings in working with the antibody than those the group had previously reported in published work. They tried unsuccessfully to replicate those experiments. Pease then turned to the Mayo Clinic authorities, accusing Suresh Radhakrishnan, a researcher in the group, of tampering with their attempts to validate the past work. An investigation launched by the institution found Radhakrishnan guilty of scientific misconduct. The institution fired Radhakrishnan, and the lab decided to retract all of the published work containing data that could not be verified, Nellis says. ”
    That’s right. As they were trying to recreate his data, he tampered with their work.
    Sorry. No soup for you.


  28. SG Says:

    Why are they still publishing this stuff?
    http://www.the-scientist.com/news/display/57738/


