On negative and positive data fakery

August 22, 2010

A comment over at the Sb blog raises an important point in the Marc Hauser misconduct debacle. Allison wrote:

Um, it would totally suck to be the poor grad student /post doc IN HIS LAB!!!!

He’s RUINED them. None of what they thought was true was true; every time they had an experiment that didn’t work, they probably junked it, or got terribly discouraged.

This is relevant to the accusation published in the Chronicle of Higher Education:

the experiment was a bust.

But Mr. (sic) Hauser’s coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success.

The second research assistant was bothered by the discrepancy. How could two researchers watching the same videotapes arrive at such different conclusions? He suggested to Mr. Hauser that a third researcher should code the results. In an e-mail message to Mr. Hauser, a copy of which was provided to The Chronicle, the research assistant who analyzed the numbers explained his concern. “I don’t feel comfortable analyzing results/publishing data with that kind of skew until we can verify that with a third coder,” he wrote.

A graduate student agreed with the research assistant and joined him in pressing Mr. Hauser to allow the results to be checked, the document given to The Chronicle indicates. But Mr. Hauser resisted, repeatedly arguing against having a third researcher code the videotapes and writing that they should simply go with the data as he had already coded it.

So far as we’ve been able to tell from various reports, the misconduct charges relate to making up positive results. This is common; it sounds quite similar to a whole lot of other scientific misconduct cases: claiming “findings” that are not in fact supported by a positive experimental result.

The point about graduate students and postdocs that Allison raised, however, pushes me in another direction. What about when a PI sits on, or disparages, perfectly good data because it does not agree with his or her pet hypothesis? “Are you sure?”, the PI asks, “Maybe you better repeat it a few more times…and change the buffer concentrations to this while you are at it; I remember from my days at the bench 20 yrs ago that this works better”. For video coding of a behavioral observation study, well, there are all kinds of objections to be raised: starting with the design, moving on to the data collection phase (there is next to nothing that is totally consistent and repeatable in a working animal research vivarium across many days or months) and ending with the data analysis.
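As an aside on the coder disagreement described in the Chronicle excerpt: the usual way to put a number on “that kind of skew” is an inter-rater agreement statistic such as Cohen’s kappa. Here is a minimal sketch in Python, using entirely hypothetical trial codes (nothing from the actual study), of how a low kappa flags that two coders are seeing different things and that a third coder is warranted.

```python
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' categorical judgments on the same trials."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    # Observed agreement: fraction of trials the two coders coded identically.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement: expected overlap given each coder's marginal code frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Entirely hypothetical trial-by-trial codes ("notice" vs "no" change detected):
coder_1 = ["notice", "no", "notice", "no", "no", "notice", "no", "no"]
coder_2 = ["notice", "notice", "notice", "notice", "no", "notice", "notice", "no"]
print(f"kappa = {cohen_kappa(coder_1, coder_2):.2f}")  # ~0.29: poor agreement; get a third coder
```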

Pretty easy to question the results of the new trainee and claim that “Well, Dr. Joe Blow, our last post-doc didn’t have any trouble with the model, perhaps you did something wrong”.

Is this misconduct? The scientist usually has plenty of work that could have ended up published, but circumstances have decided otherwise. Maybe it just isn’t that exciting. Maybe the project got partially scooped and the lab abandoned half a paper’s worth of work. Perhaps the results are just odd; the scientist doesn’t know what to make of them and cannot sustain the effort to run the eight more control studies needed to make sense of them.

None of this is misconduct in my view. This is the life of a scientist who has limited time and resources and is looking to publish something that is exciting and cool instead of something that appears to be pedestrian or derivative.

I think it would be pretty hard to make a misconduct case over quashing experiments. It is much easier to make the case for positive fraud than for negative fraud.

As you know, this is something that regulatory authorities are trying to address in human clinical trials by requiring the formal registration of each one. Might it bring nonclinical research to a crashing halt if every study had to be reported or recorded in some formal way for public access? Even if this amounted to nothing more than making lab books and raw, unanalyzed data available, I can see how it would have a horrible effect on the conduct of research. And really, the very rarity of misconduct does not justify such procedures. But I do wonder whether University committees tasked with investigating fraud even bother to consider the negative side of the equation.

I wonder if anyone would ever be convicted of fraud for not publishing a study.

Responses to “On negative and positive data fakery”

  1. antipodean Says:

    I wonder if anyone would ever be convicted of fraud for not publishing a study.

    The day may be coming when this is true of clinical trials. But when you are a responsible researcher, getting a negative clinical trial published can take fricken years. But must keep at it…


  2. becca Says:

    Well, back in the day, weren’t there people funded by tobacco companies who did NOT publish data suggesting cigarettes would kill you?

    I can think of times when NOT publishing a study is not only fraud, but downright evil.


  3. DrugMonkey Says:

    I am not suggesting anything different, becca. I am speculating that it would be comparatively harder to make the case stick.


  4. "Shecky R" Says:

    The complexity of biological/medical studies means that numerous experimental nuances aren’t always recognized, let alone reported, in any given instance. As a result, the methodologies as described in publication almost never precisely match the methodologies as actually conducted, and the data reported does not precisely match the data as collected. In short, “fraud-like” elements inevitably enter MOST published studies, especially as data gets ‘cleaned,’ ‘massaged,’ ‘scrubbed,’ for publication. What can be difficult to determine is the degree to which those elements impinge on a meaningful interpretation of the results in a given case.


  5. bsci Says:

    What also makes this harder to stick is that good positive studies are also sometimes hard to get through peer review. What if there’s an interesting negative result and you tried to publish it, but it was rejected? How hard do you keep trying? Is presenting it at a conference, say a poster at a conference with 15,000 other posters, sufficient to say the negative result wasn’t hidden?


  6. Pascale Says:

    Exactly. When a manuscript has been rejected by the “bottom of the barrel” journal in your specialty, what obligation do you have to keep trying? And should the PI be held responsible for hiding data, even though s/he submitted the study to multiple journals? Perhaps the editors and reviewers then bear the blame for not recognizing the importance of the data?


  7. DrugMonkey Says:

    In my view, having submitted it for possible publication would go a looooong way towards dissolving any of the complaints I’m referring to. I meant the situation where the PI is the one doing the blocking.




  9. Sili Says:

    What about when a PI sits on, or disparages, perfectly good data because it does not agree with his or her pet hypothesis? “Are you sure?”, the PI asks, “Maybe you better repeat it a few more times…and change the buffer concentrations to this while you are at it, I remember from my days at the bench 20 yrs ago that this works better”.

    This was exactly what Millikan did to poor Anderson when the latter discovered the positron. Millikan didn’t like it, and insisted Anderson had to be wrong.

    As for destroying students, I think it’s wholly correct that Hauser’s will have a problem. Noöne likes a tattle-tale. Very few people will dare hire them for fear that they’ll get told on, too.



  10. David A. Says:

    In allusion to “What if there’s an interesting negative result and you tried to publish it, but it was rejected. How hard do you keep trying?”, I really think negative results are a vital part of scientific knowledge. We are publishing new open-access journals focused on scientific negative results, which, from our point of view, suffer from a notable publication bias. Sometimes this bias comes from journal editors (not in The All Results Journals), but the self-bias researchers impose on themselves when they get negative results is also important.

    Check our early view articles at:

    http://www.arjournals.com/ojs/index.php?journal=Chem&page=issue&op=current

    http://www.arjournals.com/ojs/index.php?journal=Biol&page=issue&op=current

    The All Results Journals need the collaboration of all scientists to succeed, so I encourage you all to write up your negative results and submit them to our journal.
    Thank you,
    David A.


  11. David Says:

    This is a very complicated question and relates to the intrinsic human tendency to over-value information that affirms our hypotheses and to seek to explain away or otherwise dismiss information that dis-confirms our hypotheses. I once worked for someone who dismissed any single data point (not overall effect – each data point) I gathered that failed to affirm his/her hypothesis; s/he would question me about all the details of the context of the data collection and invariably conclude “Oh yes, well, it’s clear that the subject was stressed. We should exclude that data point.” No questions asked when the data conformed to hypothesis!

    Because this tendency is so universal (being exhibited across ethnic groups, nationalities, sexes, and occupations), it has to be a basic and innate aspect of hypothesis testing in human cognition and mentation. Therefore, I don’t think it’s “fraud”, but it is antithetical to the scientific process. I’d leave it to peer review to deal with this kind of problem.

    [As an aside, I’d like to comment that the scenario you describe is one that injures the trainee much more than the scientific establishment. Data gathered well, based upon a reasonable experimental design, is meaningful data, and should be published for the sake of the student, if nothing else. This is where I see the biggest ethical issue in your scenario.]



  12. Therefore, I don’t think it’s “fraud”, but it is antithetical to the scientific process. I’d leave it to peer review to deal with this kind of problem.

    But peer review completely missed this problem in Hauser’s case; worse, peer review ‘declared’ that Hauser was a brilliant experimentalist.


  13. DrugMonkey Says:

    But peer review completely missed this problem in Hauser’s case

    right, and of course peer review cannot deal with selective quashing of negative results until we make the radical step to OpenNotebook science where you are obliged to record everything related to the project and submit that with your manuscript. That would be such a nightmare that I fear we have to live with the presumption of professionalism and occasional bad actors/actions…


  14. Namnezia Says:

    Aside from times when you can’t tell whether a result is due to an experimental artifact, what you describe is essentially cherry-picking of data to fit a hypothesis. While unethical, it would be almost impossible to prove that this is willful misconduct, even with open lab notebooks. For most types of basic science experiments it would be impossible to standardize the way different people record their data in a way that is understandable by third parties.


