On negative and positive data fakery
August 22, 2010
A comment over at the Sb blog raises an important point in the Marc Hauser misconduct debacle. Allison wrote:
Um, it would totally suck to be the poor grad student /post doc IN HIS LAB!!!!
He’s RUINED them. None of what they thought was true was true; every time they had an experiment that didn’t work, they probably junked it, or got terribly discouraged.
This is relevant to the accusation published in the Chronicle of Higher Education:
the experiment was a bust.
But Mr. (sic) Hauser’s coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success.
The second research assistant was bothered by the discrepancy. How could two researchers watching the same videotapes arrive at such different conclusions? He suggested to Mr. Hauser that a third researcher should code the results. In an e-mail message to Mr. Hauser, a copy of which was provided to The Chronicle, the research assistant who analyzed the numbers explained his concern. “I don’t feel comfortable analyzing results/publishing data with that kind of skew until we can verify that with a third coder,” he wrote.
A graduate student agreed with the research assistant and joined him in pressing Mr. Hauser to allow the results to be checked, the document given to The Chronicle indicates. But Mr. Hauser resisted, repeatedly arguing against having a third researcher code the videotapes and writing that they should simply go with the data as he had already coded it.
So far as we’ve been able to tell from various reports, the misconduct charges relate to making up positive results. This is common; it sounds quite similar to a whole lot of other scientific misconduct cases: claiming “findings” that are in fact not supported by any positive experimental result.
The point about graduate students and postdocs that Allison raised, however, pushes me in another direction. What about when a PI sits on, or disparages, perfectly good data because it does not agree with his or her pet hypothesis? “Are you sure?”, the PI asks. “Maybe you’d better repeat it a few more times…and change the buffer concentrations to this while you are at it; I remember from my days at the bench 20 yrs ago that this works better.” For video coding of a behavioral observation study, well, there are all kinds of objections to be raised: starting with the design, moving on to the data collection phase (there is next to nothing that is totally consistent and repeatable in a working animal research vivarium across many days or months) and ending with the data analysis.
Pretty easy to question the results of the new trainee and claim that “Well, Dr. Joe Blow, our last post-doc didn’t have any trouble with the model, perhaps you did something wrong”.
Is this misconduct? The scientist usually has plenty of work that could have ended up published, but circumstances have decided otherwise. Maybe it just isn’t that exciting. Maybe the project got partially scooped and the lab abandoned half a paper’s worth of work. Perhaps the results are just odd; the scientist doesn’t know what to make of them and cannot sustain the effort to run the eight more control studies needed to make sense of them.
None of this is misconduct in my view. This is the life of a scientist who has limited time and resources and is looking to publish something that is exciting and cool instead of something that appears to be pedestrian or derivative.
I think it would be pretty hard to make a misconduct case over quashing experiments. It is much easier to make the case over positive fraud than over negative fraud.
As you know, this is something that regulatory authorities are trying to address with human clinical trials by requiring the formal registration of each one. Might it bring nonclinical research to a crashing halt if every study had to be reported or recorded in some formal way for public access? Even if this amounted only to making lab books and raw, unanalyzed data available, I can see where this would have a horrible effect on the conduct of research. And really, the very rarity of misconduct does not justify such procedures. But I do wonder if University committees tasked with investigating fraud even bother to consider the negative side of the equation.
I wonder if anyone would ever be convicted of fraud for not publishing a study.