On contradicting your own stuff

July 10, 2012

I am struck, today, by the thought that a significant benefit of growing old and comfortable as a laboratory is that you don’t care anymore.

I was just reading a paper, published in a fairly pedestrian journal. It was authored by a postdoc in a fairly well-established laboratory which has published extensively with their…..Bunny Hopper HedgerowDash model.

The paper is actually pretty cool and I’m way down with their findings. It isn’t big stuff but it sheds some light on the workings of the HedgerowDash model. Light, in the nature of methodological variations which might just fundamentally change the outcome of a manipulation, say, amphetamine doping.

Now, as you all know, the Bunny is a crepuscular species. It is most active at dawn and dusk. Your average postdoc or graduate student, however, is most active either in the middle of the day or late at night. (This is established fact.) This may be important.

Now suppose that when the Lab was young, a trainee (or three) generated a set of key initial findings that really put the HedgerowDash model, and therefore the Lab, on the map. Cocaine, amphetamine, nicotine….you name it, any stimulant possible was thrown at the model. Antagonists were deployed. Aaaaaand, the trainee sac’d the Bunnies at the end of the Dash and came up with all kinds of key pathway changes to identify the neuronal circuitry and neuropharmacology that was involved in the HedgerowDash.

Off they went! Whee! and the field followed along and adopted the Dash model in their own Laboratories. And the NIH looked and found that it was Good.

And many, many trainees succeeded….but many, many failures were obtained*. Failures to replicate even the positive control findings with the most basic challenge, amphetamine. Even, years later, in the original Dash Lab. Science is hard, no?

The negative findings never saw the light of day. After all, the Dash Lab had a MODEL. It must work. So any trainees who failed to get the positive control, well, clearly they were doing things wrong. “Go back to GoldenHairedTrainee’s protocol and do that. Precisely,” says the helpful PI. Similar thoughts were extended in the other labs: “Did you call the Dash Lab and ask for help? I’ll chat with the PI at the next meeting and see what is up….” says the other PI. Trouble was shot and the graduate student got her data. The postdoc finally got the model to “work”. All were happy.

Meanwhile…..the contrarians.

It doesn’t make any sense, they would observe. This is such a great effect, why can’t we do the experiment in the smart way and get it to work? I run my Bunnies at dawn and dusk….that’s their active period! That’s when they need to run away from the foxes. Don’t you know the Dash Lab papers from back in the day ran their animals at…..NIGHTTIME! They bloody well ran studies in sleepy Bunnies, woke them up and hit them with amphetamines. Don’t you think maybe they were a little biased for a low baseline Dash speed? Huh??!!!???? Oh, and did you know it was all single-challenge, between-groups analysis? The trainee was sac’ing them for brain measures. These lab-bred Bunnies never even saw the Hedgerow course but once. It was totally novel. And they weren’t even hungry for clover. No wonder they ran like hell for cover when they smelled the fox odor.

Perhaps they tried to get their papers published, perhaps a grant Specific Aim or two, on the basis of their objections. “How do we know this is relevant?” they would query. And the world would kick them in the teeth because clearly they were incompetent at getting the Dash result that all of the real scientists could produce (finally whew!). Never mind what makes “sense” for this model, what matters is what works.

Until…finally….in its dotage seniority, the Dash Lab fesses up. The PI allows a trainee to publish the warts. And compare the basic findings, done at nighttime in naive bunnies, with what you get during the dawn/dusk period. In Bunnies who have seen the Dash arena before. And maybe they are hungry for clover now. And they’ve had a whiff of fox without seeing the little blighters before.

And it turns out these minor methodological changes actually matter.

Whee! Now we’re cooking with gas, Bunny Hoppers!

As I said, what I’ve been thinking about is the point at which you** have the confidence to contradict yourself in published papers. Do you let out a story which has contradiction right from the get-go? Let the warts out there, even when you don’t yet understand the critical methodological differences that produced a seemingly contradictory outcome?

Or do you publish, even papers which seemingly contradict each other, and let the field start worrying away over the differences and help you figure it out?

__
*In the event any Readers know what paper I am talking about, at this point we are well off into the fiction part of our blogpost. I have no specific knowledge it shook down like this.

**I’m going to leave my apparent views on this (based on my publishing behavior) out for now…

22 Responses to “On contradicting your own stuff”

  1. mikka Says:

    Sure, if you want your paper to get rejected without review go ahead and carefully lay out the caveats and precautions. The editor will be very grateful that you gave him/her the reasons to RTA without having to put too much thought into it.

    I wish the game was different (I wish it hadn’t become a game), but anything less than whiz-bang-paradigm-shift-no-question-about-it will not do well. Let the reviewers point out the caveats and concede them if you agree.

  2. DashLabGrad Says:

    And many, many trainees succeeded….but many, many failures were obtained*. […] Even, years later, in the original Dash Lab. Science is hard, no?

    this is where I’m at.

    grad students before me (and in PI’s former postdoc lab) discover that PI’s postdoc findings (aka, foundation for ALL work done in this lab) only replicate under certain conditions. conditions which The Model should be invariant to, as they just reflect habits of how the PI did his thing.

    subtlety of effect is known to those in-the-know but hasn’t been reported yet, except for corner of posters at meetings.

    why? b/c “under which conditions” is hard Q to characterize and, even if characterized, subtlety makes The Model much more complicated

  3. miko Says:

    “In the event any Readers know what paper I am talking about…”

    You are talking about most papers.

    Confucius once said, “Never trust a previous postdoc/student’s reagents, results, notebook, code, etc. Make your project as separate as possible and not dependent on prior results in which your PI is politically invested.”

  4. drugmonkey Says:

    Well yes, that was the point, miko. But there *was* a paper and there are certain of my readers that may figure out exactly which one I mean. I just wanted to be clear on where the generalizing started….

  5. proflikesubstance Says:

    I’m about to publish something that contradicts (albeit slightly) something we published only a couple of years ago. We discovered the issue as more data became available and I would rather say “we were wrong, but it’s still cool” than have someone else point it out. So I would be a proponent of getting the word out early, but the culture in my field is often gravely at odds with the stuff I read on NIH-land blogs. So, grain of salt, and all.

  6. Dr 27 Says:

    I agree with Miko. I remember beating my head against the wall many, many times during the first few months of my PhD. Until the lab manager and the boss talked and somehow one convinced the other that the difficulties of my project were too great to have a naive student like me do it. I switched projects and all went well. I learned from people before me and taught new students. I don’t know how things are stacking up now. My PhD PI occasionally has a change of heart, starts killing off projects and doing things in totally new directions. I faced lots of frustration in those first months and even wrote to her first students. All replied the same way, they shared some tips but basically said that each project was its own beast and that effort and a bit of black magic were needed to move it forward.

  7. Dave Says:

    With the mass hysteria going on over at Retraction Watch these days, I would be concerned that some would call for a retraction of the original work. I’m serious. There is a discussion over there about whether or not Linus Pauling’s original (and incorrect) DNA-model paper should be retracted.

  8. drugmonkey Says:

    Dave-
    I totally agree that we science types need to be on guard against the humanities type douche sticking an ignorant nose into the process of science. However, this is no reason to justify not publishing contradicting results. We need to fight for the scientific process, not capitulate to Puritan witch hunters.


  9. @Dave. Yes, retraction (beyond actual fraud) is a tricky topic. I mean, you’d *hope* old papers got things wrong — otherwise there isn’t much point to doing new work.


  10. I haven’t been in a position to contradict old data like you describe, but we are about to publish a paper whose conclusion is “the signal that induces C is not A, but B”, and four years ago, we published a paper whose title is (paraphrasing) “A induces C”. All the data are fine and repeatable: the issue is one of changing interpretations as additional experiments are performed.

  11. Dave Says:

    I agree with both of you. I am seriously concerned that some misinformed types are out to destroy the reputation of science and tarnish the very nature of scientific research. Publishing articles that contradict ones earlier work is totally appropriate in my opinion and should be openly encouraged. Conversely, delaying such work for fear of repercussions is definitely problematic on so many levels.

  12. drugmonkey Says:

    The Retraction Watch crowd has also ventured that retracted papers should be disappeared from the literature. Crazy talk. I totally agree that every effort should be taken to identify the retracted work in databases like PubMed and on the journal sites themselves as retracted….but to remove them as if they never existed? nuts.

  13. ninacat Says:

    Think you need to have one of those “incompetents” that the world kicked in the teeth respond. Nice that Dash finally came around–but how many careers went bust in the interim? Results evolve as our techniques get better, our perceptions change, our knowledge base grows–no issue there. Maybe one day, conceding that you got it wrong –or at least that there are contradictions–will be considered the measure of a good scientist.

  14. drugmonkey Says:

    I haven’t been in a position to contradict old data like you describe,

    I may perhaps have been excessive in my description but as with your formulation, it isn’t that it is wrong, per se. More that the effects (i.e., the “model”) obtain in one very specific set of circumstances which may not be the most logical or have the best face validity or, therefore, be the most convincing foundation on which to infer something about human health. Yet that model is deployed with the implicit or even explicit assertion that it indeed does tell us something about human health (and, consequently, the laboratories win NIH funding to use the model).

    Not exactly Emperor’s New Clothes. The effects are real…they just fail to generalize.

  15. kevin. Says:

    My PhD supervisor said that we should be the ones to publish why our previous model was wrong, rather than it being our competitors.

  16. DashLabGrad Says:

    The effects are real…they just fail to generalize.

    Exactly.


  17. BTW, from a career standpoint, there is another lesson here:

    As a trainee, if you are considering joining a lab where all of the work in the lab is directed towards “proving” some grand Theory or Model that the PI has staked her entire independent career upon, run away. Fast.

  18. MIles Says:

    I tell everyone who joins my lab that they are not expected to produce certain results but only great science. I don’t care about the message as long as it’s based on solid data. If they show that I was wrong: even better. That’s more stimulating than doing the same thing over and over again.

  19. Jim Thomerson Says:

    Some years ago I read a book about doing science, no idea of title or author. One incident described was the author getting one set of results with experiments on bunnies and a colleague getting contradictory results. They got together and compared notes. It turned out that they had different ideas about the optimum size of experimental bunny and were using rabbits of different ages. Both their sets of results were valid.

    In the late 90s I published my first phylogenetic tree based on morphology. A few years later, I coauthored a paper presenting a DNA-based phylogeny which does not support my previously published phylogeny. How can there be subsequent progress without previous error?

  20. Spiny Norman Says:

    Great discussion.

    As a postdoc I was part of a group of trainees that gently escorted the lab off of a couple of runaway Crazy Train cars… a series of papers published in journals that certain people obsess over had conclusions (and in some cases experiments) that were simply wrong. Our PI sometimes resisted but in the end was wonderfully good about publishing revised findings, methodology papers explaining how not to repeat the errors of the past, etc. Things have really been cleaned up nicely. Well, except for that one ex-postdoc who continues to ride the Crazy Train. The rest of us avoid engaging his lab’s, er, work, and publish the occasional corrective.

  21. Spiny Norman Says:

    “As a trainee, if you are considering joining a lab where all of the work in the lab is directed towards “proving” some grand Theory or Model that the PI has staked her entire independent career upon, run away. Fast.”

    No joke, that.

  22. Paul Orwin Says:

    Over at RW, especially in the comments, you see a distinct naivete. Apparently no one has ever misinterpreted their results, and every poorly rendered figure (especially WB) is grounds for the assumption of fraud. In the real world, stuff like you describe happens all the time. It’s human nature, unfortunately, to protect your personal interests rather than explore the ways that you might have gone astray.

    Also, I think the key word is “model.” A model system is just that, a model – not an analogy, not a stand-in. Assumptions that what happens in the model is the same as what happens “in the real world” make you look stupid. The proper use for a model system is to do specific, clear experiments on how the treatment (in whatever context) alters the state of the model. So the “DashLab” (and I’m not in your field, and have no idea or interest in who they are – but it’s a universal issue) wasn’t wrong then to use this one specific model system, and they aren’t wrong now. They are just doing different things.

    This does bring up another tension in science, though – often it is impossible to replicate work because M&M sections are truncated and incomplete, but the first thing any reviewer will say is “cut the M&M, it’s way too long”. Another tension (sort of aligned) is when reviewers ding you for not extending your results to bigger ideas in the discussion, but of course that speculation and hypothesizing (which is necessary and important) is exactly what can lead you to mistreat the model.

    good discussion topic – something to bring up in our next lab meeting

