“Correct” Versus “Interesting”

April 6, 2008

Maria Brumm has a very nice post up at Green Gabbro discussing some of the journalistic obligations that bloggers inherit when they discuss the primary scientific literature. I want to amplify a little on what she said, and then go off on a tangent about the distinction, inherent in the conduct of experimental science itself, between being “correct” and being “interesting”.


I agree wholeheartedly with Maria that bloggers who adopt the journalistic voice of factual reporting are obligated to make a reasonable effort at getting their facts correct, and to correct themselves when their ostensibly factual reporting turns out to be incorrect or misleading. In fact, I have taken other ScienceBloggers to task for failures to fulfill this obligation (Hi, Greg!).
On the other hand, of course, expressions of opinion are not subject to this same obligation. And I believe strongly that there is a higher-level obligation for bloggers to clearly distinguish between their factual-reporting voice and their expressing-opinion voice, so as not to mislead their readers. (Amid their grotesque stomach-churning Framers-turning-in-their-graves despicable fuckwittitude over the last decade or so, the mainstream media have egregiously failed to even come close to satisfying this obligation, although that is a topic for another day.)
To amplify on something else Maria said: the distinction between bloggers being “correct” and being “interesting” also applies to the conduct of science itself.
It is essential that one’s experiments be “correct” in the sense that performing the same experiment in the same way leads to the same result, no matter when the experiment is performed or who performs it. In other words, the data need to be reproducible.
But it is not at all important whether one’s interpretation of the data, in the sense of posing a hypothesis that is consistent with the data, turns out to be correct. All that matters is that the hypothesis be “interesting”, in the sense of pointing the way to further illuminating experiments.
I spend a lot of time with my trainees on this distinction, because some of them tend to be so afraid of being “wrong” in their interpretations that they effectively refuse to interpret their data at all, and their hypotheses are nothing more than restatements of the data themselves. This makes it easy to be “correct”, but impossible to think creatively about where to go next.
Others tend in the opposite direction, going on flights of fancy so unmoored from the data that the resulting hypotheses are equally useless, because they do not lead to further experiments with a reasonable likelihood of yielding interpretable results.
As an aside, it is absolutely pathetic to see scientists who are emotionally invested in their hypotheses being “proved correct”, instead of treating them as tools whose utility lies in leading to further interesting experiments. This results in the embarrassing spectacle of a laboratory posing an interesting, somewhat speculative hypothesis that points clearly to definitive experiments that could rule it out, but then never performing those experiments out of fear that the hypothesis will, indeed, be ruled out.
Sometimes scientists dance around their hypothesis like this for years, even decades, never doing the definitive experiments. And other people start to talk: “Maybe they really did the experiment, and didn’t like the answer, so they are suppressing it.” It is sad to see scientists get seduced into this kind of pernicious delusion and end up the subject of whispered derisive chatter.
Do not fall in love with your hypotheses. Ruling out your hypothesis with a definitive experiment is good!
(I hope that Janet will weigh in on this, as I’m sure she has some interesting thoughts on the topic. And maybe what I’m saying is demented fucking wackaloonery, in which case you’ll get to see PhysioProf get smacked around!)

9 Responses to ““Correct” Versus “Interesting””

  1. Paul Mohr Says:

    IMHO
    Interesting and lucid, I thought. It is not always possible to have results roll out like an assembly line when dealing with the unknown or that which is to be discovered. People still need to eat, and if publishing an article about things that they suspect could be true pays and helps guide others, I see no problem if it is phrased that way. I always view it that way, whether they say it or not.

  2. That’s a lovely distinction. The reason I’m rewriting a paper now is that my grad advisor objected that the discussion section was too “It will be of interest to examine whether such and such could underlie what we see” and not enough “There are two explanations for such and such. We favor the second one, for these reasons. Here are the predictions that second explanation makes. Someday we can test them,” which is how I’m writing it now.
    The first version is just what you see in your students: being too afraid of being wrong to say what you think the data mean.
    On the topic of being “proved correct,” we once had a wonderful “Address to Young Scientists” talk from a distinguished emeritus faculty member who said, “Any number of experiments can yield results that are consistent with your hypothesis. Try to do experiments that will falsify your hypothesis.”

  3. Dave S. Says:

    Do not fall in love with your hypotheses. Ruling out your hypothesis with a definitive experiment is good!
    I’m not so sure falling in love with a hypothesis is a bad thing; otherwise there wouldn’t be nearly enough dedicated scientists working on any one particular problem. Without some amount of infatuation intertwined with undying dedication on the part of the individual scientist, how else would good (and god-awful) hypotheses become theories (or be cast aside into obscurity)?
    Of course, marrying a hypothesis is a big mistake.

  4. I have a hypothesis: those wedded to their hypotheses have no incentive to prove themselves wrong (other than self-respect) when they can get grant renewals by dancing around the experiments that would answer the question. Go for the jugular: if you are wrong, direct your energies elsewhere. But for some folks, it’s easier to go for year 18 than to write a new and exciting application. Boring to me (and I would guess to you as well), but safe nevertheless. I’d rather be wrong and have to work on an entirely new area or idea.
    For students and postdoc trainees, a mentor’s example of when to cut bait is crucial in their development. Emotional attachment to a hypothesis is safe to the point of self-delusion, but it can be a wasteful act that discourages innovation by oneself or others. Since the tight federal funding climate seems to reward safe science over risky, innovative science, these times only encourage the behavior you deem pathetic.

  5. NeuroStudent Says:

    As a grad student, I’ve had the lovely opportunity to completely shatter my mentor’s pet hypothesis by directly testing it and getting the completely opposite result. Let’s just say I’m glad to have a mentor who, when presented with sufficient data, says “well, that’s what the data says, it must be the way that it is” (although I did need a high number of n’s and had to run the experiment two different ways, and not only was this his pet hypothesis, it’s a large majority of our field’s pet hypothesis), as opposed to “well, your data doesn’t fit with my pet hypothesis, so you must be running the experiments incorrectly,” which is something that I’ve seen over and over again with other PIs and is kind of scary.
    I’m a big fan of directly testing hypotheses instead of dancing around them. So what if you were wrong? Maybe it’s more interesting if you’re wrong. Hopefully, someday when I have my own lab and students, I can be the same kind of mentor that mine has been to me, and my students will throw my pet hypotheses out the window with their experiments.
    Anyway, this is an important topic, and I’m really glad that you blogged on it.

  6. Eric Says:

    This is an interesting topic, especially for me. I just published my first paper, and of course the last figure is a grand hypothesis that mechanistically ties together and predicts how the system is working, based on the data so far. I was afraid when we published that the hypothesis would be wrong, but my boss is of the same opinion laid out above: make sure the data is rock-solid, publish your hypothesis in the discussion, and then move on and perform the experiments that challenge it. He actually doesn’t even call them hypotheses; he says “testable hypothesis” every single time. A hypothesis is only as good as the experiments that it inspires. Luckily, all of the fun hypothesis-testing experiments are holding up my hypothesis for now. While I must admit this is good for the ego, if the hypothesis were wrong, it would get replaced by the next hypothesis that fits the data better.
    Besides, if you knew for sure that the hypothesis was correct, it would be a theory then, wouldn’t it?

  7. CC Says:

    But it is not at all important whether one’s interpretation of the data, in the sense of posing a hypothesis that is consistent with the data, turns out to be correct.
    In theory, sure, but we the lazy would prefer that speculations presented as fact in the title and abstract were, in fact, correct.

  8. TeaHag Says:

    It’s been my understanding since graduate school that all experiments should be designed to rule out the null hypothesis, no? Thus, once you’ve come up with your model, all your subsequent experiments should be designed to rip it apart… not to demonstrate how nifty it is.

  9. I’d like to make 3 comments.
    1.) As a journalist: We really do try, but some of us are just plain stupid when it comes to science. I was sent on a science-related assignment in grad school with a couple of rather science-illiterate journalism students. During the presentation, the researchers mentioned that they were seeking additional funding to bring their research to the next stage. We got back to class, and the prof asked my classmates if they thought the story was newsworthy. Both said no b/c the presenters were “just asking for money.” I said yes b/c the research as presented was promising and important if correct, and asking for money is part of how science works, not a negative reflection on the researchers. (This research later gained backing from the WHO and NIH.) Guess who the prof believed? Not me.
    2.) As a science lover and occasional science writer: I think it’s stupid to be so afraid of being wrong. In the overwhelming majority of cases, you probably will be wrong, but potentially wrong in the right way. Unfortunately, I’ve read far too many peer-reviewed articles where researchers interpret data in illogical ways, ignore obvious gaps in the data, or go waaaay beyond what the data could possibly demonstrate. A little more care in developing an interpretation is definitely called for in many cases, especially where you’re dealing with the hypothesis you love a bit too much.
    3.) Corrections: I try, even in blogging, to correct myself publicly when I err. In my journalism career, I’ve specifically requested that a correction or even an apology be posted as prominently as the original piece, only to see it get hidden somewhere or left out entirely. I can tell you I was quite peeved. My point: it’s not always the journalist. Sometimes, it’s the editor. There’s a whole journalist-editor war that I won’t get into.
