How Not To Write A Scientific Manuscript

November 21, 2008

Friend of the DoucheMonkey blog Sol Rivlin had the following to say in response to other friend of the blog Isis the Scientist’s query to her minions regarding how to handle unexpected or unplanned experimental outcomes when writing them up for publication:

As to the unexpected results. My suggestion for you is to be truthful about your initial intent and expectations and to tell the story as it happened, including the unexpected. In reality, that is exactly what happens to many of us, but too frequently, we are tempted to appear smarter than we really are, pretending that the unexpected outcome was actually very expected and that we knew exactly what would happen long before we did the experiments. Most scientists tend to lie in this way; we know they lie because we have done it ourselves, and yet we continue doing it.

That ranks among the absolute stupidest gibbering dumbfuck advice concerning manuscript preparation I have ever seen or heard.


No one reading scientific literature qua scientific literature gives a flying fuck about the internal mental state of a scientist when she performs an experiment. All anyone cares about in this context is the conceptual relationship between the novel information revealed by the experiment and the existing conceptual landscape. Period.
The mental state of scientists when they perform experiments may be of great interest in other contexts, for example to sociologists or philosophers or to their mothers or something like that. But for scientists operating in a particular scientific arena reading the scientific literature in that arena, the internal state of the scientists performing experiments is nothing but a distraction from what they really want and need to know.
And the idea that it is “lying” to say, “In order to test the hypothesis that blah induces bleh, we generated a transgenic whoozis expressing fuckdribble in the bleezer”, when what really happened was you thought, “I wonder what the fuck would happen if we expressed fuckdribble in the bleezer”, represents as pathetically deficient an understanding of how scientific discourse works as it is possible to have.
Every single fucking scientist ever born knows that most experiments are performed to see what the fuck would happen if you performed some particular manipulation on some particular thing. And every scientist knows that “In order to…, we did…” is a *fiction*. But it is an exceedingly convenient and useful fiction, because it makes it much easier for the reader of a scientific paper to embed everything she is reading into the appropriate conceptual framework as she reads along.
Building suspense is for shitty novels and hackfuck horror movies. No scientist wants to read, “We decided to see what the fuck would happen if we expressed fuckdribble in the bleezer, and MUCH TO OUR SURPRISE WE FOUND (DRUM ROLL, PLEASE)…HOLY FUCKNOLY!! WOULD YOU BELIEVE IT? THE BLEH WENT KABLOOIE! So, then we scratched our asses and drank some Jameson down at the motherfucking bar, and realized that, HOLY FUCKNOLY!! THIS RESULT TESTS THE HYPOTHESIS THAT BLAH INDUCES BLEH!”
It’s a waste of the intended readers’ time and mental effort to write scientific papers that way. How many times do you need to read, “In order to see what would happen, we…” to get the point? Employing a convenient and useful fiction that everyone understands to be a fiction, but that everyone agrees to be both convenient and useful is not “lying”.
And as long as we’re talking about this kind of “lying”, let’s address the issue of the order in which experiments were performed versus the order in which they are presented in a paper. By Sol’s definition, virtually all scientists are “lying” when they write their papers, because the experiments were almost never performed in the order in which they are presented.
Authors might even use phraseology like, “Having established that the flanknozzle is hikerny (see Figure 1), we then flinked the flanknozzle”, when they know damn fucking well that they flinked the flanknozzle months before establishing that the flanknozzle is hikerny. LIARS!!!!!!111111!!1!!11!!1!!!!
Again, this is a convenient and useful fiction that everyone knows is invariably employed. You present results in the order that makes sense conceptually in order to make your readers’ life easier.
For example, when scientists try out a new experimental manipulation, they frequently employ it first in the very complex experimental context that they actually care about in order to see if it is going to yield interesting fruit. If it does, then they go back and fill in various experiments performed in simpler, more reduced, contexts to provide convincing evidence that the new manipulation does, indeed, do what it purports to.
But from the standpoint of the reader, it makes their lives much easier to be first presented with convincing evidence in simpler contexts that a novel experimental manipulation is effective at doing what it purports to, before reading about the novel insights into some biological question that it has revealed in a more complex, yet interesting, experimental context. This is called “crafting a convincing story”, and everyone knows, expects, and desires that the authors of the scientific papers they read do so.

No Responses Yet to “How Not To Write A Scientific Manuscript”


  1. I agree with everything you say, but I have to admit a soft spot for Sol’s intent. I love to sit in the library in the journal stacks with actual, pre-1966 journals and read classic papers in my field and others on which I lecture. Back when reports were really reports, as Sol describes, the literature does indeed give a valuable conceptual view of the approach to a problem. Yes, I realize this is best left for one’s memoirs or the Nobel lecture (have you read any of these? They are remarkable!).
    Just go ahead and call me an old gibbering Sol-cophant.

    Like

  2. Physiogroupie IV Says:

    My most cited paper came about from a negative finding. Generally, negative findings are very hard to publish on their own. As a result, I expanded the scope of the paper and came upon a cool, positive finding. I don’t think it’s necessary to explain that the null hypothesis was true in the former case. It’s kind of irrelevant in my case, actually.

    Like

  3. Becca Says:

    My comment got eaten over there, so anyway…
    You’re right, CPP, at least for what people should be doing most of the time (if for no other reason than meeting readers’ expectations). It makes it vastly more efficient all around. And I’ve not read any papers that are 100% accurate narratives of the experimental process (god- that could make for some awful papers).
    That said, I don’t think there’s anything wrong with the sort of narrative: “Previous work suggested blehs induce blahs. Surprisingly, we found that blahs induce blehs.” Highlighting what is novel in your study and alerting readers to the context your work took place in can be useful.
    I think a lot of us like to hear the thought processes, the story-behind-the-science. So I think it’s good to include a few such tidbits in lectures- surely, stories are such a critical component of the public speaking toolkit that we can’t afford to banish them to the sole domain of Illustrious Nobel Scientists. But even I don’t really want a really great story cluttering up the explanation of the figures presented.

    Like

  4. Coturnix Says:

    This post made me really want to generate a transgenic whoozis expressing fuckdribble in the bleezer. I think that would be a fascinating thing to do.

    Like

  5. S. Rivlin Says:

    Coturnix,
    You have found the right mentor for such a project. He has already written a whole thesis about it in the post you commented on.
    But you would be better off listening to Able Pharmboy! Go to the library and read older scientific research papers that are a pleasure to read.

    Like

  6. Beaker Says:

    Every single fucking scientist ever born knows that most experiments are performed to see what the fuck would happen if you performed some particular manipulation on some particular thing. And every scientist knows that “In order to…, we did…” is a *fiction*.
    Thanks PProf for identifying the elements of fiction in our manuscripts. I agree that this fiction writing actually helps us communicate our science efficiently. We all write our manuscripts so that they re-tell the story to make it seem far more hypothesis-driven than what was reality at the bench. I accept this, and I am comfortable with the white lies.
    On the other hand, I am disgusted with precisely the same thing when it comes to grantsmanship. Does anybody on a study section actually believe a typical Specific Aims page, with the big-ass central hypothesis, broken down into sub-hypotheses? Each aim will purport to test a sub-hypothesis that may consist of crap like: “we hypothesize that our fucking awesome protein is a crucial component of the molecular mechanisms underlying bad, bad disease.” The null hypothesis then becomes that the fucking awesome protein is totally irrelevant to the disease. Furthermore, it follows that the “pitfall” is that we will be unable to figure out whether or not our fucking awesome protein has anything whatsoever to do with bad, bad disease.
    Why do we have to play this game? Is it because we are reassuring each other that we are doing serious science, according to Popper?
    Why is it unacceptable to say: “Problem X is incredibly fucking important to understanding bad, bad disease. We have created a brand new approach to studying problem X (method, conceptual advance, new preliminary data). We will try a bunch of different shit, and when we find something important, we will pursue it and ignore the trivial shit. Our long-term goal is to solve Problem X.”
    Why is this not an accepted approach for a grant application? Is it because no funding agency can justifiably give you money for “messing around in the lab”?
    The hypothesis-driven mentality has gone far beyond its actual power. And yet a great irony arises when one considers microarrays and other high-throughput screens. In those cases, hypothesis-driven thinking is thrown out the window and different rules apply. In fact, a high-throughput approach cannot have a hypothesis, because that would bias the “hits” found in the screen. I guess the hypothesis tested in high-throughput approaches is, “we hypothesize that we will find data that is not noise.”

    Like


  7. I agree wholeheartedly with Beaker about the disconnect between the reality of how science progresses and the demands made by study sections.

    Like


  8. But you would be better off listening to Able Pharmboy! Go to the library and read older scientific research papers that are a pleasure to read.

    Also, we should type our manuscripts on typewriters with ink ribbons, hand draw our figures, and perform our statistics with a slide rule. That way our papers will have an old timey feel. I mean, really, who wants progress anyway?

    Like


  9. I mostly agree, with the exception that it is sometimes useful to follow the train of thought of the experimenter with certain types of surprise (as Becca said). I was recently reading a lovely paper in which the authors said something like, “It was completely fucking weird that this blocker had this effect on these currents, but we thought it suggested the following hypothesis….blah blah.”
    What’s nice about it is that it gives the reader permission to be momentarily confused, too. And since that’s often how I feel, it’s a warm and cozy moment.
    There’s also the aspect that students read these perfect, well-thought-out (apparently!) papers and get discouraged about their own “seeing what happens” work.

    Like

  10. Dan Says:

    It might be worth considering whether it was just that you, the experimenter, didn’t expect to find the result because of some hunch that you had, or whether the field as a whole didn’t expect the result. There are times when the field has an intuition about the way that things work, and this intuition is expressed in review/opinion articles, talks with colleagues, etc., but when someone finally figures out how to do the experiment, a different result is found. When it wasn’t just you, but lots of people, who were wrong, I think you need to tell the reader this background expectation.

    Like

  11. Lou Says:

    Following on from Becca, where such a style would work well is in seminars and talks. I still remember a talk from a few years ago where the speaker did the
    “We tried this….it didn’t work. We then tried that….that didn’t work. So we thought we’d change tack and tried this….which did work.”
    It was a good talk, memorable, and refreshing to know that thought process.

    Like

  12. Nat Says:

    Highlighting the fact that some results were unexpected doesn’t really speak to the mental state of the experimenters at that point. Rather, it suggests how the conceptual framework is wrong, and needs to be adjusted.
    But I often do enjoy reading more about the background of scientific work. To me it’s just interesting, and also helps demonstrate to people how incredibly fun and kick ass being a scientist is, which doesn’t come across to the neophyte or layman when reading a paper. While in school, using textbooks, it was never obvious how passionate and creative being a scientist can be. There’s this hyper-rational image of the scientist that is so divorced from reality.
    Still though, I wouldn’t want that crap in the paper, as it would just muck up the effectiveness of the information content.

    Like

  13. S. Rivlin Says:

    Isis,
    Let’s not forget that the type of shoes and pants the investigator wore during the performance of the experiments was also mentioned.

    Like

  14. cashmoney Says:

    Sol!!!!! FTW!!!11!!!!!!! Hahahahahahahahhahahahhahahaaa!!!

    Like

  15. juniorprof Says:

    I recently needed to go back to some of John Carew Eccles’ original manuscripts for a review I was working on. While I don’t remember any lines about his shoes or pants (although I’m sure they were high waisted) there were certainly descriptions of heading down to the animal facility and picking out a suitable animal for the given experiment, among other completely superfluous descriptions of things we wouldn’t dream of including in a manuscript now. While I share Abel’s, Sol’s and Nat’s nostalgia for these types of manuscripts, they are also quite comical. I have no doubt that in the not so distant future some young bucks will laugh their asses off at all of our manuscripts and our elementary understanding of the biological processes we thought we were studying in excruciating detail.
    Other good examples are Sol Snyder’s older papers on GABA binding in the brain: Na+-dependent and Na+-independent binding sites with different Kds, long before we knew about GABA-A and GABA-B receptors. While these papers were really quite brilliant in setting the stage for further discoveries, they are also quite humorous in hindsight (at least I get a kick out of them).
    Finally, I also agree with beaker (#6).

    Like

  16. Lora Says:

    Hard to say really. Are the totally unexpected results totally unexpected because they fly in the face of the literature? Or are they unexpected because they took everything in a completely tangential direction? I can see the revision of the original hypothesis and pretending you meant that all along in the latter case, but in the former case, you’ve really got a buttload more work to do in order to explain why you are right and they are wrong.
    Or are they unexpected because there wasn’t any literature in the first place, so you have to come up with some imaginative explanation? That’s a whole other can of worms, when you find out the reason no one studied that thing is because it’s such a pain in the ass to study.
    Can think of a few more reasons why one might get unexpected results, wherein simply reporting them by themselves might not be the best route. Such as, when you find out Really Big Name’s previous publications on the subject were a tissue of half-baked lies.

    Like


  17. Juniorprof–I’m reading one of Eccles’ (later) papers, and at one point he refers to a particular result as “disappointing.” Now that right there is a great description. Wish I could get away with it today.

    Like

  18. leigh Says:

    the entire point of a paper is to get the point across, not tell an old timey story. sorry, but my view on this is: please cut to the fucking chase asap so i can know wtf you just did and make my own interpretation of the data and apply to my own work kthx.
    you do the cool shit first to see if it’s worth pursuing at all, then you go back and add the little supporting things to rule out the alternate conclusions and add more support to the coolness of your cool shit. simple.
    when i give talks, i present in a story format, because talks are not strictly for data presentation but contain an element of performance, keeping the attention of your audience. there is the appropriate place for that type of thing, it is not in the scientific journal.

    Like

  19. S. Rivlin Says:

    I find it interesting that DouchePP “publishes” the same crap both on Isis the Scientist’s blog and here. I wonder if he uses a similar practice where his scientific publications are concerned.

    Like

  20. David Marjanović Says:

    I find it interesting that DouchePP “publishes” the same crap both on Isis the Scientist’s blog and here. I wonder if he uses a similar practice where his scientific publications are concerned.

    Completely tangential ad hominem argument (but I repeat myself).

    Like

  21. drdrA Says:

    Beaker, that’s such an excellent point, and a problem in terms of grantsmanship that I’m currently sitting on almost exactly for high-throughput applications- which are inevitably going to become more and more common in this genomic age.
    To take that one step further- you can form a hypothesis about protein x and protein y, which are already known and somewhat studied. But that leaves us with proteins 1-2000 from an organism that are hypothetical or completely unstudied. It’s hard to make a hypothesis about those- but does that mean we should never figure out whether proteins 1-2000 are important, that we should never study them, that they are not worth studying because we can’t make a protein X-protein Y type hypothesis?
    I bang my head against problems like this all day long- and wonder why, if we are going to have the attitude that we are only going to pay for the problems we can study in x-y like detail- why on earth did we bother sequencing all those genomes???

    Like

  22. DrugMonkey Says:

    The finest impact of this chip chip dippity snip crap will be the restoration of the fishing expedition hypothesis generating experiment to respectability.

    Like

  23. drdrA Says:

    HA! I love that ‘hypothesis generating’, I must be too young to know when that verbiage was in vogue last.
    But I’m not kidding around. There are entire fields that work on a small handful of genes- and sometimes these are genes that have no known role in the disease process in question (let’s just take that for the sake of argument). So we get an exquisite amount of fantastically beautiful molecular detail on something that’s NOT IMPORTANT IN THE DISEASE. But- we are prohibited from identifying the factors that really are important (and developing hypotheses about how those work) because this is a ‘fishing expedition’.
    And now someone is going to scream about how you can’t predict where this fantastically beautiful molecular detailed knowledge might lead in the future… bla bla bla… and that’s all fine, and I agree… but when doing those experiments precludes looking at any other genes- I have a problem.
    I’m on a rant, sorry ’bout that.

    Like

  24. S. Rivlin Says:

    DM, drdrA,
    Isn’t that the real problem of scientific research today, where most of it is justified only if we can somehow predict its usefulness in advance?
    We need the money to conduct the research, but the only way to get the money is to show, a priori, the benefit of that research. That’s the influence of the business world and Wall Street on today’s science.
    I can understand why the taxpayer insists on getting her money’s worth, and thus I do not expect the government to fund curiosity-driven projects. The latter is the real role of the research university, which was established for the purpose of discovering and teaching new knowledge. The benefits of knowing the unknown are unpredictable; yet, based on the history of scientific research, we know that we always benefit from new knowledge. While the price of producing new knowledge is unknown, the NIH and the NSF have no choice but to put a price on it, and consequently the taxpayer doesn’t always get the expected new knowledge for the buck.

    Like

  25. Scicurious Says:

    Dr. J and Mrs. H: “I’m reading one of Eccles’ (later) papers, and at one point he refers to a particular result as ‘disappointing.’”
    But see, even there they are using fiction! Because “disappointing” actually means “my advisor yelled at me, and I went to the bar and got shitfaced and cried for days”. “Disappointed” just doesn’t cover it. 🙂
    Beaker: I completely blame Karl Popper for most of the inconsistencies with which we are stuck in modern scientific discourse. It makes me feel better.

    Like

  26. Coturnix Says:

    Back in the day, they had to write like that and they could – no page limits, etc. But today you can write a paper in a proper manner AND give us the anecdotes on your blog so we can see how you really got to it. The paper may present your work as a careful test of a hypothesis, while in the blog post you can say “My reaction to the crappy result was ‘fuck, fuck, fuck’ after which I went and got wasted at the local bar, but next morning I decided to just go along with whatever the result was and rewrite the manuscript as if that’s exactly what I was predicting all along”.

    Like


  27. Sorry to miss out on the progress of this discussion as I was out back all day tanning some hides, chiseling my latest diatribe onto a stone tablet, and mining the coal needed to power the generator that gives electricity to my VAX VT-100. Gotta run now and hunt an antelope for dinner for the family back at the cave.
    Damn kids. And get the hell offa my lawn!

    Like

  28. hm Says:

    High impact journals have limits on the number of words for various sections. For crying out loud, now they have restrictions on figures. Not sure where the historical perspective can fit, other than Supplementary Information…

    Like

  29. S. Rivlin Says:

    And that’s how all the fun of doing science was lost!!!

    Like


  30. And that’s how all the fun of doing science was lost!!!

    Yeah, all the fun of doing science is lost. All scientists are completely miserable and not having any fun at all.
    Sol, you’re a bitter nasty obsessive asshole who feels aggrieved by “science”. Not everyone is like you.

    Like

  31. S. Rivlin Says:

    CPP, you are a tight ass, angry punk without a funny bone in his body. Stop taking yourself so seriously. I can almost see the level of cortisol rising in your blood.
    No one enjoys science more than I do. Science has never aggrieved me, only crooked scientists.

    Like

  32. Eric Says:

    I don’t think science papers should be chronological narratives (which would be hideous to read), but I also don’t think that papers should be terse injections of pure information devoid of a scientist’s personal voice. Something in between would be nice…I’ve come across a few papers, such as Crick’s triplet-code paper, that are just a pleasure to read, sounding like a good conversation, but at the same time very logical, well-organized, and efficient. If we could have more of those, I’d be a much happier grad student.
    Still, it’s not impossible to have a nice paper with a chronological narrative. Take this one, for example: http://www.pnas.org/content/97/9/4730.abstract . Pretty neat paper, but check out that first paragraph of the introduction! Not something you see every day…

    Like

  33. Renee Says:

    I think that if you started out expecting to see one thing, and then saw the opposite, it might be useful to explain that, to show that the conceptual framework that you (and many others, perhaps) were working off of is flawed. This should certainly be addressed. To completely cover that up is, I think, dishonest. But if you had no idea what you were going to see beforehand, well… why didn’t you think about it? Explaining your results after the fact is not only kind of cheating, but can also lead to improper experimental design.

    Like

  34. S. Rivlin Says:

    Renee, you are absolutely correct, but please be forewarned that your comment could bring upon you the wrath of the angry, know-it-all, PhysioProf.

    Like

  35. kiwi Says:

    This argument seems crazy as a snake. For example, if you started out expecting one thing and ended up with another, it does not necessarily mean that the conceptual framework is flawed. This result could be due to any number of factors and hence should be considered in that light. By all means, the conceptual framework might need a tweak, but I wouldn’t jump into that one straight away – so your comment, Renee, about covering that up, seems a bit bizarre.
    Explaining unexplained results after the fact requires thought, and perhaps a leap of intuition that you didn’t have before. Rescheduling that timewise in a paper as one of CPP’s “fictions” can make a lot more sense to a reader than hearing about your weird dreams and agonising thoughts for the six months after the experiment.

    Like

  36. Dr. Feelgood Says:

    Hahaha,
    We are all just like P.T. Barnum, aren’t we? Swapping our stories of how to get people out of the freakshow exhibit by encouraging them to see the “Amazing Egress!”
    We are such deceptive hosers, while being so honest with our data. Well, at least we admit it.
    (Does that make any sense? I’m still drunk from SFN.)
    Dr. F.(T. Barnum)

    Like


  37. It may just be my pissy mood, but both CPP and S Rivlin seem to be angry know-it-alls who are convinced that their way is the correct way and be damned to anyone else.
    They both sound like jerks to me.

    Like

  38. TCO Says:

    I support Becca’s point of view. Given that you have an introduction and a conclusion, sections that describe the prior work and what is novel about this work, it is probably actually HELPFUL to point out the departure from expected results, especially if the rationale for the departure is not known. For one thing, it clarifies to the reader that the experimenter is aware of the expected results, and thus is more likely to have taken the necessary controls and care in experimental method, given a converse result that was still published. As opposed to someone with no framework, just playing.
    I have also seen cases where the “lying” in certain sexy aspects of nanotechnology has actually made first workers tend to jump too quickly to an understanding and has hindered the science in terms of actual understanding of strange phenomena. Plus they are fucking tossers for acting like they know stuff they don’t.

    Like

  39. neurolover Says:

    My worry about the “creative fiction” of post-hoc hypotheses that were tested by your results is that it is circular. If you see a pattern of results that doesn’t support the idea you were testing, and then come up with a hypothesis to explain your results, how do you avoid circularity? Since, of course, there has to be *some* explanation for any result you acquired?
    I think it’s possible that in some fields, what people are really doing is really “hypothesis generating”: that is, they make some measurements, come up with a hypothesis, and then test that hypothesis with further experiments. But in other fields, the post-hoc explanation becomes the end of the story.
    There are also some nice articles from the psychology literature, titled something like “the tyranny of hypothesis testing”, talking about the ways in which framing questions as “hypothesis testing” can skew results.

    Like

  40. S. Rivlin Says:

    Throughout the history of science, hypotheses have mostly failed and fallen when tested with a new experimental design that proved them wrong, and new hypotheses were born. The story of the events that led to the fall of an old hypothesis and the rise of a new one should be, and is, part of the published paper. To tell it any other way (like a eureka moment) is cheating.

    Like

  41. DSKS Says:

    “My worry about the “creative fiction” of post-hoc hypotheses that were tested by your results is that it is circular.”
    Bingo.
    “I think it’s possible that in some fields, what people are really doing is really “hypothesis generating”: That is, they make some measurements, come up with a hypothesis, and then test that hypothesis with further experiments.”
    No two ways about it: to neglect to do this and attempt to cast the data of the original experiments in the light of a hypothesis generated post hoc, with no further experiments to substantiate it, is intellectual dishonesty. You cannot apply statistics meaningfully to data interpreted in this manner, and thus the validity of the conclusions is completely undermined.

    Like
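DSKS’s statistical point can be illustrated with a small simulation (my own sketch, not from the thread; the sample sizes and the twenty-variable setup are arbitrary assumptions): run many experiments in which every measurement is pure noise, always “write up” the most striking measurement as if it had been the pre-specified hypothesis, and the nominal 5% test rejects far more often than 5%.

```python
import math
import random

random.seed(0)

N_EXPERIMENTS = 2000   # simulated "papers"
N_MEASURES = 20        # candidate observations per experiment
N_SAMPLES = 30         # data points per measurement
Z_CRIT = 1.96          # nominal two-sided 5% cutoff

false_positives = 0
for _ in range(N_EXPERIMENTS):
    # Every measurement is pure noise: the null is true everywhere.
    z_scores = []
    for _ in range(N_MEASURES):
        data = [random.gauss(0.0, 1.0) for _ in range(N_SAMPLES)]
        mean = sum(data) / N_SAMPLES
        # Under the null, mean * sqrt(n) is standard normal.
        z_scores.append(abs(mean) * math.sqrt(N_SAMPLES))
    # "Post-hoc hypothesis": report the most striking result
    # as if it had been the pre-specified test.
    if max(z_scores) > Z_CRIT:
        false_positives += 1

rate = false_positives / N_EXPERIMENTS
print(f"nominal alpha: 0.05, actual false-positive rate: {rate:.2f}")
```

Each individual test rejects about 5% of the time, but the selected “best” of twenty rejects at roughly 1 - 0.95**20, around 64%, which is why a hypothesis generated post hoc needs fresh data before its statistics mean anything.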

  42. Tualha Says:

    Goddammit, PP! I’ve been working on fuckdribble expression in transgenic bleezers for months now, and here you go and scoop me! Damn your eyes!

    Like


  43. As a scientist with refereed publications, and a fiction author, and an editor, I believe that the bottom line is: a publication is easier to read if it has a narrative structure. The human being is a story-telling animal, and the story that we tell is our own.
    Re: Hierarchy and Emergence
    I suppose one might argue that several steps in your sequence involve taking a collection of weakly interacting homogeneous entities, and that is why mathematics continues to work adequately, where the organism as a combination of disparate tissues and organs is less mathematisable.
    Even with very mathematisable entities there’s a tendency to use a narrative description. E.g., for stars:
    After a star has formed, it generates energy at the hot, dense core region through the nuclear fusion of hydrogen atoms into helium. During this stage of the star’s lifetime, it is located along the main sequence at a position determined primarily by its mass, but also based upon its chemical composition and other factors. In general, the more massive the star the shorter its lifespan on the main sequence. After the hydrogen fuel at the core has been consumed, the star evolves away from the main sequence.
    In a paper on modelling in Hadron physics, Stephan Hartmann writes
    Working in various physics departments for a couple of years, I had the chance to attend several PhD examinations. Usually, after the candidate derived a wanted result formally on the blackboard, one of the members of the committee would stand up and ask: “But what does it mean? How can we understand that x is so large, that y does not contribute, or that z happens at all?” Students who are not able to tell a “handwaving” story in this situation are not considered to be good physicists.
    Judging from my experience, this situation is not only typical for the more phenomenological branches of physics (such as nuclear physics) but also for the highly abstract segments of mathematical physics (such as conformal field theory), though the expected story may be quite different.
    In this paper, I want to show that stories of this kind are not only important when it comes to finding out if some examination candidate “really understands” what he calculated. Telling a plausible story is also an often used strategy to legitimate a proposed model.
    If I am right about this, empirical adequacy and logical consistency are not the only criteria of model-acceptance. A model may also be provisionally entertained (to use a now popular term) when the story that goes with it is a good one. But what criteria do we have to assess the quality of a story? How do scientists use the method of storytelling to convince their fellow scientists of the goodness of their model? Is the story equally important for all models or do some models need a stronger story than others? These are some of the questions that I address in my paper. In doing so, I draw on material from a case-study in hadron physics.
    He concludes:
    We will now make more precise what a story is. A story is a narrative told around the formalism of the model. It is neither a deductive consequence of the model nor of the underlying theory. It is, however, inspired by the underlying theory (if there is one). This is because the story takes advantage of the vocabulary of the theory (such as ‘gluon’) and refers to some of its features (such as its complicated vacuum structure). Using more general terms, the story fits the model in a larger framework (a ‘world picture’) in a non-deductive way. A story is, therefore, an integral part of a model; it complements the formalism. To put it in a slogan: a model is an (interpreted) formalism + a story.
    I think this chimes well with Polanyi’s point that all forms of knowing rely on unformalisable components. In fact, he claimed all forms of knowledge rely on what is tacit, or inarticulable. We might say that stories can take us further in the effort to articulate as much as possible than formalisms alone.
    Posted by: David Corfield on July 21, 2008 9:44 AM
    Emperor’s Blog Threads of Ideas; Re: Hierarchy and Emergence
    I am convinced that what David Corfield quotes and interprets here is VERY important.
    Stories can and do take us further than formalism alone — and Intuitionistic Logic is only one approach to explicating it.
    Part of the problem, as I see it, with (for instance) String Theory, or Cosmology via Inflation and Dark Matter and Dark Energy, is that we have no good way but increasingly slow and expensive experiment to choose between a good model with a bad story and a bad model with a good story.
    Science Fiction branches off from Science in valorizing Story over Empirical verification.
    Poetry goes further than Story, but that’s another thread.
    Speaking of which: the brilliant novel Galatea 2.2 by Richard Powers [1995] has the viewpoint character (also named Richard Powers), a best-selling novelist, intermediating between the English Literature department and mad scientists in the “enormous new Center for the Study of Advanced Sciences. My official title was Visitor. Unofficially, I was the token humanist.”
    The eponymous connectionist AI is given the story of the Emperor’s New Clothes.
    “I … asked, ‘What are the new clothes made of?’”
    “After a long time, she answered, ‘The clothes are made of threads of ideas.’”
    [p. 220]
    Posted by: Jonathan Vos Post on July 21, 2008 6:53 PM


  44. S. Rivlin Says:

    There was my advice to Isis:
    As to the unexpected results. My suggestion for you is to be truthful about your initial intent and expectations and to tell the story as it happened, including the unexpected. In reality, that is exactly what happens to many of us, but too frequently, we are tempted to appear smarter than we really are, pretending that the unexpected outcome was actually very expected and that we knew exactly what would happen long before we did the experiments. Most scientists tend to lie in this way; we know they lie because we have done it ourselves, and yet we continue doing it.
    There was PP’s response to this advice:
    That ranks among the absolute stupidest gibbering dumbfuck advice concerning manuscript preparation I have ever seen or heard.
    And then there was Jonathan Vos Post’s response:
    As a scientist with refereed publications, and a fiction author, and an editor, I believe that the bottom line is: a publication is easier to read if it has a narrative structure. The human being is a story-telling animal, and the story that we tell is our own.
    There’s the beast and then there’s the beauty 😉


  45. ace Says:

    Sorry I’m behind on this discussion, but I have “gotten away with it” a few times (in J. Neurosci-level impact journals).
    This was my comment back at the original thread – should have been here:
    Wow PhysioProf, you have just called {insert all the derogatory terms in your post} my postdoc supervisor who advised I do just as Sol suggested about unexpected findings. For context, this person is the director of {insert prestigious neuroscience institute}, extremely well-funded and most important, a great scientist.
    All I will say about the outcome was that it was positive and it was the best thing we could have done with this data.
    I have also received e-mail about that paper and others from students, who thanked me for providing some picture of the mental state of the scientist…
    And no, I don’t write 40-page-long anecdotal manuscripts. It is still possible within the pages of a normal-length neuroscience article to explain the initial reasoning and how it changed in the light of data; it *can* be a good way to present results.
    I really learn from and appreciate this blog but please no need for such black/white and angry views on things when clearly things are more grey…
    Sometimes new shit just comes to light man…



  46. I really learn from and appreciate this blog but please no need for such black/white and angry views on things when clearly things are more grey…

    For due consideration by the DrugMonkey editorial staff, please put all suggestions for how we can improve our blogging in the suggestion box. It’s right over…Wait! Where’s the suggestion box?
    Oh, right. We don’t have a suggestion box, because we don’t give a flying fuck whether you think we’re blogging properly!
    HAHAHAHAHAHAHAHAHAHA!!!


  47. S. Rivlin Says:

    Ace,
    I can count at least two huge mistakes in your comment that are responsible for CPP’s sweet response to it:
    1. You had the chutzpah to disagree with CPP’s opinion=knowledge;
    2. You opined not so nicely on his blogging style.
    Both are absolutely forbidden and could activate the censorship practiced on this blog. Next time be more careful and mix your criticism with some ass-kissing compliments.



  48. If you’re really getting the urge to ass kiss, come over to Dr. Isis’s blog because I am actually kind of in to that kind of thing….



  49. Dr. Philip Vos Fellman [Full Professor at Southern New Hampshire University] emailed me to add:
    “The narrative approach was very popular at Yale. About 50/50 at Cornell, but where at Cornell it was not obligatory it was understood that anyone who could do it belonged with the really smart folks. Yale comprehensive exams particularly stressed the ability to narratively interpret (in a conceptually correct fashion) scientific results.”


  50. pinus Says:

    I am sure we can all agree that there is a WORLD of difference between comprehensive (qualifying) exams and manuscripts.


  51. S. Rivlin Says:

    Isis,
    I am not sure whether or not Ace is into ass-kissing. However, maybe if he would visit your blog and see your beautiful legs with those gorgeous pumps on your feet, he would jump at the opportunity to do it.



  52. I am sure we can all agree that there is a WORLD of difference between comprehensive (qualifying) exams and manuscripts.

    Not only that, but I am sure we can also agree that there is a huge difference between science and international relations/finance, which is what the quoted Vos Fellman dude trained in at Yale and Cornell.


  53. S. Rivlin Says:

    Here are some quotes from a scientific article:
    From the abstract:
    What chiefly distinguishes cerebral cortex from other parts of the central nervous system is the great diversity of its cell types and interconnexions. It would be astonishing if such a structure did not profoundly modify the response patterns of fibres coming into it.

    From the Introduction:
    It has become increasingly apparent to us that cortical cells differ in the complexity of their receptive fields. The great majority of fields seem to fall naturally into two groups, which we have termed ‘simple’ and ‘complex’. Although the fields to be described represent the commonest subtypes of these groups, new varieties are continually appearing, and it is unlikely that the ones we have listed give anything like a complete picture of the striate cortex.

    And then from the Results and Discussion section:
    At first glance it may seem astonishing that the complexity of third-order neurones in the frog’s visual system should be equalled only by that of sixth-order neurones in the geniculo-cortical pathway of the cat. Yet this is less surprising if one notes the great anatomical differences in the two animals, especially the lack, in the frog, of any cortex or dorsal lateral geniculate body. There is undoubtedly a parallel difference in the use each animal makes of its visual system: the frog’s visual apparatus is presumably specialized to recognize a limited number of stereotyped patterns or situations, compared with the high acuity and versatility found in the cat. Probably it is not so unreasonable to find that in the cat the specialization of cells for complex operations is postponed to a higher level, and that when it does occur, it is carried out by a vast number of cells, and in great detail. Perhaps even more surprising, in view of what seem to be profound physiological differences, is the superficial anatomical similarity of retinas in the cat and the frog. It is possible that with Golgi methods a comparison of the connexions between cells in the two animals may help us in understanding the physiology of both structures.


