F1000Research will be waiving the publication fee for negative-result manuscripts up through the end of August.

If you have negative results in your lab notebooks, this is the time to write them up! Like all journals, we of course publish traditional full-length research papers but, in addition, we accept short single-observation articles, data articles (i.e. a dataset plus protocol), and negative- and null-result submissions.

For negative and null results, it is especially important to ensure that the outcome is a genuine finding generated by a well-executed experiment, and not simply the result of poorly conducted work. We have been talking to our Editorial Board about how to try to avoid the publication of the latter type of result and will be addressing this topic, and asking for your input, in a further post in the next few days.

The follow-up post requesting comment is here.

This is a great idea, and the original post nails down why.

This is not only a disappointment for the researchers who conducted the work, it’s also damaging to the overall scientific record. This so-called “publication bias” toward positive results makes it appear as though the experiments with negative or null results never happened.

Sometimes the unpublished experiments are obvious next steps in elucidating a particular biological mechanism, making it likely that other researchers will try the same thing, not realizing that someone else already did the work. This is a waste of time and money.

On other occasions, the positive results that are published are the exception: they could have been specific to a narrow set of conditions, but if all the experiments that didn’t work are not shown, these exceptional cases now look like the only possible result. This is especially damaging when it comes to drug development and medical research, where treatments may be developed based on an incomplete understanding of research results.

The waste of time and money cannot be emphasized enough, especially in these tight funding times. Why on earth should we tolerate any duplication of effort that is made necessary simply by the culture of not publicizing results that are not deemed sexy enough? This is the information age, people!

One example from my field is the self-administration of delta9-tetrahydrocannabinol (THC) by the common laboratory species used for self-administration studies of other drugs of abuse. Papers by Goldberg and colleagues (Tanda et al., 2000; Justinova et al., 2003) showed that squirrel monkeys will self-administer THC intravenously, which was big news. It was the first relatively clear demonstration in lab animals for a substance we know humans readily self-administer. As the Goldberg group related in their 2005 review article, the literature stretching back to the 1970s, when the self-administration technique was being applied to numerous drugs, contains no clear evidence that rodents will self-administer THC i.v.

Over the last three decades, many attempts to demonstrate intravenous self-administration of THC or of synthetic cannabinoid CB1 receptor agonists by experimental animals were relatively unsuccessful (Pickens et al., 1973; Kaymakcalan, 1973; Harris et al., 1974; Carney et al., 1977; van Ree et al., 1978; Mansbach et al., 1994) (Table 1). None of these studies clearly demonstrated persistent, dose-related, self-administration behavior maintained by THC or synthetic cannabinoids, which would be susceptible to vehicle extinction and subsequent reinstatement in the absence of unusual ‘‘foreign’’ conditions.

The thing is that rats “wouldn’t” self-administer nicotine either. Nor alcohol. That is, until people came up with the right conditions to create a useful model. In the case of ethanol, it was helpful either to force them to become dependent first (via forced liquid diets adulterated with ethanol, or ethanol inhalation chambers) or to slowly train them up on cocktails (the flavorant-fade procedure). In the case of nicotine, the per-infusion dose was all-critical, and it helped to provide intermittent access, e.g., four days on, three days off. Interestingly, while making rats dependent on nicotine using subcutaneous osmotic pumps didn’t work very well (as it does for heroin), a recent study suggests that forced inhalation-based dependence on nicotine results in robust intravenous self-administration.

For many drugs of abuse, subtle factors can make a difference in the rodent model. Strain, sex, presence of food restriction, exact age of the animals, circadian factors, per-infusion dose, route of administration, duration of access, scheduling of access… the list goes on and on. A fair read of the literature suggests that with cocaine or heroin, many factors have only quantitative effects. You can move the means around, even to the p<0.05 level, but hey, it’s cocaine or heroin! The animals will still exhibit clear evidence that they like the drug.
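
To put that list in perspective, here is a minimal sketch of how quickly the conditions multiply. Every factor level below is a hypothetical illustration, not a value drawn from any published protocol:

```python
from itertools import product

# Hypothetical factor levels for a rodent THC self-administration screen.
# All values are illustrative placeholders, not from any actual study.
factors = {
    "strain": ["Sprague-Dawley", "Long-Evans", "Wistar"],
    "sex": ["male", "female"],
    "food_restriction": [True, False],
    "per_infusion_dose": ["low", "medium", "high", "very high"],
    "access_schedule": ["1 h/day", "6 h/day", "4 days on / 3 off"],
}

# Cartesian product of all factor levels = number of distinct conditions.
conditions = list(product(*factors.values()))
print(len(conditions), "conditions")       # 3 * 2 * 2 * 4 * 3 = 144
print(len(conditions) * 8, "rats at n=8")  # 1,152 animals, before even
                                           # touching age, route, or
                                           # circadian timing
```

At n=8 per group, that toy grid already demands over a thousand animals, which is exactly why nobody runs the full screen and why most of the failures stay buried in lab notebooks.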

When it comes to other drugs, maybe it is a little trickier. The balance between pleasurable and aversive effects may be a fine one (ever tried buccal nicotine delivery via chew or dip? huh?). The route of administration may be much more critical. Etc.

So the curious person might ask, how much has been tried? How many curious grad students or even postdocs have “just tried it” for a few months or a year? How many have done the most obvious manipulations and failed? How many have been told to give it up as a bad lot by older and wiser PIs (who tried to get THC self-administration going themselves back 20 years ago)?

I’m here to tell you that it has been attempted a lot more than has been published. Because the lab-lore type of advice keeps rolling.

It is really hard, however, to get a comprehensive look at what has been tried and has led to failure. What was the quality of those attempts? N=8 and out? Or did some poor sucker run multiple groups with different infusion doses? Across the past thirty years, how many of the obvious tweaks have been unsuccessful?

Who cares, right? Well, my read is that there are some questions that keep coming around, sometimes with increased urgency. The current era of medical marijuana legalization, and the tip-toeing into full legalization, means that we’re under some additional pressure to have scientific models. The explosion of full-agonist cannabimimetic products (K2, Spice, Spike, etc., containing JWH-018 at first and now a diversity of compounds) likewise rekindles interest. Proposals that higher-THC marijuana strains increase dependence and abuse could stand some controlled testing… if only we had better models.

Well, this is but one example. I have others from the subfields of science closest to my own interests. I think it likely that you, Dear Reader, if you are a scientist, can come up with examples from your own fields where the ready availability of all the failed studies would be useful.

Intrepid reporter @eperlste filed a dispatch from the front lines of the OpenScience, CrowdFund War.

I’ve reached out to several @qb3 incubator biotech startups to learn more about leasing lab space. $900/bench/month is a pretty penny!

That comes to $10,800 per year for the bench space alone. One bench. He didn’t elaborate, so it is hard to know what is included, but I think we can safely assume that normal costs go up from there. Freezer space, hourly use of shared big-ticket equipment, etc. Vivarium fees to maintain mouse lines won’t come cheaply. Waste disposal.
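
For the back-of-the-envelope math, a minimal sketch; the bench rate is the only figure from the tweet, and every other line item is a hypothetical placeholder:

```python
# Annual cost sketch for one crowd-funded bench.
# Only BENCH_PER_MONTH comes from @eperlste's tweet; the other
# line items are hypothetical placeholders for illustration.
BENCH_PER_MONTH = 900  # $/bench/month at the qb3 incubator, per the tweet

hypothetical_extras_per_month = {
    "freezer space": 150,
    "shared equipment time": 300,
    "vivarium per diems": 400,
    "waste disposal": 100,
}

bench_per_year = BENCH_PER_MONTH * 12                          # $10,800
extras_per_year = sum(hypothetical_extras_per_month.values()) * 12

print(f"Bench alone: ${bench_per_year:,}/year")
print(f"With extras: ${bench_per_year + extras_per_year:,}/year")  # $22,200
```

Even with made-up extras, the point stands: the bench is probably the cheap part.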

Just another data point for you in your efforts to assess what can reasonably be accomplished for a given threshold of crowd-fund science support money and in determining where your Indirect Cost dollars for a traditional grant go.

Jane Goodall, Plagiarist

March 27, 2013

From the WaPo article:

Jane Goodall, the primatologist celebrated for her meticulous studies of chimps in the wild, is releasing a book next month on the plant world that contains at least a dozen passages borrowed without attribution, or footnotes, from a variety of Web sites.

Looks pretty bad.

This bit from one Michael Moynihan at The Daily Beast raises the more interesting issues:

No one wants to criticize Jane Goodall—Dame Goodall—the soft-spoken, white-haired doyenne of primatology. She cares deeply about animals and the health of the planet. How could one object to that?

Because it leads her to oppose animal research using misrepresentation and lies? That’s one reason why one might object.

You see, everyone is willing to forgive Jane Goodall. When it was revealed last week in The Washington Post that Goodall’s latest book, Seeds of Hope, a fluffy treatise on plant life, contained passages that were “borrowed” from other authors, the reaction was surprisingly muted.

It always starts out that way for a beloved writer. We’ll just have to see how things progress. Going by recent events, it will take more guns a’smokin’ in her prior works to start up a real hue and cry. At the moment, her thin mea culpa will very likely be sufficient.

A Jane Goodall Institute spokesman told The Guardian that the whole episode was being “blown out of proportion” and that Goodall was “heavily involved” in the book bearing her name and does “a vast amount of her own writing.” In a statement, Goodall said that the copying was “unintentional,” despite the large amount of “borrowing” she engaged in.

Moynihan continues on to catalog additional suspicious passages. I think some of them probably need a skeptical eye. For example, I am quite willing to believe a source might give the exact same pithy line about a particular issue to a number of interviewers. But this caught my eye:

Describing a study of genetically modified corn, Goodall writes: “A Cornell University study showed adverse effects of transgenic pollen (from Bt corn) on monarch butterflies: their caterpillars reared on milkweed leaves dusted with Bt corn pollen ate less, grew more slowly, and suffered higher mortality.”

A report from Navdaya.org puts it this way: “A 1999 Nature study showed adverse effects of transgenic pollen (from Bt corn) on monarch butterflies: butterflies reared on milkweed leaves dusted with bt corn pollen ate less, grew more slowly, and suffered higher mortality.” (Nor does Goodall mention a large number of follow-up studies, which the Pew Charitable Trust describes as showing the risk of GM corn to butterflies as “fairly small, primarily because the larvae are exposed only to low levels of the corn’s pollen in the real-world conditions of the field.”)

And here is the real problem. When someone with a public reputation built on what people think of as science weighs in on other matters of science, they enjoy a lot of trust. Goodall certainly has this. So when such a person misuses that trust by misrepresenting the science to further their own agenda… it’s a larger hurdle for the forces of science and rational analysis to overcome. Moynihan is all over this part as well:

One of the more troubling aspects of Seeds of Hope is Goodall’s embrace of dubious science on genetically modified organisms (GMO). On the website of the Jane Goodall Foundation, readers are told—correctly—that “there is scientific consensus” that climate change is being driven by human activity. But Goodall has little time for scientific consensus on the issue of GMO crops, dedicating the book to those who “dare speak out” against scientific consensus. Indeed, her chapter on the subject is riddled with unsupportable claims backed by dubious studies.

So in some senses the plagiarism is just emblematic of un-serious thinking on the part of Jane Goodall. The lack of attribution is going to be sloughed off with an apology and a re-edit of the book, undoubtedly. We should not let the poor scientific thinking go unchallenged though, just to raise a mob against plagiarism. The abuse of scientific consensus is a far worse transgression.

They sure do get huffy when they themselves are the ones being subjected to open peer review.

Reputable citizen-journalist Comradde PhysioProffe has been investigating the doings of a citizen science project, uBiome. Melissa of The Boundary Layer blog has nicely explicated the concerns about citizen science that uses human subjects.

And this brings me to what I believe to be the potentially dubious ethics of this citizen science project. One of the first questions I ask when I see any scientific project involving collecting data from humans is, “What institutional review board (IRB) is monitoring this project?” An IRB is a group that is specifically charged with protecting the rights of human research participants. The legal framework that dictates the necessary use of an IRB for any project receiving federal funding or affiliated with an investigational new drug application stems from the major abuses perpetrated by Nazi physicians during World War II and by scientists and physicians affiliated with the Tuskegee experiments. The work that I have conducted while affiliated with universities and with pharmaceutical companies has all been overseen by an IRB. I will certainly concede to all of you that the IRB process is not perfect, but I do believe that it is a necessary and largely beneficial process.

My immediate thought was about those citizen-scientist, crowd-funded projects that might happen to want to work with vertebrate animals.

I wonder how this would be received:

“We’ve given extensive thought to our use of stray cats for invasive electrophysiology experiments in our crowd funded garage startup neuroscience lab. We even thought really hard about IACUC approvals and look forward to an open dialog as we move forward with our recordings. Luckily, the cats supply consent when they enter the garage in search of the can of tuna we open every morning at 6am.”

Anyway, in citizen-journalist PhysioProffe’s investigations he has linked up with an amazing citizen-IRB-enthusiast. A sample from the latter’s recent guest post on the former’s blog blogge:

Then in 1972, a scandal erupted over the Tuskegee syphilis experiment. This study, started in 1932 by the US Public Health Service, recruited 600 poor African-American tenant farmers in Macon County, Alabama: 201 of them were healthy and 399 had syphilis, which at the time was incurable. The purpose of the study was to try out treatments on what even the US government admitted to be a powerless, desperate demographic. Neither the men nor their partners were told that they had a terminal STD; instead, the sick men were told they had “bad blood” — a folk term with no basis in science — and that they would get free medical care for themselves and their families, plus burial insurance (i.e., a grave plot, casket and funeral), for helping to find a cure.

When penicillin was discovered, and found in 1947 to be a total cure for syphilis, the focus of the study changed from trying to find a cure to documenting the progress of the disease from its early stages through termination. The men and their partners were not given penicillin, as that would interfere with the new purpose: instead, the government watched them die a slow, horrific death as they developed tumors and the spirochete destroyed their brains and central nervous system. Those who wanted out of the study, or who had heard of this new miracle drug and wanted it, were told that dropping out meant paying back the cost of decades of medical care, a sum that was far beyond anything a sharecropper could come up with.

CDC: U.S. Public Health Service Syphilis Study at Tuskegee
NPR: Remembering Tuskegee
PubMed: Syphilitic Gumma

NOT-OD-13-039 was just published, detailing the many data-faking offenses of one Bryan Doreian. There are 7 falsifications listed, spanning a number of different techniques but mostly involving falsely describing the number of samples/repetitions that were performed (4 charges) and altering the numeric values obtained to reach a desired result (3 charges). The scientific works affected included:

Doreian, B.W. “Molecular Regulation of the Exocytic Mode in Adrenal Chromaffin Cells.” Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, August 2009; hereafter referred to as the “Dissertation.”

Doreian, B.W., Fulop, T.G., Meklemburg, R.L., Smith, C.B. “Cortical F-actin, the exocytic mode, and neuropeptide release in mouse chromaffin cells is regulated by myristoylated alanine-rich C-kinase substrate and myosin II.” Mol Biol Cell. 20(13):3142-54, 2009 Jul; hereafter referred to as the “Mol Biol Cell paper.”

Doreian, B.W., Rosenjack, J., Galle, P.S., Hansen, M.B., Cathcart, M.K., Silverstein, R.L., McCormick, T.S., Cooper, K.D., Lu, K.Q. “Hyper-inflammation and tissue destruction mediated by PPAR-γ activation of macrophages in IL-6 deficiency.” Manuscript prepared for submission to Nature Medicine; hereafter referred to as the “Nature Medicine manuscript.”

The ORI notice indicates that Doreian will request that the paper be retracted.

There were a couple of interesting points about this case. First, Doreian has been found to have falsified information in his dissertation, i.e., the body of work that makes up the major justification for awarding him a PhD. From the charge list, it appears that the first 4 items were included in both the Mol Biol Cell paper and his Dissertation. I will be very interested to see whether Case Western Reserve University decides to revoke his doctorate. I tend to think that this is the right thing to do. If it were my Department, this kind of thing would make me highly motivated to seek a revocation.

Second, this dissertation was apparently given an award by the Case Western Reserve University School of Medicine:

The Doctoral Excellence Award in Biomedical Sciences is established to recognize exceptional research and scholarship in PhD programs at the School of Medicine. Nominees’ work should represent highly original work that is an unusually significant contribution to the field. A maximum of one student per PhD program will be selected, but a program might not have a student selected in a particular year. The Graduate Program Directors chosen by the Office of Graduate Education will review the nominations and select recipients of each Award.
Eligibility

Open to graduating PhD students in Biochemistry, Bioethics, Biomedical Engineering, Epidemiology and Biostatistics, Genetics, Molecular Medicine, Neurosciences, Nutrition, Pathology, Pharmacology, Physiology and Biophysics, and Systems Bio and Bioinformatics.

This sidebar indicates the 2010 winners are:

Biochemistry: Matthew Lalonde
Biomedical Engineering: Jeffrey Beamish
Epidemiology and Biostatistics: Johnie Rose
Neurosciences: Phillip Larimer
Nutrition: Charlie Huang
Pathology: Joshua Rosenblum
Pharmacology: Philip Kiser
Physiology and Biophysics: Bryan Doreian

Now obviously, with such an award it is not a given that Mr. Doreian’s data faking prevented another deserving individual from gaining this recognition and CV item. It may be that there were no suitable alternatives from his Department that year; certainly his Department did not get one in 2011. It may also be the case that his apparent excellence had no impact on the selection of folks from other Departments… or maybe he did set a certain level that prevented others from gaining an award that year. Hard to say. This is unlike the zero-sum nature of the NIH Grant game, in which it is overwhelmingly the case that if a faker gets an award, this prevents another award being made to the next grant on the list.

But still, this has the potential for the same problem of only discovering the fakes post hoc. The damage to the honest scientist has already been done. There is another doctoral student who suffered at the hands of this fellow’s cheating. This is even before we get to the more amorphous effect of “raising the bar” for student performance in the department.

Now fear not, it does appear that this scientific fraudster has left science.

Interestingly, he appears to be engaging in a little bit of that Web-presence massaging we discussed in the case of alcohol-research fraudster Michael Miller, Ph.D., late of SUNY Upstate. This new data-faking fraudster, Bryan Doreian, has set up a “brandyourself” page.

“Our goal is to make it as easy as possible to help anyone improve their own search results and online reputation.”

Why should Mr. Doreian need such a thing? Because he’s pursuing a new career in tutoring for patent bar exams. Hilariously, it has this tagline:

My name is Bryan and I am responsible for the operations, management and oversight of all projects here at WYSEBRIDGE. Apart from that some people say I am pretty good at data analysis and organization.

This echoes something on the “brandyourself” page:

Bryan has spent years in bio- and medical- research, sharpening his knack for data analysis and analytical abilities while obtaining a PhD in Biophysics.

Well, the ORI “says” that he is pretty good at, and/or has sharpened his knack for, faking data analysis. So I wonder who those “some people” might be at this point? His parents?

His “About” page also says:

Doctoral Studies

In 2005, I moved to Cleveland, OH to begin my doctoral studies in Cellular and Molecular Biophysics. As typical for a doctoral student, many hours were spent studying, investigating, pondering, researching, the outer fringes of information in order to attempt to make sense of what was being observed. 5 years later, I moved further on into medical research. After 2+ years and the conclusion of that phase of research, I turned my sights onto the Patent Bar Exam.

At this point you probably just want to take this down, my friend. A little free advice: you don’t want people coming to your new business looking into the sordid history of your scientific career as a fraudster, do you?

From an ASM Forum bit by Casadevall and Fang:

An example of a rejected descriptive manuscript would be a survey of changes in gene expression or cytokine production under a given condition. These manuscripts usually fare poorly in the review process and are assigned low priority on the grounds that they are merely descriptive; some journals categorically reject such manuscripts (B. Bassler, S. Bell, A. Cowman, B. Goldman, D. Holden, V. Miller, T. Pugsley, and B. Simons, Mol. Microbiol. 52: 311–312, 2004). Although survey studies may have some value, their value is greatly enhanced when the data lead to a hypothesis-driven experiment. For example, consider a cytokine expression study in which an increase in a specific inflammatory mediator is inferred to be important because its expression changes during infection. Such an inference cannot be made on correlation alone, since correlation does not necessarily imply a causal relationship. The study might be labeled “descriptive” and assigned low priority. On the other hand, imagine the same study in which the investigators use the initial data to perform a specific experiment to establish that blocking the cytokine has a certain effect while increasing expression of the cytokine has the opposite effect. By manipulating the system, the investigators transform their study from merely descriptive to hypothesis driven. Hence, the problem is not that the study is descriptive per se but rather that there is a preference for studies that provide novel mechanistic insights.

But how do you choose to block the cytokine? Pharmacologically? With gene manipulations? Which cells are generating those cytokines and how do you know that? Huh? Are there other players that regulate the cytokine expression? Wait, have you done the structure of the cytokine interacting with its target?

The point is that there is always some other experiment that really, truly explains the “mechanism”. Always.

Suppose some species of laboratory animal (or humans!) are differentially affected by the infection and we happen to know something about differences in that “mediator” between species. Is this getting at “mechanism” or merely descriptive? How about if we modify the relevant infectious microbe? Are we testing other mechanisms of action…or just further describing the phenomenon?

This is why people who natter on with great confidence that they are the arbiters of what is “merely descriptive” and what is “mechanistic” are full of stuff and nonsense. And why they are the very idiots who compliment the Emperor on his fine new Nature publication clothes.

They need to be sent to remedial philosophy of science coursework.

The authors end with:

Descriptive observations play a vital role in scientific progress, particularly during the initial explorations made possible by technological breakthroughs. At its best, descriptive research can illuminate novel phenomena or give rise to novel hypotheses that can in turn be examined by hypothesis-driven research. However, descriptive research by itself is seldom conclusive. Thus, descriptive and hypothesis-driven research should be seen as complementary and iterative (D. B. Kell and S. G. Oliver, Bioessays 26:99–105, 2004). Observation, description, and the formulation and testing of novel hypotheses are all essential to scientific progress. The value of combining these elements is almost indescribable.

They almost get it. I completely agree with the “complementary and iterative” part, as this is the very essence of the “on the shoulders of giants” aspect of scientific advance. However, what they are implying here is that the combining of elements has to happen within the same paper, certainly for the journal Infection and Immunity. This is where they go badly wrong.