A query came in through the email box:

Do you use ELNs in your lab? Is that something that you think would make a useful blog post? I haven’t found much elsewhere in the blogosphere about ELNs. Maybe you will find this to be a shining example of why you have stuck with paper and pen.

I don’t use one so I’m turning this over to you folks. Any recommendations for your fellow Reader?

The Legislative Mandates have been issued for FY 2014.

The intent of this Notice is to provide information on the following statutory provisions that limit the use of funds on NIH grant, cooperative agreement, and contract awards for FY2014.

It contains the usual familiar stuff; of pointed interest are the prohibition against using grant funds to promote the legalization of Schedule I drugs and the one that prohibits any lobbying of the government. With respect to the Schedule I drugs issue, for a certain segment of my audience, I remind you of the critical exception:

(8) Limitation on Use of Funds for Promotion of Legalization of Controlled Substances (Section 509)
“(a) None of the funds made available in this Act may be used for any activity that promotes the legalization of any drug or other substance included in schedule I of the schedules of controlled substances established under section 202 of the Controlled Substances Act except for normal and recognized executive-congressional communications. (b) The limitation in subsection (a) shall not apply when there is significant medical evidence of a therapeutic advantage to the use of such drug or other substance or that federally sponsored clinical trials are being conducted to determine therapeutic advantage.”

I wouldn’t like to find out the hard way, but I would presume this means that research into the medical benefits of marijuana, THC and/or other cannabinoid compounds is just fine. I seem to recall reading more than one paper listing NIH support that might be viewed in this light.

What I found more fascinating was a little clause that I had not previously noticed in the anti-lobbying section.

(3) Anti-Lobbying (Section 503)

“(c) The prohibitions in subsections (a) and (b) shall include any activity to advocate or promote any proposed, pending or future Federal, State or local tax increase, or any proposed, pending, or future requirement or restriction on any legal consumer product, including its sale or marketing, including but not limited to the advocacy or promotion of gun control.”

There is also another stand-alone section, in case you didn’t get the point:

(2) Gun Control (Section 217)
“None of the funds made available in this title may be used, in whole or in part, to advocate or promote gun control.”

I was sufficiently curious to go back through the years and found out that this language did not appear in the Notice for FY 2011 and was inserted for FY 2012, as part of the Consolidated Appropriations Act, 2012 (Public Law 112-74), signed into law on December 23, 2011. I didn’t bother to go back through the legislative history to figure out exactly when the gun control part was first added, but it looks like something similar affecting the CDC appropriation was put into place in 1996.

So I guess we should have expected the anti-gun-control forces to get around to it eventually?

PubMed has finally incorporated a comment feature: PubMed Commons.

NCBI has released a pilot version of a new service in PubMed that allows researchers to post comments on individual PubMed abstracts. Called PubMed Commons, this service is an initiative of the NIH leadership in response to repeated requests by the scientific community for such a forum to be part of PubMed. We hope that PubMed Commons will leverage the social power of the internet to encourage constructive criticism and high quality discussions of scientific issues that will both enhance understanding and provide new avenues of collaboration within the community.

This is described as a beta test and, for now, is only open to authors of articles already listed in PubMed, so far as I can tell.

Perhaps not as Open as some would wish but it is a pretty good start.

I cannot WAIT to see how this shakes out.

The Open-Everything, RetractionWatch, ReplicationEleventy, PeerReviewFailz, etc. acolytes of various strains would have us believe that this is the way to save all of science.

This step by PubMed brings online commenting to the best place, i.e., where everyone searches out the papers, instead of the commercially beneficial place. It will link, I presume, the commentary to the openly available PMC version once the 12-month embargo elapses for each paper. All in all, a good place for this to occur.

I will be eager to see if there is any adoption of commenting, to see the type of comments that are offered and to assess whether certain kinds of papers get more commentary than others. All in all, this is going to be a neat little experiment for the conduct-of-science geeks to observe.

I recommend you sign up as soon as possible. I’m sure the devout and TrueBelievers would beg you to make a comment on a paper yourself so, sure, go and comment on some paper.

You can search out commented papers with this string, apparently.
has_user_comments[sb]

In case you are interested in seeing what sorts of comments are being made.
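If you would rather pull those commented papers programmatically than click through the web interface, here is a minimal sketch against NCBI's E-utilities esearch endpoint using that same filter. The endpoint and its db/term/retmax/retmode parameters are standard E-utilities usage; whether has_user_comments[sb] stays stable as the pilot evolves is an assumption on my part.

    # Minimal sketch: ask PubMed for papers carrying PubMed Commons
    # comments via NCBI E-utilities, using the filter string above.
    import json
    import urllib.parse
    import urllib.request

    BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": "has_user_comments[sb]",  # the filter from the post
        "retmax": "20",                   # first 20 hits
        "retmode": "json",
    })

    with urllib.request.urlopen(f"{BASE}?{params}") as resp:
        result = json.load(resp)["esearchresult"]

    print(f"{result['count']} commented papers; first PMIDs:")
    for pmid in result["idlist"]:
        print(f"  https://pubmed.ncbi.nlm.nih.gov/{pmid}/")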

On showing the data

September 5, 2013

If I could boil things down to my most fundamental criticism of the highly competitive chase for the “get” of a very high Impact Factor journal acceptance in science, it is the inefficiency.

GlamourDouchery of this type is an inefficient way to do science.

This is because of several factors related to the fundamental fact that if the science you conduct isn’t published it may as well never have happened.

Science is an incremental business, ever built upon the foundations and structures created by those who came before. And built in sometimes friendly, sometimes uneasy collaboration with peers. No science stands alone.

Science these days is also a very large enterprise with many, many thousands of people beavering away at various topics. It is nearly impossible to think of a research program or project that doesn’t benefit by the existence of peer labs doing somewhat-related work.

Consequently, it is a near truism that all science benefits from the quickest and most comprehensive knowledge of what other folks are doing.

The “get” of an extremely high Impact Factor Journal article acceptance requires that the authors, editors and reviewers temporarily suspend disbelief and engage in the mass fantasy that this is not the case. The participants engage in the fantasy that this work under consideration is the first and best and highly original. That it builds so fundamentally different an edifice that the vast majority of the credit adheres to the authors and not to any part of the edifice of science upon which they are building.

This means that the prospective GlamourArticle authors are highly motivated to keep an enormous amount of their progress under wraps until they are ready to reveal this new fine stand-alone structure.

Otherwise, someone else might copy them. Leverage their clever advances. Build a competing tower right next door and overshadow any neighboring accomplishments. Which, of course, builds the city faster….but it sure doesn’t give the original team as much credit.

The average Glamour Article is also an ENORMOUS amount of work. Many, many person years go into creating one. Many people who would otherwise get a decent amount of credit for laying a straight and true foundation will now be entirely overlooked in the aura of the completed master work. They will never become architects themselves, of course. How could they? Even if they travel to Society Journal Burg, there is no record of them being the one to detail the windows or come up with a brilliant new way to mix the mortar. That was just scut work for throwaway labor, don’t you know.

But the real problem is that the collaborative process between builders is hindered. Slowed for years. The dissemination of tools and approaches has to wait until the entire tower is revealed.

Inefficiency. Slowness. These are the concerns.

Sure, it is also a problem that the builders of the average Glamour Article tower may not share all their work even after the shroud has been removed. It would be nice to let everyone know just where the granite was found, how it was quarried and something about the brand new amazing mortar that (who was that anonymous laborer again? shrug) created. But there isn’t really any pay for that and the original team has moved on. Good luck. So yes, it would be good to require them to show their work at the end.

Much, much more important, however, is that they show each part of the tower as it is being created. I mean, no, I don’t think people need to work with a hundred eyes tracking their every move. I don’t think every little mistake has to be revealed, nor do I think we necessarily need to know how each laborer holds her trowel. But it would be nice to show off the foundation when it is built. To reveal the clever staircase and the detailing around the windows once they are installed. Then each sub-team can get their day in the sun. Get the recognition they deserve.

[And if they are feeling a little oppressed, screw it, they can leave and take their credit with them. And their advances in knowledge can be spread to another town who will be happy to hire this credentialed foundation builder instead of some grumpy nobody who only claims to have built a foundation.]

The competition for Glamour Article building can’t really catch up directly; after all, it takes a good bit of work to lay a foundation or create a new window design. They can copy techniques and leverage them, but there is less chance of an out-and-out scoop of the full project.

So if the real problem is inefficiency, Dear Reader, the solution is most assuredly the incremental reveal of progress made. We don’t need to watch the stirring and the endless recipes for mortar that have been attempted, we just need to know how the successful one was made. And to see the sections of the tower as they are completed.

Ironically enough, this is how it is done outside of GlamourCity. In Normalville, the builders do show their work. Not all of it in nauseating detail but incrementally. Sections are shown as they are completed. It is not necessary to wait for the roof to be laid to show the novel floorplan or for the paint to be on to show the craft that went into the floor joists.

This is a much more efficient way to advance.

It has to be, since resources are scarce and people in Society Burgh kind of give a shit if one of their neighbors is crushed under a block of limestone. And care if an improperly supported beam cracks and they have to get a new one.

This is unlike the approach of Glamour City where they just curse the laborer and draft three new ones to lift the block into place. And pull another beam out of their unending pile of lumber.

Show me the data, Jerry!!!!!!

September 3, 2013

Today’s Twittsplosion was brought to you by @mbeisen, who then elaborated in a series of follow-up tweets. [Embedded tweets not reproduced here.]

There was a great deal of distraction in there from YHN, MBE and the Twitteratti. But these are the ones that get at the issue I was responding to. I think the last one here shows that I was basically correct about what he meant at the outset.

I also agree that it would be GREAT if all authors of papers had deposited all of their raw data, carefully annotated, commented and described (curated, in a word) with all of the things that I might eventually want to know. That would be kickass.

And I have had NUMEROUS frustrations that I cannot tell, even from the Methods sections, what was done, how the data were selected and groomed, etc., in many critical papers.

It isn’t because I assume fraud but rather because I find that, when it comes to behaving animals in laboratory studies, details matter. Unfortunately we all wish to overgeneralize from published reports….the authors want to imply they have reported a most universal TRUTH and other investigators wish to believe it so that they don’t have to sweat the details.

This is never true in science, as much as we want to pretend.

Science is ever only a description of what has occurred under these specific conditions. Period. Including the ones we’ve bothered to describe in the Methods and those we have not bothered to describe. Including those conditions that, unbeknownst to us, might have contributed.

Let us take our usual behavioral pharmacology model, the 10 m Hedgerow BunnyHopper assay. The gold standard, of course. And everyone knows it is trivial to speed up the BunnyHopping with a pretreatment of amphetamine.

However, we’ve learned over the years that the time of day matters.

Until…finally….in its dotage seniority. The Dash Lab finally fesses up. The PI allows a trainee to publish the warts. And compare the basic findings, done at nighttime in naive bunnies, with what you get during the dawn/dusk period. In Bunnies who have seen the Dash arena before. And maybe they are hungry for clover now. And they’ve had a whiff of fox without seeing the little blighters before.

And it turns out these minor methodological changes actually matter.

We also know that dose response curves can be individual for amphetamine and if the dose is too high the Bunny just stims (and gets eaten by the fox). Perhaps this dose threshold is not identical so we’re just going to chop off the highest dose because half of them were eaten after that dose. Wait…individuals? Why can’t we show the individuals? Because maybe a quarter are speeded up by 4X and a quarter by 10X and now that there are these new genetic data on Bunny myocytes under stressors as diverse as….

So why do the new papers just report the effects of single doses of amphetamine in the context of this fancy transcranial activation of vector-delivered Channelrhodopsin in motor cortex? Where are the training data? What time of day were they run? How many Bunnies were aced out of the study because the ReaChr expression was too low? I want to do a correlation, dammit! and a multivariate analysis that includes my favorite myocyte epigenetic markers! Say, how come these damn authors aren’t required to bank genomic DNA from every damn animal they run just so I can ask for it and do a whole new analysis?

After all, the taxpayers paid for it!

I can go on, and on and on with arguments for what “raw” data need to be included in all BunnyHopping papers from now into eternity. Just so that I can perform my pet analyses of interest.

The time and cost and sheer effort involved is of no consequence because of course it is magically unicorn fairy free time that makes it happen. Also, there would never be any such thing as a protracted argument with people who simply prefer the BadgerDigger assay and have wanted to hate on BunnyHopping since the 70s. Naaah. One would never get bogged down in irrelevant stuff better suited for review articles by such a thing. Never would one have to re-describe why this was actually the normal distribution of individual Hopping speeds and deltas with amphetamine.

What is most important here is that all scientists focus on the part of their assays and data that I am interested in.

Just in case I read their paper and want to write another one from their data.

Without crediting them, of course. Any such requirement is, frankly my dear, gauche.

There’s a new post up over at Speaking of Research that documents The Double Life of Dr. Lawrence A. Hansen. The most astonishing thing is that this AR wackanut has the gall to hold research funding as PI and publish papers that, you guessed it, involve animal research. Including a study in “mongrel dogs” [cited 21 times, including twice in 2012], which he first authored some ten years before hitting the scene in outrage over med school physiology labs that used canine models.

Go Read.

23andme and the Cold Case

August 15, 2013

By way of brief introduction, I last discussed the 23andme genetic screening service in the context of their belated adoption of IRB oversight and interloper paternity rates. You may also be interested in Ed Yong’s (or his euro-caucasoid doppelganger’s) results.

Today’s topic is brought to you by a comment from my closest collaborator on a fascinating low-N developmental biology project.

This collaborator raised a point that extends from my prior comment on the paternity post.

But, and here’s the rub, the information propagates. Let’s assume there is a mother who knows she had an affair that produced the kid or a father who impregnated someone unknown to his current family. Along comes the 23 and me contact to their child? Grandchild? Niece or nephew? Brother or sister? And some stranger asks them, gee, do you have a relative with these approximate racial characteristics, of approximately such and such age, who was in City or State circa 19blahdeblah? And then this person blast emails their family about it? or posts it on Facebook?

It also connects with a number of issues raised by the fact that 23andme markets to adoptees in search of their genetic relatives. This service is being used by genealogy buffs of all stripes and one cannot help but observe that one of the more ethically complicated results will be the identification of unknown genetic relationships. As I alluded to above, interloper paternity may be identified. Also, one may find out that a relative gave a child up for adoption…or that one fathered a child in the past and was never informed.

That’s all very interesting but today’s topic relates to crimes in which DNA evidence has been left behind. At present, so far as I understand, the DNA matching is to people who have already crossed the law enforcement threshold. In fact, there was a recent brouhaha over just what sort of “crossing” of the law enforcement threshold should permit the cops to take your DNA, if I am not mistaken. This does no good, however, if the criminal has never come to the attention of law enforcement.

Ahhhh, but what if the cops could match the DNA sample left behind by the perpetrator to a much larger database? And find a first or second cousin or something? This would tremendously narrow the investigation, wouldn’t it?

It looks like 23andme is all set to roll over for whichever enterprising police department decides to try.

From the Terms of Service.

Further, you acknowledge and agree that 23andMe is free to preserve and disclose any and all Personal Information to law enforcement agencies or others if required to do so by law or in the good faith belief that such preservation or disclosure is reasonably necessary to: (a) comply with legal process (such as a judicial proceeding, court order, or government inquiry) or obligations that 23andMe may owe pursuant to ethical and other professional rules, laws, and regulations; (b) enforce the 23andMe TOS; (c) respond to claims that any content violates the rights of third parties; or (d) protect the rights, property, or personal safety of 23andMe, its employees, its users, its clients, and the public.

Looks to me like all the cops would need is a warrant. Easy peasy.

__
h/t to Ginny Hughes [Only Human blog] for cuing me to look over the 23andme ToS recently.

F1000Research will be waiving the publication fee for negative-result manuscripts through the end of August.


If you have negative results in your lab notebooks, this is the time to write them up! Like all journals, we of course publish traditional full-length research papers but, in addition, we accept short single-observation articles, data articles (i.e. a dataset plus protocol), and negative- and null-result submissions.

For negative and null results, it is especially important to ensure that the outcome is a genuine finding generated by a well executed experiment, and not simply the result of poorly conducted work. We have been talking to our Editorial Board about how to try to avoid the publication of the latter type of result and will be addressing this topic and asking for your input in a further post in the next few days.

The follow up post requesting comment is here.

This is a great idea and the original post nails down why.

This is not only a disappointment for the researchers who conducted the work, it’s also damaging to the overall scientific record. This so-called “publication bias” toward positive results makes it appear as though the experiments with negative or null results never happened.

Sometimes the unpublished experiments are obvious next steps in elucidating a particular biological mechanism, making it likely that other researchers will try the same thing, not realizing that someone else already did the work. This is a waste of time and money.

On other occasions, the positive results that are published are the exception: they could have been specific to a narrow set of conditions, but if all the experiments that didn’t work are not shown, these exceptional cases now look like the only possible result. This is especially damaging when it comes to drug development and medical research, where treatments may be developed based on an incomplete understanding of research results.

The waste of time and money cannot be emphasized enough, especially in these tight funding times. Why on earth should we tolerate any duplication of effort that is made necessary simply by the culture of not publicizing results that are not deemed sexy enough? This is the information age, people!

One example from my field is the self-administration of delta9-tetrahydrocannabinol (THC) by the common laboratory species used for self-administration studies of other drugs of abuse. Papers by Goldberg and colleagues (Tanda et al., 2000; Justinova et al., 2003) showed that squirrel monkeys will self-administer THC intravenously, which was big news. It was the first relatively clear demonstration in lab animals for a substance we know humans readily self-administer. As the Goldberg group related in their 2005 review article, there is no clear evidence that rodents will self-administer THC i.v. in literature stretching back to the 1970s, when the self-administration technique was being used for studies of numerous drugs.

Over the last three decades, many attempts to demonstrate intravenous self-administration of THC or of synthetic cannabinoid CB1 receptor agonists by experimental animals were relatively unsuccessful (Pickens et al., 1973; Kaymakcalan, 1973; Harris et al., 1974; Carney et al., 1977; van Ree et al., 1978; Mansbach et al., 1994) (Table 1). None of these studies clearly demonstrated persistent, dose-related, self-administration behavior maintained by THC or synthetic cannabinoids, which would be susceptible to vehicle extinction and subsequent reinstatement in the absence of unusual ‘‘foreign’’ conditions.

The thing is that rats “wouldn’t” self-administer nicotine either. Nor alcohol. That is, until people came up with the right conditions to create a useful model. In the case of ethanol it was helpful to either force them to become dependent first (via forced liquid diets adulterated with ethanol or ethanol inhalation chambers) or to slowly train them up on cocktails (called the flavorant-fade procedure). In the case of nicotine, the per-infusion dose was all critical and it helped to provide intermittent access, e.g., four days on, three days off. Interestingly, while making rats dependent on nicotine using subcutaneous osmotic pumps didn’t work very well (as it does for heroin), a recent study suggests that forced inhalation-based dependence on nicotine results in robust intravenous self-administration.

For many drugs of abuse, subtle factors can make a difference in the rodent model. Strain, sex, presence of food restriction, exact age of animals, circadian factors, per-infusion dose, route of administration, duration of access, scheduling of access…. the list goes on and on. A fair read of the literature suggests that when you have cocaine or heroin, many factors have only quantitative effects. You can move the means around, even to the p<0.05 level, but hey, it's cocaine or heroin! They'll still exhibit clear evidence that they like the drug.

When it comes to other drugs, maybe it is a little trickier. The balance between pleasurable and aversive effects may be a fine one (ever tried buccal nicotine delivery via chew or dip? huh?). The route of administration may be much more critical. Etc.

So the curious person might ask, how much has been tried? How many curious grad students or even postdocs have “just tried it” for a few months or a year? How many have done the most obvious manipulations and failed? How many have been told to give it up as a bad lot by older and wiser PIs (who tried to get THC self-administration going themselves back 20 years ago)?

I’m here to tell you that it has been attempted a lot more than has been published. Because the lab lore type of advice keeps rolling.

It is really hard, however, to get a comprehensive look at what has been tried and has led to failure. What was the quality of those attempts? N=8 and out? Or did some poor sucker run multiple groups with different infusion doses? Across the past thirty years, how many of the obvious tweaks have been unsuccessful?

Who cares, right? Well, my read is that there are some questions that keep coming around, sometimes with increased urgency. The current era of medical marijuana legalization and tip-toeing into full legalization means that we’re under some additional pressure to have scientific models. The explosion of full-agonist cannabimimetic products (K2, Spice, Spike, etc., containing JWH-018 at first and now a diversity of compounds) likewise rekindles interest. Proposals that higher-THC marijuana strains increase dependence and abuse could stand some controlled testing….if we only had better models.

Well, this is but one example. I have others from the subfields of science closest to my own interests. I think it likely that you, Dear Reader, if you are a scientist, can come up with examples from your own fields where the ready availability of all the failed studies would be useful.

Intrepid reporter @eperlste filed a dispatch from the front lines of the OpenScience, CrowdFund War.

I’ve reached out to several @qb3 incubator biotech startups to learn more about leasing lab space. $900/bench/month is a pretty penny!

$10,800 per year just for the bench space alone. One bench. He didn’t elaborate so it is hard to know what is included, but I think we can safely assume that normal costs go up from there. Freezer space, hourly use of shared big-ticket equipment, etc. Vivarium fees to maintain mouse lines won’t come cheaply. Waste disposal.

Just another data point for you in your efforts to assess what can reasonably be accomplished for a given threshold of crowd-fund science support money and in determining where your Indirect Cost dollars for a traditional grant go.
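To make that arithmetic concrete, here is a back-of-the-envelope sketch. The $900/bench/month figure comes from the tweet above; every other line item is a hypothetical placeholder I invented to stand in for the unknown add-on costs just listed, not a real quote from QB3 or anyone else.

    # Back-of-the-envelope annual cost for a one-bench crowd-funded lab.
    # Only the bench rate comes from @eperlste's tweet; the other line
    # items are invented placeholders for the unknown add-on costs.
    BENCH_PER_MONTH = 900
    bench_annual = BENCH_PER_MONTH * 12  # $10,800/year, as in the post

    hypothetical_extras = {  # all made-up numbers, purely illustrative
        "freezer space": 1200,
        "shared big-ticket equipment": 2500,
        "vivarium fees (mouse lines)": 4000,
        "waste disposal": 800,
    }

    total = bench_annual + sum(hypothetical_extras.values())
    print(f"bench alone: ${bench_annual:,}")
    print(f"with placeholder extras: ${total:,}")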

Jane Goodall, Plagiarist

March 27, 2013

From the WaPo article:

Jane Goodall, the primatologist celebrated for her meticulous studies of chimps in the wild, is releasing a book next month on the plant world that contains at least a dozen passages borrowed without attribution, or footnotes, from a variety of Web sites.

Looks pretty bad.

This bit from one Michael Moynihan at The Daily Beast raises the more interesting issues:

No one wants to criticize Jane Goodall—Dame Goodall—the soft-spoken, white-haired doyenne of primatology. She cares deeply about animals and the health of the planet. How could one object to that?

Because it leads her to oppose animal research using misrepresentation and lies? That’s one reason why one might object.

You see, everyone is willing to forgive Jane Goodall. When it was revealed last week in The Washington Post that Goodall’s latest book, Seeds of Hope, a fluffy treatise on plant life, contained passages that were “borrowed” from other authors, the reaction was surprisingly muted.

It always starts out that way for a beloved writer. We’ll just have to see how things progress. Going by recent events it will take more guns a’smokin’ in her prior works to start up a real hue and cry. At the moment, her thin mea culpa will very likely be sufficient.

A Jane Goodall Institute spokesman told The Guardian that the whole episode was being “blown out of proportion” and that Goodall was “heavily involved” in the book bearing her name and does “a vast amount of her own writing.” In a statement, Goodall said that the copying was “unintentional,” despite the large amount of “borrowing” she engaged in.

Moynihan continues on to catalog additional suspicious passages. I think some of them probably need a skeptical eye. For example I am quite willing to believe a source might give the exact same pithy line about a particular issue to a number of interviewers. But this caught my eye:

Describing a study of genetically modified corn, Goodall writes: “A Cornell University study showed adverse effects of transgenic pollen (from Bt corn) on monarch butterflies: their caterpillars reared on milkweed leaves dusted with Bt corn pollen ate less, grew more slowly, and suffered higher mortality.”

A report from Navdaya.org puts it this way: “A 1999 Nature study showed adverse effects of transgenic pollen (from Bt corn) on monarch butterflies: butterflies reared on milkweed leaves dusted with bt corn pollen ate less, grew more slowly, and suffered higher mortality.” (Nor does Goodall mention a large number of follow-up studies, which the Pew Charitable Trust describes as showing the risk of GM corn to butterflies as “fairly small, primarily because the larvae are exposed only to low levels of the corn’s pollen in the real-world conditions of the field.”)

And here is the real problem. When someone who has a public reputation built on what people think of as science weighs in on other matters of science, they enjoy a lot of trust. Goodall certainly has this. So when such a person misuses this by misrepresenting the science to further their own agenda…it’s a larger hurdle for the forces of science and rational analysis to overcome. Moynihan is all over this part as well:

One of the more troubling aspects of Seeds of Hope is Goodall’s embrace of dubious science on genetically modified organisms (GMO). On the website of the Jane Goodall Foundation, readers are told—correctly—that “there is scientific consensus” that climate change is being driven by human activity. But Goodall has little time for scientific consensus on the issue of GMO crops, dedicating the book to those who “dare speak out” against scientific consensus. Indeed, her chapter on the subject is riddled with unsupportable claims backed by dubious studies.

So in some senses the plagiarism is just emblematic of un-serious thinking on the part of Jane Goodall. The lack of attribution is going to be sloughed off with an apology and a re-edit of the book, undoubtedly. We should not let the poor scientific thinking go unchallenged though, just to raise a mob against plagiarism. The abuse of scientific consensus is a far worse transgression.

They sure do get huffy when they themselves are the ones being subjected to open peer review.

Reputable citizen-journalist Comradde PhysioProffe has been investigating the doings of a citizen science project, ubiome. Melissa of The Boundary Layer blog has nicely explicated the concerns about citizen science that uses human subjects.

And this brings me to what I believe to be the potentially dubious ethics of this citizen science project. One of the first questions I ask when I see any scientific project involving collecting data from humans is, “What institutional review board (IRB) is monitoring this project?” An IRB is a group that is specifically charged with protecting the rights of human research participants. The legal framework that dictates the necessary use of an IRB for any project receiving federal funding or affiliated with an investigational new drug application stems from the major abuses perpetrated by Nazi physicians during World War II and scientists and physicians affiliated with the Tuskegee experiments. The work that I have conducted while affiliated with universities and with pharmaceutical companies has all been overseen by an IRB. I will certainly concede to all of you that the IRB process is not perfect, but I do believe that it is a necessary and largely beneficial process.

My immediate thought was about those citizen scientist, crowd-funded projects that might happen to want to work with vertebrate animals.

I wonder how this would be received:

“We’ve given extensive thought to our use of stray cats for invasive electrophysiology experiments in our crowd funded garage startup neuroscience lab. We even thought really hard about IACUC approvals and look forward to an open dialog as we move forward with our recordings. Luckily, the cats supply consent when they enter the garage in search of the can of tuna we open every morning at 6am.”

Anyway, in citizen-journalist PhysioProffe’s investigations he has linked up with an amazing citizen-IRB-enthusiast. A sample from this latter’s recent guest post on the former’s blog blogge.

Then in 1972, a scandal erupted over the Tuskegee syphilis experiment. This study, started in 1932 by the US Public Health Service, recruited 600 poor African-American tenant farmers in Macon County, Alabama: 201 of them were healthy and 399 had syphilis, which at the time was incurable. The purpose of the study was to try out treatments on what even the US government admitted to be a powerless, desperate demographic. Neither the men nor their partners were told that they had a terminal STD; instead, the sick men were told they had “bad blood” — a folk term with no basis in science — and that they would get free medical care for themselves and their families, plus burial insurance (i.e., a grave plot, casket and funeral), for helping to find a cure.

When penicillin was discovered, and found in 1947 to be a total cure for syphilis, the focus of the study changed from trying to find a cure to documenting the progress of the disease from its early stages through termination. The men and their partners were not given penicillin, as that would interfere with the new purpose: instead, the government watched them die a slow, horrific death as they developed tumors and the spirochete destroyed their brains and central nervous system. Those who wanted out of the study, or who had heard of this new miracle drug and wanted it, were told that dropping out meant paying back the cost of decades of medical care, a sum that was far beyond anything a sharecropper could come up with.

CDC: U.S. Public Health Service Syphilis Study at Tuskegee
NPR: Remembering Tuskegee
PubMed: Syphilitic Gumma

NOT-OD-13-039 was just published, detailing the many data-faking offenses of one Bryan Doreian. There are 7 falsifications listed, which include a number of different techniques but mostly involve falsely describing the number of samples/repetitions that were performed (4 charges) and altering the numeric values obtained to reach a desired result (3 charges). The scientific works affected included:

Doreian, B.W. “Molecular Regulation of the Exocytic Mode in Adrenal Chromaffin Cells.” Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, August 2009; hereafter referred to as the “Dissertation.”

Doreian, B.W., Fulop, T.G., Meklemburg, R.L., Smith, C.B. “Cortical F-actin, the exocytic mode, and neuropeptide release in mouse chromaffin cells is regulated by myristoylated alanine-rich C-kinase substrate and myosin II.” Mol Biol Cell. 20(13):3142-54, 2009 Jul; hereafter referred to as the “Mol Biol Cell paper.”

Doreian, B.W., Rosenjack, J., Galle, P.S., Hansen, M.B., Cathcart, M.K., Silverstein, R.L., McCormick, T.S., Cooper, K.D., Lu, K.Q. “Hyper-inflammation and tissue destruction mediated by PPAR-γ activation of macrophages in IL-6 deficiency.” Manuscript prepared for submission to Nature Medicine; hereafter referred to as the “Nature Medicine manuscript.”

The ORI notice indicates that Doreian will request that the paper be retracted.

There were a couple of interesting points about this case. First, that Doreian has been found to have falsified information in his dissertation, i.e., that body of work that makes up the major justification for awarding him a PhD. From the charge list, it appears that the first 4 items were both included in the Mol Bio Cell paper and in his Dissertation. I will be very interested to see if Case Western Reserve University decides to revoke his doctorate. I tend to think that this is the right thing to do. If it were my Department this kind of thing would make me highly motivated to seek a revocation.

Second, this dissertation was apparently given an award by the Case Western Reserve University School of Medicine:

The Doctoral Excellence Award in Biomedical Sciences is established to recognize exceptional research and scholarship in PhD programs at the School of Medicine. Nominees’ work should represent highly original work that is an unusually significant contribution to the field. A maximum of one student per PhD program will be selected, but a program might not have a student selected in a particular year. The Graduate Program Directors chosen by the Office of Graduate Education will review the nominations and select recipients of each Award.
Eligibility

Open to graduating PhD students in Biochemistry, Bioethics, Biomedical Engineering, Epidemiology and Biostatistics, Genetics, Molecular Medicine, Neurosciences, Nutrition, Pathology, Pharmacology, Physiology and Biophysics, and Systems Bio and Bioinformatics.

This sidebar indicates the 2010 winners are:

Biochemistry: Matthew Lalonde
Biomedical Engineering: Jeffrey Beamish
Epidemiology and Biostatistics: Johnie Rose
Neurosciences: Phillip Larimer
Nutrition: Charlie Huang
Pathology: Joshua Rosenblum
Pharmacology: Philip Kiser
Physiology and Biophysics: Bryan Doreian

Now obviously with such an award it is not a given that Mr. Doreian’s data faking prevented another deserving individual from gaining this recognition and CV item. It may be that there were no suitable alternatives from his Department that year; certainly it did not get one in 2011. It may also be the case that his apparent excellence had no impact on the selection of other folks from other Departments…or maybe he did set a certain level that prevented other folks from gaining an award that year. Hard to say. This is unlike the zero-sum nature of the NIH Grant game, in which it is overwhelmingly the case that if a faker gets an award, this prevents another award being made to the next grant on the list.

But still, this has the potential for the same problem that comes with only discovering the fakes post hoc. The damage to the honest scientist has already been done. There is another doctoral student who suffered at the hands of this fellow’s cheating. This is even before we get to the more amorphous effect of “raising the bar” for student performance in the department.

Now fear not, it does appear that this scientific fraudster has left science.

Interestingly, he appears to be engaging in a little bit of that Web presence massaging that we discussed in the case of alcohol research fraudster Michael Miller, Ph.D., last of SUNY Upstate. This new data-faking fraudster, Bryan Doreian, has set up a “brandyourself” page.

“Our goal is to make it as easy as possible to help anyone improve their own search results and online reputation.”

Why should Mr. Doreian need such a thing? Because he’s pursuing a new career in tutoring for patent bar exams. Hilariously, it has this tagline:

My name is Bryan and I am responsible for the operations, management and oversight of all projects here at WYSEBRIDGE. Apart from that some people say I am pretty good at data analysis and organization.

This echoes something on the “brandyourself” page:

Bryan has spent years in bio- and medical- research, sharpening his knack for data analysis and analytical abilities while obtaining a PhD in Biophysics.

Well, the ORI “says” that he is pretty good at, and/or has sharpened his knack for, faking data analysis. So I wonder who those “some people” might be at this point? His parents?

His “About” page also says:

Doctoral Studies

In 2005, I moved to Cleveland, OH to begin my doctoral studies in Cellular and Molecular Biophysics. As typical for a doctoral student, many hours were spent studying, investigating, pondering, researching, the outer fringes of information in order to attempt to make sense of what was being observed. 5 years later, I moved further on into medical research. After 2+ years and the conclusion of that phase of research, I turned my sights onto the Patent Bar Exam.

At this point you probably just want to take this down, my friend. A little free advice: you don’t want people coming to your new business looking into the sordid history of your scientific career as a fraudster, do you?

From an ASM Forum bit by Casadevall and Fang:

An example of a rejected descriptive manuscript would be a survey of changes in gene expression or cytokine production under a given condition. These manuscripts usually fare poorly in the review process and are assigned low priority on the grounds that they are merely descriptive; some journals categorically reject such manuscripts (B. Bassler, S. Bell, A. Cowman, B. Goldman, D. Holden, V. Miller, T. Pugsley, and B. Simons, Mol. Microbiol. 52: 311–312, 2004). Although survey studies may have some value, their value is greatly enhanced when the data lead to a hypothesis-driven experiment. For example, consider a cytokine expression study in which an increase in a specific inflammatory mediator is inferred to be important because its expression changes during infection. Such an inference cannot be made on correlation alone, since correlation does not necessarily imply a causal relationship. The study might be labeled “descriptive” and assigned low priority. On the other hand, imagine the same study in which the investigators use the initial data to perform a specific experiment to establish that blocking the cytokine has a certain effect while increasing expression of the cytokine has the opposite effect. By manipulating the system, the investigators transform their study from merely descriptive to hypothesis driven. Hence, the problem is not that the study is descriptive per se but rather that there is a preference for studies that provide novel mechanistic insights.

But how do you choose to block the cytokine? Pharmacologically? With gene manipulations? Which cells are generating those cytokines and how do you know that? Huh? Are there other players that regulate the cytokine expression? Wait, have you done the structure of the cytokine interacting with its target?

The point is that there is always some other experiment that really, truly explains the “mechanism”. Always.

Suppose some species of laboratory animal (or humans!) are differentially affected by the infection and we happen to know something about differences in that “mediator” between species. Is this getting at “mechanism” or merely descriptive? How about if we modify the relevant infectious microbe? Are we testing other mechanisms of action…or just further describing the phenomenon?

This is why people who natter on with great confidence that they are the arbiters of what is “merely descriptive” and what is “mechanistic” are full of stuff and nonsense. And why they are the very idiots who compliment the Emperor on his fine new Nature publication clothes.

They need to be sent to remedial philosophy of science coursework.

The authors end with:

Descriptive observations play a vital role in scientific progress, particularly during the initial explorations made possible by technological breakthroughs. At its best, descriptive research can illuminate novel phenomena or give rise to novel hypotheses that can in turn be examined by hypothesis-driven research. However, descriptive research by itself is seldom conclusive. Thus, descriptive and hypothesis-driven research should be seen as complementary and iterative (D. B. Kell and S. G. Oliver, Bioessays 26:99–105, 2004). Observation, description, and the formulation and testing of novel hypotheses are all essential to scientific progress. The value of combining these elements is almost indescribable.

They almost get it. I completely agree with the “complementary and iterative” part as this is the very essence of the “on the shoulders of giants” part of scientific advance. However, what they are implying here is that the combining of elements has to be in the same paper, certainly for the journal Infection and Immunity. This is where they go badly wrong.

Creative Anger

December 11, 2012

Maybe it is just me.

A not insignificant fraction of my scientific life is motivated by Creative Anger.

Another way to explain this state of mind is when you respond to some issue that arises with “No way, that is total bullshit….and here’s why.”

The “why” is where the creative process is engaged. It may be a marshaling of relevant literature. Perhaps to the level of writing a scientific argument down. Introduction to a paper, a discussion section…maybe even a whole review article. If you have it really bad, even a grant application.

Other “whys” may stimulate you to a new experiment or line of them. A lot of my creative anger responses seem to involve the discovery that no, nobody has published what are incredibly obvious studies. And clearly, “Clearly!”, I say, these must needs be done.

And we’re off….