Inder Verma has resigned his position at the Salk Institute before a formal conclusion was reached in their internal investigation. One can only imagine they were moving toward a finding of guilt and he was tipped to resign.

http://www.sciencemag.org/news/2018/06/leading-salk-scientist-resigns-after-allegations-harassment

If the lab head tells the trainees or techs that a specific experimental outcome* must be generated by them, this is scientific misconduct.

If the lab head says a specific experimental outcome is necessary to publish the paper, this may be very close to misconduct or it may be completely aboveboard, depending on context. The best context to set is a constant mantra that any outcome teaches us more about reality and that is the real goal.


*no we are not talking about assay validation and similar technical development stuff.

Commenter jmz4 made a fascinating comment on a prior post:


It is not the journals responsibility to mete out retractions as a form of punishment(&). Only someone that buys into papers as career accolades would accept that. The journal is there to disseminate accurate scientific information. If the journal has evidence that, despite the complaint, this information is accurate,(%) then it *absolutely* should take that into account when deciding to keep a paper out there.

(&) Otherwise we would retract papers from leches and embezzlers. We don’t.

That prior post was focused on data fraud, but this set of comments suggests something a little broader.

I.e., that facts are facts and it doesn’t matter how we have obtained them.

This, of course, brings up the little nagging matter of the treatment of research subjects. As you are mostly aware, Dear Readers, the conduct of biomedical experimentation that involves human or nonhuman animal subjects requires an approval process. Boards of people external to the immediate interests of the laboratory in question must review research protocols in advance and approve the use of human (Institutional Review Board; IRB) or nonhuman animal (Institutional Animal Care and Use Committee; IACUC) subjects.

The vast majority (ok, all) journals of my acquaintance require authors to assert that they have indeed conducted their research under approvals provided by IRB or IACUC as appropriate.

So what happens when and if it is determined that experiments have been conducted outside of IRB or IACUC approval?

The position expressed by jmz4 is that it shouldn’t matter. The facts are as they are, the data have been collected so too bad, nothing to be done here. We may tut-tut quietly but the papers should not be retracted.

I say this is outrageous and nonsense. Of course we should apply punitive sanctions, including retracting the paper in question, if anyone is caught trying to publish research that was not collected under proper ethical approvals and procedures.

In making this decision, the evidence for whether the conclusions are likely to be correct or incorrect plays no role. The journal should retract the paper to remove the rewards and motivations for operating outside of the rules. Absolutely. Publishers are an integral part of the integrity of science.

The idea that journals are just there to report the facts as they become known is dangerous and wrong.

__
Additional Reading: The whole board of Sweden’s top-ranked university was just sacked because of the Macchiarini scandal

Story boarding

June 23, 2015

When you “storyboard” the way a figure or figures for a scientific manuscript should look, or need to look, to make your point, you are on a very slippery slope.

It sets up a situation where you need the data to come out a particular way to fit the story you want to tell.

This leads to all kinds of bad shenanigans. From outright fakery to re-running experiments until you get it to look the way you want. 

Story boarding is for telling fictional stories. 

Science is for telling non-fiction stories. 

Those stories are created after the fact. After the data are collected. With no need for storyboarding the narrative in advance.

I agree with the following Twitter comment

Insofar as it calls for the Editorial Board of the Journal of Neuroscience to explain why it banned three authors from future submissions. As I said on the prior post, this step is unusual and seems on the face of it to be extreme.

I also said I could see justification for the decision to retract the paper. I say that also could stand some explanation, given the public defense and local University review decision.

There is one thing that concerns me about the Journal of Neuroscience banning three authors from future submission in the wake of a paper retraction.

One reason you might seek to get harsh with some authors is if they have a track record of corrigenda and errata supplied to correct mistakes in their papers. This kind of pattern would support the idea that they are pursuing an intentional strategy of sloppiness to beat other competitors to the punch and/or just don’t really give a care about good science. A Journal might think either “Ok, but not in our Journal, chumpos” or “Apparently we need to do something to get their attention in a serious way”.

There is another reason that is a bit worrisome.

One of the issues I struggle with is the whisper campaign about chronic data fakers. “You just can’t trust anything from that lab.” “Everyone knows they fake their data.”

I have heard these comments frequently in my career.

On the one hand, I am a big believer in innocent-until-proven-guilty and therefore this kind of crap is totally out of bounds. If you have evidence of fraud, present it. If not, shut the hell up. It is far too easy to assassinate someone’s character unfairly and we should not encourage this for a second.

Right?

I can’t find anything on PubMed that is associated with the last two authors of this paper in combination with erratum or corrigendum as keywords. So, there is no (public) track record of sloppiness and therefore there should be no thought of having to bring a chronic offender to task.

On the other hand, there is a lot of undetected and unproven fraud in science. Just review the ORI notices and you can see just how long it takes to bust the scientists who were ultimately proved to be fraudsters. The public revelation of fraud to the world of science can be many years after someone first noticed a problem with a published paper. You also can see that convicted fraudsters have quite often continued to publish additional fraudulent papers (and win grants on fraudulent data) for years after they are first accused.

I am morally certain that I know at least one chronic fraudster who has, to date, kept one step ahead of the long (read: short and ineffectual) arm of the ORI law despite formal investigation. There was also a very curious case I discussed for which there were insider whispers of fraud and yet no findings that I have seen yet.

This is very frustrating. While data faking is a very high risk behavior, it is also a high reward behavior. And the risks are not inevitable. Some people get away with it.

I can see how it would be very tempting to enact a harsh penalty on an otherwise mild pretext for those authors that you suspected of being chronic fraudsters.

But I still don’t see how we can reasonably support doing so, if there is no evidence of misconduct other than the rumor mill.

A post at Retraction Watch alerts us to a paper retraction at the Journal of Neuroscience. The J Neuro notice on this paper reads:

The Journal of Neuroscience has received notification of an investigation by the Perelman School of Medicine at the University of Pennsylvania, which supports the journal’s findings of data misrepresentation in the article “Intraneuronal APP, Not Free Aβ Peptides in 3xTg-AD Mice: Implications for Tau Versus Aβ-Mediated Alzheimer Neurodegeneration” by Matthew J. Winton, Edward B. Lee, Eveline Sun, Margaret M. Wong, Susan Leight, Bin Zhang, John Q. Trojanowski, and Virginia M.-Y. Lee, which appeared on pages 7691–7699 of the May 25, 2011 issue. Because the results cannot be considered reliable, the editors of The Journal are retracting the paper.

From RetractionWatch we learn that the Journal has also issued a submission ban to three of the authors:

According to author John Trojanowski … he and Lee have been barred from publishing in Journal for Neuroscience for several years. Senior author Edward Lee is out for a year.

This is the first time I have ever heard of a Journal issuing a ban on authors submitting papers to them. This is an interesting policy.

If this were a case of a conviction for academic fraud, the issues might be a little clearer. But as it turns out, it is a very muddy case indeed.

A quote from the last author:

In a nut shell, Dean Glen Gaulton asserted that the findings in the paper were correct despite mistakes in the figures. I suggested to J. Neuroscience that we publish a corrigendum to clarify these mistakes for the readership of J Neuroscience

The old “mistaken figures” excuse. Who, might we ask, is at fault?

RetractionWatch quotes the second-senior author Trojanowski:

Last April, we got an email about an inquiry into figures that I would call erroneously used. An error was made by [first author] Matt Winton, who was leaving science and in transition between Penn and his new job. He was assembling the paper to submit it, there were several iterations of the paper. One set of figures was completely correct – I still don’t know what happened, but he got the files mixed up, and used erroneous figures

Winton has apparently landed a job as a market analyst*, providing advice to investors on therapeutics for Alzheimer’s Disease. Maybe the comment from Trojanowski is true and he was in a rush to get the paper off his desk as he started the new job**. Maybe. Maybe there is all kinds of blame to go around and the other authors should have caught the problem.

Or maybe this was one of those deliberate frauds in which someone took shortcuts and represented immunohistochemical images or immunoblots as something they were not. The finding from the University’s own investigation appears to confirm, however, that a legitimate mistake was made.

…so let us assume it was all an accident. Should the paper be retracted? or corrected?

I think there are two issues here that support the Journal’s right to retract the paper.

We cannot ignore that publication of a finding first has tremendous currency in the world of academic publishing. So does the cachet of publishing in one Journal over another. If a set of authors are sloppy about their manuscript preparation, provide erroneous data figures and they are permitted to “correct” the figures, they gain essentially all the credit. Potentially taking credit for priority or a given Journal level away from another group that works more carefully.

Since we would like authors to take all the care they possibly can in submitting correct data in the first place, it makes some sense to take steps to discourage sloppiness. Retraction is certainly one such discouragement. A ban on future submissions does seem, on the face of it, a bit harsh for a single isolated error. I might not opt for that if it were my decision. But I can certainly see where another scientist might legitimately want to bring down the ban hammer and I would be open to argument that it is necessary.

The second issue I can think of is related. It has to do with whether the paper acceptance was unfairly won by the “mistake”. This is tricky. I have seen many cases in which even to the relatively uninformed viewer, the replacement/correct figure looks a lot crappier/dirtier/equivocal than the original mistaken image. Whether right or wrong that so-called “pretty” data change the correctness of the interpretation and strength of the support, it is often interpreted this way. This raises the question of whether the paper would have gained acceptance with the real data instead of the supposedly mistaken data. We obviously can’t rewind history, but this theoretical concern should be easy to appreciate. Maybe the Journal of Neuroscience review board went through all of the review materials for this paper and decided that the faked figure sealed the acceptance? For this concern it really makes no difference to the Journal whether the mistake was unintentional or not, there is a strong argument that the integrity of its process requires retraction whenever there is significant doubt the paper would have been accepted without the mistaken image(s).

Given these two issues, I see no reason that the Journal is obligated to “abide by the Penn committee’s investigation” as Trojanowski appears to think they should be. The Journal could accept that it was all just a mistake and still have good reason to retract the paper. But again, a ban on further submissions from the authors seems a bit harsh.

Now, I will point out one thing in this scenario that chaps my hide. It is a frequent excuse of the convicted data faker that they were right, so all is well. RetractionWatch further quotes the senior author, Lee:

…the findings of this paper are extremely important for the Alzheimer’s disease field because it provided convincing evidence pointing out that a previous report claiming accumulation of intracellular Abeta peptide in a mouse model (3XFAD) is wrong (Oddo et al., Neuron 2003), as evidenced by the fact that this paper has been cited by others for 62 times since publication. Subsequent to our 2011 J. Neuroscience paper, others also have found no evidence of intracellular Abeta in the 3XFAD mice (e.g. Lauritzen et al., J. Neurosci, 2012).

I disagree that whether the figures are correct and/or repeatable is an issue that affects the decision here. You either have the correct data or you do not. You either submitted the correct data for review with the manuscript or you did not. Whether you are able to obtain the right data later, whether other labs obtain the right data or whether you had the right data in a mislabeled file all along is absolutely immaterial to whether the paper should be retracted.

The system itself is what needs to be defended. Because if you don’t protect the integrity of the peer review system – where authors are presumed to be honest – then it encourages more sloppiness and more outright fraud.

__
*An interesting alt-career folks. One of my old grad school peeps has been in this industry for years and appears to really love it.

**I will admit, my eyebrows go up when the person being thrown under the bus for a mistake or a data fraud is someone who is no longer in the academic science publishing game and has very little to lose compared with the other authors.

The joke about how you’d like to have some financial conflicts of interest to declare, but sadly you have none, is no longer amusing.

Knock it off.

Someone or other on the Twitts, or possibly a blog comment, made a remark about academic citation practices that keeps eating at me.

It boils down to this.

One of the most fundamental bits of academic credit that accrues to authors is the citation of their research papers. Citations form the ballyhooed h-index (X papers with at least X cites each), go into the “Highly Cited” measure of awesomeness, and are generally viewed as an important indication of your impact on science.
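For the curious, the h-index parenthetically defined above is simple to compute: sort a researcher's citation counts in descending order and find the largest h such that the h-th paper has at least h citations. A minimal sketch in Python, with made-up citation counts:

```python
def h_index(citations):
    """Largest h such that the author has h papers
    with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # counts only decrease from here
    return h

# Hypothetical citation counts for one author's papers:
print(h_index([25, 8, 5, 3, 3, 1]))  # → 3
```

Note that citing a review instead of the original papers inflates the review author's counts in exactly this metric, which is the gripe being made here.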

Consequently, when you choose to cite a review article to underline a point you are making in your own article, you are taking the credit that rightfully goes to the people who did the actual work, and handing it over to some review author.

Review authors are extracting surplus value from the people who did the actual creating. Kind of like a distributor of widgets extracts value from those people who actually made them by providing the widgets in an easy/efficient location for use. Good for them but…..

So here’s the deal. If you are citing a review only as a sort of collected works, stop doing that. I can make an exception when you are citing the review for the unique theoretical or synthetic contribution made by the review authors. Fine. But when you are just doing it because you want to make a general “..it is well established that Bunnies make it to the hedgerow in 75% of baseline time when they are given amphetamine” type of point, don’t do that. Cite some of the original authors!

If you really need to, you can cite (Jo et al, 1954, Blow et al 1985, Moe et al 2005; see Pig and Dog, 2013 for recent review).

Look at it this way. Would you rather your papers were cited directly? Or are you okay with the citations for something to which you contributed fundamentally being meta-cites of some review article?

This is, vaguely, related to an ongoing argument we have around here with respect to the proper treatment of authors who are listed as contributing “co-equally” to a given published paper. My position is that if we are to take this seriously, then it is perfectly fine* for the person listed second, third or eighth in the list of allegedly equal contributors to re-order the list on his or her CV. When I say this, my dear friend and ex-coblogger Comrade PhysioProffe loses his marbles and rants about how it is falsifying the AcademicRecord to do so. This plays into the story I have for you.

Up for your consideration today is an obscure paper on muramyl peptides and sleep (80 PubMed hits).

I ran across Muramyl peptides and the functions of sleep authored by one Richard Brown from The University of Newcastle in what appears to be a special issue of Behavioural Brain Research on The Function of Sleep (Volume 69, Issues 1–2, July–August 1995, Pages 85–90). The Preface to the issue indicates these Research Reports (on the original PDFs; termed Original Research Article on the online issue list; remember that now) arise from The Ravello Symposium on ‘The Function of Sleep’ held May 28-31, 1994.

So far so good. I actually ran across this article by clicking on an Addendum in the Jan 1997 issue. This Addendum indicates:

In the above paper an acknowledgement of unpublished data was omitted from the text during preparation. This omission could affect the future publication of the full set of data. Thus the author, Dr. Richard Brown, has agreed to share the authorship of the paper with the following persons: J. Andren, K. Andrews, L. Brown, J. Chidgey, N. Geary, M.G. King and T.K. Roberts.

So I tried to Pubmed Brown R and a few of the co-authors to see if there was any subsequent publication of the “full set of data” and….nothing. Hmmm. Not even the original offending article? So I looked for Brown R and sleep, muramyl, etc. Nada. Wow, well maybe for some reason the journal wasn’t indexed? No, because the first other article I looked for was there. Ok, weird. Next I searched for the journal date and month. Fascinatingly, PubMed lists these as “Review”. When the print PDFs say “Research Report” and the journal’s online materials list them as “Original Research Articles”.

But it gets better….scanning down the screen and …..Whoa!

Behav Brain Res. 1995 Jul-Aug;69(1-2):85-90. Muramyl peptides and the functions of sleep. Andren J, Andrews K, Brown L, Chidgey J, Geary N, King MG, Roberts TK. Department of Psychology, University of Newcastle, Australia.

Now this Richard Brown guy has been disappeared altogether from the author line! Without any obvious indication of this on the ScienceDirect access to the journal issue or article.

The PubMed record indicates there is an Erratum in Behav Brain Res 1997 Jan;82(2):245, but this is the Addendum I quoted above. Searching ScienceDirect for “muramyl peptides” pulls up the original article and Addendum but no further indication of Erratum or correction or retraction.

Wow. So speaking to PP’s usual point about falsifying the academic record, this whole thing has been a clusterbork of re-arranging the “academic record”.

Moving along, the Web of Science indicates that the original, credited solely to Brown, has been cited 9 times. First by the Addendum and then 8 more times after the correction…including one in 2011 and one in 2012. Who knows when the PubMed record was changed, but clearly the original Addendum indicating that credit should be shared was ignored by ISI and these citing authors alike.

The new version, with the R. Brown-less author line, has been cited 4 times, including papers published in Jan 2008 and Sept 2008 that indeed cite the R. Brown-less author list. So the two and possibly three most recent citations of the R. Brown version have minimal excuse.

Okay, okay, obviously one would have to have done a recent database search for the article (perhaps with a reference management software tool) to figure out there was something wrong. But even so, who the heck would try to figure out why EndNote wasn’t finding it rather than just typing this single-author reference in by hand? After all, the pdf is right there in front of you…..clearly the damn thing exists.

This is quite possibly the weirdest thing I’ve seen yet. There must have been some determination of fraud or something to justify altering the Medline/PubMed record, right? There must have been some buy-in from the journal Publisher (Elsevier) that this was the right thing to do.

So why didn’t they bother to fix their ScienceDirect listing and the actual PDF itself with some sort of indication as to what occurred and why these folks were given author credit and why Richard Brown was removed entirely?

__

*The fact that nobody seems to agree with me points to the fact that nobody really views these as equal contributions one little bit.

h/t: EvilMonkey who used to blog at Neurotopia.

23andme and the Cold Case

August 15, 2013

By way of brief introduction, I last discussed the 23andme genetic screening service in the context of their belated adoption of IRB oversight and interloper paternity rates. You may also be interested in Ed Yong’s (or his euro-caucasoid doppelganger’s) results.

Today’s topic is brought to you by a comment from my closest collaborator on a fascinating low-N developmental biology project.

This collaborator raised a point that extends from my prior comment on the paternity post.

But, and here’s the rub, the information propagates. Let’s assume there is a mother who knows she had an affair that produced the kid or a father who impregnated someone unknown to his current family. Along comes the 23 and me contact to their child? Grandchild? Niece or nephew? Brother or sister? And some stranger asks them, gee, do you have a relative with these approximate racial characteristics, of approximately such and such age, who was in City or State circa 19blahdeblah? And then this person blast emails their family about it? or posts it on Facebook?

It also connects with a number of issues raised by the fact that 23andme markets to adoptees in search of their genetic relatives. This service is being used by genealogy buffs of all stripes and one cannot help but observe that one of the more ethically complicated results will be the identification of unknown genetic relationships. As I alluded to above, interloper paternity may be identified. Also, one may find out that a relative gave a child up for adoption…or that one fathered a child in the past and was never informed.

That’s all very interesting but today’s topic relates to crimes in which DNA evidence has been left behind. At present, so far as I understand, the DNA matching is to people who have already crossed the law enforcement threshold. In fact there was a recent brouhaha over just what sort of “crossing” of the law enforcement threshold should permit the cops to take your DNA, if I am not mistaken. This does no good, however, if the criminal has never come to the attention of law enforcement.

Ahhhh, but what if the cops could match the DNA sample left behind by the perpetrator to a much larger database. And find a first or second cousin or something? This would tremendously narrow the investigation, wouldn’t it?
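To see why a cousin match narrows things so dramatically, it helps to recall the textbook expectation that the fraction of autosomal DNA shared identical-by-descent roughly halves with each additional meiosis separating two relatives. A back-of-the-envelope sketch (this ignores the substantial variance around these averages, and the function name is mine, not anything from 23andme):

```python
def expected_sharing(cousin_degree):
    """Expected fraction of autosomal DNA shared identical-by-descent
    between full cousins of the given degree.
    First cousins (degree 1) share ~12.5% on average; each further
    degree adds two meioses, quartering the expectation."""
    return 0.5 ** (2 * cousin_degree + 1)

for degree in (1, 2, 3):
    print(f"{degree} cousins: ~{expected_sharing(degree):.2%} shared")
# first cousins ~12.50%, second cousins ~3.12%, third cousins ~0.78%
```

So a database hit sharing a few percent of its DNA with a crime-scene sample flags, on average, a second-cousin-sized family cluster, exactly the kind of lead an enterprising detective could run down.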

It looks like 23andme is all set to roll over for whichever enterprising police department decides to try.

From the Terms of Service.

Further, you acknowledge and agree that 23andMe is free to preserve and disclose any and all Personal Information to law enforcement agencies or others if required to do so by law or in the good faith belief that such preservation or disclosure is reasonably necessary to: (a) comply with legal process (such as a judicial proceeding, court order, or government inquiry) or obligations that 23andMe may owe pursuant to ethical and other professional rules, laws, and regulations; (b) enforce the 23andMe TOS; (c) respond to claims that any content violates the rights of third parties; or (d) protect the rights, property, or personal safety of 23andMe, its employees, its users, its clients, and the public.

Looks to me that all the cops would need is a warrant. Easy peasy.

__
h/t to Ginny Hughes [Only Human blog] for cuing me to look over the 23andme ToS recently.

There is an entry up on the Scientific American Blog Network’s Guest Blog by two of the principals of μBiome. In Crowdfunding and IRBs: The Case of uBiome Jessica Richman and Zachary Apte address prior criticism of their approach to the treatment of human subjects. In particular, the criticism over their failure to obtain approval from an Institutional Review Board (IRB) prior to enrolling subjects in their study.

In February, there were several posts about the ethics of this choice from a variety of bloggers. (See links from Boundary Layer Physiology (here, here, here) Comradde Physioprof (here, here, here), Drugmonkey (here), Janet Stemwedel (here), Peter Lipson (here).) We greatly appreciate the comments, suggestions and criticisms that were made. Some of the posts threw us off quite a bit as they seemed to be personal attacks rather than reasoned criticisms of our approach.

If you follow the linked blog posts, you will find that when Richman and/or Apte engaged with the arguments, they took a wounded tone. This is a stance they continue.

We thought it was a bit… much, shall we say, to compare us to the Nazis (yes, that happened, read the posts) or to the Tuskegee Experiment because we funded our project without first paying thousands of dollars for IRB approval for a project that had not (and might never have) happened.

I was one of the ones who brought up the Tuskegee Syphilis Experiment. Naturally, this was by way of making an illustrative example of why we have modern oversight of research experiments. I did not anticipate that any of the research planned by the uBiome folks would border on this sort of horrible mistreatment of research subjects. Not at all. And mentioning that older history does not so accuse them either.

PhysioProf made this point very well.

UPDATE 2: The need for IRB review has little to do with researchers’ intentions to behave ethically–nowadays it is rare that we are talking about genuinely evil exploitative abusive shitte–but rather that it is surprisingly complicated to actually implement processes, procedures, and protocls that thoroughly safeguard human subjects’ rights and safety, even with the best of intentions. This inquiry has absolutely nothing to so with whether the uBiome people are nice guys who just want to do cool science with the best of intentions. That is irrelevant.

IRBs are there exactly to ensure that earnest scientists with the best of intentions in their hearts are forced to think through all of the possible ramifications of their proposed human subjects research projects in a thorough and systematic manner before they embark on their research. The evidence we are in possession of as of now suggests strongly that uBiome has not done so.

This is a critical reason why scientists using human or animal subjects need to adhere to the oversight/protection mechanisms. The second critical reason is that the people doing the research are biased. Again, it is not the case that one thinks all scientists are setting out to do horrible Mengele type stuff in pursuit of their obsessions. No. It is that we all are subject to subtle influences on our thinking. And we, as humans, have a very well documented propensity to see things our own way, so to speak. Even when we think we are being totally objective and/or professional. By the very nature of this, we are unable to see for ourselves where we are going astray.

Thus, external oversight and review provides a needed check on our own inevitable bias.

We can all grumble about our battles with IRBs (and Institutional Animal Care and Use Committees for animal subject research). The process is far from perfect so a little bit of criticism is to be expected.

Nevertheless I argue that we should all embrace the oversight process unreservedly and enthusiastically. We should be proud, in fact, that we conduct our research under such professional rules. And we should not operate grudgingly, ever seeking to evade or bypass the IRB/IACUC process.

Richman and Apte of μBiome need to take this final step in understanding. They are not quite there yet:

Before we started our crowdfunding campaign, we consulted with our advisors at QB3, the startup incubator at UCSF, and the lawyers they provided us. We were informed (correctly) that IRBs are only required for federally funded projects, clinical trials, and those who seek publication in peer-reviewed journals. That’s right — projects that don’t want federal money, FDA approval, or to publish in traditional journals require no ethical review at all as far as we know.

Well, that is just plain wrong. Being a professional scientist is what “requires” us to seek oversight of our experiments. I believe I’ve used the example in the past of someone like me buying a few operant chambers out of University Surplus, setting them up in my garage and buying some rats from the local pet store. I could do this. I could do this without violating any laws. I could dose them* with all sorts of legally-obtainable substances, very likely. Sure, no legitimate journal would take my manuscript but heck, aren’t we in an era where the open access wackaloons are advocating self-publishing everything on blogs? I could do that. Or, more perniciously, this could be my little pilot study incubator. Once I figured I was on to something, then I could put the protocols through my local IACUC and do the “real” study and nobody would be the wiser.

Nobody except me, that is. And this is why such a thing is never going to happen. Because I know it is a violation of my professional obligations as I see them.

Back to Richman and Apte’s excuse making:

Although we are incubated in the UCSF QB3 Garage, we were told that we could not use UCSF’s IRB process and that we would have to pay thousands of dollars for an external IRB. We didn’t think it made sense (and in fact, we had no money) to pay thousands of dollars on the off chance that our crowdfunding campaign was a success.

and whining

We are happy to say that we have completed IRB review and that our protocol has been approved. The process was extremely time-consuming, and expensive. We went back and forth for months to finally receive approval, exchanging literally hundreds of pages of documents. We spent hundreds of hours on the project.

First, whatever the UCSF QB3 Garage is, it was screwing up if it never considered such issues. Second, crying poverty is no excuse. None whatsoever. Do we really have to examine how many evils could be covered under “we couldn’t afford it”? Admittedly, this is a problem for this whole idea of crowd-funded science but..so what? Solve it. Just like they** had to solve the mechanisms for soliciting the donations in the first place. Third….yeah. Doing things ethically does require some effort. Just like conducting experiments and raising the funds to support them requires effort. Stop with the whining already!

The authors then go on in a slightly defensive tone about the fact they had to resort to a commercial IRB. I understand this and have heard the criticisms of such Pay-for-IRB-oversight entities. From my perspective this is a much, much lesser concern. The absolute key is to obtain some oversight that is independent of the research team. That is first-principles stuff to my view. They also attempt to launch a discussion of whether novel approaches to IRB oversight and approvals need to be created to deal with citizen-science and crowd-funded projects. I congratulate them on this and totally agree that it needs to be discussed amongst that community.

What I do not appreciate is their excuse making. Admitting their error and seeking to generate new structures which satisfy the goal of independent oversight for citizen-science in the future is great. But all the prior whinging and excuse making, combined with the hairsplitting over legal requirements, severely undercuts progress. That aspect of their argument is telling their community that the traditional institutional approaches do not apply to them.

This is wrong.

UPDATE: Read uBiome is determined to be a cautionary tale for citizen science over at thebrokenspoke blog.
__
*orally. not sure civilians can get a legal syringe needle anywhere.

**(the global crowdfund ‘they’)

Additional Reading:

Animals in Research: The conversation begins
Animals in Research: IACUC Oversight
Animals in Research: Guide for the Care and Use of Laboratory Animals
Animals in Research: Mice and Rats and Pigeons…Oh My!
Virtual IACUC: Reduction vs. Refinement
Animals in Research: Unnecessary Duplication

Well, this is provocative. One James Hicks has a new opinion bit in The Scientist that covers the usual ground about ethics, paper retractions and the like in the sciences. It laments several decades of “Responsible Conduct of Research” training and the apparent utter failure of this training to do anything about scientific misconduct. Dr. Hicks has also come up with a very provocative and truthy graph. From the article, it appears to plot annual data from 1997 to 2011: the retraction rate (from this Nature article) against the NIH Success Rate (from Science Insider).

Like I said, it appears truthy. Decreasing grant success is associated with increasing retraction rates. Makes a lot of sense. Desperate times drive the weak to desperate measures.

Of course, the huge caveat is the third factor…..time. There has been a lot more attention paid to scientific retractions lately. Nobody knows if increased retraction rates over time are being observed because fraud is up or because detection is up. It is nearly impossible to ever discover this. Since NIH grant success rates have likewise been plummeting as a function of Fiscal Year, the relationship is confounded.
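The confound is easy to demonstrate with a quick simulation (the numbers below are invented for illustration, not the actual Nature or Science Insider figures): any two series that both trend steadily over the same years will correlate strongly, even if they are causally unrelated.

```python
# Simulated illustration of confounding by time. These series are
# made up; they are NOT the real retraction or NIH success data.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1997, 2012)

# "Retraction rate": rises steadily over time, plus noise.
retractions = 0.5 * (years - 1997) + rng.normal(0, 0.5, len(years))
# "NIH success rate": falls steadily over time, plus noise.
success = 30 - 0.8 * (years - 1997) + rng.normal(0, 0.5, len(years))

# The two series correlate strongly by construction, even though
# neither was generated with any reference to the other.
r = np.corrcoef(retractions, success)[0, 1]
print(f"correlation: {r:.2f}")  # strongly negative, purely from the shared time trend
```

The point is that a scatterplot of two trending series cannot, by itself, distinguish "desperation drives fraud" from "both variables happen to move with time."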

You know how waving the word “integrity” around in a discussion of the quotidian practice of science works on Your Humble Narrator, right Dear Reader?

well, one @dr_beckie mused:

tweeting for networking (baby limits conference attendance), but snr colleague warned against talking to openly about my research (1/2)

and

in such an open forum, is it really so naive to have faith in scientific integrity? (2/2)

a little prodding brought forth this revelation:

@drugmonkeyblog not fear it would be disappointment. Integrity is acknowledging input ie the chat in the pub that gave you initial idea.

As I observed, her Acknowledgement sections must be a wonder to behold. Perhaps the first ever need for Supplemental Acknowledgements?

Now of course I cannot possibly know the full subtlety of dr_beckie’s views on scientific priority and the “integrity” of differing thresholds for formal acknowledgement of input from scientific peers. But I do know there is an awful lot of wackaloon delusion out there about these issues.

So let me say this. A failure to appreciate that your sub(sub, sub)field of science is overladen with bushels of extremely smart, well trained and motivated individuals who are reading the exact same published literature that you are is not evidence of a lack of “integrity” in the field. If you had some brilliant idea or synthesis, odds are very good that someone else has the exact same idea.

This is why I take an exceptionally skeptical view of claims that so-and-so “stole” the ideas of some other scientist.

I am not saying intellectual theft doesn’t occur in science. I am confident it does: someone takes an idea expressed by another, one they had not yet arrived at themselves, and manages to reach the threshold of academic credit (a paper authorship, usually) without properly crediting the original person. Somewhere below this, however, is a vast, vast territory of normal scientific operation in which “theft” is not really the appropriate word.

Chats in the pub, discussions at meeting presentations and thoughts expressed at lab meeting do not all deserve formal Acknowledgement. If these roots of a scientific paper were accurately recorded, I’m not kidding that the Acknowledgement section would go on for pages. Clearly, this section is not intended to cover all possible casual interactions that led up to the clicks in your brain that crystallized into a scientific Idea. There is a threshold.

I guarantee you that there are almost as many opinions about this precise threshold as there are scientists who are publishing. Multiplied by two, in fact, because I feel confident that any given scientist will have a different standard for crediting some other loser colleague versus when they see it appropriate that their own brilliant thoughts receive proper attribution!

Now we come around to the original Twitt and @dr_beckie’s concern that discussing her work online involves concerns about scientific integrity when it comes to proper acknowledgement, presumably, of her brilliant 140-character contributions to her subfield. Acknowledgement, one assumes, in published papers down the road.

I am not seeing where there is any specific concern. All that differs here is the potential size of the audience…but recall that really it is only participants in a scientific subfield that matter. So you could have made the observation at a meeting during the question period. Or at your poster to several meeting attendees. Most of the time a normal scientist is not looking around the meeting room trying to gauge the “integrity” of some 200 or 500 scientists before they ask their question or make their observation. Each and every one of these people who hear you might, if the notion strikes them, communicate your brilliance to other scientists who didn’t happen to be in attendance for some reason. You have no control over this. Most of us rely, as @dr_beckie would have it, on the normal practices and “integrity” of our fields in these situations.

Furthermore, many of us realize the fundamental reality of science priority and scientific ideas. It doesn’t matter who has the idea. What matters is who can conduct the experiments, interpret the data and publish the paper. This is the way science is credited. By. Producing.

Getting into he said/she said over who came up with an idea first? Nearly a complete waste of time. If you are really paranoid about these matters? STFU! Don’t talk to anyone about “your” ideas. Fine. Whatever. But don’t come whining around about “integrity” when the off-hand remark you made in the pub* seems to be a fundamental building block of a paper that appears a year later with the author lines including one of your drinking buddies!

Let me just note here that I’ve been around the block a few times myself. There are published papers out there where I got screwed out of an authorship (and even Acknowledgement) in a manner anyone at all would admit showed a lack of integrity. It is going to happen now and again. I deal. I move on. Against this background, a lack of “We’d really like to thank DrugMonkey for his random spewing at the pub one night late at the CPDD annual meeting” kinda pales. It isn’t like your appearance in the “Acknowledgement” section carries any sort of weight or would be put on your CV or tenure package, right**?

For full disclosure, I’m sure I’ve published papers that someone else thinks should have included an Acknowledgement of their brilliant input. I know for a certain fact that a particular colleague of mine is pouty*** about not being an author on one particular paper. This person’s position is that s/he expressed the “idea” before I did. Of course I remember the event quite clearly and this person is high as a kite…it was my idea. But guess what? Either of us could very well have said it out loud first. Easily. It was an obvious thing to do. I just said it first, out loud, and with that particular person within hearing distance. It would be pathetic for me to claim that the idea was a result of my unique brilliance.

Now as chance would have it I was the one who actually did the study and published it; my colleague did not. I will note that this colleague and I probably talked about dozens of ideas that could have been, later became or may yet become papers back in the day. Hell, we still talk about many ideas that could/may/will become papers.

Back to Twitter.

It strikes me that there is one nasty little implication here, one that I think pervades a lot of the rationale of these OpenScience and WeNeedCreditForBlogging!!!11!! types. They are trying to get credit for “having the idea” when they do not deserve it and should not deserve it. I don’t blog about actual science all that much but I’ve done it now and then. I’m pretty sure in a handful of such posts I’ve made observations or expressed curiosity about matters that could possibly be addressed in the field by a publication or two. Just like I’ve expressed observations or curiosity, IRL, in 1) poster sessions, 2) platform presentations, 3) shooting the shit with colleagues, 4) grant reviews, 5) paper reviews, 6) lab meetings and other places.

I don’t expect credit. I do not assume as a default that papers that come out later that can be six-degrees-of-Kevin-Bacon connected to my blathering must have stolen my ideas. It is nice to receive an Acknowledgement if the authors believe it is appropriate. Sure. Everyone loves that. But I’m not on the barricades screaming about “integrity” if it doesn’t happen.

Life is too short.

And I have science to publish.
__
*for all you know, the drinking buddies are all “Oh shit, I better k3rn my postdoc! Some lame-brain in doc_beckie’s group finally thought up the thought we’ve been working on for six months….we’re gonna get scooped!!!”

**Please tell me I’m right.

***This is not infrequently in a context in which this person may be trying to get me to buy the next round, FWIW.

Professor David E. Nichols is a legend for scientists who are interested in the behavioral pharmacology of 3,4-methylenedioxymethamphetamine (MDMA, aka, ‘Ecstasy’). If you look carefully at many of the earlier papers (and some not-so-early) you will see that people obtained their research supply of this drug from him. As well as much of their background knowledge from publications he has co-authored. He has also worked on a number of other compounds which manipulate dopaminergic and/or serotonergic neurotransmission, some of which are of great interest to those in the recreational user community who seek (ever and anon) new highs, particularly ones that might be similar to their favorite illicit drugs but that may not currently be controlled. Those who are interested in making money supplying the recreational consumer population are particularly interested in the latter, of course.

Professor Nichols has published a recent viewpoint in Nature in which he muses on the uses to which some of his work has been put:

A few weeks ago, a colleague sent me a link to an article in the Wall Street Journal. It described a “laboratory-adept European entrepreneur” and his chief chemist, who were mining the scientific literature to find ideas for new designer drugs — dubbed legal highs. I was particularly disturbed to see my name in the article, and that I had “been especially valuable” to their cause. I subsequently received e-mails saying I should stop my research, and that I was an embarrassment to my university.

I have never considered my research to be dangerous, and in fact hoped one day to develop medicines to help people.

As with most scientists, I have little doubt that this was his hope. And ultimately, I agree with his observation that

There really is no way to change the way we publish things, although in one case we did decide not to study or publish on a molecule we knew to be very toxic. I guess you could call that self-censure. Although some of my results have been, shall we say, abused, one cannot know where research ultimately will lead. I strive to find positive things, and when my research is used for negative ends it upsets me.

It is unfortunate that Professor Nichols has been put in this position. Undoubtedly John Huffman of JWH-018 fame (one of the more popular synthetic full-agonist cannabinoids sprayed on herbal incense products) feels much the same about his own work. But I suppose this is the risk that is run with many lines of basic and pre-clinical work. Not just recreational drug use but even therapeutic use; after all, off-label prescribing has to start somewhere. And individual health (or do I mean “health”) practices such as high-dosing on blueberries or cranberries, various so-called “nutritional supplements”, avoiding certain foods, exercise regimes, diets, etc., may be based on no more than a single scientific paper, right?

So we should all feel some bit of Professor Nichols’ pain, even if our own work hasn’t been mis-used or over-interpreted…yet.

UPDATE: Thoughts from David Kroll over at the cenblog home of Terra Sigillata.