It is time. Well past time, in fact.

Time for the Acknowledgements sections of academic papers to report on a source of funding that is all too often forgotten.

In fact, I cannot remember once seeing a paper or manuscript I have received to review mention it.

It’s not weird. Most academic journals I am familiar with do demand that authors report the source of funding. Sometimes there is an extra declaration that we have reported all sources. It’s traditional. Grants for certain sure. Gifts in kind from companies are supposed to be included as well (although I don’t know if people include special discounts on key equipment or reagents, tut, tut).

In recent times we’ve seen the NIH get all astir precisely because some individuals were not reporting funding to them that did appear in manuscripts and publications.

The statements about funding often come with some sort of comment that the funding agency or entity had no input on the content of the study or the decision to publish (or not publish) the data.

The uses of these declarations are several. Readers want to know where there are potential sources of bias, even if the authors have just asserted no such thing exists. Funding bodies rightfully want credit for what they have paid hard cash to create.

Grant peer reviewers want to know how “productive” a given award has been, for better or worse and whether they are being asked to review that information or not.

It’s common stuff.

We put in both the grants that paid for the research costs and any individual fellowships or traineeships that supported any postdocs or graduate students. We assume, of course, that any technicians have been paid a salary and are not donating their time. We assume the professor types likewise had their salary covered during the time they were working on the paper. There can be small variances but these assumptions are, for the most part, valid.

What we cannot assume is the compensation, if any, provided to any undergraduate or secondary school authors. That is because this is a much more varied reality, in my experience.

Undergraduates could be on traineeships or fellowships, just like graduate students and postdocs. Summer research programs are often compensated with a stipend and housing. There are other fellowships active during the academic year. Some students are on work-study and are paid a salary as part of school-related financial aid…in a good lab this can be something more advanced than mere dishwasher or cage changer.

Some students receive course credit, as their lab work is considered a part of the education that they are paying the University to receive.

Sometimes this course credit is an optional choice- something that someone can choose to do but is not absolutely required. Other times this lab work is a requirement of a Major course of study and is therefore something other than optional.

And sometimes…..

…sometimes that lab work is compensated with only the “work experience” itself. Perhaps with a letter or a verbal recommendation from a lab head.

I believe journals should extend their requirement to Acknowledge all sources of funding to the participation of any trainees who are not being compensated from a traditionally cited source, such as a traineeship. There should be lines such as:

Author JOB participated in this research as part of an undergraduate course fulfilling obligations for a Major in Psychology.

Author KRN volunteered in the lab for ~ 10 h a week during the 2020-2021 academic year to gain research experience.

Author TAD volunteered in the lab as part of a high school science fair project supported by his dad’s colleague.

Etc.

I’m not going to go into a long song and dance as to why…I think when you consider what we do traditionally include, the onus is quickly upon us to explain why we do NOT already do this.

Can anyone think of an objection to stating the nature of the participation of students prior to the graduate school level?

Forgiveness

September 9, 2019

I’ve already lost the thread to it but some friend of Joi Ito, the MIT Media Lab guy who took Epstein’s money, was recently trying to defend his actions. If I caught the gist of the piece, it was that Ito allegedly really believed that Epstein had been reformed, or at least had been sufficiently frightened by his legal consequences not to re-offend with his raping of children.

I want to get past the question of whether Ito was disingenuous or so blinded by what he wanted (Epstein’s money) that he was willing to fool himself. I want to address the issue of forgiveness. Because even if Ito genuinely believed Epstein was reformed, scared and would never in a million years offend again…he had to forgive him for his past actions.

I was pondering this on my commute this morning.

I do not forgive.

I only rarely forget.

I hold grudges for decades.

I have been known to ruminate and dwell and to steep.

I am trying my best to come up with cases where I’ve suffered a significant harm or insult from someone and managed to forgive them at a later date. I’m not recalling any such thing.

On the other hand, nobody has ever offered me millions of dollars to overlook their past behavior, either.

On a recent post, DNAMan asks:

If you were reviewing an NIH proposal from a PI who was a known (or widely rumored) sexual harasser, would you take that into account? How?

My immediate answer was:

I don’t know about “widely rumored”. But if I was convinced someone was a sexual harasser this would render me unable to fairly judge the application. So I would recuse myself and tell the SRO why I was doing so. As one is expected to do for any conflicts that one recognizes about the proposal.

I’m promoting this to a new post because this also came up in the Twitter discussion of Lander’s toast of Jim Watson. Apparently this is not obvious to everyone.

One is supposed to refuse to review grant proposals, and manuscripts submitted for publication, if one feels that one has a conflict of interest that renders the review biased. This is very clear. Formal guidelines tend to concentrate on personal financial benefits (i.e. standing to gain from a company in which one has ownership or other financial interest), institutional benefits (i.e., you cannot review NIH grants submitted from your University since the University is, after all, the applicant and you are an agent of that University) and mentoring / collaborating interests (typically expressed as co-publication or mentoring formally in past three years). Nevertheless there is a clear expectation, spelled out in some cases, that you should refuse to take a review assignment if you feel that you cannot be unbiased.

This is beyond any formal guidelines. A general ethical principle.

There is a LOT of grey area.

As I frequently relate, in my early years when a famous Editor asked me to review a manuscript from one of my tighter science homies and I pointed out this relationship, I was told “If I had to use that standard as the Editor I would never get anything reviewed. Just do it. I know you are friends.”

I may also have mentioned that when first on study section I queried an SRO about doing reviews for PIs who were scientifically sort of close to my work. I was told a similar thing about how reviews would never get done if vaguely working in the same areas and maybe one day competing on some topic were the standard for COI recusal.

So we are, for the most part, left up to our own devices and ethics about when we identify a bias in ourselves and refuse to do peer review because of this conflict.

I have occasionally refused to review an NIH grant because the PI was simply too good of a friend. I can’t recall being asked to review a grant proposal from anyone I dislike personally or professionally enough to trigger my personal threshold.

I am convinced, however, that I would recuse myself from the review of proposals or manuscripts from any person that I know to be a sexual harasser, a retaliator and/or a bigot against women, underrepresented groups generally, LGBTQ, and the like.

There is a flavor of apologist for Jim Watson (et rascalia) that wants to pursue a “slippery slope” argument. Just Asking the Questions. You know the type. One or two of these popped up on twitter over the weekend but I’m too lazy to go back and find the thread.

The JAQ-off response is along the lines of “What about people who have politics you don’t like? Would you recuse yourself from a Trump voter?”.

The answer is no.

Now sure, the topic of implicit or unconscious bias came up and it is problematic for sure. We cannot recuse ourselves when we do not recognize our bias. But I would argue that this does not in any way suggest that we shouldn’t recuse ourselves when we DO recognize our biases. There is a severity factor here. I may have implicit bias against someone in my field that I know to be a Republican. Or I may not. And when there is a clear and explicit bias, we should recuse.

I do not believe that people who have proven themselves to be sexual harassers or bigots on the scale of Jim Watson deserve NIH grant funding. I do not believe their science is going to be so much superior to all of the other applicants that it needs to be funded. And so if the NIH disagrees with me, by letting them participate in peer review, clearly I cannot do an unbiased job of what NIH is asking me to do.

The manuscript review issue is a bit different. It is not zero-sum and I never review that way, even for the supposedly most-selective journals that ask me to review. There is no particular reason to spread scoring, so to speak, as it would be done for grant application review. But I think it boils down to essentially the same thing. The Editor has decided that the paper should go out for review and it is likely that I will be more critical than otherwise.

So….can anyone see any huge problems here? Peer review of grants and manuscripts is opt-in. Nobody is really obliged to participate at all. And we are expected to manage the most obvious of biases by recusal.

A Reader submitted this gem of a spam email:

We are giving away $100 or more in rewards for citing us in your publication! Earn $100 or more based on the journal’s impact factor (IF). This voucher can be redeemed your next order at [Company] and can be used in conjunction with our ongoing promotions!

How do we determine your reward?
If you published a paper in Science (IF = 30) and cite [Company], you will be entitled to a voucher with a face value of $3,000 upon notification of the publication (PMID).

This is a new one on me.
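If the scheme in that spam works the way the quoted Science example suggests (a flat $100 per impact-factor point, so IF = 30 yields a $3,000 voucher), the arithmetic is trivially sketched. This is a hypothetical reconstruction; the function name and the flat-rate assumption are my own.

```python
# Hypothetical sketch of the spam's "reward" scheme: a flat $100 per
# impact-factor point, which matches the quoted example (IF = 30 -> $3,000).
def citation_voucher(impact_factor: float, rate_per_if_point: float = 100.0) -> float:
    """Voucher face value for citing the company in a journal with the given IF."""
    return rate_per_if_point * impact_factor

print(citation_voucher(30))  # the quoted Science example
```

Which, of course, is exactly why this is so unseemly: the "reward" scales with the journal, not with anything about the science.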

Citation practices

July 27, 2015

I think that at some point, protracted refusal to cite relevant work amounts to scientific misconduct.

There is one thing that concerns me about the Journal of Neuroscience banning three authors from future submission in the wake of a paper retraction.

One reason you might seek to get harsh with some authors is if they have a track record of corrigenda and errata supplied to correct mistakes in their papers. This kind of pattern would support the idea that they are pursuing an intentional strategy of sloppiness to beat other competitors to the punch and/or just don’t really give a care about good science. A Journal might think either “Ok, but not in our Journal, chumpos” or “Apparently we need to do something to get their attention in a serious way”.

There is another reason that is a bit worrisome.

One of the issues I struggle with is the whisper campaign about chronic data fakers. “You just can’t trust anything from that lab.” “Everyone knows they fake their data.”

I have heard these comments frequently in my career.

On the one hand, I am a big believer in innocent-until-proven-guilty and therefore this kind of crap is totally out of bounds. If you have evidence of fraud, present it. If not, shut the hell up. It is far too easy to assassinate someone’s character unfairly and we should not encourage this for a second.

Right?

I can’t find anything on PubMed that is associated with the last two authors of this paper in combination with erratum or corrigendum as keywords. So, there is no (public) track record of sloppiness and therefore there should be no thought of having to bring a chronic offender to task.

On the other hand, there is a lot of undetected and unproven fraud in science. Just review the ORI notices and you can see just how long it takes to bust the scientists who were ultimately proved to be fraudsters. The public revelation of fraud to the world of science can be many years after someone first noticed a problem with a published paper. You also can see that convicted fraudsters have quite often continued to publish additional fraudulent papers (and win grants on fraudulent data) for years after they are first accused.

I am morally certain that I know at least one chronic fraudster who has, to date, kept one step ahead of the short and ineffectual arm of the ORI law despite formal investigation. There was also a very curious case I discussed for which there were insider whispers of fraud and yet no findings that I have seen yet.

This is very frustrating. While data faking is a very high risk behavior, it is also a high reward behavior. And the risks are not inevitable. Some people get away with it.

I can see how it would be very tempting to enact a harsh penalty on an otherwise mild pretext for those authors that you suspected of being chronic fraudsters.

But I still don’t see how we can reasonably support doing so, if there is no evidence of misconduct other than the rumor mill.

A post at Retraction Watch alerts us to a paper retraction at the Journal of Neuroscience. The J Neuro notice on this paper reads:

The Journal of Neuroscience has received notification of an investigation by the Perelman School of Medicine at the University of Pennsylvania, which supports the journal’s findings of data misrepresentation in the article “Intraneuronal APP, Not Free Aβ Peptides in 3xTg-AD Mice: Implications for Tau Versus Aβ-Mediated Alzheimer Neurodegeneration” by Matthew J. Winton, Edward B. Lee, Eveline Sun, Margaret M. Wong, Susan Leight, Bin Zhang, John Q. Trojanowski, and Virginia M.-Y. Lee, which appeared on pages 7691–7699 of the May 25, 2011 issue. Because the results cannot be considered reliable, the editors of The Journal are retracting the paper.

From RetractionWatch we learn that the Journal has also issued a submission ban to three of the authors:

According to author John Trojanowski … he and Lee have been barred from publishing in Journal for Neuroscience for several years. Senior author Edward Lee is out for a year.

This is the first time I have ever heard of a Journal issuing a ban on authors submitting papers to them. This is an interesting policy.

If this were a case of a conviction for academic fraud, the issues might be a little clearer. But as it turns out, it is a very muddy case indeed.

A quote from the last author:

In a nut shell, Dean Glen Gaulton asserted that the findings in the paper were correct despite mistakes in the figures. I suggested to J. Neuroscience that we publish a corrigendum to clarify these mistakes for the readership of J Neuroscience

The old “mistaken figures” excuse. Who, might we ask, is at fault?

RetractionWatch quotes the second-senior author Trojanowski:

Last April, we got an email about an inquiry into figures that I would call erroneously used. An error was made by [first author] Matt Winton, who was leaving science and in transition between Penn and his new job. He was assembling the paper to submit it, there were several iterations of the paper. One set of figures was completely correct – I still don’t know what happened, but he got the files mixed up, and used erroneous figures

Winton has apparently landed a job as a market analyst*, providing advice to investors on therapeutics for Alzheimer’s Disease. Maybe the comment from Trojanowski is true and he was in a rush to get the paper off his desk as he started the new job**. Maybe. Maybe there is all kinds of blame to go around and the other authors should have caught the problem.

Or maybe this was one of those deliberate frauds in which someone took shortcuts and represented immunohistochemical images or immunoblots as something they were not. The finding from the University’s own investigation appears to confirm, however, that a legitimate mistake was made.

…so let us assume it was all an accident. Should the paper be retracted? or corrected?

I think there are two issues here that support the Journal’s right to retract the paper.

We cannot ignore that publication of a finding first has tremendous currency in the world of academic publishing. So does the cachet of publishing in one Journal over another. If a set of authors are sloppy about their manuscript preparation, provide erroneous data figures and they are permitted to “correct” the figures, they gain essentially all the credit. Potentially taking credit for priority or a given Journal level away from another group that works more carefully.

Since we would like authors to take all the care they possibly can in submitting correct data in the first place, it makes some sense to take steps to discourage sloppiness. Retraction is certainly one such discouragement. A ban on future submissions does seem, on the face of it, a bit harsh for a single isolated error. I might not opt for that if it were my decision. But I can certainly see where another scientist might legitimately want to bring down the ban hammer and I would be open to argument that it is necessary.

The second issue I can think of is related. It has to do with whether the paper acceptance was unfairly won by the “mistake”. This is tricky. I have seen many cases in which even to the relatively uninformed viewer, the replacement/correct figure looks a lot crappier/dirtier/equivocal than the original mistaken image. Whether right or wrong that so-called “pretty” data change the correctness of the interpretation and strength of the support, it is often interpreted this way. This raises the question of whether the paper would have gained acceptance with the real data instead of the supposedly mistaken data. We obviously can’t rewind history, but this theoretical concern should be easy to appreciate. Maybe the Journal of Neuroscience review board went through all of the review materials for this paper and decided that the faked figure sealed the acceptance? For this concern it really makes no difference to the Journal whether the mistake was unintentional or not, there is a strong argument that the integrity of its process requires retraction whenever there is significant doubt the paper would have been accepted without the mistaken image(s).

Given these two issues, I see no reason that the Journal is obligated to “abide by the Penn committee’s investigation” as Trojanowski appears to think they should be. The Journal could accept that it was all just a mistake and still have good reason to retract the paper. But again, a ban on further submissions from the authors seems a bit harsh.

Now, I will point out one thing in this scenario that chaps my hide. It is a frequent excuse of the convicted data faker that they were right, so all is well. RetractionWatch further quotes the senior author, Lee:

…the findings of this paper are extremely important for the Alzheimer’s disease field because it provided convincing evidence pointing out that a previous report claiming accumulation of intracellular Abeta peptide in a mouse model (3XFAD) is wrong (Oddo et al., Neuron 2003), as evidenced by the fact that this paper has been cited by others for 62 times since publication. Subsequent to our 2011 J. Neuroscience paper, others also have found no evidence of intracellular Abeta in the 3XFAD mice (e.g. Lauritzen et al., J. Neurosci, 2012).

I disagree that whether the figures are correct and/or repeatable is an issue that affects the decision here. You either have the correct data or you do not. You either submitted the correct data for review with the manuscript or you did not. Whether you are able to obtain the right data later, whether other labs obtain the right data or whether you had the right data in a mislabeled file all along is absolutely immaterial to whether the paper should be retracted.

The system itself is what needs to be defended. Because if you don’t protect the integrity of the peer review system – where authors are presumed to be honest – then it encourages more sloppiness and more outright fraud.

__
*An interesting alt-career folks. One of my old grad school peeps has been in this industry for years and appears to really love it.

**I will admit, my eyebrows go up when the person being thrown under the bus for a mistake or a data fraud is someone who is no longer in the academic science publishing game and has very little to lose compared with the other authors.

Most laboratories buy stuff that they need to do their research. It varies. From latex gloves to pipette tips. From mice to bunnies. From cocaine to ABD-xld500BZN….whatever that is. Operant boxes to sequencers. Stuff.

All of these cost money, which generally comes from the laboratory budgets: startup funds, unattached funds if you have ’em and, for the most part, research grants.

Consider this scenario.

We usually get our genotyping done outside of the lab. I mean, I could have this service performed in house by staff but there are many small vendors in my biotech/university/science community that will do it for us.

I met this guy at the bar. Or, maybe I recently ran into an old grad school friend. Some woman I postdoc’d with back in the day. A friend of my spouse. Whomever.

This person is starting up a brand new biotech support company, mom-and-pop kind of thing. This GenesRUs company is happy to take over our genotyping services.

I secure a quote. Wow. Two times the most expensive in-house estimate, the one that convinced me to hit the vendors in the first place. Maybe 3X the price of other local vendors.

But. But. This person is so nice. And we have a personal connection of some sort. Gee, they are still so small that they will come pick up from us at basically any time we want? And have results back prontissimo?

And you know. I HAVE the grant money. It isn’t going to kill our budget to dump a few extra thousands on this top-cost option every year. Even if it amounts to tens of thousands, hey, it’s just grant money, right?

The question, Dear Reader, is this.

Is it okay for me to use my PI’s prerogative to spend my grant money this way? Just because I want to?

Well this is interesting. After being spanked by the FDA for selling their services without proper review and approval of their medical test (as the FDA interpreted it), the 23andme company is back.

I received an email spam suggesting I purchase one of their kits as a Mother’s Day present.

Intrigued, I see this in an alert banner across the linked page.

23andMe provides ancestry-related genetic reports and uninterpreted raw genetic data. We no longer offer our health-related genetic reports. If you are a current customer please go to the health page for more information.

When you go to purchase a new kit you are obliged to check a box indicating you’ve read an additional warning.

I understand I am purchasing ancestry reports and uninterpreted raw genetic data from 23andMe for $99. I understand I will not receive any reports about my health in the immediate future, and there is no timeline as to which health reports might be available or when they might be available.

Ok. Got it.

So what about existing customers who purchased their kit in the old, pre-ban era? Guess I’d better visit that “health page.”

Current 23andMe customers who received health-related results prior to November 22, 2013 will continue to have access to that information. However, no new health-related updates will be provided to your account.

Customers who purchased kits before November 22, 2013 will still receive
health-related results.

Customers who purchase or have purchased 23andMe’s Personal Genome Service (PGS) on or after November 22, 2013, (date of compliance letter issued by the FDA) will receive their ancestry information and uninterpreted raw genetic data. At this time, we do not know the timeline as to which health reports might be available in the future or when they might be available.

Customers who purchased kits on or after November 22, 2013 through December 5, 2013 are eligible for a refund. 23andMe has notified all eligible customers by email with refund instructions. If you are eligible and have not received an email, please click here.

Ok, so they are not turning off the results already provided to the older customers. If you fell into the cease-and-desist gap, you don’t get your info (boo FDA) but you can get a refund.

In the meantime, 23andme is an ancestry / genealogy company.

I suppose that is it until they pass regulatory approval for their health and trait information?

23andme and the Cold Case

August 15, 2013

By way of brief introduction, I last discussed the 23andme genetic screening service in the context of their belated adoption of IRB oversight and interloper paternity rates. You may also be interested in Ed Yong’s (or his euro-caucasoid doppelganger’s) results.

Today’s topic is brought to you by a comment from my closest collaborator on a fascinating low-N developmental biology project.

This collaborator raised a point that extends from my prior comment on the paternity post.

But, and here’s the rub, the information propagates. Let’s assume there is a mother who knows she had an affair that produced the kid or a father who impregnated someone unknown to his current family. Along comes the 23 and me contact to their child? Grandchild? Niece or nephew? Brother or sister? And some stranger asks them, gee, do you have a relative with these approximate racial characteristics, of approximately such and such age, who was in City or State circa 19blahdeblah? And then this person blast emails their family about it? or posts it on Facebook?

It also connects with a number of issues raised by the fact that 23andme markets to adoptees in search of their genetic relatives. This service is being used by genealogy buffs of all stripes and one cannot help but observe that one of the more ethically complicated results will be the identification of unknown genetic relationships. As I alluded to above, interloper paternity may be identified. Also, one may find out that a relative gave a child up for adoption…or that one fathered a child in the past and was never informed.

That’s all very interesting but today’s topic relates to crimes in which DNA evidence has been left behind. At present, so far as I understand, the DNA matching is to people who have already crossed the law enforcement threshold. In fact there was a recent brouhaha over just what sort of “crossing” of the law enforcement threshold should permit the cops to take your DNA, if I am not mistaken. This does no good, however, if the criminal has never come to the attention of law enforcement.

Ahhhh, but what if the cops could match the DNA sample left behind by the perpetrator to a much larger database. And find a first or second cousin or something? This would tremendously narrow the investigation, wouldn’t it?

It looks like 23andme is all set to roll over for whichever enterprising police department decides to try.

From the Terms of Service.

Further, you acknowledge and agree that 23andMe is free to preserve and disclose any and all Personal Information to law enforcement agencies or others if required to do so by law or in the good faith belief that such preservation or disclosure is reasonably necessary to: (a) comply with legal process (such as a judicial proceeding, court order, or government inquiry) or obligations that 23andMe may owe pursuant to ethical and other professional rules, laws, and regulations; (b) enforce the 23andMe TOS; (c) respond to claims that any content violates the rights of third parties; or (d) protect the rights, property, or personal safety of 23andMe, its employees, its users, its clients, and the public.

Looks to me that all the cops would need is a warrant. Easy peasy.

__
h/t to Ginny Hughes [Only Human blog] for cuing me to look over the 23andme ToS recently.

As you know, the Boundary Layer blog and citizen-journalist Comradde PhysioProffe have been laying out the case for why institutionally unaffiliated, crowd-funded, ostensibly open science projects should be careful to adhere to traditional, boring, institutionally hidebound “red tape” procedures when it comes to assuring the ethical use of human subjects in their research.

I raised the parallel case of 23andme at the get go and was mollified by a comment from bsci that 23andme has IRB oversight for their operation. Turns out, they too were brought to this by the peer review process and not by any inherent professionalism or appreciation on the part of the company participants.

A tip from @agvaughn points to a PLoS Genetics Editorial written concerning their decision to publish a manuscript from people associated with 23andme.

The first issue that attracted our attention was that the initial submission lacked a document indicating that the study had passed review by an institutional review board (IRB). The authors responded by submitting a report, obtained after the initial round of review, from the Association for the Accreditation of Human Research Protection Programs (AAHRPP)–accredited company Independent Review Consulting, Inc. (IRC: San Anselmo, CA), exempting them from review on the basis that their activity is “not human subjects research.” On the face of it, this seems preposterous, but on further review, this decision follows not uncommon practices by most scientists and institutional review boards, both academic and commercial, and is based on a guidance statement from the United States Department of Health and Human Services’ Office of Human Research Protection (http://www.hhs.gov/ohrp/humansubjects/guidance/cdebiol.htm). Specifically (and as documented in part C2 of the IRC report), there are two criteria that must be met in order to determine that a study involves human subjects research: will the investigators obtain the data through intervention or interaction with the participants, and will the identity of the subject be readily ascertained by the investigator or associated with the information. For the 23andMe study, the answer to both tests was “no,” ostensibly because there was never any interpersonal contact between investigator and participant (that is, data and samples are provided without participants meeting any investigator), and the participant names are anonymous with respect to the data seen by the investigators. It follows from the logic of the IRC review, in accordance with the OHRP guidance documents, that this study does not involve human subjects research.

The journal should never have accepted this article for publication. I find no mention of ethics regarding the use of human or nonhuman vertebrate animals on their guidelines for authors page but it is over here on their Policies page.

Research involving human participants. All research involving human participants must have been approved by the authors’ institutional review board or equivalent committee(s), and that board must be named in the manuscript. For research involving human participants, informed consent must have been obtained (or the reason for lack of consent explained — for example, that the data were analyzed anonymously) and all clinical investigation must have been conducted according to the principles expressed in the Declaration of Helsinki. Authors should be able to submit, upon request, a statement from the research ethics committee or institutional review board indicating approval of the research. PLOS editors also encourage authors to submit a sample of a patient consent form, and might require submission on particular occasions.

Obviously, the journal decided to stand on a post-hoc IRB decision that the work in question was not ever “involving human participants” in the first place. This is not acceptable to me.

The reason why is that any reasonable professional involved with anything like this would understand the potential human subjects concern. Once there is that potential, then the only possible ethical way forward is to seek external review by an IRB or IRB-like body. [It has been a while since I kicked up a stink about “silly little internet polls” back in the Sb days. For those new to the blog, I went so far as to get a ruling from my IRB (informal, true, but I retain the email) on the polls that I might put up.] Obviously, the 23andMe folks were able to do so…after the journal made them. So there is no reason they could not have done so at the start. They overlooked their professional responsibility. Getting permission after the fact is simply not the way things work.

Imagine if in animal subjects research we were to just go ahead and do whatever we wanted and only at the point of publishing the paper try to obtain approval for only those data that we chose to include in that manuscript. Are you kidding me?

Ethical review processes are not there only to certify each paper. They are there to keep the entire enterprise of research using human or nonhuman vertebrate animals as ethical, humane, and responsible as possible.

This is why hairsplitting about “controlling legal authority” when it comes to academic professionals really angers me. We work within these ethical “constraints” (“red tape” as some wag on the Twitts put it) for good reasons and we should fully accept and adopt them. Not put up with them grudgingly, as an irritation, and look for every possible avenue to get ourselves out from under them. We don’t leave our professionalism behind when we leave the confines of our University. Ever. We leave it behind when we leave our profession (and some might even suggest our common-decency-humanity) behind.

Somehow I don’t think these crowdfunders claim to be doing that.

A few more examples of why we need IRB oversight of human subjects research.
UC Davis Surgeons banned
Ethics of 2 cancer studies questioned [h/t: reader Spiny Norman]

Reputable citizen-journalist Comradde PhysioProffe has been investigating the doings of a citizen science project, ubiome. Melissa of The Boundary Layer blog has nicely explicated the concerns about citizen science that uses human subjects.

And this brings me to what I believe to be the potentially dubious ethics of this citizen science project. One of the first questions I ask when I see any scientific project involving collecting data from humans is, “What institutional review board (IRB) is monitoring this project?” An IRB is a group that is specifically charged with protecting the rights of human research participants. The legal framework that dictates the necessary use of an IRB for any project receiving federal funding or affiliated with an investigational new drug application stems from the major abuses perpetrated by Nazi physicians during World War II and by scientists and physicians affiliated with the Tuskegee experiments. The work that I have conducted while affiliated with universities and with pharmaceutical companies has all been overseen by an IRB. I will certainly concede to all of you that the IRB process is not perfect, but I do believe that it is a necessary and largely beneficial process.

My immediate thought was about those citizen scientist, crowd-funded projects that might happen to want to work with vertebrate animals.

I wonder how this would be received:

“We’ve given extensive thought to our use of stray cats for invasive electrophysiology experiments in our crowd funded garage startup neuroscience lab. We even thought really hard about IACUC approvals and look forward to an open dialog as we move forward with our recordings. Luckily, the cats supply consent when they enter the garage in search of the can of tuna we open every morning at 6am.”

Anyway, in citizen-journalist PhysioProffe’s investigations he has linked up with an amazing citizen-IRB-enthusiast. A sample from this latter’s recent guest post on the former’s blogge.

Then in 1972, a scandal erupted over the Tuskegee syphilis experiment. This study, started in 1932 by the US Public Health Service, recruited 600 poor African-American tenant farmers in Macon County, Alabama: 201 of them were healthy and 399 had syphilis, which at the time was incurable. The purpose of the study was to try out treatments on what even the US government admitted to be a powerless, desperate demographic. Neither the men nor their partners were told that they had a terminal STD; instead, the sick men were told they had “bad blood” — a folk term with no basis in science — and that they would get free medical care for themselves and their families, plus burial insurance (i.e., a grave plot, casket and funeral), for helping to find a cure.

When penicillin was discovered, and found in 1947 to be a total cure for syphilis, the focus of the study changed from trying to find a cure to documenting the progress of the disease from its early stages through termination. The men and their partners were not given penicillin, as that would interfere with the new purpose: instead, the government watched them die a slow, horrific death as they developed tumors and the spirochete destroyed their brains and central nervous system. Those who wanted out of the study, or who had heard of this new miracle drug and wanted it, were told that dropping out meant paying back the cost of decades of medical care, a sum that was far beyond anything a sharecropper could come up with.

CDC: U.S. Public Health Service Syphilis Study at Tuskegee
NPR: Remembering Tuskegee
PubMed: Syphilitic Gumma

Apparently some epic dumbasses decided that the common housecat, bloodthirsty lethal little murder-cat killing machine that it is, wasn’t quite badass enough.

What. Is. Wrong. With. People?

If you want to understand the child molestation case that has rocked Penn State University in full, you need to read PhysioProf’s take on the matter.

Joe Paterno–who has been the head coach for 46 years–is the absolute monarch of that program, with absolute power. Regardless of whether he satisfied the bare minimum of legal requirements to report what he knew about the rape of children to his “superiors”–which as absolute monarch at Penn State, he really had none…

emphasis added, but not really needed.

Go Read.

I’ve been having a little Twitt discussion with Retraction Watch honcho @ivanoransky over a recent post in which they discuss whether a failure to replicate a result justifies a retraction.

Now, Ivan Oransky seemed to take great umbrage at my suggestion in a comment that intentionally conflating a failure to replicate with intentional fraud was a dereliction of their duty to science. Per usual, we boiled it down to a fundamental disagreement over connotation: what it means to the average person to see that a paper has been retracted.

I rely upon my usual solution, DearReader. Select all the choices that apply when you see a retraction, or that you think should induce a retraction.

A retracted paper means… [online survey]

Direct link to the poll in case you can’t see it.

My position can be found after the jump….
