Fascinating.

The dog botherers always insist that dog attacks are due to “bad owners”. And that presumptively “good owners” will never have a dog that attacks or kills anyone.

We’ll leave aside their denialism about their own doggy’s noninjurious but threatening behavior and the inherent circularity of their argument for now.

The interesting point is what it takes to be a “good” owner. You have to train and “socialize” the dog. Control it. Keep it in the right circumstances. Train toddlers how to “approach it properly”. Leash it. Lock the gate. Etc. Never let down your vigilance for one little second.

What is all of this but a frank admission that these alleged domesticated animals are INHERENTLY dangerous to other citizens? If they weren’t, the only problem would be “bad owners” who actively train the dog to aggress.

Remember the Maryland Court of Appeals decision labeling pitbulls as inherently dangerous?

Well apparently that was undermined by the Maryland House of Delegates.

One day after the Maryland House of Delegates unanimously approved a bill that overturned the Maryland Court of Appeals decision labeling Pit Bulls as “inherently dangerous,” an Edgemere woman was attacked and seriously injured by at least one of her two pit bulls. One of the Pit Bulls was later found a few blocks away and was so aggressive that Baltimore County Police officers were forced to shoot and kill it, the Baltimore Sun reports. – See more at: http://www.opposingviews.com/i/society/animal-rights/pit-bull-attacks-md-woman-1-day-after-state-overturns-court-decision-breed

At least one of HER OWN DAMN DOGS?????

Hmm, wonder if anything has happened this month or was January a particularly bad one for pitbull attacks on innocent people and other nonhuman animals?

Horse and Rider pulled down by….well, it was probably a bichon actually.

Bangkok….c’mon, it couldn’t have been an American Staffordshire Terrier (pitbull) now could it?

No one admitted to owning the pitbull …. duh, cause it was a dachshund!

I’m sure this Labrador was just asking for it….and they are only “pitbull-like” dogs….so that pretty much could be anything. Like a Yorkie.

How do we know Fluffie didn’t just want to play?

Cockapoo….Vicious

UPDATE: naturally there was a Twitter side of the discussion today. I love this. Because when you get one of these dog lurvers on the line, they will quickly make absolutely insane statements. This one was by way of discussing which dogs might represent a threat to a person. @invertenerd opined:

You said maim a toddler, yorkies can and do maim little kids.

Here’s an image (source) that I grabbed off the web for reference. Now, admittedly the Yorkie is an aggressive little Napoleon complex of a dog. Which frankly is why it makes for such a good example. Even with its outsized attitude problem, however, the size of the beast tends to limit the damage. Admittedly in this figure, well…this is the scenario where a kid gets its face bitten, frequently enough. I stipulate that. So if we were talking “disfigure”…mmmm, maybe. Arguable. Especially since owners of these little pocket dogs are more likely to be neglectful around kids. But I digress. The main point is how absolutely insane you are if you think that this kind of dog “maims” kids with anything like the same severity or probability that occurs with larger breeds. Insane. And of course that is the point. Dog owners are, to all intents and purposes, insane in their willful denial of what dogs do on a constant basis, week in and week out.

If you look around a bit on the NIH funding data at RePORT, you will find the following definitions.

Research Project Grants: Defined as R00, R01, R03, R15, R21, R22, R23, R29, R33, R34, R35, R36, R37, R55, R56, RC1, RC2, RC3, RC4, RF1, RL1, RL2, RL5, RL9, P01, P42, PN1, UA5, UC1, UC2, UC3, UC4, UC7, UF1, UH2, UH3, UH5, UM1, U01, U19, U34, DP1, DP2, DP3, DP4, and DP5. Research projects were first coded to NLM in fiscal year 2007.

R01-Equivalent Grants: Defined as activity codes R01, R29 and R37.

The R29 was the FIRST award program and the R37 is MERIT, generally an extension of the noncompeting interval for a continuation R01 that scored really well. So…basically these are all R01s.
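For anyone poking at RePORTER exports themselves, the two definitions above reduce to a pair of lookup sets. Here is a minimal sketch; the activity codes and category names come straight from the RePORT definitions quoted above, while the function and variable names are my own invention:

```python
# Activity codes per the RePORT definitions quoted above.
R01_EQUIVALENT = {"R01", "R29", "R37"}

RESEARCH_PROJECT_GRANTS = R01_EQUIVALENT | {
    "R00", "R03", "R15", "R21", "R22", "R23", "R33", "R34", "R35",
    "R36", "R55", "R56", "RC1", "RC2", "RC3", "RC4", "RF1", "RL1",
    "RL2", "RL5", "RL9", "P01", "P42", "PN1", "UA5", "UC1", "UC2",
    "UC3", "UC4", "UC7", "UF1", "UH2", "UH3", "UH5", "UM1", "U01",
    "U19", "U34", "DP1", "DP2", "DP3", "DP4", "DP5",
}

def classify(activity_code: str) -> str:
    """Bucket an NIH activity code per the RePORT definitions above."""
    code = activity_code.strip().upper()
    if code in R01_EQUIVALENT:
        return "R01-equivalent"
    if code in RESEARCH_PROJECT_GRANTS:
        return "Research Project Grant"
    return "other"
```

Note that the R01-equivalent set is a strict subset of the Research Project Grant set, which is why the order of the checks matters.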

A post from Steven Salzberg begs to “Please save the unsolicited R01s” which includes this graph sourced from FASEB.
[Graph: number of new R01 awards by fiscal year, via FASEB]

Making the same leap of considering these the “real” investigator initiated awards, we can see that the number of new awards in the past two Fiscal Years is lower than it has been since 95-96, *prior* to the doubling.

Every time the NIH officialdom chooses to respond to criticism and concern about how their latest initiative will hurt the traditional strength (investigator initiated R01 equivalents) they try to claim that these are not paying the price. In various ways and with various incomplete analyses they try to give the impression that despite the invention of RC this and DP that, the failure to dismantle boondoggle Ps and the increased use of U-mechs…that the R01 remains sacred.

This graph gives you a retort.

R.I.P. C. Everett Koop

February 25, 2013

via Wikipedia

For those of us of a certain age, Dr. C. Everett Koop will always be the iconic Surgeon General of the United States of America.

For me, the reason that he was also a great Surgeon General is summed up in these few lines in the ABC News item on Koop’s passing.

Koop carried out a crusade to end smoking in the United States; his goal had been to do so by 2000. He said cigarettes were as addictive as heroin and cocaine. And he shocked his conservative supporters when he endorsed condoms and sex education to stop the spread of AIDS.

These were both very, very important things for the nation’s top health official to do at the time. Especially when the President himself couldn’t bear to say “AIDS” in public and many people still believed that smoking was just a ‘habit’, that low-tar and filtered cigarettes were safer and that the link to cancer had never been “scientifically proven” anyway.

RIP Dr. Koop

Driving test

February 24, 2013

They sure do get huffy when they themselves are the ones being subjected to open peer review.

As you know, the Boundary Layer blog and citizen-journalist Comradde PhysioProffe have been laying out the case for why institutionally unaffiliated, crowd-funded, ostensibly open science projects should be careful to adhere to traditional, boring, institutionally hidebound “red tape” procedures when it comes to assuring the ethical use of human subjects in their research.

I raised the parallel case of 23andme at the get-go and was mollified by a comment from bsci that 23andme has IRB oversight for their operation. Turns out, they too were brought to this by the peer review process and not by any inherent professionalism or appreciation on the part of the company participants.

A tip from @agvaughn points to a PLoS Genetics Editorial written concerning their decision to publish a manuscript from people associated with 23andme.

The first issue that attracted our attention was that the initial submission lacked a document indicating that the study had passed review by an institutional review board (IRB). The authors responded by submitting a report, obtained after the initial round of review, from the Association for the Accreditation of Human Research Protection Programs (AAHRPP)–accredited company Independent Review Consulting, Inc. (IRC: San Anselmo, CA), exempting them from review on the basis that their activity is “not human subjects research.” On the face of it, this seems preposterous, but on further review, this decision follows not uncommon practices by most scientists and institutional review boards, both academic and commercial, and is based on a guidance statement from the United States Department of Health and Human Services’ Office of Human Research Protection (http://www.hhs.gov/ohrp/humansubjects/guidance/cdebiol.htm). Specifically (and as documented in part C2 of the IRC report), there are two criteria that must be met in order to determine that a study involves human subjects research: will the investigators obtain the data through intervention or interaction with the participants, and will the identity of the subject be readily ascertained by the investigator or associated with the information. For the 23andMe study, the answer to both tests was “no,” ostensibly because there was never any interpersonal contact between investigator and participant (that is, data and samples are provided without participants meeting any investigator), and the participant names are anonymous with respect to the data seen by the investigators. It follows from the logic of the IRC review, in accordance with the OHRP guidance documents, that this study does not involve human subjects research.

The journal should never have accepted this article for publication. I find no mention of ethics regarding the use of human or nonhuman vertebrate animals on their guidelines for authors page but it is over here on their Policies page.

Research involving human participants. All research involving human participants must have been approved by the authors’ institutional review board or equivalent committee(s), and that board must be named in the manuscript. For research involving human participants, informed consent must have been obtained (or the reason for lack of consent explained — for example, that the data were analyzed anonymously) and all clinical investigation must have been conducted according to the principles expressed in the Declaration of Helsinki. Authors should be able to submit, upon request, a statement from the research ethics committee or institutional review board indicating approval of the research. PLOS editors also encourage authors to submit a sample of a patient consent form, and might require submission on particular occasions.

Obviously, the journal decided to stand on a post-hoc IRB decision that the work in question was not ever “involving human participants” in the first place. This is not acceptable to me.

The reason why is that any reasonable professional involved with anything like this would understand the potential human subjects concern. Once there is that potential, then the only possible ethical way forward is to seek external review by an IRB or IRB-like body. [It has been a while since I kicked up a stink about “silly little internet polls” back in the Sb days. For those new to the blog, I went so far as to get a ruling from my IRB (informal, true, but I retain the email) on the polls that I might put up.] Obviously, the 23andme folks were able to do so……after the journal made them. So there is no reason they could not have done so at the start. They overlooked their professional responsibility. Getting permission after the fact is simply not the way things work.

Imagine if in animal subjects research we were to just go ahead and do whatever we wanted and only at the point of publishing the paper try to obtain approval for only those data that we chose to include in that manuscript. Are you kidding me?

Ethical review processes are not there only to certify each paper. They are there to keep the entire enterprise of research using human or nonhuman vertebrate animals as ethical, humane, responsible etc as is possible.

This is why hairsplitting about “controlling legal authority” when it comes to academic professionals really angers me. We work within these ethical “constraints” (“red tape” as some wag on the Twitts put it) for good reasons and we should fully accept and adopt them. Not put up with them grudgingly, as an irritation, and look for every possible avenue to get ourselves out from under them. We don’t leave our professionalism behind when we leave the confines of our University. Ever. We leave it behind when we leave our profession (and some might even suggest our common-decency-humanity) behind.

Somehow I don’t think these crowdfunders claim to be doing that.

A few more examples of why we need IRB oversight of human subjects research.
UC Davis Surgeons banned
Ethics of 2 cancer studies questioned [h/t: reader Spiny Norman]

Reputable citizen-journalist Comradde PhysioProffe has been investigating the doings of a citizen science project, uBiome. Melissa of The Boundary Layer blog has nicely explicated the concerns about citizen science that uses human subjects.

And this brings me to what I believe to be the potentially dubious ethics of this citizen science project. One of the first questions I ask when I see any scientific project involving collecting data from humans is, “What institutional review board (IRB) is monitoring this project?” An IRB is a group that is specifically charged with protecting the rights of human research participants. The legal framework that dictates the necessary use of an IRB for any project receiving federal funding or affiliated with an investigational new drug application stems from the major abuses perpetrated by Nazi physicians during World War II and scientists and physicians affiliated with the Tuskegee experiments. The work that I have conducted while affiliated with universities and with pharmaceutical companies has all been overseen by an IRB. I will certainly concede to all of you that the IRB process is not perfect, but I do believe that it is a necessary and largely beneficial process.

My immediate thought was about those citizen scientist, crowd-funded projects that might happen to want to work with vertebrate animals.

I wonder how this would be received:

“We’ve given extensive thought to our use of stray cats for invasive electrophysiology experiments in our crowd funded garage startup neuroscience lab. We even thought really hard about IACUC approvals and look forward to an open dialog as we move forward with our recordings. Luckily, the cats supply consent when they enter the garage in search of the can of tuna we open every morning at 6am.”

Anyway, in citizen-journalist PhysioProffe’s investigations he has linked up with an amazing citizen-IRB-enthusiast. A sample from this latter’s recent guest post on the former’s blog blogge.

Then in 1972, a scandal erupted over the Tuskegee syphilis experiment. This study, started in 1932 by the US Public Health Service, recruited 600 poor African-American tenant farmers in Macon County, Alabama: 201 of them were healthy and 399 had syphilis, which at the time was incurable. The purpose of the study was to try out treatments on what even the US government admitted to be a powerless, desperate demographic. Neither the men nor their partners were told that they had a terminal STD; instead, the sick men were told they had “bad blood” — a folk term with no basis in science — and that they would get free medical care for themselves and their families, plus burial insurance (i.e., a grave plot, casket and funeral), for helping to find a cure.

When penicillin was discovered, and found in 1947 to be a total cure for syphilis, the focus of the study changed from trying to find a cure to documenting the progress of the disease from its early stages through termination. The men and their partners were not given penicillin, as that would interfere with the new purpose: instead, the government watched them die a slow, horrific death as they developed tumors and the spirochete destroyed their brains and central nervous system. Those who wanted out of the study, or who had heard of this new miracle drug and wanted it, were told that dropping out meant paying back the cost of decades of medical care, a sum that was far beyond anything a sharecropper could come up with.

CDC: U.S. Public Health Service Syphilis Study at Tuskegee
NPR: Remembering Tuskegee
PubMed: Syphilitic Gumma

There is little doubt that shortening the length of the NIH R01 application from 25 pages to 12 put a huge premium on the available word space. The ever declining success rates have undoubtedly accelerated the desire of applicants to cram every last bit of information that they possibly can into the application.

Particularly since the StockCritique™ having to do with methodological detail has hardly disappeared.

It is possible that a somewhat frustrated, tongue-in-cheek comment of YHN may have led some folks astray.

Since I am finally getting serious about trying to write one of these new format grants, I am thinking about how to maximize the information content. One thought that immediately strikes me is….cheat!

By which I mean taking sections that normally I would have put in the page-limited part of the grant and sneaking them in elsewhere. I have come up with the following and am looking for more tips and ideas from you, Dear Reader.
1) Moving the animal methods to the Vertebrate Animals section. I’m usually doing quite a bit of duplication of the Vertebrate Animals stuff in my General Methods subheading at the very end of the old Research Design section. I can move much of that, including possibly some research stuff that fits under point 4 (ensuring discomfort and distress is managed), to the Vertebrate Animals section.

Now mind you, one of my always perspicacious commenters was all over me right from the start:

DM – Please don’t encourage people to cheat their way out of 12 pages. Please tell them to write a 12-page grant.
I would warn grant-writers to be careful of cheating too much. I was at a study section recently where someone lost about a point of score because one of the reviewers (it wasn’t me, although I agree with the reviewer) complained about “cheating” by moving methods into the vertebrate animals section.

That was all back in March 2010. Here we are down the road and I have to say, Dear Reader, I am hearing a constant drumbeat of irritation at people who cheat in just this way. My suggestion (a serious one) is to be very wary of putting what should be your research plan methods into the Vertebrate Animals section.

I am hearing and seeing situations in which reviewers pretty obviously are ticked and almost certainly are punishing the applications accordingly. Nobody likes a cheat. I have even heard of rare cases of people having their grants kicked back, unreviewed, because of this.

So be careful. Keep the Vertebrate Animals section on task and put your Methods where they belong.

On Reposting and Republishing

February 19, 2013

You will have noticed that I repost my old blog entries with frequency. I do so mostly when I think it has been long enough that the blog readership has changed enough that it will be new to some eyes. This is related to the fact that I am convinced blog readership is more like news readership…ephemeral and current. The majority of the viewer traffic lands on the blog through current links rather than through google searches that land on older content.

My view of the reading of scientific content is different. Sure, new and topical stuff will get the most eyes, but that is not, precisely, where the primary value of academic papers lies. Particularly when it comes to review articles, I think.

I’ve run across a most curious situation. I noticed this in one of my various feeds.

Cosyns B, Droogmans S, Rosenhek R, Lancellotti P. Republished: Drug-induced valvular heart disease. Postgrad Med J. 2013 Mar;89(1049):173-8. doi: 10.1136/postgradmedj-2012-302239rep.

Since “Republished” caught my eye, I clicked on the first author and found:

Cosyns B, Droogmans S, Rosenhek R, Lancellotti P. Drug-induced valvular heart disease. Heart. 2013 Jan;99(1):7-12. doi: 10.1136/heartjnl-2012-302239. Epub 2012 Aug 8.

Tracking over to the journal page for the Republished one I found it has the following “Footnote”.

This is a reprint of a paper that first appeared in Heart, 2013, Volume 99, pages 7–12.

That note appears prominently on the PDF of the article (in the sidebar block for author details and the submission/acceptance dates) and there is a header on every page of the article that reads “Republishedreview”(sic).

I then did a PubMed search for “Republished” and found that the Postgrad Med J really is quite fond of this strategy. There are some other players in this game too, though. The Br J Sports Med seems to like the “Republished research” tag, for example.

I’ve seen the occasional retracted-and-republished strategy for dealing with errata. But this was a new one for me, to my recollection. I scanned through the Postgrad Med J Instructions to Authors and it wasn’t really clear if these are unsolicited submissions or requested by the Editorial staff. I’d tend to suspect the latter but both versions of the review say “Provenance and peer review Not commissioned; externally peer reviewed.” at the bottom. Interestingly the Republished one has color figures where the original has B/W images….it does look nicer. And I can make out no indication in the Republished one that it has the permission of the original journal Heart to republish the work. They are both in the BMJ Group, however, so maybe this issue* is irrelevant?

I find myself curious about the advantages and disadvantages for both authors and the journals/publishers for doing this sort of thing. To be honest, what I’d like to see is the bloggy “Update:” tag added to the title of those reviews from authors that seem to publish essentially the same review over and over. Particularly when it is just an updating of progress since they last wrote a review. That would be a great service to the reader.

__
*I.e., if the Publisher, not the journal, holds copyright and the Publisher is the same for both journals…the “permission” is implied or implicit? But then we have the issue of “self-plagiarism” that seems to bother the humanities majors’ sentiments which are insinuating themselves into the business of science lately.

BAM!

February 19, 2013

Opposition to Obama’s boondoggle brain activity map project is jealousy pure and simple.

Discuss.

I was having a few exchanges with successful science-project crowdfunder Ethan Perlstein (@perelste) who apparently was on NPR today. Good for him, good for his project, whee.

This stuff can work fine for small-scale projects, one-offs, etc. But placing this in the context of an alternative or replacement for major federal funding is deeply flawed.

1) Overhead rate. Now admittedly, not all Universities bother going after their indirect costs for small philanthropic donations. But if a lab tries to exist on this strategy? You can be damn sure they are going to come after indirects. Some Universities do this already. And donors don’t like it. You can bet there’s some fancy tapdancing trying to figure out how to minimize revealing to the medium ticket donors that their donations are getting taxed. The big ones fight it, obvs.

2) Chump change. Sorry but it is. Perlstein raised $25K. The NIH R03 is $50K for two years. The R21 is $275K over two years and the R01, as we’ve discussed, is most typically $250K (in direct costs, mind you) for 4-5 years. There is going to be very, very little that can be accomplished with the kind of cash that is available via crowdfunding.

3) Yeahbut! The uBiome and American Gut projects raised over $600K, man! Yeah, the former is at $286,548 and the latter is at $339,541 as of this writing. Impressive. Right? But the total is less than the cost of two years of NIH R01 funding. And these may be the best examples. Time will show how many of these can go viral and make big bucks, how many can get $25,000 and how many struggle to get $5,000. Color me extremely skeptical on the big-bucks ones.

4) Can it repeat? That’s another critical question. All well and good for Perlstein to pull down $25K in crowdfunding. But he needs to do it again. and again. and again. No offense but crowd funding works the first time on novelty, your buddies and people looking to make a point. Think they’d be lining up to throw down for Perlstein’s second project in such numbers? Will people who don’t even know him flog the shit out of the Twitt stream like they did for his Meth study? Here’s a hint: hell no.

5) Deliverables. Part of the problem is the nature of the deliverables. What is the crowd to see that has been done with their money? Well, from Perlstein’s project description, the data are going up online as they roll in. So…figures, basically. Not even clear that there will be a pub on which they can be acknowledged. The small scope of the project makes it likely that at best one publishable panel will result. And dude, will regular journals put up with the Supplementary Acknowledgement Table approach so all donors can be listed? Maybe. But what, now you are going to return to the same crowd and say “Hey, throw down another $25K and we’ll do Figure 2…if I still have a job, that is”.

6) Science is a tough sell. Still. It is extremely difficult to see where anything Perlstein happens to find about the intracellular distribution of methamphetamine is going to so engage the crowd that it jumps in with more funding. This is pretty basic science. It would take “I am mere inches away from curing Meth addiction” level stuff to grab the crowd if you ask me (and anyway, if you did that, Pharma would come a’callin’). In contrast, I offer the outcome for one of my favorite scifi authors. Tobias Buckell had a decent fan base, a book series for which there was a clamor for more from his crowd and he was asking for a mere $10K. He raised it, wrote the book and delivered that sucker to his readers (Kickstarter backers and nonbackers alike). It was, to my read, the same book he would have written (and I would have purchased) if he’d had a schweet advance deal at a major publisher. Or if he’d (somehow) still been able to write on spec like a noob author. Same damn product. Can we say the same for a $25K SCIENCE project? I think not.
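The back-of-envelope arithmetic behind points 2 and 3 can be laid out explicitly. The crowdfunding totals and the $250K modular direct cost figure come from the post above; the ~60% indirect (F&A) rate is my own illustrative assumption, since actual negotiated rates vary widely by institution:

```python
# Figures quoted in points 2 and 3 above.
crowdfunding_totals = {
    "Perlstein meth project": 25_000,
    "uBiome": 286_548,
    "American Gut": 339_541,
}

# Typical modular R01: $250K per year in direct costs.
r01_annual_directs = 250_000

# ASSUMPTION for illustration: ~60% indirect (F&A) rate on top of directs.
two_years_total_cost = 2 * r01_annual_directs * 1.6  # = 800,000

# The two biggest crowdfunding successes, combined...
big_two_total = crowdfunding_totals["uBiome"] + crowdfunding_totals["American Gut"]
print(big_two_total)                           # 626089
# ...still come in under two years of fully loaded R01 funding.
print(big_two_total < two_years_total_cost)    # True
```

Under that indirect-rate assumption, even the two viral success stories together fall short of what two years of a single fully loaded R01 costs the NIH.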

Open Thread

February 14, 2013

If you just can’t wait for us to get our Scientopia domain back in action…..

The phones are open. (As they used to say, kids. GOML)

Apparently some epic dumbasses decided that the common housecat, bloodthirsty lethal little murder-cat killing machine that it is, wasn’t quite badass enough.

What. Is. Wrong. With. People?