The NIH Office of Extramural Research has a howler in the recent Nexus.
Application titles, abstracts and statements of public health relevance that are part of your application are read by reviewers, program officers and other NIH staff, but once funded, this information is also available to the public
so therefore
The extramural community has a responsibility to clearly communicate the intent and value of their research to all those interested in learning more—Congress, the public, administrators, and scientists. Take every opportunity to tell people what you do, why you do it, and why they should care.
Well yes, this is true. DM’s always going on about the taxpayer being “the boss”. Ok. Gotcha there.
But how stupid do they think we are? Why the emphasis on the items that show up in RePORTER?
Right-wing wackaloon politicians making hay by bashing peer-reviewed and funded scientific projects. That’s why.
Like ol’ Proxmire, and the latest: John McCain and Coburn on supposed ARRA excesses.
Gee, I somehow think that us scientists explaining the importance of our research a little better isn’t going to do much good. These dumbasses don’t care about the science. Heck, they don’t even care about the money; one R01 is a mere dust mote in the Congressional allocation process. This is about drumming up political mouth breathers with anti-science blather, preferably focused on social health issues which run afoul of their moralistic viewpoints on human behavior.
Sex, drugs and HIV.
Work on those topics and all the explaining in the world isn’t going to fend off critique from Congress.
I think we can safely ignore this request.
The Twitts are all atwitter today about a case of academic misconduct. As reported in the Boston Globe:
Harvard University psychologist Marc Hauser — a well-known scientist and author of the book “Moral Minds” — is taking a year-long leave after a lengthy internal investigation found evidence of scientific misconduct in his laboratory.
The findings have resulted in the retraction of an influential study that he led. “MH accepts responsibility for the error,” says the retraction of the study on whether monkeys learn rules, which was published in 2002 in the journal Cognition.
There is an ongoing investigation, along with other allegations or admissions of scientific misconduct or fraud. More observations from the Nature blog The Great Beyond and NeuroSkeptic. We’ll simply have to see how that plays out. I have a few observations on the coverage so far, however. Let’s start with the minor ones.
Neither the PubMed page nor the ScienceDirect publisher’s page gives any indication that this paper has been retracted. I did a quick search for retraction, for Hauser and for tamarin on the ScienceDirect site and did not find any evidence of a published retraction notice by this method either. The Boston Globe article is datelined today, but still. You would think that the publishers would have been informed of this situation loooong before it went public and would have the retraction linkage ready to roll.
The accusation in the paper correction by Hauser is, as is traditional, that the trainee faked it. As NeuroSkeptic points out, the overall investigation spans papers published well beyond the trainee in question’s time in the lab. Situations like this start posing questions in my mind about the tone and tenor of the lab and how that might influence the actions of a trainee. Not saying misconduct can’t be the lone wolf actions of a single bad apple. I’m sure that happens a lot. But I am equally sure that it is possible for a PI to set a tone of, let us say, pressure to produce data that point in a certain direction.
What really bothered me about the Globe coverage was this, however. They associate a statement like this one:
In 2001, in a study in the American Journal of Primatology, Hauser and colleagues reported that they had failed to replicate the results of the previous study. The original paper has never been retracted or corrected.
with
Gordon G. Gallup Jr., a professor of psychology at State University of New York at Albany, questioned the results and requested videotapes that Hauser had made of the experiment.
“When I played the videotapes, there was not a thread of compelling evidence — scientific or otherwise — that any of the tamarins had learned to correctly decipher mirrored information about themselves,” Gallup said in an interview.
In 1997, he co-authored a critique of the original paper, and Hauser and a co-author responded with a defense of the work.
What I am worried about in this type of coverage is the conflation of three distinct things: a failure to replicate a study, the absence of evidence (per the retraction blaming a trainee), and scientific debate over the interpretation of data.
The mere failure of one study to replicate a prior one is not in and of itself evidence of scientific misconduct. Legitimate scientific findings can be difficult or impossible to replicate for many reasons, and even if we criticize the credulity, scientific rigor or methods of the original finding, it is not misconduct. (Just so long as the authors report what they did and what they found in a manner consistent with the practices of their fields and the journals in which their data are published.) Even the much vaunted p<0.05 standard means we recognize that five times out of a hundred experiments in which there is no real effect, we are going to accept chance variation as a causal result of our experimental manipulation.
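To make that five-in-a-hundred point concrete, here is a minimal sketch (Python; not from the original post, and every name and parameter in it is illustrative) that simulates repeated two-group experiments where the null hypothesis is true by construction, then counts how often p < 0.05 turns up anyway:

```python
# Minimal sketch: simulate experiments in which there is no real effect,
# then count how often p < 0.05 turns up by chance alone.
# All names and parameters here are illustrative, not from the post.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 10_000   # number of simulated "studies"
n_per_group = 20         # subjects per group in each study
false_positives = 0

for _ in range(n_experiments):
    # Both groups are drawn from the same distribution,
    # so any apparent "effect" is pure chance.
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treated = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p = stats.ttest_ind(control, treated)
    if p < 0.05:
        false_positives += 1

# Expect roughly 0.05: about five of every hundred null experiments
# cross the significance threshold.
print(f"False positive rate: {false_positives / n_experiments:.3f}")
```

Run it and the printed rate hovers around 0.05, which is exactly the point: a “significant” result that later fails to replicate may simply have been one of those five, with no misconduct anywhere in sight.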
Similarly, debates over what behavioral observation researchers think they see in animal behavior are not in and of themselves evidence of misconduct. I mean, sure, if nobody other than the TruBelievers can ever see any smidge of evidence of the Emperor’s fine clothes in the videotapes proffered as evidence by a given lab, we might write them off as cranks. But this is, at this point, most obviously a debate about research design, controls, alternative hypotheses and potential confounds in the approach. The quote from Gordon Gallup, taken in isolation (as in The Great Beyond blog entry), makes it sound as though he is a disinterested party brought in as part of the investigation of scientific fraud. In fact he appears to be a regular scientific critic of Hauser’s work. Gallup might be right, but I don’t like the way scientific debate is being conflated with scientific misconduct here.
Additional reading:
Harvard Magazine
Retraction Watch: including the text of the retraction to be published and a comment on Hauser serving as associate editor at the journal when his paper was handled.
Neuron Culture
John Hawks Weblog
melodye at Child’s Play
New York Times
New Scientist
__
Disclaimer: I come from a behaviorist tradition and am more than a little skeptical of the comparative cognition tradition that Hauser inhabits.
Innovation in NIH grants is not "hard to define"
August 10, 2010
Comrade PhysioProf recently noted another post on grant review scoring data from the NIGMS Director, Jeremy Berg. One of the comments over there, from anon reviewer, speculated that the Innovation criterion score is only poorly associated with the Overall Impact score because of reviewer confusion. The comment suggested reviewers struggle to define Innovation, as if they do not know what it means.
Nonsense. I replied as follows:
AR, I would suggest the “difficulty” reviewers have with the Innovation criterion is not confusion over what it really means. Rather it is *resistance* to the notion that Innovation should be more important than Approach and Significance. They just are not on board with this top-down emphasis of the NIH. So they strive to djinn up Innovation compliments for apps that are obviously lacking innovation because they like the approach and/or significance.
Right? Reviewers are not idiots. When you see some gibberish in the Innovation section of the critique written by your fellow reviewers you do not conclude they are fools. You conclude, quite rightly, that the reviewer liked an application that lacks any sign of innovation for other reasons.
The idea that NIH-funded science should be all Innovation, all the time, is idiocy. In the extreme. We’d never get anywhere without people doing the unglamorous work to follow up, verify, utilize, translate, generalize, extend and connect with the most innovative science.
Reviewers know this, which is precisely why the NIH obsession with innovation fails to translate to study section reviewing behavior.
__
added: I urge my readers to go over to the NIGMS Feedback Loop blog and comment. If you want to see more of this type of data out of other NIH Institutes and Centers, it seems obvious to me that a show of interest on the part of the NIH-funded extramural research force (not just PIs, everyone) would be a good thing.