From this Op-Ed.
The Institute of Medicine has recently released a report outlining the ominous public-health threat of chronic hepatitis C, much of which is the result of unwitting infection through medically-necessary blood transfusions, leading to 350,000 deaths worldwide each year and infecting more than three to five times as many people in the United States as HIV.
Narsty isn’t it? We should get right on that, don’t you think? Any decent models for research?
Currently, chimpanzees are the only experimental animal, except for humans themselves, susceptible to infection with hepatitis C. The Great Ape Protection Act would end the use of chimpanzees in biomedical research, grinding promising studies to a halt and unconscionably delaying the release of anti-viral therapies and a vaccine for chronic hepatitis C.
Whoops.
I stumbled back onto something I’ve been meaning to get to. It touches on the ethical use of animals in research, the oversight process for animal research, and the way we think about scientific inference.
Now, as has been discussed here and there in the animal use discussions, one of the central tenets of the review process is that scientists attempt to reduce the number of animals wherever possible. Meaning without compromising the scientific outcome, the minimum number of subjects required should be used. No more.
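Reduction, in practice, is a power calculation: find the smallest group size that still gives a decent chance of detecting the effect you care about. Here is a minimal simulation sketch of my own (not from any official guidance), assuming Gaussian data with unit variance and a plain two-sample z test:

```python
import random
from statistics import NormalDist, mean

random.seed(0)
nd = NormalDist()  # standard normal, used to turn z into a p-value

def ztest_p(a, b):
    # Two-sample z test with known sigma = 1 (true for our simulated data).
    n = len(a)
    z = (mean(a) - mean(b)) / (2.0 / n) ** 0.5
    return 2 * (1 - nd.cdf(abs(z)))

def power(n, effect, trials=2000):
    # Fraction of simulated experiments reaching p < 0.05 when the
    # effect is genuinely present.
    hits = 0
    for _ in range(trials):
        a = [random.gauss(effect, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        if ztest_p(a, b) < 0.05:
            hits += 1
    return hits / trials

for n in (4, 8, 16, 32):
    print(f"n = {n:2d} per group: power ~ {power(n, effect=1.0):.2f}")
```

In this toy setup, a one-standard-deviation effect needs roughly 16 subjects per group to hit the conventional 80% power; fewer risks a doomed experiment, more is arguably waste. A real protocol would of course plug in the variance and statistical test appropriate to the actual assay.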
We accept as more or less bedrock that a result is meaningful if it meets the appropriate statistical test at the standard of p < 0.05. Meaning that if you drew samples like yours 100 times from the same underlying population, fewer than five times would chance alone hand you a result this extreme. From which you conclude it is likely that the populations are in fact different.
There is an unfortunate tendency in science, however, to believe that if your statistical test returns p < 0.01, this result is better. Somehow more significant, more reliable or more…real. On the part of the experimenter, on the part of the supervising lab head, on the part of paper reviewers and on the part of readers. Particularly the journal club variety.
False.
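To make that definition concrete, here is a quick simulation of my own devising: draw two groups from one and the same population, note the difference you happened to observe, then ask how often chance alone does at least as well. That frequency is the (empirical) p-value.

```python
import random
from statistics import mean

random.seed(42)

def sample(n):
    # Everything is drawn from the SAME population: there is no real effect.
    return [random.gauss(0.0, 1.0) for _ in range(n)]

# One "experiment": the difference you happened to observe.
observed = abs(mean(sample(8)) - mean(sample(8)))

# How often does pure chance produce a difference at least this large?
trials = 10_000
as_extreme = sum(
    abs(mean(sample(8)) - mean(sample(8))) >= observed
    for _ in range(trials)
)
print(f"observed |difference| = {observed:.3f}, "
      f"empirical p = {as_extreme / trials:.3f}")
```

The flip side is exactly the point above: when there is no real effect, p-values are uniformly distributed, so a p that happens to land below 0.01 rather than 0.05 does not make the underlying effect any more “real”.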
Scientific Research 101: Results!
March 24, 2010
So you’ve just completed your last assays on physioprofitin signaling in the Namnezian complex. Lo and behold it is qaz-mediated, just like you suspected and the beccans are off the freaking chart. woot! PiT/PlS ratios are within relevant physiological ranges and still this work of art, your labor of love, came through with the experimental goods.
With a hope and a prayer you run your stats….and YES! p < 0.01!!!!!!!!!!!!!!!
What is the correct way to report your big Result?
The statistical analysis ____________ qaz-mediated upregulation of physioprofitin in the Namnezian complex.
doin it right
March 20, 2010
[ Please welcome our guest blogger, who identifies as robin, just your average everyday neuropharmacologist. -DM ]
One of the most important yet overlooked tasks of the average pharmacologist is dissolving drugs into solution. Those of you who work with things that don’t have to cross the blood-brain barrier probably have a generally easier time dissolving shit than those of us who prefer to study CNS-active compounds. For those of us who play with compounds that are hydrophobic enough to cross the blood-brain barrier, I can testify that these range from fairly easy to major suck to put into an aqueous solution.
I recently introduced a paper on the discriminative stimulus properties of cathinone analog drugs with reference to the recent emergence in the popular media of an analog called 4-methylmethcathinone (4-MMC), aka mephedrone (2-methylamino-1-p-tolylpropan-1-one), Meow-Meow or MMCAT. “Plant food” is the name under which 4-MMC is apparently being marketed in the UK, given that the compound itself is not controlled but it is illegal (I surmise) to sell things as “legal ecstasy” or “legal methamphetamine” or similar. There has been one fatality attributed* to 4-MMC that I can find and a few bits of seized-drug analysis confirming that the stuff is indeed being used.
An early report of a fatality associated with consumption of the drug in Sweden resulted in placement of mephedrone on the controlled list. The followup in the Swedish press shows that the woman was reported to have consumed mephedrone (confirmed post-mortem) and smoked cannabis (no apparent confirmation; alcohol and other narcotics excluded postmortem) and then collapsed. Emergency services were unable to revive her and she died a day and a half later; symptoms of brain swelling, stroke, hyponatremia and hypokalemia were mentioned, as well as a low body temperature of 33 degrees C.
The story has heated up recently in the UK press after the death of two individuals who are, at present, suspected of taking 4-MMC/mephedrone, reportedly in combination with methadone (an opiate) and alcohol. As I mentioned before, a quick scan of PubMed finds little reported on the effects of this compound in animal models or in humans.
So the question is, scientists, what next?
Let’s play virtual science, shall we?
Scholars and Teachers on Divergent Paths
March 18, 2010
Via Female Science Professor: a recent news bit in the Chronicle of Higher Education details another case in which the alleged three-legged stool of Professorial careerdom (teaching, research, service) is revealed to stand on only one leg: research.
his department’s tenure-and-promotion guidelines… were revised in 2000, shortly after he had received the university’s Distinguished Teaching Award and a similar prize from a statewide association of governing boards.
Under the revised criteria, faculty members are given many more points for supervising graduate students than for teaching undergraduate courses. “I can teach an undergraduate course with 44 students and get only three points,” Mr. Vable says. “But a faculty member who supervises a graduate student gets 19 points and can be released from course duty. So that totally skewed the algorithm.”
Well, at least they are up front about it.
A survey on "science blogs"
March 16, 2010
After I read the now-infamous paper by I. Kouper, entitled “Science blogs and public engagement with science: practices, challenges, and opportunities”, I was left in some confusion as to how the author selected 11 blogs to study. I was also curious about what my readers thought of when asked to generate a list of “science blogs”, so I asked them. I left the request as general as possible because I was interested in what “science blog” meant as much as in specific examples.
For your entertainment and edification, I tabulated* the results from the 31 answers supplied as of this writing.
Discriminating Cathinone Analogs
March 15, 2010
My Google news alert for MDMA, Ecstasy and the like has been turning up references to a cathinone analog called variously 4-methylmethcathinone (4-MMC), mephedrone (2-methylamino-1-p-tolylpropan-1-one), Meow-Meow, MMCAT and a few other things. There has been one fatality attributed* to 4-MMC that I can find and a few bits of seized-drug analysis confirming that the stuff is indeed being used. A quick scan over at PubMed finds little reported on the effects of this compound in animal models or in humans. I did, however, run across an article on other cathinone analog drugs that caught my attention.
The newspaper reports on 4-MMC coming out of the UK, for the most part, are experiencing the usual difficulty in characterizing the subjective properties of an analog of a stimulant class of drugs. This is not dissimilar to the case of MDMA and relatives such as MDA and MDEA/MDE, which are structurally similar to amphetamine and methamphetamine but convey subtly different subjective properties. This also gives me an opportunity to talk about an animal model used quite a bit in drug abuse studies: the drug-discrimination assay. The paper of interest is the following one.
Cathinone: an investigation of several N-alkyl and methylenedioxy-substituted analogs. Dal Cason TA, Young R, Glennon RA. Pharmacol Biochem Behav. 1997 Dec;58(4):1109-16. (DOI)
An exercise for my readers
March 15, 2010
Off the top of your head, when you think “science blog”, which specific blogs, collectives, aggregators, etc come to mind?
List up to 11 in the comments.
(I’ll moderate comments for a little while today to avoid contamination)
My philosophy on annoying NIH grant reviews in a nutshell
March 12, 2010
We’ve been talking about grants getting spiked by one outlying, jerk of a Reviewer #3 lately. Here and elsewhere. I have one guiding philosophy when it comes to disappointing grant reviews.
Pardon my PhysioProffish.
Dealing with the 12 page NIH grant format
March 11, 2010
Many of my readers will have already faced the joys of the shorter NIH Grant application. Briefly, the meat of the R01 proposal has now been reduced from 25 to 12 pages and the meat of the R21 from 15 pages to 6. As I observed when the Notice appeared, this is a challenge.
Since I am finally getting serious about trying to write one of these new format grants, I am thinking about how to maximize the information content. One thought that immediately strikes me is….cheat!
I’ve been meaning to pick up on a comment made by a reader over at writedit’s epic thread on NIH paylines, scores and whatall. (If you want to swap war stories and score/IC payline grumbling, that is the hot place in town.) The guy was ticked off about a recent review he received and had a question:
I am an established investigator. I submitted a competing renewal … I got a score of 40 (37th percentile). I was very shocked and disappointed to find out that my application had a preliminary score of 2.7 (which would have been fundable) but it seems one negative reviewer carried the day, and convinced the others to pull down the score. I have not yet seen the comments, but if the comments have factual errors, especially from the negative reviewer, can I appeal the review and request a re-review?
Recently, as luck would have it, a loyal reader of the blog submitted the following scores, received on the review of her R01 grant proposal. Under the new scoring procedures in place since last June, these are scores which each reviewer suggests for criteria of Significance, Investigator, Innovation, Approach and Environment. I may have slightly re-ordered specific scores for concealment purposes but this is essentially the flavor.
rev#1: 2,1,1,1,1
rev#2: 2,2,3,3,1
rev#3: 3,2,5,4,2
It really is always Reviewer #3, isn’t it?
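The spread is easy to eyeball. To be clear, the NIH overall impact score is a separate number each reviewer assigns, not a formula over the criterion scores; the per-reviewer mean below is just a toy summary of the lists above:

```python
scores = {
    "rev#1": [2, 1, 1, 1, 1],
    "rev#2": [2, 2, 3, 3, 1],
    "rev#3": [3, 2, 5, 4, 2],
}

for reviewer, s in scores.items():
    print(f"{reviewer}: criterion mean = {sum(s) / len(s):.1f}")
# rev#3 sits a full two points (on the 1-9 scale) above rev#1.
```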
A recent paper from I. Kouper entitled “Science blogs and public engagement with science: practices, challenges, and opportunities” has been receiving a fair bit of bloggy attention. Of the negative sort. Mostly because the paper purports to report on the state of all science blogging but then cherry-picks a few blogs to generate data, most of which is not actually presented. Furthermore, the paper ends up with a subjective review of blog tone, level and commentary that makes one wonder if the author actually reads blogs at all. It is just that detached from the experience of many of us.
Bora was particularly annoyed and held forth at some length. Additional thoughts were advanced at Cosmic Variance, Panda’s Thumb and Pharyngula.
Since this blog was included in the alleged dataset, narcissistically, I felt I had better point out some more flaws in this paper. Let’s get the hilarious one out of the way first.
Exercise Phys Dudes Join Up
March 8, 2010
Go entertain yourself with the Obesity Panacea blog (previously here).
If you don’t laugh at the Ten Most Annoying Gym Personalities you need to, well, hit the gym.
Welcome Peter and Travis!
Okay, disgruntledocs, NIGMS is listening- go nuts!
March 4, 2010
From the NIGMS Strategic Plan site:
NIGMS has a long-standing commitment to research training and biomedical workforce development. As science, the conduct of research, and workforce needs evolve, we want to be sure that our training and career development activities most effectively meet current needs and anticipate emerging opportunities, and that they contribute to building a highly capable, diverse biomedical research workforce. To this end, we are engaged in a strategic planning process to examine our existing activities and articulate strategies to help us build and sustain the workforce that the nation needs for improving health and global competitiveness.
We are seeking broad input for this planning effort from university and college faculty members and administrators, current and former predoctoral and postdoctoral trainees, industry representatives, representatives of professional and scientific organizations, and other interested parties.
From March 2 to April 21, 2010, you may give us your input on our Web site in response to the series of questions below. These submissions will be completely anonymous.
1. What constitutes “success” in biomedical research training from the perspectives of an individual trainee, an institution, and society?
2. What can NIGMS do to encourage an optimal balance of breadth and depth in research training?
3. What can NIGMS do to encourage an appropriate balance between research productivity and successful outcomes for the mentor’s trainees?
4. What can NIGMS do through its training programs to promote and encourage greater diversity in the biomedical research workforce?
5. Recognizing that students have different career goals and interests, should NIGMS encourage greater flexibility in training, and if so, how?
6. What should NIGMS do to ensure that institutions monitor, measure, and continuously improve the quality of their training efforts?
7. Do you have other comments or recommendations regarding NIGMS-sponsored training?