MDMA Case Reports
May 22, 2007
The singular of data is “anecdote”.
We all know this hoary old scientific snark. Pure Pedantry ponders the utility of Case Reports following a discussion of same at The Scientist.
The Pure Pedantry Ponder identifies “rare neurological cases” as a primary validation for the Case Study, but the contribution goes way beyond this. Let’s take YHN’s favorite example, drug abuse science and MDMA in particular. To summarize, when MDMA started coming to the attention of scientists in, oh, the early 80s, the thought was “hmm, looks like an amphetamine, let’s study it like an amphetamine”. This is where the “neurotoxicity”, “locomotor activity” and “drug discrimination” stuff came in. Bouncing forward, we can see the emergence of thermoregulation, ambient temperature and vasopressin as relatively late interests. Where did this come from? The case reports of medical emergency and death. Which, while rare, are not of the singularly “rare neurological case” variety with which we are familiar from Neurology 101. Still, the MDMA Cases play a key role, I submit. Why?
The fact is, animal lab type drug abuse researchers get very far away from their subject matter in some cases. Understandably so. First, there is no guarantee they had much familiarity with the druggies in college in the first place, and MDMA/Ecstasy likely postdated them. “Them” being the PIs who, sadly, as of this writing are very much not likely to be young enough to have been in college in the mid-eighties. So they know all about the cocaine and the weed and the LSD, but emerging drugs? Not so much. Their undergrads and even grad students could tell ’em, but how often do the PIs listen? Then there’s just the general trend that even by the post-doc days scientists are moving away from the youth culture, don’t you know? Finally, while scientists got to do a bit of tasting back in the 70s, those days are long gone.
So instead we have a bunch of out-of-touch old fogies who think MDMA is just another amphetamine, should be studied like just another amphetamine, and can’t see why different approaches are needed. Case reports provide needed stimulus for new experimental approaches, needed support for the argument that other things need to be investigated about this drug, etc. Don’t believe it? Then why did NIDA have to come out with a Program Announcement in 2004 (yes, 2004!) saying, in essence, “For God’s sake will you please submit some proposals other than the mechanisms of so-called serotonin neurotoxicity?” [Current R01 and R21 versions of the PA].
Time will tell, but the field may have missed the boat a bit by not paying enough attention to MDMA-related Case Reports. Giorgi et al have reported a potential sensitization of seizure threshold following a particular regimen of MDMA in mice. Experimentally, this is relatively novel stuff. But reading the case reports with hindsight, there are clear indications of seizure as possibly being a primary cause of medical emergency (it has generally been assumed to be thermogenic seizure subsequent to the high body temperature). Time, and some additional experimentation, will tell if this was a missed foreshadowing from the case reports or not…
So yeah, I find a big role for case reports. Not just for the unique cases but also to lay down a record of some interesting phenomena that might bear looking into in a systematic way.
The reviews are in…
May 22, 2007
Lots of bashing of the peer review process lately. Admittedly Orac has a nice counter, directed at forces external to science but highly relevant to on-the-bus complainers. [Update: another comment on peer review from NeuroLogica]
I have some unusually un-cynical thoughts today. I finally got some reviews back on a recent submission and they touch on much that is wrong and much that is right with manuscript review. First of all, we’re talking a normal journal here, Impact Factor in the 3 range, field specific, working scientist as the editor. Meat and potatoes stuff. The topic of the paper is pretty much in the heart of the journal. It does, however, reflect a slightly contrarian experimental approach which in some ways violates all the “norms” which were established over the past couple-three decades in this area. Nothing earth-shaking, just some experiments which converge on a single point of view, suggesting that no, we don’t always have to do things the canonical way and there is room for some improved models.
One reviewer is…critical. Obsessively so. Detailed point-by-point complaints about the interpretation of results. The flip side is that the other reviewer “gets it”. Very laudatory review, really. Almost makes a better argument for publication than I could make myself. Editor comes in with “may be acceptable pending revision” and some additional critique.
Okay, pretty standard stuff, GREAT, I think, and start beavering away at the responses and revisions. Why am I not ticked, as others seem to be, by the divergent opinions of the reviewers? Well, first of all, let’s face it. In contrast to the dismal reinforcement rate of the grant process, paper review has a fantastic effort/reward relationship. As one luminary in my area pointed out a very long time ago, everything gets published eventually. Especially when the editor seems favorably disposed in the face of at least one rather critical review. But in addition we should all admit that in these cases the truth lies somewhere in the middle. The paper is likely not as good as the favorable review indicates and not as bad as the critical one indicates. The editor serves as mediator over where the mean should fall. This is a good thing. Often the bias is for publication, again a good thing for all of us. As the old saw has it, real peer review starts after publication anyway.
Let’s take the “bad” review first. Yes it IS irritating that some idiot questions our brilliant conclusions, seems willfully to miss the point and can’t see the forest for the trees. However, one of my best mentors once said to me that no matter how bad or stupid the reviews seem, responding to them always results in a better paper. I have found this to be true, sure enough. A related point is that we should understand that the reviewers stand in proxy for our eventual audience. There will be critics and nonbelievers reading your paper if it does make it into print; don’t you want the chance to head off some of the criticism in advance? So the “idiot reviewer” is useful. Finally, heh, strategically, if one wants to make sure the critical review isn’t heeded by the editor, we want it to be as obsessive, critical and idiotic as possible. Personal insults, if possible (yeah, I had one of those recently too!). These can’t possibly help the editor take the reviewer’s side and are therefore a good thing for the authors.
Now the “good” review. This one is tougher. Sure, we all want a cream puff review because, after all, our manuscripts are brilliant as submitted, right? Is this a reflection of the good-old-boys/girls club that those on the “outside” lament? Well, perhaps. The lab group I’m in isn’t big-wiggy for sure, but it IS known. Furthermore, the journals are increasingly requesting advice on who the reviewers should be (and should not), so yeah, we took advantage of that to request people we thought might be friendly. The thing is, who knows? I suspect that when it comes to paper and grant review, we don’t do very well on average at estimating who is going to give us an easy time of it and who is going to rip us apart. Just because you have a drink or two with a colleague and bemoan the state of the funding crisis into your beer doesn’t mean they’ll accept crap science from you! Getting back to the point, I just can’t say. Maybe the “good” reviewer was from our suggested list, but maybe both of them were. Maybe the “bad” reviewer was someone we think of as a friend of the lab and the “good” one was a complete unknown!
Anyway, the system is working today, even if I am spending inordinate amounts of time on a point-by-point rebuttal of idiotic comments….