Bad Timing

February 9, 2012

One occasionally puts the pressure on to submit one’s paper on topic X with enough lead time to have a prayer of a decision in hand before submitting a grant proposal on X.

Unfortunately this may mean that the manuscript reviewers are busy trying to wrap up their own grant applications over the same timeframe.

Guess which job takes priority?


In other news, have you ever submitted a manuscript to a particular journal in the hopes that Associate Editor Jones who just so happens to be on a particular study section will see that it exists?

As you are aware, calls to boycott submitting articles to, and reviewing manuscripts for, journals published by Elsevier are growing. The Cost of Knowledge petition stands at 4694 as of this writing. Of these some 623 signatories have identified themselves as being in Biology, 380 in Social Sciences, 260 in Medicine and 126 in Psychology.

These disciplines cover the sciences and the scientists I know best, including my own work.

There seems to be some dismay in certain quarters with the participation of people in these disciplines. This is based, I would assume, on a seat-of-the-pants idea that there are way more active scientists in these disciplines than seem represented by signatures on the petition. It is also based, I surmise, on the host of journals published by Elsevier that cater to various aspects of these broader disciplinary categories.

Others have pointed out that in certain cases, such as Cell or The Lancet, there is no way a set of authors are going to give up the cachet of a possible paper acceptance in that particular journal.

I want to address some more quotidian concerns.

I already mentioned the notion of academic societies which benefit from their relationship with Elsevier. Like it or not, they host a LOT of society journals. Sometimes this is just ego and sometimes the society might really be making some actual cash from the relationship. For those scientists who really love the notion that their society has its own journal, this needs to be addressed before they will get on board with a boycott.

Moving along we deal with the considerations that go into selection of a journal to publish in. Considerations that are not driven by Impact Factor since within the class of society journals, such concerns fade. The IFs are all really close, even if they do like to brag about incremental improvement, or about their numerical advantage over a competitor. Yes, 4.5 is better than 4.3 but c’mon. Other factors come into play.

Cost: Somewhere or other (was it Dr. Zen?) someone in this discussion brought up the notion that paying Open Access fees upfront is a big stumbling block. Yes, in one way or another the taxpayers (state and federal in the US) are footing the bill but from the perspective of the PI, increasing library fees to the University don’t matter. What matters are the Direct Cost budgets of her laboratory (and possibly the Institutional funds budget). Sure, OA journals allow you to ask for a fee waiver…but who knows if they will give it? Why would you go through all that work (and time) to get the manuscript accepted just to have to pull it if they refuse to let you skip out on the fee? I mean, heck, $1,000 is always handier to have in the lab than being shunted off to the OA publisher, right? I don’t care how many R01s you have…

Convenience: The online manuscript handling system of Elsevier is good. I’ve had experience with a few others, ScholarOne-based systems, etc. Just heard a complaint about the PLoS system on the Twitts the other day, as it happens. Bottom line is that the Elsevier one works really well. Easy file uploading, fast PDF creation, reasonably workable input of all the extraneous info…and good progress/status updating as the manuscript undergoes peer review and decision-making at the editorial offices. This is not the case for all other publishers/journals. And what can I say? I like easy. I don’t like fighting with file uploads. I don’t like constantly having to email the managing editorial team to find out if my fucking manuscript is out for review, back from review, sitting on the Editor’s desk or what. And yeah, we didn’t have that info back in the day. And knowing the first two reviews are in but the journal is still waiting for the third one doesn’t really change a damn thing. But you know what? I like to see the progress.

Audience: One of the first things I do, when considering submitting to a journal in which I do not usually publish, is to keyword search for recent articles. Do they publish stuff like the one we’re about to submit? If yes, then I feel more comfortable in a general sense about editorial decision making and the selection of relevant reviewers. If no…well, why waste the time? Why start off with the dual problem of arguing the merits of both the specific paper and the general topic of interest? Now note, this is not always a valid assumption. I have a clear example in which the journal description seemed to encompass our work…but if you looked at the papers they generally published you’d think we were crazy to submit there. “But they only publish BadgerDigging Studies, not a BunnyHopper to be seen” you’d say. Well, turns out we didn’t have one lick of trouble about topic “fit” from that journal. Go figure. But even with that experience under my belt, I’m still gonna hesitate.

Editor (friendly): Yes, yes, I frequently point out how stupid and wrong we are when trying to game out who is going to respond favorably to our grant proposals. Same thing holds for paper review. But still. I can’t help but feel that I’ve gotten more editorial rulings going my way from editors that I know personally, know they know my work/me and suspect that they are at least 51% favorable towards me/my submissions. The hit rate from people that I’m pretty convinced don’t really know who I am seems somewhat reduced. So yeah, you are damn right I am going to scrutinize the Editorial board of a journal for signs of a friendly name.

Editor (unfriendly): Again, I know it is a fool’s errand. I know that just because I think someone is critical of our work, or has a personal dislike for me, this means jackall. Heck, I’ve probably given really nice manuscript and/or grant reviews to scientists who I personally think are complete jerks, myself. But still… it is common enough that biomedical scientists see pernicious payback lurking behind every corner. Perhaps with justification?

I don’t intend to just stay mad, but to get fucken EVEN the next time I’m reviewing one of theirs. Which will fucken happen. It will.

So yeah, many biomedical scientists are going to put “getting the damn paper accepted already” way up above any considerations about Elsevier’s support for closing off access to tax-payer funded science. Because they feel it is not their fight, yes, but also because it has the potential to cost ’em. This is going to have to be addressed.

On a personal note, PLoSONE currently fails the test. There are some papers starting to come out in the substance abuse and behavioral pharmacology areas. Some. But not many. And it is hard to get a serious feel for the whole mystique over there about “solid study, not concerned about impact”. Because opinions vary on what represents a solid demonstration. Considerably. Then I look at the list of editors that claim to handle substance abuse. It isn’t extensive and I note at least a few…..strong personalities. Surely these individuals are going to trigger friendly/unfriendly issues for different scientists in their fields. Even worse, however, is the fact that many of them are not listed as having edited any papers published in PLoSONE yet. And that would totally concern me if I were considering submitting to that journal instead of one of the many Elsevier titles that might work for us.

Rude Questions

February 6, 2012

I was trying to make a point on the Twittah to a couple of people who were asking someone about the latest grant score.

Would you rather be asked, in public, for your 1) age, 2) weight or 3) latest #NIHGrant score?

As usual, the more I think about my offhand question, the more curious I become. I know I had a post related to this in the past but there are new Readers and I can’t recall the exact focus of the last post.

So what has been your experience Dear Reader? As a noobacious Asst Prof were you made aware of scores and how many grants your Associate and Full Professor Departmentmates were putting in? How about your lateral peers? What about you more-senior types? Are grant scores to be freely discussed or is it SimplyNotDone in polite society?

but you are nevertheless absolutely straight on target with:

And the way that we do this is not by telling one of these poor fuckes not to send their beautiful work to a particular prominent journal for political reasons. Rather, we fight tooth and nail on hiring, tenure/promotion, and grant review committees against the abdication of responsibility for judging the importance and interest of particular lines of research to non-scientist editors at legacy “high-impact” journals.

I’m reminded of this post because we’ve been talking a little bit around these parts about peer review and the role of publishers in said mechanism of scientific quality control. Also because Rosie Redfield has submitted her paper addressing the Arsenic-Life-Debacle to Science and has put it up on arXiv and begged for “open peer review”. This is a situation that has the OpenAccess, peer-review-is-dead types all a’slaver.

This first went up Nov 3, 2008, originally at the site.

The rather stimulating discussion that arose following Isis the Scientist’s critique of a recent study prompted additional blog posts from YHN, Coturnix and drdrA (so far). At the root of much of the bloviating is a comment the original article’s authors posted at Isis’ blog which appeared to recommend that the most appropriate venue for critical discussion of a scientific paper is the Letters-to-Editors section of the journal in question. Reading additional commentary from the original authors on various of the blogs I am not convinced they meant this point to be as absolute as it came across but it does get me thinking about such discussions.
And I thought I would overview one such prior discussion that bears on some of my posts on MDMA.

Vollenweider and colleagues published a paper in 1998 on the “Psychological and cardiovascular effects and short-term sequelae of MDMA (“ecstasy”) in MDMA-naive healthy volunteers.”

Vollenweider FX, Gamma A, Liechti M, Huber T. Neuropsychopharmacology. 1998 Oct;19(4):241-51 [publisher link; I think this and the following letters are freely available, if not check the MAPS database linked in the sidebar]. In this study they administered a 1.7 mg/kg oral dose of MDMA to human volunteers.

A letter to the editors of the journal was submitted by Gijsman and colleagues in which they objected to the study:

…we think this study should not have been performed because of the risk … the authors state that “animal research strongly suggests that a single recreational dose of MDMA is unlikely to produce long-term serotonergic deficits in humans”. We disagree with this assertion for the following reason: Repeated administration of MDMA to animals leads to damage of serotonergic axons and terminals which regenerate only to a certain extent and in a very abnormal manner (Fischer et al. 1995). This damage is associated with decreased concentrations of serotonin in the brain. Single dose MDMA also causes a rapid, biphasic decrease in concentration of serotonin in the brains of animals: concentration drops within 1-3 hours, restores within 24 hours, and drops again after 24-36 hours, lasting for months or even a year (Steele et al. 1994). Therefore it cannot be excluded and even seems likely that administration of a single dose of MDMA to humans causes damage of serotonergic neurons. Even more because primates seem to be more sensitive to both acute and chronic effects of MDMA: in rats, 10 mg/kg (Colado et al. 1995) and in monkeys, 5 mg/kg causes these effects (Ricaurte et al. 1988), a dose that closely approaches the usual recreational dose. If MDMA were a newly developed drug it would almost certainly not be allowed in clinical phase I studies on these grounds.

Readers will recall that I have an interest in these very issues in large part because of ongoing clinical trials (which must certainly surprise the letter authors!).

This letter resulted in a defense from the original study’s authors which is extensive, citation heavy and freely available so I won’t quote it extensively. Suffice it to say they argued that there was little evidence from either human or animal studies that a single 1.7 mg/kg dose of MDMA was likely to lead to lasting damage. There was one pertinent hook to the continuing story:

In this respect, Ricaurte and colleagues’ study (cited by the authors), where a single dose of 5 mg/kg MDMA was given orally to monkeys (Ricaurte et al. 1988), is probably the closest approach to the dose regimen used in our human study. This study found a reduction in 5-HT and 5-HIAA content of about 20% in the thalamus and hypothalamus 2 weeks postdrug. However, as shown above, this does not permit any conclusions as to a possible loss of 5-HT terminals.

Presumably since the Gijsman et al letter suggested there were potential ethical issues at hand with respect to the treatment of human subjects, the editors also addressed the issue. Lieberman and Aghajanian conclude:

Thus, it would appear that while Gjisman et al. (1999) raise valid concerns about this type of research in general and this study in particular, the data do not support the view that single oral doses at 1.7 mg/kg of MDMA (which was one third of the doses used in monkeys by Ricaurte et al. (1988)) are likely to produce damage to serotonin terminals.

These comments would count as trolling a colleague in the blogosphere and sure enough, McCann and Ricaurte soon contributed a viewpoint. Their argument relied on some reasonably accepted* inter-species dose scaling principles.

Unfortunately, when extrapolating the animal data to humans, Dr. Vollenweider et al. (1999) and the Neuropsychopharmacology editorial omitted a critical and fundamental factor in their calculations: the principle of interspecies drug dose scaling (see Mordenti and Chappell 1989). This principle, which is based upon the underlying anatomical, physiological, and biological similarities among mammals, permits researchers to extrapolate animal data to human beings under a variety of experimental conditions. Put simply, smaller animals require higher dosages of drug, on a mg/kg basis, to achieve the same effect. Stated mathematically: Dhuman = Danimal(Whuman / Wanimal)^0.7 where D = dose of drug in milligrams (mg) and W = weight in kilograms (kg). If the known single oral neurotoxic dose of MDMA in a monkey (5.0 mg/kg in a 1 kg monkey) is substituted into the equation, the equivalent dose in a human being weighing 70 kg is 1.4 mg/kg, slightly lower than the 1.7 mg/kg dose used by Vollenweider and colleagues. When identical calculations are carried out using rodent neurotoxicity data (O’Shea et al. 1998), the equivalent MDMA dose in humans is approximately 1.7 mg/kg.

Readers will recall that I’ve tended to ignore this species-scaling argument when discussing the MDMA doses used in the human clinical trials. Instead, I limited my discussion to the size of the typical subject range and the supplemental dosing practices. It is worth recognizing here that most comparisons of dose / exposure to a drug are estimates. Differences in human body size mean that human clinical prescriptions (including OTC recommendations) for a fixed mg dose of drug only get in the ballpark. Differences in individual metabolic and other within-species factors mean that using a mg dose per kg bodyweight approach is still only an approximation. Differences in the way two species metabolize different classes of compounds may mean that a dose-scaling equation such as described above holds true for certain types of drugs but not others!
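If you want to play with the McCann and Ricaurte scaling equation yourself, a minimal sketch (the function name and structure are mine, not from the letter) looks like this:

```python
# Allometric interspecies dose scaling, per the equation quoted above:
#   D_human = D_animal * (W_human / W_animal) ** 0.7
# where D is total dose in milligrams and W is body weight in kilograms.

def scale_dose_mg_per_kg(animal_dose_mg_per_kg, w_animal_kg, w_human_kg):
    """Scale a per-kg animal dose to the equivalent per-kg human dose."""
    d_animal_mg = animal_dose_mg_per_kg * w_animal_kg           # total animal dose, mg
    d_human_mg = d_animal_mg * (w_human_kg / w_animal_kg) ** 0.7
    return d_human_mg / w_human_kg                              # back to mg/kg

# The monkey example from the letter: a 5.0 mg/kg dose in a 1 kg monkey,
# scaled to a 70 kg human, lands right around 1.4 mg/kg.
print(round(scale_dose_mg_per_kg(5.0, 1.0, 70.0), 1))  # → 1.4
```

Which reproduces the letter’s point: the “safe” 1.7 mg/kg human dose sits uncomfortably close to the scaled monkey neurotoxic dose.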

Ultimately, additional data are required to resolve a tight threshold question such as “Is 1.7 mg/kg MDMA likely to cause lasting damage?”.
One more response letter from the senior author on the study in question, Vollenweider:

Allometric interspecies scaling is based on the fact that the regression of the logarithm of a pharmacokinetic parameter and the logarithm of species weight is generally linear. As a result, pharmacokinetic (and therefore pharmacodynamic, including toxicity) parameters for a given drug can be estimated in any species if this linear relationship is determined (Ings 1990). Yates and Kugler (1986) and others have criticized the potential inaccuracy of interspecies scaling, noting the 10-fold range of estimates that may be derived depending on which pharmacokinetic and corrective factors are thought relevant. Accordingly, the accuracy of the technique is dependent on the availability of sufficient data.

Vollenweider then throws up some additional smoke screen regarding mechanism of toxicity and potential toxic metabolites but I think the point was made with the general observation. We simply did not then, and still do not now, have very tight estimates about the likely threshold for MDMA-induced damage of a lasting nature. I think the fact that MDMA itself is capable of producing essentially permanent alterations in brain serotonin function is not debatable but the question of threshold is certainly not well resolved.

Getting back to the meta-point of this post, I return to the question of the venue for discussion. Very occasionally a really good discussion of a scientific paper or finding breaks out via exchange of Letters-to-the-Editor in a scientific journal. In my experience, however, this is quite a rare event. The one I detail here was very useful although admittedly many people in the field were already discussing these dose-scaling issues. Still, it brings the discussion out into the open where any first-year graduate student getting up to speed on the field would run across it. A GoodThing.

This is the sort of discussion of a paper that I believe the journals which put effort into such things (such as Nature and PLoS) desire. Their online discussion formats seem to beg for more discussion and wax disappointed with the current state of affairs. The anonymous-comment blog format, however, seems to have no difficulty supporting interesting and fruitful discussion of scientific papers.
*as always I’m not really a pharmacologist and would welcome any necessary correction from experts such as Abel.
Mordenti J, Chappell W (1989): The use of interspecies scaling in toxicokinetics. In Yacobi A, Kelly J, Batra V (eds), Toxicokinetics and New Drug Development. New York: Pergamon Press, pp 42-96.

It’s been a while since I last talked about MDMA, aka 3,4-methylenedioxymethamphetamine, the canonical ingredient in Ecstasy. I phrase it this way because the street drug sold as Ecstasy is notoriously promiscuous in terms of psychoactive drug content. Stroll on over to if you are new to this topic.

Not that there haven’t been more emergencies and deaths, including ones that didn’t involve MDMA but something else, like PMMA. And yes, the MAPS folks are marching on with great dispatch, dosing more and more people with MDMA in the context of trying to prove it an effective adjunct to psychotherapy for PTSD. So, you know, I keep up with my interests as expressed in earlier days on the blog, I just don’t necessarily bore you with it.

There’s a human laboratory paper I’ve been looking at that makes a point semi-related to some of the above issues. It’s from the laboratory of Carl Hart (who I profiled a few years ago as part of D.N. Lee’s Diversity in Science Blog Carnival.)

Kirkpatrick MG, Gunderson EW, Perez AY, Haney M, Foltin RW, Hart CL. A direct comparison of the behavioral and physiological effects of methamphetamine and 3,4-methylenedioxymethamphetamine (MDMA) in humans. Psychopharmacology (Berl). 2012 Jan;219(1):109-22. Epub 2011 Jun 30. [PubMed]

The essence of the design is that it was a human laboratory study with a repeated measures design. They orally dosed the subjects with inactive placebo, 100 mg of MDMA and both 20 and 40 mg of methamphetamine with these treatment conditions separated by 3 days. A series of cognitive, physiological and self-report assessments were conducted; I’m not going to overview the findings here, you can go read the paper for yourself.

The interesting part about this paper for today’s discussion is that the subjects were really bad at identifying the drug that they’d been given. Keep in mind that the subjects had to have prior experience with both methamphetamine and MDMA. I imagine there are few people in the audience that are not aware that at least the mean, reported subjective effects of MDMA and methamphetamine differ considerably. Although it does have a prototypical psychomotor stimulant character to it, MDMA’s subjective properties have people reaching for new terms like “entactogen”. Likening it to a hybrid of a classical hallucinogen and a stimulant. Insisting vociferously that it is different.

This ties into the question of the pharmacological diversity of the recreational “Ecstasy” market, people’s ability to know what they have just taken, etc. Which may influence their decision to take more drug later on, to take more tablets in the original dose, etc. It also plays into the blinding that might otherwise be assumed to be impossible in the clinical trials and their occasional selection of something else like methylphenidate as their control drug.

Kirkpatrick and colleagues report:

On the questionnaire probing what drug the participants thought they had received, 72.7% of participants (i.e., eight out of 11) correctly identified placebo (18.2% reported MDMA and 9.1% reported sedative; confidence rating= 72.7±9.5), 45.5% correctly identified 20 mg methamphetamine (45.5% reported MDMA and 9.1% reported placebo; confidence rating=76.7±13.3), 72.7% correctly identified 40 mg methamphetamine (27.3% reported MDMA; confidence rating=80.1±5.6), and 45.5% correctly identified 100 mg MDMA (27.3% reported methamphetamine and 27.3% reported sedative; confidence rating=87.6±5.2).
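As a quick back-of-the-envelope check (my arithmetic, not the paper’s): with n = 11 participants, each quoted percentage should correspond to a whole number of subjects out of 11, and it does:

```python
# Recover the subject counts behind the identification percentages quoted above.
# With n = 11 participants, each percentage must be (whole number) / 11.
n = 11
for label, pct in [("placebo correctly identified", 72.7),
                   ("20 mg methamphetamine correctly identified", 45.5),
                   ("40 mg methamphetamine correctly identified", 72.7),
                   ("100 mg MDMA correctly identified", 45.5)]:
    count = round(pct / 100 * n)
    print(f"{label}: {count}/{n} = {count / n:.1%}")
```

So only 5 of 11 experienced users pegged 100 mg of MDMA as MDMA, which is the striking number here.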

Now, just for reference, the 100 mg MDMA and 40 mg methamphetamine conditions resulted in approximately the same effects on heart rate, blood pressure and self-report measures of “good drug effect” and “feeling stimulated”. So no need to go looking there for reasons. This isn’t some sort of meta assessment of physiological responses or a good/bad drug binary decision. These compounds must produce subjective effects that are pretty indistinguishable. They did differ in group terms on several of the outcome measures so this really does focus on the subject’s awareness and not on the actual effects, so to speak.

And do recall this was a controlled laboratory study in which the environment was relatively invariant compared with potential differences in environments in which Ecstasy is consumed in the natural setting. There is every reason to expect that situational variables and expectations would hugely influence the subjective response.

My consideration for the blog topics is this. When someone starts going on confidently about knowing the purity and/or nature of other non-MDMA constituents of street Ecstasy they have consumed, this is unlikely to be a credible assertion. In either direction. I.e., it is as dubious if they claim to have the pure stuff as if they claim it “must” have been contaminated with methamphetamine.

Unfortunately the study did not manipulate MDMA dose so we’re unable to extend our interpretation in another obvious direction which would be whether or not individuals were very good at identifying how much MDMA they had consumed. I’m betting not very good at this either but we’ll have to wait on another study for that evidence.