Commenter mikka wants to know why:

I don’t get this “professional editors are not scientists” trope. All the professional editors I know were bench scientists at the start of their career. They read, write, look at and interpret data, talk to bench scientists and keep abreast of their fields. In a nutshell, they do what PIs do, except writing grants and deciding what projects must be pursued. The input some editors put in some of my papers would merit a middle authorship. They are scientists all right, and some of them very good ones.

Look, yes, you are right that they are scientists, in a certain way. And yes, I regret that my opinion that they are 1) very different from Editors and Associate Editors who are primarily research scientists and 2) ruining science tends to be taken as a personal attack on their individual qualities and competence.

But there is simply no way around it.

The typical professional editor, usually at a Glamour(ish) Mag publication, is under-experienced in science compared with a real Editor.

Regardless of circumstances, if they have gone to the Editorial staff from a postdoc, without experience in the Principal Investigator chair, then they have certain limitations.

It is particularly bad that ass kissing from PIs who are desperate to get their papers accepted tends to persuade these people over time that they are just as important as those PIs.

“Input” merits middle authorship, eh? Sure, anyone with half a brain can suggest a few more experiments. And if you have the despotic power of a Nature editor’s keyboard behind you, sure…they damn well will do it. And ask for more. And tell you how uniquely brilliant a suggestion it all was.

And because it ends up published in a Glamour Mag, all the sheep will bleat approvingly about what a great paper it is.

Pfaagh.

Professional editors are ruining science.

They have no loyalty to the science*. Their job is to work to aggrandize their own magazine’s brand at the cost of the competition. It behooves them to insist that six papers’ worth of work gets buried in “Supplemental Methods” because no competing and lesser journal will get those data. It behooves them to structure the system in such a way that authors will consider a whole bunch of other interesting data “unpublishable” because it got scooped by two weeks.

They have no understanding or consideration of the realities of scientific careers*. It is of no concern to them whether scientific production should be steady, whether uninteresting findings can later be of significance, or whether any particular subfield really needs this particular kick in the pants. It is no concern to them that their half-baked suggestion requires a whole R01-scale project and two years of experiments. They do not have to consider any reality whatsoever. I find that real, working-scientist Editors are much more reasonable about these issues.

Noob professional editors are star-struck and are never, ever able to see that the Emperor is, in fact, stark naked. Sorry, but it takes some experience and block-circling time to mature your understanding of how science really works. Of what is really important over the long haul. Notice how the PLoSFail fans (to pick one recent issue) are heavily dominated by the wet-behind-the-ears types while the critics seem to mostly be established faculty? This is no coincidence.

Again, this is not about the personal qualities of the professional editors. The structure of their jobs, and typical career arc, makes it impossible for them to behave differently.

This is why it is the entire job category of professional editor that is the problem.

If you require authoritah, note that Nobel laureate Brenner said something similar.

It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists.

He was clearly not talking about peer review itself, but rather the professional Glamour Mag type editor.

_
*as well they should not. It is a structural feature of the job category. They are not personally culpable; the institutional limitations are responsible.

The latest round of waccaloonery is the new PLoS policy on Data Access.

I’m also dismayed by two other things of which I’ve heard credible accounts in recent months. First, the head office has started to question authors over their animal use assurance statements, declining to take the statement of local IACUC oversight as valid because of the research methods and outcomes. On the face of it, it isn’t terrible to be robustly concerned about animal use. However, in the case I am familiar with, they got it embarrassingly wrong. Wrong because any slight familiarity with the published literature would show that the “concern” was misplaced. Wrong because if they are going to try to sidestep the local IACUC and AAALAC and OLAW (and their worldwide equivalents) processes, then they are headed down a serious rabbithole of expensive investigation and verification. At the moment this cannot help but be biased, and accusations are going to rain down on non-English-speaking and non-Western-country investigators, I can assure you.

The second incident has to do with accusations of self-plagiarism based on the sorts of default Methods statements or Introduction and/or Discussion points that get repeated. Look, there are only so many ways to say “and thus we prove a new facet of how the PhysioWhimple nucleus controls Bunny Hopping”. Only so many ways to say “The reason BunnyHopping is important is because…”. Only so many ways to say “We used optogenetic techniques to activate the gertzin neurons in the PhysioWhimple nucleus by….”. This one is particularly salient because it works against the current buzz about replication and reproducibility in science. Right? What is a “replication” if not plagiarism? And in this case, not just of the way the Methods are described, the reason for doing the study and the interpretation. No, in this case it is plagiarism of the important part. The science. This is why concepts of what is “plagiarism” in science cannot be aligned with concepts of plagiarism in a bit of humanities text.

These two issues highlight, once again, why it is TERRIBLE for us scientists to let the humanities-trained and humanities-blinkered wordsmiths running journals dictate how publication is supposed to work.

Data depository obsession gets us a little closer to home because the psychotics are the Open Access Eleventy waccaloons who, presumably, started out as nice, normal, reasonable scientists.

Unfortunately PLoS has decided to listen to the wild-eyed fanatics and to play in their fantasy realm of paranoid ravings.

This is a shame and will further isolate PLoS’ reputation. It will short circuit the gradual progress they have made in persuading regular, non-waccaloon science folks of the PLoS ONE mission. It will seriously cut down submissions…which is probably a good thing since PLoS ONE continues to suffer from growing pains.

But I think it a horrible loss that their current theological orthodoxy is going to blunt the central good of PLoS ONE, i.e., the assertion that predicting “impact” and “importance” before a manuscript is published is a fool’s errand and inconsistent with the best advance of science.

The first problem with this new policy is that it suggests that everyone should radically change the way they do science, at great cost of personnel time, to address the legitimate sins of the few. The scope of the problem hasn’t even been proven to be significant and we are ALL supposed to devote a lot more of our precious personnel time to data curation. Need I mention that research funds are tight and that personnel time is the most significant cost?

This brings us to the second problem. This Data Access policy requires much additional data curation, which will take time. We all handle data in the way that has proved most effective for us in our operations. Other labs have, no doubt, done the same. Our solutions are not the same as those of people doing very nearly the same work. Why? Because the PI thinks differently. The postdocs and techs have different skill sets. Maybe we are interested in a sub-analysis of a data set that nobody else worries about. Maybe the proprietary software we use differs and the smoothest way to manipulate data is different. We use different statistical and graphing programs. Software versions change. Some people’s datasets are so large as to challenge the capability of regular old desktop computer and storage hardware. Etc, etc, etc ad nauseam.

Third problem- This diversity in data handling results, inevitably, in attempts at data orthodoxy. So we burn a lot of time and effort fighting over that. Who wins? Do we force other labs to look at the damn cumulative records for drug self-administration sessions because some old-school behaviorists still exist in our field? Do we insist on individual subjects’ presentations for everything? How do we time-bin a behavioral session? Are the standards for dropping subjects the same in every possible experiment? (Answer: no.) Who annotates the files so that any idiot humanities-major on the editorial staff of PLoS can understand that it is complete?

Fourth problem- I grasp that actual fraud and misleading presentation of data happens. But I also recognize, as the waccaloons do not, that there is a LOT of legitimate difference of opinion on data handling, even within a very old and well-established methodological tradition. I also see a lot of will on the part of science denialists to pretend that science is something it cannot be in their nitpicking of the data. There will be efforts to say that the way lab X deals with their, e.g., fear conditioning trials, is not acceptable and they MUST do it the way lab Y does it. Keep in mind that this is never going to be single labs but rather clusters of lab methods traditions. So we’ll have PLoS inserting itself into the role of dictating how experiments are to be conducted and interpreted! That’s fine for post-publication review, but to use that as a gatekeeper before publication? Really PLoS ONE? Do you see how this is exactly like preventing publication because two of your three reviewers argue that it is not impactful enough?

This is the reality. Pushes for Data Access will inevitably, in real practice, result in constraints on the very diversity of science that makes it so productive. It will burn a lot of time and effort that could be more profitably applied to conducting and publishing more studies. It addresses a problem that is not clearly established as significant.

or so asketh Mike Eisen:

There’s really no excuse for this. The people in charge of the rover project clearly know that the public are intensely interested in everything they do and find. So I find it completely unfathomable that they would forgo this opportunity to connect the public directly to their science. Shame on NASA.

This whole situation is even more absurd, because US copyright law explicitly says that all works of the federal government – of which these surely must be included – are not subject to copyright. So, in the interests of helping NASA and Science Magazine comply with US law, I am making copies of these papers freely available here:

FORWARD THE REVOLUTION, COMRADE!!!!!!!

Go Read, and download the papers.

h/t: bill

There should be a rule that you can’t write a review unless you’ve published at least three original research papers in that topic/area of focus.

Also a rule that your total number of review articles cannot surpass your original research articles.

Thought of the Day

September 10, 2013

There seems to be a sub-population of people who like to do research on the practice of research. Bjoern Brembs had a recent post on a paper showing that the slowdown in publication associated with having to resubmit to another journal after rejection costs a paper citations.

Citations of a specific paper are generally thought of as a decent measure of impact, particularly if you can relate it to a subfield size.

Citations to a paper come in various qualities, however, ranging from the totally incorrect (the paper has no conceivable connection to the point for which it is cited) to the motivational (the paper has a highly significant role in the entire purpose of the citing work).

I speculate that a large bulk of citations are to one, or perhaps two, sub-experiments. Essentially a per-Figure citation.

If this is the case, then citations roughly scale with how big and diverse the offerings in a given paper are.

On the other side, fans of “complete story” arguments for high-impact journal acceptances are suggesting that the bulk of citations are to this “story” rather than to the individual experiments.
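
To make the contrast concrete, here is a toy sketch of the two accounting models. The function names and every number in it are purely my own illustrative assumptions, not anything derived from real citation data: if citations mostly accrue per Figure, expected citations grow roughly linearly with the number of sub-experiments; if they mostly accrue to the “story”, piling on Figures buys comparatively little.

```python
# Toy comparison of the two citation-accrual models sketched above.
# Every number is a made-up illustrative assumption, not an estimate from real data.

def per_figure_model(n_figures, cites_per_figure=3.0):
    """Citations accrue to individual sub-experiments: a per-Figure citation."""
    return n_figures * cites_per_figure

def complete_story_model(n_figures, story_cites=15.0, per_figure_bonus=0.5):
    """Most citations accrue to the overall 'story'; extra Figures add little."""
    return story_cites + n_figures * per_figure_bonus

# e.g., a two-figure LPU, a mid-sized paper, a Glamour-scale "complete story"
for n in (2, 4, 8):
    print(f"{n} figures: per-figure model ~{per_figure_model(n):.0f} cites, "
          f"story model ~{complete_story_model(n):.0f} cites")
```

If the per-figure accounting is closer to reality, the citation advantage of the big multi-figure papers comes from bundling more publishable units together, not from any extra influence of the “complete story” itself; that is exactly the sort of thing an analysis of citation types could distinguish.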

I’d like to see some analysis of the type of citations won by papers. All the way across the foodchain, from dump journals to CNS.

As we all know, much of the evaluation of scientists for various important career purposes involves the record of published work.

More is better.

We also know that, at any given point in time, one might have work that will eventually be published that is not, quiiiiiite, actually published. And one would like to gain credit for such work.

This is most important when you have relatively few papers of “X” quality and this next bit of work will satisfy the “X” demand.

This can mean first-author papers, papers from a given training stint (like a 3-5 yr postdoc) or the first paper(s) from a new Asst Professor’s lab. It may mean papers associated with a particular grant award or papers conducted in collaboration with a specific set of co-authors. It could mean the first paper(s) associated with a new research direction for the author.

Consequently, we wish to list items that are not-yet-papers in a way that implies they are inevitably going to be real papers. Published papers.

The problem is that of vaporware. Listing paper titles and authors with an indication that it is “in preparation” is the easiest thing in the world. I must have a half-dozen (10?) projects at various stages of completion that are in preparation for publication. Not all of these are going to be published papers and so it would be wrong for me to pretend that they were.

Hardliners, and the NIH biosketch rules, insist that published is published and all other manuscripts do not exist.

In this case, “published” is generally the threshold of receiving the decision letter from the journal Editor that the paper is accepted for publication. At that point the manuscript may be listed as “in press”. Yes, this is a holdover term from the old days. Some people, and institutions requiring you to submit a CV, insist that this is the minimum threshold.

But there are other situations in which there are no rules and you can get away with whatever you like.

I’d suggest two rules of thumb. Try to follow the community standards for whatever the purpose and avoid looking like a big steaming hosepipe of vapor.

“In preparation” is the slipperiest of terms and is to be generally avoided. I’d say if you are anything beyond the very newest of authors with very few publications, then skip this term as much as possible.

I’d suggest that “in submission” and “under review” are fine and it looks really good if that is backed up with the journal’s ID number that it assigned to your submission.

Obviously, I suggest this for manuscripts that actually have been submitted somewhere and/or are out for review.

It is a really bad idea to lie. A bad idea to make up endless manuscripts in preparation, unless you have a draft of a manuscript, with figures, that you can show on demand.

Where it gets tricky is what you do after a manuscript comes back from the journal with a decision.

What if it has been rejected? Then it is right back to the in preparation category, right? But on the other hand, whatever perception of it being a real manuscript is conferred by “in submission” is still true. A manuscript good enough that you would submit it for consideration. Right? So personally I wouldn’t get too fussed if it is still described as in submission, particularly if you know you are going to send it right back out essentially as-is. If it’s been hammered so hard in review that you need to do a lot more work, then perhaps you’d better stick it back in the in preparation stack.

What if it comes back from a journal with an invitation to revise and resubmit it? Well, I think it is totally kosher to describe it as under review, even if it is currently on your desk. This is part of the review process, right?

Next we come to a slightly less kosher thing which I see pretty frequently in the context of grant and fellowship review. Occasionally from postdoctoral applicants. It is when the manuscript is listed as “accepted, pending (minor) revision“.

Oh, I do not like this Sam I Am.

The paper is not accepted for publication until it is accepted. Period. I am not familiar with any journals which have “accepted pending revision” as a formal decision category, and even if such exist, that little word pending makes my eyebrow raise. I’d rather just see “Interim decision: minor revisions” but for some reason I never see this phrasing. Weird. It would be even better to just list it as under review.

Final note is that the acceptability of listing less-than-published stuff on your CV or biosketch or Progress Report varies with your career tenure, in my view. In a fellowship application where the poor postdoc has only one middle-author pub from grad school and the two first-author works are just being submitted…well, I have some sympathy. A senior type with several pages of PubMed results? Hmmmm, what are you trying to pull here? As I said above, maybe if there is a clear reason to have to fluff the record. Maybe it is only the third paper from a 5 yr grant and you really need to know about this to review their continuation proposal. I can see that. I have sympathies. But a list of 8 manuscripts from disparate projects in the lab that are all in preparation? Boooo-gus.

This is, vaguely, related to an ongoing argument we have around here with respect to the proper treatment of authors who are listed as contributing “co-equally” to a given published paper. My position is that if we are to take this seriously, then it is perfectly fine* for the person listed second, third or eighth in the list of allegedly equal contributors to re-order the list on his or her CV. When I say this, my dear friend and ex-coblogger Comrade PhysioProffe loses his marbles and rants about how it is falsifying the AcademicRecord to do so. This plays into the story I have for you.

Up for your consideration today is an obscure paper on muramyl peptides and sleep (80 PubMed hits).

I ran across Muramyl peptides and the functions of sleep authored by one Richard Brown from The University of Newcastle in what appears to be a special issue of Behavioural Brain Research on The Function of Sleep (Volume 69, Issues 1–2, July–August 1995, Pages 85–90). The Preface to the issue indicates these Research Reports (on the original PDFs; termed Original Research Article on the online issue list; remember that now) arise from The Ravello Symposium on ‘The Function of Sleep’ held May 28-31, 1994.

So far so good. I actually ran across this article by clicking on an Addendum in the Jan 1997 issue. This Addendum indicates:

In the above paper an acknowledgement of unpublished data was omitted from the text during preparation. This omission could affect the future publication of the full set of data. Thus the author, Dr. Richard Brown, has agreed to share the authorship of the paper with the following persons: J. Andren, K. Andrews, L. Brown, J. Chidgey, N. Geary, M.G. King and T.K. Roberts.

So I tried to PubMed Brown R and a few of the co-authors to see if there was any subsequent publication of the “full set of data” and….nothing. Hmmm. Not even the original offending article? So I looked for Brown R and sleep, muramyl, etc. Nada. Wow, well maybe for some reason the journal wasn’t indexed? No, because the first other article I looked for was there. Ok, weird. Next I searched for the journal, date and month. Fascinatingly, PubMed lists these as “Review”. When the print PDFs say “Research Report” and the journal’s online materials list them as “Original Research Articles”.

But it gets better….scanning down the screen and …..Whoa!

Behav Brain Res. 1995 Jul-Aug;69(1-2):85-90. Muramyl peptides and the functions of sleep. Andren J, Andrews K, Brown L, Chidgey J, Geary N, King MG, Roberts TK. Department of Psychology, University of Newcastle, Australia.

Now this Richard Brown guy has been disappeared altogether from the author line! Without any obvious indication of this on the ScienceDirect access to the journal issue or article.

The PubMed record indicates there is an Erratum in Behav Brain Res 1997 Jan;82(2):245, but this is the Addendum I quoted above. Searching ScienceDirect for “muramyl peptides” pulls up the original article and Addendum but no further indication of Erratum or correction or retraction.

Wow. So speaking to PP’s usual point about falsifying the academic record, this whole thing has been a clusterbork of re-arranging the “academic record”.

Moving along, the Web of Science indicates that the original, credited solely to Brown, has been cited 9 times. First by the Addendum and then 8 more times after the correction…including one in 2011 and one in 2012. Who knows when the PubMed record was changed, but clearly the original Addendum indicating credit should be shared was ignored by ISI and these citing authors alike.

The new version, with the R. Brown-less author line, has been cited 4 times. There are citations published in Jan 2008 and Sept 2008, and they indeed cite the R. Brown-less author list. So the two, and possibly three, most-recent citations of the R. Brown version have minimal excuse.

Okay, okay, obviously one would have to have done a recent database search for the article (perhaps with a reference management software tool) to figure out there was something wrong. But even so, who the heck would try to figure out why EndNote wasn’t finding it rather than just typing this single-author reference in by hand? After all, the pdf is right there in front of you…..clearly the damn thing exists.

This is quite possibly the weirdest thing I’ve seen yet. There must have been some determination of fraud or something to justify altering the Medline/PubMed record, right? There must have been some buyin from the journal Publisher (Elsevier) that this was the right thing to do.

So why didn’t they bother to fix their ScienceDirect listing and the actual PDF itself with some sort of indication as to what occurred and why these folks were given author credit and why Richard Brown was removed entirely?

__

*The fact that nobody seems to agree with me points to the fact that nobody really views these as equal contributions one little bit.

h/t: EvilMonkey who used to blog at Neurotopia.

One of the little career games I hope you know about is to cite as many of your funding sources as possible for any given manuscript. This, btw, is one way that the haves and the rich of the science world keep their “fabulously productive” game rolling.

Grant reviewers may try to parse this multiple-attribution fog if they are disposed to criticize the productivity of a project up for competing renewal. This is rarely successful in dismantling the general impression of the awesome productivity of the lab, however.

Other than this, nobody ever seems to question, assess or limit this practice of misrepresentation.

Here we are in an era in which statements of contribution from each author are demanded by many journals. Perhaps we should likewise demand a brief accounting as to the contribution of each grant or funding source.

Sometimes you get a manuscript to review that fails to meet whatever happens to be your minimal standard for submitting your own work. Also something that is clearly way below the mean for your field and certainly below this journal’s typical threshold.

Nothing erroneous, of course.

More along the lines of too limited in scope rather than anything egregiously wrong with the data or experiments.

Does this make you sad for science? Angry? Or does it motivate you to knock out another LPU of your own?

My initial mindset on reviewing a manuscript is driven by two things.

First, do I want to see it in print? Mostly, this means: is there even one Figure that is so cool and interesting that it needs to be published?

If there is a no on this issue, that manuscript will have an uphill battle. If it is a yes, I’m going to grapple with the paper more deeply. And if there ARE big problems, I’m going to try to point these out as clearly as I can in a way that preserves the importance of the good data.

Second, does this paper actively harm knowledge? I’m not as amped up as some people about trivial advances, findings that are boring to me, purely descriptive studies, etc. So long as the experiments seem reasonable, properly conducted, analyzed appropriately and interpreted compactly, well, I am not going to get too futzed. Especially if I think there are at least one or two key points that need to be published (see First criterion). If, OTOH, I think the studies have been done in such a way that the interpretation is wrong or clearly not supported…well, that paper is going to get a recommendation for rejection from me. I have to work up to Major Revision from there.

This means that my toughest review jobs are where these two criteria are in conflict. It takes more work when I have a good reason to want to see some subset of the data in print but I think the authors have really screwed up the design, analysis or interpretation of some major aspect of the study. I have to identify the major problems and also comment specifically in a way that reflects my thinking about all of the data.

There is a problem caused by walking the thin line required for a Major-Revision recommendation. That is, I suppose I may pull my punches in expressing just how bad the bad part of the study really is. Then, should the manuscript be rejected from that journal, the authors potentially have a poor understanding of just how big the problem with their data really is. Especially if the rejection has been based on differing comments between the three sets of reviewers. Sometimes the other reviewers will have latched on hard to a single structural flaw…which I am willing to accept if I think it is in the realm of ‘oh, you want another whole Specific Aim’s worth of experiments for this one paper, eh?’.

The trouble is that the authors may similarly decide that Reviewer 3 and Reviewer 1 are just being jerks and that the only strategy is to send it off, barely revised, to another journal and hope for three well-disposed reviewers next time.

The trouble is when the next journal sends the manuscript to at least one reviewer who has seen it before….such as YHN. And now I have another, even harder, job of sorting priorities. Are the minimal fixes an improvement? Enough of one? Should I be pissed that they just didn’t seem to grasp the fundamental problem? Am I just irritated that, IMO, if they were going to do this they should have jumped right down to a dump journal instead of trying to battle at a lateral-move journal?

Grumpy reviewer is….

June 25, 2013

grumpy.

Honestly, people. What in the hell happened to old-fashioned scholarship when constructing a paper? PubMed has removed all damn excuse you might possibly have had. Especially when the relevant literature comprises only about a dozen or two score papers.

It is not too much to expect some member of this healthy author list to have 1) read the papers and 2) understood them sufficiently to cite them PROPERLY! i.e., with some modest understanding of what is and is not demonstrated by the paper you are citing.

Who the hell is training these kids these days?

__
Yes, I am literally shaking my cane.

Naturally this is a time for a resurgence of blathering about how Journal Impact Factors are a hugely flawed measure of the quality of individual papers or scientists. It is also a time of much bragging about recent gains….I was alerted to the fact that the new numbers were out via a society I follow on Twitter bragging about their latest number.

whoo-hoo!

Of course, one must evaluate such claims in context. Seemingly the JIF trend is for unrelenting gains year over year. Which makes sense, of course, if science continues to expand. More science, more papers and therefore more citations seems to me to be the underlying reality. So the only thing that matters is how much a given journal has changed relative to other peer journals, right? A numerical gain, sometimes ridiculously tiny, is hardly the stuff of great pride.

So I thought I’d take a look at some journals that publish drug-abuse type science. There are a ton more in the ~2.5-4.5 range but I picked out the ones that seemed to actually have changed at some point.
[Figure: 2012 Journal Impact Factor trends for the selected journals]
Neuropsychopharmacology, the journal of the ACNP and subject of the above-quoted Twitt, has closed the gap on arch-rival Biological Psychiatry in the past two years, although each of them trended upward in the past year. For NPP, putting the sadly declining Journal of Neuroscience (the Society for Neuroscience’s journal) firmly behind them has to be considered a gain. J Neuro is more general in topic and, as PhysioProf is fond of pointing out, does not publish review articles, so this is expected. NPP invented a once-annual review journal a few years ago and it counts in their JIF, so I’m going to score the last couple of years of gain to this, personally.

Addiction Biology is another curious case. It is worth special note for both the large gains in JIF and the fact it sits atop the ISI Journal Citation Reports (JCR) category for Substance Abuse. The first jump in IF was associated with a change in publisher so perhaps it started getting promoted more heavily and/or guided for JIF gains more heavily. There was a change in editor in there somewhere as well which may have contributed. The most recent gains, I wager, have a little something to do with the self-reinforcing virtuous cycle of having topped the category listing in the ISI JCR and having crept to the top of a large heap of ~2.5-4.5 JIF behavioral pharmacology / neuroscience type journals. This journal had been quarterly up until about two years ago when it started publishing bimonthly and their pre-print queue is ENORMOUS. I saw some articles published in a print issue this year that had appeared online two years before. TWO YEARS! That’s a lot of time to accumulate citations before the official JIF window even starts counting. There was news of a record number of journals being excluded from the JCR for self-citation type gaming of the index….I do wonder why the pre-print queue length is not of concern to ISI.
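
For reference, and as far as I understand the JCR accounting this is just the generic two-year arithmetic rather than anything specific to these journals:

JIF(2012) = [citations received in 2012 to items published in 2010–2011] ÷ [citable items published in 2010–2011]

The counting is pegged to the official print publication date, so an article that sat in the online queue for two years arrives at the start of its citation window already visible and already being cited. That is exactly why a long pre-print backlog flatters the number.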

PLoS ONE is an interest of mine, as you know. Phil Davis has an interesting analysis up at Scholarly Kitchen which discusses the tremendous acceleration in papers published per year in PLoS ONE and argues a decline in JIF is inevitable. I tend to agree.

Neuropharmacology and British Journal of Pharmacology are examples of journals which are near the top of the aforementioned mass of journals that publish normal scientific work in my fields of interest. Workmanlike? I suppose the non-pejorative use of that term would be accurate. These two journals bubbled up slightly in the past five years but seem to be enjoying different fates in 2012. It will be interesting to see if these are just wobbles or if the journals can sustain the trends. If real, it may show how easily one journal can suffer a PLoS ONE type of fate, whereby a slightly elevated JIF draws more papers of a lesser eventual impact, while BJP may be showing the sort of virtuous cycle that I suspect Addiction Biology has been enjoying. One slightly discordant note for this interpretation is that Neuropharmacology has managed to get the online-to-print publication lag down to some of the lowest amongst its competition. This is a plus for authors who need to pad their calendar-year citation numbers but it may be a drag on the JIF since articles don’t enjoy as much time to acquire citations.

Anyone who thinks this is a good idea for the biomedical sciences has to have served as an Associate Editor for at least 50 submitted manuscripts or there is no reason to listen to their opinion.

F1000Research will be waiving the publication fee for negative-result manuscripts up through the end of August.


If you have negative results in your lab notebooks, this is the time to write them up! Like all journals, we of course publish traditional full-length research papers but, in addition, we accept short single-observation articles, data articles (i.e. a dataset plus protocol), and negative- and null-result submissions.

For negative and null results, it is especially important to ensure that the outcome is a genuine finding generated by a well executed experiment, and not simply the result of poorly conducted work. We have been talking to our Editorial Board about how to try to avoid the publication of the latter type of result and will be addressing this topic and asking for your input in a further post in the next few days.

The follow up post requesting comment is here.

This is a great idea and the original post nails down why.

This is not only a disappointment for the researchers who conducted the work, it’s also damaging to the overall scientific record. This so-called “publication bias” toward positive results makes it appear as though the experiments with negative or null results never happened.

Sometimes the unpublished experiments are obvious next steps in elucidating a particular biological mechanism, making it likely that other researchers will try the same thing, not realizing that someone else already did the work. This is a waste of time and money.

On other occasions, the positive results that are published are the exception: they could have been specific to a narrow set of conditions, but if all the experiments that didn’t work are not shown, these exceptional cases now look like the only possible result. This is especially damaging when it comes to drug development and medical research, where treatments may be developed based on an incomplete understanding of research results.

The waste of time and money cannot be emphasized enough, especially in these tight funding times. Why on earth should we tolerate any duplication of effort that is made necessary simply by the culture of not publicizing results that are not deemed sexy enough? This is the information age, people!

One example from my field is the self-administration of delta9-tetrahydrocannabinol (THC) by the common laboratory species used for self-administration studies of other drugs of abuse. Papers by Goldberg and colleagues (Tanda et al, 2000; Justinova et al, 2003) showed that squirrel monkeys will self-administer THC intravenously, which was big news. It was the first relatively clear demonstration in lab animals for a substance we know humans readily self-administer. As the Goldberg group related in their 2005 review article, there is no clear evidence that rodents will self-administer THC i.v. in literature stretching back to the 1970s, when the self-administration technique was being used for studies of numerous drugs.

Over the last three decades, many attempts to demonstrate intravenous self-administration of THC or of synthetic cannabinoid CB1 receptor agonists by experimental animals were relatively unsuccessful (Pickens et al., 1973; Kaymakcalan, 1973; Harris et al., 1974; Carney et al., 1977; van Ree et al., 1978; Mansbach et al., 1994) (Table 1). None of these studies clearly demonstrated persistent, dose-related, self-administration behavior maintained by THC or synthetic cannabinoids, which would be susceptible to vehicle extinction and subsequent reinstatement in the absence of unusual ‘‘foreign’’ conditions.

The thing is that rats “wouldn’t” self-administer nicotine either. Nor alcohol. That is, until people came up with the right conditions to create a useful model. In the case of ethanol it was helpful to either force them to become dependent first (via forced liquid diets adulterated with ethanol or ethanol inhalation chambers) or to slowly train them up on cocktails (called the flavorant-fade procedure). In the case of nicotine, the per-infusion dose was all-critical and it helped to provide intermittent access, e.g., with four days on, three days off. Interestingly, while making rats dependent on nicotine using subcutaneous osmotic pumps didn’t work very well (as it does for heroin), a recent study suggests that forced inhalation-based dependence on nicotine results in robust intravenous self-administration.

For many drugs of abuse, subtle factors can make a difference in the rodent model. Strain, sex, presence of food restriction, exact age of animals, circadian factors, per-infusion dose, route of administration, duration of access, scheduling of access…. the list goes on and on. A fair read of the literature suggests that when you have cocaine or heroin, many factors have only quantitative effects. You can move the means around, even to the p<0.05 level, but hey, it's cocaine or heroin! They'll still exhibit clear evidence that they like the drug.

When it comes to other drugs, maybe it is a little trickier. The balance between pleasurable and aversive effects may be a fine one (ever tried buccal nicotine delivery via chew or dip? huh?). The route of administration may be much more critical. Etc.

So the curious person might ask, how much has been tried? How many curious grad students or even postdocs have “just tried it” for a few months or a year? How many have done the most obvious manipulations and failed? How many have been told to give it up as a bad lot by older and wiser PIs (who tried to get THC self-administration going themselves back 20 years ago)?

I’m here to tell you that it has been attempted a lot more than has been published. Because the lab lore type of advice keeps rolling.

It is really hard, however, to get a comprehensive look at what has been tried and has led to failure. What were the quality of those attempts? N=8 and out? Or did some poor sucker run multiple groups with different infusion doses? Across the past thirty years, how many of the obvious tweaks have been unsuccessful?

Who cares, right? Well, my read is that there are some questions that keep coming around, sometimes with increased urgency. The current era of medical marijuana legalization and tip-toeing into full legalization means that we’re under some additional pressure to have scientific models. The explosion of full-agonist cannabimimetic products (K2, Spice, Spike, etc containing JWH-018 at first and now a diversity of compounds) likewise rekindles interest. Proposals that higher-THC marijuana strains increase dependence and abuse could stand some controlled testing….if we only had better models.

Well, this is but one example. I have others from the subfields of science that are of my closest interests. I think it likely that you, Dear Reader, if you are a scientist can come up with examples from your own fields where the ready availability of all the failed studies would be useful.

Thought of the Day

May 4, 2013

Listed-third author gets to refer to it as a second-author-paper when the first two are co-equal first authors, right?