One of the little career games I hope you know about is to cite as many of your funding sources as possible for any given manuscript. This, btw, is one way that the haves and the rich of the science world keep their “fabulously productive” game rolling.

Grant reviewers may try to parse this multiple-attribution fog if they are disposed to criticize the productivity of a project up for competing renewal. This is rarely successful in dismantling the general impression of the awesome productivity of the lab, however.

Other than this, nobody ever seems to question, assess or limit this practice of misrepresentation.

Here we are in an era in which statements of contribution from each author are demanded by many journals. Perhaps we should likewise demand a brief accounting of the contribution of each grant or funding source.

Sometimes you get a manuscript to review that fails to meet whatever happens to be your minimal standard for submitting your own work, one that is also clearly way below the mean for your field and certainly below the journal’s typical threshold.

Nothing erroneous, of course.

More along the lines of too limited in scope rather than anything egregiously wrong with the data or experiments.

Does this make you sad for science? Angry? Or does it motivate you to knock out another LPU of your own?

My initial mindset on reviewing a manuscript is driven by two things.

First, do I want to see it in print? Mostly, this means asking whether there is even one Figure that is so cool and interesting that it needs to be published.

If there is a no on this issue, that manuscript will have an uphill battle. If it is a yes, I’m going to grapple with the paper more deeply. And if there ARE big problems, I’m going to try to point these out as clearly as I can, in a way that preserves the importance of the good data.

Second, does this paper actively harm knowledge? I’m not as amped up as some people about trivial advances, findings that are boring to me, purely descriptive studies, etc. So long as the experiments seem reasonable, properly conducted, appropriately analyzed and compactly interpreted, well, I am not going to get too futzed. Especially if I think there are at least one or two key points that need to be published (see the First criterion). If, OTOH, I think the studies have been done in such a way that the interpretation is wrong or clearly not supported… well, that paper is going to get a recommendation for rejection from me. I have to work up to Major Revision from there.

This means that my toughest review jobs are where these two criteria are in conflict. It takes more work when I have a good reason to want to see some subset of the data in print but I think the authors have really screwed up the design, analysis or interpretation of some major aspect of the study. I have to identify the major problems and also comment specifically in a way that reflects my thinking about all of the data.

There is a problem caused by walking the thin line required for a Major-Revision recommendation. That is, I suppose I may pull my punches in expressing just how bad the bad part of the study really is. Then, should the manuscript be rejected from that journal, the authors potentially have a poor understanding of just how big the problem with their data really is. Especially if the rejection has been based on differing comments between the three sets of reviewers. Sometimes the other reviewers will have latched on hard to a single structural flaw…which I am willing to accept if I think it is in the realm of ‘oh, you want another whole Specific Aim’s worth of experiments for this one paper, eh?’.

The trouble is that the authors may similarly decide that Reviewer 3 and Reviewer 1 are just being jerks and that the only strategy is to send it off, barely revised, to another journal and hope for three well-disposed reviewers next time.

The trouble is when the next journal sends the manuscript to at least one reviewer who has seen it before… such as YHN. And now I have another, even harder, job of sorting priorities. Are the minimal fixes an improvement? Enough of one? Should I be pissed that they just didn’t seem to grasp the fundamental problem? Am I just irritated that, IMO, if they were going to do this they should have jumped right down to a dump journal instead of trying to battle at a lateral-move journal?

Grumpy reviewer is….

grumpy.

Honestly, people. What in the hell happened to old-fashioned scholarship when constructing a paper? PubMed has removed every damn excuse you might possibly have had. Especially when the relevant literature comprises only about a dozen or two score papers.

It is not too much to expect some member of this healthy author list to have 1) read the papers and 2) understood them sufficiently to cite them PROPERLY! That is, with some modest understanding of what is and is not demonstrated by the paper you are citing.

Who the hell is training these kids these days?

__
Yes, I am literally shaking my cane.

Anyone who thinks this is a good idea for the biomedical sciences has to have served as an Associate Editor for at least 50 submitted manuscripts or there is no reason to listen to their opinion.

F1000Research will be waiving the publication fee for negative-result manuscripts through the end of August.

If you have negative results in your lab notebooks, this is the time to write them up! Like all journals, we of course publish traditional full-length research papers but, in addition, we accept short single-observation articles, data articles (i.e. a dataset plus protocol), and negative- and null-result submissions.

For negative and null results, it is especially important to ensure that the outcome is a genuine finding generated by a well executed experiment, and not simply the result of poorly conducted work. We have been talking to our Editorial Board about how to try to avoid the publication of the latter type of result and will be addressing this topic and asking for your input in a further post in the next few days.

The follow-up post requesting comment is here.

This is a great idea and the original post nails down why.

This is not only a disappointment for the researchers who conducted the work, it’s also damaging to the overall scientific record. This so-called “publication bias” toward positive results makes it appear as though the experiments with negative or null results never happened.

Sometimes the unpublished experiments are obvious next steps in elucidating a particular biological mechanism, making it likely that other researchers will try the same thing, not realizing that someone else already did the work. This is a waste of time and money.

On other occasions, the positive results that are published are the exception: they could have been specific to a narrow set of conditions, but if all the experiments that didn’t work are not shown, these exceptional cases now look like the only possible result. This is especially damaging when it comes to drug development and medical research, where treatments may be developed based on an incomplete understanding of research results.

The waste of time and money cannot be emphasized enough, especially in these tight funding times. Why on earth should we tolerate any duplication of effort that is made necessary simply by the culture of not publicizing results that are not deemed sexy enough? This is the information age, people!

One example from my field is the self-administration of delta9-tetrahydrocannabinol (THC) by the common laboratory species used for self-administration studies of other drugs of abuse. Papers by Goldberg and colleagues (Tanda et al., 2000; Justinova et al., 2003) showed that squirrel monkeys will self-administer THC intravenously, which was big news. It was the first relatively clear demonstration in lab animals for a substance we know humans readily self-administer. As the Goldberg group related in their 2005 review article, there is no clear evidence that rodents will self-administer THC i.v. in a literature stretching back to the 1970s, when the self-administration technique was being used to study numerous drugs.

Over the last three decades, many attempts to demonstrate intravenous self-administration of THC or of synthetic cannabinoid CB1 receptor agonists by experimental animals were relatively unsuccessful (Pickens et al., 1973; Kaymakcalan, 1973; Harris et al., 1974; Carney et al., 1977; van Ree et al., 1978; Mansbach et al., 1994) (Table 1). None of these studies clearly demonstrated persistent, dose-related, self-administration behavior maintained by THC or synthetic cannabinoids, which would be susceptible to vehicle extinction and subsequent reinstatement in the absence of unusual “foreign” conditions.

The thing is that rats “wouldn’t” self-administer nicotine either. Nor alcohol. That is, until people came up with the right conditions to create a useful model. In the case of ethanol it was helpful either to force the animals to become dependent first (via forced liquid diets adulterated with ethanol, or ethanol inhalation chambers) or to slowly train them up on cocktails (the flavorant-fade procedure). In the case of nicotine, the per-infusion dose was all-critical and it helped to provide intermittent access, e.g., four days on, three days off. Interestingly, while making rats dependent on nicotine using subcutaneous osmotic pumps didn’t work very well (as it does for heroin), a recent study suggests that forced inhalation-based dependence on nicotine results in robust intravenous self-administration.

For many drugs of abuse, subtle factors can make a difference in the rodent model. Strain, sex, presence of food restriction, exact age of animals, circadian factors, per-infusion dose, route of administration, duration of access, scheduling of access… the list goes on and on. A fair read of the literature suggests that when you have cocaine or heroin, many factors have only quantitative effects. You can move the means around, even to the p<0.05 level, but hey, it’s cocaine or heroin! The rats will still exhibit clear evidence that they like the drug.

When it comes to other drugs, maybe it is a little trickier. The balance between pleasurable and aversive effects may be a fine one (ever tried buccal nicotine delivery via chew or dip? huh?). The route of administration may be much more critical. Etc.

So the curious person might ask, how much has been tried? How many curious grad students or even postdocs have “just tried it” for a few months or a year? How many have done the most obvious manipulations and failed? How many have been told to give it up as a bad lot by older and wiser PIs (who tried to get THC self-administration going themselves back 20 years ago)?

I’m here to tell you that it has been attempted a lot more than has been published. Because the lab lore type of advice keeps rolling.

It is really hard, however, to get a comprehensive look at what has been tried and has led to failure. What was the quality of those attempts? N=8 and out? Or did some poor sucker run multiple groups with different infusion doses? Across the past thirty years, how many of the obvious tweaks have been unsuccessful?

Who cares, right? Well, my read is that there are some questions that keep coming around, sometimes with increased urgency. The current era of medical marijuana legalization and tip-toeing into full legalization means that we’re under some additional pressure to have scientific models. The explosion of full-agonist cannabimimetic products (K2, Spice, Spike, etc., containing JWH-018 at first and now a diversity of compounds) likewise rekindles interest. Proposals that higher-THC marijuana strains increase dependence and abuse could stand some controlled testing… if only we had better models.

Well, this is but one example. I have others from the subfields of science closest to my own interests. I think it likely that you, Dear Reader, if you are a scientist, can come up with examples from your own fields where the ready availability of all the failed studies would be useful.

I generally like Stephen Curry’s position on the Journal Impact Factor. For example, in today’s confessional posting, he says:

mostly because of the corrosive effect they have on science and scientists.

In this we agree. He also posted “Sick of Impact Factors” and this bit focused on UK scholarly assessment. I enjoy his description of the arguments for why the Journal Impact Factor is leading to incorrect inferences and why it has a detrimental impact on the furthering of scientific knowledge.

But he pulled an academic nose sniffer / theological wackaloon move that I cannot support.

I was asked by a well-known university in North America to help assess the promotion application of one of their junior faculty. This was someone whose work I knew — and thought well of — so I was happy to agree. However, when the paperwork arrived I was disappointed to read the following statement in the description of their evaluation procedures:

“Some faculty prefer to publish less frequently and publish in higher impact journals. For this reason, the Adjudicating Committee will consider the quality of the journals in which the Candidate has published and give greater weight to papers published in first rate journals.”

He then, admirably, tried to get them to waver on their JIF criterion… but to no avail.

The reply was curt — they respected my decision for declining. And that was it.
I feel bad that I was unable to participate. I certainly wouldn’t want my actions to harm the career opportunities of another but could no longer bring myself to play the game. Others may feel differently.

So by refusing to play, he has removed himself as a guaranteed advocate for change, drawing a hard, nose-sniffing line in the sand: he won’t play if the game doesn’t change.

I prefer a more practical approach to all of this. I think I’ve alluded to this in the past.

I certainly agree to review manuscripts for journals that are overtly concerned with “impact and importance” and the maintenance of their Journal Impact Factor. Certainly. And no, I do not ignore their obvious goals. I try to give the editor in question some indication of where I see the impact and importance and whether the paper deserves acceptance at their high falutin’ journal.

But I use my standards. I do not just roll over for what I see as the more corrosive aspects of Glamour Chasing. I rarely demand more experiments, I do not throw up ridiculous chaff about “mechanism” and other completely subjective bullshit and I do not demand optogenetics as the threshold for being interesting.

Stephen Curry could very well have done the same for this tenure review. He could have emphasized his own judgement of the impact and importance of the science and left the JIF bean counting to other reviewers. He could have struck a blow in support of the full and comprehensive review of the actual meat of this poor young faculty member’s contributions. Instead, he simply left the field, after sending up an impotent protest flag.

I think that is sacrificing actual progress on one’s goals for the fine feeling of chest-thumping purity. And that is a mistake.