Another reason why journals maintain those lengthy pre-publication queues…

April 4, 2012

So you finally got your paper accepted; the proofs have come and been returned within 48 hrs (lest some evil, unspecified thing happen). You waited a little bit and BOOM, up it pops on PubMed and on the pre-publication list at the journal. The article is, for almost all intents and purposes, published.

YAY!

Now get back to work on that next paper.

But there’s that nagging little thought…..it isn’t really published until it gets put in a print issue. Most importantly, you don’t know for sure which year it will be properly published in, so the citation is still in flux. So you look at the number of items below yours in the pre-print list, figure out approximately how many articles are published per issue in the journal and game it out. Ugh…. four months? Six? EIGHT????
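To make that arithmetic concrete (the numbers here are invented, not from any particular journal): say 60 accepted articles sit ahead of yours in the queue and the journal prints about 15 research articles per monthly issue. Then

\[
\frac{60\ \text{articles ahead}}{15\ \text{articles per issue}} = 4\ \text{issues} \approx 4\ \text{months},
\]

and that is assuming nothing jumps the queue ahead of you.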

WHY O WHY gods of publishing?? WHY must it take so long???????

Whenever I’ve heard a publishing flack address this, it has been some mumblage about making sure they have a smooth publication track. That they are never at a loss to publish enough in a given issue. And they have to stick to the schedule, don’t you know!

(except they don’t. Volumes are pretty fixed but you’ll notice a “catch up” extra issue of a volume now and again.)

Well, well, well. Something I’d never considered was raised in a blog post at the Scholarly Kitchen. An article by Frank Krell in Learned Publishing (I swear I’m not making that journal title up) asks whether publishers might be using this delay to game the Impact Factor of their journals.

Dammit! Totally true. Think about it…

Now, before I get started, the Scholarly Kitchen, good publisher flacks that they are, caution:

To me, there needs to be some evidence — even anecdotal — that editors are purposefully post-dating publication for the purposes of citation gaming. Large January issues may be one piece of evidence; however, it may also signal the funding and publication cycle of academics. I’d be more interested to know whether post-dating conversations are going on within editorial boards, or whether authors have been told that the editor is holding back their article to maximize its contribution to the journal’s impact factor.

But this only really addresses the specific point that Krell made about pushing issues around with respect to the start of a new year.

There’s a larger point at hand. One of the objections I’ve always had to the IF calculation is that the two-year window puts a serious constraint on the types of citations that are even available in certain kinds of science: the kind where it just takes a lot of time to come up with a publishable data set.
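For reference, the window in question works roughly like this (the standard Journal Citation Reports calculation, paraphrased, using 2012 as the example year):

\[
\mathrm{IF}_{2012} \;=\; \frac{\text{citations received during 2012 by items the journal published in 2010 and 2011}}{\text{citable items the journal published in 2010 and 2011}}
\]

In other words, a paper only earns its journal IF credit for citations that appear within the two calendar years following its own publication year.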

Take normal old, run-of-the-mill behavioral experiments that can be classified as behavioral pharmacology (within which a lot of substance abuse studies live). Three to four months, easily, just to get an animal experiment done. Ordering, habituating, pre-training, surgeries and recovery…it takes time. A typical study might be 3-4 groups of subjects, i.e., 3-4 experiments. That’s if you get lucky. Throw in some false avenues and failed experiments and you are easily up to 6 or 8 groups. Keep in mind that physical resources like operant boxes, constraints such as the observation window (could be a 6 hr behavioral experiment, no problemo) and available staff (not everyone has a tech) really narrow down the throughput. You can’t just “work faster” or “work harder” the way that is supposedly possible at the bench. The number of “experiments” you can do doesn’t scale up with time spent in the lab if you are doing behavioral studies with some sort of timecourse. The PI may not even be able to do much by throwing more people at the project, even if the laboratory does have that sort of flexibility.

Right?

So here you are, Joe Scholar, reading your favorite journal when BLAM! You see an awesome paper that gives you a whole line of new ideas that you could and should set out to study. Like, RIGHT FREAKING NOW!!! Okay, so suppose money is not an issue and you don’t have anything else particularly pressing. Order some animals and off you go.

It’s going to be a YEAR minimum to complete the studies. A month to write up the draft, throw in three months for peer review and another month for the journal to get its act together. Thus, if things go really, really well for you, there is only about a 6-month window of slack to get a citation in for that original motivating paper before the 2-year IF citation window elapses.
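Tallying that best case up:

\[
12\ (\text{studies}) + 1\ (\text{writing}) + 3\ (\text{review}) + 1\ (\text{production}) = 17\ \text{months},
\]

which leaves roughly six or seven months of the 24-month window as slack, and only if every single step goes perfectly.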

Things never go that well.

In my view this makes it almost categorically impossible for a publication to garner IF credit for the citation that is the most meaningful of all: a citation from a paper motivated almost entirely by said prior work.

The principle extends, though. Even if you only see the paper and realize you need to incorporate it into your Discussion or Introduction, the length of time the paper is available relative to the IF window matters. If there were just some way journals could stretch the interval between a work’s general availability and the close of its IF window, then this would, statistically, boost the number of citations. If the clock doesn’t start running until the paper has been visible for 6 months….say, how could we do that? How….? Hmm.

Ah yes. Let it languish in the “online first” archive! Brilliant! It goes up on PubMed and people can read the paper. Get their experiments rolling. Write the findings into their Intros and Discussions.

Right.
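To spell out the arithmetic with a made-up example (the dates are invented; this is just how the calendar-year IF window works, not a claim about any particular journal): a paper posted online in July 2011 but not assigned to a print issue until January 2012 counts as a 2012 paper, so citations to it that appear during 2013 and 2014 feed its journal’s impact factor. Readers have had the paper since mid-2011, which gives it roughly three and a half years of visibility before that counting window closes at the end of 2014. Had it gone into a 2011 issue, the window would have closed at the end of 2013, after only about two and a half years of visibility. A full extra year of runway for exactly the kind of slow-gestating citation described above.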

I agree with the Scholarly Kitchen post that we don’t know that this is why some journals keep such a hefty backlog of articles in their pre-print queue. Having watched a handful of my favorite journals maintain anywhere from six- to thirteen-month offsets over periods of many months to years, however, I have my suspicions. The journals I pay attention to have maintained their offsets for at least a decade, if you assume the lower bound of about 4-5 months (and trust that my spot-checking is valid as a general rule). The idea that they do this to avoid publication dry spells is nonsense; they have a steady enough supply of accepted articles that they could trim down to, say, 2-3 months of backlog. So there must be another reason.
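For what it’s worth, the innocent version of the story is easy to sketch. Here is a toy back-of-the-envelope model (all the numbers are invented, and this is an illustration of the queueing dynamics, not a claim about any real journal): accepted papers arrive at random each month, and a fixed page budget prints a set number of them per issue.

```python
import numpy as np

def backlog_after(months, accepted_per_month, slots_per_issue, seed=0):
    """Toy print-queue model: accepted papers arrive at random each month
    (Poisson), and one issue per month prints at most `slots_per_issue` of
    them (a fixed page budget). Returns the end-of-run backlog, expressed
    in issues' worth of papers."""
    rng = np.random.default_rng(seed)
    queue = 0
    for _ in range(months):
        queue += rng.poisson(accepted_per_month)  # newly accepted manuscripts
        queue -= min(queue, slots_per_issue)      # manuscripts placed in this issue
    return queue / slots_per_issue

# Acceptances persistently outrun the page budget (20 vs. 18 per month):
# the backlog drifts upward by ~2 papers a month and runs away into many
# months' worth of issues over a decade.
print(backlog_after(months=120, accepted_per_month=20, slots_per_issue=18))

# A little headroom (15 vs. 18): the backlog hovers near zero.
print(backlog_after(months=120, accepted_per_month=15, slots_per_issue=18))
```

The toy model’s point is just that a backlog only persists when acceptances run at or above the fixed print capacity, and both of those knobs are under the journal’s control.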

Responses to “Another reason why journals maintain those lengthy pre-publication queues…”

  1. Karen Says:

    As a former print editor, I had no such ulterior motives (mostly because I wasn’t smart enough to game the system). When we had a long queue, it was because we had a fixed page budget. We couldn’t run as many papers as I wanted to because we had an average of 121 pages per issue, and there wasn’t much I could do about that. In my experience, I was delighted to have the “online first” queue so that the results could get out there, regardless of when I could finally squeeze in the 20-pager. Does that make you feel any better?

    Other people are probably both smarter and more evil than I am, though.



  2. Interesting. I wouldn’t have expected such a posting suggesting a nefarious scheme from Scholarly Kitchen, which I’ve only ever heard of from the Eisen brothers’ critique of pro-commercial publisher propaganda posted there.

    Of course, as the Eisens would say, if you publish in an open access online journal, the online publication *is* the publication, thus solving the problem.



  3. Ugh…. four months? Six? EIGHT????

    Yeah, maybe for the shitteasse journals you publish in. And this IF gaming probably works only for those journals: “WHOOPEE!!! We gamed up from 1.238 to 1.496!!!!!”


  4. Bashir Says:

    So there must be another reason.

    Because that’s the way it’s always been. I hear that whenever article time-to-review comes up. Journals that take 3-4 months for each review round claim that’s just the way it is! Can’t be changed!


  5. AnyEdge Says:

    Umm, there are perfectly non-conspiracy queueing theory reasons for these queues too. Treat each paper as a customer, each edition as a server, and do the math. If a journal wants to accept a decent number of journals, there’s going to be a queue.


  6. AnyEdge Says:

    Sorry, “accept a decent number of [papers]”.


  7. drugmonkey Says:

    JB- actually the SK post was acting the skeptic. So, pretty much true to form….



  8. This feels a bit tinfoil-hat to me.


  9. drugmonkey Says:

    You would be surprised at how many EIC of “shitasse” journals are concerned about seemingly minor changes in IF, tseo..


  10. Alex Says:

    As it stands, my most significant scientific legacy will be that I’m playing a big role in defining what is and isn’t possible in a field with important applications.

    I would prefer it if my most significant legacy is that a student of mine goes on to show that something that I thought was impossible is in fact possible.


  11. Alex Says:

    Or if one of my students uses my work to do something that benefits a lot of people.


  12. Alex Says:

    Oops, wrong thread.



  13. You would be surprised at how many EIC of “shitasse” journals are concerned about seemingly minor changes in IF, tseo..

    Srsly. Neuron and Nature Neuroscience fight it out over the range from 13.5 to 15.5 by publishing shittetonnes of fucken review articles, so why wouldn’t the shitteasse journals do whatever they can to fight it out over the range from 1.35 to 1.55?


  14. drugmonkey Says:

    Bashir-

    3-4 months per review round is ridiculous. If I had one take that long, that journal would be off my list in the future….



  15. Just sayin’. I wouldn’t be shocked or anything if it was true, I’m just unconvinced so far.


  16. Grumble Says:

    I love the idea of “taking journals off your list”. Years ago, we submitted something to Cerebral Cortex, motivated by its absurdly high IF. (Absurd, that is, for a journal that focuses on only the protective mantle surrounding the parts of the brain that do the REAL work.) The reviews took for-fucking-ever, and asked for all kinds of stupid fixes to nonexistent problems. I was all primed to obsequiously answer every single criticism and re-submit, but my PI stopped me by refusing to submit to that journal.

    What a breath of fresh air.


  17. drugmonkey Says:

    Where did it go instead?


  18. Grumble Says:

    Neuroscience.

    And wow, my career is still alive, despite publishing in IF=3.5 instead of IF=6! That PI taught me the valuable lesson that chasing IF is foolish. The way I look at it, journals are stratified into two or three tiers, and among those in the lowest tier (IF=1 to IF=5-6 range), it really doesn’t matter very much which journal you publish in. The target audience will find it and read it, and P&T committees and the like are not likely to care whether it’s IF=2 or IF=4.


  19. drugmonkey Says:

    my career is still alive, despite publishing in IF=3.5 instead of IF=6!

    Astonishing, isn’t it?

    journals are stratified into two or three tiers, and among those in the lowest tier (IF=1 to IF=5-6 range), it really doesn’t matter very much which journal you publish in

    For the trainees in the audience, yes…..but. This is going to depend on your subfield audience and who is judging you. As I always say, you need to do your research to figure out what is a meaningful distinction. In many behavioral pharm disciplines, IF 1 is below the dump-journal level, where the dump journal is the place you are almost certain to be able to get something into, but people will actually cite it and known figures are on the editorial boards. And almost everyone publishes in it once or twice. In the IF 2-5 range, sure, there is not a lot of difference, but some of these are going to be considered the subfield dump journal and some will be considered respectable….this judgment may rest on the editor being known as a total hardass (cough*cough*Klaus!*cough*cough) rather than on IF. Get to know your subfield. When you get to the next tier, which I’d say for behavioral pharm kicks in around IF 6-8…..this may be completely and utterly different in your subfield, and the IF 6-8 journals may not be viewed as any different from the IF 3-4 journals. Do your homework.

    The target audience will find it and read it,

    If it is indexed on PubMed, this is true no matter the IF of the journal.

    P&T committees and the like are not likely to care whether it’s IF=2 or IF=4.

    This cannot be stated as a universal truth. Far too much variability in the expectations and backgrounds of those on the P&T panels. You have to assume that for this purpose, higher is always better. Of course, more is always better too so sometimes getting a rapid accept trumps many IF points….


  20. whizbang Says:

    Having been involved with several journals, the only “gaming” of the IF I have seen involves more invited reviews and trying to get more submissions (so they can make sure that the most significant papers get offered to them first).
    Of course, it’s those invited reviews that are most likely to cite something in print <2 years after publication…


  21. Bashir Says:

    3-4 months per review round is ridiculous. If I had one take that long, that journal would be off my list in the future….

    I wish. I would love to get some extensive time-to-review data. I have a little bit, and it seems like my entire area is at 3+ months. It’s like at some point it was decided that that’s just how long review takes. Editors scoff at the idea of getting that down to 2 months, like that would be breaking the speed of light.


  22. drugmonkey Says:

    Over the course of the past ten years or so I’ve seen a lot of journals go from giving reviewers 3 weeks (or unspecified) to 14 days. I notice that PLoS ONE seems to give you only 10 days. There has been an effort, I conclude, in many fields of study to cut down the review cycle.

    I was just recently going nutz about a journal because it was something on the order of 2 months per round of review and this is definitely on the long side, going by my recent experience.



  23. […] more recent writing on gaming scholarly publication, check out the DrugMonkey blog at scientopia and a recent post at the scholarly kitchen, and S. Scott Graham’s blog entry on citation […]


  24. Grumble Says:

    So if a data-heavy paper with 12 figures and 20,000 words can be reviewed in one month, why does it take NIH 9 months from submission to funding?

    Yeah, I know there’s far more involved, but I’ve never gotten over how absurdly long it takes to extract a dime out of NIH.


  25. drugmonkey Says:

    Good point, Grumble. Especially when the order of review is so tightly matched to funding decisions. Maybe they should roll out about half of the grants on the score and then save the rest for secondary and tertiary review? So if the payline is 8%, everything at 4% or better pays right after the scores come out?



  26. Maybe they should roll out about half of the grants on the score and then save the rest for secondary and tertiary review?

    Then you better start lobbying Congress, because it is currently against the law to make competing awards without Council review.


  27. drugmonkey Says:

    Two words: en bloc



  28. Dude, the point is that the Council actually has to meet–either in person or on the phone or electronically–to approve the applications en bloc. Plenty of ICs already make awards prior to the regular in-person Council meetings by this mechanism. But it isn’t going to happen “right after the scores come out”, because of the various administrative and substantive issues that have to be addressed prior to making awards.


  29. drugmonkey Says:

    Who does this? News to me. It should be expanded.



  30. […] Scientist Shortage?: The Johnny-can’t-do-science myth damages US research (must-read) Another reason why journals maintain those lengthy pre-publication queues… Pink Slime and ammonia consumption – the numbers Research efficiency: Perverse incentives We may […]


