Why cascading manuscript acceptance schemes can't work

March 6, 2013

For reference, Scicurious’ proposal

What if manuscript submission could be as good as a one-shot?

Like this: you submit a paper to a large umbrella of journals of several “tiers”. It goes out for review. The reviewers make their criticisms. Then they say “this paper is fine, but it’s not impactful enough for journal X unless major experiments A, B, and C are done. However, it could fit into journal Y with only experiment A, or into journal Z with only minor revisions”. Or they have the option to reject it outright for all the journals in question. Where there is discrepancy (as usual) the editor makes the call.

and the Neuroscience Peer Review Consortium.

The Neuroscience Peer Review Consortium is an alliance of neuroscience journals that have agreed to accept manuscript reviews from other members of the Consortium. Its goals are to support efficient and thorough peer review of original research in neuroscience, speed the publication of research reports, and reduce the burden on peer reviewers.

I think these schemes are flawed for a simple reason. As I noted in a comment at Sci’s digs…

Nobody bucking for IF immediately goes (significantly) down. They go (approximately) lateral and hope to get lucky. The NPRC is a classic example. At several critical levels there is no lateral option. And even if there were, the approximately-equal-IF journals are in side-eyeing competition…me, I sure as hell don’t want the editors of Biol Psychiatry, J. Neuro and Neuropsychopharmacology knowing that I’ve been rejected by one or two of the other ones first.

I also contest the degree to which a significantly “lower” journal thinks that it is, indeed, lower and a justifiable recipient of the leavings. Psychopharmacology, for example, is a rightful next stop after Neuropsychopharmacology but somehow I don’t think ol’ Klaus is going to take your manuscript any easier just because the NPP decision was “okay, but just not cool enough”. Think NPP and Biol Psych are going to roll over for your Nature Neuroscience reject? hell no. Not until their reviewers say “go”.

This NPRC thing has been around since about 2007. I find myself intensely curious about how it has been going. I’d like to see some data in terms of how many authors choose to use it (out of the total manuscripts rejected from each participating journal), how many papers are subsequently accepted at another consortium journal, the network paths between journals for those that are referred, etc.

My predictions are that referrals are very rare, that they are inevitably downward in journal IF and that they don’t help very much. With respect to this latter, I mean that I bet it is a further minority of the manuscripts that use this system that are subsequently accepted by the second journal editor on the strength of the original reviews and some stab by the authors at a minimal revision (i.e., as if they’d gotten minor-revisions from the original editor instead of rejection).

One fascinating, unknowable curiosity is the desk reject factor. The NPRC could possibly know how many of the second editors did desk rejects of the referred manuscripts based on the forwarded reviews. That would be interesting to see. But what they can’t know is how many of those would have been sent out for review if the reviews had not been forwarded. And if they had been sent out for review, what fraction would have received good enough reviews (for the presumptively more pedestrian journal) that they would have made it in.


7 Responses to “Why cascading manuscript acceptance schemes can't work”

  1. dr24hours Says:

    Did you read my review of a systems-science model of an alternative to peer review? It was a REALLY interesting paper. Essentially, to get your paper bid on by a journal, you have to review papers in a pool.



  2. That NPRC is wack. If you get rejected by Nature Neuroscience, you’re not gonna just drop down to J Neuroscience. You’re gonna give Neuron a shot. And then PLOS Biology. And then PNAS. And then Current Biology.


  3. DJMH Says:

These schemes ignore the issue that, a lot of times, people WANT a new set of reviewers.

    The actual problem with Sci’s scheme is that it concentrates the power for all the journals in the hands of the two or three reviewers. If you get one crazy person or enemy in that threesome, then you are seriously screwed.


  4. Spiny Norman Says:

Yet another permutation. We got reasonable reviews at an IF 6-7 journal followed by rejection. In one case the rejection allowed a buddy of the editor to get his paper out before ours; in another the editor just had a stick in his ass. In both cases I turned around and sent the entire submission, including all reviews and correspondence, to a competing journal of the SAME IF. In both cases the papers were rapidly accepted at the second journal.

    These cases are similar to Sci’s scenario but with two key differences. First, we did not drop the papers by a notch as there was no reason to do so. Second, the monitoring editors were the problem, not the referees. In this case the present system is superior to Sci’s scheme because our contacts with the editors were independent.

    This points to my own main concern with the current process: arbitrary and/or gutless decisions by monitoring editors. With a good editor at the helm, it becomes much harder for the review process to spiral out of control. Conversely, with bad editorial judgement the worst impulses of referees are unchecked, and even a good review process is of little help.


  5. Susan Says:

    I have not used the NPRC for the obvious reasons — what DJMH said, and the unconscious (or not) bias against papers with a red X bestowed by BSDs. We all know the review process is a far cry from the semi-objective “good work, just not cool enough”. I’d bet that psychologists have done the experiment: if two equally-good choices (say, resumes) appear, where one is novel but you know the other was rejected by your buddy, your choice is obviously biased.

    I’d love to see the data, too.


  6. Andy Says:

I generally like Sci’s suggestion, with one additional modification in response to the critique above. Recognizing the problems that one or two reviewers can create, why not give the authors the option to decide whether or not they want to port their reviews to a new journal? This would potentially fix the issue of one or two reviewers sinking a paper indefinitely.


  7. Dave Says:

    Personally, I am totally ticked off that Model Railroader Monthly keeps turning down my totally excellent travel Tips for Asia.

These are all just specialist magazines, folks. The same rules apply as at other magazines. Send your shit to the appropriate place, and quit fretting about some imaginary hierarchy. As long as it’s listed in PubMed, people will find it. And as long as you’re really doing useful science instead of stupid-ass assays cataloging minutiae, people will recognize it.

    And we wonder why the public is getting tired of funding our shenanigans…

