Manuscript Review Musing

July 5, 2013

My initial mindset on reviewing a manuscript is driven by two things.

First, do I want to see it in print? Mostly, this means: is there even one Figure that is so cool and interesting that it needs to be published?

If there is a no on this issue, that manuscript will have an uphill battle. If it is a yes, I’m going to grapple with the paper more deeply. And if there ARE big problems, I’m going to try to point these out as clearly as I can in a way that preserves the importance of the good data.

Second, does this paper actively harm knowledge? I’m not as amped up as some people about trivial advances, findings that are boring to me, purely descriptive studies, etc. So long as the experiments seem reasonable, properly conducted, analyzed appropriately and interpreted compactly, well I am not going to get too futzed. Especially if I think there are at least one or two key points that need to be published (see First criterion). If, OTOH, I think the studies have been done in such a way that the interpretation is wrong or clearly not supported…well, that paper is going to get a recommendation for rejection from me. I have to work up to Major Revision from there.

This means that my toughest review jobs are where these two criteria are in conflict. It takes more work when I have a good reason to want to see some subset of the data in print but I think the authors have really screwed up the design, analysis or interpretation of some major aspect of the study. I have to identify the major problems and also comment specifically in a way that reflects my thinking about all of the data.

There is a problem caused by walking the thin line required for a Major-Revision recommendation. That is, I suppose I may pull my punches in expressing just how bad the bad part of the study really is. Then, should the manuscript be rejected by that journal, the authors potentially have a poor understanding of just how big the problem with their data really is. Especially if the rejection has been based on differing comments among the three sets of reviewers. Sometimes the other reviewers will have latched on hard to a single structural flaw…which I am willing to accept if I think it is in the realm of ‘oh, you want another whole Specific Aim’s worth of experiments for this one paper, eh?’.

The trouble is that the authors may similarly decide that Reviewer 3 and Reviewer 1 are just being jerks and that the only strategy is to send it off, barely revised, to another journal and hope for three well-disposed reviewers next time.

The trouble comes when the next journal sends the manuscript to at least one reviewer who has seen it before…such as YHN. And now I have another, even harder, job of sorting priorities. Are the minimal fixes an improvement? Enough of one? Should I be pissed that they just didn’t seem to grasp the fundamental problem? Am I just irritated that IMO if they were going to do this they should have jumped right down to a dump journal instead of trying to battle at a lateral-move journal?

17 Responses to “Manuscript Review Musing”

  1. Alex Says:

    You might as well just set down a big pile of red meat risotto for CPP. It would have about the same effect.

    I eagerly await “Shitteasse dumppe journal” rantings.

  2. Busy Says:

    So long as the experiments seem reasonable, properly conducted, analyzed appropriately and interpreted compactly, well I am not going to get too futzed.

    This. We shouldn’t underestimate the importance of publishing clean, valid data even if no particular breakthrough derives from it. Of course minor contributions such as these deserve to be published in archival journals, as opposed to flagship journals, but they will form the basis of future meta-analyses, and if the money was already spent in collecting the data, heck, why not make it part of the public record?

  3. tyrant Says:

    “First, do I want to see it in print? Mostly, this means: is there even one Figure that is so cool and interesting that it needs to be published?”

    In my opinion, that is a determination for the editor to make; a reviewer ought to check logic, accuracy, absence of obvious duplication, even syntax and grammar if in dire need of improvement. What you find interesting or cool may be rather ho-hum to me, and the reverse applies as well, obviously. Perhaps you haven’t seen all the prior art in this area, and what you think is cool is in fact no better than warmed-up leftovers, even (or especially) if published in some out-of-the-way place.

    “If there is a no on this issue, that manuscript will have an uphill battle.” This is how we get to be our own worst enemies, both in reviews of manuscripts and grant applications.

    “If, OTOH, I think the studies have been done in such a way that the interpretation is wrong or clearly not supported…well, that paper is going to get a recommendation for rejection from me.” A rather obvious point, no?

    “Sometimes the other reviewers will have latched on hard to a single structural flaw…which I am willing to accept if I think it is in the realm of ‘oh, you want another whole Specific Aim’s worth of experiments for this one paper, eh?’.” ??? Review what’s in front of you, not what you would like to see.

    “Should I be pissed that they just didn’t seem to grasp the fundamental problem? Am I just irritated that IMO if they were going to do this they should have jumped right down to a dump journal instead of trying to battle at a lateral-move journal?” Might there be a legitimate difference of opinion? Value judgment (“just irritated”)? It all depends on whether your marching orders include a request for some statement as to suitability for this (= CNS?) journal, or whether you are asked to judge the work on its merits (= technical quality, logic, lack of duplication). How do you calibrate the harshness of your review for any given journal (Science versus Neuron vs J. Neurochem vs PLoS One)?

    “Am I just irritated that IMO if they were going to do this they should have jumped right down to a dump journal instead of trying to battle at a lateral-move journal?” It appears that you would be the type of reviewer who would let his emotions (“pissed, irritated”) trump an attempt at a more objective and level-headed assessment.

    I remain convinced that the best thing a reviewer can do is to take a simple up or down vote on what he/she’s asked to review. The decision to accept (no major problems noted) or reject (flawed logic, lack of controls, lack of priority etc) should be based solely on verifiable criteria. I’d be curious to know how often vindictive attitudes sink an otherwise meritorious manuscript: “My paper was rejected by (CNS) based on one/a few piss-ant reviews, so I sure as hell won’t let this story, no more extensive or complete than mine, get into (CNS) .”

  4. Ola Says:

    My general policy is to turn down requests for review if I’ve already “done” the manuscript at another journal, and to specifically state which journal in my declination email, so the editor can possibly follow up on why. I just find I’m automatically biased against a paper if I’ve already rejected it, and it’s not really fair to the authors to have the same reviewer again.

    As for initial review mindset, it all boils down to the data for me. Does it have a high degree of integrity? If the data is solid, I find it quite difficult to reject on silly criteria such as “not sexy/novel enough”, especially when reviewing for a low impact journal.

  5. Busy Says:

    I just find I’m automatically biased against a paper if I’ve already rejected it, and it’s not really fair to the authors to have the same reviewer again.

    Interestingly enough if the paper has been sent to a “lower” journal I generally find myself biased in favor of it…unless it was an utter disgrace, which is rarely the case.

  6. DrugMonkey Says:

    Wait…why is that “biased” if you’ve already reviewed it?

  7. Alex Says:

    Here’s an interesting dilemma:

    I recently reviewed a paper that had a brilliant methods section. The clarity with which they addressed certain issues in the execution of the methods makes this a paper that everyone in my sub-field should read. However, their application of these methods, and the conclusions that they drew, were deeply flawed. They understand the method better than they understand the problem that they applied it to. I feel like this paper needs to be published just so I can assign it to my students, but it can’t be published because the conclusions are deeply flawed.

    So I recommended major revision, praised the methods, and pleaded with them to do a better analysis and properly interpret their results. I told the editor that I would cite this paper for the first half if only the second half were fixed.

    The verdict was “Major revision.” I really, really hope that these authors fix what’s wrong.

    Since I saw a conference talk on this work, I could contact them and express interest in the work without revealing that I’m the reviewer. If they don’t get this paper published, I’m kind of tempted to contact them and ask to collaborate on a methodological paper, just so I can help them get this work out there in a respectable form.

  8. I recently reviewed a paper that had a brilliant methods section.

    Who the fucke reads the methods section when reviewing a paper?

  9. Alex Says:

    I normally just skim it, but they were doing something novel with their data analysis algorithm. This wasn’t one of those “We ran [software package]” methods sections, this was “[Rather new data analysis algorithm] uses this principle…” and then really explained some aspects of it pretty deeply.

  10. tim Says:

    “Who the fucke reads the methods section when reviewing a paper?”

    I can’t even tell if this is ironic or not; I’ve read no shortage of papers that make clear that this is the prevailing standard.

  11. Laurent Says:

    “Who the fucke reads the methods section when reviewing a paper?”

    There are plenty of fields where reading MatMeth is required to assess the quality of the data and analyses. You wouldn’t want to miss that very part, since it tells you where the flaws are, if any, and whether the authors understand how firm their results are. At any sign of stat voodoo woo, you can tell there’s something wrong in disguise.

  12. eeke Says:

    “Who the fucke reads the methods section when reviewing a paper?”

    I do. And they better be fuckin clear as day, or I’ll bitch about it.

  13. The one thing reviewers rejecting a paper need to remember is that they need to give actual reasons for the rejection, and these need to be things that can’t be trivially fixed by a revision. I remember handling a paper where a reviewer went on and on about how the authors used a technical term in a slightly incorrect manner, and how “obviously” anyone who would make such an error doesn’t understand the field. Since the other reviewer just had a few minor comments, I added fixing the terminology to the needed revisions. As is typical, the reviewer arguing for rejection refused to review the revision.

  14. The Other Dave Says:

    WTF? Comrade, tell me it isn’t true. You don’t read the Methods section of papers you review? Seriously? Isn’t that, like, the most important section of the paper for anyone who actually wants to replicate it? Quit being a lazy bastard, recognize that proper review is more than just checking the reference list for your name, and DO THE GODDAMN JOB!

    This is how I review a paper:

    1) What are the authors claiming?
    2) Do the data support that claim?

    I tell the editor whether I think #1 is interesting/important, and why. This helps *the editor* decide whether the paper is appropriate for their journal. Reviewers should not pretend to be editors; it just makes the editor’s job harder. An increasing number of reviewer instructions include a statement like “Do not indicate in your review whether you think the paper should be published”. They are politely reminding you that your role is advisory, and please let the editor do his/her job.

    Then I say whether the data support the claim or not. If they do, awesome. There is no greater joy in the world than saying that something is acceptable essentially as is. But most often the data fall short of the claim. If so, I list (numbered, always numbered) the things that the authors should do to validate their claim. If it’s just a few easy things, I recommend ‘minor revision’. If it’s a lot of harder things, I recommend ‘major revision’. These are just labels. The numbered list is what’s important.

    The most important thing to remember when writing reviews: Put yourself in the shoes of the authors, the poor grad student whose career is on the line and who is going to have to figure out how to respond to your ramblings. Be helpful. Write the clear solid review that you wish YOU had when you were first starting out, so the authors can emulate it when it comes time for them to review your paper. And remember that the editor knows who you are, and can comment on reviewer performance. If you take too long, or write stupid sloppy confusing petty reviews, you will get a reputation for badness that will subtly follow you.

  15. If there were more “Other Daves”, the world would be a much more peaceful place (or at least my mind would be more peaceful after reviews from stupid reviewers come in).

  16. eeke Says:

    “If you take too long, or write stupid sloppy confusing petty reviews, you will get a reputation for badness that will subtly follow you.”

    Is this true? Are there reviewer rankings out there? I’ve received review comments that have been inappropriate (e.g., asking whether the authors were willing to work on weekends), or just beyond stupid. It makes me wonder whether the manuscript was handed off to someone’s 4th-grade offspring or something.

  17. Juanlopez Says:

    The Other Dave,
    You make several excellent points, but I have to say that I disagree with something: I am tired of all the reviewing being, supposedly, to help the editor. It seems to me that editors want to have the power and make the decisions, but without the work of the review. I don’t think that all we are supposed to do as reviewers is advise them. The big shots who do the editing haven’t been involved in the hands-on work for a long, long time. Sure, they may have good long-range vision and see the field differently than I do, but I can smell the BS in some papers better than they can. If I am expected to work for hours reviewing a manuscript, then I demand, in exchange, more than an advisory role.
    On the other hand, it is clear that I take this waaaay too seriously, ha ha. Part of it is the issue of putting myself in the authors’ shoes.
