"…would be beyond the scope of this paper"

June 2, 2008

A now somewhat older post of drdrA’s over at Blue Lab Coats covers a recent manuscript rejection received by the laboratory. In discussing the reviewer criticisms of the manuscript, the post alludes in several places to reviewers asking for a substantial amount of additional work. I picked up on this in a brief snark; however, the critical issue was better expressed by commenter BugDoc:

I’m really concerned about what appears to be a growing trend for reviewers to ask for years worth of revisions, which could often be an additional paper. We will sometimes pull out the old standby “…beyond the scope of this paper”, but I’m curious to know if there are other rebuttal strategies with which to deflect reviews aimed at having you compress the work of an entire career into one paper.

I concur with the first sentiment, although I’d probably substitute “really, really, really annoyed” for “really concerned” if I were in a venue in which I was inhibited from expressing myself in physioproffian terms.

As PhysioProf already noted in his piece responding to this post of drdrA’s, one cannot get too caught up in what the reviewers have to say about your paper. Even if the Editor initially goes with a “rejection” decision, said Editor is still often open to a substantive plea for reconsideration. In any case, as Orac noted:

You don’t always have to do what the reviewers demand when you resubmit. There has been more than one occasion when I’ve seen reviewer requests that were clearly unreasonable and, in my response, I politely said why we weren’t going to do what the reviewers asked. If you can justify why you didn’t do what the reviewers asked with a reasonable explanation, you can often get your paper published without doing a whole bunch of extra experiments.

Exactly. Your ultimate audience is the Editor, not the reviewers, and I would hope most trainees learn this in the course of their first one or two first-author submissions. So just because a reviewer recommends a large number of new experiments that would greatly improve the paper, the authors do not necessarily have to comply in order to get the paper accepted by the editor. Nevertheless, it is indeed the case that reviewers seem quite fond of asking authors to add a substantial amount of new data to a manuscript that the authors clearly thought described a body of work worthy of publication. In many cases, this is an amount of additional work that the authors find not just irritating but unreasonably demanding and out of step with career and funding realities for PI and trainees alike. This latter concern pushes YHN’s buttons.
First, a confession. As a reviewer of papers, I’ve been known to ask for more data myself. To intimate that a paper wasn’t up to publication quality in my view simply because of the limited scope of the data presented. Herein lies the sharpest point of the discussion between reviewers and authors, does it not? Science is an ongoing process of incremental discovery, and no finite set of experimental results ever really exhausts a question. It is therefore quite natural that different individuals would come to different conclusions about what amount of progress along a line of investigation represents “a paper”. (Discussion of the so-called “Least Publishable Unit” is an old tradition in academia, of course, but I was a bit surprised, while writing an older post, to find that Wikipedia has an entry for the LPU.) So despite the fact that I probably lean toward a least-publishable-unit approach myself (from a certain perspective, of course), I do have some threshold for a publication-worthy amount of data.
Of course, we all couch this to ourselves in terms of different things, but something in the nature of “a complete story” runs through most of us. Which is utterly laughable as a workable standard. We very quickly run into questions of arbitrary taste, tradition and levels-of-analysis. And, of course, the worst possible reason for demanding more data: “Well, I include five times more data in each of my publications, so everyone else should have to as well.” What is “traditional” in your given subfield or for a given journal is perhaps about the best we can do. Trouble is, what if that changes over time?
As I expressed in a comment to drdrA’s post, I believe that demands for increasing the amount of data, or the scope or type ($$) of assays included, can be a nasty mechanism by which the powerful in science perpetuate their own position. Note that I do not say that this is even an explicit process, and obviously it does not apply to each and every person who gets a paper published in a GlamourMag. To put it as neutrally as possible, publication in the top general science magazines is an arms race in which the constant one-upsmanship of breadth of data, inclusion of the latest and greatest techniques, and the sheer depth of the edifice of prior work on which the end product must be based tends to raise the bar for what is required for a paper to be considered sufficiently “complete”.
It used to be the case that if you knocked out a gene in a mouse and had one phenotype, you were good to go. Now? Well, you’d better have the conditional knockout, isolated to particular cell populations, generate at least three or four systems’ worth of whole-organism phenotyping, silence, rescue, do a bunch of increasingly irrelevant so-called “mechanism” experiments in in vitro systems and gene-array everything under the sun. OK, I perhaps exaggerate. But not much. And the point is that there are structural and technical “requirements” for a paper to be acceptable to a top-flight journal which have very little to do with the real quality and significance of the work.
The thing for those of us in subfields that have been less affected in this way to consider is whether it can happen here. It assuredly could. In behavioral pharmacology, for example, would we look to an FDA-approval-like standard and say, “Well, that result in your rodents is very nice and all, but we really need some backup in a second and larger species like dog, swine or nonhuman primate”? Ridiculous, right? But really, what is the difference? So the molecular nutters start insisting on gene array this, no, no, that’s old hat, Chippie-Chip that, whoops, what’s all this Solexa sequencing? blah. blah. New tools are available, they do kewl stuff, people scramble to use them (application is secondary, of course) and the next thing you know, you can’t publish very highly without them! So for these aforementioned behavioral pharm studies, now in two species (including one of the expensive ones), well, you’d better have PET occupancy data too. I mean, c’mon, it makes sense, doesn’t it? There is very little argument that these sorts of additions would make the science that much stronger, more impressive, more general and all that good stuff. We could go there, we just haven’t. How about your comfy little subfield, DearReader?
This is where it strikes me that certain intentional, or even unintentional, processes demanding that more and more and more go into a paper tend to increasingly constrain who can publish in a given journal. Constraining on the basis of laboratory resources rather than brilliant insight or clever experimental design. And this is not necessarily a good thing.
One of the more frustrating aspects of the debate over “scope” is the push and pull between paper review/acceptance and grant review/funding. For most people the two are linked in fairly direct ways. You can only do the science you can afford, in terms of grant dollars paying for supplies and personnel, and institutional support providing access to big-ticket equipment or other resources. In order to get the grant dollars from the NIH, you have to publish papers, and the nature of those papers counts. The higher the Impact Factor of the journals you publish in, the easier the grant money is to pry loose (on a field-normalized basis). If your papers are considered more substantial, deeper, broader or whatever, the easier the grant money is to pry loose. The hotter and more exciting your work…well, you get the picture.
Lest this seem like an us/them screed, let us draw back and recognize that on every level there is some of this at work. Inevitably, our conception of what constitutes a complete paper or a good paper is shaped by what we are doing ourselves. After all, people should have to go to at least as much trouble as we do to deserve that paper, should they not? Or be at least as good as us to deserve a grant, no? It is only fair.
You see where this leads, though, don’t you? I mean, even at my modest level, I am capable of pulling off a breadth of work that I couldn’t have done when I was just starting my independent career. Should I hold New Investigators to my current standard? Is that really fair? Or flip it around: should we hold much more senior people to an even higher standard just because they are so much more capable? WetEar Prof, you get a three-experiment paper, but Geezer Prof, you’d better have a dozen in there!
The fact that the last handful of papers I reviewed suffered severely from lack of extensive content has nothing to do with this. Neither does the really sweet experience I’ve been going through with the grant reviewers wanting to know where some pubs are (before the $$) and the reviewers of a related paper wanting an R01’s worth of data for acceptance. Nothing at all. I swear.

7 Responses to “"…would be beyond the scope of this paper"”

  1. Coturnix Says:

    If you are out of money, out of the lab, out of the career trajectory, yet have old data that others should see, shouldn’t they get published somewhere?

  2. Becca Says:

    totally off topic (not to say your post wasn’t interesting and informative as usual, DM!)…
    CAGE MATCH! Sunday, Sunday, SUNDAY!
    See WetEar Prof Vs. Geezer Prof in a fight to the finish! No submissions allowed, KO only!
    Next on “Who Gets Funded?”
    (it’s probably time for me to go home and get some sleep)

  3. whimple Says:

    It’s a good topic, and “beyond the scope of this paper” is a perfectly reasonable thing to say.
    On the flipside, when I’m reviewing papers, I tend *not* to ask for additional data. If I feel crucial controls are lacking, that’s one thing, but just asking for *more* is not cool. Instead, if I feel *more* is needed, rather than *better*, I recommend rejection and publication in a less prestigious journal.
    Conversely, when I’m publishing papers, I tend not to comply with requests for *more*. I try to explain why what I have is a good story as it stands to the editor (as DM astutely points out, the editor’s opinion is the ONLY one that counts). If that doesn’t work, I take my paper and walk. Publishing now somewhere (anywhere) is so much better than chasing the fool’s gold that is trying to please a capricious reviewer.

  4. Sigmund Says:

    The last paper I sent in resulted in comments where the reviewers took absolutely contradictory stances – one asking for more work done on one part of the topic and the other asking for less data on this point. The editor told us we would have to comply with ALL the points of the reviewers if we wanted the paper accepted.
    We actually did the necessary experiments and wrote a new version of the manuscript, along with a letter pointing out that we could not really both extend and delete the particular contentious point. The reviewers then accepted the manuscript, but the editor himself came up with a completely new point that none of the reviewers had raised in their comments, and we had to start a new series of experiments that took about three months to complete – the entire process taking over a year from submission to acceptance.
    And this was from a fairly modest impact factor journal.

  5. PhysioProf Says:

    The last paper I sent in resulted in comments where the reviewers took absolutely contradictory stances – one asking for more work done on one part of the topic and the other asking for less data on this point. The editor told us we would have to comply with ALL the points of the reviewers if we wanted the paper accepted.

    There is a high-impact journal in my field that pulls this shit all the fucking time. “This is a solid study, but in order to merit publication we would need to see more experiments that get at mechanism.” I have, on occasion, spent over a year on a multiple-round back-and-forth with these fuckers doing more and more and more experiments, only to end up rejected at the end.

    So the molecular nutters start insisting on gene array this, no, no, that’s old hat, Chippie-Chip that, whoops, what’s all this Solexa sequencing? blah. blah. New tools are available, they do kewl stuff, people scramble to use them (application is secondary, of course) and the next thing you know, you can’t publish very highly without them!

    My own research is heavily leveraged off of novel tool development within my own lab, so I mostly avoid this, and do not go chasing after the latest, greatest techniques that–oh, by the way–coincidentally seem to involve buying expensive equipment or paying exorbitant licensing fees to use.

  6. BugDoc Says:

    Very very helpful perspective, DM. My graduate advisor’s approach was to do everything you possibly can to address reviewers’ comments regardless of whether you think it’s a good idea or not, so I’ve obviously internalized that philosophy. However, I’m now rethinking that approach, based on the reviewer phenomenon described in the post. When I review manuscripts, I ask for the experiments or controls that are reasonably needed to support the conclusions that the authors themselves made, rather than asking the authors to extend their study to make claims that I think they should make.
    Having said that, regardless of how unreasonable reviewer requests might be… many of the editors I deal with are colleagues or acquaintances (or people with whom I would like to develop a working acquaintance), and thus I thought I should try to avoid getting a reputation as the type that tries to get out of doing any additional work. Obviously there’s a balance to be struck, and I’m getting the sense from the comments here and at the previous “Rejection” post that I’ve probably been too conservative with rebuttals, and that others have been pretty successful setting limits on what they are willing to do for revision.

  7. neurolover Says:

    Beautifully insightful post, DrugMonkey. I think part of the problem, though, is that the power labs are doing more work. Perhaps we’re seeing the slow death of small science? I’ve seen what you describe, jokingly, in your knockout example, and the CNS manuscripts that come out of those labs are arguably better, no? And, of course, impossible for a new investigator to accomplish, requiring as they do an army of high-level researchers who can do the different approaches well.
    I’ve heard the same thing from reviewers of grants, even those who really want to help the young investigators along — when they’re looking at the proposals, there’s just no competition between what the new guy is proposing and what the established guy is proposing. Applying the same standard to everyone, though, means that the new guy is never going to get past being a new guy.
    Young investigator grants are a help, but they don’t let you attract the personnel to do the research army work, even if you have some money.
    This is where I do worry that the system is broken — that the PI-centered lab is dying off, and that we haven’t yet figured out how the big PI lab works in the long run. Maybe we should be looking for some clues from physics, which is where I think we might be heading.