Peer review demands for extra experiments are "wasteful tyranny"

April 27, 2011

…or so says scientist Hidde Ploegh in a Nature News column.

The whole thing is brilliant and lays out the case admirably:

Rather than reviewing what is in front of them, referees often design and demand experiments for what would be better addressed in a follow-up paper. It is also commonplace for reviewers to suggest tests that, even if concluded successfully, do not materially affect conclusions… This has a serious and pernicious impact on the careers of young scientists, because it is not unusual for a year to pass before a paper is accepted into a high-profile journal. As a result, PhD degrees are delayed, postdocs may have to wait an entire year to compete for jobs and assistant professors can miss out on promotions… The extra months of experiments increase costs for labs, without any obvious advantage for science. Although journals profit handily when prospective authors offer the best science possible, most do not spend money to produce it.

The only glaring omission here is the failure to castigate this process for generating a LOT of data of potential use to science that will never see the light of day, once the extra person-years' worth of work is shoehorned into a single additional figure.

The good Professor has three suggested solutions to help:

First, they should insist that reviewers provide a rough estimate of the anticipated extra cost (in real currency) and effort associated with experiments they request. This is not unlike what all researchers are typically asked to provide in grant applications.

This is a nonstarter to me. It just raises the question of how much additional effort and cost is too much and how much is just right. Who is going to decide this part? Obviously the reviewers who are asking for the additional experiments already have some idea of the amount of time and expense…and they've made the demands anyway. Also, reviewers will shade their budget estimates in the direction of their demands: "Oh, this should only take a month for a postdoc to work out". Yeah right. I'm not seeing where this will help.

Second, journals should get academic editors with expertise in the subject to take a hard look at whether the requests of reviewers will affect the authors’ conclusions, and whether they can be implemented without undue delay.

This was my reaction to the column up to this point. The solution is good editing. I can't tell you how many times at real journals (i.e., those that are focused on the science, not the chase to be first, hawt and/or sensational) the editor has shut down reviewer requests for additional experiments to be added to our manuscripts. Sometimes explicitly, by saying in the editorial part of the review "The requests for additional experiments X, Y or Z by Reviewer #3 are beyond the scope", and sometimes implicitly: "I concur that points A, B and C in the reviews are most important to address". Either way, there is a clear signal that the Editor plans to accept the paper without the additional experiments. It should not surprise you that it is the practicing-scientist class of Editor that I have found to engage in this behavior. They take an active role in policing the "level" of the journal, not only in terms of keeping out the chaff but also in terms of keeping a lid on ever-escalating demands for what is justified to publish as a paper.

There is another, more cultural facet to the academic/professional editor discussion as well. I had a recent experience, as it happens, in which an editor eviscerated a manuscript we had submitted by demanding we delete a good half of the stuff in there. While it smarted to find that she did not recognize the unique brilliance of all of our offerings, I can accept this. This editor is a senior scientist in the field, has been an Editor in Chief of journals for a very long time and I am entirely uninsulted to find her inserting herself so emphatically in the process. I would have a considerably different reaction to the professional editor class, which does not come with similar scientific stature (despite what they, hilariously, seem to think). So I'm with Professor Ploegh that this is a good change: academic editors (i.e., practicing scientist peers) over professional editors. You will be entirely unsurprised to learn that Nature editor Maxine Clarke is not impressed by this suggestion: "I think your first and third suggestions, in particular, are good ones."

Back to Ploegh:

Third, reviewers should give a simple yes or no vote on the manuscript under scrutiny, barring fatal shortcomings in logic or execution. Once editors have decided that, in principle, the results are of interest to their publication and its readership (which is their editorial prerogative), passing a simple test of logical rigour and quality of data should be enough to get them through peer review.

He's making a point consistent with his earlier observation that peer reviewers should stick to reviewing the manuscript that is in front of them, rather than reviewing the manuscript they think they'd like to see in the future. I concur. I think I've actually said this before in an exchange with some professional editor or other, probably Noah Gray of NPG, who tried to weasel out by insisting that editors are simply responding to the field (the peers doing the reviewing). As I noted above, academic editors have no problem telling reviewers who demand excessive numbers of additional experiments to go pound sand. There is no reason the professional editor class cannot do the same. Simply have a house rule that demands for additional experiments will be grounds for rejection of the manuscript. Not "revise and resubmit" trolling…rejection. "Try again later". With the clear understanding that the present paper has been rejected and any subsequent submission had better be substantially different. Because after all, that's what the reviewer demands are saying, right? That it must be substantially different to be acceptable…

Professor Ploegh refers to a vicious cycle:

Many reviewers are also, of course, authors, who will receive such unreasonable demands in their turn, so why does the practice persist? Perhaps there is a sense of ‘what goes around comes around’, and scientists relish the chance to inflict their experiences on others.

So make use of this. Publish the papers that do not receive demands for additional experiments and give a hard rejection to those for which the reviews ask for lots more stuff. Since GlamourScientists are the ones doing the reviewing, they'll snap into line eventually.

Professor Ploegh ends with a comment that is going to warm the cockles of the hearts of the younger scientists:

Having read some of the biographies of the founders of molecular biology, it is hard to escape the impression that, once, the mechanics of science were indeed thus. It is worth revisiting the experiment, I should think.

The nasty way to put this is that, dude, you OldTymers just walked around picking up fruit off the ground, never mind picking the low-hanging stuff, and it was a freaking Nature or Science paper. We’re up against some new astronomical standard in which a whole 5 year program of research is supposed to go into each GlamourPub.

The more sober realization is that science progressed just fine in the past, when Science and Nature pubs with one or two figures in a "paper" of highly limited scope became foundational parts of our subfields. Certainly for those subfields of my own interest, when I go back to look at the original paper for something that became absolutely canonical, it is a figure or two. A much more substantive paper always followed after the first observation, but that was typically in a nonGlamour journal. A field journal. Now, of course, the followup papers are less frequently published by the original group*, and less frequently published at all**. That is a shame and a loss for science.

I can’t believe the NIH is not concerned that their money is being wasted with this competitive cycle being played out with their extramurally funded investigators, aided and abetted by GlamourMags with a clear profit motive.

__
*because, of course, being GlamourLabs, the filling in of “details” is best left for “the little people***” and they are on to the next big splash.

**why would some other group pursue an area that the GlamourLab has the lead on, they are just going to get scooped to the next paper (see vicious recursion with *)

***Yes, that is very nearly a direct quote of the Glamour-est PI of my acquaintance.


  1. anon Says:

    “So make use of this. Publish the papers that do not receive demands for additional experiments and give a hard rejection to those for which the reviews ask for lots of more stuff.”

It's understandable how some reviewers' demands are over the top. However, how is your statement supposed to favor newer authors? Maybe I have misunderstood. I have seen manuscripts from established labs breeze through the review process and get published despite loud protests from reviewers – citing lack of controls, etc. Wouldn't editors tend to favor publishing manuscripts from more established labs, especially with these policies?

  2. drugmonkey Says:

    These are related but independent concerns. No matter what the process, you are going to have an editor or very limited editorial board making a call. They are going to be subject to biases.

    I’d rather have the biases of an established figure in the field (what can I say, I’ve found them to be very fair to my manuscripts even when I was not really well known to them personally OR professionally) who has loyalties shaped by his/her allegiances to journal (sometimes to the associated academic society as well) and subfield. Rather than professional editorial staff with loyalties to the journal bottom line and the idea of being Glamourous.

  3. DrLizzyMoore Says:

In my experience, when a reviewer asks for more experiments (not the kinds of experiments that truly take a few weeks), he/she does not like the manuscript or is looking for excuses to get it sunk. But, I will say, not all suggested experiments are a bad idea or mean that you will be sent down a rabbit hole. After a few drinks, I usually really appreciate reviewer comments and suggestions.

  4. BugDoc Says:

    “academic editors have no problem telling reviewers who demand excessive numbers of additional experiments to go pound sand”

Some academic editors certainly do this (and I worship them), but some don't. This is not due to lack of scientific acumen, IMO, since these scientist editors are generally very well recognized in their field, but most likely due to too many demands on their time. It's a lot easier to tell the authors to deal with all the reviewer comments. To make it easier on busy editors, it would be good SOP that any comment/criticism made by a reviewer should be clearly justified by specifics on why it would make the paper more rigorous. I often look at reviews from other scientists on papers I have evaluated and am sometimes shocked by the laundry list of vague comments that the authors are expected to deal with. Doing a better job of training young scientists on the process of critical review could help in the future. It's also unfortunate that there isn't some mechanism of reviewer feedback, but since reviewing is essentially a volunteer service activity, I guess editors may not feel free to give negative feedback to reviewers with poor reviewing habits.

  5. drugmonkey Says:

    sooo.. I didn’t mean that editors always cap off reviewer comments on the very first review. sometimes they do and it is a GoodThing. But in your scenario, sometimes the editors just want to see how you respond and you are always free to rebut or fight a comment. The good Editor then focuses down on the situation and makes the call. I’m okay with that too, in my experience editors break my way on a sufficiently high number of these questions….

  6. odyssey Says:

    I chose “Other” in the poll above because it depends. Sometimes extra experiments really are justified. Having said that though, I am in broad agreement with Prof. Ploegh.

  7. gerty-z Says:

    I voted “yes”, but with an *. I have seen too many reviews (not just my work!) where reviewers have demanded experiments that were superfluous or, even if they worked, did not address the conclusions of the ms. Sometimes the reviewers were adamant that these be done in order to publish the paper, and the editor went with it. The * is that there are, of course, some times that good experiments are suggested by the reviewer.

    I agree that good editors, academic or professional, that were able to critically evaluate the review and put a stop to the thoughtless reviewer demand of “more” would be the way to address this. There are certainly some excellent editors out there that do this well. But I also think that we, the reviewers, must recognize that this is our problem. Reviewers should be working with editors to correct the problem. If you don’t like to get thoughtless “more” reviews, then don’t write them. And teach your trainees how to critically evaluate work without resorting to this lazy review strategy.


8. I have seen no correlation at all between the editor's status–"professional" or "working scientist"–and whether she is willing and able to do the work of appropriately signaling to the authors what reviewer demands for additional experiments need to be met for acceptance. One of the most effective editors I have had my manuscripts handled by in this regard is a "professional editor", who hasn't been in a lab in decades.

    Incidentally, as a PLoS ONE academic editor, I can assure everyone that it is explicit PLoS ONE editorial policy that authors be instructed by the reviewing editor concerning exactly which reviewer-suggested experiments must be performed and which are optional.

    Finally, yeah, HAHAHAHAHAH! How fucken hilarious was it when that editor douchebagge was all like “We editors are *exactly* like PIs!”

  9. TheGrinch Says:

I suspect reviewers are more likely to demand wasteful new data when the authors are not established or when they are new to the field, whereas they may not raise similar issues when they see a manuscript from the established lab—"they have been up to this for a long time now, so they probably know what they are doing or they might have even tried it already."

One way to improve the situation is perhaps the double-blind review of manuscripts. Sure, it will increase the pain of complying with additional demands for everyone—but only in the short term. Once reviewers have no names to attach to a manuscript, they are more likely to question the necessity of their own suggestions. Plus, in any system where authors are known, there will always be a few who have an easier time with review, and those are the ones who will resist any change. But with a double-blind system, the editors will eventually see that this is happening more or less randomly for everyone, and at that point saying no to crazy reviewer demands will be much easier to implement in practice. My $0.02.

  10. Dr. O Says:

    My mentor and I fought the good fight re: an awful third reviewer years back… and won. The editor never hinted that s/he found this reviewer’s comments out of line. But after a diplomatic rebuttal outlining the numerous flaws in his/her review, the editor accepted the paper immediately. These editors exist, and in large numbers I’m sure, but sometimes you have to ask.

  11. Pinko Punko Says:

I kind of agree, and perhaps it's not Dr. Ploegh, but other biggie wiggies complain because their papers don't sail like they used to and are in fact treated like the rest of the plebes.


  12. As both DoucheMonkey and I have been pointing out for years, whenever one of these “esteemed professor” douchebagges writes some kind of editorial or open letter about some kind of “systemic problem”, you can bet the motherfucken farm that what it’s all about is his own interests having recently been impacted adversely.

  13. drugmonkey Says:

    Dunno…he seems to have a couple of recent Science papers. You think all the PNAS work is him being frustrated with pushing those higher?? Or is one of his favored scientific progeny getting hammered, d’ya think?

  14. anon Says:

Even if it were true, that's good (at least) for other scientists who struggle through these seemingly ridiculous, apparently wasteful*, and unavoidably pain-in-the-a** demands.

I wish that for once someone who is powerful, or who has reached a certain level in the so-called academic system*, would make some changes or at least make some 'shakes'. Otherwise, those at the bottom of the pile will never get to shake it.

    *refer to this blog’s entry
    **refer to those cranky reviewers

  15. CD0 Says:

    It is ironic that he writes that precisely in Nature, which states that “when making a decision about publication in the light of reviewers’ comments, editors consider not only how good the paper is now, but also how good it might become after revision”.

  16. Kaija Says:

I really like this idea…I know that when I grade papers and exams, having the students' names hidden helps maintain objectivity for me and the others who may be grading (TAs, graders, etc), and many research studies have shown that double-blind review of resumes and CVs, as well as many types of manuscripts (not just scientific), results in significant differences by removing conscious or unconscious bias (and we all have them). Another good practical example is the introduction of "blind" auditions for musicians, who played for the jurors while hidden behind a screen (Google this…it's fascinating).

    If the science is good, it will stand on its own without Dr. BigName at Prestigious U to hype it.


17. IMO, these sorts of reviewers are dishonest (those who request additional experiments because they want to get it sunk). Personally, if I think there are experiments lacking that were necessary to support the claims of the manuscript, I have always called for a flat-out rejection. I've then explained why I'm calling for a rejection (i.e., you need Experiment A, B and/or C to support this claim, and until that's done, I don't think this is publishable). That's been rare though. Usually the papers I've read have the supporting data, save for a minor quirk here or there which I'd like to see addressed more thoroughly (showing controls, for example, or wanting to see a different statistical model than the one the authors used because their data doesn't fit it). When I do this, I try to adhere to a couple of rules: 1) Am I placing a cost on them in time and money that I would find unreasonable if I had been in their shoes? 2) Is my request absolutely necessary and/or will it benefit the field and the manuscript if this were added? If I can answer "No" to the first one, and "Yes" to the second … I make the request.


18. In regards to fighting a comment … indeed, it is an author's right and should always be exercised. Presenting a sufficient case for why additional experiments DO NOT need to be done is part of the job IMO, and a successful author quickly learns how to do so. I've uttered the phrase "These requests are outside the scope of the current manuscript" so many times that I've lost count at this point. And you know what? It's always worked. However, if you don't fight it … you'll lose.

  19. Pinko Punko Says:

Ploegh is an ultra hard-ass, and in his case I would bet that he's tougher than most reviewers. They do very good work, but you could say he is accustomed to publishing in the toppest a lottest. Yet given this, there is always the "NATURE IS MY BIRTHRIGHT" possibility. This always ignores the fact that there aren't that many more Nature spots in play now than 20 years ago, yet there are entire new fields and massively greater numbers of researchers vying for those spots. Papers are getting longer and have much more data. Many of the bigwigs do not factor this into their "I'm getting screwed by reviewers" calculator.

  20. drugmonkey Says:

    The majority of the “Other” comments are in the nature of “it depends” or “sometimes”.

One additional thought for the discussion was one that said demands for additional work were okay for *controls* but not okay for "new experiments".

  21. drugmonkey Says:

    Nature has a long and hallowed tradition of posting opinions and editorials bemoaning some situation within the culture of science (see Impact Factor) to which they contribute substantially on the bad side. And for things which they could quite easily reverse by taking a different tack on their operations and preferences.

  22. BugDoc Says:

    Obviously I’m free to argue as much as I want, and I certainly love to argue just on principle. But to illustrate the point more plainly, here’s a common flow chart for what bogs the system down when the editor just says deal with the stuff:

    (1) you get 1st round of reviews. Editor says “address reviewers’ comments”
    (2) I call/email editor/assistant and leave message
    (3) Assistant says email passed on to editor, but they are at a meeting
    (4) I call/email editor again with proposed list of things we can do and what is beyond the scope (aka bullshit)
    (5) Finally, the editor emails back and says, ok, um, I’ll have to take a look at the paper “again” to figure out what’s ok.
    (6) I email the editor again.
    (7) The editor emails back and says “here’s the deal…” (if you’re lucky) and you actually get some guidance on what they think is important.

    I totally get that editors are really busy. My point is why not just give that guidance in the first place, rather than 5 emails and 2 weeks later since it takes the same amount of time to read the paper? There are two outcomes here and neither one is optimal – 1st, you get the info you needed, but 2 weeks later or 2nd, if you don’t negotiate enough with the editor, you do a lot of extraneous experiments a la Ploegh. This is how things are done now and I’ve done this many times. I would argue that it’s inefficient and that it’s the editor’s job to pass on this guidance WITH the reviews. I’m SO grateful to the really terrific editors I’ve had that gave me their valuable feedback right up front, so we could just get to work and take care of business.

  23. Canadian_Brain Says:

Pretty tough to actually double-blind a manuscript… I know most of my papers have lines like "We recently reported that…", "see MyName et al., 2010, for a detailed description of this technique", or "This fits with our previous observations…". It's tough to craft a cogent narrative of your research program if you have to pretend that your previous manuscripts weren't written by you!

  24. DrugMonkey Says:

    tough? it is impossible for this to work. especially as a universal practice as opposed to “well, this limited subset of three percent of submissions could possibly be anonymized”.

    My feeling is that people who raise this red herring have very little experience participating in peer review within a disciplinary subfield.

    [added: and just to head off the usual whinging about “waaah, so we shouldn’t even try just because you, DrugMonkey, say it won’t work? That’s defeatist!”…..it is about what can actually be accomplished versus that which cannot, without radically changing the nature of scientific publishing and the way data are presented. it would all have to be anonymous- no more authors at all. tilting at stupid windmills distracts from pursuing fixes that might actually have a chance of addressing a problem. PLoS ONE type of efforts, have a chance of changing cultures. Continued deconstruction of the IF scam has a chance of changing cultures. Step by step refusals to consider GlamourMag pubs as anything special by peers who sit in judgment of grants and careers has a chance of changing cultures. Imagining peer review could ever be “double blinded” is pure fantasy]

  25. Canadian_Brain Says:

I wonder if sometimes reviewers can't think of any good critiques, but their pride (i.e., "I'm a brilliant scientist, there must be something wrong that I'll spot") forces them to make a ton of comments. I know that my rejected papers often don't have suggestions for additional experiments, but my best ones do…

I also find that whenever you have 'contradictory' findings, there is a request for additional experiments. I imagine the reviewer's thought process is "But Frank found the opposite, so maybe this lab doesn't know what they're doing".

We had a result with a stimulant producing the opposite findings to what you would expect. The reviewers couldn't believe it, so we had to prove that we knew how to inject a stimulant. They made us prove that amphetamine increased locomotor activity… despite about a hundred years of knowing that speed makes you move around more… Waste of about a grand of research money…

  26. drugmonkey Says:

    They made us prove that amphetamine increased locomotor activity… despite about a hundred years of knowing that speed made you move around more.

    but of course the effect of stimulants is biphasic and as you get the dose high enough the rodent (I’m assuming you mean rats or mice) actually stops moving around because it is engaging in repetitive, stereotyped behavioral patterns. (that was for the peanut gallery, not to insult your intelligence)

    That point of transition depends on numerous factors, not limited to mg/kg. So without any additional information I can imagine a host of reasons why a reviewer would ask for a traditional assay to show that a highly novel outcome was not because you had some funky dose by environmental / experimental factors thing going on….

  27. bacillus Says:

    I’m inclined to agree since his commentary is specifically aimed at the “top journals”. One of my fantasies is a website where every one can post the most egregious comments they received on a manuscript or grant review.


28. Most published scientific papers could probably benefit from additional experiments. It is the nature of research that one answered question spawns many more. Constructive and timely peer review provides a real benefit to the authors, the journal and ultimately the scientific research community. However, in my personal experience, the quality of peer reviews varies markedly, regardless of the prestige level of the journal. Delays in the publication of insightful discoveries can hinder the rate of scientific knowledge advancement just as much as premature reports that might lead researchers down blind alleys. I think that the solution to this problem is to have open-access peer review of recently published scientific papers by informed readers who properly identify themselves. In this manner, problematic aspects of such reports can be flagged so that other readers are alerted to potential deficiencies.

  29. BikeMonkey Says:

    The notion that peer review can solve the problem of published results that lead the field down blind alleys is utter nonsense.

    Publish the work and let people get to work. Replication, or lack thereof, is the only way to determine lasting impact…

  30. Pinko Punko Says:

    Or their claims that an anonymous ranter about massive fraud in a German scientist’s lab was causing all sorts of problems and was very worrisome, when in fact the anonymous person was an actual whistleblower who got stonewalled at the institutional level and the PI ended up retracting 8+ papers (see Retraction Watch for details). Once again, Nature misses the mark on the editorial side. See also: Nature editorial on Nisbet report on climate money.

  31. cashmoney Says:

    seriously Pinko? You mean that crazy person who spammed every possible comment thread under the NPG umbrella with accusations was vindicated????? Is this recent?

  32. Alex Says:

    I wonder how much of this is driven by supplemental online materials. In an earlier era, the papers in Science and Nature could only be a few pages and that was it. Supplemental materials did not exist, so if one started to ask for umpteen million additional things there would be nowhere to put it.

    I’m quite happy that the most prestigious society-level journal in optics (Optics Letters) has a completely inviolable 3 page limit, and no supplemental online materials, except for animations. No supplemental figures, no supplemental methods, no supplemental derivations in appendices, none of that. You say what you have to say in 3 pages, and if it’s important enough, and done well, they publish it. If the result needs follow-up, there are other journals that you can publish in, including journals published by that society.

  33. Pinko Punko Says:

    If that person was the person talking about this one, then yes!


  34. I’ve never called my Assistant Editor/Editor asking for advice on how to proceed. I’ve always just typed up my responses and refused to do the extra experiments I thought were superfluous.


  35. If an experiment lacks the proper controls, is it really an experiment?

  36. drugmonkey Says:

    Is it really cut and dried as to what constitutes the “proper” control(s)?

  37. BugDoc Says:

    We often submit to mid-high IF journals (> 10) where multiple rounds of review are not uncommon. I prefer to try to get it right the 1st time if possible to avoid that and I find a quick chat with the editor can help in that regard, similar to talking to the PO after you get your reviewer comments.

