More deconstruction of the “tyranny” of excessive reviewer demands for more experiments

April 28, 2011

In case you missed it, there is a great column up at Nature News by Professor Hidde Ploegh. It laments reviewers’ ever-increasing demands for additional experiments on submitted scientific manuscripts, particularly at GlamourMag-level journals.

Submit a biomedical-research paper to Nature or other high-profile journals, and a common recommendation often comes back from referees: perform additional experiments. Although such extra work can provide important support for the results being presented, all too frequently it represents instead an entirely new phase of the project, or does not extend the reach of what is reported.

The comments over there are shaping up quite nicely (also see my post at Scientopia), and I was struck by one particular contrast.
Maxine Clarke, Publishing Executive Editor (and listed fourth on the masthead) of Nature, issues a denial of culpability when she says:

However, one counter-point to the one you make is that for a “top” journal, many papers are submitted that claim far more than their results will support. And it is only if these results are really solid that the (interesting) claim holds up. It is sometimes hard for editors, however experienced, to second guess some of these claims – though of course many such are easily identified as speculative compared with the data shown, and can be declined without peer-review until the authors have provided more substantiation.

Right. It is just those hot air blowing authors that are the problem. Sending in their first version of the manuscript with wild claims about how they’ve revolutionized!!!!111!!! our understanding of cancer or gene transcription or the function of the eleventy protein du jour, Gertzin. So of course what is the journal ever to do but to make sure that the authors have provided sufficient experimental backing for their outlandish claims.
A counter to this was provided by one David Vaux:

the first, and bigger, hurdle is getting the paper sent out for review. All too often the glossy journals reject a paper without review.
This provides an incentive to exaggerate “sex up” the results in the cover letter and abstract.

Exactly. If you don’t put in those outrageous claims of having solved everything that could ever possibly be important in biology (or pick your discipline), or discovered something so amazing that we will no longer use “sliced bread” as our favorite metaphor…you don’t even get your manuscript sent out for review. Why? Because the editors won’t think that it could ever be appropriate for Nature.
So authors are forced to make those outrageous claims that probably run far ahead of what they really believe*.
Don’t get me wrong. There are no doubt plenty of scientists on the submitting and peer reviewing side that throw in their lot enthusiastically with this process.
But don’t try to pretend you are out of the loop on this one, Editor Clarke. And most especially don’t try to pretend that you are not in a unique position to dramatically reduce this nonsense, simply by changing the way you do business** at your magazine.
__
*Do enough GlamourMag chasing (and succeeding) and you might just start believing your own BS, donchathink?
**for the slow readers, Professor Ploegh covered this. Simply shift your editorial policy towards considering what is in front of you rather than what you want to see. If the claims are too outrageous, reject it. Don’t tell the authors “well, I want to see the paper that actually supports*** your most flamboyant interpretations of your present data”.
***do we even need to go into why these kinds of pressures to come back with data that support the flights of fancy lead to data faking and, shall we say, optimistic consideration of the work that is being done in response to the review?

18 Responses to “More deconstruction of the “tyranny” of excessive reviewer demands for more experiments”

  1. bsci Says:

    I’ve never reviewed for a GlamourMag, but I frequently see claims and conclusions that aren’t supported by the data. If the paper is still interesting without those claims, I just ask them to remove the claim or clearly state the major assumptions/flaws behind their conclusion. If the paper can’t stand without more data to support the claims, I reject.
    The one case I can remember asking for more data is when the sample size was significantly below the standards of the field and I suggested they increase it to make the paper more than a preliminary finding. Perhaps my hesitation regarding asking for data collection is that my field’s experiments (fMRI) are very expensive. Lab-heads who review might not worry about grad student or postdoc time, but they understand that a followup experiment is $10-20K.
    As a reviewer, I frequently ask for more analyses, but I try to keep track of how long certain things should take and to be clear about suggestions vs. requirements for publication.
    Of course, no one trained me on anything regarding reviewing; I’m just learning from experience (and I feel really bad about the recipients of some of my early reviews).

  2. Isis the Scientist Says:

    It is sometimes hard for editors, however experienced, to second guess some of these claims – though of course many such are easily identified as speculative compared with the data shown, and can be declined without peer-review until the authors have provided more substantiation.

    HA HA HA HAHA!!!! Seriously?

  3. msphd Says:

    Yeah, I’m with bsci.
    I have never understood why anyone thinks it’s easier or more appropriate to
    a) reject a paper or b) ask for a ton of new experiments
    rather than just EDIT THE TEXT to be more reasonable?!
    Perhaps this is because the culture encourages scientists to suck at communication and err on the side of all-or-nothing thinking?
    I don’t get it. Our current system cuts off all our noses. No wonder we’ve all got a bad case of spiteful face!
    btw, Drugmonkey,
    “discovered something so amazing that we will no longer use “sliced bread” as our favorite metaphor”
    = some of the best writing I’ve seen all week. 😀

  4. becca Says:

    DM are you really missing the point that much? She’s telling you that the difference between “this claim is too outrageous” and “this claim is exciting, if true” is not always obvious.
    The scope of Nature and Science is extremely broad. Consider the sheer number of scientist editors they would have to have on board to be able to ‘consider what is in front of them’ as well as a society-level journal could. I’ve met editors from these journals; they are very bright and very well trained scientists… but they are still going to be out of their depth when it comes to evaluating every claim in every paper sent their way. That’s one reason peer review exists.

  5. DrugMonkey Says:

    becca, are you really missing the point that much?
    “this claim is exciting, if true”
    It is not the place of a journal editor, nor peers who are reviewing the science, to get into this within the manuscript review process. Review the manuscript in front of you. Does it support the claim, or not?
    Yes, there will be a distribution and some minor amounts of additional evidence can be critical. But recall that old discussion where an alleged former Nature editor was going on about how “we work with PIs for years” on a story? Yeah, that’s wrong.

  6. becca Says:

    You are assuming it is always easy to tell if the claim is supported or not. I don’t think it is.
    In addition, it is the place of the journal editor to decide if the claim is exciting. That’s what must happen in a world where some journals get a lot more attention than others and also get a lot more submissions than they have slots to fill.
    I mean, really?? What you are saying is akin to: ‘it is not the place of a PO, nor the peers who are reviewing the science, to decide if a grant is innovative’.

  7. Eric Lund Says:

    She’s telling you that the difference between “this claim is too outrageous” and “this claim is exciting, if true” is not always obvious.
    The reason the difference is not always obvious is that these two things are not mutually exclusive. “Exciting if true, but not supported by the data presented” is not a particularly rare description of claims made in papers, even in the society-level journals I am familiar with, and like bsci above, I have recommended rejection of papers where the data presented do not support the central claim of the paper (asking for more simulations may be reasonable, but additional experiments are usually harder, and sometimes impossible, to do in my field than in bsci’s). It’s not always reasonable to insist that the editor make that call; that’s what reviewers are for.
    Some editor triage is appropriate. Even if you really have found a cure for breast cancer, your paper does not belong in Physical Review Letters, any more than a paper that demonstrates a self-consistent theory of quantum gravity deserves consideration at Cell. But when a high-impact journal regularly triages out papers that don’t meet the editor’s threshold for excitement, you shouldn’t be surprised to see authors pushing that boundary. DrugMonkey is basically right that the practice is not just asking for trouble, but sending an engraved invitation.

  8. Pinko Punko Says:

    Some percentage of triage is, of course, because the 28-year-old editor does not know the field beyond a likely shallow, name-heavy, GlamourMag-influenced depth.

  9. qaz Says:

    Has no one ever had a reviewer suggest just the right experiment? Or just the right analysis?
    I’ve seen this go well from both sides of the issue. For example, for one paper I reviewed, I suggested an analysis using a new technique that the authors were likely to be unfamiliar with, but which absolutely proved their claims. The authors contacted the person (not me) who had developed the technique and learned to do it (in about a month or so), which made it a much better paper.
    Similarly, I’ve been in a case where the reviewers did not feel that our claims were justified by our analyses. So they suggested new analyses. (In our most recent case, we felt the suggested analyses were not doable with our data, but we were able to find a different analysis that satisfied the reviewers and the paper was accepted. I think the paper is much stronger for it.) In fact, I usually find that the paper is much stronger with the additional analyses.
    We don’t want to flat out reject papers when we think there’s a potential to fix them easily. We should be giving authors a choice. Authors need to understand that reviewers are saying “This claim is not proven. Either the authors need to remove it or to provide proof of it. Some ways to prove it might be…” In fact, I would say that authors are ALWAYS given a choice. If you feel that the extra experiments are too onerous, send it to a different journal. This is really only a problem with scientists chasing GlamourMags. You can always withdraw your paper and send it to another journal that doesn’t demand the extra experiments.
    This complaint is silly. It is only made by scientists who feel it is their *right* to be published in a GlamourMag. If you can’t get past the review, or you feel what the reviewers are asking for is too onerous, send it somewhere else.
    If you can’t find anywhere to publish it, put it on your blog. It’s not peer reviewed.

  10. Alex Says:

    Cross-posted from the other blog:
    I wonder how much of this is driven by supplemental online materials. In an earlier era, the papers in Science and Nature could only be a few pages and that was it. Supplemental materials did not exist, so if one started to ask for umpteen million additional things there would be nowhere to put it.
    I’m quite happy that the most prestigious society-level journal in optics (Optics Letters) has a completely inviolable 3 page limit, and no supplemental online materials, except for animations. No supplemental figures, no supplemental methods, no supplemental derivations in appendices, none of that. You say what you have to say in 3 pages, and if it’s important enough, and done well, they publish it. If the result needs follow-up, there are other journals that you can publish in, including journals published by that society.

  11. Isis the Scientist Says:

    It is only made by scientists who feel it is their *right* to be published in a GlamourMag.

    Except that this is not true. It is a problem among the society journals as well. So much of a problem that the editor of our major journal has made a public commitment to stop it from happening.

  12. dsks Says:

    “If you can’t find anywhere to publish it, put it on your blog. It’s not peer reviewed.”
    Until a peer posts a comment.
    Frankly, I’m all for it. We’re practically there already anyway. The last time I published in JBC I practically had to format the whole fucking manuscript for them. I might as well have just slapped the paper on my blog and sent an email to my field cronies inviting them to come and rip the shit out of it directly. Been a whole lot cheaper, that’s for sure, and frankly a two-way conversation between authors and >3 competitors would beat the bells out of the current stodgy peer review process.
    With any luck, sooner rather than later, some big shots are going to take the first step towards ripping the bottom out of this whole government-subsidized sector of the vanity press industry.


  14. The last time I published in JBC I practically had to format the whole fucking manuscript for them. I might as well have just slapped the paper on my blog and sent an email to my field cronies inviting them to come and rip the shit out of it directly.

    Excellent idea! I’m sure tenure and promotion committees will be equally impressed by your self-published blog manuscripts as those you have published at JBC.

  15. Joe Says:

    When I was a grad student an editor of Science came and gave a talk at my univ. My advisor had just tried and failed to get an article in Cell. When discussing this with her, he asked her to send the article to Science, but he also said biomedical scientists were much harder on their peers in the review process than physical/other types of scientists. Do you think that this phenomenon is universal or more associated with the biomed science community?

  16. Eric Lund Says:

    Frankly, I’m all for it. We’re practically there already anyway.
    The physics world has been there for close to a decade with the arXiv preprint server. For historical reasons physicists have been relatively open about sharing preprints of papers under review, and once the World Wide Web (invented by physicists) was well established, it made sense to put the preprint-sharing network online – that way, well-connected professors don’t have an unfair advantage over nobodies who aren’t on preprint mailing lists. Some papers on the arXiv eventually do get published, including in Physical Review Letters (which is considered a GlamourPub if you are in the physics business). Many don’t.
    The last time I published in JBC I practically had to format the whole fucking manuscript for them.
    Also not news in much of the physical science world. Many of the journals I deal with have a preference (and a few have a requirement) for manuscripts to be prepared in LaTeX, a markup language designed for typesetting manuscripts. You download the style files for the journal in question, put a few appropriate lines into your manuscript file, and away you go. It’s certainly an improvement over the 1970s and 1980s, when many of the published papers (and I’m talking camera-ready copy, not just submissions) in my primary society’s journals were prepared on typewriters. Whether it’s because I’m older now (and this practice had largely ended by the time I started my Ph.D. research) or because my eyes have grown accustomed to the more reader-friendly typefaces used in modern publications, I now find many of these typewritten papers painful to read.
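    For readers who have never touched it, the journal-style-file workflow described above really does amount to a few lines of boilerplate. A minimal sketch, assuming the REVTeX class that the American Physical Society distributes for its journals (the class options, title, and author below are illustrative placeholders, not anything from a real submission):

```latex
% Minimal sketch of a journal-formatted manuscript, assuming the
% REVTeX 4.2 class distributed by the American Physical Society.
% Class options and all metadata below are illustrative placeholders.
\documentclass[aps,prl,reprint]{revtex4-2}

\begin{document}

\title{An illustrative manuscript title}
\author{A. Author}
\affiliation{Example University}

\maketitle

% The journal's class file handles the typeface, two-column layout,
% and reference formatting; the author supplies only the content.
Body text goes here.

\end{document}
```

    Other publishers work the same way with their own class and style files; only the specifics differ.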

  18. sharon Says:

    Here are some additional solutions to this problem that will speed up publication. http://www.americanbiotechnologist.com/blog/peer-reviewed-science/
