Ahh, reviewers

December 13, 2012

One thing that cracks me up about manuscript review is the reviewer who imagines that there is something really cool in your data that you are just hiding from the light of day.

This is usually expressed as a demand that you “consider” a particular analysis of your data. In my field, behavioral pharmacology, it can run to the extent of wanting you to parse individual subjects’ responses. It may be a desire that you approach your statistical analysis differently (changing the number of factors included in your ANOVA), a suggestion that you group your data differently (if you have extended timecourses, such as sequential drug self-administration sessions, you might summarize the timecourse in some way), or perhaps a desire to see a bunch of training, baseline or validation behavior that…

….well, what?

Many of the cases I’ve seen show the reviewer failing to explain exactly what s/he suspects would be revealed by this new analysis or data presentation. Sometimes you can infer that they are predicting that something surely must be there in your data and that for some reason you are too stupid to see it. Or that you are pursuing some agenda and (again, this is usually only a hint) covering up the “real” effect.

Dudes! Don’t you think that we work our data over with a fine-toothed comb, looking for the cool stuff it is telling us? Really? Like we didn’t already think of that brilliant analysis you’ve come up with?

Didya ever think we’ve failed to say anything about it because 1) the design just wasn’t up to the task of properly evaluating some random sub-group hypothesis, 2) the data just don’t support it, sorry, or 3) yeah man, I know how to validate a damn behavioral assay, and you know what? Nobody else wants to read that boring stuff.

And finally… my friends, the stats rules bind you just as much as they do us. You know? I mean, think about it. If there is some part of a subanalysis, or some series of different inferential techniques, that you want to see deployed, you need to think about whether this particular design is powered to do it. Right? If we reported “well, we just did this ANOVA, then that ANOVA… then we transformed the data and did some other thing… well, maybe a series of one-ways is the way to go… hmm, say, how about t-tests? wait, wait, here’s this individual subject analysis!”, like your comments seem to imply we should now do… yeah, that’s not going to go over well with most reviewers.
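Just to put a number on that point, here’s a quick illustrative simulation (Python; the data and the menu of tests are invented for this sketch, not taken from any real study). Both groups are drawn from the same distribution, so there is no effect to find, yet shopping among several looks at the same data pushes the false-positive rate well past the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000
hits = 0

for _ in range(n_experiments):
    # Two groups from the SAME distribution: any "effect" is pure noise.
    a = rng.normal(0.0, 1.0, size=20)
    b = rng.normal(0.0, 1.0, size=20)

    # The "just try another analysis" menu: several looks at the same data.
    p_values = [
        stats.ttest_ind(a, b).pvalue,            # plain two-sample t-test
        stats.mannwhitneyu(a, b).pvalue,         # nonparametric alternative
        stats.ttest_ind(a[:10], b[:10]).pvalue,  # ad hoc "subgroup" analysis
    ]

    # Declare a discovery if ANY of the looks comes up p < .05.
    if min(p_values) < 0.05:
        hits += 1

print("nominal alpha:", 0.05)
print("realized false-positive rate:", hits / n_experiments)
```

The exact rate depends on how correlated the different looks are, but it only climbs as the menu of analyses grows.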

So why do some reviewers seem to forget all of this when they are wildly speculating that there must be some effect in our data that we’ve not reported?

19 Responses to “Ahh, reviewers”

  1. Spiny Norman Says:

    lulz.

  2. Mike Says:

    Sometimes you can infer that they are predicting that something surely must be there in your data and that for some reason you are too stupid to see it. Or that you are pursuing some agenda and (again, this is usually only a hint) covering up the “real” effect.

    I don’t know about your field, but often when reviewers do this they are implying that you’re holding back data for the next paper when it would be better placed here (even if that means the next paper will now be a mishmosh of boringness).

  3. drugmonkey Says:

    No, I mean even without asking for more experiments. Mostly it’s about presenting a different look at, or more detail from, the existing experiment.

  4. Spiny Norman Says:

    In other words, they want you to do more post-hoc jiggery-pokery, thereby increasing the odds of generating a false discovery (type I) error. Always thinkin’, those referees. Always thinkin’.

  5. Spiny Norman Says:

    Of course, if the referees are *truly* evil, they know you’ll be a good little statistician, and correct for the additional multiple comparisons — and by so doing, lose significance on the effects you’ve already reported!
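    A toy arithmetic sketch of that trap (the p-values here are invented for illustration, not from anyone’s actual paper):

    ```python
    # Three reported effects, all "significant" at the per-test alpha of 0.05.
    p_values = [0.030, 0.040, 0.045]
    alpha = 0.05

    # The reviewer's two extra post-hoc comparisons bring the family to five
    # tests; a Bonferroni correction divides alpha across all of them.
    m_total = len(p_values) + 2
    corrected_alpha = alpha / m_total  # 0.05 / 5 = 0.01

    for p in p_values:
        print(p, "significant before:", p < alpha,
              "| after correction:", p < corrected_alpha)
    # Every effect clears 0.05 uncorrected; none survives the 0.01 threshold.
    ```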

  6. kevin. Says:

    Maybe they just think you’re dumb. Or they’re smarter than you.

  7. zb Says:

    Well, the answer is that these data sets should be public; then they can run all the other ANOVAs with as many factors as they want. Hopefully, they correct for the multiple comparisons and the too-small group sizes.
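    On the group-size point, a rough power simulation (Python; the moderate effect size and group sizes are assumed purely for illustration) shows how little a test run on a small subgroup can detect:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    effect = 0.5  # an assumed moderate standardized effect

    def power(n, sims=2000):
        # Fraction of simulated experiments in which a real effect of this
        # size reaches p < .05 with n subjects per group.
        hits = 0
        for _ in range(sims):
            a = rng.normal(0.0, 1.0, size=n)
            b = rng.normal(effect, 1.0, size=n)
            if stats.ttest_ind(a, b).pvalue < 0.05:
                hits += 1
        return hits / sims

    print("power with n = 40 per group:", power(40))  # roughly 0.6
    print("power with n = 8 per group: ", power(8))   # roughly 0.15
    ```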

  8. CPP Says:

    Dude, you only get reviews like this because you publish your shitte in dump journals.

  9. drugmonkey Says:

    Possibly

  10. CPP Says:

    Definitely. Reviewers for real journals don’t concern themselves with narishkeit like “type I error”, whatever the fucke thatte gibberish is. They focus on real science.

  11. Pinko Punko Says:

    I don’t buy any of the nefarious stuff. I just think they are treating your paper like a Research-In-Progress-type presentation, where someone just says some stuff (admittedly, sometimes it is the person who is maybe trying to sound smart). Papers don’t necessarily present “oh, we tried all this stuff and it was all negative”. Let me ask: are the reviews you get like this generally positive or generally negative? I would guess generally positive.

  12. Virgil Says:

    You’re definitely right about hidden agendas. We recently received this classic… “Publishing these data would not be good for the field”. This roughly translates as “publishing these data would not be good because it might screw up my (the reviewer’s) chances of getting a grant”.

  13. Grumble Says:

    CPP, if you don’t know or (worse) don’t care what a type I error is, you are an ass who should not be allowed to do science. And if you’re getting away with publishing in highfalutin’ journals, that’s even worse.

    Real Science, indeed. There are profound issues involved in determining whether an observed effect is likely to be real. Real scientists care about and understand what their statistical tests mean for that determination.

  14. Dave Says:

    …don’t concern themselves with narishkeit like “type I error”, whatever the fucke thatte gibberish is. They focus on real science.

    hahahahaha!! Love it.

  15. miko Says:

    All reviewers want your paper to be about what they’re interested in rather than what you’re interested in. Understandably. Editoring is a job.

  16. drugmonkey Says:

    reviewers want your paper to be about what they’re interested in rather than what you’re interested in

    BINGO!

  17. Laurent Says:

    reviewers want your paper to be about what they’re interested in rather than what you’re interested in

    Except of course when you’re about to scoop them.

  18. qaz Says:

    DM – I’m really curious what you think the point of peer review is. Given that you don’t believe that reviewers should point out any potential analyses, any potential problems, any potential control experiments, any potential additional experiments, or anything at all.

    The assumption that you have thought of everything about your experiment and your own work is an arrogance that is inconsistent with the scientific process.

    If you have already thought of it, and already did the analysis, then the response to reviews is trivial. It says “great idea. we tried that. it came out this way. we have [added it to/left it out of] the paper for the following reasons.” If you haven’t thought of it and it’s easy to do, then the response to reviews is “great idea” and to try it. If you haven’t thought of it and it’s hard to do, then the response to reviews is “great idea, but that’s impossible/hard-to-do for the following reasons”. “Damn reviewers, stop thinking about my work” is simply not the right response.

  19. drugmonkey Says:

    I must have missed where I said peer reviewers of manuscripts shouldn’t do all those things qaz.
