Ahh, reviewers

December 13, 2012

One thing that cracks me up about manuscript review is the reviewer who imagines that there is something really cool in your data that you are just hiding from the light of day.

This is usually expressed as a demand that you “consider” a particular analysis of your data. In my field, behavioral pharmacology, it can run to the extent of wanting to parse individual subjects’ responses. It may be a desire that you approach your statistical analysis differently, say by changing the number of factors included in your ANOVA. It may be a suggestion that you group your data differently (if you have extended timecourses, such as sequential drug self-administration sessions, you might summarize the timecourse in some way). Or perhaps it is a desire to see a bunch of training, baseline or validation behavior that…

….well, what?

In many of the cases I’ve seen, the reviewer fails to explain exactly what s/he suspects would be revealed by this new analysis or data presentation. Sometimes you can infer that they are predicting something surely must be there in your data and that for some reason you are too stupid to see it. Or that you are pursuing some agenda and (again, this is usually only a hint) covering up the “real” effect.

Dudes! Don’t you think that we work our data over with a fine-toothed comb, looking for cool stuff that it is telling us? Really? Like we didn’t already think of that brilliant analysis you’ve come up with?

Didya ever think we’ve failed to say anything about it because 1) the design just wasn’t up to the task of properly evaluating some random sub-group hypothesis (see the power sketch below), 2) the data just don’t support it, sorry, or 3) yeah man, I know how to validate a damn behavioral assay and you know what? Nobody else wants to read that boring stuff.
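To put a number on point 1, here’s a minimal Python sketch, with entirely hypothetical numbers chosen just for illustration, of what happens to statistical power when a sub-group request quietly halves your n. The full-group comparison is decently powered for a large effect; the sub-group one is not.

```python
import numpy as np
from scipy import stats

# Hypothetical design: two arms, n = 12 subjects per arm, true drug
# effect of Cohen's d = 1.0. A "parse it by sub-group" request
# effectively halves the n available for each comparison.
rng = np.random.default_rng(0)

def simulated_power(n_per_group, effect_size, n_sims=5000, alpha=0.05):
    """Monte Carlo power of a two-sample t-test at this group size."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(treated, control).pvalue < alpha:
            hits += 1
    return hits / n_sims

print(f"full groups (n=12 per arm): power ~ {simulated_power(12, 1.0):.2f}")
print(f"sub-groups  (n= 6 per arm): power ~ {simulated_power(6, 1.0):.2f}")
```

In this toy setup, power drops from roughly two-thirds to under a third. The design that could answer the planned question simply cannot answer the unplanned one.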

And finally… my friends, the stats rules bind you just as much as they do us. You know? I mean, think about it. If there is some sub-analysis or series of different inferential techniques that you want to see deployed, you need to think about whether this particular design is powered to do it. Right? Because if we reported “well, we just did this ANOVA, then that ANOVA… then we transformed the data and did some other thing… well, maybe a series of one-ways is the way to go… hmm, say, how about t-tests? Wait, wait, here’s this individual subject analysis!”, like your comments seem to be implying we should now do… yeah, that’s not going to go over well with most reviewers. Every test you tack on buys another chance at a false positive.
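Here’s that point as a minimal simulation sketch (Python, hypothetical setup: pure noise, no real drug effect anywhere). Run the whole wished-for battery of tests on null data and watch how often at least one of them comes up “significant,” even though each individual test holds its nominal alpha of .05.

```python
import numpy as np
from scipy import stats

# Pure null data: no drug effect at all. Run a battery of analyses
# and ask how often at least ONE test hits p < .05 anyway.
rng = np.random.default_rng(1)

def one_fishing_expedition(n=10, timepoints=6, alpha=0.05):
    control = rng.normal(0.0, 1.0, (n, timepoints))
    treated = rng.normal(0.0, 1.0, (n, timepoints))
    pvals = []
    # an overall test on session means (stand-in for "this ANOVA")
    pvals.append(stats.ttest_ind(treated.mean(axis=1),
                                 control.mean(axis=1)).pvalue)
    # "a series of one-ways / t-tests": one per timepoint
    for t in range(timepoints):
        pvals.append(stats.ttest_ind(treated[:, t], control[:, t]).pvalue)
    # "some other thing": a nonparametric test at every timepoint
    for t in range(timepoints):
        pvals.append(stats.mannwhitneyu(treated[:, t], control[:, t],
                                        alternative="two-sided").pvalue)
    return min(pvals) < alpha  # did ANY test "find" an effect?

rate = np.mean([one_fishing_expedition() for _ in range(2000)])
print(f"family-wise false-positive rate: ~{rate:.2f}")  # far above .05
```

Each test is honest on its own; the family of them is not. That rule binds reviewers and authors alike.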

So why do some reviewers seem to forget all of this when they are wildly speculating that there must be some effect in our data that we’ve not reported?