Critique writing: Manuscript review
April 2, 2015
challdreams wrote on rejection.
These things may or may not be part of your personal life, where rejection rears its head at times and you are left to deal with the fallout. And that type of rejection is seldom based on “your writing” but rather on “you as a person” or “things you did”, which is a little more personal and a little harder to ‘accept and get back on the horse’.
It made me think of how I try to write criticisms of manuscripts that focus on the document in front of me. The data provided and the interpretations advanced. The hypothesis framed.
I try to write criticisms about whether the data as presented do or do not support the claims. The claims as advanced by the authors. This keeps me away, I hope, from saying the authors are wrong, that their experimental skills are deficient, or that they are stupid.
It can all be a matter of phrasing, because often the authors hear “you are stupid” when this is not at all what the reviewer thinks she is saying.
April 2, 2015 at 2:37 pm
I wish I could credit the person who posted it to Twitter, but this post on Criticism and Ineffective Feedback resonated strongly with me. I critique for a living, with the goal of teaching. When I first started doing it, I softened my language with “I think” and “you could consider”, and one of my senior colleagues recommended that my comments be more direct.
But I don’t want to tell people, “You are stupid.” It doesn’t matter if I don’t use those words if, as DM points out, they hear that message. In my experience, criticism comes across more constructively both when couched as the “Do this” (as opposed to “Don’t do that”) advice described in the link above, and when I avoid forms of ‘to be’. Avoiding ‘to be’ rules out sentences like “Your conclusions are not supported by your data, because your statistics are wrong,” and forces more relational, explanatory language: “I disagree that the data support the conclusions drawn by the authors, because the chosen approach to statistical analysis does not adequately discriminate between the parameters.”
April 2, 2015 at 3:05 pm
@IGrrrl: As Ben Franklin wrote in his autobiography:
“I made it a rule to forbear all direct contradictions to the sentiments of others, and all positive assertion of my own. I even forbade myself the use of every word or expression in the language that imported a fixed opinion, such as ‘certainly’, ‘undoubtedly’, etc. I adopted instead of them ‘I conceive’, ‘I apprehend’, or ‘I imagine’ a thing to be so or so; or ‘so it appears to me at present’.
“When another asserted something that I thought an error, I denied myself the pleasure of contradicting him abruptly, and of showing him immediately some absurdity in his proposition. In answering I began by observing that in certain cases or circumstances his opinion would be right, but in the present case there appeared or seemed to me some difference, etc. I soon found the advantage of this change in my manner; the conversations I engaged in went on more pleasantly. The modest way in which I proposed my opinions procured them a readier reception and less contradiction. I had less mortification when I was found to be in the wrong, and I more easily prevailed with others to give up their mistakes and join with me when I happened to be in the right.”
April 2, 2015 at 4:00 pm
Yep, nothing new. But we’re not all innately good at it, and using the E-prime approach can help.
April 2, 2015 at 4:18 pm
Avoiding ‘to be’ rules out sentences like “Your conclusions are not supported by your data, because your statistics are wrong,” and forces more relational, explanatory language: “I disagree that the data support the conclusions drawn by the authors, because the chosen approach to statistical analysis does not adequately discriminate between the parameters.”
I find it hard to believe that grown-up scientists who have been through the peer review process at least a few times give a flying fucke whether their paper reviewers use language like the former versus like the latter.
Anyway, the NIH grant review business can be much more painful, as there is an explicit review criterion (investigator) in which reviewers are required to provide a critique of the PERSON AS A SCIENTIST. I’ve never gotten anything harsh in this criterion on my grants, but I’ve seen (and sometimes written) investigator critiques that say, “Based on your prior poor performance, there is reason to doubt your likelihood of quality future performance.” That’s gotta hurt a lot more than some fuckebagge paper reviewers telling you that your statistics are wrong.
April 2, 2015 at 4:34 pm
So the Heddleston dealio boils down to saying, “keep your elbow up”, instead of “don’t drop your elbow”? I can see how this gimmick makes some sense when you’re dealing with novices who don’t know what the alternative to dropping your elbow might be, or how to ride a motorcycle. But once you’re dealing with experts who have a huge amount of experience with a task, understand all of its parameters, and just need precise feedback from someone observing their performance, these linguistic differences are meaningless. When I say to a graduate student or post-doc, “don’t use ten separate t-tests to compare pairs of experimental conditions in a multi-condition experiment”, they know exactly what that means, and exactly what to do to get their statistics correct. Well, actually, you *can* use multiple t-tests, but then you have to apply an experiment-wide correction, like Benjamini-Hochberg, which is my personal favorite. But everyone in my lab knows that, and by telling them what not to do, I give them the freedom to make their own choices and use their own creative judgment in getting from incorrect to correct.
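To make the experiment-wide correction concrete, here is a minimal sketch in Python. The data and condition names are made up for illustration; it assumes numpy, scipy, and statsmodels are available, with multipletests(method="fdr_bh") performing the Benjamini-Hochberg adjustment:

```python
# Illustrative only: ten pairwise t-tests across five synthetic conditions,
# corrected experiment-wide with Benjamini-Hochberg (controls the FDR).
from itertools import combinations
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Five hypothetical conditions, n = 10 each; C and D have shifted means.
conditions = {name: rng.normal(loc, 1.0, size=10)
              for name, loc in [("A", 0.0), ("B", 0.0), ("C", 0.5),
                                ("D", 1.0), ("E", 0.0)]}

pairs = list(combinations(conditions, 2))  # ten pairwise comparisons
pvals = [ttest_ind(conditions[a], conditions[b]).pvalue for a, b in pairs]

# Adjust all ten p-values together, not one test at a time.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for (a, b), p, r in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted p = {p:.3f}{' *' if r else ''}")
```

The point stands either way: the trainee still chooses the test and the correction; the “don’t” just rules out the uncorrected version.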
The more I think about this, the more I think that this “telling people what to do instead of what not to do” is actually terrible in a creative environment. Maybe it’s great in a context where the parameters of the task are precisely established and there is a defined correct best way to perform the task, so you just need to effectively convey those best parameters. “Do this! Then do that! BOOM!” But where you are trying to get people to learn how to explore parameter space and come up with creative solutions to problems, isn’t it better to point out the problems with what they are doing, while giving them the space to use their own creativity to figure out how to move past them? Even in athletics, there is room for creativity. Some pitchers have unorthodox mechanics, yet pitch very effectively. They know “don’t let the batter see what pitch is coming from your motion”, and figure out new ways of achieving that, as opposed to being told “do this exact motion, and the batter won’t see what pitch is coming”.
April 2, 2015 at 5:22 pm
“I feel,” “I think,” etc. are implicit because you, the reviewer, are writing the critique. But there’s no reason to use “I,” “you,” or (my favorite) “This reviewer” – who has two thumbs and critiques papers? This reviewer.
Simply write
The data do not support the conclusions. The statistics are not appropriate for the comparisons. The controls do not exclude X interpretation. The mouse model is inappropriate because of Y phenotype.
But in general, don’t be an ass.
April 2, 2015 at 5:22 pm
If you read the whole post, she also discusses using questions, which can really facilitate thinking in a creative environment, and also serve as a way to point out problems. I absolutely will tell people “Don’t do that” when warranted. But when I watch my language, I communicate less lazily, more clearly, and clients hear me better. My job is different from yours, in that when I review a grant application, I am not a peer reviewer in the same sense. My job is to help people write better applications and clarify the argument for their project.
I have been in lab environments with toxic levels of criticism, to the point where it inhibits creativity the way the “Don’t do that” inhibited the water polo kids. That experience also affects my perspective here. It helps to think about the desired end point of your communication. In the case of that lab, the purpose of the criticism boiled down to, “Look how much smarter than you I am.” It didn’t help the science much.
April 2, 2015 at 5:30 pm
I think the reason to give constructive feedback rather than negative feedback is because for so many things, the parameter space of WRONG greatly dwarfs the parameter space of RIGHT. As a result, constructive feedback usually contains more information: “Don’t bash the electrode into the filament” is not nearly as informative as, “Clamp the electrode firmly in the puller bar before sliding it through so that it doesn’t hit the filament.”
The other thing is target fixation. If I’m riding my bike and I see an obstacle, and I focus on that obstacle, I invariably ride straight into it. (This may be more of a commentary on my motor planning skills, but bear with me.) If instead I focus on the route around the obstacle, I usually manage to miss the dead squirrel or what have you. Again, if you say “don’t drop your elbow”, it doesn’t create a path to what you should do; it just makes you think more about dropping your elbow.
April 2, 2015 at 6:18 pm
IGrrrl, is it not enough to simply keep the language professional and not overly blunt? I don’t mind harsh critiques as long as they’re not overtly nasty on a personal level, and even then, not because it hurts emotionally, but because it shows bias, which is hard to dispel with further experiments.
I once had a reviewer tell me that my experimental design showed that I didn’t understand molecular biology, and recommended that my paper not be published in “this journal or any journal ever.”
Oddly, though, this worked in our favor, because the overt ad hominem attacks allowed us to convince the editor this individual clearly had a bias, and we got a new reviewer.
For grants, though (which is the subject of the original posts), as PhysioProf notes, the critiques hit a bit closer to home. Journal reviews never fazed me, but when I got a criticism saying my publication record was skimpy, I got a surge of anger which was probably not conducive to reflection on how I might address this fact.
April 2, 2015 at 6:33 pm
I think the reason to give constructive feedback rather than negative feedback is because for so many things, the parameter space of WRONG greatly dwarfs the parameter space of RIGHT. As a result, constructive feedback usually contains more information: “Don’t bash the electrode into the filament” is not nearly as informative as, “Clamp the electrode firmly in the puller bar before sliding it through so that it doesn’t hit the filament.”
Yeah, exactly my point. This is the kind of task that has a completely determined parameter space and an unequivocal single correct way of doing it: you need to clamp the electrode firmly enough so that it doesn’t wiggle, but not so tightly that you smash the glass.
But what about much more subjective, complex, and creative tasks, like giving a good presentation? There is no one right way to give a presentation, but there are certainly things that you shouldn’t do. How you avoid doing those bad things allows for a ton of wiggle room. For example, you should never have content on slides that you don’t explain to the audience. To avoid this, you can either (1) take content off or (2) explain the content. Deciding which to do requires making a judgment about the relative importance of the content to the narrative arc of the presentation. If I always tell my trainees, “Take this off the slide, and explain that”, then they never learn to develop their own judgment as to what is essential information and what should go. So I always say, “You should not have things on your slides that you don’t explain”, and let them figure it out (obviously, providing iterative feedback on later versions).
Or what about designing controls for an experiment? It is much better training to say to a student or post-doc, “Your controls do not account for X possible confound”, than to say, “You need to include this particular control to account for X possible confound”. Essentially what this philosophy of “constructive feedback” does is completely rule out any sort of Socratic dialogue in which trainees are gradually guided to figure shit out on their own. Just telling people what to do doesn’t fucken work if you are trying to train them to be creative and self-directed.
April 2, 2015 at 7:18 pm
While I like PP’s ideas about leaving room for exploration and possible solutions, there is a really bad failure mode: a PI who provides either vague or unpredictable negative feedback. The end result is students who wander around on projects, mostly hoping not to aggravate the PI. I know some scientists who are successful despite this, but they tend to depress students rather than train them.
Within the specific context of manuscript review, I have actually tried to train myself away from the suggested approach, toward being much firmer and more direct. When I say, “I believe that X may not be an appropriate solution, because it does not really accomplish Y,” the authors come back with weasel comments rather than fixing X. This wastes everyone’s time if what I really mean is “I won’t accept this paper unless you fix X so it actually works.”
April 2, 2015 at 11:26 pm
Fair point. I was thinking of this in the context of manuscript review. When I’m giving feedback on a manuscript, formally or informally, I try not to shoot something down unless I can also give some direction for a fix… e.g. not “paragraph is unclear” but “paragraph is unclear and here’s an idea for restructuring.” Or “this experiment doesn’t control for X, but that could be solved by redoing it with drug Z.”
That’s a specific situation, though, where (unless the author is your trainee) you will only see the manuscript once and can’t have a Socratic dialogue about anything. If we’re talking about lab life, for sure it is better to train the person to reach their own solutions.
April 2, 2015 at 11:29 pm
Also, I totally thought Benjamini-Hochberg was your made-up parodic version of Kruskal-Wallis or Kolmogorov-Smirnov until I googled. I should’ve known; Benjamini-Hochberg isn’t nearly as funny as the other two.
April 3, 2015 at 1:13 am
Benjamini-Hochberg is for tools.
April 3, 2015 at 6:38 am
B-H FTMFW!!!11ELEBNTY11!!
April 3, 2015 at 9:38 am
Next time I throw a good party, the drinks will be theme-named after obscure statistical tests. Except, of course, for the one drink we’ll let the grad students have: the Student’s.
April 3, 2015 at 9:55 am
I think that this “telling people what to do instead of what not to do” is actually terrible in a creative environment.
In a grant application review situation this is absolutely correct. The reviewer is not there to fix the application for the applicant. Nor are the reviewers there to demand that science be conducted their way.
Even though one has a tendency in manuscript review to want to be very pointed, to make sure the authors get the point, I think this still holds true. Maybe the authors will come up with some creative new way to alleviate your concerns. So let them choose how to respond to the deficit in the manuscript.
April 3, 2015 at 10:06 am
@DJMH: That drink can only be Guinness.
April 3, 2015 at 12:31 pm
@Davis, TY! Also it is pretty obvious what the main ingredient in a Kolmogorov-Smirnov has to be. The K-W seems more open to interpretation.
April 3, 2015 at 1:14 pm
I don’t just want to get drunk with Holm and Sidak. I want to go all the way and have a statistical 3-way with them.
April 3, 2015 at 4:17 pm
Just make sure you aren’t cornered by Bonferroni. I heard he leans a little too far to the right, and that’s no fun at a party.
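For anyone curious just how far right he leans, here’s a quick sketch with made-up p-values (statsmodels assumed again) comparing the two adjustments:

```python
# Illustrative only: the same hypothetical p-values adjusted two ways.
# Bonferroni multiplies each p by the number of tests; Benjamini-Hochberg
# scales by rank, so it typically rejects more at the same alpha.
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.008, 0.012, 0.030, 0.045, 0.200]

rej_bonf, p_bonf, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
rej_bh, p_bh, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

for p, b, h in zip(pvals, p_bonf, p_bh):
    print(f"raw {p:.3f} -> Bonferroni {b:.3f}, B-H {h:.3f}")
print("rejected:", int(rej_bonf.sum()), "(Bonferroni) vs", int(rej_bh.sum()), "(B-H)")
```

On these numbers, Bonferroni rejects two of the six tests where Benjamini-Hochberg rejects four. Conservative indeed.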
April 6, 2015 at 8:26 am
http://chall-dreams.blogspot.com/2015/04/getting-attention-thank-you.html