Peer Review: The power of competing biases, part eleventy

March 10, 2009

There is a post over at Blue Lab Coats that reminded me that I’ve lamely neglected to pick up on a tip from the incomparable writedit (sorry dude). drdrA notes:

I found this article late last week, at Inside Higher Ed… entitled “The Black-Box of Peer Review”, about a new book by Michele Lamont entitled “How Professors Think: Inside the Curious World of Academic Judgment”. I’m looking forward to reading this book… when I can get my hands on it and have a bit of spare time (insert big laugh here).

Now, it is a little disappointing that the book’s author did not manage to get into any NIH study sections when doing her research. Still, I imagine the principles generalize very well, so the only thing lacking here is convincing testament to that assumption. Would have been nice. Now, my reaction was mostly PhysioProf’s response on this one, but doubledoc did identify a point of interest to me in her overview of the author’s points on what has been found lacking in peer review:

Favoritism for work similar to one’s own… or for some personal interest (other than direct personal ties).


That’s the short version. The Inside Higher Ed piece identifies this flaw as:

The Power of Personal and Professional Interests: Lamont writes that most reviewers would never admit to being unfair and would never engage in explicit favoritism based on personal ties, or an applicant being a student of a friend or colleague. But when it comes to an affinity for work that is similar to their own or that reflects personal interests having nothing to do with scholarship, many applicants benefit in a significant way. In a passage that may be one of the most damning of the book, Lamont writes: “[A]n anthropologist explains her support for a proposal on songbirds by noting that she had just come back from Tucson, where she had been charmed by songbirds.
-snip-
… Yet another panelist ties her opposition to a proposal on Viagra to the fact that she is a lesbian: ‘I will be very candid here, this is one place where I said, OK, in the way I live my life and my practices … I’m so sick of hearing about Viagra. … Just this focus on men, whereas women, you know, birth control is a big problem in our country. So I think that’s what made me cranky.’ Apparently, equating ‘what looks most like you’ with ‘excellence’ is so reflexive as to go unnoticed by some.
[emphasis added]

The personal biases of the reviewer in question are always going to be a factor when human decisions are involved. Always. And pretending otherwise is acutely harmful to getting good, unbiased review!
Humans are biased in a variety of ways when it comes to making decisions. There is an extensive decision-making literature which looks at human behavior when there is an objective “best decision”, such as how to optimize winnings when gambling under transparent pay-out rules. There is a wealth of literature on false perceptions, again where there are objective ways to determine what the percept “should be”. So we know all this even before we get into decision making where the “right” decision is not obvious and we have to make comparisons to statistical expectations (say, when looking at hiring patterns for women, ethnic minorities or whatnot).
And then there is the now-famous implicit-association bias literature. This is the work in which small differences in reaction time are observed in individuals who may not think of themselves as biased [another example]. The test is set up so that the most parsimonious explanation is that the test subject finds particular associations discordant and others congruent. For example, subjects are trained to respond quickly with the left finger to positive words and with the right finger to negative words. If you then introduce an interleaved choice task in which subjects are asked to respond to black or white faces on one or the other key, you find a disconnect. Some individuals respond faster and more accurately when the pairing is positive/white, negative/black than when it is negative/white, positive/black. These measures are validated against other scales of white/black bias.
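For the curious, here is a toy sketch of that core comparison in Python. It is not the published IAT scoring algorithm (which pools variability and handles error trials); it simply contrasts mean reaction times between the congruent and incongruent pairing blocks, using invented numbers.

```python
import random
from statistics import mean

random.seed(1)

# Simulated reaction times (ms) for one subject. All numbers are invented
# for illustration; they are not drawn from any real IAT dataset.
# Congruent block: the pairing the subject finds easy (for a biased
# subject, e.g. positive/white on one key, negative/black on the other).
congruent = [random.gauss(650, 80) for _ in range(40)]
# Incongruent block: the reversed pairing, which a subject carrying the
# implicit association answers more slowly.
incongruent = [random.gauss(730, 80) for _ in range(40)]

# Crude bias index: how much slower the incongruent pairing is on average.
effect_ms = mean(incongruent) - mean(congruent)
print(f"mean congruent RT:   {mean(congruent):6.1f} ms")
print(f"mean incongruent RT: {mean(incongruent):6.1f} ms")
print(f"implicit-association effect: {effect_ms:+.1f} ms")
```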
And these implicit-association tests have been extended to many other domains so it is not just ethnicity that is at issue. It is a fundamental property of people that they are subject to bias and are not even aware that they are biased.
That was a long diversion, wasn’t it?
Back to the point. In peer review it strikes me that there are only two solutions.
First, we can pretend that we have selected our reviewers carefully to get supposedly unbiased individuals. The judgment and implicit-association literature does indeed suggest that there is a distribution of bias severity on any given type of judgment. Trouble is, how do we determine who is unbiased when it comes to grant review? We can’t. The outcome has no objective measure backstopping it, as there would be with maximizing payout in a gambling task. There is no knowable “right answer” against which we can evaluate the performance of reviewers. So whenever this comes up, we just express additional biases about who is “unbiased”. This is so freaking meta it would be funny if it didn’t have real-world consequences. The cry I hear most frequently these days is that NIH grant review panels must enroll more senior, more “broadly experienced” and “more successful” scientists to get better reviews. What a crock. All this does is select for a cohesively biased review panel.
This brings me to our second choice. Which is the best one and the one that we have employed time and again for this very purpose.
The contest of competing biases.
The other way we talk about this is ….”diversity”. If we try to be as broad as possible in the range of biases that are applied to a given decision, the odds are better that overall decision making will end up less biased.
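To see why the odds improve, consider a minimal Monte Carlo sketch (my own toy model with made-up numbers, not anything from Lamont or CSR). Each reviewer scores a grant as true merit plus a personal bias plus noise; we then compare a panel whose biases all lean the same way against one whose biases are individually just as strong but point in different directions.

```python
import random
from statistics import mean

random.seed(42)

def panel_error(biases, n_grants=1000, noise_sd=1.0):
    """Mean absolute error of the panel-average score vs. true merit."""
    errors = []
    for _ in range(n_grants):
        merit = random.gauss(0, 1)  # the unknowable "right answer"
        scores = [merit + bias + random.gauss(0, noise_sd) for bias in biases]
        errors.append(abs(mean(scores) - merit))
    return mean(errors)

k = 9  # panel size
# Cohesive panel: everyone shares roughly the same lean.
cohesive = [1.0 + random.gauss(0, 0.1) for _ in range(k)]
# Diverse panel: biases of the same magnitude, scattered in direction.
diverse = [random.choice([-1.0, 1.0]) + random.gauss(0, 0.1) for _ in range(k)]

print(f"cohesive-panel error: {panel_error(cohesive):.2f}")
print(f"diverse-panel error:  {panel_error(diverse):.2f}")
```

The individual biases are equally severe in both panels; the difference is that offsetting directions mostly cancel in the aggregate, while a shared lean never washes out.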
This post is getting long but I wanted to end on a question. If the solution is the contest of biases (and in the case of NIH/CSR panels, I would suggest this is exactly the solution that has been struck) how should a given individual behave? Should she express the field-specific, gender-specific, geographic-specific or institution-type-specific bias she has been brought in to represent in an explicit manner? To advocate most strongly for grants that are “like her”? Or should she strive to be unbiased and let the implicit/unconscious bias do its own work?

Responses to “Peer Review: The power of competing biases, part eleventy”

  1. Dr. Strangelove Says:

    The Shame Factor
    We are all imperfect, yet we often do things correctly and ethically, because we are ashamed to indulge our laziness, weaknesses or temptations. This shame factor is missing in the peer review process. Anonymous reviewers can do practically whatever they want, and there are not only no consequences, but their actions are seldom exposed. This is, in part, our (applicants’) fault – we seldom appeal. How do I know? A couple of months ago I was so pissed with absurdities in critiques that I asked the Program Officer to initiate the appeal procedure. She HAD TO FIND OUT HOW TO DO THIS.
    – We don’t appeal, because we don’t want to compromise the chances of future submissions.
    – We don’t appeal, because the program officers advise against it.
    – We don’t appeal because this moves the application back in the queue by many months – it’s much faster to resubmit.
    And because we don’t appeal, reviewers are not ashamed to put any crap they want in the peer review process’ black box.


  2. DrugMonkey Says:

    Out of curiosity Dr. Strangelove, have you ever sat on review panels similar to the ones that are giving you fits? (The same agencies, I mean.)


  3. D. C. Sessions Says:

    You get what you measure.
    If there is no performance metric for peer review, you’re not going to reliably improve it. All you do (as noted) is substitute one set of a priori guesses about How Things Should Be for another.
    On the other hand, if you can actually quantify the quality of peer review — however sloppily, as long as it’s consistent — it’s possible to do statistical quality improvement.
    And, yes, I do love Deming (with all of his flaws).
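    To make that concrete, here is a hedged sketch of what Deming-style statistical quality control could look like (the metric and all numbers are invented; NIH computes nothing of the sort): score each review by its absolute gap from the panel’s final score, set control limits from a historical baseline, and chase only the statistical outliers.

```python
from statistics import mean, stdev

# Hypothetical baseline: per-review gaps (|preliminary score - final panel
# score|) from past rounds, used to set the control limits. All numbers
# here are invented for illustration.
baseline = [0.4, 0.6, 0.5, 0.7, 0.5, 0.6, 0.4, 0.5, 0.6, 0.7, 0.5, 0.4]
center = mean(baseline)
upper = center + 3 * stdev(baseline)  # Shewhart-style 3-sigma limit

# This round's reviewers (also invented).
current = {"rev_A": 0.5, "rev_B": 0.7, "rev_C": 1.9, "rev_D": 0.6}

for name, gap in current.items():
    flag = "  <-- beyond the control limit, worth a look" if gap > upper else ""
    print(f"{name}: gap {gap:.2f}{flag}")
```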


  4. Dr. Strangelove Says:

    DM: No, I’ve never reviewed grants. My reviewing experience is limited to journal submissions. On these occasions I experience the shame factor quite frequently – I know the editors personally and I would be ashamed to deliver substandard work.
    P.S. Just this minute I received another summary statement. An application based on a de novo (again: DE NOVO) methodology, trashed by a reviewer alleging an obsolete computational approach and a lack of experience on my part, specifying these with:
    ‘(iii) a lack of validation steps for docking’
    DOCKING!!!!!!!
    Shame factor non-existent, apparently…



  5. Comrade PhysioProf Says:

    This shame factor is missing in the peer review process. Anonymous reviewers can do practically whatever they want, and there are not only no consequences, but their actions are seldom exposed.
    This may be true to some extent for peer review of research manuscripts, but it is completely false for NIH study sections.


  6. DrugMonkey Says:

    No, I’ve never reviewed grants.
    sooo, with my grant reviewer apologist hat on, I find it makes a big difference. I’ve been through the first bits of the cycle, I’ll note. I was sitting in the young-asst-prof’s chair, railing about idiotic reviews that couldn’t possibly have read the application, were stupidly biased, made flagrant errors, varied quite obviously from any reasonable interpretation of the review criteria, etc.
    and all the assoc-profs around me who had been on study section tried to calm me down with the types of platitudes I now relate on this blog. I didn’t really believe ’em.
    not until I’d sat on reviews- and it only took a couple of rounds at that.
    obviously, if you read my comments on grant review, I have some problems. But I also do a fair bit of apologist blogging because it is my view that many of the problems are a result of the system sucking rather than craven, idiotic or lazy reviewers.


  7. Dr. Strangelove Says:

    Comrade PhysioProf:
    And why is that? If the reviewers decide to streamline the application, the only persons who ever see their justification are the SRA (swamped by tons of other applications, and thus not paying attention) and the applicant, reluctant to appeal for the reasons stated earlier.
    Even if the application is scored and discussed, evident idiocies find their way into summary statements, because panel members tend to trust the judgment of the reviewers who actually have read the proposal, rather than investigate themselves.


  8. DrugMonkey Says:

    If the reviewers decide to streamline the application, the only persons who ever see their justification are the SRA … and the applicant
    and the other reviewers assigned to the application. sometimes other panel members as well. In the week after reviews are due and before the meeting, I read the other reviews of all my assigned grants, even if we concur in triage. You never know when you missed something and/or you want to back check your own opinion on something.
    I also read the submitted reviews for selected other apps, whether because the apps are near to my interests/domains, because there is a significant score disparity or whatever. With respect to this latter, note that anyone can call for a triaged app to be discussed.


  9. Anonymous Says:

    DM: I have no idea how you can justify reviews with flagrant errors.


  10. JD Says:

    “This may be true to some extent for peer review of research manuscripts, but it is completely false for NIH study sections.”
    I think that the existence of study sections is one of the elements of the NIH that makes it such an effective institution. I think that there is a real difference between peer review in such a structured environment and journal-based peer review.
    Separating the two types of peer review probably makes for a cleaner conversation.
    In terms of grant submissions, if the comments are so bad that an appeal is necessary then I’d wonder two things:
    1) Was some element of the writing unclear or potentially open to misunderstanding? One of my favorite papers is routinely misinterpreted due to a poor choice of words; it was a great learning experience.
    2) Are you submitting to the correct study section? I am listening to discussions of RC1’s and noticing how people try to make what they want to do match RFAs that can be quite distant. The mental gymnastics to try and do both in one grant can make it confusing.
    Now it could easily be the case that neither is true of the case in point (I’ve never seen the grant) as even good processes have errors. But it is worth at least considering these issues.


  11. DrugMonkey Says:

    DM: I have no idea how you can justify reviews with flagrant errors.
    Me either. What’s your point? That the system should be flawless? That it would make you happier to receive your triage if the review was “flawless” than if you appreciated from being around the block a time or two that this perceived “flagrant error” had nothing to do with the eventual qualitative disposition of your proposal? Is it a “flagrant error” when the reviewers choose to overlook a big ol’ whopping flaw in your research plan because the ideas and the rest of the proposal are so cool?


  12. Dr. Strangelove Says:

    There are two ways of writing a critique:
    1. listing facts pertaining to the subject, and then drawing conclusions from them;
    2. presenting an opinion supported by the reviewer’s professional standing.
    Model 2 is acceptable too, but flagrant errors immediately invalidate it, because they prove that the reviewer has NO CREDIBILITY. This happens too often, and the example I presented above (docking vs. de novo) perfectly illustrates the problem.


  13. abb3w Says:

    DrugMonkey:

    If we try to be as broad as possible in the range of biases that are applied to a given decision, the odds are better that overall decision making will end up less biased.

    In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
    “What are you doing?”, asked Minsky.
    “I am training a randomly wired neural net to play Tic-tac-toe”, Sussman replied.
    “Why is the net wired randomly?”, asked Minsky.
    “I do not want it to have any preconceptions of how to play”, Sussman said.
    Minsky then shut his eyes.
    “Why do you close your eyes?” Sussman asked his teacher.
    “So that the room will be empty.”
    At that moment, Sussman was enlightened.


  14. Anonymous Says:

    DM: Sorry, I missed this crucial part:
    That it would make you happier to receive your triage if the review was “flawless”

    Yes, it would. A flawless (or reasonably flawless) critique either exposes flaws in my proposal, or endorses it. I am professional enough to accept even a damaging critique, if it is based on sound logic and science.


  15. Dr. Strangelove Says:

    DM: Sorry, I missed this crucial part:
    That it would make you happier to receive your triage if the review was “flawless”

    Yes, it would. A flawless (or reasonably flawless) critique either exposes flaws in my proposal, or endorses it. I am professional enough to accept even a damaging critique, if it is based on sound logic and science.
    Nonsense in critiques only contributes to information noise, which often obscures the pure nothingness behind them. Let me reiterate: I am not going to cling to minor errors if other elements of a critique are meritorious. What if, however, critiques stripped of the errors and misinterpretations are reduced to zero?



  16. Comrade PhysioProf Says:

    What if, however, critiques stripped of the errors and misinterpretations are reduced to zero?

    Let’s put it this way, holmes:
    (1) One person’s “error and misinterpretation” is another person’s “deep insight”.
    (2) If reviewers are consistently “misinterpreting” your grant applications, maybe your writing sucks shit.


  17. Dave Says:

    Back to the original point of the post…
    It is no secret that if you hit right on the interest of the study section, you are golden. If your research doesn’t quite fit with the interests of any study section, you are hosed. Most often, the effect is that truly novel research is penalized, while humdrum incremental crap that just happens to align with the narrow interests of study section members gets the nod.
    As DM points out: This is not because study section members are lazy or evil. They just honestly get excited by stuff that is along the lines of their own research. Which means they recommend funding it, even if something they’re less interested in may be better, more useful science.
    The only way to fix it is get rid of study sections as they are now and have review by a broader range of people — some who can judge the general coolness of the science, and some who really know the area and can highlight potential problems as only an expert in that specific area can. I think NSF does this relatively well, through use of multiple external reviewers and panel review.


  18. Dr. Strangelove Says:

    I believe in the existence of objective reality, Watson. The possibility, for example, of synthesizing “all possible 50-meric peptides” is not a matter of any high-level scientific disagreement. This is quite elementary combinatorics. There is a body of such elementary knowledge reasonably required from reviewers, and I disagree with the notion that lack of such knowledge represents “another person’s deep insight”.
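    (To put numbers on that elementary combinatorics, a quick sanity check, assuming the 20 standard amino acids; this is pure arithmetic, not data:)

```python
# How many distinct 50-mer peptides exist? Assumes the 20 standard
# proteinogenic amino acids and a length of exactly 50.
NUM_AMINO_ACIDS = 20
PEPTIDE_LENGTH = 50

total = NUM_AMINO_ACIDS ** PEPTIDE_LENGTH
print(f"distinct 50-mers: {total:.3e}")  # ~1.1e+65

# For scale: at a billion peptides synthesized per second, exhausting
# the space would still take on the order of 1e48 years.
seconds_per_year = 60 * 60 * 24 * 365
print(f"years at 1e9/s: {total / (1e9 * seconds_per_year):.1e}")
```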


  19. DrugMonkey Says:

    What if, however, critiques stripped of the errors and misinterpretations are reduced to zero?
    well, it is hard to know absent specifics. can you get completely screwed now and again? sure.
    does it happen to you repeatedly? if so, something is wrong here….and in most cases what is wrong is an unjustifiably pissed-off PI who does not have the larger perspective. most cases. again, you could be getting screwed, it could happen.
    Dave sez

    The only way to fix it is get rid of study sections as they are now and have review by a broader range of people — some who can judge the general coolness of the science, and some who really know the area and can highlight potential problems as only an expert in that specific area can. I think NSF does this relatively well, through use of multiple external reviewers and panel review.

    say what? it seems to me all you are doing here is saying that you are just unhappy with some particular study section or other. if the section gets too diverse in expertise, then you or some other person is going to bitch about how a certain narrow area is only covered by one individual (who is clearly biased against your work).
    again, real world reviewing experience can in some cases be useful. Take my bunny hopper study section.
    we might have bunny ecologists, bunny hopping behavioral pharmacologists and kangaroo jumping electrophysiologists reviewing one proposal….the amazing thing is how often such people from very different traditions and perspectives arrive at the same approximate merit evaluation.


  20. Dr. Strangelove Says:

    DM: Indeed, I screw up, in a way, because instead of applying known methodologies to problems of interest to review panels, I do method development. This must make some reviewers uncomfortable. I realize this, and I am fine with the lack of enthusiasm, as long as it is expressed in a way that does not offend logic and elementary scientific knowledge. In other words: I can live with lack of enthusiasm. I am not going to accept, however, lack of enthusiasm masked by pseudoscientific babble by reviewers, who don’t really know what to write to justify their discomfort.
    Since I represent a scientific niche (specific examples may be difficult to judge), let me illustrate the problem with a hypothetical, more “mainstream” example: imagine that you have an idea pertaining to NMR spectroscopy. You present it in a proposal, add preliminary data constituting proof of concept, and after several months receive the summary statement, which reads: Review #1. “A very good proposal” #2 “Yada, yada” #3. “This is a weak proposal, nothing novel, IR (sic!) spectroscopy has been used for a long time”. So, is the proposal faulty, or the reviews? Definitely you can make the proposal “better”, by avoiding controversial or overly novel ideas. I have not yet fallen so low as to do this, even though some of the reviewers’ hints are completely clear. Another one from the R21 review I received today:
    “Also the concept of (…), although premature, is novel.” That’s it: it all boils down to “premature”.
    Well, I am actually not that heroic with this “not falling so low”. I can afford some crappy reviews, because crap happens with applications focused on applications of the methodology. The research pertaining to method development, presented in an R01, got very reasonable, no-nonsense critiques, which I am quite happy with. Crap (often quite amusing) is written by reviewers of applications for smaller grants (focused on specific medical conditions), who clearly lack any expertise relevant to other disciplines. So, recently I amuse myself with writing appeals pertaining to certain exceptionally amusing critiques.


  21. JD Says:

    “NMR spectroscopy” -> “IR (sic!) spectroscopy”
    One has to wonder if this is due to the fact that the proposal is on methodology or due to confusion caused by complexity. One thing that comes up in my work (I also do methods, but surely in a very different field) is that it is an incredibly difficult art to describe complex approaches in the amount of space given. I often cry (almost literally) as I try to parse extremely complex issues into 250-word abstracts.
    I have the feeling that the same is true in grants. If you are doing incremental work then you save a lot of space as people know and accept a lot of concepts. If you need to explain how the basics work, you run out of space real fast. It’s an art to do it (and one that I recognize but cannot yet duplicate).
    In terms of R21 grants, looking at effort to reward ratios, have you considered focusing on R01 grants?



  22. Comrade PhysioProf Says:

    Since I represent a scientific niche (specific examples may be difficult to judge), let me illustrate the problem with a hypothetical, more “mainstream” example: imagine that you have an idea pertaining to NMR spectroscopy. You present it in a proposal, add preliminary data constituting proof of concept, and after several months receive the summary statement, which reads: Review #1. “A very good proposal” #2 “Yada, yada” #3. “This is a weak proposal, nothing novel, IR (sic!) spectroscopy has been used for a long time”. So, is the proposal faulty, or the reviews?

    Are you having your proposals read before submission by the kinds of people who will be on the study sections that review them? If this kind of misunderstanding is happening to you repeatedly, it suggests that you are not writing your applications effectively for the intended audience: the study sections that will review them.


  23. Dave Says:

    say what? it seems to me all you are doing here is saying that you are just unhappy with some particular study section or other. if the section gets too diverse in expertise, then you or some other person is going to bitch about how a certain narrow area is only covered by one individual (who is clearly biased against your work).

    No, what I am saying is that the study section foci are too narrow.

    Take my bunny hopper study section. We might have bunny ecologists, bunny hopping behavioral pharmacologists and kangaroo jumping electrophysiologists reviewing one proposal….the amazing thing is how often such people from very different traditions and perspectives arrive at the same approximate merit evaluation.

    Sounds great, but what happens when someone submits a proposal on a project designed to examine bunny crawling behavior? I’ll tell you: The bunny hopping section says: ‘What the heck?! This is not about hopping!’. Or what if it is an equally groundbreaking proposal on snake hopping? The study section says: ‘What the heck?! Snakes?! We study bunnies!’ Either way, the grant gets triaged and another 3 grants get funded to determine whether bunnies leave the ground at 20 degrees or 22 degrees and whether MDMA makes a difference to this angle. Who the fuck cares besides this bizarre little club of bunny hopper fetishists?
    I respect NIH and all, and admit that the study section thing is great at bringing together experts in a certain area. But it’s a very conservative system that is more likely to fund a study mutating every amino acid in a protein we’ve known the role of for 20 years rather than a study that identifies a new protein that plays a more important role. And don’t give me any bullcrap about that not being true, DM. You know darn well it is, or else your own head is stuck up a very narrowly-defined asshole.
    Granted, I am a little bitter in this regard. I once had a single proposal reviewed by four different study sections and considered by 7 different institutes! Not a single reviewer saw the grant twice. Almost — but not quite — fundable scores in the first two submissions, until the last where I got triaged. Best damn project I ever proposed. I am doing it anyway with monies squeaked from all over (mostly private organizations), have had two different write-ups in Nature, a write up in Science, loads of speaking invites, and world-wide press. I am speaking at a Gordon conference next week on it. Good stuff. But I am not even bothering to ask NIH for money on it anymore. Not until I have some amino acids I need to mutate. Or some reason to ask whether MDMA affects the thing. But honestly by then I won’t care; there’s better stuff to do.


  24. whimple Says:

    Dr. Strangelove:
    You’re smart enough to come up with a groundbreaking new analytical technique. That’s great. You’re going to need to be smart enough to understand this:
    Science is a business. Your grant is the product you are selling. The study section is the potential buyer.
    That’s all. No amount of pleading along the lines of, “no really, you just don’t understand, you really want this, it’s really super extra awesome, honest!” is going to sell product. You need to sell something your customer is going to buy. If you have to explain why your product is great, you’re sunk. You need to be reminding them why your product is great. If they didn’t already believe that before they got your grant, your grant is not going to convince them. That’s what it means when you get back “premature”. Publish on the topic, become known to be an expert on the topic, then it’s not “premature” anymore. In the meantime, propose what they want to hear, what they expect you to submit. If they don’t already have an expectation from you because they don’t know you, it’s the wrong study section. You must pick a study section with people that know you on it, with people that know what to expect from you on it.
    You can do it. Really.



  25. Publish on the topic, become known to be an expert on the topic, then it’s not “premature” anymore. In the meantime, propose what they want to hear, what they expect you to submit. If they don’t already have an expectation from you because they don’t know you, it’s the wrong study section. You must pick a study section with people that know you on it, with people that know what to expect from you on it.

    While this is one way to stack the deck in your favor, it is not the only way.


  26. becca Says:

    “the amazing thing is how often such people from very different traditions and perspectives arrive at the same approximate merit evaluation.”

    Because merit exists (and is known by these people), or because, irrespective of sub-sub-sub discipline research niche, we are really drawing our scientists from a very limited selection of people (with a narrow “phenotype” in terms of scientific thinking)?


  27. Dr. Strangelove Says:

    JD: Definitely, the R01 looks like a better option. It went to a study section where the reviewers seem to have broader horizons and approach their responsibilities quite professionally.
    Comrade PhysioProf: even if I fail to present something clearly, a proper review should read: “it is unclear what the PI wants to achieve and how; the application cannot be endorsed until this is clarified.” Fair enough. What I am dealing with instead involves a reluctance to accept an out-of-the-box approach, followed by a wild goose chase (because the SRA wants a written critique for the summary statement), often ending with the reviewer making a bumbling fool of himself/herself. Examples were provided, and they speak for themselves.


  28. Anonymous Says:

    It is no secret that if you hit right on the interest of the study section, you are golden. If your research doesn’t quite fit with the interests of any study section, you are hosed. Most often, the effect is that truly novel research is penalized, while humdrum incremental crap that just happens to align with the narrow interests of study section members gets the nod.
    Exactly. How many times have I seen RFPs from different agencies calling for “novel, not incremental” proposals, only to then find out months later what proposals get funded and….it’s not novel at all!! it’s just more of the same ol’….but claiming to be novel because of changing one detail (and I know enough about the field to know that changing said details does not constitute something entirely different…)


  29. Dr. Feelgood Says:

    I have found that rather than going to the study section exactly aligned with my interests, I will often do better if I go to a less reductionist section. I typically would go to a BDCN section. However, if I go to a more behavioral/systems section such as an IFCN, they are more wowed by the fantastic wizardry of my research. Go to people who in general think of your technical expertise as more magical, while the general research area is still of great interest to them. This helps you hit gold.
    And, as everyone says, don’t fight with your reviewers. They are always right, you just weren’t “clear” enough.
    Doc F



  30. Comrade PhysioProf Says:

    Comrade PhysioProf: even if I fail to present something clearly, a proper review should read: “it is unclear what the PI wants to achieve and how; the application cannot be endorsed until this is clarified.” Fair enough.
    Dude, are you fucking kidding with this “proper review” shit? You wanna get funded, or you wanna nurse grievances?
    Now sack the fuck up and start writing proposals that can be easily understood, both in terms of significance and approach, by the study sections they are going to be assigned to, not by some mythical care bears motherfucking tea party study section with unicorns and rainbows shooting out of its motherfucking ass. Sheesh.


  31. Dave Says:

    Yea, gotta agree with CPP. We all have issues with the way things are done, and most certainly always will, but ultimately we’re also the ones cap in hand. Be respectful, polite, and always thank The Man even when he spits on you.


  32. whimple Says:

    Also, don’t forget the obligatory self-pitying cry, “It’s discrimination because I’m a woman/black/gay/foreigner/youngster/muslim/leper/ostrich!” 🙂


  33. Dave S Says:

    Dr Strangelove and Anonymous (Re #14 and #15):
    I hope you’re not under the impression that the “nonsense” factor in the comments you receive accurately represents the discussion the panel may have had when reviewing your proposal. Comments are almost always post-hoc rationales constructed to justify the final rankings, and are of course limited by the quality of the memory and notes of the person tasked with writing them. That person may be the primary reviewer, but that still doesn’t mean that those comments are really what swayed the panel to vote the way they did. In my experience the comments produced are always an imperfect record of the actual factors that influenced the review panel. You need to take comments with a grain of salt and read between the lines, although they are useful in identifying how people have *misunderstood* what you meant to say.
    Experienced scientists know that the same proposal will get somewhat different reviews owing to the purely random factors of panel membership and expertise. It’s not malevolence or stupidity; reviewers do try to do their best, but there is a lot of noise in the system and you have to accept that. And even beyond that, other scientists have no obligation to agree with you. You may think X, Y and Z are facts, but other scientists may have perfectly valid reasons for not believing them. You have to accept that not everyone can be made to agree with you.
    Being on a grant review panel is a huge amount of thankless work. The reviewers do their best, but obviously a lot of the proposers will get turned down and we have to manufacture a reason… we’re not allowed to say “Reasonably well written and interesting proposal but frankly not as well written and exciting as the ones we collectively voted higher”. Fact of life. It’s not personal.


  34. Dave Says:

    we’re not allowed to say “Reasonably well written and interesting proposal but frankly not as well written and exciting as the ones we collectively voted higher”.

    Huh? Yea we are, and I’ve seen it done. Although not exactly in those words. But “relative to other proposals…” is not an uncommon phrase, especially lately.
    But otherwise, I agree with Dave S in that summary statements are generally post hoc explanations/justifications. They should be seen as constructive criticism from well-meaning reviewers who honestly would love to fund (almost) everybody, but can’t, and are tasked with explaining why.
    To get funded at NIH, you need to excite at least one of your two (or three) primary reviewers enough to argue eloquently and passionately on your behalf. And then you need to also not have anything glaringly bad in your proposal that someone else could point out and sink your proposal in front of the entire panel. This is a result of human nature. Panel members love to talk. It’s really quite boring and annoying. They’re all 8-year-olds looking for attention. Keep this in mind and then note that there are two excuses for panel members to blather on: 1) We have a great proposal that we want to push for funding, or 2) we have some brilliant insight that makes us look smarter than the proposal-writer.
    Getting funded is part skill, but a lot of luck, in that your proposal has to match the right people at the right time. Don’t take the failures personally, but hone the skills and take a lot of shots. But don’t take so many half-ass shots that you exasperate the panel. Remember you are asking a bunch of volunteer people to do a lot of work every time you submit.


  35. whimple Says:

    Not exasperating the panel is also why appealing a decision is very likely to be long-term counterproductive.


  36. Dr. Strangelove Says:

    “Now sack the fuck up and start writing proposals that can be easily understood”
    Cut the crap – if the reviewers (from certain study sections) are not capable of even understanding what methodology is involved, nothing can be done for them. It’s their responsibility to understand that they are making fools of themselves, and perhaps an appeal or two will help them to realize that they are fucking up their own professional reputation.
    And I don’t mind, because I don’t do incremental research: since I am doing method development, I can submit one major proposal concerning the methodology itself, and a number of derivative proposals focused on specific applications. If one or two get screwed by someone incompetent, it will just give me the sadistic satisfaction of writing an appeal letter.


  37. Dr. Strangelove Says:

    Dave S.:
    The reviewers do their best, but obviously a lot of the proposers will get turned down and we have to manufacture a reason… we’re not allowed to say “Reasonably well written and interesting proposal but frankly not as well written and exciting as the ones we collectively voted higher”.
    This is exactly what should be written in the summary statement. We all realize that proposals are prioritized – fact of life, as you wrote. If you, however, try to manufacture a reason (rather than state a reason – this is an important distinction), there is a huge risk of writing nonsense. A critique based on an absurdity may have two consequences:
    1. It makes it more difficult for the applicant to realize whether the proposal is essentially good, or bad beyond repair (in which case it’s better not to resubmit). Nonsense in the critique is just information noise.
    2. If you piss off the applicant enough, your creative, hard work may be publicly discussed by the Council, and this will do your reputation no good.
    Let’s treat each other professionally. No nonsense, please, either in applications or reviews, ok?


  38. Dave S Says:

    we’re not allowed to say “Reasonably well written and interesting proposal but frankly not as well written and exciting as the ones we collectively voted higher”.

    Huh? Yea we are, and I’ve seen it done. Although not exactly in those words. But “relative to other proposals…” is not an uncommon phrase, especially lately.

    Dave (#34) OK, maybe you’re allowed to in weirdo NIH land. All the NASA and NSF proposal reviews I’ve been on disallow or strongly discourage saying anything of the sort. We all know “better” proposals get the funding, but the rejection of a proposal must be based on flaws intrinsic to that proposal.
    Dr S. (#36): Quick question. How long do you think review panel members can realistically spend reading, understanding and checking any specific proposal? If you had to read and rate 70 four-page proposals, or 50 fifteen-page proposals with 10 of those as primary reviewer, how long would you allocate?
    At my university, reviewing proposals counts as “Non-sponsored activity.” I am not technically allowed to charge the days I spend reviewing proposals to any of my grants. Either I must use separate department funds, or do review work outside work hours.
    People don’t go on review panels because it’s fun or beneficial (although you do learn what makes a good proposal); they do it because it’s one of those professional duties, like refereeing papers and teaching. You try starting to “name and shame” review panels and no one will volunteer to do them any more. It wouldn’t be worth the aggro, because no matter how good a job you did, some crazy would make your life hell.



  39. Comrade PhysioProf Says:

    It’s their responsibility to understand that they are making fools of themselves, and perhaps an appeal or two will help them to realize that they are fucking up their own professional reputation.

    HAHAHAHAHAHAHAHAHAHAHAHAHAH!!!!!!!!!! Dude, you’re fucking delusional!
    The only “professional reputation” an “appeal or two” is going to fuck up is your own. Is somebody feeding you this crazy shit, or are you making it up yourself?


  40. Dr. Strangelove Says:

    Dave S:
    If I am expected to care how much time you devote to evaluating a proposal, then please appreciate, at least a little bit, the time required to prepare that proposal. Don’t waste this time with a rejection “supported” by pure nonsense. Professional duties should be done professionally. Personally, I am very grateful to those reviewers who write meritorious critiques, and I express this genuine appreciation while addressing their questions and concerns.
    PhysioProf: As I wrote, I can afford antagonizing some clowns. Generally, however, you are right: for the stated reason people are afraid to antagonize, and that’s why ignorant clowns are allowed to run amok in some review groups. Luckily not in those that are of most interest to me.



  41. Personally, I am very grateful to those reviewers who write meritorious critiques, and I express this genuine appreciation while addressing their questions and concerns.

    Just out of curiosity, what do you express when you address the questions and concerns of those reviewers who have written non-meritorious critiques of your proposals?


  42. Dr. Strangelove Says:

    “Just out of curiosity, what do you express when you address the questions and concerns of those reviewers who have written non-meritorious critiques of your proposals?”
    Usually nothing, for purely technical reasons. Applications for the “small” mechanisms have a much stricter page limit for the introduction than R01s do, and there are better uses for that space. Depending on the circumstances, I either ask the SRA to reassign the application or address the specific whoopsies and my “appreciation” in an appeal letter (my very recent preference).


  43. DrugMonkey Says:

    if the reviewers (from certain study sections) are not capable of even understanding what methodology is involved, nothing can be done for them. It’s their responsibility to understand that they are making fools of themselves, and perhaps an appeal or two will help them to realize that they are fucking up their own professional reputation.
    You are doing a bit of dodging and weaving here, so it is difficult to get a bead on whether you are just blowing off steam about that minority of reviews that are frankly incompetent, or whether you have habitual problems that really need to be addressed. But let us assume that your comment is valid, that you really are landing in study sections of people incompetent to review your proposals, and that such competent people actually exist out there in the real world.
    If so, it is absolutely obligatory for the PI to start taking responsibility for her grant before it even gets to the panel. Call the SRO of the panels that are appropriate and start jawboning about “appropriate expertise” with specific examples of individuals you think would be good reviewers of your proposals. Respectfully and with substantive, non-whiny reasons for your statements and selections. You might just get pleasantly surprised about who shows up on the roster.
    Now, this is no guarantee that the person is going to like your proposal but at least it shuts you up about competence.



  44. Depending on the circumstances, I either ask the SRA to reassign the application or address the specific whoopsies and my “appreciation” in an appeal letter (my very recent preference).

    What sorts of responses have you received from SROs when you request reassignment? What have been the outcomes of your appeals?



  45. Call the SRO of the panels that are appropriate and start jawboning about “appropriate expertise” with specific examples of individuals you think would be good reviewers of your proposals.

    My understanding is that by suggesting particular reviewers, you are forcing the SRO to *exclude* them by the OER rules.


  46. Dr. Strangelove Says:

    DM: I did a very similar thing to what you are suggesting while submitting my R01 focused on method development and validation. Without providing specific names, I very carefully defined the necessary areas of expertise, and yes, I was very surprised by the high quality of reviews.
    My rant is not about the existence of ignorant people. Damn, I can’t even count the disciplines I am ignorant in. My very fundamental question is the following: when a reviewer notices that he/she cannot properly evaluate the proposal, why does the reviewer not ask the SRA for an additional opinion? Such an option is explicitly offered in the NIH peer review policy. What psychological mechanism is involved? Ego? Arrogance? Fear of giving the impression of possessing insufficient expertise?
    As for my “dodging” – this is a blog, I don’t want to write essays. Briefly: I am developing a methodology for de novo design of peptides. This is a subject quite en vogue in certain circles of computational chemistry and computational biology, but rather exotic to the general scientific public – this general (pharmaceutical) public usually associates computers with QSAR or docking studies, aimed at predictions. De novo is de novo, docking is docking. Two separate methodologies, two distinct philosophies. The difference is actually bigger than between IR and NMR.
    Now, imagine receiving a critique in which a reviewer attempts to trash the presented de novo approach by referring to features of docking studies, even using the word “docking” expressly. This is not about some unclear presentation of the intricacies of the idea; this is the reviewer’s complete lack of understanding of the very subject of the proposal. So, I don’t believe any amendments would improve the perception (I went far enough with making the proposal “reviewer-friendly” that competent reviewers accused me of unnecessarily “splitting the hair” about the basics).
    When I am proposing applying the methodology to some specific problem, I don’t have much choice of a review group. If it’s about HIV envelope proteins, for example, it goes to an HIV group. These people are biochemists, biologists, medicinal chemists working in this specific area. And this is fine, unless they feel they are obliged to trash the application by discussing areas outside their areas of expertise, with predictable results. Why they feel they need to do this, that’s the question!


  47. Dave Says:

    Dave S. said: we’re not allowed to say “Reasonably well written and interesting proposal but frankly not as well written and exciting as the ones we collectively voted higher”.
    I responded: Huh? Yea we are, and I’ve seen it done. Although not exactly in those words. But “relative to other proposals…” is not an uncommon phrase, especially lately.
    Dave S. then said:“OK, maybe you’re allowed to in weirdo NIH land. All the NASA and NSF proposal reviews I’ve been on disallow or strongly discourage saying anything of the sort. We all know “better” proposals get the funding, but the rejection of a proposal must be based on flaws intrinsic to that proposal.
    ———–
    I think it is YOU who are in weirdo land, Dave S. I was once specifically told by an NSF program officer in charge of the panel I was serving on to write in my summary something about the ‘relative merits of the proposal’. And I continue to do so regularly for NSF (mostly in terms of ‘Broader impacts’, which are so subjective). Perhaps you got the idea that this was disallowed because NSF works fundamentally differently than NIH, in that NSF reviewers are really only making recommendations to the program officer, who is then supposed to use the recommendations. NSF panels don’t focus so much on ranking. NIH, albeit implicitly rather than explicitly, is all about ranking stuff and then simply going down the list until they are out of money.
    In any case, I am not aware of anything one is not supposed to talk about in a review, except an applicant’s funding (even though this is stupid, because it DOES make a difference).
    Granted, I push the asshole limits in my reviews much as I do everywhere else (including this blog), in that I don’t dance around what I think. That’s what I’m there for, right? Only once have I been asked to revise a review for being inappropriately informal. Two other times, though, I was told by the program officer that the applicant called afterwards specifically thanking them for my refreshingly useful review. I re-reviewed one of those two proposals again. The applicant had taken all my advice, and the proposal sailed through to funding.
    I don’t know what it is about the ‘system’ that makes people feel like they have to write all kinds of bullcrap code-speak for what they really think. It’s stupid. Scientists as a whole are bizarre socially-crippled misfits trying to act cool but not knowing how, like 13 year old boys at a dance.


  48. microfool Says:

    My understanding is that by suggesting particular reviewers, you are forcing the SRO to *exclude* them by the OER rules.

    This is my understanding, as well. I have heard this mostly in the context of reviewers suggested in the cover letter.


  49. Dave Says:

    Dr. Strangelove: You say you are ‘developing novel methodologies’. That requires extra justification. As a reviewer, the instant I read those words, here are the things that start popping into my head. I will expect any decent proposal to address these things — explicitly.
    1) Why do we need a new technology? What makes your approach better?
    2) What makes you think your approach will work? Do you have preliminary data showing it will work?
    3) You say your approach is novel. Why is it novel? What recent developments prevented development of this approach before? Or has it been previously tried and failed? If so, how have you overcome those problems?
    4) Is this new technology, if developed, going to help me personally?


  50. Dr. Strangelove Says:

    Dave:
    All these issues are presented, as early as possible (briefly addressed in the abstract, actually). The preliminary data (proof of concept) are provided. Intricacies of the algorithm are discussed, and comparisons with competing methodologies are made. 4. It is going to help you professionally if you are flexible enough to utilize it. And this is the core of the problem, since some reviewers are clearly not flexible enough to even try to understand what the proposal is about. “Some” is the keyword – nothing frustrates as much as getting one very favorable opinion, one neutral (when the reviewer is clearly not competent, but knows it), and one negative, based on “arguments” that make you feel like Alice in Wonderland.
    As for naming NIH reviewers: there is confusion even among the SRAs. One of them once asked me to suggest reviewers, and then corrected himself later, saying that this is actually not allowed. I am not sure whether naming specific reviewers would actually cause them to be excluded from the pool. Perhaps such a suggestion would just be ignored.


  51. Dr. Strangelove Says:

    OK, thank you all for the comments, advice, rants and ridicule. Back to work.


  52. microfool Says:

    when a reviewer notices that he/she cannot properly evaluate the proposal, why does the reviewer not ask the SRA for an additional opinion? Such an option is explicitly offered in the NIH peer review policy.

    I guess one could ask for the scientific opinion of an SRO/SRA/Program person in the room, but any SRO/SRA who would like to keep their job would quash any such inquiry. The SRO is not a reviewer.


  53. Dr. Strangelove Says:

    Misunderstanding. The SRA is not expected to provide this opinion. The SRA may, upon request from a reviewer, seek an opinion from another reviewer.
    http://www.csr.nih.gov/guidelines/revguide.htm
    “If you believe that additional scientific expertise is needed to review an application, contact the SRA who can obtain an appropriate outside opinion.”


  54. DrugMonkey Says:

    My understanding is that by suggesting particular reviewers, you are forcing the SRO to *exclude* them by the OER rules.

    This is my understanding, as well. I have heard this mostly in the context of reviewers suggested in the cover letter.

    Interesting. My experiences have been in having SROs, to whom I have been ranting, er, respectfully conveying my opinions on the phone, end with a request for me to summarize my points in an email. Perhaps there are efforts to crack down on this very process or perhaps there has been a change in policy since I last tried complaining to an SRO…


  55. Dave Says:

    Interesting. My experiences have been in having SROs, to whom I have been ranting, er, respectfully conveying my opinions on the phone, end with a request for me to summarize my points in an email.

    Holy crap. I can’t believe you wouldn’t do that anyway. Always email. Always email. Always email. It makes it easier for people to say no, but it also lets people think about, forward to others, and check up on what you are saying. So when you do finally get an answer it is likely to be more reliable.
    And in the meantime, you let a very busy person get on with stuff uninterrupted, which they’ll appreciate. Tell them in your email that you are happy to talk with them via phone and they can call if they want, and give them your number and a humongous block of time in which they can call. Remember, they are doing you a favor.
    Dr. SRA/PO/Whatever,
    I’m writing in regard to my proposal R01 “Molecular analysis of groveling for research funds”, which is scheduled for review by XXX next month. I would have called, but know you are busy and so thought it might be better to outline the issue here first. If you’d like to talk on the phone, I can be reached at 555-555-1212 M-F anytime between 10AM and 2PM EST.
    Here is my problem:
    I think the best way to resolve this would be…
    I recognize that… But….
    Thanks very much for taking the time to consider my request. As I said above, you are welcome to call me if that’s most convenient for you. But email is fine too.
    Thanks again,
    You know, like a typical business letter or something. You can get more informal if you’re on a first name basis. Or you can get more formal if you’re requesting something that will likely have to go ‘on file’, like an appeal or that a reviewer be excluded from seeing your proposal or something like that. The key is to help them help you.


  56. Dave S Says:

    Dave (#47) – you’re misinterpreting what I said.
    What I was trying to point out is that comments that discuss ranking are discouraged. You’re not supposed to refer to relative rankings of proposals w.r.t other proposals that the panel is reviewing “Comments: You came Zth but we can only award X grants and we thought the others were better.”, because (a) that does not help proposers improve proposals and (b) it reduces the Program Officer’s ability to fund proposals at the borderline.
    The “merits of the proposal” are exactly what you are supposed to discuss. We both agree on this.
    But the point of my original comment was that even when you take the trouble to write good comments (good for you if you do this, btw), they are not necessarily a complete and representative picture of the panel discussion that led to the ultimate ranking the proposal got. When people rush writing up the final comment this is even more true. So just because you might get some “crazy” comments back does not automatically mean you were unfairly reviewed.



  57. Comrade PhysioProf Says:

    What I was trying to point out is that comments that discuss ranking are discouraged. You’re not supposed to refer to relative rankings of proposals w.r.t other proposals that the panel is reviewing

    This is totally false. Good NIH study section chairs explicitly ask reviewers to consider whether they think a particular application is or isn’t “better” than ones that were previously reviewed in that study section. The reason for this is to ensure that the rank ordering of scoring of applications reflects the rank ordering of perceived quality of applications, and that there is not score “creep” over the course of the review panel meeting.
    Dude, I see from your blog that you are an astrophysicist. You do get that this blog is focused on the conduct mostly of biomedical science, and the grantsmanship is almost entirely biomedical and oriented towards NIH, right?
    Maybe you should listen and learn. I don’t go over to physics blogs and tell physicists how the physics grant review process at NSF works.


  58. Dave Says:

    Dave S: Comrade PhysioProf’s statements, albeit reasonable, should not in my opinion dissuade people from offering experiences that differ (and which might help reveal faults in the system we’re most familiar with). But then again, I am not the blogger here either so we all better do as he says (I’ve been banned from here once already. I obviously snuck back on. I have no idea why. Ask an alcoholic why he drinks. I suppose it’s the same answer for me.)
    Anyway, you said…

    What I was trying to point out is that comments that discuss ranking are discouraged. You’re not supposed to refer to relative rankings of proposals w.r.t other proposals that the panel is reviewing “Comments: You came Zth but we can only award X grants and we thought the others were better.”, because (a) that does not help proposers improve proposals and (b) it reduces the Program Officer’s ability to fund proposals at the borderline.

    Again, I don’t think this is discouraged at NSF. Rather, it is simply meaningless at NSF. Unless the biology panels operate very differently from the astrophysics panels (in which case I’d be interested in hearing what the differences are), the reviewers and panelists are there simply to advise the program officer. The program officers are then tasked with selecting a set of projects that meet various criteria delivered from ‘higher-ups’ — mostly proposals that are just great science, of course, but maybe also a couple that represent great training for underrepresented groups, or underrepresented institutions, or whatever. The closest thing to ranking is grouping of proposals as ‘excellent’, ‘very good’, etc, down to ‘please don’t send us any version of this crap again’. Generally, in biology, only a subset of the ‘excellent’ proposals are funded, based on the whims and directives of NSF bureaucrats. And note that the majority of reviews are external ad hoc reviews. Panelists also review the proposal, but their most important job is to summarize all the reviews for the program officer and write a summary. The applicant gets copies of all the ad hoc reviews, as well as the summary. I guess it’s sort of more like manuscript review at journals, in that editors ask for external reviews, evaluate them and throw in their own opinion, and then fund/publish or not depending on current interests and space/resources available.
    In any case, there is no ‘ranking’ or numerical scoring like at NIH. No scrabbling to the top of a greasy pole, stepping on the heads of your colleagues/competitors as needed. NSF funding is competitive, but less explicitly so than at NIH. So discussing ranking at NSF makes no sense. I suppose it’s reasonable that a program officer might dissuade reviewers from talking about things that make no sense. But don’t mistake this for what it’s not.


