Summary statement tea leaves

June 5, 2007

Now that we’re past the new-R01 deadline and heading for the revised-R01 deadline, it is time to talk summary statements. Out they come and we start perusing them for clues as to how to revise so as to improve our score. Frequently, one starts tearing one’s hair when it seems that the reviews cannot have been done by anyone 1) with a brain, 2) familiar with the science, or 3) who actually read the grant.

A recent comment from writedit touches on the issue:

Oh, you haven’t read scores of summary statements over the past 2 decades or had PIs ask you if the 3 assigned reviewers all read the same proposal … or read/understood it at all (based on the irrelevant comments raised).

Also, I pick on Rob Knop again for his expression of a common frustration with essentially opposing critiques in NSF review land.

There are reasons for that frustrating pink sheet where reviewers are diametrically opposed, and not all of them are nefarious. There are at least two important concepts in summary statement tea leaf reading that are not readily apparent until you’ve been on study section.

One, the reviewers are not always talking to you, despite what you might think. Some comments in there are a discussion between reviewers and/or reviewers trying to hit the study section’s cultural buttons. Huh? Well, like most sub-cultures, grant reviewers generate some shorthand timesavers. This leads to the use of some Stock Critiques. Examples include “lack of productivity”, “too ambitious”, “lack of clear hypotheses”, “independence of the PI” and “fails to consider alternate approaches”. These evolve into shorthand because everyone agrees that certain items are a GoodThing to have in the grant application. Perhaps more importantly, even those who think these items may be silly tend to agree that there should be some consistency (read “fairness”) in review, and thus if application 56 is beaten up for Stock Critique Z, well, application 99 had better get beaten up for it too.

This means that reviewers can anticipate the use of Stock Critiques and, if inclined toward the grant, may state things in a way designed to head off the anticipated Stock Critique from other reviewers. If the other reviewer uses the Stock Critique anyway, you get opposing reviews, and not only that but they may only loosely fit the actual application, because the reviewers are using a lazy shorthand. After all, if it gets serious (i.e., the grant does not end up triaged) they can focus on detail in the discussion. If the grant is a revision this can be even worse, because the battle may already be joined: the favorably disposed and unfavorably disposed reviewers each use the language they know will have currency with the other members of the panel. So they are talking to the panel, not the applicant!

Example: “this grant has been fantastically productive in the prior interval” vs. “scientific output has been modest”. Huh? Which is it? Are they reading the same biosketch and progress report? Well, sure. But there is no objective standard for what is “productive”. Some papers count more. Some people are willing to look at the PI’s overall output without considering that it is funded by 4 R01s. Some people want to divide by the number of grants, or make sure the pubs listed are really directly relevant to the grant under discussion. Etc. So when you see the above comments, what it really translates to is “I suspect the other reviewer is going to brag on productivity to sway the panel, but I don’t like this proposal so I’d better preempt the issue.”

Two, summary statement writing is frequently an exercise in confirmation. You might, perhaps, be under the impression that the reviewer dissects the proposal first and comes to a conclusion at the end of an exacting reading of the application. Not so. Often one reads the proposal over, with a beer or coffee in hand, and then comes to a Gestalt opinion about the grant. Next one writes the summary statement according to the established position. Thus, if you decide “triage” you are looking for some quick points to make to justify the opinion; these may or may not fit closely with your actual reasons. For example, it is vastly easier to communicate “the application failed to state any clear hypotheses or explain how the proposed experiments would test such” than it is to communicate “yes, I know this model pumps out a paper every year or two but this area bores the bajeezus out of me, scientifically speaking”.

If you decide “fund this puppy” you are looking for the best argument in support of the proposal. So you may overlook the deficits and really trumpet the strong points. One may shamelessly rely on Stock Critique type communication to sway the panel in either direction, depending. In many cases you can end up with an advocate writing a critique that is much more laudatory than the reviewer actually feels, analytically, about the grant. This is because s/he has decided that it is a great proposal despite minor flaws. Conversely, the detractor may write a critique which is much more critical than s/he actually feels. Among other reasons, why bother identifying a bunch of strengths if you are just going to assign a bad score? It confuses and lengthens the discussion, and in any case takes time that could be better devoted to the good grant on one’s pile…

So as you are re-reading that summary statement that you haven’t been able to bring yourself to look at again in the past two months, try not to overreact.

11 Responses to “Summary statement tea leaves”

  1. Rob Knop Says:

    So as you are re-reading that summary statement that you haven’t been able to bring yourself to look at again in the past two months, try not to overreact.

    That would be much easier if the life and death of our careers didn’t depend on getting the funding that is based on the opinions of these reviewers….

    I see you defending the process. However, if the fact is that the process works this way, and if it leads to feedback that is either useless or, worse, the opposite of useful to the grant proposer, AND if the grant proposer stands to lose his job if he doesn’t get a grant at some point, then everything you’re saying here should be taken as evidence that the way we fund science in this country is sick and inhumane.

    -Rob

  2. drugmonkey Says:

    Rob,

    I’m most certainly NOT defending (all aspects of) the process. I have some big problems with a lot of the way NIH grant review goes. What I’m trying to do is explain some aspects of the process that were very foggy to me when I was first writing grants. I learned a few things along the way, but sitting on study section was still a REAL eye-opener.

    The feedback is not “useless”, but it helps to be able to interpret it properly. One never knows absolutely for sure in individual cases, but by understanding the patterns one can avoid wasting time in the revision process. There are general principles at work. There are also, without doubt, specific principles applicable to sub-cultures, like particular review panels or funding agencies.

    My hope in explicating some of the randomness of the process is to help people depersonalize the outcome a bit by realizing that lots of people are in the same boat, that there are structural forces at play that have little to do with you, and that the feedback may not really mean what it seems to.

    Sick and inhumane: well, yes, there is a debate we need to have on the balance of project-funded versus investigator-funded approaches. NIH is in theory hardline project-funded but in practice a mixture. The application of this principle is uneven and therefore unfair. A robust argument could be had as to how (un)successful the approach has been and whether our current balance is appropriate.

  3. […] 7th, 2007 Not only is it grant revision time, but it is also grant review time. Lots of study sections meeting to review the piles of […]

  4. AssProf Says:

    Hi-
    Thanks a lot for your commentaries on the peer review process.
    I have 2 questions.

    1.
    One is whether you have any sense of how the volume of submissions affects the kind of review offered by the reviewers. Are reviewers asked to read many more proposals these days? I heard, last autumn, that there was a big push at NIH to move the percentage of “unscored” (often meaning “undiscussed in study section”) proposals up to 60%. Based on the reviews I received, and how short they were (vastly less detailed than those I receive and offer in the VA scientific review process), I wound up wondering if the reviewers are just in a painful degree of overload. Then again, it may be that NIH reviews have always been this way.

    2.
    I am perpetually confused by the distinction between “fundable percentile” (which seems to vary from 7-14%, depending on the Institute, and is something I learn mostly through gossip) and “percentage of applications funded”, which is a better-looking number (18-25%, by Institute, and publicly disseminated on the NIH website). What is the proper use of these numbers? Hint: I’m waiting on the percentile score for an application to NIH.

  5. drugmonkey Says:

    With respect to volume, yeah, we’ve been running close to 60% triage for about a year now. Official NIHdom is under the impression that the average reviewer is reading fewer proposals, and is actively working to get that number up, if you can believe it. Personally, I think the per-reviewer numbers are going down mostly because of bringing in more ad hoc reviewers for specialized expertise on a couple of grants, a GoodThing. I don’t like the push to drop ad hocs and get chartered members to review even more. Are reviewers in overload? Maybe. Or just spending more time on the tough calls, which may be increasing because of all of the great grants that are essentially in a holding pattern because of the budget level.

    Percentiles: I have a little something on this in another post. It boils down to the hard line they’ll admit to, the end result when all is said and done in each round, and the way NIH global stats count an “application”. There are a fair number of revision pairs considered “one application”, if I have it right.
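
    To illustrate with made-up numbers (mine, not NIH’s): suppose the payline is the 12th percentile, so roughly 12 of every 100 submissions reviewed get funded. If the global stats count an A0 and its revision as a single “application”, those 100 submissions might represent only, say, 60 unique applications. Twelve funded out of 60 is a 20% “success rate” even though the payline is 12%. That’s the flavor of the gap between the two numbers, if I have the bookkeeping right.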

  6. PhysioProf Says:

    I just received a summary statement from a standing study section for an A1 first resubmission. From reading the reviews and the SRA’s summary of discussion, you would think the grant was triaged, with “moderate enthusiasm for the proposal” and “strengths outweigh weaknesses”. Nevertheless, the proposal was ranked just outside the 20th percentile. I haven’t spoken to my Program Officer about this yet, but I assume he will have insight into what was really going on (he listened to the discussion). It just seems odd to me, and I wonder if you have any thoughts.

    And speaking of the code words, it appears that with paylines where they are, the new fund-this-puppy adjective is “superb”. “Outstanding” means, “please try again”.

  7. drugmonkey Says:

    PhysioProf, hard to comment without reading the whole summary statement. “Moderate enthusiasm” indeed sounds like a negative comment; however, “strengths outweigh weaknesses” can be a plus to a big plus, depending on context. For example, the advocating reviewer might feel s/he has to acknowledge the correctness of criticisms, but then would like to point out that the remaining strengths should fully counter any hesitations.

    If, however, you feel that the overall sense in the summary statement is considerably more negative than the score would indicate, you got saved in the course of discussion. One or more of the reviewers may have talked themselves up to a better score; a less-experienced reviewer may have recalibrated their initial judgment after reading the other preliminary reviews, after reading other applications, or during the course of discussion. A member or members of the panel who did not review your grant may have sailed in to the rescue; this may even have been someone who reviewed it previously but was not currently assigned.

    The PO will, of course, not give you any details on this level. However he or she should be able to tell you what seemed to be the core strengths that everyone agreed upon and what seemed to be the critical sticking points.

  8. […] the better terms. This is one of those heuristics that might help with crafting responses to the Summary Statement or the paper review. Others have views that touch on the topic for example MWE&G has the […]

  9. PhysioProf Says:

    Well, I spoke to the PO, and what he told me was quite surprising. Apparently, the priority scores of the assigned reviewers were even better before the discussion. Strange.

  10. drugmonkey Says:

    Well, this might explain why the resume of discussion seemed negative, if so. It is usually written by the SRA to reflect the sense of the discussion. Keep in mind, too, that reviewers can revise their critiques after discussion. Indeed, they are urged to do so, so that the summary statement matches the decision better. In most cases I think people are too lazy, but I don’t really have much data on this.

    I could also imagine this is a matter of calibration. Different study sections may have different manners of internal communication, the subject of my post here. It may be that one section saves the superlatives for the really great apps, while another uses shades of positivity for all non-triaged grants. Or these particular reviewers may be poorly calibrated, or unusually calibrated (think of the UK Prime Minister’s Questions and how that style of discourse would sound in the US Congress).

    Anyway, you can see I use the term “tea leaves” advisedly…

  11. […] previously discussed the fact that scientific productivity can come up at grant review and one of the interesting things is that there are no objective standards to reference. The most […]
