Now that we’re past the new-R01 deadline and heading for the revised-R01 deadline, it is time to talk summary statements. Out they come and we start perusing them for clues as to how to revise so as to improve our score. Frequently, one starts tearing one’s hair when it seems that the reviews cannot have been done by anyone who 1) has a brain, 2) is familiar with the science or 3) actually read the grant.

A recent comment from writedit touches on the issue:

Oh, you haven’t read scores of summary statements over the past 2 decades or had PIs ask you if the 3 assigned reviewers all read the same proposal … or read/understood it at all (based on the irrelevant comments raised).

Also, I pick on Rob Knop again for his expression of a common frustration with essentially opposing critiques in NSF review land.

There are reasons for that frustrating pink sheet where reviewers are diametrically opposed, and not all of them are nefarious. There are at least two important concepts in summary statement tea leaf reading that are not readily apparent until you’ve been on study section.

One, the reviewers are not always talking to you, despite what you might think. Some comments in there are a discussion between reviewers and/or reviewers trying to hit the study section’s cultural buttons. Huh? Well, like most subcultures, grant reviewers generate some shorthand timesavers. This leads to the use of some Stock Critiques. Examples include “lack of productivity”, “too ambitious”, “lack of clear hypotheses”, “independence of the PI” and “fails to consider alternate approaches”. These evolve into shorthand because everyone agrees that certain items are a GoodThing to have in the grant application. Perhaps more importantly, even those who think these items may be silly tend to agree that there should be some consistency (read “fairness”) in review, and thus if application 56 is beat up for Stock Critique Z, well, application 99 had better get beat up for that too.

This means that reviewers can anticipate the use of Stock Critiques and, if inclined toward the grant, may state things in ways designed to head off the anticipated Stock Critique from other reviewers. If the other reviewer uses the Stock Critique anyway, you get opposing reviews, and not only that, but the critiques may only loosely fit the actual application because the reviewers are using a lazy shorthand. After all, if it gets serious (i.e., the grant does not end up triaged) they can focus on detail in the discussion. If the grant is a revision this can be even worse, because the battle may already be joined, and the favorably and unfavorably disposed reviewers use language that they know will have currency with the other members of the panel. So they are talking to the panel, not the applicant!

Example: “this grant has been fantastically productive in the prior interval” vs. “scientific output has been modest”. Huh? Which is it? Are they reading the same biosketch and progress report? Well, sure. But there is no objective standard for what is “productive”. Some papers count more. Some people are willing to look at the PI’s overall output without considering that it is funded by 4 R01s. Some people want to divide by the number of grants, or make sure the pubs listed are really directly relevant to the grant under discussion. Etc. So when you see the above comments, what it really translates to is “I suspect the other reviewer is going to brag on productivity to sway the panel, but I don’t like this proposal so I’d better preempt the issue.”

Two, summary statement writing is frequently an exercise in confirmation. You might, perhaps, be under the impression that the reviewer dissects the proposal first and comes to a conclusion at the end of an exacting reading of the application. Not so. Often one reads the proposal over, with a beer or coffee in hand, and then comes to a Gestalt opinion about the grant. Next one writes the summary statement according to the established position. Thus, if you decide “triage”, you are looking for some quick points to make to justify the opinion; these may or may not fit closely with your actual reasons. For example, it is vastly easier to communicate “the application failed to state any clear hypotheses nor explain how the proposed experiments would test such” than it is to communicate “yes, I know this model pumps out a paper every year or two but this area bores the bajeezus out of me, scientifically speaking”. If you decide “fund this puppy”, you are looking for the best argument in support of the proposal. So you may overlook the deficits and really trumpet the strong points. One may shamelessly rely on Stock Critique type communication to sway the panel in either direction, depending. In many cases you can end up with an advocate writing a critique that is much more laudatory than the reviewer actually feels, analytically, about the grant. This is because s/he has decided that it is a great proposal despite minor flaws. Conversely, the detractor may write a critique which is much more critical than s/he actually feels. Among other reasons, why bother identifying a bunch of strengths if you are just going to assign a bad score? It confuses and lengthens the discussion and in any case takes time that could be better devoted to the good grant on one’s pile…

So as you are re-reading that summary statement that you haven’t been able to bring yourself to look at again in the past two months, try not to overreact.

Woo hoo! Another new R01 sent off. Congrats to everyone else who got theirs in!