Your Grant In Review: Junior Reviewers Are Too Focused on Details

September 23, 2008

Although the enthusiastic witch hunt against the mythical Assistant Professor on study section who “killed my grant” has been percolating along for several years now, I have yet to come across a well-supported expression of the complaint. I hear a lot of mewling about how we need “better” and “more experienced” reviewers in the context of calls for reviewers of more advanced age and career status. What I do not hear is a lot of explicit and well-reasoned argument for why older reviewers are necessarily better and why less experienced reviewers are necessarily worse.
When I can pin someone down in person, however, I can extract a limping argument, so I might as well address that.


The argument against more-junior reviewers on NIH study sections that is most amenable to discussion is that the younger people are unable to see the forest for the trees. Unable to take the broad view. Have a tendency to focus on details and grantsmanship issues in preference to the quality of the science (wot, wot?). I will note that the evidence for this is quite lame, as there are never any studies or CSR datasets being referenced. What I do hear generally falls along the following lines.

  1. The comically flawed assumption that the PI in question knows which of the critiques of their grants were written by less-tried investigators.
  2. The PI’s experience with the (1? 3?) Assistant Professors in their Department who give them informal comments on the PI’s own grants.
  3. Occasionally some direct study section experiences in which it is claimed that the Assistant Professors focus on detail too narrowly.

For argument’s sake, let us credit these observations and ask why this might be the case.
My first thought is that the nit-picky, comprehensive review approach has a lot to do with the Assistant Professor not wanting to look like a fool in the study section meeting. This is perfectly natural. Nobody wants to look like an idiot in any scientific setting, and the study section is one place where you are expected to be reaching a bit outside of your most comfortable domains. It can be terrifying. I was VERY nervous on my first invitation to ad hoc review. Particularly since I was visiting a panel which had, on multiple opportunities to review my grant proposals, essentially told me they already thought I was an idiot* (that’s what triage means, right?)! I hear this from colleagues on their first appearances as well, and let us recognize that I am not a particularly wilting violet. So I tend to assume this is a consistent feature: Assistant Professors go into exhaustive detail to avoid looking foolish in front of their peers.
I would argue that this has less to do with their career status and more to do with their specific experience reviewing grants. A few meetings under the belt and I wouldn’t be surprised if this tendency (if generally true) is much reduced. Score one for experience over career tenure.
My second thought has to do with the way that these Assistant Professors have been taught to review grants: by way of reading the summary statements provided for their own submissions! Right? On our study section, at any rate, it is very common that the pool of newbie reviewers is drawn from the pool of recent applicants. So if the reviewer has a handful of summary statements from the BunnyHopper panel and then goes to review grants in that panel, might she not conclude that the proper way to review grants is the way that her own applications have been reviewed?
And, of course, since Assistant Professor PIs have been disproportionately criticized for things like publication output (without regard for the fact that they have been building a lab on one R21 and startup funds instead of operating on 3 R01s), cleanliness of grant preparation, clarity of hypotheses, interpretation of results and alternative procedures, relevance of background, relevance and abundance of preliminary data, proof of familiarity and function of the methods proposed (you know, StockCritiques)… is it any wonder they direct the same critiques at the applications they review?
They may do so for two reasons. First and foremost, because they have been trained to think this is the way review is supposed to go. After a few rounds of study section, well, such a reviewer might start to notice that certain obvious StockCritique bait is not inevitably snapped up. Reviewers choose to overlook much of that stuff (completely coincidentally, I am sure) on the more established PIs’ grants, so how would our Assistant Professor protagonist know that in advance? Again, we can score one for direct experience reviewing and zero for career status.
The second reason has to do with fairness. Once the n00b-ish reviewer has a little experience with the way study section goes and realizes that senior investigator applications get passes for stuff that they themselves had been beaten up for repeatedly, and that younger investigators generally get hit for, well, they get a little pissed. [Are you recognizing anyone here, DearReader?] And they think to themselves that they are for damnsure not going to perpetuate the status quo: what is sauce for the goose is sauce for the gander and all that. You can’t really argue with fairness, can you? I mean, the explicit CSR instructions touch on fairness and equality of review in many ways. We are in fact supposed to focus only on the merits of the application as presented, and they tell us so repeatedly, even in the course of discussion (if the SRO is attentive, anyway).
So what’s the big deal then? Why would senior investigators be complaining if their grants are reviewed the way junior investigators’ grants are reviewed? Why would they be insisting this shows that there is a Problem with review? Why would the supposed problem be with the junior reviewer instead of the more-senior reviewer who gives the senior PI a pass on things that the junior PI is beaten up over?
Why indeed?
__
*honestly, they were very welcoming and I got a few of the “I’m really glad you finally got your grant” type comments.

  1. NM Says:

    However, available evidence (data, not anecdote) might suggest that younger reviewers are actually better than older reviewers. Just because you didn’t like the review doesn’t mean it wasn’t accurate.
    Paper reviewing isn’t the same as grant reviewing, but one of the only factors that explained any of the variance in the quality of reviews of articles sent to the BMJ was age. Younger reviewers gave slightly better quality reviews on average than older reviewers.
    http://jama.ama-assn.org/cgi/content/full/280/3/231
    An audit of reviews made for the Annals of Emergency Medicine also found that younger age was weakly associated with better quality reviews.
    Does anybody know of NIH reviews or similar data being analysed in this way?

  2. DrugMonkey Says:

    how would we benchmark grant review quality since at present there are few agreed-upon, never mind objective, criteria…

  3. PhysioProf Says:

    Dude, that’s some deep shit. Seriously.

  4. NM Says:

    I don’t know. I just wondered if anybody else knew of any attempts.
    It doesn’t necessarily need to be purely objective when the reviewer’s identity is not known. Whilst there is a lot of grey, we can (grudgingly perhaps) recognise a good quality, honest review even if it has just shot down our paper/grant acceptance chances…

  5. JSinger Says:

    how would we benchmark grant review quality since at present there are few agreed-upon, never mind objective, criteria
    Resulting publications, citations thereof, continuations… None of those are perfect measures, as you say, but they should be orthogonal enough to reviewer age to identify or rule out any large effects.
    I think it’s a good idea.

  6. juniorprof Says:

    Every time NIH solicits info on improving peer review I suggest the same thing:
    Establish a study section of asst profs and postdocs to run at the same time as an established study section, but keep them completely independent (with the established one not even knowing the other exists). Leave all the actual scoring and summary statements to the established group, but compare the summaries and scores for the grants between the groups over a few cycles and see what happens. To make it even more interesting, track the progress of the proposed projects over a couple of years (without attention to what is funded or not) and see which group was best able to identify projects that would make a contribution. A separate independent panel, comprised of senior and junior researchers in the same field, could judge this.
    Until they are willing to actually test this hypothesis, I’m not buying it.

  7. DrugMonkey Says:

    I think juniorprof’s proposal would be the best way, sure. I want to see those data!
    Resulting publications, citations thereof, continuations… None of those are perfect measures, as you say, but they should be orthogonal enough to reviewer age to identify or rule out any large effects.
    A decent starting point. Trouble with resulting pubs is that it can take a while for the publication output to emerge. Is faster better? Would encouraging rush-to-publish even more than the system already does be a good thing? And then the inevitable Impact Factor debates would emerge with greater urgency, because it would have to be nailed down in stone how journals were to be compared, no? Citations of the articles? C’mon. We’re talking a grant starting 9 mo after the review event. 5 yrs of funding, papers start rolling out year 2 if that, and might not actually start appearing in the lit until year 4. Citations of those then emerge when? Two years later? And we’re supposed to go back and apply those measures to the review quality of a panel that met 6 years before?
    I hate to be a negative-nancy here, but I didn’t pose the question idly. It is a very thorny problem in my view.

  8. NM Says:

    Good call Juniorprof.
    DM- If you want good quality longitudinal data, you do in fact have to wait for it. JuniorProf’s proposed idea looks like an epidemiological survival study of sorts to me. Whilst we wait for the longer-term, objective-ish data, we can also use qualitative measures of review quality as an intermediate endpoint.

  9. BugDoc Says:

    DM@#7: Trouble with resulting pubs is that it can take a while for the publication output to emerge.
    DM, as usual, great post with lots of food for thought. An interesting trend that I have noted amongst a lot of my junior colleagues is that they get conflicting comments about productivity (referring either to their postdoc work or to pubs from their own lab). In the same round, one reviewer will comment on the excellent productivity of the investigator, while another will express concern about “low” productivity (even if the “low” productivity profile included very high profile CNS papers). I have also occasionally seen in study section that very established investigators get a pass for 2-3 pubs during the previous granting period, because they have been in the field for many years and have a long publication history.
    Obviously there is no general consensus among reviewers about what good productivity is, nor about how to take quality into account. I imagine this adds another factor into review variability, in addition to the “junior vs senior” debate.

  10. Pascale Says:

    Another problem with tracking publications of funded and unfunded grants to look for quality of review: after my resubmission got shot down, I had no funds to continue the project. The two years before that had been spent generating preliminary data “requested” by earlier review, but it generated nothing immediately publishable. Since mine was a one-R01 lab (and that R01 ran out during this period), generating papers from that grant took priority over the new project. Until you have multiple grants, you really don’t have the money or clout to attract the students and post-docs you need for a “science factory” that can publish a paper a month.
    Thus my current period as a Playtex lab (“no visible means of support”).
