NIH backs down on resubmitting unfunded A1 grant applications

April 17, 2014

The rumors were true. NOT-OD-14-074 says:

Effective immediately, for application due dates after April 16, 2014, following an unsuccessful resubmission (A1) application, applicants may submit the same idea as a new (A0) application for the next appropriate due date. The NIH and AHRQ will not assess the similarity of the science in the new (A0) application to any previously reviewed submission when accepting an application for review. Although a new (A0) application does not allow an introduction or responses to the previous reviews, the NIH and AHRQ encourage applicants to refine and strengthen all application submissions.

So, for all intents and purposes you can revise and resubmit your failed application endlessly. Maybe they will pick you up on the A6 or A7 attempt!

Sally Rockey has a blog entry up which gives a bit more background and rationale.

While the change in policy had the intended result of a greater number of applications being funded earlier,

I really wonder if she believes this or has to continue to parrot the company line for face-saving reasons. There is no evidence this is true. Not until and unless she can show definitively that the supposed A0s being funded were not in fact re-workings of proposals that had been previously submitted. I continue to assert that a significant number of PIs were submitting “A0” applications that directly and substantially benefited from having been previously reviewed in different guise.


As a result, we heard increasing concerns from the community about the impact of the policy on new investigators because finding new research directions can be quite difficult during this phase of their career.

If the true concern here was the ESI or NI, then they could have simply allowed them to pass the filter as a category.

The resubmission of an idea as new means the application will be considered without an association to a previous submission; the applicant will not provide an introduction to spell out how the application has changed or respond to previous reviews; and reviewers will be instructed to review it as a new idea even if they have seen it in prior cycles.

The only way this is remotely possible is to put it in a different study section and make sure there are no overlapping ad hocs. If they don’t do this, then this idea is nonsense. Surely Dr. Rockey is aware you cannot expect “instruction” to stick and force reviewers to behave themselves. Not with perfect fidelity.

However, we will monitor this new policy closely.

HA! If they’d decided to allow endless amendments (and required related apps to be submitted as such) then they would have been able to monitor the policy. The way they did this, there is no way to assess the impact. They will never know how many supposed “A0” apps are really A2, A4, A6, nor how many “A1” apps are really A3, A5, A7…etc. So what on earth could they possibly monitor? The number of established PIs who call up complaining about the unfundable score they just received on their A1?

71 Responses to “NIH backs down on resubmitting unfunded A1 grant applications”

  1. Dr Becca Says:

    The only thing I imagine they could monitor is the GINORMOUS spike in submissions they’re sure to get, and whether it creates an impossible workload for reviewers, and/or a disproportionate increase in triaging of ESI/NI.

  2. Chris Says:

    I’m a bit more cynical; I don’t know if this will have much effect on anything. Savvy grant writers have already been doing this for years. I suspect they are just admitting the obvious – it is a waste of time (and money) for them to try and determine if a new application is “too similar” to a previously unfunded submission.

  3. Pinko Punko Says:

    I don’t think there will be a huge increase in submissions. The half-lives of careers will not support endless churning. Maybe I am naive but I don’t think people will just keep submitting grants getting 7-9 scores. This might however push reviewers to be even more ruthless, I wonder.

  4. Litespud Says:

    So, how do people interpret the announcement? That an A1 submitted after 4/16 can be resubmitted as a “new A0” when the Summary Statement lands, or that the upcoming June deadline will see new A0s derived from last year’s unfunded A1s?

  5. drugmonkey Says:

    and/or a disproportionate increase in triaging of ESI/NI.

    This is not supposed to happen, regardless. Study sections are not supposed to let the burden of triage fall on ESI applications. So if about 40% of nonESI apps are discussed, then the same proportion of ESI apps are supposed to be discussed. Of course, the functional significance of discussing apps over the general triage line just because they are ESI is not likely to be high.

    The real point here is that genuine brand new A0s will certainly be disadvantaged relative to A2asA0 apps. On average.

    I suspect they are just admitting the obvious – it is a waste of time (and money) for them to try and determine if a new application is “too similar” to a previously unfunded submission.

    I would agree with you, but the track record of NIH realizing what the heck is going on with its policy changes, review, applicant behavior, etc. is not encouraging.

    Maybe I am naive but I don’t think people will just keep submitting grants getting 7-9 scores.

    There are a heck of a lot of triaged apps seeing 4-5 scores which include at least one reviewer who is throwing 2s, 3s…maybe even 1s. The way I see it, you are trying to fight that triage score up into a scored range. From there you might have a chance at striking a vein. This comes just as I had finally convinced myself to abandon the triaged apps…..

    the upcoming June deadline will see new A0s derived from last years unfunded A1s?

    This will be the case for at least 2-3 rounds. Then the backlog will clear and maybe we’ll start settling into a new normal.

  6. dsks Says:

    It isn’t going to change much in terms of number of submissions imho. Nobody is going to be resubmitting consecutively triaged grants, so research that is clearly unpopular will be pruned away as it always has been. What this means, though, is that junior PIs who manage to get within spitting distance of the payline, and who perhaps have received consistently good significance scores on their initial two tries, no longer have to do a merry dance in order to dress up their A2 as a “new” A0. They can just concentrate on addressing the technical criticisms re: experimental design or whathaveyou.

  7. Dave Says:

    So why enforce going back to an A0 at all? Why not just go back to allowing endless resubmissions? As you allude to, I suppose this is a way for the NIH to save face by claiming the change to only two submissions worked as intended and that this is to help young investigators.

  8. Joe Says:

    Arrrgh. This will just be a return to the holding pattern. You’ll go A0, A1, A0, A1. At least under the two-shots rule, you had reviewers who would overlook petty problems they found in the Approach of an A1, because they knew that was the last try. Now, such reviewers can be as petty as they like and just assume it will be back soon as an A0.

  9. AcademicLurker Says:

    The policy of requiring proposals to become “substantially different” (whatever that meant) after 2 bites at the apple only made sense if you believed that a proposal that missed the cut twice must be fatally flawed somehow. With paylines being what they are, we know that isn’t generally true.

  10. DrugMonkey Says:

    “I’d like to see this one come back” was always the most maddening thing I heard on study section. Ok, neck and neck with “…..but we know Prof Oldster will do good work”, but still. Maddening.

    We’ll be seeing more queuing of apps.

  11. DrugMonkey Says:

    AL- not fatally flawed. Fatally unlucky. It *should* have been an attempt to cull the herd but since they didn’t want to admit that was the goal, here we are.


  12. “Maybe I am naive but I don’t think people will just keep submitting grants getting 7-9 scores.”

    There is a criterion/impact score calibration issue that needs to be taken into account here. We have been instructed to recalibrate to use the entire scoring range just for the applications we discuss. This means that the worst of the discussed applications is going to get a seven or eight impact score, but the triage cutoff is usually around an impact score of four. So there are triaged people who are getting threes, fours, and fives on their criterion scores, and thus delusionally thinking they were close. Reviewers never go back and recalibrate the criterion scores of their triaged applications. So if you were triaged, the criterion scores are completely, utterly meaningless concerning whether you should revise and resubmit. This is a major reason why I think the criterion scores should be abolished.

  13. Pinko Punko Says:

    I have not heard the directive to use the whole range of scores for discussed applications. That would be new to me, and also foolish. My limited experience, especially on a panel that was “recalibrated,” was to use the full range of scores on the initial review. Percentile is not just based on scored grants, it is based on all grants, right? This is why it makes sense that a grant scoring at the bottom of the discussed pile is maybe in the 40-60%ile range, even if only 40% were discussed. It doesn’t make sense that the worst discussed grant is scored a 9. It is likely better than a bunch of non-discussed grants.

    There were a bunch of calibrations that shifted scoring of some grants down, but the scores were never meant to be relative. A grant that is a 9 is a giant POS in absolute terms. A grant that is a 5, we were told, is a pretty good, moderate-to-good impact grant with a few flaws. Scoring a 5 grant at the edge of discussion (let’s say it got discussed) as a 9 is stupid and meaningless, and introduces relative-scoring problems when the percentile is based on the last three rounds of grants.

    And worse, if there is a government budget issue where councils have to be stacked, a relatively good round of submissions will be disadvantaged compared to a bad round (we see this all the time in study sections), because the score was not focused on a grant in some sort of absolute qualitative terms, but in relative terms. I have heard of SROs running their panels totally differently with regards to “scoring vs. the pile” and “don’t score vs. the pile”.
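
    For concreteness, here is a rough sketch of how I understand the percentiling to work; the half-rank formula is an assumed convention for illustration, not something I am claiming is the exact CSR calculation:

        # Toy percentile calculation: a grant's standing against ALL applications
        # (discussed and triaged) reviewed over the current plus previous two
        # rounds.  Assumed convention for illustration, not the exact CSR formula.
        def percentile(impact_score, base_scores):
            rank = sum(s < impact_score for s in base_scores) + 1  # 1 = best (lowest score)
            return 100.0 * (rank - 0.5) / len(base_scores)

        # Invented overall impact scores (10-90 scale) pooled from three rounds:
        base = [15, 20, 22, 30, 31, 35, 40, 41, 50, 55]
        print(percentile(31, base))  # -> 45.0

    This is why a grant at the bottom of the discussed pile can still percentile mid-pack: the triaged half of the base sits below it.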

  14. physioprof Says:

    The reason for doing this is to make sure that there is sufficient spread in scores around the likely payline that the study section’s judgments are meaningful.

  15. physioprof Says:

    BTW, this is definitely the directive within MDCN IRG.

  16. Cassius King Says:

    My guess is that relatively few people made it under the payline by a “this is an A1 so I’ll give it a push.” I suppose those people would be hurt by this, but for everyone else this is a good thing, since they will avoid the hand wringing associated with the need to change directions on a project that is worthy of support. It also gives PIs the chance to redirect to a study section that might be a better fit for the “new” A0.

  17. Pinko Punko Says:

    Yeah, I get that, but I think I would suggest they just go to half scores for discussed. This would leave the scoring somewhat consistent across discussed/non-discussed. Discussion is for fine-tuning anyway.

  18. Pinko Punko Says:

    The other thing that doesn’t make sense, CPP, is how can you be fair in recalibrating scores if you vote in a certain order? It would be almost impossible to use the entire scale unless you kind of use the original score order: you wouldn’t re-vote on grants already scored, and you might end up slotting later grants in between. You’d have to be very tough at the beginning of the discussion. Maybe that would work. I know what would work is increasing the funding pool; otherwise the only thing that matters is trying to be fair and consistent. That is what I want out of panels at the minimum. I think sometimes they have the memory of Drosophila 20 minutes later unless there is a strong chair or someone acting as the conscience of the panel to remind them about trying to be even.

  19. Wowchem Says:

    The whole thing is much ado about nothing. All of us were resubmitting; don’t tell me you weren’t. They probably realized they couldn’t stop it. Won’t change a thing.

  20. Susan Says:

    Congrats. You’ve ‘won’ the ability to be endlessly queued. Just keep wA11ting your turn like a good little fish while we make sure the BSDs get what they need.

  21. E-rock Says:

    I think the criterion scores should be scored separately, by more reviewers, with one reviewer only assessing one criterion. Like MMI (multiple mini interview) admissions interviews. It’s a way to remove some bias: the Approach can be assessed blindly, Ole Prof won’t get away without a Power Analysis, and Noob won’t “demonstrate his/her inexperience” in the Approach by not citing “obscure 1950s model.”

  22. DrugMonkey Says:

    The criterion scores do not need to be connected in any way to the overall Impact Score. Increased precision for those does nothing.

  23. martini Says:

    New investigators who are early to mid-tenure just got fucked over by this.

    Because of the 2-submission policy, new investigators were being advised (by program directors and departments) to hold back on submitting R01s until they had a lot of data or even 1-2 papers. Now these new investigators don’t have things in the pipe and are competing against a flood of grants that is about to hit the study sections. Even worse if they were applying for R21s or R15s in small states. This contrasts with the old advice that a new investigator should submit early and often.

    This is effectively going to be more like NSF where grants essentially get queued up, often being judged by the person not the project.

  24. anonymous postdoc Says:

    The comments at Rock Talk (and here) make me think two things:

    1) OER must be completely habituated to complaints now, and consequently ignoring all internet commenting. They restrict all grants to max two submissions, bitching and gnashing of teeth ensues. They remove the limitations on the number of submissions, and the tone is still largely petulant. I am confident that this change in policy had nothing to do with the unwavering negativity of the electronic comments, but rather the direct lobbying by BSD types to whom they actually listen, and about whom they actually give a shit. Which is why it was not restricted to ESI. It is not intended to benefit ESI.

    2) People are simultaneously concerned about large numbers of subpar, “never gonna make it” turd grants from other losers clogging up peer review, and flushed with excitement over the opportunity to submit their own excellent, “just under the payline” rejected A1s. The inability to see that these are the same grants is staggering myopia. Especially since we know exactly the limits of reviewer infallibility, and strongly suspect that triage is not an objective determination of turdishness in our own grants. Just everyone else’s, apparently.

  25. Pinko Punko Says:

    I don’t think there will be a flood of grants. I guess we’ll see in June how much the numbers go up.


  26. People are simultaneously concerned about large numbers of subpar, “never gonna make it” turd grants from other losers clogging up peer review, and flushed with excitement over the opportunity to submit their own excellent, “just under the payline” rejected A1s. The inability to see that these are the same grants is staggering myopia.

    Wrong. Grants that are “just under the payline” have percentiles that range from 10-25, while “turd grants” that were triaged are in the bottom 50%. These are separate populations of grants, based on a wholly objective criterion.

  27. dsks Says:

    “Wrong. Grants that are “just under the payline” have percentiles that range from 10-25, while “turd grants” that were triaged are in the bottom 50%. These are separate populations of grants, based on a wholly objective criterion.”

    Maybe they could keep the A1 cutoff for twice-triaged grants? Throw two gutterballs in a row and that proposal is toast, no do-over. Again, data showing the general fates of previously triaged grants upon resubmission would be informative here. If the evidence strongly suggests that twice-triaged grants are extremely unlikely to be funded, then bring the hammer down on them (only do so with a little more muscle, so that folk don’t simply repackage them as A0s).

    btw The flip side to the idea that the A1 cut-off hurts ESIs by forcing them to change research direction is that, in some circumstances, it’s probably for the best that they do. However, it would certainly help for them to know that asap, so encouraging ESIs to go back to submitting early and often again is for the better, I reckon.

  28. AcademicLurker Says:

    Wrong. Grants that are “just under the payline” have percentiles that range from 10-25, while “turd grants” that were triaged are in the bottom 50%. These are separate populations of grants, based on a wholly objective criterion.

    Are they really separate populations? I’ve had a proposal go from triaged on the A0 to funded on the A1, and I’ve certainly heard plenty of people complain about getting a competitive score on the A0 (15-25% range) and subsequently getting triaged on the A1.

  29. Evelyn Says:

    I had some very enthusiastic PI responses yesterday due to this announcement, in both the established and new investigator camps. I am a bit more guarded and am aware that they are going to monitor the situation. I think if this policy results in success rate dropping (more dramatically than in the last few years), it will be either reversed or revised. I do like the idea of giving well-scored grants another go-around and so far, no one I am working with is sending in a triaged application. So, maybe that will be the trend across the community? Who here is thinking of sending a triaged A1 back in as an A0?

  30. drugmonkey Says:

    I am thinking of sending triaged apps back. For sure. Especially ones for which 2 reviewers threw down discussable criterion scores on Approach and Significance.

  31. dsks Says:

    “and I’ve certainly heard plenty of people complain about getting a competitive score on the A0 (15-25% range) and subsequently getting triaged on the A1.”

    I’m interested to know what the justification for this can be. The only legitimate one to my mind is that the A1 was resubmitted into a pool of stronger applications second time around, and so regardless of being improved relative to its A0, it was not deemed fundable relative to the top proposals in the pool.

  32. drugmonkey Says:

    That is most certainly one of the biggest reasons, dsks. The review is explicitly not supposed to be benchmarked to prior scores; it is to be compared with the current round of proposals.

    There is also another factor which every applicant chooses to ignore in a situation like this. Suppose the omnipotent “objective” score for your proposal is X. We all know that in a sample of judgments about that score, there will be variance around X. Sometimes it will be better than the true central tendency and sometimes it will be worse. So why do people assume the best score is the true one and the worse score must be a sign of bad review? Maybe it is just that the distribution of scores for your idea is being expressed.

  33. drugmonkey Says:

    There is also the fact that you can mess up the revision.

    One example from a distant past review I was on. The first version was very interesting, with cool and provocative preliminary data. The data were in sore need of a set of controls to really nail down that they were as interesting as on first blush. So the first review is basically premised on the idea that yes, the provocative preliminary data are as good as presented.

    On resubmission, however, the controls were not performed. It wasn’t like it was some hugely expensive “run 30 more fMRI human subjects” control either. We’re talking pretty rapid and inexpensive bench stuff. So now, all of a sudden, we’ve moved from stipulating that the cool data are valid to being very suspicious that they are not valid.

    The application itself may not have changed much and may even have improved on many other aspects. But the mind of the reviewer interacts with the fact that there was an opportunity to respond to a reasonable critique and this opportunity was bypassed. This can fundamentally shift the confidence factor that what is being presented is good data versus a convenient crock-of-shit story based on cherry picking preliminary data.

  34. anonymous postdoc Says:

    CPP: “Wrong. Grants that are “just under the payline” have percentiles that range from 10-25, while “turd grants” that were triaged are in the bottom 50%. These are separate populations of grants, based on a wholly objective criterion.”

    I know of at least two examples of recentish grants, from people in some current or former mentoring capacity to me, that got edge scores as A0s, were resubmitted, and were subsequently funded as “new” A0s, but now we also have the scores as A1s. What happened?

    In one case, the A1 was discussed but scored crappy so it was out of the running. The other case was triaged. These grants were minimally altered and rapidly resubmitted, as you might expect from something that scored very well as an A0.

    These kinds of anecdotes suggest to me that the variance around the “true score” of a grant is much, much larger than the triage cutoff, let alone the funding cutoff.

    So yes, I do think everyone else’s shitteasse triaged grants and one’s own just-missed diamonds in the rough are actually the same population of grants, because the “wholly objective criterion” is subjective as shit.

    I find myself surprisingly comfortable with this; it makes it much easier to let rejection roll off one’s back. If only grants weren’t so much work.

  35. Evelyn Says:

    That’s not my experience. I would say in over 3/4 of cases, a grant scored as an A0 gets scored again as an A1 – usually better (yes, there are exceptions). A triaged A0 usually fails either as a resubmission or as a new submission somewhere else. I find that the scores/responses to ideas are generally close even when compared across different funding bodies (NIH, DoD, private foundation). What sucks at NIH is usually going to suck at the DoD. I understand that everyone has their anecdotes (“It happened to a friend of mine, to my advisor, to ME!”) but as someone who works with a fair number of PIs, the general trend I have observed is that of peer review consensus. Sorry, but it’s not as much of a lottery as some of you seem to believe.

  36. eeke Says:

    I am in favor of this new policy. The whole thing was always a charade, and one thing that may happen is that it could unmask things that were going on anyway (decline in success rates, queuing, etc).

    One question. A colleague and I had a grant that was triaged (on A0). The reviewers were very excited about the topic, but not so much about our approach. If we change the approach, is it better to re-submit as an A0 (as a “do-over”), or submit as an A1, under the shadow of a previously triaged grant application? The only advantage to the latter that I can think of is the extra month of time we would have to generate sufficient preliminary data using the alternate approach.

  37. Pinko Punko Says:

    CPP be trollin, peoples.

  38. imager Says:

    Not thinking of sending triaged ones back – but A1s where the score didn’t move at all and I got comments from yet another set of 4 (sic!) reviewers like “he answered well to all the prior concerns – but I now have these concerns…”. Before, I needed to redesign the grant to resubmit as A0 again; now I can take it, address the new comments, and send it back in as a new one.

    I think it is more a nod toward the increasingly erratic, random and erroneous reviews than anything else. Call me biased, but with whomever I talk, I never hear that these were fair reviews. Of course, we all bitch about the idiots who are reviewing our superb Nobel-worthy project – but if you repeatedly get comments that clearly demonstrate ignorance and lack of expertise in the field, then you start to wonder. Or requests to come back with the results of the research you are trying to get funded in order to get funded…

  39. imager Says:

    Evelyn – totally different experience for me, actually quite the opposite:

    I got a DOD grant out of something that was triaged at NIH.

    So far ALL my resubmissions (A1) for R21 or R01 got the same or worse scores than the original grants, all thanks to a whole different panel of reviewers (usually 4).

    My current R01 got funded as an A0; the resubmission I sent in (since the overall budget, and therefore the payline, was not set yet) got triaged.

    So my confidence in the reality of the scoring and reviews is pretty much at zero. It is a lottery, luck – and then some good results and grantsmanship as salt and pepper on top of it.

    In that light the new policy is welcome.

  40. Evelyn Says:

    imager – interesting, but at least in my experience (working for 4 years with >20 PIs who submit quite frequently) you are an outlier, not the norm. I do a little exercise with PIs after they get their summary statements: we sit down and underline common words in the critique. The majority of the time, at least 2 out of 3 (or in your case 4) reviewers have an almost identical concern about the application. I am curious – did you rework your DoD application after you got the summary statement back, or did you just send it in as is? We do retool triaged applications for future submissions, but usually the work (both experimental and mental) is extensive.

  41. Comradde PhysioProffe Says:

    From the applicant’s standpoint, who gives a fucke if success rates decline because other applicants are submitting more piece of shit grants due to this new policy? All that matters is the number of grants that can be funded and the number of grants submitted that are scored better than yours. 10,000 additional grants submitted that are scored worse than yours don’t decrease your chances of being funded even though they substantially reduce the nominal success rate.
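
    To make the arithmetic concrete, here is a toy sketch (the numbers are completely invented, not NIH data):

        # 1000 fundable slots; your application ranks 900th best overall.
        # Adding 10,000 applications that all rank WORSE than yours tanks the
        # nominal success rate but leaves your outcome untouched.
        slots = 1000
        your_rank = 900  # rank 1 = best score
        for pool in (5000, 15000):
            print(f"pool={pool}: success rate={slots/pool:.1%}, "
                  f"funded={your_rank <= slots}")
        # pool=5000: success rate=20.0%, funded=True
        # pool=15000: success rate=6.7%, funded=True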

  42. Pinko Punko Says:

    CPP is 100% correct there.


  43. Of course I’m 100% correct. I’m always 100% correct.

  44. dsks Says:

    Well, somebody’s got to review all that extra shit, though, haven’t they? That’s one of the problems outlined by the recent handwringing piece in PNAS: similar to peer review in publishing, the workload is straining peer review of grant applications, and the quality of review is likely suffering for it.

    Plus, if the data from the NIGMS and NHLBI are to be believed, everything above the 40%ile is in lottery-ticket territory in terms of the resolving power of peer review to accurately rank applications by impact, suggesting that one’s chances of winning are indeed diminished with every additional ticket slung in the hat.

  45. Pinko Punko Says:

    I do think peer review is better for grants when they actually get discussed but with tiny paylines it seems people think “why bother?” I don’t know how to get around this other than personally trying to engage with every grant I review and fight the drive to phone anything in. I don’t know what else to do.


  46. I don’t know how to get around this other than personally trying to engage with every grant I review and fight the drive to phone anything in. I don’t know what else to do.

    Save your powder for the grants that are at least decent. I spend very little time on grants I consider to be in the bottom half. Our job is to provide as large a spread of scores as possible to the discussed grants and thereby convey our relative ranking of them, and to provide guidance to the PIs of those grants of how we arrived at that ranking.

    Although there are all these cries about how peer review is so totally random, and “all the grants are excellent”, it is remarkable how infrequently a substantial discrepancy occurs between the three reviewers’ preliminary impact scores. It is usually only two or three grants per cycle where one or two reviewers gives a great score and one or two reviewers gives a terrible score.

  47. anonymous postdoc Says:

    I admit, it will be some time, if ever, before I am allowed on study section. Thus I must accept that CPP’s observations about impact scores are as valid as anyone else’s personal experience.

    The issue is that we are all trafficking in anecdotes (yuck forever to “anecdata”) about something for which there is copious, quantitative (secret) data. OER should be able to answer this question easily: what is the correlation between reviewers across all grants in a given meeting of a study section? Between A0 and A1? Does this vary according to reviewer experience? According to the number of grants assigned (e.g. ad hoc vs standing member)? Across study sections?

    I am aware of the data describing the relative relationship of the criterion scores to the overall impact, but nothing that actually even tiptoes around the question of intra- and inter-reviewer reliability. Which is disappointing given the requirement to report this kind of variability in our own data, particularly when using human coders to assess phenotypes.
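
    Even a minimal sketch shows the kind of thing OER could compute from the data they already hold; the scores below are fabricated stand-ins for the real (secret) ones:

        # Inter-reviewer reliability from per-reviewer preliminary impact scores.
        # Rows = grants, columns = the three assigned reviewers (scores 1-9).
        import numpy as np

        scores = np.array([
            [2, 3, 2],
            [5, 4, 6],
            [3, 3, 5],
            [7, 8, 7],
            [4, 6, 3],
            [6, 5, 6],
        ])

        # Pairwise Pearson correlations between reviewers, computed across grants.
        print(np.round(np.corrcoef(scores.T), 2))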

  48. E-rock Says:

    I have not yet received a summary statement that lacked a blatant factual inaccuracy about the document that the reviewers were supposedly considering. Typically the demonstrably incorrect statements reveal some bias and… laziness. (Could be rebutted with: “see biosketch; see analysis plan; this is a clinical, not an animal, study; we did not propose that model; our institution is different than the one you critiqued here”…) The big shots say they are too busy to handle their responsibilities… so get out of the way.


  49. FACTUAL ERRORS AND REVIEWER BIAS KILLED MY GENIUS GRANT!!!111!!11!!1!!

  50. Viroprof Says:

    Nice CPP, the good ol’ straw man attack!

    I’ve received a number of summary statements where (usually) the third reviewer slams experiments that I didn’t propose or refers to data that wasn’t in, and isn’t relevant to, the grant. I don’t think it’s maliciousness, just being overworked and inattentive. Although it’s inattentiveness that can sink someone’s career. I’ve talked to my reviewer about some of these and the general response was “that’s just how it goes”. Ever so slightly frustrating.

  51. physioprof Says:

    Although it’s inattentiveness that can sink someone’s career.

    The point isn’t that this doesn’t occur. Rather, the point is to accept that it occurs on occasion, realize that it happens to everyone, and do what you can to mitigate its adverse consequences. If you think this is happening to every grant you submit, then that is almost certainly a *you* problem, and you need to figure out how to write your grants in a reader-friendly manner that takes account of the fact that reviewers tend to be overworked and inattentive.

    When overworked and inattentive reviewers encounter an exciting well-written grant, they manage to overcome their fatigue and give the grant the attention that it deserves. When they encounter a boring and/or difficult-to-read grant, they just shut down.

  52. Viroprof Says:

    I do agree that you can always improve and make things more exciting/interesting, and there are a lot of times where it’s inexperience that causes misunderstandings. However, this is not always the case, and it’s frustrating when the 1st and 2nd reviewers really like the proposal and give it 1’s and 2’s, and then the 3rd reviewer makes these types of comments and gives you 3’s and 4’s. There is also the issue of what the reviewer finds interesting. None of us find all aspects of science, or even our own field, especially interesting. This does influence how interesting we find a grant, and because it’s human nature there’s not much we can do about it even if the grant is well written. We’ve all dealt with this at some point. Perhaps having fewer reviews per reviewer would help, although there are limitations to that approach (getting enough reviewers, resources, etc.).

  53. physioprof Says:

    However, this is not always the case, and it’s frustrating when the 1st and 2nd reviewers really like the proposal and give it 1’s and 2’s, and then the 3rd reviewer makes these types of comments and gives you 3’s and 4’s.

    If this grant gets discussed (which it should in any study section that is appropriately spreading scores), and ends up with an impact score closer to the latter than the former, then it means that the panel as a whole was more convinced by reviewer #3’s concerns than by the other two reviewers’ enthusiasm. This is why there are three reviewers and why there is an entire panel that votes after hearing the reviewers comment on the grant.

    Yes, there is a component relating to how relatively effective the three reviewers are at advocating for their positions. But you also have to understand that it is always possible that reviewers #1 and #2 were saving their powder to advocate for a grant or grants that they considered even better than yours.

    Yes, it is always frustrating not to get a fundable score, but you are very confused if you think that the scenario you describe represents some sort of breakdown in the efficacy and fairness of peer review. You have to get outside the mindset that the only “fair” outcome is for all three reviewers to invariably recognize the genius embodied in every one of your grants.

    Yes, there is noise in the system, and sometimes that noise means the difference between within and outside the payline. But there is a remarkable amount of consistency among reviewers when it comes to separating the discussed and non-discussed grants, and separating the top quarter from the second quarter. If you are never making it into the top quarter, then that is almost certainly a “you” problem, either with the attractiveness of your science and/or your ability to write a compelling grant.

    And BTW, the prevalence of the kind of misconception you are laboring under is yet another adverse consequence of the near-complete banishing of junior faculty from study sections. If you had served ad hoc a few times, you would see that what I am telling you is 100% correct. But when the study section is just a mysterious black box, it is hard not to imagine the worst.

  54. Pinko Punko Says:

    I don’t like the allowance for the third reviewer to be so succinct as to be meaningless. I think there are issues with feeling like it is only worth the time to spend on the 4 grants from BSDs in the pile and less on anyone else’s. This, I feel, is where the lazy Rev 3 can be difficult. That said, if a grant is discussed, the Rev 3 scores should be diluted by the rest of the panel. I think some frustration comes when an unengaged review brings a grant down into the triage zone. This is why I try to engage all grants equally and write more of a full review when I am the 3rd reviewer.

  55. Viroprof Says:

    Thanks CPP. I appreciate you taking the time to respond with your experience.

  56. physioprof Says:

    DoucheMonkey and I have been sharing our experiences of the NIH system for almost eight years, with the goal of giving people as much information as possible to increase their chances of success. Advocating for policies that make the system fairer and more transparent is a good thing to do, and complaining about those things that make it unfair and opaque is part of that, but that’s completely orthogonal to figuring out how to succeed within the system as it currently stands.

  57. Pinko Punko Says:

    Not being part of the problem on the review side is a tiny piece of the puzzle that can be provided at an individual level.

  58. drugmonkey Says:

    Very well put Pinko Punko. Also, be the change you want to see.

    CPP is, as usual when he’s being serious, spot on. Kvetch, sure, but also learn to enhance your chances for success within the system as you find it. And get on study section as an ad hoc as soon as possible if you have not done so. It really is good for your mental health, if nothing else.

  59. e-rock Says:

    I have no sympathy for the over-worked, tired, bored, distracted excuse for cursory examination and commission of errors about the material reviewers are charged with evaluating and writing about. It is amateur (childish) BS. I do not tolerate it from my colleagues and I do not let trainees get away with it. In what other profession is it okay to produce work that lacks factual consistency, while the perpetrators have zero accountability and the consumers (the NIH and applicants) either blame themselves or accept substandard work as the norm?

    I guess I am frustrated because I have recently reviewed publications that made amateur mistakes where it’s obvious the oldie on the paper is absentee, and it’s a waste of my time to teach their trainees how to prepare a manuscript through the peer review system. Does the too tired/distracted/over-worked excuse also fly in sending out manuscripts, analyzing data, mentoring trainees? It seems that in consolidating power and resources, the big shots can’t handle the responsibility that comes along with it.

    The old prof to whom I owe fealty has passed grant apps down to me (more than I can count) for review, critiquing, scoring. I promise you that it is not hard to make accurate statements about the document in front of you, even with all the distractions and competing duties. It’s not hard to apply the review criteria and the 424 instructions. If Oldie is too tired or distracted to continue to produce professional-quality scholarly evaluations, find the door and call it a success.

  60. Joe Says:

    E-rock,
    From my experience on study sections, I would be surprised if many score-driving factual inaccuracies made it through discussion without being corrected. During the “read phase” before the meeting, you get to see the initial reviews of the other reviewers. If you scored the proposal a “1” or a “2” and one of the other reviewers gave it a “4”, then you are surely going to go and see what it is that that guy is finding as a problem and figure out whether that is a real problem or not. You will then carefully correct the inaccuracy during your review or the discussion. It may well be that the factually-inaccurate reviewer never goes back and changes his review, and then the applicant won’t know what was said or corrected. (That’s a good reason to talk with your PO.)
    As for people that send in crappy manuscripts for publication, I’m completely with you.

  61. AcademicLurker Says:

    @Joe: That’s been my experience as well. Significantly divergent scores generally lead to a pretty thorough discussion in order to figure out what the source of the difference is. Sometimes it’s just a genuine disagreement but sometimes there is a misunderstanding or misreading involved.

    And of course, sometimes the misunderstanding might actually be beneficial to the applicant. I was once out of step with the other 2 reviewers in the positive direction. They were specialists in the technique in question while I was experienced with the particular system being studied. The application of the technique that struck me as CoolNeato struck the specialists as dubious at best, and I ended up deferring to their judgment on the matter.

  62. drugmonkey Says:

    I have no sympathy for

    Do you review grants? How often?

  63. drugmonkey Says:

    Significantly divergent scores generally lead to a pretty thorough discussion in order to figure out what the source of the difference is.

    endorse.

    What I also think is that when people start bleating on about the errors* in review, they often are talking about a proposal that was not even close to the money in terms of preliminary scores. These minor errors were not, contrary to claims from the irritated applicant, the reason for the general location of the score.

    *and of course here we mean genuine errors, not a difference of scientific opinion.

  64. drugmonkey Says:

    I feel compelled to restate PhysioProffe’s point as well.

    If you occasionally get a clear error of fact that made a material difference in the categorical outcome of your grant, well, this is the gig. deal. submit some more proposals.

    If you get errors of fact on every single one of your proposals, then something about your grant game needs fixing*. It could be the study sections you are choosing, it could be your grantsmithing. Somewhere, something is deeply wrong if this is your majority experience.

    * it could be you confuse a difference of scientific opinion with “clear errors of fact”. that’s been known to happen to the occasional academic scientist.


  65. I promise you that it is not hard to make accurate statements about the document in front of you, even with all the distractions and competing duties. It’s not hard to apply the review criteria and the 424 instructions.

    There is only so much time in the day, and if a grant is generally boring/pointless or so shittily written/formatted that it can’t be assessed efficiently, then it is unfair to the grants that are generally decent and well-written/formatted to take time away from *their* careful assessment and ranking to putz around with the garbage.

    And yeah, no one who has ever served on NIH study section would ever assert something as absurd as this.

  66. E-rock Says:

    I’ve gotten the “well written/organized” praise several times. Once, a grant submitted with someone else as PI was praised for the writing, while the same prose was critiqued as wandering when submitted by me as PI on a previous submission. The truth is my pub record wasn’t up to snuff to be funded that round (I can take that criticism) and they had to find reasons to reject it using justifications from the research plan without having read it, so they picked a criticism that’s impossible to rebut. The factual errors are about experiments and models not in the grant, factual mistakes about the team’s record. Wrong institution. Saying something is missing when there’s a section heading for it. If an idea is so crappy and an applicant so incompetent that a reviewer can’t be bothered to get through it, just score the Investigators a 9 instead of the token 1, and be honest, don’t make stuff up. Standards are absurd, I love it.


  67. Standards aren’t absurd. What is absurd is the assumption that there are no costs to applying standards as exhaustively and carefully as possible to every grant under review.

    There is a very good reason, e.g., that only the top half of applications are discussed by the review panel. Similarly, there is a very good reason why individual reviewers differentially allocate their reviewing time and effort, spending the most time and effort on the grants that are in the range where the detailed comments and meticulously considered scoring can make a difference.

  68. drugmonkey Says:

    so they picked a criticism that’s impossible to rebut.

    I notice that you assume the laudatory comment was the correct one. How do you know that that wasn’t the error?

    🙂

  69. E-rock Says:

    Touché, indeed.


  70. […] science policy topics have led to as much discussion as the NIH policy with regard to the number of amendments allowable for grant applications. This […]


  71. […] funded. As you know, NIH banned any additional revisions past the A1 stage back in 2009. Recently, they have decided to stop scrutinizing "new" applications for similarity with previously reviewed and not-funded applications. This is all well and good but how should we […]

