Your Grant in Review: How do you know when a study section is a good or bad "fit"?

June 27, 2013

This is my query of the day to you, Dear Reader.

We’ve discussed the basics in the past, but here’s a quick overview.

1) Since the priority score and percentile rank of your grant application are all-important (not exclusively so, but HEAVILY so), it is critical that it be reviewed by the right panel of reviewers.

2) You are allowed to request in your cover letter that the CSR route your NIH grant application to a particular study section for review.

3) Standing study section descriptions are available at the CSR website as are the standing rosters and the rosters for the prior three rounds of review (i.e., including any ad hoc reviewers).

4) RePORTER allows you to search for grants by study section which gives you a pretty good idea of what they really, really like.

5) You can, therefore, use this information to slant your grant application towards the study section in which you hope it will be reviewed.

A couple of Twitts from @drIgg today raised the question of study section “fit”. Presumably this is related to an applicant concluding that despite all the above, he or she has not managed to get many of his or her applications reviewed by the properly “fitting” panel of reviewers.

This was related to the observation that despite one’s request, and despite hitting what seem to be the right keywords, it is still possible that CSR will assign your grant to some other study section. It has happened to me a few times and it is very annoying. But does this mean these applications didn’t get the right fit?

I don’t know how one would tell.

As I’ve related on occasion, I’ve obtained the largest number of triages from a study section that has also handed me some fundable scores over the past *cough*cough*cough* years. This is usually by way of addressing people’s conclusion after the first 1, 2 or maybe 3 submissions that “this study section HATES me!!!”. In my case I really think this section is a good fit for a lot of my work, and therefore my proposals, so the logic is inescapable. Send a given section a lot of apps and they are going to triage a lot of them. Even if the “fit” is top-notch.

It is also the case that there can be a process of getting to know a study section. Of getting to know the subtleties of how they tend to feel about different aspects of the grant structure. Is it a section that is really swayed by Innovation and couldn’t give a fig about detailed Interpretations, Alternatives and Potential Pitfalls? Or is it an orthodox StockCritiqueSpewing type of section that prioritizes structure over content? Do they like to see it chock-full of ideas or do they wring their hands over feasibility? On the other side, I assert there is a certain sympathy vote that emerges after a section has reviewed a half dozen of your proposals and never found themselves able to give you a top score. Yeah, it happens. Deal. Less perniciously, I would say that you may actually convince the section of the importance of something that you are proposing through an arc of many proposal rounds*.

This leaves me rather confused as to how one would be able to draw strong conclusions about “fit” without a substantial number of summary statements in hand.

It also speaks to something that every applicant should keep in the back of his or her head. If you can never find what you think is a good fit with a section, there are only a few options that I can think of.
1) You do this amazing cross-disciplinary shit that nobody really understands.
2) Your applications actually suck and nobody is going to review them well.
3) You are imagining some Rainbow Fairy Care-a-lot Study section that doesn’t actually exist.

What do you think are the signs of a good or bad “fit” with a study section, Dear Reader? I’m curious.
__
*I have seen situations where a proposal was explicitly noted to be on its fourth or fifth round with a section (this was in the A2 days).

Additional Reading:
Study Section: Act I
Study Section: Act II

39 Responses to “Your Grant in Review: How do you know when a study section is a good or bad ‘fit’?”


  1. I know I’m a broken record with this shitte, but the best study section is usually not the one with the deepest expertise in what you are proposing. I always aim for enough familiarity with the area to see the significance of the proposed studies, but not enough to see the weaknesses with the details.


  2. qaz Says:

    The best study section is the one with your friends on it, the one that believes the bunny hopping that you do really matters.


  3. DrugMonkey Says:

    Well those are diametrically opposed views so I’ll split the difference…you need to be submitting to both kinds!


  4. The Other Dave Says:

    I am with Comrade. The people who know your stuff really well…

    1) Are direct competitors
    2) Are not competing because they think it’s a bad idea

    They probably also reviewed your half-assed first submission papers and saw and thought about all the problems with your shit that others don’t see. And they saw your grad students at meetings with their crummy posters talking about what the results are really like rather than the super rainbow land version of things you are painting in the proposal.

    I try really hard to be an advocate for colleagues, but honestly, I find myself most enthusiastic about proposals where I only have peripheral expertise. The stuff that I could do in my own lab, I always find problems with.

    But qaz is right too. If you have friends on the panel, they won’t get in the way when someone who doesn’t know better champions your proposal.

    You need a champion to get funded. That will be someone close enough to appreciate your stuff but not see the flaws, like Comrade says. And you need friends, like qaz says, who will not get in the way of that champion’s enthusiasm.


  5. qaz Says:

    Man, you people are harsh. (Or at least you live in harsh worlds.) I try to look at the big picture of whether this person doing this work is likely to produce good science. (NIH likes the word “high-impact”. I’ll let them have that for now.) In my field, that seems pretty typical.

    I do try to identify potential problems and comment on ways to improve the grant (see long ongoing historical argument between me and DM about whether reviewers should be telling investigators how to improve their science). In particular, when I’m reviewing a grant, a problem that’s correctable gets mentioned but doesn’t hurt the score that much. I also find that people in my field recognize the compromises the field has to make due to technological and other limitations. (This is what makes a field bunny hopping – everyone in the bunny hopping field has accepted that one can study hopping but not running because getting rabbits to run is too difficult.)

    In my experience the two biggest things that kill grants are (1) people in nearby fields who have made different compromises with reality and thus attack bunny hopping for not doing the controls that studying cats sleeping requires and (2) the people who feel that what you are doing is studying bunny hopping and that bunny hopping has nothing to do with curing cancer.


  6. Grumble Says:

    TOD: “The people who know your stuff really well…

    1) Are direct competitors
    2) Are not competing because they think it’s a bad idea”

    #2 is horseshit. There are often lots and lots of really good experiments that you’ll never have time for. So when I see a grant that is right smack in my field, from a competitor I respect, I’ll give it a rave review if it proposes the right experiments and controls to thoroughly test the hypothesis.

    QAZ: “In my experience the two biggest things that kill grants are (1) people in nearby fields who have made different compromises with reality and thus attack bunny hopping for not doing the controls that studying cats sleeping requires”

    Ain’t that the truth. I’ve seen this too as a reviewer, and I find it infuriating. I’ve seen perfectly good grants get trashed because reviewers in slightly different fields insist that the grant be written the way they would write a grant. Because this seems to be a recurrent problem, I’ve actually complained loudly to the SRA for not including enough reviewers in the specific subfield I’m in.


  7. meshugena313 Says:

    As an as-yet unfunded ESI with 4 scored R01s (and only 1 triaged…), this is exactly the situation I’m going through, and I’m unclear on how to get traction. My most recent R01 went to the study section I targeted, which was nominally the most appropriate but had general rather than direct expertise in any aspect of my proposal, even though I had specifically requested additional expertise from CSR. Regardless, the reviewers praised me and the “multi-pronged experimental approaches” that seem to be the calling card of this study section. However, they trashed my model organism of choice, basically killing the proposal (some valid critique, some ridiculous).

    So do I keep beating them over the head with why I should use this model organism, or go elsewhere?


  8. rs Says:

    I hope CPP is right. After getting burned on my last 2 proposals, which went to the study section with my close technical expertise, I sent my last proposal to the study section that is relevant to the bigger question, but not in my technical specialty.


  9. DrugMonkey Says:

    Has this been a consistent result at this section? Or first time? And how badly did you get hammered? 4s? 5s? 9s?


  10. rs Says:

    My first one had 4s and 5s with occasional 1s and 2s (it was my first NIH submission, with a very fresh idea in the field), and the second one had 2s and 3s with occasional 4s. These were R21s. There were some really nice comments pointing out fine technical flaws in the study (basically killing the grants on the technical details). I plan to resubmit both proposals next month addressing the comments, but I didn’t want to take a chance with my R01 so I sent it to a different study section.


  11. The Other Dave Says:

    Grumble: “There are often lots and lots of really good experiments that you’ll never have time for.”

    There is always time for a good experiment. We could do with more of them.

    Related to that…

    qaz: “In my experience the two biggest things that kill grants are (1) people in nearby fields who have made different compromises with reality and thus attack bunny hopping for not doing the controls that studying cats sleeping requires and (2) the people who feel that what you are doing is studying bunny hopping and that bunny hopping has nothing to do with curing cancer.”

    I am one of those reviewers, and you know what? Tough. Good science is good science. I don’t see any reason why certain proposals should be funded just because a field is mired in mediocrity. “But… it’s hard!” wasn’t a good excuse when you were an undergraduate, and it’s not a good excuse now. Science is about getting it right.


  12. DrIgg Says:

    I agree, it happens and you have to learn to deal with it. Having only submitted a handful of applications, I have a small sample size.

    The most egregious grant/section/reviewer mismatches that I have seen are with F31/F30 applications (from my students and others). I won’t get too specific, but here is one example. A student’s proposal for developing drug abuse treatments was routed to a neuroscience section and received 1s and 2s from the first two reviewers, but the third harshly criticized it, opening with something like “significance is moderate since the drug users voluntarily take the chemicals”. What? We shouldn’t treat diabetes or alcoholism either? They then went on to spray it with 4s and 6s. Student demoralized, would not resubmit. Education in NIH grant submission complete.

    So, was this an example of poor study section “fit” or just a bad reviewer? Or a good scientist whose expertise was not matched to the application?



  13. “Student demoralized, would not resubmit.”

    What kind of shitteasse mentor allows a trainee with 1s and 2s from two reviewers to fail to resubmit an NRSA? And BTW, fellowship study sections have the broadest purview of any review panels at NIH, and the way to properly couch the research plan is for that reason (and others, of course) very different from that for an R grant.


  14. DrIgg Says:

    “What kind of shitteasse mentor allows a trainee with 1s and 2s from two reviewers to fail to resubmit an NRSA?”

    The kind that doesn’t want to waste the student’s time and set them up for that crap again. The kind that weighs the balance of time left in the student’s studies with the relative merits of having an NRSA. The kind that meets with the student and dissertation committee to consider the likelihood of moving from “not discussed” to funded. If reviewer 3 could sway the other two reviewers to not discuss the grant even with their average scores of 2, we felt we were not going to have a chance with this person still on the panel.

    Student is remoralized. All is well.



  15. How would you even know if that person would get the resubmission? I am flabbergasted at this decision.


  16. Grumble Says:

    “I am one of those reviewers, and you know what? Tough. Good science is good science.”

    No, it’s not that simple. In your field, doing X, Y, and Z might be considered absolutely necessary controls. In a slightly related field, an applicant might try to ask a related but different question from what is typically asked in your field. In that case, your favorite controls might be unnecessary.

    It all hinges on whether the experiments are sufficient to test the hypothesis. Some reviewers are so buried in their own little bunnyhopology field that they forget that the caveats that are important to consider for their own experiments are really not as important when different questions are being asked. I’ve seen this more than once, and it drives me up the fucking wall.


  17. The Other Dave Says:

    Grumble: I agree that controls would differ depending on the experiment. Obviously.


  18. The Other Dave Says:

    But proper controls are still necessary.


  19. drugmonkey Says:

    “Has this been a consistent result at this section? Or first time? And how badly did you get hammered? 4s? 5s? 9s?”

    sorry, rs, I meant that for meshugena313’s comment….


  20. drugmonkey Says:

    “Good science is good science. I don’t see any reason why certain proposals should be funded just because a field is mired in mediocrity.”

    HAHAHHAHA. Right dude, right. “Good science” is what the reviewer happens to find of personal interest first and foremost. Since we’re not all interested in the exact same thing……

    Should certain proposals be funded just because a field is blinded by *GLAMOUR*BLINGAZA* techniques?

    “but the third harshly criticized it, opening with something like ‘significance is moderate since the drug users voluntarily take the chemicals’. What? We shouldn’t treat diabetes or alcoholism either? They then went on to spray it with 4s and 6s. Student demoralized, would not resubmit. Education in NIH grant submission complete.”

    This is totally answerable on resubmission. I agree with PP that this is no reason not to resubmit. This is a completely standard situation and having 1s and 2s from 2/3 reviewers (assuming you mean the important significance, innovation and approach criteria, not the throwaways) should be a huge encouragement.

    “The kind that weighs the balance of time left in the student’s studies with the relative merits of having an NRSA.”

    I sympathize with this and agree it has to be a consideration. But since we know revision is the default expectation now even on fellowships and this person had two good reviewers….. I’d go with resubmit.

    “Some reviewers are so buried in their own little bunnyhopology field that they forget that the caveats that are important to consider for their own experiments are really not as important when different questions are being asked.”

    Also that their navel-inspecting, culturally demanded controls really don’t mean jack squatte of any relevance to real advance and nobody needs to do them anymore.


  21. becca Says:

    Back in the day when I was doing molecular epidemiology, my PI really griped like crazy when one of his grants was sent to an overly epidemiological study section. This was for two reasons:
    1) he’d asked it to be sent to his customary more molecular study section and
    2) while very successful at getting grants in general, he sucked at selling his molecular methods to statisticians.

    In fairness to the reviewers, he worked with collaborators to do a lot of the statistical stuff, so he probably sucked at explaining the statistics. But also, the epidemiologists would sometimes misunderstand what a “SNP” was, resulting in much ridicule. In fairness to my ex-PI, he probably *could* sell molecular science to epidemiologists, but he wrote the grant THINKING he was selling epidemiology to molecular scientists.

    I’ve since seen that kind of study section issue come up in other cross-disciplinary stuff. You don’t have to be an amazing multidisciplinary person of awesomeness to get caught up in it; just blending two different fields where people have very different assumptions will do it.


  22. Dave Says:

    In my experience in trying to get a K-grant (and with post-docs who have gotten training grants at the NIH recently), there is a definite bias towards prioritizing awards to A1s at some ICs, especially at those institutes that have been squeezing training grant budgets a fair bit. It would be madness not to resubmit an application that was in any way close to the payline.


  23. meshugena313 Says:

    DM: My approach scores were 2-6 on the grants prior to the recalibration, and 4-6 after the recalibration, all in this same study section. This section really wants tightly focused in vitro-to-in vivo proposals. I seem to mainly get killed on the feasibility or significance of the in vivo studies, so for one of these proposals I’m going to write an A0 for a different, primarily “in vitro” study section.


  24. drugmonkey Says:

    yeah, I think you keep slugging, meshugena313. try elsewhere of course but keep trying at the first one. no chance of changing model organisms for the in vivo? any potential additional prelim data that would convince/budge them?


  25. meshugena313 Says:

    For this A1 I’m gonna try an explanation of why this is the appropriate model org, additional data, and elimination of the most problematic aim. Possible collaboration for another model system, but I don’t have the personnel to pursue the most technically tractable (and most popular…) approach for prelim data, as my two best postdocs are heading back to Europe shortly… I thought I would get to do some of the experiments myself, but that ain’t happening with grant writing, paper writing, course leading (and 3 kids!).

    The pink sheets are both a positive and a negative – I get a lot of praise for me, my productive career, and my prelim results, but get killed on specifics of the approach. It’s almost as if I have to do most of the proposal before proposing it.

    Hell, Reviewer 1 in the most recent grant wrote “Strengths of this application are its significance, innovation, investigators, environment, and mechanistic advances that are likely to derive from the studies of Aims 1 and 2. My enthusiasm for this application was substantially diminished by the risky, open-ended, and descriptive nature of Aim 3”. WTF? I guess ESIs aren’t allowed to have any risk whatsoever. In the end, not enough $ for so many proposals…

    Thanks, this discussion is very helpful.


  26. DrIgg Says:

    “I sympathize with this and agree it has to be a consideration. But since we know revision is the default expectation now even on fellowships and this person had two good reviewers….. I’d go with resubmit.”

    Good point. We might wipe the blood off of the proposal and head back into the fray yet.


  27. drugmonkey Says:

    “It’s almost as if I have to do most of the proposal before proposing it.”

    Hahaha. Yes, true.

    “‘My enthusiasm for this application was substantially diminished by the risky, open-ended, and descriptive nature of Aim 3.’ WTF? I guess ESIs aren’t allowed to have any risk whatsoever.”

    this is why you pursue with the same section. they are begging you to reduce the risk so do so. the game is that then it raises a barrier to them complaining about innovation in the next round. put in more of whatever the “mechanistic advances” are that they liked so much. give them what they want, basically.


  28. Joe Says:

    “It’s almost as if I have to do most of the proposal before proposing it.”
    Of course the most convincing way to address feasibility is to provide the prelim data. I have seen this lead to the publication coming out before the application is reviewed. Then a reviewer will say “Well, Aim 1 is already done, so really there’s not much left here in the proposal.”


  29. DrugMonkey Says:

    Yep you definitely want to time that correctly Joe.


  30. The Other Dave Says:

    DM writes:“HAHAHHAHA. Right dude, right. “Good science” is what the reviewer happens to find of personal interest first and foremost. Since we’re not all interested in the exact same thing……”

    You misunderstand me. By ‘science’, I mean ‘the rational epistemology currently used to greatest effect, the latter evaluated by development of useful technology’. In other words: Activities that produce useful stuff. For NIH, ‘useful stuff’ means ‘insight into human biology function and dysfunction, and/or methods for the manipulation thereof.’ Useful Stuff does NOT include data collection from dumping the Sigma catalog on weird-ass cultured tumor cells, or angels-on-the-head-of-a-pin explorations of comparative insect anatomy, even though such things can be great science. Stuff can also be relevant to Human Health and be Bad Science. A colleague once told me about a special emphasis panel for a particular disease du jour where they were reviewing a bunch of proposals evaluating prayer and stuff, and post hoc analysis of childhood crap, with no controls or statistical rigor. The panel gave low scores, and even argued that NOTHING should be funded, but because money was set aside, they knew some of the stuff would get funded.

    DM also writez: “Should certain proposals be funded just because a field is blinded by *GLAMOUR*BLINGAZA* techniques?”

    No. I have often said: Why is one bad experiment unpublishable, but if you do it 10,000 times and call it ‘omic’ it gets the cover of Cell?

    The biggest fault I see in proposals is not that they don’t have an interesting/important question, but rather that their plans will not answer that question.


  31. Grumble Says:

    “It’s almost as if I have to do most of the proposal before proposing it.”

    This statement is inaccurate unless you remove the “almost.” And I am dead serious. This is hands down the best way to get a good score.

    Reason number fucktillion and one why the NIH grant system is Broken Beyond Repair.


  32. meshugena313 Says:

    So to get back to DM’s topic, does the choice of study section influence how much of the proposal has to be done already in order for them to be satisfied about feasibility?

    Is CPP right that reviewers outside the field may be better for grant success? I did try that once and CSR assigned the proposal to a different section, eRA Commons had it sitting there for a while, and then I think the SRO must have kicked it back to the “regular” section. So no guarantees with that path.


  33. DrugMonkey Says:

    I think the central tendencies for how much preliminary data is expected would vary from section to section, meshugena313.

    Also the degree to which highly specific Preliminary Data are demanded versus just showing technical competence in key domains.


  34. Grumble Says:

    It’s not a question of whether doing the whole 9 yards worth of “preliminary” experiments is “demanded” or not. Sometimes it is and sometimes it isn’t, but the point is that you don’t know if it will be demanded or not, and if it’s not demanded it certainly won’t hurt your score if you do half the experiments before writing the grant (unless you publish first).


  35. meshugena313 Says:

    Any thoughts on reducing the requested length of support to 3 years (from 5), but still full modular, to perhaps convince the study section that I’m worth the $? That never made sense to me; it seems like leaving money on the table for no reason, but I’ve heard conflicting advice. I know that CPP and others have argued against asking for less time, but perhaps in this insanely tight funding environment it now makes more sense?


  36. qaz Says:

    meshugena313 – Money is supposed to be discussed only after the scores are given. In my experience, people will sometimes over-scrutinize an application that is very expensive (think $495k/year), but no one worries about modular budgets. In fact, in the study section I’m on, the “any budget issues?” question is often answered with “it’s modular”. I’ve never seen anyone suggest cutting a modular budget. Certainly no one is going to change their score because you asked for 5 years of funding.

    Cutting your budget early is only asking for trouble. How are you going to renew a 3-year grant? You need to start working on your renewal in year 1! Leave it at five years, and answer the study section’s comments. They’ve told you what you have to do. Now you have to do it.


  37. Grumble Says:

    Go for 5 years. As a reviewer, I never question a budget that’s 5 years at the full modular limit (unless it proposes too few experiments to fill 5 years, which rarely happens). It would not make me any happier to see a 3 year modular grant than a 5 year. Why should it?

    I do, however, get a little grumpy when I see grants from Exalted Ones that break the modular limit, and are chock full of salary lines for all kinds of assistant research scientist types who don’t seem to be necessary to get the work done. I might support the king and his horsemen, but not the entire goddamn kingdom. Write some more grants like the rest of us, your majesty.


  38. AcademicLurker Says:

    Seconding what qaz and Grumble said above.

    You might attract unfavorable notice with a non-modular budget that’s truly extravagant (although people are not supposed to pay attention to that when scoring), but you won’t get any brownie points at all for requesting less than a full modular budget.


  39. Leslie Says:

    Weighing in on the idea of sending to a study section that is a little bit outside your direct research area: I like it, though I would put in a plug for having the writing in that proposal be above reproach. The really bad reviews (or unscored applications) I see (having done proposal submissions for many years now from the writing/editing end) often happen when a poorly written and organized proposal goes to a study section that is not entirely familiar with the work. Then people just throw up their hands and say forget it. It’s a lot easier to pull that off if the text is crystal clear and spins a good story that a reasonably intelligent person from any scientific discipline can grasp.


