Quality of grant review

June 13, 2014

Where are all the outraged complaints about the quality of grant peer review and Errors Of Fact for grants that were scored within the payline?

I mean, if the problem is with bad review it should plague the top scoring applications as much as the rest of the distribution. Right?

47 Responses to “Quality of grant review”

  1. dr24hours Says:

    And it does. I’ve had funded grants with STUPID reviewer comments. But why complain when the result is good? Same with a manuscript. If the editorial decision is “accept”, do you kvetch about the reviewer’s idiotic comment? Maybe a little, but you don’t raise hell about it.


  2. Jim Woodgett Says:

    The problem is that bad behavior thrives in the dark. Grant and manuscript reviews would likely be far more constructive and show less bias if the writers knew their words would not be limited to the hapless author/applicant and editor/SRO. If we could deal with/nullify the issue of reviewer intimidation (e.g. Junior Faculty worried about retribution for their honesty), shining light on peer reviews in general would be a good thing (and allow better recognition of (good) reviewer efforts).

    I bet some reviewers have made selfless suggestions that took grants/papers in better directions. Of course, the opposite is also true, where great ideas have been taken to the woodshed and the applicant/author has disappeared from the “system” sometime thereafter.


  3. laserboy Says:

    Isn’t the implicit assumption here that the stupid reviewer comments are evenly distributed? Is that actually supported? For instance, if such comments are routinely fatal for a grant application, then you would expect to find all of them in the unfunded grants and none in the funded grants. If they only weighted the odds against a grant, then you might expect a tail of them in the funded grants. In any case, without the distribution, it is hard to comment on the lack or presence of such an outcry.

    I don’t know how the NIH works, but in the funding system that I work with, the final decision is taken by a jury that is picked, pretty much at random, from the local scientific community. Reviewer comments that are factually incorrect can be rebutted, usually with ease. However, the local opinion is that these comments still leave a lasting negative impression and that a substantive reply actually gives weight to their (incorrect) arguments.

    I am not sure how true this is, and my personal experience is that if the reviewers attack detailed stuff, like why X technique, rather than Y (no matter how correct they are), then you will probably be okay. But if anyone questions how interesting the research question is (even if it is based on their own ignorance), then you are dead.


  4. LincolnX Says:

    What Laserboy said. Having received a crapton of reviews I can assure you that the distribution of off-the-wall commentary skews toward lower scored proposals. I think part of it is that some reviewers simply don’t like a particular proposal and start pulling things out of their nethers to kill it.


  5. dr24hours Says:

    Interesting. If we assume that likelihood of a stupid reviewer comment and quality of grant proposal are independent (not necessarily true!), then the tendency of stupid comments to lower grant scores would indeed suggest you’d get fewer of them in funded grants, thus there’s less to complain about.

    We see this in sports all the time. Very good teams are luckier than bad teams. But not because they’re good. They tend to be very good because they were pretty good, and they happened to be lucky.



  6. No one ever complains about this, because it never happens: anything anyone ever says that is favorable is always correct, and anything anyone ever says that is unfavorable is always off-the-wall biased commentary and/or factually erroneous.


  7. drugmonkey Says:

    But they screwed my chances for R37, PP! APPEAL!!!!!


  8. catherineee Says:

    For sure DM your reviewer was fucken CPP.


  9. anonymous postdoc (shrewshrew) Says:

    I would say LincolnX is on the mark. This corresponds to a recent experience I had with a symposium proposal. The review comments were split between fair and bizarre and (thanks to the mentoring of these blogges) I concluded something else must be going on.

    When I saw the symposia list several months later, the originator of the very topic I wanted to talk about was scheduled for a symposium, and another senior-level symposium was scheduled on related work. Thus the real criticism was likely “we already have some more important people on this topic, thanks” which I think is fairer, actually, than the criticism that was given, which was “this symposium format, which is explicitly for junior researchers, should have more senior researchers in it.” At least I was on the mark that it was a topic ripe for discussion.

    Anyway, not a grant, specifically, but it was a rapid demonstration of the truth that the listed reasons for rejection often have little bearing on the actual reasons for rejection. The corollary question is, do the listed reasons for acceptance have any bearing on the actual reasons for acceptance? Dunno. Acceptance is so rare.


  10. e-rock Says:

    It has been suggested to me that the stupid comments/errors/inaccuracies/non sequiturs are cleared up during discussion of the grant. So if a grant is scored, it is discussed, and these stupid things are removed or cleared up. If the grant is not discussed, the stupid crap stays. It’s like asking why dishes that went through the washer are clean but the ones that didn’t are still dirty.


  11. dr24hours Says:

    I did a quick simulation model at Complex Roots.
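[The actual model lives at Complex Roots; what follows is only a minimal sketch of that kind of simulation, with every parameter invented for illustration. The assumption being tested is the one dr24hours stated above: "stupid" comments strike applications independently of merit but drag scores down, so the funded tail ends up carrying fewer of them.]

```python
# Minimal sketch (not dr24hours' actual model): stupid comments hit grants
# at random, cost points, and only the top of the score distribution is funded.
# Every number here is a made-up illustration value.
import random

random.seed(1)

N_GRANTS = 10_000
P_STUPID = 0.30    # assumed chance any application draws a stupid comment
PENALTY = 10       # assumed points such a comment costs
PAYLINE = 0.10     # fund the top 10% of scores

grants = []
for _ in range(N_GRANTS):
    merit = random.gauss(70, 10)             # "true" quality, arbitrary scale
    stupid = random.random() < P_STUPID      # independent of merit
    grants.append((merit - (PENALTY if stupid else 0), stupid))

grants.sort(reverse=True)                    # best scores first
cutoff = int(N_GRANTS * PAYLINE)
funded, unfunded = grants[:cutoff], grants[cutoff:]

rate = lambda pool: sum(flag for _, flag in pool) / len(pool)
print(f"stupid-comment rate among funded:   {rate(funded):.1%}")
print(f"stupid-comment rate among unfunded: {rate(unfunded):.1%}")
```

[With these made-up numbers the funded group shows a stupid-comment rate noticeably below the 30% base rate, which is the selection effect dr24hours and Busy describe.]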


  12. Busy Says:

    Just like dr24hours did, I figured that the ones that suffer the most are the middle ranked ones. Minus ten points of random downward noise has a lot less impact on a grant scored 100 than on a grant scored 80.

    Also what LincolnX said. Many people reach a judgment intuitively (and not necessarily incorrectly) and then just make up reasons to justify it in the review.


  13. drugmonkey Says:

    e-rock: My experience is that reviewers of NIH grants are often too lazy to go back and revise their comments to accord with the discussion. However, the main thrust of your comment is correct, in my opinion: if a grant is discussed, the review outcome rarely hinges on clear ErroRZ of FaCT and stupid comments. These do indeed tend to get clarified during Discussion.


  14. Grumble Says:

    All reviewer comments are stupid, including mine.

    Someone writes a grant saying, “I will do X experiment and it will be wonderful because it will answer this big important scientific question.” Reviewers write, “Experiment X is well supported by preliminary data and is highly significant and likely to have a big impact.” So the PI gets the money.

    But, because the beautiful preliminary data was based on an N of 2, the PI finds she can’t do Experiment X. So she does something else.

    As you can see, the reviewer comments were extremely stupid. They should have demanded much better preliminary data. Except that if they did that routinely, no one would ever get funded because no one would have the money to get the preliminary data.

    I’ve been that reviewer, and I’ve been that PI – in fact, this scenario plays itself out almost every single time I’ve written and gotten a grant. I don’t complain about stupid reviewer comments, though. I complain about the stupid system that forces PIs to write necessarily stupid grants and reviewers to write necessarily stupid reviews.


  15. e-rock Says:

    Another suggestion might be that reviewers don’t closely read the applications that they intend not to discuss and to give high (worse) scores. With my psychic abilities, I think I’ve managed to decipher a few things.

    1) The summary sentence is the most important thing, and it is the deciding factor in how they “felt” about the proposal.

    2) If the comment in this section says something unfavorable about/toward ME (which is repeated in the Investigator section), that means they question my ability to carry out the work proposed (as stated in the aims) — and it is unlikely that they read the details of the application itself because … why bother if you think the Applicant is not qualified in the first place? (As CPP and you have pointed out to me, reviewers are too tired and busy to be thorough.)

    3) If Investigator is boilerplate (“seems reasonable, strong team of Old Dudes on board”), but there were several comments in the section which seem Confused by the application …. and then in the Approach comments: bizarro-world non sequiturs, dinged experiments that weren’t there, dinged the “lack” of things that WERE, in fact, in the proposal; ignored explanations of why this model and not that model; etc. — it suggests to me that they did not like the Overall Point of what was proposed in the Specific Aims and did not read past that. They may have skimmed the Approach to find a keyword or phrase with which to put a “weakness” bullet point in order to ad hoc justify (to whom I don’t know) that they will not discuss or give a fundable score to this application. Example … find the sentence, “We considered using Model A, which has b & c strengths but d & f weaknesses. Because we have had past success with Model X, which has x & y strengths, we will use Model X with the caveat of its weakness z.” Reviewer writes: “They are proposing to use model A, which has d & f weaknesses.” Or: “The ability to perform the rat surgery has not been demonstrated by the Investigators’ experience” (no animals proposed, all clinical observations).

    4) If it seems highly likely that the reviewer read the proposal, then I can use the comments to improve it.

    5) If it seems likely that they did not, then the bullet points are useless (more than useless, infuriating). I need to figure out how to fix the Specific Aims (not just the grantsmithing but CHANGE the aims to something BETTER or more Programmatic), get another pub on the ole Biosketch, find a better Old Dude.

    I would much prefer a more honest appraisal, and a reviewer to actually say: “Applicant needs to beef up the Biosketch in order to play in our sandbox” instead of making stuff up. Or: “This is a terribad idea, and here is why …. ” and not search for ad hoc justifications to fit bullet points because it makes them look stupid & incompetent. Perhaps this is a consequence of the Powerpointification of higher ed.


  16. e-rock Says:

    …. and when I wrote, “did not like” … that translates to: “did not think this topic important enough to spend money on.” or: “did not think the planned experiments actually address the problem/gap in knowledge.”

    I want to stop thinking and communicating in terms that we need to “please” reviewers or that they “like” proposals. This isn’t a pie-baking contest.


  17. rxnm Says:

    Yeah, while we’re at it, where are all the men complaining about bias in their favor?


  18. rxnm Says:

    I guess bias is just a sour grapes pretend problem.


  19. Steve Shea (@sheacshl) Says:

    Seriously? Is this a joke? What rxnm said.

    sour grapes bitching – disregard/

    What if someone were to make the same remark about glam?

    The fashion on Twitter seems to be that peer reviews of papers are often stupid and glam is a flawed game that rewards salesmanship. They are and it is. But the fashion also holds that NIH peer review is just and selects the “right” people. It isn’t and it doesn’t. The presumption is that someone who gets a good score deserved it, and someone who gets a bad score made an objective error or is a bad PI. That’s offensive nonsense. NIH grant review is (quasi) anonymous and driven by non-scientific concerns and non-linearly rewarding like glam, but there is not even a mechanism for challenging prima facie idiocy in reviews. Given the strain on reviewers, I am pretty comfortable saying it is almost absolutely the case that every peer review of my papers has been more thoughtful and detailed than every peer review of my grants.

    Grants are a game just like glam. Some people are good at one or both of these games and others aren’t, but neither has shit to do with what a good writer you are, how good your ideas are, or what a good scientist you are. Both games have tactics to deploy and entrenched interests to appease and gladhand. But somehow playing this game is noble and that game is craven?

    We need to free ourselves of the ridiculous notion that most grants are objectively “good” or “bad”. I say this as someone who has served on a grants panel. One person’s great idea by a great PI is another person’s junk. Any suggestion otherwise from those winning this particular game to those losing is insulting and disingenuous.

    /sour grapes bitching


  20. DrugMonkey Says:

    I certainly don’t hold that NIH peer review is a “just” system. Nor do I claim that any individual grant score is somehow objectively deserving. In point of fact I often argue that there is great imprecision.

    With that said, since everyone is under the same system it is not an obvious and personal tragedy when a given application comes up snake eyes.


  21. DrugMonkey Says:

    As far as “winning” and “losing” goes, a central motivating factor of this blog has always been, and continues to be, my (and PP’s and many others’) best thoughts on how to help other people succeed at the grant game. To the extent I criticize certain behaviors and attitudes, it is because I generally think they interfere with your (the junior and less-well-funded you) chances of acquiring funding. My lack of sympathy for the previously well-funded who now have to struggle like the rest of us is perhaps a bit meaner, sure. A bit disingenuous? Hardly.


  22. rxnm Says:

    Yet I think you and PP, with the “factual errors” and “eat the rich” tropes, often dismiss people’s legitimate complaints about how they are treated by the review process as being simple failures of grantsmanship (or sour grapes) on their part. For example, they were triaged because they did not take steps to anticipate/foreclose certain stock criticisms in the way they present their ideas. It’s my opinion (and limited experience) that this is terrible advice, because stock criticisms are effective (and so frequently deployed, thus their name) precisely because they are vague/subjective enough that they can always be plausibly applied from a position of authority.

    “Not innovative” = “lacking mechanistic insight” = “uncollegial” = “not our kind”

    Because stock criticisms never engage in a meaningful way with the actual science in a proposal, I think it is safe to conclude that they are lazy, pretextual mechanisms for reaching desired and foregone conclusions about an investigator or approach. Where you see a failure in grantsmanship, I see reviewers who perceive the applicant either as an outsider or otherwise undeserving of funding.

    Because these same reviewers who demonstrably, reliably freeze out new investigators (or other groups based on whatever social/scientific criteria) have been rubber stamping auto-renewals for other groups of PIs for 20+ years, it is impossible to argue that there is any kind of legitimate or consistent intellectual rigor being applied in this process. Shifting the blame onto the grantsmanship of those being frozen out of the game is bullshit. For some, there is nothing they can do to please the cadre who control the NIH purse strings in their field. I think when people find the place where they are accepted/supported within this system, it is easy to believe it is a result of superior grantsmanship. I don’t think it is.


  23. drugmonkey Says:

    Well there is some truth to that, sure. I could probably work harder on the way I say things so that it doesn’t sound like blame.

    I have said over and over that I don’t think you can write your way into funding and that all you can do is maybe tilt your odds. The only solution in my considered view is to take a lot of shots at the target. Credible shots, I hasten to add.

    I am also pretty sure I insist that funding is not any sign of some sort of objective scientific discrimination either. And I am positively repetitious about the fact that current scientific generations should never ignore how the difficulty (or lack thereof) of getting funded has changed across time.


  24. drugmonkey Says:

    I dispute, btw, your suggestion that it is actually impossible to break in if you do particular kinds of work. Until and unless I review a PI who has worked multiple angles in a credible way and still comes up dry after, I dunno, let’s say 20 submissions.


  25. rxnm Says:

    Appealing to your blogging track record when a post has pissed people off is against the internet rules, dude!


  26. drugmonkey Says:

    Where you see a failure in grantsmanship, I see reviewers who perceive the applicant either as an outsider or otherwise undeserving of funding.

    This is a fairly typical confusion of advice that allows you to maximize success under the system as it is, with endorsement of that system. You could not be more wrong. I have many disagreements with unthinking StockCriticisms and I’ve blogged many of them. As it happens I’ve had many a study section battle over them. On a couple of StockCritique items I sortof endorse, PP has fairly violent opposition. Or sometimes qaz will come along and strongly support or oppose some other point of typical review. So, y’know, diversity of opinion reigns.


  27. drugmonkey Says:

    I think there is a reasonable expectation of overall positions being taken. It is one thing to think one of my comments in isolation means something. It is another for a dude who has read quite a number of my comments to miss the point.


  28. Bob Graybeard (@BSDneuro) Says:

    I hear a lot of griping from assistant professors about how difficult it is to get your first grant. Boy howdy! I had to submit mine twice before Kuffler had to intervene. I shudder to think what I might have done if I had to wait a third time for it to print dot matrix from our lab’s PDP-11!

    Thank god Steve took care of business and “Several projections to the visual claustrum of cats” was off like a rocket. Bob’s third proposal (I don’t know how he wasn’t fired) flew that round too so we did many a line of coke off some stripper’s melons that night I can tell you!

    Enough about me. How can I help you? Well you’re probably making some simple mistakes:

    1) Talk to your program officer. I had my third R01 before I knew what a program officer was. I was so naive. I probably wouldn’t have gotten my fourth if I hadn’t figured out who they were.

    2) Formatting. My first time out I thought these were more “guidelines” or “helpful suggestions”. Turns out they are real hardasses about this stuff. Maybe some of you are making the same mistake?

    3) Have a hypothesis. For example, my hypothesis was “There are several projections to the visual claustrum in cats.” This was suspected but not known. And the grant just writes itself from there! See how easy that is? Try it, it’s formulaic but foolproof.

    4) I was young once and I know all about you kids with the hippin and hoppin and bippin and boppin. At my most debauched I never wrote more than 60% of a grant high on mescaline. And I always, ALWAYS proofread sober.

    5) If all else fails, it’s important to have fealty to a strongman. Who’s your Kuffler?

    If that doesn’t help, I’m not sure you’re cut out for this.


  29. rxnm Says:

    I know you’re not the enemy, DM… a bit touchy about arbitrary grant review bullshit at the moment.

    Get a blog, Bob.


  30. mytchondria Says:

    There are good reviewers and stupid ones. And I want my grant reviewed by the smart ones. The passionate ones.
    My job as a reviewer (not because anyone told me this BTW) is to pick my top 2-3 grants from my stack of 12 and sell the shitte out of them. I’ll look at your high scoring grant, but if you are presenting like a douche, I will run you down and put your grant head to head with the one I’m presenting and tell you why my gurl is better than that shite you just presented.
    If you don’t know enough to defend your grants, then you shouldn’t be at SS. I can not do better than that in this funding climate.
    I’m also talking to the SS chair after the meeting and telling him who the sucky reviewers are who need to go so we do a better job the next time.
    If it makes folks feel better, if I reviewed you, you may hate me. But if you reviewed with me, I know you hate me. My only armor is doing better. And it’s not that hard IME.


  31. qaz Says:

    A bad review on a funded grant is funny. A bad review on an unfunded grant is sad. A bad review on an unfunded grant that means the end of your career is tragic. No one gets mad about the first.



  32. Dude, you still don’t get it. A “bad” review on *your* funded grant could have been the erroneous favorable review that pushed it over the edge to fundable, and saved your career, while thereby killing some poor other fucke. What DoucheMonkey is trying to point out is that it is just as likely that bias, FACTUAL ERRORZ, laziness, and all the other shitte that applicants complain about when they receive what they perceive as unduly unfavorable reviews of their grants will on other occasions lead to unduly favorable reviews of their grants, including ones that end up inside the payline for that exact reason.


  33. e-rock Says:

    My interpretation of your pleading “too busy, tired, overworked, etc.” to be thorough, and the impression that certain reviewers did not read certain applications, is that reviewers do not read all grants equally well. They decide whether a grant will not get a fundable score (from them) by looking at the biosketch & specific aims. If they do not think the investigator is qualified or the Overall Point of the project is worthy, as you pointed out, there is no reason to read the entire 12 page research plan (or letters of support, or Co-I biosketches). Instead, they skim for phrases to fill out a few bullet points in the review form — this skimming generates the errors. If, on the other hand, from the bio + Aims, the reviewer thinks the grant may get a fundable score, then the reviewer reads the entire thing, looking for actual strengths and weaknesses — hence fewer errors.

    So I am not proposing that reviewer errors are the REASON grants die, but dead grants have more reviewer errors in the Summary Statement — and the Applicants’ (not objective) perception is that the reviewer errors Caused the death of the grant. (Correlation does not equal causation.) When a reviewer makes many and egregious errors, showing that they did not read the grant, I have learned to interpret that (after admittedly a lot of bitching and moaning) as a fundamental problem with the thrust of ideas or direction of research laid out in the Aims & Abstract (or a problem with my pub record). This does not mean that the entire review is useless, but the bullet points are useless and it becomes necessary to figure out how to fix it using cues other than the stated (erroneous or stock) critiques.

    I am not being apologetic for lazy reviewers either, and I also think that there may be kernels of implicit bias that creep into both the actual (not stated) reasons for rejection and into what the reviewer subconsciously chooses to make bullet points out of in the skimming process.


  34. qaz Says:

    A “bad review” is one that misses something important. A bad review that helped you is funny. A bad review that hurt you is not.

    Yes, of course, there is noise in the system. We’ve known that forever – it’s inherent in any measurement process. I’ve long said that the acceptance of noise was what worked with the OLD scoring system. You had a number plus random noise. This meant that a 2.3 had a small probability of being better than a 2.4, but a much larger probability of being better than a 3.4. That’s an old discussion that we’ve hashed through many times on this site.

    But it’s like Anna Karenina – there are a LOT more ways for something to go wrong than to go right. It’s an interesting and empirical question of whether grant review mistakes tend to preferentially drive grants up or down.

    Even more interesting is whether the grant review mistakes that permit funding actually impact scientific quality. (Remember, the reviewers were right about Columbus – he would never have been able to bring enough supplies to reach India without resupply.)
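[qaz’s score-plus-noise point can be made concrete with one assumption that is not in the comment: treat each old-style priority score as the grant’s true merit plus independent Gaussian review noise, with an invented sigma. Under that sketch a 2.3 is only barely more likely than a coin flip to be truly better than a 2.4, but far more likely to be truly better than a 3.4. The helper below is hypothetical, written only for this illustration.]

```python
# Sketch of the "score = merit + noise" intuition above. The noise sigma is
# an invented placeholder, not an estimate of real NIH review noise.
from math import erf

def p_truly_better(score_a, score_b, sigma=0.5):
    """P(grant A has better true merit than B); lower priority score = better.

    Assumes true merit of each grant is Normal(observed score, sigma^2),
    so the difference in merit has standard deviation sigma * sqrt(2).
    """
    diff = score_b - score_a          # positive when A looks better
    return 0.5 * (1 + erf(diff / (2 * sigma)))

print(f"2.3 vs 2.4: {p_truly_better(2.3, 2.4):.0%}")   # barely above a coin flip
print(f"2.3 vs 3.4: {p_truly_better(2.3, 3.4):.0%}")   # much closer to a sure thing
```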


  35. e-rock Says:

    I also think there’s a fundamental problem with this “overworked reviewer” thing. There is obviously not a labor shortage. More people want entry than there are slots. But the labor must be inefficiently distributed with respect to prestige-generating work. Prof Skymiles cannot discharge duties well — neglecting trainees, being lazy in grant review, throwing (non-prestigious) administrative duties to junior scholars. How many Assistant Profs out there have labored over the IRB or IACUC protocol which Full Prof is named “PI” on? Prof Skymiles must have demonstrated competence at some point in order to get to the level they are at, but now has accumulated more responsibilities than they can handle to high quality standards. Because certain responsibilities come with prestige & power, and there are no consequences for poor-quality work … there is no incentive to give responsibility (or credit … for the prestige-generating work) to the other laborers clamoring for entrance into the system.


  36. rxnm Says:

    CPP said: “What DoucheMonkey is trying to point out is that it is just as likely that bias, FACTUAL ERRORZ, laziness, and all the other shitte that applicants complain about when they receive what they perceive as unduly unfavorable reviews of their grants will on other occasions lead to unduly favorable reviews of their grants, including ones that end up inside the payline for that exact reason.”

    Bullshit, for all the reasons others have pointed out. Negatively biased reviewers obviously will give cursory attention to a grant they are opposed to, and look for flaws. “Factual errors” DO sometimes get sorted out for grants that are discussed, but not for those triaged. Positively biased reviewers are less likely to say stupid shit because they aren’t looking for reasons to ding an application, and it is more likely to be an approach/system/investigator they are familiar with.

    I can’t believe you people who can clearly, rationally look at how something like glam publishing works can’t see the same dynamics in grant review. Insiders (of any sort) are treated differently than outsiders at every stage. Grantsmanship is great and all, but it is not the defining component of success, and frankly it’s not all that hard. Illiterate shit grants get funded/renewed all the time if they’re from the right person. Telling people they should obsess over every fucking detail of their grant style as if it’s going to remove whatever current of hidebound bullshit they’re rowing against is bad advice.

    The fact that those of you who serve on study section object to this view is entirely unsurprising. Glam reviewers who give authoritative opinions on what’s “worthy of Cell” believe in their heart of hearts that they are making well-reasoned and good faith determinations of what papers provide “mechanistic insight” and will have “high impact” on the field, too.


  37. Grumble Says:

    “Even more interesting is whether the grant review mistakes that permit funding actually impact scientific quality. (Remember, the reviewers were right about Columbus – he would never have been able to bring enough supplies to reach India without resupply.)”

    This is precisely what is wrong with the whole idea that there is even such a thing as “a good grant” (one that proposes feasible experiments that will have a big impact) and “a good review” (one that honestly assesses that feasibility and those impacts).

    Real science is a leap into the unknown. You can’t judge the impact of the leap until the leaper has lept (just like you can’t predict who will win a race until it’s been run). Any attempt to do so beforehand is an exercise in mendacity: it is based on what the judge thinks he knows, not what he really knows. In other words, the judge is always biased. No wonder “bias” is everyone’s favorite complaint about grant review.

    “Yes, of course, there is noise in the system. ”

    Right. The signal is synchronous bias. The noise is the good research that gets through despite the signal. Ass-backwards, but that’s ingrained in the present system.


  38. imager Says:

    From Grumble: “You can’t judge the impact of the leap until the leaper has lept (just like you can’t predict who will win a race until it’s been run). Any attempt to do so beforehand is an exercise in mendacity: it is based on what the judge thinks he knows, not what he really knows.”

    That is why I think grants, i.e. money, should not be awarded for what you plan to do but for what you already have done. I showed that my idea/hypothesis worked with papers. Now give me money for that, and I can use it for the new ideas you will judge after I have done it (and not demand to show the results of what I propose in the grant). How does the cycle start? Easy: start-up money from your institution. Fishing expeditions can be done and also be pretty successful (see the interview with the late Janet Rowley in the NYT):

    Q. Do you think that the type of career you’ve had would be possible today?

    A. No. I was doing observationally driven research. That’s the kiss of death if you’re looking for funding today. We’re so fixated now on hypothesis-driven research that if you do what I did, it would be called a “fishing expedition,” a bad thing.


  39. AsianQB Says:

    I think this comic is appropriate here: The Science of Sex
    http://www.smbc-comics.com/?id=3390


  40. old grantee Says:

    hahahahahha


  41. Cassius King Says:

    Greatest hits from my recent triaged A1 R01 review:

    Reviewer 3:
    “No evidence that plan will address development of methodology for X”
    [From proposal: “Aim 1: We will further optimize methodology for X. Here are 4 figures worth of data showing we can already do X.”]

    Reviewer 2:
    The proposed number of patients Y should be easily recruited in 2.5 years rather than 4, so recommend reducing budget to 2 years.
    Reviewer 3:
    The proposed number of patients is Y over 4 years, this number does not sound realistic to be obtained.

    Reviewer 3:
    Strength: The investigators have the expertise to conduct the proposed research.
    Weakness: There is no investigator for methodology X. (which was proposed in Aim 1).
    Weakness: The incremental sounds incremental.

    There were a lot of philosophical arguments about how, if bunny flying turns out to be the best, then our approach of bunny hovering will be obsolete; neither of which has been done before. It was the most ridiculous straw-man argument I have ever seen. Needless to say I am study-section shopping….


  42. E-rock Says:

    Cassius, it sucks but that 2nd part is easily avoided. It is a difference in opinion, and your application may not have explained (given evidence & historical experience) that your recruitment projections are feasible & accurate. You can’t leave that sort of thing hanging to reviewers’ opinions. Same with sample size, power analysis, etc. It is also low-hanging fruit, a StockCritique that kills grants from NIs. I don’t have enough experience to say whether established PIs can get away with it; frankly, I doubt it.
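[One way to do what E-rock is suggesting is to put the accrual arithmetic and the power calculation in the application itself rather than leaving them to reviewer opinion. Below is a minimal sketch of that kind of back-of-the-envelope check; the clinic volume, capture rate, and effect size are invented placeholders, not from any real proposal, and statsmodels is just one convenient tool for the power step.]

```python
# Hypothetical feasibility check: projected enrollment vs. the sample size
# needed for 80% power. All inputs are invented placeholders.
from statsmodels.stats.power import TTestIndPower

annual_eligible = 150      # eligible patients seen per year (assumed)
capture_rate = 0.25        # conservative fraction expected to consent (assumed)
years = 4

projected_n = annual_eligible * capture_rate * years
per_arm = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)

print(f"Projected enrollment over {years} years: {projected_n:.0f}")
print(f"Needed for 80% power (two equal arms):   {2 * per_arm:.0f}")
```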


  43. Cassius King Says:

    E-rock: I gave recent historical data on patient numbers and said that, given my experience, we proposed a conservative estimate of recruiting 25% of the total annual patient population in order to hit the minimum target, and also gave a power calculation justifying the adequacy of that target. I have a biostatistician on board. Reviewer 2 didn’t think we justified the 25% versus their own opinion that 40% annually should be easy to do. If I had said we would recruit 50% of the patient population and doubled my target, Reviewer 3 would have been even more incredulous.

    You may be right that this kind of critique can be avoided, but in my experience, not easily.


  44. Cassius King Says:

    Oh, and e-rock I think your earlier post about the reasons for this type of review is absolutely on the mark. I agree with you, that a review that simply said “not ready to join the club” would be infinitely more palatable. I’ll take hard truths, even if I perceive them to be unfair, over BS post hoc justifications every time.


  45. E-rock Says:

    Cassius, similar things have happened to me, and obviously they did not read the application. In my case, I realized that the field was full of established investigators on that topic already, and one of them could whip out results from my proposal as a side project if they thought it worthwhile, even though the principles are sound. It was only apparent after a lot of bitching and moaning, and a conversation with the PO. Shorthand for that could be, “not ready for the club.”

    But even de-personalized from that, think about the ecosystem of those involved: a reviewer may know that such-and-such lab could scoop me, that I could never get the work published past particular gatekeepers, or may already know of negative, unpublished findings on the topic. Obviously not worth throwing money at, but they have to justify that to their peers at the SS, and since the prevailing trend is a 0.9 correlation of Approach score with overall score, the critique has to be made up on Approach post hoc … and quickly, because they have more work to do that weekend. I try (very hard, after bitching and moaning) to be charitable in considering reviewers’ character (old and tired, but not intrinsically bad). Probably every situation has its nuances, and the task is finding the particular unstated, hard, seemingly unfair truth.

    Honestly, I think truly good mentorship would alleviate this sort of crap, but we come here and learn it from CPP and DM and gang, hopefully not too late. I recently applied for a grant with a co-I older than my parents; they had a super-insightful, aerial view of the broad topic, knowledge of all the players, flawless writing, and a knack for concision. But they couldn’t operate Dropbox or the latest version of word processors for making fancy tables, and forgot the latest detail about molecule X, which is my whole reason for going to work. I wish our paths had crossed sooner in my career and hope that I brought as much to the table as they did. Find those people with the aerial view and willingness to share, and remember to pay it forward (if we survive).


  46. Pinko Punko Says:

    Good and bad reviews are likely not equally distributed. The imbalance in their distribution is bias in the system. Every study section, I see really, really astute reviews, and also things on the lazy side. The “helpful” lazy reviews, I would suspect, are more likely to go to known quantities. The negative lazy ones go to the unknown quantities. A reasonable subset of people do go against this, and if those are your discussants, you would likely have a good chance if funding weren’t so tight.



  47. […] you have no funding woes? It’s true, I do not. Although I recently pulled back the curtain to reveal the arduous journey I had to take to my first funded R01, it has essentially been smooth sailing since. My second R01 […]


