GrantRant IX

January 20, 2013

If you are a BSD type who has had program pick up your grant despite a fairly critical and damning review….

Don’t come back after five years with the same flaws* in your application. Programmatic pickups are not validation that your grant writing was acceptable.

Feel fortunate that you got away with one and make sure that you do better next time.

Oh and you’d better produce. Reviewers are not going to look kindly on a renewal when the prior review was harsh and you haven’t knocked it out of the park scientifically.

__
*the number of times that I’ve seen this situation…… Amazing.

Responses to “GrantRant IX”

  1. pinus Says:

    yes. a million times.

  2. Pinko Punko Says:

    The only thing about this that I can take is that the program picks it up and at least the study section doesn’t cave. Thing is, there are excellent scientists who write lazy grants, and they should have a similar bar. It is supposed to be mostly about what is in the document, not “trust me, I’m awesome”. How many rounds do you cave if they keep productivity up? I’d rather have that be at the program level than at the section level.

  3. DrugMonkey Says:

    Well, five years later it is a good bet the specific members on the panel have changed over….

  4. joatmon Says:

    Define productivity, please. What are the expectations from most study sections? Do study sections give young investigators (i.e., 1st renewal) more slack?

  5. Grumble Says:

    Is it possible that these BSD types with multiple rounds of programmatic grant rescue just don’t know how to write the kind of formulaic grant you and the rest of the study section are looking for?

    Yet, their Dicks are Big and Swinging for a reason. Maybe ’cause you and your study section have difficulty recognizing what makes a big scientific dick to begin with: a way-off-the-bell-curve combination of amazing ideas, chutzpah, and technical savvy. The ability to package all this into pre-digested mash for easy consumption by the kinds of third-rate hacks who populate study sections is NOT necessarily part of the qualifications.

    Just a hypothesis.

  6. DrugMonkey Says:

    Yeah, no. They aren’t *that* awesome of scientists.

  7. Pinko Punko Says:

    Who knows, Grumble, who knows. Could be the section is just entirely moronic. Could be the guy/gal is super smart. Could be the grant is a POS with zero controls, zero alternative approaches, zero preliminary data. Given that, as we heard in another thread, grants are a zero-sum game, why should this be funded?

  8. Grumble Says:

    Because, Pinko, even if the PI didn’t propose the right controls, s/he might have a long history of doing the right controls. Given that it’s extremely unlikely that the PI is going to do the experiments exactly as written in the grant (or even close), why should it matter what the grant actually says?

    This is my problem with the whole grant review system as it is. It rewards dishonesty (“oh, yes, of course I’m going to do all 17 highly detailed and time-consuming controls that will satisfy all of the 17 study section members’ individualized proclivities”) and penalizes past accomplishments as an index of future potential.

    If program staff provide some corrective by at least making sure the superstars with the absolute best records are consistently funded despite what they write in their grants, that’s at least a step in the right direction. Not that it goes far enough, in my view.

  9. DrugMonkey Says:

    We are not talking about “superstars with the absolute best records”. Top 25th percentile, say, rather than 2%ile.

  10. Pinko Punko Says:

    Yes, give them all of the dollars, so their significant but possibly massively inefficient operations have no accountability: this doesn’t make a lot of sense, nor does the claim that study section members individually demand obeisance to their whims. Perhaps some people get reviews that say “dance, mothereffer!”, but that is not my experience.

    This sounds like the frustration we all might experience when dealing with study sections, perhaps when we first read bad news. But when we consult with our colleagues, for the most part they tell us: “this was a strong, careful review; they make some good points here; they clearly didn’t understand this aspect, but you could have done a better job in laying out the logic; and these experiments they mention are good and valuable”. The occasional review is crap, but where is the evidence that these crap reviews even dominate individual study sections, let alone subsequent iterations of “wronged” proposals? I certainly don’t see it. The process will certainly be corrupt if proposals don’t matter at all. The system does not need to be further rigged in directions you propose.

    At some size of lab, when does a BSD just become more of a brand name than an actual driver of the research? Perhaps some of that army of post-docs would better serve the community in their own labs, having their own intellectual angles on science. It really isn’t asking a lot that captain genius come down from on high and write a coherent and thought-out proposal. It asks almost nothing considering the money at stake. Since that money could go elsewhere and generate other results perhaps almost as significant, what is the problem here?

    My experience with a specific BSD (lab where I worked, not grant that I reviewed) was that if the grants had been more than the slimmest of afterthoughts, the science absolutely would have been improved, the projects would have been more efficient, the papers more significant.

  11. Grumble Says:

    “The system does not need to be further rigged in directions you propose. ”

    If you think I’m proposing auto-funding BSDs and de-funding everyone else, that’s not what I meant. I agree that it shouldn’t be rigged in that direction.

    But if DM is right that program is rescuing grants from the top 25%-types of PIs rather than the top 2%-types, then I’d say this is exactly the general direction the NIH ought to be moving in. Why should NIH spend its precious cash on people who write very nice bullshit in grants? Why shouldn’t they look at a PI’s track record and say, “hm, if we don’t fund this very productive investigator whose research we love even though she can’t write a grant worth beans, she might lose her lab, so let’s let her have the money anyway”?

    “possibly massively inefficient operations have no accountability” – the accountability should come from an analysis of what the PI has produced, not of the nonsense she says she’s going to produce. If program staff recognize that, even if only a little bit and only sometimes, more power to them.

  12. DrugMonkey Says:

    It’s because then the people being rescued will be the buddies of the POs. That’s not a system to protect quality, nor to permit much turnover.

    Demanding a certain standard be hit in the proposal itself is a way of being fair. Of giving everyone an equal shot. This feeling is what drives much of formulaic review btw.

  13. Pinko Punko Says:

    Very few if any PIs get grants that are amazingly crafted masterpieces without some track record to show for it. And if these people were to be funded, it would be evidence that the section is willing to take a risk on possibly innovative science that many people demand. However, this same innovation has greater risks, and if you fail on one amazing idea, then it is possible you now simply have a track record of failure. Asking for the grants to be good is the only bar we have for fairness. Well-crafted insignificant grants don’t get funded either, and that isn’t an issue. Given that older PIs have larger grants, how can there be balance for funding new investigators, whose grants will be funded for lower amounts? It is an impossible system, as DM says a lot, but POs doing bros a solid is not going to work out in the end, I think.

  14. Grumble Says:

    “It’s because then the people being rescued will be the buddies of the POs. ” And that’s different from the current system… how? I mean, the system where people with lots of buddies on the study section seem to do better than those who know no one.

    “Asking for the grants to be good is the only bar we have for fairness.” I completely disagree, on two levels. First, I simply don’t understand why at least some portion – maybe not all – of a PI’s funding can’t be based on past productivity. Such a system, if codified in some way and followed with reasonable rigor by the NIH, could be just as “fair” as the current one.

    But on the second level, why does the process have to be “fair”? NIH wants to fund the best science. Why should they give newbies exactly the same chance at big bucks as established BSDs with long and fecund… track records? They should give newbies some money, see how they do, and then commit more to those who do well. That is, in fact, how the system works now – only it takes writing 7 bullshit grants to get just one. I’m suggesting that the writing of bullshit could be eliminated and the system would function very much as it does now, but without all the extra useless wheel-spinning.

  15. Pinko Punko Says:

    NIH should judge the science on the quality of the proposals, with an eye on productivity as a measure. Anything else would lack accountability and would rely entirely on external factors such as journal reviewers. NIH review is a reasonable checkpoint for government dollars.
