The Bergermeister MeisterBerg asks who gets in the lifeboat

February 1, 2013

Another viewpoint on the NIH situation from ASBMB head Jeremy Berg.

To return to the Titanic analogy, we are certainly in waters full of icebergs in this current climate, and we need to do everything we can to help the captain and crew steer clear of them. That does not mean, however, that we should neglect to urge careful examination of the policies that determine how access to the limited number of seats in the lifeboats is determined. The long-term health of the biomedical research enterprise depends on it.

Go read, it is meaty.

No Responses Yet to “The Bergermeister MeisterBerg asks who gets in the lifeboat”

  1. dsks Says:

    Jeebus. It makes you wonder where a 40%ile, or even a triaged, application might fall if the NIH let a few of those through just for shits, grins, and curiosity vis-a-vis the impact and productivity of the subsequent output.


  2. Spiny Norman Says:

    JBerg is a reality-based super hero. No wonder he didn’t get on with Collins.


  3. Beaker Says:

    Berg’s article reiterates that the fairest way to review NIH grants is to treat every application as new, at every study section. I would extend this idea to include renewals. In the “every application new” system, there are no renewals. If your last funded idea was a good one, you are free to propose extending it for the next 5 years, but the application process would not be any different than for a brand new grant.

    A lot of investigators hate this idea because it restricts their ability to respond to criticism–both the fair and the clueless types of criticism. Others don’t like this system because it does not prevent applicants from perpetually submitting the same dumb idea (it doesn’t encourage the practice either). If this system is also applied to renewals, then it might detract from the ability of the Study Section to judge the productivity of the investigator following a previous award.

    After reading the article, I am persuaded that this may be the best system going forward. It doesn’t fix the problem of too many piggies fighting for scarce NIH teat space, but it gives each piglet a fairer shot. Sure, the fat piggies still get more milk on average than the runts, but at least there is a chance that the fat, lazy, old ones will get out-competed by clever, young ones.

    The “every application is new” system also creates space for ideas that were ahead of their time upon first submission, but later get re-submitted at a time when the scientific landscape is more favorable to the idea. Lastly, such a system eliminates the thinking among some investigators that, if they simply do due diligence and address the particular criticisms mentioned in the summary statement, they deserve funding—i.e., that their revised grant should get an “A” like their schoolboy essays did in high school.


  4. dr24hours Says:

    No renewals at all. If your idea was good enough to get an R01, and you did that work and you have new aims and hypotheses, submit them. They might get funded. Otherwise it’s just a permanent old-boys system.


  5. Grumble Says:

    I like the “every application new” idea too. It solves the alleged “stand in line” problem as neatly as, and arguably more effectively than, limiting applicants to one resubmission. The question is whether it means more work for reviewers: potentially increased reviewer workload is just about the only advantage the A1 limit system has over the no-limit system.

    So, in the no-limit system, suppose you get two glowing reviews and one really stupid, crappy one. You’re free to submit the exact same thing, verbatim, in the next round and just hope that the idiot has rotated off. Because zero (or very little) effort is involved in resubmitting in cases like this, the number of applications could only increase under this system.


  6. qaz Says:

    Beaker – The problem with the every-grant-is-new system is that it completely removes the possibility of doing any planning. This means you can never promise technicians that they have a permanent job. You can never honestly tell a student “yes, I can allow you to work with me for five years”. You can never truly offer a postdoc the promise of continued funding through the end of the time they need to get trained to be ready to go onward. Your life is like feeding your family by playing the lottery (which is something we tell problem gamblers NOT to do).

    With responding-to-comments in going from A0 to A1 and including some aspect of productivity in your renewal application, we are at least in a position wherein we can go to some small source (say local/internal U sources or NIH program) and ask for bridge funding because we have some hint that we might actually get funded. I’ve also been in the (NSF) world where every grant is new and there is not even the expectation that responding to reviews is worthwhile. (*)

    * Yes, I know there is a response-to-reviews section and a results-from-prior-support section in NSF proposals, but in my experience on NSF panels, these are simply ignored. Compare NIH study sections, where both are primary discussion points.

    What we REALLY need to do is find our way back to the old system that the baby boomers made for themselves, wherein you could survive on one R01 because you were 90% sure that you could get it renewed if you had done a good job (and before everyone starts screaming about sinecures, note that I said IF YOU HAD DONE A GOOD JOB). There used to be the expectation that if you had shown good productivity over the previous funding cycle, then funding for the next was almost assured. In that kind of a world, one could in fact survive on a single R01, something that is dangerous if not impossible today. The key is that we need to get back to the point of less uncertainty – we need to know where the line for success is, so that we can plan to get there. In this day and age, every application is a crapshoot because the funding lines are so low.

    The more I see and hear of this, the more convinced I am that we need to identify when this system fell apart and figure out how to get back there again.


  7. qaz Says:

    Also, I agree with dsks…. Jeebus.

    These numbers confirm the sense that I (and others commenting here chez DM) have expressed that most of the proposals in the top X% (whether X be the 25% reported in the Berg article or even higher) are all equivalent in terms of their likelihood of scientific impact. (Personally, I suspect it’s close to 50-75%.)

    What Berg’s article suggests is the ENORMOUS waste of effort that is being put into writing and reviewing grants. We spend months writing and then reviewing to nitpick differences between 10% and 15% because that difference is the difference between survival and starvation. Damn.

    Making every proposal new makes this problem worse not better.

    Perhaps the fairest thing is for us to really put our names (no grants, just names) into a lottery and just pick random winners. 😦


  8. Brugg Says:

    How about NIH cut funding of Centers/PPGs? Often there are cores and Centers with multiple fat-cat directors and associate directors, allowing PIs to fund a large % of their salary, while a tiny number of workers actually do the work.

    Same goes for T32s; gut them. Let everyone compete openly for F funding. How many T32 trainees got awarded fair and square? Stop the horse trading by PIs of T32s and PPGs.

    This, plus cap indirects at, say, 20-25%, or some dollar amount per square foot, without adjusting for regional cost-of-living differences. I highly doubt biotech startups in fancy office parks are even paying such high overhead for rent, utilities, insurance, etc.


  9. becca Says:

    Review every grant for sound scientific approach. Place into hat. Draw names. You get one entry per year. Get your ass back to the bench, PIs. The end.


  10. drugmonkey Says:

    Biotechs pay the leases for buildings on what is often very expensive real estate. State Universities are often on land given to them long ago that isn’t even taxed. That represents a value that has to be shouldered by the taxpayers of that state.

    *All* costs, people.


  11. drugmonkey Says:

    Dropping BigMechs and TGs is something I could go along with. I agree with you on inefficiency, though I have definitely benefitted from many such.

    This proposal counters the Eisen style drive for more stable funding, I would note.


  12. drugmonkey Says:

    I agree with qaz to the extent that uncertain renewals have led to a vicious cycle. Much of the churning (and dismal hit rate, perhaps) could be alleviated (remember the huge number of 1-2 grant labs?) by stabilizing expectations.

    I’ve suggested expanding use of the R37, giving more people 10 yrs of non competing. Increasingly fond of this since I’ve learned various Programs will indeed pull the plug circa year 6 if the PI isn’t doing anything.


  13. His name is The Bergerino.


  14. zb Says:

    Why is no data for Type 1 grants presented? There’s a line that says that the correlation (i.e., reliability of percentile scores vs. productivity) is even lower for Type 1s. I suspect that data would help address the idea that fairness would improve if all grants were treated as “new”. I wouldn’t be at all surprised to learn that the correlation in the Type 2s is mostly/partly a correlation between past and future productivity, which is being filtered into the percentiles (both explicitly and implicitly).

    It’d be nice to see even more analysis, which NIH should want to do, as scientists. But the analysis of how well the evaluation methods succeed in high-pressure tournament models seems to be pointing to the utter failure of the methods for the use to which they are put. And people hate to hear that.

    Other methods of culling seem to boil down to culling at the front end followed by more stability.


  15. Bill Hooker Says:

    El Bergerino, surely?


  16. Spiny Norman Says:

    Who gets to be in the boat? UC administrators, that’s who.


  17. Beaker Says:

    Yeah, Spiny, exactly. “Deans and administrators first!” These people provide essential management of the shrinking pool of overheads, so we need all of them.


  18. whimple Says:

    Having every application be new will shake things up, but doesn’t go far enough. I would also implement a policy of “no standing study section members” to go along with it, so that every fresh grant is seen with a fresh set of eyes.


  19. qaz Says:

    @whimple. GMAFB. Why don’t you just say you want NIH to work on the NSF system? Every grant is a crapshoot. Every application is new. Every reviewer brings their own set of personal crap to the table. I have worked closely with both NSF and NIH. Trying to figure out where the goal post is at NSF is a nightmare. NIH study sections have characteristics you can prepare for. A good study section builds up a stable of bunny hopping, which moves the field forward reliably.

    We don’t want to shake things up! We want to make them MORE stable. So that people can have LIVES. So that there can be technicians. So that graduate students can know where their next meal is coming from. SO THAT PEOPLE CAN WRITE FEWER GRANTS.

    Remember, everybody, writing grants is only a means to an end. The only useful thing about grants is that they might (*) have some usefulness in assigning money to the right place. The only real record is the scientific publication record. Grant proposals are NOT part of that published record.

    * The recent data from Berg strongly suggests that all this grantwriting and reviewing is a waste of time.

    We want to free the scientists up to make discoveries. Not tie them down to a lottery!


  20. poke Says:

    Several people have mentioned lotteries.

    They’re on the right track, except we need to be thinking less Powerball and more Shirley Jackson…


  21. GAATTC Says:

    Nice poke.


  22. drugmonkey Says:

    I am increasingly of the opinion that proposals should be part of the public record. If this arXiv idea ever lands on bioscience…boom, my proposals are going IN!


  23. whimple Says:

    Completely agree with DM about proposals being public. Since federal cash is spent reviewing them, it makes sense to see what that cash was spent on.


  24. drugmonkey Says:

    Well….that would argue for publicizing the reviews but not the proposals….


  25. drugmonkey Says:

    Anyway, thanks to ARA whack jobs, this is never going to happen other than on a volunteer basis.


  26. Alex Says:

    You guys should totally do arXiv. I don’t know about proposals, but definitely for papers.

    I do biophysics, and recently I was showing my work to a biologist friend and pointing out that it is impossible for somebody to scoop me because it’s already on arXiv. Even if reviewers dick around forever and ultimately bounce my work to a lower journal, nobody can take advantage of the delay and scoop me, because it’s already on arXiv. The arXiv establishes my priority, and whatever journal it ultimately gets into will give a seal of approval for quality (to whatever extent you consider journal ranking a sign of quality). This separation of quality approval and priority claims has a lot of beneficial effects on the field, I think.


  27. qaz Says:

    Why don’t neuroscientists do arXiv? I have this vague memory that a number of the major journals didn’t (don’t?) allow papers (or figures) that have been put online (which would include arXiv).

    Of course, neuroscientists are particularly bad with priority, citing new Nature papers over earlier JNeurophys papers…


  28. drugmonkey Says:

    Why “particularly bad”? Is this not just people citing the part of the field they know best?


  29. qaz Says:

    “particularly bad” in comparison to what I hoped/wished/naively-believed.

    But I’m thinking if they won’t go cite the appropriate JNeurophys paper published years before the less-appropriate Nature paper, why would they care that your paper was in arXiv first?


  30. zb Says:

    Too much of this conversation is focused on how things would be better for scientists and not for science. Nothing will change if there isn’t a good argument to connect the two. Many of the suggestions to produce stability amount to the teacher tenure model that is coming under significant attack (i.e., pick a group of people and then let them do their jobs).


  31. drugmonkey Says:

    It is hard for NIH-funded scientists to throw down too explicitly about the amount of time being wasted writing grants (and possibly in servicing grants: “yeah, this is boring but we need production on Aim III for the renewal”) because of the anti-lobbying thing. And, as appropriate, because even in hard-money jobs their other time isn’t supposed to be 100% grant writing.


  32. zb Says:

    “It is hard for NIH-funded scientists to throw down too explicitly about the amount of time being wasted writing grants”

    Oh, I’m fully sure that there’s lots of time wasted in the process of writing and evaluating grants. The problem is that taxpayers are unlikely to accept the idea that we just pick good people and let them do their jobs (with 500K/year) given what we see with the conflicts over teacher tenure.

    Would people like post-hoc review? Some category of people get the 500K, and then are evaluated, every 1?2?5? years to see if they’ve produced something? That’s a bit like the NIH intramural system, and I think there are a fair number of complaints that the system isn’t rigorous enough in review.


  33. drugmonkey Says:

    This is what Michael Eisen is proposing, yes. Plenty of people want to move the needle over into secure funding for……. then they are kinda silent about who gets to stay, but you can be damn sure they mean first and foremost themselves.


  34. Spiny Norman Says:

    Poke wins the internets for the week starting 3 Feb.

