Cost of War
May 2, 2007
In a nutshell, this is why scientists feel visceral rage about the tight NIH funding picture. One estimate of the cost of the Iraq war puts it at about $422.3 billion and translates that figure into the public goods being forgone, like education and public housing. Let’s translate it into grants, shall we? The most readily available numbers from the NIH are the funding trends for FY2003. In FY2003, the NIH funded some 28,698 R01 grants at an average cost of about $340,000 in total costs per year. So let me just see here….. $422.3 billion divided by 6 years divided by $340K….um, 207,010 R01s are being burned in Iraq each year. (Almost 4,000 grants each and every week.) That’s about 7 times the number of R01 grants that the NIH funded in 2003.
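For anyone who wants to check the back-of-the-envelope arithmetic, here is a minimal sketch (Python, purely illustrative) using the same figures quoted above; the war-cost estimate, the 6-year horizon, and the FY2003 R01 numbers are the rough approximations used in this post, not official accounting:

```python
# Rough figures from this post (estimates, not official accounting)
war_cost_total = 422.3e9       # estimated cost of the Iraq war, in dollars
war_years = 6                  # year horizon used for the per-year figure
r01_cost_per_year = 340_000    # average total cost of an R01 per year, FY2003
r01s_funded_fy2003 = 28_698    # R01 grants funded by NIH in FY2003

r01s_per_year = war_cost_total / war_years / r01_cost_per_year
print(f"{r01s_per_year:,.0f} R01-equivalents per year")                  # ~207,010
print(f"{r01s_per_year / 52:,.0f} R01-equivalents per week")             # ~3,981
print(f"{r01s_per_year / r01s_funded_fy2003:.1f}x the FY2003 R01 pool")  # ~7.2x
```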
This is the answer to why scientists aren’t buying the “it’s tough times” stuff. It is why it wouldn’t bother me if the NCCAM wasted a bunch of money.
Seven times the number of grants. Look around you. Seven times more funding for your lab. Or seven times the grants in your Department. Seven times the funding in your cosy little subfield. Dynamics of cats has similar thoughts.
Just think of what we would cure…
EUREKA mechanism
May 2, 2007
The Medical Writing, Editing & Grantsmanship blog mentions a new trial grant mechanism, summarizing some main points:
the application itself is limited to 8 pages and must explicitly address the significance/importance of the problem; the innovation/novelty of the hypothesis or methodology; the magnitude of the potential impact; and (something else new, borrowed, & blue) the size of the community affected. Even curiouser, the biographical sketch is limited to 15 publications – the 5 most relevant, 5 most significant, and 5 most recent – plus a paragraph describing qualifications for the proposed research.
Preliminary data are allowed but not required.
Some additional details may be had from the NIH, but I can’t find anything really official yet. There may be an announcement as early as this month. [UPDATE 07/30/07: The initial announcement is out as an RFA] The alert reader will note that this proposal is designed to address some common criticisms of the current NIH R01 application…in spades.
First, the “good”. Most of it IS good. R01 applications are far too lengthy for a number of reasons. Most importantly, I think the level of detail expected at present distracts from the central issues. Oftentimes the review gets bogged down in a discussion of methodological minutiae that has no place in grant review. (One, if we’re so concerned that this PI can’t figure out the basic methodology, why are we considering this person seriously as a PI? Two, if we are concerned that the control condition isn’t exactly correct to prove the point, isn’t this more appropriately the province of the paper review process? argh.) Shortening the application has the potential to head off much of what I consider the unproductive aspects of review. The significance/impact/innovation part is all good too; of course, people are supposed to stress that currently, and the top applications already do, so the impact of this change is uncertain. Finally, this is going to save a lot of PI time in grant preparation. Since there is a tremendous focus on methodological minutiae, the grant itself has to be immaculate from a document perspective. Everything has to be consistent, timelines have to add up, the obligatory hypotheses had better not be contradictory. (You know, a bunch of “requirements” that have little to do with the way science is actually conducted!) Shorter and less detailed applications are going to save time and short-circuit the “aha, hypothesis 1.A.II is slightly incompatible with hypothesis 4.C.IV! clearly the PI doesn’t know what s/he’s doing….Triage!” process…
The “bad”. Review bias. This is going to reinforce the bias toward giving higher scores to well-established researchers and lower scores to less experienced and transitioning investigators, given the same objective quality of proposal. The reason is that reviewers are concerned with issues of feasibility and the likely productive outcome of the research. Particularly when a project is viewed as “high risk” from a scientific standpoint, the notion of “success” (meaning papers resulting) will be a concern. There is an entrenched belief that “track record” is highly important. There is a belief that a PI with a long career will be “able to get it done”, coupled with a fairly nonspecific but nevertheless motivating belief that untried PIs will somehow blow it, waste the money, etc. I’ll likely get into this unsupported myth at some point, but for now trust me, it is a powerful determinant of review outcome. The current long-format R01-type proposal cannot completely cure the problem, and indeed it doesn’t. However, it gives the untried PI a fighting chance to address these concerns: lots of preliminary data, exquisitely argued and detailed research plans, additional autobiographical details slipped in (“sure, these figures were from my postdoc days, but I was running the project as evidenced by X, Y, Z….”), etc. All things that provide ammunition to the reviewer who is favorably disposed toward the application. Shorter applications are going to lessen the ability of the less-established investigator to build that sense of confidence in the outcome of the project.