The letters in the 13 April issue of Science contain a few thoughts on funding issues from scientists. The fascinating thing is the reply by NIH Director Zerhouni. It is a classic political reply, repeating his latest talking points on NIH funding without addressing the points raised by the letters. For example, all three letters address data suggesting that the traditional investigator-initiated R01 type of proposal is being de-emphasized in the current budget. Zerhouni, instead of addressing the point (and the data presented), obfuscates and sidesteps. His response is that R01s still represent the biggest category of grants at the NIH. This doesn't, of course, address the point of the three letters: that the relative representation of R01s has been falling in recent years. He then asserts the value of, and supposed democratic support for, the Roadmap, large-scale projects and the like. This does not square with the experience of most scientists who, in my experience, universally criticized the initiative. Sure, they'll take advantage now and, if funded, might say it is a good thing. But this was most certainly not a democratic, grass-roots process.

Another good example is in the May issue of the CSR Peer Review Notes, which discusses the plan to shorten the R01 application (comment from MWE&G).

Co-chairs of the NIH Grant Application Committee met with the NIH Peer Review Advisory Committee (PRAC) on April 19, 2007, and discussed responses to this question that were submitted by over 5,000 applicants and reviewers. An initial analysis of the input showed that the majority supported shortening the R01 grant application.

Why oh why do they try to spin us like this? Guess what? Scientists like data and precision! We’re capable of understanding basic descriptive statistics. So why not just tell us the actual stat instead of saying “majority”? Would it perhaps be because the actual data support a Bushian “mandate” of 52%? Would presenting the actual breakdown of opinions reveal that the NIH is bound and determined to make the change in the face of an essentially even split of opinions on the part of the researchers? As I’ve discussed before, this smells like a done deal with the survey-of-researchers stuff being mere window dressing. This isn’t really likely to win friends, even among those favorably disposed toward the concept.
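To make the point concrete, here is a trivial sketch of what the actual breakdown might look like. The 5,000+ respondent count comes from the CSR note above; the 52/48 split is my suspicion, not a published NIH figure:

```python
# Hypothetical illustration: what "the majority supported" can hide.
# Respondent count is from the CSR note; the 52/48 split is an
# assumption for illustration, not a published NIH statistic.
responses = {"shorten the R01": 2600, "keep the current length": 2400}

total = sum(responses.values())
for opinion, n in responses.items():
    print(f"{opinion}: {n}/{total} = {100 * n / total:.1f}%")
# shorten the R01: 2600/5000 = 52.0%
# keep the current length: 2400/5000 = 48.0%
#
# Both spins are "true" of the same data:
#   "a majority supported shortening"  vs.  "opinion was essentially split"
```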

Committee members then went over all responses and analyzed 500 randomly selected responses in detail. Based on this input, the Committee made the following recommendations:

1. The research plan section of the application should be shortened—a majority favored 15 pages,

2. Instructions to applicants and reviewers should be modified to emphasize impact,

3. Sections of the application should be more closely aligned with the review criteria.

A final recommendation was that changes to the application and to the peer review process should be made in a coordinated fashion. These recommendations will be presented to the NIH Extramural Activities Working Group soon.

Stay tuned on this one. I also agree that it is absolutely essential that the review approach be altered in concert with the shorter application, because this is very unlikely to happen spontaneously in study sections. If the review approach is unchanged (or unevenly changed), New Investigators are going to pay the price. As usual.

MDMA Case Reports

May 22, 2007

The singular of data is “anecdote”.

We all know this hoary old scientific snark. Pure Pedantry ponders the utility of Case Reports following a discussion of same at The Scientist.

The Pure Pedantry post identifies "rare neurological cases" as a primary justification for the Case Report, but the contribution goes way beyond this. Let's take YHN's favorite example, drug abuse science and MDMA in particular. To summarize: when MDMA started coming to the attention of scientists in, oh, the early '80s, the thought was "hmm, looks like an amphetamine, let's study it like an amphetamine". This is where the "neurotoxicity", "locomotor activity" and "drug discrimination" stuff came in. Bouncing forward, we can see the emergence of thermoregulation, ambient temperature and vasopressin as relatively late interests. Where did this come from? The case reports of medical emergency and death. Which, while rare, are not of the singularly "rare neurological case" variety with which we are familiar from Neurology 101. Still, the MDMA Case Reports play a key role, I submit. Why?

The fact is, animal-lab-type drug abuse researchers get very far away from their subject matter in some cases. Understandably so. First, there is no guarantee they had much familiarity with the druggies in college in the first place, and MDMA/Ecstasy likely postdated them. "Them" being the PIs, who sadly, as of this writing, are very much not likely to be young enough to have been in college in the mid-eighties. So they know all about the cocaine and the weed and the LSD, but emerging drugs? Not so much. Their undergrads and even grad students could tell 'em, but how often do the PIs listen? Then there's just the general trend that even by their postdoc days scientists are moving away from the youth culture, don't you know? Finally, while scientists got to do a bit of tasting back in the '70s, those days are long gone.

So instead we have a bunch of out-of-touch old fogies who think MDMA is just another amphetamine, should be studied like just another amphetamine, and who can't see why different approaches are needed. Case reports provide needed stimulus for new experimental approaches, needed support for arguments that other things about this drug need to be investigated, etc. Don't believe it? Then why did NIDA have to come out with a Program Announcement in 2004 (yes, 2004!) saying, in essence, "For God's sake, will you please submit some proposals on something other than the mechanisms of so-called serotonin neurotoxicity?" [Current R01 and R21 versions of the PA].

Time will tell, but the field may have missed the boat a bit by not paying enough attention to MDMA-related Case Reports. Giorgi et al. have reported a potential sensitization of seizure threshold following a particular regimen of MDMA in mice. Experimentally, this is relatively novel stuff. But reading the case reports with hindsight, there are clear indications of seizure as possibly being a primary cause of medical emergency (it has generally been assumed to be thermogenic seizure subsequent to the high body temperature). Time, and some additional experimentation, will tell whether this was a missed foreshadowing from the case reports or not…

So yeah, I find a big role for case reports. Not just for the unique cases but also to lay down a record of some interesting phenomena that might bear looking into in a systematic way.

The reviews are in…

May 22, 2007

Lots of bashing of the peer review process lately. Admittedly, Orac has a nice counter, directed at forces external to science but highly relevant to on-the-bus complainers. [Update: another comment on peer review from NeuroLogica]

I have some unusually un-cynical thoughts today. I finally got some reviews back on a recent submission and they touch on much that is wrong and much that is right with manuscript review. First of all, we're talking a normal journal here: Impact Factor in the 3 range, field specific, working scientist as the editor. Meat-and-potatoes stuff. The topic of the paper is pretty much in the heart of the journal. It does, however, reflect a slightly contrarian experimental approach which in some ways violates all the "norms" established over the past couple-three decades in this area. Nothing earth-shaking, just some experiments which converge on a single point of view, suggesting that no, we don't always have to do things the canonical way and there is room for some improved models.

One reviewer is…critical. Obsessively so. Detailed point-by-point complaints about the interpretation of results. The flip side is that the other reviewer "gets it". Very laudatory review, really. Almost makes a better argument for publication than I could make myself. Editor comes in with "may be acceptable pending revision" along with some additional critique.

Okay, pretty standard stuff. GREAT, I think, and start beavering away at the responses and revisions. Why am I not ticked off, as others seem to be, by the divergent opinions of the reviewers? Well, first of all, let's face it: in contrast to the dismal reinforcement rate of the grant process, paper review has a fantastic effort/reward relationship. As one luminary in my area pointed out a very long time ago, everything gets published eventually. Especially when the editor seems favorably disposed in the face of at least one rather critical review. But in addition, we should all admit that in these cases the truth lies somewhere in the middle. The paper is likely not as good as the favorable review indicates and not as bad as the critical one indicates. The editor serves as mediator over where the mean should fall. This is a good thing. Often the bias is for publication, again a good thing for all of us. As the old saw has it, real peer review starts after publication anyway.

Let's take the "bad" review first. Yes, it IS irritating that some idiot questions our brilliant conclusions, seems willfully to miss the point and can't see the forest for the trees. However, one of my best mentors once said to me that no matter how bad or stupid the reviews seem, responding to them always results in a better paper. I have found this to be true, sure enough. A related point is that we should understand that the reviewers stand in proxy for our eventual audience. There will be critics and nonbelievers reading your paper if it does make it into print; don't you want the chance to head off some of the criticism in advance? So the "idiot reviewer" is useful. Finally, heh, strategically, if one wants to make sure the critical review isn't heeded by the editor, we want it to be as obsessive, critical and idiotic as possible. Personal insults if possible (yeah, I had one of those recently too!). This can't possibly help the editor take his/her side and is therefore a good thing for the authors.

Now the "good" review. This one is tougher. Sure, we all want a cream-puff review because, after all, our manuscripts are brilliant as submitted, right? Is this a reflection of the good-old-boys/girls club that those on the "outside" lament? Well, perhaps. The lab group I'm in isn't big-wiggy for sure, but it IS known. Furthermore, the journals are increasingly requesting advice on who the reviewers should be (and should not), so yeah, we took advantage of that to request people we thought might be friendly. The thing is, who knows? I suspect that when it comes to paper and grant review, we don't do very well on average at estimating who is going to give us an easy time of it and who is going to rip us apart. Just because you have a drink or two with a colleague and bemoan the state of the funding crisis into your beer doesn't mean they'll accept crap science from you! Getting back to the point, I just can't say. Maybe the "good" reviewer was from our suggested list, but maybe both of them were. Maybe the "bad" reviewer was someone we think of as a friend of the lab and the "good" one was a complete unknown!

Anyway, the system is working today, even if I am spending inordinate amounts of time on a point-by-point rebuttal of idiotic comments….

Well, the NIH budget pinch has finally affected the intramural researchers. (Tip to MWE&G.) My sympathies are limited. What a sweet gig is Intramural NIH. Ahhh. No competing for grants. Production is therefore, er, less than competitive. Get in as a postdoc, get a staff appointment, never leave. Or if you have to leave, join the ranks of the Program Officials or SRAs.

I speak of the Program Projects, Center grants and the like. You know, the huge grants to big wigs incorporating several R01-equivalent projects and assorted "Cores". "Cores" being another word for "extra money for doing exactly what the lab is doing in the first place". Don't get me wrong, these babies are great if you happen to be getting some money through one. The hit rate is fantastic, of course, because for the most part the deal is fixed in advance, and once funded these things (particularly Centers) keep going and going.

The annoyance, however, is the Center or Project meeting. Bad enough in normal times, because they consist of about 30% science and about 70% strategerizing about funding. We're down to the wire on one here; the review is weeks away. That means the meetings come fast and furious. Weekly. Good god, how much hand-wringing over an essentially done deal can we do, eh? Obsessing about how other similar Centers/Projects have fared in review. Obsessing about the politics of the special review panel. Obsessing about productivity. Etc. What a bloody timesink…

Sigh

May 7, 2007

To recap: a luminary of the MDMA field published a high-impact Science paper in 2002 which had to be retracted a year later over a "mistake" in the drug used. Other retractions followed from the same mistake. Much hoopla in the popular press and elsewhere, seriously emboldening the denialists on the MDMA-advocacy side. Much time and NIH $$ wasted in the year it took for the original authors to 'fess up, even though they knew within months of publication that they were having difficulty replicating the effect. (Full Discl.: Your Humble Narrator being one of those who wasted not-insubstantial amounts of time because of the erroneous original publication.)

Ricaurte was awarded a competing continuation of one of his R01s in 2002 on the strength of the work that was retracted (going by the abstract), and a K05 in April 2005. A new R01 in August 2005 was based on a finding which bears some of the hallmarks of the unusual-finding/overselling that was possibly part of the problem with the Science paper debacle. Apparently the NIH can't throw money at this guy fast enough.

This week, we have a correction of the usual molecular biology sort from this group.

…instead of inserting the panel corresponding to SERT antibody 1, we inserted the panel corresponding to SERT antibody 2 (correctly shown in Figure 8, top panel, of the published paper). This resulted in duplication of the panel for SERT antibody 2, and omission of the panel for SERT antibody 1, now included in corrected Figure 4 (panel a). In the same figure, the bar graph in the lowest panel of the published figure was incorrect (same as bar graph in top panel of Figure 8).

YHN cannot understand how this sort of thing happens. Really. I just don't get how the wrong figures make it through one round of revision, proofing and a 12-month publication delay at this journal. I don't understand how, when the lab comes to the PI with findings that look very similar to other things they've published, the PI doesn't say "Are you sure it was MDMA and not methamphetamine? This looks like meth to me…" At the very, very best, this lab is sloppy. Why does he deserve more money? Why? No doubt he's over the salary cap, meaning that the NIH could be getting two younger investigators for the price of one of him.

Some appear to wonder how scientific misconduct persists. Well, because it pays…

*************

Update: MarkH over at the denialism blog invited some traditional peer-review bashing in the comments. A comment touches facetiously on the role of peer review in blog bloviation. Why not? Why depend on random blog readers to comment? Why not seek out expert professional opinion on blog topics? So we’ll be trying a little experiment in soliciting expert opinion…

Over at MWE&G we have additional comment on the impact of shorter NIH grant applications. There is a "proposal" being floated (aka "a done deal") to reduce the length of the research plan section of the standard R01-type application from 25 to 15 pages. As outlined here, the thought is to focus review on significance and impact and to de-emphasize methodological critique. A second benefit imagined by the NIH is that this is a way to decrease the number of individuals needed for review. [As usual, a comprehensive understanding of real behavior is absent in this NIH-think. Shorter apps mean even more apps per investigator, thus leaving the review "burden" unchanged even if their rationale about shorter apps were correct, which it is not.] As MWE&G points out, an NIH survey found reviewers less than enthusiastic:

However, current reviewers weren’t raising their hands to take on more of these shorter applications, so the NIH will need to rely on expanding their reviewer pool – hopefully made easier by a reduced reading burden.

The problem is simple, and anyone who has done any sustained reviewing (8+ apps per round, 3 rounds per year, 4-year commitment for "charter members" of panels) can point it out. The major burden of review is not simply reading the pages. Ten additional pages take at best another 15-20 minutes to physically "read". This is immaterial in the scheme of things. Understanding what is being conveyed in a grant is a synthetic process in which all major sections (Background, Preliminary Results, Research Plan) need to be integrated in the reviewer's head. The major time commitment is the "figuring it out" process: how is this experiment supported by the preliminary data and background, what hole in the literature is being addressed, how does this address the Aims, etc. In fact, decreasing the length of the application is going to increase the reviewer burden in some cases, because the reviewer will have to bring her/himself up to speed on things that were previously laid out cleanly in a 25-page proposal.
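A back-of-the-envelope calculation makes the point. The apps-per-round and minutes-per-ten-pages numbers are from the paragraph above; the total hours to fully review one application is my own rough guess, not an NIH figure:

```python
# Reading time saved by a 10-page-shorter app vs. total review workload.
# The apps/rounds and 15-20 min reading estimates are from the post;
# the 8 hr per full review (read + "figure it out" + write critique)
# is an assumed round number for illustration only.
apps_per_round = 8
rounds_per_year = 3
extra_reading_min = 20       # upper estimate for 10 extra pages
hours_per_full_review = 8    # assumption, not NIH data

saved_hr = apps_per_round * rounds_per_year * extra_reading_min / 60
total_hr = apps_per_round * rounds_per_year * hours_per_full_review
print(f"annual reading time saved: {saved_hr:.0f} hr")      # ~8 hr
print(f"annual review workload:    {total_hr:.0f} hr")      # ~192 hr
print(f"fraction saved: {100 * saved_hr / total_hr:.0f}%")  # ~4%
```

Four percent or so of the workload, under these assumptions. Immaterial, just as the raw reading time is.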

The great unknown is whether reviewers will adapt their behavior to the new approach. They certainly can, but this is likely to be a long and very uneven process. The last few years have seen attempts to refocus review on "translational impact" and "significance" and "innovation". It hasn't worked in Your Humble Narrator's study section. Some reviewers remain "old school". Some espouse the newer "significance/innovation" approach. Some do both, depending on the grant under discussion! Is this because people are unqualified or pernicious? No. It is because there are legitimate differences of opinion on various things that tie into the decision about what represents the "best possible science". Unfortunately there is essentially zero discussion in any official capacity or forum to navigate the intent of review. There are some published guidelines, but these don't really get past the format of a review. This is likely due to an understandable reluctance on the part of the CSR to "contaminate" the independence of review by telling reviewers how to review a proposal. But it leads to additional variability in the process, because there is no commonality of approach. And that leads to a great deal of frustration on the part of the PI reading the summary statement. "Reviewer one says it is 'highly significant in providing a clear test of hypotheses to resolve two major theoretical approaches to the field'. Reviewer two says it 'lacks significance because the experiments do not address any significant question of public health'. AAAGGHHHH! What in the heck is 'significance' supposed to mean?"

What indeed.

Cost of War

May 2, 2007

In a nutshell, this is why scientists have a visceral rage about the tight NIH funding picture. One estimate of the cost of the Iraq war puts it at about $422.3 billion and translates that number into units of public good being lost, like education and public housing. Let's translate it into grants, shall we? The most easily available numbers from the NIH are the funding trends for FY2003. In FY2003, the NIH funded some 28,698 R01 grants at an average cost of about $340,000 in total costs per year. So let me just see here… $422.3 billion divided by 6 years divided by $340K… um, 207,010 R01s are being burned in Iraq each year. (Almost 4,000 grants each and every week.) That's about 7 times the number of R01 grants the NIH funded in 2003.
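For anyone who wants to check YHN's arithmetic, here it is spelled out. Every input is taken from the figures above:

```python
# The post's arithmetic, spelled out. All inputs appear in the text:
# the $422.3B war estimate, the 6-year window, and FY2003 NIH figures.
war_cost_usd = 422.3e9
years = 6
avg_r01_total_cost = 340_000      # per year, FY2003
r01s_funded_fy2003 = 28_698

r01s_per_year = war_cost_usd / years / avg_r01_total_cost
print(f"{r01s_per_year:,.0f} R01s burned per year")                  # ~207,010
print(f"{r01s_per_year / 52:,.0f} grants per week")                  # ~3,981
print(f"{r01s_per_year / r01s_funded_fy2003:.1f}x the FY2003 count") # ~7.2x
```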

This is the answer to why scientists aren’t buying the “it’s tough times” stuff. It is why it wouldn’t bother me if the NCCAM wasted a bunch of money.

Seven times the number of grants. Look around you. Seven times more funding for your lab. Or seven times the grants in your Department. Seven times the funding in your cozy little subfield. Dynamics of Cats has similar thoughts.

Just think of what we would cure…

EUREKA mechanism

May 2, 2007

The Medical Writing, Editing & Grantsmanship blog mentions a new trial grant mechanism, summarizing some main points:

the application itself, which is limited to 8 pages to explicitly address the significance/importance of the problem; the innovation/novelty of the hypothesis or methodology; the magnitude of the potential impact; and (something else new, borrowed, & blue) the size of the community affected. Even curiouser, the biographical sketch is limited to the 15 publications – the 5 most relevant, 5 most significant, and 5 most recent – plus a paragraph describing qualifications for the proposed research.
Preliminary data are allowed but not required.

Some additional details may be had from the NIH, but I can't find anything really official yet. There may be an announcement as early as this month. [UPDATE 07/30/07: The initial announcement is out as an RFA.] The alert reader will note that this proposal is formed to address some common criticisms of the current NIH R01 application… in spades.

First, the "good". Most of it IS good. R01 applications are far too lengthy for a number of reasons. Most importantly, I think the level of detail expected at present distracts from the central issues. Oftentimes the review gets bogged down in a discussion of methodological minutiae that has no place in grant review. (One, if we're so concerned that this PI can't figure out the basic methodology, why are we considering this person seriously as a PI? Two, if we are concerned that the control condition isn't exactly correct to prove the point, isn't this more appropriately the province of the paper review process? Argh.) Shortening the application has the potential to head off much of what I consider the unproductive aspects of review. The significance/impact/innovation part is all good too; of course, people are supposed to stress that currently, and the top applications do, so the impact is uncertain. Finally, this is going to save a lot of PI time in grant preparation. Since there is a tremendous focus on methodological minutiae, the grant itself has to be immaculate from a document perspective. Everything has to be consistent, timelines have to add up, the obligatory hypotheses better not be contradictory. (You know, a bunch of "requirements" that have little to do with the way science is actually conducted!) Shorter and less detailed applications are going to save time and short-circuit the "aha, hypothesis 1.A.II is slightly incompatible with hypothesis 4.C.IV! Clearly the PI doesn't know what s/he's doing… Triage!" process…

The "bad": review bias. This is going to reinforce the bias for giving higher scores to well-established researchers and lower scores to less experienced and transitioning investigators, given the same objective quality of the proposal. The reason is that reviewers are concerned with issues of feasibility and the likelihood of a productive outcome of the research. Particularly when a project is viewed as "high risk" from a scientific standpoint, the notion of "success" (meaning papers resulting) will be a concern. There is an entrenched belief that "track record" is highly important. There is a belief that a PI with a long career will be "able to get it done", with a fairly nonspecific but nevertheless motivating belief that untried PIs will somehow blow it, waste the money, etc. I'll likely get into this unsupported myth at some point, but for now trust me, it is a powerful determinant of review outcome. The current long-format R01-type proposal cannot completely cure the problem, and indeed it doesn't. However, it gives the untried PI a fighting chance to address concerns. Lots of preliminary data, exquisitely argued/detailed research plans, additional autobiographical details slipped in ("sure, these figures were from my postdoc days but I was running the project as evidenced by X, Y, Z…"), etc. All things that provide ammunition to the reviewer who is favorably disposed toward the application. Shorter applications are going to lessen the ability of the less-established investigator to establish a sense of confidence in the outcome of the project.