Aaaaahhh. You spent two months sweating that grant proposal and now it has been submitted. Time to relax, right? Sorry, you're not done yet!

Writedit over at MWE&G has some advice for New Investigators from an NCI program officer. One salient point was:

She spent a bit of time on what remains a controversial issue at the NIH, but her stance was clear: new PIs should concentrate on crafting and submitting a very competitive R01 rather than divert their effort to R21 or R03 proposals. Neither of the latter are renewable, and neither are appropriate “starter” grants on the road to independence.

I totally and completely agree, and in fact will underline this by pointing out that if you go for these dinky mechanisms when what you really need is an R01, you are possibly setting yourself up for failure. It takes essentially as much time to prepare an R21 or R03 application as it does an R01 app. They suffer the same revise-and-resubmit fate as well as the same dismal funding rates. Perhaps slightly better, but not enough to make it worth it. And you don't "have" to…

Uncertain Principles takes an interesting new tack on the scientists-versus-journalists debate. The post points out, quite rightly, that in some cases education requires that the "simple version" of the truth be conveyed because the nuanced, highly accurate version can't possibly work. Better to communicate the gist than nothing whatsoever, right? The "simple version" of the truth can become lies-to-children, a term Chad attributes to Terry Pratchett.

I've previously touched on the frightening possibility that perception is everything in changing drug use epidemiology. I say "frightening" because it suggests that the real risks, the subject of my professional life, are somewhat tangential. I touch on our most fundamental lies-to-children in that post as well, namely that "Drugs are bad", meaning that if you try recreational drugs, even just a little, you are going to be hooked into a spiral of drug dependency and despair. The Nancy Reagan "Just Say No" version of the "truth" about drug use.


Perhaps "advocates" and "detractors" are the better terms. This is one of those heuristics that might help with crafting responses to the Summary Statement or the paper review. Others have views that touch on the topic; for example, MWE&G has the following in a recent post:

if you’ve hooked your primary reviewer into being a passionate advocate for your proposal, that will likely come through as well. If the summary statement lacks any sign of someone going to bat for your work, then you did not make your case even to the reviewer who should have been most excited about your project.

The heuristic is this. In situations of scientific evaluation, whether this be manuscript peer review, grant application review, a job application or the tenure decision, one is going to have a set of advocates in favor of one's case and detractors who are against it. The usual caveats apply to such a strict polarization. Sometimes you will have no advocates, in which case you are sunk anyway, so that case isn't worth discussing. The same reviewer can simultaneously express pro and con views, but as we'll discuss, this is just a special case. Etc. Nevertheless, there are a couple of points which arise from this heuristic which apply to all of the above situations and suggest concrete approaches both to the original presentation and, where applicable, to revising the proposal/manuscript. We'll take the case in which one is crafting a revision to a grant in response to a prior critique as the example after the jump.

This blog is rated…

June 22, 2007

Thanks to Uncertain Principles for the source of blog ratings.

Drugmonkey is rated

What's My Blog Rated? From Mingle2 - Online Dating

apparently because of all the talkin’ about drugs. plus death.

This rating was determined based on the presence of the following words:

* cocaine (6x)
* ecstasy (2x)
* dead (1x)

dead? Anyway, the real question is why merely talking about drugs needs to be restricted to the presence of a parent or guardian. I know this is just for fun but still. Just talking here. Sheesh.

As mentioned, it is a meeting week in Drugmonkeyland. Naturally all the gender-equity buzz in blogoland had me thinking about issues of diversity in my field. I'm sure the actual hard numbers in drug abuse science match the usual dismal unrepresentativeness. But I'd like to address this as a perception issue, namely my perception of the diversity of PIs. Some anecdotal observations after the jump:


Off at a meeting this week. Of the small, focused type for my field, namely the College on Problems of Drug Dependence. In fair Quebec City this year, which is … nice. (Just a reminder of some of the perqs in this biz for those focused on the dismal aspects.) Actual science to follow, but first the career stuff.


I can't leave alone Chembark's opinion that poor stewardship of the public's (grant) money verges on ethical misconduct. In particular this observation:

I also get upset with researchers who win grants for one set of ideas, then spend the money on projects that are not just tangential, but completely different. To me, this smacks of obtaining funding under false pretenses, and I consider it to be dishonest behavior.

What world is this guy living in? I’ll take the point when one is talking about a small funding agency with a highly focused and singular agenda. Shouldn’t take money from American Heart and work on drug abuse. Oh, well I guess MDMA causes valve problems and acute cardiac arrest and we all know about smoking and heart disease. Then there was that Len Bias thing… Hmm. Okay, well, you sure better not take money from some Autism foundation and then work on immunology, mercury toxicology or development of the temporal lobe structures….errr, right?


Eye on the Prize…

June 13, 2007

There’s been some interesting hoopla in the last couple of days sparked by Zuska’s post examining gender discrimination in academia by way of excoriating those who ignore their own inherent privileges. The ensuing discussion became vehement and led Rob Knop to post his own examination of privilege and, more importantly, the strategy of demonizing one’s natural allies that some perceive in Zuska’s approach. I’ve also been reading Chembark’s little rant suggesting that poor stewardship of the public’s (grant) money verges on ethical misconduct. And YoungFemaleScientist expresses a common enough frustration with the dismal prospects for scientific transition. There’s an older one from Adventures in Ethics and Science on gender equity in science too. Finally, the situation at MIT with the tenure denial of James Sherley (tip to Dynamics of Cats) has been picked up by both Nature and Science in recent issues.

All of this has me thinking about agendas, about advancing the same, and about styles of discourse and approach. Your Humble Narrator must confess to an agenda; really, who doesn't have a series of agendas? In terms of the future and present conduct of biomedical research science, most specifically in YHN's chosen field, I have…opinions.


Baby loves E…

June 11, 2007

It is bad enough that toddler land is filled with trippy toys these days. You can barely turn around without running into some crappy light-stick bracelet or LED teeth insert, derived from Ecstasy user culture, being marketed to kids. I just stare at the parents in wonder when they point out the "cool" pacifiers with LEDs circling them…

Recently we took SpawnofDrugMonkey to a BabyLovesDisco event. I didn't pick up on any trippy toys present so, good there. But the off-floor events included story time in the, I kid you not, "chill out room". Call me crazy, but I don't recall any "chill out rooms" being part of the club experience until Ecstasy hyperthermia made them necessary.

Sheesh.

Not only is it grant revision time, but it is also grant review time. Lots of study sections are meeting to review the piles of applications submitted for the last Feb/Mar deadlines. Here is one pet peeve that you may wish to consider as you are crafting the Introduction to your revised grant.

There is a tendency to, well, apple-polish, to put it nicely. You know, to kiss ass. "We are grateful for the insightful comments of the reviewers." "The prior critique helpfully identified flaws for which we are eternally thankful…" Etc.

Don’t.

Do.

This!

What exactly do people think they are accomplishing with this stuff? First, it suggests that you think the reviewer is in this for empty compliments and/or can be swayed by empty compliments. This is insulting. Second, it suggests that the PI is trying to buy the reviewer off with flattery because s/he has no intention of actually, say, responding to the substance of the critique. This is not helpful to the cause.

Here’s another hint. Don’t waste time re-iterating all the positive comments. The panel gets the summary statement to read and they read it. They know that nice things were written, they may have written those words themselves. You are just wasting space that could better be used to respond to the criticisms.

MWE&G notes that NIAID is particularly upfront about funding strategies, in substantial contrast to most ICs. I don’t like the opacity of most of the ICs on funding strategies either. But one reason they do it is to minimize certain study section behavior. There is a natural and perhaps inescapable psychology to grant review in which the reviewer is, at some level, thinking “fund it” or “don’t fund it”. This results in scores clustering around the “perceived” funding line.

ICs don’t like this because they want a nice flat distribution of scores so that no matter where the funding line is drawn, there are not a ton of “hard calls” to make. The more applications with the same score, the harder the decision. (Actually applicants should favor this approach too because in theory it decreases arbitrary IC behavior with regard to selecting apps for funding.)
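To make the "hard calls" arithmetic concrete, here is a minimal toy sketch (every number in it is made up for illustration; it assumes the old 100-500 priority-score scale where lower is better, a purely hypothetical payline of 150, and 200 applications). It just counts how many applications land within a few points of the cutoff when reviewers aim at a perceived funding line versus when scores are spread across the range.

```python
# Toy illustration only: how score clustering at a perceived payline piles up
# "hard calls" at the cut point. Assumes the old 100-500 priority-score scale
# (lower = better) and a hypothetical payline of 150; no real data involved.
import random

random.seed(1)
N_APPS = 200
PAYLINE = 150

# "Payline-seeking" reviewers: scores pile up around the perceived cutoff.
clustered = [round(random.gauss(PAYLINE, 10)) for _ in range(N_APPS)]

# Scores spread across the available range, as SRAs keep urging.
spread = [round(random.uniform(100, 400)) for _ in range(N_APPS)]

def hard_calls(scores, payline, window=5):
    """Count applications scored within a few points of the payline --
    the ones the IC has to agonize over once the line is drawn."""
    return sum(1 for s in scores if abs(s - payline) <= window)

print("Hard calls, clustered scoring:", hard_calls(clustered, PAYLINE))
print("Hard calls, spread scoring:   ", hard_calls(spread, PAYLINE))
```

The clustered case leaves dozens of essentially tied applications sitting right at the line, the spread case only a handful, which is the whole point of wanting a flat distribution.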

Fortunately, from the IC perspective, there is some lack of calibration in the "perceived funding line" in the typical study section. (Also, SRAs are tasked with fighting this tendency by urging reviewers to distribute their scores across the entire available range.) This introduces variance into the result of the same psychological process, namely funding-line seeking, in reviewers. I think that if all Institutes were highly vocal about the funding lines, hard and soft alike, the problem of score clustering would increase. I think you would also start to see mean scores for Institutes start to move around to match the funding line. "Oh, NIMH is at 135 and NIAAA is at 140? Well, I can assign a 130 to this one, a 140 to that one and the SRA can't say I'm not spreading scores!" Over the tens of thousands of apps I think you would start to see effects. Then the ICs would have to cycle back on the funding line by saying "well, our grants average 5 pts higher so our cut line is going up". So the process would cycle around recursively. Not to mention that ICs do compare on things like scores and percentiles, I have no doubt. So they aren't really interested in doing things that might put their scores at a disadvantage relative to other ICs, because their percentiles would start rising, creating the impression that they fund substandard science.
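And here is an equally toy-level sketch of the recursive drift: if reviewers park the applications they like right at a publicly announced cut line and the IC still has to reach a fixed number of awards, the line it actually pays to creeps upward each cycle, gets announced, and the cycle repeats. Again, every number here is an assumption for illustration only.

```python
# Toy illustration only: a feedback loop in which reviewers target the announced
# cut line and the IC must then move the line to fund a fixed number of awards.
# Old 100-500 priority-score scale (lower = better); all counts are invented.
import random

random.seed(2)
N_APPS = 500     # applications per cycle (hypothetical)
N_LIKED = 150    # apps reviewers would like to see funded (hypothetical)
N_FUNDED = 100   # awards the IC can actually pay (hypothetical)

payline = 150.0
for cycle in range(6):
    # Applications the reviewers like get scored right around the announced line...
    liked = [random.gauss(payline, 5) for _ in range(N_LIKED)]
    # ...and the rest get clearly worse scores.
    others = [random.gauss(payline + 80, 30) for _ in range(N_APPS - N_LIKED)]
    scores = sorted(liked + others)
    # The IC has to set next cycle's line at the score of its last fundable app.
    payline = scores[N_FUNDED - 1]
    print(f"cycle {cycle}: cut line now at {payline:.0f}")
```

Nothing rigorous about it, but it shows why an IC that advertises its line too loudly could end up chasing its own scores.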

It gets complicated.

To return to the applicant: unfortunately, from the individual perspective, variance in the perceived funding line can introduce categorical problems. Often a reviewer who is less experienced or knowledgeable may assign a "good" score that is in fact not a "good" score at the present time. So the actual intent of the reviewer is not realized, because s/he thinks a 170 is a great score, which it might have been five years ago. So you might get hosed because you were, essentially randomly, assigned a reviewer who is less calibrated than those on another application.

Now that we’re past the new-R01 deadline and heading for the revised-R01 deadline it is time to talk summary statements. Out they come and we start perusing them for clues as to how to revise so as to improve our score. Frequently, one starts tearing one’s hair when it seems that the reviews cannot have been done by anyone 1) with a brain, 2) familiar with the science or 3) who actually read the grant.

A recent comment from writedit touches on the issue:

Oh, you haven’t read scores of summary statements over the past 2 decades or had PIs ask you if the 3 assigned reviewers all read the same proposal … or read/understood it at all (based on the irrelevant comments raised).

Also, I pick on Rob Knop again for his expression of a common frustration with essentially opposing critiques in NSF review land.

There are reasons for that frustrating pink sheet where reviewers are diametrically opposed, and not all of them are nefarious. There are at least two important concepts in summary statement tea-leaf reading that are not readily apparent until you've been on study section.

One, the reviewers are not always talking to you, despite what you might think. Some comments in there are a discussion between reviewers and/or reviewers trying to hit the study section's cultural buttons. Huh? Well, like most sub-cultures, grant reviewers generate some shorthand timesavers. This leads to the use of some Stock Critiques. Examples include "lack of productivity", "too ambitious", "lack of clear hypotheses", "independence of the PI" and "fails to consider alternate approaches". These evolve into shorthand because everyone agrees that certain items are a GoodThing to have in the grant application. Perhaps more importantly, even those who think these items may be silly tend to agree that there should be some consistency (read "fairness") in review, and thus if application 56 is beat up for Stock Critique Z, well, application 99 better get beat up for that too. This means that reviewers can anticipate the use of Stock Critiques and, if inclined toward the grant, may state things in a way that is designed to head off the anticipated Stock Critique from other reviewers. If the other reviewer uses the Stock Critique anyway, then you get opposing reviews, and not only that but they may only loosely fit the actual application, because the reviewers are using a lazy shorthand. After all, if it gets serious (i.e., the grant does not end up triaged) they can focus on detail in the discussion. If the grant is a revision this can be even worse, because the battle may already be joined and the favorably disposed and unfavorably disposed reviewers use the language that they know will have currency with the other members of the panel, so they are talking to the panel, not the applicant!

Example: "this grant has been fantastically productive in the prior interval" vs. "scientific output has been modest". Huh? Which is it? Are they reading the same biosketch and progress report? Well, sure. But there is no objective standard for what is "productive". Some papers count more. Some people are willing to look at the PI's overall output without considering that it is funded by 4 R01s. Some people want to divide by the number of grants or make sure the pubs listed are really directly relevant to the grant under discussion. Etc. So when you see the above comments, what it really translates to is "I suspect the other reviewer is going to brag on productivity to sway the panel, but I don't like this proposal, so I'd better preempt the issue."

Two, summary statement writing is frequently an exercise in confirmation. You might, perhaps, be under the impression that the reviewer dissects the proposal first and comes to a conclusion at the end of an exacting reading of the application. Not so. Often one reads the proposal over, with a beer or coffee in hand, and then comes to a Gestalt opinion about the grant. Next one writes the summary statement according to the established position. Thus, if you decide "triage", you are looking for some quick points to make to justify the opinion; these may or may not fit closely with your actual reasons. For example, it is vastly easier to communicate "the application failed to state any clear hypotheses nor explain how the proposed experiments would test such" than it is to communicate "yes, I know this model pumps out a paper every year or two but this area bores the bajeezus out of me, scientifically speaking". If you decide "fund this puppy", you are looking for the best argument in support of the proposal. So you may overlook the deficits and really trumpet the strong points. One may shamelessly rely on Stock Critique-type communication to sway the panel in either direction, depending. In many cases you can end up with an advocate writing a critique that is much more laudatory than the reviewer actually feels, analytically, about the grant. This is because s/he has decided that it is a great proposal despite minor flaws. Conversely, the detractor may write a critique which is much more critical than s/he actually feels. Among other reasons, why bother identifying a bunch of strengths if you are just going to assign a bad score? It confuses and lengthens the discussion and in any case takes time that could be better devoted to the good grant on one's pile…

So as you are re-reading that summary statement that you haven’t been able to bring yourself to look at again in the past two months, try not to overreact.

Woo hoo! Another new R01 sent off. Congrats to everyone else who got theirs in!

This is a picture of Eloria noyesi eating a coca leaf, from Chen et al. 2006, Gene 366(1): 152-160, "Molecular cloning and functional characterization of the dopamine transporter from Eloria noyesi, a caterpillar pest of cocaine-rich coca plants".

E. noyesi eating a coca leaf

The interesting thing is that the little buggers fail to die from cocaine poisoning, unlike other caterpillars. The authors cloned the dopamine transporter (DAT) from these guys and from the silkworm (Bombyx mori), under the hypothesis that this primary target of cocaine might be responsible for the insensitivity of E. noyesi to coca poisoning. It turns out that the DATs were pretty similar and highly homologous to DAT from other invertebrate and vertebrate species. It also turns out they have a kickin' esterase which chews up cocaine pretty rapidly.

This work led to another paper (Chen et al. 2006) in which a DAT insensitive to cocaine was knocked into a DAT knockout mouse. The knockin mouse was insensitive to the locomotor stimulant and conditioned place preference effects of cocaine but similar to wildtype in its response to the closely related stimulant amphetamine. In vivo microdialysis, voltammetry and patch clamp data were supportive. The takeaway here is that the group was able to selectively narrow down on the cocaine-DAT interaction, making a nice little model to dissociate DAT-mediated effects of cocaine from those mediated by the serotonin and norepinephrine transporters. Selectivity being good in pharmacology, of course. It tends to cut down on "side effects" of eventual medications and the like.

What does this have to do with grant review? Well, it points out, first, that research ideas are based on observation. Sometimes weird ones, sometimes unique ones. The supposed "hypothesis" may not be that rigorous. But there is value in "hey, how come that caterpillar can feed on coca and not die?" and the like. Second, it shows us that one can be completely wrong in one's initial hypothesis, said hypothesis might even be a bit dubious at the outset, and it can still lead to some pretty interesting science. For example, I can imagine where the grant application on that topic would have been met with a critique like "uhh, but why are you cloning the DAT when the most obvious thing to look at is drug metabolism and excretion, fella?". And I can also imagine if they said in advance that they were going to clone the E. noyesi transporter and then do some knockin with DAT KO mice…well, let's just say it would've been triaged. This is one example where efforts to step back and look at the big picture during grant review would possibly pay off.