Howdy Folks!

DrugMonkey, foolishly inspired by a couple of posts on bike riding over at Chad’s Uncertain Principles lair, and a reminiscence of the glory days of Usenet news groups by Chad’s guestblogger Nathan, not to mention some TdF idiocy over at Dr. Joan Bushwell’s Chimpanzee Refuge where DM engaged in old-fashioned flamewar action with some fool named Hopper30111443627272 (approx.), ditto the complete and utter cock-up that has been the TdF and pro cycling in general in the past couple of years (ohh, we’ll get to those links, oh yes we will) and, I suspect, heatstroke, suggested “Hey, why don’t you blog on Drugs and Cycling over on Drugmonkey?” (get it? Drugs, Cycling, man, brilliant this guy!).

So BikeMonkey was born.

I’ll take the suggestion on nom de blog. I’ll be talking about cycling mostly but sure, we can rant about these dumbasses blood doping and taking drugs to ride faster. Cycling science? Waaal, I’m no expert but when did that ever stop an old rec.bicycles.* Usenet newsgroup hand, eh? So we might have to BS a bit on exercise physiology, cycling physiology and the like. Emphasis on the BS but, what the heck, we can talk peer reviewed science once in a while.

“About BikeMonkey…” for the curious:

  • He lives in cycling paradise, San Diego, CA. So posts may be sprinkled with rides and locations local to SD; try not to get too jealous!
  • He is old, fat, out-of-shape and has a RealLife. Some posts may trend towards those dreary old articles in Bicycling, “How My Bike Saved My Life” and such crap.
  • Ex-racer. Road, MTB, track. Purely amateur, no great shakes, not braggin’. This is not, however, going to be about cycle touring, old guys with beards on recumbents or the like.

Frequent commenter Physioprof has the call: this made me laugh my ass off:

http://www.sciencemag.org/cgi/content/full/317/5840/880/F1

S/he’s referring to a News of the Week item in the 17 Aug issue of Science, which reports on a survey of NSF applicants and reviewers. A quote from the article, authored by J. Mervis:

Sent last fall to everyone who submitted a research proposal to NSF in the past 3 years (more than half were also reviewers), the survey also paints a picture of the typical applicant. He’s someone (three-fourths are men) who underestimates his chances of success but would have a go regardless of the odds. He needs the money primarily to keep his lab intact and is prepared to try and try again if his initial application is rejected. He’s reviewed up to a half-dozen proposals for NSF in the past 12 months, sometimes cutting corners, and thinks that few contain potentially transformative ideas. Yet he believes his own research, if funded, stands a good chance of shattering the existing paradigm in the field.

As Physioprof notes, this accompanying figure tells us all we need to know about characters like this.

transformativeresearch.gif

Go read The mismeasurement of science, Lawrence PA. 2007. Curr. Biol. 17: R583-R585 (ht: evolgen). Really. Right now. It is a fantastic commentary on the detrimental effect impact-factor chasing and the like has on the course of scientific investigation. The title of the post is the concluding sentence.

I get into discussions about this problem from time to time. Although I’ve perhaps touched on the issues in blog posts once or twice, I’ve never done the full critique. And now I don’t have to, thanks to Lawrence’s commentary. [Update 08/16/07: Go see David Colquhoun’s GoodScience site for more on this.] Read the rest of this entry »

Fakin’ it

August 14, 2007

A comment on a recent post from Orac busting on the National Center for Complementary and Alternative Medicine (NCCAM) suggests that one should just say whatever to get the money out of NCCAM and then go on to work on real science. NCCAM, for those not aware, is not viewed fondly by most of the NIH extramural PI masses, who believe it to be pseudoscience at best. Me, I like their prior interest in “natural products”, “traditional medicine” and “herbal remedies”, but I really have no idea whether or not they support going after the underlying pharmacology, and there doesn’t appear to be any current interest. I’ve also been known to suggest that one should write grants that are “One Aim for Programmatic Interest and Two Aims for me, sounds good!”

Anyhow, the comment reminded me of a recent query from a colleague who wanted to know if I’ve yet just “faked up” a grant application. In the sense of starting out with the twin questions of “What is really fundable?” and “What can I do (read: make a plausible argument for my PI capabilities) to address this?” instead of “What is the most interesting next thing I want to do?”. Dear Reader, have you faked one up yet? Read the rest of this entry »

I mark the passing of Lew Seiden, a giant in behavioral pharmacology and related areas of research. He’s revered here at DrugMonkey for his lifetime of work on the toxicity of amphetamine-related drugs, most specifically methamphetamine and MDMA. As with many great scientists, his legacy is not only his body of published work but the host of scientific descendants who continue on with additional excellent work.

Writedit notes:

Science has published an elegant posthumous article by Daniel E. Koshland, Jr. entitled The Cha-Cha-Cha Theory of Scientific Discovery … representing the 3 categories of discovery: Charge, Challenge, and Chance. In brief:

“‘Charge’ discoveries solve problems that are quite obvious … ‘Challenge’ discoveries are a response to an accumulation of facts or concepts that are unexplained by or incongruous with scientific theories of the time … ‘Chance’ discoveries are those that are often called serendipitous and which Louis Pasteur felt favored ‘the prepared mind.’”

I want to go a little beyond writedit’s point, so I’ll quote more extensively from the article:

“Charge” discoveries solve problems that are quite obvious–cure heart disease, understand the movement of stars in the sky–but in which the way to solve the problem is not so clear. In these, the scientist is called on, as Nobel laureate Albert Szent-Györgyi put it, “to see what everyone else has seen and think what no one else has thought before.” Thus, the movement of stars in the sky and the fall of an apple from a tree were apparent to everyone, but Isaac Newton came up with the concept of gravity to explain it all in one great theory.

This is the wheelhouse of the NIH grant review game. Most applications identify problems that are understandable and have obvious importance. Then the applicants proceed to attempt to convince reviewers that they have a new brilliant way to solve the problem which is practically infallible. So far, so good, although we might debate the merits of needing a “practically infallible” approach to receive a good score.

“Challenge” discoveries are a response to an accumulation of facts or concepts that are unexplained by or incongruous with scientific theories of the time. The discoverer perceives that a new concept or a new theory is required to pull all the phenomena into one coherent whole. Sometimes the discoverer sees the anomalies and also provides the solution. Sometimes many people perceive the anomalies, but they wait for the discoverer to provide a new concept. Those individuals, whom we might call “uncoverers,” contribute greatly to science, but it is the individual who proposes the idea explaining all of the anomalies who deserves to be called a discoverer.

This one is a little trickier to grasp; the author identifies Watson and Crick’s “base pairing in the DNA double helix” as one exemplar. These applications don’t tend to do so well in grant review, mostly because the applicants propose critical experiments to “pull the phenomena into one coherent whole” and the skeptics go mad. First, the theoretical position is attacked. Second, the assumption that the critical experiments can pull the phenomena into a coherent whole is attacked. Finally, the reviewers come up with endless different experiments that they believe will be better than the ones listed. So the “Challenge” grant tends to suffer. I think a related point relevant to grant review is that most times we only find out in the doing: empirical solutions to theoretical problems. Too much of the time grant review focuses on predicting empirical outcome (“not the right experiments to prove the point”, “theory not right”) rather than deciding first if the phenomena are important, if the application is a good approach to an empirical solution (rather than a guarantee), whether the PI can conduct a reasonable empirical program (i.e., flexible changes based on outcome), etc.

“Chance” discoveries are those that are often called serendipitous and which Louis Pasteur felt favored “the prepared mind.” In this category are the instances of a chance event that the ready mind recognizes as important and then explains to other scientists. This category not only would include Pasteur’s discovery of optical activity (D and L isomers), but also W. C. Roentgen’s x-rays and Roy Plunkett’s Teflon. These scientists saw what no one else had seen or reported and were able to realize its importance.

The NIH grant process does not do at all well with this. The main problem is the obsession with “hypothesis testing” and the universal critique called “it’s a fishing expedition”. True, science needs hypothesis testing and much effort can be wasted if a plan is not focused. But somewhere along the way scientific culture has forgotten that all science starts with observation. Did I emphasize that enough? OBSERVATION! As in “Hey, here’s something cool about the natural world. Let me see if I can figure out if it is really true, how it works and what that might mean for understanding other cool stuff”. Really, isn’t this a sufficient description of the “scientific process”? 😛 So sometimes an application proposes some “let’s just kinda see what happens” experiments, which draws the ire of reviewers. “Where’s the hypothesis?” “It’s a fishing expedition!” they cry. I understand the point. I certainly see applications in which the prior published work of the PI and/or the sub-field suggests that such criticism is warranted and necessary. Ones in which “let’s just see what happens” seems to be the entire research program. However, I would submit that when the track record suggests that the PI knows how to test hypotheses, then perhaps a little credit should be extended for the one or two experiments that seem like discovery efforts.

Finally, I note that the author fits the Ten Commandments, the Magna Carta and the Bill of Rights into the Cha, Cha, Cha framework. So it must be a valid heuristic…

Looking over my pile of applications to review this time, I’m struck by a disconcerting point. Your proposal is not merely in micro-competition with others that are closely related to yours, you are sometimes going to be competing with exactly the same grants, round after revision round. It works like this… Read the rest of this entry »

I posted before on the tendency to apple polish in the response to prior review of your grant. I didn’t really think it would be necessary to address today’s point. Looking over some revised grants, though, I am reminded that people still need to be steeped in the basics, i.e., the basics of decent human social behavior, never mind the basics of grantsmanship… Read the rest of this entry »

Barely past the last of the submission deadlines for this round and study section duties already call. The first one is an easy one as the SRA circulates the list of applications for our section. Titles, PIs, and collaborators with their respective institutions. The only job here is to review for conflicts of interest (COI). Which is worth thinking about from the applicant perspective because COI determines if a given reviewer is prevented from seeing the preliminary critiques, hearing/participating in the discussion or hearing the recommended scores.

The first and firmest set of COI rules has to do with the institutions receiving the money. So any application from an institution the reviewer (or her/his spouse) works for, or is currently seeking a job with, is an automatic COI. Seems funny to have a higher degree of conflict with someone across campus at your big University, in a pretty different department, whom you never see and whose work you barely know, as contrasted with your close circle of people working in your subfield, eh? Remember, grant awards are to institutions, not PIs, and this rule comes from the sort of financial-interest conflict that most of the world worries about…

The second set of COI rules has to do with close interpersonal (note, not just “personal”) relationships. Previous or current spouses, obviously. But also collaborators (operationalized as co-published in the last three years), mentors/mentees (typically 5 years, but if the relationship is “ongoing”…) and competitors. These are soft rules in the sense that you “may be” in conflict, because there are too many varieties of relationships and situations to write comprehensive rules. Nevertheless, if your new postdoc’s thesis advisor is on the panel, good bet that person will be COI.

Okay, the next first impression is the title of the proposal. Impact? Hard to judge, but it is not neutral. Obviously you want to pique interest without being cutesy or inaccurate. Give some idea of the significance. I like to be able to determine the species being studied from the title, but this may be a personal preference. Just keep in mind that reviewers will see the title and collaborators with no other information, and those gears will start turning, ever so slowly.

Not much more the applicant can do about the rest. But for information’s sake, the reviewer also can start getting an estimate of what the “pile” is going to look like. First, one notes the revisions of apps one has seen before. Second, one notes titles of applications that seem close to the usual type of assignment. Thus, the reviewer can start estimating which grants are going to be in the pile, the workload (revisions usually are easier), etc. Should get the actual assignments and applications in a couple of weeks or so…

MarkH over at the denialism blog justifiably dissects a mainstream media article sensationalizing a study on the effects of marijuana smoking on lung function. He then goes on to express drug-legalization denialist (see this comment as well) positions as a rebuttal to the drug-risk overstatements. To this I take exception. Most specifically, I object to using mainstream media hype, and the usual authorial hyperbole that makes more of a study than is perhaps deserved, as grounds to question a reasonable interpretation of the available scientific evidence. I also find particularly annoying a minimization technique suggesting that since the effects of marijuana withdrawal do not look like nicotine withdrawal, marijuana must only be “psychologically” addicting, not “physiologically” addicting. I address myself to this relatively common argument, as expressed in this instance by MarkH.
First, what in the heck is the basis for the use of “psychologically” and “physiologically” addicting? Are you a dualist who maintains that there is a “mind” or “psyche” that exists independently of the workings of the brain? I’m not. So by definition, if something is altering behavior, it is “physiological”. If one is attempting to dissociate somatic from brain symptoms, well, good luck with that. Yes, there are some dissociable things, but the interplay is really too involved, and the ultimate response that matters, i.e. behavior, is primarily brain in nature. Or are you one of these people who believe that many (all?) mental/behavioral disorders are due to personal choice and lack of “willpower” and that people should just snap out of it? This is what people usually mean when they say something is “psychological”, and a contrast with “physiological” certainly suggests this is what one means. This is just as stupid for drug dependence as it is for major depression, ADHD and the like. MarkH commented that “physiological addiction has not been firmly established”. Really? How so? Take a little spin on PubMed for “cannabis withdrawal” or similar. The only way you can come up with an interpretation like Mark’s is by highly selective reading and denial of the literature, in my view.

Second, beware of goalpost juggling related to this issue. Is your frame of reference for “addiction” the rather dramatic phenomenon of heroin withdrawal (perhaps a movie-cartoon fictionalized version? not ALL heroin addicts look exactly alike in withdrawal severity) and presumably direct experience with nicotine withdrawal? The apparent severity, drama or somatic-pain-like nature of withdrawal from different drugs of abuse is not an exclusive indication of whether or not they are “physiologically addicting”. For that matter, it is not even relevant to a reasonable discussion of “how addicting”, because for this latter you need the behavior, i.e., continued drug taking in the context of a desire to quit, adverse consequences, etc.

Third, THC is a fairly unusual drug of abuse because it has a long half-life. Take a simplified version, say where withdrawal sets in once brain levels of drug / receptor occupancy decline. Symptoms can be alleviated by replacing the drug with a less effective but similarly acting one, like methadone for heroin, or a limited, sustained dose like the nicotine patch (this is agonist therapy). The long half-life of THC in the body means that you have ongoing agonist therapy (low levels of drug still present, say, the next day) through an interval in which withdrawal would be observed after discontinuing nicotine or heroin. Again, the point is not the relative severity of symptoms or ecological significance for the organism. The point is that a subjective understanding of whether or not the human is “physiologically addicted” is compromised without accounting for this difference in pharmacokinetics. This time, try using “precipitated” in your search to review experimental ways around this issue.
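A toy first-order elimination model makes the pharmacokinetic point concrete. The half-life values below are illustrative assumptions for the sketch, not clinical figures: nicotine’s half-life is on the order of hours, while THC’s effective terminal half-life, with sequestration in fat, runs to days.

```python
def fraction_remaining(hours_elapsed, half_life_hours):
    """Fraction of drug remaining under simple first-order elimination."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# Illustrative half-lives (assumptions for this sketch, not clinical values):
# nicotine ~2 h; THC, sequestered in fat, on the order of days (~96 h here).
nicotine_left = fraction_remaining(24, 2)   # essentially gone by the next day
thc_left = fraction_remaining(24, 96)       # most of the drug still on board

print(f"nicotine remaining after 24 h: {nicotine_left:.4f}")
print(f"THC remaining after 24 h: {thc_left:.2f}")
```

Under these toy numbers, a day after the last dose the nicotine user has effectively zero drug on board while the THC user still carries most of a dose, i.e., built-in agonist therapy through the window where abrupt withdrawal would otherwise appear.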

Fourth, the “my friend Joe” argument (“My friend Joe smoked dope all day every day for 10 years and then just quit one day with no lasting symptoms”, or similar), common to many legalize-it types, is just as flawed as the “one toke/injection and you are hooked for life” ReeferMadness-type position. Donny and Dierker 2007 report that almost 40% of daily smokers (at least 10 / day for at least 10 years) don’t reach DSM criteria for dependence! Anthony et al 1994 (Exp Clin Psychopharm) shows us the conditional probability of meeting dependence criteria given that you have experienced a drug at least once. Not the only way to calculate such a thing, of course. But it suggests that only about 8% of cannabis smokers are dependent and, wait for it, only about 24% of heroin injectors are dependent. The relative population prevalences are much greater for cannabis (46% versus 1.5% in that sample); thus many, many more people are dependent on cannabis than on heroin. Odds are a given person knows a lot of cannabis users and next to zero heroin users. Even if one does know a lot of heroin users, the chance that a given user is dependent is much greater. Finally, because most people’s assessment of “dependence” is biased for the drama as discussed above, a subjective impression of dependence is wildly off base.
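The arithmetic behind that comparison is simple conditional probability. A quick sketch, treating the figures quoted above from Anthony et al 1994 as point estimates purely for illustration:

```python
# Figures as quoted in the post from Anthony et al 1994 (illustrative only).
p_dep_given_use = {"cannabis": 0.08, "heroin": 0.24}   # P(dependent | ever used)
p_ever_used     = {"cannabis": 0.46, "heroin": 0.015}  # P(ever used) in the sample

for drug in p_dep_given_use:
    # P(dependent) in the whole population = P(dependent | used) * P(used)
    p_dep = p_dep_given_use[drug] * p_ever_used[drug]
    print(f"{drug}: {p_dep:.4f} of the population dependent")
# cannabis: 0.08 * 0.46  = 0.0368
# heroin:   0.24 * 0.015 = 0.0036
```

So even though a heroin user is roughly three times as likely to become dependent, the population carries about ten times as many cannabis-dependent people as heroin-dependent ones, which is the asymmetry driving the “my friend Joe” intuition.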

Ultimately, what I am trying to get at here is that the legalize-it crowd is just as prone to misusing science as is the bogey-man of the great Gov/DrugCzar/NIDA/Republican conspiracy to keep dope illegal. Reefer madness approaches are nutso. So are people who feel that chronic THC exposure is perfectly benign. So are people who believe that THC is not addicting. As someone who spends a good deal of time trying to parse the actual risks of drugs of abuse scientifically, I find both types of misuse / ignorance of the available science distasteful.