Well this news is big. HUGE, in fact.

It is no secret that these two guys’ political content draws the lion’s share of the traffic to Sb. Meaning two-thirds or better, day in and year out.

I will be absolutely fascinated to see how much remains at Sb after they move to freethoughtblogs.com on Monday.

My prediction is: Very Little.

reposting from my blog….

Midshipman fish (Porichthys genus, not sure about species).

I’ve noticed a peculiar thing lately.

One of my collaborations over the years led to papers that seemed to garner a higher citation rate than other aspects of my work. At first it was ever so slightly disturbing that the collaborative papers got more attention, or so I thought. Over time, as I grew to understand that citation rate is HUGELY skewed by field size and vigor, I stopped worrying about such matters.

The funny thing is that my other papers have been catching up. Turns out that the collaborative work tended to have bigger initial splashes but much less staying power. Some of my other areas of interest have much lower initial citation rates in the first few years after the work was published. But the cites just keep ticking away at about the same low rate over time.

I find this gratifying. Sure, it would be nicer to have more people citing my work in the initial couple of years; anyone who says otherwise is nuts. But to have the work continue to be relevant to people’s scientific thinking over time is pretty important to me as well. So cite for cite, I think I’d actually prefer mine spread out more. Maybe even over a timeline as long as a decade.

So what about you folks? If you were forced to choose between initial big splash papers that disappeared and ones that had a slow and steady citation rate, which would you prefer? Which gratifies you more as a scientist?

What crossover point would you see as justifying slow/steady over big splash? Five years? Ten years? Twenty?
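The crossover question above is just arithmetic on cumulative citation counts. Here is a minimal sketch, with purely made-up citation profiles (the numbers are illustrative, not drawn from my actual papers): a "big splash" paper whose yearly cites decay to zero versus a "slow and steady" paper that ticks along at a constant rate.

```python
# Illustrative only: compare cumulative citations for a "big splash"
# paper (high initial rate, decaying) vs. a "slow and steady" paper
# (constant low rate), and find the crossover year.

def crossover_year(splash_rates, steady_rate, horizon=30):
    """Return the first year at which the steady paper's cumulative
    citations meet or exceed the splash paper's, or None if never
    within the horizon."""
    splash_total = steady_total = 0.0
    for year in range(1, horizon + 1):
        # The splash paper's yearly rate drops to zero after the
        # listed years; the steady paper keeps accruing.
        if year <= len(splash_rates):
            splash_total += splash_rates[year - 1]
        steady_total += steady_rate
        if steady_total >= splash_total:
            return year
    return None

# Hypothetical profiles: the splash paper gets 20, 12, 6, 3, 1 cites
# and then fades; the steady paper ticks along at 5 per year.
print(crossover_year([20, 12, 6, 3, 1], 5))  # -> 9
```

With those assumed numbers the steady paper pulls even in year nine, which is roughly the decade-scale timeline mused about above.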

This is fantastic.

…CSR is piloting a new program that we call the early career reviewer, where we will take complete novice reviewers, people who have not reviewed for NIH before, very early in their career, probably new investigators.

More thoughts on the matter from Your Humble Narrator and Prof-Like Substance.

Oh, Dear Reader. I cannot tell you how happy I am about a new initiative of the NIH. As our longer term readers know, the PhysioProf and I have occasionally observed that serving on NIH study section panels that review grants is an invaluable experience in learning how the review process works. I mean how it really works. With all of its advantages, limitations and flaws.

It is my hypothesis, hard to prove of course, that this leads to improved grantsmithing of your own proposals. It also is my hypothesis that this leads to better strategic thinking about the process and therefore your career. At the very least, I conclude that having actual grant reviewing experience allows a more sanguine response to the inevitable disappointing reviews of some of your own proposals.

I have also ranted at some length about efforts in prior years to decrease the number of Assistant Professor rank participants on study sections. Going by the CSR data presentations I’ve seen, Assistant Professors never topped more than about 10% of all reviewers. Given that they were more likely to be ad hoc and perhaps to be assigned lighter review loads I conclude that the number of reviews written by Assistant Professors was considerably lower than 10%…at the high water mark. Then Scarpa laid down a series of efforts to purge Assistant Professors but I’ve not seen the current numbers.

I have also noted that it is peculiar that this particular class of applicants should be underrepresented by active discrimination, given the explicit mandates for CSR to have geographic, ethnic, University type, sex and topic diversity represented. Particularly when the NIH happens to notice that this class of applicant, particularly the newest of the newly applying Assistant Professors, seems to receive discriminatory review. No? Then why all the New Investigator checkbox and ESI initiative stuff? Eh? Right. The NIH has recognized that the less experienced of their applicants take it on the chin and unjustifiably so. This is why they put their heavy thumb of correction on the scales. After review.

Well, the glass is now half-way full. Or maybe a quarter full, but still. We’re seeing the pendulum swing back…ever so slightly. A transcript of a podcast posted at an NIH site has this clue:

Cathie [Cooper; a SRO]: That’s true. And even though we generally use more senior and experienced reviewers on our panel because they have the depth and breadth of expertise that allows them to give a more knowledgeable assessment of the applications, we’re very careful to include junior scientists and younger investigators on the panel, as well, and part of the reason is that when I use a newer investigator, generally not a brand new investigator, but a newer investigator, they always tell me after the review meeting they’ve learned so much about how to write a grant. So I really look forward to including them on the panels. In addition, CSR is piloting a new program that we call the early career reviewer, where we will take complete novice reviewers, people who have not reviewed for NIH before, very early in their career, probably new investigators.

My emphasis at the end.

I confess I got wind of this a little bit ago and have been trying to nail down something hard and citable so that I could blog it. Rumors are flying all about and I’ve been drawing together bits and pieces where I might. Here’s my current state of understanding.

The CSR SROs will be encouraged to seek out reviewers who have not yet received NIH funding. These individuals are to be invited for one or two ad hoc-type stints with a review load of no more than two grants. (Apparently Scarpa wanted these to be non-reviewing visits, so clearly he hasn’t had any change of heart on the actual review front, only to the extent that n00bs might benefit from service. My understanding is that such visits would be illegal under authorizing legislation or Federal advisory rules or something; hence the minimal load.) It is possible that these must not be primary assignments either. The SROs are being encouraged to invite no more than one of these per section meeting, but to have one for pretty much all of them.

One issue on which there has been less clarity is the identity of these Early Career Reviewers and their respective host Universities. It appears that there is an effort to prioritize people who work at Universities without copious amounts of NIH funding and to prioritize individuals who are underrepresented in science. Nevertheless it also appears to be the case that this will, in fact, be extended to any and all comers out of a sense of fairness. White American heteronormative doods from coastal mega-NIH-funded Universities welcome! [ahem]

So, Dear Reader, where do you come in? Well, if you are a n00b junior faculty member, this is the time to get your CV in front of the SROs of your most relevant study sections. You can cold-send your CV on the basis of that podcast comment, tell ’em DM sent you, or just email and say you “heard a rumor”. You can get your Associate Professor peers who are on those sections, or who have been on those sections, to send your name/CV for you. You can get your Program Officer to put in a good word. Remember that SROs are busy people…anything you can do to ease their job helps. If they have a dozen CVs in front of them without any work, well, why would they go out and drum up more candidates? So send your CV their way!

If you are a more experienced investigator, and especially if you are on email terms with an SRO or two, go ahead and send her/him some names. What can it hurt?

NIH head of Extramural Research Sally Rockey has a post up defending peer review.

There has been much talk on the peer review process in our blog comments recently. We are extremely proud of our peer review system and its acceptance as one of the premier systems of peer review around the world. Our peer review system is based on a partnership between NIH staff and our reviewers. Each year we review about 80,000 applications with the help of tens of thousands of outside experts. I greatly appreciate those who have served and encourage everyone to participate in the process when you can.

The reason for this post seems to be one prolific commenter who has a bone to pick and he just keeps getting nuttier. The last exchange was a trigger:

I merely express my firm opinion, based on my own numerous experiences and without undermining the rules of the respected blog – that is why I am restricted from providing any specific examples. Should my respected opponent be interested in seeing these specific examples, I shall be very happy to share them in a private manner.

“numerous experiences”. yeah. so have we all. Had numerous experiences. Mine come from my *cough*cough* years of putting in anywhere from ~2-6 proposals per year, to a 4 year term of service on a study section (~60-100 apps per round), to having local departmental colleagues with similar experiences and through writing a blog that fields many comments from other NIH funded investigators.

I hesitate to suggest I have a full picture of NIH grant review, and I seek data from the broader NIH-wide perspective wherever possible to buttress my very limited personal experiences. Rockey’s post says they review about 80,000 applications per year. I don’t think anyone’s personal experience as an applicant, ad hoc reviewer or even multi-term appointed reviewer is all that comprehensive.

– break- I’m going to return to this thought later-

I have a post up over at Sb News on substituted cathinone stimulants, aka “bath salts” that discusses a Sunday New York Times piece on “bath salts”.

The short version is:

and also this.

[update: the post is now at https://drugmonkey.wordpress.com/2011/07/19/news_on_substituted_cathinone/]

The New York Times had a piece up Sunday entitled “An Alarming New Stimulant, Legal in Many States“. I was alerted to this by David Kroll, who reposted some prior comments at his Take as Directed blog. I’ve also been getting some traffic from a BoingBoing link from Maggie Koerth-Baker to an older post of mine, so I thought I’d better address a couple of points that jump out at me.

First and foremost, the reader should be extremely cautious whenever two different drugs are conflated under one purported street name, even if they are structurally quite similar and some human reports suggest overlapping properties. In the case of “bath salts”, there is quite a bit of confusion over whether a news account is referring to 4-methylmethcathinone (4-MMC) or methylenedioxypyrovalerone (MDPV), sees no difference between them* or doesn’t know whether there is any difference.

Read the rest of this entry »

Swedeland!

July 18, 2011

First, congrats to the Swedish World Cup team for coming in third and, in particular, for beating the French.

Second, congrats on the K9 corps dog thing. I would have expected no less, but still. You got it done.

The main point of the day, however, is much better. Here in the US we call the type of overattentive, smothering parent that makes you hurl* a “helicopter” parent. They are always hovering over their child, you see, just waiting to rescue little Maria from calamity. Or obsessing over the wondrousness of their child’s average, normal behavior or something.

I have been informed that the Swedish call their version of this “curling” parents. As in the sport of curling. Now those of you who are not Canucklanders or Swedes may not immediately recognize this sublime reference. You may recall a half glimpsed interlude on the teevee during the Winter Olympics. When the coverage switches off that riveting bobsledding and you decide you have time to hit the loo. Perhaps when you return you are momentarily graced with some idiots madly sweeping the ice in front of a lumbering bit of rounded granite. Rendering an unobstructed path yet even more polished and smooth so as to further ease passage of the object of their devotion.

That’s curling. And the folks who are madly sweeping an apparently smooth sheet of ice? Those would be your image of the “curling parents”.

Evocative, isn’t it?

__
*I keed, I keed. (Most of those I link-picked on know via other online venues that I’m the worst sort of braggart about my awesome offspring….)

The graph is from an analysis by Stuart Rojstaczer & Christopher Healy which looks at college/university level grades at 200 US institutions of higher education (via Catherine Rampell, via Mike the Mad and Isis).

For some very odd reason nobody seems to be hypothesizing that the demographic of undergraduates that experienced the greatest uptick in A grades and the greatest reduction in C grades was simply smarter than prior generations.

That should be the null hypothesis, right? I mean, it is pretty far fetched of Ms. Rampell and Dr. Isis to blame this on professors’ motivation to keep kids out of ‘nam, since that should have affected the D and F grades, no?

Mike the Mad wants to blame it on graduate school admittance and the competition for limited slots….but can this really explain the late-’60s-to-early-’70s trends?

More likely to explain this is the incredible self-indulgence and self-righteousness of the late Boomers and the overweening generational meme that traditional “standards” were arbitrary constructs and have no bearing on proper evaluation. I’m okay, you’re okay and all that nonsense. and above all else, the sense of universal, personal entitlement.

and what do you know, as soon as the children of these people start hitting the Universities, up go the grades again.

I cannot put it any more clearly than this comment from CL at writedit’s blog:

Image the reviewer as you…or worse. Imaging the person you are writing for is someone who is stressed about their funding, trying to get papers out that are getting rejected, overworked and tired, fighting to get the right people in the lab, juggling teaching/clinical work, getting flack from their spousal equivalence unit for not helping out, their dog bit them, they are stuck in a middle seat in coach on the way to study section and sleep deprived. NOW they are picking up your grant to read.

about all I have to add is to note that this person is, furthermore, not a specific expert in your subfield.

If you can’t write clearly enough to get this person excited, your odds are somewhat lower than dodgy.

Cheaters

July 18, 2011

It’s an old story for the teaching professors in the audience, I realize. But this story made me profoundly sad. I mean WTF? I never, ever thought seriously about cheating on class work in my rather lengthy schooling career. Not to get a desired grade, not to make up for laziness or excessive weekend behavior, not for any reason.

Well, I suppose we know where the scientific data fakers come from. This population of undergrads which thinks cheating is a-ok.

Go read that bit and tell me it doesn’t make you sad….

A poll for my readers. Do, or did, you read the grant proposals that support the work that you are doing? I’m curious about that at all levels- from undergrad to tech to grad student to postdoc.

I don’t think I had any idea what was in the grants or even what grants supported my work until my last postdoc. In that one, I was given all the proposals and I certainly read them.

How about you? Have you read the grant applications that funded your work at each training stage?

A link from writedit pointed me to a review of drugs that were approved in the US with an eye to how they were identified. Swinney and Anthony (2011) identified 259 agents that were approved by the US FDA between 1999 and 2008. They then identified 75 which were “first in class”, i.e., not just me-too drugs or new formulations of existing drugs or whatnot. There were 20 imaging agents, not further discussed, and 164 “follower” drugs.

The review also focused mostly on small molecule drugs instead of “biologics” because of an assumption that the latter are all exclusively “target based” discoveries. The main interest was in determining if the remaining small molecule drugs were discovered the smart way or the dumb way. That’s my formulation of what the authors term “target based screening” (which may include “molecular mechanism of action”) discovery and “phenotypic screening” type of discovery. As they put it:

The strengths of the target-based approach include the ability to apply molecular and chemical knowledge to investigate specific molecular hypotheses, and the ability to apply both small-molecule screening strategies (which can often be achieved using high-throughput formats) and biologic-based approaches, such as identifying monoclonal antibodies. A disadvantage of the target-based approach is that the solution to the specific molecular hypotheses may not be relevant to the disease pathogenesis or provide a sufficient therapeutic index.

A strength of the phenotypic approach is that the assays do not require prior understanding of the molecular mechanism of action (MMOA), and activity in such assays might be translated into therapeutic impact in a given disease state more effectively than in target-based assays, which are often more artificial. A disadvantage of phenotypic screening approaches is the challenge of optimizing the molecular properties of candidate drugs without the design parameters provided by prior knowledge of the MMOA.

You will note that this is related to some comments I made previously about mouse models of depression.

The authors found that 28 of the first-in-class new molecular entities (NMEs) were discovered via phenotypic screening, 17 via target-based approaches and 5 via making synthetic mimics of existing natural compounds. To give you a flavor of what phenotypic screening means:

For example, the oxazolidinone antibiotics (such as linezolid) were initially discovered as inhibitors of Gram-positive bacteria but were subsequently shown to be protein synthesis inhibitors that target an early step in the binding of N-formylmethionyl-tRNA to the ribosome

and for target based approaches:

A computer-assisted drug design strategy that was based on the crystal structure of the influenza viral neuraminidase led to the identification of zanamivir

The authors even ventured to distinguish discovery approaches by disease area:

Evaluation of the discovery strategy by disease area showed that a phenotypic approach was the most successful for central nervous system disorders and infectious diseases, whereas target-based approaches were most successful in cancer, infectious diseases and metabolic diseases

Unsurprising of course, given that our state of understanding of nervous system disorders is, to most viewers, considerably less complete in comparison with some other health conditions. You would expect that if there are multiple targets or targets are essentially unknown, all you are left with are the predictive phenotypic models.

Of the follower drugs, 51% were identified by target-based discovery and 18% by phenotypic screening. This is perhaps slightly surprising given that, in the case of the me-too drugs, you would think target-based discovery would be more heavily dominant. Perhaps we can think of a drug which initially looked to be dominated by property X but which, in phenotypic screening, behaved more like a property-Y type of drug.
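For the record, the follower-drug percentages quoted above translate into rough counts like so (the "other" remainder simply covers approaches the percentages don't account for; the review itself should be consulted for the exact breakdown):

```python
# Quick arithmetic on the follower-drug figures from Swinney and
# Anthony (2011): of 164 follower drugs, 51% were reported as
# target-based discoveries and 18% as phenotypic screening hits.

followers = 164
target_based = round(0.51 * followers)   # about 84 drugs
phenotypic = round(0.18 * followers)     # about 30 drugs
other = followers - target_based - phenotypic  # remainder, ~50

print(target_based, phenotypic, other)  # -> 84 30 50
```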

The authors’ take on this is that it is slightly surprising how poorly target-based discovery performed within a context of what they describe as a period in which a lot of effort and faith was placed behind the target-based approaches. I suspect this is going to be in the eye of the beholder but I certainly agree. I can’t really go into the details but there are areas where my professional career is…affected, let us say…by the smart/dumb axis of drug discovery. It should be obvious to my longer term readers that I align most closely with animal models of various things related to health and neurobiology, and so therefore you may safely conclude that I have a bias for phenotypic screening. And even in the case of the target-based discovery:

at least three hypotheses that must be correct to result in a new drug. The first hypothesis, which also applies to other discovery approaches, is that activity in the preclinical screens that are used to select a drug candidate will translate effectively into clinically meaningful activity in patients. The other two hypotheses are that the target that is selected is important in human disease and that the MMOA of drug candidates at the target in question is one that is capable of achieving the desired biological response.

Right. You still need good phenotypic models and ultimately you are going to have to pass human clinical trials. The authors further worry that this higher burden, especially the requirement of knowing the MMOA, is going to lead to some misses.

in the case of phenotypic-based screening approaches, assuming that a screening assay that translates effectively to human disease is available or can be identified, a potential key advantage of this approach over target-based approaches is that there is no preconceived idea of the MMOA and target hypothesis.

Ultimately I think this review argues quite effectively for an “all hands on deck” approach to drug discovery, but it can’t help but come off as a strong caution to the folks who think that “smarter” (aka “rational drug design”) is the only solution. Yes, this points the finger at Francis Collins’ big thrust for a new translational IC at the NIH, but also at the BigPharma companies that seem to be shedding their traditional models-based, phenotypic discovery research units as fast as they can. No matter which side you come down on, this is a great read with lots to think about for those of us who are interested in the discovery of new medicines.
__
Swinney, D., & Anthony, J. (2011). How were new medicines discovered? Nature Reviews Drug Discovery, 10 (7), 507-519 DOI: 10.1038/nrd3480

Following some chatter at the Rock Talk blog I ran across some very interesting news from the NHLBI:

The NHLBI will continue a commitment to help ESIs by a policy of maintaining separate paylines for new competing (Type 1) R01 and First Renewal (Type 2) applications in accordance with NIH guidelines. Regardless of amendment status, the paylines for new competing (Type 1) and First Renewal (Type 2) ESI R01 applications will be 5 percentile points above the regular R01 paylines for unamended (A0) applications in FY 2011. In addition and also regardless of amendment status, new competing (Type 1) ESI R01 applications that are >5 but <=10 percentile points above the regular R01 paylines for unamended (A0) applications in FY 2011 may undergo an expedited review to resolve comments in the summary statement. The funding policies will apply to all new competing (Type 1) and First Renewal (Type 2) ESI R01 applications under special funding consideration regardless of the amendment status of the application. All awards to ESI applicants under this policy will be funded for all years recommended by the NHLBAC. Please note that the NHLBI considers both NI and ESI status to have been determined at the time of the initial A0 grant application submission.
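For those keeping score, the policy above boils down to two bands above the regular payline. A minimal sketch, using a hypothetical regular payline (the actual FY2011 NHLBI paylines are not stated in the quoted text, and the function name is mine):

```python
# A sketch of the NHLBI ESI policy as quoted above, with a
# hypothetical regular payline. Lower percentile = better score.

def esi_r01_outcome(percentile, regular_payline):
    """Classify an ESI Type 1/Type 2 R01 under the stated policy:
    within 5 percentile points above the regular A0 payline -> fund;
    >5 but <=10 points above -> possible expedited review of
    summary statement concerns; otherwise not funded."""
    if percentile <= regular_payline + 5:
        return "fund (ESI payline)"
    elif percentile <= regular_payline + 10:
        return "eligible for expedited review"
    return "not funded"

# Assuming a regular payline at the 10th percentile:
print(esi_r01_outcome(14, 10))  # within the +5 band
print(esi_r01_outcome(19, 10))  # in the >5 to <=10 band
print(esi_r01_outcome(25, 10))  # outside both bands
```

In other words, an ESI application scoring at the 14th percentile against a 10th-percentile regular payline gets funded outright, while one at the 19th merely gets the expedited-review consolation prize.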

This is going to really, really anger the late Assistant Prof folks out there who are looking down the barrel of tenure decisions. Decisions in which the ability to renew their first (very hard-won) award looms large.

As well it should anger them.

Read the rest of this entry »