Talking about the h-index reminded me of how I really feel about citation metrics. This post went up on Sept 21, 2007.


People argue back and forth over whether Impact Factor of journals, the h-index, Total Cites, specific paper cites, etc should be used as the primary assessment of scientific quality. Many folks talk out of both sides of their mouths, bemoaning the irrelevance of journal Impact Factor while beavering away to get their papers into those journals and using the criterion to judge others. In this you will note people arguing the case that makes their CV look the best. I have a proposal:

I like to use ISI’s web of knowledge thingy to keep track of who is citing our* papers. Oftentimes I’ll pull up a few papers I haven’t seen yet that are related to our work.

Fortunately, I don’t have consistent cause to review other performance metrics having to do with my pubs because the whole thing kind of gives me a headache.

But I do, now and again, look at the h-index and ponder it. I’m not quite grasping what it tells us, other than one’s longevity in science, but whatever. Seems to me that if you take a slice of your own approximate science-age cohort, then it might be at least somewhat meaningful.

I have a bunch of peers in my approximate subfields, of my approximate career status and, most importantly, who started publishing at approximately the same time as I did. This is the “hold all else equal” analysis, or at least as close as it comes.

I recently took a look at the citation reports of some folks that I think, in a very general sense, have been kicking my ass on the metrics. Particularly salient to me is the rate of publications flying out with their names on them, since I see them pass by TOC and PubMed topic alerts. And in many cases the graph of pubs per year on ISI’s web of knowledge confirms that impression. But the number of *citations* per year seems to feature a lot less variance than I would think.

Hmm I says to myself.

Then I look at the h-indices and find even less variance than I would have thought.

So now I’m back to trying to grasp what this measure really means. In an intuitive sense, I mean; I grasp the definition**.

If someone has a significantly larger number of papers, this should result in a higher h-index, right? I mean just going by the odds of what is going to lead to greater or fewer numbers of citations. If someone has been publishing over a longer stretch of time, ditto, as the citations accumulate. And I grasp the notion that different subfields of science are going to be more or less active, citation-wise. But when you start making comparisons between individual scientists who have approximately the same length of publishing history in approximately the same subfields, you should be able to use this h-index more accurately. It should say something meaningfully different. And I’m not seeing that right now.

Unless you argue that, regardless of total numbers of published articles that might be anywhere from 1.5- to 3-fold higher, the scientists in this grouping only manage to pump out about the same number of papers that attract “top-X” levels of citation interest within the field?
__
*and I’m increasingly using this as a tool to track through literature that cites other specific papers in subfields. I’m getting more and more impressed with this as a time saver for my rapidly atrophying brain.

**The h-index as defined by the creator Hirsch: A scientist has index h if h of [his/her] Np papers have at least h citations each, and the other (Np − h) papers have at most h citations each.
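For the sake of concreteness, here is a minimal sketch in Python of how that definition plays out (the citation counts are invented, purely illustrative). It also shows the saturation effect I’m gesturing at above: a scientist with far more papers can land on essentially the same h if the additional papers are lightly cited.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each
    (Hirsch's definition)."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts for two hypothetical scientists of the same
# science-age. Scientist B has twice as many papers, but the extra papers
# are lightly cited, so the h-index does not budge.
scientist_a = [45, 38, 30, 22, 18, 15, 12, 10, 9, 5, 3, 2]
scientist_b = scientist_a + [4, 3, 3, 2, 2, 1, 1, 1, 0, 0, 0, 0]

print(h_index(scientist_a))  # 9
print(h_index(scientist_b))  # 9
```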

Comrade PhysioProf alerted me to a couple of powerpoint presentations (converted to pdf in the links) of possible interest to the NIH grant geeks in the audience.

The first one seems to be from Toni Scarpa to the CSR Advisory Council on May 2.
Lots of interesting data on each slide but I’ll pick out a few things I noticed.

-Slide 18, the number of PIs submitting grants and the number of applications per PI over the past decade. I’ve always thought the NIH paid too much attention to per-application success rate and not enough to per-PI success rate. Nice to see. The range runs from 1.3 to 1.6 Research Project Grant applications submitted per PI each year. Shows how skewed some of our experiences are, but it fits with the data on the Rockey blog showing that most PIs only carry 1 or 2 grants.

-Slides 21 and 22 show a faster relative increase in the number of R21s submitted over the past 10 years compared with R01s. I’m sure we all know the reasons but it is interesting to see.

-Slide 23 (and Slide 5) testifies to Scarpa’s crusade to get more grants reviewed with fewer reviewers. I happen to disagree with this (I saw a little bit of this trend during my term of service) but no doubt the cost savings are tremendous.

-Slide 25 continues the cost-savings theme. In particular it is interesting to think about the savings associated with having more online, electronic reviews versus decreasing the number of actual reviewers. I’m not a big fan of the online, asynchronous review but then I’m not a fan of losing specific expertise either. These are not easy tradeoffs to make, clearly. Slide 41 seems to indicate the cost per application might be cut to a fifth by using online instead of face-to-face review mechanisms.

-Slide 33 has an interesting note about 1.7 million views, presumably for the study section description he was using as an example? That would be an interesting way to track the relative interest/load/etc for a given study section.

-Slide 35 shows the trend of A1 applications being turned around for the very next study section round, but it would be a lot more useful expressed as a percentage of all A1s that were put in…

-Slide 40, testifying to how much reviewers like the online review methods, is nice but it sure as hell needs a lot more context. The results should be broken down by those who have served (or are willing to serve) on regular study section panels versus those who have not, and should delve into the reasons for being happy with online formats, etc. Not to mention some additional queries on how reviewers feel applications fare under the online format. For example, there are definitely times when I would say “no way, no how” to a study section that required me to visit Bethesda but might take on a phone or online review duty. That gets my expertise (such as it is) engaged where it otherwise would not have been involved. However, I also think applicants are not well served by the online review, all else equal.

-Slide 27 reiterates the claim that it is a common complaint that there are not enough senior/experienced reviewers. I’d still like to see some expansion on who is making this complaint, on what basis, and how it is verified in fact. In contrast, the complaints about speed, burden on reviewers and favoring predictable research over innovation seem a lot more grounded in things that can be quantified and reasonably well described.

-Slide 44 continues this theme with the heading “Recruiting the best reviewers” on a slide which reports the number of Full, Associate and Assistant rank reviewers over 1998-2008. You can just see the start of the great Scarpa purge of Asst Profs (I do wonder why this slide is not updated to 2010). Again, no apparent explanation as to the justification for conflating “best” with seniority. Slide 45 lists ways in which they have been convincing more reviewers to serve but again is pretty light on showing how this gets them more of the “best” reviewers. Somehow I feel confident Scarpa didn’t really expand on this in his presentation….

-Slide 54: I really like that the percentage of New Investigators is tracked all the way back to 1962! w00t! All of their trends should go back that far.

Okay, that’s enough for you to chew over for now….

Look folks, the NIH made it pretty clear when they created the ESI designation that the NI designation, absent ESI qualification, was going to disappear. Borrowed time, IMO. So it is a little silly to parse individual IC policy statements and claim that, because you weren’t warned specifically, this is some kind of underhanded throwing under the bus or changing of the rules in midstream.

I mean, if the latter is the principle, I’m sure you were on the barricades when they started extending the extra payline cutoff points to the ESI and/or NI applications, right?

The recent meeting of the College on Problems of Drug Dependence featured a very well attended session on the emerging recreational drugs that are best described generically as synthetic cannabis. Popularly, these are frequently referred to as K2 or Spice, which seem to be the best known of the initial market brandings. One of the first identified and most frequently observed active compounds in these products was JWH-018, so you may see this term as well.
The past year or two has seen an explosion in the use and supply of synthetic cannabinoid “incense” products in the US and worldwide. The basic nature of the product is well established: small packets (typically 3 g in the US) of dried plant material marketed as “incense” and marked “not for human consumption” that are priced well above what you might expect; in the range of $60 at my local fine inhalation glassware establishments, last I checked. Upon analysis, these products are typically found to contain a variety of plant materials, but also to be adulterated with a host of drug compounds that have agonist effects at the CB1 receptor.
As you are aware, the most active psychotropic compound found in cannabis, Δ9-tetrahydrocannabinol (THC), confers the majority of its effects through partial agonist activity at CB1 receptors.
In short, these “incense” products are a constructed, synthetic mimic of cannabis. Since the active ingredients are, in many cases, full agonists, the maximum CB1 activation can potentially be higher than you could achieve with any dose of the partial agonist THC.
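To put a toy number on that last point, here is a minimal sketch of a standard Emax concentration-response model comparing a partial agonist to a full agonist acting at the same receptor. The EC50 and efficacy values are assumed for illustration only; they are not measured CB1 pharmacology for THC or JWH-018.

```python
def emax_response(conc, ec50, efficacy, emax=100.0):
    """Simple Emax model: percent of maximal receptor response at a given
    agonist concentration. 'efficacy' scales the attainable ceiling
    (1.0 = full agonist, <1.0 = partial agonist)."""
    return efficacy * emax * conc / (ec50 + conc)

# Illustrative, made-up parameters (same hypothetical potency, different efficacy).
for conc in (1, 10, 100, 1000):
    partial = emax_response(conc, ec50=10.0, efficacy=0.4)  # "THC-like" partial agonist
    full = emax_response(conc, ec50=10.0, efficacy=1.0)     # full-agonist constituent
    print(f"conc={conc:>4}: partial={partial:5.1f}%  full={full:5.1f}%")

# Even at saturating concentrations the partial agonist plateaus around 40%
# of the maximal response, while the full agonist approaches 100%.
```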


Did I mention I enjoy learning more about the neurobiological and behavioral effects of recreational drugs as well as the development and treatment of addictions?

The College on Problems of Drug Dependence will be holding their annual meeting in Hollywood, Florida this upcoming week. I’ve been going through the Itinerary Planner and Program Book to get a preview. There are a few presentations that touch on topics that we’ve blogged about here at the DrugMonkey blog, including:

-treating the hyponatremia associated with MDMA-induced medical emergency

-vaccination against drug abuse

-exercise as a potential therapy for, or antidote against, stimulant drug addiction

-JWH-018 and other synthetic cannabinoid constituents of Spice/K2 and similar “incense” products

-some preclinical studies on mephedrone / 4-methylmethcathinone

-presentations from the DEA on scheduling actions that are in progress

I’m certainly looking forward to seeing a lot of interesting new data over the next week.

I wish to extend my warmest congratulations to our long-term reader, annoyer of cobloggers, holder of feet to the fire and all-around insightful and hilarious commenter becca, who announced the successful defense of her dissertation today.

7 years…6 committee members…5 giant full lab notebooks…4 manuscripts/drafts…3 Thesis Advisors…2 grey hairs…1 PhD

You can tell from this and from the occasional details she posts in comments around the scientific blogs that it has not been an easy road. And yet she has persevered and succeeded in being awarded the Ph.D. for her work.

Congratulations, my friend.

Congratulations on all of your hard work, late nights and frustrating experiments.

I am very much looking forward to the tales of your next endeavors as a scientist.