Talking about the h-index reminded me of how I really feel about citation metrics. This post went up on Sept 21, 2007.


People argue back and forth over whether the Impact Factor of journals, the h-index, total cites, specific paper cites, etc. should be used as the primary assessment of scientific quality. Many folks talk out of both sides of their mouths, bemoaning the irrelevance of journal Impact Factor while beavering away to get their papers into those journals and using the criterion to judge others. In this you will note people arguing the case that makes their own CV look the best. I have a proposal:

I like to use ISI’s Web of Knowledge thingy to keep track of who is citing our* papers. Oftentimes I’ll pull up a few that I haven’t seen yet that are related to our work.

Fortunately, I don’t have consistent cause to review other performance metrics having to do with my pubs because the whole thing kind of gives me a headache.

But I do, now and again, look at the h-index and ponder it. I’m not quite grasping what it tells us, other than one’s longevity in science, but whatever. Seems to me that if you take a slice of your own approximate science-age cohort, then it might be at least somewhat meaningful.

I have a bunch of peers in my approximate subfields, of my approximate career status and, most importantly, who started publishing at approximately the same time as I did. This is the “hold all else equal” analysis, or at least as close as it comes.

I recently took a look at the citation reports of some folks who, I think, in a very general sense have been kicking my ass on the metrics. Particularly salient to me is the rate of publications flying out with their names on them, since I see them pass by in TOC and PubMed topic alerts. And in many cases the graph of pubs per year on ISI’s Web of Knowledge confirms that impression. But the number of *citations* per year seems to feature a lot less variance than I would think.

Hmm I says to myself.

Then I look at the h-indices and find even less variance than I would have thought.

So now I’m back to trying to grasp what this measure really means. In an intuitive sense, I mean; I grasp the definition**.

If someone has a significantly larger number of papers, this should result in a higher h-index, right? I mean just going by the odds of what is going to lead to greater or fewer numbers of citations. If someone has been publishing for longer, ditto, as the citations accumulate. And I grasp the notion that different subfields of science are going to be more or less active, citation-wise. But when you start making comparisons between individual scientists who have approximately the same length of publishing history in approximately the same subfields, you should be able to use this h-index more accurately. It should say something meaningfully different. And I’m not seeing that right now.

Unless you argue that, regardless of publication counts that might be anywhere from 1.5- to 3-fold higher, the scientists in this grouping only manage to pump out about the same number of papers that draw “top-X” amounts of citation interest within the field?
__
*and I’m increasingly using this as a tool to track through literature that cites other specific papers in subfields. I’m getting more and more impressed with this as a time saver for my rapidly atrophying brain.

**The h-index as defined by the creator Hirsch: A scientist has index h if h of [his/her] Np papers have at least h citations each, and the other (Np − h) papers have at most h citations each.
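
To make the definition concrete, here is a minimal sketch in Python of how you could compute an h-index from a list of per-paper citation counts. This is just my own illustration (the citation numbers are made up), not anything ISI actually runs:

```python
def h_index(citations):
    """Return h such that h papers have at least h citations each."""
    # Rank papers by citations, most-cited first, so paper i (1-indexed)
    # is the i-th most cited paper.
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, count in enumerate(ranked, start=1):
        if count >= i:
            h = i  # at least i papers have >= i citations
        else:
            break
    return h

# Made-up example: five papers cited 10, 8, 5, 4 and 3 times.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

One thing the sketch makes obvious is that piling up lightly cited papers does nothing for h, which may be part of why the publication-rate differences in my cohort barely show up in the index.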

Comrade PhysioProf alerted me to a couple of PowerPoint presentations (converted to PDF in the links) of possible interest to the NIH grant geeks in the audience.

The first one seems to be from Toni Scarpa to the CSR Advisory Council on May 2.
Lots of interesting data on each slide but I’ll pick out a few things I noticed.

-Slide 18, the number of PIs submitting grants and the number of applications per PI over the past decade. I’ve always thought the NIH paid too much attention to per-application success rates and not enough to per-PI success rates. Nice to see. The range is 1.3 to 1.6 Research Project grant applications submitted per PI each year. Shows how skewed some of our experiences are, but fits with the data on the Rockey blog showing that most PIs only carry 1 or 2 grants.

-Slides 21-22 show a faster relative increase in the number of R21s submitted over the past 10 years compared with R01s. I’m sure we all know the reasons, but it is interesting to see.

-Slide 23 (and Slide 5) testifies to Scarpa’s crusade to get more grants reviewed with fewer reviewers. I happen to disagree with this (I saw a little bit of this trend during my term of service) but no doubt the cost savings are tremendous.

-Slide 25 continues the cost-savings theme. In particular it is interesting to think about the savings associated with having more online, electronic reviews versus decreasing the number of actual reviewers. I’m not a big fan of the online, asynchronous review but then I’m not a fan of losing specific expertise either. These are not easy tradeoffs to make, clearly. Slide 41 seems to indicate the cost per application might be cut to a fifth by using online instead of face-to-face review mechanisms.

-Slide 33 has an interesting note about 1.7 million views, presumably for the study section description he was using as an example? That would be an interesting way to track the relative interest/load/etc for a given study section.

-Slide 35 shows trends in turning the A1 back in at the very next study section round, but it would be a lot more useful as a percentage of all A1s that were put in…

-Slide 40, testifying on how reviewers like the online review methods, is nice but it sure as hell needs a lot more context. It should be broken down by those who have served (or are willing to serve) on regular study section panels versus those who are not, and should delve into the reasons for being happy with online formats, etc. Not to mention some additional queries on how reviewers feel the outcomes are for applications. For example, there are definitely times when I would say “no way, no how” to a study section that required me to visit Bethesda but might take on a phone or online review duty. That gets my expertise (such as it is) engaged where it otherwise would not have been involved. However, I also think applicants are not well served by the online review, all else equal.

-Slide 27 reiterates the supposedly common complaint that there are not enough senior/experienced reviewers. I’d still like to see some expansion on who is making this complaint, on what basis, and how it is verified in fact. In contrast, the complaints about speed, burden on reviewers and favoring predictable research over innovation seem a lot more based on things that can be quantified and reasonably well described.

-Slide 44 continues this theme with the heading “Recruiting the best reviewers” on a slide which reports the number of Full, Associate and Assistant rank reviewers over 1998-2008. You can just see the start of the great Scarpa purge of Asst Profs (I do wonder why this slide is not updated to 2010). Again, there is no apparent explanation of the justification for conflating “best” with seniority. Slide 45 has ways in which they have been convincing more reviewers to serve, but again is pretty light on showing where this means they get more of the “best” reviewers. Somehow I feel confident Scarpa didn’t really expand on this in his presentation….

-Slide 54: I really like the percentage of New Investigators, tracked since 1962! w00t! All of their trends should go back that far.

Okay, that’s enough for you to chew over for now….

Look folks, the NIH made it pretty clear when they created the ESI designation that the NI designation absent ESI qualification was going to disappear. Borrowed time, IMO. So it is a little silly to parse individual IC policy statements and claim that, if you weren’t warned specifically, this is some kind of underhanded throwing under the bus or changing of the rules in midstream.

I mean, if the latter is the principle, I’m sure you were on the barricades when they started extending the extra payline cutoff points to the ESI and/or NI applications, right?

The recent meeting of the College on Problems of Drug Dependence featured a very well attended session on the emerging recreational drugs that are best described generically as synthetic cannabis. Popularly these are frequently referred to as K2 or Spice, as these seem to be the best known of the initial market branding. One of the first identified and most frequently observed active compounds in these products was JWH-018, so you may see this term as well.

The past year or two has seen an explosion in the use and supply of synthetic cannabinoid “incense” products in the US and worldwide. The basic nature of the product is well established: small packets (typically 3 g in the US) of dried plant material marketed as “incense” and marked “not for human consumption” that are priced well above what you might expect, in the range of $60 at my local fine inhalation glassware establishments, last I checked. Upon analysis, these products are typically found to contain a variety of plant materials, but also to be adulterated with a host of drug compounds that have agonist effects at the CB1 receptor.

As you are aware, the most active psychotropic compound found in cannabis, Δ9-tetrahydrocannabinol (THC), confers the majority of its effects through partial agonist activity at CB1 receptors.

In short, these “incense” products are a constructed, synthetic mimic of cannabis. Since the active ingredients are, in many cases, full agonists, the maximum CB1 activation can potentially be higher than you could achieve with any dose of the partial agonist THC.
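
To see why the full-agonist point matters, consider a toy Emax-style dose-effect sketch in Python. The parameter values (EC50, intrinsic efficacy) are invented purely for illustration; they are not measured values for THC or JWH-018:

```python
# Toy Emax model contrasting a partial and a full agonist at the same receptor.
# All parameter values below are invented for illustration only.
def receptor_effect(dose, ec50, intrinsic_efficacy):
    """Fractional receptor response under a simple Emax model."""
    return intrinsic_efficacy * dose / (dose + ec50)

for dose in (1, 10, 100, 1000):
    partial = receptor_effect(dose, ec50=10.0, intrinsic_efficacy=0.4)  # THC-like
    full = receptor_effect(dose, ec50=10.0, intrinsic_efficacy=1.0)     # full agonist
    print(f"dose {dose:>4}: partial agonist {partial:.2f}, full agonist {full:.2f}")

# No matter how high the dose, the partial agonist plateaus at 0.4 of the
# maximal response, while the full agonist can approach 1.0.
```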


Did I mention I enjoy learning more about the neurobiological and behavioral effects of recreational drugs as well as the development and treatment of addictions?

The College on Problems of Drug Dependence will be holding their annual meeting in Hollywood, Florida this upcoming week. I’ve been going through the Itinerary Planner and Program Book to get a preview. There are a few presentations that touch on topics that we’ve blogged about here at the DrugMonkey blog, including:

-treating the hyponatremia associated with MDMA-induced medical emergency

-vaccination against drug abuse

-exercise as a potential therapy for, or antidote against, stimulant drug addiction

-JWH-018 and other synthetic cannabinoid constituents of Spice/K2 and similar “incense” products

-some preclinical studies on mephedrone / 4-methylmethcathinone

-presentations from the DEA on scheduling actions that are in progress

I’m certainly looking forward to seeing a lot of interesting new data over the next week.

I wish to extend my warmest congratulations to our long-term reader, annoyer of cobloggers, holder of feet to the fire and all-around insightful and hilarious commenter becca, who announced the successful defense of her dissertation today.

7 years…6 committee members…5 giant full lab notebooks…4 manuscripts/drafts…3 Thesis Advisors…2 grey hairs…1 PhD

You can tell from this and from the occasional details she posts in comments around the scientific blogs that it has not been an easy road. And yet she has persevered and succeeded in being awarded the Ph.D. for her work.

Congratulations, my friend.

Congratulations on all of your hard work, late nights and frustrating experiments.

I am very much looking forward to the tales of your next endeavors as a scientist.

Three years ago Ed Yong of Not Exactly Rocket Science asked his readers a simple question:

1) Tell me about you. Who are you? Do you have a background in science? If so, what draws you here as opposed to meatier, more academic fare? And if not, what brought you here and why have you stayed? Let loose with those comments.

2) Tell someone else about this blog and in particular, try and choose someone who’s not a scientist but who you think might be interested in the type of stuff found in this blog. Ever had family members or groups of friends who’ve been giving you strange, pitying looks when you try to wax scientific on them? Send ’em here and let’s see what they say.

I found the comments in response to this fascinating and used the excuse to meme it here. Things kinda took off after that.



The Seattle PI’s Big Blog (Covering Seattle news, weather, arts and conversation, along with a grab bag of stuff that’s just plain interesting) has an article up covering an animal rights extremist group’s billboard campaign in their fair city. What is interesting about this is the rather even-handed set of poll options they have chosen.

  • Eye opening. Glad they’re spreading the message.
  • Unfair and sensationalistic.
  • Disturbing, but it’s a message we all need to hear.
  • Just plain incorrect.
  • Boring. I wish [AR extremist group] would go back to naked demonstrations.

Naturally I encourage you to visit the poll and cast your vote.
__
My point about this being a refreshing change can be best understood by reading this post.

Wikipedia meme: MDMA

June 10, 2011

via Pascale

1. MDMA
2. Entactogen
3. Psychoactive drug
4. Chemical substance
5. Chemistry
6. Science
7. Knowledge
8. Description
9. Rhetorical modes
10. Exposition (which goes to Expository writing)
11. Writing
12. Language
13. Human
14. Living
15. Biology
16. Natural science
17. Naturalistic (goes to Naturalism (philosophy))
18. Philosophical (goes to Philosophy)

We occasionally lose track of this fact. Our stance towards the reviewers of our manuscripts can be fairly antagonistic. After all, we wouldn’t have submitted the dang thing in the first place if we didn’t think it was ready for publication as-is, right?

It doesn’t help that one of the manuscripts we have out right now has drawn reviewer fire for some of the more maddening reasons. Basically a difference of opinion on interpretation, background and context: my view (even apart from my own manuscripts, thank you!) is that if the data are sound, well analyzed and placed in a context that is supported, it is not my place to hold up publication just because my interpretation differs. So these kinds of “discussions” during peer review don’t really please me.

Another paper we have in the submission process is a different matter. I have a little less confidence than usual that we know what is what when it comes to our findings. I am really keen on seeing what reviewers have to say about it. I am looking forward to what I think is the start of a pretty cool discussion; hopefully in the sense of additional data, models and papers resulting, because I have a sense this little subarea is about to take off. Not that we’re going to be the spark, mind you. Just that we’re getting into some stuff that a bunch of other usual-suspect labs can do, and they have all the same reasons that we do to delve into the questions. There are, however, a whole lot of ways to get into the question and model the behavior.

We took one approach and I am pretty interested in what the reviewers are going to think. Will they buy that our kewl effect is actually interesting? Will they come up with a whole ’nother context in which it should be framed or interpreted, or will they sign up for our view on the phenomenon?

Can’t wait for the reviews to come in on this one…..

A Twitt from the Foundation for Alcohol Research (@AlcoholResearch) today caught my attention. It sounded to me like the usual slippery slope of creating human health prescriptions from limited scientific findings.

Teen athletes may drink more, but smoke less & use fewer drugs. How do you lessen your teen’s risk? http://bit.ly/mysWna

The link is to their newsletter, which overviews a paper by Terry-McElrath and O’Malley, currently in pre-print at Addiction. The overview is pretty straightforward, based closely on the paper, and eschews the problem with the Twitt, which was the question as to whether you could “lessen your teen’s risk”. So I am mostly mollified.

The paper in question reports data from a survey of over 11,000 US high school seniors (classes of 1986-2001), captured as seniors and then followed longitudinally until age 26. These data were collected as part of the Monitoring the Future study, which we discuss quite frequently on this blog.

The key focus of this paper is on the amount of physical activity the surveyed HS seniors reported at first contact. The Participation in Sports, Athletics or Exercising (PSAE) measure was derived:

…by asking, “How often do you actively participate in sports, athletics or exercising” (1=never, 2=a few times a year, 3=once or twice a month, 4=at least once a week, 5=almost every day).
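
For concreteness, here is a minimal sketch (my own illustration, not the authors’ code) of how verbatim responses to that single item map onto the 1-5 PSAE codes:

```python
# Hypothetical coding of the single PSAE survey item quoted above;
# the category labels come from the item, the code itself is illustrative.
PSAE_CODES = {
    "never": 1,
    "a few times a year": 2,
    "once or twice a month": 3,
    "at least once a week": 4,
    "almost every day": 5,
}

def psae_score(response: str) -> int:
    """Map a verbatim survey response to its 1-5 PSAE code."""
    return PSAE_CODES[response.strip().lower()]

print(psae_score("Almost every day"))  # prints 5
```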


Many of you have noted the posts over at the NIGMS Feedback Loop blog that discuss the number of publications arising from their awarded grants. The last one was a geekalicious doozy.

Ever since the first one I’ve been anticipating the use of this information in grant submissions.

I figure it is most likely to arise as a response to a prior critique about productivity. Especially for competing continuation proposals that were dinged for modest productivity.

But heck, maybe this should be a preemptive strike on the first submission of the competing continuation?

And of course I have also been eagerly awaiting the deployment of such information during the grant review process. I have been on panels a time or two in which soft statements about “fantastic productivity” or “surprisingly limited productivity” have been matters of discussion. Sometimes contentiously. This is a place where some normative performance data could come in handy.

I wrote what???

June 5, 2011

After you submit a grant, how many days before you can bear to look at it?