H-index

December 18, 2014

I can’t think of a time when seeing someone’s h-index created a discordant view of their impact. Or, for that matter, a time when reviewing someone’s annual cites was surprising.

I just think the Gestalt impression you generate about a scientist is going to correlate with most quantification measures.

Unless there are weird outliers, I suppose. But if there is something peculiar about a given scientist’s publications that skews one particular measure of awesomeness… wouldn’t someone being presented with that measure discount accordingly?

Like if an h-index were boosted by a host of middle-author contributions in a much more highly cited domain than the one most people associate you with? That sort of thing.

36 Responses to “H-index”

  1. Neuro-conservative Says:

    I was surprised to learn, after all of CPP’s braggadocio, that his h-index is lower than DM’s!

  2. mytchondria Says:

    All I can say is THANK JAYSUS I’m a zillionth author on a ‘Guidelines’ paper cuz BA-BAM! H-index badassery, amirite??

  3. qaz Says:

I’ve found that within very restricted regions, these indices and quantifications tend to track. But we’ve talked many times on this blog about how these indices do not translate across fields (math vs. physics vs. biology) or subfields (systems neuroscience vs. molecular/subcellular neuroscience). They don’t even translate across subcomponents within a field. For example, in my experience, people in neuroscience cite experiments much more than they cite theory, even when they design the experiments explicitly to test that theory: experiments get cited if they are vaguely related, but theory becomes accepted zeitgeist and doesn’t get cited anymore.

What I’ve found is that when I look at an H-index and ask “is that reasonable for that person?” I tend to say “sure, I buy that.” But that’s a different question than looking at a rank ordering of H-indices and asking whether I agree with that ordering. I have not found that ordering tends to be well preserved in these quantifications.

  4. drugmonkey Says:

    Do you mentally account for the biggest sources of lean in the h-index? Age and subfield, I mean?

  5. AcademicLurker Says:

Qaz nailed it. When I’m already familiar with someone’s contributions to the field, I don’t think I’ve ever been surprised by their H-index. But if I were handed H-indexes for 2 researchers I knew nothing about, I wouldn’t be comfortable drawing any conclusions based just on that, unless they worked in very closely related areas.

  6. The Other Dave Says:

    Yea, I agree with others, and there’s not much use hand-wringing over it. H-index and things like that are just tools. If people find them useful, they’ll pay attention to them. If they don’t think they’re useful, they’ll ignore them. Some fields obviously find H-index et al more useful than others.

    What bugs me is when things like the H-index are used outside a subfield. For example: Promotion & tenure committees. Some on the committee may find the H-index useful, but that doesn’t mean that it accurately represents productivity in the candidate’s field.

In fact, that’s my only problem with these indices. They can be useful, but they’re too often misused. Like guns: a few bad people misuse them, so no one should get to have them.

  7. qaz Says:

    @TOD: I don’t think that H-index is important within a subfield. Are there really any subfields in which you don’t know everybody? In all the subfields I know of, specific papers, specific results, and training lineages are much more informative than H-index for identifying the few people I don’t know directly.

    @DM: Yes, part of what makes me say “I buy that” is because I know the person, their age and where they fit within their subfield.

    The only reason H-indices (or other quantifications) matter for anything more than a drinking game is because administrators are using them.

  8. whizbang Says:

    The H index should only be used within a given field/subfield, and only at a given career stage. If a P&T committee were using it to judge me, they should compare my index to other pediatric nephrologists at the full professor level with 23 years of work after fellowship.

    It’s better than counting glamour pubs, but I’m not certain that it adds much value above everyone’s general impression of someone.

  9. The Other Dave Says:

qaz: My stuff is sort of wide-ranging. I honestly can’t check a subfield narrower than ‘molecular neuroscience’ or ‘cell biology’, and even those sometimes fail me; they don’t include my highest-cited paper!

Back in my PhD days I knew everyone in the field, and they knew me. I sort of miss that. But I publish in a lot of ‘better’ journals now.

  10. Ola Says:

My only surprises have been on the old/faded graybeard front, i.e., old farts who seem (based on their reputation within the institution) to be super-famous and big contributors, who use phrases like “as we discovered in 19xx” in their talks, and then they actually turn out to have h-indices of, like, 30. A number like that is pitiful for someone “continuously funded by NIH for 25 years”. There are several such PIs out there.

    From my own experience, h-index is like compound interest – start saving for retirement in your 20s and you’re already ahead of the game. Get your name on a couple of review articles as an undergrad and reap the benefits 15 years from now. If you’re in grad school and not sending your thesis proposal in as a review article, you’re missing out on future “earnings”.

  11. jmz4 Says:

I think that it’s a fine metric, especially when you’re looking for postdoc and grad school labs, but it definitely falls flat where we need it most: evaluating junior researchers on the job market.
That being said, I can’t really think of anything that WOULD be a good metric for evaluating those candidates, though having a paper with a really good altmetric score would probably catch my eye if I were on a hiring committee.

  12. drugmonkey Says:

    Never seen one of those, Ola. Interesting.

  13. MoBio Says:

    @OLA and others…

    h-Index is not particularly useful for those few, stellar scientists who have made truly foundational discoveries and who simply do not publish a lot of papers.

For instance, Linda Buck: 52 total papers via PubMed, with one for which she received the Nobel Prize in 2004. Kary Mullis would be another example, as would Francis Crick and James Watson.

    Also, probably not entirely useful for predicting future success….

  14. MoBio Says:

@jmz4: Having sat on a number of promotion committees, I cannot remember a single example where an ‘altmetric score’ had any effect (FWIW)

  15. thorazine Says:

    Altmetric scores are basically a measure of whether or not your institution decided to press-release your work.

  16. The Other Dave Says:

    You guys have all read this, right?

    Daniel E. Acuna, Stefano Allesina & Konrad P. Kording, Future impact: Predicting scientific success, Nature 489, 201–202 (13 September 2012) doi:10.1038/489201a

    And this!

    http://klab.smpp.northwestern.edu/h-index.html

    The press release:
    http://www.northwestern.edu/newscenter/stories/2012/09/kording-scientist-predictions.html

  19. The Other Dave Says:

    Read these:

    http://www.northwestern.edu/newscenter/stories/2012/09/kording-scientist-predictions.html

    Daniel E. Acuna, Stefano Allesina & Konrad P. Kording, Future impact: Predicting scientific success, Nature 489, 201–202 (13 September 2012) doi:10.1038/489201a

    Here is a handy online tool:

    http://klab.smpp.northwestern.edu/h-index.html

  20. The Other Dave Says:

    I have been trying to post this but this site is locking me out. I guess it has decided I am a bot. Let’s see if removing some links & stuff helps…

    Daniel E. Acuna, Stefano Allesina & Konrad P. Kording, Future impact: Predicting scientific success, Nature 489, 201–202 (13 September 2012) doi:10.1038/489201a

    klab.smpp.northwestern.edu/h-index.html

  22. The Other Dave Says:

    Whoa…. this site is going weird with my comments. It says I already posted, so I change a word. Still nothing. Then I shorten it. Then whammo it barfs all my comments out at once. Sorry.

  23. E rook Says:

M-index might be more interesting for evaluating early careers. M = h/n, where n = # of years since first pub. Supposedly: 1–2 is normal, 2–3 is very good, 3 is star, <1 is less productive than expected. Maybe field-specific, etc etc.

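[The m-index E rook describes (and the h-index it is built on) is straightforward to compute. A minimal sketch in Python; the citation counts and the eight-year career length in the example are hypothetical:]

```python
def h_index(citations):
    """Largest h such that h of the papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank cites
        else:
            break
    return h

def m_index(citations, years_since_first_pub):
    """E rook's M = h/n, with n = years since first publication."""
    return h_index(citations) / years_since_first_pub
```

[For example, a researcher whose papers have been cited 10, 8, 5, 4, and 3 times has h = 4; eight years after the first publication, that gives m = 4/8 = 0.5, which by the scale above would read as "less productive than expected".]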
  24. drugmonkey Says:

    More than one link in a post, btw, sends it to moderation and you’ll have to wait until I see it.

  25. Paul Says:

Check out the following paper, which suggests that the h-index is typically well estimated by the formula 0.54 × sqrt(# citations):

rnoti-p1040.pdf (AMS Notices)

  26. Juan Lopez Says:

    Serial mentor has a very interesting post on the h-index:
    http://serialmentor.com/blog/2014/12/8/relationship-between-h-index-and-total-citations-count

In brief: h-index ≈ 0.54 × sqrt(NumCitations)

  27. The Other Dave Says:

    DM: No, this was a weird new thing the last few days. I click the ‘Post Comment’ and the page reloads but with nothing on it. I go ‘huh?’ and try again and that’s when it says I already posted it. But it’s not posted. I recognize the moderation hold because it still shows the post in that case (for me) but says ‘your post is awaiting moderation’. I tried both Safari and Firefox and got the same thing. So it isn’t browser-specific. Maybe just my system.

    The only loss is that you’ll never get to read my 2015 goals in the other thread. I thought they were hilarious. But then again, I was half drunk, and I always think I’m brilliant anyway. So maybe no loss for you guys.

  28. The Other Dave Says:

    Anyway, seems to be fixed now. Maybe it was a North Korean attack on my commenting.

  29. Juan Lopez Says:

E rook: The m-index sounds interesting. Perhaps it should use the first publication as first author.

    Also, h index and citation counts include all of our papers. Should these be different if we are not first or last authors? I know of some people with high citation counts, but I think differently of them if I realize they are a middle author and their own papers are far less cited. Looks to me like they are benefiting from being in the right lab.

Some people and institutions only consider non-self citations.

  30. E rook Says:

Juan, I agree… that’s much of what I meant by the “etc etc”: the standard caveats apply, but it gives a snapshot of a trajectory. For new investigators, it is driven by the output of a few years and can jump from 1.5 to 2.5 within a particularly productive period (after a grant was funded); for an older investigator, it reflects the level of sustained contributions (and, perhaps, the staying power of the earlier work). I didn’t invent it, but it gives me something to work toward.

  31. The Other Dave Says:

I had never heard of the M-index before. But it makes sense and is in line with a standard for productivity that I have heard ever since I got into science. Basically: if you’re not publishing at least one good paper a year, you have a problem. If you’re publishing a few, you’re doing well.

    I’ve seen that rule of thumb applied explicitly during grant reviews, when discussing productivity. Never heard h-index mentioned.

  32. AcademicLurker Says:

    Although one issue with M is that it penalizes people who get on a paper as an undergraduate. I suppose it could be redefined as “first paper as a grad student”.

  33. Eli Rabett Says:

FWIW, Eli looks first at whether there are one or two papers with hundreds of cites.

  34. E rook Says:

AL: good point. In practice, comparing myself to a specific coworker, I disregard their pub from their masters in this calculation. It’s funny thinking about this “one good paper per year” thing, because it requires sustained foresight and planning to make it happen… one family misfortune and your timeline is screwed, which, apparently, gets discussed in grant review (where supposedly the merits of teh scienze rule).

  35. Busy Says:

    Sure, the h-index correlates to the number of citations. It was always meant to do so for the majority of cases, while penalizing those few who publish too much chaff.

    If a measure is good it will correlate to many other alternate measures.

Yet amateur critics are surprised when they find such correlations. Then they try to use said correlations as a reason to disqualify the measure, as opposed to what it really is: confirmation that the signal is meaningful.

To wit, a prominent physicist blogger used the correlation of +1 per year at work for honest, hard-working R1 physicists as a scathing indictment of the h-index, instead of what it really is: confirmation that if you do good work the h-index unfailingly registers it.

The h-index was always meant to be a “harder to subvert than straight citation counting” measure. Since most people at the top don’t spend time playing citation games, is it surprising that it correlates well with that, as per the AMS study?
