As everyone enjoys themselves predicting their h-index using this new tool, it returns us to talking about the measurement of science and the bean-counting of citations.

For those who are new: citations of your academic papers are good, and the more you have the better. All of this is well over 90% dependent on factors such as field size and vigor that have essentially nothing to do with the actual quality of your work itself.


Nevertheless, within some approximation of a related set of peer investigators who publish in roughly the same journals you do… well, the number of citations you get may have something to do with how cool your work is. So there’s that.

My point for today is an excessively narrow one. There are plenty of reasons why citations cannot be compared to each other. But I mention one reason that will forever be invisible to any bean-counting attempt to quantify your paper quality.

A citation is not a citation.

Sometimes a paper is cited in a fluffy or peripheral way: mentioned once in the Intro, lumped in with four other citations in support of a general point. Maybe even overgeneralized and gotten a bit wrong.

Sometimes, a paper is cited in a fundamental, formative way. It is an essential background motivation or concept around which the present work is constructed.

The latter is fantastic and means the paper really had impact.

The former can be little better than a marker for being in the game, and communicates very little other than the mere fact that you published a paper that popped up on the first page of a PubMed search or something. Or that happened to be lazily cascade-cited through a small thread of science.

The bean counting doesn’t give a rat’s patootie about which type of citation your paper received.