Biased objective metrics

October 19, 2021

As you know, Dear Reader, one of the things that annoys me the most is being put in the position of having to actually defend Glam, no matter how tangentially. So I’m irritated.

Today’s annoyance is related to the perennial discussion of using metrics such as the Journal Impact Factor of the journals in which a professorial candidate’s papers are published as a way to prioritize candidates in a job search. You can add the h-index and per-paper citation counts of the candidate to this heap if you like.

The Savvy Scientist in these discussions is very sure that since these measures, ostensibly objective, are in fact subject to “bias”, this renders them risible as useful decision criteria.

We then typically downshift to someone yelling about how the only one true way to evaluate a scientist is to READ HER PAPERS and make your decisions accordingly. About “merit”. About who is better and who is worse as a scientist. About who should make the short list. About who should be offered employment.

The Savvy Scientist may even demonstrate that they are a Savvy Woke Scientist by yelling about how the clear biases in objective metrics of scientific ability and accomplishment work to the disfavor of non-majoritarians. To hinder the advancement of diversity goals by under-counting the qualities of URM scientists, women, those of less famous training pedigree, etc.

So obviously all decisions should be made by a handful of people on a hiring committee reading papers deeply and meaningfully offering their informed view on merit. Because the only possible reason that academic science uses those silly, risibly useless, so-called objective measures is that everyone is too lazy to do the hard work.

What gets lost in all of this is any thinking about WHY we have reason to use objective measures in the first place.

Nobody, in their Savvy Scientist ranting, seems to ever consider this. They fail to consider the incredibly biased subjectivity of a handful of profs reading papers and deciding whether they are good, impactful, important, creative, etc., etc.

Even before we get to the vagaries of scientific interests, there are hugely unjustified interpersonal biases in evaluating work products. We know this from the studies where legal briefs were de/misidentified. We can infer this from various resume-callback studies. We can infer this from citation homophily studies. Have you not ever heard fellow scientists say stuff like “well, I just don’t trust the work from that lab”? Or “nobody can replicate their work”? I sure have. From people who should know better. And whenever I challenge them as to why… let us just say the reasons are not objective. And don’t even get me started about the “replication crisis” and how it applies to such statements.

Then, even absent any sort of interpersonal bias, we get to the vast array of scientific biases that are dressed up as objective merit evaluations but really just boil down to “I say this is good because it is what I am interested in” or “because they do things the way I do.”

Citation metrics are an attempt to crowdsource that quality evaluation so as to minimize the input of any particular bias.

That, for the slower members of the group, is a VERY GOOD THING!

The proper response to an objective measure that is subject to (known) biases is not to throw the baby out onto the midden heap of completely subjective “merit” evaluation.

The proper response is to account for the (known) biases.
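To put a point on “account for the biases”: here is a minimal sketch, in Python, of one common-sense correction, comparing a paper’s citations to a field-and-age baseline instead of using raw counts. Every number, field name, and baseline below is hypothetical and purely illustrative; the only claim is that a known bias (field size, paper age) can be adjusted for rather than used as an excuse to throw the whole metric out.

```python
# Toy sketch: adjust citation counts for a known bias (field norms and paper
# age) instead of discarding the metric. All baselines are hypothetical.

# Hypothetical expected citations per paper, keyed by (field, years since
# publication), drawn from whatever reference corpus you trust.
FIELD_BASELINE = {
    ("behavioral neuroscience", 5): 12.0,
    ("molecular biology", 5): 30.0,
}

def field_normalized_citations(papers):
    """Average ratio of each paper's citations to its field/age baseline.

    `papers` is a list of dicts with keys 'citations', 'field',
    'years_since_pub'. A score of 1.0 means "cited about as much as a
    typical paper of that field and age."
    """
    ratios = []
    for p in papers:
        baseline = FIELD_BASELINE.get((p["field"], p["years_since_pub"]))
        if baseline:  # skip papers with no available baseline
            ratios.append(p["citations"] / baseline)
    return sum(ratios) / len(ratios) if ratios else None

# Two candidates with identical raw counts look quite different once field
# citation norms are taken into account.
candidate_a = [{"citations": 24, "field": "behavioral neuroscience", "years_since_pub": 5}]
candidate_b = [{"citations": 24, "field": "molecular biology", "years_since_pub": 5}]
print(field_normalized_citations(candidate_a))  # 2.0 -- well above field norm
print(field_normalized_citations(candidate_b))  # 0.8 -- slightly below field norm
```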
