A smear campaign against Impact Factors…and the Sheep of Science

August 13, 2012

Stephen Curry has a nice, lengthy diatribe against the Impact Factor up over at the Occam’s Typewriter collective. It is an excellent review of the problems associated with the growing dominance of the journal Impact Factor in the careers of scientists.

I am particularly impressed by:

It is time to start a smear campaign so that nobody will look at them without thinking of their ill effects, so that nobody will mention them uncritically without feeling a prick of shame.

Well, of course I would be impressed, wouldn’t I? I’ve been on the smear campaign for some time.

The problem I have with Curry’s post is the suggestion that we continue to need some mechanism, previously filled by journal identity/prestige, as a way to filter the scientific literature. As he quoted from a previous Nature EIC:

“nobody wants to have to wade through a morass of papers of hugely mixed quality, so how will the more interesting papers […] get noticed as such?”

This is the standard bollocks from those who have a direct or indirect interest in the GlamourMag game. Stephen Curry responds a bit too tepidly for my taste:

The trick will be to crowd-source the task.

Ya think?

Look, one of the primary tasks of a scientist is to sift through the literature. To review data that has been presented by other scientists and to decide, for herself, where these data fit. Are they good quality but dull? Exciting but limited? Need verification? Require validation in other assays? Gold-plated genius ready for Stockholm?

This. Is. What. We. Do!!!!!!

And yeah, we “crowdsource” it. We discuss papers with our colleagues. Lab heads and trainees alike. We come back to a paper we’ve read 20 times and find some new detail that is critical for understanding something else.

This notion that we need help “sifting” through the vast literature and that that help is to be provided by professional editors at Science and Nature who tell us what we need to pay attention to is nonsense.

And acutely detrimental to the progress of science.

I mean really. You are going to take a handful of journals and let them tell you (and your several hundred closest sub-field peers) what to work on? What is most important to pursue? Really?

That isn’t science.

That’s sheep herding.

And guess what scientists? You are the sheep in this scenario.

Responses to “A smear campaign against Impact Factors…and the Sheep of Science”


  1. Stephen makes the interesting observation that the impact factor wasn’t designed for scientists to find the most prestigious journals to aspire to publish in but rather for librarians to find journals that maybe weren’t absolutely essential to subscribe to given limited budgets. Reminds me of the fact that IQ tests weren’t originally developed to find the smartest people but rather to identify cases of mental disability.


  2. neuromusic Says:

    “The trick will be to crowd-source the task.”

    Here’s an idea for how a crowd-sourcing measure of impact would work:

    If a published paper is relevant to and informs my work, I do something kind of like a “like” on Facebook… we’ll call it a “citation”.

    Then, we’ll add up all the “citations” that a published paper gets and rank journals based on how many “citations” their papers get, compared to the total number of papers they publish!
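    (Just to make the arithmetic explicit, here is a toy sketch in Python of the journal-level ratio I am describing. The journal names and citation counts are invented, and the real IF calculation restricts citations to a two-year window, but the basic idea is the same: citations divided by papers.)

        # Toy "journal impact" calculation: total citations to a journal's
        # papers divided by the number of papers it published.
        # All data below are invented for illustration.
        papers = [
            {"journal": "J. Neurosci. Widgets", "citations": 42},
            {"journal": "J. Neurosci. Widgets", "citations": 3},
            {"journal": "Glamour Letters", "citations": 310},
            {"journal": "Glamour Letters", "citations": 0},
        ]

        def journal_impact(papers):
            """Average citations per paper, computed per journal."""
            totals, counts = {}, {}
            for p in papers:
                totals[p["journal"]] = totals.get(p["journal"], 0) + p["citations"]
                counts[p["journal"]] = counts.get(p["journal"], 0) + 1
            return {j: totals[j] / counts[j] for j in totals}

        print(journal_impact(papers))
        # {'J. Neurosci. Widgets': 22.5, 'Glamour Letters': 155.0}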


  3. odyssey Says:

    Neuromusic,
    What does the journal identity/rank have to do with the impact of the paper? Why not simply count number of citations for that paper? Do you really decide whether or not to “like” it based on where it was published?


  4. neuromusic Says:

    odyssey – no, I was just trying to criticize the “oooh, I know, we should crowd-source it!!” approach. because IF is a method of crowd-sourcing.

    it seems to me that the value of IF (or any other metric of that squishy thing we call “impact”) varies according to the needs of the stakeholder. for scientists, I’m mostly with DM… we should be evaluating papers according to their own merit in consultation with our peers.

    but for many stakeholders (librarians, as mentioned, + granting agencies, institutions, journalists, etc, etc), *some* metric of impact is needed for papers/authors (or more generally, outputs/researchers) and the ideal metric is probably different for different stakeholders. do librarians need to judge the impact of entire journals? as long as closed publishers sell subscriptions in totality, yes. institutions & grantmakers probably need more author-level metrics. an applied science grantmaker ideally needs to actually estimate the expected economic impact of a research program. each of these necessitates different types of metrics.

    i guess I just have a hard time judging when IF-bashing is specific to IF but OK w/ metrics vs when the bashing is targeted toward the very act of using a metric to judge “impact”


  5. DrugMonkey Says:

    for librarians to find journals that maybe weren’t absolutely essential to subscribe to given limited budgets. Reminds me of the fact that IQ tests weren’t originally developed to find the smartest people but rather to identify cases of mental disability.

    ok, but you run into field-specific problems right away. In neuroscience topics, below about 2 IF right now is pretty low. For some fields I understand that is a smokin’ IF. so cross-field decision making is flawed.

    what about within-field? In the approximate 2-4 IF region for neuroscience sub-specialty and general society class journals, how is this supposed to identify the must-haves? is 2.48 different from 3.56 or should other considerations like size of the specialty come into play? Do you go for Neuroscience or Psychopharmacology? Or should a librarian just look at the number of research articles per subscription dollar in a sort of statistical game?


  6. DrugMonkey Says:

    but for many stakeholders (librarians, as mentioned, + granting agencies, institutions, journalists, etc, etc), *some* metric of impact is needed for papers/authors

    I agree that some of the ascendancy of the IF was due to people outside of active science. Some, such as librarians, needed it because they had no other frame of reference. Others, such as university-wide P&T committees and deans, needed some sort of allegedly objective measure for fairness’ sake.

    The question is why scientists got so enthusiastically on board, despite the structural problems identified by Curry and known from the get-go of IF.


  7. neuromusic Says:

    The question is why scientists got so enthusiastically on board, despite the structural problems identified by Curry and known from the get-go of IF.

    … um, because they are rational decision makers? http://en.wikipedia.org/wiki/Game_theory



  8. Yes, you have to deal with cross-field variation in impact factor before subscription culling starts. And then there is the fact that many publishers such as Elsevier purposely prevent such culling — the infamous vanity journal “Chaos, Solitons and Fractals” (which for many years largely consisted of questionable papers by its former editor) is in many libraries simply because it is part of a bundle that includes more popular journals.


  9. neuromusic Says:

    and I think that is a totally interesting and relevant point re: cross-field decision making… but this is a pretty easy thing to solve, no? just incorporate a field-specific normalization parameter (quick sketch at the end of this comment). rinse, lather, repeat for the next bias in the metric.

    but really, Curry’s basic argument is a criticism not just of IF, but of using a journal-level metric to evaluate individual papers and authors. which I think is totally valid.

    In this respect, I think it’s likely that IF is going to kill itself soon (or when the current tenured faculty die), as neither I nor my grad student peers ever read through a journal (except maybe the GlamMags), but instead are more targeted in our reading toward authors/topics regardless of the journal.

    I saw some data on this that indicated that this was a general trend, but I can’t remember where I saw it…
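    (Here is a toy sketch of what I mean by a field-specific normalization parameter: divide a journal’s raw IF by the mean IF of its field, so a 2.5 in neuroscience and a 0.8 in mathematics end up on a comparable scale. All numbers are invented for illustration.)

        # Toy field-normalized impact factor: raw IF divided by the field mean.
        # Every number here is made up for illustration.
        field_mean_if = {"neuroscience": 3.0, "mathematics": 0.9}

        journals = [
            ("Neuro Subspecialty J.", "neuroscience", 2.5),
            ("Math Society Trans.", "mathematics", 0.8),
        ]

        for name, field, raw_if in journals:
            normalized = raw_if / field_mean_if[field]
            print(f"{name}: raw IF {raw_if}, field-normalized {normalized:.2f}")
        # Neuro Subspecialty J.: raw IF 2.5, field-normalized 0.83
        # Math Society Trans.: raw IF 0.8, field-normalized 0.89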


  10. neuromusic Says:

    by “which I think is totally valid,” I mean, “I think it’s a valid criticism.”


  11. Spiny Norman Says:

    The so-called “bugs” in Impact Factors are not bugs at all. They are features.

    The simple answer is that we REALLY ARE more interested in the social status of a scientist than we are in the quality of his or her science or the progress of the field.

    Why is that so difficult to understand?


  12. miko Says:

    I think Spiny Norman nails it, except maybe in our own little sub-field where we really can assess. Although I think it was not always this way, and there are new drivers:

    1. Vast increases (slowly over decades) in the size/scope of biomedical research mean that one scientist can competently assess a smaller and smaller % of it.

    2. The increasing role of administrators at every stage and level of every process, who can’t assess any of it.

    Something weird is going on, because every scientist I have ever talked to will say to your face that IF is a scam and doesn’t matter. Yet every search committee, study section, peer review, and P&T committee produces outcomes that suggest the exact opposite. Or explicitly state the exact opposite, anonymously.


  13. DrugMonkey Says:

    When I was on study section the journal IF came up maybe twice and at least one of those times exceptionally high IF productivity was a liability. The PI was getting busted for too few total pubs. People talked about “high-impact” and “influential” but they meant the individual works, not the journals in which they were published. So YM will most assuredly V.



  14. “This. Is. What. We. Do!!!!!!”

    +1

    This is why we read papers; we don’t need the glam mag filter. I have Google Scholar set to send me keyword updates and I don’t follow up on cool articles based on the journal title. I don’t think anyone who is serious about their science does. At least I hope not.

    There’s a lot of this going around, though. I felt the same way when I read about Science Exchange’s new plan to replicate published results. I kind of feel like I already replicate a lot of published results in my own research. At least the results that are important to me as far as continuing to test my hypotheses.

    http://www.reuters.com/article/2012/08/14/us-science-replication-service-idUSBRE87D0FV20120814


  15. pyrope Says:

    I’m not in a med field, but I have all my papers compiled in ISI ResearcherID, which tells you the total number of citations and average citations per article (and the beloved h-index). Anyone on P&T could go look at that. The problem is, I don’t know how you compare it to say that I am more productive than X and less than Y. Where do I get the stats for X & Y? The nice thing about IF is that you can pretty easily find the range for your field and quantify where you’ve published.
    I’m kind of tempted to put my individual stats somewhere in a tenure packet (in a couple years), but I’m also afraid that doing so would seem dickish…even if it’s a truer representation of my research than IF.
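    (For anyone unfamiliar with it, here is a minimal sketch of the h-index calculation mentioned above; the citation counts are made up, the point is just the definition: the largest h such that h of your papers have at least h citations each.)

        # Toy h-index: the largest h such that h papers have at least h
        # citations each. The citation counts below are invented.
        def h_index(citations):
            citations = sorted(citations, reverse=True)
            h = 0
            for i, c in enumerate(citations, start=1):
                if c >= i:
                    h = i
                else:
                    break
            return h

        print(h_index([25, 8, 5, 3, 3, 1]))
        # prints 3: the top three papers have >= 3 citations each,
        # but only three papers have >= 4, so h cannot reach 4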



  16. “Stephen Curry responds a bit too tepidly for my taste”

    Tepid? The name’s Curry! Though I guess I am more of a tikka masala than a vindaloo.

    I’m not suggesting that you should be relying on the crowd to make your mind up for you. As you pointed out in the comment on my post, Pubmed is your friend. But (1) we still need to kill off the impact factor because of its corrosive effects on people’s careers and their approach to publication and (2) I think there is value in harnessing the discussion about papers on the internet (but not including facile indicators such as Facebook ‘likes’).

    At the end of the day, sure, I’ll make my own assessment by at least reading the abstract (and, if that tickles, the rest of the paper), but I’d be grateful to have people flag up papers that might be of interest to me. Perhaps I need to do my keyword searches more effectively, but I retain a faith in the wisdom of (much of) the (research) crowd.



  17. […] the Impact Factor discussion has been percolating along (Stephen Curry, Björn Brembs, YHN) it has touched briefly on the core valuation of a scientific paper: […]



  18. […] Theory) Well In Massachusetts, nine NIH-funded research projects in this year’s $10 million club A smear campaign against Impact Factors…and the Sheep of Science The psychology of doping accusations: Which athletes raise the most […]



  19. […] A smear campaign against Impact Factors…and the Sheep of Science by Drugmonkey […]



  20. […] A smear campaign against Impact Factors…and the Sheep of Science by Drugmonkey […]



  21. […] were also interesting commentaries on the post from Telescoper, Bjorn Brembs, DrugMonkey and Tom […]



  22. […] decimal. NPG really is beyond hope, it seems. The other reason I thought I should comment was a post on DrugMonkey’s blog, where he writes that: This notion that we need help “sifting” through the vast literature and […]


