Understanding the altmetrics wackaloon

November 29, 2012

I’ve been entertaining myself in a twitscussion with my good friend @mrgunn, a dyed-in-the-wool altmetrics wackanut obsessive. It all started because he RT’d a reference to an article by Taylor and Thorisson entitled “Fixing authorship – towards a practical model of contributorship”, which includes subsections such as “Authorship broken, needs fixing” and “Inadequate definitions of authorship”.

These were the thrusts of the article that annoyed me, since I feel this whole area of interest is built on a footing of disgruntled sand. In short, there IS no problem with authorship that “needs fixing”. The people advancing this agenda have not demonstrated one to any believable degree, and you see an awful lot of the “everyone knows” type of assertion.

Some other headings in the piece are illustrative; let’s start with “Varied authorship conventions across disciplines”. This is true. But it is not a problem. My analogy of the day is different languages spoken by different people. You do not tell someone speaking a language other than the one you understand that they are doing it wrong and that we all just need to learn Esperanto. What you do is seek a translation. And if you feel like that is not giving you a “true” understanding, by all means, take the time to learn the language with all of its colloquial nuance. Feel free.

Heck, you can even write a guide book. For all the effort these “authorship is broken” wackaloons take to restate the unproven, they could write up a whole heck of a lot of style-guidage.

“….the discipline of Experimental Psychology is heavily driven by Grand Theorye Eleventy approaches. Therefore the intellectualizing and theorizing is of relatively greater importance and the empirical data-making is lesser. The data may reflect only a single, rather simple model for producing it. This is why you see fewer authors, typically just a trainee and a supervisor. Or even single-author papers. In contrast, the more biological disciplines in the Neuroscience umbrella may be more empirical. Credit is based on who showed something first, and who generated the most diverse sets of data, rather than any grand intellectualizing. Consequently, the author lists are long and filled with people who contributed only a little bit of data to each publication….”

Done. Now instead of trying to force a review of a person’s academic contributions into a single unified framework, one can take the entirely easy step of understanding that credit accrues differently across scientific disciplines.

ahhhh, but now we come to the altmetrics wackaloons who are TrueBelievers in the Church of Universal Quantification. They insist that somehow “all measures” can be used to create….what? I suppose a single unified evaluation of academic quality, impact, importance, etc. And actually, they don’t give a rat’s patootie about the relevance, feasibility or impact of their academic endeavor to capture all possible measures of a journal article or a contributing author. It doesn’t matter if the measure they use entails further misrepresentations. All that they care about is that they have a system to work with, data to geek over and eventually papers to write. (some of them wish to make products to sell to the Flock, of course).
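
To see why the “single unified evaluation” is a mirage, sketch what one would have to look like under the hood. Everything below is hypothetical (my weights, my field names, not anything the altmetrics crowd has actually published), but the structural problem is real: somebody has to pick the weights, and the weights ARE the opinion.

```python
# A purely hypothetical "unified impact score". Every weight is an
# arbitrary editorial judgment about what "matters".
WEIGHTS = {
    "citations": 1.0,
    "tweets": 0.01,
    "facebook_likes": 0.05,
    "blog_mentions": 0.5,
    "downloads": 0.01,
}

def unified_score(counts):
    """Collapse disparate signals into a single number."""
    return sum(w * counts.get(k, 0) for k, w in WEIGHTS.items())

# Two made-up papers: a slow-burn methods paper vs. a buzzy topical one.
methods_paper = {"citations": 40, "downloads": 300}
buzzy_paper = {"citations": 5, "tweets": 900, "facebook_likes": 400}

print(unified_score(methods_paper))  # 43.0
print(unified_score(buzzy_paper))    # 34.0
```

Bump the tweet weight from 0.01 to 0.1 and the ranking flips (the buzzy paper jumps to 115.0). The “universal” number was an editorial opinion all along.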

This is just basic science, folks. How many of us have veeeeeery thin justifications for our research topics and models? Not me of course, I work on substance abuse…but the rest of y’all “basic” scientists….yeah.

The wackaloon justifications sound hollow and rest on very shifty support because they really don’t care. They’ve landed on a few trite, truthy and pithy points to put in their “Introduction” statements and moved on. Everyone in the field buys them, nods sagely to each other and never. bothers. to. examine. them. further. Because they don’t even care if they believe it themselves; their true motivation is the tactical problem at hand. How to generate the altmetrics data. Perhaps secondarily how to make people pay attention to their data and theories. But as to whether there is any real world problem (i.e., with the conduct of science) to which their stuff applies? Whether it fixes anything? Whether it just substitutes a new set of problems for an old set? Whether the approach presents the same old problems with a new coat of paint?

They don’t care.

I do, however. I care about the conduct of science. I am sympathetic to the underlying ideas of altmetrics, as it happens, insofar as they criticize the current non-altmetric, the Journal Impact Factor. On that I agree there is a problem. And let’s face it, I like data. When I land on a PLoS ONE paper, sure, I click on the “metrics” tab. I’m curious.

But make no mistake. Tweets and Fb likes and blog entries and all that crapola just substitute a different “elite” in the indirect judging of paper quality. Manuscripts with topics of sex and drugs will do relatively better than ones with obscure cell lines faked up to do bizarre non-biological shit on the bench. And we’ll just end up with yet more debates about what is “important” for a scientist to contribute. Nothing solved, just more unpleasantness.

Marrying these two topics together we get down to the discussion of the “Author Contribution” statement, increasingly popular with journals. Those of us in the trenches know that these are really little better than the author position. What does it tell us that author #4 in a 7 author paper generated Fig 3 instead of Fig 5? Why do we need to know this? So that the altmetrics wackaloons can eventually tot up a score of “cumulative figures published”? Really? This is ridiculous. And it just invites further gaming.

The listed-second, co-equal contribution is an example. Someone dreamed this up as a half-assed workaround to the author-order crediting assumptions. It doesn’t work, as we’ve discussed endlessly on this blog, save to buy off the extra effort of the person listed not-first with worthless currency. So in this glorious future in which the Author Contribution is captured by the altmetrics wackaloons, there will be much gaming of what gets said in these statements. I’ve already been at least indirectly involved in some discussion of who should be listed for what type of contribution. It was entirely amiable but it is a sign of the rocky shoals ahead. I foresee a solution that is exactly as imprecise as what the critics are on about already (“all authors made substantial contributions to everything, fuck off”) and we will rapidly return to the same place we are now.

Now, is there harm?

I’d say yes. Fighting over irrelevant indirect indicators of “importance” in science is already a huge problem because it is inevitably trying to fit inherently disparate things into one framework. It is inevitably about prescribing what is “good” and what is “bad” in a rather uniform way. This is exemplified by the very thing these people are trying to criticize, the Journal Impact Factor. It boggles my mind that they cannot see this.

The harms will be similar. Scientists spending their time and effort gaming the metrics instead of figuring out the very fastest and best way to advance science*. Agencies will fund those who are “best” at a new set of measures that have little to do with the scientific goals….or will have to defend themselves when they violate** these new and improved standards. Vice Deans and P&T committees will just have even more to fight over, and more to be sued about when someone is denied tenure and the real reasons*** are being papered over with quantification of metrics. Postdoctoral bakers agonizing over meeting the metrics instead of working on what really matters, “fit” and “excitement”.

__
*Which is “publish all the data as quickly as possible” and let the hive/PubMed sort it out.
**see complaints from disgruntled NIH applicants about how they “deserve” grants because their publications are more JIF-awesome or more plentiful than the next person’s.
***”an asshole that we don’t want in our Dept forever”

Responses to “Understanding the altmetrics wackaloon”


  1. Almenau unu el viaj legantoj jam scipovas Esperanton, do eble via analogio ne vere funkcias… [Esperanto: “At least one of your readers already knows Esperanto, so maybe your analogy doesn’t really work…”]

    (And yes, Google Translate does work on that, but translating scientific literature generally isn’t that simple. Basically, literature written in anything other than English or one’s native language gets ignored. Native English speakers might not see that as a problem right now because English is the current dominant language of science. But for how long? In the 19th century German and French were more useful to scientists than English.)



  2. If someone could show me clear evidence that there are major contributions to the scientific literature that are being systematically undercounted due to limitations of current metrics and trivial contributions that are being systematically overcounted, then I might decide to give a single flying fucke about all this wackaloon altmetrics gibberish. Until then, I will continue to consider it a very baroque form of disgruntled whining.


  3. bill Says:

    Du el viaj legantoj. Kvankam mi havis malmultan ŝancon praktiko kaj bezonis Google por skribi ĉi. [Esperanto, roughly: “Two of your readers. Though I’ve had little chance to practice and needed Google to write this.”]


  4. becca Says:

    Oh come on. You know if we kidnapped Nate Silver and unleashed him upon PubMed for like a week, tops, we could totally have a perfect measure of everyone’s contributions, even the people left off the paper for political reasons. AND which trainee records will correspond to consistently well-funded PI tracks. We just don’t want those truths.
    YOU, my good sir, just lack sufficient faith in the datageekery. And you call yourself a scientist. Pshaw.


  5. miko Says:

    The problem with numbers is their propensity for abuse by deans and other morons. If you are a scientist, part of your job is evaluation of your peers. This typically includes a lot of tangible and intangible criteria, none of which require math above addition, and most of which require working knowledge of your field, both scientifically and practically. Metrics cede evaluation by peers to evaluation by supposedly “objective” outsiders.

    I’m sure some scientists are bad at peer evaluation and obsess over glamour or pedigree or asskissery, but there is no system that will stop stupidity, and giving administrators numbers they don’t understand (cf. The Journal Impact Factor) guarantees it.


  6. arrzey Says:

    As much as I love numbers (and I teach biostats), this IS the height of wackaloonery. One of the things that IS broken (or Borken, depending on your political memory) is the use of these stats in tenure decisions. H-index, shmasch-index. It’s now required on formal CVs at my BSD Medical School. If you write 5 papers that change the world, each with a quintillion cites, but the rest of your output, not so much, then you suck. Or suck relative to the jackass who belongs to a circle-jerk-cite group that makes sure they all cite each other’s papers enough to up their h-indices (and I swear I’ve heard the BSDs talking about how to game this). The ultimate prisoner’s dilemma game. Authorship, h-index, IFs – these things are about presentation, appearance and NOT substance. They are important to people who are concerned about their appearance. They are the scientific equivalent of a nose job. They are for people who would rather be a scientist than do science.
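
    For anyone who hasn’t met it: the h-index is just the largest h such that h of your papers have at least h citations each. A toy sketch (the citation counts here are invented) shows why the circle-jerk-cite scheme works so well: a handful of cheap mutual citations to the thin papers moves the number while the big papers sit untouched.

    ```python
    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        # With counts in descending order, c >= rank holds for a prefix of
        # the list, so counting the hits yields h directly.
        return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

    honest = [50, 40, 30, 20, 3, 2, 1, 0]  # h = 4
    gamed = [50, 40, 30, 20, 8, 7, 6, 5]   # same big papers, topped-up tail: h = 6
    print(h_index(honest), h_index(gamed))  # 4 6
    ```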


  7. odyssey Says:

    They are for people who would rather be a scientist than do science.

    This.



  8. That would be so awesome if we could predict tenure and proposal hit rates as well as elections using Nate Silver’s algorithms. Also, one real reason for having altmetrics: professor trading cards. Trade with your friends and collect them all. I think I might try to sell this idea to my department chair.



  9. One of the things that IS broken (or Borken, depending on your political memory) is the use of these stats in tenure decisions.

    Well, you know that this is in large part a consequence of another species of disgruntled complaining, right? This species is from all those who don’t get jobbes/tenure/grants and complain that it is too subjective and that letters of reference and peer reviewers are biased and favor those who are good at “networking” and “ass kissing” and “gladhanding” and the science “should speak for itself”.

    Basically, no matter what you do, there are going to be large numbers of people who don’t get jobbes/tenure/grants and who are going to complain that the reason is because the assessment system is unfairly either “too subjective” or “too objective”.



  10. Marrying these two topics together we get down to the discussion of the “Author Contribution” statement, increasingly popular with journals. Those of us in the trenches know that these are really little better than the author position. What does it tell us that author #4 in a 7 author paper generated Fig 3 instead of Fig 5? Why do we need to know this? So that the altmetrics wackaloons can eventually tot up a score of “cumulative figures published”? Really? This is ridiculous. And it just invites further gaming.

    While I in general agree that people are too obsessed with metrics, I really think you miss the point of Author Contributions. The point isn’t to apportion honor the way author position does, but to help the readers to know who actually might be able to answer a question about the paper beyond just addressing it to the corresponding author who is often a PI who may have limited knowledge of the gritty details. If I have a question about the data analysis, I want to get in touch with the person who actually did the analysis in question (and presumably made the relevant figure).


  11. Mr. Gunn Says:

    It’s a bit hard to see engaging in a good-faith discussion from a starting point like this, but I’ll try to take on some of the points I think you’re making.

    This isn’t about disgruntled failed academics trying to fuck up the system for everyone else. Very Senior People at the NIH, including Francis Collins, are also saying Something Must Be Done. Why? What’s the problem? The problem is that 80% of the research that’s coming out of labs today is unrefined bullshitte, and people are starting to notice. Disease Foundations are joining together and taking all their research in-house. Patients are getting together and doing their own research.

    So that’s why I think there’s a problem. Other people have other reasons. I think maybe if we did some introspection and looked at our own processes, we might be able to figure out a few things that could be improved. At the very least, these things call for some discussion as opposed to saying “lalala everything’s fine, nothing to see here!” I know that any time the discussion of “fixing science” comes up, it makes people nervous, primarily because they haven’t been consulted, but most of the people working on altmetrics now are current and former researchers, many of them from the sciences, and they want you to be part of the discussion. Check altmetrics.org or the #altmetrics hashtag if you’re interested in hearing more about what’s actually being discussed, feel free to speculate on your own blog, or start your own effort to do something about it. All of these are good ways to participate.

    Looking into how we publish and how we decide what to work on is one way to examine what’s broken, and one measurement technique for this is altmetrics. This isn’t about replacing the impact factor. The IF shouldn’t be used for most of the stuff it is used for today and the inventor of the IF himself says it shouldn’t. Nobody wants to make the same mistakes again. Altmetrics is about having more data available to help all sorts of people better understand what’s important to them. Those people might be journals or tenure committees or funders, or they might be that guy who doesn’t really work in your field but whose project is taking him in that direction and is trying to figure out what stuff he should read or who he should talk to about using some specialized technique. A publisher might see a bunch of interest in a new topic and say, “Hmm, there should probably be a journal to cover this topic.” An academic database might notice some strange activity patterns and flag the affected papers.

    It’s quite easy to spot gaming when metrics are openly available and how they’re calculated is transparent, but the important point is that the consumer of the data decides what to do with it. This is quite different from the unidimensional ranking that the IF provides today. The IF is but one input into altmetrics. Just as there are a variety of reasons to support open access beyond just easing the financial burden on libraries, there’s a variety of reasons for altmetrics beyond ranking people based on some universal number. We know that doesn’t work already.
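
    To gesture at what “quite easy to spot” could mean in practice, here is a toy sketch. The numbers and the crude z-score rule are invented for illustration, not any real altmetrics provider’s actual screen:

    ```python
    from statistics import mean, stdev

    def flag_spikes(daily_counts, z_cut=2.0):
        """Flag days whose count sits more than z_cut standard deviations
        above the mean. A crude stand-in for the anomaly screens an
        open-metrics database could run over its raw event streams."""
        mu, sigma = mean(daily_counts), stdev(daily_counts)
        return [day for day, c in enumerate(daily_counts)
                if sigma > 0 and (c - mu) / sigma > z_cut]

    downloads = [12, 9, 15, 11, 10, 480, 13, 8]  # day 5: bought traffic?
    print(flag_spikes(downloads))  # [5]
    ```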

    So I’d really like to get past the idea that altmetrics is just a replacement for the IF. The goal, for me, is to understand more about how to use research to improve the human condition.


  12. DrugMonkey Says:

    H-index gaming eh? So I should put up on my lab site a list of the papers I need cited to up my h-index and invite my science homies to help a brother out?


  13. DrugMonkey Says:

    JB- your reason has nothing to do with the stuff these nutters are talking about. Click the link I put at the beginning of the post…


  14. DrugMonkey Says:

    engineeringprof- sorry man, I’m not selling my mint rookie PhysioProf card for any sum.


  15. Hermitage Says:

    They had paper then?


  16. qaz Says:

    JB says author contributions lists are “to help the readers to know who actually might be able to answer a question about the paper beyond just addressing it to the corresponding author who is often a PI who may have limited knowledge of the gritty details. ”

    Um… no. Author contributions appeared as an attempt to deal with fraudsters. There had been several cases in which collaborators of convicted fraudsters were authors on fraudulent papers and were trying to say “I didn’t have responsibility for THAT.” There was a large discussion in several places about how to balance the two extremes of authorship meaning “helped somewhere along the line” and authorship meaning “is responsible for the whole paper”. Author contributions were so you could say “I did Figure X, Figure X is fine, even if that dirty so-and-so faked Figure Y. I didn’t do Figure Y, don’t blame me.”

    See http://blogs.nature.com/nautilus/2007/11/post_12.html

    Note: I’m not defending this whole author-contributions silliness. I’m just pointing out that the goal (as stated by the journals that implemented this) was not about correspondence, but about responsibility.


  17. arrzey Says:

    @Mr. Gunn – who gets to decide what is “unrefined bullshitte”?

    There is an old African tale: The animals are asked to choose the ruler of the jungle. The eagle says: the one who can fly the highest. The mole says: the one who can dig the deepest. The antelope says: the one who can run the furthest. And then the lion ate them all up.


  18. odyssey Says:

    @Mr. Gunn,
    Seems to me you haven’t defined what is broken. Beyond “science is broken” that is. What does that even mean? What specifically are the problems that have “broken” science? You’ve outlined a solution, but not the problem. What exactly is altmetrics a solution to?


  19. jzsimon Says:

    Sometimes adding helpful information to a CV *can* clear up a lot of potential confusion/misunderstanding/ignorance when it comes to evaluating co-authors’ roles. When I went up for tenure, one of my mentors (thank god for my senior faculty mentors!) said this was absolutely necessary for me, due to my 1) interdisciplinary research, 2) previous academic history in an unrelated field (physics), and 3) frequent habit of collaborating with faculty at my institution. My mentor was concerned that that triple combination would generate way too much uncertainty about my case.

    I settled on this text (and categories) for use in my CV:

    ————
    Publications

    Note: Research conducted in multidisciplinary environments produces publications whose author-lists may not well summarize the role of the individual contributors. Different disciplines use conflicting author order conventions, e.g. head of lab last (common for biologists) or predominantly alphabetical (common for physicists). For this reason I include the following annotations to indicate my role in co-authored publications:

    LEAD AUTHOR: Responsible for conducting and writing up the majority of the research (e.g. often the first author).
    ANCHOR AUTHOR: Supervised the work of the student or postdoc who was the lead author (e.g. often the last author).
    CORE CO-AUTHOR: Not lead author, but still crucial to the foundations of the research, or in some cases, co-lead author.
    MINOR CO-AUTHOR: Contributions, though significant, were secondary to those of other co-authors.
    ————

    Then I choose the most appropriate tag for every multi-authored article.

    I found this to be *much* more useful than when I tried annotating each article with my specific contributions (main ideas? writing the first draft? data crunching?). I also found that including the “minor co-author” tag on some articles really helped emphasize that for all the other articles I was leading the way.

    Maybe someone else will find it useful.


  20. Christina Pikas Says:

    IRT CPP: “If someone could show me clear evidence that there are major contributions to the scientific literature that are being systematically undercounted due to limitations of current metrics”

    Two small examples:
    In fields like nursing citations do not accurately reflect importance or impact of a paper because there are a potential shittonne of clinical users who don’t publish and hence don’t cite.

    In biogeography there are these thingies called _flora_ that are used but never cited.

    In these cases usage (an altmetric) would show a lot more than traditional citation-based measures. However, usage can of course be gamed, so that’s where you get into a multidimensional metric to try to correct for that.



  21. @qaz.
    That may be why glammags like _Nature_ did it, but that’s not why author contributions got started (which was in the Open Access journals). And I still don’t see why it is “silly”. At least in my field of genomics, a paper may have 50 authors. Who did what? And sometimes the corresponding author isn’t very helpful. As an AE, I sometimes contact them to review a related paper and they say things like “That’s not my field of expertise”. And maybe it isn’t — but then why are they listed as a corresponding author on a paper they probably don’t understand?



  22. I’m not sure what you mean by Grand Theorizing. It strikes me as odd in particular because psychology does not have (in my humble opinion) nearly enough real theoretical work (though in comparison with neuroscience, which tends towards atheoretic data-gathering, I suppose it might look theory heavy). I mean, random data-gathering isn’t science; it’s scrap-collection.

    It is true that merely contributing data isn’t considered authorship-worthy (my recollection is that APA guidelines state this specifically). To be an author, you need to have contributed intellectually in some way (e.g., designed or interpreted the study, not merely run subjects). It’s actually quite difficult to publish a non-empirical paper in psychology. But it’s very possible to do single-author empirical papers. I understand that this is not the case in some fields.

    This varies across countries, too. I remember a linguist from somewhere in East Asia (Korea?) saying that simply discussing your ideas with a senior colleague automatically resulted in them being an author.

    I realize this was an example off the top of your head, but it’s worth pointing out that the differences across academic cultures are sometimes the result of deliberate decisions on the part of a community in terms of what they value.


  23. DrugMonkey Says:

    Oh of course, ggw. Of course every subsubsubdiscipline of science has the OneTrueWay to do things. Because of VeryImportant types of Reasons. I grasp this entirely.



  24. DM – I think we agree here?




  26. ecologist Says:

    @jzsimon — that is a really good classification of publications on a CV. I have colleagues who do lots of interdisciplinary, lots-of-authors work, who would be helped by this. The term “anchor author” is new to me, and captures an important category.




  27. kant Says:

    nobody understands anything


