Thought of the Day

September 10, 2013

There seems to be a subpopulation of people who like to do research on the practice of research. Bjoern Brembs had a recent post on a paper showing that the publication slowdown associated with having to resubmit to another journal after rejection costs a paper citations.

Citations of a specific paper are generally thought of as a decent measure of impact, particularly if you can relate them to subfield size.

Citations to a paper come in various qualities, however, ranging from the totally incorrect (the paper has no conceivable connection to the point for which it is cited) to the motivational (the paper has a highly significant role in the entire purpose of the citing work).

I speculate that the bulk of citations are to one, or perhaps two, sub-experiments. Essentially a per-Figure citation.

If this is the case, then citations roughly scale with how big and diverse the offerings in a given paper are.

On the other side, fans of “complete story” arguments for high impact journal acceptances are suggesting that the bulk of citations are to this “story” rather than for the individual experiments.

I’d like to see some analysis of the type of citations won by papers. All the way across the food chain, from dump journals to CNS.

21 Responses to “Thought of the Day”

  1. dr24hours Says:

I’ve often thought about doing a paper checking Wikipedia citations of the peer-reviewed literature. The same could be done with, say, every paper published in NEJM (or whatever) in a year:

    How many citations actually support the argument the author is making, or say what the author is quoting? I bet it’s smaller than we’d all like to believe.

  2. Dr. Noncoding Arenay Says:

I recently had a reviewer comment suggesting that I cite some of “these papers” that “may be of importance” to the manuscript. After some digging, as I suspected, the reviewer was trying to get his own papers cited (not surprising). The thing was, those papers were only loosely related to the story, but we still cited a couple of his papers. You know, #beware #reviewerscorn

  3. DrugMonkey Says:

    Is not service as a reviewer a sign of scholarly “impact” that justifies more concrete credit by way of an extra citation or two?

    /slowgrin

  4. AcademicLurker Says:

    I was amused to see one of my papers cited as supporting exactly the opposite of the point said paper was making. Since it was cited in the introduction section along with a bunch of other papers in the general area, I assume the authors only read the title and didn’t bother with even the abstract.

  5. DrugMonkey Says:

    Go review your cites and h-index on ISI until that feeling of annoyance goes away, AL.

  6. AcademicLurker Says:

    Go review your cites and h-index on ISI until that feeling of annoyance goes away, AL.

    I suppose if a paper can get cited as both supporting and opposing the same position, that’s double the number of citations.

    Sounds like a viable strategy for boosting one’s h-index to me…

  7. Ola Says:

I would imagine there’s also variance in the type of citation based on where it is in the paper. The intro will (should?) cite papers on both sides of an argument, to build a case that there’s something worth investigating. The methods will (depending on the age of the lab) contain a lot of self-citations to previous works in which methods are described. The results often contain no citations at all, and then the discussion is where all hell breaks loose and all the peripheral “what has this got to do with the consumption rate of mayonnaise by Iranian camels?” stuff gets thrown in.

Comparing intro vs. discussion, I’d place more “value” on a cite in the former, less in the latter. To be cited in the intro conveys some idea that the paper was central to the rationale or design of the current study. To be cited in the discussion means the authors just mentioned it because the reviewer brought it up.

  8. Cynric Says:

    I assume the authors only read the title and didn’t bother with even the abstract.

    I’ve had some like that. Totally left-field citations that can only have come from a quick PubMed search for keywords.

    Still, all counts to h index. Obviously my skill in writing alluring titles has impact.

  9. Juan Lopez Says:

    “If this is the case, then citations roughly scale with how big and diverse the offerings in a given paper are” assuming that longer papers are equally readable. I find that readers at most get one message from a paper. Still, I agree that citations are probably to a specific point or result. The rest of the results are ignored, or perhaps give the other result an aura of credibility.

My experience: Clear figures and captions help with citations. Many more people will look at the figures than read the text. If they like the figure, they will read the caption. Only if they reaaaally get intrigued do they read the text.


  10. “I’d like to see some analysis of the type of citations won by papers. All the way across the foodchain, from dump journals to CNS.”

    Which is why, if we ever get publishing reform rolling beyond palliative care around access to actually implement post-19th century technology, we’ll have link typologies to be able to automate exactly such queries.


  11. Which is why, if we ever get publishing reform rolling beyond palliative care around access to actually implement post-19th century technology, we’ll have link typologies to be able to automate exactly such queries.

    Who the fucke are you going to pay to curate these typologies, and who the fucke is going to come uppe with the money to pay them? This is done in legal citation analysis, and it costs a fuckeloade of money to pay a fuckeloade of people to do the curation.

    I swear, you fucken OPENEVERYTHING loonies must think that human labor is free, because you never provide any satisfying answers to how all of your labor-intensive schemes are going to be paid for.

  12. rs Says:

Since they started counting citations and h-index for promotion, hiring, and all other decisions, it really doesn’t matter where the citations come from. They all contribute to the good. People used to care about these things when it was not a game and citations were real. Isn’t that the idea of the whole h-index game system: an easy solution for measuring people up?

  13. drugmonkey Says:

    Librarians, PP.

  14. bacillus Says:

    I got cited by one group on the grounds that their in vitro results confirmed my in vivo work.


  15. And who’s gonna pay for the extra motherfucken librarians it’s gonna take to do this massive amount of curation? They are just gonna all donate their time for free?

  16. drugmonkey Says:

    They work cheap, I hear.

  17. Dr Strangely Strange Says:

A few years back, I had a reviewer complain that I cited too many primary references and suggest a few recent reviews, including one written by a second-year graduate student and his mentor. Is that how we give and lose credit? You work your butt off for years to come up with something good, but then years later some barely literate graduate student sitting on some invited review gets credit for your work and observations?

  18. drugmonkey Says:

    What? Too many primary references? Screw that! No such thing. Well, there could be in theory I guess.


  19. PP: I can think of several sources for the link typology: authors, readers, semantic technology, librarians, only some of which cost substantial sums. The typology also need not be perfect and complete to provide a benefit. In essence, it would already be an improvement just to allow a typology to grow, even without anybody curating anything (but one can always do better, obviously). Just having the functionality would be better than what we have now.

    As far as money is concerned, if I were to have my way, then “fucken OPENEVERYTHING” would entail something like SciELO which currently publishes peer-reviewed papers in ~900 journals at $90 a paper:

    http://ojs.library.ubc.ca/index.php/cjhe/article/view/479/504

compared to the current $4,871 a paper in subscription publishing:

    http://www.nature.com/news/open-access-the-true-cost-of-science-publishing-1.12676

    With 98% of current subscription funds freed up, there are plenty of billions to go around for a few more librarians, other much needed infrastructure and then some.
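The “98%” figure follows directly from the two per-paper costs quoted above; a quick sketch of the arithmetic (using those quoted figures, not independently verified numbers):

```python
# Per-paper costs as quoted in the comment above:
scielo_cost = 90.0          # SciELO, ~$90 per paper
subscription_cost = 4871.0  # subscription publishing, ~$4,871 per paper

# Fraction of current per-paper spending that would be freed up
savings_fraction = 1 - scielo_cost / subscription_cost
print(f"{savings_fraction:.1%}")  # prints 98.2%
```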

    And besides, all the unemployed GlamMag editors will be desperate for a job then 🙂

  20. dsks Says:

    “Is that how we give and lose credit, you work your butt off for years to come up with something good but then years later some barely litterate graduate student sitting on some invited review gets credit for your work and observations?”

Yeah, my opinion has changed a lot on this as the interweb has become the new normal. Back in the day of print, it sure made sense to be selective with references, and on top of that individual citations were not a commonly used metric for success; folk took the indirect approach of looking at journal IF instead. So cheating in the intro with a “See so-&-so for recent review” wasn’t such a bad thing, all said and done.

But now it’s increasingly common for search committees, tenure committees &c to look at individual citation stats, and so you are kinda shortchanging folk, particularly if they’re early investigators, if you indirectly reference their work via a review written by somebody else. Plus, with everything online, the idea that we have reference limits is a bit daft anyway.


  21. […] it’s found in the pages of a for-profit Elsevier journal. It’s interesting how often posts about papers, like this one about another post about the Şekercioğlu piece, seem to garner more attention than the papers […]

