LPU Redoux

April 12, 2013

Another round of trying to get someone blustering about literature “clutter” and “signal to noise ratio” to really explain what he means.

Utter failure to gain clarity.

Again.

Update 1:

It isn’t as though I insist that each and every published paper everywhere and anywhere is going to be of substantial value. Sure, there may be a few studies, now and then, that really don’t ever contribute to furthering understanding. For anyone, ever. The odds favor this and do not favor absolutes. Nevertheless, it is quite obvious that the “clutter”, “signal to noise”, “complete story” and “LPU=bad” dingdongs feel that it is a substantial amount of the literature that we are talking about. Right? Because if you are bothering to mention something under 1% of what you happen across in this context then you are a very special princess-flower indeed.

Second, I wonder about the day-to-day experiences of people that bring them to this. What are they doing and how are they reacting? When I am engaging with the literature on a given topic of interest, I do a lot of filtering even with the assistance of PubMed. I think, possibly I am wrong here, that this is an essential ESSENTIAL part of my job as a scientist. You read the studies and you see how they fit together in your own understanding of the natural world (or unnatural one if that’s your gig). Some studies will be tour-de-force bravura evidence for major parts of your thinking. Some will provide one figure’s worth of help. Some will merely sow confusion…but proper confusion to help you avoid assuming something is more likely to be so than it is. In finding these, you are probably discarding many papers on reading the title, on reading the Abstract, on the first quick scan of the figures.

So what? That’s the job. That’s the thing you are supposed to be doing. It is not the fault of those stupid authors who dared to publish something of interest to themselves that your precious time had to be wasted determining it was of no interest to you. Nor is it any sign of a problem of the overall enterprise.

Update 2:
Thoughts on the Least Publishable Unit

LPU

Authors fail to illuminate the LPU issue

Better Living Through Least Publishable Units

Yet, publishing LPUs clearly hasn’t harmed some prominent people. You wouldn’t be able to get a job today if you had a CV full of LPUs and shingled papers, and you most likely wouldn’t get promoted either. But perhaps there is some point at which the sheer number of papers starts to impress people. I don’t completely understand this phenomenon.

Avalanche of Useless Science

Our problem is an “Avalanche of Low Quality Research”? Really?

Too Many Papers

We had some incidental findings that we didn’t think worthy of a separate publication. A few years later, another group replicated and published our (unpublished) “incidental” results. Their paper has been cited 12 times in the year and a half since publication in a field-specific journal with an impact factor of 6. It is incredibly difficult to predict in advance what other scientists will find useful. Since data is so expensive in time and money to generate, I would much, much rather there be too many publications than too few (especially given modern search engines and electronic databases).

35 Responses to “LPU Redoux”

  1. becca Says:

    Rants against LPU are nearly always related to the general problem that as a scientist, you never have enough time to read, and that you therefore resent all the mounds of reading that sit there reminding you of your inadequacy as a scholar.
    Every once in a while, such rants may also be triggered by legitimate annoyance with labs who, having enough data for two skimpy papers (by the same first author) or one really great paper, opt for the former and publish them in paywalled journals, thus forcing people to jump through hoops, looking up both to get the information they need. This is a silly problem that technology should have solved for us by now but, thanks to the incentives that exist for money in publishing and ‘prestige’ in paper count, it is still with us.

  2. AcademicLurker Says:

    Forget LPUs. If we’re going to complain about something, let’s complain about the labs that publish 3 or 4 review articles for every 1 research paper.

  3. drugmonkey Says:

    Rants against LPU are nearly always related to the general problem that as a scientist, you never have enough time to read, and that you therefore resent all the mounds of reading that sit there reminding you of your inadequacy as a scholar.

    I disagree. I think it is motivated mostly by a sort of elitist gatekeeping mentality. As in, “The standards I and my buddies meet (at this point in time) should be the minimum standard, wot wot, and all those riff raff shouldn’t even be allowed to play in the sandbox”. It is just another facet of the GlamourPub race in which extensive methodological breadth (and expense) is deployed as a gatekeeping mechanism.

    These may have little to do with the most essential nature of the finding.

  4. odyssey Says:

    When I am engaging with the literature on a given topic of interest, I do a lot of filtering even with the assistance of PubMed. I think, possibly I am wrong here, that this is an essential ESSENTIAL part of my job as a scientist.

    No, you’re definitely correct.

  5. miko Says:

    2 issues here:

    trivial / unimportant papers. these are subjective assessments that might change in time, so of course they should be published. this is what plos one is for, though publishing in p1 is arduous enough that you wouldn’t want to do it for a very small (1-2 figure) study. biologists need a way to publish data/results that don’t warrant a manuscript. the barriers here are format, metadata, and adoption, none of which will be overcome in our lifetime because biologists.

    bad papers. poorly designed or done experiments. bad analysis. data don’t support conclusions. these should not be published, but regularly are in all kinds of journals, but some bottom-feeding journals specialize in this stuff.

  6. drugmonkey Says:

    biologists need a way to publish data/results that don’t warrant a manuscript.

    no, wrong. what is the problem with two figure studies or Journal of Obscure But Potentially Useful Methods? What is the cost? Still searching for an answer on this…..

    these are subjective assessments that might change in time, so of course they should be published.

    The End.

    this is what plos one is for
    and, this is what the host of IF 2 journals at Elsevier are for.

    bad papers. poorly designed or done experiments. bad analysis. data don’t support conclusions. these should not be published

    totally agreed but this has nothing to do with the sort of too-many-papers complaints levied by the noLPU, complete story, literature pollution warriors.

    because this…
    some bottom-feeding journals specialize in this stuff.

    is inaccurate. In fact the higher IF journals have higher retraction rates and when they do so you can see why. Because they have poorly done experiments that are glossed over by the Glamourousness of the package.

  7. miko Says:

    I dunno man, shit that can’t get into P1 (which suggests either a lack of minimal scientific standards or editorial fuck up) ends up in a wide variety of places, but I’d never heard of most of them (what’s “JAMA”?). Agree glam courts cheatfucks, but the wide spectrum between dumps and glam does not as a rule publish garbage. And shit no one reads because it doesn’t make any sense does not get retracted because no one cares.

    Writing up an ms, formatting figures, writing background material, etc, takes time, sometimes a lot of time. Publishing ain’t cheap for OA, and for paywalled dump journals no one will ever read it. Why not be able to publish an experiment online with data?

  8. kevin. Says:

    God, I hope you’re kidding about JAMA being a potential no-name journal.

  9. fjordmaster Says:

    In fact the higher IF journals have higher retraction rates and when they do so you can see why. Because they have poorly done experiments that are glossed over by the Glamourousness of the package.

    This is a great point that I think is often lost in the “LPU: good or bad” debate. Clutter and noise, in the form of poorly designed or incorrect experiments, embedded in a paper in a high impact journal can be more damaging to the progress of a field. It is harder to publish contradictory, potentially more accurate, data if the original finding is rolled into a CNS paper.

  10. neuromusic Says:

    “I think it is motivated mostly by a sort of elitist gatekeeping mentality.”
    Bingo.

    If isolating signal from noise is *really* the problem, there are much better solutions than making certain that lots of science never sees the light of day. Separating signal from noise is largely a solved problem. There are decades of research on doing so (statistics, engineering, detection theory). We have entire mega corporations like Google making boatloads of money because they have been able to figure out how to separate the signal from the noise of the entire fraking internet.

    And if you aren’t a fan of using the #altmetrics fanbois’ algorithms to do what humans probably do best, you’ve got DM’s approach: “this is an essential ESSENTIAL part of my job as a scientist.”

    The type of pre-publication filtering of “signal” science from “noise” science is ultimately foolish because of BASIC STATISTICS, namely Type I & Type II errors. Making a decision at publication about what is “noise” and preventing its distribution also prevents us from ever knowing what the Type II error rate is. It leaves too many “unknown unknowns”.
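    A toy simulation makes the point (all numbers here are invented for illustration; nothing is calibrated to real publication data):

```python
import random

random.seed(1)

# Toy model: 30% of findings are real signal; a reviewer sees only a
# noisy "importance score" for each one.
findings = []
for _ in range(10_000):
    is_signal = random.random() < 0.3
    score = random.gauss(1.5 if is_signal else 0.0, 1.0)
    findings.append((is_signal, score))

cutoff = 1.0  # pre-publication filter: only "important-looking" work gets out
published = [f for f in findings if f[1] >= cutoff]
rejected = [f for f in findings if f[1] < cutoff]

# Type I errors can at least be audited later: noise that got published.
false_alarms = sum(1 for sig, _ in published if not sig)
# Type II errors: real signal filtered out before anyone could read it.
misses = sum(1 for sig, _ in rejected if sig)

print(f"published: {len(published)}, noise among them: {false_alarms}")
print(f"never published: {len(rejected)}, real signal among them: {misses}")
```

    The second count is the point: anyone reading only the published record has no way to estimate it, because the rejected pool never sees the light of day.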

  11. neuromusic Says:

    AND in science, signal and noise exist along a continuum. If you take a “signal only” approach to publication, you quickly get to where you only want the “best” signals and BAM! You’re just playing the glam game.

  12. drugmonkey Says:

    and for paywalled dump journals no one will ever read it

    The people who are active in a given subarea of my own interest seem familiar with the papers in such journals to which I refer in conversation. They seem to cite them much as I do. I don’t understand your point.

    It is harder to publish contradictory, potentially more accurate, data if the original finding is rolled into a CNS paper.

    Try getting a grant funded that includes experiments that are based on “that existing literature is exceptionally thin, the data past the Abstract are really crappy in that paper everyone cites….and we need to get into this with some teeth”.

  13. miko Says:

    I’m saying there is shit dumber than #arseniclife published every day in the Proceedings of the Royal Grand Fenwick Society of Alchemy and Birds that never gets closely examined. Now, that paper never should have been published, but publishing it in a glam journal got it the fairly immediate corrective response it deserved. So I think the IF-retraction correlation is partly cheatfuckery and partly ascertainment bias.

  14. Mike Says:

    I have no problem with the LPU when it’s a matter of something either getting published or deemed too unimportant to publish. Nothing is too unimportant to publish.

    But it’s a problem when there are papers that make very similar points, in related but not identical ways. Multiple papers where you have to look at one to see the effect in primary monocytes, another one to see the same effect in the monocyte cell line. One to see the effect after knockdown, and another to see the same effect in a genetic knockout. One to see the effect in human cells, and another to see the same effect in mouse cells. Multiple papers that practically overlap, submitted simultaneously to different journals. It seems like a sort of three-card monte, as they can use one paper to cite another, and vice versa, making it seem like what they’re saying is well-established and reproducible when it’s really quite tenuous.

    Either they had one coherent paper that got rejected by a top-tier journal and they decided to disassemble it into three minor papers that would come out simultaneously and confuse everybody, or they are trying to get credit for this big pile of data without ever assembling it themselves into something coherent.

  15. drugmonkey Says:

    Well, that’s the thing isn’t it? Everyone has their own approach to what a given lab “should” do. And as the reader we are entirely unconstrained by things like grant cycles, grant reviewers who want to see “productivity”, grant reviewers who want proof you can do some technique, 12 page NIH Grant limits, tenure bean counters, postdocs who need first author papers, grad students who need X papers to defend, and assorted other realities. We are not the ones who had to battle stupid reviewers who didn’t like the omnibus paper because a few little details were off. We are not the ones that felt like they were about to get scooped and lose priority for some aspect of the study.

  16. Comradde PhysioProffe Says:

    When it comes to the funding realm, you are constantly going on and on pompously about how everyone else only proposes solutions to the “broken system” that benefit them personally.

    Well guess what holmes? That is you when it comes to publishing.

    You prefer to publish your little pellets in sub-dump journals that no one reads, and you think “the system is broken” because it doesn’t sufficiently reward you for that preference, you blame all those other fuckers over there that prefer to do things differently, and you propose self-serving solutions to the supposedly “broken system” that are designed to benefit you personally.

  17. DrugMonkey Says:

    Sure….except for the fact that there are clear detrimental effects, backed by data in the case of retraction rates, of your preferred Glamour Humping on the conduct of science across many subfields, nothing to do with me specifically nor even my sub areas of greatest interest. Obviously I continue to survive in the absence of Glamourdouchery.

    One might rather ask why my nattering would bother you if the Glamour chase is so clearly superior in its awesomeness….and does not in fact encourage cheating and fraud.

    Also why you choose to snipe instead of addressing my actual criticisms.

    Hits too close to home does it?

  18. dsks Says:

    Anybody complaining about LPUs in the information age needs a cold shower and a slap around the chops.

    I routinely pick up tasty tidbits of info about this or that from LPUs discovered via searches. Last week I pulled up a bunch published in the last ten years, none of which had more than an annual citation index of 4. The contents might not matter to the Grand Scheme, or even the Wee Field, but they provided me with important technical info about the cell model I’ve just started working with and illuminated a dead end before I went trudging off towards it. I think I spent about 1 hr searching for, locating, and reading through those papers. 1 hr broken down neatly into the ten-minute time point intervals on an experiment I was running. Yay, multitasking!

    Fuckwits complaining about how much there is to read and spouting signal/noise bollocks need to get their heads out of the 20th century and learn how to use a bloody search engine. I mean seriously? Too much noise on the internet? No shit! Work around it, dipshits, or bugger off down to the stacks where you can fester and grumble along with the remains of the last century without bothering the rest of us having a good time in this one.

  19. The Other Dave Says:

    Not enough funding to go around, too many papers. These scientists are such a bother! Always whining about something. The obvious solution to all these problems is to cut them off, reduce their number.

  20. Jim Woodgett Says:

    Just out of the cold shower…. What fraction of papers are never cited (or only self-cited)? Depending on the field, it’s estimated to be around 25-30%. Perhaps these papers are just undiscovered gems lying around for some search engine to happen upon in the distant future. Or perhaps they have no intrinsic value, add no new knowledge and exist for reasons known only to their authors. Do they do harm? Not according to protagonists of search filters that simply ignore them. But this “harmless” clutter costs real money to produce and distracts those who may accidentally rediscover it. More importantly, it fuels the argument among some unenlightened quarters that scientists waste their time and money producing academic pages that no one (not even a machine) bothers to read. The argument that any LPU warrants its existence because it does no harm and might do some undefinable good ignores the real expense. While anyone has a right to publish if they pay for it, someone has to pay. I’d rather scientists self-censored than use publications as some form of bean counting.

    I realise the point is moot and the literature will continue to accumulate low/no-information-content material, as it is impossible to know exactly what might have value to someone, and pay-to-publish means there will always be a receptor for crap. Then again, uncited papers are dark energy to CVs, implying the existence of some floor to the amount of junk out there, search engines or not. There is a cost.

  21. physioprof Says:

    It doesn’t “bother” me; I find it amusing.

    And do we really want to do this again? There is no persuasive evidence supporting the claim that “glamour” journals publish more fraudulent research than other journals, nor is there persuasive evidence that the “glamour chase” leads to more fraud than the “not losing one’s job chase”.

  22. DrugMonkey Says:

    Of course there is persuasive evidence. And also there are those, such as yourself, who are highly motivated to deny it.

  23. DrugMonkey Says:

    JW- hmm. There are definitely papers that help me in my research that have not been cited in my own papers so I’m not sure that is a good metric. I also know of quite a few scientific efforts motivated and informed by my work (b/c I helped them) that didn’t end up as cites to my papers.

    In my view, citations are not the only proof of papers’ contributions.

  24. The Other Dave Says:

    @CPP: Unfortunately, saying stuff with vigor does not make you correct.

    http://www.nature.com/news/misconduct-is-the-main-cause-of-life-sciences-retractions-1.11507#/table

    Note the table.

    That said, it is indeed a possibility that there is massive fraud in obscure journals that goes unnoticed because people don’t notice or care. But to say there is ‘no persuasive evidence’, like you did, is either willfully ignorant or disingenuous.

    With regard to the ‘Glamourchase’, I don’t understand how that is separable from the ‘losing one’s job chase’, given that Glamourmag publications are weighted heavily by employers, and everyone knows that.

  25. The Other Dave Says:

    @becca (first comment):

    The tendency to publish two skimpy papers rather than one big paper sometimes results from the psychology of the review process. Reviewers basically look for stuff that is wrong. Something may be skimpy, but solid, and be published (especially at a specialized journal) relatively easily. But sticking two skimpy things together for a less specialized journal raises all sorts of problems. Reviewers see more inconsistencies, or holes, or just don’t like that things aren’t tied together very well. So the combined paper has problems that separate pieces don’t. A good paper is not just a bunch of lesser things stuck together.

    I once submitted a paper to Cell, and the main complaint was about the semi-weak connection between the molecular mechanism and the behavioral stuff. We split the paper into two pieces that were published quite easily in Nature Neuroscience and J. Neuroscience. A colleague (who no doubt was a reviewer on at least one) told me later that he thought they were “much better as separate papers.” And in the long run, it was maybe a good thing, because we reached a wider, sorta more diverse audience than we might have with Cell. A lot of people that are interested in our stuff don’t read Cell.

    Also, remember that it is our job not just to do good science, but to share it. I have friends that publish only in Cell, Science, or Nature. They will drag stuff out for years to get it in to those journals. Isn’t it better to acknowledge that all scientists are on the same team, and give other people a chance to quickly know about and build on your stuff?

  26. The Other Dave Says:

    Does anyone else here think that DM is still suffering the psychological fallout from a recent glamourmag rejection letter?

  27. DJMH Says:

    TOD, the table shows 19 fraud retractions in Nature. EVER. As far as I can tell, those data are from everything indexed in Pubmed, so even if you assume a conservative time period of what, 50 years, we’re talking about 0.4 fraud-based retractions per year? Across all disciplines?
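    In rough numbers (a quick sketch of that arithmetic, using the conservative 50-year window above):

```python
# DJMH's back-of-envelope: fraud-based retractions per year in Nature
nature_fraud_retractions = 19   # from the table TOD linked
years_of_pubmed_coverage = 50   # conservative assumed time period
rate = nature_fraud_retractions / years_of_pubmed_coverage
print(round(rate, 1))  # 0.4 fraud-based retractions per year
```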

    I get that the retraction rate has been increasing. I am sure in part this may be due to an increase in Glamour fetishization. But I am not sure that, numerically, the sky is falling.

    Also, personally when I use the term LPU, I am certainly not referring to most stuff in J Neurosci, that’s for sure.

  28. DrugMonkey Says:

    Really DJMH? How can you and CPP defend those turdlets in JNeuro that sub-7 IF (and heading downward) PoS journal? Real scientists stick with the kind of substantial, complete story that can get into Cell. Otherwise you are polluting the literature, wot wot Jim Woodgett?

  29. dsks Says:

    “But this “harmless” clutter costs real money to produce…”

    Presumably it’s produced using the money obtained from peer-reviewed grant applications. Somewhere along the line somebody thought the question leading to those results was worth asking. Likely because the possibility of the final answer being particularly significant was deemed high enough to take the risk in funding. So the investigator got unlucky with mother nature’s response to their question. That’s science. It would be a waste of time and money if it wasn’t reported for lack of wow!liness.

    “… and distracts those who may accidentally rediscover it”

    This sounds like you’re conflating the LPU with crap science. The presence of crap science in the literature, whether simply poorly conducted or outright fraudulent, is a completely separate issue and one, incidentally, that affects the glam-ragz as much as any other sphere of the publishing industry. I don’t see the harm in a tight, to the point LPU story being “rediscovered” so long as the science behind it is sound.

    BTW, surely the LPU isn’t entirely dissimilar to the publication criterion for the letter-based submissions to glamragz like Nature and Science anyway? It was/is not uncommon for a brace of letters from the same group on the same topic to be accepted for publication in such a journal. It was also not uncommon for a brief letter to CNS to be followed closely by a more fleshed-out report on related studies in a lower-tier journal, until the genuinely odious trend of taking this related information (which was at least given sound discussion when published separately) and dumping it all in the Supplemental data with little more than a one-line mention in the body of the letter proper. Supplemental data that is also usually stored only in .PDF form, which is a lot less likely to be picked up by search engines than HTML-based LPUs generally are. Surely the problem of data being lost in supplemental pages is more pressing, in terms of both time and money potentially lost, than the proliferation of LPUs?

    Finally, in terms of cost, I think we would have to see some genuine numbers to see if the LPU phenomenon is really saddling science with a significant debt. I doubt it. So a cheeky junior faculty up for tenure splits a potential 15-fig treatise into three publications. Assuming they’re paying top dollar in fees, that’s a cost of perhaps $4500 instead of $1500. Even if the shifty bastard did this five times on one R01 grant, the extra pub fees amount to <2% of the total directs.
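    In rough numbers (the R01 total-directs figure below is an assumed ~$250k/yr over four years; the fees are the scenario above):

```python
# Cost of the LPU-splitting scenario, using the comment's own assumed figures
fee_per_paper = 1_500            # "top dollar" publication fee, in $
papers_per_split = 3             # one story carved into three LPUs
extra_per_split = (papers_per_split - 1) * fee_per_paper   # $3,000 extra
splits_per_grant = 5             # the five-times-per-grant scenario
extra_fees = splits_per_grant * extra_per_split            # $15,000

# Assumed R01 total directs: ~$250k/yr over 4 years
total_directs = 250_000 * 4
print(extra_fees / total_directs)  # 0.015, i.e. under 2% of directs
```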

  30. Dave Says:

    Correct the data for IF/number of eyes on each paper and I’m not certain there is a significant difference between the number of retractions in C/N/S vs. other less-glamorous journals. There is no doubt that in general C/N/S papers get much more scrutiny than papers that might fly somewhat under the radar.

  31. physioprof Says:

    How can you and CPP defend those turdlets in JNeuro that sub-7 IF (and heading downward) PoS journal?

    (1) I haven’t said jacke fucken dicke about my opinion of the supposed “LPU” problem, so get your fucken head out of your fucken asse and pay attention, dickebagge. (I actually couldn’t give a fucken shitte whether people choose to publish small nuggets or gigantic “complete stories”.)

    (2) The Journal of Neuroscience is a high-quality field-specific journal, and it has as much of a tendency to publish “complete story”-type papers as supposed “glamour” journals. So in this regard you should also get your fucken head out of your fucken asse and pay attention, dickebagge.

  32. kevin. Says:

    I dunno about Jim Woodgett. Looking through his papers, while he was on a lot of high profile papers, it doesn’t seem he was the senior author on most. He seems to benefit from productive collaborators, which could mean he has a tight rein on his animals, assays, or both.

  33. Dave Says:

    What does that have to do with anything, Kev?

  34. kevin. Says:

    Woodgett seems to be bemoaning the proliferation of LPU papers clogging the literature. But he may be just as guilty as the rest of us, except for some advantageous collaborations raising his profile. I personally don’t care one way or the other. It’s just a matter of scientific style and what works best for you, your people, and the grant. I’ll go home now.

  35. toto Says:

    As a lazy bottom-rung science wannabe, the problem with LPU is that it makes it hard to get the big picture of whatever the lab is doing, and the Important Conclusion to be taken from it.

    Unless, of course, the LPU-abiding author does the decent thing and publishes a big-picture, “summary of previous episodes” review paper every once in a while.

    Case in point: the Gilles Laurent locust olfaction stuff:

    – Almost every paper in the series was cool enough to warrant a GlamourMag publication (“this sub-system does this or that”)

    – But it is the “review” papers that really provide a feel for the importance of the overall research (“Early insect olfaction works like this”).

