More on "shitasse" journals

January 30, 2012

A bit of a Twittstorm was raised today by Jonathan Eisen (@phylogenomics) who posted a (Modest?) proposal that we should respond to Elsevier’s naughtiness over the Research Works Act by

stop helping promote articles published in Elsevier journals.  Don’t blog about papers in Elsevier journals.  Don’t tweet about them.  Don’t use Elsevier papers for journal clubs.  In essence, ignore them – consider them dead – make them invisible.  Not completely of course.  Any work should be considered a contribution to science or math or whatever your field is.  But there are LOTS and LOTS of things to do with your time.   

This is in strikeout because Dr. J. Eisen thought better of his snark. As I suggest above, perhaps he was making a Modest Proposal. At any rate, the original twittstorm was fanned by YHN.

My recent readers may have come to the conclusion that I am one of those Open Access wackanuts. I am not. I am, or have been, rather a skeptic of the more….excessively fervent OpenScienceELEV3NTY111!!!111!! types.

It is my position that the TruBeliever Acolytes of OpenAccess need a firm hand now and again to bring them back down to reality. Think of the responsibility churches have to keep the snake-handlers and speakers-of-tongues within reason. The way regular Christian folks need to point out that Pat Robertson is, in fact, pretty insane with his God’s revenge on the Hurricane-Belt stuff. You take my point.

Anyway, there is one core point that needs to be explicated in this because it segued from (J.) Eisen’s flipping of the bird to the overall notion that it is okay to dismiss / ignore research papers on the basis of where they are published. Some Twitt going by @lightsam1 is of the PhysioProffian opinion that there exist “shitasse” journals in which we will never find anything of any worth, scientifically, so there is every excuse to simply ignore them.

This is unscholarly in the extreme. The argument, in my view, is exactly the flip side of lauding papers that are published in GlamourMags as if they are something special. They are not.

In either case, the science is the thing. There should be no substitution, ever, of judgment-by-association for a scientific judgment about the merits of a given paper.

This was the essence of my objection to (J.) Eisen’s original post. No matter how pissed you may be at the publisher, it is not right to overlook the best, most relevant papers. It is not. Similarly, it is not right to overlook the best (or first, or most comprehensive) citation because it was not in a sufficiently Glamourous journal. This runs counter to good scholarship in academic science. I will entertain the debate over priority: whether it is the best scholarship to cite the first report that touches on an issue simply because it appeared first. I happen to think an excess of this is a very large part of the Glamour problem but…ok. It is always okay to have differences of opinion over what is the “best” or “most comprehensive” or “most elegant” demonstration of a much-replicated effect. Fine. That we can debate.

However. The notion that you are citing (or not citing) a paper based on where it is published is always wrong.

Consistent with what I was saying in a prior post, there was an excuse in the Pre-PubMed era to focus on a subset of the available journals because humans weren’t capable of keeping up with everything. In the PubMed era, however? No excuse whatsoever. Online databases and search engines provide readily available, simple and reasonably* comprehensive mechanisms to sort the literature.

If anything, you should be almost unaware of journal identity and IF and perceived “status” these days. It just doesn’t have any scholarly value.

__
*remember, not everything relevant to your work is indexed by your favorite search database. Who knows when some odd economics paper might be really cool to cite, eh?



  1. I am with you 100% on this. Ideas and discoveries and knowledge should be recognized wherever they are presented … thanks for keeping me in line


  2. Brian K Says:

    “If anything, you should be almost unaware of journal identity and IF and perceived “status” these days. It just doesn’t have any scholarly value. ”

    Should we apply this to cases of tenure too?



  3. Yes Brian K you should apply that to tenure cases. People should be evaluated by the quality of their work. It is wrong to use a surrogate (like the name of a journal) for that. For tenure evaluations one should read people’s papers, evaluate their service work, get letters, and evaluate them by their total contribution and potential and not leave that work up to a few journal editors and reviewers.


  4. lightsam1 Says:

    Perhaps this is a result of the shortcomings of twitter conversations but you’ve completely distorted my point.

    I contended that there are way too many journals. Many of these journals have shockingly low standards. This is evinced both by the science contained within publications and by the lack of even cursory grammatical editing. These publications make it so that one has to sift through tons of junk when researching a scientific topic. They are a waste of the author’s time (in many cases they are simply meant as CV padding) and provide a disservice to the scientific community.

    Note that this whole topic came up when I DISPUTED Eisen’s point about ignoring Elsevier publications. I argued that it would be a dereliction of duty to neglect the high quality content published in some Elsevier journals, such as Cell. I accepted that in some way “ignoring” these problematic (for the enumerated reasons) journals might be a valid Elsevier protest strategy.



  5. Lightsam1 et al.

    There are many many awful journals and I blog/criticize them often. However, there may be nuggets of brilliance published there and we need to accept that as a possibility. Obviously, people are busy and cannot be expected to look at everything so you are right that these journals are a disservice to science in many ways and any time spent looking at them is almost certainly a waste. But what I think DrugMonkey was trying to say – and what I agree with – is that it is the quality of the work that is important – not the name of the journal in which it is published. It is an important point.


  6. lightsam1 Says:

    That Science should be judged on its own merit is an obvious point. One I never disputed.

    My position is simply that there are bad journals out there that could be valid targets of an organized protest.



  7. Yes, agreed – those bad journals should be eliminated if possible – see http://phylogenomics.blogspot.com/2012/01/scary-and-funny-functional-researcher.html for example …


  8. drugmonkey Says:

    Brian K-

    Hell yes. Even more so because the stakes are so high. The fact that P&T committees (and hiring committees) substitute association for true evaluation is a crying shame.


  9. BugDoc Says:

    “For tenure evaluations one should read people’s papers, evaluate their service work, get letters, and evaluate them by their total contribution and potential and not leave that work up to a few journal editors and reviewers.”

    Totally on board with the whole EvaluateScienceForItsOwnDamnSake thing. Unfortunately, that ain’t how it works in many hiring & tenure committees. Committees commonly look at numbers of publications, see if the papers are in a journal anyone has ever heard of and then read the letters hoping that Dr. BSD will mention whether the science is actually good or not. I’d be happy to hear about all the exceptions to this common practice (assuming there are some).



  10. Bugdoc – no doubt that is how many / most tenure committees work now. However, there are occasional exceptions … and we can always dream


  11. BugDoc Says:

    You’re right, JE. Dreaming is good. FWIW, I don’t think it’s a matter of being lazy, rather the tenure committee at a large institution may not happen to have the key expertise needed to easily evaluate all candidates for tenure when it comes down to the science.



  12. well – lazy may have been the wrong word but it is not an issue of expertise – it is an issue of frequently not reviewing all the material – that could be due to being too busy or a bit lazy or both – as well as sometimes not having the expertise


  13. whimple Says:

    That Science should be judged on its own merit is an obvious point.

    I’m going to disagree with this one in the general case. The judging on the merit is the job of the journal reviewers. Unless I’m in the field, which usually I am not, I don’t have the time or expertise to figure out whether the conclusions are a) valid and b) important. That’s why when my PubMed search turns up “Nature” I pay way more attention than when my PubMed search turns up “Journal of Open Access Obscurity”.


  14. becca Says:

    Lots of obscure facts are in “shitasse” journals and you don’t have a choice in where to cite- you either cite the obscure thing or you pretend it doesn’t exist. I’d always fall down on the ‘cite them’ side here, irrespective of whether it’s open-access or not- this seems obvious to me.

    That said, lots of commonly demonstrated facts are in a multitude of journals and you have a choice in where you cite. I’m a terrible scholar in this case- I don’t care who got there first. I don’t care who did it more elegantly. I don’t care who is a bigger person in the field (unless not citing them will prevent the paper from being published). I just care if the science is competent, widely available, and reasonably well communicated (the last being mostly a tie breaker).
    (though of course, for things like my thesis, I just cited everything- what’s it gonna hurt?)
    If you work on something that isn’t just a problem of spoiled rich first world countries, this too probably seems obvious. Heck, if you are working on something that is a problem of spoiled rich first world countries and you are still not such an idiot that you can’t see some biological links to other research topics, this should be pretty obvious.


  15. Namnezia Says:

    I agree with Whimple. It is much harder to get a paper published in a high-visibility journal, with much more vigorous and stringent peer review, than in some obscure journal of who knows what. Sure there are exceptions – many papers in glamourmagz are crappy and got published by virtue of the clout the PI has, and there are many excellent papers in crappy journals, maybe the authors didn’t feel like dealing with months of peer review. But I see that the overall quality of the papers in the top journals in my field is much better than the quality in the crappier journals.

    It would be terrible advice to give to a new PI to pass up the opportunity to publish in a high visibility journal just to make a point about open access or whatever. If you publish where people will actually SEE your paper (and most folks aren’t as thorough in their scholarship as DM) you are more likely to get invited to meetings, seminars, etc.

    It would also be bad advice to hold out from publishing unless you can get your paper into a fancy journal, but that’s a different issue.


  16. lightsam1 Says:

    Clearly, publications in highly esteemed journals tend to be of better quality — but that is a general correlation that doesn’t hold true for individual papers. One shouldn’t mindlessly judge the quality of a paper based only on the journal in which it is published.


  17. Alex Says:

    Look, reading is harder than math for most science types. If they can simply count the number of papers and notice the presence of larger numbers for IF, that is much simpler than either reading the papers themselves or interpreting statements in letters. I mean, some of those letters might run on for multiple paragraphs. What, you expect tenure committees to read?


  18. drugmonkey Says:

    rather the tenure committee at a large institution may not happen to have the key expertise needed to easily evaluate all candidates for tenure

    One defense (sort of) of Uni wide P&T committees is that they need to try to be fair to all people…not just in science but in ALL disciplines. So they have to try to be as “objective” as possible. This is totally valid. Indeed, it is admirable.

    The trouble is that I fear substitution of IF just gives a convenient patina of objectivity that is a total sham. And even if it is not a total sham, just an imperfect measure, it is going to end up being unfair just like any other measure. So the optimal solution should be 1) to identify and explicitly recognize the types of bias associated with the IF/Glamour “measure” and 2) to allow the competition of biased measures on an even playing field. I.e., to not reify IF/Glamour above and beyond all else.

    That’s why when my PubMed search turns up “Nature” I pay way more attention than when my PubMed search turns up “Journal of Open Access Obscurity”.

    Me, I don’t have time to run out a whole line of research on a finding that is statistically more likely to be retracted or “corrected”. So I’m looking at the strength of the evidence, personally.

    It is much harder to get a paper published in a high-visibility journal, with much more vigorous and stringent peer review, than in some obscure journal of who knows what.

    This has nothing to do with whether the particular figure that holds my interest is a good or bad demonstration of what it purports to be. In fact, given the Frankensteinian, cobbled-together nightmare of the average Nature/Science paper, any given figure is likely to have shitasse support and backing in it.

    But I see that the overall quality of the papers in the top journals in my field is much better than the quality in the crappier journals.

    Science is not about the “overall quality” of a paper. It is rather about whether or not some hypothesis or inference or theory has support or does not have support. Based on evidence, one data figure or table at a time. Heck, one *observation* at a time. Elegant, comprehensive, awesome papers may make me think well upon the scientists involved but that has nothing to do directly with the conduct of my own science or my understanding and interpretation of the data available in the subfields that hold my interest.

    It would be terrible advice to give to a new PI to pass up the opportunity to publish in a high visibility journal

    Dude, advice about how best to advance one’s career in the world as we find it is totally orthogonal to what is best for science and the furtherance of knowledge. I have my considerable problems with GlamourMag science but I would never for a second suggest anyone in this day and age give up the opportunity for playing the GlamourGame on principle. Now, I would suggest the balance of risky Glamour pursuit versus sustained output in more pedestrian journals needs careful adjustment depending on your subfield, institution, status, etc. But I would never say to give up paying attention to IF at all.

    It is a sad, sad, and corrosive reality but it is very much a reality for careers.


  19. lightsam1 Says:

    In my opinion your criticism of IF is overwrought. Yes, in an ideal world people wouldn’t care how their research/productivity was judged but in reality they always will. Realistically those judging aren’t going to pore over a body of literature to come to a considered judgment of its merits. Who has the time or expertise to do that?

    There has to be a simple metric. While IF isn’t perfect, it is actually pretty decent. It is far better than a metric based solely on # of publications, which would incentivize quantity over quality and lead to more of your “shitasse” journals. IF is good because it incentivizes important work.


  20. drugmonkey Says:

    While IF isn’t perfect, it is actually pretty decent.

    No, it isn’t. It is first and foremost intellectually dishonest given the huge skew in citations that make up the IF average. It is gamable by the journals themselves and some external attempts to replicate the numbers have come up short. It is furthermore distorted by the size of the field. Finally, the gatekeeper role of professional editors who are not actually working scientists leads to a whole host of shenanigans that decouple “the best” from the process.

    It is far better than a metric based solely on # of publications, which would incentivize quantity over quality and lead to more of your “shitasse” journals.

    If you would read more closely you would understand that I am entirely unworried about quantity of papers, given modern search engine technology. My science proceeds one figure at a time and I really don’t care if I reference a body of work spread out over 3 papers versus one. What I *do* care about is if things are so jampacked and squashed into the one paper that I am missing highly relevant detail that would have been provided if the “story” were split into three or five papers. This is one reason why the “quality” argument about GlamourMag articles is ass backward.

    IF is good because it incentivizes important work.

    No, it incentivizes “hot” work of a particularly topical and sensational nature. It incentivizes stealing and scooping and a whole host of related bad practices, including faking. The nature of what is “important” in science is rarely known until many, many years later.


  21. whimple Says:

    Me, I don’t have time to run out a whole line of research on a finding that is statistically more likely to be retracted or “corrected”. So I’m looking at the strength of the evidence, personally.

    Sure, but often I’m skimming abstracts to get the broad sense of a different field. Where the work was published is part of the strength of the evidence.


  22. lightsam1 Says:

    The problem with quantity isn’t that papers may be broken up.

    The main reason that “shite” journals exist is because researchers want to publish for the sake of publishing. They want to publish even if their data is utterly trivial or incomprehensible, simply so they can add another publication to their CV.

    We want researchers to work on what is meaningful over what is easy. Therefore, we should aim to create incentive structures that value the significance of findings over the # of them.

    All the problems with IF that you point out are valid, and I do think IF could be fine-tuned to better align with what is really in the public/scientific interest, but ultimately those costs are outweighed by the good of disincentivizing quantity as a metric of success.


  23. Alex Merz Says:

    “While IF isn’t perfect, it is actually pretty decent.”

    So IF is a good proxy for average article quality, you think. Like DM, I call BS.

    For starters, IF is an arithmetic mean applied to a data set that (roughly) follows a power distribution. Consequently, the median paper in any of these journals is cited far less than the journal’s IF would imply. Fine, if you’re trying to assess the journal’s overall impact. Almost unspeakably lousy if you’re trying to make a guess about the quality of a median paper in that journal. Sure, sometimes you get lucky. You look at a paper and its citation rate over time is near the journal’s IF, or even a little better. And maybe it’s even cited steadily for a decade instead of a year or two. But pleasing though they may be, papers like that don’t explain the IF’s of the Glamour Mags. Papers like this do. Such papers as that are exceedingly rare, even within the context of a GlamMag. And you absolutely cannot judge one by its proximity to the other. All you can do is read the thing, and see if it advances The Work.

    The above considerations set aside the selection criteria used to curate the underlying dataset used to compute IF, and the ways in which publishers and editors attempt to game IF calculations by influencing the composition of the underlying dataset.

    Moreover, IF deals only with citations in the first two years post-publication. Thus, the extent to which a journal publishes papers of lasting value might or might not be reflected in the journal’s IF. At least there you have an option, the Eigenfactor.org Article Influence Score, which uses a 5-year window rather than a 2-year window. But it’s still just an arithmetic boxcar average of a skewed and gamed data set that follows something like a power distribution. As such, it tells you essentially nothing about the median paper that will appear in the next issue.
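
    To make the mean-versus-median point above concrete, here is a minimal sketch using made-up, simulated citation counts (the log-normal parameters are arbitrary assumptions, not data from any real journal). It shows how an IF-style arithmetic mean over a heavy-tailed distribution can land well above what the median paper actually receives:

    # Minimal sketch: hypothetical, simulated citation counts from a heavy-tailed
    # (log-normal) distribution, comparing an IF-like mean with the median paper.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend journal with 1,000 papers; citation counts are heavily skewed.
    citations = np.floor(rng.lognormal(mean=1.0, sigma=1.5, size=1000)).astype(int)

    mean_citations = citations.mean()        # what an IF-style average reports
    median_citations = np.median(citations)  # what the typical paper actually gets

    print(f"IF-like mean: {mean_citations:.1f}")
    print(f"median paper: {median_citations:.1f}")
    print(f"share of papers cited less than the mean: {(citations < mean_citations).mean():.0%}")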



  24. IF is good because it incentivizes important work.

    I would consider myself someone who doesn’t despise IF with the fervor of many anti-IF enthusiasts, but this is just fucken embarrassing. The real lie of IF is that it doesn’t exclude review articles. This is exactly why for-profit journals publish fuckeloades of review articles, even having “special issues” that have nothing but review articles in them. Because even a shitty review article summarizing a field and published in a decent journal is gonna get many dozens of citations right off the bat.

    I would like to see the IFs of Neuron and Journal of Neuroscience *without* review articles. I bet they would be statistically indistinguishable. (Neuron publishes vast numbers of reviews, and has review-only special issues; Journal of Neuroscience publishes essentially no reviews at all.)
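
    As a purely hypothetical illustration of how much a handful of reviews can move that average (these numbers are invented for the arithmetic; they are not Neuron’s or Journal of Neuroscience’s actual figures): a journal publishing 90 research articles averaging 4 citations each plus 10 reviews averaging 40 citations each would report an IF-style mean of (90×4 + 10×40)/100 = 7.6, versus 4.0 without the reviews.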



  25. The real lie of IF is that it doesn’t exclude review articles.
    Oooh, CPP exposes the dirty secret of IF inflation in some journals.


  26. lightsam1@gmail.com Says:

    Comrade PhysioProf,

    That is an imperfection in the IF metric that, literally, everyone recognizes. No metric is ever going to be perfect; that doesn’t mean we should scrap it altogether. I agree that it would be great if we could improve it but for now we have to keep in mind the review paper caveat when discussing/thinking about IF. People do the exact same thing when discussing important metrics like the unemployment rate (flawed because it doesn’t account for underemployment, discouragement, etc.).


  27. Alex Merz Says:

    Literally? There’s a web log that you might like: http://literally.barelyfitz.com/


  28. lightsam1@gmail.com Says:

    Literally, any serious discussion of IF makes that point.

    At any rate, it is hard to see how a random blogger’s language pet peeve has any bearing on this conversation.


  29. Alex Merz Says:

    The more serious point, lightsam, is that you are, literally, wrong. It is not the case that “everyone” recognizes the limitations of IF. Many, many people don’t, and many of those people have significant say in decisions about funding and professional development.

    Alternatively, perhaps you understand what “literally” means, but not what “everyone” means.


  30. Alex Merz Says:

    Also, see above for my “serious” comment on IF, which has now made it through moderation: http://scientopia.org/blogs/drugmonkey/2012/01/30/more-on-shitasse-journals/#comment-27148


  31. drugmonkey Says:

    One of my problems lightsamiam is that “everyone knows” all the problems with IF…and then shrugs, throws up their hands, says “it’s pretty decent” or “it’s what we have” and then proceeds to behave precisely as if they do not understand the limitations one bit.


  32. drugmonkey Says:

    Re: meaningful vs easy. One huge problem with a publication being the tip of 5 person years of work is not how “hard” it should be to get a publication. It is that much of that work, unpublished and inaccessible, now has to be repeated by someone else in their “hard” effort to get the next Glamourous Paper. Totally inefficient and uncooperative way to do science. Big waste of taxpayer money, too. Were I running the NIH I’d be mighty disturbed by this and be looking to fund those who maximize the % of their data that is published.


  33. lightsam1 Says:

    Alex,

    To your other post, which I just noticed.

    The earlier point I made about the metric not being perfect is again applicable. There is plenty of room for healthy debate about the best IF formulation.

    About judging the quality of any individual paper based on IF, I agree that is inadvisable.

    Again, the benefit of a high impact factor is that its pursuit motivates researchers to tackle difficult, but important, questions.


  34. lightsam1 Says:

    DM,

    Point taken about duplication of efforts but how is that uniquely a glamor mag problem? And how do you avoid that without instituting central coordination?


  35. Alex Merz Says:

    lightsam, Come on by our Department. I’ll introduce you to Eddie Fischer, and you can explain to him how if only he’d set his sights on Science or Nature instead of the JBC, he might have been motivated to attack more important questions. You can explain the same thing to Earl Davie, while you’re at it.

    http://www.nobelprize.org/nobel_prizes/medicine/laureates/1992/
    http://www.jbc.org/content/281/48/e39.full


  36. lightsam1 Says:

    “One of my problems lightsamiam is that “everyone knows” all the problems with IF…and then shrugs, throws up their hands, says “it’s pretty decent” or “it’s what we have” and then proceeds to behave precisely as if they do not understand the limitations one bit.”

    Is it really that inconsistent to acknowledge its imperfections but still believe it has enough benefits to overcome these?


  37. DrugMonkey Says:

    It is not “uniquely” a GlamourMag problem but it is highly enriched in the GlamourGame. It is a nearly inevitable feature of most GlamourMag articles that a shitload of work goes into them that has not appeared in the actual article. (Not even in the abomination of “Supplementary Materials”.) It is a nearly inevitable part of those types of labs that they are more interested in the next Glamour pursuit than in tidying up all the details with lesser publications in “shitasse” journals. Ditto that getting scooped means "too bad, grad student" instead of publishing a me-too in a dump journal.

    Sometimes, even if the trainee would do the work, the PI won’t have the good IF name of the lab sullied with lesser pubs (this is a bit more variable from what I can tell).


  38. lightsam1 Says:

    Agreed, the Nature/Science format is too restrictive.

    You think that getting rid of the high profile publications would improve the culture in these big-shot labs?


  39. DrugMonkey Says:

    Is it really that inconsistent to acknowledge its imperfections but still believe it has enough benefits to overcome these?

    No, just dumb.


  40. DrugMonkey Says:

    …unless what you mean by “enough benefits” is, “enough benefits for me personally that I totally buy into this game”. Then it is selfish and scummy but smart.


  41. lightsam1 Says:

    You haven’t even commented on my central thesis. What do you dispute in it?


  42. drugmonkey Says:

    what “central thesis”? I’ve disputed plenty about the notion that IF and/or GlamorGaze is at all useful for science


  43. lightsam1 Says:

    About the quality v quantity incentive structure.


  44. drugmonkey Says:

    You think that getting rid of the high profile publications would improve the culture in these big-shot labs?

    Not necessarily but it would improve the culture for us real scientists doing real science.


  45. drugmonkey Says:

    Apparently you have reading comprehension problems.

    1) I disagreed that Glamour is an incentive to “quality”

    2) I suggested that, particularly when it comes to the very tippy top of Glamour mags, they actually disincentivise the kind of scientific quality that advances science as a whole. You appeared to agree with me so I fail to see how you missed this part of my addressing your central thesis.

    3) I disagree with your implicit point that there is anything wrong with “quantity” incentives


  46. lightsam1 Says:

    Your points have tackled perceived inefficiencies, not diminished quality, in the glamor mags.

    I’ve had enough of your insulting condescension so this is my last post.


  47. drugmonkey Says:

    The retractions kinda speak for themselves…


  48. Alex Says:

    There’s a lot wrong with GlamourPub science, but there is an alternative hypothesis for the retractions: Retractions happen when people pay attention to the work and somebody spots a problem. In my own group meetings, my students and I have picked apart non-Glamour and Glamour articles, but we’ve been more fastidious with the Glamour papers. If something makes a splash in the community, as a member of the community I feel a greater urge to understand WTF everybody is going on about. If something makes less of a splash, I am more likely to say at some point “OK, I’ve puzzled over this for a while, time to move on.”

    Of course, it should go without saying that if the paper is directly related to the question we’re working on at the moment, we pick it apart regardless of the IF. But if it is something tangential to the immediate question in front of us, our desire to tear it apart does depend in part on the buzz in the community that we interact with. We’re social creatures, for good or for ill.

    All that said, I’m not convinced that the alternative hypothesis is true. I think it’s entirely possible that Glamour papers get retracted more often because the pursuit of glory leads to sloppiness. I find both hypotheses plausible but I also find both in need of further data.

    Somebody should do some flashy-looking scientometrics on this and submit a slap-dash paper to Nature to test the hypothesis in the most meta way possible…


  49. Namnezia Says:

    DM says,

    Sometimes, even if the trainee would do the work, the PI won’t have the good IF name of the lab sullied with lesser pubs (this is a bit more variable from what I can tell).

    That would be bad, and it might happen in a few labs, but this is not the norm, even for fancy labs.


  50. DJMH Says:

    I don’t totally buy the idea that just because search engines exist, it doesn’t matter whether you publish your data in small, low-profile, individually less important chunks or larger, more comprehensive papers. Oftentimes, the whole is greater than the sum of the parts. You can extract a larger hypothesis, a bigger idea from several distinct experiments and analyses–and it does matter.

    If you can only say, in your discussion, “If you draw on these 16 pieces of evidence from 12 different labs, any of which might have been using different methods or analysis, it totally suggests Big Idea X,” I find that far less persuasive than if you say, “The results of our study [performed by one group of people using consistent methods and analysis] indicate Big Idea X.”


  51. AcademicLurker Says:

    What puzzles me is the argument “OK, IF isn’t great but it’s the best we’ve got”.

    That would have been true in 1990 when hunting down all the citations to individual papers was prohibitively difficult and time consuming. But now I can get that information in 30 seconds without leaving my desk. Worried about someone gaming the system by citing themselves? No problem, web of science has an “omit self-citations” button.

    I just don’t see the argument for relying so heavily on IF when these other metrics are now right at our fingertips.


  52. drugmonkey Says:

    DJMH, while you are not wrong, I think your example does a touch of side stepping and goalpost moving. I was mostly talking about a single lab or group in which we might assume the methods used for three papers are as consistent as anything else that goes on in a laboratory and might be put into one mega paper.

    But I do have one caveat. What if the results really do depend on one lab doing it and there is no such thing as cross-lab replication? Should we take those results as valid? Or as an accident that may very well have arisen by chance? In my world, the highest confidence I have for results are those for which the general idea is supported across the type of methodological variation you mention. If it stands up to someone doing it in different systems, different strains of rodents, different breeding lines, heck, different species, then I am more inclined to believe it. If a result requires esoteric shit, just the right conditions, antibodies harvested at the rise of the New Moon or wtf-ever I am more inclined to view it as a cute demonstration…but less likely to credit it as solid.



  53. […] Scientopian DrugMonkey has blogged a perfect storm of a discussion on impact factor and glamour science. Click on over and read the comments (warning: your head may explode). This argument will sound […]


  54. DJMH Says:

    DM, OK if you want to restrict it to one-lab situations, but even then – my point is that the Big Idea only comes out if you (a) read people’s discussions exhaustively, or (b) know the field well enough that you can extract the Big Idea by reading those five papers, or (c) you hear the PI give a talk that brings it all together. Since (a) and (b) are likely to be restricted to people who are deep in the trenches in the same field, I still think that there is merit in pulling the data together for a “bigger” paper.

    That said, I’m not a big fan of the labs (that do in fact exist, Namnezia….you think everyone in Richard Axel’s lab has exclusively cell/nature/science type ideas?) that will ONLY publish in the fanciest mags. Moderation in all things.


  55. whimple Says:

    Gotta add Bob Tjian’s (your current HHMI president) name to the list of GlamMag-only last authors. Looks like PNAS is as far down the food-chain as he goes for his last-author material.


  56. anon Says:

    “GlamMag-only last authors”

    Not to diss anyone’s work or anything, but certain names tagged on as last authors are sufficient for an easy pass (or easier pass) into GlamMag journals. Reviewers are easier on these people – both for publishing and funding.


  57. drugmonkey Says:

    the Big Idea only comes out if you (a) read people’s discussions exhaustively, or (b) know the field well enough that you can extract the Big Idea by reading those five papers, or (c) you hear the PI give a talk that brings it all together. Since (a) and (b) are likely to be restricted to people who are deep in the trenches in the same field, I still think that there is merit in pulling the data together for a “bigger” paper.

    Obviously I think a-b-c is the normal and proper way that science should go down. You seem to undersell the way the Discussion of the third of a series builds on all three papers (?).

    If there are people outside the field that need it all pulled together, well, that’s what review articles are for. Those can do one hell of a lot more pulling together than can a typical article in Science or Nature, let me tell you.

    Also, you are ignoring the loss side of the equation which is my real problem. The loss of time waiting for the larger story to emerge and the loss of data which is necessary to the eventual production of the paper but ends up being unpublished because it is “only” a part of the journey to the Glamourous end. Both of these lead to inefficiencies in the system because other labs can’t synergize, launch, extend, interpret, refine, replicate….etc.


  58. Alex Merz Says:

    This point by DM is so important that it bears repeating.

    …in my world, the highest confidence I have for results are those for which the general idea is supported across the type of methodological variation you mention.

    Darn right.

    Over and over again I caution my trainees that there’s a world of difference between technical reproducibility under highly circumscribed experimental conditions, where a result might swing wildly in another direction if a small detail is changed, and robust reproducibility.

    Now, that said, one of my peeves with the current GlamMag style is that one often sees a “story” assembled by multiple labs, where the editors have obviously demanded biochemistry, cell biology, animal experiments so that the “story” would be “complete” — but one or more legs of the “story” are shoddy and weak. In these cases the paper would have been vastly stronger and over the long run much more useful to the field if only (say) the immaculate genetic analysis had been presented, and the shoddy biochemistry (or electrophysiology, or whatever) had been omitted.

    It’s clear what’s happening here. Experimental excellence is sacrificed for the sake of a marketable narrative.

    More and more, what I want to see are great experiments, not “complete” “stories.”


  59. drugmonkey Says:

    Now, that said, one of my peeves with the current GlamMag style is that one often sees a “story” assembled by multiple labs, where the editors have obviously demanded biochemistry, cell biology, animal experiments so that the “story” would be “complete” — but one or more legs of the “story” are shoddy and weak. In these cases the paper would have been vastly stronger and over the long run much more useful to the field if only (say) the immaculate genetic analysis had been presented, and the shoddy biochemistry (or electrophysiology, or whatever) had been omitted.

    Preach on. As a refinement, the overall science would be stronger if each subfield’s bit of work was reviewed with vigor by those subfield experts. Instead of thinking some BSD GlamourMag PI (who doesn’t understand all the ins and outs of the techniques in his OWN damn papers) is capable of providing good review of work in six disparate disciplines. The funny thing about all the GlamourDeluded is that they seem to have never seen the third degree to which cottage industry review extends in the “shitasse” journals of BunnyHopperology. On a per-experiment and per-figure basis, the actual review of the scientific rigor can be tremendous.

    Just think of this. If GlamourMag review is so awesome and rigorous, how can there ever be unlabeled error bars or a lack of statistical analysis of group data? Hmmm? Why should a “representative figure” EVER be considered “data”? What a friggin’ joke.


  60. DJMH Says:

    No question there is a lot of shit published, high and low. My point is that if you publish in dribs and drabs, you are putting the burden on your readers to connect the dots. And readers have other things to do with their time, like reading papers that have a few more fucking dots.

    Sure, you can write a review to put it all together, but no one presents a review for journal club, so unless it is directly in my field I won’t know about it. Also, there are a lot of reviews that people talk about writing but never get around to. /yes that’s me.

    To be clear, yes there are labs that hold back everyone’s work until it’s in Nature. Some PIs are explicit that J neurosci isn’t worth their time, others just conveniently lose those manuscripts on their desk. I think that is shitty behavior and a huge disservice to trainees, especially those who just want to get the paper and move on. But not every Nature paper comes from giant, unreliable collaborations of hyper pressurized desperate postdocs. I hope.


  61. Grumble Says:

    Alex says: “I think it’s entirely possible that Glamour papers get retracted more often because the pursuit of glory leads to sloppiness.”

    Sloppiness? I think many Glamour retractions are the result of outright FRAUD. Come on. There is immense pressure to produce the obvious-yet-sexy results that get published in these kinds of journals. Just one such paper, and hiring committees start paying more attention, grants get easier, etc. Scientists are just people. People lie. How hard is it for a post-doc, worried about his/her future, to make up great-looking results and spoon feed them to the PI?

    Retractions don’t usually admit fraud, but a “regrettable error” or an “inability to reproduce the results in Figure 2” doesn’t mean that the original “findings” weren’t manufactured out of whole cloth.

    And the retractions are only the tip of the iceberg. How many Glamour results have you (personally, or someone in your lab) been unable to replicate? What fraction of those was due to someone else’s fraud?


  62. drugmonkey Says:

    I agree with Grumble that many retractions just don’t meet the smell test. I mean, this Science and Nature stuff is supposed to be the pinnacle of science. Somehow “placeholder” figures are “left by mistake”? The “wrong version of the figure was submitted”? Computer crash destroyed the original files, come to find on inquiry? Keep in mind these things go through at least one round of review and revision, maybe several. And nobody notices some bullshit “mistake”?

    It doesn’t add up.

    But there is not always hard proof so we are left with our innocent-til-proven way of saying things.

    Hence “sloppiness”.


  63. dsks Says:

    IF? Fuck, I thought that conversation was over. I thought we had at least progressed to H-index/G-index by now.

    Well, whatever. I’ll play this stinking game and hope that I get tenure. Then I’m done with this bullshit. I’m publishing everything on Cafe Press from then on. The best figure gets its own T-shirt.


  64. rork Says:

    There is lying.
    There are also 50 ways to polish a result or fool readers’ socks off. More than half the papers I read carefully contain mild to serious cheats. It’s often hard to tell if it is crafty, or the result of the writer being clueless. There’s an everyone-does-it-to-some-degree thinking. It is indisputable that reviewers are largely innumerate, and rarely send their pit-bull nerd to dig into data deeply – it’s very time consuming for the nerd.

    Lead the good life:
    Being more honest than your competition means being constantly angry about what they are getting away with. A statistician who is very honest will be disliked by many co-workers – you are always the bad guy. Some other nerd will be “easier to work with”. You lose collaborations. You will have to ask that your name be removed from papers rather often, after you are heavily invested. You will have to refuse to do certain analyses requested by powerful folks at your institution. Finally, you better not inquire too deeply when you smell bad things from the benches, cause your suspicions may then be proven correct, and you’d be obliged to report it (and what has that ever gotten you), so run away early, or tell yourself you aren’t 100% sure. Peace.


  65. DrugMonkey Says:

    Statisticians are notoriously bad at understanding that their “rules” for the way things “have to be done” are not in fact etched on two stone tablets handed down by Godz.


  66. Alex Merz Says:

    Coercive citation requests by journal editors who want to bump their journal’s impact factor: http://www.sciencemag.org/content/335/6068/542.full


  67. rork Says:

    Rule 1: Don’t say there were more replicates than there were.
    Rule 2: If the replicates were merely pipetting the same RNA into 3 wells, say so, or better, do an experiment with serious enough biological replication that it is actually able to test an idea.
    Rule 3: Show tumor or cell growth on the log scale, so we can see what differences there were in the starting populations. Correct for initial conditions when testing. Give growth rates units that correspond to a rate of growth.
    Rule 4: If you aren’t taking logs, have a reason other than “p-value was smaller thanks to luck”.
    Rule 5: cross-validate when not doing so is biased.
    Rule 6: Say what the whiskers mean.
    Rule 7: If your 100 favorite genes cluster the famous data set into better and worse prognosis groups, don’t say that means anything unless they did better than 100 random genes (and your genes go in the expected direction too often).
    Rule 8: Don’t test main effects when the hypothesis is about an interaction.
    Rule 9: Don’t normalize and T-test when you really need an ANOVA with 4 groups that accounts for variation in the controls.
    Rule 10: Say what the colors in the heatmap mean.
    Rule 11: Don’t show mean and standard deviation if it is easy to show the data.

    “notorious” for honesty is good.


  68. drugmonkey Says:

    Disagree totally with 11.

    8 and 9 fall into the usual trap about statistics, i.e. everyone’s conceit that theirs is the only “right” way. I advocate consistency w/in a type of experiment and subfield semi-consensus over fixed notions.


  69. rork Says:

    Also wildly popular:

    Rule 12: When your new pet marker fails to be significant in a multivariable Cox model with clinical covariates, don’t dumb-down (e.g. dichotomize) or lose those covariates until your marker finally “works”.
    Rule 13: Don’t select data to prove a point if showing all the data is horrifying, or proves the point you wanted to make is probably wrong. Show the data the reader wants to see, not just the part you want them to see.
    Rule 14: if there is a simple statistical test for a claim, but it gives p=.08, admit it.
    Rule 15: do not cherry pick the 50th best enrichment result unless you admit it was 50th best, and let the reader see the 49 that were better.
    Rule 16: Don’t discard genes from consideration with pre-filters that depend on the sample labels (unless you cross-validate that).
    Rule 17: Give your best estimate of how many of the so-called “significant” genes are expected to be false positives, so we can call it science. Don’t say how you estimated false discovery rates but fail to tell us what the estimates were.


  70. rork Says:

    “I advocate consistency w/in a type of experiment and subfield semi-consensus over fixed notions.”
    Arguments from popularity/authority are often used when the act is common precisely because it is bad. Commandment 11 is a good example. Folks don’t want us to see the data, so they show a summary instead. I know what I want to see – the damned data.

    http://www.nature.com/neuro/journal/v14/n9/full/nn.2886.html
    is about point 8, famously. Many researchers do not even realize that they want to test an interaction rather than the main effects – even after you explain it to them 3 different ways.

    For 9, when you do it wrong, it is wrong. There is consensus among those that know how to write down a model.
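
    To make the point about Rule 8 concrete, here is a minimal sketch in Python (invented data and invented effect sizes, using statsmodels as an assumed tool; none of this comes from the thread): rather than running two separate tests and eyeballing which p-value crosses 0.05, fit one model and test the genotype-by-treatment interaction directly.

    # Minimal sketch of the interaction point (Rule 8), with invented data.
    # "Significant in wild-type, not significant in knockout" is not a test of
    # the difference between genotypes; the interaction term is.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 20  # observations per cell of the 2x2 design

    df = pd.DataFrame({
        "genotype":  np.repeat(["wt", "ko"], 2 * n),
        "treatment": np.tile(np.repeat(["vehicle", "drug"], n), 2),
    })
    # Invented true effects: the drug raises the response in both genotypes,
    # somewhat more in wild-type.
    true_effect = {("wt", "vehicle"): 0.0, ("wt", "drug"): 1.0,
                   ("ko", "vehicle"): 0.0, ("ko", "drug"): 0.6}
    df["response"] = [true_effect[(g, t)] for g, t in zip(df.genotype, df.treatment)]
    df["response"] += rng.normal(scale=1.0, size=len(df))

    # One model; the C(genotype):C(treatment) row is the actual test of whether
    # the drug effect differs between genotypes.
    fit = smf.ols("response ~ C(genotype) * C(treatment)", data=df).fit()
    print(fit.summary().tables[1])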


  71. drugmonkey Says:

    Consensus does not imply majority rule, nor the perpetuation of errors.

    Regarding interactions, I agree that this is screwed up frequently. Oftentimes, though, messed-up analysis can be laid at the door of pre-planned comparisons being used far, far too infrequently. This is likely in large part because they don’t tend to be included in simple, user-friendly stat packages and because nobody really understands that they should be used.



  72. […] DrugMonkey for “More on “shitasse” journals” (why any call for academics not to cite papers published in Elsevier journals – or ANY journal / group of journals – is wrong-headed and unscientific) […]


