Addressing the File Drawer Problem of (Non)Replication in Science

March 30, 2012

This is absolutely BRILLIANT!

PsychFileDrawer.org is:

a tool designed to address the File Drawer Problem as it pertains to psychological research: the distortion in the scientific literature that results from the failure to publish non-replications. Most journals (especially high impact journals that specialize in publishing surprising findings that have low prior odds of being correct) are rarely willing to publish even carefully conducted non-replications that question the validity of a finding that they have published. Often the only people who learn about non-replications are those who happen to be “plugged in” to social networks that circulate this information in a fragmentary and inefficient way. Even textbook authors are rarely well informed about the replicability of the results that they report on, and may often rely upon results that are known to be dubious by those working in the area.

What a great idea. One of the reasons I recently held out as a justification for the LPU approach to publishing is the hoarding of not-enough-for-pub data out there that might save someone else a whole hell of a lot of time. Well, chasing after a supposed published finding as your control or launching point for new studies can land you in one of those little potholes. Wouldn’t it be nice to see a half dozen (or more) attempts to replicate an effect to (at the very least) tell you which are the key conditions and which can be manipulated for your purposes?

Other fields should try something like this.
__
Disclaimer: I’m professionally acquainted with one or more of the people apparently involved in this effort.

15 Responses to “Addressing the File Drawer Problem of (Non)Replication in Science”

  1. physioprof Says:

    PLoS ONE will happily publish failures to replicate, so long as all of our rigorous standards of peer review are satisfied.


  2. drugmonkey Says:

    When did you turn into such a PLoS wackaloon?


  3. physioprof Says:

    Dude, I’ve been a PLoS ONE academic editor for four years! PLoS ONE is an excellent journal, and I recommend it very highly over sub-field-specific commercial and society journals.


  4. Beaker Says:

    PLoS ONE good. The chief criterion is whether the data justify the conclusions. The conclusions could be trivial, but in practice it seems that most of the conclusions are more than bunny-hopper navel-gazing. That is enough. Significance will come out in the wash, as time passes.

    Having said that, I have not yet seen a PLoS ONE paper take down a prior glamor journal’s mis-framing of the data. Better to ignore rather than challenge/refute the occasional excesses of the kool kidz. After all, they will be reviewing your grants.


  5. Lance Turtle Says:

    I thought that was the whole point of PLoS One….


  6. physioprof Says:

    The “whole point of PLoS ONE” is to publish manuscripts describing carefully performed, properly analyzed experiments, and where the stated conclusions are well supported by the presented experimental results. Things like “impact”, “scope”, “breadth of interest”, “completeness of ‘story’”, are explicitly excluded from consideration during peer review and editorial decision-making.


  7. AcademicLurker Says:

    In my own corner of science I’ve been consistently impressed with the papers in PLoS ONE. As per CPP, they are often of comparable quality to what I see in one of my main society journals.


  8. drugmonkey Says:

    But…but…they aren’t peer reviewed! And they don’t assess impact!


  9. Lance Turtle Says:

    @physioprof – Yeah that’s what I mean! I thought PLoS One was meant to get the data out there, no matter what they show, provided they are of sufficient quality etc. My understanding was this is one potential place for negative data or data that don’t replicate original findings (though well conducted).



  10. physioprof Says:

    “My understanding was this is one potential place for negative data or data that don’t replicate original findings (though well conducted).”

    Your understanding is 100% correct, and we encourage such submissions.



  11. physioprof Says:

    “But…but…they aren’t peer reviewed! And they don’t assess impact!”

    I have edited dozens and dozens of PLoS ONE submissions, and the rigor of peer review is even greater than is typical for “glamour” journals, because the titillation factor of “impact” and “novelty” does not compensate for poorly conducted, improperly controlled, sloppily analyzed, and/or overinterpreted experimental findings.

    And no, we don’t “assess impact”. That is the whole point.


  12. drugmonkey Says:

    Hahhahahhaa, you sound like me now PP!!!


  13. Jekka Says:

    PLoS Genetics FTW!

    (takedown of back-to-back Science papers)

    http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.1002600


  14. dsks Says:

    “One of the reasons I recently held out as a justification for the LPU approach to publishing is the hoarding of not-enough-for-pub data out there that might save someone else a whole hell of a lot of time.”

    I think this reason can also be used to justify the creation of an outlet in which to publish experiments that the investigators could never get to work. A journal entitled something like “Methods in FAIL!”, perhaps.

    I’ve used The Science Creative Quarterly for this sort of stuff in the past, but the readership of that rag is a tad limited.


  15. Sezam Says:

    PLOS1 should work for good negative results. For not as good (but still publishable) negative results there’s a “Journal of Negative Results”:
    http://www.jnrbm.com/

    But I would go for PLOS1 where possible =)


