February 25, 2014
The latest round of waccaloonery is the new PLoS policy on Data Access.
I’m also dismayed by two other things of which I’ve heard credible accounts in recent months. First, the head office has started to question authors over their animal use assurance statements. Specifically, they have refused to take the statement of local IACUC oversight as valid because of the research methods and outcomes. On the face of it, robust concern about animal use isn’t a terrible thing. However, in the case I am familiar with, they got it embarrassingly wrong. Wrong because any slight familiarity with the published literature would show that the “concern” was misplaced. Wrong because if they are going to try to sidestep the local IACUC, AAALAC and OLAW (and their worldwide equivalents) processes, then they are headed down a serious rabbit hole of expensive investigation and verification. At the moment this cannot help but be biased, and accusations are going to rain down on the non-English-speaking and non-Western-country investigators, I can assure you.
The second incident has to do with accusations of self-plagiarism based on the sorts of default Methods statements or Introduction and/or Discussion points that get repeated. Look, there are only so many ways to say “and thus we prove a new facet of how the PhysioWhimple nucleus controls Bunny Hopping”. Only so many ways to say “The reason BunnyHopping is important is because…”. Only so many ways to say “We used optogenetic techniques to activate the gertzin neurons in the PhysioWhimple nucleus by….”. This one is particularly salient because it works against the current buzz about replication and reproducibility in science. Right? What is a “replication” if not plagiarism? And in that case it is not just the way the Methods are described, the reason for doing the study and the interpretation that get copied. No, in that case it is plagiarism of the important part: the science. This is why concepts of what counts as “plagiarism” in science cannot be aligned with concepts of plagiarism in a bit of humanities text.
These two issues highlight, once again, why it is TERRIBLE for us scientists to let the humanities trained and humanities-blinkered wordsmiths running journals dictate how publication is supposed to work.
Data repository obsession gets us a little closer to home because the psychotics here are the Open Access Eleventy waccaloons who, presumably, started out as nice, normal, reasonable scientists.
Unfortunately PLoS has decided to listen to the wild-eyed fanatics and to play in their fantasy realm of paranoid ravings.
This is a shame and will further isolate PLoS’ reputation. It will short-circuit the gradual progress they have made in persuading regular, non-waccaloon science folks of the PLoS ONE mission. It will seriously cut down submissions…which is probably a good thing, since PLoS ONE continues to suffer from growing pains.
But I think it a horrible loss that their current theological orthodoxy is going to blunt the central good of PLoS ONE, i.e., the assertion that predicting “impact” and “importance” before a manuscript is published is a fool’s errand and inconsistent with the best advance of science.
The first problem with this new policy is that it suggests that everyone should radically change the way they do science, at great cost of personnel time, to address the legitimate sins of the few. The scope of the problem hasn’t even been proven to be significant and we are ALL supposed to devote a lot more of our precious personnel time to data curation. Need I mention that research funds are tight and that personnel time is the most significant cost?
This brings us to the second problem. This Data Access policy requires much additional data curation, which will take time. We all handle data in the way that has proved most effective for us in our operations. Other labs have, no doubt, done the same. Our solutions are not the same as those of people doing very nearly the same work. Why? Because the PI thinks differently. The postdocs and techs have different skill sets. Maybe we are interested in a sub-analysis of a data set that nobody else worries about. Maybe the proprietary software we use differs and the smoothest way to manipulate data is different. We use different statistical and graphing programs. Software versions change. Some people’s datasets are so large as to challenge the capability of regular-old desktop computing and storage hardware. Etc, etc, etc, ad nauseam.
Third problem: this diversity in data handling leads, inevitably, to attempts to impose data orthodoxy. So we burn a lot of time and effort fighting over that. Who wins? Do we force other labs to look at the damn cumulative records for drug self-administration sessions because some old-school behaviorists still exist in our field? Do we insist on individual subjects’ presentations for everything? How do we time-bin a behavioral session? Are the standards for dropping subjects the same in every possible experiment? (Answer: no.) Who annotates the files so that any idiot humanities-major on the editorial staff of PLoS can understand that a submission is complete?
Fourth problem: I grasp that actual fraud and misleading presentation of data happen. But I also recognize, as the waccaloons do not, that there is a LOT of legitimate difference of opinion on data handling, even within a very old and well-established methodological tradition. I also see a lot of will on the part of science denialists to pretend that science is something it cannot be in their nitpicking of the data. There will be efforts to say that the way lab X deals with, e.g., their fear conditioning trials is not acceptable and they MUST do it the way lab Y does it. Keep in mind that this is never going to be single labs but rather clusters of lab methods traditions. So we’ll have PLoS inserting itself into the business of dictating how experiments are to be conducted and interpreted! That’s fine for post-publication review, but to use that as a gatekeeper before publication? Really, PLoS ONE? Do you see how this is exactly like preventing publication because two of your three reviewers argue that it is not impactful enough?
This is the reality. Pushes for Data Access will inevitably, in real practice, result in constraints on the very diversity of science that makes it so productive. It will burn a lot of time and effort that could be more profitably applied to conducting and publishing more studies. It addresses a problem that is not clearly established as significant.
June 6, 2013
Anyone who thinks this is a good idea for the biomedical sciences has to have served as an Associate Editor for at least 50 submitted manuscripts or there is no reason to listen to their opinion.
January 17, 2013
To be absolutely clear, I use the term “dump journal” without malice. Some do, I know, but I do not. I use it to refer to journals of last resort. The ones where you and your subfield are perfectly willing to publish stuff and, more importantly, perfectly willing to cite other papers. Sure, it isn’t viewed as awesome, but it is….respectable. The Editor and sub-editors, probably the editorial board, are known people. Established figures who publish most of their own papers in much, much higher IF journals. It is considered a place where the peer review is solid, conducted by appropriate experts who, btw, review extensively for journals higher up the food chain.
What interests me today, Dear Reader, are the perceptions and beliefs of those people who are involved in the dump journal: authors who submit work there, the Editor and any sub-editors….and the reviewers. Do we all commonly view the venue in question as a “dump journal”? Or are there those who are surprised and a bit offended that anyone else would consider their solid, society-level journal such a thing?
Are there those who recognize that others view the journal as a dump journal but wish to work to change this reputation? By being harsher during the review process than is warranted given the history of the journal? That approach is a game of chicken though…if you think a dump journal is getting too uppity for its current IF then you are going to just move on to some other journal for your data-dumping purposes, are you not? If a publisher or journal staff wanted to make a serious move up the relative rankings, they’d better have a plan and a steely nerve if you ask me.
This brings me around to my fascination with PLoS ONE and subjective notions of its quality and importance. What IS this journal? Is it a dumping grounds for stuff you had rejected elsewhere on “importance” and “impact” grounds and you just want the damn data out there already? That would qualify as a dump journal in my view. Or do you view it as a potential primary venue…because it enjoys an IF in the 4s and that’s well into run-of-the-mill decent for your subfield?
Furthermore, how does this color your interaction with the journal? I know we have a few folks around here who function as Academic Editors. Are you one of those that thinks PLoS ONE should be ever upping its “quality” in an attempt to improve the reputation? Do you fear it becoming a “dump journal”? Or do you embrace that status?
Are you involved with another journal that some might consider a dump journal for your field? Do you think of it this way yourself? Or do see it as a solid journal and it is that other journal, 0.245 IF points down, which is the real dump journal?
January 14, 2013
There are two complaints that I see offered as supposedly objective reasons for old school folks’ easy complaining about how it is not a real journal. First, that they simply publish “too many papers”. It was 23,468 in 2012. This particular complaint always reminds me of the “too many notes” scene, which is to say that it is a sort of meaningless throwaway comment. A person who has a subjective distaste simply makes something up on the spot to cover it over. More importantly, however, it brings up the fact that people are comparing apples to oranges. That is, they are looking at a regular print type of journal (or several of them) and identifying the disconnect. My subfield journals of interest publish maybe something between about 12 and 20 original reports per issue. One or two issues per month. So anything from about 144 to 480 articles per year. A lot lower than PLoS ONE, eh? But look, I follow at least 10 journals that are sort of normal, run-of-the-mill, society-level journals in which stuff that I read, cite and publish myself might appear. So right there we’re up to something on the order of 3,000 articles per year.
PLoS ONE, as you know, covers just about all aspects of science! So multiply my subfield by all the other subfields (I can get to 20 easy without even leaving “biomedical” as the supergroup) with their respective journals and…. all of a sudden the PLoS ONE output doesn’t look so large.
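The back-of-the-envelope arithmetic above can be sketched in a few lines. To be clear, the per-issue counts, the 10 journals and the 20 subfields are the rough, illustrative figures from the text, not real publication data:

```python
# Rough sketch of the article-count arithmetic, using the
# illustrative figures from the post (not real counts).

def articles_per_year(reports_per_issue, issues_per_month):
    """Annual output of one journal at a steady publication rate."""
    return reports_per_issue * issues_per_month * 12

low = articles_per_year(12, 1)   # lean journal: 12 reports, monthly
high = articles_per_year(20, 2)  # busy journal: 20 reports, twice monthly

# One reader following ~10 such journals, taking the midpoint:
one_subfield = 10 * (low + high) // 2

# Scale to ~20 biomedical subfields:
all_subfields = 20 * one_subfield

print(low, high)                  # 144 480
print(one_subfield)               # 3120 -- "on the order of 3,000"
print(all_subfields)              # 62400 -- dwarfs PLoS ONE's 23,468
```

Even with these made-up midpoints, twenty subfields’ worth of normal journals out-publish PLoS ONE several times over, which is the point of the comparison.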
Another way to look at this would be to examine the output of all of the many journals that a big publisher like Elsevier puts out each year. How many articles do they publish? One hell of a lot more than 23,000, I can assure you. (I mean really, don’t they have almost that many journals?) So one answer to the “too many notes” type of complaint might be to ask whether the person also discounts Cell articles for that same reason.
The second theme of objection to PLoS ONE was recently expressed by @egmoss on the Twitts:
An 80% acceptance rate is a bit of a problem.
This tends to overlook the fact that much more ends up published somewhere, eventually, than is reflected in any per-journal acceptance rate. As noted by Conan Kornetsky back in 1975 upon relinquishing the helm of Psychopharmacology:
“There are enough journals currently published that if the scientist perseveres through the various rewriting to meet style differences, he will eventually find a journal that will accept his work”.
Again, I ask you to consider the entire body of journals that are normal for your subfield. What do you think the overall acceptance rate for a given manuscript might be? I’d wager it is competitive with PLoS ONE’s 80% and probably even higher!
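Kornetsky’s persevering author can be put in toy numbers. Suppose, purely for illustration, that each journal in a subfield accepts a given (re-revised) manuscript independently with some fixed probability; the per-journal rate and the independence assumption here are mine, not data:

```python
# Illustrative only: chance a manuscript is eventually published
# somewhere after up to n submissions, if each journal accepts
# independently with probability p. Both assumptions are toy ones.

def eventual_acceptance(p, n):
    """Probability of at least one acceptance in n independent tries."""
    return 1 - (1 - p) ** n

# Even a modest 50% per-journal acceptance rate exceeds PLoS ONE's
# 80% within three submissions:
print(eventual_acceptance(0.5, 3))  # 0.875
```

The point is only that a per-journal acceptance rate understates the field-wide one: resubmission compounds, so the body of normal subfield journals can easily be “competitive with 80%” overall.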
August 14, 2012
As you know, Dear Reader, I have been pondering the role of the open access journal PLoS ONE of late. In particular, pondering whether my subfield of science should use this journal more and, obviously, whether I should use it for any of my various publishing purposes. This pondering includes paying attention to peoples’ experiences with the journal, in both online and real life settings.
On the Twitts today, @Bashir_Course9 indicates that he’s had a little problem in the course of a submission to PLoS ONE.
6 wks after submitting to @Plosone have yet to even be assigned an editor. I guess technically amazing the review process hasn’t started.
You may assume, Dear Reader, that I would not be posting about this if it were the first time I had heard of such a thing.
What I have come to appreciate about the PLoS ONE Academic Editor* system is that it is opt-in. In other real** journals, there is a shorter list of Associate Editors, they have reasonably well defined areas of coverage and the assignment process is more directed. I mean sure, one can always beg off on workload but there are certain expectations.
The upshot of this is that with PLoS ONE submissions there can be a bottleneck, a slowdown in the assignment of a submitted manuscript. Much slower than I’ve experienced at my usual venues.
Six weeks is a ridiculous amount of time for a paper to be bouncing around without assignment to an editor and a decision to send out for review or reject it outright. I don’t know what the problem is with any specific paper. I have heard of at least one case where it is clear that there are some administrative/procedural problems in which nobody on the administrative side so much as notices a paper is languishing in limbo. This latter issue motivates me to advise PLoS ONE submitters to stay in contact with the head office if anything seems funny. Like the status bar reading “editor invited” for more than a week. Send an email.
I do not know what happens when the administrative staff has trouble finding an Academic Editor to take the paper. As I noted before, coverage can be spotty in some subfields of science, e.g., mine. It’s the Field of Dreams/Catch-22 problem being played out. The authors won’t come until they build it (a stable of AEs in each subfield) and AEs won’t volunteer unless it is seen to be a worthwhile effort for their subfield. Since AE assignment is opt-in, you furthermore have to have someone in your subfield who is at least interested in taking the paper for review.
Is the inability to find an AE the PLoS ONE equivalent of a desk reject? Maybe. Is there ever anything that actually gets returned to the authors as rejected because PLoS ONE can’t find an AE to take it? This I don’t know. Perhaps one of my readers knows more.
Since this post is a bit critical, let me end on an up note. Just so long as you stay on top of the journal staff and make sure they are actively trying to find an AE for your manuscript, the addition of a week or three to the process (relative to journals where the assignment is nearly automatic) is no big deal. If we assume the most obvious merits of PLoS ONE are valid (acceptance on quality; no rejection based on importance, impact and other more-subjective reasons) then one has to assume one is saving on a round of getting rejected from one journal and having to resubmit to another. Also a gain in terms of not getting demands for more experiments (again, in design if not 100% in practice). In this context, a few weeks’ delay in AE assignment still leaves you ahead of the game with PLoS ONE.
There is one more benefit of the opt-in system which is that you are going to be slightly more likely to get an AE that has at least some interest in the topic. And you will minimize the chances*** of an AE who is resentful of having to manage the review for a manuscript she finds uninteresting, boring or crappy to begin with. That seems like a pretty good plus to me.
The ultimate takeaway message for me right now is that it is essential to understand this bottleneck at PLoS ONE that doesn’t exist at many other journals. Minimizing the bad effects requires a little more active attention on the part of the submitting author to make sure assignment doesn’t fall into a blind hole.
*roughly the function of an Associate Editor at most journals. These people select and invite reviewers and make the primary decision on publication acceptance. They are peers, and the list of them is publicly available.
**staffed by working scientists volunteering their time (or nominally paid) as editors.
***I may be naively projecting here. I don’t see where I’d want to waste my time managing the review of a manuscript that bored the crap out of me based on the Abstract or Title alone. I guess there may be some people who look forward to putting in that work just to rip a paper apart and eviscerate the authors’ egos. That isn’t me though.
Case in point: michael b eisen, who we know as @mbeisen. He’s HHMI, a UCB prof, of a certain age and publishing stature….basically your science 1%er.
He has no fucking clue about normal people.
still think people mostly use it as excuse; page charges for most nonOA society Js are higher
What is under discussion is the publication fee of some $1,350 required at PLoS ONE.
This came about because I have been idly speculating of late about the Impact Factor of PLoS ONE…it’s about 4.4. This compares favorably with many run-of-the-mill journals (tied to a society or otherwise) that publish huge amounts of general neuroscience stuff. Take an initial modifier [American, European, Canuckian, International….etc], add “Journal of”, insert [Neuroscience, Pharmacology, Toxicology, Drug, Alcohol, Neurophysiology, Behavior, Cognition….blahdeblah] and you’ll get the corpus. Add some variants such as “Neuroscience” or “Psychopharmacology” or “Neuropharmacology” or …. You get the point. Published by the usual suspects: Springer, Wiley-Blackwell, Elsevier.
Most of these come in with IFs under 4.4…or at least as close as make no practical difference.
They also publish a LOT of the papers in the fields that I follow and participate in.
I happen to think this is where the real science exists. If you’ve ever cited a paper in one of these journals…..yeah.
I also protest, when people are talking about the level of peer review at the Glamour Mags and attempting to sidestep the outsized retraction rate at those journals (hi PP!), that oftentimes the review is harshest at these journals. The reviews are by more directly focused experts and the scope of the paper is lesser. So the review comments can be brutal.
They can also, at times, be pretty demanding. I, myself, have in recent memory been asked for essentially an Aim’s worth of data to be added to an already not-insubstantial manuscript at one of these sub-PONE-IF journals. AYFK? If I added that, I’d be submitting UPWARD, you dumbasses!!!
As you know, PLoS ONE promises to accept manuscripts that are SOUND. Not on the basis of all the extra stuff some reviewer “would like to see”. Not satisfying the nutty subjective “disappointment” of the reviewer that you didn’t do the study he would (in theory) have conducted. Most emphatically not on the prediction of “impact” and “influence”. Supposedly, not on the basis of even having a positive finding!
So with a higher IF and this promise….I’m all of a sudden having a hard time figuring out why people aren’t just putting all their stuff in PLoS ONE? What is keeping them back?
It appears to me from doing some harder thinking about what is IN this journal that subfields are either in or out. There are some cultural forces going on here which I touched on previously. People want to make assumptions that they are going to get “their” editors and “their” reviewers….not just whatever random fringe OpenAccess Wackaloon who signed on to the PLoS ONE train sort-of/kinda overlaps with their work.
The other huge problem is the cost. $1,350 to be exact. There’s a waiver….but it isn’t really clear how likely one is to GET that fee waived. They don’t make any promises before you submit the paper. And that’s where it counts! Why go through the hassle of review just to find out several weeks later that you have to pull it for the $$? Might as well not even try.
Part of the problem here is the 1%ers like mbeisen and @namnezia think “society journal” means: PNAS is $70/page, JNsci is about $950 total.
yeah, SOME journals that technically qualify as “society” journals have page charges or publication fees. But the ones I’m talking about, for the most part, do not. Not. ONE. dime. Not a $75 “submission fee”. Not a page charge.
They are FREE from start to finish.
JNeuro and PNAS are not normal, run of the mill society journals. This is not what we are discussing. It strikes me that this frame of reference is why mbeisen can’t grasp the problem I’m trying to explore. It makes me fear that PLoS ONE is falling short of what it could be because it was founded by Science 1%ers who are clueless and out of touch.
It’s like I’m blogging in the wind here.