The latest round of waccaloonery is the new PLoS policy on Data Access.

I’m also dismayed by two other things of which I’ve heard credible accounts in recent months. First, the head office has started to question authors over their animal use assurance statements, declining to take the statement of local IACUC oversight as valid because of the research methods and outcomes. On the face of it, robust concern about animal use is not a terrible thing. However, in the case I am familiar with, they got it embarrassingly wrong. Wrong because any slight familiarity with the published literature would show that the “concern” was misplaced. Wrong because if they are going to try to sidestep the local IACUC, AAALAC and OLAW (and their worldwide equivalents) processes then they are headed down a serious rabbithole of expensive investigation and verification. At the moment this cannot help but be biased, and accusations are going to rain down on investigators from non-English-speaking and non-Western countries, I can assure you.

The second incident has to do with accusations of self-plagiarism based on the sorts of default Methods statements or Introduction and/or Discussion points that get repeated. Look, there are only so many ways to say “and thus we prove a new facet of how the PhysioWhimple nucleus controls Bunny Hopping”. Only so many ways to say “The reason Bunny Hopping is important is because…”. Only so many ways to say “We used optogenetic techniques to activate the gertzin neurons in the PhysioWhimple nucleus by….”. This one is particularly salient because it works against the current buzz about replication and reproducibility in science. Right? What is a “replication” if not plagiarism? And in this case it is not just plagiarism of the way the Methods are described, the reason for doing the study or the interpretation. No, in this case it is plagiarism of the important part: the science. This is why concepts of what is “plagiarism” in science cannot be aligned with concepts of plagiarism in a bit of humanities text.

These two issues highlight, once again, why it is TERRIBLE for us scientists to let the humanities-trained and humanities-blinkered wordsmiths running journals dictate how publication is supposed to work.

Data repository obsession gets us a little closer to home because the psychotics are the Open Access Eleventy waccaloons who, presumably, started out as nice, normal, reasonable scientists.

Unfortunately PLoS has decided to listen to the wild-eyed fanatics and to play in their fantasy realm of paranoid ravings.

This is a shame and will further isolate PLoS and harm its reputation. It will short-circuit the gradual progress they have made in persuading regular, non-waccaloon science folks of the PLoS ONE mission. It will seriously cut down submissions…which is probably a good thing since PLoS ONE continues to suffer from growing pains.

But I think it a horrible loss that their current theological orthodoxy is going to blunt the central good of PLoS ONE, i.e., the assertion that predicting “impact” and “importance” before a manuscript is published is a fool’s errand and inconsistent with the best advance of science.

The first problem with this new policy is that it suggests that everyone should radically change the way they do science, at great cost in personnel time, to address the legitimate sins of the few. The scope of the problem hasn’t even been shown to be significant, yet we are ALL supposed to devote a lot more of our precious personnel time to data curation. Need I mention that research funds are tight and that personnel time is the most significant cost?

This brings us to the second problem. This Data Access policy requires much additional data curation, which will take time. We handle data in the way that has proved most effective for us in our operations. Other labs have, no doubt, done the same. Our solutions are not the same as those of people doing very nearly the same work. Why? Because the PI thinks differently. The postdocs and techs have different skill sets. Maybe we are interested in a sub-analysis of a data set that nobody else worries about. Maybe the proprietary software we use differs and the smoothest way to manipulate data is different. We use different statistical and graphing programs. Software versions change. Some people’s datasets are so large as to challenge the capability of regular-old desktop computer and storage hardware. Etc, etc, etc ad nauseam.

Third problem- This diversity in data handling results, inevitably, in attempts at data orthodoxy. So we burn a lot of time and effort fighting over that. Who wins? Do we force other labs to look at the damn cumulative records for drug self-administration sessions because some old-school behaviorists still exist in our field? Do we insist on individual subjects’ presentations for everything? How do we time-bin a behavioral session? Are the standards for dropping subjects the same in every possible experiment? (answer: no) Who annotates the files so that any idiot humanities-major on the editorial staff of PLoS can understand that it is complete?

Fourth problem- I grasp that actual fraud and misleading presentation of data happens. But I also recognize, as the waccaloons do not, that there is a LOT of legitimate difference of opinion on data handling, even within a very old and well established methodological tradition. I also see a lot of will on the part of science denialists to pretend that science is something it cannot be in their nitpicking of the data. There will be efforts to say that the way lab X deals with, e.g., their fear conditioning trials is not acceptable and they MUST do it the way lab Y does it. Keep in mind that this is never going to be single labs but rather clusters of lab methods traditions. So we’ll have PLoS inserting itself into the role of arbiter of how experiments are to be conducted and interpreted! That’s fine for post-publication review but to use that as a gatekeeper before publication? Really, PLoS ONE? Do you see how this is exactly like preventing publication because two of your three reviewers argue that it is not impactful enough?

This is the reality. Pushes for Data Access will inevitably, in real practice, result in constraints on the very diversity of science that makes it so productive. It will burn a lot of time and effort that could be more profitably applied to conducting and publishing more studies. It addresses a problem that is not clearly established as significant.


An email from the current president of the Society for Neuroscience announced the intent of the society to launch a new Open Access journal. They are seeking an Editor-in-Chief, so if you know any likely candidates, nominate them.

The Society for Neuroscience Council has appointed a Search Committee to recommend candidates to serve as editors-in-chief for two Society-published journals:

The Editor-in-Chief of The Journal of Neuroscience, to be appointed for a 5-year term beginning Jan. 1, 2015, after a period of transition with the current editor; and
The first Editor-in-Chief of a new online, open access neuroscience journal, expected to launch in late 2014, and temporarily referred to herein as “New Journal.” Please see the announcement here for more information about New Journal. This 5-year appointment will commence in the spring of 2014, to allow the new editor to be involved in decisions connected with the start-up of New Journal and the organizing of an initial editorial board.

The members of the Search Committee are: Moses Chao, Chair; Holly Cline; Barry Everitt; David Fitzpatrick; and Eve Marder.

The list of evaluation criteria may help you to think about who you should nominate.

In evaluating candidates for the editor-in-chief positions, the Search Committee will consider the following criteria:

  • previous editorial experience

  • adequate time flexibility to take on the responsibilities of editor-in-chief

  • a distinguished record of research in neuroscience

  • familiarity with online submission, peer review and manuscript tracking systems

  • ideas about novel approaches and receptivity to innovation during a time of great change in the scientific publishing field

  • service to and leadership in the neuroscience community (e.g., SfN committees)

  • evidence of good management skills and the ability to lead colleagues on an editorial board

  • for New Journal: the capacity to proactively engage on a start-up venture, and to innovate and lead in the creation of a high quality open access neuroscience journal, and guide it on a path to success

  • for The Journal of Neuroscience: the capacity to build on an established record of success, while continuing to evolve a leading journal in the field and take it to the next level

Interesting next step for the SfN. Obviously it reflects some thinking that they may be left behind (even further; see the diminishing reputation after the launch of Nature Neuroscience and Neuron) in the glorious New World Order of Open Access publication. It might just be a recognition that Open Access fees for a new journal, when all the infrastructure is already there, are going to be a cash cow for the Society from the beginning.

What I will be fascinated to see is where they pitch the New Journal* in terms of impact. Are they just trying to match JNeuro? Will they deliberately go a little lower down the feeding chain to avoid undercutting the flagship journal?

__
*my suggestion of Penfield must have been too esoteric a reference…..

PubMed has finally incorporated a comment feature: PubMed Commons.

NCBI has released a pilot version of a new service in PubMed that allows researchers to post comments on individual PubMed abstracts. Called PubMed Commons, this service is an initiative of the NIH leadership in response to repeated requests by the scientific community for such a forum to be part of PubMed. We hope that PubMed Commons will leverage the social power of the internet to encourage constructive criticism and high quality discussions of scientific issues that will both enhance understanding and provide new avenues of collaboration within the community.

This is described as being in beta test version and for now is only open to authors of articles already listed in PubMed, so far as I can tell.

Perhaps not as Open as some would wish but it is a pretty good start.

I cannot WAIT to see how this shakes out.

The Open-Everything, RetractionWatch, ReplicationEleventy, PeerReviewFailz, etc acolytes of various strains would have us believe that this is the way to save all of science.

This step by PubMed brings online commenting to the best place, i.e., where everyone searches out the papers, instead of the commercially beneficial place. It will link, I presume, the commentary to the openly-available PMC version once the 12 month embargo elapses for each paper. All in all, a good place for this to occur.

I will be eager to see if there is any adoption of commenting, to see the type of comments that are offered and to assess whether certain kinds of papers get more commentary than do others. All in all, this is going to be a neat little experiment for the conduct-of-science geeks to observe.

I recommend you sign up as soon as possible. I’m sure the devout and TrueBelievers would beg you to make a comment on a paper yourself so, sure, go and comment on some paper.

You can search out commented papers with this string, apparently.
has_user_comments[sb]

In case you are interested in seeing what sorts of comments are being made.
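For the script-inclined, here is a minimal sketch of my own (nothing official from NCBI’s announcement) showing how one could pull the PMIDs of commented papers programmatically through the standard E-utilities esearch endpoint, using that same has_user_comments[sb] filter:

    # Illustrative only: query PubMed's esearch E-utility for records carrying
    # the has_user_comments[sb] subset filter and print their PMIDs.
    import json, urllib.parse, urllib.request

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": "has_user_comments[sb]",  # the subset filter mentioned above
        "retmode": "json",
        "retmax": 20,  # first 20 PMIDs; page through the rest with retstart
    })

    with urllib.request.urlopen(ESEARCH + "?" + params) as resp:
        result = json.load(resp)["esearchresult"]

    print("commented papers indexed:", result["count"])
    print("first PMIDs:", result["idlist"])

Whether that filter keeps returning hits obviously depends on NCBI maintaining the Commons subset, so treat this as a quick probe rather than a stable interface.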

@mbeisen is on fire on the Twitts:

@ianholmes @eperlste @dgmacarthur @caseybergman and i’m not going to stop calling things as they are to avoid hurting people’s feelings

Why? Open Access to scientific research, naturally. What else? There were a couple of early assertions that struck me as funny, including

@eperlste @ianholmes @dgmacarthur @caseybergman i think the “i should have to right to choose where to publish” argument is bullshit

and

@eperlste @ianholmes @dgmacarthur @caseybergman funding agencies can set rules for where you can publish if you take their money

This was by way of answering a Twitt from @ianholmes that set him off, I surmise:

@eperlste @dgmacarthur how I decide where to pub is kinda irrelevant. The point is, every scientist MUST have the freedom to decide for self

This whole thing is getting ridiculous. I don’t have the unfettered freedom to decide where to publish my stuff, and that is most certainly an outcome of my funding agency, in my case the NIH.

Here are the truths that we hold to be self-evident at the present time. The more respected the journal in which we publish our work, the better the funding agency “likes” it. This encompasses the whole process, from initial peer review of the grant applications, to selection for funding (sometimes via exception pay), to the ongoing review by program officers. It extends not just to the present award, but to any future awards I might be seeking to land.

Where I publish matters to them. They make it emphatically clear in ever-so-many-ways that the more prestigious the journal (which generally means higher IF, but not exclusively this), the better my chances of being continuously funded.

So I agree with @mbeisen about the “I have the right to choose where I publish is bullshit” part, but it is for a very different reason than seems to be motivating his attitude. The NIH already influences where I “choose” to publish my work. As we’ve just seen in a prior discussion, PLoS ONE is not very high on the prestige ladder with peer reviewers…and therefore not very high with the NIH.

So quite obviously, my funder is telling me not to publish in that particular OA venue. They’d much prefer something of a lower IF that is better respected in the field, say, the journals that have longer track records, happen to sit at the top of the ISI “substance abuse” category or are associated with the more important academic societies. Or perhaps even the slightly more competitive rank of journals associated with academic societies of broader “brain” interest.

Even before we get to the Glamour level….the NIH funding system cares where I publish.

Therefore I am not entirely “free” to choose where I want to publish and it is not some sort of moral failing that I haven’t jumped on the exclusive OA bandwagon.

@ianholmes @eperlste @dgmacarthur @caseybergman bullshit – there’s no debate – there’s people being selfish and people doing the right thing

uh-huh. I’m “selfish” because I want to keep my lab funded in this current skin-of-the-teeth funding environment? Sure. The old one-percenter-of-science monster rears its increasingly ugly head on this one.

@ianholmes @eperlste @dgmacarthur @caseybergman and we have every right to shame people for failing to live up to ideals of field

What an ass. Sure, you have the right to shame people if you want. And we have the right to point out that you are being an asshole from your stance of incredible science privilege as a science one-percenter. Lecturing anyone who is not tenured, doesn’t enjoy HHMI funding, isn’t comfortably ensconced in a hard money position, isn’t in a highly prestigious University or Institute, or may not even have achieved her first professorial appointment yet about “selfishness” is being a colossal dickweed.

Well, you know how I feel about dickweeds.

I do like @mbeisen and I do think he is on the side of angels here*. I agree that all of us need to be challenged and I find his comments to be this, not an unbearable insult. Would it hurt to dip one toe in the PLoS ONE waters? Maybe we can try that out without it hurting us too badly. Can we preach his gospel? Sure, no problem. Can we ourselves speak of PLoS ONE papers on the CVs and Biosketches of the applications we are reviewing without being unjustifiably dismissive of how many notes Amadeus has included? No problem.

So let us try to get past his rhetoric and position of privilege, and stop with the tone trolling. Let’s just use his frothing about OA to examine our own situations and see where we can help the cause without it putting our labs out of business.

__
*ETA: meaning Open Access, not his attacks on Twitter

For some reason the response on Twittah to the JSTOR downloader guy killing himself has been a round of open access bragging. People are all proud of themselves for posting all of their accepted manuscripts on their websites, thereby achieving personal open access.

But here is my question…. How many of you are barraged by requests for reprints? That’s the way open access on the personal level has always worked. I use it myself to request things I can’t get through the journal’s site. The response from the corresponding author is always prompt.

Seems to me that the only reason to post the manuscripts is when you are fielding an inordinate number of reprint requests and simply cannot keep up. Say…more than one per week?

So are you? Are you getting this many requests?

Remember when Nature offered us a completely objective and unbiased review of PLoS?

Public Library of Science (PLoS), the poster child of the open-access publishing movement, is following an haute couture model of science publishing — relying on bulk, cheap publishing of lower quality papers to subsidize its handful of high-quality flagship journals.

drdrA alerts us to the fact that Nature Publishing Group seems to have changed their minds about dirty, gutter, bulk publication of lower quality papers.

Nature Scientific Reports

Commentary from Martin Fenner over at PLoS blogs and from Bjorn Brembs.
This is why NPG cracks me up. Totally unembarrassed to say whatever, whenever, no matter how inconsistent with their supposed other goals (see goals for robust online discussion of published papers) or with their prior statements or with their other actions (see hand-wringing about Impact Factors). Just like a good business should, I suppose.

In case anyone missed this, The Brain Observatory at UCSD is slicing perhaps the best-known brain in cognitive neuroscience: that of Henry Molaison, aka “HM”.
http://thebrainobservatory.ucsd.edu/hm_live.php
DAY 2 UPDATE: They are slicing again after quite a bit of time spent getting a new microtome blade going. You can follow Twitter commentary on the #HM hashtag (even if you don’t have a Twitt account).