Commenter mikka wants to know why:

I don’t get this “professional editors are not scientists” trope. All the professional editors I know were bench scientists at the start of their career. They read, write, look at and interpret data, talk to bench scientists and keep abreast of their fields. In a nutshell, they do what PIs do, except writing grants and deciding what projects must be pursued. The input some editors put in some of my papers would merit a middle authorship. They are scientists all right, and some of them very good ones.

Look, yes you are right that they are scientists. In a certain way. And yes, I regret the way that my opinion that they are 1) very different from Editors and Associate Editors who are primarily research scientists and 2) ruining science tends to be taken as a personal attack on their individual qualities and competence.

But there is simply no way around it.

The typical professional editor, typically at a Glamour(ish) Mag publication, is under-experienced in science compared with a real Editor.

Regardless of circumstances, if they have gone to the Editorial staff from a postdoc, without experience in the Principal Investigator chair, then they have certain limitations.

It is particularly bad that ass kissing from PIs who are desperate to get their papers accepted tends to persuade these people over time that they are just as important as those PIs.

“Input” merits middle authorship, eh? Sure, anyone with half a brain can suggest a few more experiments. And if you have the despotic power of a Nature editor’s keyboard behind you, sure…they damn well will do it. And ask for more. And tell you how uniquely brilliant a suggestion it all was.

And because it ends up published in a Glamour Mag, all the sheep will bleat approvingly about what a great paper it is.

Pfaagh.

Professional editors are ruining science.

They have no loyalty to the science*. Their job is to work to aggrandize their own magazine’s brand at the cost of the competition. It behooves them to insist that six papers worth of work gets buried in “Supplemental Methods” because no competing and lesser journal will get those data. It behooves them to structure the system in a way that authors will consider a whole bunch of other interesting data “unpublishable” because it got scooped by two weeks.

They have no understanding or consideration of the realities of scientific careers*. It is of no concern to them whether scientific production should be steady, whether uninteresting findings can later be of significance, nor whether any particular subfield really needs this particular kick in the pants. It is no concern to them that their half-baked suggestion requires a whole R01 scale project and two years of experiments. They do not have to consider any reality whatsoever. I find that real, working scientist Editors are much more reasonable about these issues.

Noob professional editors are star-struck and never, ever are able to see that the Emperor is, in fact, stark naked. Sorry, but it takes some experience and block circling time to mature your understanding of how science really works. Of what is really important over the long haul. Notice how the PLoSFail fans (to pick one recent issue) are heavily dominated by the wet-behind-the-ears types and the critics seem to mostly be established faculty? This is no coincidence.

Again, this is not about the personal qualities of the professional editors. The structure of their jobs, and typical career arc, makes it impossible for them to behave differently.

This is why it is the entire job category of professional editor that is the problem.

If you require authoritah, note that Nobel laureate Sydney Brenner said something similar.

It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists.

He was clearly not talking about peer review itself, but rather the professional Glamour Mag type editor.

_
*as well they should not. It is a structural feature of the job category. They are not personally culpable, the institutional limitations are responsible.


The latest round of waccaloonery is the new PLoS policy on Data Access.

I’m also dismayed by two other things of which I’ve heard credible accounts in recent months. First, the head office has started to question authors over their animal use assurance statements, declining to accept the statement of local IACUC oversight as valid because of the research methods and outcomes. On the face of it, robust concern about animal use isn’t a terrible thing. However, in the case I am familiar with, they got it embarrassingly wrong. Wrong because any slight familiarity with the published literature would show that the “concern” was misplaced. Wrong because if they are going to try to sidestep the local IACUC, AAALAC and OLAW (and their worldwide equivalents) processes then they are headed down a serious rabbithole of expensive investigation and verification. At the moment this cannot help but be biased, and accusations are going to rain down on investigators from non-English-speaking and non-Western countries, I can assure you.

The second incident has to do with accusations of self-plagiarism based on the sorts of default Methods statements or Introduction and/or Discussion points that get repeated. Look, there are only so many ways to say “and thus we prove a new facet of how the PhysioWhimple nucleus controls Bunny Hopping”. Only so many ways to say “The reason BunnyHopping is important is because…”. Only so many ways to say “We used optogenetic techniques to activate the gertzin neurons in the PhysioWhimple nucleus by….”. This one is particularly salient because it works against the current buzz about replication and reproducibility in science. Right? What is a “replication” if not plagiarism? And in this case, not just of the way the Methods are described, but of the reason for doing the study and the interpretation. No, in this case it is plagiarism of the important part. The science. This is why concepts of what is “plagiarism” in science cannot be aligned with concepts of plagiarism in a bit of humanities text.

These two issues highlight, once again, why it is TERRIBLE for us scientists to let the humanities-trained and humanities-blinkered wordsmiths running journals dictate how publication is supposed to work.

Data depository obsession gets us a little closer to home because the psychotics are the Open Access Eleventy waccaloons who, presumably, started out as nice, normal, reasonable scientists.

Unfortunately PLoS has decided to listen to the wild-eyed fanatics and to play in their fantasy realm of paranoid ravings.

This is a shame and will further isolate PLoS’ reputation. It will short circuit the gradual progress they have made in persuading regular, non-waccaloon science folks of the PLoS ONE mission. It will seriously cut down submissions…which is probably a good thing since PLoS ONE continues to suffer from growing pains.

But I think it a horrible loss that their current theological orthodoxy is going to blunt the central good of PLoS ONE, i.e., the assertion that predicting “impact” and “importance” before a manuscript is published is a fool’s errand and inconsistent with the best advance of science.

The first problem with this new policy is that it suggests that everyone should radically change the way they do science, at great cost of personnel time, to address the legitimate sins of the few. The scope of the problem hasn’t even been proven to be significant and we are ALL supposed to devote a lot more of our precious personnel time to data curation. Need I mention that research funds are tight and that personnel time is the most significant cost?

This brings us to the second problem. This Data Access policy requires much additional data curation which will take time. We all handle data in the way that has proved most effective for us in our operations. Other labs have, no doubt, done the same. Our solutions are not the same as people doing very closely the same work. Why? Because the PI thinks differently. The postdocs and techs have different skill sets. Maybe we are interested in sub-analysis of a data set that nobody else worries about. Maybe the proprietary software we use differs and the smoothest way to manipulate data is different. We use different statistical and graphing programs. Software versions change. Some people’s datasets are so large as to challenge the capability of regular old desktop computer and storage hardware. Etc, etc, etc ad nauseam.

Third problem- This diversity in data handling results, inevitably, in attempts to impose data orthodoxy. So we burn a lot of time and effort fighting over that. Who wins? Do we force other labs to look at the damn cumulative records for drug self-administration sessions because some old school behaviorists still exist in our field? Do we insist on individual subjects’ presentations for everything? How do we time-bin a behavioral session? Are the standards for dropping subjects the same in every possible experiment? (Answer: no.) Who annotates the files so that any idiot humanities-major on the editorial staff of PLoS can understand that it is complete?

Fourth problem- I grasp that actual fraud and misleading presentation of data happens. But I also recognize, as the waccaloons do not, that there is a LOT of legitimate difference of opinion on data handling, even within a very old and well established methodological tradition. I also see a lot of will on the part of science denialists to pretend that science is something it cannot be in their nitpicking of the data. There will be efforts to say that the way lab X deals with their, e.g., fear conditioning trials, is not acceptable and they MUST do it the way lab Y does it. Keep in mind that this is never going to be single labs but rather clusters of lab methods traditions. So we’ll have PLoS inserting itself in the role of how experiments are to be conducted and interpreted! That’s fine for post-publication review but to use that as a gatekeeper before publication? Really PLoS ONE? Do you see how this is exactly like preventing publication because two of your three reviewers argue that it is not impactful enough?

This is the reality. Pushes for Data Access will inevitably, in real practice, result in constraints on the very diversity of science that makes it so productive. It will burn a lot of time and effort that could be more profitably applied to conducting and publishing more studies. It addresses a problem that is not clearly established as significant.

A reader pointed me to this News Focus in Science which referred to Danthi et al, 2014.

Danthi N, Wu CO, Shi P, Lauer M. Percentile ranking and citation impact of a large cohort of National Heart, Lung, and Blood Institute-funded cardiovascular R01 grants. Circ Res. 2014 Feb 14;114(4):600-6. doi: 10.1161/CIRCRESAHA.114.302656. Epub 2014 Jan 9.

[PubMed, Publisher]

I think Figure 2 makes the point, even without knowing much about the particulars
[Figure 2 from Danthi et al., 2014]

and the last part of the Abstract makes it clear.

We found no association between percentile rankings and citation metrics; the absence of association persisted even after accounting for calendar time, grant duration, number of grants acknowledged per paper, number of authors per paper, early investigator status, human versus nonhuman focus, and institutional funding. An exploratory machine learning analysis suggested that grants with the best percentile rankings did yield more maximally cited papers.
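To make concrete what “no association between percentile rankings and citation metrics” means, here is a minimal sketch of the kind of rank-correlation check one could run. This is illustrative only: the data are synthetic (generated independently, mimicking the null result), the `spearman_rho` helper is my own, and the actual paper used considerably more elaborate models.

```python
# Illustrative sketch only: synthetic data, NOT the Danthi et al. dataset.
# Asks: does a grant's percentile rank predict citations to its papers?
import math
import random

def spearman_rho(xs, ys):
    """Spearman rank correlation (no tie correction; fine for illustration)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n + 1) / 2.0
    num = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mean) ** 2 for a in rx) *
                    sum((b - mean) ** 2 for b in ry))
    return num / den

random.seed(1)
# Hypothetical funded cohort: percentile ranks (lower = better) and citation
# counts drawn independently, so the true association is zero by construction.
percentiles = [random.uniform(0, 20) for _ in range(500)]
citations = [random.lognormvariate(3, 1) for _ in range(500)]

rho = spearman_rho(percentiles, citations)
print(round(rho, 3))  # near zero, since the two series are independent
```

A flat scatter like Figure 2 corresponds to a rho near zero; the point of the paper is that the real funded-grant data look much like this null case.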

The only thing surprising in all of this was a quote attributed to the senior author Michael Lauer in the News Focus piece.

“Peer review should be able to tell us what research projects will have the biggest impacts,” Lauer contends. “In fact, we explicitly tell scientists it’s one of the main criteria for review. But what we found is quite remarkable. Peer review is not predicting outcomes at all. And that’s quite disconcerting.”

Lauer is head of the Division of Cardiovascular Research at the NHLBI and has been there since 2007. Long enough to know what time it is. More than long enough.

The take home message is exceptionally clear. It is a message that most scientists who have stopped to think about it for half a second have already arrived at.


Science is unpredictable.

Addendum: I should probably point out for those readers who are not familiar with the whole NIH Grant system that the major unknown here is the fate of unfunded projects. It could very well be the case that the ones that manage to win funding do not differ much but the ones that are kept from funding would have failed miserably, had they been funded. Obviously we can’t know this until the NIH decides to do a study in which it randomly picks up grants across the entire distribution of priority scores. If I were a betting man I’d have to lay even odds on the upper and lower halves of the score distribution 1) not differing vs 2) the upper half doing better in terms of paper metrics. I really don’t have a firm prediction; I could see it either way.

or so asketh Mike Eisen:

There’s really no excuse for this. The people in charge of the rover project clearly know that the public are intensely interested in everything they do and find. So I find it completely unfathomable that they would forgo this opportunity to connect the public directly to their science. Shame on NASA.

This whole situation is even more absurd, because US copyright law explicitly says that all works of the federal government – of which these surely must be included – are not subject to copyright. So, in the interests of helping NASA and Science Magazine comply with US law, I am making copies of these papers freely available here:

FORWARD THE REVOLUTION, COMRADE!!!!!!!

Go Read, and download the papers.

h/t: bill

There should be a rule that you can’t write a review unless you’ve published at least three original research papers in that topic/area of focus.

Also a rule that your total number of review articles cannot surpass your original research articles.

Thought of the Day

September 10, 2013

There seems to be a sub population of people who like to do research on the practice of research. Bjoern Brembs had a recent post on a paper showing that the slowdown in publication associated with having to resubmit to another journal after rejection cost a paper citations.

Citations of a specific paper are generally thought of as a decent measure of impact, particularly if you can relate it to a subfield size.

Citations to a paper come in various qualities, however, ranging from totally incorrect (the paper has no conceivable connection to the point for which it is cited) to the motivational (paper has a highly significant role in the entire purpose of the citing work).

I speculate that a large bulk of citations are to one, or perhaps two, sub experiments. Essentially a per-Figure citation.

If this is the case, then citations roughly scale with how big and diverse the offerings in a given paper are.

On the other side, fans of “complete story” arguments for high impact journal acceptances are suggesting that the bulk of citations are to this “story” rather than for the individual experiments.

I’d like to see some analysis of the type of citations won by papers. All the way across the foodchain, from dump journals to CNS.

As we all know, much of the evaluation of scientists for various important career purposes involves the record of published work.

More is better.

We also know that, at any given point in time, one might have work that will eventually be published that is not, quiiiiiite, actually published. And one would like to gain credit for such work.

This is most important when you have relatively few papers of “X” quality and this next bit of work will satisfy the “X” demand.

This can mean first-author papers, papers from a given training stint (like a 3-5 yr postdoc) or the first paper(s) from a new Asst Professor’s lab. It may mean papers associated with a particular grant award or papers conducted in collaboration with a specific set of co-authors. It could mean the first paper(s) associated with a new research direction for the author.

Consequently, we wish to list items that are not-yet-papers in a way that implies they are inevitably going to be real papers. Published papers.

The problem is that of vaporware. Listing paper titles and authors with an indication that it is “in preparation” is the easiest thing in the world. I must have a half-dozen (10?) projects at various stages of completion that are in preparation for publication. Not all of these are going to be published papers and so it would be wrong for me to pretend that they were.

Hardliners, and the NIH biosketch rules, insist that published is published and all other manuscripts do not exist.

In this case, “published” generally means crossing the threshold of receiving the decision letter from the journal Editor that the paper is accepted for publication. At that point the manuscript may be listed as “in press“. Yes, this is a holdover term from the old days. Some people, and institutions requiring you to submit a CV, insist that this is the minimum threshold.

But there are other situations in which there are no rules and you can get away with whatever you like.

I’d suggest two rules of thumb. Try to follow the community standards for whatever the purpose and avoid looking like a big steaming hosepipe of vapor.

“In preparation” is the slipperiest of terms and is to be generally avoided. I’d say if you are anything beyond the very newest of authors with very few publications then skip this term as much as possible.

I’d suggest that “in submission” and “under review” are fine and it looks really good if that is backed up with the journal’s ID number that it assigned to your submission.

Obviously, I suggest this for manuscripts that actually have been submitted somewhere and/or are out for review.

It is a really bad idea to lie. A bad idea to make up endless manuscripts in preparation, unless you have a draft of a manuscript, with figures, that you can show on demand.

Where it gets tricky is what you do after a manuscript comes back from the journal with a decision.

What if it has been rejected? Then it is right back to the in preparation category, right? But on the other hand, whatever perception of it being a real manuscript is conferred by “in submission” is still true. A manuscript good enough that you would submit it for consideration. Right? So personally I wouldn’t get too fussed if it is still described as in submission, particularly if you know you are going to send it right back out essentially as-is. If it’s been hammered so hard in review that you need to do a lot more work, then perhaps you’d better stick it back in the in preparation stack.

What if it comes back from a journal with an invitation to revise and resubmit it? Well, I think it is totally kosher to describe it as under review, even if it is currently on your desk. This is part of the review process, right?

Next we come to a slightly less kosher thing which I see pretty frequently in the context of grant and fellowship review. Occasionally from postdoctoral applicants. It is when the manuscript is listed as “accepted, pending (minor) revision“.

Oh, I do not like this Sam I Am.

The paper is not accepted for publication until it is accepted. Period. I am not familiar with any journals which have accepted pending revision as a formal decision category and even if such exist that little word pending makes my eyebrow raise. I’d rather just see “Interim decision: minor revisions” but for some reason I never see this phrasing. Weird. It would be even better to just list it as under review.

Final note is that the acceptability of listing less-than-published stuff on your CV or biosketch or Progress Report varies with your career tenure, in my view. In a fellowship application where the poor postdoc has only one middle author pub from grad school and the two first author works are just being submitted…well I have some sympathy. A senior type with several pages of PubMed results? Hmmmm, what are you trying to pull here? As I said above, maybe if there is a clear reason to have to fluff the record. Maybe it is only the third paper from a 5 yr grant and you really need to know about this to review their continuation proposal. I can see that. I have sympathies. But a list of 8 manuscripts from disparate projects in the lab that are all in preparation? Boooo-gus.