Professional editors are bad for science
February 28, 2014
Commenter mikka wants to know why:
I don’t get this “professional editors are not scientists” trope. All the professional editors I know were bench scientists at the start of their career. They read, write, look at and interpret data, talk to bench scientists and keep abreast of their fields. In a nutshell, they do what PIs do, except writing grants and deciding what projects must be pursued. The input some editors put in some of my papers would merit a middle authorship. They are scientists all right, and some of them very good ones.
Look, yes, you are right that they are scientists. In a certain way. And yes, I regret that my opinion that they are 1) very different from Editors and Associate Editors who are primarily research scientists and 2) ruining science tends to be taken as a personal attack on their individual qualities and competence.
But there is simply no way around it.
The typical professional editor, usually at a Glamour(ish) Mag publication, is under-experienced in science compared with a real Editor.
Regardless of circumstances, if they have gone to the Editorial staff straight from a postdoc, without experience in the Principal Investigator chair, then they have certain limitations.
It is particularly bad that ass kissing from PIs who are desperate to get their papers accepted tends to persuade these people over time that they are just as important as those PIs.
“Input” merits middle authorship, eh? Sure, anyone with half a brain can suggest a few more experiments. And if you have the despotic power of a Nature editor’s keyboard behind you, sure…they damn well will do it. And ask for more. And tell you how uniquely brilliant a suggestion it all was.
And because it ends up published in a Glamour Mag, all the sheep will bleat approvingly about what a great paper it is.
Pfaagh.
Professional editors are ruining science.
They have no loyalty to the science*. Their job is to work to aggrandize their own magazine’s brand at the cost of the competition. It behooves them to insist that six papers’ worth of work gets buried in “Supplemental Methods” because no competing and lesser journal will get those data. It behooves them to structure the system in such a way that authors will consider a whole bunch of other interesting data “unpublishable” because it got scooped by two weeks.
They have no understanding or consideration of the realities of scientific careers*. It is of no concern to them whether scientific production should be steady, whether uninteresting findings can later be of significance, or whether any particular subfield really needs this particular kick in the pants. It is no concern to them that their half-baked suggestion requires a whole R01-scale project and two years of experiments. They do not have to consider any reality whatsoever. I find that real, working-scientist Editors are much more reasonable about these issues.
Noob professional editors are star-struck and are never, ever able to see that the Emperor is, in fact, stark naked. Sorry, but it takes some experience and block-circling time to mature your understanding of how science really works. Of what is really important over the long haul. Notice how the PLoSFail fans (to pick one recent issue) are heavily dominated by the wet-behind-the-ears types while the critics seem to be mostly established faculty? This is no coincidence.
Again, this is not about the personal qualities of the professional editors. The structure of their jobs, and typical career arc, makes it impossible for them to behave differently.
This is why it is the entire job category of professional editor that is the problem.
If you require authoritah, note that Nobel laureate Sydney Brenner said something similar.
It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists.
He was clearly not talking about peer review itself, but rather the professional Glamour Mag type editor.
_
*as well they should not. It is a structural feature of the job category. They are not personally culpable; the institutional limitations are responsible.
Accepting manuscript reviews by Journal
February 28, 2014
Do you decide whether to accept a manuscript for review based on the Journal that is asking?
To what extent does this influence your decision to take a review assignment?
Why?
Congress is losing it.
February 27, 2014
Just after we noticed that Congress has seen fit to add a special prohibition on anything done with Federal grant funds that might suggest gun control is in order, there’s another late-breaking Congressional mandate notice.
FY 2014 New Legislative Mandate
Restriction of Pornography on Computer Networks (Section 528)
“(a) None of the funds made available in this Act may be used to maintain or establish a computer network unless such network blocks the viewing, downloading, and exchanging of pornography. (b) Nothing in subsection (a) shall limit the use of funds necessary for any Federal, State, tribal, or local law enforcement agency or any other entity carrying out criminal investigations, prosecution, or adjudication activities.”
Really guys? That was a top priority item?
Interesting though, isn’t it? Including indirect cost expenditures, this would seem to apply to a very large number of Universities in the US. And now Congress has demanded they adopt nanny pR0n filters.
I don’t see any exceptions for classwork here, either.
Pot kills?
February 25, 2014
Apparently pot CAN kill.
Hartung and colleagues conclude from two cases:
After exclusion of other causes of death we assume that the young men died from cardiovascular complications evoked by smoking cannabis….The assumption of fatal heart failure in both cases is corroborated by the acute effects of marijuana, including a marked increase in heart rate that may result in cardiac ischemia in susceptible individuals, lesser increases in cardiac output, supine blood pressure and postural hypotension….We assume the deaths of these two young men occurred due to arrhythmias evoked by smoking cannabis; however this assumption does not rule out the presence of predisposing cardiovascular factors.
h/t:
PLoS is letting the inmates run the asylum and this will kill them
February 25, 2014
The latest round of waccaloonery is the new PLoS policy on Data Access.
I’m also dismayed by two other things of which I’ve heard credible accounts in recent months. First, the head office has started to question authors over their animal use assurance statements, declining to take the statement of local IACUC oversight as valid because of the research methods and outcomes. On the face of it, it isn’t terrible to be robustly concerned about animal use. However, in the case I am familiar with, they got it embarrassingly wrong. Wrong because any slight familiarity with the published literature would show that the “concern” was misplaced. Wrong because if they are going to try to sidestep the local IACUC, AAALAC and OLAW (and their worldwide equivalents) processes, then they are headed down a serious rabbithole of expensive investigation and verification. At the moment this cannot help but be biased, and the accusations are going to rain down on non-English-speaking and non-Western-country investigators, I can assure you.
The second incident has to do with accusations of self-plagiarism based on the sorts of default Methods statements or Introduction and/or Discussion points that get repeated. Look, there are only so many ways to say “and thus we prove a new facet of how the PhysioWhimple nucleus controls Bunny Hopping”. Only so many ways to say “The reason BunnyHopping is important is because…”. Only so many ways to say “We used optogenetic techniques to activate the gertzin neurons in the PhysioWhimple nucleus by….”. This one is particularly salient because it works against the current buzz about replication and reproducibility in science. Right? What is a “replication” if not plagiarism? And in this case it is not just the way the Methods are described, or the reason for doing the study and the interpretation, that gets copied. No, in this case it is plagiarism of the important part. The science. This is why concepts of what is “plagiarism” in science cannot be aligned with concepts of plagiarism in a bit of humanities text.
These two issues highlight, once again, why it is TERRIBLE for us scientists to let the humanities-trained and humanities-blinkered wordsmiths running journals dictate how publication is supposed to work.
The data repository obsession gets us a little closer to home because the psychotics are the Open Access Eleventy waccaloons who, presumably, started out as nice, normal, reasonable scientists.
Unfortunately PLoS has decided to listen to the wild-eyed fanatics and to play in their fantasy realm of paranoid ravings.
This is a shame and will further isolate PLoS’ reputation. It will short-circuit the gradual progress they have made in persuading regular, non-waccaloon science folks of the PLoS ONE mission. It will seriously cut down submissions…which is probably a good thing since PLoS ONE continues to suffer from growing pains.
But I think it a horrible loss that their current theological orthodoxy is going to blunt the central good of PLoS ONE, i.e., the assertion that predicting “impact” and “importance” before a manuscript is published is a fool’s errand and inconsistent with the best advance of science.
The first problem with this new policy is that it suggests that everyone should radically change the way they do science, at great cost of personnel time, to address the legitimate sins of the few. The scope of the problem hasn’t even been proven to be significant and we are ALL supposed to devote a lot more of our precious personnel time to data curation. Need I mention that research funds are tight and that personnel time is the most significant cost?
This brings us to the second problem. This Data Access policy requires much additional data curation, which will take time. We all handle data in the way that has proved most effective for us in our operations. Other labs have, no doubt, done the same. Our solutions are not the same as those of people doing very nearly the same work. Why? Because the PI thinks differently. The postdocs and techs have different skill sets. Maybe we are interested in a sub-analysis of a data set that nobody else worries about. Maybe the proprietary software we use differs and the smoothest way to manipulate data is different. We use different statistical and graphing programs. Software versions change. Some people’s datasets are so large as to challenge the capability of regular old desktop computing and storage hardware. Etc, etc, etc ad nauseam.
Third problem: this diversity in data handling results, inevitably, in attempts at data orthodoxy. So we burn a lot of time and effort fighting over that. Who wins? Do we force other labs to look at the damn cumulative records for drug self-administration sessions because some old-school behaviorists still exist in our field? Do we insist on individual subjects’ presentations for everything? How do we time-bin a behavioral session? Are the standards for dropping subjects the same in every possible experiment? (Answer: no.) Who annotates the files so that any idiot humanities major on the editorial staff of PLoS can understand that the data set is complete?
Fourth problem: I grasp that actual fraud and misleading presentation of data happen. But I also recognize, as the waccaloons do not, that there is a LOT of legitimate difference of opinion on data handling, even within a very old and well-established methodological tradition. I also see a lot of will on the part of science denialists to pretend that science is something it cannot be in their nitpicking of the data. There will be efforts to say that the way lab X deals with, e.g., their fear conditioning trials is not acceptable and they MUST do it the way lab Y does it. Keep in mind that this is never going to be single labs but rather clusters of lab methods traditions. So we’ll have PLoS inserting itself into the role of dictating how experiments are to be conducted and interpreted! That’s fine for post-publication review, but to use it as a gatekeeper before publication? Really, PLoS ONE? Do you see how this is exactly like preventing publication because two of your three reviewers argue that it is not impactful enough?
This is the reality. Pushes for Data Access will inevitably, in real practice, result in constraints on the very diversity of science that makes it so productive. It will burn a lot of time and effort that could be more profitably applied to conducting and publishing more studies. It addresses a problem that is not clearly established as significant.
NIH Multi-PI Grant Proposals.
February 24, 2014
In my limited experience, the creation, roll-out and review of Multi-PI direction of a single NIH grant has been the smoothest GoodThing to happen in NIH-supported extramural research.
I find it barely draws mention in review and deduce that my fellow scientists agree with me that it is a very good idea, long past due.
Discuss.
Say, what about the diversity of intramural scientists at the NIH?
February 21, 2014
While I’m getting all irate about the pathetic non-response to the Ginther report, I have been neglecting to think about the intramural research at NIH.
From Biochemme Belle:
What the NHLBI paper metrics data mean for NIH grant review
February 21, 2014
In reflecting on the profound lack of association of grant percentile rank with the citations and quantity of the resulting papers, I am struck that it reinforces a point made by YHN about grant review.
I have never been a huge fan of the Approach criterion. Or, more accurately, of how it is reviewed in practice. Review of the specific research plan can bog down in many areas. A review often derails into critique of the applicant’s failure to appropriately consider all the alternatives, into disagreement over predictions that can only be resolved empirically, into endless ticky-tack kvetching over buffer concentrations, into a desire for exacting specification of each and every control….. I am skeptical. I am skeptical that identifying these things plays any real role in the resulting science. First, because much of the criticism over the specifics of the approach vanishes when you consider that the PI is a highly trained scientist who will work out the real science during the conduct of same. Like we all do. For anticipated and unanticipated problems that arise. Second, because much of this Approach review is rightfully the domain of the peer review of scientific manuscripts.
I am particularly unimpressed by the shared delusion that the grant revision process by which the PI “responds appropriately” to the concerns of three reviewers alters the resulting science in a specific way either. Because of the above factors and because the grant is not a contract. The PI can feel free to change her application to meet reviewer comments and then, if funded, go on to do the science exactly how she proposed in the first place. Or, more likely, do the science as dictated by everything that occurs in the field in the years after the original study section critique was offered.
The Approach criterion score is the one that is most correlated with the eventual voted priority score, as we’ve seen in data offered up by the NIH in the past.
I would argue that a lot of the Approach criticism that I don’t like is an attempt to predict the future of the papers. To predict the impact and to predict the relative productivity. Criticism of the Approach often sounds to me like “This won’t be publishable unless they do X…..” or “this won’t be interpretable, unless they do Y instead….” or “nobody will cite this crap result unless they do this instead of that“.
It is a version of the deep motivator of review behavior: an unstated (or sometimes explicit) fear that the project described in the grant will fail if the PI does not write different things in the application. The presumption is that if the PI does (or did) write the application a little bit differently in terms of the specific experiments and conditions, all would be well.
So this also says that when Approach is given a congratulatory review, the panel members are predicting that the resulting papers will be of high impact…and plentiful.
The NHLBI data say this is utter nonsense.
Peer review of NIH grants is not good at predicting, within the historical fundable zone of about the top 35% of applications, the productivity and citation impact of the resulting science.
What the NHLBI data cannot address is a more subtle question. The peer review process decides which specific proposals get funded. Which subtopic domains, in what quantity, with which models and approaches… and there is no good way to assess the relative wisdom of this. For example, a grant on heroin may produce the same number of papers and citations as a grant on cocaine. A given program on cocaine using mouse models may produce approximately the same bibliometric outcome as one using humans. Yet the real world functional impact may be very different.
I don’t know how we could determine the “correct” balance, but I think we can introspect that peer review can predict topic domain and research models a lot better than it can predict citations and paper count. In my experience, when a grant is on cocaine, the PI tends to spend most of her effort on cocaine, not heroin. When the grant is for human fMRI imaging, it is rare that the PI pulls a switcheroo and works on fruit flies. These general research-domain issues are a far more predictable outcome than the impact of the resulting papers, in my estimation.
This leads to the inevitable conclusion that grant peer review should focus on the things that it can affect and not on the things that it cannot. Significance. Aka, “The Big Picture”. Peer review should wrestle over the relative merits of the overall topic domain, the research models and the general space of the experiments. It should de-emphasize the nitpicking of the experimental plan.
NHLBI data shows grant percentile does not predict paper bibliometrics
February 20, 2014
A reader pointed me to this News Focus in Science, which referred to Danthi et al., 2014.
Danthi N, Wu CO, Shi P, Lauer M. Percentile ranking and citation impact of a large cohort of National Heart, Lung, and Blood Institute-funded cardiovascular R01 grants. Circ Res. 2014 Feb 14;114(4):600-6. doi: 10.1161/CIRCRESAHA.114.302656. Epub 2014 Jan 9.
I think Figure 2 makes the point, even without knowing much about the particulars, and the last part of the Abstract makes it clear.
We found no association between percentile rankings and citation metrics; the absence of association persisted even after accounting for calendar time, grant duration, number of grants acknowledged per paper, number of authors per paper, early investigator status, human versus nonhuman focus, and institutional funding. An exploratory machine learning analysis suggested that grants with the best percentile rankings did yield more maximally cited papers.
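To make the comparison concrete: the question is whether a grant’s percentile rank is associated with the citation metrics of the papers it produced. A minimal sketch of that kind of association test, run on made-up numbers rather than the NHLBI data (all variable names are illustrative, not from the paper), might look like this:

```python
# Hypothetical illustration only: test for association between grant
# percentile rank and citations of the resulting papers.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Made-up cohort: percentile ranks in the roughly fundable range, and
# per-grant citation counts generated with no built-in relationship to rank.
percentile_rank = rng.uniform(0.1, 35.0, size=200)
citations = rng.negative_binomial(n=2, p=0.02, size=200)

rho, p_value = spearmanr(percentile_rank, citations)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
# A rho near zero with a large p-value is the "no association" pattern
# the Danthi et al. abstract describes.
```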
The only thing surprising in all of this was a quote attributed to the senior author Michael Lauer in the News Focus piece.
“Peer review should be able to tell us what research projects will have the biggest impacts,” Lauer contends. “In fact, we explicitly tell scientists it’s one of the main criteria for review. But what we found is quite remarkable. Peer review is not predicting outcomes at all. And that’s quite disconcerting.”
Lauer is head of the Division of Cardiovascular Research at the NHLBI and has been there since 2007. Long enough to know what time it is. More than long enough.
The take-home message is exceptionally clear. It is a message that most scientists who have stopped to think about it for half a second have already arrived upon.
Science is unpredictable.
Addendum: I should probably point out, for those readers who are not familiar with the whole NIH Grant system, that the major unknown here is the fate of unfunded projects. It could very well be the case that the ones that manage to win funding do not differ much, but that the ones kept from funding would have failed miserably had they been funded. Obviously we can’t know this until the NIH decides to do a study in which it randomly picks up grants across the entire distribution of priority scores. If I were a betting man I’d have to lay even odds on the upper and lower halves of the score distribution 1) not differing versus 2) the upper half doing better in terms of paper metrics. I really don’t have a firm prediction; I could see it either way.
Ask DrugMonkey: Electronic Lab Notebooks
February 20, 2014
A query came in through the email box:
Do you use ELNs in your lab? Is that something that you think would make a useful blog post? I haven’t found much elsewhere in the blogosphere about ELNs. Maybe you will find this to be a shining example of why you have stuck with paper and pen.
I don’t use one so I’m turning this over to you folks. Any recommendations for your fellow Reader?
Zerhouni Sets the Record Straight on Animal Research
February 19, 2014
Glad to see Zerhouni walk this one back….
On June 4th, 2013, Elias Zerhouni, a former Director of the National Institutes of Health (NIH), made some comments at a Scientific Management and Review Board (SMRB) meeting that were reported in NIH Record as follows:
“We have moved away from studying human disease in humans,” he lamented. “We all drank the Kool-Aid on that one, me included.” With the ability to knock in or knock out any gene in a mouse—which “can’t sue us,” Zerhouni quipped—researchers have over-relied on animal data. “The problem is that it hasn’t worked, and it’s time we stopped dancing around the problem…We need to refocus and adapt new methodologies for use in humans to understand disease biology in humans.”
This comment has been used by many animal rights activists to claim that animal research does not work. Here is a selection (many more examples exist):
ResearchGate “confirmation” of citations? WTF?
February 19, 2014
ResearchGate is, as you are undoubtedly aware, the latest and most annoying version of “The Facebook/LinkedIn for Scientists™”.
I just noticed that they have some sort of request for you to “confirm” that your publication indeed cited their publication.
What POSSIBLE goal does this serve? I mean, just look at the damn paper! Did it cite yours? Yes/No. Done.
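Verifying a citation is, after all, a purely mechanical lookup. As a rough sketch, and not anything ResearchGate actually does, the Yes/No could be scripted against the public CrossRef API (assuming the citing paper’s reference list has been deposited there; the DOIs below are placeholders):

```python
# Hypothetical sketch: does the paper at citing_doi cite cited_doi?
import requests

def cites(citing_doi: str, cited_doi: str) -> bool:
    """Look up the citing paper's deposited reference list on CrossRef
    and check whether the cited DOI appears in it."""
    resp = requests.get(f"https://api.crossref.org/works/{citing_doi}", timeout=30)
    resp.raise_for_status()
    references = resp.json()["message"].get("reference", [])
    # Not every deposited reference carries a DOI, so misses are possible.
    return any(ref.get("DOI", "").lower() == cited_doi.lower() for ref in references)

# Placeholder DOIs, for illustration only.
print(cites("10.1000/example.citing", "10.1000/example.cited"))
```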
It isn’t your fault the reviewers are enraged
February 19, 2014
…or maybe it is.
One of the things that I try to emphasize in NIH grant writing strategy is to ensure you always submit a credible application. It is not that difficult to do.
You have to include all the basic components, commit no more than a few typographical errors, and write in complete sentences. Justify the importance of the work. Put in a few pretty pictures and plenty of headers to create white space. Differentiate an Aim from a hypothesis from an Experiment.
Beyond that you are often constrained by the particulars of your situation and a specific proposal. So you are going to have to leave some glaring holes, now and again. This is okay! Maybe you are a noob and have little in the way of specific Preliminary Data. Or maybe you have a project which is, very naturally, a bit of a fishing expedition, i.e., hypothesis-generating, exploratory work. Perhaps the Innovation isn’t high, or it is a long stretch to attach health relevance.
Very few grants I’ve read, including many that were funded, are even close to perfect. Even the highest scoring ones have aspects that could readily be criticized without anyone raising an eyebrow.
The thing is, you have to be able to look at your proposal dispassionately and see the holes. You should have a fair idea of where trouble may lie ahead and shore up the proposal as best you can.
No preliminary data? Then do a better job with the literature predictions and alternate considerations/pitfalls. Noob lab? Then write more methods and cite them more liberally. Low Innovation? Hammer down the Significance. Established investigator wanting to continue the same-old, same-old under new funding? Disguise that with an exciting hypothesis or newish-sounding Significance link. (Hint: testing the other person’s hypothesis with your approaches can go over great guns when you are in a major theoretical dogfight over years’ worth of papers.)
What you absolutely cannot do is to leave the reviewers with nothing. You cannot leave gaping holes all over the application. That, my friends, is what drops you* below the “credible” threshold.
Don’t do that. It really does not make you any friends on the study section panel.
__
*This is one case where the noob is clearly advantaged. Many reviewers make allowances for a new or young-ish laboratory. There is much less sympathy for someone who has been awarded several grants in the past when the current proposal looks like a slice of Swiss cheese.
Legislative Mandates for NIH Grant Awardees
February 18, 2014
The Legislative Mandates have been issued for FY 2014.
The intent of this Notice is to provide information on the following statutory provisions that limit the use of funds on NIH grant, cooperative agreement, and contract awards for FY2014.
It contains the usual familiar stuff; of pointed interest are the prohibition against using grant funds to promote the legalization of Schedule I drugs and the one that prohibits any lobbying of the government. With respect to the Schedule I drugs issue, for a certain segment of my audience, I remind you of the critical exception:
(8) Limitation on Use of Funds for Promotion of Legalization of Controlled Substances (Section 509)
“(a) None of the funds made available in this Act may be used for any activity that promotes the legalization of any drug or other substance included in schedule I of the schedules of controlled substances established under section 202 of the Controlled Substances Act except for normal and recognized executive-congressional communications. (b) The limitation in subsection (a) shall not apply when there is significant medical evidence of a therapeutic advantage to the use of such drug or other substance or that federally sponsored clinical trials are being conducted to determine therapeutic advantage.”
I wouldn’t like to find out the hard way, but I would presume this means that research into the medical benefits of marijuana, THC and/or other cannabinoid compounds is just fine. I seem to recall reading more than one paper listing NIH support that might be viewed in this light.
What I found more fascinating was a little clause that I had not previously noticed in the anti-lobbying section.
(3) Anti-Lobbying (Section 503)
…
“(c) The prohibitions in subsections (a) and (b) shall include any activity to advocate or promote any proposed, pending or future Federal, State or local tax increase, or any proposed, pending, or future requirement or restriction on any legal consumer product, including its sale or marketing, including but not limited to the advocacy or promotion of gun control.”
There is also another stand-alone section, in case you didn’t get the point:
(2) Gun Control (Section 217)
“None of the funds made available in this title may be used, in whole or in part, to advocate or promote gun control.”
I was sufficiently curious to go back through the years and found that this language did not appear in the Notice for FY 2011 and was inserted for FY 2012. This was part of the FY 2012 Consolidated Appropriations Act, 2012 (Public Law 112-74), signed into law on December 23, 2011. I didn’t bother to go back through the legislative history to figure out when the gun control part was added, but it looks like something similar affecting the CDC appropriation was put into place in 1996.
So I guess we should have expected the anti-gun-control forces to get around to it eventually?