Well, this is provocative. One James Hicks has a new opinion bit in The Scientist that covers the usual ground about ethics, paper retractions and the like in the sciences. It laments several decades of “Responsible Conduct of Research” training and the apparent utter failure of this training to do anything about scientific misconduct. Dr. Hicks has also come up with a very provocative and truthy graph. From the article, it appears to plot annual data from 1997 to 2011: the retraction rate (from this Nature article) against the NIH Success Rate (from Science Insider).

Like I said, it appears truthy. Decreasing grant success is associated with increasing retraction rates. Makes a lot of sense. Desperate times drive the weak to desperate measures.

Of course, the huge caveat is the third factor…time. There has been a lot more attention paid to scientific retractions lately. Nobody knows whether increased retraction rates over time are being observed because fraud is up or because detection is up. It is nearly impossible to ever untangle this. Since NIH grant success rates have likewise been plummeting as a function of Fiscal Year, the relationship is confounded.
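To make the confound concrete, here is a minimal sketch with entirely made-up numbers (not the actual Nature or Science Insider data): two series that are each driven only by the year will correlate strongly with each other, whether or not there is any causal link between them.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1997, 2012)  # 1997..2011, matching the plotted range
t = years - years[0]

# Made-up illustration: retraction rate drifts up with the year,
# NIH success rate drifts down with the year, and neither series
# is generated from the other.
retraction_rate = 0.002 * t + rng.normal(0.0, 0.001, t.size)
success_rate = 0.30 - 0.005 * t + rng.normal(0.0, 0.005, t.size)

# Both series track fiscal year, so they correlate strongly even
# though the only shared driver is time itself.
r = np.corrcoef(retraction_rate, success_rate)[0, 1]
print(f"correlation: {r:.2f}")
```

Any pair of monotonic time trends will produce a plot as "truthy" as the one in the article, which is exactly why the time confound can't be waved away.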

Thought of the Day

May 4, 2013

Listed-third author gets to refer to it as a second-author-paper when the first two are co-equal first authors, right?

One duffymeg at Dynamic Ecology blog has written a post in which it is wondered:

How do you decide which manuscripts to work on first? Has that changed over time? How much data do you have sitting around waiting to be published? Do you think that amount is likely to decrease at any point? How big a problem do you think the file drawer effect is?

This was set within the background of having conducted too many studies and not finding enough time to write them all up. I certainly concur that by the time one has been rolling as a laboratory for many years, the unpublished data does have a tendency to stack up, despite our best intentions. This is not ideal but it is reality. I get it. My prior comments about not letting data go unpublished were addressing the situation where someone (usually a trainee) wanted to write up and submit the work but someone else (usually the PI) was blocking it.

To the extent that I can analyze my de facto priority, I guess the first priority is my interest of the moment. If I have a few thoughts or new references to integrate with a project that is in my head…sure I might open up the file and work on it for a few hours. (Sometimes I have been pleasantly surprised to find a manuscript is a lot closer to submitting than I had remembered.) This is far from ideal and can hardly be described as a priority. It is my reality though. And I cling to it because dangit…shouldn’t this be the primary motivation?

Second, I prioritize things by the grant cycle. This is a constant. If there is a chance of submitting a manuscript now, and it will have some influence on the grant game, this is a motivator for me. It may be because I am trying to get it accepted before the next grant deadline. Maybe before the 30-day cutoff ahead of grant review, when updating reviewers with news of an accepted manuscript is permitted. Perhaps because I am anticipating the Progress Report section for a competing continuation. Perhaps I just need to lay down published evidence that we can do Technique Y.

Third, I prioritize the trainees. For various reasons I take a firm interest in making sure that trainees in the laboratory get on publications as an author. Middle author is fine but I want to chart a clear course to the minimum of this. The next step is prioritizing first author papers…this is most important for the postdocs, of course, and not strictly necessary for the rotation students. It’s a continuum. In times past I may have had more affection for the notion of trainees coming in and working on their “own project” from more or less scratch until they got to the point of a substantial first-author effort. That’s fine and all but I’ve come to the conclusion I need to do better than this. Luckily, this dovetails with the point raised by duffymeg, i.e., that we tend to have data stacking up that we haven’t written up yet. If I have something like this, I’ll encourage trainees to pick it up and massage it into a paper.

Finally, I will cop to being motivated by short term rewards. The closer a manuscript gets to the submittable stage, the more I am engaged. As I’ve mentioned before, this tendency is a potential explanation for a particular trainee complaint. A comment from Arne illustrates the point.

on one side I more and more hear fellow Postdocs complaining of having difficulties writing papers (and tellingly the number of writing skill courses etc offered to Postdocs is steadily increasing at any University I look at) and on the other hand, I hear PIs complaining about the slowliness or incapabability of their students or Postdocs in writing papers. But then, often PIs don’t let their students and Postdocs write papers because they think they should be in the lab making data (data that might not get published as your post and the comments show) and because they are so slow in writing.

It drives me mad when trainees are supposed to be working on a manuscript and nothing happens for weeks and weeks. Sure, I do this too. (And perhaps my trainees are bitching about how I’m never furthering manuscripts I said I’d take a look at.) But from my perspective, grad students and postdocs are on a much shorter clock and they are the ones who most need to move their CVs along. Each manuscript (especially a first-author one) should loom large for them. So yes, perceptions of lack of progress on writing (whether due to incompetence*, laziness or whatever) are a complaint of PIs. And as I’ve said before, it interacts with the PI’s motivation to work on your draft. I don’t mind if it looks like a lot of work needs to be done, but I HATE it when nothing seems to change following our interactions and my editorial advice. I expect the trainees to progress in their writing. I expect them to learn both from my advice and from the evidence of their own experiences with peer review. I expect the manuscript to gradually edge toward completion.

One of the insights that I gained from my own first few papers is that I was really hesitant to give the lab head anything short of what I considered to be a very complete manuscript. I did so and I think it went over well on that front. But it definitely slowed my process down. Now that I have no concerns about my ability to string together a coherent manuscript in the end, I am a firm advocate of throwing half-baked Introduction and Discussion sections around in the group. I beg my trainees to do this and to work incrementally forward from notes, drafts, half-baked sentences and paragraphs. I have only limited success getting them to do it, I suspect because of the same problem that I had: I didn’t want to look stupid, and this kept me from bouncing drafts off my PI as a trainee.

Now that I think the goal is just to get the damn data in press, I am less concerned about the blah-de-blah in the Intro and Discussion sections.

But as I often remind myself, when it is their first few papers, the trainees want their words in press. The way they wrote them.

__
*this stuff is not Shakespeare, I reject this out of hand

LPU Redux

April 12, 2013

Another round of trying to get someone blustering about literature “clutter” and “signal to noise ratio” to really explain what he means.

Utter failure to gain clarity.

Again.

Update 1:

It isn’t as though I insist that each and every published paper everywhere and anywhere is going to be of substantial value. Sure, there may be a few studies, now and then, that really don’t ever contribute to furthering understanding. For anyone, ever. The odds favor this and do not favor absolutes. Nevertheless, it is quite obvious that the “clutter”, “signal to noise”, “complete story” and “LPU=bad” dingdongs feel that it is a substantial amount of the literature that we are talking about. Right? Because if you are bothering to complain about something that amounts to under 1% of what you happen across, then you are a very special princess-flower indeed.

Second, I wonder about the day-to-day experiences of people that bring them to this. What are they doing and how are they reacting? When I am engaging with the literature on a given topic of interest, I do a lot of filtering even with the assistance of PubMed. I think, possibly I am wrong here, that this is an essential ESSENTIAL part of my job as a scientist. You read the studies and you see how they fit together in your own understanding of the natural world (or the unnatural one, if that’s your gig). Some studies will be tour-de-force bravura evidence for major parts of your thinking. Some will provide one figure’s worth of help. Some will merely sow confusion…but proper confusion to help you avoid assuming something is more likely to be so than it is. In finding these, you are probably discarding many papers on reading the title, on reading the Abstract, on the first quick scan of the figures.

So what? That’s the job. That’s the thing you are supposed to be doing. It is not the fault of those stupid authors who dared to publish something of interest to themselves that your precious time had to be wasted determining it was of no interest to you. Nor is it any sign of a problem of the overall enterprise.

Update 2:
Thoughts on the Least Publishable Unit

LPU

Authors fail to illuminate the LPU issue

Better Living Through Least Publishable Units

Yet, publishing LPUs clearly hasn’t harmed some prominent people. You wouldn’t be able to get a job today if you had a CV full of LPUs and shingled papers, and you most likely wouldn’t get promoted either. But perhaps there is some point at which the sheer number of papers starts to impress people. I don’t completely understand this phenomenon.

Avalanche of Useless Science

Our problem is an “Avalanche of Low Quality Research”? Really?

Too Many Papers

We had some incidental findings that we didn’t think worthy of a separate publication. A few years later, another group replicated and published our (unpublished) “incidental” results. Their paper has been cited 12 times in the year and a half since publication in a field-specific journal with an impact factor of 6. It is incredibly difficult to predict in advance what other scientists will find useful. Since data is so expensive in time and money to generate, I would much, much rather there be too many publications than too few (especially given modern search engines and electronic databases).

Idle thought

March 6, 2013

Relevant to Sci’s recent ranting about the paper chase in science…

Sorry reviewers, I am not burning a year and $250K to satisfy your curiosity about something stupid for a journal of this IF.

Occasionally during the review of careers or grant applications you will see dismissive comments on the journals in which someone has published their work. This is not news to you. Terms like “low-impact journals” are wonderfully imprecise and yet deliciously mean. Yes, it reflects the fact that the reviewer himself couldn’t be bothered to actually review the science IN those papers, nor to acquaint himself with the notorious skew of real-world impact that exists within and across journals.

More hilarious to me is the use of the word “tier”. As in “The work from the prior interval of support was mostly published in second tier journals…“.

It is almost always second tier that is used.

But this is never correct in my experience.

If we’re talking Impact Factor (and these people are, believe it) then there is a “first” tier of journals populated by Cell, Nature and Science.

In the Neurosciences, the next tier is a place (IF in the teens) in which Nature Neuroscience and Neuron dominate. No question. THIS is the “second tier”.

A jump down to the IF 12 or so of PNAS most definitely represents a different “tier” if you are going to talk about meaningful differences/similarities in IF.

Then we step down to the circa IF 7-8 range populated by J Neuroscience, Neuropsychopharmacology and Biological Psychiatry. Demonstrably fourth tier.

So for the most part, when people are talking about “second tier journals” they are probably down at the FIFTH tier (IF 4-6, in my estimation).

I also argue that the run-of-the-mill society-level journals extend below this fifth tier to a “rest of the pack” zone in which there is a meaningful perception difference from the fifth tier. So… six tiers.

Then we have the paper-bagger dump journals. Demonstrably a seventh tier. (And seven is such a nice number isn’t it?)
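For what it’s worth, the whole scheme boils down to a trivial lookup. The IF cut-points below are my guesses at the boundaries implied by the examples in the post; they are illustrative, not official ISI categories:

```python
def journal_tier(impact_factor):
    """Map an impact factor to the seven-tier scheme sketched above.

    Cut-points are guessed from the post's examples (CNS at the top,
    Nat Neuro/Neuron in the teens, PNAS ~12, J Neuro ~7-8, the
    complained-about "second tier" at IF 4-6, society journals below
    that, dump journals at the bottom). Illustrative only.
    """
    if impact_factor >= 30:
        return 1  # Cell, Nature, Science
    if impact_factor >= 13:
        return 2  # Nature Neuroscience, Neuron
    if impact_factor >= 10:
        return 3  # PNAS
    if impact_factor >= 7:
        return 4  # J Neuroscience, Neuropsychopharmacology, Biol Psychiatry
    if impact_factor >= 4:
        return 5  # what sneerers actually mean by "second tier"
    if impact_factor >= 2:
        return 6  # run-of-the-mill society journals
    return 7      # paper-bagger dump journals
```

The point of writing it out is the punchline: the journals routinely dismissed as “second tier” land at tier five under any honest IF accounting.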

So there you have it. If you* are going to use “tier” to sneer at the journals in which someone publishes, for goodness sake do it right, will ya?

___
*Of course it is people** who publish frequently in the third and fourth tiers, and only rarely in the second, who use “second tier journal” to refer to what is in the fifth or sixth tier of IFs. Always.

**For those rare few that publish extensively in the first tier, hey, you feel free to describe all the rest as “second tier”. Go nuts.

@mbeisen is on fire on the Twitts:

@ianholmes @eperlste @dgmacarthur @caseybergman and i’m not going to stop calling things as they are to avoid hurting people’s feelings

Why? Open Access to scientific research, naturally. What else? There were a couple of early assertions that struck me as funny including

@eperlste @ianholmes @dgmacarthur @caseybergman i think the “i should have to right to choose where to publish” argument is bullshit

and

@eperlste @ianholmes @dgmacarthur @caseybergman funding agencies can set rules for where you can publish if you take their money

This was by way of answering a Twitt from @ianholmes that set him off, I surmise:

@eperlste @dgmacarthur how I decide where to pub is kinda irrelevant. The point is, every scientist MUST have the freedom to decide for self

This whole thing is getting ridiculous. I don’t have the unfettered freedom to decide where to publish my stuff and it most certainly is an outcome of the funding agency, in my case the NIH.

Here are the truths that we hold to be self-evident at the present time. The more respected the journal in which we publish our work, the better the funding agency “likes” it. This encompasses the whole process, from initial peer review of the grant applications, to selection for funding (sometimes via exception pay), to the ongoing review by program officers. It extends not just to the present award, but to any future awards I might be seeking to land.

Where I publish matters to them. They make it emphatically clear in ever-so-many-ways that the more prestigious the journal (which generally means higher IF, but not exclusively this), the better my chances of being continuously funded.

So I agree with @mbeisen about the “I have the right to choose where I publish is bullshit” part, but it is for a very different reason than seems to be motivating his attitude. The NIH already influences where I “choose” to publish my work. As we’ve just seen in a prior discussion, PLoS ONE is not very high on the prestige ladder with peer reviewers…and therefore not very high with the NIH.

So quite obviously, my funder is telling me not to publish in that particular OA venue. They’d much prefer something of a lower IF that is better respected in the field, say, the journals that have longer track records, happen to sit on the top of the ISI “substance abuse” category or are associated with the more important academic societies. Or perhaps even the slightly more competitive rank of journals associated with academic societies of broader “brain” interest.

Even before we get to the Glamour level….the NIH funding system cares where I publish.

Therefore I am not entirely “free” to choose where I want to publish and it is not some sort of moral failing that I haven’t jumped on the exclusive OA bandwagon.

@ianholmes @eperlste @dgmacarthur @caseybergman bullshit – there’s no debate – there’s people being selfish and people doing the right thing

uh-huh. I’m “selfish” because I want to keep my lab funded in this current skin-of-the-teeth funding environment? Sure. The old one-percenter-of-science monster rears its increasingly ugly head on this one.

@ianholmes @eperlste @dgmacarthur @caseybergman and we have every right to shame people for failing to live up to ideals of field

What an ass. Sure, you have the right to shame people if you want. And we have the right to point out that you are being an asshole from your stance of incredible science privilege as a science one-percenter. Lecturing anyone who is not tenured, doesn’t enjoy HHMI funding, isn’t comfortably ensconced in a hard money position, isn’t in a highly prestigious University or Institute, may not even have achieved her first professorial appointment yet about “selfishness” is being a colossal dickweed.

Well, you know how I feel about dickweeds.

I do like @mbeisen and I do think he is on the side of angels here*. I agree that all of us need to be challenged and I find his comments to be this, not an unbearable insult. Would it hurt to dip one toe in the PLoS ONE waters? Maybe we can try that out without it hurting us too badly. Can we preach his gospel? Sure, no problem. Can we ourselves speak of PLoS ONE papers on the CVs and Biosketches of the applications we are reviewing without being unjustifiably dismissive of how many notes Amadeus has included? No problem.

So let us try to get past his rhetoric and position of privilege, and stop with the tone trolling. Let’s just use his frothing about OA to examine our own situations and see where we can help the cause without it putting our labs out of business.

__
*ETA: meaning Open Access, not his attacks on Twitter