In reflecting on the profound lack of association of grant percentile rank with the citations and quantity of the resulting papers, I am struck that it reinforces a point made by YHN about grant review.

I have never been a huge fan of the Approach criterion. Or, more accurately, of how it is reviewed in practice. Review of the specific research plan can bog down in many areas. A review is often derailed into critique of the applicant’s failure to appropriately consider all the alternatives, into disagreement over predictions that can only be resolved empirically, into endless ticky-tack kvetching over buffer concentrations, into a desire for exacting specification of each and every control….. I am skeptical. I am skeptical that identifying these things plays any real role in the resulting science. First, because much of the criticism over the specifics of the approach vanishes when you consider that the PI is a highly trained scientist who will work out the real science during the conduct of same. Like we all do. For anticipated and unanticipated problems that arise. Second, because much of this Approach review is rightfully the domain of the peer review of scientific manuscripts.

I am particularly unimpressed by the shared delusion that the grant revision process by which the PI “responds appropriately” to the concerns of three reviewers alters the resulting science in a specific way either. Because of the above factors and because the grant is not a contract. The PI can feel free to change her application to meet reviewer comments and then, if funded, go on to do the science exactly how she proposed in the first place. Or, more likely, do the science as dictated by everything that occurs in the field in the years after the original study section critique was offered.

The Approach criterion score is the one that is most correlated with the eventual voted priority score, as we’ve seen in data offered up by the NIH in the past.

I would argue that a lot of the Approach criticism that I don’t like is an attempt to predict the future of the papers. To predict the impact and to predict the relative productivity. Criticism of the Approach often sounds to me like “This won’t be publishable unless they do X…..” or “this won’t be interpretable unless they do Y instead….” or “nobody will cite this crap result unless they do this instead of that”.

It is a version of the deep motivator of review behavior. An unstated (or sometimes explicit) fear that the project described in the grant will fail if the PI does not write different things in the application. The presumption is that if the PI does (or did) write the application a little bit differently in terms of the specific experiments and conditions, then all would be well.

So this also says that when Approach is given a congratulatory review, the panel members are predicting that the resulting papers will be of high impact…and plentiful.

The NHLBI data say this is utter nonsense.

Peer review of NIH grants is not good at predicting, within the historical fundable zone of about the top 35% of applications, the productivity and citation impact of the resulting science.

What the NHLBI data cannot address is a more subtle question. The peer review process decides which specific proposals get funded. Which subtopic domains, in what quantity, with which models and approaches… and there is no good way to assess the relative wisdom of this. For example, a grant on heroin may produce the same number of papers and citations as a grant on cocaine. A given program on cocaine using mouse models may produce approximately the same bibliometric outcome as one using humans. Yet the real world functional impact may be very different.

I don’t know how we could determine the “correct” balance but I think we can introspect that peer review can predict topic domain and the research models a lot better than it can predict citations and paper count. In my experience, when a grant is on cocaine, the PI tends to spend most of her effort on cocaine, not heroin. When the grant is for human fMRI imaging, it is rare that the PI pulls a switcheroo and works on fruit flies. These general research domain outcomes are a lot more predictable than the impact of the resulting papers, in my estimation.

This leads to the inevitable conclusion that grant peer review should focus on the things that it can affect and not on the things that it cannot. Significance. Aka, “The Big Picture”. Peer review should wrestle over the relative merits of the overall topic domain, the research models and the general space of the experiments. It should de-emphasize the nitpicking of the experimental plan.

A reader pointed me to this News Focus in Science which referred to Danthi et al, 2014.

Danthi N, Wu CO, Shi P, Lauer M. Percentile ranking and citation impact of a large cohort of National Heart, Lung, and Blood Institute-funded cardiovascular R01 grants. Circ Res. 2014 Feb 14;114(4):600-6. doi: 10.1161/CIRCRESAHA.114.302656. Epub 2014 Jan 9.

[PubMed, Publisher]

I think Figure 2 makes the point, even without knowing much about the particulars

and the last part of the Abstract makes it clear.

We found no association between percentile rankings and citation metrics; the absence of association persisted even after accounting for calendar time, grant duration, number of grants acknowledged per paper, number of authors per paper, early investigator status, human versus nonhuman focus, and institutional funding. An exploratory machine learning analysis suggested that grants with the best percentile rankings did yield more maximally cited papers.

The only thing surprising in all of this was a quote attributed to the senior author Michael Lauer in the News Focus piece.

“Peer review should be able to tell us what research projects will have the biggest impacts,” Lauer contends. “In fact, we explicitly tell scientists it’s one of the main criteria for review. But what we found is quite remarkable. Peer review is not predicting outcomes at all. And that’s quite disconcerting.”

Lauer is head of the Division of Cardiovascular Research at the NHLBI and has been there since 2007. Long enough to know what time it is. More than long enough.

The take home message is exceptionally clear. It is a message that most scientists who have stopped to think about it for half a second have already arrived at.

Science is unpredictable.

Addendum: I should probably point out for those readers who are not familiar with the whole NIH Grant system that the major unknown here is the fate of unfunded projects. It could very well be the case that the ones that manage to win funding do not differ much, but the ones that are kept from funding would have failed miserably had they been funded. Obviously we can’t know this until the NIH decides to do a study in which they randomly pick up grants across the entire distribution of priority scores. If I were a betting man I’d have to lay even odds on the upper and lower halves of the score distribution 1) not differing vs 2) the upper half doing better in terms of paper metrics. I really don’t have a firm prediction; I could see it either way.

Nature editor Noah Gray Twittered a link to a 2003 Editorial in Nature Neuroscience.

The key takeaway is in the figure (which Noah also twittered).


In 2003 the JIF for Nature Neuroscience was 15.14, for J Neuro 8.05 and for Brain Research 2.474. Nature itself was 30.98.

Plenty of people refer to the skew and the relative influence of a handful of very highly cited papers but it is interesting and more memorable to see in graphical form, isn’t it?


As far as I can tell, the British Journal of Pharmacology has taken to requiring that authors who use animal subjects conduct their studies in accordance with the ARRIVE (Animal Research: Reporting of In Vivo Experiments) principles. These are conveniently detailed in their own editorial:

McGrath JC, Drummond GB, McLachlan EM, Kilkenny C, Wainwright CL. Guidelines for reporting experiments involving animals: the ARRIVE guidelines. Br J Pharmacol. 2010 Aug;160(7):1573-6. doi: 10.1111/j.1476-5381.2010.00873.x.

and paper on the guidelines:

Kilkenny C, Browne W, Cuthill IC, Emerson M, Altman DG; NC3Rs Reporting Guidelines Working Group. Animal research: reporting in vivo experiments: the ARRIVE guidelines. Br J Pharmacol. 2010 Aug;160(7):1577-9. doi: 10.1111/j.1476-5381.2010.00872.x.

The editorial has been cited 270 times. The guidelines paper has been cited 199 times so far and the vast, vast majority of these are in, you guessed it, the BRITISH JOURNAL OF PHARMACOLOGY.

One might almost suspect the journal now has a demand that authors indicate that they have followed these ARRIVE guidelines by citing the 3 page paper listing them. The journal IF is 5.067 so having an item cited 199 times since it was published in the August 2010 issue represents a considerable outlier. I don’t know if a “Guidelines” category of paper (as this is described on the pdf) goes into the ISI calculation. For all we know they had to exempt it. But why would they?

And I notice that some other journals seem to have published the guidelines under the byline of the self same authors! Self-Plagiarism!!!

Perhaps they likewise demand that authors cite the paper from their own journal?

Seems a neat little trick to run up an impact factor, doesn’t it? Given the JIF window and the publication rate of real articles in many journals, a couple of hundred extra cites in the sampling interval can have an effect on the JIF.

Naturally this is a time for a resurgence of blathering about how Journal Impact Factors are a hugely flawed measure of the quality of individual papers or scientists. Also it is a time of much bragging about recent gains….I was alerted to the fact that the new numbers were out via a society I follow on Twitter bragging about its latest figure.


Of course, one must evaluate such claims in context. Seemingly the JIF trend is for unrelenting gains year over year. Which makes sense, of course, if science continues to expand. More science, more papers and therefore more citations seems to me to be the underlying reality. So the only thing that matters is how much a given journal has changed relative to other peer journals, right? A numerical gain, sometimes ridiculously tiny, is hardly the stuff of great pride.

So I thought I’d take a look at some journals that publish drug-abuse type science. There are a ton more in the ~2.5-4.5 range but I picked out the ones that seemed to actually have changed at some point.
Neuropsychopharmacology, the journal of the ACNP and subject of the above-quoted Twitt, has closed the gap on arch-rival Biological Psychiatry in the past two years, although each of them trended upward in the past year. For NPP, putting the sadly declining Journal of Neuroscience (the Society for Neuroscience’s journal) firmly behind them has to be considered a gain. J Neuro is more general in topic and, as PhysioProf is fond of pointing out, does not publish review articles, so this is expected. NPP invented a once-annual review journal a few years ago and it counts in their JIF, so I’m going to score the last couple of years’ gain to this, personally.

Addiction Biology is another curious case. It is worth special note for both the large gains in JIF and the fact it sits atop the ISI Journal Citation Reports (JCR) category for Substance Abuse. The first jump in IF was associated with a change in publisher so perhaps it started getting promoted more heavily and/or guided for JIF gains more heavily. There was a change in editor in there somewhere as well which may have contributed. The most recent gains, I wager, have a little something to do with the self-reinforcing virtuous cycle of having topped the category listing in the ISI JCR and having crept to the top of a large heap of ~2.5-4.5 JIF behavioral pharmacology / neuroscience type journals. This journal had been quarterly up until about two years ago when it started publishing bimonthly and their pre-print queue is ENORMOUS. I saw some articles published in a print issue this year that had appeared online two years before. TWO YEARS! That’s a lot of time to accumulate citations before the official JIF window even starts counting. There was news of a record number of journals being excluded from the JCR for self-citation type gaming of the index….I do wonder why the pre-print queue length is not of concern to ISI.

PLoS ONE is an interest of mine, as you know. Phil Davis has an interesting analysis up at Scholarly Kitchen which discusses the tremendous acceleration in papers published per year in PLoS ONE and argues a decline in JIF is inevitable. I tend to agree.

Neuropharmacology and British Journal of Pharmacology are examples of journals near the top of the aforementioned mass of journals that publish normal scientific work in my fields of interest. Workmanlike? I suppose the non-pejorative use of that term would be accurate. These two journals bubbled up slightly in the past five years but seem to be enjoying different fates in 2012. It will be interesting to see if these are just wobbles or if the journals can sustain the trends. If real, it may show how easily one journal can suffer a PLoS ONE type of fate, whereby a slightly elevated JIF draws more papers of lesser eventual impact, while BJP may be showing the sort of virtuous cycle that I suspect Addiction Biology has been enjoying. One slightly discordant note for this interpretation is that Neuropharmacology has managed to get the online-to-print publication lag down to some of the lowest amongst its competition. This is a plus for authors who need to pad their calendar-year citation numbers but it may be a drag on the JIF since articles don’t enjoy as much time to acquire citations.

As you know, I have a morbid fascination with PLoS ONE and what it means for science, careers in science and the practices within my subfields of interest.

There are two complaints that I see offered as supposedly objective reasons for old school folks’ easy complaining about how it is not a real journal. First, that they simply publish “too many papers”. It was 23,468 in 2012. This particular complaint always reminds me of the “too many notes” scene from Amadeus,

which is to say that it is a sort of meaningless throwaway comment, from a person who has a subjective distaste and simply makes something up on the spot to cover it over. More importantly, however, it brings up the fact that people are comparing apples to oranges. That is, they are looking at a regular print type of journal (or several of them) and identifying the disconnect. My subfield journals of interest publish maybe somewhere between 12 and 20 original reports per issue, one or two issues per month. So anything from about 144 to 480 articles per year. A lot lower than PLoS ONE, eh? But look, I follow at least 10 journals that are sort of normal, run of the mill, society level journals in which stuff that I read, cite and publish myself might appear. So right there we’re up to something on the order of 3,000 articles per year.

PLoS ONE, as you know, covers just about all aspects of science! So multiply my subfield by all the other subfields (I can get to 20 easy without even leaving “biomedical” as the supergroup) with their respective journals and…. all of a sudden the PLoS ONE output doesn’t look so large.
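A back-of-the-envelope sketch of that scaling, using the rough ranges above (illustrative numbers, not measured data):

```python
# Rough scaling of "normal" journal output vs PLoS ONE's 23,468 papers
# in 2012, using the text's illustrative ranges.
per_issue = (12, 20)        # original reports per issue (low, high)
issues_per_year = (12, 24)  # one or two issues per month

# Per-journal annual output: 144 to 480 articles
per_journal = (per_issue[0] * issues_per_year[0],
               per_issue[1] * issues_per_year[1])

# Ten run-of-the-mill society journals in one subfield: ~1,400-4,800/yr
journals_followed = 10
subfield = tuple(n * journals_followed for n in per_journal)

# Twenty biomedical subfields at ~3,000 articles each: ~60,000/yr,
# which dwarfs PLoS ONE's single-journal total
subfields = 20
rough_biomedical = 3000 * subfields

print(per_journal, subfield, rough_biomedical)
```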

Another way to look at this would be to examine the output of all of the many journals that a big publisher like Elsevier puts out each year. How many do they publish? One hell of a lot more than 23,000, I can assure you. (I mean really, don’t they have almost that many journals?) So one answer to the “too many notes” type of complaint might be to ask if the person also discounts Cell articles for that same reason.

The second theme of objection to PLoS ONE is as was recently expressed by @egmoss on the Twitts:

An 80% acceptance rate is a bit of a problem.

So this tends to overlook the fact that much more ends up published somewhere, eventually, than is reflected in a per-journal acceptance rate. As noted by Conan Kornetsky back in 1975 upon relinquishing the helm of Psychopharmacology:

“There are enough journals currently published that if the scientist perseveres through the various rewriting to meet style differences, he will eventually find a journal that will accept his work”.

Again, I ask you to consider the entire body of journals that are normal for your subfield. What do you think the overall acceptance rate for a given manuscript might be? I’d wager it is competitive with PLoS ONE’s 80% and probably even higher!

So one of the Twitts was recently describing a grant funding agency that required listing the Impact Factor of each journal in which the applicant had published.

No word on whether or not it was the IF for the year in which the paper was published, which seems most fair to me.

It also emerged that the applicant was supposed to list the Journal Impact Factor (JIF) for subdisciplines, presumably the “median impact factor” supplied by ISI. I was curious about the relative impact of listing a different ISI journal category as your primary subdiscipline of science. A sample of ones related to the drug abuse sciences would be:

Neurosciences 2.75
Substance Abuse 2.36
Toxicology 2.34
Behavioral Sciences 2.56
Pharmacology/Pharmacy 2.15
Psychology 2.12
Psychiatry 2.21

Fascinating. What about…
Oncology 2.53
Surgery 1.37
Microbiology 2.40
Neuroimaging 1.69
Veterinary Sciences 0.81
Plant Sciences 1.37

aha, finally a sub-1.0. So I went hunting for some usual suspects mentioned, or suspected, as low-cite rate disciplines…
Geology 0.93
Geosciences, multidisc 1.33
Forestry 0.87
Statistics and Probability 0.86
Zoology 1.06
Meteorology 1.67

This is a far from complete list of the ISI subdisciplines (and please recognize that many journals can be cross-listed), just a non-random walk conducted by YHN. But it suggests that the range is really restricted, particularly when it comes to closely related fields, like the ones that would fall under the umbrella of substance abuse.

I say the range is restricted because as we know, when it comes to journals in the ~2-4 IF range within neuroscience (as an example), there is really very little difference in subjective quality. (Yes, this is a discussion conditioned on the JIF, deal.)

It requires, I assert, at least the JIF ~6+ range to distinguish a manuscript acceptance from the general herd below about 4.

My point here is that I am uncertain that the agency which requires listing disciplinary median JIFs is really gaining an improved picture of the applicant. Uncertain if cross-disciplinary comparisons can be made effectively. You still need additional knowledge to understand whether the person’s CV is filled with journals that are viewed as significantly better than average within the subfield. About all you can tell is that they are above or below the median.

A journal which bests the Neurosciences median by a point (3.75) really isn’t all that impressive. You have to add something on the order of 3-4 IF points to make a dent. But maybe in Forestry if you get to only a 1.25 this is a smoking upgrade in the perceived awesomeness of the journal? How would one know without further information?

Stephen Curry has a nice lengthy diatribe against the Impact Factor up over at the Occam’s Typewriter collective. It is an excellent review of the problems associated with the growing dominance of the journal Impact Factor in the careers of scientists.

I am particularly impressed by:

It is time to start a smear campaign so that nobody will look at them without thinking of their ill effects, so that nobody will mention them uncritically without feeling a prick of shame.

Well, of course I would be impressed, wouldn’t I? I’ve been on the smear campaign for some time.

The problem I have with Curry’s post is the suggestion that we continue to need some mechanism, previously filled by journal identity/prestige, as a way to filter the scientific literature. As he quoted from a previous Nature EIC:

“nobody wants to have to wade through a morass of papers of hugely mixed quality, so how will the more interesting papers […] get noticed as such?”

This is the standard bollocks from those who have a direct or indirect interest in the GlamourMag game. Stephen Curry responds a bit too tepidly for my taste:

The trick will be to crowd-source the task.

Ya think?

Look, one of the primary tasks of a scientist is to sift through the literature. To review data that has been presented by other scientists and to decide, for herself, where these data fit. Are they good quality but dull? Exciting but limited? Need verification? Require validation in other assays? Gold-plated genius ready for Stockholm?

This. Is. What. We. Do!!!!!!

And yeah, we “crowdsource” it. We discuss papers with our colleagues. Lab heads and trainees alike. We come back to a paper we’ve read 20 times and find some new detail that is critical for understanding something else.

This notion that we need help “sifting” through the vast literature and that that help is to be provided by professional editors at Science and Nature who tell us what we need to pay attention to is nonsense.

And acutely detrimental to the progress of science.

I mean really. You are going to take a handful of journals and let them tell you (and your several hundred closest sub-field peers) what to work on? What is most important to pursue? Really?

That isn’t science.

That’s sheep herding.

And guess what scientists? You are the sheep in this scenario.

I received a kind email from Elsevier this morning, updating me on the amazing improvement in 2011 Impact Factor (versus 2010) for several journals in their stable of “Behavioral & Cognitive Neuroscience Journals”. There are three funny bits here, first that the style was:

2010 Impact Factor WAS 2.838, 2011 Impact Factor NOW 3.174

You have to admit the all-caps is a crack up. Second, THREE decimal places! Dudes, this shit is totally precise and that means….sciencey.

As you know, however, DearReader, I have a rather unhealthy interest in the hilariousity of the Impact Factor and I was thinking about the more important issue here.

Is this a significant difference? Who gives a hoot if the IF goes up by 0.336? Is this in any way meaningful?

I suspect the number of available citations is ever on the increase. The business of science is ever expanding, the pressure to publish relentless and the introduction of new journals continues. This means that IFs will be on some baseline level of background increase over time. This is borne out, I will note, by my completely unscientific tracking of journals most closely related to my interests over the past *cough*cough* years *cough*cough*. They all seem to have gradually inched up a few decimal points year in, year out.

For the 0.336 increase, let us do a little seat of the pants calculation. Let’s say a journal publishes 20 articles per issue, 12 issues per year….480 items over the 2 year tracking interval for calculating the IF. That works out to 0.336 × 480 ≈ 161; round it to 160 extra citations*. If only 17% of the articles got two more citations, this would account for it. If a mere 3% of articles turned out to be AMAZING for the sub-sub-sub field and won an extra 10 citations each….this would nearly account for the change.
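A quick sanity check of that seat-of-the-pants arithmetic, with the same illustrative numbers:

```python
# Sanity check of the back-of-envelope JIF arithmetic above, using the
# text's illustrative numbers (20 articles/issue, 12 issues/year).
items = 20 * 12 * 2          # 480 citable items in the 2-year window
jif_gain = 0.336
extra_citations = jif_gain * items   # ~161; round to ~160

# Scenario 1: some fraction of articles each pick up 2 extra citations
frac_two_each = extra_citations / (items * 2)    # ~0.17, i.e. ~17%

# Scenario 2: a few articles each win 10 extra citations
frac_ten_each = extra_citations / (items * 10)   # ~0.034, i.e. ~3%

print(items, round(extra_citations), round(frac_two_each, 2), round(frac_ten_each, 3))
```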

For one thing, I can now see why editors would be willing to try the “Cite us a few more times” gambit with authors in the review stage. It doesn’t take many intimidated authors throwing in 4-5 more citations of recent work from the journal in question to move a third of an impact factor.

Heck, just one solo operator author could probably make a notable impact over two years. If I put everything we submit into a single journal over two years’ time, and did my level best to make sure to cite everything plausibly relevant from that journal, I could generate 40 extra citations in two years easily. Probably without anyone so much as noticing what I was up to!

The fact that the vast majority of society rank journals that I follow fail to experience dramatic IF gains suggests that nobody is trying to game the system like this and that seemingly universal increases are a reflection of overall trends for total number of publications. But it does make you wonder about those few journals that managed to gain** a subjective rank over a few years time, say from the 2-4 to the 6-8 range and just how they pulled it off.

This tool permits you to search some citation trends by journal.
*Yes, I realize the overlap year for adjacent annual IFs. For our thought exercise, imagine it is non-overlapping years if this bothers you.

**My hypothesis is that an editorial team would only have to pull shenanigans for 2-4 years and after that the IF would be self-sustaining.

When you are reviewing papers for a journal, it is in your best interest to stake out papers most like your own as “acceptable for publication”.

If it is a higher IF than you usually reach, you should argue for a manuscript that is somewhat below that journal’s standard.

If it is a journal in which you have published, it is in your interest to crap on any manuscript that is lesser than your typical offerings.

So you finally got your paper accepted, the proofs have come and been returned in 48 hrs (lest some evil, unspecified thing happen). You waited a little bit and BOOM, up it pops on PubMed and on the pre-publication list at the journal. The article is, for most intents and purposes, published.


Now get back to work on that next paper.

But there’s that nagging little thought… it isn’t really published until it gets put in a print issue. Most importantly, you don’t know for sure which year it will be properly published in, so the citation is still in flux. So you look at the number of items below yours in the pre-print list, figure out approximately how many articles are published per issue in the journal and game it out. Ugh…. four months? Six? EIGHT????

WHY O WHY gods of publishing?? WHY must it take so long???????

Whenever I’ve heard a publishing flack address this it has been some mumblage about making sure they have a smooth publication track. That they are never at a loss to publish enough in a given issue. And they have to stick to the schedule don’t you know!

(except they don’t. Volumes are pretty fixed but you’ll notice a “catch up” extra issue of a volume now and again.)

Well, well, well. Something I’ve never considered was raised in a blog post at Scholarly Kitchen. An article by Frank Krell in Learned Publishing (I swear I’m not making that journal title up) asks if publishers might be using this to game the Impact Factor of their journals.

Dammit! Totally true. Think about it…

Now, before I get started, the Scholarly Kitchen, good publisher flacks that they are, caution:

To me, there needs to be some evidence — even anecdotal — that editors are purposefully post-dating publication for the purposes of citation gaming. Large January issues may be one piece of evidence; however, it may also signal the funding and publication cycle of academics. I’d be more interested to know whether post-dating conversations are going on within editorial boards, or whether authors have been told that the editor is holding back their article to maximize its contribution to the journal’s impact factor.

But this only really addresses the specific point that Krell made about pushing issues around with respect to the start of a new year.

There’s a larger point at hand. One of the points of objection I’ve always had about the IF calculation is that the two-year window puts a serious constraint on the types of citations that are available in certain kinds of science. The kind where it just takes a lot of time to come up with a publishable data set.

Take normal old, run of the mill behavioral experiments that can be classified as behavioral pharmacology (within which a lot of substance abuse studies live). Three to four months, easily, just to get an animal experiment done. Ordering, habituating, pre-training, surgeries and recovery…it takes time. A typical study might be 3-4 groups of subjects, aka, experiments. That’s if you get lucky. Throw in some false avenues and failed experiments and you are easily up to 6 or 8 groups. Keep in mind that physical resources like operant boxes, constraints such as the observation window (could be a 6 hr behavioral experiment, no problemo) and available staff (not everyone has a tech) really narrow down the throughput. You can’t just “work faster” or “work harder” like supposedly is possible at the bench. The number of “experiments” you can do doesn’t scale up with time spent in the lab if you are doing behavioral studies with some sort of timecourse. The PI may not even be able to do much by throwing more people into the project even if the laboratory does have this sort of flexibility.


So here you are, Joe Scholar, reading your favorite journal when BLAM! You see an awesome paper that gives you a whole line of new ideas that you could and should set out to studying. Like, RIGHT FREAKING NOW!!! Okay, so suppose money is not an issue and you don’t have anything else particularly pressing. Order some animals and off you go.

It’s going to be a YEAR minimum to complete the studies. A month to write up the draft, throw in three months for peer review and another month for the journal to get its act together. Thus, if things go really, really well for you, there is only a six or seven month window of slack to get a citation in for that original motivating paper before the 2 year IF citation window elapses.

Things never go that well.
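The optimistic timeline above sketches out like this (durations in months, using the rough estimates from the text):

```python
# The optimistic timeline, in months: how much of the 2-year JIF
# citation window is left for a study motivated by a just-published
# paper to cite it back?
study = 12       # a year minimum to complete the experiments
writing = 1      # a month to write up the draft
review = 3       # three months for peer review
production = 1   # a month for the journal to get its act together
elapsed = study + writing + review + production   # 17 months

jif_window = 24  # months in which a citation counts toward the JIF
slack = jif_window - elapsed  # ~6-7 months, and only if nothing slips

print(elapsed, slack)
```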

In my view this makes it almost categorically impossible for a publication to garner IF credit for a citation that is the most meaningful of all. A citation from a paper motivated almost entirely by said prior work.

The principle extends though. Even if you only see the paper and realize you need to incorporate it into your Discussion or Introduction, the length of time the paper is available with respect to the IF window matters. If there were just some way journals could extend that window between general availability of a work and the expiration of the IF window then this would, statistically, boost the number of citations. If the clock doesn’t start running until the paper has been visible for 6 months….say, how could we do that? How….? Hmm.

Ah yes. Let it languish in the “online first” archive! Brilliant! It goes up on PubMed and people can read the paper. Get their experiments rolling. Write the findings into their Intros and Discussions.


I agree with the Scholarly Kitchen post that we don’t know that this is why some journals keep such a hefty backlog of articles in their pre-print queue. Having watched a handful of my favorite journals maintain anywhere from six to thirteen month offsets over periods of many months to years, however, I have my suspicions. The journals I pay attention to have maintained their offsets over at least a decade if you assume the lower bound of about 4-5 months (and trust that my spot-checking is valid as a general rule). The idea that they do this to avoid publication dryspells is nonsense, they have plenty of accepted articles on a frequent enough basis so that they could trim down to, say, 2-3 months of backlog. So there must be another reason.

As you are aware, calls to boycott submitting articles to, and reviewing manuscripts for, journals published by Elsevier are growing. The Cost of Knowledge petition stands at 4694 as of this writing. Of these, some 623 signatories have identified themselves as being in Biology, 380 in Social Sciences, 260 in Medicine and 126 in Psychology.

These disciplines cover the sciences and the scientists I know best, including my own work.

There seems to be some dismay in certain quarters with the participation of people in these disciplines. This is based, I would assume, on a seat of the pants idea that there are way more active scientists in these disciplines than seems represented by signatures on the petition. Also, I surmise, based on the host of journals published by Elsevier that cater to various aspects of these broader disciplinary categories.

Others have pointed out that in certain cases, such as Cell or The Lancet, there is no way a set of authors are going to give up the cachet of a possible paper acceptance in that particular journal.

I want to address some more quotidian concerns.

I already mentioned the notion of academic societies which benefit from their relationship with Elsevier. Like it or not, they host a LOT of society journals. Sometimes this is just ego and sometimes the society might really be making some cash from the relationship. For those scientists who really love the notion that their society has its own journal, this needs to be addressed before they will get on board with a boycott.

Moving along, we deal with the considerations that go into selection of a journal to publish in. Considerations that are not driven by Impact Factor, since within the class of society journals such concerns fade. The IFs are all really close, even if they do like to brag about incremental improvement, or about their numerical advantage over a competitor. Yes, 4.5 is better than 4.3 but c’mon. Other factors come into play.

Cost: Somewhere or other (was it Dr. Zen?) someone in this discussion brought up the notion that paying Open Access fees upfront is a big stumbling block. Yes, in one way or another the taxpayers (state and federal in the US) are footing the bill, but from the perspective of the PI, increasing library fees to the University don’t matter. What matters are the Direct Cost budgets of her laboratory (and possibly the Institutional funds budget). Sure, OA journals allow you to ask for a fee waiver…but who knows if they will give it? Why would you go through all that work (and time) to get the manuscript accepted just to have to pull it if they refuse to let you skip out on the fee? I mean, heck, $1,000 is always handier to have in the lab than being shunted off to the OA publisher, right? I don’t care how many R01s you have…

Convenience: The online manuscript handling system of Elsevier is good. I’ve had experience with a few others, ScholarOne-based systems, etc. Just heard a complaint about the PLoS system on the Twitts the other day, as it happens. Bottom line is that the Elsevier one works really well. Easy file uploading, fast PDF creation, reasonably workable input of all the extraneous info….and good progress/status updating as the manuscript undergoes peer review and decision-making at the editorial offices. This is not the case for all other publishers/journals. And what can I say? I like easy. I don’t like fighting with file uploads. I don’t like constantly having to email the managing editorial team to find out if my fucking manuscript is out for review, back from review, sitting on the Editor’s desk or what. And yeah, we didn’t have that info back in the day. And knowing the first two reviews are in but the journal is still waiting for the third one doesn’t really change a damn thing. But you know what? I like to see the progress.

Audience: One of the first things I do, when considering submitting to a journal in which I do not usually publish, is to keyword search for recent articles. Do they publish stuff like the one we’re about to submit? If yes, then I feel more comfortable in a general sense about editorial decision making and the selection of relevant reviewers. If no…well, why waste the time? Why start off with the dual problem of arguing the merits of both the specific paper and the general topic of interest? Now note, this is not always a valid assumption. I have a clear example in which the journal description seemed to encompass our work…but if you looked at the papers they generally published you’d think we were crazy to submit there. “But they only publish BadgerDigging Studies, not a BunnyHopper to be seen” you’d say. Well, turns out we didn’t have one lick of trouble about topic “fit” from that journal. Go figure. But even with that experience under my belt, I’m still gonna hesitate.

Editor (friendly): Yes, yes, I frequently point out how stupid and wrong we are when trying to game out who is going to respond favorably to our grant proposals. Same thing holds for paper review. But still. I can’t help but feel that I’ve gotten more editorial rulings going my way from editors that I know personally, know they know my work/me and suspect that they are at least 51% favorable towards me/my submissions. The hit rate from people that I’m pretty convinced don’t really know who I am seems somewhat reduced. So yeah, you are damn right I am going to scrutinize the Editorial board of a journal for signs of a friendly name.

Editor (unfriendly): Again, I know it is a fool’s errand. I know that just because I think someone is critical of our work, or has a personal dislike for me, this means jack-all. Heck, I’ve probably given really nice manuscript and/or grant reviews to scientists who I personally think are complete jerks, myself. But still… it is common enough that biomedical scientists see pernicious payback lurking behind every corner. Perhaps with justification?

I don’t intend to just stay mad, but to get fucken EVEN the next time I’m reviewing one of theirs. Which will fucken happen. It will.

So yeah, many biomedical scientists are going to put “getting the damn paper accepted already” way up above any considerations about Elsevier’s support for closing off access to tax-payer funded science. Because they feel it is not their fight, yes, but also because it has the potential to cost ’em. This is going to have to be addressed.

On a personal note, PLoSONE currently fails the test. There are some papers starting to come out in the substance abuse and behavioral pharmacology areas. Some. But not many. And it is hard to get a serious feel for the whole mystique over there about “solid study, not concerned about impact”. Because opinions vary on what represents a solid demonstration. Considerably. Then I look at the list of editors that claim to handle substance abuse. It isn’t extensive and I note at least a few…..strong personalities. Surely these individuals are going to trigger friendly/unfriendly issues for different scientists in their fields. Even worse, however, is the fact that many of them are not listed as having edited any papers published in PLoSONE yet. And that would totally concern me if I were considering submitting to that journal instead of one of the many Elsevier titles that might work for us.

Dr Becca has a post up in which she ponders a perennial issue for newly established labs….and many other labs as well.

The gist is that which journal you manage to get your work published in is absolutely a career concern. Absolutely. For any newcomers to the academic publishing game that stumbled on this post, suffice it to say that there are many journal ranking systems. These range from the formal to the generally-accepted to the highly personal. Scientists, being the people that they are, tend to take shortcuts when evaluating the quality of someone else’s work, particularly once it ranges afield from the highly specific disciplines which the reviewing individual inhabits. One such shortcut is inferring something about the quality of a particular academic paper by knowledge of the reputation of the journal in which it is published.

One is also judged, however, by the rate at which one publishes and, correspondingly, the total number of publications given a particular career status.

Generally speaking there will be an inverse correlation between rate (or total number) and the status of the journals in which the manuscripts are published.

This is for many reasons, ranging from the fact that a higher-profile work is (generally) going to require more work. More time spent in the lab. More experiments. More analysis. More people’s expertise. Also from the fact that the manuscript may need to be submitted to more higher-profile journals (in sequence, never simultaneously), on average, to get accepted than to get picked up by so-called lesser journals.

This negative correlation of profile/reputation with publishing rate is Dr Becca’s issue of the day. When to keep bashing your head against the “high profile journal” wall and when to decide that the goal of “just getting it published” somewhere/anywhere* takes priority.

I am one who advises balance. The balance that says “don’t bet the entire farm” on unknowables like GlamourMag acceptance. The balance that says to make sure a certain minimum publication rate is obtained. And for a newly transitioning scientist, I think that “at least one pub per year” needs to be the target. And I mean, per year, in print, pulled up in PubMed for that publishing year. Not an average, if you can help it. Not Epub in 2011, print in 2012. Again, if you can help it.

The target. This is not necessarily going to be sufficient…and in some cases a gap of a year or two can be okay. But I think this is a good general rubric for triaging your submission strategy.

It isn’t that one C/N/S pub won’t trump a sustained pub rate and a half-dozen society level publications. It will. The problem is that it is a far from certain outcome. So if you end up with a three-year publication gap, no C/N/S pubs and you end up dumping the data in a half-dozen society level journal pubs anyway…well, in grant-getting and tenure-awarding terms, a 2-3 year publication gap with “yeah but NOW we’re submitting this stuff to dump journals like wild fire so all good, k?” just isn’t smart.

My advice is to take care of business first, get that 1-2 pub per year in bare minimum or halfway decent journals track going, and then to think about layering high-profile risky business on top of that.

Dang, I got all distracted. What I really meant to blog about was a certain type of comment popping up in Dr. Becca’s thread.

The kind of comment that I think pushes the commenter’s pet agenda, vis a vis academic publishing, over what is actually good advice for someone that is newly transitioned to an independent laboratory position. I have my own issues when it comes to this stuff. I think the reification of IF and the pursuit of GlamourMag publication is absolutely ruining the pursuit of knowledge and academic science.

But it is absolutely foolish and bad mentoring to ignore the realities of our careers and the judging of our talents and accomplishments. I’d rather nobody *ever* submitted to a journal solely because of the journal’s reputation. I long for the end of each and every academic journal in which the editors are anything other than actual working scientists. The professional journal “editors” will be, as they say, the first against the wall come the revolution in my glorious future. Etc.

But you would never catch me telling someone in Dr. Becca’s position that she should just ignore IF and journal status and publish everything in the easiest venue to get accepted. Never.

You wackaloon Open Access Nazdrul and followers need to dissociate your theology from your advice giving.

*there are minimum standards. “Peer Reviewed” is one such standard. I would argue that “indexed in PubMed” (or your relevant major database) is another such. Also, my arbitrary sub-field snobbery** starts at an Impact Factor of around 1.something…..however I notice that the IF of my touchstone journals for “the bottom” have inched up over the years. Perhaps “2” is my lower bound now.

**see? for some fields this is snobbery. for others, a ridiculous, snarky statement. Are you getting the message yet?

This is just pathetic and sad

December 16, 2010

I happened to be on a journal’s website trying to download a paper just recently when I noticed the following prominent icon.

C’mon now. Why bother? 2.8 is perfectly respectable, I’m not capping on that. But you’d think they’d have some logic in there to forgo the bragging icon if the change was less than, say, a full point.
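The “logic” I have in mind could be as dumb as this. A hypothetical sketch; the threshold, function name, and numbers are all invented, not anything a real journal site uses.

```python
# Hypothetical "only brag when it's worth bragging" rule for the
# Impact-Factor-went-up icon. Threshold is the full-point bar suggested above.

BRAG_THRESHOLD = 1.0  # require at least a full point of year-over-year gain

def show_if_badge(old_if: float, new_if: float) -> bool:
    """Display the bragging icon only when the IF change clears the bar."""
    return (new_if - old_if) >= BRAG_THRESHOLD

print(show_if_badge(2.6, 2.8))  # False -- a 0.2 bump; keep it to yourself
print(show_if_badge(2.8, 4.0))  # True  -- now that's worth an icon
```

Two lines of logic, and the web team never embarrasses the journal over a 0.2 bump again.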