Stupid JIF tricks, take eleven
November 3, 2020
As my longer-term Readers are well aware, my laboratory does not play in the Glam arena. We publish in society-type journals, and not usually the fancier ones, either. This is a category thing, in addition to my stubbornness. I have occasionally pointed out how my papers that were rejected summarily by the fancier society journals tend to go on to get cited better than those journals' median and often their mean (i.e., their JIF) in the critical window where it counts. This, I will note, is at journals with only slightly better JIFs than the vast herd of workmanlike journals in my fields of interest, i.e., with JIFs from ~2-4.
There are a lot of journals packed into this space. For the real JIF-jockeys and certainly the Glam hounds, the difference between a JIF 2 and JIF 4 journal is imperceptible. Some are not even impressed in the JIF 5-6 zone where the herd starts to thin out a little bit.
For those of us who publish regularly in the herd, I suppose there might be some slight idea that journals toward the JIF 4-5 range are better than journals in the JIF 2-3 range. Very slight.
And if you look at who is on editorial boards, who is EIC, who is AE and who is publishing at least semi-regularly in these journals you would be hard pressed to discern any real difference.
Yet, as I’ve also often related, the people who run these journals all seem to care. They are forever pleading with their Editorial Boards to “send some of your work here”. In some cases, for the slightly fancier society journals with airs, they want you to “send your best work here”….naturally they are talking here to the demiGlam and Glam hounds. Sometimes at the annual Editorial Board meeting the EIC will get more explicit about the JIF, sometimes not, but we all know what they mean.
And to put a finer point on it, the EIC often mentions specific journals that they feel they are in competition with.
Here’s what puzzles me. Set aside the fact that a few very highly cited papers would jazz up the JIF for these lowly journals if the EIC, the AEs, or a few choice EB members were to actually take one for the team (and they never do). The ONLY things I can see that these journals can compete on are 1) rapid and easy acceptance without a lot of demands for more data (really? at JIF 2? no.) and 2) speed of publication after acceptance.
My experience over the years is that journals of interchangeable JIF levels vary widely in the speed of publication after acceptance. Some have online pre-print queues that stretch for months. In some cases, over a year. A YEAR to wait for a JIF 3 paper to come out “in print”? Ridiculous! In other cases it can be startlingly fast. As in assigned to a “print” issue within two or three months of the acceptance. That seems…..better.
So I often wonder how this system is not more dynamic and free-market-y. I would think that as the pre-print list stretches out to 4 months and beyond, people would stop submitting papers there. The journal would then have to shrink their list as the input slows down. Conversely, as a journal starts to head towards only 1/4 of an issue in the pre-print list, authors would submit there preferentially, trying to get in on the speed.
Round and round it would go, but the ecosphere should be more or less in balance, long term. Right?
Wait, HOW is the JIF calculated?
April 29, 2019
I still don’t understand the calculation of Journal Impact Factor. Or, I didn’t until today. Not completely. I mean, yes, I had the basic idea that it was citations divided by the number of citable articles published in the past two years. However, when I have written blog posts about how you should evaluate your own articles in this context (e.g., this one), I didn’t get it quite right. The definition from the source:
the impact factor of a journal is calculated by dividing the number of current year citations to the source items published in that journal during the previous two years
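To make that concrete, here is a minimal sketch of the calculation in Python, with entirely made-up numbers (the real inputs are the counts the JCR reports, not anything you would compute yourself):

```python
# Minimal sketch of the formula quoted above, with hypothetical numbers.
# JIF for year Y = citations received in year Y to items the journal published in
# years Y-1 and Y-2, divided by the number of citable items it published in Y-1 and Y-2.

def journal_impact_factor(citations_in_jcr_year: int, citable_items_prior_two_years: int) -> float:
    return citations_in_jcr_year / citable_items_prior_two_years

# Hypothetical journal: 300 citable items across 2015-2016, cited 1,791 times during 2017.
print(journal_impact_factor(1791, 300))  # 5.97
```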
So when we assess how our own article contributes to the journal impact factor of the journal it was published in, we need to look at citations in the second and third calendar years. It will never count the first calendar year of publication, somewhat getting around the question of whether something has been available to be seen and cited for a full calendar year before it “counts” for JIF purposes. So when I wrote:
The fun game is to take a look at the articles that you’ve had rejected at a given journal (particularly when rejection was on impact grounds) but subsequently published elsewhere. You can take your citations in the “JCR” (aka second) year of the two years after it was published and match that up with the citation distribution of the journal that originally rejected your work. In the past, if you met the JIF number, you could be satisfied they blew it and that your article indeed had impact worthy of their journal. Now you can take it a step farther because you can get a better idea of when your article beat the median. Even if your actual citations are below the JIF of the journal that rejected you, your article may have been one that would have boosted their JIF by beating the median.
I don’t think I fully appreciated that you can look at citations in the second and third calendar years and totally ignore the first year of citations. Look at the second and third calendar years of citations individually, or average them together as a shortcut. Either way, if you want to know whether your paper is boosting the JIF of the journal, those are the citations to focus on. Certainly, when I did the below-mentioned analysis, I used to think I had to look at the first year and would grumble to myself about how it wasn’t fair, the paper was published in the second half of the year, etc., and that the second year “really counted”. Well, I was actually closer with my prior excuse-making than I realized. You look at the second and third years.
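A little sketch of that self-audit, with a hypothetical paper and invented citation counts, just to pin down which years matter:

```python
# Sketch of the self-audit described above, with hypothetical numbers. For a paper
# published in calendar year P, only the citations it receives in P+1 and P+2 ever
# count toward a JIF (the P+1 JIF edition and the P+2 edition, respectively).

def jif_relevant_citations(pub_year: int, citations_by_year: dict) -> dict:
    return {y: citations_by_year.get(y, 0) for y in (pub_year + 1, pub_year + 2)}

# Hypothetical paper published in 2016, with its citation history by calendar year:
cites = {2016: 2, 2017: 9, 2018: 12, 2019: 8}
relevant = jif_relevant_citations(2016, cites)
print(relevant)                    # {2017: 9, 2018: 12}
print(sum(relevant.values()) / 2)  # 10.5 -- compare against the rejecting journal's JIF
                                   # and, better, its article citation median
```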
Obviously this also applies to the axe grinding part of your analysis of your papers. I was speaking with two colleagues recently, different details but basically it boils down to being a little down in the dumps about academic disrespect. As you know Dear Reader one of the things that I detest most about the way academic science behaves is the constant assault on our belongingness. There are many forces that try to tell you that you suck and your science is useless and you don’t really deserve to have a long and funded career doing science. The much discussed Imposter Syndrome arises from this and is accelerated by it. I like to fight back against that, and give you tools to understand that the criticisms are nonsense. One of these forces is that of journal Impact Factor and the struggle to get your manuscripts accepted in higher and higher JIF venues.
If you are anything like me you may have a journal or two that is seemingly interested in publishing the kind of work you do, but for some reason you juuuuuust miss the threshold for easy acceptance. Leading to frequent rejection. In my case it is invariably over perceived impact with a side helping of “lacks mechanism”. Now these just-miss kinds of journals have to be within the conceivable space to justify getting analytical about it. I’m not talking about stretching way above your usual paygrade. In our case we get things into this one particular journal occasionally. More importantly, there are other people who get stuff accepted that is not clearly different from ours on the key dimensions on which ours are rejected. So I am pretty confident it is a journal that should seriously consider our submissions (and to their credit, ours almost inevitably do go out for review).
This has been going on for quite some time and I have a pretty decent sample of our manuscripts that have been rejected at this journal, published elsewhere essentially unchanged (beyond the minor-revisions type of detail) and that have had time to accumulate the first three years of citations. This journal is seriously missing the JIF boat on many of our submissions. The best one beat their JIF by a factor of 4-5 in some years and has settled into a sustained citation rate of about double theirs. It was published in a journal with a JIF about two-thirds as high. I have numerous other examples of manuscripts rejected on “impact” grounds that at least met that journal’s JIF and in most cases ran 1.5-3x the JIF in the critical second and third calendar years after publication.
Fascinatingly, a couple of the articles that were accepted by this journal are kind of under-performing, considering their conceits, our usual citation rates for this type of work, etc.
The point of this axe grinding is to encourage you to take a similar quantitative look at your own work if you should happen to be feeling down in the dumps after another insult directed at you by the system. This is not for external bragging, nobody gives a crap about the behind-the-curtain reality of JIF, h-index and the like. You aren’t going to convince anyone that your work is better just because it outpoints the JIF of a journal it didn’t get published in. Editors at these journals are going to continue to wring their hands about their JIF, refuse to face the facts that their conceits about what “belongs” and “is high impact” in their journal are flawed and continue to reject your papers that would help their JIF at the same rate. It’s not about that.
This is about your internal dialogue and your Imposter Syndrome. If this helps, use it.
Journal Citation Metrics: Bringing the Distributions
July 3, 2018
The latest Journal Citation Reports has been released, updating us on the latest JIF for our favorite journals. New for this year is….
…..drumroll…….
provision of the distribution of citations per cited item. At least for the 2017 year.
The data … represent citation activity in 2017 to items published in the journal in the prior two years.
This is awesome! Let’s dive right in (click to enlarge the graphs). The JIF, btw, is 5.970.
Oh, now this IS a pretty distribution, is it not? No nasty review articles to muck it up and the “other” category (editorials?) is minimal. One glaring omission is that there doesn’t appear to be a bar for 0 citations; surely some articles are not cited. This makes interpretation of the article citation median (in this case 5) a bit tricky. (For one of the distributions that follows, I figured the missing 0-citation articles could constitute anywhere from 17 to 81 items. A big range.)
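For the curious, here is a hedged sketch of one way to do that kind of back-of-envelope, using an invented distribution rather than the journal’s actual one; the point is only that a reported median puts loose bounds on how many uncited items could be hiding off the left edge of the histogram:

```python
# Hedged sketch: the published histogram omits the 0-citation bar, but the reported
# article citation median constrains how many uncited items there could be.
# All counts below are invented for illustration.
import statistics

def zero_counts_consistent_with_median(visible_counts: dict, reported_median: float):
    """visible_counts maps a citation count (>= 1) to the number of items with that count.
    Returns the (min, max) number of hidden 0-citation items that keep the median."""
    visible = sorted(c for c, n in visible_counts.items() for _ in range(n))
    # Search cap of 2x the visible items is arbitrary but generous for this purpose.
    consistent = [z for z in range(0, 2 * len(visible))
                  if statistics.median([0] * z + visible) == reported_median]
    return (min(consistent), max(consistent)) if consistent else None

# Toy distribution of items with at least one citation:
toy = {1: 40, 3: 60, 5: 80, 8: 90, 15: 30, 40: 5}
print(zero_counts_consistent_with_median(toy, 5))  # a surprisingly wide range of zeros fits
```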
Still, the skew in the distribution is clear and familiar to anyone who has been around the JIF-critic voices for any length of time. Rare highly-cited articles skew just about every JIF upward from what your mind thinks it should be, i.e., the median for the journal. Still, no biggie, right? 5 versus 5.970 is not all that meaningful. If your article in this journal from the past two years got 4-6 citations in 2017 you are doing great, right there in the middle.
Let’s check another Journal….
Ugly. Look at all those “Other” items. And the skew from the highly-cited items, including some reviews, is worse. JIF is 11.982 and the article citation median is 7. So among other things, many authors are going to feel like they impostered their way into this journal since a large part of the distribution is going to fall under the JIF. Don’t feel bad! Even if you got only 9-11 citations, you are above the median and with 6-8 you are right there in the hunt.
Not too horrible looking, although clearly the review articles contribute a big skew, possibly even more than in the second journal, where the reviews are seemingly more evenly distributed in terms of citations. Now, I will admit I am a little surprised that reviews don’t do even better compared with primary research articles. It seems like they would get cited more than this (for both of these journals) to me. The article citation median is 4 and the JIF is 6.544, making for a slightly greater gap than the first one, if you are trying to bench race your citations against the “typical” for the journal.
The first takeaway message from these new distributions, viewed along with the JIF, is that you can get a much better idea of how your articles are faring (in your favorite journals; these are just three) compared to the expected value for that journal. Sure, sure, we all knew at some level that the distribution contributing to JIF was skewed and that the median would be a better number to reflect the colloquial sense of typical, average performance for a journal.
The other takeaway is a bit more negative and self-indulgent. I do it so I’ll give you cover for the same.
The fun game is to take a look at the articles that you’ve had rejected at a given journal (particularly when rejection was on impact grounds) but subsequently published elsewhere. You can take your citations in the “JCR” (aka second) year of the two years after it was published and match that up with the citation distribution of the journal that originally rejected your work. In the past, if you met the JIF number, you could be satisfied they blew it and that your article indeed had impact worthy of their journal. Now you can take it a step farther because you can get a better idea of when your article beat the median. Even if your actual citations are below the JIF of the journal that rejected you, your article may have been one that would have boosted their JIF by beating the median.
Still with me, fellow axe-grinders?
Every editorial staff I’ve ever seen talk about journal business in earnest is concerned about raising the JIF. I don’t care how humble or soaring the baseline, they all want to improve. And they all want to beat some nearby competitors. Which means that if they have any sense at all, they are concerned about decreasing the uncited dogs and increasing the articles that will be cited in the JCR year above their JIF. Hopefully these staffs also understand that they should be beating their median citation year over year to improve. I’m not holding my breath on that one. But this new publication of distributions (and the associated chit chat around the campfire) may help with that.
Final snark.
I once heard someone concerned with JIF of a journal insist that they were not “systematically overlooking good papers” meaning, in context, those that would boost their JIF. The rationale for this was that the manuscripts they had rejected were subsequently published in journals with lower JIFs. This is a fundamental misunderstanding. Of course most articles rejected at one JIF level eventually get published down-market. Of course they do. This has nothing to do with the citations they eventually accumulate. And if anything, the slight downgrade in journal cachet might mean that the actual citations slightly under-represent what would have occurred at the higher JIF journal, had the manuscript been accepted there. If Editorial Boards are worried that they might be letting bigger fish get away, they need to look at the actual citations of their rejects, once published elsewhere. And, back to the story of the day, those actual citations need to be compared with the median for article citations rather than the JIF.
JIF notes 2016
June 27, 2016
If it’s late June, it must be time for the latest Journal Impact Factors to be announced. (Last year’s notes are here.)
Nature Neuroscience is confirming its dominance over Neuron with upward and downward trends, respectively, widening the gap.
Biological Psychiatry continues to skyrocket, up to 11.2. All pretensions from Neuropsychopharmacology of keeping pace are over: a third straight year of declines for the ACNP journal lands it at 6.4. Looks like the 2011-2012 inflation was simply unsustainable for NPP. BP is getting it done, though. No sign of a letup for the past 4 years. Nicely done, BP, and any of y’all who happen to have published there in the past half-decade.
I’ve been taking whacks at the Journal of Neuroscience all year so I almost feel like this is pile-on. But the long steady trend has dropped it below a 6, listed at 5.9 this year. Oy vey.
Looks like Addiction Biology has finally overreached with their JIF strategy. It jumped up to the 5.9 level in 2012-2013 but couldn’t sustain it: two consecutive years of declines lower it to 4.5. Even worse, it has surrendered the top slot in the Substance Abuse category. As we know, this particular journal maintains an insanely long pre-print queue, with some papers being assigned to print two whole calendar years after appearing online. Will anyone put up with that, now that the JIF is declining and it isn’t even best-in-category anymore? I think this is not good for AB.
A number of journals in the JIF 4-6 category that I follow are holding steady over the past several years, that’s good to see.
Probably the most striking observation is what appears to be a relatively consistent downward trend for JIF 2-4 journals that I watch. These were JIFs that have generally trended upward (slowly, slowly) from 2006 or so until the past couple of years. I assumed this was a reflection of more scientific articles being published and therefore more citations available. Perhaps this deflationary period is temporary. Or perhaps it reflects journals that I follow not keeping up with the times in terms of content?
As always, interested to hear what is going on with the journals in the fields you follow, folks. Have at it in the comments.
Ruining scholarship, one bad mentor at a time
March 22, 2016
via comment from A Salty Scientist:
When you search for papers on PubMed, it usually gives the results in chronological order so many new but irrelevant papers are on the top. When you search papers on Google Scholar, it usually gives results ranked by citations, so will miss the newest exciting finding. Students in my lab recently made a very simple but useful tool Gnosis. It ranks all the PubMed hits by (Impact Factor of the journal + Year), so you get the newest and most important papers first.
Emphasis added, as if I need to. You see, relevant and important papers are indexed by the journal impact factor. Of course.
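Just to spell out what that quoted scoring rule actually does, here is a toy illustration with invented records (not the tool itself, just the “IF + year” score as described):

```python
# Purely to illustrate the quoted scoring rule (journal IF + publication year)
# with invented records.
papers = [
    {"title": "Glam paper from 2009",        "year": 2009, "journal_if": 31.4},
    {"title": "Society-journal paper, 2016", "year": 2016, "journal_if": 3.1},
]
ranked = sorted(papers, key=lambda p: p["journal_if"] + p["year"], reverse=True)
print([p["title"] for p in ranked])
# The 2009 Glam paper still ranks first: its 28-point IF edge outweighs the 7-year age gap.
```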
Seriously? Payment for citations?
August 14, 2015
A Reader submitted this gem of a spam email:
We are giving away $100 or more in rewards for citing us in your publication! Earn $100 or more based on the journal’s impact factor (IF). This voucher can be redeemed your next order at [Company] and can be used in conjunction with our ongoing promotions!
How do we determine your reward?
If you published a paper in Science (IF = 30) and cite [Company], you will be entitled to a voucher with a face value of $3,000 upon notification of the publication (PMID).
This is a new one on me.
JIF notes
June 24, 2015
More on NPP’s pursuit of BP is here.
see this for reference
Additional Reading:
The 2012 JIFs are out
Subdiscipline categories and JIF
Is the J Neuro policy banning Supplemental Materials backfiring?
January 20, 2015
As you will recall, I was very happy when the Journal of Neuroscience decided to ban the inclusion of any Supplemental Materials in articles considered for publication. That move took place back in 2010.
Dr. Becca, however, made the following observation on a recent post:
I’m done submitting to J Neuro. The combination of endless experiment requests due to unlimited space and no supp info,
I find that to be a fascinating comment. It suggests that perhaps the J Neuro policy has been ineffectual, or even has backfired.
To be honest, I can’t recall that I have noticed anything in a J Neuro article that I’ve read in the past few years that reminded me of this policy shift one way or the other.
How about you, Dear Reader? Noticed any changes that appear to be related to this banning of Supplemental Materials?
For that matter, has the banning of Supplemental Materials altered your perception of the science that is published in that journal?
H-index
December 18, 2014
I can’t think of a time when seeing someone’s h-index created a discordant view of their impact. Or, for that matter, when reviewing someone’s annual cites was surprising.
I just think the Gestalt impression you generate about a scientist is going to correlate with most quantification measures.
Unless there are weird outliers, I suppose. But if there is something peculiar about a given scientist’s publications that skews one particular measure of awesomeness….wouldn’t someone being presented that measure discount accordingly?
Like if a h-index was boosted by a host of middle author contributions to a much more highly cited domain than the one most people associate you with? That sort of thing.
The “whole point” of Supplementary Data
December 10, 2014
Our good blog friend DJMH offered up the following on a post by Odyssey:
Because the whole point of supplemental material is that the publisher doesn’t want to spend a dime supporting it
This is nonsense. This is not “the whole point”. This is peripheral to the real point.
In point of fact, the real reason GlamourMags demand endless amounts of supplementary data is to squeeze out the competition journals. They do this by denying those other journals the data that would otherwise be offered up as additional publications. Don’t believe it? Take a look through some issues of Science and Nature from the late 1960s through maybe the mid 1970s. The research publications were barely Brief Communications. A single figure, maybe two. And no associated “Supplemental Materials”, either. And then, if you are clever, you will find the real paper that was subsequently published in a totally different journal. A real journal. With all of the meat of the study that was promised by the teaser in the Glam Mag fleshed out.
Glamour wised up and figured out that with the “Supplementary Materials” scam they can lock up the data that used to be put in another journal. This has the effect of both damping citations of that specific material and collecting what citations there are to themselves. All without having to treble or quadruple the size of their print journal.
Nice little scam to increase their Journal Impact Factor distance from the competition.
A tweet from @babs_mph sent me back to an older thread where Rockey introduced the new Biosketch concept. One “Senior investigator” commented:
For those who wonder where this idea came from, please see the commentary by Deputy Director Tabak and Director Collins (Nature 505, 612–613, January 2014) on the issue of the reproducibility of results. One part of the commentary suggests that scientists may be tempted to overstate conclusions in order to get papers published in high profile journals. The commentary adds “NIH is contemplating modifying the format of its ‘biographical sketch’ form, which grant applicants are required to complete, to emphasize the significance of advances resulting from work in which the applicant participated, and to delineate the part played by the applicant. Other organizations such as the Howard Hughes Medical Institute have used this format and found it more revealing of actual contributions to science than the traditional list of unannotated publications.”
Here’s Collins and Tabak, 2014 in freely available PMC format. The lead in to the above referenced passage is:
Perhaps the most vexed issue is the academic incentive system. It currently overemphasizes publishing in high-profile journals. No doubt worsened by current budgetary woes, this encourages rapid submission of research findings to the detriment of careful replication. To address this, the NIH is contemplating…
Hmmm. So by changing this, the ability on grant applications to say something like:
“Yeah, we got totally scooped out of a Nature paper because we didn’t rush some data out before it was ready but look, our much better paper that came out in our society journal 18 mo later was really the seminal discovery, we swear. So even though the entire world gives primary credit to our scoopers, you should give us this grant now.”
is supposed to totally alter the dynamics of the “vexed issue” of the academic incentive system.
Right guys. Right.
What the NHLBI paper metrics data mean for NIH grant review
February 21, 2014
In reflecting on the profound lack of association of grant percentile rank with the citations and quantity of the resulting papers, I am struck that it reinforces a point made by YHN about grant review.
I have never been a huge fan of the Approach criterion. Or, more accurately, of how it is reviewed in practice. Review of the specific research plan can bog down in many areas. A review is often derailed into critique of the applicant’s failure to appropriately consider all the alternatives, into disagreement over predictions that can only be resolved empirically, into endless ticky-tack kvetching over buffer concentrations, into a desire for exacting specification of each and every control….. I am skeptical. I am skeptical that identifying these things plays any real role in the resulting science. First, because much of the criticism over the specifics of the approach vanishes when you consider that the PI is a highly trained scientist who will work out the real science during the conduct of same. Like we all do. For anticipated and unanticipated problems that arise. Second, because much of this Approach review is rightfully the domain of the peer review of scientific manuscripts.
I am particularly unimpressed by the shared delusion that the grant revision process by which the PI “responds appropriately” to the concerns of three reviewers alters the resulting science in a specific way either. Because of the above factors and because the grant is not a contract. The PI can feel free to change her application to meet reviewer comments and then, if funded, go on to do the science exactly how she proposed in the first place. Or, more likely, do the science as dictated by everything that occurs in the field in the years after the original study section critique was offered.
The Approach criterion score is the one that is most correlated with the eventual voted priority score, as we’ve seen in data offered up by the NIH in the past.
I would argue that a lot of the Approach criticism that I don’t like is an attempt to predict the future of the papers. To predict the impact and to predict the relative productivity. Criticism of the Approach often sounds to me like “This won’t be publishable unless they do X…..” or “this won’t be interpretable, unless they do Y instead….” or “nobody will cite this crap result unless they do this instead of that“.
It is a version of the deep motivator of review behavior. An unstated (or sometimes explicit) fear that the project described in the grant will fail, if the PI does not write different things in the application. The presumption is that if the PI does (or did) write the application a little bit differently in terms of the specific experiments and conditions, that all would be well.
So this also says that when Approach is given a congratulatory review, the panel members are predicting that the resulting papers will be of high impact…and plentiful.
The NHLBI data say this is utter nonsense.
Peer review of NIH grants is not good at predicting, within the historical fundable zone of about the top 35% of applications, the productivity and citation impact of the resulting science.
What the NHLBI data cannot address is a more subtle question. The peer review process decides which specific proposals get funded. Which subtopic domains, in what quantity, with which models and approaches… and there is no good way to assess the relative wisdom of this. For example, a grant on heroin may produce the same number of papers and citations as a grant on cocaine. A given program on cocaine using mouse models may produce approximately the same bibliometric outcome as one using humans. Yet the real world functional impact may be very different.
I don’t know how we could determine the “correct” balance but I think we can introspect that peer review can predict topic domain and the research models a lot better than it can predict citations and paper count. In my experience when a grant is on cocaine, the PI tends to spend most of her effort on cocaine, not heroin. When the grant is for human fMRI imaging, it is rare the PI pulls a switcheroo and works on fruit flies. These general research domain issues are a lot more predictable outcome than the impact of the resulting papers, in my estimation.
This leads to the inevitable conclusion that grant peer review should focus on the things that it can affect and not on the things that it cannot. Significance. Aka, “The Big Picture”. Peer review should wrestle over the relative merits of the overall topic domain, the research models and the general space of the experiments. It should de-emphasize the nitpicking of the experimental plan.
NHLBI data shows grant percentile does not predict paper bibliometrics
February 20, 2014
A reader pointed me to this News Focus in Science which referred to Danthi et al, 2014.
Danthi N, Wu CO, Shi P, Lauer M. Percentile ranking and citation impact of a large cohort of National Heart, Lung, and Blood Institute-funded cardiovascular R01 grants. Circ Res. 2014 Feb 14;114(4):600-6. doi: 10.1161/CIRCRESAHA.114.302656. Epub 2014 Jan 9.
I think Figure 2 makes the point, even without knowing much about the particulars
and the last part of the Abstract makes it clear.
We found no association between percentile rankings and citation metrics; the absence of association persisted even after accounting for calendar time, grant duration, number of grants acknowledged per paper, number of authors per paper, early investigator status, human versus nonhuman focus, and institutional funding. An exploratory machine learning analysis suggested that grants with the best percentile rankings did yield more maximally cited papers.
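For readers who want the flavor of what “no association” means here, a hedged sketch of the sort of rank-correlation test at issue, run on simulated data rather than the authors’ actual dataset:

```python
# Hedged sketch of the kind of association being tested (not the authors' actual
# analysis or data): does percentile rank at review predict later citations?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
percentile = rng.uniform(1, 35, size=500)              # roughly the funded range at issue
citations = rng.negative_binomial(2, 0.02, size=500)   # skewed counts, independent of rank

rho, p = stats.spearmanr(percentile, citations)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")        # rho near 0 when rank carries no signal
```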
The only thing surprising in all of this was a quote attributed to the senior author Michael Lauer in the News Focus piece.
“Peer review should be able to tell us what research projects will have the biggest impacts,” Lauer contends. “In fact, we explicitly tell scientists it’s one of the main criteria for review. But what we found is quite remarkable. Peer review is not predicting outcomes at all. And that’s quite disconcerting.”
Lauer is head of the Division of Cardiovascular Research at the NHLBI and has been there since 2007. Long enough to know what time it is. More than long enough.
The take-home message is exceptionally clear. It is a message that most scientists who have stopped to think about it for half a second have already arrived upon.
Science is unpredictable.
Addendum: I should probably point out for those readers who are not familiar with the whole NIH Grant system that the major unknown here is the fate of unfunded projects. It could very well be the case that the ones that manage to win funding do not differ much but the ones that are kept from funding would have failed miserably, had they been funded. Obviously we can’t know this until the NIH decides to do a study in which they randomly pick up grants across the entire distribution of priority scores. If I was a betting man I’d have to lay even odds on the upper and lower halves of the score distribution 1) not differing vs 2) upper half does better in terms of paper metrics. I really don’t have a firm prediction, I could see it either way.
The distribution of citations in the context of Journal Impact Factor
February 13, 2014
Nature editor Noah Gray Twittered a link to a 2003 Editorial in Nature Neuroscience.
The key takeaway is in the figure (which Noah also twittered).
In 2003 the JIF for Nature Neuroscience was 15.14, for J Neuro 8.05 and for Brain Research 2.474. Nature itself was 30.98.
Plenty of people refer to the skew and the relative influence of a handful of very highly cited papers but it is interesting and more memorable to see in graphical form, isn’t it?
BJP pulls a neat little self-citation trick
September 24, 2013
As far as I can tell, the British Journal of Pharmacology has taken to requiring that authors who use animal subjects conduct and report their studies in accordance with the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines. These are conveniently detailed in the journal’s own editorial:
McGrath JC, Drummond GB, McLachlan EM, Kilkenny C, Wainwright CL. Guidelines for reporting experiments involving animals: the ARRIVE guidelines. Br J Pharmacol. 2010 Aug;160(7):1573-6. doi: 10.1111/j.1476-5381.2010.00873.x.
Kilkenny C, Browne W, Cuthill IC, Emerson M, Altman DG; NC3Rs Reporting Guidelines Working Group. Animal research: reporting in vivo experiments: the ARRIVE guidelines. Br J Pharmacol. 2010 Aug;160(7):1577-9. doi: 10.1111/j.1476-5381.2010.00872.x.
The editorial has been cited 270 times. The guidelines paper has been cited 199 times so far and the vast, vast majority of these are in, you guessed it, the BRITISH JOURNAL OF PHARMACOLOGY.
One might almost suspect the journal now has a demand that authors indicate that they have followed these ARRIVE guidelines by citing the 3 page paper listing them. The journal IF is 5.067 so having an item cited 199 times since it was published in the August 2010 issue represents a considerable outlier. I don’t know if a “Guidelines” category of paper (as this is described on the pdf) goes into the ISI calculation. For all we know they had to exempt it. But why would they?
And I notice that some other journals seem to have published the guidelines under the byline of the self same authors! Self-Plagiarism!!!
Perhaps they likewise demand that authors cite the paper from their own journal?
Seems a neat little trick to run up an impact factor, doesn’t it? Given the JIF and the publication rate of real articles in many journals, a couple of hundred extra cites in the sampling interval can have a real effect on the JIF.
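To put rough numbers on that, a back-of-envelope with an invented citable-item denominator:

```python
# Back-of-envelope for the claim above, with invented counts: how far can ~200 extra
# citations inside the two-year JCR window move a JIF?
extra_cites = 200
citable_items_two_years = 800   # hypothetical: ~400 citable articles per year
bump = extra_cites / citable_items_two_years
print(bump)                     # 0.25 added to the JIF
# Against a baseline near 5, that is roughly a 5% inflation from one mandated self-citation.
```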