Admittedly I hadn’t looked all that hard, but I was previously uncertain as to how NIH grants with tied scores were percentiled. Since percentiles are incredibly important* for funding decisions, this became a serious question under the new scoring approach (reviewers vote integer values from 1 to 9, and the average is multiplied by 10 for the final score; lower is better), which was designed to generate more tied scores.

The new system raises the chance that a lot of score “ranges” for applications are going to be 1-2 or 2-3 and, judging from some emerging experiences, that a whole lot more applications will have the three assigned reviewers agreeing on a single number. If that is the case and nobody on the panel votes outside the range (which they do not frequently do), you are going to end up with a lot of tied 20 and 30 priority scores. That was the prediction, anyway.
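To make the tie-generating arithmetic concrete, here is a minimal sketch; the rounding of the mean to a whole number is my assumption of standard NIH practice, not something spelled out above:

```python
from statistics import mean

def priority_score(votes):
    # Each eligible panel member votes an integer from 1 (best) to 9 (worst).
    # The mean is multiplied by 10 and rounded to a whole number.
    return round(mean(votes) * 10)

print(priority_score([2, 2, 2]))        # 20 -- the panel concurs on a 2
print(priority_score([2, 2, 2, 2, 3]))  # 22 -- a single 3 breaks the tie
```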
NIAID has data from one study section that verifies the prediction.

As a bit of an aside, we also learned along the way that percentile ranks are always rounded UP.

Percentiles range from 1 to 99 in whole numbers. Rounding is always up, e.g., 10.1 percentile becomes 11.
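In code terms, the rule is just a ceiling; a trivial sketch:

```python
import math

# NIAID's stated convention: fractional percentile ranks always round UP.
print(math.ceil(10.1))  # 11
print(math.ceil(6.01))  # 7 -- a hair past a 6%ile payline rounds against you
```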

So you should be starting to see that the number of applications assigned to your percentile base** and the number that receive tied scores are going to occasionally throw some whopping discontinuities into the score-percentile relationship.

Rock Talk explains:

However, as you can see, this formula doesn’t work as is for applications with tied scores (see the highlighted cells above) so the tied applications are all assigned their respective average percentile

In her example, the top applications in a 15-application pool received impact scores of 10, 11, 19, 20, 20, 20…. This is a highly pertinent example, btw, since reviewers concurring on a 2 for overall impact is very common and represents a score that is potentially in the hunt for funding***.

In Rockey’s example, these tied applications block out the 23, 30 and 37 percentile ranks in this distribution of 15 scores. (The top score gets a 3%ile rank, btw. Although this is an absurdly small base for the calculation, you can see the effect of base size…10 is the best possible score, and in an era of 6-9%ile paylines the rounding-up takes a bite.) The average is assigned, so all three get a 30%ile. Waaaaay out of the money for an application on which the reviewers concurred on the next-to-best score? Sheesh. In this example, the next-best-scoring application averaged a 19, only just barely better than the three tied 20s, and yet it got a 17%ile against their 30%ile.
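For the curious, here is a sketch of the arithmetic behind those numbers. The formula is my reconstruction from Rockey’s worked table, not her stated one: raw percentile = 100 × (rank − 0.5) / N, with tied applications all receiving the average of the percentiles for the ranks they span. The eight filler scores below the 28 are hypothetical, just to bring the base up to her 15 applications; only the base size matters for the top ranks.

```python
def percentile_ranks(scores):
    # scores: priority scores for every application in the percentile base
    # (lower is better). Rank r of n gets raw percentile 100 * (r - 0.5) / n;
    # tied scores all receive the average of the raw percentiles they span.
    n = len(scores)
    by_score = {}
    for rank, s in enumerate(sorted(scores), start=1):
        by_score.setdefault(s, []).append(100 * (rank - 0.5) / n)
    return {s: sum(p) / len(p) for s, p in by_score.items()}

# Top of Rockey's 15-application example; the trailing scores are
# hypothetical filler to pad the base out to 15.
base = [10, 11, 19, 20, 20, 20, 28] + [33, 38, 42, 47, 52, 58, 63, 70]
for score, pctl in sorted(percentile_ranks(base).items())[:5]:
    print(score, round(pctl))  # rounded here to match the 3/17/30/43%iles above
```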

You can just hear the inchoate screaming in the halls as people compare their scores and percentiles, can’t you?

Rockey lists the next score above the ties as a 28, but it could just as easily have been a 21. And it garners a 43%ile.

Again, cue screaming.

Heck, I’m getting a little screamy myself, just thinking about sections that are averse to throwing 1s for Overall Impact and yet send up a lot of tied 20s. Instead of putting all those tied apps in contention for consideration, they are basically guaranteeing that none of them get funded, because all of them are kicked up to their average percentile rank. I don’t assert that people are intentionally putting up a bunch of tied scores so that they will all be considered. But I do assert that there is a sort of mental or cultural block against going below (better than) a 2, and that for many reviewers, voting a 2 means they think the application should be funded.

In closing, I am currently breaking my will to live by trying to figure out the possible percentile base sizes that let X number of perfect scores (10s) receive 1%iles versus being rounded up to 2%iles, and then what would happen to the next-best few scores. NIAID has posted an 8%ile payline, and rumors that NCI will be working at a 5%ile or 6%ile next year are circulating. The percentile increments that are possible, given the size of the percentile base and the round-up policy, become acutely important.
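Here is a sketch of that exercise, using the same reconstructed formula and tie-averaging as above, with the round-up applied at the end. The average raw percentile of k tied best scores in a base of n works out to 50k/n, so all of them land at 1%ile only when n is at least 50k:

```python
import math

def tied_best_percentile(k, n):
    # k applications tie for the best score in a percentile base of n;
    # they span ranks 1..k. Tied apps get the average raw percentile,
    # rounded UP per the NIAID convention quoted above.
    raw = [100 * (r - 0.5) / n for r in range(1, k + 1)]
    return math.ceil(sum(raw) / len(raw))

for k in (1, 2, 3):  # number of tied perfect 10s
    print(k, [(n, tied_best_percentile(k, n)) for n in (60, 100, 150)])
# 1 [(60, 1), (100, 1), (150, 1)]
# 2 [(60, 2), (100, 1), (150, 1)]
# 3 [(60, 3), (100, 2), (150, 1)]
```

By this arithmetic, even three applications tying at a perfect 10 need a percentile base of at least 150 to all come out at 1%ile; in a smaller base, the round-up alone pushes them toward a 6%ile-era payline.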
__
*Rumor of a certain IC director who “goes by score” rather than percentile becomes a little more understandable with this example from Rock Talk. The swing of a 20 Overall Impact score from 10%ile to 30%ile is not necessarily reflective of a tough versus a softball study section. It may have been due to the accident of ties and the size of the percentile base.

**typically the grants in that study section round and the two prior rounds for that study section.

***IME, review panels have a reluctance to throw out impact scores of 1. The 2 represents a hesitation point for sure.

Naturally this is a time for a resurgence of blathering about how Journal Impact Factors are a hugely flawed measure of the quality of individual papers or scientists. It is also a time of much bragging about recent gains….I was alerted to the fact that the new numbers were out by a society I follow on Twitter, bragging about its latest figure.

whoo-hoo!

Of course, one must evaluate such claims in context. The JIF trend seems to be one of unrelenting gains year over year. Which makes sense, of course, if science continues to expand: more science means more papers, and therefore more citations, is the underlying reality as I see it. So the only thing that really matters is how much a given journal has changed relative to its peer journals, right? A numerical gain, sometimes a ridiculously tiny one, is hardly the stuff of great pride.

So I thought I’d take a look at some journals that publish drug-abuse type science. There are a ton more in the ~2.5-4.5 range but I picked out the ones that seemed to actually have changed at some point.
[Figure: 2012 Journal Impact Factor trends for selected drug-abuse journals]
Neuropsychopharmacology, the journal of the ACNP and subject of the above-quoted Twitt, has closed the gap on arch-rival Biological Psychiatry in the past two years, although each of them trended upward in the past year. For NPP, putting the sadly declining Journal of Neuroscience (the Society for Neuroscience’s journal) firmly behind it has to be considered a gain. J Neuro is more general in topic and, as PhysioProf is fond of pointing out, does not publish review articles, so this is expected. NPP invented a once-annual review journal a few years ago, and it counts in their JIF, so I’m going to chalk the last couple of years of gain up to this, personally.

Addiction Biology is another curious case. It is worth special note both for its large gains in JIF and for the fact that it sits atop the ISI Journal Citation Reports (JCR) category for Substance Abuse. The first jump in IF was associated with a change in publisher, so perhaps it started getting promoted, and/or steered toward JIF gains, more heavily. There was a change of editor in there somewhere as well, which may have contributed. The most recent gains, I wager, have a little something to do with the self-reinforcing virtuous cycle of having topped the category listing in the ISI JCR and having crept to the top of a large heap of ~2.5-4.5 JIF behavioral pharmacology / neuroscience type journals. This journal had been quarterly until about two years ago, when it started publishing bimonthly, and its pre-print queue is ENORMOUS. I saw some articles published in a print issue this year that had appeared online two years before. TWO YEARS! That’s a lot of time to accumulate citations before the official JIF window even starts counting. There was news of a record number of journals being excluded from the JCR for self-citation type gaming of the index….I do wonder why pre-print queue length is not of similar concern to ISI.
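To see why a long pre-print queue flatters the JIF, consider the standard two-year window: the JIF for year Y counts citations made in Y to items whose official publication year is Y−1 or Y−2. A hypothetical sketch (the specific years here are made up for illustration):

```python
def counts_toward_jif(cite_year, print_year):
    # Two-year JIF for year Y: citations made in Y to items whose
    # official (print) publication year is Y-1 or Y-2.
    return print_year in (cite_year - 1, cite_year - 2)

# Hypothetical article: online in 2010, print-dated 2012. By its countable
# window (citations made in 2013 and 2014) it has been visible and citable
# for 3-4 years, well past the slow early ramp-up in citations.
print(counts_toward_jif(2013, 2012))  # True
print(counts_toward_jif(2011, 2012))  # False -- online-only years don't count
```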

PLoS ONE is an interest of mine, as you know. Phil Davis has an interesting analysis up at Scholarly Kitchen which discusses the tremendous acceleration in papers published per year in PLoS ONE and argues a decline in JIF is inevitable. I tend to agree.

Neuropharmacology and British Journal of Pharmacology are examples of journals near the top of the aforementioned mass of journals that publish normal scientific work in my fields of interest. Workmanlike? I suppose the non-pejorative use of that term would be accurate. These two journals bubbled up slightly in the past five years but seem to be enjoying different fates in 2012. It will be interesting to see whether these are just wobbles or whether the journals can sustain the trends. If real, it may show how easily one journal can suffer a PLoS ONE type of fate, whereby a slightly elevated JIF draws more papers of lesser eventual impact, while BJP may be showing the sort of virtuous cycle that I suspect Addiction Biology has been enjoying. One slightly discordant note for this interpretation is that Neuropharmacology has managed to get its online-to-print publication lag down to among the lowest of its competition. This is a plus for authors who need to pad their calendar-year citation numbers, but it may be a drag on the JIF, since articles don’t enjoy as much time to acquire citations.