This is my query of the day to you, Dear Reader.

We’ve discussed the basics in the past but a quick overview.

1) Since the priority score and percentile rank of your grant application are all-important (not exclusively so, but HEAVILY so), it is critical that it be reviewed by the right panel of reviewers.

2) You are allowed to request in your cover letter that the CSR route your NIH grant application to a particular study section for review.

3) Standing study section descriptions are available at the CSR website as are the standing rosters and the rosters for the prior three rounds of review (i.e., including any ad hoc reviewers).

4) RePORTER allows you to search for grants by study section which gives you a pretty good idea of what they really, really like.

5) You can, therefore, use this information to slant your grant application towards the study section in which you hope it will be reviewed.

A couple of Twitts from @drIgg today raised the question of study section “fit”. Presumably this is related to an applicant concluding that despite all the above, he or she has not managed to get many of his or her applications reviewed by the properly “fitting” panel of reviewers.

This was related to the observation that despite one's request and despite hitting what seem to be the right keywords, it is still possible that CSR will assign your grant to some other study section. It has happened to me a few times and it is very annoying. But does this mean these applications didn't get the right fit?

I don’t know how one would tell.

As I’ve related on occasion, I’ve obtained the largest number of triages from a study section that has also handed me some fundable scores over the past *cough*cough*cough* years. This is usually by way of addressing people’s conclusion after the first 1, 2 or maybe 3 submissions that “this study section HATES me!!!“. In my case I really think this section is a good fit for a lot of my work, and therefore proposals, so the logic is inescapable. Send a given section a lot of apps and they are going to triage a lot of them. Even if the “fit” is top notch.

It is also the case that there can be a process of getting to know a study section. Of getting to know the subtleties of how they tend to feel about different aspects of the grant structure. Is it a section that is really swayed by Innovation and could give a fig about detailed Interpretations, Alternatives and Potential Pitfalls? Or is it an orthodox StockCritiqueSpewing type of section that prioritizes structure over content? Do they like to see it chock-full of ideas or do they wring their hands over feasibility? On the other side, I assert there is a certain sympathy vote that emerges after a section has reviewed a half dozen of your proposals and never found themselves able to give you a top score. Yeah, it happens. Deal. Less perniciously, I would say that you may actually convince the section of the importance of something that you are proposing through an arc of many proposal rounds*.

This leaves me rather confused as to how one would be able to draw strong conclusions about “fit” without a substantial number of summary statements in hand.

It also speaks to something that every applicant should keep in the back of his or her head. If you can never find what you think is a good fit with a section, there are only a few options that I can think of.
1) You do this amazing cross-disciplinary shit that nobody really understands.
2) Your applications actually suck and nobody is going to review them well.
3) You are imagining some Rainbow Fairy Care-a-lot Study section that doesn’t actually exist.

What do you think are the signs of a good or bad “fit” with a study section, Dear Reader? I’m curious.
__
*I have seen situations where it was explicitly mentioned that a proposal was on its fourth or fifth round (this was in the A2 days) in a section.

Additional Reading:
Study Section: Act I
Study Section: Act II

No Return

June 27, 2013

President Obama and first lady Michelle Obama look out of a doorway that slaves departed from on Goree Island in Dakar, Senegal on June 27, 2013. (REUTERS/Gary Cameron)

via Washington Post article by David Nakamura

Will you be an angel?

June 26, 2013

via a Reader

Grumpy reviewer is….

June 25, 2013

grumpy.

Honestly people. What in the hell happened to old-fashioned scholarship when constructing a paper? PubMed has removed all damn excuse you might possibly have had. Especially when the relevant literature comprises only about a dozen or two score papers.

It is not too much to expect some member of this healthy author list to have 1) read the papers and 2) understood them sufficiently to cite them PROPERLY, i.e., with some modest understanding of what is and is not demonstrated by the paper being cited.

Who the hell is training these kids these days?

__
Yes, I am literally shaking my cane.

Admittedly I hadn't looked all that hard, but I was previously uncertain as to how NIH grants with tied scores were percentiled. Since the percentiles are incredibly important* for funding decisions, this was a serious question after the new scoring approach (reviewers vote integer values from 1 to 9; the average is multiplied by 10 for the final score; lower is better), which was designed to generate more tied scores.

The new system raises the chance that a lot of score "ranges" for an application are going to be 1-2 or 2-3 and, in some emerging experience, a whole lot more applications where the three assigned reviewers agree on a single number. Now, if that is the case and nobody from the panel votes outside the range (which they do not frequently do), you are going to end up with a lot of tied 20 and 30 priority scores. That was the prediction anyway.
NIAID has data from one study section that verifies the prediction.
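Just to make the arithmetic concrete, here's a minimal sketch of how those ties arise under the scoring scheme just described (the panel size is hypothetical, obviously):

```python
# Priority score = mean of the panel's 1-9 votes, multiplied by 10 (lower is better).
# If the three assigned reviewers all say "2" and nobody else on the panel votes
# outside that range, every such application lands on exactly the same score.
votes = [2] * 25                      # hypothetical 25-member panel, all voting 2
priority_score = round(10 * sum(votes) / len(votes))
print(priority_score)                 # 20 -- identical for every app scored this way
```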

As a bit of an aside, we also learned along the way that percentile ranks are always rounded UP.

Percentiles range from 1 to 99 in whole numbers. Rounding is always up, e.g., 10.1 percentile becomes 11.

So you should be starting to see that the number of applications assigned to your percentile base** and the number that receive tied scores are going to occasionally throw some whopping discontinuities into the score-percentile relationship.

Rock Talk explains:

However, as you can see, this formula doesn’t work as is for applications with tied scores (see the highlighted cells above) so the tied application are all assigned their respective average percentile

In her example, the top applications in a 15-application pool received impact scores of 10, 11, 19, 20, 20, 20…. This is a highly pertinent example, btw, since reviewers concurring on a 2 for overall impact is very common and represents a score that is potentially in the hunt for funding***.

In Rockey's example, these tied applications block out the 23, 30 and 37 percentile ranks in this distribution of 15 scores. (The top score gets a 3%ile rank, btw. Although this is an absurdly small base for calculation, you can see the effect of base size… 10 is the best possible score, and in an era of 6-9%ile paylines the rounding-up takes a bite.) The average is assigned, so all three get 30%ile. Waaaaay out of the money for an application that has the reviewers concurring on the next-to-best score? Sheesh. In this example, the next-best-scoring application averaged a 19, only just barely below the three tied 20s, and yet it got a 17%ile compared with their 30%ile.

You can just hear the inchoate screaming in the halls as people compare their scores and percentiles, can’t you?

Rockey lists the next score above the ties as a 28, but it could just as easily have been a 21. And it garners a 43%ile.

Again, cue screaming.

Heck, I'm getting a little screamy myself, just thinking about sections that are averse to throwing 1s for Overall Impact and yet send up a lot of 20 ties. Instead of putting all those tied apps in contention for consideration, they are basically guaranteeing none of them get funded, because they are all kicked up to their average percentile rank. I don't assert that people are intentionally putting up a bunch of tied scores so that they will all be considered. But I do assert that there is a sort of mental or cultural block at going below (better than) a 2, and for many reviewers, when they vote a 2 they think the application should be funded.
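For the morbidly curious, here is a minimal sketch of the tie-averaging mechanics described above. The raw-percentile formula (100 × (rank − 0.5) / base size) and the ceiling step are my assumptions about the procedure, not CSR's published code, and the scores beyond Rockey's 10, 11, 19, 20, 20, 20, 28 are made up to fill out a 15-application pool, so the output can differ by a point or so from her worked numbers:

```python
import math

def percentiles(scores, base_size=None):
    """Assign each distinct score a percentile; tied scores share the average of
    the percentiles their ranks would have earned, rounded up per the NIAID note
    above. The raw-percentile formula is an assumption on my part."""
    n = base_size or len(scores)
    ranked = sorted(scores)                           # lower score = better rank
    raw = [100 * (k + 0.5) / n for k in range(len(ranked))]
    out = {}
    for score in set(ranked):
        idx = [i for i, s in enumerate(ranked) if s == score]
        avg = sum(raw[i] for i in idx) / len(idx)     # ties get the average
        out[score] = math.ceil(avg)                   # rounding is always up
    return out

# Hypothetical 15-app pool built around Rockey's 10, 11, 19, 20, 20, 20, 28...
pool = [10, 11, 19, 20, 20, 20, 28, 31, 35, 40, 44, 50, 55, 61, 70]
print(percentiles(pool))   # the three 20s all land on the same averaged percentile
```

Run that and the three tied 20s all come back around the 30%ile while the 19 sitting just above them lands in the teens, which is exactly the discontinuity that sets off the hallway screaming.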

In closing, I am currently breaking my will to live by trying to figure out the possible percentile base sizes that let X number of perfect scores (10s) receive 1%iles versus being rounded up to 2%iles, and then what would be associated with the next-best few scores. NIAID has posted an 8%ile payline and rumours are rumbling of NCI working at a 5%ile or 6%ile payline for next year. The percentile increments that are possible, given the size of the percentile base and the round-up policy, become acutely important.
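Under the same assumed formula as the sketch above, that base-size question can at least be tabulated. The base sizes below are purely illustrative, not any particular study section's:

```python
import math

def tied_top_percentile(n_tied, base_size):
    # Percentile handed to n_tied perfect 10s sitting atop a percentile base of
    # base_size applications, under the assumed formula and the round-up rule.
    raw = [100 * (k + 0.5) / base_size for k in range(n_tied)]
    return math.ceil(sum(raw) / n_tied)

for base in (50, 100, 200, 300):
    row = [tied_top_percentile(x, base) for x in (1, 2, 3, 4)]
    print(f"base of {base}: 1-4 tied perfect scores get percentiles {row}")
```

Under these assumptions, a percentile base smaller than about 50 applications rounds even a single perfect 10 up past the 1%ile.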
__
*Rumor of a certain IC director who “goes by score” rather than percentile becomes a little more understandable with this example from Rock Talk. The swing of a 20 Overall Impact score from 10%ile to 30%ile is not necessarily reflective of a tough versus a softball study section. It may have been due to the accident of ties and the size of the percentile base.

**typically the grants in that study section round and the two prior rounds for that study section.

***IME, review panels have a reluctance to throw out impact scores of 1. The 2 represents a hesitation point for sure.

Naturally, this is a time for a resurgence of blathering about how Journal Impact Factors are a hugely flawed measure of the quality of individual papers or scientists. It is also a time of much bragging about recent gains… I was alerted to the fact that the new numbers were out via a society I follow on Twitter bragging about its latest figure.

whoo-hoo!

Of course, one must evaluate such claims in context. Seemingly the JIF trend is for unrelenting gains year over year. Which makes sense, of course, if science continues to expand. More science, more papers and therefore more citations seems to me to be the underlying reality. So the only thing that matters is how much a given journal has changed relative to other peer journals, right? A numerical gain, sometimes ridiculously tiny, is hardly the stuff of great pride.

So I thought I’d take a look at some journals that publish drug-abuse type science. There are a ton more in the ~2.5-4.5 range but I picked out the ones that seemed to actually have changed at some point.
[Figure: 2012 Journal Impact Factor trends for the selected journals discussed below]
Neuropsychopharmacology, the journal of the ACNP and subject of the above-quoted Twitt, has closed the gap on arch-rival Biological Psychiatry in the past two years, although each of them trended upward in the past year. For NPP, putting the sadly declining Journal of Neuroscience (the Society for Neuroscience's journal) firmly behind them has to be considered a gain. J Neuro is more general in topic and, as PhysioProf is fond of pointing out, does not publish review articles, so this is expected. NPP invented a once-annual review journal a few years ago and it counts in their JIF, so I'm going to score the last couple of years' gains to this, personally.

Addiction Biology is another curious case. It is worth special note for both the large gains in JIF and the fact that it sits atop the ISI Journal Citation Reports (JCR) category for Substance Abuse. The first jump in IF was associated with a change in publisher, so perhaps it started getting promoted more heavily and/or guided for JIF gains more heavily. There was a change in editor in there somewhere as well, which may have contributed. The most recent gains, I wager, have a little something to do with the self-reinforcing virtuous cycle of having topped the category listing in the ISI JCR and having crept to the top of a large heap of ~2.5-4.5 JIF behavioral pharmacology / neuroscience type journals. This journal had been quarterly up until about two years ago, when it started publishing bimonthly, and its pre-print queue is ENORMOUS. I saw some articles published in a print issue this year that had appeared online two years before. TWO YEARS! That's a lot of time to accumulate citations before the official JIF window even starts counting. There was news of a record number of journals being excluded from the JCR for self-citation type gaming of the index… I do wonder why the pre-print queue length is not of concern to ISI.
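A quick sketch of the standard two-year JIF window makes the pre-print-queue point concrete; the article dates and journal-level numbers here are hypothetical, not taken from any actual Addiction Biology paper:

```python
# JIF for year Y = citations received in year Y to items whose official (print
# issue) publication year is Y-1 or Y-2, divided by the number of citable items
# from those two years.
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal-level numbers: 5,000 citations in 2012 to the 1,200
# citable items published in 2010-2011 works out to a JIF of ~4.17.
print(round(impact_factor(5000, 1200), 2))

# Hypothetical article: online in 2011, but not assigned to a print issue until 2013.
online_year, print_issue_year = 2011, 2013
for jif_year in (print_issue_year + 1, print_issue_year + 2):
    print(f"Citations in {jif_year} count toward that year's JIF; "
          f"by then the article has been online for {jif_year - online_year} years.")
```

An article that goes online and into print in the same year gets only a year or two of visibility before its citations are counted; stretch the online-to-print lag and it enters the counting window with a running start.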

PLoS ONE is an interest of mine, as you know. Phil Davis has an interesting analysis up at Scholarly Kitchen which discusses the tremendous acceleration in papers published per year in PLoS ONE and argues a decline in JIF is inevitable. I tend to agree.

Neuropharmacology and British Journal of Pharmacology are examples of journals which are near the top of the aforementioned mass of journals that publish normal scientific work in my fields of interest. Workmanlike? I suppose the non-pejorative use of that term would be accurate. These two journals bubbled up slightly in the past five years but seem to be enjoying different fates in 2012. It will be interesting to see if these are just wobbles or if the journals can sustain the trends. If real, it may show how easily one journal can suffer a PLoS ONE type of fate, whereby a slightly elevated JIF draws more papers of lesser eventual impact, while BJP may be showing the sort of virtuous cycle that I suspect Addiction Biology has been enjoying. One slightly discordant note for this interpretation is that Neuropharmacology has managed to get the online-to-print publication lag down to some of the lowest amongst its competition. This is a plus for authors who need to pad their calendar-year citation numbers, but it may be a drag on the JIF since articles don't enjoy as much time to acquire citations.

One of the more fascinating things I attended at the recent meeting of the College on Problems of Drug Dependence was a Workshop on “Novel Tobacco and Nicotine Products and Regulatory Science”, chaired by Dorothy Hatsukami and Stacey Sigmon. The focus on tobacco is of interest, of course, but what was really fascinating for my audience was the “Regulatory Science” part.

As background, the Family Smoking Prevention and Tobacco Control Act became law on June 22, 2009 (sidebar, um… four years later and… ahhh. sigh.). This Act gave "the Food and Drug Administration (FDA) the authority to regulate the manufacture, distribution, and marketing of tobacco products to protect public health."

As the Discussant, David Shurtleff (up until recently Acting Deputy Director at NIDA and now Deputy Director at NCCAM), noted, this is the first foray for the NIH into "Regulatory Science". I.e., the usual suspect ICs of the NIH will be overseeing the conduct of scientific projects designed directly to inform regulation. I repeat, SCIENCE conducted EXPLICITLY to inform regulation! This is great. [R01 RFA; R21 RFA]

Don't get me wrong, regulatory science has existed in the past. The FDA has whole research installations of its very own to do toxicity testing of various kinds. And we on the investigator-initiated side of the world interact with such folks. I certainly do. But this brings all of us together, brings all of the diverse expert laboratory talents together on a common problem. Getting the best people involved doing the most specific study has to be for the better.

In terms of the specifics of tobacco control, there were many points raised that you would find interesting. The Act doesn't permit the actual banning of all tobacco products and it doesn't permit reducing the nicotine in cigarettes to zero. However, it can address questions of nicotine content, the inclusion of adulterants (say, menthol flavor) in tobacco and what comes out of a cigarette (Monoamine Oxidase Inhibiting compounds that increase the nicotine effect, minor constituents, etc.). It can do something about a proliferation of nicotine-containing consumer products which range from explicit smoking replacements to alleged dietary supplements.

Replacing cigarette smoking with some sort of nicotine inhaler would be a net plus, right? Well… unless it lured in more consumers or maintained dependence in those who might otherwise have quit. Nicotine "dietary supplements" that function as agonist therapy are coolio… again, unless they perpetuate and expand cigarette use. Or nicotine exposure itself… while the drug is a boatload less harmful than the smoking of cigarettes, it is not benign.

There are already some grants funded for this purpose.

NIH administers several and there was a suggestion that this is new money coming into the NIH from the FDA. There was also a comment that this is non-appropriated money; it is being taken from some tobacco-tax fund. So don't think of this as competing with the rest of us for funding.

I was enthused. One of the younger guns of my fields of interest has received a LARGE mechanism to captain. The rest of the people who seem to be involved are excellent. The science is going to be very solid.

I really, really (REALLY) like this expansion of the notion that we need to back regulatory policy with good data. And that we are willing as a society to pay to get it. Sure, in this case we all know that it is because the forces *opposing* regulation are very powerful and well funded. And so it will take a LOT of data to overcome their objections. Nevertheless, it sets a good tone. We should have good reason for every regulatory act even if the opposition is nonexistent or powerless.

That brings me to cannabis.

I’m really hoping to see some efforts along these lines [hint, hmmmm] to address both the medical marijuana and the recreational marijuana policy moves that are under experimentation by the States. In the past some US States have used state cigarette tax money (or settlement money) to fund research, so this doesn’t have to be at the Federal level. Looking at you, Colorado and Washington.

__
As always, see Disclaimer. I’m an interested party in this stuff as I could very easily see myself competing for “regulation science” money on certain relevant topics.