This comment at Rock Talk…

NIH should stop allowing multi-awarded PIs to double-count publications on multiple grants. This inflates apparent productivity, and is one reason well-funded PIs can fool study sections into thinking they are more productive than they really are. If a publication is counted on multiple grants, its impact on each should be fractionally decreased so that its total impact is the same as that of one publication on a single-grant PI's award. Unless Council does this, it will not have an accurate indication of whether the increased funding is actually leading to a commensurate increase in productivity.

…has the right of it.
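For the quantitatively inclined, here is a minimal sketch of the arithmetic the commenter is proposing. The equal-split rule and the toy paper data are my own assumptions for illustration only; this is not any actual NIH, CSR, or Council methodology.

```python
# Minimal sketch of fractional publication credit (hypothetical;
# the equal-split rule is an assumption, not NIH policy).
# Each paper's credit is divided evenly among the grants it cites,
# so a paper always contributes exactly 1.0 in total, no matter
# how many awards it is attributed to.

from collections import defaultdict

# Toy data: each paper lists the grant numbers it acknowledges.
papers = [
    {"title": "Paper A", "grants": ["R01-1"]},
    {"title": "Paper B", "grants": ["R01-1", "R01-2"]},
    {"title": "Paper C", "grants": ["R01-1", "R01-2", "R01-3"]},
]

credit = defaultdict(float)
for paper in papers:
    share = 1.0 / len(paper["grants"])  # equal split across cited grants
    for grant in paper["grants"]:
        credit[grant] += share

for grant, score in sorted(credit.items()):
    print(f"{grant}: {score:.2f} papers (fractional)")
```

Under naive counting, grant R01-1 above gets credited with three papers; under fractional counting it gets 1.0 + 0.5 + 0.33 ≈ 1.83, the same total output as a single-grant PI who published the same work.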

It took me about five minutes of reading my very first competing-continuation application, when I first sat on a study section, to realize that many people are very generous about crediting the grants they happen to be holding on the papers that they submit.

It took me perhaps half a day of study section discussion to figure out why they do this. Because “this amazingly productive researcher” has serious value in the discussion of a grant proposal. And this viewpoint on the part of a reviewer need not be objective in any way. All that is required is the thought that “gee, I see a lot of papers coming out of Dr. Squirrel’s laboratory”. And if ol’ Dr. Squirrel happens to fill up the first page of PubMed hits with papers from the current year and several pages of publications within the past 5 years, then everyone sitting around the table who cares to check will start nodding in agreement.

This is particularly important for competing-continuation applications. This type of application (asking for another five-year interval of support for a project that is already underway) has an explicit section for detailing productivity. Nowadays, I think most people are on board with the idea that they need to list specific NIH grant numbers on each paper (some time ago it was reasonably common to just say “NIH support” or “NIMH support” or something). So the progress report list had better consist of papers that mention the grant under discussion. So the smart PI is thinking all along about this list and how long she would like it to be. So she cites as many of her grants as possible on each publication.

And nobody checks.

Well, this isn’t strictly accurate. I have heard people try to rein in a comment about “wonderful productivity under this award” with a close analysis of the listed publications. To point out where a publication appearing in print three months after the start of the award couldn’t possibly have been conducted with the support of that particular award. To show where the attribution of a paper to this particular grant, given the other attributed grants, was an overreach of epic scale. To argue that even if two or three grants contributed equally to the paper, it was necessary and fair to divide the credit by two or three.

I don’t think I’ve ever seen this actually work. I don’t think I’ve once seen a reviewer who pronounced a PI “wonderfully productive” fully grasp what a critic was saying and reverse his/her opinion.

What I wonder about is the degree to which overall culture on study section can change with respect to this. (And, per usual, I throw this out to my readers for their respective experiences.)

My thought is that this sort of take on “productivity” is entirely dictated by the grant and seniority status of the reviewer. One-grant noobs are absolutely ENRAGED by this seeming disparity. Established PIs who do exactly this same thing in their own grant management strategy act like they don’t know what the youngsters are talking about.

The question is how the various Councils and POs will view this whenever “productivity” becomes an issue.

And I have to tell you, Dear Reader, my confidence that various Program types understand what is going on with this sort of gaming is not very high.

__
Additional Reading:
Your Grant in Review: Productivity on Prior Awards
Musing on NIGMS’ grant performance data
Another Look at Measuring the Scientific Output and Impact of NIGMS Grants
Productivity Metrics and Peer Review Scores
Mapping Publications to Grants
Comparing performance of within-payline and “select pay” pickup NIH grants at NIAID