Datahound has posted a cool new analysis of the distribution of competing continuation R01/R37 awards (Type 2 in NIH grant parlance).

One thing I noticed makes for a nice, simple soundbite to go along with your other explanations to the willfully blind old guard about how much harder the NIH grant game is at the moment.

Datahound reports that in FY 1995 there were 2653 Type 2 competing continuation R01/R37 awards funded by the NIH. In FY 2014 there were only 1532 Type 2 competing continuation R01/R37 grants awarded.

I make this out to be 58% of the 1995 total.
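(For anyone checking the math: 1532 / 2653 ≈ 0.577, so the FY 2014 count is roughly 58% of the FY 1995 count, which is to say a drop of about 42%.)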

This is a huge reduction. I had no idea that this was the case. I mean sure, I predicted that there would be a big decline in Type 2 following the ban on A2 revisions*. And I would have predicted that the post-Doubling, Undoubling, Defunding dismality would have had an impact on Type 2 awards. And I complained for years that the increasing odds of A0 apps being sent into the traffic holding pattern itself put the kibosh on Type 2 because PIs simply couldn’t assume a competing continuation would be funded in time to avoid a gap. Consequently, PIs were strategically putting in closely related but “new” apps in, say, Year 3 of the original noncompeting interval.

But I think if I had been asked to speculate I would have estimated a much smaller reduction.

__
*I can’t wait until Datahound brackets this interval so we can see if this was the major effect or if the trend has developed more gradually since 1995.

The announcement for the policy is here.

Before I get into this: it would be a good thing if the review of scientific manuscripts could be entirely blind, meaning that the authors do not know who is editing or reviewing their paper (the latter is almost always true already) and that the editors and reviewers do not know who the authors are.

The reason is simple. Acceptance of a manuscript for publication should be based entirely on what is contained in that manuscript. It should rely in no way on the identity of the people submitting it. This is not true at present. The reputation and/or perceived power of the authors is hugely influential in determining what gets published in which journals, particularly in what are perceived as the best or most elite journals. This is a fact.

The risk is that inferior science gets accepted for publication because of who the authors are, and therefore that more meritorious science does not get accepted. Even more worrisome, science that is significantly flawed or outright wrong may get published because of author reputation when it would otherwise have been sent back so the flaws could be fixed.

We should all be most interested in making science publication as excellent as possible.

Blinding of the peer review process is a decent way to minimize biases based on author identity, so it is a good thing.

My problem is that it cannot work, absent significant changes in the way academic publishing operates. Consequently, any attempt to conduct double-blinded review that does not address these issues is doomed to fail. And since anyone with half a brain can see the following concerns, if they argue this Nature initiative is a good idea then I submit to you that they are engaged in a highly cynical effort to direct attention away from certain things. Things that we might describe as the real problem.

Here are the issues I see with the proposed Nature experiment.
1) It doesn’t blind their editors. Nature uses a professional editorial staff who decide whether to send a manuscript out for peer review or to summarily reject it. They select reviewers, make interim decisions, decide whether to send subsequent revised versions out to review, select new or returning reviewers and decide, again, whether to accept the manuscript. These editors, being human, are subject to tremendous biases based on author identity. Their role in the process is so powerful that blinding the reviewers, but not the editors, to author identity is likely to have only minimal effect.

2) This policy is opt-in. HA! This is absurd. The people who are powerful, and thus expected to benefit from their identity, will not opt in. They’d be insane to do so. The people who are not powerful (who are, as it happens, exactly the people calling for blinded review so that their work has a fair chance on its own merits) will opt in, but they will gain no relative advantage by doing so.

3) The scientific manuscript as we currently know it is chock full of clues to author identity. Even if you rigorously excluded “we previously reported…” statements and managed to even out the self-citations to a nonsuspicious level (no easy task on either count), there is still the issue of scientific interest. No matter what the topic, there is going to be a betting gradient for how likely different labs are to have produced the manuscript.

4) The Nature policy mentions no back-checking on whether the blinding actually works. This is key; see the above comment about the betting gradient. It is not sufficient to put formal de-identification in place. It is necessary to check with reviewers about the real practice of the policy to determine the extent to which blinding succeeds or fails. And you cannot simply brandish a less-than-100% identification rate either. If the reviewer merely thinks that the paper was written by Professor Smith, the system is already lost, because that reviewer is being affected by the aforementioned issues of reputation and power even if she is wrong about the authors. That’s on the tactical, paper-by-paper front. Over the longer haul, the more reputed labs are generally going to be submitting more actively to a given journal, and thus the erroneous assumption will be more likely to accrue to them anyway.

So. We’re left with a policy that can be put in place only in a formal sense. Nature can claim that they have conducted “double-blind” review of manuscripts.

They will not be able to show that review is truly blinded. More critically, they will not be able to show that author reputational bias has been significantly decoupled from the entire process, given the huge input from their editorial staff.

So anything that they conclude from this will be baseless. And therefore highly counterproductive to the overall mission.