I have heard a very dispiriting rumour floating about and I raise it on the blog to see if any of my Readers have seen similar things happening.

Once upon a time, the NIH came to the realization that the peer review process for grant applications had a bias against the less-established, newer, and younger Principal Investigator. That is, their proposals did not score as well and were not being funded at the same rate as those applications on which more senior and established investigators were the PI.

Someone clearly came to the conclusion, which I share, that this difference was not due to any meaningful difference in the chance that the ensuing science would be valuable and productive. So the NIH set about a number of steps to redress the situation.

One of the pathetic band-aid solutions they came up with was to ensure that the burden of triage did not fall disproportionately upon the younger PIs’ applications.

As you are aware, approximately half of applications do not get discussed at the meeting. This triage is based on preliminary scores issued by the three assigned reviewers, generated prior to the actual meeting date. An application that is not fully considered by the entire committee during discussion cannot be “rescued” from various sources of unfair or bad review. Although we all recognize that a full rescue from nearly-triaged to a fundable score is rare, it is at least possible. And since the NIH is really just looking at aggregate scores when it comes to the bias-against-noobPI-apps stuff, movement in the positive direction is still a desired goal.

What someone in the halls of the Center for Scientific Review at the NIH realized was that if noobPI apps were generally scoring worse than those of established PIs, then they were more likely to be triaged. So if the general triage line was 50% of applications, then perhaps 75% of Early Stage Investigator or New Investigator apps were being triaged.

The solution was to hand down an edict to the SROs that “the burden of triage should not fall unfairly upon the ESI/NI applications”. Meaning that once the triage lines were drawn based on the preliminary scores, the SRO had to specifically review the population of ESI/NI apps and make sure that an equal proportion of them were being discussed; let’s say 50% for convenience. This meant that sometimes ESI apps were being dragged up for discussion with preliminary scores that were worse than the scores of several apps from established PIs which were being triaged/not discussed.
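To make the mechanics concrete, here is a minimal sketch of what that “at least half” floor could look like, assuming my reading of the edict is right. The function name, the tuple format and the numbers are all my own invention for illustration, not anything from an actual CSR system; recall that NIH preliminary scores run from 1 (best) to 9 (worst).

```python
# Hypothetical illustration only -- not CSR's actual algorithm.
# Each app is an (app_id, prelim_score, is_esi) tuple; lower scores are better.

def discussed_with_esi_floor(apps, discuss_fraction=0.5):
    """Draw the triage line, then pull up triaged ESI/NI apps (best
    preliminary score first) until at least discuss_fraction of the
    ESI/NI pool is on the discussion list."""
    by_score = sorted(apps, key=lambda a: a[1])  # best score first
    discussed = {a[0] for a in by_score[:round(len(apps) * discuss_fraction)]}

    esi = [a for a in apps if a[2]]
    esi_target = round(len(esi) * discuss_fraction)
    pull_ups = sorted((a for a in esi if a[0] not in discussed),
                      key=lambda a: a[1])
    while len([a for a in esi if a[0] in discussed]) < esi_target and pull_ups:
        discussed.add(pull_ups.pop(0)[0])  # drag the next-best ESI app up
    return discussed
```

The key property of this reading is that nothing ever comes off the list: established-PI apps below the line stay triaged, and the ESI apps get added on top, even when their preliminary scores are worse.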

You will anticipate my skepticism. At the time, and this was years ago by now, I thought it was a ridiculous and useless move. Because once the preliminary scores were in that range, they were very unlikely to move. And it did nothing to address the presumed bias that put those ESI/NI scores so much lower, on average, than they should have been. It was a silly dodge to keep the aggregate numbers up without doing anything about the fundamental outcome: fundable, or not fundable?

HOWEVER.

The rumour I have heard is that some SROs have been interpreting this rule to mean that only 50% of ESI/NI apps should be discussed. A critical distinction from “at least”, which was my prior understanding of the policy and certainly how it was applied in any study sections I participated in. Under this new interpretation of the policy, there would potentially be established-investigator applications being discussed which had preliminary scores worse than some ESI applications that were not being discussed.
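In terms of the same toy sketch from above (again, entirely hypothetical and just to pin down the distinction), the rumoured reading turns the floor into a cap:

```python
def discussed_with_esi_cap(apps, discuss_fraction=0.5):
    """The rumoured (mis)reading: exactly discuss_fraction of the ESI/NI
    apps get discussed, no more. Remaining slots go to established-PI
    apps by score."""
    esi = sorted((a for a in apps if a[2]), key=lambda a: a[1])
    non_esi = sorted((a for a in apps if not a[2]), key=lambda a: a[1])
    picked = esi[:round(len(esi) * discuss_fraction)]  # hard cap on ESI slots
    n_total = round(len(apps) * discuss_fraction)
    picked += non_esi[:max(0, n_total - len(picked))]  # fill out the roster
    return {a[0] for a in picked}
```

Run both functions over the same set of preliminary scores and the difference is the whole point: the floor only ever adds ESI/NI apps to the discussion list, while the cap can bump an ESI app that outscored an established-PI app which still gets discussed.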

This is so backwards that it burns.

Nice of you to notice!

November 1, 2012

We have touched on the Investigator criterion of NIH grant review in the past. I observe that it has little dynamic range because for the most part the investigators applying for grants are very accomplished. So when your summary statement talks about how awesome you are….

I take mine with a grain of salt. It has to be in the nature of “uniquely qualified to conduct these studies” before I get too stoked.

It is relatively rare that, by intent or accident, a reviewer manages to hit on phrasing that melds with my own view of my accomplishments. And lauds me very subtly on that basis. Not to reveal too many details but it has to do with the ability to get shit done under adverse scientific conditions. A recognition that I’ve done so on a consistent basis over an extended interval of time. So that feels pretty good.

Most importantly, however, this set of reviews appears to recognize that when I propose to do X, it is a good bet that I am going to accomplish something reasonably close to X. With the resources I’ve proposed to deploy for the project.

According to a blog entry at Nature, the NIH is reconsidering its policy limiting applicants to a single revision of an unfunded grant proposal.

Good. They should do so. Unless, as I’ve said repeatedly, they can show that the increase in the percentage of grants funded on the first try is driven by genuine “first” tries, rather than by ideas which have been previously reviewed in a different guise.

Grumpy

October 3, 2012

I dunno.

I liked the manuscript I just reviewed okay, but it wasn’t unusually awesome or anything.

But I just had the strongest desire to rebut the Third Reviewer and the Associate Editor to the Editor in Chief.

I thought they were being a bit too demanding and harsh.

And the other two reviews (one of them mine) were hardly softballs.

Meh.

Back to my grant writing.

Oh, this is a good one.

@boehninglab asks:

Has anyone heard of study section tanking a grant b/c of too much % effort listed (in particular, 40%)? @drugmonkeyblog #NIH

I have no specific recollection of such a thing, but it does strike a faint chord of memory. Suffice it to say that I am not surprised one bit if this has occurred.

If anyone has seen such a thing go down in a study section (or received such comments on a summary statement) I would be fascinated to hear the rationale that was advanced.

Is this an attack on soft-money faculty?

I have definitely seen criticisms that not enough effort was being devoted, but that has typically been in the realm of supporting BSD investigators at 3% or the relatively junior PI at something below 15% effort. The criticism over too much effort seems to contrast with this.

Potnia Theron observed that journals which impose limits on the number of citations that can be included in a manuscript are getting it wrong.

I agree, totally ridiculous. If the manuscript is egregiously overcited, the editorial and review process can handle it.

The drawback of such a policy is palpable. It will necessarily prioritize particular papers (Glamour? First?) and obscure others. It hinders citation thread-pulling, which is an essential feature of scholarly reading. As such, it will slow the pace of science.

There’s a good one up at Retraction Watch. An author suggested reviewers for his manuscripts using email addresses he had access to, then supplied his own reviews. Apparently suspicions were aroused by the 24-hour turnaround of the reviews, an obvious sign of fakery.

My learned colleague Odyssey opined that this situation strengthened his resolve to never select the suggested reviewers when acting as an Associate Editor.

I think this is ridiculous. A few bad apples, blah, blah. But more importantly, it seems a matter of simple fairness that if a journal is going to request suggestions for reviewers, then it should use them. And not, as Odyssey suggests, as an exclusion list.

I think AEs should use one and only one of the suggestions.

The Proceedings of the National Academy of Sciences has an interesting manuscript submission process.

Apart from allowing NAS members to “contribute” a paper from their own lab that they’ve gotten peer-reviewed themselves, there is a curious distinction for more normal submissions.

The pre-arranged editor track permits you to find a PNAS editor before you submit your manuscript. Presumably a friendly editor.

In the best case it is similar to a pre-submission inquiry practiced formally or informally at the GlamourMags. In the worst case, an end run around “pure” peer-review via the Insider’s Club.

(The end run being as benign as simply avoiding the desk-reject and as pernicious as getting a gamed peer-review.)

But is this any different from other journals? GlamourEditors require some buttering up. They brag in unguarded moments about how much they’ve “worked with” the authors to make the paper awesome. So many of those papers end up functionally identical to having a pre-arranged editor who has agreed to handle the manuscript.

In pedestrian-journal land, one can easily go Editor hunting. If a host of journals sort-of fit, and the IFs are indistinguishable, then it behooves the authors to seek a journal with a friendly Associate Editor. And to ask for that person in the many submission systems that permit such requests.

So how does the PNAS system really differ?

In fact, you might see that as being more honest and transparent.

Reviews

July 10, 2012

Do you save the manuscript reviews you’ve written?

I’ve never purged that directory.

I have no idea why not.

The manuscript peer review process is supposed to be secret, for the most part. The authors are not to know who reviewed their manuscript…this is generally for protection against potential retaliation and for the corresponding expectation of unfettered evaluation.

Yet one often has conversations with one’s fellow scientists at conferences where it becomes obvious that the other person reviewed your manuscript. Or that you reviewed theirs.

I find, especially lately, that this is *good* for science. You can discuss the issues with the person. Naturally this is only in cases for which the reviewer wasn’t a total hater…don’t think I’ve had that conversation yet!

It is common enough in the manuscript peer review process. You have submitted your best professional analysis of the manuscript and then the dang editor proceeds to ignore you.

Does this bother you?

Is it worse or better when your opinion is to reject, or to accept?

Do you go by the reviewer box score and remain unconcerned if it is 2/3 against your review? Or do you insist those other two idiots missed all the key points?

Honestly, I never know.

NIAID has a feature article on the R21 posted.

They claim doe-eyed failure to understand why most PIs include preliminary data, even as they show that only one of their funded R21s failed to include such data.

Maybe my head will stop shaking in disbelief by tomorrow. Maybe.

There are, to my thinking, two versions of the cover letter you send with a manuscript.

1) Short ‘n Sweet: Dear Editor, this is about blah, de and blah, which is significant because zippede. I think this will interest your readers. Sincerely, R. E. Squirrel


2) The Fluff Job. This is the one that goes on for two pages about how awesome the paper is and why it totally is new and solves cold fusion and shit like that.


I have always been a Short and Sweet kind of guy.


It has only recently come to my attention that people go ON with their cover letters.


WTH?


Do any of y’all with Associate Editor or EIC-type experience read those long-winded letters, or do you just go straight to the Abstract?


Bad Timing

February 9, 2012

One occasionally puts the pressure on to submit one’s paper on topic X with enough lead time to have a prayer of a decision just prior to submitting a grant proposal on X.

Unfortunately this may mean that the manuscript reviewers are busy trying to wrap up their own grant applications over the same timeframe.

Guess which job takes priority?

Grrrrr….

In other news, have you ever submitted a manuscript to a particular journal in the hopes that Associate Editor Jones who just so happens to be on a particular study section will see that it exists?

As you are aware, calls to boycott submitting articles to, and reviewing manuscripts for, journals published by Elsevier are growing. The Cost of Knowledge petition stands at 4694 signatures as of this writing. Of these, some 623 signatories have identified themselves as being in Biology, 380 in Social Sciences, 260 in Medicine and 126 in Psychology.

These disciplines cover the sciences and the scientists I know best, including my own work.

There seems to be some dismay in certain quarters over the level of participation from these disciplines. This is based, I would assume, on a seat-of-the-pants sense that there are way more active scientists in these disciplines than are represented by signatures on the petition. Also, I surmise, on the host of journals published by Elsevier that cater to various aspects of these broader disciplinary categories.

Others have pointed out that in certain cases, such as Cell or The Lancet, there is no way a set of authors are going to give up the cachet of a possible paper acceptance in that particular journal.

I want to address some more quotidian concerns.

I already mentioned the notion of academic societies which benefit from their relationship with Elsevier. Like it or not, Elsevier hosts a LOT of society journals. Sometimes this is just ego and sometimes the society might really be making some cha-ching from the relationship. For those scientists who really love the notion that their society has its own journal, this needs to be addressed before they will get on board with a boycott.

Moving along, we deal with the considerations that go into selection of a journal to publish in. Considerations that are not driven by Impact Factor, since within the class of society journals such concerns fade. The IFs are all really close, even if the journals do like to brag about incremental improvement, or about their numerical advantage over a competitor. Yes, 4.5 is better than 4.3, but c’mon. Other factors come into play.

Cost: Somewhere or other (was it Dr. Zen?) someone in this discussion brought up the notion that paying Open Access fees upfront is a big stumbling block. Yes, in one way or another the taxpayers (state and federal in the US) are footing the bill, but from the perspective of the PI, increasing library fees to the University don’t matter. What matters are the Direct Cost budgets of her laboratory (and possibly the Institutional funds budget). Sure, OA journals allow you to ask for a fee waiver…but who knows if they will give it? Why would you go through all that work (and time) to get the manuscript accepted just to have to pull it if they refuse to let you skip out on the fee? I mean, heck, $1,000 is always handier to have in the lab than being shunted off to the OA publisher, right? I don’t care how many R01s you have…

Convenience: The online manuscript handling system of Elsevier is good. I’ve had experience with a few others, ScholarOne-based systems, etc. Just heard a complaint about the PLoS system on the Twitts the other day, as it happens. Bottom line is that the Elsevier one works really well. Easy file uploading, fast PDF creation, reasonably workable input of all the extraneous info… and good progress/status updating as the manuscript undergoes peer review and decision-making at the editorial offices. This is not the case for all other publishers/journals. And what can I say? I like easy. I don’t like fighting with file uploads. I don’t like constantly having to email the managing editorial team to find out if my fucking manuscript is out for review, back from review, sitting on the Editor’s desk or what. And yeah, we didn’t have that info back in the day. And knowing the first two reviews are in but the journal is still waiting for the third one doesn’t really change a damn thing. But you know what? I like to see the progress.

Audience: One of the first things I do, when considering submitting to a journal in which I do not usually publish, is to keyword search for recent articles. Do they publish stuff like the one we’re about to submit? If yes, then I feel more comfortable in a general sense about editorial decision making and the selection of relevant reviewers. If no…well, why waste the time? Why start off with the dual problem of arguing the merits of both the specific paper and the general topic of interest? Now note, this is not always a valid assumption. I have a clear example in which the journal description seemed to encompass our work…but if you looked at the papers they generally published you’d think we were crazy to submit there. “But they only publish BadgerDigging Studies, not a BunnyHopper to be seen” you’d say. Well, turns out we didn’t have one lick of trouble about topic “fit” from that journal. Go figure. But even with that experience under my belt, I’m still gonna hesitate.

Editor (friendly): Yes, yes, I frequently point out how stupid and wrong we are when trying to game out who is going to respond favorably to our grant proposals. Same thing holds for paper review. But still. I can’t help but feel that I’ve gotten more editorial rulings going my way from editors that I know personally, know they know my work/me and suspect that they are at least 51% favorable towards me/my submissions. The hit rate from people that I’m pretty convinced don’t really know who I am seems somewhat reduced. So yeah, you are damn right I am going to scrutinize the Editorial board of a journal for signs of a friendly name.

Editor (unfriendly): Again, I know it is a fool’s errand. I know that just because I think someone is critical of our work, or has a personal dislike for me, this means jack-all. Heck, I’ve probably given really nice manuscript and/or grant reviews to scientists who I personally think are complete jerks, myself. But still… it is common enough that biomedical scientists see pernicious payback lurking behind every corner. Perhaps with justification?

I don’t intend to just stay mad, but to get fucken EVEN the next time I’m reviewing one of theirs. Which will fucken happen. It will.

So yeah, many biomedical scientists are going to put “getting the damn paper accepted already” way up above any considerations about Elsevier’s support for closing off access to tax-payer funded science. Because they feel it is not their fight, yes, but also because it has the potential to cost ’em. This is going to have to be addressed.

On a personal note, PLoSONE currently fails the test. There are some papers starting to come out in the substance abuse and behavioral pharmacology areas. Some. But not many. And it is hard to get a serious feel for the whole mystique over there about “solid study, not concerned about impact”. Because opinions vary on what represents a solid demonstration. Considerably. Then I look at the list of editors that claim to handle substance abuse. It isn’t extensive, and I note at least a few… strong personalities. Surely these individuals are going to trigger friendly/unfriendly issues for different scientists in their fields. Even worse, however, is the fact that many of them are not listed as having edited any papers published in PLoSONE yet. And that would be totally concerning to me were I considering submitting to that journal instead of one of the many Elsevier titles that might work for us.