Do you decide whether to accept a manuscript for review based on the journal that is asking?

To what extent does this influence your decision to take a review assignment?


PubMed has finally incorporated a comment feature: PubMed Commons.

NCBI has released a pilot version of a new service in PubMed that allows researchers to post comments on individual PubMed abstracts. Called PubMed Commons, this service is an initiative of the NIH leadership in response to repeated requests by the scientific community for such a forum to be part of PubMed. We hope that PubMed Commons will leverage the social power of the internet to encourage constructive criticism and high quality discussions of scientific issues that will both enhance understanding and provide new avenues of collaboration within the community.

This is described as a beta test and, so far as I can tell, is for now open only to authors of articles already listed in PubMed.

Perhaps not as Open as some would wish, but it is a pretty good start.

I cannot WAIT to see how this shakes out.

The Open-Everything, RetractionWatch, ReplicationEleventy, PeerReviewFailz, etc. acolytes of various strains would have us believe that this is the way to save all of science.

This step by PubMed brings online commenting to the best place, i.e., where everyone searches out the papers, instead of to the commercially beneficial place. It will link, I presume, the commentary to the openly-available PMC version once the 12 month embargo elapses for each paper. All in all, a good place for this to occur.

I will be eager to see if there is any adoption of commenting, to see the type of comments that are offered and to assess whether certain kinds of papers get more commentary than others. All in all, this is going to be a neat little experiment for the conduct-of-science geeks to observe.

I recommend you sign up as soon as possible. I’m sure the devout and TrueBelievers would beg you to make a comment on a paper yourself so, sure, go and comment on some paper.

You can search out commented papers with this string, apparently.

In case you are interested in seeing what sorts of comments are being made.

Anyone who thinks this is a good idea for the biomedical sciences has to have served as an Associate Editor for at least 50 submitted manuscripts or there is no reason to listen to their opinion.

Repost: Study Section, Act I

February 11, 2013

I think it has been some time since I last reposted this. This originally appeared Jun 11, 2008.

Time: February, June or October
Setting: The Washington Triangle National Hotel, Washington DC

    Dramatis Personæ:

  • Assistant Professor Yun Gun (ad hoc)
  • Associate Professor Rap I.D. Squirrel (standing member)
  • Professor H. Ed Badger (standing member, second term)
  • Dr. Cat Herder (Scientific Review Officer)
  • The Chorus (assorted members of the Panel)
  • Lurkers (various Program Officers, off in the shadows)


Occasionally during the review of careers or grant applications you will see dismissive comments on the journals in which someone has published their work. This is not news to you. Terms like “low-impact journals” are wonderfully imprecise and yet deliciously mean. Yes, it reflects the fact that the reviewer himself couldn’t be bothered to actually review the science IN those papers, nor to acquaint himself with the notorious skew of real-world impact that exists within and across journals.

More hilarious to me is the use of the word “tier”. As in “The work from the prior interval of support was mostly published in second tier journals…“.

It is almost always second tier that is used.

But this is never correct in my experience.

If we’re talking Impact Factor (and these people are, believe it) then there is a “first” tier of journals populated by Cell, Nature and Science.

In the Neurosciences, the next tier is a place (IF in the teens) in which Nature Neuroscience and Neuron dominate. No question. THIS is the “second tier”.

A jump down to the IF 12 or so of PNAS most definitely represents a different “tier” if you are going to talk about meaningful differences/similarities in IF.

Then we step down to the circa IF 7-8 range populated by J Neuroscience, Neuropsychopharmacology and Biological Psychiatry. Demonstrably fourth tier.

So for the most part, when people are talking about “second tier journals” they are probably down at the FIFTH tier: IF 4-6, in my estimation.

I also argue that the run-of-the-mill society-level journals extend below this fifth tier into a “rest of the pack” zone, for which there is a meaningful perception difference from the fifth tier. So…. Six tiers.

Then we have the paper-bagger dump journals. Demonstrably a seventh tier. (And seven is such a nice number isn’t it?)

So there you have it. If you* are going to use “tier” to sneer at the journals in which someone publishes, for goodness sake do it right, will ya?

*Of course it is people** who publish frequently in the third and fourth tiers, and only rarely in the second, who use “second tier journal” to refer to what is in the fifth or sixth tier of IFs. Always.

**For those rare few that publish extensively in the first tier, hey, you feel free to describe all the rest as “second tier”. Go nuts.
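For the record, the whole scheme boils down to a lookup table. Here is a toy sketch; the numeric IF cutoffs are my own rough guesses from the tiers above, not any official classification:

```python
# Toy mapping from journal impact factor to the seven-tier scheme above.
# Cutoffs are illustrative assumptions, not official IF boundaries.
def journal_tier(impact_factor):
    cutoffs = [
        (30, 1),  # Cell, Nature, Science
        (13, 2),  # Nature Neuroscience, Neuron (IF in the teens)
        (10, 3),  # PNAS (IF ~12)
        (7, 4),   # J Neuroscience, Neuropsychopharmacology, Biol Psychiatry
        (4, 5),   # where most "second tier" sneers actually land (IF 4-6)
        (2, 6),   # run-of-the-mill society journals
    ]
    for cutoff, tier in cutoffs:
        if impact_factor >= cutoff:
            return tier
    return 7      # the paper-bagger dump journals
```

Feed it the IF of the journal someone is sneering at and you can at least sneer with the correct ordinal.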

GrantRant XI

January 31, 2013

Combative responses to prior review are an exceptionally stupid thing to write. Even if you are right on the merits.

Your grant has been sunk in one page you poor, poor fool.

The glass is half full

January 31, 2013

There is nothing like a round of study section to make you wish you were the Boss of ALL the Science.

There is just soooo much incredible science being proposed. From noob to grey beard the PIs are coming up with really interesting and highly significant proposals. We’d learn a lot from all of them.

Obviously, it is the stuff that interests me that should fund. That stuff those other reviewers liked we can do without!

Sometimes I just want to blast the good ones with the NGA gun and be done.

(NGA: Notice of Grant Award)

GrantRant VIII

January 19, 2013

Do not EVER spend so much time geeking away about the amazingly swell trees that you will be characterizing that you forget to convince the reviewer that the forest itself holds any interest. And I mean ANY interest….. Seriously, dudes, I’m trying to help you out here but you are giving me absolutely nothing to work with. There is barely any point in me even reading your experimental manipulations…. I can tell already there is no overall justification for doing them in the first place!

Damaged Goods

January 14, 2013

Have you ever had a manuscript severely damaged by the process of peer review?

By way of example, I can recall one time when the Editor demanded I chop off two experiments…and I did so*.

Otherwise, I’m generally of the opinion that peer review has a positive impact on the manuscript.

*Those figures have yet to see the light of day and may never get published. A shame, but then, we got the paper published and the main point was one of the other figures.

GrantRant VII

January 10, 2013

Everyone is going to hate you, pretty much.

Think about it. You have 7-10 grants assigned in your pile on a typical study section these days. Odds are good that at best one or two of these are going to be good enough to be in the hunt for funding. The rest of the panel is in the same boat, so it really doesn’t matter that the applicants don’t know precisely which of you* on the panel reviewed his or her proposal.

80-90% of the applicants are going to be mad at you.

Since you have been selected for expertise in the relevant field…these are people who you know. You know their work and you probably like and cite it. They know you. They know your work.

And for at least a while after they see their disappointing score, and for another while after the pink sheets are posted, they cannot help but hate you a little.

Maybe even a lot.


*If you were triaged you do know for absolute sure that every member listed on that panel roster stood by and refused to pull your application up for discussion.

Back in the distant past, younguns, the US was involved in a struggle with the Soviet Union that many felt was an existential threat to our continuation on this planet. Among other features, this Cold War (perhaps better termed Ongoing Proxy War) featured the buildup of ecosphere destroying megaweapon bombs.

The fuzzy blankie we used to keep from going insane was the thought that since both sides could destroy huge amounts of the other side’s population, render much of its territory uninhabitable, and could do so should the other side move first, we were safe.

Since we were mutually assured to destroy each other, the logic of starting some serious beef was an insane one. Nobody in their right mind would actually do such a thing. So this kept certain behaviors (like the hilariously NewSpeak “pre-emptive counter-strike” with nuclear weapons) off the table.

In discussions of NIH Grant review, there is often a certain paranoia voiced that members of the review panel use this position of tremendous power to screw over their scientific rivals. Sounds plausible, does it not? After all, this grant stuff is a zero-sum game and the “peers” of peer review are after the same pool of money that each applicant is eyeing. These days it is a good bet that the reviewer has her own application under review elsewhere in the CSR…or has one pending funding in this self-same Fiscal Year.

That’s before we get to scientific competition to publish papers in some research area first. We all know that first is best and all others might as well go home, right? And any rational grant funding agency (don’t laugh) like the NIH should diversify its portfolio such that if it funds a grant on a topic, the chances of funding another one on nearly the same topic should be lower.

Naturally, the closer the reviewer expertise is to the grant in question, the closer this reviewer is to being in direct conflict of interest at some level.

My first approach to comforting the distraught Assistant Professor is to emphasize that our peers are professionals, with some degree of ethical centeredness who are for the most part attempting to do the job as asked.

This doesn’t comfort everyone. So today I offer the Mutually Assured Destruction theory for your consideration.

One of the most surprising things I found about study section service was the rapidity and surety with which payback opportunity was provided. During the early days of my service, many of the grants in my piles to review had been submitted by PIs who had previously served on study section panels reviewing my own proposals. After I’d been reviewing for a little while, it was remarkable how quickly people whose grants had appeared in study sections I was on (and in some cases apps to which I had been assigned) were in a position on panels reviewing other grants of mine.

I came away from all of this with the understanding that what goes around comes around VERY quickly in NIH grant review.

So for the paranoid types…do consider this additional source of pressure on the reviewer. If you don’t trust their professionalism, trust in their self-interest. This Mutual Assurance tends to suggest that reviewers would be crazy to screw with applicants out of pure self-interested bias.

GrantRant V

January 8, 2013

A grant review subculture established in subfields where not much happens between grant submission and review has difficulty dealing with an exploding topic.

In the general case, it seems slightly unfair to kill a proposal over the four papers that have appeared after the poor sucka PI submitted the application.

Grant Rant IV

January 7, 2013

A comment I made about grants being “saved” in discussion reminded me of one of the first experiences I had on study section in this regard. I can’t go into too many details but suffice it to say I battered a couple of cultural memes/expectations about scoring within a particular study section at a time when I was still on the earlier side of my independent career.

I hadn’t really paid too much attention to the PI or the project, save a notice when the name turned up on a study section roster. Today I took the trouble to wander over to RePORTER and check up on the Results tab for the grant.

That grant and the PI have exhibited excellent productivity ever since I fought for it.

I love being right.

Ahh, reviewers

December 13, 2012

One thing that cracks me up about manuscript review is the reviewer who imagines that there is something really cool in your data that you are just hiding from the light of day.

This is usually expressed as a demand that you “consider” a particular analysis of your data. In my work, behavioral pharmacology, it can run to the extent of wanting to parse individual subjects’ responses. It may be a desire that you approach your statistical analysis differently, changing the number of factors included in your ANOVA, a suggestion you should group your data differently (sometimes if you have extended timecourses such as sequential drug self-administration sessions, etc, you might summarize the timecourse in some way) or perhaps a desire to see a bunch of training, baseline or validation behavior that…

….well, what?

Many of these cases that I’ve seen show the reviewer failing to explain exactly what s/he suspects would be revealed by this new analysis or data presentation. Sometimes you can infer that they are predicting that something surely must be there in your data and for some reason you are too stupid to see it. Or that you are pursuing some agenda and (again, this is usually only a hint) are suspected of covering up the “real” effect.

Dudes! Don’t you think that we work our data over with a fine-toothed comb, looking for cool stuff that it is telling us? Really? Like we didn’t already think of that brilliant analysis you’ve come up with?

Didya ever think we’ve failed to say anything about it because 1) the design just wasn’t up to the task of properly evaluating some random sub-group hypothesis or 2) the data just don’t support it, sorry. or 3) yeah man, I know how to validate a damn behavioral assay and you know what? nobody else wants to read that boring stuff.

and friends, the stats rules bind you just as much as they do us. You know? I mean think about it. If there is some part of a subanalysis or series of different inferential techniques that you want to see deployed you need to think about whether this particular design is powered to do it. Right? I mean if we reported “well, we just did this ANOVA, then that ANOVA…then we transformed the data and did some other thing…well maybe a series of one-ways is the way to go….hmm. say how about t-tests? wait, wait, here’s this individual subject analysis!” like your comments seem to be implying we should now do…yeah that’s not going to go over well with most reviewers.
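The arithmetic behind that complaint is worth spelling out: under the null hypothesis every test’s p-value is uniform on (0,1), so running many different analyses on pure noise makes a “significant” hit nearly inevitable. A minimal simulation (the choice of 20 tests and alpha = 0.05 is illustrative, not from any specific study):

```python
# Sketch of why analysis-shopping inflates false positives: on pure noise,
# the chance that at least one of many tests comes up "significant" is high.
import random

random.seed(1)

def any_false_positive(n_tests, alpha=0.05):
    # Under the null, each test's p-value is Uniform(0, 1),
    # so each test is "significant" with probability alpha.
    return any(random.random() < alpha for _ in range(n_tests))

trials = 10000
hits = sum(any_false_positive(20) for _ in range(trials))

print(hits / trials)           # simulated family-wise error rate
print(1 - (1 - 0.05) ** 20)    # analytic value: ~0.64
```

Roughly two times in three, at least one of the twenty analyses “works” on data containing no effect at all. That is the trap the reviewer’s suggested fishing expedition walks you into.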

So why do some reviewers seem to forget all of this when they are wildly speculating that there must be some effect in our data that we’ve not reported?

I’m attending a meeting that is enriched in the older and established luminary type of scientist. Relative to more….democratic academic meetings.

I’ve seen the head of the CSR of the NIH here two years in a row. Now, I don’t know how many meetings this person attends in a year, perhaps it is dozens. I bet not though.

Which means this crowd gets an extra special opportunity to ‘splain what is wrong with grant review and how to fix it.

I guarantee it is mostly about what is good for them with their huge labs and well established programs, and much less about what is good for the riff-raff like us, Dear Reader.