Grant awards and the new, new NIH Biosketch
June 2, 2022
Way back in 2015 the NIH made some major changes to the Biosketch. As detailed in this post, one of the major changes was replacing the long list of publications with a “contribution to science” section which was supposed to detail up to five areas of focus with up to four papers, or other research products, cited for each contribution. Some of the preamble from NIH on this suggests it was supposed to be an anti-Glamour measure. Sure. There was also an inclusion of a “personal statement” which was supposed to be used to further brag on your expertise as well as to explain anything…funny… about your record.
In dealing with the “contributions to science” change, I more or less refused to do what was requested. As someone who had been a PI for some time, had mostly senior author pubs and relatively few collaborative papers, I could do this. I just made a few statements about an area I have worked in and listed four papers for each. I didn’t describe my specific role as instructed. I didn’t really describe the influence or application to health or technology. So far this has gone fine, as I can’t remember any comments on Investigator on grants I’ve submitted with this new (old) Biosketch that appear confused about what I have done.
The NIH made some other changes to the Biosketch in 2021, the most notable of which was the removal of the list of Research Support that was previously in Section D. I pointed out in a prior post that I suspect this was supposed to be an attempt to break a specific culture of peer review. One that had hardened reviewers and applicants against the longstanding intent of the NIH. It is very clear in the prior instructions that Section D was not supposed to list all active and completed funding over the past three years. The NIH instructed us to only include that which we wanted to call attention to and added the note that it was for reviewers to assess qualifications of the research team for the new project being proposed. They further underlined this by instructing applicants not to confuse this with the Other Support page which was explicitly for reporting all funding. This failed entirely.
As we have said many times, many ways on this blog…. woe betide any poor newbie applicant who takes the instructions about other grant support at face value and omits any funding that can be easily found on funder websites or the investigator’s lab or University splash page. Reviewers will get in a high dudgeon if they think the PI is trying to conceal anything about their research support. This is, I will assert, because they either overtly or covertly are interested in two things. Neither of which the NIH wants them to be interested in.
One, does the PI have “too much money” in their estimation. The NIH is absolutely opposed to reviewers letting their evaluation of proposal merit be contaminated with such concerns but….people are people and jealousy reigns supreme. As do self-righteous feelings about how NIH funds should be distributed. So…review, in practice, is biased in a way that the NIH does not like.
The second concern is related, but is about productivity and is therefore slightly more palatable to some. If the recitation of funding is selective, the PI might be motivated to only present projects that have been the most productive or led to the most Glammy papers. They might also be motivated to omit listing any projects which have, by some views, under-produced. This is a tricky one. The instructions say reviewers will look at what the PI chooses to list on the Biosketch as evidence of their overall qualifications. But. How can a reviewer assess qualifications only from the projects that went amazingly well without also assessing how many tanked, relatively speaking? Or so would think a reviewer. The NIH is a little more wobbly on this one. “Productivity” is a sort-of tolerated thing and some analyses of papers-per-grant-dollar (e.g. from NIGMS) show their interest, at least from a Program policy perspective. But I think overall that Program does not want this sort of reviewer bean counting to contaminate merit review too much- the Biosketch instructions insist that the direct costs should not be included for any grants that are mentioned. Program wants to make the calls about “too much money”.
Ok so why am I blogging this again today? Well, we’re into the second year of the new, new attempt of NIH to get the list of grants on the Biosketch more selective. And I’m thinking about how this has been evolving in grants that I’ve been asked to review. Wait..”more selective”? Oh yes, the list of grants can now be added to Section A, the Personal Statement. With all of the same language about how this is only for ongoing or completed projects “that you want to draw attention to“. NOT-OD-21-073 even ties this new format description to the re-organization of the Other Support page, again making it clear that these are not the same thing.
So the question of the day is, how are applicants responding? How are reviewers reacting to various options taken by applicants?
I put in my first few applications with the grant list simply removed. I added a statement to Section A summarizing my total number of intervals of competitive support as PI and left it at that. But I’ve seen many applicants who put all their grants in Section A, just as they would have put them in Section D before.
I guess I had better do the same?
Reconsidering “Run to Daylight” in the Context of Hoppe et al.
January 7, 2022
In a prior post, A pants leg can only accommodate so many Jack Russells, I had elucidated my affection for applying Vince Lombardi’s advice to science careers.
Run to Daylight.
Seek out ways to decrease the competition, not to increase it, if you want to have an easier career path in academic science. Take your considerable skills to a place where they are not just expected value, but represent near miraculous advance. This can be in topic, in geography, in institution type or in any other dimension. Work in an area where there are fewer of you.
This came up today in a discussion of “scooping” and whether it is more or less your own fault if you are continually scooped, scientifically speaking.
The trouble is that, despite the conceits of study section review, the NIH system does NOT tend to reward investigators who are highly novel solo artists. It is seemingly obligatory for Nobel Laureates to complain about how some study section panel or other passed on their grant which described the plans to pursue what became the Nobel-worthy work. Year after year a lot of me-too grants get funded while genuinely new stuff flounders. The NIH has a whole system (RFAs, PAs, now NOSI) set up to beg investigators to submit proposals on topics that are seemingly important but nobody can get fundable scores to work on.
In 2019 the Hoppe et al. study put a finer and more quantitatively backed point on this. One of the main messages was the degree to which grant proposals on some topics had a higher success rate and some on other topics had lower success rates. You can focus on the trees if you want, but the forest is all-critical. This has pointed a spotlight on what I have taken to calling the inherent structural conservatism of NIH grant review. The peers are making entirely subjective decisions, particularly right at the might-fund/might-not-fund threshold of scoring, based on gut feelings. Those peers are selected from the ranks of the already-successful when it comes to getting grants. Their subjective judgments, therefore, tend to reinforce the prior subjective judgments. And of course, tend to reinforce an orthodoxy at any present time.
NIH grant review has many pseudo-objective components to it which do play into the peer review outcome. There is a sense of fair-play, sauce for the goose logic which can come into effect. Seemingly objective evaluative comments are often used selectively to shore up subjective, Gestalt reviewer opinions, but this is in part because doing so has credibility when an assigned reviewer is trying to convince the other panelists of their judgment. One of these areas of seemingly objective evaluation is the PI’s scientific productivity, impact and influence, which often touches on publication metrics. Directly or indirectly. Descriptions of productivity of the investigator. Evidence of the “impact” of the journals they publish in. The resulting impact on the field. Citations of key papers….yeah it happens.
Consideration of the Hoppe results and the Lauer et al. (2021) description of the NIH “funding ecology”, in the light of the original Ginther et al. (2011, 2018) investigation into the relationship of PI publication metrics to funding outcomes, is relevant here.
Publication metrics are a function of funding. The number of publications a lab generates depends on having grant support. More papers is generally considered better, fewer papers worse. More funding means an investigator has the freedom to make papers meatier. Bigger in scope or deeper in converging evidence. More papers means, at the very least, a trickle of self-cites to those papers. More funding means more collaborations with other labs…which leads to them citing both of you at once. More funding means more trainees who write papers, write reviews (great for h-index and total cites) and eventually go off to start their own publication records…and cite their trainee papers with the PI.
So when the NIH-generated publications say that publication metrics “explain” a gap in application success rates, they are wrong. They use this language, generally, in a way that says Black PIs (the topic of most of the reports, but this generalizes) have inferior publication metrics so this causes a lower success rate. With the further implication that this is a justified outcome. This totally ignores the inherent circularity of grant funding and publication measures of awesomeness. Donna Ginther has written a recent reflection on her work on NIH grant funding disparity, which doubles down on her lack of understanding on this issue.
Publication metrics are also a function of funding to the related sub-field. If a lot of people are working on the same topic, they tend to generate a lot of publications with a lot of available citations. Citations which buoy up the metrics of investigators who happen to work in those fields. Did you know, my biomedical friends, that a JIF of 1.0 is awesome in some fields of science? This is where the Hoppe and Lauer papers are critical. They show that not all fields get the same amount of NIH funding, and do not get that funding as easily. This affects the available pool of citations. It affects the JIF of journals in those fields. It affects the competition for limited space in the “best” journals. It affects the perceived authority of some individuals in the field to prosecute their personal opinions about the “most impactful” science.
That funding to a sub-field, or to certain approaches (technical, theoretical, model, etc, etc) has a very broad and lasting impact on what is funded, what is viewed as the best science, etc.
So is it good advice to “Run to daylight”? If you are getting “scooped” on the regular is it your fault for wanting to work in a crowded subfield?
It really isn’t. I wish it were so but it is bad advice.
Better advice is to work in areas that are well populated and well-funded, using methods and approaches and theoretical structures that everyone else prefers and bray as hard as you can that your tiny incremental twist is “novel”.
Peer review is merely advisory to the decision maker
November 2, 2021
How many times have you heard another academic scientist say “I rejected that manuscript…“. Or, “I accepted that manuscript….“? This is usually followed by some sort of criticism of an outcome for that manuscript that is inconsistent with their views on what the disposition should be. Most often ” I rejected that manuscript…but it was accepted for publication anyway, how dare they??!!??”
We somewhat less often hear someone say they “rejected” or “funded” a grant proposal…but we do hear disappointed applicants claim that one reviewer “killed my grant”.
This is, in general, inaccurate.
All, and I mean ALL, of the review input on NIH grants that takes place from receipt and referral through to the Advisory Council input (and whatever bean counting tetris puzzle fitting happens post-Council) is merely advisory to the Director. The IC Director is the deciderer.
Similarly, all peer review input to manuscripts is merely advisory to the Editor. In this case, there may be some variability in whether it is all being done at the Editor in Chief level, to what extent she farms that out to the handling sub-Editors (Associate, Senior, Reviewing, etc) or whether there is a more democratic discussion amongst a group of deciding editors.
What is clear, however, is that the review conducted by peers is merely advisory.
It can be the case that the deciding editor (or editorial process) sides with a 2-1 apparent vote. It could be siding with a 1-2 vote. Or overruling a 0-3 vote. Either for or against acceptance.
This is the process that we’ve lived with for decades. Scientific generations.
Yet we still have people expressing this bizarre delusion that they are the ones “accepting” or “rejecting” manuscripts in peer review. Is this a problem? Would it be better, you ask, if we all said “I recommended against accepting it”?
Yes. It would be better. So do that.
This post is brought to you by a recent expression of outrage that a paper was rejected despite (an allegation of) positive-sounding comments from the peer reviewers. This author was so outraged that they contacted some poor fool reviewer who had signed their name to the review. Outside of the process of review, the author demanded this reviewer respond. Said reviewer apparently sent a screen shot of their recommendation for, well, not rejection.
This situation then usually goes into some sort of outrage about how the editorial decision making process is clearly broken, unethical, dishonest, political….you know the drill. Bad.
For some reason we never hear those sorts of complaints from the authors when an editor has overruled the unfavorable reviewers and issued an acceptance for publication.
No, in those cases we hear from the outraged peer reviewer. Who also, on occasion, has been known to rant about how the editorial decision making process is clearly broken, unethical, dishonest, political….you know the drill. Bad.
All because we have misconstrued the role of peer review.
It is advisory. That is all.
A lesson for DEI strategies from the NIH ESI policy
March 1, 2021
The Director of the NIH, in the wake of a presentation to the Advisory Committee to the Director meeting, has issued a statement of NIH’s commitment to dismantle structural racism.
Toward that end, NIH has launched an effort to end structural racism and racial inequities in biomedical research through a new initiative called UNITE, which has already begun to identify short-term and long-term actions. The UNITE initiative’s efforts are being informed by five committees with experts across all 27 NIH institutes and centers who are passionate about racial diversity, equity, and inclusion. NIH also is seeking advice and guidance from outside of the agency through the Advisory Committee to the Director (ACD), informed by the ACD Working Group on Diversity, and through a Request for Information (RFI) issued today seeking input from the public and stakeholder organizations. The RFI is open through April 9, 2021, and responses to the RFI will be made publicly available. You can learn more about NIH’s efforts, actions, policies, and procedures via a newly launched NIH webpage on Ending Structural Racism aimed at increasing our transparency on this important issue.
This is very much welcome, coming along as it does, a decade after Ginther and colleagues showed that Black PIs faced a huge disadvantage in getting their NIH grants funded. R01 applications with Black PIs were funded at only 58% of the rate that applications with white PIs were funded.
Many people in the intervening years, accelerated after the publication of Hoppe et al 2019 and even further in the wake of the murder of George Floyd at the hands of the Minneapolis police in 2020, have wondered why the NIH does not simply adopt the same solution they applied to the ESI problem. In 2006/2007 the then-Director of NIH, Elias Zerhouni, dictated that the NIH would practice affirmative action to fund the grants of Early Stage Investigators. As detailed in Science by Jocelyn Kaiser
Instead of relying solely on peer review to apportion grants, [Zerhouni] set a floor—a numerical quota—for the number of awards made to new investigators in 2007 and 2008.
A quota. The Big Bad word of anti-equity warriors since forever. Gawd forbid we should use quotas. And in case that wasn’t clear enough
The notice states that NIH “intends to support new investigators at success rates comparable to those for established investigators submitting new applications.” In 2009, that will mean at least 1650 awards to new investigators for R01s, NIH’s most common research grant.
As we saw from Hoppe et al, the NIH funded 256 R01s with Black PIs in the interval from 2011-2015, or 51 per year. In a prior blog post I detailed how some 119 awards to poorer-scoring applications with white PIs could have been devoted to better-scoring proposals with Black PIs. I also mentioned how doing so would have moved the success rate for applications with Black PIs from 10.7% to 15.6% whereas the white PI success rate would decline from 17.7% to 17.56%. Even funding every single discussed application with a Black PI (44% of the submissions) by subtracting those 1057 applications from the pool awarded with white PIs would reduce the latter applications’ hit rate only to 16.7% which is still a 56% higher rate than the 10.7% rate that the applications with Black PIs actually experienced.
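To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. It uses only the figures quoted above; the application pool sizes are inferred from those quoted rates rather than taken from the Hoppe et al. tables, so expect small rounding differences.

```python
# Rough reallocation arithmetic using the numbers quoted in this post.
black_awards = 256      # R01 awards with Black PIs, FY2011-2015
black_rate = 0.107      # quoted award rate for applications with Black PIs
white_rate = 0.177      # quoted award rate for applications with white PIs
moved = 119             # poorer-scoring white-PI awards hypothetically reallocated

# Application pool sizes implied by the quoted rates (an inference, not a reported number).
black_apps = black_awards / black_rate            # roughly 2,400 applications
white_apps = moved / (white_rate - 0.1756)        # implied by the quoted 17.56% figure

new_black_rate = (black_awards + moved) / black_apps   # ~15.7%, vs 15.6% quoted
new_white_rate = white_rate - moved / white_apps       # ~17.6%

print(f"Black PI applications: {black_rate:.1%} -> {new_black_rate:.1%}")
print(f"White PI applications: {white_rate:.1%} -> {new_white_rate:.1%}")
```

The point of the sketch is just how lopsided the pools are: the same 119 awards that barely register on the white PI side amount to nearly a 50% improvement in the rate for applications with Black PIs.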
I have been, and continue to be, an advocate for stop-gap measures that immediately redress the funding rate disparity by mandating at least equivalent success rates, just as Zerhouni mandated for ESI proposals. But we need to draw a key lesson from that episode. As detailed in the Science piece
Some program directors grumbled at first, NIH officials say, but came on board when NIH noticed a change in behavior by peer reviewers. Told about the quotas, study sections began “punishing the young investigators with bad scores,” says Zerhouni. That is, a previous slight gap in review scores for new grant applications from first-time and seasoned investigators widened in 2007 and 2008, [then NIGMS Director Jeremy] Berg says. It revealed a bias against new investigators, Zerhouni says.
I don’t know for sure that this continued, but the FY 2012 funding data published by Kienholz and Berg certainly suggest that several NIH ICs continued to fund ESI applications at much lower priority scores/percentiles than were generally required for non-ESI applications to receive awards. And if you examine those NIH ICs pages that publish their funding strategy each year [see the writedit blog for current policy links], you will see that they continue to use a lower payline for ESIs. So. From 2007 to 2021 that is a long interval of policy which is not “affirmative action”, just, in the words of Zerhouni, “leveling the playing field”.
The important point here is that the NIH has never done anything to get to the “real reason” for the fact that early stage investigators’ proposals were being scored by peer review at lower priority than they, NIH, desired. They have not undergone spasms of reviewer “implicit bias” training. They have not masked the identity of the PI or done anything suggesting they think they can “fix” the review outcome for ESI PIs.
They have accepted the fact that they just need to counter the bias.
NIH will likewise need to accept that they will need to fund Black PI applications with a different payline for a very long time. They need to accept that study sections will “punish*” those applications with even worse scores. They will even punish those applications with higher ND (Not Discussed) rates. And to some extent, merely by talking about it this horse has left the stall and cannot be easily recalled. We exist in a world where, despite all evidence, white men regularly assert with great confidence that women or minority individuals have all the advantages in hiring and career success.
So all of this entirely predictable behavior needs to be accounted for, expected and tolerated.
__
*I don’t happen to see it as “punishing” ESI apps even further than whatever the base rate is. I think reviewers are very sensitive to perceived funding lines and to reviewing grants from a sort of binary “fund/don’t fund” mindset. Broadcasting a relaxed payline for ESI applications almost inevitably led to reviewers moving their perceived payline for those applications.
Rule Followers
February 12, 2021
As always Dear Reader, I start with personal confession so you know how to read my biases appropriately.
I am a life time Rule Follower.
I am also a life time Self-Appointed Punisher of Those Who Think the Rules Do Not Apply to Them.
What does “Rule Follower” mean to me? No, not some sort of retentive allegiance to any possible guideline or rule, explicit or implicit. I’ve been known to speed once in a while. It doesn’t even mean that rule followers are going to agree with, and follow, every rule imaginable for any scenario. It is just an orientation of a person who believes there are such things as rules of behavior, that these rules are good things as a social or community compact and that it is a good idea to adhere to them as a general rule. It is a good idea to work within the rules, and doing so is what is best for society, but also for the self.
The other kind of person, the “Rules Don’t Apply to ME” type, is not necessarily a complete sociopath*. And, in fact, such people may actually be a Rule Follower when it comes to the really big, obvious and Important (in their view) rules. But these are people that do not agree that all of the implicit social rules that Rule Followers follow actually exist. They do not believe that these rules apply to them, and often extend that to the misdemeanor sort of actual formal Rules, aka The Law.
Let’s talk rules of the road- these are the people who routinely speed, California Stop right on reds, and arc into the far lane when making a turn on a multi-lane road. These are the people that bypass a line of patiently waiting traffic and then expect to squeeze into the front of the line with an airy “oops, my badeee, thanks!” smile and wave. They are the ones that cause all sorts of merging havoc because they can’t be arsed to simply go down to the next street or exit to recover from their failure to plan ahead. These are often the people who, despite living in a State with very well defined rules of the road for bicycle traffic, self-righteously violate those rules as a car driver and complain about how the law-compliant bicycle rider is the one in the wrong.
But above all else, these people feel entitled to their behavior. It is an EXTREME OUTRAGE whenever they are disciplined in any way for their selfish and rude behavior that is designed to advantage themselves at the cost to (many) others.
If you don’t let them in the traffic line, you are the asshole. When you make the left turn into lane 2 and they barely manage to keep from hitting you as they fail to arc their own turn properly..you are the asshole. When they walk at you three abreast on the sidewalk and you eyeball the muppethugger trying to edge you off your single lane coming the other way and give every indication you are willing to bodycheck their selfish ass until they finally grudgingly rack it the fuck in…YOU are the asshole.
When they finally get a minor traffic citation for their speeding or failing to stop on a right on red… Oh, sister. It’s forty minutes of complaining rationalization about how unfair this is and why are those cops not solving real crimes and oh woe is me for a ticket they can easily pay. Back in the day when it was still illegal, this was the person caught for a minor weed possession citation who didn’t just pay it but had to go on at length about how outrageous it was to get penalized for their obvious violation of the rules. Don’t even get me started about how these people react to a citation for riding their bicycle on the sidewalk (illegal!) instead of in the street (the law they personally disagree with).
Back before Covid you could identify these two types by hanging around the bulk food bin at your local hippy grocery store. Rule Followers do not sample the items before paying and exiting the store. Those other people…..
Hopefully I’ve chosen examples that get you into the proper mindset of a complex interplay of formal rules that not everyone follows and informal rules of conduct that not everyone follows. I shouldn’t have to draw your attention to how the “Rules Don’t Apply to Me” sail along with convenient interpretations, feigned ignorances and post-hoc everyone-does-it rationales to make their lives a lot easier. That’s right, it’s convenient to not follow the rules, it gets them ahead and frankly those Rule Followers are beta luser cucks for not living a life of personal freedom!
We’re actually in the midst of one of these scenarios right now.
Covid vaccination
As you are aware, there are formal “tiers” being promulgated for who gets scheduled for vaccines at which particular time. You know the age cutoffs- we started with 75+ and are now at 65+ in most locations. Then there are the job categories. Health care workers are up first, and then we are working a cascade of importance given occupation. Well, in my environment we had a moment in which “lab workers” were greenlit and Oh, the science lab types rushed to make their appointments. After a short interval, the hammer came down because “lab” meant “lab actually dealing with clinical care and health assessment samples” and not just “any goofaloon who says they work in a lab”.
Trust me, those at the head of that rush (or those pushing as the lab head or institution head) were not the Rule Followers. It was, rather, those types of people who are keen to conveniently define some situation to their own advantage and never consider for a second if they are breaking the Rules.
Then there have been some vaccine situations that are even murkier. We’ve seen on biomedical science tweeter that many lab head prof types have had the opportunity to get vaccinated out of their apparent tier. It seemed, especially in the earlier days prior to vaccine super centers, that a University associated health system would reach the end of their scheduled patients for the day and have extra vaccine.
[ In case anyone has been hiding under a rock, the first vaccines are fragile. They have to be frozen for storage in many cases and thus thawed out. They may not be stable overnight once the vial in question has been opened. In some cases the stored version may need to be “made up” with vehicles or adjuvants or whatever additional components. ]
“Extra” vaccine in the sense of active doses that would otherwise be lost / disposed of if there was no arm to stick it in. Employees who are on campus or close by, can readily be rounded up on short notice, and have no reason to complain if they can’t get vaccinated that particular day, make up this population of arms.
Some Rule Followers were uncomfortable with this.
You will recognize those other types. They were the ones triumphantly posting their good luck on the internet.
In my region, we next started to have vaccine “super centers”. These centers recruited lay volunteers to help out, keep an eye on patients, assist with traffic flow, run to the gloves/syringe depot, etc. And, as with the original health center scenario, there were excess doses available at the end of the day which were offered to the volunteers.
Again, some Rule Followers were uncomfortable with this. Especially because in the early days it was totally on the DL. The charge nurse closest to you would pull a volunteer aside and quietly suggest waiting around at the end of the day just “in case”. It was all pretty sketchy sounding….. to a Rule Follower. The other type of person? NO PROBLEM! They were right there on day one, baby! Vacc’d!
Eventually the volunteer offer policy became somewhat formalized in my location. Let me tell you, this was a slight relief to a Rule Follower. It for sure decreases the discomfort over admitting one’s good fortune on the intertoobs.
But! It’s not over yet! I mean, these are not formalized processes and the whole vaccine super-center is already chaos just running the patients through. So again, the Rules Don’t Need To Be Followed types are most likely to do the self-advocacy necessary to get that shot in their arm as quickly and assuredly as possible. Remember, it’s only the excess doses that might be available. And you have to keep your head up on what the (rapidly shifting and evolving) procedure might be at your location if you want to be offered vaccine.
Fam, I’m not going to lie. I leaned in hard on anyone I think of as a Rule Follower when I was relating the advantages of volunteering** at one of our vaccine super-centers. I know what we are like. I tell them as much about the chaotic process as I know so as to prepare them for self-advocacy, instead of their native reticence to act without clear understanding of rules that entitle them to get stuck with mRNA.
Still with me?
NIH has been cracking down on URLs in grant applications lately. I don’t know why and maybe it has to do with their recent hoopla about “integrity of review” and people supposedly sharing review materials with outside parties (in clear violation of the review confidentiality RULES, I will note). Anyway, ever since forever you are not supposed to put URL links in your grant applications and reviewers are exhorted never ever to click on a link in a grant. It’s always been explained to me in the context of IP address tracking and identifying the specific reviewers on a panel that might be assigned to a particular application. Whatever. It always seemed a little paranoid to me. But the Rules were exceptionally clear. This was even reinforced with the new Biosketch format that motivated some sort of easy link to one’s fuller set of publications. NIH permits PubMed links and even invented up this whole MyBibliography dealio at MyNCBI to serve this purpose.
Anyway there have been a few kerfuffles of EXTREME ANGER on Science Twitter from applicants who had their proposals rejected prior to review for including URLs. It is an OUTRAGE, you see, that they should be busted for this clear violation of the rules. Which allegedly, according to Those To Whom Rules Do Not Apply, were incredibly arcane rules that they could not possibly be expected to know and waaah, the last three proposals had the same link and weren’t rejected and it isn’t FAAAAAIIIIR!
My gut reaction is really no different than the one I have turning left in a two lane turn or walking at sidewalk hogs. Or the one I have when a habitual traffic law violator finally has to pay a minor fine. Ya fucked around and found out. As the kids say these days.
For some additional perspective, I’ve been reviewing NIH grants since the days when paper hard copies were submitted by the applicant and delivered to the reviewers as such. Pages could be missing if the copier effed up- there was no opportunity to fix this once a reviewer noticed it one week prior to the meeting. Font size shenanigans were seemingly more readily played. And even in the days since, as we’ve moved to electronic documents, there are oodles and oodles of rules for constructing the application. No “in prep” citations in the old Biosketch….people did it anyway. No substituting key methods in the Vertebrate Animals section…..people still do it anyway. Fonts and font size, okay, but what about vertical line spacing….people fudge that anyway. Expand figure “legends” (where font size can be smaller) to incorporate stuff that (maybe?) should really be in the font-controlled parts of the text. Etc, etc, etc.
And I am here to tell you that in many of these cases there was no formal enforcement mechanism. Ask the SRO about a flagrant violation and you’d get some sort of pablum about “well, you are not obliged to consider that material..”. Font size? “well…..I guess that’s up to the panel”. Which is enraging to a Rule Follower. Because even if you want to enforce the rules, how do you do it? How do you “ignore” that manuscript described as in prep, or make sure the other reviewers do? How do you fight with other reviewers about how key methods are “missing” when they are free to give good scores even if that material didn’t appear anywhere in figure legend, Vertebrate Animals or, ISYN, a 25% of the page “footnote” in microfont? Or how do you respond if they say “well, I’m confident this investigator can work it out”?
If, in the old days, you gave a crappy score to a proposal that everyone loved by saying “I put a ruler on the vertical and they’ve cheated” the panel would side eye you, vote a fundable score and fuck over any of your subsequent proposals that they read.
Or such might be your concern if your instinct was to Enforce the Rules.
Anyway, I’m happy to see CSR Receipt and Referral enforce rules of the road. I don’t think it an outrage at all. The greater outrage is all the people who have been able to skirt or ignore the rules and advantage themselves against those of us who do follow the rules***.
__
*Some of my best friends are habitual non-followers-of-rules.
**I recommend volunteering at a vaccine super station if you have the opportunity. It is pretty cool just to see how your health care community is reacting in this highly unusual once-in-a-generation crisis. And its cool, for those of us with zero relevant skills, to have at least a tiny chance to help out. Those are the Rules, you know? 🙂
***Cue Non-Followers-of-Rules who, Trumplipublican- and bothsiders-media-like, are absolutely insistent that when they manage to catch a habitual Rule Follower in some violation it proves that we’re all the same. That their flagrant and continual behavior is somehow balanced by one transgression of someone else.
Why do we participate in manuscript review?
December 21, 2020
Why indeed.
I have several motivations, deployed variably, and therefore my answers to Professor Eisen’s question about a journal-less world vary.
First and foremost I review manuscripts as a reciprocal professional obligation, motivated by the desire I have to get my papers published. It is distasteful free-rider behavior to not review at least as often as you require the field to review for you. That is, approximately 3 times your number of unique-journal submissions, since each submission typically consumes about three reviewers’ efforts. Should we ever move to a point where I do not expect any such review of my work to be necessary, then this prime motivator goes to zero. So, “none”.
The least palatable (to me) motivation is the gatekeeper motivation. I do hope this is the rarest of reviews that I write. Gatekeeper motivation leads to reviews that try really hard to get the editor to reject the manuscript or to persuade the authors that this really should not be presented to the public in anything conceivably related to current form. In my recollection, this is because it is too slim for even my rather expansive views on “least publishable unit” or because there is some really bad interpretation or experimental design going on. In a world where these works appeared in pre-print, I think I would be mostly unmotivated to supply my thoughts in public. Mostly because I think this would just be obvious to anyone in the field and therefore what is the point of me posturing around on some biorxiv comment field about how smart I was to notice it.
In the middle of this space I have the motivation to try to improve the presentation of work that I have an interest in. The most fun papers to review for me are, of course, the ones directly related to my professional interests. For the most part, I am motivated to see at least some part of the work in print. I hope my critical comments are mostly in the nature of “you need to rein back your expansive claims” and only much less often in the vein of “you need to do more work on what I would wish to see next”. I hate those when we get them and I hope I only rarely make them.
This latter motivation is, I expect, the one that would most drive me to make comments in a journal-less world. I am not sure that I would do much of this and the entirely obvious sources of bias in go/no-go make it even more likely that I wouldn’t comment. Look, there isn’t much value in a bunch of congratulatory comments on a scientific paper. The value is in critique and in drawing together a series of implications for our knowledge on the topic at hand. This latter is what review articles are for, and I am not personally big into those. So that wouldn’t motivate me. Critique? What’s the value? In pre-publication review there is some chance that this critique will result in changes where it counts. Data re-analysis, maybe some more studies added, a more focused interpretation narrative, better contextualization of the work…etc. In post-publication review, it is much less likely to result in any changes. Maybe a few readers will notice something that they didn’t already come up with for themselves. Maybe. I don’t have the sort of arrogance that thinks I’m some sort of brilliant reader of the paper. I think people that envision some new world order where the unique brilliance of their critical reviews are made public have serious narcissism issues, frankly. I’m open to discussion on that but it is my gut response.
On the flip side of this is cost. If you don’t think the process of peer review in subfields is already fraught with tit-for-tat vengeance seeking even when it is single-blind, well, I have a Covid cure to sell you. This will motivate people not to post public, unblinded critical comments on their peers’ papers. Because they don’t want to trigger revenge behaviors. It won’t just be a tit-for-tat waged in these “overlay” journals of the future or in the comment fields of pre-print servers. Oh no. It will bleed over into all of the areas of academic science including grant review, assistant professor hiring, promotion letters, etc, etc. I appreciate that Professor Eisen has an optimistic view of human nature and believes these issues to be minor. I do not have an optimistic view of human nature and I believe these issues to be hugely motivational.
We’ve had various attempts to get online, post-publication commentary of the journal-club nature crash and burn over the years. Decades by now. The efforts die because of a lack of use. Always. People in science just don’t make public review type comments, despite the means being readily available and simple. I assure you it is not because they do not have interesting and productive views on published work. It is because they see very little positive value and a whole lot of potential harm for their careers.
How do we change this, I feel sure Professor Eisen would challenge me.
I submit to you that we first start with looking at those who are already keen to take up such commentary. Who drop their opinions on the work of colleagues at the drop of a hat with nary a care about how it will be perceived. Why do they do it?
I mean yes, narcissistic assholes, sure but that’s not the general point.
It is those who feel themselves unassailable. Those who do not fear* any real risk of their opinions triggering revenge behavior.
In short, the empowered. Tenured. HHMI funded.
So, in order to move into a glorious new world of public post-publication review of scientific works, you have to make everyone feel unassailable. As if their opinion does not have to be filtered, modulated or squelched because of potential career blow-back.
__
*Sure, there are those dumbasses who know they are at risk of revenge behavior but can’t stfu with their opinions. I don’t recommend this as an approach, based on long personal experience.
It’s Uninterpretable!
August 6, 2020
No, it isn’t.
One of my favorite species of manuscript reviewer comment is that the data we are presenting are “uninterpretable”. Favorite as in the sort of reaction I get where I can’t believe my colleagues in science are this unbelievably stupid and are not completely embarrassed to say any such thing ever.
“Uninterpretable” is supposed to be some sort of easy-out Stock Critique, I do understand that. But it reveals either flagrant hypocrisy (i.e., the reviewer themselves would fall afoul of such a criticism with frequency) or serious, serious misunderstanding of how to do science.
Dr. Zen is the latest to run afoul of my opinion on this. He posted a Tweet about a box-and-whiskers plot in a slide presentation and then made the mistake of bringing up the U word.
(his followup blog post is here)
Now, generally when I am laughing at a reviewer comment, it is not that they are using “uninterpretable” to complain about graphical design (although this occasionally comes into the mix). They usually mean they don’t like the design of the experiment(s) in some way and want the experiment conducted in some other way. Or the data analyzed in some other way (including graphical design issues here) OR, most frequently, a whole bunch of additional experiments.
“If the authors don’t do this then the data they are presenting are uninterpretable” – Reviewer # 3. It’s always reviewer #3.
Let me address Zen’s comment first. It’s ridiculous. Of COURSE the graph he presented is interpretable. It’s just that we have a few unknowns and some trust. A whole lot of trust. And if we’ve lost that, science doesn’t work. It just doesn’t. So it’s ridiculous to talk about the case where we can’t trust that the authors aren’t trying to flagrantly disregard norms and to lie to us with fake data. There’s just no point. Oh and don’t forget that Zen construed this in the context of a slide presentation. There just isn’t time for minutia and proving beyond any doubt that the presenter/authors aren’t trying to mislead with fakery.
Scientific communication assumes some reasonable common ground, particularly within a subfield. This is okay. When there is cross talk between fields with really, really different practices, ok, maybe a little extra effort is needed.
But this is a graph using the box-and-whiskers plot. This is familiar to the audience and indeed Zen does not seem to take issue with it. He is complaining about the exact nature of the descriptive statistic conventions in this particular box-and-whiskers plot. He is claiming that if this is not specified that the data are “uninterpretable”. NONSENSE!
These plots feature an indicator of central tendency of a distribution of observations, and an indicator of variability in that distribution. Actually, most descriptive illustrations in science tackle this task. So..it’s familiar. This particular type of chart gives two indications of the variability- a big one and a small one. This is baseline knowledge about the chart type and, again, is not the subject of Zen’s apparent ire. The line is the central tendency. The box outlines the small indicator and the whiskers outline the big indicator. From this we move into interpretation that is based on expectations. Which are totally valid to deploy within a subfield.
So if I saw this chart, I’d assume it was most likely depicting the central tendency of a median or mean. Most likely the median, particularly if the little dot indicates the mean. The box therefore outlines the interquartile range, i.e., the 25%ile and 75%ile values. If the central tendency is the mean, then it is most likely that the box outlines plus or minus one standard error of the mean or one standard deviation. Then we come to the whiskers. I’d assume it was either the 95% Confidence Interval or the range of values.
I do NOT need to know which of these minor variants is involved to “interpret” the data. Because scientific interpretation functions along a spectrum of confidence in the interpretation. And if differences between distributions (aha another ready assumption about this chart) cannot be approximated from the presentation then, well, it’s okay to delve deeper. To turn to the inferential statistics. In terms of if the small indicator is SD or SEM? meh, we can get a pretty fair idea. If it isn’t the SD or SEM around a mean, or the 25%ile/75%ile around a median, but something else like 3SEM or 35/65? Well, someone is doing some weird stuff trying to mislead the audience or is from an entirely disparate field. The latter should be clear.
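For anyone who wants to see those conventions in one place, here is a minimal sketch assuming matplotlib’s defaults (an assumption on my part; Zen’s actual figure may well have used other conventions, which is rather the point):

```python
# Minimal box-and-whiskers example under matplotlib's default conventions.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
groups = [rng.normal(loc=mu, scale=1.0, size=50) for mu in (3, 4, 5)]

fig, ax = plt.subplots()
# Line = median, box = interquartile range, whiskers = furthest points within
# 1.5 * IQR (matplotlib's default), small triangle = mean (showmeans=True).
# Swapping in other conventions (e.g. whis=(0, 100) for the full range)
# changes the details, not the gist of the comparison.
ax.boxplot(groups, showmeans=True)
ax.set_xticklabels(["control", "dose A", "dose B"])
ax.set_ylabel("response (arbitrary units)")
plt.show()
```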
Now, of COURSE, different fields might have different practices and expectations. Maybe it is common to use 5 standard deviations as one of the indicators of variability. Maybe it is common to depict the mode as the indicator of central tendency. But again, the audience and the presenter are presumably operating in approximately the same space and any minor variations in what is being depicted do not render the chart completely uninterpretable!
This is not really any different when a manuscript is being reviewed and the reviewers cry “Uninterpretable!”. Any scientific paper can only say, in essence, “Under these conditions, this is what happened”. And as long as it was clear what was done and what the nature of the data is, the report can be interpreted. We may have more or fewer caveats. We may have a greater or smaller space of uncertainty. But we can most certainly interpret.
It sometimes gets even worse and more hilarious. There is a common scenario in my work where we present data in which the error bars are smaller than the (reasonably sized) symbols for some (but not all) of the groups. And we may have cases where the not-different (by inferential stats *and* by any rational eyeball and consideration of the data at hand) samples cannot be readily distinguished from each other (think: overlapping longitudinal or dose curves).
“You need to use color or something else so that we can see the overlapping details or else it is all uninterpretable!” – Reviewer 3.
My position is that if the eye cannot distinguish any differences this is the best depiction of the data. What is an error is presenting data in a way that gives some sort of artificial credence to a difference that is not actually there based on the stats, the effect size and a rational understanding of the data being collected.
Impact, in the Time of Corona
May 8, 2020
In an earlier post I touched on themes that are being kicked around the Science Twitters about how perhaps we should be easing up on the criteria for manuscript publication. It is probably most focused in the discussion of demanding additional experiments be conducted, something that is not possible for those who have shut down their laboratory operations for the Corona Virus Crisis.
I, of course, find all of this fascinating because I think in regular times, we need to be throttling back on such demands.
The reasons for such demands vary, of course. You can dress it up all you want with fine talk of “need to show the mechanism” and “need to present a complete story” and, most nebulously, “enhance the impact”. This is all nonsense. From the perspective of the peers who are doing the reviewing there are really only two perspectives.
- Competition
- Unregulated desire we all have to see more, more, more data if we find the topic of interest.
From the side of the journal itself, there is only one perspective and that is competitive advantage in the marketplace. The degree to which the editorial staff fall strictly on the side of the journal, strictly on the side of the peer scientists or some uncomfortable balance in between varies.
But as I’ve said before, I have had occasion to see academic editors in action and they all, at some point, get pressure to improve their impact factor. Often this is from the publisher. Sometimes, it is from the associated academic society which is grumpy about “their” journal slowly losing the cachet it used to have (real or imagined).
So, do standards having to do with the nitty-gritty of demands for more data that might be relevant to the Time of Corona slow/shut downs actually affect Impact? Is there a reason that a given journal should try to just hold on to business as usual? Or is there an argument that topicality is key, papers get cited for reasons not having to do with the extra conceits about “complete story” or “shows mechanism” and it would be better just to accept the papers if they seem to be of current interest in the field?
I’ve written at least one post in the past with the intent to:
encourage you to take a similar quantitative look at your own work if you should happen to be feeling down in the dumps after another insult directed at you by the system. This is not for external bragging, nobody gives a crap about the behind-the-curtain reality of JIF, h-index and the like. You aren’t going to convince anyone that your work is better just because it outpoints the JIF of a journal it didn’t get published in. …It’s not about that…This is about your internal dialogue and your Imposter Syndrome. If this helps, use it.
There is one thing I didn’t really explore in the whingy section of that post, where I was arguing that the citations of several of my papers published elsewhere showed how stupid it was for the editors of the original venue to reject them. And it is relevant to the Time of Corona discussions.
I think a lot of my papers garner citations based on timing and topicality more than much else. For various reasons I tend to work in thinly populated sub-sub-areas where you would expect the relatively specific citations to arise. Another way to say this is that my papers are “not of general interest”, which is a subtext, or explicit reason, for many a rejection in the past. So the question is always: Will it take off?
That is, this thing that I’ve decided is of interest to me may be of interest to others in the near or distant future. If it’s in the distant future, you get to say you were ahead of the game. (This may not be all that comforting if disinterest in the now has prevented you from getting or sustaining your career. Remember that guy who was Nobel adjacent but had been driving a shuttle bus for years?) If it’s in the near future, you get to claim leadership or argue that the work you published showed others that they should get on this too. I still believe that the sort of short timeline that gets you within the JIF calculation window may be more a factor of happening to be slightly ahead of the others, rather than your papers stimulating them de novo, but you get to claim it anyway.
For any of these things does it matter that you showed mechanism or provided a complete story? Usually not. Usually it is the timing. You happened to publish first and the other authors coming along several months in your wake are forced to cite you. In the more distant, medium term, maybe you start seeing citations of your paper from work that was truly motivated by it and depends on it. I’d say a 2+ year runway on that.
These citations, unfortunately, will come in just past the JIF window and don’t contribute to the journal’s desire to raise its impact.
I have a particular journal which I love to throw shade at because they reject my papers at a high rate and then those papers very frequently go on to beat their JIF. I.e., if they had accepted my work it would have been a net boost to their JIF….assuming the lower performing manuscripts that they did accept were rejected in favor of mine. But of course, the reason that their JIF continues to languish behind where the society and the publisher think it “should” be is that they are not good at predicting what will improve their JIF and what will not.
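If the mechanics aren’t familiar, here is the basic arithmetic with made-up numbers (purely illustrative; these are not this journal’s actual counts):

```python
# JIF(year) = citations received that year to items published in the two prior
# years, divided by the number of citable items from those two years.
def jif(citations_to_prior_two_years: int, items_prior_two_years: int) -> float:
    return citations_to_prior_two_years / items_prior_two_years

# Hypothetical journal: 200 citable items, 500 citations in the window.
print(jif(500, 200))                      # 2.5

# Swap 10 accepted low performers (1 cite each) for 10 rejected papers that
# would have drawn 6 cites each: same denominator, bigger numerator.
print(jif(500 - 10 * 1 + 10 * 6, 200))    # 2.75
```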
In short, their prediction of impact sucks.
Today’s musings were brought around by something slightly different, which is that I happened to be reviewing a few papers that this journal did publish, in a topic domain reasonably close to mine. They are not particularly more “complete story” but, and I will fully admit this, they do seem a little more “shows mechanism” sciency, in a limited way that my work could be; I just find that particular approach to be pedantic and ultimately of lesser importance than broader strokes.
These papers are not outpointing mine. They are not being cited at rates that are significantly inflating the JIF of this journal. They are doing okay, I rush to admit. They are about the middle of the distribution for the journal and pacing some of my more middle ground offerings in my whinge category. Nothing appears to be matching my handful of better ones though.
Why?
Well, one can speculate that we were on the earlier side of things. And the initial basic description (gasp) of certain facts was a higher demand item than would be a more quantitative (or otherwise sciencey-er) offering published much, much later.
One can also speculate that for imprecise reasons our work was of broader interest in the sense that we covered a few distinct sub-sub-sub field approaches (models, techniques, that sort of thing) instead of one, thereby broadening the reach of the single paper.
I think this is relevant to the Time of Corona and the slackening of demands for more data upon initial peer review. I just don’t think in the balance, it is a good idea for journals to hold the line. Far better to get topical stuff out there sooner rather than later. To try to ride the coming wave instead of playing catchup with “higher quality” work. Because for the level of journal I am talking about, they do not see the truly breathtakingly novel stuff. They just don’t. They see workmanlike good science. And if they don’t accept the paper, another journal will quite quickly.
And then the fish that got away will be racking up JIF points for that other journal.
This also applies to authors, btw. I mean sure, we are often subject to evaluation based on the journal identity and JIF rather than the actual citations to our papers. Why do you think I waste my time submitting work to this one? But there is another game afoot as well and that game does depend on individual paper citations. Which are benefited by getting that paper published and in front of people as quickly as possible. It’s not an easy calculation. But I think that in the Time of Corona you should probably shift your needle slightly in the “publish it now” direction.
Corruption of NIH Peer Review
April 13, 2020
The Office of the Inspector General at the HHS (NIH’s government organization parent) has recently issued a report [PDF] which throws some light on the mutterings that we’ve been hearing. Up to this point it has mostly been veiled “reminders” about the integrity of peer review at NIH and how we’re supposed to be ethical reviewers and what not.
As usual when upstanding citizens such as ourselves hear such things we are curious. As reviewers, we think we are trying our best to review ethically as we have been instructed. As applicants, of course, we are curious about just what manner of screwing we’ve suffered at the hands of NIH’s peer review now. After all, we all know that we’re being screwed, right?
“NIH isn’t funding [X] anymore“, we cry. X can be clinical, translational, model system, basic…. you name it. X can be our specific subarea within our favorite IC. X can be our model system or analytical approach or level of analysis. X can be our home institution’s ZIP code, or prestige, or type within the academic landscape.
And of course, our study section isn’t giving us a good score because of a conspiracy. Against X or against ourselves, specifically. It’s a good old insider club, doncha see, and we are on the outside. They just give good scores to applications of the right X or from the right people who inhabit the club. The boys. The white people. The Ivy League. The R1s. Those who trained with Bobs. Glam labs. Nobel club.
Well, well, well, the OIG / HHS report has verified all of your deepest fears.
NIH Has Acted To Protect Confidential Information Handled by Peer Reviewers, But It Could Do More [OEI-05-19-00240; March 2020; oig.hhs.gov; Susanne Murrin, Deputy Inspector General for Evaluation and Inspections]
Let’s dig right in.
In his August 2018 statement on protecting the integrity of U.S. biomedical research, NIH Director Dr. Francis Collins expressed concern about the inappropriate sharing of confidential information by peer reviewers with others, including foreign entities.5 At the same time, Dr. Collins wrote to NIH grantee institutions to alert them to these foreign threats, noting that “foreign entities have mounted systematic programs to influence NIH researchers and peer reviewers.”6 As an example of these programs, NIH’s Advisory Committee to the Director warned NIH of China’s Thousand Talents plan, which is intended to attract talented scientists while facilitating access to intellectual property
…
Additionally, congressional committees have expressed concerns and requested information about potential threats to the integrity of taxpayer-funded research, including the theft of intellectual property and its diversion to foreign entities.8, 9 In a June 2019 Senate hearing, NIH Principal Deputy Director Dr. Lawrence Tabak testified that NIH was “aware that a few foreign governments have initiated systematic programs to capitalize on the collaborative nature of biomedical research and unduly influence U.S.-based researchers
So the rumors are true. It’s about the Chinese. One of the reasons I’ve been holding off blogging about this during the whispers and hints era was this. This may be why NIH itself has been so circumspect. Nobody wants to conflate what looks like racism with what appears to be state-sponsored activity to take advantage of our relatively open scientific system. Many academic scientists love to bleat about the wonderful international nature of the scientific endeavor. I like it myself and occasionally reference this. I wish it was not inevitably and ultimately wrapped up in geo-politics and what not. But it is. Science influences economic activity and therefore power.
I am on the record as a protectionist when it comes to academic employment in the public and publicly funded sectors. I don’t think we need hard absolute walls but I do think that in hard times we should raise serious and very high barriers to funding NIH grants to foreign applicant institutions. I think, of course, that we need to take a harder look at employment policies. As in any other sector, immigrant postdocs and graduate students often devalue the labor market for domestic employees. I’d like to see a little more regulation on that to keep opportunities for US citizens prioritized.
But I also appreciate that we are an immigrant nation founded on the hard work of immigrants who often ARE more eager to work hard than native born folks (of which I am one, people. I’m including myself in the lazy sack category here). Hard. So we need to have some academic science immigration, of course. And I am not that keen on traditional lines of white supremacy dictating who gets to immigrate here to do science.
So, when I started getting the feeling this was directed specifically at the Chinese, let’s just say the hairs on my neck went up.
But, this report makes it pretty clear this is the problem. They are targeting this “Thousand Talents” effort of China very specifically and are going after US-employed scientists who do not report financial conflicts….from China. And other sources, but…the picture in this report is sharp.
I have heard of more than one local investigator who had a Chinese lab or company and was not reporting it appropriately. They also hold NIH funds and so were disciplined. Grants were pulled. At least one person has disappeared back to China. At least one person is apparently under some sort of NIH suspension but the grants are still running out the clock on the current fiscal year so I can’t quite validate the rumors. A multi-year suspension from grant seeking is being whispered around the campfire.
So what about the reviewers? Where does this come in?
As of November 2019, NIH had flagged 77 peer reviewers across both CSR- and IC-organized study sections as Do Not Use because of a breach of peer review confidentiality. A reviewer who is flagged as Do Not Use may not participate in further study section meetings or review future applications until the flag is removed
…Between February 2018 and November 2019, NIH terminated the service of 10 peer reviewers who not only had undisclosed foreign affiliations, but had also disclosed confidential information from grant applications. For example, some of these reviewers shared critiques of grant applications with colleagues or shared their NIH account passwords with colleagues.
There is a bunch more of this talk in bullet points about reviewers being suspended or under investigation for both violations of peer review and undisclosed foreign conflicts of interest. It could be companies or funding, although this is not clearly specified. Then….. the doozy:
As of November 2019, NIH dissolved two study sections because of evidence of systemic collusion among the reviewers in the section. At least one instance involved the disclosure of confidential information. NIH dissolved the first study section in 2017 and the second in 2018. All grant applications that the study sections reviewed were reassessed by different reviewers.
AHA! There IS a conspiracy against your grants. Look, this is bad. I’m trying to maintain some humor here, but the fact is that this would be relatively easy to pull off, so long as the conspirators were all on board and nobody ratted. What would you need? A third of a study section? A quarter? Half? I dunno but it isn’t *that* many people. Some are in on the main conspiracy (puppeted by a foreign government?), some are willing pawns because their own grants do well, some are just plain convinced by their buddies that this is how it actually works here. And if they are all in contact, how long would it take? Five-minute phone conversations about how they need to support applications from A, B and C and run down those likely looking top-scoring apps from X, Y and Z?
I don’t know how they caught these conspiracies but there were probably emails to go along with the forensic evidence on their foreign conflicts of employment, affiliation and funding. Oh wait, the report tells us:
One way NIH learns about instances of possible undue foreign influence is through its national security partners. Since 2017, NIH has increasingly worked with the FBI on emerging foreign threats to NIH-funded research. NIH reported that in 2018, the FBI provided it with referrals of researchers—some of whom were also peer reviewers—who had NIH grants and were alleged to have undisclosed foreign affiliations.
It also says that program staff may have noticed papers that cited funding which had not been disclosed properly (on the Other Support that PIs have to file prior to funding, I presume).
As of November 2019, NIH determined that allegations against 207 researchers were potentially substantiated. Of those 207 researchers, NIH determined that 129 had served as peer reviewers in 2018 and/or 2019. NIH designated 47 of these 129 peer reviewers as Do Not Use. When OIG asked NIH about the remaining 82 peer reviewers—i.e., those who had potentially substantiated allegations but who had not been designated as Do Not Use—NIH did not respond.
What the heck? Why not? This is the IG ffs. How do they “not respond”?
Between February 2018 and November 2019, NIH confirmed 10 cases involving peer reviewers who were stealing or disclosing confidential information from grant applications or related materials and who also had undisclosed foreign affiliations. Two of these 10 cases involved peer reviewers who were selected for China’s Thousand Talents program. The breaches of confidentiality included disclosing scoring information, sharing study section critiques, and forwarding grant application information to third parties. In some of these instances, reviewers shared confidential information with foreign entities.…In two cases, NIH dissolved a study section
So the worst of the worst. How long had this been going on? How many proposals were affected? How many ill-gotten grant awards aced out more legitimate competitors? Were those PIs made whole? (Hahaha. Of course not.) For the dissolved study sections, just how bad WAS it?
Look, I’m glad they caught this stuff. But I have no confidence that we are getting anything even remotely like a full picture here. The tone seems to be that this was sparked by some pretty egregious violations of Other Support declarations leading to scrutiny of those PIs who happened to review grants. The NIH then managed to find evidence (confessions?) of violations of peer review rules. The description of the actual peer review violations leans heavily on inappropriate disclosure of confidential information. Showing critiques and grants to people who have no right to see them. Is this all it was? This is what led to a study section dissolution? Or was there, as I would suspect, a lot more going on with grant-score-deciding behavior? That is what should lead to dissolution of a section but it is a lot harder to prove than “clearly you gave your password to someone who is logging in from half a world away two hours after you logged in from the US”. I want answers to these harder questions: how are these conspiracies and conflicts leading to funding for those inside the conspiracy and the loss of funding for those who are not?
NIH is highly motivated to soft-pedal that part. Because they are really, really, REALLY motivated to pretend their system of grant selection works to fund the most meritorious science. Probing into how easy it would be to suborn this process, as a single rogue reviewer OR as a conspiracy, is likely to lead to very awkward information.
I never feel that NIH takes participant confidence in their system of review and grant award seriously enough. I don’t think they do enough to reassure the rank and file that yes, it IS their intent to be fair and to select grants on merit. Too many participants in extramural grant review, as applicants and as reviewers, continue to talk with great confidence and authority about what a racket it is and how there are all these specific unfairnesses I alluded to above.
Well, what happens if reviewers believe that stuff?
“Everybody is doing it” is the most frequent response when scientists are caught faking data, right? Well….
A loss of confidence in the integrity of NIH review is going to further excuse future misdeeds in the minds of reviewers themselves. If the system is biased against model systems, it’s okay for me, Captain Defender of Model Systems Neuroscience, to give great scores to fly grants, right? I’m just making up for the bias, not introducing one of my own. If the system is clearly biased in favor of those soft money, high indirect cost, professional grant writers then hey, it is totally fair for me, Professor of Heavy Teaching Load Purity, to do down their grants and favor those of people like me, right? It’s just balancing the scales.
Because everyone knows the system is stacked against me.
Do it to Julia, not me, Julia!
I think the NIH needs to do far more than blame the dissolution of two study sections on foreign influence and call it a day. I think they need to admit how easy it was for such efforts to corrupt review and to tell us how they can put processes in place to keep review cartel behavior, explicit OR IMPLICIT, from biasing the selection of grants for funding.
They need to restore confidence.
Progress in the Time of Corona
April 10, 2020
One of the thorniest issues that we will face in the now, and in the coming months, is progress. Scientific progress, career progress, etc. I touched on this a few days ago. It has many dimensions. I may blog a lot about this, fair warning.
Several days (weeks?) ago, we had a few rounds on Twitter related to altering our peer review standards for manuscript evaluation and acceptance. It’s a pretty simple question for the day. Is the Time of Corona such that we need to alter this aspect of our professional scientific behavior? Why? To what end? What are the advantages and for whom? Are there downsides to doing so?
As a review, unneeded for most of my audience, scientific papers are the primary output, deliverable good, work product, etc., of the academic scientist. Most pointedly, the academic scientist funded by the taxpayer. Published papers. To get a paper published in an academic journal, the scientists who did the work and wrote the paper submit it for consideration to a journal. Whereupon an editor at the journal decides either to reject it outright (colloquially a “desk reject”) or to send it to scientific peers (other academics who are likewise trying to get their papers published) for review. Typically 3 peers, although my most usual journals accept 2 as a minimum these days, and editors can use more if necessary. The peers examine the paper and make recommendations to the editor as to whether it should be accepted as is (rarely happens), rejected outright (fairly common) or reconsidered after the authors make some changes to the manuscript. This latter is a very usual outcome and I don’t think I’ve ever had a paper ultimately published that got there without a lot of changes in response to what peers had to say about it.
Peer comments can range from identifying typographical errors to demanding that the authors conduct more experiments, occasionally running to the tune of hundreds of thousands of dollars in expense (counting staff time) and months to years of person-effort. These are all couched as “this is necessary before the authors should be allowed to publish this work”. Of course, assigned reviewers rarely agree in every particular and ultimately the editor has to make a call as to what is reasonable or unreasonable with respect to apparent demands from any particular reviewer.
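For the schematically minded, here is a minimal sketch of that editorial flow. The outcome labels and the two-review minimum mirror the description above; it is illustrative only, not any journal’s actual system.

```python
from typing import List

# A toy model of the editorial flow described above: desk reject, or send to
# ~2-3 peers whose recommendations the editor collapses into accept / reject /
# revise-and-resubmit. Real editorial judgment is obviously not this mechanical.

def editorial_decision(desk_reject: bool, recommendations: List[str]) -> str:
    if desk_reject:
        return "reject"                      # never goes out to peers
    if len(recommendations) < 2:
        raise ValueError("most journals want at least 2 reviews in hand")
    if all(r == "accept" for r in recommendations):
        return "accept as is"                # rarely happens
    if all(r == "reject" for r in recommendations):
        return "reject"                      # fairly common
    return "revise and resubmit"             # the very usual outcome

# Example: two reviewers want changes, one would accept outright.
print(editorial_decision(False, ["revise", "revise", "accept"]))
```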
But this brings us to the Time of Corona. We are, most of us, mostly or totally shut down. Our institutions do not want us, or our staff members, in the labs doing work as usual. Which means that conducting new research studies for a manuscript that we have submitted for review is something between impossible and very, very, very unlikely.
So. How should we, as professionals in a community, respond to this Time of Corona? Should we just push the pause button on scientific publication, just as we are pushing the pause button on scientific data generation? Ride it out? Refuse to alter our stance on whether more data are “required for publication” and just accept that we’re all going to have to wait for this to be over and for our labs to re-start?
This would be consistent with a stance that, first, our usual demands for more work are actually valid and second, that we should be taking this shutdown seriously, meaning accepting that THINGS ARE DIFFERENT now.
I am seeing, however, some sentiments that we should be altering our standards, specifically because of the lab shutdowns. That this is what is different, but that it is still essential to be able to publish whatever (?) manuscripts we have ready to submit.
This is fascinating to me. After all, I tend to believe that each and every manuscript I submit is ready to be accepted for publication. I don’t tend to do some sort of strategy of holding back data in hand, or nearly completed, so that in response to the inevitable demands for more, we can respond with “Yes, you reviewers were totally right and now we have included new experiments. Thank you for your brilliant suggestion!”. People do this. I may have done it once or twice but I don’t feel good about it. 🙂
I believe that when I am reviewing manuscripts, I try to minimize my demands for new data and more work. My review stance is to try to first understand what the authors are setting out to study, what they have presented data on, and what conclusions or claims they are trying to make. Any of the three can be adjusted if I think the manuscript falls short. They can more narrowly constrain their stated goals, they can add more data and/or they can alter their claims to meet the data they have presented. Any of those are perfectly valid responses in my view. It doesn’t have to be “more data are required no matter what”.
I may be on a rather extreme part of the distribution on this, I don’t know. But I do experience editors and reviewers who seem to ultimately behave in a similar way on both my manuscripts and those manuscripts to which I’ve contributed a review. So I think that fellow scientists who share my core skepticism about the necessity of peer review demands for more, more, more are probably not so exercised about this issue. It is more the folks who are steeped in the understanding that this is the way peer review of manuscripts should work, by default and in the majority of cases, who are starting to whinge.
I’m kinda amused. I would be delighted if the Time of Corona made some of these Stockholm Syndrome victims within science think a little harder about the necessity of their culture of demands for more, more, more data no matter what.
NIH Discontinues Continuous Submission for Frequent Service With A Gaslighting Excuse
January 28, 2020
The Notice NOT-ED-20-006 rescinds the continuous submission privilege for the “recent substantial service” category that has been in place since 2009 (NOT-OD-09-155). This extended the privilege that had been given to people who were serving an appointed term on a study section (NOT-OD-08-026). The qualification for “recent substantial service” meant serving as a study section member six times in an 18 month interval. In comparison, an appointed member of a study section serves at 3 meetings per year maximum, with the conversion to 6-year options entailing only two rounds per year. As a reminder, the stated goal for this extension was: “to recognize outstanding review and advisory service, and to minimize disincentives to such service”. This is why it is so weird that the latest notice rescinding the policy for the “substantial service” category seems to blame these people for having some sort of malign influence. To wit: “prior policy had unintended consequences, among them encouraging excessive review service and thus disproportionate influence by some.“
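To put some quick numbers on that comparison, here is a back-of-envelope sketch using only the figures cited above:

```python
# Annualized review-service rates implied by the policies discussed above.
# The inputs come straight from the post: 6 meetings per 18 months for the
# "recent substantial service" category, a 3 meetings/year maximum for an
# appointed member, and 2 rounds/year under the 6-year appointment option.

rates = {
    "'recent substantial service' threshold": 6 / 1.5,  # 4.0 meetings/year
    "appointed member (standard term)":       3.0,
    "appointed member (6-year option)":       2.0,
}

for label, per_year in rates.items():
    print(f"{label}: {per_year:.1f} meetings/year")

# Over an 18 month window the ad hoc threshold (6 meetings) is one more than a
# fully loaded appointed member attending every round would hit (4-5, depending
# on how the rounds fall) - the "one more than standard" point I return to below.
```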
Something smells. Really, really badly.
There is a new CSR blogpost up on Review Matters, by the current Director of CSR Noni Byrnes, which further adds to the miasma. She starts off with stuff that I agree with 1,000 percent.
The scientific peer review process benefits greatly when the study section reviewers bring not only strong scientific qualifications and expertise, but also a broad range of backgrounds and varying scientific perspectives. Bringing new viewpoints into the process replenishes and refreshes the study section, enhancing the quality of its output.
I have blogged many a word that addresses this topic in various ways. From my comments opposing the Grande Purge of Assistant Professors started by Toni Scarpa, to my comments generally on the virtues of the competition of biases to address inevitable implicit bias, to my pointed comments on the Ginther finding and NIH’s dismal response to same. I agree that broadening the participation in NIH peer review is a good goal. And I welcome this post because it gives us some interesting data, new to my eyes.
As of January 1, 2020, there were 22,608 individuals with active R01 funding. Of these, 30% (6715) have served one to five times, and 18% (4074) have never served as a reviewer in the last 12 years. Of those who have served only one to five times over 12 years, 26% are assistant professors and 34% are associate professors.
Cool, cool. At least it is a starting point for discussion. Should they be trying to reduce that 18% number? Heck yes. To what? I don’t know. Some of this is structural in the sense that someone just awarded their first R01 probably is less likely to have a service record within the next 3 months. Right? So…5%? The question is how to do this, why are the 18% being overlooked, etc. Well, if you are the head of CSR you know in your bones that peer review service is opt-in…but only opt-in upon request from a CSR (or in limited cases IC-associated) SRO. So the 18% needs to be parsed into those who have never been asked and those who have refused. Those that have been asked several times (3+ over time?) and those that have only been asked once (I mean, stuff happens and you aren’t always available when requested). And Director Byrnes is sorta, half-heartedly, putting the blame where it belongs, pending data on refusal rates, on the SROs. “In an effort to facilitate broader participation in review, we are making these data available to SROs and encouraging them to identify qualified and scientifically appropriate reviewers, who may not have been on their radar previously.” “Encouraging”. Gee, for some reason, the SROs I talked to during the Scarpa Grande Purge suggested that he was doing a lot more than mere “encouragement” to get rid of Assistant Professors. And in full disclosure more than one SRO alluded to fighting back and slow-walking since they disagreed with Scarpa’s agenda. But both of these things suggest that Byrnes is going to have to do more than just show her SROs the data and ask them nicely to do better.
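Just to sanity-check the quoted figures, here is the arithmetic (nothing beyond the numbers in the excerpt above):

```python
# Recomputing the participation percentages quoted from the CSR blog post.
total_r01_pis = 22_608   # individuals with active R01 funding, Jan 1, 2020
served_1_to_5 = 6_715    # served one to five times over the last 12 years
never_served  = 4_074    # never served over the last 12 years

print(f"served 1-5 times: {served_1_to_5 / total_r01_pis:.1%}")  # ~29.7%
print(f"never served:     {never_served / total_r01_pis:.1%}")   # ~18.0%
print(f"remainder:        {1 - (served_1_to_5 + never_served) / total_r01_pis:.1%}")
```

The remainder, a bit over half of funded R01 PIs, is presumably the group that has served more than five times in that 12-year window.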
Then the blog post goes into a topic I think I’ve planned to blog, and failed to do so, for years. Disproportionate influence of a given reviewer, by virtue of constant and substantial participation in peer review of grants. This is a tricky topic. As I said, the system is opt-in upon request. So a given reviewer is at least partially to blame for the number of panels he or she or they serve on. The blog post has a nice little graph of the distribution of 12-year service histories for anyone who has been on a CSR panel in the past two years.
However, one aspect of broadening the pool of reviewers is to avoid excessive review service by a small fraction of people, which can lead them to have a disproportionate effect on review outcomes. We are looking into issue of undue influence, or the “gatekeeper” phenomenon, where a reviewer has participated in the NIH peer review process at a rate much higher than their peers, and thus has had a disproportionate effect on review outcomes in a given field.
Look, Dear Reader, by now you know what the primary analysis from the peanut gallery will be. If you think a given reviewer hates your work, your approaches, you, your pubs, etc, you think they are having undue influence on the study section to which you are submitting your grants. It is particularly inflaming when you can’t seem to escape them because no matter whether you send stuff to your best fit study section, various SEPs, try a different mechanism, etc….up they pop. Professor Antagonist the Perma-Reviewer. On the other hand, if a reviewer that you think is sympathetic to your proposals keeps showing up, heck you wouldn’t complain if that continued on 75% of the sections your grants are reviewed in for decades. Right?
Director Byrnes drew her first line on the chart at 1-36 meetings per 12-year interval. Now me, I think I want to see something a little bit closer and more segmented on that. Three meetings, year in, year out for 12 years does seem like a fairly substantial and outsized influence. One per year does not. One 12 round interval of service (three rounds per year in 4 years or two in 6) as an appointed reviewer seems okay to me. The chart then shows quite a number of people in the 37-72 meeting range (5% of the sample) and even some folks in the 73+ range (1%ish). The way they are talking about undue influence, it seems like those percentages should be in the low single digits, right?
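Translating those chart bins into per-year rates (simple arithmetic on the numbers above) makes the segmentation complaint concrete:

```python
# Per-year service rates implied by the 12-year bins from the CSR chart.
years = 12
bins = [
    ("1-36 meetings (Byrnes' first line)", 1, 36),
    ("37-72 meetings (~5% of sample)",     37, 72),
    ("73+ meetings (~1% of sample)",       73, None),
]

for label, lo, hi in bins:
    if hi is None:
        print(f"{label}: {lo / years:.1f}+ meetings/year")
    else:
        print(f"{label}: {lo / years:.2f} to {hi / years:.1f} meetings/year")

# That first bin lumps together one meeting per decade and a fully loaded
# empaneled reviewer (3 meetings/year for all 12 years = 36), which is exactly
# why a closer, more segmented breakdown would be informative.
```

Given the rhetoric about undue influence, you would expect those percentages to be vanishingly small.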
But they are not. The minimum standard was 6 review panels per 5 rounds. This is one more than is standard for an empaneled reviewer. And she or he could always just pick up an extra one, right? And I went to view the video of the Advisory Council meeting linked in the blog post and there is a suggestion that the real problem is reviewers begging SROs for an assignment at the last minute to keep their eligibility. Right? So they are for sure pointing the finger at people who meet the bare minimum. Which is not much different from that “influence” wielded by a term of service.
And probably even less. Why? Because if you are cobbling your 6 out of 5 rounds from ad hoc requests it is very likely to entail a smaller load. SEP service, in my experience, means a smaller review load per panel. So does ad hoc service on established panels, frequently, because the SRO is trying not to annoy the ad hocs and the empaneled folks have buy-in. The Advisory Council discussion came oh so close to covering this but veered away into distraction. In part because Director Byrnes started talking about voting scores as being more important than reviews written….but this is also correlated. For the SEPs, fewer items per reviewer often comes with a smaller overall panel load compared with a standing panel. I’ve been on established study sections that routinely have anywhere from 60-90 apps per round. Rarely, if ever, on a SEP with more than about 30.
I really don’t understand the CSR logic here.
The only thing that makes any sense is that they are tired of having to route so many last-minute applications once SROs of standing panels have started trying to assign apps and recruit ad hoc members. And maybe tired of having to convene 5-15 app SEPs to deal with the overflow. Certainly my personal experience has been that in the past few years my continuous submissions go to SEPs and are refused by the standing panel SROs. This never used to happen to me in the first years of this policy.
But who knows.
I stand by this assertion. Whether they use this power to its fullest terrible extent is arguable. But that they possess this power is not.
While there are many factors that go into determining what research proposals get selected for funding, the primary driver is the overall voted impact score (and calculated percentile) that emerges from the initial peer review. If your application gets triaged / not discussed, you are very unlikely to get funded. If your application gets a 10 and a 1%ile, it is very likely to be funded. There is a lot of variability in between but even if your application merely scores within a published or virtual payline, it is very likely to be funded. The farther your “maybe” score is away from this published or virtual payline, the less likely it is to be funded.
Peer review score/percentile outcome, as you know, depends on the reviewers. Those that are assigned as primary, secondary and tertiary/Discussant are the most influential. We discuss this ad nauseum around here. The opinion of only 2-3 peers is what does most of the heavy lifting in the fate of your application. They generate a preliminary score range which, if in the triage zone for the panel, can essentially keep anyone else from even bothering to read your application. This range, if in the excellent zone, becomes fairly hard for the panel to assail and move to a significant degree.
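As a rough sketch of the heuristic in the two paragraphs above: the payline value and the width of the “maybe” zone below are assumptions for the example, not anything NIH publishes.

```python
from typing import Optional

# Toy version of the funding heuristic described above: triaged applications
# are very unlikely to be funded; scores at or under the (published or virtual)
# payline are very likely; prospects fade the further the percentile sits
# above that payline. The 12th-percentile payline and 10-point "maybe" zone
# are assumed for illustration only.

def funding_outlook(percentile: Optional[float], payline: float = 12.0) -> str:
    if percentile is None:              # triaged / not discussed
        return "very unlikely"
    if percentile <= payline:
        return "very likely"
    if percentile <= payline + 10:      # the variable in-between zone
        return "maybe, and less likely the farther from the payline"
    return "unlikely"

for pct in (None, 1, 10, 18, 40):
    print(pct, "->", funding_outlook(pct))
```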
Reviewers, my friends, exhibit biases of all sorts. It could be a preference or antagonism for the topic, or model or overall approach of a given application. It could be a stylistic bias: Timelines, extensive discussion of pitfalls, Future Directions…I’ve seen many cases where these optional and somewhat peripheral stylistic issues have kiboshed an application’s chances. Biases could be engaged by various features of the applicant herself that are not supposed to matter, but do. For senior investigator apps over junior investigator apps. For or against applications where the PI is perceived to have too much or too little funding. ZIP code or institutional biases. Do we need to review Hoppe, or Ginther again? Biases can be exceptionally subtle. The panel can in itself express certain preferences and biases compared to other panels. These may or may not be modified, moderated or hardened by the addition of certain types of ad hoc reviewers on a per-meeting basis.
Biases are detectable to anyone who is paying attention to what reviewers say or how they score. Maybe a given reviewer always seems to go to bat for cocaine grants but not for marijuana ones. Maybe they are never passionate for a rat models grant proposal but wax on chapter and verse about why study in humans is da bomb. Maybe they are always looking for weird and fascinating new behavioral models but roll their eyes at marble burying or the Morris water maze. Maybe they rail all the time about how a mouse doesn’t have a prefrontal cortex.
Beyond biases, reviewers also differ a lot in effectiveness. Or so I presume. One of the most frustrating things about serving a term of service on a study section is that you can never really know what worked. You don’t know how the panel votes after a discussion. So you can’t really tell if a given reviewer is more persuasive and another less. (I say this is frustrating because as a reviewer you yourself can never know if you are being effective in arguing your take on the application.) It feels to me as if some reviewers are more likely to get other panel members on board, but the only hints we have about this are facial expressions, nods/lack thereof and occasionally the panel members who wish to vote outside of the post-discussion range of scores. But, the SRO has this information. In the old days they had the paper sheets. I could be wrong, but I think they are still supposed to peruse the scores to make sure there aren’t any errors; this is why one is supposed to say that they are going outside of the range. Otherwise an outside-of-range vote is maybe interpreted as an error and (I think) the SRO is supposed to check with that specific reviewer after the meeting. So in addition to the more general sense (which may be accurate) gleaned from discussion comments, the SRO may also have access to even better data on persuasiveness.
The SRO, as you recall, is the person who both selects reviewers for the panel (for appointed and for ad hoc service) and who assigns the grants to those reviewers. This can easily be the difference between a grant receiving a fundable score and being triaged. Easily.
If an SRO so chose, she could readily determine an outcome for a proposal. Or, if not “determine” could significantly affect the most likely outcome. Make all three reviewers the ones that best like that topic, that approach and those PI / applicant institution characteristics versus one or two haters. Give it to the three least persuasive, mumbling, full critique readers to present. Etc.
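To make that concrete, a purely hypothetical illustration, with every number assumed for the example (these are not real scores, and the cutoff is not a real NIH threshold):

```python
# Hypothetical illustration of the assignment-power point above. NIH preliminary
# scores run 1 (best) to 9 (worst) and panels typically discuss only the
# better-scoring applications; the scores and the mean<=3 cutoff below are
# assumptions for the example, not real data or policy.

sympathetic_assignment  = [2, 2, 3]   # assumed scores from three favorable reviewers
antagonistic_assignment = [4, 5, 5]   # assumed scores from three hostile reviewers

for label, scores in [("favorable assignment", sympathetic_assignment),
                      ("hostile assignment", antagonistic_assignment)]:
    mean = sum(scores) / len(scores)
    fate = "discussed, competitive range" if mean <= 3 else "at real risk of triage"
    print(f"{label}: preliminary scores {scores}, mean {mean:.2f} -> {fate}")
```

Same application, same science; the only thing that changed is who got picked to read it.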
I am reasonably confident that the vast majority of SROs don’t explicitly put their finger on the scale like this. I am reasonably confident that the vast majority of the SROs are simply trying to distribute the grants to the right experts as best they can.
But this doesn’t override the fact that they are empowered. And it doesn’t override the fact that, like everyone, they are themselves subject to implicit biases that shape their behavior. Unknown to themselves.
More perniciously it doesn’t override the fact that the SRO, just like panel members, evolves her or his own preferences and biases and becomes convinced that this is how review is supposed to go down. The SRO can express these beliefs and biases in reviewer instruction and new reviewer orientations. They can express these beliefs in advice or interjection during the meeting, even when they think they are just relating proper policy. The example we know best, and can all agree with, is the “we don’t use the F word” correction every time anyone says “funding”, “funded” or “fundable”. There are other more borderline opinions that can be, and are, expressed by the SRO.
Structurally, I can relate an anecdote or two about panel makeup. As you may recall, I was empaneled on a section during a time when the head of CSR was gleefully heeding the complaining of established PIs who were facing the late- to post-doubling music. “Those damn assistant professors are killing my grants!” they would cry. [It is not at all coincidental that my fourth blog post ever was on this topic.] Never mind that the stats showed that assistant professors only ever made up 10% of all reviewers at the high water mark, presumably far lower in terms of reviews submitted (given ad hoc and lighter loads), and consequently had very little effect on a systematic basis. Toni Scarpa tried to purge assistant professors. Learned societies beat the bushes for their senior folks to go back on study section. But…..SROs still did the selecting and empaneling. So they could either resist (and I was serving under an SRO that seemingly did resist) the dictation from on high or they could fully embrace it, stat. Power.
I also happened to have a chat or two with an SRO about the Early Career Reviewer program, starting a little while before it was formally announced. Guess what? It was originally supposed to be Underrepresented Individual Early Career Researchers that were to get this opportunity. And there was a battle within CSR about this. As you know, the eventual program made very little mention of this history, if at all. However. SROs were still doing the selecting. They could, I surmise, more or less opt out of ever including any ECR people (not totally sure, and of course there may have been pressure from above). But suppose a given SRO happened to be on the pro-URM side of this issue: could he or she not simply express this original positioning through who they just so happened to always select for their panel? Or could an anti-affirmative action SRO just so happen to always pick majoritarians? Of course. [See this blog post on the ECR experience from NewPI]
The most egregious case of an SRO putting a finger on the scale that I can recall experiencing was during reviewer instruction for a Big Mech review panel. P and U mechanisms. Big budgets, multi-componented. That sort of thing. High stakes for the PIs and for the ICs, right? I forget what the triggering question was from a panel member but the SRO basically said that a competitive PD (Program Director, that’s the term for the overall PI for a BigMech) for such things “had” to be at the level of Chair of a department or similar. The very strong implication was basically that of course we shouldn’t be giving good scores to a proposal from a less well-experienced PI.
This is utter nonsense and bias. The SRO was putting a very firm finger on the review scale by saying such a thing. They do so in many other ways as well, and have in my limited experience. It wasn’t just this SRO. It’s just that this particular bias really ticked me off. Fortunately, I don’t recall this coming up in the actual review of the applications but who knows? Sometimes it is hard to discern the real reason behind a reviewer’s stance on a given application.
Anyway…..if you ever consider taking a SRO job…..it has impact. Use it judiciously.
BJP issues new policy on SABV
September 4, 2019
The British Journal of Pharmacology has been issuing a barrage of initiatives over the past few years that are intended to address numerous issues of scientific meta-concern including reproducibility, reliability and transparency of methods. The latest is an Editorial on how they will address current concerns about including sex as a biological variable.
Docherty et al. 2019 Sex: A change in our guidelines to authors to ensure that this is no longer an ignored experimental variable. https://doi.org/10.1111/bph.14761 [link]
I’ll skip over the blah-blah about why. This audience is up to speed on SABV issues. The critical parts are what they plan to do about it, with respect to future manuscripts submitted to their journal. tldr: They are going to shake the finger but fall woefully short of heavy threats or of prioritizing manuscripts that do a good job of inclusion.
From Section 4 BJP Policy: The British Journal of Pharmacology has decided to rectify this neglect of sex as a research variable, and we recommend that all future studies published in this journal should acknowledge consideration of the issue of sex. In the ideal scenario for in vivo studies, both sexes will be included in the experimental design. However, if the researcher’s view is that sex or gender is not relevant to the experimental question, then a statement providing a rationale for this view will be required.
Right? Already we see immense weaseling. What rationales will be acceptable? Will those rationales be applied consistently for all submissions? Or will this be yet another frustrating feature for authors, in which our manuscripts are rejected on grounds that other published papers clearly also suffer from?
We acknowledge that the economics of investigating the influence of sex on experimental outcomes will be difficult until research grant‐funding agencies insist that researchers adapt their experimental designs, in order to accommodate sex as an experimental variable and provide the necessary resources. In the meantime, manuscripts based on studies that have used only one sex or gender will continue to be published in BJP. However, we will require authors to include a statement to justify a decision to study only one sex or gender.
Oh a statement. You know, the NIH has (sort of, weaselly) “insisted”. But as we know the research force is fighting back, insisting that we don’t have “necessary resources” and, several years into this policy, researchers are blithely presenting data at conferences with no mention of addressing SABV.
Overall sex differences and, more importantly, interactions between experimental interventions and sex (i.e., the effect of the intervention differs in the two sexes) cannot be inferred if males and females are studied in separate time frames.
Absolutely totally false. False, false, false. This has come up in more than one of my recent reviews and it is completely and utterly, hypocritically wrong. Why? Several reasons. First of all in my fields of study it is exceptionally rare that large, multi-group, multi-sub-study designs (in single sex) are conducted this way. It is resource intensive and generally unworkable. Many, many, many studies include comparisons across groups that were not run at the same time in some sort of cohort balancing design. And whaddaya know those studies often replicate with all sorts of variation across labs, not just across time within lab. In fact this is a strength. Second, in my fields of study, we refer to prior literature all the time in our Discussion sections to draw parallels and contrasts. In essentially zero cases do the authors simply throw up their hands and say “well since nobody has run studies at the same time and place as ours there is nothing worth saying about that prior literature”. You would be rightfully laughed out of town.
Third concern: It’s my old saw about “too many notes“. Critique without an actual reason is bullshit. In this case you have to say why you think the factor you don’t happen to like for Experimental Design 101 reasons (running studies in series instead of parallel) has contributed to the difference. If one of my peer labs says they did more or less the same methods this month compared to last year compared to five years ago…wherein lies the source of non-sex-related variance which explains why the female group self-administered more cocaine compared with the before, after and in between male groups which all did the same thing? And why are we so insistent about this for SABV and not for the series of studies in males that reference each other?
In conscious animal experiments, a potential confounder is that the response of interest might be affected by the close proximity of an animal of the opposite sex. We have no specific recommendation on how to deal with this, and it should be borne in mind that this situation will replicate their “real world.” We ask authors merely to consider whether or not males and females should be physically separated, to ensure that sight and smell are not an issue that could confound the results, and to report on how this was addressed when carrying out the study. Obviously, it would not be advisable to house males and females in different rooms because that would undermine the need for the animals to be exposed to the same environmental factors in a properly controlled experiment.
NO SHIT SHERLOCK!
Look, there are tradeoffs in this SABV business when it comes to behavior studies, and no doubt others. We have many sources of potential variance that could be misinterpreted as a relatively pure sex difference. We cannot address them all in each and every design. We can’t. You would have to run groups that were housed together, and not, in rooms together and not, at times similar and apart AND MULTIPLY THAT AGAINST EACH AND EVERY TREATMENT CONDITION YOU HAVE PLANNED FOR THE “REAL” STUDY.
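To put a number on that shouting, just multiply out the factors named above. The treatment count and group size are assumed for the example:

```python
# Cells needed if every nuisance factor raised above were fully crossed.
# Factors come from the text: sex, co-housing or not, same room or not,
# run at the same time or not. The number of treatment conditions and the
# per-cell group size are assumed for illustration.

sex        = 2
housing    = 2   # housed with the opposite sex present vs not
room       = 2   # same room vs separate rooms
timing     = 2   # run concurrently vs in separate time frames
treatments = 4   # assumed treatment conditions in the "real" study
n_per_cell = 8   # assumed group size

cells   = sex * housing * room * timing * treatments
animals = cells * n_per_cell
print(f"{cells} cells, {animals} animals, before a single follow-up study")
```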
Unless the objective of the study is specifically to investigate drug‐induced responses at specific stages of the oestrous cycle, we shall not require authors to record or report this information in this journal. This is not least because procedures to determine oestrous status are moderately stressful and an interaction between the stress response and stage of the oestrous cycle could affect the experimental outcome. However, authors should be aware that the stage of the oestrous cycle may affect response to drugs particularly in behavioural studies, as reported for actions of cocaine in rats and mice (Calipari et al., 2017; Nicolas et al., 2019).
Well done. Except why cite papers where there are oestrous differences without similarly citing cases where there are no oestrous differences? It sets up a bias that has the potential to undercut the more correct way they start Section 5.5.
My concern with all of this is not the general support for SABV. I like that. I am concerned first that it will be toothless in the sense that studies which include SABV will not be prioritized and some, not all, authors will be allowed to get away with thin rationales. This is not unique to BJP; I suspect the NIH is failing hard at this as well. And without incentives (easier acceptance of manuscripts, better grant odds) or punishments (auto rejects, grant triages), behavior won’t change because the other incentives (faster movement on “real” effects and designs) will dominate.
Infuriating manuscripts
January 17, 2019
I asked what percentage of manuscripts that you receive to review make you angry that the authors dared to submit such trash. The response did surprise me, I must confess.
I feel as though my rate is definitely under 5%.
Suggest women as potential reviewers
December 12, 2018
A recent editorial in Neuropsychopharmacology by Chloe J. Jordan and the Editor in Chief, William A. Carlezon Jr., overviews the participation of scientists in the journal’s workings by gender. I was struck by Figure 5 because it is a call for immediate and simple action by all of you who are corresponding authors, and indeed any authors.
The upper two pie charts show that between 25% and 34% of the potential reviewer suggestions in the first half of 2018 were women. Interestingly, the suggestions for manuscripts from corresponding authors who are themselves women were only slightly more gender balanced than were the suggestions for manuscripts with male corresponding authors.
Do Better.
I have for several years now tried to remember to suggest equal numbers of male and female reviewers as a default and occasionally (gasp) can suggest more women than men. So just do it. Commit yourself to suggest at least as many female reviewers as you do male ones for each and every one of your manuscripts. Even if you have to pick a postdoc in a given lab.
I don’t know what to say about the lower pie charts. It says that women corresponding authors nominate female peers to exclude at twice the rate of male corresponding authors. It could be a positive in the sense that women are more likely to think of other women as peers, or potential reviewers of their papers. They would therefore perhaps suggest more female exclusions compared with a male author that doesn’t bring as many women to mind as relevant peers.
That’s about the most positive spin I can think of for that so I’m going with it.