Your Manuscript in Review: It is never an idle question
August 22, 2018
I was trained to respond to peer review of my submitted manuscripts as straight up as possible. By this I mean I was trained (and have further evolved in training postdocs) to take every comment as legitimate and meaningful while trying to avoid the natural tendency to view it as the work of an illegitimate hater. This does not mean one accepts every demand for a change or alters one’s interpretation in preference for that of a reviewer. It just means you take it seriously.
If the comment seems stupid (the answer is RIGHT THERE), you use this to see where you could restate the point, reword your sentences or otherwise help the reader out. If the interpretation is counter to yours, see where you can acknowledge the caveat. If the methods are unclear to the reviewer, modify your description to assist.
I may not always reach some sort of rebuttal Zen state of oneness with the reviewers. That I can admit. But this approach guides my response to manuscript review. It is unclear that it guides everyone's behavior, and there are some folks who like to do a lot of rebuttal and relatively less responding. Maybe this works, maybe it doesn't, but I want to address one particular type of response to review that pops up now and again.
It is the provision of an extensive / awesome response to some peer review point that may have been phrased as a question, without incorporating it into the revised manuscript. I’ve even seen this suboptimal approach extend to one or more paragraphs of (cited!) response language.
Hey, great! You answered my question. But here’s the thing. Other people are going to have the same question* when they read your paper. It was not an idle question for my own personal knowledge. I made a peer review comment or asked a peer review question because I thought this information should be in the eventual published paper.
So put that answer in there somewhere!
___
*As I have probably said repeatedly on this blog, it is best to try to treat each of the three reviewers of your paper (or grant) as 33.3% of all possible readers or reviewers. Instead of mentally dismissing them as that weird outlier crackpot**.
**this is a conclusion for which you have minimal direct evidence.
On reviewing scientific work from known sexual harassers, retaliators, bigots and generalized jerks of science
May 15, 2018
On a recent post, DNAMan asks:
If you were reviewing an NIH proposal from a PI who was a known (or widely rumored) sexual harasser, would you take that into account? How?
My immediate answer was:
I don’t know about “widely rumored”. But if I was convinced someone was a sexual harasser this would render me unable to fairly judge the application. So I would recuse myself and tell the SRO why I was doing so. As one is expected to do for any conflicts that one recognizes about the proposal.
I’m promoting this to a new post because this also came up in the Twitter discussion of Lander’s toast of Jim Watson. Apparently this is not obvious to everyone.
One is supposed to refuse to review grant proposals, and manuscripts submitted for publication, if one feels that one has a conflict of interest that renders the review biased. This is very clear. Formal guidelines tend to concentrate on personal financial benefits (i.e., standing to gain from a company in which one has ownership or other financial interest), institutional benefits (i.e., you cannot review NIH grants submitted from your University since the University is, after all, the applicant and you are an agent of that University) and mentoring / collaborating interests (typically expressed as co-publication or formal mentoring within the past three years). Nevertheless there is a clear expectation, spelled out in some cases, that you should refuse to take a review assignment if you feel that you cannot be unbiased.
This is beyond any formal guidelines. A general ethical principle.
There is a LOT of grey area.
As I frequently relate, in my early years when a famous Editor asked me to review a manuscript from one of my tighter science homies and I pointed out this relationship, I was told "If I had to use that standard as the Editor I would never get anything reviewed. Just do it. I know you are friends."
I may also have mentioned that when I was first on study section I queried an SRO about doing reviews for PIs who were scientifically sort of close to my work. I was told a similar thing: reviews would never get done if working vaguely in the same area, or maybe one day competing on some topic, were the standard for COI recusal.
So we are, for the most part, left up to our own devices and ethics about when we identify a bias in ourselves and refuse to do peer review because of this conflict.
I have occasionally refused to review an NIH grant because the PI was simply too good of a friend. I can’t recall being asked to review a grant proposal from anyone I dislike personally or professionally enough to trigger my personal threshold.
I am convinced, however, that I would recuse myself from the review of proposals or manuscripts from any person that I know to be a sexual harasser, a retaliator and/or a bigot against women, underrepresented groups generally, LGBTQ, and the like.
There is a flavor of apologist for Jim Watson (et rascalia) that wants to pursue a “slippery slope” argument. Just Asking the Questions. You know the type. One or two of these popped up on twitter over the weekend but I’m too lazy to go back and find the thread.
The JAQ-off response is along the lines of “What about people who have politics you don’t like? Would you recuse yourself from a Trump voter?”.
The answer is no.
Now sure, the topic of implicit or unconscious bias came up, and it is certainly a problem. We cannot recuse ourselves when we do not recognize our bias. But I would argue that this does not in any way suggest that we shouldn't recuse ourselves when we DO recognize our biases. There is a severity factor here. I may have an implicit bias against someone in my field that I know to be a Republican. Or I may not. And when there is a clear and explicit bias, we should recuse.
I do not believe that people who have proven themselves to be sexual harassers or bigots on the scale of Jim Watson deserve NIH grant funding. I do not believe their science is going to be so much superior to all of the other applicants that it needs to be funded. And so if the NIH disagrees with me, by letting them participate in peer review, clearly I cannot do an unbiased job of what NIH is asking me to do.
The manuscript review issue is a bit different. It is not zero-sum and I never review that way, even for the supposedly most-selective journals that ask me to review. There is no particular reason to spread scoring, so to speak, as it would be done for grant application review. But I think it boils down to essentially the same thing. The Editor has decided that the paper should go out for review and it is likely that I will be more critical than otherwise.
So….can anyone see any huge problems here? Peer review of grants and manuscripts is opt-in. Nobody is really obliged to participate at all. And we are expected to manage the most obvious of biases by recusal.
Ludicrous academics for $200, Alex
April 2, 2018
Just when I think I will not find any more ridiculous things hiding in academia…..
A recent thread on twitter addressed a population of academics (not sure if it was science) who are distressed when the peer review of their manuscripts is insufficiently vigorous/critical.
This is totally outside of my experience. I can’t imagine ever complaining to an Editor of a journal that the review was too soft after getting an accept or invitation to revise.
People are weird though.
Question of the Day
April 2, 2018
How do you assess whether you are too biased about a professional colleague and/or their work?
In the sense that you would self-elect out of reviewing either their manuscripts for publication or their grant applications.
Does your threshold differ for papers versus grants?
Do you distinguish between antipathy bias and sympathy bias?
Variability in NIH Grant review is a good feature, not a bug, not a terrible indictment of the system
March 13, 2018
You may see more dead horse flogging than usual, folks. The commentariat is not yet as vigorous as I might like.
This emphasizes something I had to say about the Pier monstrosity purporting to study the reliability of NIH grant review.
Terry McGlynn says:
Absolutely. We do not want 100% fidelity in the evaluation of grant "merit". If we did that, and review were approximately statistically representative of the funded population, we would all end up working on cancer.
Instead, we have 28 I or Cs. These are broken into Divisions that have fairly distinct missions. There are Branches within the Divisions and multiple POs who may have differing viewpoints. CSR fields a plethora of study sections, many of which have partially overlapping missions, meaning a grant could be reviewed in one of several different sections. A standing section might easily have 20-30 reviewers per meeting and your grant might reasonably be assigned to any of several different permutations of three for primary assessment. Add to this the fact that reviewers change over time within a study section, even across the rounds to which you are submitting approximately the same proposal. There should be no wonder whatsoever that review outcome for a given grant might vary a bit under differing review panels.
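Just to put a rough number on that combinatorial point, here is a minimal back-of-the-envelope sketch. The panel size, the three-reviewer assignment, and the number of plausible home study sections are illustrative assumptions, not CSR figures.

```python
from math import comb

# Illustrative assumptions only: a standing section of 25 reviewers,
# any 3 of whom might draw your application for primary assessment.
panel_size = 25
assigned_reviewers = 3

teams_per_section = comb(panel_size, assigned_reviewers)
print(f"Possible 3-reviewer teams in one section: {teams_per_section}")

# If the application could plausibly be routed to, say, 3 partially
# overlapping study sections, the space of possible teams grows further.
candidate_sections = 3
print(f"Across {candidate_sections} candidate sections: "
      f"{candidate_sections * teams_per_section} possible teams")
```

That is thousands of distinct three-reviewer teams for a single application, before you even account for reviewer turnover across rounds.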
Do you really want perfect fidelity?
Do you really want that 50% triage and another 30-40% scored-outside-the-payline to be your unchangeable fate?
Of course not.
You want the variability in NIH Grant review to work in your favor.
If a set of reviewers finds your proposal unmeritorious do you give up* and start a whole ‘nother research program? Eventually to quit your job and do something else when you don’t get funded after the first 5 or 10 tries?
Of course not. You conclude that the variability in the system went against you this time, and come back for another try. Hoping that the variability in the system swings your way.
Anyway, I’d like to see more chit chat on the implicit question from the last post.
No “agreement”. “Subjectivity”. Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, “subjective”. Anyone that pretends this process is “objective” is an idiot. Underinformed. Willfully in denial. Review by human is a “subjective” process by its very definition. That is what it means.
The only debate here is how much variability we expect there to be. How much precision do we expect in the process.
Well? How much reliability in the system do you want, Dear Reader?
__
*ok, maybe sometimes. but always?
I was critical of a recent study purporting to show that NIH grant review is totally random, because of structural flaws in that study that could not have been designed more precisely to reach a foregone conclusion.
I am also critical of CSR/NIH self-studies. These are harder to track because they are not always published or well promoted. We often only get wind of them when people we know are invited to participate as reviewers. Often the results are not returned to the participants or are returned with an explicit swearing to secrecy.
I’ve done a couple of these self-study reviews for CSR.
I am not impressed by their designs either. Believe me.
As far as I’ve heard or experienced, most (all) of these CSR studies have the same honking flaw of restricted range. Funded applications only.
Along with other obscure design choices that seem to miss the main point*. One review pitted apps funded from closely-related sections against each other…except "closely-related" did not appear that close to me. It was more a test of whatever historical accident made CSR cluster those study sections, or perhaps a test of mission drift. A better way would have been to cluster study sections to which the same PI submits. Or by assigned PO, maybe? By a better keyword cluster analysis?
Anyway, the CSR designs are usually weird when I hear about them. They never want to convene multiple panels of very similar reviewers to review the exact same pile of apps in real time. Reporting on their self-studies is spotty at best.
This appears to my eye to be an attempt to service a single political goal. I.e. “Maintain the ability to pretend to Congress that grant review selects only the most meritorious applications for funding with perfect fidelity”.
The critics, as we've seen, do the opposite. Their designs are manipulated to provide a high probability of showing NIH grant review is utterly unreliable and needs to be dismantled and replaced.
Maybe the truth lies somewhere in the middle? And if these forces would combine to perform some better research we could perhaps better trust jointly proposed solutions.
__
*I include the “productivity” data mining. NIH also pulls some sketchy stuff with these studies. Juking it carefully to support their a priori plans, rather than doing the study first and changing policy after.
Pier and colleagues published a study purporting to evaluate the reliability of NIH style peer review of grant applications. Related work that appears to be from the same study was published by this group in 2017.
From the supplement to the 2018 paper, we note that the reviewer demographics were 62% Asian, 38% white with zero black or hispanic reviewers. I don’t know how that matches the panels that handle NCI applications but I would expect some minimal black/hispanic representation and a lot lower Asian representation to match my review panel experiences. The panels were also 24% female which seems to match with my memory of NIH stats for review running under 1/3 women.
Seventeen percent of reviewers were at assistant professor rank. This is definitely a divergence from CSR practice. The only data I saw right around the time of Scarpa's great Purge of Assistant Professors suggested a peak of 10% of reviewers. Given the way ad hoc / empaneled reviewer loads work, I think we can conclude that way fewer than 10% of reviews were coming from Assistant Professors. As you know, we are now a decade past the start of the purge and these numbers have to be lower. So the panel demographics are not similar.
N.b., the 2017 paper says they surveyed the reviewers on similarity to genuine NIH review experience, but I can't find anywhere that it states the amount of review experience of the subjects. Similarly, while they all had to have been awarded at least one R01, we don't know anything about their experiences as applicants. Might be relevant. A missed opportunity would seem to be testing reviewer demographics in the 2017 paper, which covers more about the process of review, calibration of scoring, agreement after discussion, etc.
The paper(s) also say that the authors tried to de-identify the applicants.
All applications were deidentified, meaning the names of the PIs, any co-investigators, and any other research personnel were replaced with pseudonyms. We selected pseudonyms using public databases of names that preserved the original gender, nationality, and relative frequency across national populations of the original names. All identifying information, including institutional addresses, email addresses, phone numbers, and hand-written signatures were similarly anonymized and re-identified as well.
I am still looking but I cannot find any reference to any attempt by the authors to validate whether the blinding worked. Which is in and of itself a fascinating question. But for the purposes of the "replication" of NIH peer review we must recognize that Investigator and Environment are two of the five formally co-equal scoring criteria. We know that the NIH data show poor correlation of Investigator and Environment criterion scores with the overall voted impact score (Approach and Significance are the better predictors), but these are still scoring criteria. How can this study attempt to delete two of these and then purport to be replicating the process? It is like they intentionally set out to throw noise into the system.
I don’t think the review panels triaged any of the 25 proposals. The vast majority of NIH review involves triage of the bottom ~half of the assigned proposals. Reviewers know this when they are doing their preliminary reading and scoring.
Agreement among NIH grant reviewers
March 9, 2018
Pier and colleagues recently published a study purporting to address the reliability of the NIH peer review process. From the summary:
We replicated the NIH peer-review process to examine the qualitative and quantitative judgments of different reviewers examining the same grant application. We found no agreement among reviewers in evaluating the same application. These findings highlight the subjectivity in reviewers’ evaluations of grant applications and underscore the difficulty in comparing the evaluations of different applications from different reviewers—which is how peer review actually unfolds.
emphasis added.
This thing is a crock and yet it has been bandied about on the Twitts as if it is the most awesome thing ever. “Aha!” cry the disgruntled applicants, “This proves that NIH peer review is horrible, terrible, no good, very bad and needs to be torn down entirely. Oh, and it also proves that it is a super criminal crime that some of my applications have gone unfunded, wah.”
A smaller set of voices expressed perplexed confusion. “Weird“, we say, “but probably our greatest impression from serving on panels is that there is great agreement of review, when you consider the process as a whole.”
So, why is the study irretrievably flawed? In broad strokes it is quite simple.
Restriction of the range. Take a look at the first figure. Does it show any correlation of scores? Any fair view would say no. Aha! Whatever is being represented on the x-axis about these points does not predict anything about what is being represented on the y-axis.
This is the mistake being made by Pier and colleagues. They constructed four peer-review panels and had them review the same population of 25 grants. The trick is that, of these, 16 were already funded by the NCI and the remaining 9 were prior unfunded versions of grants that were later funded by the NCI.
In short, the study selects proposals from a very limited range of the applications being reviewed by the NIH. This figure shows the rest of the data from the above example. When you look at it like this, any fair eye concludes that whatever is being represented by the x value about these points predicts something about the y value. Anyone with the barest of understanding of distributions and correlations gets this. Anyone with the most basic understanding grasps that a distribution does not have to have perfect correspondence for there to be a predictive relationship between two variables.
So. The authors' claims are bogus. Ridiculously so. They did not "replicate" the peer review because they did not include a full range of scores/outcomes but instead picked the narrowest slice of the applicant pool, the funded awards. I don't have time to dig up historical data, but the current funding plan for NCI calls for a 10%ile payline. You can amuse yourself with the NIH success rate data here; the very first spreadsheet I clicked on gave a success rate of 12.5% for NCI R01s.
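If the restriction-of-range point is not intuitive, a toy simulation makes it concrete. This is not a model of actual NIH scoring; the latent "merit" variable, the reviewer noise level, and the ~12% cutoff are all assumptions picked purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each application has a latent "merit"; two independent
# panels each score it with the same amount of reviewer noise.
n_apps = 10_000
merit = rng.normal(size=n_apps)
panel_a = merit + rng.normal(scale=1.0, size=n_apps)
panel_b = merit + rng.normal(scale=1.0, size=n_apps)

full_r = np.corrcoef(panel_a, panel_b)[0, 1]

# Keep only the applications panel A placed in its top ~12%, roughly
# the funded slice from which the study drew its sample.
top = panel_a >= np.quantile(panel_a, 0.88)
restricted_r = np.corrcoef(panel_a[top], panel_b[top])[0, 1]

print(f"Correlation across the full applicant pool: {full_r:.2f}")
print(f"Correlation within the funded slice only:   {restricted_r:.2f}")
```

In this toy setup the two panels agree moderately well across the full pool, but the apparent agreement drops by about half once you only examine the slice that one of them already placed at the top, and the more severe the selection the worse it gets. That is the kind of sample the authors analyzed.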
No “agreement”. “Subjectivity”. Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, “subjective”. Anyone that pretends this process is “objective” is an idiot. Underinformed. Willfully in denial. Review by human is a “subjective” process by its very definition. That is what it means.
The only debate here is how much variability we expect there to be. How much precision do we expect in the process.
The most fervent defenders of the general reliability of the NIH grant peer review process almost invariably will acknowledge that the precision of the system is not high. That the “top-[insert favored value of 2-3 times the current paylines]” scoring grants are all worthy of funding and have very little objective space between them.
Yet we still seem to see this disgruntled applicant phenotype, responding with raucous applause to a crock of crap conclusion like that of Pier and colleagues, who seems to feel that somehow it is possible to have a grant evaluation system that is perfect. One that returns the exact same score for a given proposal each and every time*. I just don't understand these people.
__
Elizabeth L. Pier, Markus Brauer, Amarette Filut, Anna Kaatz, Joshua Raclaw, Mitchell J. Nathan, Cecilia E. Ford and Molly Carnes, Low agreement among reviewers evaluating the same NIH grant applications. 2018, PNAS: published ahead of print March 5, 2018, https://doi.org/10.1073/pnas.1714379115
*And we’re not even getting into the fact that science moves forward and that what is cool today is not necessarily anywhere near as cool tomorrow
Responding to comments on pre-prints
March 2, 2018
almost tenured PI raised some interesting questions in a comment:
So you want me to submit a preprint so I can get comments that I have to spend time responding to? No thanks. I spend enough time responding to the comments of reviewers from real journals. I can’t imagine how much time I’d have to spend responding to comments that are public and immortalized online. No paper is perfect. How many comments does a paper need before it’s acceptable for publication? Where does it end? I do not need more work when it already feels like the bar to publishing a paper keeps rising and rising.
Ok, so first off we need to recognize that the null hypothesis right now has to be that there will not be extensive substantial commentary on the preprints. The PubMed Commons commenting scheme on PubMed was shut down for lack of use. Journals have been trying out various systems for years (a decade or more?) without general success. The fear that each posted pre-print will attract a host of journal-club style comments is probably not well supported.
But let's suppose your preprint does get some critical comment*. Are you obliged to respond?
This ties into the uncertainty and disagreement over when to submit the preprint. At what stage of the process do you post it? Looking at the offerings on bioRxiv, I think many folks are very close to my position. Namely, we are waiting to submit a preprint until it is ready to go out for peer review at a journal.
So any comments it gets are being made in parallel with review and I can choose to address or not when I get the original decision back and need to revise the manuscript. Would the comments then somehow contaminate the primary review? Would the reviewers of a revision see the comments on the pre-print and demand you address those as well as the regular peer comments? Would an Editor? For now I seriously doubt this is a problem.
So, while I think there may be many reasons for people not to want to post their manuscripts as pre-prints, I don’t think the fear that this will be an extra dose of Reviewer #3 is well supported.
__
*I had one get a comment and we ended up including something in a revision to address it so, win-win.
Theological waccaloons win because they are powered by religious fervor and exhaust normal people
February 14, 2018
Some self-congratulatory meeting of the OpenAccess Illuminati* took place recently and a summary of takeaway points has been posted by Stephen Curry (the other one).
These people are exhausting. They just keep bleating away with their talking points and refuse entirely to ever address the clear problems with their plans.
Anonymous peer review exists for a reason.
To hear them tell it, the only reason is so hateful incompetent reviewers can prevent their sterling works of genius from being published right away.
This is not the reason for having anonymous peer review in science.
Their critics regularly bring up the reason we have anonymous peer review and the virtues of such an approach. The OA Illuminati refuse to address this. At best they will vaguely acknowledge their understanding of the issue and then hand wave about how it isn’t a problem just …um…because they say so.
It’s also weird that 80%+ of their supposed problems with peer review as we know it are attributable to their own participation in the Glamour Science game. Some of them also see problems with GlamHumping but they never connect the dots to see that Glamming is the driver of most of their supposed problems with peer review as currently practiced.
Which tells you a lot about how their real goals align with the ones that they talk about in public.
Edited to add:
Professor Curry weighed in on twitter to insist that the goal is not to force everyone to sign reviews. See, his plan allows people to opt out if they choose. This is probably even worse for the goal of getting an even-handed and honest review of scientific papers. And, even more tellingly, it designs the experiment so that it cannot do anything other than provide evidence in support of their hypothesis. Neat trick.
Here's how it will go down. People will sign their reviews when they have "nice, constructive" things to say about the paper. BSDs, who are already unassailable and are the ones self-righteously saying they sign all their reviews now, will continue to feel free to be dicks. And the people** who feel that attaching their name to their true opinion puts them at risk will still feel pressure. To not review, to soft-pedal and sign, or to supply an unsigned but critical review. All of this is distorting.
Most importantly for the open-review fans, it will generate a record of signed reviews that seem wonderfully constructive or deserved (the Emperor’s, sorry BSDs, critical pants are very fine indeed) and a record of seemingly unconstructive critical unsigned reviews (which we can surely dismiss because they are anonymous cowards). So you see? It proves the theory! Open reviews are “better” and anonymous reviews are mean and unjustified. It’s a can’t-miss bet for these people.
The choice to not-review is significant. I know we all like to think that "obvious flaws" would occur to anyone reading a paper. That's nonsense. Having been involved in manuscript and grant review for quite some time now, I am here to tell you that the assigned reviewers (typically 3) all provide unique insight. Sometimes during grant review other panel members see things the three assigned people missed, and in manuscript review the AE or EIC sees something. I'm sure you could run parallel sets of three reviewers and it would take quite a large sample before every single concern had been identified. Compare this experience to the number of comments made in all of the various open-commenting systems (the PubMed Commons commenting system was just shuttered for lack of general interest, by the way) and you simply cannot believe claims that any reviewer can be omitted*** with no loss of function. Not to mention the fact that open commenting systems are just as subject to the above-discussed opt-in problems as are signed official systems of peer review.
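If you want a feel for why "just drop a reviewer, someone else would have caught it" does not hold up, here is a toy simulation. The number of distinct problems in a manuscript and the per-reviewer probability of noticing any given one are invented for illustration; nothing here is estimated from real review data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented illustration: a manuscript has 12 distinct problems and any
# single reviewer notices each one independently with probability 0.4.
n_flaws, p_catch = 12, 0.4
n_trials, max_reviewers = 10_000, 60

frac_caught_by_three = np.empty(n_trials)
reviewers_to_catch_all = np.empty(n_trials)

for t in range(n_trials):
    # catches[i, j] is True if reviewer i notices problem j
    catches = rng.random((max_reviewers, n_flaws)) < p_catch
    cumulative = np.cumsum(catches, axis=0) > 0  # union of the first i+1 reviewers
    frac_caught_by_three[t] = cumulative[2].mean()
    reviewers_to_catch_all[t] = np.argmax(cumulative.all(axis=1)) + 1

print(f"Average share of problems caught by 3 reviewers: "
      f"{frac_caught_by_three.mean():.2f}")
print(f"Average reviewers needed to catch every problem: "
      f"{reviewers_to_catch_all.mean():.1f}")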
__
*hosted at HHMI headquarters which I’m sure tells us nothing about the purpose
**this is never an all-or-none choice tied to fixed reviewer traits. It will be a manuscript-by-manuscript choice, which makes it nearly impossible to assess the quelling and distorting effect this will have on high quality review of papers.
***yes, we never have an overwhelmingly large sample of reviewers. The point here is the systematic distortion.
NIH encourages pre-prints
February 13, 2018
In March of 2017 the NIH issued a notice on Reporting Preprints and Other Interim Research Products (NOT-OD-17-050): "The NIH encourages investigators to use interim research products, such as preprints, to speed the dissemination and enhance the rigor of their work."
The key bits:
Interim Research Products are complete, public research products that are not final.
A common form is the preprint, which is a complete and public draft of a scientific document. Preprints are typically unreviewed manuscripts written in the style of a peer-reviewed journal article. Scientists issue preprints to speed dissemination, establish priority, obtain feedback, and offset publication bias.
Another common type of interim product is a preregistered protocol, where a scientist publicly declares key elements of their research protocol in advance. Preregistration can help scientists enhance the rigor of their work.
I am still not happy about the reason this happened (i.e., Glam hounds trying to assert scientific priority in the face of the Glam Chase disaster they themselves created) but this is now totally beside the point.
The NIH policy (see OpenMike blog entry for more) has several implications for grant seekers and grant holders which are what form the critical information for your consideration, Dear Reader.
I will limit myself here to materials that are related to standard paper publishing. There are also implications for materials that would never be published (computer code?) but that is beyond the scope for today’s discussion.
At this point I will direct you to bioRxiv and PsyRxiv if you are unfamiliar with some of the more popular approaches for pre-print publication of research manuscripts.
The advantages to depositing your manuscripts in pre-print form are all about priority and productivity, in my totally not humble opinion. The former is why the Glamour folks are all a-lather, but priority and scooping affect each of us a little differently. As most of you know, scooping and priority are not a huge part of my professional life but, all things being equal, it's better to get your priority on record. In some areas of science, establishing scientific priority is career making/breaking and grant getting/rejecting. So if this is a thing for your life, this new policy allows and encourages you to take advantage.
I'm more focused on productivity. First, this is an advantage for trainees. We've discussed the tendency of new scientists to list manuscripts "in preparation" on their CV or Biosketch (for fellowship applications, say, despite it being technically illegal). This designation is hard to evaluate. A nearing-defense grad student who has three "in prep" manuscripts listed on the CV can appear to be bullshitting you. I always caution people that if they list such things they had better be prepared to send a prospective post-doc supervisor a mostly-complete draft. Well, now the pre-print allows anyone to post "in preparation" drafts so that anyone can verify their status. Very helpful for graduate students who have a short timeline versus the all too typical cycle of submission/rejection/resubmission/revision, etc.

More importantly, the NIH previously frowned on listing "in preparation" or "in review" items on the Biosketch. This was never going to result in an application being returned unreviewed, but it could sour the reviewers. And of course any rule followers out there would simply not list any such items, even if a minor revision was being considered. With pre-print deposition and the ability to list the item on an NIH Biosketch and cite it in the Research Plan, there is no longer any vaporware type of situation. The reviewer can look at the pre-print and judge the science for herself.
This applies to junior PIs as well. Most likely, junior PIs will have fewer publications, particularly from their brand new startup labs. The ability of the PI to generate data from her new independent lab can be a key issue in grant review. As with the trainee, the cycle of manuscript review and acceptance is lengthy compared with the typical tenure clock. And of course many junior PIs are trying to balance JIF/Glam against this evidence of independent productivity. So pre-print deposition helps here.
A very similar situation can apply to us not-so-junior PIs who are proposing research in a new direction. Sure, there is room for preliminary data in a grant application but the ability to submit data in manuscript format to the bioRxiv or some such is unlimited! Awesome, right?
How do you respond to not being cited where appropriate?
October 10, 2016
Have you ever been reading a scientific paper and thought “Gee, they really should have cited us here”?
Never, right?
There should be only three categories of review outcome.
Accept, Reject and Minor Revisions.
Part of the Editorial decision making will have to be whether the experiments demanded by the reviewers are reasonable as "minor" or not. I suggest leaning towards accepting only the most minimal demands for additional experimentation as "minor revisions" and otherwise choosing to reject.
And no more of this back and forth with Editors about what additional work might make it acceptable for the journal as a new submission either.
We are handing over to other people too much power to direct and control the science. That power rightfully belongs within your lab and within your circle of key peers.
If J Neuro could take a stand against Supplemental Materials, they and other journals can take a stand on this.
I estimate that the greatest advantage will be the sharp decline in reviewers demanding extra work just because they can.
The second advantage will be that Editors themselves have to select from what is submitted to them, instead of trying to create new papers by holding acceptances at bay until the authors throw down another year of person-work.
Review unto others
April 25, 2016
I think I’ve touched on this before but I’m still seeking clarity.
How do you review?
Let's imagine, this time, a given journal from which you sometimes get rejections and sometimes get acceptances.
Do you review manuscripts for that journal as you would like to be reviewed?
Or as you have perceived yourself to have been reviewed?
Do you review according to your own evolved wisdom or with an eye to what you perceive the Editorial staff of the journal desire?
Interesting comment from AnonNeuro:
Reviews are confidential, so I don’t think you can share that information. Saying “I’ll review it again” is the same as saying “I have insider knowledge that this paper was rejected elsewhere”. Better to decline the review due to conflict.
I don't think I've ever followed this as a rule. I have definitely told editors, in the past, when a manuscript has not been revised from a previously critiqued version (I don't say which journal had rejected the authors' work). But I can't say that I invariably mention it either. If the manuscript has been revised somewhat, why bother? If I like it and want to see it published, mentioning that I've seen a prior version elsewhere seems counterproductive.
This comment had me pondering my lack of a clear policy.
Maybe we should tell the editor upon accepting the review assignment so that they can decide if they still want our input?