Why indeed.

I have several motivations, deployed variably, and therefore my answers to his question about a journal-less world vary.

First and foremost I review manuscripts as a reciprocal professional obligation, motivated by my desire to get my own papers published. It is distasteful free-rider behavior not to review at least as often as you require the field to review for you. That is, approximately 3 times your number of unique-journal submissions. Should we ever move to a point where I do not expect any such review of my work to be necessary, then this prime motivator goes to zero. So, “none”.

The least palatable (to me) motivation is the gatekeeper motivation. I do hope this is the rarest of the reviews that I write. Gatekeeper motivation leads to reviews that try really hard to get the editor to reject the manuscript, or to persuade the authors that this really should not be presented to the public in anything conceivably related to current form. In my recollection, this is usually because the work is too slim for even my rather expansive views on “least publishable unit” or because there is some really bad interpretation or experimental design going on. In a world where these works appeared as pre-prints, I think I would be mostly unmotivated to supply my thoughts in public. Mostly because I think the problems would be obvious to anyone in the field, and therefore what is the point of me posturing around in some bioRxiv comment field about how smart I was to notice them.

In the middle of this space I have the motivation to try to improve the presentation of work that I have an interest in. The most fun papers to review for me are, of course, the ones directly related to my professional interests. For the most part, I am motivated to see at least some part of the work in print. I hope my critical comments are mostly in the nature of “you need to rein back your expansive claims” and only much less often in the vein of “you need to do more work on what I would wish to see next”. I hate those when we get them and I hope I only rarely make them.

This latter motivation is, I expect, the one that would most drive me to make comments in a journal-less world. I am not sure that I would do much of this, and the entirely obvious sources of bias in the go/no-go decision make it even more likely that I wouldn’t comment. Look, there isn’t much value in a bunch of congratulatory comments on a scientific paper. The value is in critique and in drawing together a series of implications for our knowledge on the topic at hand. This latter is what review articles are for, and I am not personally big into those. So that wouldn’t motivate me. Critique? What’s the value? In pre-publication review there is some chance that this critique will result in changes where it counts. Data re-analysis, maybe some more studies added, a more focused interpretation narrative, better contextualization of the work…etc. In post-publication review, it is much less likely to result in any changes. Maybe a few readers will notice something that they didn’t already come up with for themselves. Maybe. I don’t have the sort of arrogance that thinks I’m some sort of brilliant reader of the paper. I think people who envision some new world order where the unique brilliance of their critical reviews is made public have serious narcissism issues, frankly. I’m open to discussion on that but it is my gut response.

On the flip side of this is cost. If you don’t think the process of peer review in subfields is already fraught with tit-for-tat vengeance seeking even when it is single-blind, well, I have a Covid cure to sell you. This will motivate people not to post public, unblinded critical comments on their peers’ papers. Because they don’t want to trigger revenge behaviors. It won’t just be a tit-for-tat waged in these “overlay” journals of the future or in the comment fields of pre-print servers. Oh no. It will bleed over into all of the areas of academic science including grant review, assistant professor hiring, promotion letters, etc, etc. I appreciate that Professor Eisen has an optimistic view of human nature and believes these issues to be minor. I do not have an optimistic view of human nature and I believe these issues to be hugely motivational.

We’ve seen various attempts at online, post-publication commentary of the journal-club variety crash and burn over the years. Decades by now. The efforts die because of a lack of use. Always. People in science just don’t make public review-type comments, despite the means being readily available and simple. I assure you it is not because they do not have interesting and productive views on published work. It is because they see very little positive value and a whole lot of potential harm for their careers.

“How do we change this?”, I feel sure Professor Eisen would challenge me.

I submit to you that we should start by looking at those who are already keen to take up such commentary. Who drop their opinions on the work of colleagues at the drop of a hat, with nary a care about how it will be perceived. Why do they do it?

I mean yes, narcissistic assholes, sure but that’s not the general point.

It is those who feel themselves unassailable. Those who do not fear* any real risk of their opinions triggering revenge behavior.

In short, the empowered. Tenured. HHMI funded.

So, in order to move into a glorious new world of public post-publication review of scientific works, you have to make everyone feel unassailable. As if their opinion does not have to be filtered, modulated or squelched because of potential career blow-back.

__

*Sure, there are those dumbasses who know they are at risk of revenge behavior but can’t stfu with their opinions. I don’t recommend this as an approach, based on long personal experience.

As my longer term Readers are well aware, my laboratory does not play in the Glam arena. We publish in society-type journals and not usually the fancier ones, either. This is a category thing, in addition to my stubbornness. I have occasionally pointed out how my papers that were rejected summarily by the fancier society journals tend to go on to get cited better than their median and often their mean (i.e., their JIF) in the critical window where it counts. This, I will note, is at journals with only slightly better JIF than the vast herd of workmanlike journals in my fields of interest, i.e. with JIF from ~2-4.

There are a lot of journals packed into this space. For the real JIF-jockeys and certainly the Glam hounds, the difference between a JIF 2 and JIF 4 journal is imperceptible. Some are not even impressed in the JIF 5-6 zone where the herd starts to thin out a little bit.

For those of us that publish regularly in the herd, I suppose there might be some slight idea that journals in the JIF 4-5 range are better than journals in the JIF 2-3 range. Very slight.

And if you look at who is on editorial boards, who is EIC, who is AE and who is publishing at least semi-regularly in these journals you would be hard pressed to discern any real difference.

Yet, as I’ve also often related, people associated with running these journals all seem to care. They always talk to their Editorial Boards in a pleading way to “send some of your work here”. In some cases for the slightly fancier society journals with airs, they want you to “send your best work here”….naturally they are talking here to the demiGlam and Glam hounds. Sometimes at the annual Editorial Board meeting the EIC will get more explicit about the JIF, sometimes not, but we all know what they mean.

And to put a finer point on it, the EIC often mentions specific journals that they feel they are in competition with.

Here’s what puzzles me. Setting aside the fact that a few very highly cited papers would jazz up the JIF for these lowly journals if the EIC or AEs or a few choice EB members were to actually take one for the team (and they never do, that is), the ONLY things I can see that these journals can compete on are 1) rapid and easy acceptance without a lot of demands for more data (really? at JIF 2? no.) and 2) speed of publication after acceptance.

My experience over the years is that journals of interchangeable JIF levels vary widely in the speed of publication after acceptance. Some have online ahead-of-print queues that stretch for months. In some cases, over a year. A YEAR to wait for a JIF 3 paper to come out “in print”? Ridiculous! In other cases it can be startlingly fast. As in assigned to a “print” issue within two or three months of the acceptance. That seems…..better.

So I often wonder how this system is not more dynamic and free-market-y. I would think that as the ahead-of-print list stretches out to 4 months and beyond, people would stop submitting papers there. The journal would then have to shrink their list as the input slows down. Conversely, as a journal starts to head towards only 1/4 of an issue in the ahead-of-print list, authors would submit there preferentially, trying to get in on the speed.

Round and round it would go but the ecosphere should be more or less in balance, long term, right?
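(If you want to see the intuition play out, here is a toy back-of-the-envelope simulation. Every number in it is invented and it is not a claim about any real journal; it just sketches the feedback loop I am describing.)

```python
# Toy sketch of the free-market intuition above. All numbers are invented.
import random

random.seed(1)
backlog = {"Journal A": 40, "Journal B": 5}   # papers waiting to be assigned to a "print" issue
capacity_per_month = 12                        # papers each journal can place per monthly issue
new_papers_per_month = 24                      # total submissions across the two journals

for month in range(1, 13):
    for _ in range(new_papers_per_month):
        # authors mostly chase the shorter ahead-of-print queue, with some loyalty/inertia
        shorter = min(backlog, key=backlog.get)
        choice = shorter if random.random() < 0.7 else random.choice(list(backlog))
        backlog[choice] += 1
    for journal in backlog:
        backlog[journal] = max(0, backlog[journal] - capacity_per_month)
    print(f"month {month:2d}: " + ", ".join(f"{j}={b:3d}" for j, b in backlog.items()))

# The long queue at Journal A shrinks as authors defect to Journal B, the two
# backlogs overshoot past each other, and they settle into oscillating around a
# rough balance -- "round and round it would go".
```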

It is time. Well past time, in fact.

Time for the Acknowledgements sections of academic papers to report on a source of funding that is all too often forgotten.

In fact I cannot remember once seeing it mentioned in a paper, or in a manuscript I have received to review.

It’s not weird. Most academic journals I am familiar with do demand that authors report the source of funding. Sometimes there is an extra declaration that we have reported all sources. It’s traditional. Grants for certain sure. Gifts in kind from companies are supposed to be included as well (although I don’t know if people include special discounts on key equipment or reagents, tut, tut).

In recent times we’ve seen the NIH get all astir precisely because some individuals were not reporting to them funding that did appear in manuscripts and publications.

The statements about funding often come with some sort of comment that the funding agency or entity had no input on the content of the study or the decision to publish, or not publish, the data.

The uses of these declarations are several. Readers want to know where there are potential sources of bias, even if the authors have just asserted no such thing exists. Funding bodies rightfully want credit for what they have paid hard cash to create.

Grant peer reviewers want to know how “productive” a given award has been, for better or worse and whether they are being asked to review that information or not.

It’s common stuff.

We put in both the grants that paid for the research costs and any individual fellowships or traineeships that supported any postdocs or graduate students. We assume, of course, that any technicians have been paid a salary and are not donating their time. We assume the professor types likewise had their salary covered during the time they were working on the paper. There can be small variances but these assumptions are, for the most part, valid.

What we cannot assume is the compensation, if any, provided to any undergraduate or secondary school authors. That is because this is a much more varied reality, in my experience.

Undergraduates could be on traineeships or fellowships, just like graduate students and postdocs. Summer research programs are often compensated with a stipend and housing. There are other fellowships active during the academic year. Some students are on work-study and are paid in salary and school-related financial aid…in a good lab this can be something more advanced than mere dishwasher or cage changer.

Some students receive course credit, as their lab work is considered a part of the education that they are paying the University to receive.

Sometimes this course credit is an optional choice- something that someone can choose to do but is not absolutely required. Other times this lab work is a requirement of a Major course of study and is therefore something other than optional.

And sometimes…..

…sometimes that lab work is compensated with only the “work experience” itself. Perhaps with a letter or a verbal recommendation from a lab head.

I believe journals should extend their requirement to Acknowledge all sources of funding to the participation of any trainees who are not being compensated from a traditionally cited source, such as a traineeship. There should be lines such as:

Author JOB participated in this research as part of an undergraduate course in fulfillment of obligations for a Major in Psychology.

Author KRN volunteered in the lab for ~ 10 h a week during the 2020-2021 academic year to gain research experience.

Author TAD volunteered in the lab as part of a high school science fair project supported by his dad’s colleague.

Etc.

I’m not going to go into a long song and dance as to why…I think when you consider what we do traditionally include, the onus is quickly upon us to explain why we do NOT already do this.

Can anyone think of an objection to stating the nature of the participation of students prior to the graduate school level?

In an earlier post I touched on themes that are being kicked around the Science Twitters about how perhaps we should be easing up on the criteria for manuscript publication. It is probably most focused in the discussion of demanding additional experiments be conducted, something that is not possible for those who have shut down their laboratory operations for the Corona Virus Crisis.

I, of course, find all of this fascinating because I think in regular times, we need to be throttling back on such demands.

The reasons for such demands vary, of course. You can dress it up all you want with fine talk of “need to show the mechanism” and “need to present a complete story” and, most nebulously, “enhance the impact”. This is all nonsense. From the perspective of the peers who are doing the reviewing there are really only two perspectives.

  1. Competition
  2. The unregulated desire we all have to see more, more, more data if we find the topic of interest.

From the side of the journal itself, there is only one perspective and that is competitive advantage in the marketplace. The degree to which the editorial staff falls strictly on the side of the journal, strictly on the side of the peer scientists, or at some uncomfortable balance in between varies.

But as I’ve said before, I have had occasion to see academic editors in action and they all, at some point, get pressure to improve their impact factor. Often this is from the publisher. Sometimes, it is from the associated academic society which is grumpy about “their” journal slowly losing the cachet it used to have (real or imagined).

So, do the standards behind the nitty-gritty demands for more data, the ones now relevant to the Time of Corona slowdowns and shutdowns, actually affect Impact? Is there a reason that a given journal should try to just hold on to business as usual? Or is there an argument that topicality is key, that papers get cited for reasons not having to do with the extra conceits about “complete story” or “shows mechanism”, and that it would be better just to accept the papers if they seem to be of current interest in the field?

I’ve written at least one post in the past with the intent to:

encourage you to take a similar quantitative look at your own work if you should happen to be feeling down in the dumps after another insult directed at you by the system. This is not for external bragging, nobody gives a crap about the behind-the-curtain reality of JIF, h-index and the like. You aren’t going to convince anyone that your work is better just because it outpoints the JIF of a journal it didn’t get published in. …It’s not about that…This is about your internal dialogue and your Imposter Syndrome. If this helps, use it.

There is one thing I didn’t really explore in the whingy section of that post, where I was arguing that the citations of several of my papers published elsewhere showed how stupid it was for the editors of the original venue to reject them. And it is relevant to the Time of Corona discussions.

I think a lot of my papers garner citations based on timing and topicality more than much else. For various reasons I tend to work in thinly populated sub-sub-areas where you would expect the relatively specific citations to arise. Another way to say this is that my papers are “not of general interest”, which is a subtext, or explicit reason, for many a rejection in the past. So the question is always: Will it take off?

That is, this thing that I’ve decided is of interest to me may be of interest to others in the near or distant future. If it’s in the distant future, you get to say you were ahead of the game. (This may not be all that comforting if disinterest in the now has prevented you from getting or sustaining your career. Remember that guy who was Nobel adjacent but had been driving a shuttle bus for years?) If it’s in the near future, you get to claim leadership or argue that the work you published showed others that they should get on this too. I still believe that the sort of short timeline that gets you within the JIF calculation window may be more a factor of happening to be slightly ahead of the others, rather than your papers stimulating them de novo, but you get to claim it anyway.

For any of these things does it matter that you showed mechanism or provided a complete story? Usually not. Usually it is the timing. You happened to publish first and the other authors coming along several months in your wake are forced to cite you. In the more distant, medium term, maybe then you start seeing citations of your paper from work that was truly motivated by it and depends on it. I’d say a 2+ year runway on that.

These citations, unfortunately, will come in just past the JIF window and don’t contribute to the journal’s desire to raise its impact.

I have a particular journal which I love to throw shade at because they reject my papers at a high rate and then those papers very frequently go on to beat their JIF. I.e., if they had accepted my work it would have been a net boost to their JIF….assuming the lower performing manuscripts that they did accept were rejected in favor of mine. But of course, the reason that their JIF continues to languish behind where the society and the publisher thinks it “should” be is that they are not good at predicting what will improve their JIF and what will not.

In short, their prediction of impact sucks.

Today’s musings were brought around by something slightly different, which is that I happened to be reviewing a few papers that this journal did publish, in a topic domain reasonably close to mine. They are not particularly more “complete story” but, and I will fully admit this, they do seem a little more “shows mechanism” sciency, in a limited way in which my work could be; I just find that particular approach to be pedantic and ultimately of lesser importance than broader strokes.

These papers are not outpointing mine. They are not being cited at rates that are significantly inflating the JIF of this journal. They are doing okay, I rush to admit. They are about the middle of the distribution for the journal and pacing some of my more middle-ground offerings in the whinge category. Nothing appears to be matching my handful of better ones though.

Why?

Well, one can speculate that we were on the earlier side of things. And the initial basic description (gasp) of certain facts was a higher demand item than would be a more quantitative (or otherwise sciencey-er) offering published much, much later.

One can also speculate that for imprecise reasons our work was of broader interest in the sense that we covered a few distinct sub-sub-sub field approaches (models, techniques, that sort of thing) instead of one, thereby broadening the reach of the single paper.

I think this is relevant to the Time of Corona and the slackening of demands for more data upon initial peer review. I just don’t think, on balance, that it is a good idea for journals to hold the line. Far better to get topical stuff out there sooner rather than later. To try to ride the coming wave instead of playing catchup with “higher quality” work. Because for the level of journal I am talking about, they do not see the truly breathtakingly novel stuff. They just don’t. They see workmanlike good science. And if they don’t accept the paper, another journal will quite quickly.

And then the fish that got away will be racking up JIF points for that other journal.

This also applies to authors, btw. I mean sure, we are often subject to evaluation based on the journal identity and JIF rather than the actual citations to our papers. Why do you think I waste my time submitting work to this one? But there is another game afoot as well and that game does depend on individual paper citations. Which are benefited by getting that paper published and in front of people as quickly as possible. It’s not an easy calculation. But I think that in the Time of Corona you should probably shift your needle slightly in the “publish it now” direction.

One of the thorniest issues that we will face in the now, and in the coming months, is progress. Scientific progress, career progress, etc. I touched on this a few days ago. It has many dimensions. I may blog a lot about this, fair warning.

Several days (weeks?) ago, we had a few rounds on Twitter related to altering our peer review standards for manuscript evaluation and acceptance. It’s a pretty simple question for the day. Is the Time of Corona such that we need to alter this aspect of our professional scientific behavior? Why? To what end? What are the advantages and for whom? Are there downsides to doing so?

As a review, unneeded for most of my audience, scientific papers are the primary output, deliverable good, work product, etc. of the academic scientist. Most pointedly, the academic scientist funded by the taxpayer. Published papers. To get a paper published in an academic journal, the scientists who did the work and wrote the paper submit it for consideration to a journal. Whereupon an editor at the journal decides either to reject it outright (colloquially a “desk reject”) or to send it to scientific peers (other academics who are likewise trying to get their papers published) for review. Typically 3 peers, although my most usual journals accept 2 as a minimum these days, and editors can use more if necessary. The peers examine the paper and make recommendations to the editor as to whether it should be accepted as is (rarely happens), rejected outright (fairly common) or reconsidered after the authors make some changes to the manuscript. This latter is a very usual outcome and I don’t think I’ve ever had a paper ultimately published that got there without making a lot of changes in response to what peers had to say about it.

Peer comments can range from identifying typographical errors to demanding that the authors conduct more experiments, occasionally running to the tune of hundreds of thousands of dollars in expense (counting staff time) and months to years of person-effort. These are all couched as “this is necessary before the authors should be allowed to publish this work”. Of course, assigned reviewers rarely agree in every particular and ultimately the editor has to make a call as to what is reasonable or unreasonable with respect to apparent demands from any particular reviewer.

But this brings us to the Time of Corona. We are, most of us, mostly or totally shut down. Our institutions do not want us, or our staff members, in the labs doing work as usual. Which means that conducting new research studies for a manuscript that we have submitted for review is something between impossible and very, very, very unlikely.

So. How should we, as professionals in a community, respond to this Time of Corona? Should we just push the pause button on scientific publication, just as we are pushing the pause button on scientific data generation? Ride it out? Refuse to alter our stance on whether more data are “required for publication” and just accept that we’re all going to have to wait for this to be over and for our labs to re-start?

This would be consistent with a stance that, first, our usual demands for more work are actually valid and second, that we should be taking this shutdown seriously, meaning accepting that THINGS ARE DIFFERENT now.

I am seeing, however, some sentiments that we should be altering our standards, specifically because of the lab shutdowns. That this is what is different, but that it is still essential to be able to publish whatever (?) manuscripts we have ready to submit.

This is fascinating to me. After all, I tend to believe that each and every manuscript I submit is ready to be accepted for publication. I don’t tend to do some sort of strategy of holding back data in hand, or nearly completed, so that in response to the inevitable demands for more, we can respond with “Yes, you reviewers were totally right and now we have included new experiments. Thank you for your brilliant suggestion!”. People do this. I may have done it once or twice but I don’t feel good about it. 🙂

I believe that when I am reviewing manuscripts, I try to minimize my demands for new data and more work. My review stance is to try to first understand what the authors are setting out to study, what they have presented data on, and what conclusions or claims they are trying to make. Any of the three can be adjusted if I think the manuscript falls short. They can more narrowly constrain their stated goals, they can add more data and/or they can alter their claims to meet the data they have presented. Any of those are perfectly valid responses in my view. It doesn’t have to be “more data are required no matter what”.

I may be on a rather extreme part of the distribution on this, I don’t know. But I do experience editors and reviewers who seem to ultimately behave in a similar way on both my manuscripts and those manuscripts to which I’ve contributed a review. So I think that my fellow scientists who share my (more or less) core skepticism about the necessity of peer review demands for more, more, more are probably not so exercised about this issue. It is more the folks who are steeped in the understanding that this is the way peer review of manuscripts should work, by default and in the majority of cases, who are starting to whinge.

I’m kinda amused. I would be delighted if the Time of Corona made some of these Stockholm Syndrome victims within science think a little harder about the necessity of their culture of demands for more, more, more data no matter what.

Despite evidence to the contrary on this blog, some people who don’t like to write have occasionally said things in the vein of “oh, but you are such a good writer”. Sometimes this is by way of trying to get me to do some writing for them in the non-professional setting. Sometimes this is a sort of suggestion that somehow it is easier for me to write than it is for them to write, in the professional setting.

I don’t know. I certainly used to be a better writer and my dubious blogging hobby has certainly contributed to making my written product worse. Maybe I’m just getting that Agatha Christie thing early (her word variety constricted towards her final books; people suggest that was evidence of dementia).

But for decades now, I have viewed my primary job as a writing job. When it comes right down to the essentials, an academic scientist is supposed to publish papers. This requires that someone write papers. I view this as the job of the PI, as much as anyone else. I even view it as the primary responsibility of the PI over everyone else, because the PI is where the buck stops. My personnel justification blurb in every one of my grants says so. That I’ll take responsibility for publishing the results. Postdocs are described as assisting me with that task. (Come to think of it, I can’t remember exactly how most people handle this in grants that I’ve reviewed.)

Opinions and practices vary on this. Some would assert that no PI should ever be writing a primary draft of a research paper and only rarely a review. Editing only, in the service of training other academic persons in the laboratory to, well, write. Some would kvetch about the relative ratio of writing effort of the PI versus other people in the laboratory. Certainly, when my spouse would prefer I was doing something other than writing, I get an earful about how in lab X, Y and Z the PI never writes and the awesome postdocs basically just hand over submit ready drafts and why isn’t my lab like that. But I digress.

I also have similar views on grant writing, namely that in order to publish papers one must have data to draw upon and that requires funds. To generate the data, therefore, someone has to write grant proposals. This is, in my view, a necessary job. And once again, the buck stops with the PI. Once again, practices vary in terms of who is doing the writing. Once again, strategies for writing those grants vary. A lot. But what doesn’t vary is that someone has to do a fair bit of writing.

I like writing papers. The process itself isn’t always smooth and it isn’t always super enjoyable. But all things equal, I feel LIKE I AM DOING MY JOB when I am sitting at my keyboard, working to move a manuscript closer to publication. Original drafting, hard core text writing, editing, drawing figures and doing analysis iteratively as you realize your writing has brought you to that necessity…I enjoy this. And I don’t need a lot of interruption (sorry, “social interaction”) when I am doing so.

In the past year or so, my work/life etc has evolved to where I spend 1-2 evenings a week in my office up to about 11 or 12 after dinner just writing. I dodge out for dinner so that my postdocs have no reason to stick around and then I come back in when the coast is clear.

I’m finding life in the time of Corona to simply push those intervals of quiet writing time earlier in the day. I have a houseful of chronologically shifted teens, which is awesome. They often don’t emerge from their rooms until noon…or later. Only my youngest needs much of my input on breakfast and even that is more a vague feeling of lingering responsibility than actual need. Sorry, not trying to rub it in for those of you with younger children. Just acknowledging that this is not a bad time in parenthood for me.

So I get to write. It’s the most productive thing I have to do these days. Push manuscripts closer and closer to being published.

It’s my job. We have datasets. We have things that should and will be papers eventually.

So on a daily and tactical level, things are not too bad for me.

I still don’t understand the calculation of Journal Impact Factor. Or, I didn’t until today. Not completely. I mean, yes, I had the basic idea that it was citations divided by the number of citable articles published in the past two years. However, when I have written blog posts talking about how you should evaluate your own articles in that context (e.g., this one), I didn’t get it quite right. The definition from the source:

the impact factor of a journal is calculated by dividing the number of current year citations to the source items published in that journal during the previous two years

So when we assess how our own article contributes to the journal impact factor of the journal it was published in, we need to look at citations in the second and third calendar years. It will never count the first calendar year of publication, somewhat getting around the question of whether something has been available to be seen and cited for a full calendar year before it “counts” for JIF purposes. So when I wrote:

The fun game is to take a look at the articles that you’ve had rejected at a given journal (particularly when rejection was on impact grounds) but subsequently published elsewhere. You can take your citations in the “JCR” (aka second) year of the two years after it was published and match that up with the citation distribution of the journal that originally rejected your work. In the past, if you met the JIF number, you could be satisfied they blew it and that your article indeed had impact worthy of their journal. Now you can take it a step farther because you can get a better idea of when your article beat the median. Even if your actual citations are below the JIF of the journal that rejected you, your article may have been one that would have boosted their JIF by beating the median.

I don’t think I fully appreciated that you can look at citations in the second and third year and totally ignore the first year of citations. Look at the second and third calendar year of citations, individually, or average them together as a shortcut. Either way, if you want to know if your paper is boosting the JIF of the journal, those are the citations to focus on. Certainly, when I did the below-mentioned analysis, I used to think I had to look at the first year and sort of grumble to myself about how it wasn’t fair, it was published in the second half of the year, etc. And that the second year “really counted”. Well, I was actually closer with my prior excuse-making than I realized. You look at the second and third years.
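(To make the arithmetic concrete, here is a little sketch of the check I am describing. The journal figures and citation counts are made up for illustration, not pulled from any real journal or any real paper of mine.)

```python
# Toy sketch of checking your own paper against the JIF window. All numbers invented.
from statistics import median

def two_year_jif(citations, citable_items):
    """JIF for year Y = citations in year Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical journal that rejected your paper:
journal_jif = two_year_jif(12_000, 4_000)                   # 3.0
journal_item_citations = [0, 1, 1, 2, 2, 3, 4, 5, 8, 19]    # skewed, as real distributions are
journal_median = median(journal_item_citations)             # 2.5

# Your paper, published elsewhere in 2018. Only citations received in 2019 and 2020
# (its second and third calendar years) count toward a JIF, so those are what you compare.
year2_citations, year3_citations = 5, 7
your_rate = (year2_citations + year3_citations) / 2

print(f"their JIF ~{journal_jif}, their median {journal_median}, your paper ~{your_rate}/yr")
print("beats their JIF:", your_rate > journal_jif)
print("beats their median (i.e., would have boosted their JIF):", your_rate > journal_median)
```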

Obviously this also applies to the axe-grinding part of your analysis of your papers. I was speaking with two colleagues recently; the details differed but it basically boiled down to being a little down in the dumps about academic disrespect. As you know, Dear Reader, one of the things that I detest most about the way academic science behaves is the constant assault on our belongingness. There are many forces that try to tell you that you suck and your science is useless and you don’t really deserve to have a long and funded career doing science. The much discussed Imposter Syndrome arises from this and is accelerated by it. I like to fight back against that, and give you tools to understand that the criticisms are nonsense. One of these forces is that of journal Impact Factor and the struggle to get your manuscripts accepted in higher and higher JIF venues.

If you are anything like me you may have a journal or two that is seemingly interested in publishing the kind of work you do, but for some reason you juuuuuust miss the threshold for easy acceptance. Leading to frequent rejection. In my case it is invariably over perceived impact with a side helping of “lacks mechanism”. Now these just-miss kinds of journals have to be within the conceivable space to justify getting analytical about it. I’m not talking about stretching way above your usual paygrade. In our case we get things into this one particular journal occasionally. More importantly, there are other people who get stuff accepted that is not clearly different from ours on the key dimensions on which ours are rejected. So I am pretty confident it is a journal that should seriously consider our submissions (and to their credit ours almost inevitably do go out for review).

This has been going on for quite some time and I have a pretty decent sample of our manuscripts that have been rejected at this journal, published elsewhere essentially unchanged (beyond the minor revisions type of detail) and have had time to accumulate the first three years of citations. This journal is seriously missing the JIF boat on many of our submissions. The best one beat their JIF by a factor of 4-5 at times and has settled into a sustained citation rate of about double theirs. It was published in a journal with a JIF about 2/3rd as high. I have numerous other examples of manuscripts rejected over “impact” grounds that at least met that journal’s JIF and in most cases ran 1.5-3x the JIF in the critical second and third calendar years after publication.

Fascinatingly, a couple of the articles that were accepted by this journal are kind of under-performing considering their conceits, our usual for the type of work etc.

The point of this axe grinding is to encourage you to take a similar quantitative look at your own work if you should happen to be feeling down in the dumps after another insult directed at you by the system. This is not for external bragging, nobody gives a crap about the behind-the-curtain reality of JIF, h-index and the like. You aren’t going to convince anyone that your work is better just because it outpoints the JIF of a journal it didn’t get published in. Editors at these journals are going to continue to wring their hands about their JIF, refuse to face the facts that their conceits about what “belongs” and “is high impact” in their journal are flawed and continue to reject your papers that would help their JIF at the same rate. It’s not about that.

This is about your internal dialogue and your Imposter Syndrome. If this helps, use it.

A twitter observation suggests that some people’s understanding of what goes in the Introduction to a paper is different from mine.

In my view, you are citing things in the Introduction to indicate what motivated you to do the study and to give some insight into why you are using these approaches. Anything that was published after you wrote the manuscript did not motivate you to conduct the study. So there is no reason to put a citation to this new work in the Introduction. Unless, of course, you do new experiments for a revision and can fairly say that they were motivated by the paper that was published after the original submission.

It’s slightly assy for a reviewer to demand that you cite a manuscript that was published after the version they are reviewing was submitted. Slightly. More than slightly if that is the major reason for asking for a revision. But if a reviewer is already suggesting that revisions are in order, it is no big deal IMO to see a suggestion you refer to and discuss a recently published work. Discuss. As in the Discussion. As in there may be fresh off the presses results that are helpful to the interpretation and contextualization of your new data.

These results, however, do not belong in the Introduction. That is reserved for the motivating context for doing the work in the first place.

Infuriating manuscripts

January 17, 2019

I asked what percentage of manuscripts that you receive to review make you angry that the authors dared to submit such trash. The response did surprise me, I must confess.

I feel as though my rate is definitely under 5%.

A recent editorial in Neuropsychopharmacology by Chloe J. Jordan and the Editor in Chief, William A. Carlezon Jr., overviews the participation of scientists in the journal’s workings by gender. I was struck by Figure 5 because it is a call for immediate and simple action by all of you who are corresponding authors, and indeed any authors.
The upper two pie charts show that between 25% and 34% of the potential reviewer suggestions in the first half of 2018 were women. Interestingly, the suggestions for manuscripts from corresponding authors who are themselves women were only slightly more gender balanced than were the suggestions for manuscripts with male corresponding authors.

Do Better.

I have for several years now tried to remember to suggest equal numbers of male and female reviewers as a default and occasionally (gasp) can suggest more women than men. So just do it. Commit yourself to suggest at least as many female reviewers as you do male ones for each and every one of your manuscripts. Even if you have to pick a postdoc in a given lab.

I don’t know what to say about the lower pie charts. It says that women corresponding authors nominate female peers to exclude at twice the rate of male corresponding authors. It could be a positive in the sense that women are more likely to think of other women as peers, or potential reviewers of their papers. They would therefore perhaps suggest more female exclusions compared with a male author that doesn’t bring as many women to mind as relevant peers.

That’s about the most positive spin I can think of for that so I’m going with it.

I was trained to respond to peer review of my submitted manuscripts as straight up as possible. By this I mean I was trained (and have further evolved in training postdocs) to take every comment as legitimate and meaningful while trying to avoid the natural tendency to view it as the work of an illegitimate hater. This does not mean one accepts every demand for a change or alters one’s interpretation in preference for that of a reviewer. It just means you take it seriously.

If the comment seems stupid (the answer is RIGHT THERE), you use this to see where you could restate the point again, reword your sentences or otherwise help out. If the interpretation is counter to yours, see where you can acknowledge the caveat. If the methods are unclear to the reviewer, modify your description to assist.

I may not always reach some sort of rebuttal Zen state of oneness with the reviewers. That I can admit. But this approach guides my response to manuscript review. It is unclear that it guides everyone’s behavior and there are some folks that like to do a lot of rebuttal and relatively less responding. Maybe this works, maybe it doesn’t but I want to address one particular type of response to review that pops up now and again.

It is the provision of an extensive / awesome response to some peer review point that may have been phrased as a question, without incorporating it into the revised manuscript. I’ve even seen this suboptimal approach extend to one or more paragraphs of (cited!) response language.

Hey, great! You answered my question. But here’s the thing. Other people are going to have the same question* when they read your paper. It was not an idle question for my own personal knowledge. I made a peer review comment or asked a peer review question because I thought this information should be in the eventual published paper.

So put that answer in there somewhere!

___
*As I have probably said repeatedly on this blog, it is best to try to treat each of the three reviewers of your paper (or grant) as 33.3% of all possible readers or reviewers. Instead of mentally dismissing them as that weird outlier crackpot**.

**this is a conclusion for which you have minimal direct evidence.

Self plagiarism

June 8, 2018

A journal has recently retracted an article for self-plagiarism:

Just going by the titles this may appear to be the case where review or theory material is published over and over in multiple venues.

I may have complained on the blog once or twice about people in my fields of interest that publish review after thinly updated review year after year.

I’ve seen one or two people use this strategy, in addition to a high rate of primary research articles, to blanket the world with their theoretical orientations.

I’ve seen a small cottage industry do the “more reviews than data articles” strategy for decades in an attempt to budge the needle on a therapeutic modality that shows promise but lacks full financial support from, e.g., NIH.

I still don’t believe “self-plagiarism” is a thing. To me plagiarism is stealing someone else’s ideas or work and passing them off as one’s own. When art critics see themes from prior work being perfected or included or echoed in the masterpiece, do they scream “plagiarism”? No. But if someone else does it, that is viewed as copying. And lesser. I see academic theoretical and even interpretive work in this vein*.

To my mind the publishing industry has a financial interest in this conflation because they are interested in novel contributions that will presumably garner attention and citations. Work that is duplicative may be seen as lesser because it divides up citation to the core ideas across multiple reviews. Given how the scientific publishing industry leeches off content providers, my sympathies are…..limited.

The complaint from within the house of science, I suspect, derives from a position of publishing fairness? That some dude shouldn’t benefit from constantly recycling the same arguments over and over? I’m sort of sympathetic to this.

But I think it is a mistake to give in to the slippery slope of letting the publishing industry establish this concept of “self-plagiarism”. The risks for normal science pubs that repeat methods are too high. The risks for “replication crisis” solutions are too high; after all, a substantial replication study would require duplicative Introductory and interpretive comment, would it not?

__

*although “copying” is perhaps unfair and inaccurate when it comes to the incremental building of scientific knowledge as a collaborative endeavor.

Citing Preprints

May 23, 2018

In my career I have cited many non-peer-reviewed sources within my academic papers. Off the top of my head this has included:

  1. Government reports
  2. NGO reports
  3. Longitudinal studies
  4. Newspaper items
  5. Magazine articles
  6. Television programs
  7. Personal communications

I am aware of at least one journal that suggests that “personal communications” should be formatted in the reference list just like any other reference, instead of the usual parenthetical comment.

It is much, much less common now but it was not that long ago that I would run into a citation of a meeting abstract with some frequency.

The entire point of citation in a scientific paper is to guide the reader to an item from which they can draw their own conclusions and satisfy their own curiosity. One expects, without having to spell it out each and every time, that a citation of a show on ABC has a certain quality to it that is readily interpreted by the reader. Interpreted as different from a primary research report or a news item in the Washington Post.

Many fellow scientists also make a big deal out of their ability to suss out the quality of primary research reports merely by the place in which it was published. Maybe even by the lab that published it.

And yet.

Despite all of this, I have seen more than one reviewer objection to citing a preprint item that has been published in bioRxiv.

As if it is somehow misleading the reader.

How can all these above mentioned things be true, be such an expectation of reader engagement that we barely even mention it but whooooOOOAAAA!

All of a sudden the citation of a preprint is somehow unbelievably confusing to the reader and shouldn’t be allowed.

I really love* the illogical minds of scientists at times.

Time to N-up!

May 2, 2018

Chatter on the Twitts today brought my attention to a paper by Weber and colleagues that had a rather startlingly honest admission.

Weber F, Hoang Do JP, Chung S, Beier KT, Bikov M, Saffari Doost M, Dan Y. Regulation of REM and Non-REM Sleep by Periaqueductal GABAergic Neurons. Nat Commun. 2018 Jan 24;9(1):354. doi: 10.1038/s41467-017-02765-w.

If you page all the way down to the end of the Methods of this paper, you will find a statement on sample size determination. I took a brief stab at trying to find the author guidelines for Nature Communications because a standalone statement of how sample size was arrived upon is somewhat unusual to me. Not that I object, I just don’t find this to be common in the journal articles that I read. I was unable to locate it quickly so..moving along to the main point of the day. The statement reads partially:

Sample sizes

For optogenetic activation experiments, cell-type-specific ablation experiments, and in vivo recordings (optrode recordings and calcium imaging), we continuously increased the number of animals until statistical significance was reached to support our conclusions.

Wow. WOW!

This flies in the face of everything I have ever understood about proper research design. In the ResearchDesign 101 approach, you determine* your ideal sample size in advance. You collect your data in essentially one go and then you conduct your analysis. You then draw your conclusions about whether the collected data support, or fail to support, rejection of a null hypothesis. This can then allow you to infer things about the hypothesis that is under investigation.
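(For concreteness, the “determine in advance” step usually looks something like an a priori power analysis. Here is a minimal sketch; the effect size, alpha and power below are arbitrary illustrative choices, not a recommendation for any particular study.)

```python
# Minimal sketch of a priori sample size determination for a two-group comparison.
# The effect size, alpha, and power are invented for illustration only.
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.8,   # expected standardized effect (Cohen's d)
    alpha=0.05,        # significance threshold
    power=0.80,        # desired probability of detecting the effect if it is real
)
print(f"Plan on ~{math.ceil(n_per_group)} subjects per group; collect them all, then test once.")
```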

In the real world, we modify this a bit. And what I am musing today is why some of the ways that we stray from ResearchDesign orthodoxy are okay and some are not.

We talk colloquially about finding support for (or against) the hypothesis under investigation. We then proceed to discuss the results in terms of whether they tend to support a given interpretation of the state of the world or a different interpretation. We draw our conclusions from the available evidence- from our study and from related prior work. We are not, I would argue, supposed to be setting out to find the data that “support our conclusions” as mentioned above. It’s a small thing and may simply reflect poor expression of the idea. Or it could be an accurate reflection that these authors really set out to do experiments until the right support for a priori conclusions has been obtained. This, you will recognize, is my central problem with people who say that they “storyboard” their papers. It sounds like a recipe for seeking support, rather than drawing conclusions. This way lies data fakery and fraud.

We also, importantly, make the best of partially successful experiments. We may conclude that there was such a technical flaw in the conduct of the experiment that it is not a good test of the null hypothesis. And essentially treat it in the Discussion section as inconclusive rather than a good test of the null hypothesis.

One of those technical flaws may be the failure to collect the ideal sample size, again as determined in advance*. So what do we do?

So one approach is simply to repeat the experiment correctly. To scrap all the prior data, put fixes in place to address the reasons for the technical failure, and run the experiment again. Even if the technical failure hit only a part of the experiment. If it affected only some of the “in vivo recordings”, for example. Orthodox design mavens may say it is only kosher to re-run the whole shebang.

In the real world, we often have scenarios where we attempt to replace the flawed data and combine it with the good data to achieve our target sample size. This appears to be more or less the space in which this paper is operating.

“N-up”. Adding more replicates (cells, subjects, what have you) until you reach the desired target. Now, I would argue that re-running the experiment with the goal of reaching the target N that you determined in advance* is not that bad. It’s the target. It’s the goal of the experiment. Who cares if you messed up half of them every time you tried to run the experiment? Where “messed up” is some sort of defined technical failure rather than an outcome you don’t like, I rush to emphasize!

On the other hand, if you are spamming out low-replicate “experiments” until one of the scenarios “looks promising”, i.e. looks to support your desired conclusions, and selectively “n-up” that particular experiment, well, this seems over the line to me. It is much more likely to result in false positives. Well, I suppose running all of these trial experiments at full power is just as likely to produce false positives; it is just that you are not able to do as many trial experiments at full power. So I would argue the sheer number of potential experiments is greater for the low-replicate, n-up-if-promising approach.

These authors appear to have taken this strategy even one step worse. Because their target is not just an a priori determined sample size to be achieved only when the pilot “looks promising”. In this case they take the additional step of only running replicates up to the point where they reach statistical significance. And this seems like an additional way to get an extra helping of false-positive results to me.

Anyway, you can google up information on false positive rates and p-hacking and all that to convince yourself of the math. I was more interested in trying to probe why I got such a visceral feeling that this was not okay. Even if I personally think it is okay to re-run an experiment and combine replicates (subjects in my case) to reach the a priori sample size if it blows up and you have technical failure on half of the data.
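(Or, if you would rather not google, here is a quick toy simulation of the problem. Nothing here is taken from the Weber et al. procedure; the numbers are made up just to show the direction of the effect under a true null.)

```python
# Toy simulation: under a true null effect, compare the false-positive rate of a
# fixed-N design with an "n-up until significant" design that tests after every
# added subject. All parameters (start_n, max_n, alpha) are arbitrary illustrations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def fixed_n_significant(n=20, alpha=0.05):
    """Collect the full sample up front, test once."""
    a = rng.normal(size=n)
    b = rng.normal(size=n)      # same distribution: any "effect" is a false positive
    return stats.ttest_ind(a, b).pvalue < alpha

def n_up_significant(start_n=5, max_n=20, alpha=0.05):
    """Add one subject per group at a time, testing after each addition,
    and stop as soon as p < alpha (the 'continue until significant' strategy)."""
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))
    while True:
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True
        if len(a) >= max_n:
            return False
        a.append(rng.normal())
        b.append(rng.normal())

n_sim = 5000
print("fixed-N false positive rate:", np.mean([fixed_n_significant() for _ in range(n_sim)]))
print("n-up    false positive rate:", np.mean([n_up_significant() for _ in range(n_sim)]))
# Expect roughly 0.05 for the fixed design and something noticeably larger for
# the sequential-peeking design, even though both use the same nominal alpha.
```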

__
*I believe the proper manner for determining sample size is entirely apart from the error the authors have admitted to here. This isn’t about failing to complete a power analysis or the like.

Just when I think I will not find any more ridiculous things hiding in academia…..

A recent thread on twitter addressed a population of academics (not sure if it was science) who are distressed when the peer review of their manuscripts is insufficiently vigorous/critical.

This is totally outside of my experience. I can’t imagine ever complaining to an Editor of a journal that the review was too soft after getting an accept or invitation to revise.

People are weird though.