Is the J Neuro policy banning Supplemental Materials backfiring?

January 20, 2015

As you will recall, I was very happy when the Journal of Neuroscience decided to ban the inclusion of any Supplemental Materials in articles considered for publication. That move took place back in 2010.

Dr. Becca, however, made the following observation on a recent post:

I’m done submitting to J Neuro. The combination of endless experiment requests due to unlimited space and no supp info,

I find that to be a fascinating comment. It suggests that perhaps the J Neuro policy has been ineffectual, or even has backfired.

To be honest, I can’t recall noticing anything in a J Neuro article I’ve read in the past few years that reminded me of this policy shift one way or the other.

How about you, Dear Reader? Noticed any changes that appear to be related to this banning of Supplemental Materials?

For that matter, has the banning of Supplemental Materials altered your perception of the science that is published in that journal?

44 Responses to “Is the J Neuro policy banning Supplemental Materials backfiring?”

  1. Mark Baxter Says:

    I have seen some batshit crazy reviews at J Neuro (for instance asking to repeat experiments in another species to “increase impact”). I don’t know where that’s coming from or what’s driving it.

  2. Dr Becca Says:

    Seriously, have you tried counting the number of figures in a JN paper lately? My impression is that whatever would have been supplemental is now part of the paper. IMO, this is worse than supplemental because half the time I’m like, “what does this figure have to do with anything?” and it takes longer to figure out what the main findings were.

  3. AcademicLurker Says:

    Seriously, have you tried counting the number of figures in a JN paper lately?

    True story: a candidate for a senior position in my department gave a talk based mostly on his recent Cell paper. On a single slide he had figures A through Z.

    He didn’t get the job.

  4. Brain Says:

    I also won’t submit to JN anymore. After 3 revisions and multiple new experiments, the editor started complaining about the figures. Why wasn’t this raised in earlier decisions?

  5. Kevin. Says:

    Just because the reviewers ask for more experiments doesn’t mean they have to end up in the paper. The data can be mentioned in the text, and indeed J. Neuro demands that for figures containing only two bars. You can also submit the data in the rebuttal and say in the text ‘data not shown.’ If the thing they asked for was stupid and/or the results were negative/uninterpretable, it shouldn’t be in the paper. But you can still show it to them.

  6. DrugMonkey Says:

    “Data not shown” is horrible scholarship. Terrible blight upon the body scientifique.

  7. Comradde PhysioProffe Says:

    Fucke J Neuroscience. I’m never publishing there again.

  8. DrugMonkey Says:

    That, MB, is pretty funny and a full reveal of the inanity of citation counting. Yes, almost by definition if you use an experimental model that a boatload of other people in your field use, you will likely get more citations to your paper. Therefore “higher impact”. But very likely, the *true* scientific importance and impact is higher for the less-common model.

  9. DrugMonkey Says:

    CPP- you sound as credible as the college student swearing off tequila after a night in TJ

  10. Philapodia Says:

    What’s wrong with following the old engineer’s adage of KISS (keep it simple, stupid) with manuscripts? Extra data that doesn’t really help the story (like the extras many reviewers ask for only because someone once did the same to them) just makes papers harder to understand, and people tend to simply ignore all the extraneous data anyway. When I read a paper I just want the authors to get to the damn point. I would rather read a smaller, easier-to-read paper in a lower-tier journal than one of these journals that make the authors stuff crap into the paper ’til it looks like a haggis that’s about to burst.

  11. Noncoding Arenay Says:

    ‘Data not shown’ is a major peeve for me. Bad idea.

  12. poke Says:

    Yeah, totally agree that ‘data not shown’ is horrible for the literature.

    I haven’t noticed much change at J Neuro though, either in submissions of mine or papers I’ve read/reviewed there. Sounds like others have had some crazy experiences though. Troubling…

  13. Rheophile Says:

    If referees are asking for additional data, it would have to be a *profoundly* stupid request for it not to go in somewhere. It’s the old gradeschool rule: if one person asks, three others didn’t bother.

    Was the SM change meant to reduce referee demands for additional experiments? Or was it just an acknowledgment that SM is part of the paper and should be refereed as such? If so, then why not just have an in-paper appendix, that way the main findings are clear?

  14. Kevin Says:

    It was eliminated because supplemental wasn’t being peer reviewed in the same way that the regular material was.

    I like doing away with it because it allows a complete story without being constrained by number of figures, or text because of print limitations, and a separate unreadable manuscript full of God-knows-what. Fine, data not shown is bad, but plenty of journals allow actual data in the text (or, remember tables?) even if you bastards lard it up with unreadable, parenthetical lines reporting degrees of freedom, etc.

    If it’s a problem, it’s the reviewers’ fault. “I didn’t submit my paper to get reviewed, I submitted it to be published.”

  15. Dave Says:

    I assumed J Neuro put a limit on the number of main figures after banning supplements?!? If they didn’t do that, then it’s all swings and roundabouts.

  16. Grumble Says:

    Dr. B., the “endless experiment requests” are not “due to unlimited space.” They are due to reviewers who are holding your work to a higher standard for what constitutes a J Neurosci paper than that to which you think it should be held.

    That has nothing to do with whether supplemental material is allowed.

  17. Grumble Says:

    Dave – no limits on figures, tables, the length of Results text, or the number of references. Intro and Discussion sections are pretty strictly limited, though.

    I like it. Serious papers get published in J Neurosci – the kind that have an audience that is not happy just reading the surface bits and respecting the eternal peace of data interred in a supplement.

  18. DrugMonkey Says:

    I put the link in for a reason folks. The journal did explain their rationale at length you know.

  19. DrugMonkey Says:

    I detest Tables.

  20. Dave Says:

    I just don’t see how in the face of unlimited results and figures in the main text, reviewer demands were expected to dwindle.

    And yes, I read the link and this is clearly one of the goals of the policy.

    And I also dislike tables.

  21. Philapodia Says:

    What’s wrong with tables?

  22. DrugMonkey Says:

    Maybe because the journal combined it with a substantive critique of the idea that additional experiments should always be demanded?

  23. rxnm Says:

    Kind of weird to blame JN for not doing enough to rein in our bad behaviour.

    Reviewers ask for more experiments out of habit or because they think that’s what reviewing is…that’s why most of them are lazy, poorly thought out, or at best irrelevant.

    Reviews seem to fall into four categories:
    25% 1. Mostly positive reviews that are accepts with edits and maybe some explanations.
    25% 2. Critical, constructive, and insightful reviews.
    40% 3. Harping demands for shitty experiments x, y, and z; douchey musings about “impact” and “clearing the bar” and their own “high standards” in their reviews.
    10% 4. Batshit.

    I think type 3 is about 40% of reviewers…it’s a cultural problem of and by scientists.

    “They are due to reviewers who are holding your work to a higher standard for what constitutes a J Neurosci paper than that to which you think it should be held.”

    That’s hilarious.

  24. Occasional Contributor Says:

    When asked for new experiments, I usually say “no”. This has never once resulted in rejection.

  25. Dave Says:

    What’s wrong with tables?

    Boring. Need moar piczzzzz!!!

  26. Mike_F Says:

    Don’t like J. Neurosci? Just submit to eNeuro instead! Reviewers and editor must concur on a single final list of revisions to be communicated to the author, and this policy should weed out unreasonable or unjustified critiques.

  27. Ben Saunders Says:

    I’m curious, for people who have been submitting to JN for a while (~10 years), has there actually been a noticeable change in reviewer/editor requests, such as asking for more new data, that matches up in time with the change in supplemental material policy? I’ve only been part of submission there since 2010, and in terms of reviewers requesting new data, it’s been mixed.

  28. DrugMonkey Says:

    Interesting Mike_F. And in practice do you think it keeps the crazy requests down to a dull roar?

  29. Mike_F Says:

    It’s still early days, but it should. The express purpose of this policy is to make sure that the review process is reasonable and does not become a “trench warfare” scenario of escalating demands.

  30. Dr Becca Says:

    They are due to reviewers who are holding your work to a higher standard for what constitutes a J Neurosci paper than that to which you think it should be held.

    @Grumble: there are “standards,” and then there’s “but what about ____?” I’d say the latter by far accounts for most extra experiment requests. Is there a “standard” for how many brain regions need to be processed in order for the results in the target region to be believable? How many post-manipulation euthanasia time points? How many drug doses and drug types? I’d love to know what J Neuro’s “standards” are, and whether they at all reflect even a basic understanding of how much time and money these “standards” cost, and whether those costs are adequately compensated for by a paper in a journal whose impact factor has done nothing but decline in the last 7 years.

    Oh, and my last ms that was rejected by JN ended up in a journal with an IF over 9, with no new experiments.

  31. mbruchas Says:

    Generally the bar is increasing at all journals, and maybe more so at JN. The problems are many…one being that the reviewers rank your paper first, before writing their comments. Even with lots of comments, a high ranking gives you a chance to resubmit (and pay another fee); if you have a low rank, and yet easily addressable comments, you can get rejected. You as an author don’t get to see this ranking, however, making it hard to know where you stand in the revision process.

    The other issue here is that competition from other respectable neuroscience/neurobiology journals is heating up. There are journals now that aren’t quite as crazy about the expectations, that have impact factors several points higher than JN, and that frankly I read more often now. With that said, JN is the gold standard for a society publication in our field, so many of us will keep trying.

    It is also the job of the Senior and Reviewing editors to regulate the review, though, and not just assume that a reviewer’s demands always have to be upheld. They should be weighing in more to dampen insane requests. My fear is that they are too overworked, and with the number of papers simply don’t have time to make these careful decisions. The eNeuro solution proposed here should be more commonplace. I hear it is working well over at eLife.

  32. DrugMonkey Says:

    I commonly get demands for extra experiments in submissions to fairly pedestrian journals that are out of step with 1) the scope of work viewed as a typical R01 Aim and 2) the number of papers expected as acceptable productivity. Obviously we’re talking the same overall population of peers doing the grant and paper reviewing.

  33. House of Mind Says:

    I’ve only published in JN once and the paper was accepted with minor revisions (no new experiments) so I guess we lucked out. I get a feeling that many labs send their rejected Neuron or Nature Neuro papers to JN, which may contribute to the perception of there being a “higher standard” now, especially if these people are also reviewing JN papers…

  34. Grumble Says:

    @Dr B: I’m not denying that reviewer requests can get out of control. But sometimes a paper simply does not demonstrate a hypothesis conclusively enough for the kind of journal it’s been submitted to. In your case that may not apply, but still, the reviewer requests, whether reasonable or not, probably had nothing to do with whether the journal allows supplemental data, and everything to do with reviewers’ perceptions of what makes a J Neuro paper.

    @Ben Saunders: I’ve been publishing in J Neuro for 20+ years (i.e., since before there even WAS such a thing as online supplemental material!), and reviewing for them for 10+, and I haven’t noticed any obvious difference in “more data, please” requests before vs. after the change in supplemental materials policy.

    Then again, I don’t recall ever actually being asked to do another experiment when submitting to J Neuro. Maybe that’s because I have some idea about what sort of dataset is likely to make the cut. Or I’ve just been lucky, or both.

  35. Philapodia Says:

    “I’ve been publishing in J Neuro for 20+ years (i.e., since before there even WAS such a thing as online supplemental material!), and reviewing for them for 10+, and I haven’t noticed any obvious difference in “more data, please” requests before vs. after the change in supplemental materials policy.”

    Perhaps this has to do with perceived BSD-ness. It would be interesting to see an analysis of the amount of extra experiments asked for per publication as a function of senior author h-index.

  36. rxnm Says:

    “Perhaps this has to do with perceived BSD-ness. It would be interesting to see an analysis of the amount of extra experiments asked for per publication as a function of senior author h-index.”

    Trainees who have worked in BSD and non-BSD labs / institutions have first hand knowledge of how different the review process is.

    Hell, I’ve seen what are essentially pre-acceptance letters from editors at Cell Press to BSDs for papers they haven’t submitted yet.

  37. Jo Says:

    I suspect that a lot of this variability is down to the editor in charge of your particular field. I haven’t noticed any uptick in requests for additional experiments, but then I’ve also had editors override a reviewer with a comment like “we decided that additional experiments were beyond the scope of the paper”.

  38. Ben Saunders Says:

    “House of Mind: I get a feeling that many labs send their rejected Neuron or Nature Neuro papers to JN, which may contribute to the perception of there being a “higher standard” now, especially if these people are also reviewing JN papers…”

    My perception has been that the JN no-supp policy has made the NN, and especially Neuron, rejects more obvious. No idea if that has changed perception of what “should” be a JN paper.

  39. Skeptic Says:

    First time submitting to JN. We submitted a revision with additional experiments. The editor sent the paper to a new reviewer, who asked for additional experiments. In the editor’s words, he “has to reject the paper because this was the revision.”
    Never submitting to JN again.

    SM is very useful for videos and raw Excel data, etc. I do look at those data myself in other journals.

  40. House of Mind Says:

    Ben Saunders: “My perception has been the JN no supp policy has made the NN, and especially Neuron, rejects more obvious. No idea if that has changed perception of what “should” be a JN paper.”

    You may be right. I haven’t been in science long enough to know what JN’s reputation was before 2009, but I do feel like it’s getting harder to publish there, even if the IF is not as high as other society journals like Neuropsych or Biol Psych. However, I know people who prefer getting a paper in JN rather than Biol Psych/Mol Psych etc. because it is perceived as more prestigious (even if the IF is not as high)…. Does this “hierarchy” make sense? Where do you usually send your papers?

  41. drugmonkey Says:

    JNeuro continues to punch higher in subjective rank because of hangover remembrance of when it occupied roughly the space lately filled by Neuron and NN.

  42. gingerest Says:

    How do you present large volumes of quantitative data without tables? You people are weird.

  43. Grumble Says:

    @HOM: “Does this “hierarchy” make sense? Where do you usually send your papers?”

    I wouldn’t send a paper to Biol Psych unless you threatened me with bodily harm. That this rat’s ass of a journal has such a ridiculously high IF is a disgrace to science (or maybe just to the idea that IFs mean anything). Of the papers in my subsubsubfield that I trust and that I think have been most transformative, I can’t think of a single one that’s been in Biol Psych. I’d love to see someone explain why this silly excuse for a journal has an IF of 10 — probably it publishes a lot of reviews, or something.

  44. CrazyMF Says:

    Well, JNeuro does actually allow supplementary material to be submitted during revision (but not during initial submission). It’s just not hosted at the JNeuro website; it has to be hosted elsewhere (e.g., a database or the author’s website). I’ve recently had two papers accepted there, and for both we included supplemental material in our revised submission that was important for acceptance. They were a little funny about how to provide the supplemental materials to the reviewers during the revision stage; they even gave us a hard time when we wanted to include a link to a large data set provisionally submitted to a public repository. It was annoying, and it does seem a bit archaic as we’re heading into the days of big data, whether genomic, proteomic, or even mathematical modelling. But the studies weren’t quite Neuron/NN and not really appropriate for BP/NPP, so there weren’t a whole lot of other places to go in that realm. Maybe I’m wrong, but I think a couple of JN papers will look good on the CV come TT-faculty search time…
