An Editor Lays Down the Law

July 26, 2008

Incoming Editor in Chief of the Journal of Neurophysiology David J. Linden has written a fascinating editorial. As new Editors often do, he lays out his vision of science publishing. If I am not sorely mistaken, he is issuing a little smackdown to the GlamourMagz!

I’ve always greatly admired the scientific ethos of Journal of Neurophysiology. Reading the Journal reminds me of what I like best about science. I like that it publishes full-length reports, which are still being cited 20 or 30 years on. I like that each paper can stand on its own, without 10 supplemental online figures… Most importantly, I like that Journal of Neurophysiology has been guided solely by publishing excellent and interesting science, regardless of perceived “sexiness” or “impact factor.”

Almost makes you want to jump into some neurophysiology yourself, doesn’t it? And he’s not done…


Editor Linden has a few thoughts for authors:

Finally, let me preach just a little bit. In these increasingly competitive times, when the focus of scientists increasingly turns to research grants, publications, and academic promotion, let’s remember that we all have come to neurophysiology out of a common desire to illuminate some interesting problems in the natural world. Our ultimate goal is not the published paper or the grant or the promotion, but rather to develop scientific understanding, a process that is inherently interactive and self-correcting.

Does this rock or what? We can just replace “neurophysiology” with “the entirety of biological and medical science,” can we not? In a single sentence he delivers both the most important critique of one of our biggest cultural failings and its remedy. Awesome.
Next, Linden has a few words on the purpose of peer review that I think you will find as attractive as I do. I will excerpt it a bit so that you are inclined to read the whole thing.

… strive to be rigorous, fair, and open-minded…. while it is always appropriate to ask for additional experiments if you think that the author’s main point cannot stand without them, carefully consider your requests for additional experiments that broaden the scope of the investigation. …you do not do the authors a favor by proposing an additional two years’ worth of work. … negative reviews should not be a license for mean-spirited or disrespectful prose. …If there’s something you really like, we won’t think that you’re a wimp if you praise it with gusto. It’s rough out there and a little kindness goes a long way. Remember, we’re all striving to reveal the same truth about neural function–we’re all on the same team.

It’s fantastic. He may be a dreamer but he’s dreaming the right dream. And he has a bully pulpit to enforce it for his journal’s subfield.
And while I have the window open, I might as well do a Blogrolling: The Accidental Blog authored by:

David J. Linden, Ph.D., is a Professor in the Department of Neuroscience at the Johns Hopkins University School of Medicine. His laboratory has worked for many years on the cellular substrates of memory storage in the brain. You can see a list of some of his lab’s recent scientific papers here. He serves as the Chief Editor of the Journal of Neurophysiology.

And he blogs. Kewl.
[big ol’ hattip to Nat Blair of The Junction Potential, go read his blog]

51 Responses to “An Editor Lays Down the Law”

  1. BugDoc Says:

    Great editorial! Per your post on the letter in Science a while back, it is nice to see yet more senior scientists and editors speaking out to illuminate some of the recent problems of the peer review process. Historically, peer review has been an excellent mechanism to improve and clarify grant proposals and publications. I hope this editorial and others like it will help to keep the process fair and constructive.


  2. drdrA Says:

    My first thought upon reading this was…. there is a god… then I remembered I’m an atheist.
    Obviously, I’m with you- this is totally awesome… and a breath of fresh air in a scientific culture that I sometimes think has lost its way in the competitive climate- so now we are doing certain experiments to compete with each other…- and not because they are good science that answers fundamental questions in an unambiguous way.


  3. Mike_F Says:

    You must be joking. Do a PubMed search on “Linden DJ” and see where he publishes his own stuff…


  4. PhysioProf Says:

    Our ultimate goal is not the published paper or the grant or the promotion, but rather to develop scientific understanding, a process that is inherently interactive and self-correcting.

    Sounds nice. But it sure is difficult to do electrophysiology in your fucking basement with some crap you bought at Radio Shack, which is where you’ll end up in the absence of “the published paper or the grant or the promotion”.
    These kinds of hand-wringing hortatory editorials don’t do jack diddly shit about the systemic structure of the biomedical research enterprise that absolutely ensures that what is complained about continues.
    You think Linden’s department at Johns Hopkins is gonna hire an entry-level tenure-track faculty member who published only J Neurophys papers as a post-doc? HAHAHAH!


  5. BugDoc Says:

    Silly rabbit, PP. You know he’s not saying that it’s not important to publish papers or get grants. His point (I think) is that peer review shouldn’t be a barrier to solid science, nor should reviewers be an engine for directing other people’s long term experimental efforts. Peer review is supposed to be a quality control mechanism to determine if authors are supporting their submitted conclusions with solid data, and also to help editors assess whether the science is of narrow or general interest. Period. Peer review is not an opportunity to piss on somebody else just because you can, or because your paper or grant got trashed.
    Besides if we as academic scientists can’t keep some lofty goal in sight (i.e., “developing scientific understanding”), who will? Being savvy about your career is one thing, but if the practice of science is going to become all about politics and game-playing, let’s just move to DC and become lobbyists.


  6. juniorprof Says:

    Great, great editorial! I frequently publish work that might find a nice home in J Neurophys (and we do more of this now than ever) but I really never consider it. You bet I will have it on my list now.
    Mike_F, who cares where he publishes his own stuff (mostly Neuron). His job is to be a good steward of a historically high impact journal. He is off to a fantastic start.


  7. juniorprof Says:

    Yeah PP, I read it in the same way BugDoc did. I just don’t see what you’re seeing in his words.


  8. Mike_F Says:

    juniorprof – “…His job is to be a good steward of a historically high impact journal…”
    Sunshine – I don’t know what your definition of a high impact journal is, but FYI J. Neurophys. impact factor over the past five years is 3.6-3.8. As PhysioProf pointed out in his (her?) inimitable style, this is not a journal that Linden’s own department would consider high impact. If your department takes a different view, why then you live in a happier parallel universe than the one PhysioProf and I seem to inhabit.


  9. bill Says:

    These kinds of hand-wringing hortatory editorials don’t do jack diddly shit about the systemic structure of the biomedical research enterprise that absolutely ensures that what is complained about continues.

    So what will work? What do you suggest that would be more effective than “hand-wringing… editorials”?


  10. whimple Says:

    I think he’s got the right approach to building (keeping) a journal solid: publish stuff that people will reference for years to come. The IF is only based on the immediate next two years following publication, but some stuff takes a while to be appreciated.
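    For reference, the textbook two-year calculation is:

```latex
\mathrm{IF}_{2008}
  = \frac{\text{citations received in 2008 by items published in 2006 and 2007}}
         {\text{number of citable items published in 2006 and 2007}}
```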


  11. Lab Lemming Says:

    D. Monk says:
    Does this rock or what? We can just replace “neurophysiology” with “the entirety of biological and medical science” can we not?
    You can cross out “biological and medical”, mate.


  12. Venkat Says:

    I think this editorial seems like a gentle reminder of what we already know, but hardly do. I find it funny how a solid paper with an important result can make it to, say, PNAS, but not to, maybe, Nature, just because it is not considered sexy enough, even if the result’s perceived impact may be the same.


  13. pinus Says:

    I have to toss my number in with PP and Mike_F here. While this is nice for Linden to say, I just don’t see how this is a big deal.


  14. drdrA Says:

    ‘These kinds of hand-wringing hortatory editorials don’t do jack diddly shit about the systemic structure of the biomedical research enterprise that absolutely ensures that what is complained about continues.’
    Ok, I’m sure I’m going to get tons of hate mail now- but I have to say I’m cranky with you people that think that stating an expectation about how things should be makes no difference.
    I suppose we have an honest philosophical difference here- I find that when expectations are laid out plainly from the get go- this issues a challenge to people to do better, to know you EXPECT better. One small editorial in one small journal does not change the direction of the whole scientific infrastructure- I’ll give you that. But change like this doesn’t usually happen in one big leap – it happens because one person pushed the system in the right direction in a small way, and then was recognized and JOINED by a whole lot of other people pushing in the same direction.
    And honestly, you got a better idea??? I’d love to hear it.
    (as for all this talk of comparing J. Neurophys to high impact journals I say WHO CARES. They are different animals, everyone knows that each subfield has its smaller, lower impact factor, but very solid journals- that have the bread and butter publications for many many good scientists- and in fact, many junior people will only keep their heads above water by publishing in these kinds of journals- they may not be the stars of their respective fields- but they will keep their heads above water – which is saying a lot in the current hostile climate).


  15. juniorprof Says:

    But change like this doesn’t usually happen in one big leap – it happens because one person pushed the system in the right direction in a small way, and then was recognized and JOINED by a whole lot of other people pushing in the same direction.
    Sunshine here,
    Right on DrdrA!!! This is further evidenced by DM’s stated goal of making some small difference in what happens at NIH by using this little forum to point out how things could be done better. We’ve seen evidence that it can draw attention and get people thinking (like the April Fools post). Linden is doing the right thing here whether you think it will work or not.
    And for those of you knocking J Neurophys’s IF, I second DrdrA. WHO CARES!! J Neurophys has published a large number of classic papers through the years. Moreover, at a variety of times in recent history it has been considered a high impact journal. IFs change and who’s to say that J Neurophys won’t rise rapidly under new leadership.


  16. neurolover Says:

    What strikes me in this debate is the balance of where one publishes. Can it really be true that every single project one does in one’s own lab turns out to be worthy of publication in a glamormag? My intuition would have been that, even if one always aimed for the fancy stuff, sometimes an experimental outcome would fall short of that result. And, of course, I think there’s real value to careful complete science that isn’t fancy, but just thorough and solid.
    My perception was that in the olden days people published in a balance of different journals, and, furthermore, that they self-selected to some extent where they would send stuff. The route to J Neurophys wouldn’t always be to send it to three other places first; one would see that the project belonged there after finishing it. Now, though (and I think it is because, as PP says, there’s a perception that, as a post-doc, you have to have a glamor publication in order to get a job), the PI doesn’t get to target the manuscript; the post-doc needs to get it into the highest “impact” journal that they can, so publications that end up in one place walk their way through a year+ of other places first.


  17. pinus Says:

    It could be that my opinion of J neurophys is coloured by my previous interactions over the years.
    I have found (and colleagues have also found) that the reviewers and editors for this particular journal always want a tremendous number of experiments. Sure, that is fine, part of the process…blahblahblah.
    However, it is to the degree that you can just hold the manuscript, do the experiments, and then publish in J Neuroscience, a journal with a higher impact factor. You can raise shit-storms about impact factors, but the reality is, publishing in only J neurophys, and similar IF journals, as a post-doc, is not sufficient; you need some higher tier stuff to get a job.
    There are some great articles in J neurophys, but in terms of bang for the buck, it doesn’t add up. But, after reading Linden’s manifesto, I am happy that he is explicitly asking reviewers to consider the experiments they are asking the authors to do. Who knows…if this sticks and the editors and reviewers stop asking for years’ worth of work, maybe I will start sending stuff there.


  18. neurolover Says:

    “you can raise shit-storms about impact factors, but the reality is, publishing in only J Neurophys, and similar IF journals, as a post-doc, is not sufficient; you need some higher tier stuff to get a job.”
    You know, the problem with this is that I think I agree that you need to *do* higher impact work in order to get a job. The problem is that the journal you publish in should not be the only or major method of determining the value of the work. There is work that’s just unsuitable to be published in Nature (the supplementary figures bug is one of my bugaboos, too). There are stories that can be told in four pages with 4 figures, and there are stories that can’t. The stories that can’t shouldn’t be morphed into weird telegraphic versions that then appear in Nature. And, I’m picking on Nature unfairly here, because the article I remember most clearly as crossing the line was the marginal article on intelligence and birth order that appeared in Science.


  19. bayman Says:

    PISS ON IMPACT FACTOR!!! I say do some wicked-ass experiments, put it into a crystal-clear, insightful and original paper, and let the journal editors fight over the privilege of publishing your kick-ass scientific work. Whether it gets published in Cell or the Bavarian Journal of Open Access Horseshit, track your paper on ISI and watch as it amasses bzillions of citations by the second. Then slap that data down on the desk of DepartmentHead at BigLeaguePrestige University of the Ivy Rose and demand a tenure track position.
    People who spend their careers designing experiments, grants and research programs around impact factors and journal editor prestige have got the whole damn thing 180 deg. ass-backwards.


  20. PhysioProf Says:

    Then slap that data down on the desk of DepartmentHead at BigLeaguePrestige University of the Ivy Rose and demand a tenure track position.

    Good luck with that, holmes!


  21. Mike_F Says:

    Ummm, from some of the above I seem to have been misunderstood, so here are two clarifications-
    1) I would find Linden’s editorial more believable if he practiced what he preaches.
    2) I don’t suggest that anybody out there direct their research by impact factors. On the other hand anybody out there who totally ignores impact factors and high ranking journals as venues for publication is going to find him/herself without any research to direct…


  22. David Linden Says:

    Hi all. David Linden here. I’m happy to see the constructive debate that my editorial has provoked. In my view, PhysioProf raises a valid question: Can improving the behavior of authors, referees and editors at one little journal make any kind of a difference in a system that has major structural problems that work against good creative science? My own feeling is that we have to start somewhere.
    My challenge to all of you DM readers is to put forward ideas that could reasonably be implemented at Journal of Neurophysiology (or similar journals) that would be steps in the right direction. However, I would appreciate it if the suggestions weren’t heavily expletive-laden. That fucking shit just gets old really goddamn fucking fast, eh?


  23. pinus Says:

    1st off:
    I don’t define my experiments by impact factors. However, there comes a point where one has to consider where a manuscript will be sent. Different journals like different things; you have to adjust. If you are aiming higher, you have to do different experiments. Perhaps I am the fool, but I let my questions guide what I do…and then when it comes time to publish, I tweak and adjust things.
    2nd off:
    Great to see Professor Linden here and willing to solicit feedback, not many would venture to do this and it is commendable.
    As far as ideas, I will post more later. I really want to think these through.


  24. Arlenna Says:

    “Which” journals you publish in does not have to be a barrier to getting a faculty job. It may affect whether hoity-toity departments will interview you, but there are plenty of well-respected, productive departments that look at your bigger picture when it comes to hiring their future colleagues. As a job-seeking postdoc, do you want YOUR future colleagues to be people who only think you’re cool if you wear the right clothes and drive the right car and publish in the right journals?
    It may seem naive to hope this, but wouldn’t you rather work somewhere that your ideas and contributions are valued as is, rather than fight in the shark pit in an environment where those “sexiness” factors are more important than the long view? Everybody needs to stop kowtowing to this idea of science celebrity and worry more about keeping up the productivity line (with mature papers wherever they will fit in!) and getting funded. Those are the things that predict for survivability and success in the modern science environment. Making a small step for sciencekind and having editorial leadership push in that direction is about as good a place to start as any.


  25. PhysioProf Says:

    It may seem naive to hope this, but wouldn’t you rather work somewhere that your ideas and contributions are valued as is, rather than fight in the shark pit in an environment where those “sexiness” factors are more important than the long view?

    I would say there is room for both approaches, and the existence of both approaches benefits the overall enterprise. And some of us enjoy being in “hoity-toity” departments and engaging the competition of the “shark pit”.


  26. neurolover Says:

    “What can one little journal do?” Well, I’ve always thought J Neurophys publishes good stuff, and I think doing the same thing they’ve always done – vetting work thoroughly, working with the authors to improve the paper, and then publishing it – is a good start. But then, the next step is what happens not at the journal, but in evaluating the work (in study sections, in job applications, in tenure cases). Folks have to stop trying to get someone else to do the work of evaluation for them. And, folks have to start sincerely evaluating their own work in targeting it to the appropriate journal.
    I’ve decided I actually think less of a lab when nothing they publish appears in J Neurophys (assuming they do neurophysiology). If they publish nothing there, they’re either over-selling their work, or they’re not publishing everything they do. Because, as I’ve said already, not every idea one thinks up can actually be in the top 10% of the field (or whatever threshold we think it *should* take) to get into the high profile publications. Yes, a high profile lab is going to get more high profile papers. They have great resources, they’re great, and they employ great people. But they can’t always be doing the best work.
    So, the answer to what one little journal can do is, well, just publish the solid stuff. But, what can one senior scientist do? Help the good stuff get published, not use others’ decisions as a proxy in evaluating people, and make sure they publish their work as they think others should publish.


  27. PhysioProf Says:

    If they publish nothing there, they’re either over-selling their work, or they’re not publishing everything they do.

    Some labs only aim towards glamour-mag projects, and give up on shit as soon as it becomes apparent that it is not going to be glamour-mag worthy. Just sayin’.


  28. Zuska Says:

    Just wondering, does J Neurophys practice double-blind review? If not, instituting it would be something it could do to help advance the goal of publishing “excellent and interesting science”. That way you can minimize unconscious bias against female researchers (see my post on this topic) who, Drugmonkey recently pointed out, have been major contributors to this field.


  29. neurolover Says:

    “Some labs only aim towards glamour-mag projects, and give up on shit as soon as it becomes apparent that it is not going to be glamour-mag worthy”
    yeah, I think that’s a subset of my “not publishing everything they do.” I’m guessing, since I don’t know exactly what this means (my stuff doesn’t lend itself to giving up that easily – too many sunk costs). It seems this behavior is skewing the science.
    But, more of my worry is that in my field it’s simply not true that the stuff that gets published can be post-hoc ranked well by the journal it got published in. There are definitely papers in J Neurophys that far outrank stuff in the glamor mags, because the J Neurophys article tells the whole story and the other journals don’t. I think this problem is to some extent field dependent, ’cause I think there are fields in which it’s clear that you’re right, and have made a big advance, or you’re not. (But, maybe I’m only dreaming of that – a vision of clarity in other fields).


  30. neurolover Says:

    I’m unaware of any neuro type journal that practices double-blind review. I do largely believe it to be impossible (too obvious who the author is from the publication — would certainly be true for mine). But, I do think it could be an experiment worth trying.
    I know others object on the grounds that you need to know the author in order to be able to judge the work, but I largely reject this idea — that somehow you’re going to look beyond the written words to your knowledge of the author’s rigor in doing statistics, for example. So, an experiment worth trying, I think.
    J Neurophys? any chance? a pilot experiment, using one section?


  31. pinus Says:

    a few ideas:
    -send to only two reviewers…third only if you need a tie-break. This increases time for some, but will reduce overall reviewer load (not sure if this is how you all do it…I think I had 3 reviewers each time I went to J Np)
    -have reviewers provide explicit justification for extra experiments and include an estimated time to perform these experiments. Everybody can agree that sometimes you need a few extra controls, or perhaps a converging technique, to really be able to conclude something….but there are reviewers who just pile experiments on…sure it would be interesting to know if X alters Y as well…but if you just spent 10 figures describing X’s effect on Z…well then maybe that should be beyond the scope of the review.
    -I love the idea of double-blind review….but that is so hard…mainly because most people have posters of their work at conferences…thus double-blind won’t work.
    Unfortunately, all of these ideas are pretty lame. ugh!


  32. bayman Says:

    Some labs only aim towards glamour-mag projects, and give up on shit as soon as it becomes apparent that it is not going to be glamour-mag worthy.
    Those of us who played sports growing up will recognize this as good old-fashioned cherry-picking.
    From the urban dictionary:
    In sports, someone who prefers to take only easy shots. Mildly disparaging.
    A common example comes from basketball…especially a person who sits in the back court by the basket waiting for an outlet pass on a change of possession, enabling them to score easily before any defenders can reach them.
    The origins of this term may come from the notion of “cherry picking” as attempting, or picking, only those things that are easily obtained, or only what suits your taste best, as a cherry might.

    Or for the video gamers:
    A n00b who waits till the VERY end of a kill run and steals the kill with only a few hits. The n00b THINKS they got all the glory of the kill, instead they just annoy the 10 other people who poured all their turns and military into that country in the first place!
    Getting closer to the mark, in the world of internet and blogging, a slightly different connotation:
    Cherry picking is the act of pointing at individual cases or data that seem to confirm a particular position, while ignoring a significant portion of related cases or data that may contradict that position.
    Are you saying cherry-picking projects is a desirable approach to experimental science? Somebody’s still got to play defense before the cherry-pickers get to take it in for the dunk, no? Maybe more sophisticated team play is the higher ground (not to mention the more effective approach to solving real problems)?


  33. DSKS Says:

    David Colquhoun gave a slightly less diplomatic, but insightful, commentary along similar themes in his Times HE article last year.
    He wrote,
    “Take, for example, two scientists who command universal respect in my own field, Erwin Neher and Bert Sakmann. They got the Nobel Prize for Physiology or Medicine in 1991. In the ten years from 1976 to 1985, Sakmann published an average of 2.6 papers per year (range 0 to 6).
    In six of these ten years he failed to meet the publication target set by Imperial, and these failures included the years in which the original single channel paper was published (Neher & Sakmann, 1976) and also the year when Colquhoun & Sakmann (1985) was published. In two of these ten years he had no publications whatsoever. On the other hand, a paper in 1981 in a journal with an ‘unacceptable’ impact factor of 3.56 has had over 15,000 citations (Hamill et al., 1981). This paper would have earned for Sakmann a publication score of a miserable 0.71, less than a 100th of the score for a prospective paper in Science.”

    David Linden’s remarks are agreeable and welcome, but the cynic in me concurs with PP et al. His intentions are noble, but nevertheless he is not acting in step with what he’s proselytizing. He is telling us what most of us know and, I suspect, rather hoping that some other prestigious investigator, or perhaps an institution, will take the much needed lead towards changing how we assess quality in research. It’s certainly good to know that even seasoned veterans of science are aware that something is amiss, and yet at the end of the day, the Chaplain’s rousing speech does not change the fact that when the whistle blows he’ll be waiting for somebody else to stick their head above the sandbags first.


  34. TreeFish Says:

    One thing David Linden could do is what Bill Greenough did to help the transition of Behavioral Neural Biology into Neurobiology of Learning and Memory: publish some of his own best work in his journal.
    When WTG took over NLM, it was very niche. Though it’s still niche-y, using NLM as a forum for his own work helped NLM thrive under WTG’s editorship (he is no longer EIC).
    Sure, the grad students and post docs will be pissed off that ‘all’ they got was a JNFizz paper. But, given David’s voraciously outgoing style, such concessions will likely be paid back with some great reference letters come job-time.
    Let’s not forget, David Linden is old-skool at heart: he was a graduate student of Aryeh Routtenberg, a student of Hebb’s and an all-around loveable curmudgeon. Then, he did a post doc with John Connor, who most people forgot about (if they even knew of him). As Dick Thompson said, the classic Linden and Connor papers were the first to show ‘learning in a dish.’
    David will hold science as his ultimate judge, so it will be interesting to see what he does. Hey, howabout some damn mini-reviews?! How about some ‘recent findings’ updates from JNFizz-related papers in other journals, e.g., why not review Stephen Williams’ objective lacing of voltage-clamp recordings and the space clamp errors that they ignore? Finally, howzabout some special issues, which will include empirical, theoretical, historical and review papers?
    Finally, good on JNFizz. First they had Eve Marder, and now they have David Linden as EIC. That’s a great legacy already.


  35. PhysioProf Says:

    why not review Stephen Williams’ objective lacing of voltage-clamp recordings and the space clamp errors that they ignore?

    HAHAH! The dirty little secret of whole-cell slice physiology!!!



  36. “you can raise shit-storms about impact factors, but the reality is, publishing in only J Neurophys, and similar IF journals, as a post-doc, is not sufficient; you need some higher tier stuff to get a job.”

    Which is exactly why people need to spread the word that the IF is dead:
    1. The IF is negotiable and doesn’t reflect actual citation counts.
    2. The IF cannot be reproduced, even if it reflected actual citations.
    3. The IF is not statistically sound, even if it were reproducible and reflected actual citations.
    Sources linked here.
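    To make point 3 above concrete, here is a minimal sketch (invented citation counts, nothing from any real journal) of why an IF-style mean misleads for skewed citation distributions:

```python
import statistics

# Hypothetical citation counts for 20 articles in one journal: a couple of
# highly cited papers dominate a heavily skewed distribution.
citations = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 8, 12, 60, 150]

mean_cites = statistics.mean(citations)      # what an IF-style average reports
median_cites = statistics.median(citations)  # what a typical paper actually gets

print(f"mean: {mean_cites:.1f}, median: {median_cites:.1f}")
# mean: 13.6, median: 3.0 -- the journal-level "average" is driven by two
# outliers and says little about any individual paper in it.
```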

    “My challenge to all of you DM readers is to put forward ideas that could reasonably be implemented at Journal of Neurophysiology (or similar journals) that would be steps in the right direction.”

    I would say that a step in the right direction would be to encourage, or even to initiate, the development of an open and transparent standard for building a multidimensional scientific reputation. The technology and know-how are around; the only thing lacking is the political will and motivation to start moving. It’s time for scientists, not professional editors, to decide what’s good and important science. The current publishing system is “180 deg. ass-backwards” (#19) and it needs reform badly. The more editors and scientists push for reform, the sooner it will happen, and we all will be better off for it.


  37. DSKS Says:

    What do people think of the page rank method for determining article importance (e.g. the Eigenfactor)? It’s pimped as a superior metric to IF*, but I’m not convinced by it. Basically, “prestige” is assigned to journals both according to the number of citations they receive and where those citations come from (a citation from Science being more “valuable” than a citation from J. Neurophysiol.). But there’s no adequate explanation for why a citation from a rigorous study published in J. Neurophysiol. should be regarded as any less valuable than a citation from a letter to Nature.
    * Which I think it is, but then a Magic 8-Ball is a superior metric to the IF, so…
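    For the curious, the page-rank idea boils down to something like the sketch below (journal names and citation counts invented for illustration; the real Eigenfactor computation adds refinements such as damping and per-article normalization):

```python
import numpy as np

# Toy citation matrix among three hypothetical journals:
# row i, column j = citations FROM journal i TO journal j.
# Self-citations are zeroed out, as the Eigenfactor method prescribes.
C = np.array([[0.0, 10.0,  2.0],   # "GlamourMag"
              [5.0,  0.0,  8.0],   # "SolidSocietyJournal"
              [1.0, 12.0,  0.0]])  # "NicheJournal"

# Normalize rows so each citing journal distributes one unit of "vote".
M = C / C.sum(axis=1, keepdims=True)

# Power iteration: prestige flows along citations, weighted by the prestige
# of the citing journal -- the core of the page-rank idea DSKS describes.
prestige = np.full(3, 1.0 / 3.0)
for _ in range(200):
    prestige = M.T @ prestige
    prestige /= prestige.sum()

for name, p in zip(["GlamourMag", "SolidSocietyJournal", "NicheJournal"], prestige):
    print(f"{name}: {p:.3f}")
```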


  38. pinus Says:

    treefish, those are some great ideas!
    I think J neurophys would be a great place for some technical type reviews of recent findings. Also, special issues with reviews are always interesting to me.
    A few high profile people publishing there from time to time would be nice as well.
    Also…space-clamp issues are no secret! Everybody is aware…it just isn’t trumpeted in every whole-cell paper.


  39. neurolover Says:

    J Neurophys is doing “journal club” articles that review a recent finding in the journal. I think they are good, and that it’s a valuable exercise for a junior investigator (post-doc/grad student/new faculty). It’s also good for the journal, because people need work to be interpreted.
    and, I agree about people publishing in J Neurophys. I swear, people really did do this in the old days. Sakmann (or someone like him) would have something to write in 4 pages and he’d publish it in Science, and he’d have something to write in 15 pages, and he’d publish it in J something. Maybe it’s the proliferation of the intermediate journals (Nature Neuro, Neuron, Cell, and the like) that’s contributed to the journal shopping we see these days.


  40. David Linden Says:

    Thanks to all of you who have offered constructive ideas about how to improve J. Neurophysiology.
    1) Zuska suggests double-blind review.
    This is an interesting idea, and the goal of eliminating bias against particular authors or groups of authors based on gender (or some other criterion) is admirable. That said, there is a reason that the few experiments in double-blind reviewing over the years have been abandoned. First, as mentioned by others in this thread, in practice it’s very easy for people reviewing in their own subspecialty to guess the lab of origin correctly. Second, the editors need to know the identity of the authors, not because of their reputations, but because we want to avoid recruiting referees with a conflict of interest (scientists from the same institution, recent collaborators, mentors or trainees).
    Zuska also raises a very legitimate question. Is there gender bias in refereeing at J Neurophysiology? Fortunately, we have a huge data set that can help us answer this question. We’ll do an analysis and report the results in an editorial soon.
    2) Pinus writes “send to only two reviewers…third only if you need a tie-break. This increases time for some, but will reduce overall reviewer load”
    At J. Neurophysiology we do not have a policy that requires 3 reviewers – it’s up to the Associate Editor. First off, at J. Neurophysiology, the editors are encouraged to use their own brains in making decisions – it’s not just arithmetic. A tie can often be resolved without getting a third review if one of the referee reports (either pro or con) is convincing. I have even gone with the minority decision (1 out of 3) on some occasions when that one referee makes a compelling case. What you should know as an author is that at J. Neurophysiology that third referee is just as likely to help your chances as hurt them. Furthermore, in some cases when you get two referee reports back, it’s because 3 referees agreed to review but one flaked out. That can produce disastrous delays if we have only recruited 2 to start with (no one likes to have to wait 3 months to get their reviews).
    3) Pinus also sez “have reviewers provide explicit justification for extra experiments and include an estimated time to perform these experiments. Everybody can agree that sometimes you need a few extra controls, or perhaps a converging technique, to really be able to conclude something….but there are reviewers who just pile experiments on…sure it would be interesting to know if X alters Y as well…but if you just spent 10 figures describing X’s effect on Z…well then maybe that should be beyond the scope of the review.”
    While I agree with the goal (as expressed in my editorial), I disagree with the method to achieve it. If we insist on this extra writing, it will become impossible for us to consistently recruit the best referees. The best solution is to express to the referees what is expected and then for the Associate Editor to step in and say “this additional experiment is crucial but this other one is not.”
    4) TreeFish: “One thing David Linden could do is what Bill Greenough did to help the transition of Behavioral Neural Biology into Neurobiology of Learning and Memory: publish some of his own best work in his journal.”
    I agree. I am proud of the 10 papers my lab has published in J. Neurophysiology. I expect to submit many more examples of our best work in the years to come, and I have encouraged our Associate Editors and Editorial Board to do likewise.
    5) More TreeFish: “Hey, howabout some damn mini-reviews?! How about some ‘recent findings’ updates from JNFizz-related papers in other journals, e.g., why not review Stephen Williams’ objective lacing of voltage-clamp recordings and the space clamp errors that they ignore?” PhysioProf: “HAHAH! The dirty little secret of whole-cell slice physiology!!!”
    Great ideas. J. Neurophysiology has a reviews section – we would welcome a brief review addressing the literature on voltage-clamp errors in dendrites as addressed in the Williams paper and others. That would be really useful and a service to the field. Go for it, TreeFish or PhysioProf (or anyone else with fire in the belly on this point).
    6) In a similar vein, Neurolover writes: “J Neurophys is doing ‘journal club’ articles that review a recent finding in the journal. I think they are good, and that it’s a valuable exercise for a junior investigator (post-doc/grad student/new faculty). It’s also good for the journal, because people need work to be interpreted.”
    Well, not exactly. I think you meant to say “J. Neuroscience.” I love the “Journal Club” section in J. Neuroscience where students and postdocs are invited to review and interpret recent papers. In fact, the September issue of J. Neurophysiology will announce our own adaptation of this mechanism, which we will call “Neuro Forum.” Two main differences: we will allow brief reviews of recent neurophysiological papers in any journal, not just our own, and we will also allow “minireviews” of exciting new areas.
    7) DSKS asks “What do people think of the page rank method for determining article importance (e.g. the Eigenfactor)? It’s pimped as a superior metric to IF*, but I’m not convinced by it.”
    In my view, you can twiddle these publication metrics all you want, but in the end they are all various flavors of bullshit. If you want to evaluate someone’s science, say, for the purposes of hiring or promotion, then the best way to do it is simply to read the papers they’ve written and form your own opinion. In my “hoity toity” department at Johns Hopkins I have sat on a zillion faculty search and promotion committees and that is exactly what we do. We have the confidence to evaluate the science ourselves and, despite what you might imagine, the words “impact factor” or “h-index” or “EigenFactor” are not even mentioned…


  41. pinus Says:

    While one can debate whether or not ‘small’ changes can really do anything in face of the systemic issues that plague science, I think it is pretty awesome that D. Linden came here and took the time to read and thoughtfully reply to suggestions. It really is great (and surprising) to see a senior member of the field do this…thank you. Honestly, this kind of open discussion makes me feel much much better about J Neurophys.


  42. BugDoc Says:

    D. Linden @ #40: “Pinus also sez ‘have reviewers provide explicit justification for extra experiments and include an estimated time to perform these experiments….’
    While I agree with the goal (as expressed in my editorial), I disagree with the method to achieve it. If we insist on this extra writing, it will become impossible for us to consistently recruit the best referees. The best solution is to express to the referees what is expected and then for the Associate Editor to step in and say ‘this additional experiment is crucial but this other one is not.’”
    I agree that it is an important part of the solution to have associate editors step in to make some decisions about experiments requested by the reviewers. However, in practice, how are you going to determine if this is happening, or if editors are just passing on the reviews (probably because they are extremely busy) without providing much guidance as many do now? Regarding asking reviewers to write a few extra words of justification for requested experiments, the best reviewers do this anyway. Providing rationale and justification for our thoughts is something we all have to do in our papers and grants, so asking for this thought process to be included in peer review should not be an undue burden. If some reviewers refuse to do this, perhaps they are not the most thorough reviewers anyway?


  43. DrugMonkey Says:

    geez, ya turn your back for a second around here…
    great comments all, thanks for playing. I’d like to pick up on this suggestion that Editor Linden should submit his “best” stuff to his journal. There is one slight minefield for him in this, just thought I’d get it out there.
    There is always a potential for a poor perception when people on the editorial staff get articles accepted to “their” journal. The perception that there was an extra thumb on the scales, so to speak. Especially for anyone who thinks that their own rejected paper was just as good, or better. So it is not simple for an editor to just start submitting all of their own work. Just saying.
    A thought exercise for the home reader. What would you think if an editor published 80-90% of their articles in their own journal?



  44. Austin Elliott Says:

    David Linden wrote:

    In my view, you can twiddle these publication metrics all you want, but in the end they are all various flavors of bullshit. If you want to evaluate someone’s science, say, for the purposes of hiring or promotion, then the best way to do it is simply to read the papers and form your own opinion. In my “hoity toity” department at Johns Hopkins I have sat on a zillion faculty search and promotion committees and that is exactly what we do.

    If only it were true everywhere.
    My friend Prof David Colquhoun, who, as already noted, has written a lot about related issues (see e.g. the extended version of his Times Higher Education piece here), used to say something similar about search committees and interview panels, noting that he would read the papers people starred on their CVs as their most important ones and then ask them questions about the papers. The punchline was that his University Human Resources people told him to stop doing it, as it was impossible to ensure that all the candidates got asked equivalently tough questions, and thus unfair.
    In the UK, the reason publications in the beauty-competition type of journals have such an influence on faculty hiring is that they are seen as giving a strong indication of who will actually be able to get their grants funded. In our system the funding rate for the major funders runs at between 10 and 20%, and the panels (study sections) have very wide remits (e.g. the whole of physiology and pharmacology). In this setting the added glamour factor, and sense that the person might be cutting edge, conferred by a paper in Nature, Science, Cell, PNAS or wherever, is felt by many scientists to have a big influence on the funding panels, who are inevitably looking for any reason to sift stuff either in or out. Universities in turn hire the people they think will be best able to get funded, thus commonly the people with at least one paper in one of the glamour journals as well as a solid track record in specialist category journals.
    On the specific subject of how to make better journals, one feature that some of the medical journals use, and which (speaking as an author, reader, reviewer and sometime editor) I think could be used more in science journals, is an electronic response thread following the online version of articles. One of the things one hears all the time is: “Well, Dr A’s stuff is garbage because XYZ” (usually because of perceived technical flaws or over-stretched interpretation of the data). But when you ask, “OK, did you publish this explanation of why you think Dr A’s data or interpretation are wrong?”, they say “life’s too short” or similar. I have the impression that this is mostly an activation-barrier thing – trying to pen a review article, even a short one, disagreeing with something is hard work. In contrast, penning an e-letter is a lot easier, and in the best cases generates quite an informative online debate. It is a bit like seeing the reviewing happening live, except it is after publication, and sometimes shows up things the journal reviewers missed.


  45. juniorprof Says:

    What would you think if an editor published 80-90% of their articles in their own journal?
    We’ve got this going on with one of our subfield’s journals right now. The perception appears to be uniformly bad. I don’t think it will tank the journal, but the practice has not been appreciated.


  46. DrugMonkey Says:

    Austin Elliott, the point you raise about the various publication metrics being used because they seem ‘fairer’, particularly to those who are not actually in science (HR depts), is interesting. First, because it encourages the doubting Thomases to redouble their criticisms of the IF and other systems in hopes of showing that these are not in fact less biased, just differently biased.
    Second, because it helps us to understand one of the pro-IF positions a little better and to encourage the production of seemingly ‘fairer’ measures that are more palatable. For example, instead of ‘starring’ pubs or relying on personal biases, perhaps the way to go is to use measures that account for the known limitations of citation measures: first of all, getting back to actual per-article cites instead of per-journal cites; developing the proper field-relevant measures (that account for field-specific cite rates); dividing per-article cites by the journal cite rate (heh, heh, heh); etc. Something along the lines of the sketch below.
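    A toy version (function name and all numbers invented for illustration, not any established metric):

```python
def field_normalized_cites(article_cites, venue_cite_rate):
    """Per-article citations divided by the average cites-per-article of the
    journal or field the paper appeared in. Values above 1.0 mean the paper
    outperforms its venue's average. Purely illustrative."""
    return article_cites / venue_cite_rate

# Invented numbers: a 40-cite paper from a field averaging 8 cites/article
# outscores a 60-cite paper from a field averaging 30.
print(field_normalized_cites(40, 8))   # 5.0
print(field_normalized_cites(60, 30))  # 2.0
```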


  47. neurolover Says:

    “Austin Elliott, the point you raise about the various publication metrics being used because they seem ‘fairer’, particularly to those who are not actually in science (HR depts) is interesting”
    Yes, this is an interesting point, but a quest for fairness by coming up with “objective metrics” is doomed to failure. It’s another method of trying to foist off the evaluation of merit onto someone else, and it won’t work to find “true” merit, if our goal of true merit is to look for some form of idealized science. Using metrics like these is a mark of a de-professionalized, non-creative field, where we evaluate using fair but gameable metrics. It’s the reason that teachers’ unions argue against evaluations of merit.



  48. I’m sure that Hopkins search committees read the papers of the candidates–but from what I’ve heard (at a similarly prestigious location) this only happens after the candidates got through the first round cut by, duh, having a glamour paper. Or two.
    That said, I think editorials like this one are terrific. It sets a tone, it lays out goals, and it reminds everyone that the publishing enterprise is what we choose to make it.


  49. DrugMonkey Says:

    …but neurolover, you do understand the fundamental problem of bias, right? Evaluation of merit that is in fact based on other factors which are not directly relevant to the decision that is at hand is not a good thing.
    The motivation to dismantle sources of bias that are irrelevant to the supposed goal of academic quality is a good one.


  50. neurolover Says:

    “…but neurolover, you do understand the fundamental problem of bias, right?”
    Oh, totally totally, for sure. I just think embracing mathematical formulas in order to counter bias is doomed to failure of another sort. The math (statistics) has value – in pointing out that there is bias. But, in judging individual science, any formulaic method of evaluating merit isn’t going to serve the goal of “real science.” It’ll end up being something like hiring people based on their IQs, and thinking you’ve hired the smartest person. All metrics are doomed to failure (well, except maybe the true mathematical ones: “metric – a function of a topological space that gives, for any two points in the space, a value equal to the distance between them”).
    I think the solution to bias is being aware of it and taking extra steps to question ourselves when our evaluations seem to fall into stereotypic traps (like saying a woman is shrill, or an asian is shy, . . .).


  51. Elmo Says:

    Allow the reviewers to comment on each other’s reviews. Manuscripts are often held up by one reviewer making comments or demands that are obviously unreasonable to everyone but the editor. Let’s be frank. Editors sometimes (frequently, usually) don’t take the time to study the manuscripts they receive very carefully, relying instead on the reviewers to do the heavy lifting. At the same time, some reviewers, because they don’t put in enough effort to understand the manuscript, fire off ridiculous criticisms and make illogical demands. Other reviewers, sometimes out of sheer malignant competitiveness, deliberately try to tank manuscripts. (I worked for someone who did this. Meeting this person, you wouldn’t suspect it, but seeing other people succeed really agitated this faculty member.) We’ve all seen these sorts of things happen. Allowing reviewers to comment on each other’s reviews would help to limit the damage that can be done by sloppy, mistaken, or ill-intentioned reviewers.


