On making the publishing process "more rapid, rational and equitable"

July 7, 2008

We had a previous extended discussion on the ways in which scientists might react to the rejection of a submitted manuscript, which followed a post over at double-doc’s place. The discussions following all of these posts touched on one of the annoyances of manuscript peer review, namely requests that the authors provide extensive additional experimental work to justify acceptance for publication.
It seems a trio of senior chaps have been reading blogs.


Longtime reader BugDoc pointed me to a Letter to the Editor recently published in Science. The letter outlines the dangers of increasing reviewer demands as follows:

One problem with the current publication process arises from the overwhelming importance given to papers published in high-impact journals such as Science. Sadly, career advancement can depend more on where you publish than what you publish. Consequently, authors are so keen to publish in these select journals that they are willing to carry out extra, time-consuming experiments suggested by referees, even when the results could strengthen the conclusions only marginally. All too often, young scientists spend many months doing such “referees’ experiments.” Their time and effort would frequently be better spent trying to move their project forward rather than sideways. [emphasis added-DM]

Oh yes. Preach on, Professors Raff, Johnson, and Walter, preach on. And really, Readers, check the links. These are not disgruntled wannabes talking here, by any measure.

It is surprising that so many referees make unnecessary demands, as they are authors themselves and know how it feels when the situation is reversed. Such demands are discouraging for young scientists and, cumulatively, slow the progress of science. Of course, peer review is critical for making sure that the authors’ conclusions are sound, and some referees’ experiments would substantially advance the story. But frequently, these would justify an additional paper. Science advances in stages, and no story is complete.

Wait, are they plagiarizing Your Humble Narrator? Damn. Or is it just possible that these thoughts might occur to a whole bunch of scientists…?
OK, smart guys (because the evidence of your CVs suggests you are most assuredly smart guys), got any solutions?

Editors should insist that reviewers rigorously justify each new experiment that they request. They should also ask reviewers to estimate how much time and effort the experiment might require. With this information in hand, editors can more easily override referees’ excessive demands. This requires confident, knowledgeable, and experienced editors, and it risks alienating referees, who are often hard to come by. Nonetheless, editors should be encouraged and empowered to perform this crucial task.

Indeed. Wonder how they feel about the whole professional editors / *working-scientist editors thing? Would more senior or more experienced editors be more likely to smack down excessive reviewer demands? More likely to send back the decision letter with “and oh, btw, feel free to ignore reviewer #1’s demands for new data”, something I will note I see frequently in the society-journal-level playing field?
__
*I hate the term “working scientist”. Really I do. I think it is elitist and disrespects many people who think scientifically, indeed even more dispassionately scientifically than many so-called working scientists. (I am a lot more conflicted by my “barely hacking it as postdocs” crack than it might seem, btw)
UPDATE 07/08/08: I just noticed a post on the RabidReviewer over on the Professor Chaos blog.

Responses to “On making the publishing process "more rapid, rational and equitable"”

  1. BugDoc Says:

    To follow up the conversation on the “Put down the crack pipe” entry, A Former Nature Editor said “Or does the editor think about how long it takes to do the experiments and what they’d really add to the work — using previous publications and previous submissions to their journal and reviews of those papers as a guide, as well as their own extensive scientific background — then take an hour to write a letter explaining to the authors that they must reject this version of the paper, but they truly think that addition of suggested experiments X and Y would improve it (although they think Z is not necessary and more suited for the next paper)?”
    I have had a few wonderful editors who clearly exercised some editorial discretion in saying that certain reviewer requests were well beyond reasonable expectations for one paper, while other editors just forward reviews along with “yeah, what they said” without any guiding comments. Is Former Nature Editor unusually conscientious or the norm?

  2. JSinger Says:

    1) The bottom line is that this is still another straightforward product of supply and demand. The number of researchers continues to grow exponentially while the number of high-profile journals grows much more slowly. (And the number of papers the collective research mind is willing to consider “high-profile”, which is the real limiting factor, doesn’t grow at all.) So, what constitutes a “complete story” at that level keeps growing.
    2) I’m not understanding how a call for fewer follow-up experiments in super-journal papers and more gate-keeping by editors squares with DM’s usual soapboxes.
    3) Regarding “I hate the term “working scientist”. Really I do. I think it is elitist and disrespects many people who think scientifically, indeed even more dispassionately scientifically than many so-called working scientists.”:
    What is this, PZ Myers? “Working scientist” evaluates people on the dimension of what makes them scientists — that they produce. Abstract adherence to OMG Teh Scientific Method! isn’t what makes you a scientist.

  3. Becca Says:

    You could always use “bench scientist” instead. Oh wait, that’s right. PIs belong in their offices writing grants, not doing ignominious manual labor like pipetting at the bench (the horror!).
    In that sense, JSinger, scientists are never evaluated on what they produce, but what they can con, cajole, or coerce others into doing for them.
    Sorry, snarky mood has hit me today.
    But seriously, for the context of this discussion, what’s wrong with being honest? They are professional editors, you are the “amateur” editors. Just call yourself that, if you don’t want to sound elitist.
    Of course, if instead of just avoiding sounding conceited, you dislike elitism due to its very nature… If you want to undermine the very structure of the absurd hierarchy which unnecessarily divides grad students and PIs, manuscript submitters and manuscript editors (those that want to pass through the gates and the gatekeepers), you could always work on that more explicitly. Personally I’ve yet to figure out an effective way to do it (aside from tongue-in-cheek snarky comments on blogs).

  4. DrugMonkey Says:

    I have had a few wonderful editors who clearly exercised some editorial discretion in saying that certain reviewer requests were well beyond reasonable expectations for one paper, while other editors just forward reviews along with “yeah, what they said” without any guiding comments. Is Former Nature Editor unusually conscientious or the norm?
    I suspect that this diversity is expressed within individual editors. Meaning that a failure to take sides or modify any reviewer comments is just as much of an editorial decision as saying “ignore reviewer #3’s third comment”. If you think about doing the job yourself, it is trivial to see that sometimes you’d want to stick your nose into it and sometimes you’d let the reviewer comments pass unedited. This is what I see for those editors with whom I have a decent sample of interactions as author and reviewer.
    The devil is in the detail, of course. So we are jousting only over the degree to which an editor might interfere with the reviewer comments.
    What bothers me about the example given by the former Nature editor is the suggestion that even though s/he thought two-thirds of the reviews were crappy and the remaining review was positive and exhaustive, s/he still went with a partial sop to the jerks by demanding more experiments. The default reflexive position seems to be that more experiments will be demanded. That’s bollocks, and the testimony of senior scientists with long GlamourMag lists, and apparently the resources to put up with endless demands for additional experiments, is important here.

  5. Anonymous Says:

    Negative much, Becca?
    I’m confused… Are you really for a science anarchy? A no-hierarchy, grad-students-and-PIs-are-equal kind of thing. I mean, anarchy is always tempting, but only theoretically. Aren’t there skills to be acquired in this business? So how can there be passing/teaching of skills and expertise without any “hierarchy”? Is there any aspect of human endeavor that is so free of “hierarchy”?
    I never thought scientific productivity was how much you can “con, cajole, or coerce others into doing for [you]”. I see what you’re getting at but are all trainees in science simply idiots who willingly let themselves get taken advantage of? Or is this a really dark and pessimistic view of what a PI does?
    (FYI, I am NOT a PI but a postdoc and I do a fair bit of “labor” but really how else should it be given the way things are…)

  6. BugDoc Says:

    …a failure to take sides or modify any reviewer comments is just as much of an editorial decision as saying “ignore reviewer #3’s third comment”.
    I guess I don’t get this. Since, as you point out, DM, it is becoming essentially par for the course for reviewers to ask for a year’s worth of experiments, it seems like it should be part of the editor’s mission to point out which experiments they think will be key to achieve prior to resubmission. That doesn’t mean authors shouldn’t try to do what they can anyway. I don’t have a problem with reviewers asking a lot per se, if experiments are needed to support the authors’ stated conclusions. However, one important point made in the article was that reviewers should clearly articulate why they think specific experiments are needed for the study. Otherwise, it’s easy to come up with many experiments, only a few of which contribute much to the paper. Similar suggestions are made to hiring committees at our institution to avoid inadvertent bias, since evaluations have to be supported by substantive comments. The peer review process overall would be more transparent and constructive (and probably much quicker) if reviewers were more specific about how the proposed experiments will contribute to the paper.
    Although JSinger describes the phenomenon of the increasing “complete story” as driven by supply and demand, do we really want peer review and scientific progress to be shaped by market forces?

  7. Becca Says:

    @Anonymous- Once again, I maintain I’m not a negative-nelly at all- but an optimistic idealist of the most hardcore shiny-happy care-bear tea party variety.
    The snark was clearly labeled tongue-in-cheek. I was having fun. But then, I think the child who said the emperor had no clothes was also having fun.
    I personally have little use for hierarchy. Some of us are not sheep, nor shepherds, but happy (and stubborn) little goats.
    The fact you even ask the question
    “So how can there be passing/teaching of skills and expertise without any “hierarchy”?”
    strikes me as astonishing. Truly. How can there not be exchanging of information and learning- regardless of hierarchical structure? People are built to learn. If you don’t go into life with the attitude that everyone – from your plumber to your professor- has something to teach you, I think you miss out on a lot of valuable information. You certainly miss out on a lot of interesting conversations.
    I’m not saying PIs aren’t generally (at least in my experience) talented, hardworking, intelligent individuals with an incredibly rich store of knowledge to share. They most certainly are!
    It’s just that so are many (most!) of the post-docs, and even (gasp!) my fellow grad students. Heck, I’ve even been known to learn stuff from med students once in a while (keep that on the DL please! My reputation will be ruined!). Furthermore, I hold the utterly shocking opinion that maybe (just maybe) professional editors are, well, professionals. Who might know what they’re talking about, at least every once in a while. Which isn’t to say I’ll rant any less if they have the audacity to reject my (obviously) brilliant work!
    😉
    Anyway, I’m not at all about tearing down PIs. I’m just about lifting up grad students. And even professional editors, from time to time. If that makes me pro-“science anarchy” so be it.

  8. pinus Says:

    /tosses monkeywrench into the works
    I think we should eliminate anonymous reviews… I think the # of annoying extra experiments would be dramatically reduced.

  9. drdrA Says:

    Pinus- you are singing my song. BUT- as an alternative probably more acceptable to most- I’m with Bugdoc in that reviewers should have to, at the very least, provide an explanation as to how the experiments they request are needed to strengthen the stated conclusion of the paper.
    I myself have personally seen a lot of ‘it would be nice if you had shown XYZ’- and yes, while I admit it would have been nice- if showing XYZ isn’t required to strengthen the stated conclusion of the paper (or provide a necessary control for it), I don’t really give a rat’s ass about what the reviewer thinks would be nice… and I’m mad at the editor for allowing seemingly offhanded, incomplete, and uninformative suggestions to pass as a serious review.
    Providing a solid reason for the experiments required by reviewers to strengthen a stated conclusion should be just as required as the authors providing positive and negative controls on their figures.

  10. juniorprof Says:

    I have never gotten the it would have been nice type review. Rather, I get the: there are several open questions which the authors should address… three years of work… before the paper can be acceptable for publication. I like to think that the reviewers suspect that I might be considering early retirement and they want me to stay in the game 🙂

  11. Neuro-conservative Says:

    Broadly speaking, I think that reviewers should stick to reviewing the paper in front of them, rather than some hypothetical paper they would have preferred to read. If the authors have failed to run a necessary control so that interpretation of results is completely confounded, that’s one thing. Dinging the authors for not running some experiments that would be “nice” or “interesting” or make a “more complete story” is often just passive-aggressive BS.

  12. Odyssey Says:

    There is of course an assumption being made here when people suggest editors should be encouraged to override “overzealous” reviewers. Hands up everyone who believes all editors actually read the manuscripts they’re handling…
    One would hope they are, but at least some are not. And you, as an author, don’t know which ones are and which aren’t. I’m on the editorial board of a reasonable, although not top tier, journal. There are some quite well-known scientists on the board, along with not-so-well-known schmucks like me. The editor-in-chief recently felt compelled to send an email out to the board encouraging them to read the manuscripts they were handling.

  13. DrugMonkey Says:

    BugDoc @#6: as you point out, DM, it is becoming essentially par for the course for reviewers to ask for a year’s worth of experiments, it seems like it should be part of the editor’s mission to point out which experiments they think will be key to achieve prior to resubmission.
    It isn’t always the editors’ job to write the paper for you, however. So in many cases if the reviewers throw up a bunch of flak, it is up to the authors to grasp the gist, turn it into an improved paper and argue for acceptance.
    pinus @#8: I think the # of annoying extra experiments would dramatically be reduced.
    No, but the number of personal-attack rebuttals would increase: “Dear Editor, I find it curious that Dr. Schmo is asking for an extensive series of additional experiments when most of his papers feature a single decent figure at best…”
    Neuro-conservative @#11- BINGO!!!
    I do have a caveat though, which is perhaps only relevant to me in this conversation since I’m the only one admitting to slumming around in the 2-4 IF zone. At this level, there really can be papers submitted that are really just beyond the pale in terms of sausage slicing. Eye of the beholder of course. The point being that there is nothing particularly wrong (in terms of controls for example), the manuscript is just sharply limited in terms of the amount of data being presented. I see editors (almost universally respected scientists volunteering as editors) doing the effort calculus thing all the time. Reviewers obviously struggle because you see all kinds of ideas and criticisms being supplied to avoid saying simply “folks, I don’t much care what you do or which direction you follow up but this thing needs more data”. This is one area where the reviewers seem to be telling the authors what paper to write not out of specific desire to tell ’em what to do but as an attempt to be specific in the face of a general complaint.

  14. bsci Says:

    I think the biggest gap to making the review process more rational and equitable, and possibly faster, is not the reviewer/editor-to-author interactions. It’s the interactions between reviewers and editors. This is implied by some of the comments about asking reviewers to justify requests for new experiments, but I think it’s bigger than that.
    Of all the tasks I’ve done as a researcher, the only one where I have received ZERO direct training or even access to training is reviewing. I’ve done journal clubs and I know how to rip apart published manuscripts. I’ve gotten reviews on a few papers and know what I felt was appropriate or inappropriate, but this is a slow trial-and-error process. As I review more and more I’ve set up my own idiosyncratic rules on when to accept/major revisions/reject, but those rules are loose and not always rational or consistent (I often request nontrivial amounts of additional data analysis, but rarely new data collection).
    When I write reviews they disappear into the ether. I see the same meaningless paragraph from the editor that the authors see. Depending on the journal, I don’t always see the authors’ responses when I say “accept with minor revisions.” I never see the authors’ responses if the revision goes to a different reviewer or a different journal.
    Better review requires training better reviewers. Editors don’t have the time to review reviews. I’ve never heard of a PI/mentor doing this. If journals had a paid service, I’d even consider using that, but right now there are no options.

  15. BugDoc Says:

    DM@#13 “It isn’t always the editors’ job to write the paper for you, however. So in many cases if the reviewers throw up a bunch of flak, it is up to the authors to grasp the gist, turn it into an improved paper and argue for acceptance.”
    In fact, it should never be the editor’s job to write the paper for you. My point was that the editor is the person who makes the final decision about the manuscript. I’ve found the revision process to be much less chancy if I contact the editor and make a case for what we can do, and what we think is bullshit, before we embark on months’ worth of experiments. Most editors are very helpful in this regard, so it makes me wonder why they don’t just provide this sort of guidance up front with the reviews. I don’t mind contacting them to sort it out, but it just makes more work and correspondence for them.

  16. PhysioProf Says:

    When I write reviews they disappear into the ether.

    Most good journals share the Comments to the Authors of each reviewer with the others, as well as the editor’s letter to the authors and the final disposition.

  17. Sean Walker Says:

    I agree with a lot of what is being said.
    However, in the end it is the primary author’s job to argue to the editor that the paper is worth taking. If the reviewer is a bonehead then the authors need to take them to task for being a bonehead and talk to the editor and write a rebuttal. When I was a grad student we referred to this as ‘publication by brute force’.
    I completely agree with BugDoc on this. You can solve lots of problems if you talk to the editor. Now, there are certainly times when the editor is a bonehead and I have no idea what to do about that.

  18. Anonymous Says:

    Sorry to keep on a tangential discussion. Thanks for responding, Becca. I do agree with you that potentially there is something to learn from everyone. I in fact have a large circle of non-academic people that I talk to who might have influenced my research and thinking more than my PIs have… At the same time, I guess what I am saying is, whether we like it or not, the people who are succeeding *in this system*, who are “technically”, hierarchically “above” within the judgment of this system, are doing something “right” and have some skills to teach us that are relevant to how to succeed in the system as it is. Which IS important.
    Of course the system keeps changing – and yay for that – and needs to keep changing.
    Fortunately in my case, most of my mentors had things to teach me that were beyond “how to play the game like me” and actually helped me to grow as a scientist in an ideal sense as well. But I did meet some people whose approach to science was so different from my own that I would never work like them — but even in those cases it helped me to watch how someone “hierarchically” above me was doing things.
    I am a woman and a minority so while I agree with your anarchist spirit and “bring everyone up” spirit and hope to apply those principles to my future life as researcher and mentor, I also have to learn from anyone and everyone in order to succeed in the system as it is today…
    Basically, we agree. And that’s why I actually am glad you responded, because I am trying to understand how everyone sees the scientific enterprise and what everyone feels is “changeable” and more importantly, HOW. I guess I am a very practical person, not a care bear like you, more like “what can we specifically do about this?”. People like me need the encouragement and belief of people like you so that we can remain motivated about CHANGE…

  19. Anonymous Says:

    PS: By corollary, yes I do think pro editors have something to contribute to science. Maximally, they could really be trying to make science better; minimally, they have some power over what gets into these Magz… Either way, they’re relevant and important to the science-web…
    I don’t always agree that the GlamorMags are good for science, but most people I know want to publish in those places. Some say “I ain’t going to play that game” and I respect that. Some play the GlamorMag game 100% (who would not even start an experiment that they cannot envision will get into a GlamorMag, even if it was a good scientific question – yes, I know multiple people who do this). Most people just do research they care about and sometimes try for the GlamorMags.
    As for referees asking for more and more… This is how I read it: a referee who is in your field, knows the standards of the field, and still asks for an unreasonable amount of extra work is just trying to reject the paper. If they really want the paper to get in, but also be improved, they can suggest small experiments or new analyses instead of suggesting you do a year’s worth of experiments. Sometimes talking with the editor really works – but I have been in many situations where the editor just would not put one iota of thought and analysis into the reviews and just forwarded them as they were.
    I was lucky enough recently to work with an editor who said “do this, do that, and that and that, forget about the rest”, in bullet points. I can’t tell you how much less stressful and time-consuming that revision was and how much it helped to not bloat the paper addressing every comment “just in case”. It was almost a pleasure. This was a pretty high impact journal but not one with prof editors. I was amazed that a working-scientist editor (or whatever the appropriate word is) could find that much time to put into a manuscript (and no, it was not square within his area of research). Lucky, we were!

  20. bsci Says:

    Re: PP#16
    Yes, I do see the other reviewers’ comments and I learn from them (though what I often learn is that I still read papers in much more detail than other reviewers, and I actually care about making sure that it should be possible to replicate the study using the methodology section and listed references). Still, reading examples of others’ work is a poor education method compared to having someone review samples of your own work.
    PP, do you think you have a consistent reviewing style? How many years of reviewing did you do before you got there? Did you consider your education in this area of being a researcher sufficient and efficient? Of course, you are blessed with an unnatural amount of intellectual clarity and writing skill so you might not be the best person to answer this question for us mere mortals. 🙂

  21. BugDoc Says:

    Sean Walker@#17 “If the reviewer is a bonehead then the authors need to take them to task for being a bonehead and talk to the editor and write a rebuttal.”
    Sean, you are right, of course. However, the reason I hope that editors would be more proactive is that I don’t like to be in the position of having to argue that other (presumably respected) scientists in my field are boneheads. I always try to do so in a polite manner, with a clear explanation of why I think the proposed experiments will not add to the paper or are beyond the scope of the current work. However, this approach necessarily puts the authors and reviewers at odds, when the point of peer review is to improve a worthy body of work for publication or reject it (based on substantive concerns). This should not be an antagonistic process but a synergistic one. The current peer review paradigm has resulted in more and more of our time as PIs being spent on “grantsmanship” and political and semantic maneuvering to avoid the bloated and uncontrolled process of unnecessary revision. Obviously I am not saying that papers don’t need to be revised; reviewers often make helpful and important suggestions… they are just sometimes lost in the barrage.

  22. Odyssey Says:

    Sean Walker@#17 “If the reviewer is a bonehead then the authors need to take them to task for being a bonehead and talk to the editor and write a rebuttal.”
    As BugDoc @#21 noted, one needs to be very careful in how this is handled. Remember, the editor chose the reviewers. Accusing the reviewer of being a bonehead implies the editor made a boneheaded decision in choosing that person. Editors generally don’t react too well to being accused of boneheadedness.
    It’s also possible, of course, that the bonehead was on the author’s list of suggested reviewers, making the author in part responsible. 🙂

  23. DrugMonkey Says:

    bsci @#20: Do you think you have a consistent reviewing style? How many years of reviewing did you do before you got there? Did you consider your education in this area of being a researcher sufficient and efficient?
    I had a little something on this before:
    http://scientopia.org/blogs/drugmonkey/2008/03/have-you-been-emtrainedem-to-peerreview
    Training or no, I do think manuscript reviewing is a difficult task and I hope that I never feel that I have arrived at some fixed approach that need never be reconsidered. As mentioned by others, reading the other reviews and editor decisions is a necessary part of calibrating one’s own reviewer behavior. I do notice that sometimes the journals do not email you when the editor decision is issued and sort of leave it up to you to check back with the online reviewing site…which is kinda lame. Also, I imagine if you haven’t been asked back for the revision, this wouldn’t come to your attention- this could be good (if slightly painful) feedback that you are not providing useful reviews for that editor.
    Here’s an additional consideration that I thought of this morning. Take a reviewer who habitually publishes in the journal for which she is reviewing. Is she not motivated to up the quality of the papers accepted for that journal such that the benefits of increased reputation (or Impact Factor) spill over onto her own CV? hmmm?

  24. NM Says:

    Becca
    Science mostly doesn’t have much to do with benches. You might be a bench scientist, but I most certainly am not.

  25. Becca Says:

    @NM- I recognize that science isn’t necessarily about benches. My point was more, if the data presented in publications with your name on them was produced at a bench, but you did not produce it… is it really reasonable to look down your nose at others who are also not at a bench?
    I do recognize that PIs do lots of things journal editors don’t (PP’s interpretation of the former Nature editor notwithstanding, I think that was never in question). I just think it is absurd to imply that editors don’t “work” (unless you use the “produce data” benchmark-pun intended). And I certainly don’t like the implication that they are not scientists.
    @Anonymous (#18)- your post truly pleased me. If my rhetoric can actually motivate others for change (and not just have the totally off-target effect of leaving them with the impression I think the system sucks), perhaps there is a point after all.
    I also understand the need for learning how to work within the system. After all, that is one of the basic functions of this whole blog- and I think it generally does a great job. It’s just that, when something comes along that inadvertently perpetuates a negative feature of the system (like the “barely cutting it as post-docs” comment, or the “NON-working-scientist” near-slur) I have to say something.

  26. Stephanie Z Says:

    Hey, Becca, if you want to send me a note ([firstname].[lastname]@gmail.com), I have a link to a white paper that you might find useful. As one goat to another. 🙂 I’d post it here, but it’s rather off topic.

  27. NM Says:

    I don’t think that editors are necessarily scientists. They can certainly be scientists if they are still producing science. If they are no longer producing science then they are editors, who are certainly integral to the scientific process. They may have been scientists in the past and have vast scientific knowledge, but if they are no longer producing scientific knowledge (rather helping to critique and then present it) I’m not sure how they could qualify for the term “scientist”. It’s certainly not a slur, it’s just a result of a ‘working’ definition.
    Publishers and editors of fiction works are also essential to the writing and publishing of books. But this does not make them authors, even when they once were.

  28. Becca Says:

    Determining who is “producing science” is nontrivial. Is it generating data? Of course not. Then the average grad student would be more of a scientist than the average PI.
    Does “producing science” mean having your name on a scientific publication? Well that leads to an interesting issue. If, as FormerNatureEditor mentioned, an editor actually designs experiments, and if they contribute to changing the text of the manuscript (as is the traditional function of “editor”- broadly speaking), one could argue they meet the criteria for authorship. Not that they’d ever be included. Granted, most editors do not do enough work on most manuscripts to be included- but it could certainly happen.
    If you look at other fields, editing a book can be considered a scholarly achievement on par with writing a book. If you have a volume of poetry, and an editor frames each poem, and provides extensive commentary throughout the book, they have certainly created intellectual property. You can argue that they are not “an author”- but the book will be in the library under their name.
    Bottom line- we don’t give editors the same credit they get in other fields, nor do we give them credit as scientists.
    It almost seems that all we give them is abuse, when they don’t publish our brilliant work as quickly as we want, and insinuate they couldn’t cut it at our jobs. How is this responsible professional behavior toward an important member of the scientific enterprise?

  29. pinus Says:

    Irrespective of people shitting on editors, I don’t think there are going to be many examples of a scientific journal editor framing each experiment and providing extensive commentary. I think the comparison is apples to pears.

  30. neurolover Says:

    I think we’ve gotten drugmonkey to back off on his mindless knee-jerk bashing of editors and their abilities, but I don’t think editors are scientists. I think that “producing science” is a reasonable definition of what a scientist is, but also agree that it’s difficult to define exactly what “produce” means. It’s not just collecting data (a data collector might not, actually, be a scientist, either). But it’s also certainly not just providing the money and very vague guidelines about what should be done. I think there are more PIs out there than we’d like to admit who are not really scientists any more either (who are, say, communicators, or fundraisers, or managers).
