Circumstantial evidence

November 15, 2007

A recent post soliciting Open Laboratory 2007 nominations from Noah Gray of the Action Potential blog reminded me of a little commentary exchange that we were having over a post on “paranoia in research”. Inexplicably I let him get in the last word. Fortunately an opportunity presents itself to continue the discussion.

In very brief outline, the topic is the observation that paranoid secrecy about one’s current scientific findings, experiments and hypotheses is a bad thing for science. And yet it seems to be an increasingly common fact of life. My suggestion was that certain editorial practices of the most well-regarded scientific publications contribute to scientific paranoia in explicit ways. Some of my prior points included:

The second category of silliness for which Nature policy is directly responsible is accepting a submission because “a paper on a similar hot topic is accepted at (competing journal X) and gee don’t you want one too”. Or worse, accepting a me-too for co-publication that comes in, astonishingly, from the group you sent the other paper to for review.

The point is that you don’t have to determine who stole what in a specific case to recognize that systematic journal policies encourage scooping and project stealing.

In the series of exchanges Noah took a couple of positions which boiled down to 1) “Give explicit examples”, 2) “Well, we at Nature Neuroscience aren’t responsible for what may go on with other Nature titles” and 3) “I’m so offended that you would even suggest this! Stop making these accusations against us!”. Oh, and some semantic wibble that doesn’t fool anyone either. The bottom line is that taking offense and using vaguely threatening language in an attempt to avoid answering a critique is not a valid argument. Placing the burden of proof on someone to assemble overwhelming evidence of editorial shenanigans is a nice little trick but doesn’t in and of itself prove a damn thing. Particularly when one has one’s own set of experiences that may or may not be fodder for open blogging, for various reasons. Not to mention that it would have been simple for Noah to come right out and answer this query of mine:

can you tell me you never ever get PIs telling you that something related to their MS is accepted / almost accepted elsewhere? can you tell me that this never works to influence how a submission would be treated (from review to decision to print) in such a case?

The bottom line here is that only those on the inside of situations involving less than stellar behavior can really know what is going on. From the outside, all we usually have are partial pieces of the puzzle, i.e., from authors involved, reviewers involved, both, or from editors involved. After all, these people know things are a little dodgy and they sure as heck aren’t going to admit openly to their suspicious behavior. “Heck yeah, I got this paper to review and then had to sit on it until we could get our submission out”. Or, “Well, the reviews were pretty damning on this one but our competitors have related papers in acceptance so we’d just better take this one now”. But in many cases the casual reader can assemble some pretty damning circumstantial evidence.

Within the last month or so we had an all-too-familiar situation develop involving at least three of the very top journals. Let’s call them J1, J2 and J3. Also involved are a grand total of five articles (A1-A5) which all touch on a single scientific issue. The “received” dates for A1 (in J1) and A2 (in J2) are within a span of two weeks. Hmm, one of those interesting coincidences, no doubt. Those papers were then “accepted” for publication about 11-12 weeks later. So far, so good.

Where it gets interesting is looking at the submission dates for A3-A5, all in J3. The first “received” date was about 5 weeks after A1 and A2 were initially submitted and the last one was about 8 weeks afterwards. Well before A1 and A2 were accepted, to underline the point.

It gets better. A3 and A4 were accepted by J3 a few days before (A3) and after (A4) the range of dates over which A1 (J1) and A2 (J2) were found acceptable for publication; the poor authors of A5 had to wait another month.
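To make the timeline arithmetic concrete, here is a minimal sketch. The actual dates are not given in the post, so every number below is a hypothetical week offset, chosen only to match the relative spans described above (week 0 is simply A1’s “received” date):

```python
# Hypothetical week offsets reconstructing the spans described in the post.
# A1/A2 received within two weeks of each other; accepted ~11-12 weeks later.
received = {"A1": 0, "A2": 2,
            # A3-A5 (all at J3) received roughly 5-8 weeks after A1/A2
            "A3": 5, "A4": 6, "A5": 8}
accepted = {"A1": 12, "A2": 13,
            # A3 accepted a few days before the A1/A2 acceptance window,
            # A4 a few days after, A5 about a month later
            "A3": 11, "A4": 14, "A5": 18}

# The point being underlined: every J3 submission arrived well before
# either A1 or A2 had been accepted anywhere.
cutoff = min(accepted["A1"], accepted["A2"])
early = [ms for ms in ("A3", "A4", "A5") if received[ms] < cutoff]
print(early)  # → ['A3', 'A4', 'A5']
```

Under these (assumed) offsets, all three J3 manuscripts were submitted while A1 and A2 were still under review, which is exactly the coincidence at issue.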

What an amazin’ series of coincidences. Five labs generate manuscripts good enough for the very highest pinnacles of academic publishing on a single topic and happen to be all ready to publish within a span of two months’ time. Wow, what a coincidence. And it must be rare, right? Because if a given scientific topic is so obvious and pedestrian that 5 groups are working on it at the exact same time, uh, well, maybe it isn’t really a topic that reaches the standard of the glamor mags (er, top scientific journals)? [Insert Emperor/clothes snark here.]

Or. Just maybe. Is it possible that the authors of A1 or A2 got wind of the other being ready to submit and rushed one out? Or that they agreed in advance to co-submit (it happens, and this is a whole ‘nother issue. Short version: this is bad because it also contributes to the paranoid culture.)?

And do you think? Just perhaps? Maybe one of the authors on each of A3-A5 was asked to review the A1 or A2 manuscripts? Ya think? Naah. That would be a violation of ethics, wouldn’t it? Well, only if they gave it a different (i.e., bad) review than they otherwise would’ve given it, right? I mean, if they waited until the first round of reviews were in, saw A1 or A2 was going to be accepted and only then submitted their work, they’re clean, right? Um, no. What, like the paper was just sitting there all ready to go? Or did the rather large lab groups suddenly find that the PI had an inexplicable urgency to wrap up one of many ongoing stories in particular? And to shoehorn a not-quite-naturally-fitting series of experiments together?

And then we get to the behavior of the editorial team of J3. Completely clueless were they? Just amazed to get three submissions on this scientific topic within about a month’s time, weren’t they? “Gee” they thought, “This must be a hot new thing! We gotta get on it, man.” Really. Not one of the communicating or senior authors of A3-A5 just happened to mention, oh in passing really no biggie, that J2 and/or J1 had related manuscripts poised to be accepted?

As I said, we cannot know with any certainty what is going on from the outside. It could be a tremendous coincidence each time this sort of thing happens. Or, it might not be. And that is why it would be ever so helpful for Editors to state unequivocally that they will:

  • Not shade the editorial decision or timeline for that decision and/or publication one iota based on rumor or hard evidence that a topic-related manuscript is close to submission, under review or accepted at a competing (or the same) journal.
  • Ask very tough questions of authors should it ever appear even possible, nevermind likely, that a manuscript may have been rushed into submission following confidential knowledge of a competing paper due to peer review assignment.
  • Retract an offer of acceptance, or even a published article, should it develop that ethical peer review standards have been breached.

Alternately, they could just ‘fess up to their behavior and at least then we could have an honest discussion as to whether they indeed should be doing this.

21 Responses to “Circumstantial evidence”

  1. Biogeek Says:

    With respect to your specific example, I’m a little puzzled at the complaint. The premise is (if I have it right) that 5 groups are working on the same problem at the same time, and have roughly similar results (at least, the arrows are all pointing in the same direction – approaches, degree of proof and technical quality will vary). In this case, is it not a fair result (in the end) that all should be published at about the same time?

    I think editors at journals do their best to treat submissions fairly. They cannot act on the basis of information they do not have. If there is evidence that competing work is submitted elsewhere, editors will often try to accelerate things a little (as a service for authors), but by and large will not change their standards for publication. They cannot make decisions on the basis of rumors.

    Reviewers are asked to exclude themselves, if they have a conflict. Many will do this; some will not. Some might (as you suggest) try to delay publication of their competitor’s paper, while trying to accelerate publication of theirs (usually in a different journal). Good editors will try to discern such behavior, by using multiple reviewers, reading the reviews carefully from a scientific standpoint, and maintaining good ties to the field and community.


  2. […] 16, 2007 I’m not sure about the prevalence of DrugMonkey’s conspiracy theories about contemporaneous publication, but I do have a more general comment on “paranoia in […]


  3. CC Says:

    There was a hilarious parody poster at a Drosophila conference years ago, after Nature and Cell had each featured three simultaneous, identical papers on hedgehog phosphorylation, or something like that:

    We announce a new publication, the Journal of Hedgehogology. All submissions will be held until Gerry Rubin, Dan Kalderon, Susan Parkhurst, … generate identical findings to publish simultaneously.

    It went on for pages like that, with a job posting page at the end with identical ads from all the aforementioned PIs for postdocs to study hedgehog dephosphorylation.


  4. drugmonkey Says:

    “In this case, is it not a fair result (in the end), that all should be published at about the same time?…If there is evidence that competing work is submitted elsewhere, editors will often try to accelerate things a little (as a service for authors)”

    If, and this is an important if, the results are independently or openly competitively ready at about the same time, sure, your first point is apt. From a certain perspective anyway. You are, however, overlooking the larger point. Why is it important to “accelerate things…as a service for authors”? Because they need to be first (or roughly contemporaneous) or else the exact same paper is considered somehow reduced in status and importance. Right? This is idiotic from a scientific standpoint and I assert this sort of business is detrimental to science in a whole host of ways. Not least of which is that it actually discourages competition in a more general sense, discourages replication and extension in multiple labs, etc.

    “They cannot make decisions on the basis of rumors.”

    As I think I made clear, the problem is that we cannot know. Until and unless editorial staffs make clear and unequivocal statements on policy and practice. Not weaseling statements like “oh, well how can we know what is going on” when I know for certain that some submitting groups routinely attempt to use chivvying language about competitiveness when dealing with editors. Similarly, from a colleague who had a brief stint at one of the premier journals as an editor, they apparently received similar types of arguments on a frequent basis. Parts of the puzzle, parts of the puzzle. All anyone can do is ask for a clear statement that “no way, no how” would editorial process be influenced by such crap. And let us also keep in mind that those of us trained in the experimental psychology traditions are fully aware of inevitable influences on our behavior even when we are fully convinced we are not being “biased”. In other words, just because an editor says “Well no we didn’t alter our decision because of that alleged paper in CompetingJournalX”, doesn’t mean this is what happened.

    “Reviewers are asked to exclude themselves… some will not…Good editors will try to discern such behavior…maintaining good ties to the field and community “

    This is an issue I give the editors a pass on, actually. I don’t think they can do much about unscrupulous behavior in advance. Without totally circumventing the peer review process, that is. In my corner of the world, going by paper and grant review processes, it is very frequently the case that 1) you will have a minority voice in serious critique and 2) you will have a personality that is distinctly more skeptical and critical than the mean. In neither case is it always so that the “jerk” is in error. I also see a whole lot of firmly held beliefs that “We got reviewed by so-and-so and that lab is out to get us” without any evidence for this. The point being that editors would be unwise to cull reviewers from their pool based solely on outlier critiques or rumour of what an unscrupulous jerk some PI is.

    Until and unless there is some evidence, that is. So yes, when a hypercritical and slow review is followed by a suspiciously timed submission from said reviewer, then the editors had better start asking some hard questions.


  5. Biogeek Says:

    Thanks for your comments. I have a couple of responses. I’m not very formatting-proficient, so forgive if I paraphrase instead of quote the earlier discussion.

    About timing/precedence and scientific publication – I agree it can be seen as detrimental to science as a whole to take a “s/he who crosses the finish line first gets the lion’s share of the credit” attitude. As you know, this attitude leads to stressful, difficult races to be first to publish. And encourages ‘bad reviewer behavior’ of the type we’ve discussed.

    I think PhysioProf did bring up a good and salient point, that these races occur more frequently in certain areas of research (or certain timepoints in a given topic), and if one does not want to do one’s research this way, there are plenty of areas, topics, and times where these sorts of races do not figure in.

    I do disagree with your statement that this priority philosophy discourages repetition/validation of results. To the contrary – such validations/independent demonstrations are found all over the literature, but perhaps not at the super-top tier journals. That is, Cell-Science-Nature is not interested in publishing a result that was essentially shown 6 months ago in another paper. But another journal might be willing to take the paper if it is technically sound and of value to the field.

    Authors pleading with editors and/or trying to manipulate them using psychology – well, I am sure this happens. Scientists (and editors) are only human, after all. Scientists have access to certain information; so do editors. This information is overlapping but non-identical. This is just the way it is. As I said, editors can only act on the basis of what they know. Not what is rumor. At least that is my understanding of a fair process.

    In your second paragraph you are asking editors to come out and state that they are immune to being psychologically manipulated – this seems something of a contradiction, actually?

    Your last point – of course, minority voices need to be taken into account during review, but the context is important also. As I tried to convey in my earlier post, good editors will try to take all the contextual issues into account. The minority voice can sometimes be the one solid scientific opinion; or, it can be someone with some sort of personal axe to grind. Sometimes the same person can be in both roles on different occasions.

    “Asking hard questions” – I guess this is possible, but in my experience, editors for the most part are not so pro-active (reviewers are for the most part, after all, providing an essential yet unpaid and anonymous service).

    Also, if one really wanted to ‘misbehave’, one probably would not submit one’s ‘me-too’ paper to the same journal where one had stonewalled a competitor’s paper.

    The world of science is a pretty small one actually. I guess I am essentially an optimist about human nature – if a scientist or editor is behaving poorly, eventually (and it may take a while) it will all come ‘out in the wash’. I know this is of little comfort, when it was your paper (or potential faculty position or grant or whatever) that suffered as a result.


  6. drugmonkey Says:

    “there are plenty of areas, topics, and times where these sorts of races do not figure in.”

    True. Just as there are ways to do good science and have minimal exposure to scientific fraud. So why bother, right? Well because I believe in the process of science, not just as a vocation but as a citizen of this planet. So I have an opinion. Not to mention I pay taxes which support the NIH. I hate the culture that keeps lots of data that I’ve paid to generate unpublished because it isn’t “hot” enough to support the obligatory C/N/S paper. The culture that delays the generation of knockouts of a series of related proteins because Lab A thinks that Lab B surely must be working on it so why would Lab A bother to get into that area, just to get out-raced. That squeezes out applicable and relevant research so that we can spend millions generating internally consistent but unbelievably artificial data that may indeed have zero application in an intact organism or a human.

    “you are asking editors to come out and state that they are immune to being psychologically manipulated – this seems something of a contradiction, actually?”

    sorry, that one got a bit garbled. The point is that first we need the statements and official policies that editors are to resist manipulation. Second, we need to recognize that editors are human and go even further to proactively reduce the opportunity for unconscious swaying of opinion. This might mean banning all pre-submission interactions, the source of much editor-manipulating behavior. It might mean banning all mention of competing publications from the cover letter. The point being that just like the journals go out of their way to try and ensure anonymity of review, they could go out of their way to reduce influences unrelated to the quality and topic of the manuscript at hand.

    “editors for the most part are not so pro-active”
    Right, and I object to this. And I object even more strongly to editorial page tut-tutting in situations where the editorial behavior is explicitly contributing to a given “problem”. See Nature bemoaning the impact factor, for example! Flaming hypocrites.

    Editors make similar excuses. With science fraud and the type of shenanigans I’m discussing they are either being disingenuous or have their head in the sand to an extent one wonders if credulous idiots like that have any place editing C/N/S in the first place.

    “I guess I am essentially an optimist about human nature”

    And I would describe myself as a student and observer of human behavior. Informally and formally as well. The “optimism/pessimism” axis is a theological structure of belief which is fine and all but doesn’t get down to the facts. No matter what one’s hypothesis might be, wouldn’t it be better to know what interpretations the data actually support?


  7. Biogeek Says:

    OK, this is getting a bit long in the tooth, but one more set of replies…

    Re: ‘manipulation’ of editors – what you term as ‘manipulation’, publishers/editors might characterize as “service to authors”, and “ties to the community”. It’s a competitive world out there for journals, too. One way journals try to distinguish themselves, is by being responsive to author requests. I guess the idea is that this makes the journal a more desirable place to send your paper. Presubs, and the aforementioned “acceleration”, would fall into this category of ‘service’. Given the current culture(s), I think it is going to be difficult to change to the “reduce unrelated influences” model you suggest. Science is a human enterprise, after all.

    As an aside, yes I agree that Nature can be seen as hypocritical, for touting the impact factor when it favors them (cf. the “No Nature, No Impact” ad campaign of a few years back), and pooh-poohing it (editorials) when it does not. They are not the only ones guilty of this however. Again, I refer you to the “science as a human enterprise” statement.

    Editors as potential “credulous idiots” – NB this is probably something else you want to leave out of your cover letter! On a more serious note, editors rely on expert reviewers/advisers as well – by definition they can’t be insiders/well-versed in everything they read. Editors can’t be policemen – when it comes to bad behavior, the science community has to take responsibility as a whole, as well.

    Optimism about human nature vs. an impartial observer – OK, you have me on this one. I admit I made a spiritual statement, and not a scientific one. Even in the world of science and scientific publication, I would prefer to exist in a humanistic environment, rather than a rationalist purely objective one.


  8. Noah Gray Says:

    All I can say, as an editor of a prestigious journal, is that two simple things keep this very subjective practice of reviewing more-or-less in line ethically. First, authors of manuscripts request reviewer exclusions, which are always respected by Nature Neuroscience. There are always enough reviewers for a study (although we only allow 5 exclusions). Second, plenty of reviewers excuse themselves from reviewing so as not to embark on a “conflict of interest” situation. Do you really think that authors of a rejected study from our journal would sit tight when they see a second, but identical study published in an equivalent journal, or even in our own pages? Those authors would be clawing at our throats in a flash. The fact that this only happens rarely suggests to me that reviewers are being more honest than not when choosing to review manuscripts. If and when we hear otherwise, the consequences to the perpetrator would be severe. Since it is not worth getting banned from all Nature Publishing Group journals, reviewers explicitly state that they do not want to be placed anywhere near that position.

    Regarding submission/acceptance dates, these are much more subjective than you think. At Nature journals, manuscripts are usually considered “dead” unless they are going to be imminently published. At Nature Neuroscience, there are rarely papers with submission/acceptance dates that differ by more than 8-12 weeks. This doesn’t mean that we didn’t see the paper 6 months before the submission date, because we probably did. We then rejected it, but invited back a re-submission. Therefore, the REVISION comes in with a new submission date, much later than the original. That could account for plenty of your paranoid descriptions harping on dates. Please don’t give them that much credit…the error built into those dates is staggering, making them virtually meaningless within a distance of 6 months.

    Regarding the encouragement of bad behavior, if the research community communicated to the top journals that they value confirmatory studies equally with the original, then we would publish them more often. That is not what we are told in reviews and when soliciting feedback from the community. The community wants to see novel studies published, and only accept novel studies when reviewing. If you really think that top journals accept papers that are flawed (as detailed by a reviewer) just because it is an interesting topic, you are truly delusional. Just think about that one for a minute. How would the journal ever keep its reputation? Do bad papers get through due to mistakes by editors, reviewers and authors? Of course. Does it happen often? No. So please chill out and stop scrounging for the few examples that do exist. Try looking at the avalanche of papers that do not seem to have significant problems and feel a little bit better that an inherently-flawed subjective system actually seems to work.

    Based on your comments, you seem to have had a lot of experience dealing with the editors of several journals. I am sorry that you have not had good interactions with them and do not believe them to be proactive. Perhaps you should try submitting some of your work to our journal to compare your treatment to what you have received from Cell and Science, for example. The shenanigans that you describe do not play a role in my decisions, but since you know better, perhaps I am due for a little self-reflection…

    As a quick disclosure, I can only comment on the objectiveness, professionalism, and practices of the journal for which I work. Attempting to speculate on the editorial practices of others based solely on dates, circumstance and suggestion would not only be irresponsible, but unprofessional.

    Getting back to the main point of the original thread, I understand all of this paranoid behavior, I see the value of secrecy in research, and I understand the position that high impact journals play in perpetuating this paranoia, but, indeed, I am saddened by its necessity.


  9. Biogeek Says:


    About journals and selecting a “submitted:” date for papers – you state that these are “virtually meaningless” – however as I think is evident from the original post, they are taken quite seriously by the scientific community. Maybe some reconsideration (or more explicit clarification) is in order?

    The editor’s task is a complex one – good luck to all.



  10. Noah Gray Says:

    I stated that they are meaningless if one tries to decipher when the study was actually started. One group may have worked for years on something, but actually submits a study AFTER another group who had been working for 10 months on the same study. Neither the editor nor the reader ever really knows when the studies were started, if one group scooped the other, etc… we just evaluate what comes in or read what gets published. That is why it is important to take these dates with a grain of salt.

    Many journals have different policies with these dates, so how seriously can you take them when determining who did what first? And again, although I stated above that we give the community what it wants (emphasis on the first report), I’m not saying that there isn’t room for compromise, nor am I saying that this is the best philosophy. As I stated in the previous thread with Drugmonkey, it is impossible for the editor to police these issues regarding who scooped whom or who currently has the best paper describing a particular finding. If they come in together, we treat each individually and if both studies are up to our standards, we publish both. If one is not, we publish one. It is pretty simple.

    If the community changes its mind, I am happy to drop novelty from the criteria required to publish in top-tier journals. But with so many solid publications serving each individual field, there are plenty of opportunities to publish confirmations of previous research elsewhere.


  11. Biogeek Says:

    With all respect Noah, I think you are talking past DM’s point a little. While of course no-one can know when the study was started, the issue is the “submitted” date, which the scientific reader takes to be the first date the journal saw the paper. And DM’s original point was (I think) that journals are altering their evaluations of papers based on what they have already seen/have in the system (either nefarious editor, or manipulating reviewer/author).

    I won’t comment on shenanigans (as I’ve stated, people can only know what they know, and I agree editors can’t be police), but again you’ve stated that DM should “take these dates with a grain of salt”, and DM’s entire issue (at least initially), was in fact based on a “close reading” of these dates. Which is why the topic might benefit from further clarification (beyond what you’ve done in these blog comments), IMO.


  12. Noah Gray Says:

    I’m not sure what you are looking for. I just told you that Nature journals “submitted” dates do not reflect the first time we see the paper. Once rejected, a manuscript is considered dead. If it comes back as a revision, it has a new submission date.

    Given that information, and as I mentioned, the fact that not all journals follow the same policies regarding this point, how can one meticulously compare submission dates and make a close-reading of them???? As you state, if that is what Drugmonkey’s analysis comes down to, then that part needs to be discarded as it is based on information that is simply not complete. Again, one can’t split temporal hairs with manuscripts that have submission dates within 6 months or so of each other without taking a serious risk that those comparisons will be flawed. What more can I say??

    In Drugmonkey’s opinion, journals are altering their evaluations based on rumor and what is currently in their systems. Journals know absolutely nothing about what is being considered elsewhere and, besides with the reviewers, it is completely forbidden to discuss papers under consideration with anyone, let alone another journal. Anyone violating that is breaking a serious ethical code. That is why such accusations are so serious. Circumstantial evidence is never good enough to make such claims. Conspiracy theories based on submission dates and speculation bore me.


  13. drugmonkey Says:

    Noah, throughout this and the prior discussion you have made your position fairly clear and for that I thank you. I am also quite willing to believe that Nature Neuroscience maintains a purity of process consistent with your stated personal editorial approaches on your say-so since none of my examples so far concern your journal.

    I am most emphatically NOT trying to suggest that every paper published in the glamor mags has been tainted by an ethically dubious process. What I AM trying to suggest is that it DOES happen and more often than just once in a blue moon.

    “Journals know absolutely nothing about what is being considered elsewhere “

    um, no. Sorry, but through one-degree relationships with people on both sides of this, people whom I trust implicitly and who had no particular reason to lie or even embellish, I know this is false. Authors tell editors all kinds of stuff trying to influence them. The question is, what impact does an author statement have and what is a given journal doing policy-wise to combat subtle or overt influence.

    “besides with the reviewers, it is completely forbidden to discuss papers under consideration with anyone, let alone another journal.”

    it happens. besides, the “discussion” part is completely irrelevant to the question of when the PI of the giant lab suddenly decides to get really interested in some languishing project for no particular reason. or shoehorns in an extra (yes related, but not that related) figure to an existing story.

    “Conspiracy theories based on submission dates and speculation bore me.”

    Be that as it may I think you have your head in the sand. It should take no genius to think that, just perhaps, my points are based on just a leeeetle more than the observation of submission dates. And on a little more than a paranoid nature. not all of the evidence is really fodder for this particular venue. The point is whether the “field” speaks up and says “gee, DM, I’ve never heard of any such thing myself you’re hitting about a 4.8 on the DMS” or speaks up and says “uh, yeah, I’ve heard or experienced something similar myself”.


  14. physioprof Says:

    Well, I think the type of situation DM is talking about is very rare. Does it happen? I’m sure it does. But not enough to worry about. I think DM underestimates the frequency of situations where a field is “pregnant” with a question, result, or approach. Under such a circumstance, it is not at all surprising that multiple papers come out contemporaneously with similar findings.

    And my experience is that Noah is spot on when he asserts that editors of the high-impact journals really do allow their reviewers to drive the bus. I am aware of situations where I am certain that an editor at C/N/S really wanted, on the basis of her own opinion about a paper, to see the paper published, but simply couldn’t do it in light of the reviews. Editors’ credibility in the scientific community depends in the long run on their respect for and obeisance to the opinions of their reviewers.

    Where editors do have a major influence is not on the disposition of papers reporting particular findings, but rather on the fields and subfields that they choose to emphasize by sending more papers in those fields and subfields out for review. For example, it is clear that in Neuron and Nature Neuroscience, there is a de facto minimum quota for certain subfields in which there is at least one paper published *every* issue. This quota arises out of the review-versus-don’t-review filter, but not out of the decisions on particular papers.

    Another interesting sociological phenomenon is the extent to which reviewers who are experts in a particular field/subfield–and thus presumably also authors in that field/subfield–can leverage off editors’ decision to send out papers in that field/subfield for review. Reviewers can review their fellow field/subfield denizens’ papers favorably, and thereby gain market share for their field/subfield, or they can beat the shit out of each other. Of course, editors can recognize this kind of log-rolling, if they are savvy enough, and compensate for it, but it definitely has an influence.

    For example, (and I will probably piss a bunch of people off with this), in my opinion, the field of functional imaging of human brains is much bigger–in terms of papers published in high-impact journals and NIH funding expended to support it–than the real insight it delivers into brain function merits. This is, again in my opinion, because the players in this field who review each other’s papers and grants puff each other up, instead of kick each other in the balls. And also because of the decisions of editors to send a lot of these papers out for review.


  15. Biogeek Says:

    wow, another nice post PhysioProf. I agree with a lot of what you have said here:

    Editors cannot overrule ALL the reviewers – by and large very true.

    A subfield can “log-roll” their papers into higher-profile journals – true. But the editors’ job is to make sure papers in the journal represent the whole spectrum of the scope, and that decisions are made in a fair and consistent fashion.

    Back to the original point, DM’s ‘situation’ – it is reasonably rare I believe, but as you (PP) have mentioned, it varies by field to some extent – not only the do-ability of the experiments from scratch, but also to some extent the personalities in the field.

    About fMRI etc – everyone is entitled to their opinion! But in a peer-review system, the more peers you have on your side…


  16. drugmonkey Says:

    “And my experience is that Noah is spot on when he asserts that editors of the high-impact journals really do allow their reviewers to drive the bus…Where editors do have a major influence is … on the fields and subfields that they choose to emphasize by sending more papers in those fields and subfields out for review..”

    So which is it? This would be the perfect reply to Noah’s comment that it is the “field” driving the novelty/hotness priority scheme. There is an extent to which this gets very circular.

    “Reviewers can review their fellow field/subfield denizens’ papers favorably, and thereby gain market share for their field/subfield,”

    I agree, but the way you phrase it makes it sound too specific. It is a more broad-based thing. As in “the paper has to contain this list of experimental techniques applied to the question or it is ‘insufficiently mechanistic’ or ‘insufficiently biological'” or some such. So if you have a gene of interest and you go from cells to manipulated mice including behavior, and toss in some new array-platform (no, simple Affymetrix doesn’t cut it anymore) data, well, you are in like Flynn. Doesn’t matter if some of the components (usually the behavior) are demonstrable crap, nor that you have to retract that “mistaken” figure of cut-and-pasted random blots at a later time.

    The labs that can put together these wide-ranging efforts don’t have to slap their actual subfield competitors on the back. They don’t really have to be all that creative or interesting or applicable. All they have to do is ensure that they are only competing with similarly technologically capable labs, and that’s half the battle. Throw in a lot of breast beating about how it is really morally superior to “work on basic science” and you head off any boys in the crowd pointing out that your artificial-ass system can’t possibly be relevant, say, to human health. “What are you talking about”, the crowd replies, “I can see those fine purple robes on the Emperor, what’s the matter with you?”


  17. physioprof Says:

    I know it is a little bit scary, but could you be specific about a particular example of something like this? You have described this kind of scenario before, but always very abstractly. I continue to not be able to get a satisfying sense for what you really mean.


  18. […] Topics that I plan on addressing here definitely include politics, media, academia, and other blogs. Politics I may address if I feel the urge include sports, food, and anything else that tickles my fancy. As far as academic issues relating directly to biomedical research goes, I will continue to post my thoughts in that area as a co-blogger at DrugMonkey. […]


  19. […] Or….a really, really cynical person might see this as a brilliant power move to ensure that your most-likely competitors would not be trying to scoop you nor even be able to edge their way into one of those lame co-publication games. […]


  20. […] potential competitors into collaborators is discussed here, here, here and here.  See also this  post and this discussion on the caveats of peer review and possible danger of scooping (with focus on […]



