Analysing Gender Bias in Peer Review at J. Neurophysiology

May 5, 2009

The Journal of Neurophysiology is reporting an analysis of peer review outcomes for a sample of manuscripts submitted for review in the first half of 2007. Major kudos to them for being concerned enough to conduct such a self-analysis.

The data set comprised 713 submissions. Of these, 7 were rejected by the Associate Editors without review and 12 were withdrawn by the authors following one or more rounds of review. At JN, Associate Editors make the final decisions in the review process; there is no mandatory consultation with the Chief Editor.
The data set consisted of the following entries for each manuscript: first author gender, last author gender, Associate Editor gender, referee gender (for each referee), first decision index, and final decision accept rate. Gender determination for authors and referees was confirmed by photographic web search. The small group of transgendered scientists known to us was scored according to their self-identified gender rather than their chromosomal sex. Of the 713 submissions, 13 were single-author papers and we scored those by entering the single author as both first and last.

Editor Linden was kind enough to send me a note about the editorial so I’m assuming he won’t mind too much if I give away the punchline (which is behind a paywall, I think).

Of the 713 submissions in the data set, there were 191 submitted with women as first authors. These received a first decision score of 3.84 ± 0.05 (mean ± SE) and a final accept rate of 43.4%. The 522 submissions with men as first authors received similar evaluations: a first decision score of 3.83 ± 0.05 and a final accept rate of 42.5%. Submissions with women as last authors numbered 120 and they received a first decision score of 3.87 ± 0.10 and a final accept rate of 42.5%, whereas the 593 submissions with men as last authors had a first decision score of 3.82 ± 0.04 and a final accept rate of 42.8%. Not much difference to speak of.
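For anyone who wants a quick sanity check on those numbers, a standard two-proportion z-test on the first-author accept rates bears out the "not much difference" reading. Note the accept counts below are reconstructed from the reported percentages (43.4% of 191 and 42.5% of 522), so treat them as approximate:

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Accept counts reconstructed from the reported rates (approximate):
# women first authors: 43.4% of 191 ~ 83 accepts
# men first authors:   42.5% of 522 ~ 222 accepts
z, p = two_prop_ztest(83, 191, 222, 522)
print(f"z = {z:.2f}, p = {p:.2f}")  # z = 0.22, p = 0.82
```

A z-statistic of about 0.2 is as close to "no difference" as real data gets.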

Lane and Linden conclude with an invitation:

There are many possible ways to parse and analyze this data set. You may download it in either Excel spreadsheet or tab-delimited text form to perform your own analysis using the “Supplementary Data” link on-line. Furthermore, you are encouraged to comment on this editorial or post your own analysis of the data (statistical analyses, which we have avoided, are welcome) in a moderated forum using the “Submit a Response” feature on the on-line article page.

Nice. I’m happy to see an editorial team that cares to know their own performance on such measures. Presumably, had differences been found, they would have sought to improve for the future. One hopes that they are also able to put an easy tracking system in place, although I suppose the only way to do that is self-identification of gender during submission. And authors might be leery, even if they pointed to this editorial as their reason.
What think you? Would a check box for first and last author gender be a bit off-putting? How about if it had a little help link to tell you that they were keeping oversight of possible bias in review and acceptance? Perhaps if there was a way to keep it under cover until decisions had been made? (Hmm, that makes me wonder about the outcome when gender was readily detected from the names versus not?)

17 Responses to “Analysing Gender Bias in Peer Review at J. Neurophysiology”

  1. neurolover Says:

    What I’m noting is the low % of women first authors/last authors, 26% for the first, and 16% for the second. I guess we might kind of expect the second, given that we know that there aren’t many women PIs. But why the low % of women first authors? Is it a field difference (i.e., physiology grad students aren’t 50% women), or are female grad students not submitting first-author papers?
    And, yes, I like that they looked at this data, and made it available. I look forward to more analysis.


  2. Alex Says:

    Good to see this data with negligible differences, but I have heard of studies involving double-blind peer review, and those studies have produced some gains for female authors. I don’t know which fields those studies focused on, however.


  3. neurolover Says:

    “good to see the data with negligible differences”
    I agree, but I hope there’s more analysis (yes, I might try my hand at it). The specific hypothesis I’d like to test is one based on John Dovidio’s work: his work suggests that aversive bias (he studies racism) comes into play when the decision is based on ambiguous data. So, no effect for either the clear accepts or rejects, but an effect in the middle, where decisions could go either way. I’m not sure that analysis can be done with the JNP data, but a first step might be to look for an interaction between score, gender, and outcome. If, for example, one only looked at papers that received “middle” scores, would there be a gender imbalance?


  4. DSKS Says:

    “But, why the low % of women first authors?”
    That caught my eye, too, particularly the first author tally. I would have thought that currently the proportion of grads and postdocs that are women must be nearing 50%. Although, this could well be the result of gender-related differences within fields, or perhaps even in the choices of preferred journals for publication.
    Are some journals perceived, rightly or wrongly, as more gender neutral than others? e.g. JBC strikes me as a somewhat androgynous periodical, whereas otoh a journal like Pflugers Archiv. has that distinct air of facial hair, pipe smoke and elbow pads. Probably just me, though.


  5. Nat Says:

    I think it’s totally awesome that the editors at JNP did this, and then put the data set out there for people to analyze six ways from Sunday.
    It is surprising that there were so few female first authored papers; what’s up with that?


  6. msphd Says:

    Hmm. I have a feeling your last point is the most relevant one: when the gender is obvious from the name, and when it is not.
    I don’t know anything about this journal, though. What is the breakdown of gender among the editors? What about reviewers? We don’t need to know their names, of course, but what if we had a checkbox for reviewers to share their gender for sociological purposes?
    I might spend some time looking at this data set.
    The other possibility, which is much harder to quantify, is that the reason there are fewer women first authors is because our advisors block and delay us at the stage of initial manuscript submission.
    It would be interesting to know how these numbers add up with, as DSKS points out, the actual % of junior (i.e. first-author age) women in this sub-field. While it is about 60% in grad school and 40% in postdoc in biological sciences as a whole, it varies widely among the different sub-fields (and even within some sub- sub-fields).
    And I agree, that certain journals (PNAS comes to mind – with 100% male editorial board in most fields) might be worse than others.



  7. CPP Says:

    The other possibility, which is much harder to quantify, is that the reason there are fewer women first authors is because our advisors block and delay us at the stage of initial manuscript submission.

    This is completely absurd. Post-doc mentors have as much incentive to publish in a timely fashion as do post-docs. There certainly is gender bias in the biomedical sciences, but post-doc mentors selectively “blocking and delaying” manuscripts with women first authors isn’t one of its forms.


  8. Alex Says:

    CPP-
    I don’t know that any advisor has an incentive to deliberately block a manuscript just for shits and giggles, but surely you must be open to the possibility that an advisor with conscious or unconscious biases might be more skeptical of a female student’s data, or be more critical of her writing, slowing the submission process.


  9. whimple Says:

    I’d like to see if the relative lack of women first authors is reflected as a relative surplus of women as middle authors.



  10. CPP Says:

    I’d like to see if the relative lack of women first authors is reflected as a relative surplus of women as middle authors.

    That is much more likely than “blocking/delaying” women first authors.


  11. neurolover Says:

    “I’d like to see if the relative lack of women first authors is reflected as a relative surplus of women as middle authors.”
    The data to analyze this isn’t in the database (no middle author info).
    But, an interesting tidbit: if you select only female 2nd authors, first authors is 49% female. For male 2nd authors, the first author is 22% female.
    (The data, BTW, is not behind a subscription wall)


  12. neurolover Says:

    urgh, correcting grammar
    Female 2nd authors: first authors are 49% female
    Male 2nd authors: first authors are 22% female
    (there were no authors who were only 22% female, since, as we know, transgendered individuals were identified as their self-identified gender).



  13. DrJ Says:

    I breathed an ignoble sigh of relief at the finding that women were no more likely to accept a paper than men were (if anything the other way around). I was anxious about the possibility that women were “softies” (or at least that they would look like such, from the dataset). Also, kudos to the editors for doing this and making the data available. Would that many another publication did the same.
    CPP, you’re flat out wrong that all post-doc mentors have the same urge to publish in timely fashion as post-docs do. The larger and more successful the lab, the more likely that the PI can afford to drag feet on a lower-profile or harder-to-grapple-with paper. I’m not saying that all of them do, but your assertion that none of them do is rather silly.


  14. Bob O'H Says:

    Hmm, that makes me wonder about the outcome when gender was readily detected from the names versus not?

    We know double-blinding has no effect on the proportion of female authors (see the references in my post). This used data where the researchers used the authors’ names to assign gender, so it addresses this point fairly directly.


  15. Nat Says:

    The larger and more successful the lab, the more likely that the PI can afford to drag feet on a lower-profile or harder-to-grapple-with paper.

    Word up DrJ.


  16. neurolover Says:

    “The larger and more successful the lab, the more likely that the PI can afford to drag feet on a lower-profile or harder-to-grapple-with paper.”
    Precisely the kind of paper that tends to be submitted and published in JNP.
    Another interesting tidbit — JNP basically doesn’t seem to have any “ambiguous” papers: 95% of papers initially assigned a middling score (3) are eventually accepted, while


  17. neurolover Says:

    while less than 1% of the papers initially assigned 4/5 are published (the symbol messed up my text)


