A thought exercise for readers
November 11, 2011
Consider the last few manuscripts you reviewed.
How many labs could credibly have submitted each of those manuscripts if you didn’t know who the authors were?
Assume they had been written without any “as we previously showed” type of language, but with the citations otherwise unchanged. How confident would you be in your bets as to lab identity?
November 11, 2011 at 8:52 am
I guess I’m lucky. In my field, new people are entering all the time. I’ve never known the authors, whether the submission was blinded or not.
November 11, 2011 at 8:54 am
Is this part of some debate you’re having on blind submissions?
My field mostly has a blind submission rule, so I can say with 100% confidence that for the majority of papers I review, I would have a pretty slim chance of accurately guessing where they came from. It’s generally all or nothing: there are a few papers for which I can guess with 90+% confidence, and then the majority of papers for which I could only guess with 15% or less confidence.
On the flip side, I think that the reviews I get on my unblinded submissions are much nicer and friendlier than the reviews I get on my blinded submissions.
November 11, 2011 at 9:07 am
In my field, I would say you could be 75% sure of who it is. And then of course immediately reject the paper!
November 11, 2011 at 9:11 am
Say, the last four papers:
– No chance, would never have guessed it, don’t know them: 2
– I know the authors, but would not have guessed them correctly: 1
– 30%-50% probability of guessing correctly: 1
November 11, 2011 at 9:15 am
In my field (CS), I’m only aware of one formal study where this was tested. People claimed the ability to predict authorship upwards of 80% of the time, but when tested they were accurate less than 50% of the time, if I remember correctly.
November 11, 2011 at 9:17 am
I reviewed two papers this week. One would have been obvious, because it’s a follow-up to something that only one group is working on. The other would have been suspicious, because everybody in the field is still using home-brew stuff, so the references in the methods section would have given it away even if “using our previously-established tool [reference]” were replaced with “using a previously-established tool [reference]”…
November 11, 2011 at 9:24 am
Research in some selected fields shows that double-blind review favors women and moves those fields closer to gender parity, independent of reviewers’ confidence in their identification of the authors…
Budden AE, et al. (2008) Double-blind review favours increased representation of female authors. Trends Ecol Evol 23:4–6.
November 11, 2011 at 9:27 am
Is this part of some debate you’re having on blind submissions?
Of course 🙂
My field mostly has a blind submission rule, so I can say with 100% confidence that for the majority of papers I review, I would have a pretty slim chance of accurately guessing where they came from.
I am curious as to how they pull that off. I just don’t see how this could work in my field.
People claimed a prediction rate ability of upwards of 80% but when tested they were accurate below 50% of the time
I can see that. But even a 25% hit rate could be huge if it was not randomly distributed with respect to exactly the factors that blinding is supposed to address. I.e., are bigger, more established labs still easily identified? Also, if the issue here is type bias instead of specific bias, the question might be whether you hit on “one of three big labs” rather than the exact right one.
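To make that arithmetic concrete, here is a minimal sketch; the lab counts and per-tier identification rates below are invented for illustration, not taken from any study:

```python
# Hypothetical numbers: an overall 25% identification rate can hide
# near-certain identification of exactly the labs blinding is meant to protect.

# tier: (number of submissions, probability a reviewer correctly IDs the lab)
tiers = {
    "big, established labs": (20, 0.90),
    "mid-career labs":       (30, 0.20),
    "new or unknown labs":   (50, 0.02),
}

total = sum(n for n, _ in tiers.values())
overall = sum(n * p for n, p in tiers.values()) / total
print(f"overall hit rate: {overall:.0%}")                # 25% in aggregate...
for tier, (n, p) in tiers.items():
    print(f"  {tier}: identified {p:.0%} of the time")   # ...but 90% for the big labs
```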
The other would have been suspicious, because everybody in the field is still using home-brew stuff,
This is one of my biggest objections. Even with the “standard” behavioral assays in drug abuse research (think self-administration, conditioned place preference, drug discrimination, locomotor activity…), the *way* someone does it, the preferred rat strain, various niceties of the design… these methodological factors form a laboratory fingerprint.
November 11, 2011 at 9:33 am
I just did the experiment. One of five papers was a dead giveaway that couldn’t have been anonymized no matter what; one I wouldn’t have guessed offhand but would have been able to figure out if I’d tried (and it would also have been difficult to anonymize, because of the specificity of the question); but three of the five could have come from any of a few dozen labs.
November 11, 2011 at 9:38 am
Despite that fingerprint, I still think that blind review is worth having. It might not work in all cases, but it will help in at least some cases, and I don’t see where it will hurt.
Also, we might be able to figure out which lab a paper is from, but how often will we figure out which student or postdoc did it? To whatever extent reviewer bias responds to characteristics (race, gender, etc.) of the first author rather than the PI, why not leave the first author a mystery?
November 11, 2011 at 9:51 am
I think the reverse issue is more problematic: anonymous reviewers who feel free to trash with impunity results that upset their scientific applecart.
November 11, 2011 at 10:07 am
It might not work in all cases, but it will help in at least some cases, and I don’t see where it will hurt.
It would hurt if many of the suspected “bads” of unblinded review were true and the distribution of failed-blinds was not random.
Suppose one of the knocks is that larger, established labs get kid-glove reviews, out of either respect for, or fear of, the lab. The better known you are, the more likely blinding fails. So the big labs are unaffected, while the lesser-known folks lose whatever smidge of benefit comes with the degree to which *they* are known.
Suppose someone tries to game the blinding system by writing a manuscript in a way that hints at their identity… or tries intentionally to mislead the reviewer into assuming it came from a different lab. You would have all kinds of undue influences at work. Maybe that’s good and would reduce reviewer confidence about their ID of the lab… but if the abovementioned overconfidence is real, one suspects not. There is an argument that it is better just to have the authors known, instead of pushing the bias off onto “well, the identification is inaccurate, so that makes it okay”.
November 11, 2011 at 10:14 am
I would say 50-75%; in some cases you can be 100% sure, because of what someone is working on and what specialized techniques their lab employs. I think the point is that there really can be no completely blinded review of manuscripts.
November 11, 2011 at 3:46 pm
2 of the last 6 I reviewed. I think bioinformatics is less specialized because the start-up costs for striking out in a new direction are low.
November 11, 2011 at 5:07 pm
Re drugmonkey’s last point: Ah, but I think there is a safety factor built in, because even though people might be more likely to guess accurately the identity of a paper from Famous Lab X, they will also be more likely to (incorrectly) attribute Unfamous Lab Y’s manuscript to Lab X, because Lab X is the only one that comes to mind. I.e., I will sometimes guess correctly that a manuscript comes from the biggest name in the field, but I will never guess that a manuscript comes from someone I’ve never heard of, because… I haven’t heard of them to guess that.
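That intuition lends itself to a toy simulation. A minimal sketch, assuming an invented world of three famous labs and many unknown ones (the counts and the 0.8 recognition rate are made up for illustration):

```python
import random

random.seed(1)

# Hypothetical setup: 3 famous labs everyone knows, 97 labs nobody has heard of.
famous = [f"FamousLab{i}" for i in range(3)]
unknown = [f"UnknownLab{i}" for i in range(97)]

def reviewer_guess(true_lab):
    """Reviewers can only name labs they have heard of (the famous ones)."""
    if true_lab in famous and random.random() < 0.8:
        return true_lab            # famous labs are recognized most of the time
    return random.choice(famous)   # everything else gets pinned on a famous lab

papers = famous * 10 + unknown     # assume famous labs also submit more papers
hits_famous = sum(reviewer_guess(lab) == lab for lab in papers if lab in famous)
hits_unknown = sum(reviewer_guess(lab) == lab for lab in papers if lab in unknown)
print(f"famous-lab papers correctly identified: {hits_famous}/{len(famous) * 10}")
print(f"unknown-lab papers correctly identified: {hits_unknown}/{len(unknown)}")
# Unknown labs are never correctly identified, but they *are* mistaken for
# famous labs, which is the safety factor this comment describes.
```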
November 12, 2011 at 3:50 am
I am for the way physics and math articles were published in the past: you send your article for review to 2 fellow scientists you know to be real experts in the field (not friends who owe you favors). After getting their feedback you modify the article, perhaps do one more experiment (not add 10 supplemental figures). You then send the article and the reviews to a respected journal. The editors decide whether the article is worthy, based on their own knowledge and the reviewers’ comments/assessments. If the article gets published, it includes the names of the reviewers. This way, when you read a questionable article, you do not have to ask yourself “who in the hell reviewed this piece of junk and let it through without the critical controls?”. And when you review an article, you do it carefully, because your reputation and credibility are also at stake.
November 12, 2011 at 4:54 am
distribution of failed-blinds was not random.
If the paper is in my real area of expertise, then I can probably narrow it down to 3 or 4 groups. The more of a record a person has, the more I know their lab’s general MO. The bigwigs have known patterns.
Even though we sometimes do blind submission, people aren’t that careful about not dropping clues. Take a look at who they reference and you can generally get a good idea.
November 12, 2011 at 1:49 pm
Read some EMBO J “Review Process” files; my guess is that unblinded review helps bigger labs publish turdier studies that get waved through (the one-paragraph “just publish” reviews).
My field is big enough that the authors would be super obvious for only a minority of papers, not for most of them.