Medical journal editor: "Don't show me data, I know what I know"

October 1, 2009

My readers will recall that I have long expressed difficulty crediting assertions that assistant professors are poorer reviewers of grant proposals.
A recent news bit in NatureJobs describes a study (judging from the wording, I think it was a conference presentation) of paper-review quality as a function of years spent reviewing.

Michael Callaham, editor-in-chief of the Annals of Emergency Medicine in San Francisco, California, analysed the scores that editors at the journal had given more than 1,400 reviewers between 1994 and 2008. The journal routinely has its editors rate reviews on a scale of one to five, with one being unsatisfactory and five being exceptional. Ratings are based on whether the review contains constructive, professional comments on study design, writing and interpretation of results, providing useful context for the editor in deciding whether to accept the paper.
The average score stayed at roughly 3.6 throughout the entire period. The most surprising result, however, was how individual reviewers’ scores changed over time: 93% of them went down, which was balanced by fresh young reviewers coming on board and keeping the average score up. The average decline was 0.04 points per year.
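That balancing act is easy to sanity-check with a toy simulation. To be clear, only the 0.04-point annual decline comes from the study; the starting score, pool size, and turnover rate below are invented for illustration. Every sitting reviewer's score falls each year, yet the pool average settles at a nearly constant level once retirements are backfilled by fresh reviewers:

```python
# Toy model of the reviewer-pool dynamics described above. Only the
# 0.04/yr individual decline is from the study; the starting score,
# pool size, and 10% annual turnover are assumptions for illustration.
START_SCORE = 3.65
DECLINE_PER_YEAR = 0.04
TURNOVER = 0.10
POOL = 1400

# Each reviewer is tracked as [current score, years served].
pool = [[START_SCORE, 0] for _ in range(POOL)]

means = []
for year in range(15):  # roughly the 1994-2008 window
    for r in pool:  # every sitting reviewer's score drifts down a little
        r[0] -= DECLINE_PER_YEAR
        r[1] += 1
    # The longest-serving reviewers rotate off; fresh ones join at full score.
    pool.sort(key=lambda r: r[1], reverse=True)
    n_new = int(TURNOVER * POOL)
    pool[:n_new] = [[START_SCORE, 0] for _ in range(n_new)]
    means.append(sum(r[0] for r in pool) / POOL)

print(f"pool mean, year 1:  {means[0]:.2f}")
print(f"pool mean, year 15: {means[-1]:.2f}")
```

With these numbers the pool mean drifts only slightly before flattening out, even though every individual on the panel declines year after year, which is exactly the pattern the study reports.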

I am not surprised one bit. Nor should anyone be who thinks for just half a second about how real people actually behave. Exhaustion and cynicism have a tendency to replace energy, enthusiasm and the fear of looking like an intellectual lightweight when it comes to reviewing. Sorry, but if you really believe that people in aggregate do not suffer from these tendencies you are utterly out to lunch and you need to spend some time interacting with actual people. Seriously. If you think that you are somehow unique…HAHAHAHAHAHA!
Anyway, the data being described seem to confirm my perspective in one specific context of interest to our readers. Editors should take note of this! The take-home message is that they should continue to work hard to cast a wide net, to involve new reviewers as much as possible and not to stick with the same old folks for decades.
Of course, some editors abandon all pretense of actually being a scientist any time it is suggested that the geezeriat might not be all-that in comparison with young guns (of 45!!!).

“This is a quantitative review, which is fine, but maybe a qualitative study would show something different,” says Paul Hébert, editor of the Canadian Medical Association Journal in Ottawa. A thorough review might score highly on the Annals scale, whereas a less thorough but more insightful review might not, he says. “When you’re young you spend more time on it and write better reports. But I don’t want a young person on a panel when making a multi-million-dollar decision.”

“Multi-million-dollar decision”? Grant funding I deduce? Game on, my friend, game on.
This is, you will notice, the same old crap*. An assertion that young and/or assistant-prof-level scientists are deficient in reviewing grants. This theme was enthusiastically adopted by Toni Scarpa, head of the NIH’s reviewing unit, the CSR. We have heard all sorts of complaints and efforts (often covert within the CSR) to reduce the number of assistant professors participating on review panels. Seldom have we seen anything like a coherent argument for why assistant professor reviewers are to blame for [insert poorly specified, seemingly negative grant review outcome]. Never have we seen any data backing up the assertion. Personal anecdotes, if offered, never survive the question of review experience, which of course is a Catch-22 if you prevent younger people from serving as reviewers. Never have we seen a discussion of my contention that the relatively few assistant professors on panels (10% of reviewers was the high-water mark, I believe), mostly as ad hoc reviewers who see fewer grants (note that when the CSR does offer figures they do not present them by percent of reviews), cannot have a major role in eventual grant disposition. The numbers don’t add up.
And now we have some data on the table to suggest that peer-review quality goes down over time. For damn sure it suggests that the best approach is to cast a wide net and to constantly seek new blood. And no, I don’t see where grant review is somehow different from paper review in this.
It also suggests that one new CSR policy should be reconsidered. I wasn’t all that impressed by the new 6-year, every-other-round option for being a permanent member of a CSR study section (the default is 4 years, every round). My argument in that post focused on the continuity of reviewing revised applications, but I am also concerned about reviewer burnout. Heck, I even think four-year terms might be a bit too long.
So when you hear this assistant professor bashing in person, DearReader, do me a favor, would you? Get the whiner to flesh out the complaint. Ask what they base it on. And drop me a line or a comment. I’m curious.
[h/t: PhysioProf]
__
*sure the guy might have gone on for chapter and verse and the journalist boiled it down to this pap. but I doubt it. this story is just too familiar…


  1. FSP Says:

    The decrease in review quality as a function of ‘professional age’ of reviewer is real, but I wonder about the causes of it. For example, what about the effect of the number of reviews done by any one reviewer? Perhaps I should just say no to review requests more often and keep my overall review numbers lower so that I can devote more time to each one, but I do say no to quite a few, and still I end up with a significant number of reviews that I feel I have to do. The decrease in review quality is real and a good solution is what you suggest, but I think review exhaustion and lack of time are more significant factors than cynicism or laziness.



  2. AHAHAHAHAHAH!!!!!!! I knew this shit would drive you berserk!


  3. Will Says:

    While I completely agree with the analysis and the data, I think there might be at least one confounding element of noise in the data for “older reviewers”. Specifically, I wonder how many of the reviews were actually written by the scientist in question. I’m not sure how widespread the practice is, but many lab heads hand papers off to their grad students to review as a “learning experience”. I’m sure many of these “student-written reviews” are sent off without much more than a sanity check and a name replacement.
    This may be artificially inflating the “old folk” score if their students are very astute reviewers. Conversely, it may be decreasing their score if their students don’t have the experience to critically review a topic. Without some sort of anonymous survey I’m not sure how you would tease out this effect.


  4. Eric Lund Says:

    “This is a quantitative review, which is fine, but maybe a qualitative study would show something different.”–Attributed to the alleged editor of a scientific journal.
    Whiskey. Tango. Foxtrot. The point of science is to arrive at an idea of how the world works that is based on quantitative results. I’m not in a biomedical field, so I will never have the dubious pleasure of submitting to or refereeing for this journal, but if I were, I would definitely take a more skeptical view of anything they publish.
    I don’t know of any comparable study in my field, but this is certainly consistent with my experience: there is no significant positive correlation between reviewer age/experience and quality of review. If anything, it’s the opposite, and there are a number of reasons why this should not be a surprise. In a typical research group, grad students and postdocs (techs are relatively rare in my field) do most of the research, so postdocs are most likely to be familiar with the current literature. Prof. Bigshot is typically busy trying to feed his army of grad students and postdocs, so he often doesn’t have time to keep up with the literature. So often a senior reviewer will view the proposal/journal article with a frozen-in-amber view from his days as a postdoc, sometimes going as far as writing ex cathedra statements claiming X is impossible without engaging the supporting literature that the authors cite in favor of X, let alone the data analysis which shows that X is likely to be the correct interpretation. Meanwhile, a junior reviewer, who is more afraid of being perceived as a lightweight, will engage the author’s points, and if such a reviewer feels compelled to say that X is impossible, he will cite literature in support of that viewpoint.


  5. becca Says:

    “Specifically, I wonder how many of the reviews were actually written by the scientist in question. I’m not sure how widespread the process is but many lab heads will give out papers to review to their grad-students to review as a “learning experience”.”
    Precisely. Grad students/postdocs could be saving those old farts from having a 0.08 decline per year. At least, if DM isn’t talking out of his bum when he says that “energy, enthusiasm and the fear of looking like an intellectual lightweight” are the critical motivating factors.


  6. lost academic Says:

    “But I don’t want a young person on a panel when making a multi-million-dollar decision.”
    Wow, Hébert, suck my metaphorical balls. Outside of your little world, ‘young’ people are making those decisions every damned day, and they’re held far more accountable than someone like you would hold a reviewer. I don’t personally see ‘young’ reviewers as being all that young; I made the investment and funding decisions I handle as part of my national board at an age younger than the youngest of them, so you can get down off your high horse.
    Just deal with the fact that younger people may not be as beholden to your process or political ramifications as older ones and MIGHT (no proof) be more likely to make the necessary calls. Or don’t.


  7. Alex Says:

    As a junior faculty member I’m of course pleased to know that we do better reviews. However, the quality of the review was measured by the editors, who tend to be older. So if the old guys can’t review, how can we trust them to review our reviews?
    On a related note, if you have two liars from Crete…. 🙂



  8. Thanks for bringing it up- you, as always, spit it out much more eloquently than me.
    I mean, the whole thing is just offensive. In order to be a good reviewer, all you need is enough experience to become very knowledgeable in your field (~3 years?). Anything beyond that is unnecessary and just adds to the cynicism and arrogance that seems to correlate with age.
    What pisses me off the most is that, as an assistant professor, I will be robbed of the opportunity to sit in on study sections, to learn more about the review process, and to write better grants myself. That is, unless things change. I know I trust myself to make multi-million dollar decisions. 🙂 How to convince the elders???



  9. How to convince the elders???
    Don’t worry – they’ll be dead soon and us young ‘uns will take over the asylum.


  10. DrugMonkey Says:

    Hate to break it to you PiT, we GrumpyOldeDoodsInTraining will be happy to take up the lawn clearing grumbles….



  11. In order to be a good reviewer, all you need is enough experience to become very knowledgeable in your field (~3 years?).

    Three years to become “very knowledgeable in your field”!?!?!? AHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAH!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!



  12. Darling PP, some of us acquire and retain new information at a remarkable rate. Especially when schloads of complicated biology are *not* involved.


  13. Dr. Feelgood Says:

    I review all the time. Always a bridesmaid (ad hoc) never a bride (chartered). I was literally on section for 5 years straight without an offer of charterdom at the assistant professor level. I think my way of reviewing has changed. Not sure about the quality. I used to be more trees than forest, and I evolved to look at the whole proposal rather than the minutiae.
    Also, I have found that style and grantsmanship issues have seeped into my reviews, which I think should be less important than the data shown and experiments proposed. I have done it for so long that I hate almost everything now. I personally have it in for greybeards writing for their 4th concurrent R01 who send in a cut-and-paste special. I see more and more of those as they can’t run their labs on less than a million bucks a year. (I can!)
    I generally like submissions from younger reviewers, not because they write better grants, but their ideas are usually more refreshing and their methodologies are more cutting edge. Often they have poor ideas or methods of execution, but I champion that more than 70s style old-man science.
    As I said, I pretty much hate everything though. Off to section this week (ad hoc) and again in 2 1/2 weeks at SFN to destroy the hopes and dreams of all.


