Bias in Selection for NIH Study Sections

July 9, 2013

Interesting exchange on the twitts today with someone who is intimating that the process of selecting peers to serve as grant reviewers on NIH study sections requires some transparency and fixing.

As my longer term Readers are aware, my main objection along these lines is that I think Assistant Professors should not be excluded and that the purge urged on by Toni Scarpa back some years ago was misguided. I will also venture that I think it is ridiculous that the peer review pool is limited to those Professorial rank people who have already won funding from the NIH (for the most part). If really pressed, I’ve been known to suggest that it is even unfair that the more senior postdoc types who have not yet won a faculty-level appointment cannot review grants.

Other than that, I am generally down with the official mandates to seek ethnic/racial, gender and geographic representation on panels. My personal experience has been that the SROs do a pretty good job at this. Also, because of these factors, I have found that the types of institutions represented span the range pretty well: small mostly-teaching profs, big Research Uni profs, research institutes of various sizes, public Unis, private Unis, Med Schools and academic departments.

So it is with some confusion that I read someone asserting that there is a problem with who is selected.

My query of the day, therefore, is to ask you if you know of people who seek to serve on study section but cannot seem to land an invite. Alternatively, do you know of categories of investigators that are routinely overlooked?

33 Responses to “Bias in Selection for NIH Study Sections”

  1. Jim Woodgett Says:

    Hmmm….. reviewer selection. CIHR (Canada’s mini-NIH) is planning to do away with panels/study sections and move towards virtual review and a “college of reviewers”. This is necessitated by a desire to attract 5 reviews per application but to use “content”-appropriate matching of expertise, such that no two applications are likely to be reviewed by the same 5 people. Hence, this is a virtual review where reviewers are selected based on matching criteria (still to be worked out) based on stuff like regular-expression analysis of the abstracts of reviewers’ papers, compared to the feature/keyword extraction from the application, etc. Each reviewer is expected to review 20-25 applications 2X a year and these will be ranked per reviewer. This allows a mass sorting of applications based on relative rankings (rather than absolute scores).
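    The comment leaves the actual matching algorithm unspecified (CIHR was still working it out), so the following is only an illustrative sketch of the expertise-matching idea; the function names and the Jaccard-overlap measure are my own invention, not CIHR's method:

```python
import re

def keywords(text):
    """Crude keyword extraction: lowercase alphabetic words of 4+ letters."""
    return set(re.findall(r"[a-z]{4,}", text.lower()))

def match_score(application_abstract, reviewer_abstracts):
    """Jaccard overlap between an application's keywords and the pooled
    keywords of one reviewer's paper abstracts."""
    app = keywords(application_abstract)
    rev = set().union(*(keywords(a) for a in reviewer_abstracts)) if reviewer_abstracts else set()
    if not app or not rev:
        return 0.0
    return len(app & rev) / len(app | rev)

def rank_reviewers(application_abstract, reviewers):
    """reviewers: {name: [paper abstracts]}. Best-matching reviewer first."""
    return sorted(reviewers,
                  key=lambda r: match_score(application_abstract, reviewers[r]),
                  reverse=True)
```

    Taking the top 5 of such a ranking, separately for each application, would give every application its own virtual micro-panel.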

    If this seems somewhat pie in the sky and desperate, join the trepidation…. Their rationale is that study sections/physical panels are difficult to assemble to cluster the appropriate expertise expected by applicants, and there are also “bad” behaviours of reviewers, such as tending to give better scores to people who are well ensconced within the expertise of the panel. I can see the merit but it’s all a big experiment. This is Canada, though, and we suffer from a thinner pool of expertise, so panels can be easily distorted (e.g. a third of the panel may have to leave due to conflicts with Montreal or Toronto applications). There will be an actual super panel(s) which evaluates the applications with the biggest z-scores (among the 5 reviewers) to try to home in on the cut-off line.
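    As I read it, the super panel triages on reviewer disagreement near the payline. A minimal sketch of that idea (my own construction, since CIHR's actual statistic was still to be worked out) would standardise each reviewer's scores and then flag the applications whose five z-scores spread the widest:

```python
from statistics import mean, stdev

def zscores(scores):
    """Standardise one reviewer's raw scores so that harsh and lenient
    reviewers become comparable (zero spread maps to all zeros)."""
    m = mean(scores)
    s = stdev(scores)
    return [(x - m) / s for x in scores] if s else [0.0] * len(scores)

def flag_for_super_panel(z_table, top_n):
    """z_table: {app_id: [z-scores from that application's 5 reviewers]}.
    Return the top_n applications with the widest reviewer disagreement,
    measured here by the standard deviation across the five z-scores."""
    spread = {app: stdev(zs) for app, zs in z_table.items()}
    return sorted(spread, key=spread.get, reverse=True)[:top_n]
```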

    Whether there is perceived or real bias in panel selection, clearly there is variation in reviewer quality. Whether this can be mitigated by additional reviewers and using application-focused virtual micro-panels is unknown but the Canadian experiment will be rolled out in 2014.

    BTW, the current situation is this. CIHR funds 400 grants per 6-monthly competition come hell or high water. The budget calculation is funding available divided by the total of the budgets of the top 400 applications. This usually works out at 3/4 or so, i.e. the budgets are all cut by ~25% across the board from the panel-recommended budgets. It is not pretty.
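    The budget arithmetic Jim describes reduces to a single uniform scale factor (the dollar figures below are invented for illustration):

```python
def across_the_board_scale(available, recommended_budgets):
    """Uniform factor applied to every funded grant's panel-recommended
    budget: money available / total recommended for the top 400."""
    return available / sum(recommended_budgets)

# If the top 400 grants recommend $400M in total but only $300M is
# available, every budget is scaled to 0.75 -- a 25% across-the-board cut.
scale = across_the_board_scale(300e6, [1.0e6] * 400)
```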

  2. Former technician Says:

    I know of both research and tenure track assistant professors who have served on review panels. Perhaps there is a difference in the makeup of different review panels? I really hadn’t noticed professorial only panels.

  3. mikka Says:

    I’ve been on that Early Career Reviewer list for over a year now, three review cycles. I’ve groveled and begged to the SROs to no avail.

    I’m male, white, mid thirties, from a big state R1 uni, so I’m not exactly what comes to mind when one thinks “diversity”. My feeling from the communications with SROs is that they have a long list to choose from and only two ECR slots per study section. *shrug*

  4. Grumble Says:

    “people who seek to serve on study section but cannot seem to land an invite”??? I wish they would stop asking me! Once you’ve been on study section once or twice and learned what there is to know about how it operates, it is an utter and complete waste of time.

    If there are really people begging to be on study sections, they need to stop asking me and start asking them. Seriously.

  5. DrugMonkey Says:

    Grumble- do you suggest alternatives? Good way to get some younger folks that you respect in your field involved.

  6. DrugMonkey Says:

    mikka- there was debate and disagreement about the purpose of the ECR when they started it. Originally it was for underrepresented groups (incl, smaller institutions). Some SROs may still be on that plan. Have you reviewed who this SRO has been asking?

  7. DrugMonkey Says:

    Former tech-
    It’s a matter of relative proportions. I’m hoping that post Scarpa things will trend back to where they were before his freak out about it.

  8. Ola Says:

    Some of the most erratic reviews I’ve seen have come from very junior reviewers on study section, so there is something to be said for having experienced reviewers.

    For me, the bigger issue is with the T word*, which has resulted in proliferation of MDs on study sections, many of whom indeed understand clinical relevance, but wouldn’t know a good hypothesis if it bit them.

    Diversity-wise, it seems to me there’s a long way to go. The one I’m on has 20 regular members, of whom 2 are women and 3 are non-white. A little more diversity comes from the ad hocs each cycle, but the other regular study sections I’ve seen have a similar makeup.

    *translational

  9. DrugMonkey Says:

    Some of the most erratic reviews I’ve seen have come from very junior reviewers on study section, so there is something to be said for having experienced reviewers.

    The usual bullshit confirmation of biases.

    I’ve never had this experience, at all, on panels. If anything there is an inexperience factor, but that applies to all new reviewers, regardless of career stage.

    Where I do see “erratic” reviewing is amongst a segment of experienced reviewers that hold the grant smithing of the noobs and the field luminaries to different standards.

  10. DrugMonkey Says:

    Panels I tend to get reviewed in (and have served on) run 30%+ women *very* consistently. Rarely 50/50, so things could be better. But still.

    Racial/Ethnic representation always seems better than my fields of interest so that’s good.

    Geographic spreading is likewise very good. Flyoverlandia is clearly overrepresented relative to number of apps.

  11. qaz Says:

    In my experience, ECR is being used for up-and-coming stars to provide on-section experience. All the ECRs who’ve come to my study section have been white and from top-flight research unis expecting multiR01s for tenure.

    I agree w DM 100% that it was a major mistake to remove asst profs from study section. In my experience, they were better reviewers (by far), mostly for the reasons shown by Yun Gun in DM’s wonderful play. In my experience, the most erratic reviewing I’ve seen has come from the clash between oldschool and newschool reviewers holding different personal standards, particularly from oldschool profs who don’t listen to the instructions from the SRO or CSR (which admittedly change every year).

  12. Ola Says:

    I guess by “erratic” I’m really referring to what appears to be an increased willingness on the part of junior reviewers to use the full 9 point scale. Yeah yeah, I know that’s exactly what we’re all supposed to be doing, but I often see senior reviewers score a 5-6, indicating the project still has some merit (often unseen by the junior folks), while the 9 from the young-un pulls the average way down into the non-discussed range.

    Similarly I’ve seen overly negative scores on the investigator criterion from junior reviewers, who may not be as familiar with someone’s former contributions to the field as the more experienced reviewers are. Put simply, they just haven’t read as much. The same for environment – a lot of junior folks have not traveled as much, so may be unaware that University of Podunk has a frickin amazing program and resources in subject X. Significance also seems to get hammered by inexperienced reviewers. Now, the good side is that they’re often far more familiar with current methods and technologies, so are way more qualified to discuss approach than the old farts.

  13. meshugena313 Says:

    I applied for the ECR program a while back when it was first announced, and 6 months later got a “ding” letter stating something ridiculous along the lines of “no evidence that you run an independent research program”. Since I had a tenure track appointment with well-documented ample lab space and 2 senior author publications at that point, this was either incompetence or an excuse to weed people out. Probably the latter, but still ridiculous.

    It does seem nuts to eliminate people who actually want to review grants…

    JW’s description of the new CIHR review process is interesting – it may actually be better for removing bias, although the absence of any in-person discussion may be problematic. Not a bad experiment, though.

  14. DrugMonkey Says:

    So did you write back with all the details? C’mon meshugena, persistence.

  15. DrugMonkey Says:

    Ola- so newb reviewers 1) use the range *as continually exhorted to do by the SRO* and 2) don’t substitute Authoritah! for a good research plan.

    This is EXACTLY why the system needs more younger reviewers.

  16. DrugMonkey Says:

    qaz-
    I think you are expressing some of the variability in how different SROs (and POs who suggest names) view and apply the ECR program. It was clear during the genesis that there was disagreement. Originally it was bandied as a strategy for engaging the underrepresented categories of reviewers. Then the no-fair-niks chimed in to defend the people like I was as a new faculty member- people who would have eventually found a place anyway. Maybe the thinking is to broaden the pool of “the usual” earlier? Or to satisfy claims like Ola’s that the newbs suck and need training before being loosed on a full pile?

    Me, I support both goals.

  17. meshugena313 Says:

    DM-
    Since most of that info was in the initial application, at the time I didn’t think it was worth the effort to argue considering that I was a white, late 30s male at an east coast medical school… not exactly enhancing any diversity. And they had apparently thousands of applicants for a few reviewer slots. Maybe I will try again now.

  18. Busy Says:

    Why do Americans even tolerate “grant smithing”? Most other countries don’t have that.

    If your proposal idea is good and reasonably well explained it gets high marks. Yet, it seems that it is routine to reject NSF/NIH grants because they don’t have enough commas (ok, this is hyperbole for effect, but not that far from what actually happens).

  19. drugmonkey Says:

    Why do Americans even tolerate “grant smithing”?

    It is my nongendered alternative to “grantsmanship”, for the most part. Is that a problem? Any country that has a granting system, there’s a way to be better at it and a way to be worse at it. That’s “grantsmanship” or “grantsmithing”.

    If your proposal idea is good and reasonably well explained it gets high marks. Yet, it seems that it is routine to reject NSF/NIH grants because they don’t have enough commas

    This is for two reasons, at the least. First, there are a LOT of very good proposals. Second, there is a desire to find equal review grounds in the face of a diversity of proposal specifics. Sometimes criticizing the grant structure seems fairest to people. I think a lot of the cultural-expectations stuff that emerges in a given study section starts from this gut feeling that a panel should try to treat proposals equivalently.

    most of that info was in the initial application
    Clearly the person overlooked it, meshugena313.

  20. Busy Says:

    First, there are a LOT of very good proposals.

    Then rank them all highly and let other people deal with the mess. Why do the dirty job for them? In fact that would be a good argument to go to Congress and ask for more money: “last review cycle we had 400 world-class essentially-undistinguishable grant applications that would have kept the country at the forefront of medicine, science and technology yet we could only fund 50.”

    Or you can do as told and nix the grant with too few commas (figuratively speaking).

  21. whimple Says:

    “last review cycle we had 400 world-class essentially-undistinguishable grant applications that would have kept the country at the forefront of medicine, science and technology yet we could only fund 50.”

    This is a silly argument to make to Congress, since it has always been true and always will be true; the demand for “free money from the government” always exceeds the supply.

  22. DrugMonkey Says:

    Also…you want someone *else* to do the dirty work? Riiiiiiighht…

  23. Busy Says:

    the demand for “free money from the government” always exceeds the supply.

    But this is not what I’m asking for. I didn’t say “yay free money!”. I said more money for scientific proposals which are so worthwhile they have to be distinguished from each other by the number of commas.

    Science support percentages in the USA are at historical lows. Either we make this case loud and clear so people support expenditure levels commensurate with a developed nation, or we continue to suffer more cuts and go the way of the Former Socialist Republic of Georgia, which has the lowest level of taxation in the world so surely must be a paradise on earth, if the GOP is to be believed.

  24. miko Says:

    Obviously they are different tasks, but as an academic editor, I see an almost perfect negative correlation between career stage and review quality. With notable exceptions, the more senior the reviewer the more generic and lazy the comments are, the less they support their criticisms with data from inside or outside the paper, etc. If you like reading long-winded assertions, by all means use these folks.

    And I say again — all respect and affection for the fucking boomers — but if you are under 50 and review papers or grants, rip those folks every chance you get. A little retirement incentive never hurt anyone.

    Kidding.

  25. DrugMonkey Says:

    I don’t believe you. About the Kidding part.

  26. damit Says:

    Well….ESIs generally have a lot to learn when they come to study section. Including me when I started. Most are pissed off at the world, especially these days. But it’s good for all: it lets people learn the process and see what good and bad applications look like, the panel is usually robust enough to deal with variances, and it gets new reviewers in the queue.

    It is really good to see a young person for the first time work themselves up to really fight for a grant application. And I do try to tell them when I thought they did it well.

    The real elephant in the room is the lack of appropriate review of senior reviewers… there is generally one nutjob (senior) permanent appointee per study section I’ve served on… about whom everybody says “God, how many more rounds do we have to deal with this person? They’re nuts!”

    We all shared a drink to celebrate one individual rotating off last year.

  27. whimple Says:

    Science support percentages in the USA are at historical lows.

    Explain why the USA cares about support percentages? The USA says, “we’ll spend $35B this year on science, who wants free money?” and a historically large number of people say “Meeeeeeeee!” Shock.

  28. Pinko Punko Says:

    The complaints I usually hear about study section selection are from coastal elites who don’t like to be judged by no-name flyovers. The SRO that I deal with I think does a really good job, and the panel chair runs the panel very fairly. I see lazier reviews out of other panels. I think that reviews could possibly be getting more boilerplate. I think subconsciously people must be feeling “what is the point”. I don’t want to be lazy with my reviewing because it is so painful to get a lazy, disengaged review.

  29. Busy Says:

    who wants free money?

    I was under the impression that people were expected to do work, particularly research, in return for this money. I’m glad you have clarified for us that no such expectation exists and it is thus free money.

  30. TwoYellowsMakeRed Says:

    I am generally down with the official mandates to seek ethnic/racial, gender and geographic representation on panels. DM

    It’s not a bad idea, but the scientific community does not look like the general population and the politicians in Washington (and some in Bethesda) demand that committees look like the US population. It places a premium on Latino, Native-, and African-American grantees, who are serving on so many committees that they don’t have time to generate data and papers for their competitive renewals.

  31. Joe Says:

    “and the politicians in Washington (and some in Bethesda) demand that committees look like the US population. It places a premium on Latino, Native-, and African-American grantees, who are serving on so many committees that they don’t have time to generate data and papers for their competitive renewals.”
    Sadly, I have never seen an African-American or Native-American on any of the panels I have served on, in seven years of service. Diversity generally means 25% female, 4 Indian dudes, 2 Chinese or Japanese men, and a guy from Mexico or South America.

  32. Juanlopez Says:

    Yeah, free money. Someone hasn’t been paying attention. Think that NIH gives a lot of money for little work? Try Defense and Intelligence.

    A problem with judging diversity by what you see in panels is that it depends on our biases and prejudices. I, for example, look white and middle-aged, from an east coast R1 university. But this doesn’t mean I am not Latino/Hispanic. Diversity is important and complicated. I am glad this topic is considered and there seems to be an interest in addressing it, even if imperfectly.


  33. […] know I and others have blogged about the Early Career Reviewer program at NIH. Also see Drugmonkey’s take on this (and the comments) from last […]

