Scenes

November 23, 2015

In the past few weeks I have been present for conversations on the following topics.

1) A tech professional working for the military complaining about some failure on the part of TSA to appropriately respect his SuperNotATerrorist pass that was supposed to let him board aircraft unmolested…unlike the rest of us riff raff. I believe having his luggage searched in secondary was mentioned, and some other delays of minor note. This guy is maybe early thirties, very white, very distinct regional American accent, good looking, clean cut… your basic All-American dude.

2) A young guy, fresh out of the military, looking to get on with one of the uniformed regional service squad types of jobs. This conversation involved his assertions that you had to be either a woman or an ethnic minority to have a shot at the limited number of jobs available in any given cycle. Much of the usual complaining about how this was unfair and it should be about “merit” and the like. Naturally this guy is white, clean cut, relatively well-spoken…. perhaps not all that bright, I guess.

3) A pair of essentially the most privileged people I know: mid-adult, very smart, blonde, well educated, upper middle class, attractive, assertive, parents, rock-of-the-community type of women. Literally *everything* goes in these women’s direction and has for most of their lives. They had the nerve to engage in a long-running conversation about their respective minor traffic stops and tickets and how unfair it all was. How the cops should have been stopping the “real” dangers to society at some other location instead of nailing them for rolling a stop sign or right-on-red-ing or whatever their minor ticket was for.

One of the great things about modern social media is that, done right, it is a relatively non-confrontational way to start to see how other people view things. For me the days of reading science blogs and the women-in-academics blogs were a more personal version of some of the coursework I enjoyed in my liberal arts undergraduate education. It put me in touch with much of the thinking and experiences of women in my approximate career. It occasionally allowed me to view life events with a different lens than I had previously.

It is my belief that social media has also been important for driving the falling dominoes of public opinion on gay marriage over the past decade or so. Facebook connections to friends, family and friends of the same provide a weekly? daily? reminder that each of us knows a lot of gay folks who are important to us, or at the very least are important to people who are important to us.

The relentless circulation of memes and Bingo cards, of snark and hilarity alike, remind each of us that there is a viewpoint other than our own.

And the decent people listen. Occasionally they start to see things the way other people do. At least now and again.

The so-called Black Twitter is similar in the way it has penetrated the Facebook and especially Twitter timelines and daily RTs of so many non-African-American folks. I have watched this develop during Ferguson, through BlackLivesMatter and after shooting after shooting after shooting of young black people in the past two years.

During the three incidents that I mention, all I could think was “Wow, do you have any idea that this is the daily reality for many of your fellow citizens? And that it would hardly ever occur to non-white people to be so blindly outraged that the world should dare to treat them this way?” And “Wait, so are you saying it sucks to have a less-assured chance of gaining the career benefits you want due to the color of your skin or the nature of your dangly bits….it’ll come to you in a minute”.

This brings me to today’s topic in academic science.

Nature News has an editorial on racial disparity in NIH grant awards. As a reminder the Ginther report was published in 2011. There are slightly new data out, generated from a FOIA request:

Pulmonologist Esteban Burchard and epidemiologist Sam Oh of the University of California, San Francisco, shared the data with Nature after obtaining them from the NIH through a request under the Freedom of Information Act. The figures show that under-represented minorities have been awarded NIH grants at 78–90% the rate of white and mixed-race applicants every year from 1985 to 2013

I will note that Burchard and Oh seem to be very interested in how the failure to include a diverse population in scientific studies may limit health care equality. So this isn’t just about career disparity for these scientists, it is about their discipline and the health outcomes that result. Nevertheless, the point of these data is that under-represented minority PIs have less funding success than do white PIs. The gap has been a consistent feature of the NIH landscape through thick and thin budgets. Most importantly, it has not budged one bit in the wake of the Ginther report in 2011. With that said, I’m not entirely sure what we have learned here. The power of Ginther was that it went into tremendous analytic detail trying to rebut or explain the gross disparity with all of the usual suspect rationales. Trying….and failing. The end result of Ginther was that it was very difficult to make the basic disparate finding go away by considering other mediating variables.

After controlling for the applicant’s educational background, country of origin, training, previous research awards, publication record, and employer characteristics, we find that black applicants remain 10 percentage points less likely than whites to be awarded NIH research funding.
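For readers who want a concrete sense of what “controlling for” mediating variables means operationally, here is a toy sketch. It is emphatically not the Ginther methodology (which used regression models on real NIH records); every group, stratum and number below is invented. The idea is simply to check whether an award-rate gap persists when you compare applicants within matched strata of a covariate:

```python
# Toy illustration of covariate adjustment by stratification. NOT the Ginther
# analysis; all groups, strata and outcomes below are invented.
from collections import defaultdict

# (group, publication_bin, awarded) records for imaginary applications
applications = [
    ("white", "high", 1), ("white", "high", 1), ("white", "high", 0),
    ("white", "low", 1), ("white", "low", 0), ("white", "low", 0),
    ("black", "high", 1), ("black", "high", 0), ("black", "high", 0),
    ("black", "low", 0), ("black", "low", 0), ("black", "low", 0),
]

counts = defaultdict(lambda: [0, 0])  # (group, bin) -> [awards, applications]
for group, pub_bin, awarded in applications:
    counts[(group, pub_bin)][0] += awarded
    counts[(group, pub_bin)][1] += 1

for pub_bin in ("high", "low"):
    rate = {g: counts[(g, pub_bin)][0] / counts[(g, pub_bin)][1]
            for g in ("white", "black")}
    print(f"{pub_bin}-publication stratum: gap = {rate['white'] - rate['black']:+.2f}")

# If the gap shows up within every stratum of the covariate, that covariate
# does not "explain away" the disparity -- which is the sense in which Ginther
# found it very difficult to make the basic finding go away.
```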

The Ginther report used NIH grant data between FY 2000 and FY 2006. This new data set appears to run from 1985 to 2013, but of course only gives the aggregate funding success rate (i.e. the per-investigator rate), without looking at sub-groups within the under-represented minority pool. This leaves a big old door open for comments like this one:

Is it that the NIH requires people to state their race on their applications or could it be that the black applications were just not as good? Maybe if they just keep the applicant race off the paperwork they would be able to figure this out.

and this one:

I have served on many NIH study sections (peer review panels) and, with the exception of applicants with asian names, have never been aware of the race of the applicants whose grants I’ve reviewed. So, it is possible that I could have been biased for or against asian applicants, but not black applicants. Do other people have a different experience?

This one received an immediate smackdown with which I concur entirely:

That is strange. Usually a reviewer is at least somewhat familiar with applicants whose proposals he is reviewing, working in the same field and having attended the same conferences. Are you saying that you did not personally know any of the applicants? Black PIs are such a rarity that I find it hard to believe that a black scientist could remain anonymous among his or her peers for too long.

Back to social media. One of the tweeps who is, I think, pretty out as an underrepresented minority in science had this to say:

Not entirely sure it was in response to this Nature editorial but the sentiment fits. If African-American PIs who have been submitting grants to the NIH since the Ginther report was published in the late summer of 2011 (approximately 13 funding rounds ago, by my calendar) were expecting the kind of relief provided immediately to ESI PIs…..well, they are still looking in the mailbox.

The editorial:

The big task now is to determine why racial funding disparities arise, and how to erase them. …The NIH is working on some aspects of the issue — for instance, its National Research Mentoring Network aims to foster diversity through mentoring.

and the News piece:

in response to Kington’s 2011 paper, the NIH has allocated more than $500 million to programmes to evaluate how to attract, mentor and retain minority researchers. The agency is also studying biases that might affect peer review, and is interested in gathering data on whether a diverse workforce improves science.

remind us of the entirely toothless NIH response to Ginther.

It is part and parcel of the vignettes I related at the top. People of privilege simply cannot see the privileges they enjoy for what they are. Unless they are listening. Listening to the people who do not share the set of privileges under discussion.

I think social media helps with that. It helps me to see things through the eyes of people who are not like me and do not have my particular constellations of privileges. I hope even certain Twitter-refuseniks will come to see this one day.

CV alt metrics and Glamour

November 19, 2015

Putting citation counts for each paper on the academic CV would go a long way towards dismantling the Glamour Mag delusion and reorienting scientists toward doing great science rather than the “get” of Glam acceptance.
Discuss.

Michael S. Lauer, M.D., and Richard Nakamura, Ph.D., have a Perspective piece in the NEJM about “Reviewing Peer Review at the NIH”. The motivation is captured at the end of the first paragraph:

Since review scores are seen as the proximate cause of a research project’s failure to obtain support, peer review has come under increasing criticism for its purported weakness in prioritizing the research that will have the most impact.

The first half or more of the Perspective details how difficult it is even to define impact and how nearly impossible it is to predict in advance, and ends up with a very true observation: “There is a robust literature showing that expert opinion often fails to predict the future.” So why proceed? Well, because

On the other hand, expert opinion of past and current performance has been shown to be a robust measure; thus, peer review may be more helpful when used to assess investigators’ track records and renewal grants, as is typically done for research funded by the Howard Hughes Medical Institute and the NIH intramural program.

This is laughably illogical when it comes to NIH grant awards. What really predicts future performance and scientific productivity is who manages to land the grant award. The money itself facilitates the productivity. And no, they have never ever done this test, I guarantee you. When have they ever handed a whole pile of grant cash to a sufficient sample of the dubiously-accomplished (but otherwise reasonably qualified) and removed most funding from a fabulously productive (and previously generously-funded) sample and looked at the outcome?

But I digress. The main point comes later when the pair of NIH honchos are pondering how to, well, review the peer review at the NIH. They propose reporting broader score statistics, blinding review*, scoring renewals and new applications in separate panels and correlating scores with later outcome measures.

Notice what is missing? The very basic stuff of experimental design in many areas of research that deal with human judgment and decision making.

TEST-RETEST RELIABILITY.

INTER-RATER RELIABILITY.

Here is my proposal for Drs. Lauer and Nakamura. Find out first if there is any problem with the reliability of review for proposals. Take an allocation of grants for a given study section and convene a parallel section with approximately the same sorts of folks. Or get really creative and split the original panels in half and fill in the rest with ad hocs. Whenever there is a SEP convened, put two or more of them together. Find out the degree to which the same grants get fundable scores.
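As a minimal sketch of what “the degree to which the same grants get fundable scores” could look like as a number (all panel decisions below are made up), you could compute raw agreement plus a chance-corrected agreement statistic such as Cohen’s kappa between the two parallel panels:

```python
# Hypothetical sketch: agreement between two parallel study sections scoring
# the same pile of applications, coded 1 = fundable score, 0 = not fundable.
# The decisions below are invented for illustration.

def percent_agreement(panel_a, panel_b):
    """Fraction of applications on which the two panels made the same call."""
    return sum(a == b for a, b in zip(panel_a, panel_b)) / len(panel_a)

def cohens_kappa(panel_a, panel_b):
    """Chance-corrected agreement for the binary fundable/not-fundable calls."""
    n = len(panel_a)
    observed = percent_agreement(panel_a, panel_b)
    # Agreement expected if each panel handed out fundable scores at its own
    # observed rate but independently of the other panel.
    p_a, p_b = sum(panel_a) / n, sum(panel_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Toy example: 20 applications, each reviewed by both panels.
panel_a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0]
panel_b = [1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0]

print(f"Raw agreement: {percent_agreement(panel_a, panel_b):.2f}")
print(f"Cohen's kappa: {cohens_kappa(panel_a, panel_b):.2f}")
```

Raw agreement will look flattering whenever most applications are not fundable; the chance-corrected number is the one that tells you whether the parallel panel is reproducing the original result or just the base rate.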

That’s just the start. After that, start convening parallel study sections to, again, review the exact same pile of grants except this time change the composition to see how reviewer characteristics may affect outcome. Make women-heavy panels, URM-heavy panels, panels dominated by the smaller University affiliations and/or less-active research programs, etc.

This would be a great chance to pit the review methods against each other too. They should review an identical pile of proposals in traditional face-to-face meetings versus phone-conference versus that horrible web-forum thing.

Use this strategy to see how each and every aspect of the way NIH reviews grants now might contribute to similar or disparate scores.

This is how you “review peer review,” gentlemen. There is no point in asking whether peer review predicts X, Y or Z outcome for a given grant when funded if it cannot even predict itself in terms of what will get funded.

__
*And by the way, when testing out peer review, make sure to evaluate the blinding. You have to ask the reviewers to say who they think the PIs are, their level of confidence, etc. And you have to actually analyze the results intelligently. It is not enough to say “they missed most of the time” if either the erroneous or correct guesses are not randomly distributed.
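One way to do that analysis (a sketch only; the counts, the grouping variable and the use of a chi-square test here are my assumptions, not anything Lauer and Nakamura propose) is to tabulate correct versus incorrect identity guesses against some characteristic of the true applicant and test whether the hits cluster:

```python
# Hypothetical sketch of analyzing blinding checks: it is not enough that
# reviewers "missed most of the time" -- you also want to know whether the
# correct guesses pile up on particular kinds of applicants. All counts and
# labels below are invented.
from scipy.stats import chi2_contingency

#                      correct guess   incorrect/no guess
guess_table = [
    [30, 20],   # applications from well-known labs (hypothetical)
    [5,  45],   # applications from less-known labs (hypothetical)
]

chi2, p_value, dof, expected = chi2_contingency(guess_table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")

# Overall accuracy here is only 35/100, yet a small p-value would mean the
# blind leaks precisely for the applicants who are easiest to identify --
# which is the scenario the footnote is warning about.
```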

Additional Reading: Predicting the future

In case you missed it, the Lauer version of Rock Talk is called Open Mike.

Cite:
Reviewing Peer Review at the NIH
Michael S. Lauer, M.D., and Richard Nakamura, Ph.D.
N Engl J Med 2015; 373:1893-1895. November 12, 2015
DOI: 10.1056/NEJMp1507427

The post-triage stage

November 18, 2015

Holdworth Cheesington III Endowed Chair Professor K. Kristofferson has a few thoughts for you, as well.

Advice for faculty

November 17, 2015

From Holdworth Cheesington III Endowed Chair Professor K. Kristofferson:

Two quick things:

Your NIH grant Progress Report goes to Program. Your PO. It does not go to any SRO or study section members, not even for your competing renewal application. It is for the consumption of the IC that funded your grant. It forms the non-competing application for your next interval of support on a project that has already passed competitive review muster.

Second. The eRA Commons automailbot sends out requests for your JIT (Just In Time; Other Support page, IRB/IACUC approvals) information within weeks of your grant receiving a score. The precise cutoff for this autobot request is unclear to me and it may vary by IC or by mechanism for all I know. The point is that the threshold is incredibly generous. Meaning that when you look at your score and think “that is a no-way-it-will-ever-fund score” and still get the JIT autobot request, this doesn’t mean you are wrong. It means the autobot was set to email you at a very generous threshold.

JIT information is also requested by the Grants Management Specialist when he/she is working on preparing your award, post-Council. DEFINITELY respond to this request.

The only advantage I see to the autobot request is that if you need to finalize anything with your IRB or IACUC this gives you time. By the time the GMS requests it, you are probably going to be delaying your award if you do not have IRB/IACUC approval in hand. If you submit your Other Support page with the autobot request, you are just going to have to update it anyway after Council.

Following up my post on RFAs and the inherent self-reinforcing conservatism of NIH grant review.

Mike the Mad Biologist has taken issue with the findings of a “cross-campus, cross-career stage and cross-disciplinary series of discussions at a large public university” which “has produced a series of recommendations for addressing the problems confronting the biomedical research community in the US”.

Mike the Mad has pulled out a number of the proposals and findings to address but I was struck by one on the role of “R&D contracts, Requests for Applications (RFAs) and intramural research”. From page 4 of the UW report:

Fourth, the NIH should increase the proportion of its budget directed to Research Project Grants, Center Grants and Training, and it should decrease the proportion directed to R&D contracts, Requests for Applications (RFAs) and intramural research. These changes would redirect funds towards investigator-initiated research and allow funding of a greater diversity of projects. R&D contracts and RFAs place limits on the topics and approaches that can be pursued, so a shift away from them will lead to fewer intellectual constraints being placed on researchers. We emphasize that this is not a recommendation to eliminate R&D contracts or RFAs, but rather to reduce their number, which will sharpen their quality and provide the funds needed to award more investigator-initiated grants.

I disagree with the notion that RFAs are poisonous to diversity and with the notion that purely “investigator-initiated” funding leads to fewer intellectual constraints.

The NIH peer review process is an inherently conservative one because it tends to reinforce itself. Those who are successful within the system do the primary judging of who is next to be successful. Those who become successful have to, in large degree, adapt themselves to the thinking and expectations of those who have previously been successful.

When it comes to the role of Program Officers in selecting grants for pickups and saves, well, they too are influenced by the already-successful. This is in addition to the fact that POs have long-term careers and thus their orientations and biases come into play across literally decades of grant applications. To the extent that POs are judged by the performance of their grant portfolios, you can see that they are no different from the rest of us. Higher JIF, higher citations, more press attention, more high-profile scientists….all of these things push them toward selecting grants that are going to be more of the same.

Sure, this is only a thumb on the scale. Lots of scientists are open to new ideas. Lots of scientists can become enamoured of scientific proposals that are outside of their immediate interests. Peer reviewers and Program Officers alike.

But there most assuredly is a thumb on the scale. And it is a constantly reinforcing cycle of conservatism to select grants for funding that are very much like ones that have previously been funded. Alike in topic, alike in PI characteristics, alike in the University which is applying.

Requests for Applications (RFAs) quite often serve to fight against the narrowing of topic diversity and in favor of getting grants funded in a new area of investigation. Trust me, if an IC already has copious amounts of grant funding on a topic, this does not result in additional RFAs!

In my general watching of RFAs over the years from some of my favorite ICs, I’ve noticed that topics like sex differences and less-usual experimental models are often at play. Adolescent/developmental studies as well. To take a shot at my much beloved NIDA: well, they have been, and continue to be, the National Institute on Cocaine and Heroin Abuse. Notice how whenever the current Director Nora Volkow gets interviewed in the general lay media she goes on and on about the threat of marijuana to our adolescents? Try a trip over to RePORTER to review NIDA’s respective portfolios on marijuana versus cocaine or heroin.

There have been several NIDA RFAs, PARs and PASs over the years which are really about “Gee, can’t we fund at least two grants on this other drug over here?”. There’s an old one begging for medications development for cannabis dependence (RFA-DA-04-014; 10 awards funded) and another asking for investigation of developmental effects of cannabis exposure (RFA-DA-04-016; 6 awards funded). Prenatal exposure to MDMA (RFA-DA-01-005). Etc.

The latest version of this is PAR-14-106 on synthetic drugs. You know, the bath salts and the fake weed. I’ve been chatting with you about these since what, 2010? The PAR was issued in 2014 (4 awards funded so far, 5 if you include R03/R21 versions).

Is this because evil NIDA wants to force everyone to start working on these topics? Constraining their intellectual freedom? Hampering the merry progress on cocaine and heroin? You might ask the same about the various sex-differences FOAs that have been issued over the years.

Heck no. All that stuff has continued to be funded at high rates under NIDA’s normal operation. Why? Well, because tons and tons and tons of highly funded and highly productive researchers have focused on cocaine and heroin for their entire careers. And these are the grants that seem most important to them….the cocaine and heroin grants. They are the successful scientists who review other grants and who whisper in the ears of POs at every turn.

So the other drugs get short shrift in the funding race.

Every now and again a PO gets up the courage to mount an assault on this conservatism and get a few grants funded in his or her bee-in-the-bonnet interest. Having watched one of these develop back in the good old days, I can tell you it takes time. Two POs I observed at NIDA set up mini-symposia at NIH and at meetings for several years before, lo and behold, an RFA was issued on that topic. This was in the days when presumably they had the spare cash to do this sort of grooming of a topic domain.

The CRAN initiative initially side-stepped the review process altogether and issued supplements for combined-drug research (think “effects of alcohol drinking on smoking behavior and vice versa”).

Etc.

I am sure that parallels exist at all of the other ICs.

And let me be emphatically clear on this. It isn’t as though there are not individual investigators out there independently initiating grant applications on these topics. OF COURSE there are.

They just haven’t been able to get funded.

I come back to this claim in the UW document that RFAs “place limits on the topics and approaches that can be pursued” and the suggestion that their diminishment will lead to “fewer intellectual constraints being placed on researchers”.

This is nonsense. Targeted FOAs very often address topics which have been “investigator-initiated” many, many times but these applications have not been successful in navigating the study section process. I would be shocked if there were more than a very small number of targeted funding announcements from the NIH that were on a topic that nobody had ever applied for funding to research. Shocked.

The pool of people applying for funding is just so large and so diverse that any half-way interesting idea has been proposed by somebody at some point in time. The idea that NIH Program have come up with something that nobody in the extramural community has ever thought about is just not that credible.

In the golden days of yore, when the research plan stretched to 25 pages, the Preliminary Data had a specific place. You created a header and put it in between the Background and the Research Plan.

Generally.

In the latest version of the NIH application there is no explicit place and the headers more or less match the review criteria: Significance, Innovation and Research Strategy (which maps to Approach).

I had heard of people who sprinkled their Preliminary Data figures all across the app even in the old days but I can’t remember ever trying it. With the new application, however, it just made sense to me. 

Some figures are Background/Significance and some figures are really just showing technical ability for that tricky assay used in Aim 3. Some speak directly to the Innovation.

So I spread the figures around when I think they do the most good. 

Isn’t this what everyone is doing now? Have you seen other approaches? 

PAR-16-025 invites applications for the R50 Research Specialist award.

The Research Specialist Award is designed to encourage the development of stable research career opportunities for exceptional scientists who want to pursue research within the context of an existing cancer research program, but not serve as independent investigators. These scientists, such as researchers within a research program, core facility managers, and data scientists, are vital to sustaining the biomedical research enterprise. The Research Specialist Award is intended to provide desirable salaries and sufficient autonomy so that individuals are not solely dependent on grants held by Principal Investigators for career continuity.

This mechanism is for salary support up to 100% and for travel up to $5,000 per year. Maximum duration is 5 years. It is interesting that they chose to make this an R mech instead of a K mech. I like that. A lot.

This idea was floated by NCI a little while ago, as discussed in this blog post, in the wake of a hint from Varmus when he left the NCI Director’s office. The devil will be in the details but this new mechanism appears to leave some wiggle room for the Research Specialist to avoid some deficits I identified in the original discussion (start at 2:20).

I was most concerned that all the discussion focused on the original PI and how this proposed new mechanism was to his or her benefit, more than to that of the Research Specialist themselves.

2:29 – the research proposal is to be written jointly by the applicant and the sponsoring PI, describing the research.

[DM- I think this is workable even though my eye started to twitch. There is going to be some slippage here with respect to the goals of making this award portable and not tied to the fate of one lab’s research grant]

2:29:55 – Initially the Research Specialist is to apply while supported on an existing research grant. Once the K05 is awarded, the expectation would be 50/50 support with the grant, then continuing at 100% on the K05 once the grant ends.

2:30:30 – Review criteria. Accomplishment of applicant individually and within the nominating lab’s program. Accomplishment of the PI and Uni. Importance of the applicant to the research program of the PI.

[DM- Welp. This is certainly going down a road of contributing to the rich getting richer which is not something I support. Unless “importance to the research program of the PI” means helping to stabilize the science of a have-not type of PI who struggles to maintain consistent funding.]

and

2:32: slide on portability of the award – possible but requires PO approval if PI and K05 move together, if the PI leaves and K05 stays, if the grant is lost, etc.

If the K05 Specialist chooses on her/his own hook to leave the old lab, it will require a new PI, approval, etc. The old PI is eligible for a 2-year administrative supplement because they are “suddenly missing a critical support component”.

[DM- ugh, this last part. Why should the original grant be compensated for the K05 person deciding to leave? It will already have benefited from that 50% free effort. Rich get richer, one. and a reward for that scenario where the PI is such a jerkface that the K05 leaves him/her? no. and regarding “critical support component”, dude, what about when any postdoc chooses to leave? happens all the time. can I get some free money for suddenly missing an awesome postdoc?]

2:36 on assessment of the pilot. “critical to get input from the PI about how well their needs have been served”

[DM- well sure. but…… grrrr. this should be about the K05 awardee’s perspective. The whole point is that the existing system puts these people’s careers into the hands of the big cheese PI. That is what the focus should be on here. The K05 Research Specialist. Not on whether the PI’s loss of control has allowed him or her to continue to exploit or whether this is just a way to shield the haves of the world from the grant game a little bit more.]

Two parts of note in the section on Award Administration from this new PAR:

5) Funds freed up through the R50 will be restricted from any other personnel use, but may be rebudgeted for other research costs with NCI prior approval.

6) Research Specialists would have the option, with prior NCI approval, to move to other research programs or institutions (e.g. if the Unit Director’s laboratory is closed, if the institution closes a core, etc.).

Number 5 is a bit weird. Why not be able to hire another person to work on the project? And re-budgeting is allowed only with prior approval? For a salary? This is unusual.

But everything about this rests on what Number 6 turns out to be in practice. It echoes a passage in the scope section of the FOA that reads:

The proposed new research support is intended to provide desirable salaries and sufficient autonomy so that individuals are not solely dependent on grants held by Principal Investigators for career continuity. Research Specialists would have the option, with prior NCI approval, to move to other research programs or institutions while maintaining funding from this award (e.g., if the Principal Investigator’s laboratory is closed, if the institution closes a core, etc.).

This is the part that gives the Research Specialist the true “sufficient autonomy” and “not solely dependent” business that is written all throughout the PAR. The essential question is how broadly this “e.g.” is interpreted, particularly with respect to who makes the decisions about permitting a change. Obviously, the one major thing missing from these examples is the autonomous choice of the Research Specialist. What if she or he simply wants to join a different lab or university? How easily can the award be moved to another city when the person’s spouse gets a new job? How easily can they detach themselves from a toxic PI? Etc.

__
h/t: @superkash