Why R21s Stink A Lot
September 23, 2009
A colleague of mine just sent me the following e-mail:
Dear Comrade PhysioProf:
I am reviewing an R21 A1 application whose original submission I did not review.
The stupid fuckers who reviewed the original submission dinged the only cool exploratory/developmental part of the whole fucking thing because it was a “fishing expedition” and “not well-supported by preliminary data”, so the poor applicant cut that out in the resub. The only shit left is boring-ass crap that could just as easily be Specific Aim #2.A.1.c.ii of a boring-ass fucking R01.
Now I have to ding the poor fuck for not being “Developmental/Exploratory”.
Sincerely,
Your Colleague
September 23, 2009 at 3:27 pm
Isn’t there an oblique way to write the review such that it’s obvious they should go ahead and do the innovative part, irrespective of what the first reviewers said?
September 23, 2009 at 4:01 pm
Any applicant worth funding should take the award and do the kewl stuff anyway. All together now: A grant is not a contract!
September 23, 2009 at 4:31 pm
R21 review challenges such as those described were a significant factor in our development of the EUREKA award program. The Funding Opportunity Announcement for this year was recently released (see http://grants.nih.gov/grants/guide/rfa-files/RFA-GM-10-009.html ). Nine ICs are now participating.
September 23, 2009 at 5:14 pm
OMG, I just had the laugh of the week. I think I LOVE this reviewer.
This seems to me to be the perfect application for some of the arbitrary scoring features of the new scoring system (i.e., the subcategory scores and the overall score don’t have to match up).
September 23, 2009 at 6:08 pm
R21 review challenges such as those described were a significant factor in our development of the EUREKA award program
Soooo…. This highlights one of the HUGE problems that I have with NIH band-aids deployed to “fix” study section behavior. I really, really do not understand this dance of “we don’t want to tell reviewers what to do but we really, really, really want them to change their behavior”.
It seems to me that the solution is not to invent new mechanisms under RFA but to do more to give feedback to the reviewers about the crap job they are doing. I mean, in this case it was apparently right there in the summary statement that the panel was insisting on preliminary data! The solution is to empower the chairs and SROs (not to mention spot-checking their commitment to the cause as well) and, if necessary, to convene R21-only panels of people with demonstrated willingness to review properly.
I believe that in the vast majority of cases the reviewers are adopting the prevalent culture because they are not given sufficient instruction. The n00bs have to sort of deduce what to do based on the other reviewers’ behavior and the summary statements they themselves have been receiving. If they read the documents paying lip service to R21 review but then the panel reviews as a mini-R01 and everyone nods approvingly… what are they to think?
September 23, 2009 at 6:17 pm
I am thinking about suggesting to my colleague the possibility of writing in the “Additional Comments to Applicant” section of the new review template that she is very concerned about the appropriateness of the prior round of review, and that she thinks the best part of the application was removed because of it.
She might not be very popular at the study section dinner party if she does that. lolz
September 23, 2009 at 6:58 pm
That fucking sucks.
If that reviewer sinks them on something like that, s/he’s every bit as much of a douche bag as the first lot of reviewers who can’t follow instructions.
Sounds like that story from the last two weeks: How to submit a comment in 123 easy steps.
Round 1. Too long.
Round 2. Too short, not enough detail, add this stuff.
Round 3. Too long.
Round 4. Too short, add this stuff.
Round 5. Out of date now and too long…
etc., etc.
Now it’s out of date…
September 23, 2009 at 7:45 pm
Dr. Berg:
The review panels are the source of the problem, not the existence (or lack thereof) of programs like EUREKA. The complete lack of accountability on the part of reviewers just encourages writing more crap, and the example discussed here is neither unique nor especially extreme.
September 23, 2009 at 9:24 pm
The complete lack of accountability on the part of reviewers
This is not the problem, exactly. It is not so much accountability in the sense that people just behave in any old rogue fashion they like. It is that the instruction process (formal and informal) is lacking and uncontrolled. So you have people reviewing grants in good faith, in ways they believe they have been shown is the right way to do it. One very big contributor is the desire to be fair in the sense that every grant in a section gets reviewed the same way. Thus, in the case of R21s, if you have the committee nodding sagely at application #1’s lack of supporting data, it becomes hard to break that for application #2.
There are cultural forces at play. I think they can be revamped but it takes 1) recognition and 2) direct opposition to fix them.
September 24, 2009 at 8:47 am
I agree with Dr. Berg that the EUREKA program is the way to go, since I don’t think the NIH will ever be able to make substantive changes to study section behavior (current efforts notwithstanding). However, if EUREKA is really about funding transformational ideas, then it is crucial that the ideas be reviewed by scientific merit reviewers blinded to the identity of the applicant. Otherwise, good scores will simply go to big-name applicants rather than to ideas. This is because it is too easy for reviewers to cop out along the lines of, “Nobody really knows if this is going to work or not, but BigCheeze has a good track record, so let’s give the cash to BigCheeze. On the other hand, LittleGuy can’t possibly have any good ideas, because if he did, he’d be a BigCheeze!”
After the science is scored in blinded fashion, Program should unblind the review, and there should be a separate evaluation of whether, if the award is made, the necessary supporting resources are already in place to make the work happen.
September 24, 2009 at 9:56 am
“It is not so much accountability”
Well, ethics quite obviously isn’t working, so it may be time to try accountability. I can’t reconcile the notion of “reviewing in good faith” with flagrant scientific mistakes and/or ignoring program requirements and goals.
This element of accountability is nothing new in science. If I misrepresent data, I will be investigated for misconduct. If I write garbage in a journal review, the editor will never again ask me for a critique. So why make the grant review process an exception?
September 24, 2009 at 10:36 am
Whimple: making the review process blind is a worthy but difficult objective. It would involve making substantial changes to the peer review process.
1. The review panels should be provided with only two-page anonymous summaries of the projects, evaluate them, and assign scores for all the applications. Only the idea/hypothesis/significance and the proposed way of addressing it should be evaluated. No details.
2. The second round of review should be entirely internal. The NIH staff would evaluate the applicant’s capability to carry out the research according to the stated plan (facilities, environment, credentials) and assign a pass/fail grade. Passed applications should then be sorted according to the scores from the first round and selected for funding based on those scores and NIH priorities.
The above is just an idea; feel free to comment on whether or not it is stupid.
September 24, 2009 at 10:45 am
VRWC: blinded review won’t work for the regular stuff like R21s, R01s etc., because what the applicant specifically has done in the past is too important. I think it would work for mechanisms like EUREKA where what you’ve already done isn’t relevant to the proposal.
September 24, 2009 at 10:55 am
I think the main problem is the paradox of trying to have a panel of scientists decide a priori what is innovative and what isn’t. In very few sectors of the economy is innovation determined accurately in advance of being implemented; the impact of an innovation is not remotely correlated with its predictability. It’s only after the fact that people are generally in a position to say, “Wow, that really was a pretty stellar idea.”
Google and similar techie trend-setters don’t arrange a panel of executives every time they want to decide which of their employees is going to get 10% time to fart around doing whatever they want. They just give it to everyone, with the belief that the one person out of ten who actually comes up with something cool will pay for the other nine who didn’t.
Is such a mechanism feasible in academia? I don’t know, but it’s worth thinking about. Maybe even a one-time career deal attached to an R01 award or something. If you get the R01, it’s obvious you’re not a total joker, so here’s an automatic extra two-year top-up to spend on generating preliminary data for whatever bonkers idea you might have cluttering your backburner (in addition to the inevitably pedestrian and incremental project that you had to propose in order to get any money from us at all). No review panel, no preliminary data, no evidence of feasibility, just a cheque.
September 24, 2009 at 12:13 pm
No. The EUREKA application and review process is intentionally even *more* skewed towards the “track record” of the PI, and towards the review panel’s perception of whether the PI has proven to be “exceptionally innovative” in the past. Take a look at the application instructions and you’ll see what I’m talking about.
One of the cultural forces at play is the refusal of SROs, study section chairs, and program staff to provide explicit guidance to reviewers in the correct interpretation and application of the review criteria for various funding mechanisms, *except* in irrelevant “kabuki” circumstances:
(SRO wakes the fuck up and stops surfing ESPN.com) “Oh! You can’t say the word ‘funding’!” (SRO sleeps through a review panel trashing an R21 because it is “speculative” and “not well-supported by preliminary data”)
September 24, 2009 at 5:26 pm
I can’t reconcile the notion of “reviewing in good faith” with flagrant scientific mistakes and/or ignoring program requirements and goals.
Well, that is perhaps your lack of imagination and experience, not necessarily evidence of ethics not working. Let us take up the review advice for the R21 application:
See all that weasel language? If a reviewer still chooses to emphasize preliminary data, she is not violating this guidance. You can argue that you read the spirit as “review differently from an R01”, but there is in fact plenty of room for a reviewer to review them fairly similarly.
September 24, 2009 at 7:28 pm
So, assuming that one had some reason to stick with the R21 mechanism, would the good comrade have advised them to 1) collect and add the pilot data, 2) cut it out, or 3) scrap the shit and submit something else?