Your Grant In Review: Errors of fact from incompetent reviewers
December 3, 2015
Bjoern Brembs has posted a lengthy complaint about the errors of fact made by incompetent reviewers of his grant application.
I get it. I really do. I could write a similar penetrating exposé of the incompetence of reviewers on at least half of my summary statements.
And I will admit that I probably have these thoughts running through my mind on the first six or seven reads of the summary statements for my proposals.
But I’m telling you. You have to let that stuff eventually roll off you like water off the proverbial duck’s back. Believe me*.
Brembs:
Had Reviewer #1 been an expert in the field, they would have recognized that in this publication there are several crucial control experiments missing, both genetic and behavioral, to draw such firm conclusions about the role of FoxP.
…
These issues are not discussed in the proposal, as we expect the reviewers to be expert peers.
Speaking for the NIH grant system only, you are an idiot if you expect this level of “expert peer” among the assigned reviewers on each and every one of your applications. I am not going to pretend to be an expert in this issue but even I can suspect that the body of work in this area does not lead each and every person who is “expert” to the same conclusion. And therefore even an expert might disagree with Brembs on what reviewers should “recognize”. A less-than-expert is going to be subject to a cursory or rapid reading of related literature or, perhaps, an incomplete understanding from a prior episode of attending to the issue.
As a grant applicant, I’m sorry, but it is your job to make your interpretations clear, particularly if you know there are papers pointing in different directions in the literature.
More ‘tude from the Brembster:
For the non-expert, these issues are mentioned both in our own FoxP publication and in more detail in a related blog post.
…
These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.
These are repeated several times triumphantly as if they are some excellent sick burn. Don’t think like this. First, NIH reviewers are not expected to do a lot of outside research reading your papers (or others’) to apprehend the critical information needed to appreciate your proposal. Second, NIH reviewers are explicitly cautioned not to follow links to sites controlled by the applicant. DO. NOT. EXPECT. REVIEWERS. TO. READ. YOUR. BLOG! …or your papers.
With respect to “graduate student level”, it will be better for you to keep in mind that many peers who do not work directly in the narrow topic you are proposing to study have essentially a graduate student level acquaintance with your topic. Write your proposal accordingly. Draw the reader through it by the hand.
__
*Trump voice
December 3, 2015 at 12:44 pm
Preach it.
December 3, 2015 at 12:50 pm
…and in more detail in a related blog post
BWAHAHAHAHHAHAAAAA, sigh….
Really? Has this person EVER served on a panel or study section? That is some delusional shit, right there.
December 3, 2015 at 12:53 pm
I expect reviewers to read *your* blogge!
December 3, 2015 at 1:00 pm
Seriously? He expects “the reviewers to be expert peers”? Oy.
December 3, 2015 at 1:02 pm
Holy Snap Crackle and Pop….what a jackwagon. And, not for nothing, Bjorn, your publications suck a bag of dickkes. You can’t be copping a ‘fuckke you ignorant fools’ spin when you have virtually no papers and all abstracts for 4 years and daring to question expertise of reviewers in some very small part of publications territory. http://bjoern.brembs.net/publications/
No one reads blogs. Ever. What next….they didn’t come to his posters? Oy.
December 3, 2015 at 1:46 pm
I wonder if this person had an experienced colleague/mentor at their institution look at this proposal before it went in. One of the greatest pieces of advice I received writing my first R01 was to make it simple, understandable, and do NOT make the reviewers have to find things outside the pages of the grant–convey a convincing and understandable message within the confines of the page limit because not everyone is an expert down to the level you are on your specific project (and don’t take that for granted). Don’t “annoy” the reviewers by making them do “extra work.”
December 3, 2015 at 1:51 pm
Just a side note — I was surprised at one quotation from the review and thought it included some inflammatory content that the SRO should have edited out. Namely, the phrase “in no case fundable” violates the NIH principle that review should be kept separate from issues of funding. In addition, inserting “(!)” into the middle of “many problems” is hardly useful or dispassionate. Certainly my SRO wouldn’t have let us talk like this…..
“In principle, this proposal addresses important and highly relevant questions but unfortunately there are many (!) problems with this application which make it rather weak and in no case fundable.”
December 3, 2015 at 1:59 pm
I won’t claim even close to the experience of DM, but will add my interpretation of “errors of fact” in reviews that I’ve developed over the years.
In the maybe 50% of summary statements with flat-out incorrect assertions, the theme that runs through is that the reviewer didn’t like the proposal in the first place. Or maybe the PI, or Environment. I don’t think a reviewer who is already enthusiastic about a proposal makes such errors. In the basest terms, the “blatant error of fact” comments are just part of the mechanism to move the proposal into the pile that needs no further attention. Sometimes they will use your papers as part of the take-down, selectively quoting or omitting information to support the story. Still, the bottom line is that the reviewer didn’t like the proposal in general, so the specific errors just don’t matter. No matter how maddening they may be.
December 3, 2015 at 2:12 pm
CS- I don’t think this is NIH review and it may not even be in the US at all….
December 3, 2015 at 2:23 pm
In the maybe 50% of summary statements with flat-out incorrect assertions, the theme that runs through is that the reviewer didn’t like the proposal in the first place.
Exactly.
And therefore angered applicants should keep in mind that the reviewer “error” was most emphatically not the factor that killed the grant’s chances. It is a symptom, not a cause.
December 3, 2015 at 2:55 pm
That first one just is nutballs. Unless I’m misreading, it sounds like “Despite the fact that a Science paper made a claim that in some ways undermines this proposal, we did not address it, because anyone who is sufficiently expert in the field knows that paper’s technical details do not quite prove what they said they did.” Doesn’t that limit you to 4-5 people in the world who can review your proposal? Probably two of which think the Science paper is just fine?
December 3, 2015 at 3:08 pm
That was my read of that complaint as well.
December 3, 2015 at 3:10 pm
Doesn’t that limit you to 4-5 people in the world who can review your proposal? Probably two of which think the Science paper is just fine?
But it was explained in a BLOG! Come on!
December 3, 2015 at 3:15 pm
“Don’t “annoy” the reviewers by making them do “extra work.””
I have lost track of the number of times I have given this advice. I told my students on the first day of class (an angry grader is a harsh grader), I tell coworkers, and I tell applicants. If you force the reader to do extra work, you will get pushed to the bottom of the pile.
December 3, 2015 at 7:23 pm
I agree with everything that was said here re writing grants on a basic level, but I have to point out two things: sometimes there ARE errors of fact (sometimes when the reviewer has not actually read the proposal), and in that case one should consider an appeal, especially if it’s an A1 and your PO is on your side.
But often errors of fact can be also considered matters of opinion. I once had a grant review that claimed that the entire prevailing view of a field was wrong (essentially a flat-earther) and so no grant based on that view could ever be valid. This single (new) review kicked the proposal out of the funding range, and I was told that it was a matter of opinion and could not be appealed. Fortunately, an earlier version of the proposal scored closer to the payline and was eventually funded. But these are crazymaking experiences.
Today I served on a phone review where we gave most of the grants the same score… in large part because we only had 3 digits to play with for the discussed grants (2–4). I really think having a second digit (or half of one) would allow us to actually rank grants instead of leaving everything to Program.
December 3, 2015 at 7:53 pm
“Had Reviewer #1 ever written a research proposal of their own, they would understand that proposals are written to fund experiments that have not yet been performed.”
Ha ha ha.
December 3, 2015 at 8:06 pm
In summary, I could not find any issue raised in this review that is not either generally known in the field, or covered either in the literature, or in our proposal. Hence, I regret to conclude that there is not a single issue raised by Reviewer #1 that I would be able to address in my revised proposal.
Actually, those are the easiest things to address. The issues raised by Baltogirl are much harder.
I wonder how Brembs deals with paper reviews.
December 3, 2015 at 9:50 pm
APPEAL!
December 3, 2015 at 10:10 pm
Reading through this blog, he sure seems to think quite a bit of himself. He should get a non-science-related hobby. All work and no play makes Jack a tool. People who take themselves so seriously are insufferable.
December 3, 2015 at 11:39 pm
Can we get Brembs and Perlstain a room already? And that Martin Shrekyl (?) asshat too! Circle jerk wouldn’t even begin to describe it.
December 4, 2015 at 4:21 am
There were three main issues in this review which made me publish it. I have no problem admitting that the minor points (like the first one mentioned here) were more snide remarks in my reply than actual criticisms of the reviewer. Granted.
Issue #1 was a very common technique described here:

https://en.wikipedia.org/wiki/RNA_interference
which is taught in essentially all undergrad genetics/neurogenetics courses I’ve seen or heard of. The reviewer critique was based on their ignorance of what is described in the accompanying figure [figure not reproduced here], which is also part of undergrad education – I’ve taught this method myself routinely, including the mechanisms described on Wikipedia. I included such very basic information once before in one of my proposals (~7–8 years ago), only to have it thrown back in my face with the reviewer comment “on what level does the applicant think he can address his peers?”
So there is definitely a penalty for assuming your peers are not experts, at least in our country. There is a fine line between being inclusive and being insulting. After that feedback ~7–8 years ago, I ran with expecting my peers to be competent, and my acceptance rate of just under 25% is probably not all that bad. It has happened before, but this is the first time in probably ten years that I have gotten a grant review where just looking into Wikipedia or a textbook for 10 minutes would resolve all such issues.
Issue #2 is a criticism that stems from not knowing the difference between polyclonal and monoclonal antibodies, also undergraduate-level knowledge. Moreover, the use of these techniques is currently hotly debated with regard to the replicability crisis.
Had I explained both issues at length, I would have opened myself up again to the criticism “at what level do you think you can address us?” On top of this, I also received feedback back then that my early applications (10 years ago) were too long. “Good applications are usually succinct and not verbose,” I was told, with the recommendation not to exceed ~10 pages total (the current application was already at 18 pages). These two sets of feedback made me write for expert peers, and it has worked, not stellar but OK, since then. Even for the rejected grants, the comments were not so obviously incompetent, and the other reviewer here had very reasonable remarks, which I have largely implemented.
Issue #3 was an obvious factual error (actually two errors): the reviewer scolded us for proposing a method we didn’t propose and suggested we instead use a method we already had in the proposal. We even explained the details of that method (the one he suggested we should use!) in a figure, with two separate citations to the papers where the method was described. Double whammy: the method the reviewer criticized was not in our proposal, and the method he recommended was already very prominently in the proposal.
Taking these three major issues together, I get the impression the reviewer only skimmed our proposal and trashed it – perhaps because he didn’t have the time to do a proper review?
Many thanks for the post and the constructive criticism everyone. I appreciate the concerted efforts in making me a better scientist and grant applicant 🙂 It’s great to be an open wackaloon and get feedback from so many colleagues. I have already incorporated most of the feedback into the new version of the grant.
December 4, 2015 at 5:01 am
Failing to differentiate between poorly deduced opinions and commonly known facts is a common formal mistake made by conspiracy theorists. Apparently scientists are not free of this fallacy. And I mean this in a general way, not specifically pointing at anybody involved or commenting. 😉
But I happily agree with drugmonkey: The basic assumption should be that information transmission errors are the responsibility of the sender, not the receiver.
December 4, 2015 at 7:30 am
Brembs, you should separate, in your publication list, abstracts from real peer-reviewed papers, and reviews from research papers.
Otherwise you are selling smoke.
December 4, 2015 at 8:53 am
@Dennis
“The basic assumption should be that information transmission errors are the responsibility of the sender, not the receiver.”
I agree as well. When people forget that grant proposals are sales documents and approach them as scientific documents, that’s when they have difficulties. I think that there should be some sort of marketing/sales training for scientists outside of traditional grant writing classes, perhaps in conjunction with business schools? The whole goal of marketing/sales is to get a customer to buy something, which is essentially what we do with grants.
December 4, 2015 at 9:12 am
After ~15 years of writing and obtaining grants: I did, of course, try to provide basic, commonly known knowledge in my grants, more than ten years ago. It backfired:
http://bjoern.brembs.net/2015/12/how-to-write-your-grant-proposal/
December 4, 2015 at 9:23 am
It is perhaps more acute in manuscript review but I find it most useful to assume everyone reviewing is an honest (and reasonably competent) broker. Helps to remember that communication errors are mine, not the result of incompetent peers.
December 4, 2015 at 9:44 am
Agreed on separating abstracts from peer-reviewed papers on the publications page. Not doing so will put off a lot of people.
December 4, 2015 at 9:56 am
BB- I think it is good to recognize that panel demands are never constant. In the NIH game, different panels with very closely related expertise can have profoundly different expectations wrt emphasis and assumption of background knowledge. So you have to learn the hard way every so often.
December 4, 2015 at 10:22 am
Namnezia: To each their place. See also, e.g.:
http://bjoern.brembs.net/downloads
or elsewhere:
http://www.ncbi.nlm.nih.gov/myncbi/browse/collection/44622422/?sort=date&direction=descending
http://orcid.org/0000-0001-7824-7650
http://www.researcherid.com/ProfileView.action?SID=W2zHzHIPYTJAKZbNkBu&returnCode=ROUTER.Success&queryString=KG0UuZjN5Wmgu8pvTYBecPETDOGp4O4uJ2U%252Bk5HGXbA%253D&SrcApp=CR&Init=Yes
‘Publications’ page is now updated with links to all of these places.
December 4, 2015 at 10:54 am
Maybe it’s a byproduct of my juniorness (i.e. imposter syndrome), but I assume any misunderstanding by the reviewer is due to a lack of clarity on my part. I also presume that if a reviewer loves the ideas, they are more likely to overlook minor things, but if they hate the ideas, they will find molehills to make mountains of.
December 4, 2015 at 11:32 am
Getting a reviewer who has misconceptions or differing opinions about a background detail of your proposal is only one of the myriad sources of randomness in the peer review system. I’ve stopped being angry about “wrongness” and just written more grants.
December 4, 2015 at 11:48 am
Finally nailed what bothers me so much about this, and engaged a bit with Brembs on the twits about it – he conflates “easy to understand” (using the mega-insulting “explain it to your grandmother” canard) with “simplistic to the point of insulting”. It seems he is unaware that one can be clear without insulting anyone’s intelligence.
After seeing those tweets, I reread his ranty statements about how these techniques are taught to every undergraduate, elementary school student, and embryo whilst still in the womb, so of course these dumbshit reviewers should be able to recall them at a moment’s notice. Does he write his grants with this sort of tone?
“As you remember, or at least SHOULD remember from your most elementary training, the regulation of HopP1 on bunny hopping is far more complex than that Science paper made it appear, for reasons which are ***obvious*** to true experts.”
Even if one’s internet comment style is not reflective of one’s professional writing style (though I suspect for many people they are more similar than different) and this is just blowing off steam, I question the ability to explain things clearly and calmly if one has adopted a priori the view that explanation can only be insulting to the reader.
In his case, it’s a bit worse than a priori: he adopted this viewpoint after a reviewer *straight up told him he was being insulting*, which led him to remove explanations from subsequent grants (and write the current posts), rather than question whether his phrasing was dismissive or condescending.
December 4, 2015 at 2:17 pm
Not intended to “pile on,” but there are a number of papers on the topic of “Open Science” and “Journal ranking” in the pubs list as well. Fair, separable, or padding?
December 4, 2015 at 2:39 pm
Seems to me you need to make a compelling argument for why doing the same things that have already been bought, paid for, and published is worth doing again, and why your way is somehow going to be better and more informative. Antibodies, RNAi clones, etc.
It sounds like you’re basically saying all this other work is shit and that your work is way better. And then you’re getting upset when you don’t explain why their work is shit to people who don’t know about it. Your review audience is the people who come to your seminar, engaged and interested, but not precisely familiar. Your job is to sell it to them. The one or two people who love you (e.g. your mother) aren’t going to be reviewing your grants.
December 4, 2015 at 6:13 pm
The line I was told is that “People read your papers because they want to. People read your grants because they have to. Don’t make them work for it.” (Truth be told, making papers easy to read is a good thing too!)
December 4, 2015 at 6:14 pm
@baggervance – Brembs has done serious statistical work on the validity of open science, journal ranking, replicability, and problems with the Glamourization of science. Some of it is really excellent. It is a dangerous game to start trying to take apart which publications are “real contributions”. (See the earlier arguments about “review” papers.)
It looks to me like Brembs is paying some interesting price for his insistence on producing work in other venues than traditional journals. I respect his willingness to drink the kool-aid of open science, open data, and open data management, but I wonder if some of the normal science fight (about theories, data, and the science) that we typically find in journals (thus measurable by alternating publications) has the added advantage of giving one papers to count and cite. (Reflecting the line “nothing exists until it’s published”.)
December 4, 2015 at 7:17 pm
qaz- when you are busting on your peers for being scientific incompetents it tends to raise the issue of your own demonstrated competence. and like it or not, the uninformed outside observer is going to look at scientific pubs. I think it is probably a bit mean and misplaced in this context but you have to be reasonable about how people are going to respond when you start a war over scientific chops.
December 4, 2015 at 7:36 pm
Shrew, you nailed it. I bet the blog is not the only place where he is so condescending.
I wonder how the blog post will affect the review of the revised application. The reviewer will not be amused.
December 4, 2015 at 8:22 pm
DM – Absolutely, I totally agree.
I just find it interesting that his fight with his colleagues seems to be outside the usual journal back and forth, which is what all this “post-pub peer review” has been about. And Brembs has been one of the most vocal advocates of post-pub review and the “post-journal world”, including making a recent publication include some weird mechanism by which people could add data to his “paper” and change the figures to include the new data. (I can’t say that makes sense to me, but it is part of what the people arguing for this brave new open-data world have been arguing for.). I find it interesting that one effect of this has been to remove the ability of non-experts to judge a controversy. (In the normal scientific process, we watch the fight in the journals and judge it by the publications.)
December 5, 2015 at 1:23 am
How many of you read the entire review? It’s completely unprofessional, from start to finish. I think CS is the only one to address this point.
“In summary, this is a not well prepared application, full of mistakes and lacking some necessary preliminary data. Not at all fundable.”
http://thinklab.com/discussion/reviewer-1/129
!!!! and there are several instances of !!!!
“Yes, by all means the insertion of a large transposon into the open reading frame of a gene causes a mutation!!!!”
The irony of having a Twitter account as @sarcastic_f is not lost on me here, but being an insulting asshole in a grant review makes me discount everything else they’re saying.
December 5, 2015 at 7:38 am
Brembs has not yet separated abstracts from research articles or reviews in his publication list. Having a quick look at his blog, it looks like he has had dozens of publications every year. It is not so. I have been following Brembs for a while now, and all I can see is the same story told again and again, and an increased interest in promoting open access and in publication politics. He takes part in millions of conferences and meetings, but the publication list reflects what kind of science he is doing. That is why the lack of publications is masked with conference abstracts. Such a scientist cannot insult reviewers and other scientists. Marketing is good for gaining attention, but it is not how science works. In recent years, Brembs has been doing just marketing. The reviewer has pointed to Brembs’s lack of understanding of some basic techniques in his proposal. I am not a drosophilist, but I understand the reviewer’s criticism regarding the RNAi and protein trap techniques. Brembs should be a bit humble and not pretend he is in a position to insult and disrespectfully criticize everybody who disagrees with his vision. Less marketing and more lab work.
December 5, 2015 at 9:52 am
In post-pub review, you’ll only see more non-experts and their grandmothers drowning out the real experts.
December 5, 2015 at 9:53 am
Personally, I tend to get bored by research questions so shallow you can publish several papers a year on it.
December 5, 2015 at 10:51 am
Note to self: being a grumpy, arrogant blowhard online isn’t going to make my grant reviews any better.
December 5, 2015 at 12:27 pm
Personally, I tend to get bored by research questions so shallow you can publish several papers a year on it.
Most awesome justification for shitty research productivity EVER!!!!!!!!
December 6, 2015 at 3:39 pm
I assumed people were being too critical of his publication list, but I see only two senior-author original research publications in the last four years, both in PLoS One. That’s pretty low productivity no matter how you look at it.
December 6, 2015 at 6:11 pm
Personally, I tend to get bored by research questions so shallow you can’t publish several papers a year on it.
December 7, 2015 at 12:47 am
rofl, you guys’ outrage buttons are just too easy to push, you crack me up. 🙂
This was so hilarious, let’s try it again: for those who think counting papers is a perfectly valid measure of competence, you may want to start counting Diederik Stapel’s or Jan Hendrik Schön’s papers (just Glam if it’s more than you can count) and then read “Research Assessment for Dummies” if you find the results confusing.
Bye bye guys, it’s been fun, but I have to head on over to YouTube, I’ve heard there are some helpful comments over there, too. 🙂
December 7, 2015 at 5:48 pm
Wow, he rolled out all the standard troll arguments on his way out, huh?
“I assumed people were being too critical of his publication list, but I see only two senior-author, original research publications in the last four years, both in PLoS One. That’s a pretty low productivity no matter how you look at it.”
-Just out of curiosity, and with the knowledge that it is highly field specific, what would be a good target for a 5 year retrospective on a new PI in the general area of molecular genetics/neurobiology dual class?
December 7, 2015 at 7:51 pm
jmz4gtu, There’s no answer to that question that will be fair. However, I have never seen an area where a PLOS publication every two years is considered productive enough to argue superiority.
He can claim to be having fun, but we know he’s bitter to the bone.
December 7, 2015 at 11:54 pm
Next time I’ll be sure to apply for a grant from NIB (National Institute of Boredom) since I could use some of that taxpayer money to fund my quest to not be bored.
December 9, 2015 at 1:22 pm
I strongly disagree that aggrieved applicants should ” let that stuff eventually roll off you like water off the proverbial duck’s back. ”
We instead should be raising our united voices for the removal of Dr. Richard Nakamura, director of the Center for Scientific Review (CSR). It is my opinion that the CSR has allowed the current state of affairs to degenerate to such a low level that new leadership is required to address the concerns of new applicants.
Insulting the publication list is an easy way to express bias against new applicants. Maybe, just maybe, some funding is required to generate high quality publishable data.
A system to address errors in review was put in place before Nakamura stepped into the CSR director position. Unfortunately, the appeal process no longer exists in substantive fact. Appeals of NIH initial peer review (Notice Number NOT-OD-11-064) are never successful. Any program officer will tell you this before you make the attempt.
December 9, 2015 at 1:47 pm
We instead should be raising our united voices for the removal of Dr. Richard Nakamura, director of the Center for Scientific Review (CSR). It is my opinion that the CSR has allowed the current state of affairs to degenerate to such a low level that new leadership is required to address the concerns of new applicants.
The previous guy was, if anything, worse.
What do you imagine a new CSR head can do?
“Better reviewers” isn’t the solution here. Fewer apps competing for more money is.
The CSR doesn’t really handle this question. And overall, NIH is unpersuaded that they should address it either.
December 10, 2015 at 7:29 pm
An interested CSR head could reconstitute the re-review process for grants containing errors in review. Applicants currently have no recourse against low-quality or bogus reviews.
An interested CSR head could start eliminating reviewers busted for writing bogus and fraudulent grant reviews. Aggrieved applicants may currently have more luck with the OIG than with the CSR when faced with this problem.
An interested CSR head could eliminate the “you don’t have enough papers” bias used to exclude new applicants from entering the grant and reviewer pools. 60–80% of papers are not reproducible anyway, so it appears that paper counting is not working for us. However, the observed drop in R01 funding for people under 36 years old is clearly biased and beneficial to some segments of the applicant pool (reviewers).
An interested CSR head could apologize for the disastrous state of disrepair of the current CSR. How has anyone let the review process descend to the depths we are now seeing?
Anyone else have any suggestions?
December 10, 2015 at 7:34 pm
My point is that calling for Nakamura’s head doesn’t fix this. These are structural problems of the NIH.
December 10, 2015 at 11:36 pm
“60-80% of papers are not reproducible”
-That’s quite a claim. If you’re referring to the industry studies, you have to remember they chose an *extremely* biased subset of papers. Ironically, their study is itself hugely statistically underpowered.
December 11, 2015 at 12:38 am
Also they didn’t fully report their methods….
December 12, 2015 at 11:15 pm
Nakamura would be a good place to start – let’s face it, something is broken and someone is not fixing it. The CSR is doing a terrible job at policing reviewers and adhering to NIH policy when it comes to fixing reviewer errors, fraudulent reviews, lack of appropriate expertise, whatever you want to call this CSR bullshit.
1. Begley CG, Ellis LM. Drug development: Raise standards for preclinical cancer research. Nature 2012;483:531–3.
2. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov 2011;10:712.
December 14, 2015 at 11:09 am
It would be symbolic to chop the CSR head, sure. But it is only useful to the extent it pursues the systematic problems. Blaming it on one person allows the system to continue to screw up.
December 14, 2015 at 11:10 am
What is a “fraudulent” review? If I may ask?
December 15, 2015 at 3:08 pm
I think all reviewers try to do their best – no one deliberately writes a “fraudulent review.” However, there are far too many reviewers reviewing WAY outside their expertise. I know an SRO who winces each time a reviewer gets up and says “This is not my area, but…” – but still, he keeps inviting them. In part because many people he asks refuse to serve, and in part because he can’t have as many total reviewers in different areas as he might like (money). So a few immediate (if unlikely) fixes would be to create modest review incentives (the only incentive now is continuous submission if you serve 6 times in 18 months – which is actually more than standing members serve!); permit as many people to serve as it takes to cover the grants; and for *(^&%*( sake, improve the review process by allowing more discrimination than 3 digits (2, 3, and 4)!
I also agree that there should be an appeals process and I would be interested to see the stats on whether those numbers (total appeals and successes) have actually gone down.
December 15, 2015 at 3:54 pm
That statement has a perfectly legit role in grant review and has no bearing on “fraudulent” review.
December 20, 2015 at 12:21 am
“no one deliberately writes a “fraudulent review”
Yea – they do. Actually I believe it is quite common. What else would account for the major change in R01 demographics seen in the last 10 years? Applicants under 36 years of age now have only a 3% chance of getting funded. Down from ~ 18% ~10 years ago.
Example of a fraudulent review.
We showed novel targeted cancer compounds hitting 57 of the 58 cell lines in the NCI-60 set below 1 uM, AND we showed 4 separate pieces of cell-based assay data in support in the body of our grant. These data included confocal images (cellular slices) demonstrating our drugs penetrating cancer cells. The reviewer then stated “data presented in the application do not make a convincing case that the () compounds are entering the cells.”
Sorry, but the inhibition of cellular proliferation is caused by the drugs penetrating the cells???? These drugs show little toxicity sitting in a tube in the freezer.
This is flat-out fraud – if the reviewer does not know what a cellular proliferation assay is (MTT or WST8) or what a microscope is, then they are required to recuse themselves from the review (NIH Reviewer Orientation). This particular reviewer instead wrote a completely bogus review.
Moreover – there is no recourse for an applicant receiving such reviews. Existing policy established in Notice Number: NOT-OD-11-064 is no longer being adhered to.
Nakamura would be a good place to start in cleaning up this mess.
PS Some of the cancer cell lines we hit are highly resistant to existing treatments (eg standard R-CHOP therapy).
December 20, 2015 at 10:56 am
How is that fraud? Incompetence *maybe*. But not evidence of fraud.
And your presentation makes it sound as though either your data are indirect (making the comment valid) or this was so clearly wrong that the other reviewers would have been all over it, if they had reason to do so (i.e., it was discussed).
December 20, 2015 at 11:24 am
@Stephen M: Been there, and have been the recipient of this sort of thing all too often (assuming that you are ‘in the right’ on this).
Over the years I’ve dealt with this–after the initial cursing at the reviewers–by trying to clarify this issue in the response when I submit a revision (assuming I can).
I once submitted a formal appeal and was told this was discussed at council and that ‘yes I was right but they didn’t like the grant anyway and it wouldn’t change the outcome’.
December 20, 2015 at 11:35 am
A big problem that is getting worse is stretching expectations of reviewer expertise. We’re encountering this in the disastrous CIHR virtual review, due to the need for 5 reviewers for each application layered onto a global ranking scheme. Each reviewer ranks 10 or so grants (1/11, 2/11, 3/11, etc.). This is similar in some respects to face-to-face review but differs in two fundamental ways. Firstly, there is basically no expectation of consensus, or even discussion between the reviewers. Secondly, there is no consideration of whether one reviewer’s batch differs qualitatively from another’s. The rank is king. This has led to massive variance and the adoption of arbitrary methods of cherry-picking. Moreover, if you are in a smaller field, you’ll be ranked by people well outside of it. On top of this, each structured part of the application is character-limited and there is a max of 2 pages of data for a 7-year, $3+ million program.
It’s very important to write your grant assuming the reviewer is not on top of your field, but when you don’t have the luxury of explaining principles and feasibility, you basically buy a lottery ticket.
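The lottery-like variance of batch ranking is easy to demonstrate with a toy simulation. Everything below is invented for illustration (the batch size, the Gaussian noise model for reviewer perception, and the scoring rule); it is a sketch of the scheme described above, not CIHR’s actual procedure:

```python
import random
import statistics

def simulate_round(n_grants=110, batch_size=11, reviews_per_grant=5,
                   noise=0.3, seed=0):
    """Toy model of batch-ranked review: each grant has a true quality in
    [0, 1]; every review drops it into a random batch, perceives quality
    with Gaussian noise, and records only its normalized rank in that batch."""
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_grants)]
    ranks = [[] for _ in range(n_grants)]
    for _ in range(reviews_per_grant):
        order = list(range(n_grants))
        rng.shuffle(order)  # fresh random batches for each review pass
        for start in range(0, n_grants, batch_size):
            batch = order[start:start + batch_size]
            # Sort worst-to-best by noisy perceived quality; the score is the
            # normalized position within the batch (0 = worst, 1 = best).
            perceived = sorted(batch, key=lambda g: quality[g] + rng.gauss(0, noise))
            for pos, g in enumerate(perceived):
                ranks[g].append(pos / max(len(batch) - 1, 1))
    score = [statistics.mean(r) for r in ranks]
    return quality, score

quality, score = simulate_round()
```

Because a grant is never compared against the full pool, its final score depends heavily on which other grants happened to share its batches: two grants of identical true quality routinely land on opposite sides of any payline drawn through the score distribution.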
It’s also becoming more common for agencies to depend on reviewers who really don’t put in the effort. I’m not talking about the McKnight #riffraff (the opposite), but people who are not familiar with the latest technical capabilities and who rely on their postdocs to learn how instruments work. These people use outdated and irrelevant arguments that are so inane that you know they haven’t a clue.
December 21, 2015 at 12:55 am
Gross incompetence is not a catch-all bucket for the nonsense we see in review. We see fraudulent reviews. I am clearly so weary of the mistreatment we have seen that I am now reaching out through your blog to find similarly mistreated applicants.
I hope all aggrieved applicants take their bogus reviews to the OIG for closer examination. The OIG Hotline accepts tips and complaints from all sources about potential fraud, waste, abuse, and mismanagement in Department of Health and Human Services programs, including the NIH grant system. http://oig.hhs.gov/fraud/report-fraud/
If a reviewer is reviewing a grant which is focused on drug development, they are required to have some expertise in the area. Alternatively, they are required to recuse themselves.
Other reviewers “all over it”??? We sometimes see comments from other reviewers in open disagreement, but in these rare instances the dissidents are clearly unable to address the shortfalls in the system.
New leadership at the CSR could be interested in cleaning up this mess.
MoBio – next time, ask for the minutes of the council meeting in a FOIA request – you will find there are no minutes available. Did the feeling that your work was discussed by the amazing council “Wizards of Oz” placate you? Pull open the curtain – the CSR needs closer examination of its adherence to existing policy. An actual council review would be a great place to start.
Wizard of Oz: They have one thing you haven’t got: a diploma. Therefore, by virtue of the authority vested in me by the Universitartus Committiartum E Pluribus Unum, I hereby confer upon you the honorary degree of ThD.
Scarecrow: ThD?
Wizard of Oz: That’s… Doctor of Thinkology.
December 21, 2015 at 9:31 am
http://criminal.findlaw.com/criminal-charges/fraud.html
The perpetrator has to have something to gain and make *knowingly false* statements. I don’t see how this applies to incompetent or erroneous grant review.
December 21, 2015 at 10:44 am
OK, here’s a great story that you all will no doubt love…
My lab had a paper accepted pending revisions at a prominent GlamourMag. Even though editors these days generally never say ‘accepted pending revisions’, I don’t use the phrase lightly. Our collaborator, when he saw the reviews, wrote “congratulations”. It was clearly just a matter of doing the work.
Imagine my surprise when a few weeks later the editors asked us to turn in our revised manuscript ahead of the deadline they had originally imposed! They wouldn’t say why. So back and forth go emails trying to figure out what the deal is, and they are acting really weird about it.
Then a few days later a former grad student tells me how she visited a competitor’s poster at a meeting and he spouted off all sorts of information straight from a recent NIH proposal that I had submitted. He was clearly using the information in that proposal to further their work. And he says they just submitted their paper to GlamourMag!
So the pieces are falling into place…
But why would GlamourMag care about him? And he was not on panel, so how did he get the proposal?
In the end we didn’t get the revisions done in time, and their paper got published. It turns out that a prominent-in-the-field co-author on that paper was on NIH panel, and would definitely have reviewed my proposal (which wasn’t funded). That prominent author also published very regularly in GlamourMag.
A couple years later that prominent co-author even admitted that he had given the junior (last) author “a LOT of help” with that paper. It was a weird not-said-but-said-between-the-words “he needed the help for his career”, and “you didn’t” kind of conversation.
But we still lost a Glamour Pub and a grant. My postdoc who was first author on that got totally screwed and left science. We eventually published our paper in a dump journal, where it has still gotten way more citations than the Glamour Pub — mainly because our reagents actually work (unlike some of theirs) and our conclusions have held up (unlike some of theirs). I have never submitted another paper to that GlamourMag. I know not to trust that Prominent PI or the little slimey fucker who was last author on that paper, who is now department head in a cutthroat soft money department, where I watched as he then unfairly denied tenure to another guy I know.
There are no policies or procedures that could prevent this sort of crap. It’s just certain humans being assholes. The world is full of them. Science is no different.
December 21, 2015 at 1:20 pm
@StephenM: Since the PO is someone I know quite well, and is not someone who would knowingly lie to me, I believe what he said.
He also relayed some ‘unofficial verbiage’ from the reviewers to me in confidence so I have little doubt he was telling me something close to the truth.
Years later looking over the grant I agree with the reviewers that it shouldn’t have been funded.
December 21, 2015 at 2:31 pm
LOL – Are any other people weary of the incompetent or erroneous grant reviewer argument?
There are policies to address reviewer errors – Notice Number: NOT-OD-11-064
These policies, however, do not really exist in substantive fact. Seems like we’ve been around this block a couple of times.
The Other Dave – The reason reviewers sign a CDA is to prevent the misuse of confidential information.
The NIH may take steps in response to a violation, including possible government-wide suspension or debarment.
December 22, 2015 at 12:05 pm
Stephen — If I could prove it, for sure I would shut down the person who acted improperly with my proposal. I think misconduct is a pattern with that PI. I have heard too many stories from others, and once I reviewed a paper from that lab where both I and the other reviewer noticed clear evidence of image tampering, and said so in our review. The editors (J. Neurosci) rejected the paper with no possibility for re-review, but that was all that happened.
That’s why I am a fan of ‘barred for life’ for misconduct and hate that NIH program where they try to retrain people convicted of misconduct. There are too many people trying to get into the system. We don’t need to keep any rotten apples.
December 26, 2015 at 4:44 pm
December 30, 2015 at 12:02 pm
The Other Dave,
You don’t have to prove this person acted inappropriately. You simply may bring your concerns to the OIG and let them investigate.
Your concerns may be valid and removing bogus reviewers from the system will help us all.
http://oig.hhs.gov/fraud/grant/index.asp
December 30, 2015 at 12:49 pm
TOD- once it gets to the ORI finding level, does anyone ever get back into the system?
December 30, 2015 at 5:31 pm
Stephen: But what would they do? All I can tell them is what I heard and the curious coincidences. I am not sure that it would even be ethically appropriate for them to take action based on the circumstantial evidence that I could provide.
DM: I don’t know. I hope not.