Open Grantsmanship

April 27, 2016

The Ramirez Group is practicing open grantsmanship by posting “R01 Style” documents on a website. This is certainly a courageous move and one that is unusual for scientists. It was not so long ago that mid-to-senior level Principal Investigator types were absolutely dismayed to learn that CRISP, the forerunner to RePORTER, would hand over their funded grants’ abstracts to anyone who wished to see them.

There are a number of interesting things here to consider. On the face of it, this responds to a plea that I’ve heard now and again for real actual sample grant materials. Those who are less well-surrounded by grant-writing types can obviously benefit from seeing how the rather dry instructions from NIH translate into actual working documents. Good stuff.

As we move through certain changes put in place by the NIH, even well-experienced folks can benefit from seeing how one person chooses to deal with the Authentication of Resources requirement or some such. The budgeting may be helpful for others. Ditto the Vertebrate Animals section.

There is the chance that this will work as Open Pre-Submission Peer Review for the Ramirez group as well. For example, I might observe that referring to Santa Cruz as the authoritative proof of authentic antibodies may not have the desired effect on all reviewers. This might then allow them to take a different approach to this section of the grant, avoiding the dangers of a reviewer who “heard SC antibodies are crap”.

But there are also drawbacks to this type of Open Science. In this case I might note that posting a Vertebrate Animals statement (or certain types of research protocol description) is just begging the AR wackaloons to make your life hell.

But there is another issue here that I think the Readers of this blog might want to dig into.

Priority claiming.

As I am wont to observe, the chances are high in the empirical sciences that if you have a good idea, someone else has had it as well. And if the ideas are good enough to shape into a grant proposal, someone else might think these thoughts too. And if the resulting application is a plan that will be competitive, well, it will have been shaped into a certain format space by the acquired wisdom that is poured into a grant proposal. So again, you are likely to have company.

Finally, we all know that the current NIH game means that each PI is submitting a LOT of proposals for research to the NIH.

All of this means that it is likely that if you have proposed a 5 year plan of research to the NIH, someone else has already proposed, or will soon propose, something that is a lot like it.

This is known.

It is also known that your chances of bringing your ideas to fruition (published papers) are a lot higher if you have grant support than if you do not. The other way to say this is that if you do not happen to get funded for this grant application, the chances that someone else will publish papers related to your shared ideas are higher.

In the broader sense this means that if you do not get the grant, the record will be less likely to credit you for having those ideas and brilliant insights that were key to the proposal.

So what to do? Well, you could always write Medical Hypotheses pieces and review papers, sure. But these can be imprecise. They describe general hypotheses and predictions but….that’s about all.

It would be of more credit to you to lay out the way that you would actually test those hypotheses, would it not? In all of the brilliant experimental design elegance, key controls and fancy scientific approaches that are respected after the fact as amazing work. Maybe even with a little bit of preliminary evidence that you are on the right track, even if that evidence is far too limited to ever be published.

Enter the Open Grantsmanship ploy.

It is genius.

For two reasons.

First, of course, is pure priority claiming. If someone else gets “your” grant and publishes papers, you get to go around whining that you had the idea first. Sure, many people do this but you will have evidence.

Second, there is the subtle attempt to poison the waters for those other competitors’ applications. If you can get enough people in your subfield reading your Open Grant proposals then just maaaaaybe someone on a grant panel will remember this. And when a competing proposal is under review just maaaaaaybe they will say “hey, didn’t Ramirez Group propose this? maybe it isn’t so unique.” Or maybe they will be predisposed to see that your approach is better and downgrade the proposal that is actually under review* accordingly. Perhaps your thin skin of preliminary data will be helpful in making that other proposal look bad. Etc.

__
*oh, it happens. I have had review comments on my proposals that seemed weird until I became aware of other grant proposals that I know for certain sure couldn’t have been in the same round of review. It becomes clear in some cases that “why didn’t you do things this way” comments are because that other proposal did indeed do things that way.

On writing a review

April 26, 2016

Review unto others

April 25, 2016

I think I’ve touched on this before but I’m still seeking clarity.

How do you review?

For a given journal, let’s imagine this time, one from which you sometimes get manuscripts rejected and sometimes get acceptances.

Do you review manuscripts for that journal as you would like to be reviewed?

Or as you have perceived yourself to have been reviewed?

Do you review according to your own evolved wisdom or with an eye to what you perceive the Editorial staff of the journal desire?

Sunday Sermon

April 24, 2016

I just want you to think about that which you do. 

Labor

April 22, 2016

If you have a laboratory that has one postdoc, one grad student and on average has two undergrad volunteers most of the time, you don’t run a two person lab. You run a four person lab.

Reflexively appealing to how they have to be trained, as a ploy to pretend you aren’t using their labor, is nonsense.

Shorthand

April 22, 2016

Storyboard

Pretty data

N-up

Prove the hypothesis

Representative image

Trend for significance

Different subcultures of science may use certain phrases that send people in other traditions into paroxysms of critique.

Mostly it is because such phrasing can sound like bad science. As if the person using it doesn’t understand how dangerous and horrible their thinking is. 

We’ve gone a few rounds over storyboarding and representative images in the past. 

Today’s topic is “n-up”, which is deployed, I surmise, after examining a few results, replicates or subjects that look promising for what the lab would prefer to be so. It raises my hackles. It smells to me like a recipe for confirmation bias and false alarming. To me.

Apparently this is normal phrasing for other people and merely indicates the pilot study is complete? 

How do you use the phrase?

He’ll be missed.

jmz4 asks:

DM, what’s your reasoning behind advocating for reducing grad student numbers instead of just bottlenecking at the PD phase? I’d argue that grad students currently get a pretty good deal (free degree and reasonable stipend), and so are less exploited. Also, scientific training is useful in many other endeavors, and so the net benefit to society is to continue training grad students.

My short answer is that it is more humane.

MillerLab noted on the twitters that the NIA has released its new paylines for FY2016. If your grant proposal scores within the 9%ile zone, congrats! Unless you happen to be an Early Stage Investigator, in which case you only have to score within the top 19% of applications, woot!

I was just discussing the continuing nature of the ESI bias in a comment exchange with Ferric Fang on another thread. He thinks

The problem that new investigators have in obtaining funding is not necessarily a result of bias but rather that it is more challenging for new investigators to write applications that are competitive with those of established investigators because as newcomers, they have less data and fewer accomplishments to cite.

and I disagree, viewing this as assuredly a bias in review. The push to equalize success rates of ESI applicants with those of established investigators (generational screw-job that it is) started back in 2007 with prior NIH Director Elias Zerhouni. The mechanism to accomplish this goal was, and continues to be, naked quota-based affirmative action. NIH will fund ESI applications out of the order of review until they reach approximately the same success percentages as are enjoyed by the established investigator applications. Some ICs are able to game this out predictively by using different paylines, i.e., the percentile ranks within which almost all grants will be funded.
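For the sake of concreteness, here is a minimal sketch of how that quota mechanism cashes out as separate paylines. To be clear, the score distributions below are invented for illustration, not NIH data, and the function is mine, not anything the ICs publish.

    # Sketch: given a group's percentile scores, find the payline (cutoff)
    # that yields a target success rate for that group. Illustration only.
    def payline_for_rate(percentiles, target_rate):
        ranked = sorted(percentiles)
        n_funded = max(1, round(target_rate * len(ranked)))
        return ranked[n_funded - 1]  # most lenient percentile still funded

    # Hypothetical scores: established-investigator apps cluster at better
    # (lower) percentiles than ESI apps.
    exp_scores = [3, 5, 7, 8, 9, 12, 15, 20, 25, 33]
    esi_scores = [6, 10, 14, 18, 19, 24, 28, 35, 41, 50]
    target = 0.5  # whatever success rate the IC wants both groups to hit
    print(payline_for_rate(exp_scores, target))  # -> 9
    print(payline_for_rate(esi_scores, target))  # -> 19

The only point of the sketch is that if one group’s applications are scored systematically worse, equalizing success rates forces a more generous payline for that group.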

As mentioned, NIA has to use a 19%ile cutoff for ESI applications to equal a 9%ile cutoff for established investigator applications. This got me thinking about the origin of the ESI policies in 2007 and the ensuing trends. Luckily, the NIA publishes its funding policy on the website here. The formal ESI policy at NIA apparently didn’t kick in until 2009, from what I can tell. What I am graphing here are the paylines used by NIA by fiscal year to select Exp(erienced), ESI and New Investigator (NI) applications for funding.

It’s pretty obvious that the review bias against ESI applications continues essentially unabated*. All the talk about “eating our seed corn”, the hand wringing about a lost generation, the clear signal that NIH wanted to fund the noobs at equivalent rates as the older folks….all fell on deaf ears as far as the reviewers are concerned. The quotas for the ESI affirmative action are still needed to accomplish the goal of equalizing success rates.

I find this interesting.

__
*Zerhouni noted right away [PDF] that study sections were fighting back against the affirmative action policy for ESI applications.

Told about the quotas, study sections began “punishing the young investigators with bad scores,” says Zerhouni.

Note: It is probably only a coincidence that CSR reduced the number of first time reviewers in FY2014, FY2015 relative to the three prior FYs.

Eric Hand reported in Science that one NSF pilot program found that allowing any-time submission reduced application numbers.

Assistant Director for Geosciences Roger Wakimoto revealed the preliminary results from a pilot program that got rid of grant proposal deadlines in favor of an anytime submission. The numbers were staggering. Across four grant programs, proposals dropped by 59% after deadlines were eliminated.

I have been bombarded with links to this article/finding and queries as to what I think.

Pretty much nothing.

I do know that NIH has been increasingly liberal with allowing past-deadline submissions from PIs who have served on study section. So there is probably a data source to draw upon inside CSR if they care to examine it.

I do not know if this would do anything similar if applied to the NIH.

The NSF pilot was for

geobiology and low-temperature geochemistry, geomorphology and land-use dynamics, hydrological sciences, and sedimentary geology and paleobiology.

According to the article these are fields in which

“many scientists do field work, having no deadline makes it easier for collaborators to schedule time when they can work on a proposal”.

This field work bit is not generally true of the NIH extramural community. I think it obvious that continual submission helps with scheduling time, but I would note that it also eliminates a stick that the more proactive members of a collaboration can use to beat the laggards into line. As a guy who hits his deadlines for grant submission, it’s probably in my interest to see the encouragements that the lower-energy folks require further reduced.

According to a geologist familiar with reviewing these grants

The switch is “going to filter for the most highly motivated people, and the ideas for which you feel the most passion,” he predicts. When he sits on merit review panels, he finds that he can usually reject half of the proposals right away as being hasty or ill-considered. “My hope is that this has taken off the bottom 50%,” he says. “Those are the ones you read and say, ‘Did they have their heart in this?’”

Personally I see very few NIH grant proposals that appear to me to be “hasty or ill-considered” or cause me to doubt the PI has her heart in it. And you know how I feel about the proposition that the RealProblem with NIH grant success hinges on whether or not PIs refine and hone and polish their applications into some shining gem of a document. That applications are down, and therefore success rates go up, is the only thing we need to take away from this pilot, if you ask me. (To pick hypothetical numbers: a program funding 20 of 200 proposals sits at 10%; cut submissions by 59% to 82 and the same 20 awards is a 24% success rate.) Any method by which you could decrease NIH applications would likewise seem to improve success rates.

Would it work for NIH types? I tend to doubt it. That program at NSF started with only two submission rounds per year. NIH has three rounds for funding per year, but this results from a multitude of deadlines including new R01, new R21/R03, two more for the revised apps, special ones for AIDS-related, RFAs and assorted other mechanisms. As I mentioned above, if you review for the NIH (including Advisory Council service) you get an extra extension to submit for a given decision round.

The pressure for most of us to hit any specific NIH deadline during the year is, I would argue, much lower at baseline. So if the theory is that NSF types were pressured to submit junky applications because their next opportunity was so far away….this doesn’t apply to NIH folks.

NIH Grant Lottery

April 18, 2016

Fang and Casadevall have a new piece up that advocates turning NIH grant selection into a modified lottery. There is a lot of the usual dissection of the flaws of the NIH grant funding system here, dressed up as “the Case for”, but they don’t really make any specific or compelling argument beyond “it’s broken, here’s our RealSolution”.

we instead suggest a two-stage system in which (i) meritorious applications are identified by peer review and (ii) funding decisions are made on the basis of a computer-generated lottery. The size of the meritorious pool could be adjusted according to the payline. For example, if the payline is 10%, then the size of the meritorious pool might be expected to include the top 20 to 30% of applications identified by peer review.
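As I read their proposal, the mechanics would be something like the following. This is a sketch under my own assumptions; the pool multiplier, the data shape and the handling of the random draw are not specified in their piece.

    import random

    # Sketch of the two-stage idea: (i) peer review ranks the applications;
    # (ii) awards are drawn at random from a 'meritorious' pool a few times
    # larger than the payline.
    def modified_lottery(scored_apps, payline=0.10, pool_multiplier=2.5, seed=None):
        rng = random.Random(seed)
        ranked = sorted(scored_apps, key=lambda app: app["pctile"])  # lower is better
        n_awards = round(payline * len(ranked))
        pool = ranked[:round(pool_multiplier * payline * len(ranked))]  # e.g. top 25%
        return rng.sample(pool, min(n_awards, len(pool)))

Run that over a few hundred mock applications and anything landing in the top quarter or so has an equal shot at one of the roughly 10% of awards.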

They envision eliminating the face to face discussion to arrive at the qualified pool of applications:

Critiques would be issued only for grants that are considered nonmeritorious, eliminating the need for face-to-face study section meetings to argue over rankings,

Whoa, back up. Under current NIH review, critiques are not a product of the face-to-face meeting, and producing them is not the “need” that meeting to discuss the applications serves. They are misguided in a very severe and fundamental way about this. Discussion serves, ideally, to calibrate individual review, to catch errors, to harmonize disparate opinions, to refine the scoring….but in the majority of cases the written critiques are not changed a whole lot by the process and the resume of the discussion is a minor outcome.

Still, this is a minor point of my concern with their argument.

Let us turn to the juxtaposition of

New investigators could compete in a separate lottery with a higher payline to ensure that a specific portion of funding is dedicated to this group or could be given increased representation in the regular lottery to improve their chances of funding.

with

we emphasize that the primary advantage of a modified lottery would be to make the system fairer by eliminating sources of bias. The proposed system should improve research workforce diversity, as any female or underrepresented minority applicant who submits a meritorious application will have an equal chance of being awarded funding.

Huh? If this lottery is going to magically eliminate bias against female or URM applicants, why is it going to fail to eliminate bias against new investigators? I smell a disingenuous appeal to fairness for the traditionally disadvantaged as a cynical ploy to get people on board with their lottery plan. The comment about new investigators shows that they know full well it will not actually address review bias.

Their plan uses a cutoff. 20%, 30%…something. No matter what that cutoff line is, reviewers will know something about where it lies. And they will review/score grants accordingly. Just as Zerhouni noted, when news of special ESI paylines got around, study sections immediately started giving ESI applications even worse scores. If there is a bias today that pushes new investigator, woman or URM PIs’ applications outside of the funding range, there will be a bias tomorrow that keeps them disproportionately outside of the Fang/Casadevall lottery pool.

There is a part of their plan that I am really unclear on and it is critical to the intended outcome.

Applications that are not chosen would become eligible for the next drawing in 4 months, but individual researchers would be permitted to enter only one application per drawing, which would reduce the need to revise currently meritorious applications that are not funded and free scientists to do more research instead of rewriting grant applications.

This sounds suspiciously similar to a plan that I advanced some time ago. This post from 2008 was mostly responding to the revision-queuing behavior of study sections.

So this brings me back to my usual proposal of which I am increasingly fond. The ICs should set a “desired” funding target consistent with their historical performance, say 24% of applications, for each Council round. When they do not have enough budget to cover this many applications in a given round, they should roll the applications that missed the cut into the next round. Then starting the next Council round they should apportion some fraction of their grant pickups to the applications from the prior rounds that were sufficiently meritorious from a historical perspective. Perhaps half roll-over and half from the current round of submissions. That way, there would still be some room for really outstanding -01 apps to shoulder their way into funding.
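The queuing mechanics I had in mind look roughly like this. A sketch only; the 24% target and the half-and-half split come from that old post, while the data shapes and names are invented here.

    from collections import deque

    # Sketch of the rollover idea: apps that score within the historical
    # target but miss the current budget queue up, and compete for roughly
    # half of the next round's pickups.
    def run_council_rounds(rounds, target_rate=0.24, rollover_share=0.5):
        queue = deque()   # meritorious-but-unfunded apps from earlier rounds
        funded = []
        for apps, budget_slots in rounds:            # apps pre-sorted best-first
            meritorious = apps[:round(target_rate * len(apps))]
            from_queue = min(len(queue), round(rollover_share * budget_slots))
            funded += [queue.popleft() for _ in range(from_queue)]
            fresh = meritorious[:budget_slots - from_queue]
            funded += fresh
            queue.extend(meritorious[len(fresh):])   # missed the cut; roll forward
        return funded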

The great part is that essentially nothing would change. The A2 app that is funded is not going to result in scientific conduct that differs in any substantial way from the science that would have resulted from the A1/15%ile app being funded. New apps will not be any more disadvantaged by sharing the funding pie with prior rounds than they currently are facing revision-status-bias at the point of study section review.

What I am unclear on in the Fang/Casadevall proposal is the limit to one application “per drawing”. Is this per Council round per IC? Per study section per Council round per IC? NIH-wide? Would the PI be able to stack up potentially-meritorious apps that go unfunded so that they get considered in series across many successive rounds of lotteries?

These questions address their underlying assumption that a lottery is “fair”. It boils down to the question of whether everyone is equally able to buy the same number of lottery tickets.

The authors also have to let in quite reasonable exceptions:

Furthermore, we note that program officers could still use selective pay mechanisms to fund individuals who consistently make the lottery but fail to receive funding or in the unlikely instance that important fields become underfunded due to the vagaries of luck.

So how is this any different from what we have now? Program Officers are already trusted to right the wrongs of the tyranny of peer review. Arguing for this lottery system implies that you think that PO flexibility on exception funding is either insufficient or part of the problem. So why let it back into the scheme?

Next, the authors stumble with a naked assertion

The proposed system would treat new and competing renewal applications in the same manner. Historically, competing applications have enjoyed higher success rates than new applications, for reasons including that these applications are from established investigators with a track record of productivity. However, we find no compelling reason to justify supporting established programs over new programs.

that is highly personal. I find many compelling reasons to justify supporting established programs. And many compelling reasons not to do so preferentially. And many compelling reasons to demand a higher standard, or to ban them entirely. I suspect many of the participants in the NIH system also favor one or the other of the different viewpoints on this issue. What I find to be unconvincing is nakedly asserting this “we find no compelling reason” as if there is not any reasonable discussion space on the issue. There most assuredly is.

Finally, the authors appeal to a historical example which is laughably bad for their argument:

we note that lotteries are already used by society to make difficult decisions. Historically, a lottery was used in the draft for service in the armed forces…If lotteries could be used to select those who served in Vietnam, they can certainly be used to choose proposals for funding.

As anyone who pays even the slightest attention realizes, the Vietnam era selective service lottery in the US was hugely biased and subverted by the better-off and more-powerful to keep their offspring safe. A higher burden was borne by the children of the lower classes, the unconnected and, as if we need to say it, ethnic minorities. Referring to this example may not be the best argument for your case, guys.

HAHAHHHA. I am so full of myself today. I actually said this:

It’s like cult rescue though. You don’t try to rehab the head, you try to get the innocents out b4 the FlavorAde is poured

(Yes, it was a discussion of Glamour culture of science. As if you couldn’t guess.)

Representative Images

April 15, 2016

New rule: Claims of a “representative” image should have to be supported by submission of 2 better ones that were not included.

It works like this.

Line up the 9 images that were quantified for the real analysis of the outcome, in order from the one that least supports your desired interpretation of the mean effect to the one that supports it best.

Your “representative” image is #5. So you should have to prove your claim to have presented a representative image in peer review by providing #8 and #9.
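If you wanted to operationalize the rule, it might look like the following sketch. The effect_size field and the nine-image setup are illustrative assumptions, nothing more.

    # Sketch of the proposed rule: rank the quantified images from least to
    # most supportive of the claimed effect; the median is what you may call
    # 'representative', and the two best go to the reviewers as proof.
    def representative_with_proof(images):
        ranked = sorted(images, key=lambda img: img["effect_size"])
        representative = ranked[len(ranked) // 2]    # the median, e.g. #5 of 9
        proof_of_claim = ranked[-2:]                 # the two best, e.g. #8 and #9
        return representative, proof_of_claim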

My prediction is that the population of published image data would get a lot uglier, less “clear” and would more accurately reflect reality.

Interesting comment from AnonNeuro:

Reviews are confidential, so I don’t think you can share that information. Saying “I’ll review it again” is the same as saying “I have insider knowledge that this paper was rejected elsewhere”. Better to decline the review due to conflict.

I don’t think I’ve ever followed this as a rule. I have definitely told editors, in the past, when a manuscript has not been revised from a previously critiqued version (I don’t say which journal rejected the authors’ work). But I can’t say that I invariably mention it either. If the manuscript has been revised somewhat, why bother? If I like it and want to see it published, mentioning I’ve seen a prior version elsewhere seems counterproductive.

This comment had me pondering my lack of a clear policy.

Maybe we should tell the editor upon accepting the review assignment so that they can decide if they still want our input?

Revise After Rejection

April 14, 2016

This mantra, provided by all good science supervisor types including my mentors, cannot be repeated too often.

There are some caveats, of course. Sometimes, for example, the reviewer wants you to temper your justifiable interpretive claims or cut Discussion points that interest you.

That sort of change you only need to make as a response to review, when the manuscript has a chance of acceptance.

Outrageous claims that are going to be bait for any reviewer? Sure, back those down.