Predicting the future

June 24, 2015

One of the biggest whoppers told by Ronald Germain in his manifesto on fixing the NIH is this:

it is widely accepted that past performance, not a detailed research plan, is the best predictor of future success. So why stay with the fiction that R01 grant proposals are the best method for determining support of the individual scientist,

As I often say, there is nothing so a-scientific and illogical as a scientist opining on the business of science.

The way he states this plays the usual sleight of hand with the all-important, unmentioned variable.

Namely, the means to do the research. Grant funding.

There is no bigger predictor for the success of a given research plan submitted to the NIH than whether or not the PI receives the funding to do the work. Funnily enough, Germain actually recognizes this and totally undercuts his argument in one of his caveats:

with 5–7 years of support per round and 1–2 years of bridge funding available, I think it is unlikely that a highly competent investigator will fail to produce enough during 6–9 years of research to warrant a “passing grade” without further extensions, except in extenuating circumstances.

Right? He sees right here that all that matters is funding. Most competent investigators will succeed if they but have the funding! Which makes his idea that this will cut down on competition and the “stochastic” nature of getting the grant funding look as silly as it is.

It’s just another way to say, “We’ll pick our favored winners in advance of any independent accomplishment, based on who they trained with (i.e., us!). They will keep right on winning because they will be the only ones with the means to accomplish anything. All others can stay the heck away from our effortless stream of moola, no matter how good their ideas might be.”

This is important, and it is why basing funding on accomplishment, rather than on great ideas and the capacity to fulfill them, is a recipe for a death spiral in the productivity of the Extramural NIH as a whole.

This plan will self-reinforce and harden a silo around a limited set of brains, doing science in the way they see fit. Good ideas from outside this silo will not be given a chance to compete….unless they happen to occur to someone inside the silo. And on the whole, that person will not represent a diversity of ideas, approaches and interests. This will, across the enterprise of NIH-funded science, reduce the rate of discovery.

Those who manage to accomplish will continue to have a stranglehold on the means to accomplish. Means leads to accomplishment leads to more means in the Germain scheme.

So what gets accomplished will be narrowed, iteratively, with each 5-7 year review. Only to be refreshed, minimally, with each squeezed-down cohort of new hires who manage to make it into his starter, block-grant scenario. Those, of course, will be selected by Universities on the basis of seeming like the people who are already most successful, since the review will be anticipated to be on the basis of the person. Naturally, the trainees of the insider club will be most highly sought after. (Take a look at the way HHMIers, especially the Early Career ones, have been trained, folks. …talk about the past predicting the future and all, right?)

So when you hear someone talking about “the best predictor of future performance is past performance”, make sure to ask whether that is with or without the funding and how they know this.

The second truthy whopper Germain tells follows soon after.

true creativity is often cause for lower scores?

Personally, I have yet to see a well-prepared, truly creative grant get killed just for being creative and new. Maybe wackaloon geniuses who have great ideas but simply refuse to write an actual grant proposal struggle in some sections. I guess. But here’s a secret for Germain. (A “secret” known to just about anyone who has served on 2-3 traditional standing study sections.) The people he is talking about, those who have demonstrated a high level of accomplishment in the past 5-7 years, get away with utterly crappy proposals and still get their funding based on their record of accomplishment.

That’s right. We ALREADY have a system in place that HUGELY benefits and prioritizes the funding of people with a track record of accomplishment. The “creativity” in their proposal does not prevent them from getting funded. Nor, btw, does diverging substantially from the plan they got the money for hurt them in the next round of evaluation.

Given this, there is no conceivable way that switching to Germain’s plan changes the ability to be creative.

Now, for those outsiders or people with a brand new idea absent a track record….yeah, they may take it on the chin under the current NIH system. But they would ALSO fail to gain support under Germain’s. It isn’t as if we’re inventing some new peers to do the reviewing here. Whether it were McKnight’s panel of NAS members, Germain’s deserving elite, or traditional NIH-style panels judging the “track record”….there is no way we can assume that the genius PI behind every PCR or gene knockout technology or whatever Nobel-worthy breakthrough will be immediately recognized as awesome and funded.


JIF notes

June 24, 2015

More on NPP’s pursuit of BP is here.

See this for reference.

Additional Reading:
The 2012 JIFs are out

Subdiscipline categories and JIF

Why JIF is complete sheepshit from Stephen Curry

A significant change in Impact Factor

Suggesting Reviewers

June 24, 2015

Who do you select when listing potential reviewers for your manuscripts? 

I go for suggestions that I think will be favorably inclined toward acceptance. This may be primarily because they work on similar stuff (otherwise they aren’t going to be engaged at all) but also because I think* they are favorable towards my laboratory. 

Of course. 

(I have also taken to making sure I suggest at least 50% women but that is a different matter.)

I wouldn’t suggest anyone who violates the clearest statement of automatic COI that pertains to me, i.e., the NIH grant review 3-year window of collaboration.

Where do you get your standards?


*I could always be wrong, of course.