NIH grant review obsesses over testing hypotheses. Everyone knows this.

If there is a Stock Critique that is a more reliable way to kill a grant’s chances than “There is no discernible hypothesis under investigation in this fishing expedition”, I’d like to know what it is.

The trouble, of course, is that once you’ve been lured into committing to a hypothesis, your grant can then be attacked over whether that hypothesis is likely to be valid.

A special case of this is when some aspect of the preliminary data that you have included even dares to suggest that perhaps your hypothesis is wrong.

Here’s what bothers me. It is one thing if you have Preliminary Data suggesting some major methodological approach won’t work. That is, that your planned experiment cannot produce interpretable data that bears on falsifying the hypothesis. This, I would agree, is a serious problem for funding a grant.

But any decent research plan will have experiments that converge to test the hypothesis at different levels and from different angles. It shouldn’t rest on a single experiment, or it is a prediction, not a real hypothesis. Some data may tend to support the hypothesis and some other data may tend to falsify it. Generally speaking, in science you are not going to get really clean answers every time for every single experiment. If you do…well, let’s just say those Golden Scientist types have a disproportionate rate of being busted for faking data.

So.

If you have one little bit of Preliminary Data in your NIH Grant application that maybe, perhaps, tends to reject your hypothesis, why should this carry any different weight than if it had happened to support your hypothesis?

What influence should this have on whether it is a good idea to do the experiments to fully test the hypothesis that has been advanced?

Because that is what grant review should be deciding, correct? Whether it is a good idea to do the experiments. Not whether or not the outcome is likely to be A or B. Because we cannot predict that.

If we could, it wouldn’t be science.