There’s an interesting issue of Pharmacology, Biochemistry & Behavior that is focused on Reproducibility of animal models for neuropsychiatric diseases.

Reading through the articles I am struck by how this effort is like throwing a bucket of water on a barn blaze. You might think that a “reproducibility” paper would answer a lot of questions. They do. But they also raise more and more and more questions. Take the following from Richetto et al. Effects of light and dark phase testing on the investigation of behavioural paradigms in mice: Relevance for behavioural neuroscience.

This is a pretty duck-soup behavioral pharmacology assay- inject an experimental subject with drugs and see what happens. In this case, inject a mouse with amphetamine and see how much it runs around. (Nice feature – N=6 male and 6 female per group, SABV orthodox, yo!)

Figure 5a from Richetto et al, 2019

Ok, so there’s a light cycle effect. I’ve blogged about that before (2015; 2018). And there was also a light cycle effect on cFos in Nucleus Accumbens and midbrain which did not interact with an effect of amphetamine treatment. So. Whether or not light cycle affects replication in this narrow instance depends on whether you are interested in quantitative differences or relative differences. The behavioral curves are more or less the same to a first approximation. But what’s missing here? Threshold, for one thing. This is a single dose of amphetamine (2.5 mg/kg, i.p.). What happens at lower or higher doses? At some point you fail to distinguish a drug effect from vehicle….ooops. Where’s the vehicle control? Injected prior to the amphetamine and characterized for only 20 minutes. Where are the mRNA controls coming from? Wait, that whole experiment was 30 min after the saline or drug injection. What about later time-points (25th time bin) when the behavioral difference emerged?

Crap, back to the behavioral control. Why not run the saline injections in parallel all the way out to the 25th time bin? Because what you would find is that in the light phase the animals basically go to sleep. Wait, they are testing in the dark, right? ….back to the paper. OMG, it doesn’t really directly say, and all we have to go on is testing in dark versus light vivarium cycle. Another factor, gaaaah. Testing in dark or light versus the circadian cycle.

The point is not to ask why they didn’t test absolutely everything but to point out that even a fairly effortful “replication” study of an exceptionally simple phenomenon gets complicated in a huge hurry.

One of my NIH grant game survival mantras is that one should never let the lack of one particular bit of preliminary data prevent you from submitting a grant application.

There is occasionally a comment on the grant game that suggests one needs to have the data that support the main hypotheses in the proposal before one can get a fundable score. This may be snark or it may be heartfelt advice. Either way, I disagree.

I believe that preliminary data serve many purposes and sometimes all you need are feasibility data. Data that show you can perform the types of experiments or assays that you are proposing. That you have a fairly good handle on the troubleshooting space and know how to interpret your results.

Sometimes your data are beyond mere feasibility but are somewhat indirectly related to the topic at hand. This is also an area where you do not require overwhelming, closely related preliminary data.

I understand that you may, particularly if you are less experienced in the game, have a series of disappointing summary statements that appear to show that you will never ever get funded until you have a grant’s worth of money to generate the preliminary data that support all the hypotheses and only have to be written up once the grant funds. I am willing to believe that in limited cases there may be study sections where this is true. But I suspect that even for noobs it is not universally true and the best strategy is to keep the proposals flying out the door and into different study sections.

The reason is that you will never ever lawyer a grant into funding by having just exactly the right Goldilocks-approved combination of preliminary data. Preliminary data criticisms are often mere StockCritiques that are deployed or ignored depending on the reviewer’s gestalt impression. If the reviewer is basically on board, you only need enough preliminary data to beat back the most trivial of complaints about whether you have a pulse as a lab. And if the reviewer is not convinced by the larger picture of the proposal, you will never make them give it a 1 score just because the preliminary data are so pretty.

A recent anecdote: I had a grant come back with one reviewer saying “there is not enough preliminary data to show that this one specific thing will ever work”. Naturally, getting “this one thing” to work was in fact the purpose of the proposal, and it was stated all throughout why it needed work thrown at it and why it was a good idea to throw that work at the question. The second reviewer said “The preliminary data show that this one specific thing is basically all done so there is no need to put any funds into it”.

This, I surmise, is what happens when you hit that perfect sweet spot with the preliminary data. It is all down to the eyes of the beholder whether the data are supportive of the proposed work, or so complete that they question the need to do any more work.

When I have formulated these views in the past I have apparently managed to screw up and fail to communicate. A certain potty mouthed ex-blogger at this site used to say something like “a given bit of preliminary data supports an infinity of potential grant proposals”. And I would bang on about not waiting for some perfect figure, but to hit your deadlines with whatever you happened to have around.

It has recently come to my attention that I have not been clarifying this enough. There is a subtle difference, I guess, in how one assembles a grant from the available preliminary data. One approach is to have a firm idea of what you want to propose and then to search your lab books and hard drive for data that seemingly support that idea. And I do think this is okay to do, as part of a diversified grant writing strategy. But what I also meant to convey, and didn’t, is that one should be taking a look at the data one is generating, or has in hand, and to ask oneself “what is the best proposal that arises from these data?”. In retrospect I meant the latter to a large degree.

Look, presumably you conducted those experiments or collected those data for a purpose. They have a place in the world. Work from that. What were you thinking? As a second level, think about that other stuff you have in hand and where it might help, once a handful of your data has started telling a particular story.

The bottom line is that when I say one should use a given bit of preliminary data in many proposals, I don’t mean that you should stick it in just any old place. A grant proposal has to tell a story. And part of that story is being written by the data you have in hand.

Sorry if I was never clear on this distinction.