A communication to the blog raised an issue that is worth exploring in a little more depth. The questioner wanted to know if I knew why a NIH Program Announcement had disappeared.

The Program Announcement (PA) is the most general of the NIH Funding Opportunity Announcements (FOAs). It is described with these key features:

  • Identifies areas of increased priority and/or emphasis on particular funding mechanisms for a specific area of science
  • Usually accepted on standard receipt (postmarked) dates on an on-going basis
  • Remains active for three years from date of release unless the announcement indicates a specific expiration date or the NIH Institute/Center (I/C) inactivates sooner

In my parlance, the PA means “Hey, we’re interested in seeing some applications on topic X”…and that’s about it. Admittedly, the study section reviewers are supposed to conduct review in accordance with the interests of the PA. Each application has to be submitted under one of the FOAs that are active. Sometimes this can be as general as the omnibus R01 solicitation. That’s pretty general: it could apply to any R01 submitted to any of the NIH Institutes or Centers (ICs). The PAs can offer a greater degree of topic specificity, of course. I recommend you go to the NIH Guide page and browse around. You should bookmark the current-week page and sign up for email alerts if you haven’t already. (Yes, even grad students should do this.) Sometimes you will find a PA that seems to fit your work exceptionally well and, of course, you should use it. Just don’t expect it to be a whole lot of help.

This brings us to the specific query that was sent to the blog, i.e., why did PA-14-106 go missing, only a week or so after being posted?

Sometimes a PA expires and is either not replaced or you have happened across it in between expiration and re-issue of the next 3-year version. Those are the more-common reasons. I’d never seen one be pulled immediately after posting, however. But the NOT-DA-14-006 tells the tale:

This Notice is to inform the community that NIDA’s “Synthetic Psychoactive Drugs and Strategic Approaches to Counteract Their Deleterious Effects” Funding Opportunity Announcements (FOAs) (PA-14-104, PA-14-105, PA-14-106) have been reposted as PARs, to allow a Special Emphasis Panel to provide peer review of the applications. To make this change, NIDA has withdrawn PA-14-104, PA-14-105, PA-14-106, and has reposted these announcements as PAR-14-106, PAR-14-105, and PAR-14-104.

This brings us to the key difference between the PA and a PAR (or a PAS):

  • Special Types
    • PAR: A PA with special receipt, referral and/or review considerations, as described in the PAR announcement
    • PAS: A PA that includes specific set-aside funds as described in the PAS announcement

Applications submitted under a PA are going to be assigned to the usual Center for Scientific Review (CSR) panels and thrown in with all the other applications. This can mean that the special concerns of the PA do not really influence review. How so? Well, NIDA has a generic-ish and long-running PA on “Neuroscience Research on Drug Abuse”. This is really general. So general that several entire study sections of the CSR fit within it. Why bother reviewing in accordance with the PA when basically everything assigned to the section is, vaguely, in this sphere? And even on the more-specific ones (say, Sex Differences in Drug Abuse or HIV/AIDS in Drug Abuse, that sort of thing) the general interest of the IC fades into the background. The panel is already more-or-less focused on those being important issues. So the Significance evaluation on the part of the reviewers barely budges in response to a PA. I bet many reviewers don’t even bother to check the PA at all.

The PAR means, however, that the IC convenes its own Special Emphasis Panel specifically for that particular funding opportunity. So the review panel can be tailored to the announcement’s goals, much in the way that a panel is tailored for a Request for Applications (RFA) FOA. The panel can have very specific expertise for both the PAR and for the applications that are received and, presumably, have reviewers with a more than average appreciation for the topic of the PAR. There is no existing empaneled population of reviewers to limit choices. There is no distraction from the need to get reviewers who can handle applications on topics different from the PAR in question. An SEP brings focus. The mere fact of an SEP also tends to keep the reviewers’ minds on the announcement’s goals. They don’t have to juggle the goals of PA vs PA vs PA as they would in a general CSR panel.

As you know, Dear Reader, I have blogged about both synthetic cannabinoid drugs and the “bath salts” here on this blog now and again. So I can speculate a little bit about what happened here. These classes of recreational drugs hit the attention of regulatory authorities and scientists in the US around about 2009, and certainly by 2010. There has been a modest but growing number of papers published. I have attended several conference symposia themed around these drugs. And yet if you do some judicious searching on RePORTER you will find precious few grants dedicated to these compounds. It is no great leap of faith to figure that various PIs have been submitting grants on these topics and are not getting fundable scores. There are, of course, many possible reasons for this and some may have influenced NIDA’s thinking on this PA/PAR.

It may be the case that NIDA felt that reviewers simply did not know that they wanted to see some applications funded and were consequently not prioritizing the Significance of such applications. Or it may be that NIDA felt that their good PIs who would write competitive grants were not interested in the topics. Either way, a PA would appear to be sufficient encouragement.

The replacement of a PA with a PAR, however, suggests that NIDA has concluded that the problem lies with study section reviewers and that a mere PA was not going to be sufficient* to focus minds.

As one general conclusion from this vignette, the PAR is substantially better than the PA when it comes to enhancing the chances for applications submitted to it. This holds in a case in which there is some doubt that the usual CSR study sections will find the goals to be Significant. The caveat is that when there is no such doubt, the PAR is worse because the applications on the topic will all be in direct competition with each other. The PAR essentially guarantees that some grants on the topic will be funded, but the PA potentially allows more of them to be funded.

I say “essentially” because the PAR does not come with set-aside funds as do the RFA and the PAS. And I say “potentially” because this depends on there being many highly competitive applications distributed across several CSR sections for a PA.

__

*This is a direct validation of my position that the PA is a rather weak stimulus, btw.

As always when it comes to NIDA specifics, see Disclaimer.

NIH Multi-PI Grant Proposals.

February 24, 2014

In my limited experience, the creation, roll-out and review of Multi-PI direction of a single NIH grant has been the smoothest GoodThing to happen in NIH supported extramural research.

I find it barely draws mention in review and deduce that my fellow scientists agree with me that it is a very good idea, long past due.

Discuss.

In reflecting on the profound lack of association of grant percentile rank with the citations and quantity of the resulting papers, I am struck that it reinforces a point made by YHN about grant review.

I have never been a huge fan of the Approach criterion. Or, more accurately, of how it is reviewed in practice. Review of the specific research plan can bog down in many areas. A review is often derailed into critique of the applicant’s failure to appropriately consider all the alternatives, into disagreement over predictions that can only be resolved empirically, into endless ticky-tack kvetching over buffer concentrations, into a desire for exacting specification of each and every control….. I am skeptical. I am skeptical that identifying these things plays any real role in the resulting science. First, because much of the criticism of the specifics of the approach vanishes when you consider that the PI is a highly trained scientist who will work out the real science during the conduct of same. Like we all do. For anticipated and unanticipated problems that arise. Second, because much of this Approach review is rightfully the domain of the peer review of scientific manuscripts.

I am particularly unimpressed by the shared delusion that the grant revision process by which the PI “responds appropriately” to the concerns of three reviewers alters the resulting science in a specific way either. Because of the above factors and because the grant is not a contract. The PI can feel free to change her application to meet reviewer comments and then, if funded, go on to do the science exactly how she proposed in the first place. Or, more likely, do the science as dictated by everything that occurs in the field in the years after the original study section critique was offered.

The Approach criterion score is the one that is most correlated with the eventual voted priority score, as we’ve seen in data offered up by the NIH in the past.

I would argue that a lot of the Approach criticism that I don’t like is an attempt to predict the future of the papers. To predict the impact and to predict the relative productivity. Criticism of the Approach often sounds to me like “This won’t be publishable unless they do X…..” or “this won’t be interpretable, unless they do Y instead….” or “nobody will cite this crap result unless they do this instead of that“.

It is a version of the deep motivator of review behavior. An unstated (or sometimes explicit) fear that the project described in the grant will fail, if the PI does not write different things in the application. The presumption is that if the PI does (or did) write the application a little bit differently in terms of the specific experiments and conditions, that all would be well.

So this also says that when Approach is given a congratulatory review, the panel members are predicting that the resulting papers will be of high impact…and plentiful.

The NHLBI data say this is utter nonsense.

Peer review of NIH grants is not good at predicting, within the historical fundable zone of about the top 35% of applications, the productivity and citation impact of the resulting science.

What the NHLBI data cannot address is a more subtle question. The peer review process decides which specific proposals get funded. Which subtopic domains, in what quantity, with which models and approaches… and there is no good way to assess the relative wisdom of this. For example, a grant on heroin may produce the same number of papers and citations as a grant on cocaine. A given program on cocaine using mouse models may produce approximately the same bibliometric outcome as one using humans. Yet the real world functional impact may be very different.

I don’t know how we could determine the “correct” balance but I think we can introspect that peer review can predict topic domain and research models a lot better than it can predict citations and paper count. In my experience, when a grant is on cocaine, the PI tends to spend most of her effort on cocaine, not heroin. When the grant is for human fMRI imaging, it is rare that the PI pulls a switcheroo and works on fruit flies. These general research-domain outcomes are a lot more predictable than the impact of the resulting papers, in my estimation.

This leads to the inevitable conclusion that grant peer review should focus on the things that it can affect and not on the things that it cannot. Significance. Aka, “The Big Picture”. Peer review should wrestle over the relative merits of the overall topic domain, the research models and the general space of the experiments. It should de-emphasize the nitpicking of the experimental plan.

…or maybe it is.

One of the things that I try to emphasize in NIH grant writing strategy is to ensure you always submit a credible application. It is not that difficult to do.

You have to include all the basic components, not commit more than a few typographical errors and write in complete sentences. Justify the importance of the work. Put in a few pretty pictures and plenty of headers to create white space. Differentiate an Aim from a hypothesis from an Experiment.

Beyond that you are often constrained by the particulars of your situation and a specific proposal. So you are going to have to leave some glaring holes, now and again. This is okay! Maybe you are a noob and have little in the way of specific Preliminary Data. Or have a project which is, very naturally, a bit of a fishing expedition: hypothesis-generating, exploratory work. Perhaps the Innovation isn’t high or there is a long stretch to attach health relevance.

Very few grants I’ve read, including many that were funded, are even close to perfect. Even the highest scoring ones have aspects that could readily be criticized without anyone raising an eyebrow.

The thing is, you have to be able to look at your proposal dispassionately and see the holes. You should have a fair idea of where trouble may lie ahead and shore up the proposal as best you can.

No preliminary data? Then do a better job with the literature predictions and alternate considerations/pitfalls. Noob lab? Then write more methods and cite them more liberally. Low Innovation? Hammer down the Significance. Established investigator wanting to continue the same-old, same-old under new funding? Disguise that with an exciting hypothesis or newish-sounding Significance link. (Hint: testing the other person’s hypothesis with your approaches can go great guns when you are in a major theoretical dogfight over years’ worth of papers.)

What you absolutely cannot do is to leave the reviewers with nothing. You cannot leave gaping holes all over the application. That, my friends, is what drops you* below the “credible” threshold.

Don’t do that. It really does not make you any friends on the study section panel.

__
*This is one case where the noob is clearly advantaged. Many reviewers make allowances for a new or young-ish laboratory. There is much less sympathy for someone who has been awarded several grants in the past when the current proposal looks like a slice of Swiss cheese.

This question is mostly for the more experienced of the PItariat in my audience. I’m curious as to whether you see your grant scores as being very similar over the long haul?

That is, do you believe that a given PI and research program is going to be mostly a “X %ile” grant proposer? Do your good ones always seem to be right around 15%ile? Or for that matter in the same relative position vis a vis the presumed payline at a given time?

Or do you move around? Sometimes getting 1-2%ile, sometimes midway to the payline, sometimes at the payline, etc?

This latter describes my funded grants better. A lot of relative score (i.e., percentile) diversity.

It strikes me today that this very experience may be what reinforces much of my belief about the random nature of grant review. Naturally, I think I put up more or less the same strength of proposal each time. And naturally, I think each and every one should be funded.

So I wonder how many people experience more similarity in their scores, particularly for their funded or near-miss applications. Are you *always* coming in right at the payline? Or are you *always* at X %ile?

In a way this goes to the question of whether certain types of grant applications are under greater stress when the paylines tighten. The hypothesis being that perhaps a certain type of proposal is never going to do better than about 15%ile. So in times past, no problem, these would be funded right along with the 1%ile AMAZING proposals. But in the current environment, a change in payline makes certain types of grants struggle more.

Apparently Potnia is going to do a series over at Mistress of the Animals blog. This statement is one of those mnemonic gems you should paste on your monitor edge.

Aims should be general enough to require a project (1-2 papers per aim), but specific enough that they are a project.

There’s at least one early indicator of what is going to happen with the study section rounds that were cancelled because of the government shutdown.

This has all sorts of implications, one of which was brought up by Professor Jentsch in a subsequent tweet. It is related to the NOTice just issued which says that all October deadlines will be pushed forward into the “November timeframe”.

Let’s say you submitted a new proposal in June or perhaps a revised or competing renewal proposal in July. And like a busy little beaver you’ve continued to work on the project. Perhaps you have some excellent new data that further supports your awesome ideas and the killer experiments that you’ve proposed.

There is only one thing to do. Pull the grant from consideration and resubmit it, with the new data, once the NIH picks some November deadlines.

Every good grant application boils down to one or more of a couple of key statements.

  • “The field is totally doing it WRONG!”
  • “That which all those idiots think is true….ISN’T!”
  • “These people are totally missing the boat by working on that instead of working on THIS!”
  • “How can they possibly have missed the implications of THIS amazing THING??!!??”

Good grant applications also have a single goal and conclusion.

  • “….and I am here to FIX EVERYTHING!”
The trouble is that you can’t say this in so many words. First, because you sound insane. Second, because some of those self-same people you are calling blind, stupid fools are the ones reviewing your grant. Third, because people reviewing your grant might have some respect for those other people you are calling fools. Fourth, because you may stray into calling your friendly Program Officers at the NIH fools for funding all that other stuff instead of you.

The most acceptable compromise seems to be to focus very heavily on the fact that you are here to “fix everything”. To focus especially on the “everything” and less on the “fix” if I am being totally honest. This puts the focus more on the potential amazing outcome of what you intend to do and much less emphasis on why you need to do it. It has a more positive feel and avoids insulting too many of your reviewers. And avoids telling your PO that they are doing everything wrong themselves.

So I tend to do this in my grant applications.

This occasionally feels like I am battling with one hand tied behind my back, since I am pulling my punches about how ridiculous it is to fund anything other than my current proposal. You can talk about gaps in the literature. You can go on about synthesis of approaches and your amazing discoveries ahead. And you should do so.
But ultimately there are an awful lot of scientists with big promises. And even more with highly refined skills and effective laboratory operations. And to my eye it is less effective to argue that my own proposals are just more-good-than-thou. It is essential to argue why I am proposing work that is much better. And for something to be substantially better, well, that sort of implies that the status quo is lacking in a significant way.

I hate having to make those arguments. I mean, don’t get me wrong….it IS my native behavior. Which I am sure is no surprise to my readers.
It is just that I’ve worked hard to stamp that out of my grant writing due to my considered view that FWDAOSS is not a really useful strategy.
And now I have to reconsider the wisdom of this approach.
Better to burn out than to fade away?
In a comment from Evelyn:

As a grant writer, who has to learn a whole new field every 2 weeks, a good review is a life-saver! It gives me an accurate, hopefully up-to-date snapshot of the field, and leads me to the original research that I can then pull, read, and cite. A bad review is an awful waste of my time but at this point, I can tell a bad one by about the third paragraph and I don’t bother reading the rest of it. When I can’t find a good review, my life gets a lot harder since I don’t have the time to read all of the junk research on every topic before I get to the good stuff. At that point, I hate to say it, but I search through glam-journals and usually, those original papers will have enough background to lead me to the important papers in the field.
So I don’t care if the review authors are in the field or are first-year grad students, as long as they do a good job. But in my experience, the ones that are in the field usually give a better overview of the topic.

I need a cold compress.

Apparently review articles make it easy for a professional grant writer, who has no prior knowledge of a field, to simulate the expertise of the PI, who, in most cases, the study section members presume did most of the writing.

If this minor deception* works then some PIs can afford to hire a professional grant writer to, presumably, submit more grants to out-compete those of us who cannot afford such luxuries.

I do not have sources of money available in my professional budgets that can be used to hire grant writers to craft more applications than I can write myself.

This professional grant-writer thing does not hearten me.

__
* There is absolutely nothing in the NIH grant rules that says that the people listed as participating Investigator staff need have anything whatsoever to do with the writing/crafting of the application.

This is my query of the day to you, Dear Reader.

We’ve discussed the basics in the past but a quick overview.

1) Since the priority score and percentile rank of your grant application are all-important (not exclusively so but HEAVILY so), it is critical that it be reviewed by the right panel of reviewers.

2) You are allowed to request in your cover letter that the CSR route your NIH grant application to a particular study section for review.

3) Standing study section descriptions are available at the CSR website as are the standing rosters and the rosters for the prior three rounds of review (i.e., including any ad hoc reviewers).

4) RePORTER allows you to search for grants by study section which gives you a pretty good idea of what they really, really like.

5) You can, therefore, use this information to slant your grant application towards the study section in which you hope it will be reviewed.

A couple of Twitts from @drIgg today raised the question of study section “fit”. Presumably this is related to an applicant concluding that despite all the above, he or she has not managed to get many of his or her applications reviewed by the properly “fitting” panel of reviewers.

This was related to the observation that despite one’s request and despite hitting what seem to be the right keywords, it is still possible that CSR will assign your grant to some other study section. It has happened to me a few times and it is very annoying. But does this mean these applications didn’t get the right fit?

I don’t know how one would tell.

As I’ve related on occasion, I’ve obtained the largest number of triages from a study section that has also handed me some fundable scores over the past *cough*cough*cough* years. This is usually by way of addressing people’s conclusion after the first 1, 2 or maybe 3 submissions that “this study section HATES me!!!“. In my case I really think this section is a good fit for a lot of my work, and therefore proposals, so the logic is inescapable. Send a given section a lot of apps and they are going to triage a lot of them. Even if the “fit” is top notch.

It is also the case that there can be a process of getting to know a study section. Of getting to know the subtleties of how they tend to feel about different aspects of the grant structure. Is it a section that is really swayed by Innovation and couldn’t give a fig about detailed Interpretations, Alternatives and Potential Pitfalls? Or is it an orthodox StockCritiqueSpewing type of section that prioritizes structure over content? Do they like to see it chock full of ideas or do they wring their hands over feasibility? On the other side, I assert there is a certain sympathy vote that emerges after a section has reviewed a half dozen of your proposals and never found themselves able to give you a top score. Yeah, it happens. Deal. Less perniciously, I would say that you may actually convince the section of the importance of something that you are proposing through an arc of many proposal rounds*.

This leaves me rather confused as to how one would be able to draw strong conclusions about “fit” without a substantial number of summary statements in hand.

It also speaks to something that every applicant should keep in the back of his or her head. If you can never find what you think is a good fit with a section there are only a few options that I can think of.
1) You do this amazing cross-disciplinary shit that nobody really understands.
2) Your applications actually suck and nobody is going to review them well.
3) You are imagining some Rainbow Fairy Care-a-lot Study section that doesn’t actually exist.

What do you think are the signs of a good or bad “fit” with a study section, Dear Reader? I’m curious.
__
*I have seen situations where a proposal was explicitly mentioned to have been on the fourth or fifth round (this was in the A2 days) in a section.

Additional Reading:
Study Section: Act I
Study Section: Act II

from jipkin over at PhysioProf’s pad:

The attitude “I’m happy to debate” should be replaced with “I’m happy to explain”.

and there it is.

from a self described newProf at doc becca’s digs.

Last week, the first NIH proposal I wrote with PI status was rejected… I knew things were bad, but it still hurts…Problem is, I don’t know how to allocate my time between generating more preliminary data/pubs and applying for more grants. How many grants does the typical NIH- and/or NSF-funded (or wannabe-funded) TT prof write per year before getting funded?

It is not about what anyone else or the “typical” person has done.

It is about doing whatever you possibly can do until that Notice of Grant Award arrives.

My stock advice right now is that you need to have at least one proposal going in to the NIH for each standard receipt date. If you aren’t hitting it at least that hard, before you have a major award, you aren’t trying. If you think you can’t get out one per round…. you don’t really understand your job yet. Your job is to propose studies until someone decides to give your lab some support.

My other stock advice is take a look at the payline and assume those odds apply to you. Yes, special snowflake, you.

If the payline is 10%, then you need to expect that you will have to submit at least 10 apps to have a fighting chance. Apply the noob-discount and you are probably better off hitting twice that number. It is no guarantee and sure, the PI just down the hall struck it lucky with her first Asst Prof submission to the NIH. But these are the kinds of numbers you need to start with.

Once you get rolling, one new grant and one revised grant per round should be doable. They are a month apart and a revision should be way easier. After the first few, you can start taking advantage of cutting and pasting a lot of the grant text together to get a start on the next one.

Stop whining about preliminary data. Base it on feasibility and work from there. Most figures support at least a half dozen distinct grant applications. Maybe more.

I never know for sure how hard my colleagues are working when it comes to grant submissions. I know what I do…and it is a lot. I know what a subset of my other colleagues do and let me tell you, success is better correlated with effort (grants out the door) than it is with career rank. That has an effect, sure, but I know relatively older investigators who struggle to maintain stable funding and ones who enjoy multi-grant stability. They are distinguished to some extent by how many apps they get out the door. Same thing for junior colleagues. They are trying to launch their programs and all. I get this. They have to do a lot of setup, training and even spend time at the bench. But they also tend to have a very wait-and-see approach to grants. Put one in. Wait for the result. Sigh. “Well maybe I’ll resubmit it next round”. Don’t do this, my noob friends. Turn that app around for the next possible date for submission.

You’ll have another app to write for the following round, silly.

There is little doubt that shortening the length of the NIH R01 application from 25 pages to 12 put a huge premium on the available word space. The ever declining success rates have undoubtedly accelerated the desire of applicants to cram every last bit of information that they possibly can into the application.

Particularly since StockCritiquesTM having to do with methodological detail have hardly disappeared.

It is possible that a somewhat frustrated, tongue-in-cheek comment of YHN may have led some folks astray.

Since I am finally getting serious about trying to write one of these new format grants, I am thinking about how to maximize the information content. One thought that immediately strikes me is….cheat!

By which I mean taking sections that normally I would have put in the page-limited part of the grant and sneaking them in elsewhere. I have come up with the following and am looking for more tips and ideas from you, Dear Reader.
1) Moving the animal methods to the Vertebrate Animals section. I’m usually doing quite a bit of duplication of the Vertebrate Animals stuff in my General Methods subheading at the very end of the old Research Design section. I can move much of that, including possibly some research stuff that fits under point 4 (ensuring discomfort and distress is managed), to the Vertebrate Animals section.

Now mind you, one of my always perspicacious commenters was all over me right from the start:

DM – Please don’t encourage people to cheat their way out of 12 pages. Please tell them to write a 12-page grant.
I would warn grant-writers to be careful of cheating too much. I was at a study section recently where someone lost about a point of score because one of the reviewers (it wasn’t me, although I agree with the reviewer) complained about “cheating” by moving methods into the vertebrate animals section.

That was all back in March 2010. Here we are down the road and I have to say, Dear Reader, I am hearing a constant drumbeat of irritation at people who cheat in just this way. My suggestion (a serious one) is to be very wary of putting what should be your research plan methods into the Vertebrate Animals section.

I am hearing and seeing situations in which reviewers pretty obviously are ticked and almost certainly are punishing the applications accordingly. Nobody likes a cheat. I have even heard of rare cases of people having their grants kicked back, unreviewed, because of this.

So be careful. Keep the Vertebrate Animals section on task and put your Methods where they belong.

GrantRant XI

January 31, 2013

Combative responses to prior review are an exceptionally stupid thing to write. Even if you are right on the merits.

Your grant has been sunk in one page you poor, poor fool.

GrantRant X

January 30, 2013

Open the grant you are polishing up right now, pronto, and change your reference style to Author-Date from that godawful numbered citation style. Then go on a slash-and-burn mission to make the length requirement. Because let’s face it, we all know your excuses about reading “flow” are bogus and you are just shoehorning in more text.
That numbered citation stuff is maddening to reviewers.
happy editing,

Uncle DM, Your Grant Fairy.