March 8, 2014
Jocelyn Kaiser at ScienceInsider has obtained data on PI numbers from the NIH.
I think this graph should be pinned up right next to Sally Rockey’s desk. It is absolutely essential to any attempts to understand and fix grant application success rates and submission churning.
UPDATE 03/12/14: I should have noted that this graph depicts PIs who hold R01-equivalent grants (R01, R23, R29, R37 with ARRA excluded). The Science piece has this to say about the differential from RPG:
NIH shared these data for two sets of grants: research project grants (RPGs), which include all research grants, and R01 equivalents, a slightly smaller category that includes the bread-and-butter R01 grants that support most independent labs.
But if you read carefully, they’ve posted the excel files for both the R01-equivalents and RPG datasets. Woo-hoo! Let’s get to graphing, shall we? There is nothing like a good comparison graph to make summary language a little more useful. Don’t you think? I know I do….
A “slightly smaller category” eh? Well, I spy some trends in this direct comparison. Let’s try another way to look at it. How about we express the difference between the number of RPG and R01-equivalent numbers to see how many folks have been supported on non-R01/equivalent Research Project Grants over the years…
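The subtraction itself is trivial once the PI counts are in hand. A minimal sketch, with made-up counts standing in for the actual numbers in the NIH Excel files:

```python
# Hypothetical PI counts by fiscal year -- the real numbers live in the
# Excel files NIH posted alongside the Science piece.
rpg_pis = {2010: 27500, 2011: 27200, 2012: 26800, 2013: 25900}
r01_equiv_pis = {2010: 22500, 2011: 22300, 2012: 22100, 2013: 21600}

# PIs supported on non-R01/equivalent RPGs = RPG PIs minus R01-equivalent PIs
non_r01_pis = {fy: rpg_pis[fy] - r01_equiv_pis[fy] for fy in rpg_pis}

for fy, n in sorted(non_r01_pis.items()):
    print(fy, n)
```

Swap in the posted spreadsheet values and the trend (or lack of one) falls right out.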
Well I’ll be hornswoggled. All this invention of DP-this and RC-that and RL-whatsit and all the various U-mechs and P01 (Center components seem to be excluded) in recent years seemingly has had an effect. Sure, the number of R01-equivalent PIs only slightly drifted down from the end of the doubling until now (relieved briefly by the stimulus). So those in NIH land could say “Look, we’re not sacrificing R01s, our BreadNButter(TM) Mech!”. But in the context of the growth of non-R01 RPG projects, well….hmmm.
Jeremy Berg has a new President’s Message up at ASBMB Today. It looks into a topic of substantial interest to me, i.e., the fate of investigators funded by the NIH. This contrasts with our more-usual focus on the fate of applications.
With that said, the analysis does place the impact of the sequester in relatively sharp focus: There were about a thousand fewer investigators funded by these mechanisms in FY13 compared with FY12. This represents more than six times the number of investigators who lost this funding from FY11 to FY12 and a 3.8 percent drop in the R-mechanism-funded investigator cohort.
Another tidbit addresses the usual claim from NIHlandia that R-mechs, and R01s in particular, are always prioritized.
In her post, Rockey notes that the total funding for all research project grants, or RPGs, dropped from $15.92 billion in FY12 to $14.92 billion in FY13, a decrease of 6.3 percent. The total funding going to the R series awards that I examined (which makes up about 85 percent of the RPG pool) dropped by 8.9 percent.
What accounts for this difference? U01 awards comprise the largest remaining portion of the RPG pool…The funds devoted to U01 awards remained essentially constant from FY12 to FY13 at $1.57 billion.
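Berg’s arithmetic is easy to verify. A quick sketch (dollar figures from the quote; the 85 percent share is the approximate figure he gives):

```python
# Check the percentage drops quoted above (figures in billions of dollars).
rpg_fy12, rpg_fy13 = 15.92, 14.92
rpg_drop = (rpg_fy12 - rpg_fy13) / rpg_fy12 * 100
print(f"RPG pool: {rpg_drop:.1f}% drop")  # ~6.3%

# R-series awards are ~85% of the RPG pool and fell 8.9%; a flat U01
# pool ($1.57B in both years) absorbs part of the difference.
r_fy12 = 0.85 * rpg_fy12
r_fy13 = r_fy12 * (1 - 0.089)
print(f"R-series dollars lost: ${r_fy12 - r_fy13:.2f}B "
      f"versus ${rpg_fy12 - rpg_fy13:.2f}B for the whole RPG pool")
```

Note what drops out: the R-series lost more dollars than the RPG pool as a whole, which is only possible if the non-R mechanisms collectively held steady or gained.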
Go read the whole thing.
This type of analysis really needs more attention at the NIH level. They’ve come a looooong way in recent years in terms of their willingness to focus on what they are actually doing in terms of applications, funding, etc. This is in no small part due to the efforts of Jeremy Berg, who used to be the Director of NIGMS. But tracking the fate of applications only goes so far, particularly when it is assessed only on a 1-2 year basis.
The demand on the NIH budget is related to the pool of PIs seeking funding. This pool is considerably less elastic than the submission of grant applications. PIs don’t submit grant applications endlessly for fun, you know. They seek a certain level of funding. Once they reach that, they tend to stop submitting applications. A lot of the increase in application churn over the past decade or so has to do with the relative instability of funding. When the odds of continuing an ongoing project are high, a large number of PIs can just submit one or two apps every 5 years and all is well. Uncertainty is what makes a PI submit each and every year.
Similarly, when a PI is out of funding completely, the number of applications from this lab will rise dramatically….right up until one of them hits.
I argue that if solutions to the application churn and the funding uncertainty (which decreases overall productivity of the NIH enterprise) are to be found, they will depend on a clear understanding of the dynamics of the PI population.
Berg has identified two years in which the PI turnover is very different. How do these numbers compare with historical trends? Which is the unusual one? Or is this the expected range?
Is the 1,000-PI loss a temporary situation or a permanent one? It is an open question how sequential years without NIH funding affect a PI. Do these individuals tend to regain funding in 2, 3 or 4 years’ time? Do they tend to go away and never come back? More usefully, what proportion of the lost investigators will follow each of these fates?
The same questions arise for the other factoids Berg mentions. The rate at which R00 awardees transition to other funding would seem incredibly important to know. But a one-year gap seems hardly worth discussing; that can easily happen under current conditions. If they are still not funded 2 or maybe 3 years after the R00 expires, however, that is of greater import.
Still, a welcome first step, Dr. Berg. Let’s hope Sally Rockey is listening.
March 5, 2014
Virginia Hughes has a nice piece out on generational transmission of……experiences. In this case she focuses on a paper by Dias and Ressler (2014) showing that if you do fear conditioning to a novel odor in mice, the next two generations of offspring of these mice retain sensitivity to that odor.
This led me to mention that there is a story in substance abuse that has been presented at meetings in the past couple of years that is fascinating. Poking around I found out that the group of Yasmin Hurd (this Yasmin Hurd, yes) has a new paper out. I’ve been eagerly awaiting this story, to say the least.
Szutorisz H, Dinieri JA, Sweet E, Egervari G, Michaelides M, Carter JM, Ren Y, Miller ML, Blitzer RD, Hurd YL. Parental THC Exposure Leads to Compulsive Heroin-Seeking and Altered Striatal Synaptic Plasticity in the Subsequent Generation. Neuropsychopharmacology. 2014 Jan 2. doi: 10.1038/npp.2013.352. [Epub ahead of print] [PubMed, Neuropsychopharmacology]
This study was conducted with Long-Evans rats. The first step was to expose both male and female rats, during adolescence, to Δ9-tetrahydrocannabinol (THC) at a dose of 1.5 mg/kg, i.p., every third day from Post Natal Day 28 to 49. No THC was detectable in the animals 16 (or 28) days later. The animals were bred at PND 64-68. Vehicle-exposed rats run in parallel served as the comparison.
The resulting pups were fostered out to surrogate mothers in new “litters” consisting of approximately equal male/female pups and an equal number from the THC-exposed and Vehicle-exposed parents. So this rules out any effects the adolescent THC might have on parenting behavior (that would affect the pups) and mutes any effect of littermates who are offspring of the experimental or control parents.
The paper shows a number of phenotypes expressed by the offspring of parents exposed to THC in adolescence. I’ve picked the one that is of greatest interest to me to show. Figure 1d from the paper depicts behavioral data for a heroin intravenous self-administration study conducted when the offspring had reached adulthood. As you can see, under Fixed-Ratio 5 (5 presses per drug infusion) the animals with parents who were exposed to THC pressed more for heroin than did the control group. They were equal in presses directed at the inactive lever and exhibited equal locomotor activity during the self-administration session. The latter shows that the drug-lever pressing was not likely due to a generalized activation or other nonspecific effect.
The paper contains some additional work- electrophysiology showing altered Long Term Depression in the dorsal striatum, differential behavior during heroin withdrawal and alterations in glutamate and dopamine-related gene expression. I’ll let you read the details for yourself.
But the implications here are stunning and much more work needs to be completed post-haste.
We’ve known for some time (centuries?) that substance abuse runs in families. The best studied case is perhaps alcoholism. The heritability of alcoholism has been established using human twin studies, family studies in which degree of relatedness is used and adoption studies. Establishing that alcoholism has a heritable component led to attempts to identify genetic variations that might confer increased risk.
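For readers unfamiliar with how twin studies get at heritability, the classic back-of-envelope estimate is Falconer’s formula, h² = 2(rMZ − rDZ). A sketch with purely illustrative correlations (not the actual values from the alcoholism literature):

```python
# Falconer's formula: heritability estimated from twin correlations.
# h2 = 2 * (r_MZ - r_DZ). The correlations below are hypothetical,
# chosen only to illustrate the calculation.
r_mz = 0.54  # trait correlation in identical (monozygotic) twins
r_dz = 0.30  # trait correlation in fraternal (dizygotic) twins
h2 = 2 * (r_mz - r_dz)
print(f"Estimated heritability: {h2:.2f}")
```

The logic: MZ twins share ~100% of segregating genes and DZ twins ~50%, so the excess MZ similarity, doubled, approximates the genetic contribution, assuming shared environments are comparable across twin types.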
The findings of Szutorisz and colleagues throw a new wrinkle into the usual human study designs. It may be possible to identify another factor, parental drug exposure, that explains additional variability in family outcomes. This would probably help narrow the focus on the genetic variants that are important and also help identify epigenetic mechanisms that change in response to actual drug use.
On the pre-clinical research side…..wow. Is it via the male or female…or is it both? Does the specific developmental window of exposure (this was adolescent) matter? Does the specific drug matter? Is the downstream effect limited to some substances but not others? Is there a general liability for affective disorder being wrought? Does the effect continue off into subsequent generations? Can it be amped up in magnitude for the F2 generation (and onward) if the F0 and F1 generations are both exposed?
I think if this finding holds up it will help to substantially advance understanding of how An Old Family Tradition can become established. As I posted before:
In his classic song the great philosopher and student of addictive disorders, Hank Williams, Jr., blames a traditional source for increasing the probability of developing substance abuse:
….Hank why do you drink?
(Hank) why do you roll smoke?
Why must you live out the songs you wrote?
Stop and think it over
Try and put yourself in my unique position
If I get stoned and sing all night long
It’s a family tradition!
March 4, 2014
A communication to the blog raised an issue that is worth exploring in a little more depth. The questioner wanted to know if I knew why an NIH Program Announcement had disappeared.
The Program Announcement (PA) is the most general of the NIH Funding Opportunity Announcements (FOAs). It is described with these key features:
- Identifies areas of increased priority and/or emphasis on particular funding mechanisms for a specific area of science
- Usually accepted on standard receipt (postmarked) dates on an on-going basis
- Remains active for three years from date of release unless the announcement indicates a specific expiration date or the NIH Institute/Center (I/C) inactivates sooner
In my parlance, the PA means “Hey, we’re interested in seeing some applications on topic X“….and that’s about it. Admittedly, the study section reviewers are supposed to conduct review in accordance with the interests of the PA. Each application has to be submitted under one of the FOAs that are active. Sometimes, this can be as general as the omnibus R01 solicitation. That’s pretty general. It could apply to any R01 submitted to any of the NIH Institutes or Centers (ICs). The PAs can offer a greater degree of topic specificity, of course. I recommend you go to the NIH Guide page and browse around. You should bookmark the current-week page and sign up for email alerts if you haven’t already. (Yes, even grad students should do this.) Sometimes you will find a PA that seems to fit your work exceptionally well and, of course, you should use it. Just don’t expect it to be a whole lot of help.
This brings us to the specific query that was sent to the blog, i.e., why did PA-14-106 go missing, only a week or so after being posted?
Sometimes a PA expires and is either not replaced or you have happened across it in between expiration and re-issue of the next 3-year version. Those are the more-common reasons. I’d never seen one be pulled immediately after posting, however. But the NOT-DA-14-006 tells the tale:
This Notice is to inform the community that NIDA’s “Synthetic Psychoactive Drugs and Strategic Approaches to Counteract Their Deleterious Effects” Funding Opportunity Announcements (FOAs) (PA-14-104, PA-14-105, PA-14-106) have been reposted as PARs, to allow a Special Emphasis Panel to provide peer review of the applications. To make this change, NIDA has withdrawn PA-14-104, PA-14-105, PA-14-106, and has reposted these announcements as PAR-14-106, PAR-14-105, and PAR-14-104.
This brings us to the key difference between the PA and a PAR (or a PAS):
- Special Types
- PAR: A PA with special receipt, referral and/or review considerations, as described in the PAR announcement
- PAS: A PA that includes specific set-aside funds as described in the PAS announcement
Applications submitted under a PA are going to be assigned to the usual Center for Scientific Review (CSR) panels and thrown in with all the other applications. This can mean that the special concerns of the PA do not really influence review. How so? Well, the NIDA has a generic-ish and long-running PA on the “Neuroscience Research on Drug Abuse“. This is really general. So general that several entire study sections of the CSR fit within it. Why bother reviewing in accordance with the PA when basically everything assigned to the section is, vaguely, in this sphere? And even on the more-specific ones (say, Sex-Differences in Drug Abuse or HIV/AIDS in Drug Abuse, that sort of thing) the general interest of the IC fades into the background. The panel is already more-or-less focused on those being important issues. So the Significance evaluation on the part of the reviewers barely budges in response to a PA. I bet many reviewers don’t even bother to check the PA at all.
The PAR means, however, that the IC convenes their own Special Emphasis Panel specifically for that particular funding opportunity. So the review panel can be tailored to the announcement’s goals much in the way that a panel is tailored for a Request for Applications ( RFA) FOA. The panel can have very specific expertise for both the PAR and for the applications that are received and, presumably, have reviewers with a more than average appreciation for the topic of the PAR. There is no existing empaneled population of reviewers to limit choices. There is no distraction from the need to get reviewers who can handle applications that are on topics different from the PAR in question. An SEP brings focus. The mere fact of a SEP also tends to keep the reviewer’s mind on the announcement’s goals. They don’t have to juggle the goals of PA vs PA vs PA as they would in a general CSR panel.
As you know, Dear Reader, I have blogged about both synthetic cannabinoid drugs and the “bath salts” here on this blog now and again. So I can speculate a little bit about what happened here. These classes of recreational drugs hit the attention of regulatory authorities and scientists in the US around about 2009, and certainly by 2010. There have been a modest but growing number of papers published. I have attended several conference symposia themed around these drugs. And yet if you do some judicious searching on RePORTER you will find precious few grants dedicated to these compounds. It is no great leap of faith to figure that various PIs have been submitting grants on these topics and are not getting fundable scores. There are, of course, many possible reasons for this and some may have influenced NIDA’s thinking on this PA/PAR.
It may be the case that NIDA felt that reviewers simply did not know that they wanted to see some applications funded and were consequently not prioritizing the Significance of such applications. Or it may be that NIDA felt that their good PIs who would write competitive grants were not interested in the topics. Either way, a PA would appear to be sufficient encouragement.
The replacement of a PA with a PAR, however, suggests that NIDA has concluded that the problem lies with study section reviewers and that a mere PA was not going to be sufficient* to focus minds.
As one general conclusion from this vignette, the PAR is substantially better than the PA when it comes to enhancing the chances for applications submitted to it. This holds in a case in which there is some doubt that the usual CSR study sections will find the goals to be Significant. The caveat is that when there is no such doubt, the PAR is worse because the applications on the topic will all be in direct competition with each other. The PAR essentially guarantees that some grants on the topic will be funded, but the PA potentially allows more of them to be funded.
It is only “essentially” because the PAR does not come with set-aside funds as does the RFA or the PAS. And I say “potentially” because this depends on there being many highly competitive applications distributed across several CSR sections for a PA.
*This is a direct validation of my position that the PA is a rather weak stimulus, btw.
As always when it comes to NIDA specifics, see Disclaimer.
A flurry of Twitts from Doctor Zen last week drew my attention, eventually, to a report from The Clayman Institute for Gender Research at Stanford. The direct link to the report is here [PDF] and an executive summary style Dual Career Toolkit is provided as a PPT file.
There is all kinds of interesting stuff in here, including basic demographics on prevalence (36% of the American professoriate), career attitudes (50% of men say their career is primary, only 20% of women do) and impact of dual hires (performance measures of trailing-spouse do not differ from single hire peers). With respect to the last, the authors conclude:
Thus, our data suggest that productivity levels among second hires are not significantly different from those among their peers after data are disaggregated by field, and gender and rank are accounted for. (p72)
The Executive Summary of the full report emphasizes that dual-hires are seen as both a growing reality and a thorny problem for Universities. It takes no great leap for those of us familiar with such cases to grasp that one of the biggest reasons for pushback and objections is the assertion or supposition that the trailing spouse would not deserve a hire in his or her own right. Analyses such as the above seem to be critical to this issue in my view.
I’ve written on this topic before.
It was one of my more extensively commented posts (107) so I entirely endorse the idea that this is one of the thornier questions of academics at the moment.
By way of a disclaimer, I am in a dual-academic-career relationship. We have not yet had opportunity or need to press dual-hire issues, but this is always possible in the future.
February 28, 2014
Commenter mikka wants to know why:
I don’t get this “professional editors are not scientists” trope. All the professional editors I know were bench scientists at the start of their career. They read, write, look at and interpret data, talk to bench scientists and keep abreast of their fields. In a nutshell, they do what PIs do, except writing grants and deciding what projects must be pursued. The input some editors put in some of my papers would merit a middle authorship. They are scientists all right, and some of them very good ones.
Look, yes you are right that they are scientists. In a certain way. And yes, I regret the way that my opinion that they are 1) very different from Editors and Associate Editors who are primarily research scientists and 2) ruining science tends to be taken as a personal attack on their individual qualities and competence.
But there is simply no way around it.
The typical professional editor, typically at a Glamour(ish) Mag publication, is under-experienced in science compared with a real Editor.
Regardless of circumstances, if they have gone to the Editorial staff from a postdoc, without experience in the Principal Investigator chair then they have certain limitations.
It is particularly bad that ass kissing from PIs who are desperate to get their papers accepted tends to persuade these people over time that they are just as important as those PIs.
“Input” merits middle authorship, eh? Sure, anyone with half a brain can suggest a few more experiments. And if you have the despotic power of a Nature editor’s keyboard behind you, sure…they damn well will do it. And ask for more. And tell you how uniquely brilliant of a suggestion it all was.
And because it ends up published in a Glamour Mag, all the sheep will bleat approvingly about what a great paper it is.
Professional editors are ruining science.
They have no loyalty to the science*. Their job is to work to aggrandize their own magazine’s brand at the cost of the competition. It behooves them to insist that six papers worth of work gets buried in “Supplemental Methods” because no competing and lesser journal will get those data. It behooves them to structure the system in a way that authors will consider a whole bunch of other interesting data “unpublishable” because it got scooped by two weeks.
They have no understanding or consideration of the realities of scientific careers*. It is of no concern to them whether scientific production should be steady, whether uninteresting findings can later be of significance, nor whether any particular subfield really needs this particular kick in the pants. It is no concern to them that their half-baked suggestion requires a whole R01 scale project and two years of experiments. They do not have to consider any reality whatsoever. I find that real, working scientist Editors are much more reasonable about these issues.
Noob professional editors are star-struck and never, ever are able to see that the Emperor is, in fact, stark naked. Sorry, but it takes some experience and block circling time to mature your understanding of how science really works. Of what is really important over the long haul. Notice how the PLoSFail fans (to pick one recent issue) are heavily dominated by the wet-behind-the-ears types and the critics seem to mostly be established faculty? This is no coincidence.
Again, this is not about the personal qualities of the professional editors. The structure of their jobs, and typical career arc, makes it impossible for them to behave differently.
This is why it is the entire job category of professional editor that is the problem.
If you require authoritah, note that Nobel laureate Brenner said something similar.
It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists.
He was clearly not talking about peer review itself, but rather the professional Glamour Mag type editor.
*as well they should not. It is a structural feature of the job category. They are not personally culpable, the institutional limitations are responsible.
February 28, 2014
Do you decide whether to accept a manuscript for review based on the Journal that is asking?
To what extent does this influence your decision to take a review assignment?