Jocelyn Kaiser at ScienceInsider has obtained data on PI numbers from the NIH.

[Graph: number of NIH PIs over time]

Nice.

I think this graph should be pinned up right next to Sally Rockey’s desk. It is absolutely essential to any attempts to understand and fix grant application success rates and submission churning.

UPDATE 03/12/14: I should have noted that this graph depicts PIs who hold R01-equivalent grants (R01, R23, R29, R37 with ARRA excluded). The Science piece has this to say about the differential from RPG:

NIH shared these data for two sets of grants: research project grants (RPGs), which include all research grants, and R01 equivalents, a slightly smaller category that includes the bread-and-butter R01 grants that support most independent labs.

[Graph: NIH PIs, RPG vs R01-equivalent]

But if you read carefully, they’ve posted the Excel files for both the R01-equivalent and RPG datasets. Woo-hoo! Let’s get to graphing, shall we? There is nothing like a good comparison graph to make summary language a little more useful. Don’t you think? I know I do….

A “slightly smaller category” eh? Well, I spy some trends in this direct comparison. Let’s try another way to look at it. How about we express the difference between the RPG and R01-equivalent PI numbers to see how many folks have been supported on non-R01/equivalent Research Project Grants over the years…
[Graph: RPG minus R01-equivalent PI counts]

Well I’ll be hornswoggled. All this invention of DP-this and RC-that and RL-whatsit and all the various U-mechs and P01s (Center components seem to be excluded) in recent years seemingly has had an effect. Sure, the number of R01-equivalent PIs only drifted slightly downward from the end of the doubling until now (relieved briefly by the stimulus). So those in NIH land could say “Look, we’re not sacrificing R01s, our BreadNButter(TM) Mech!”. But in the context of the growth of non-R01 RPG projects, well….hmmm.
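For anyone who wants to reproduce this at home, here is a minimal sketch of the differential plot. The file and column names are my own placeholders, not whatever NIH actually named them in the posted spreadsheets, so adjust accordingly.

```python
# Minimal sketch: plot RPG minus R01-equivalent PI counts by fiscal year.
# File and column names are placeholders; match them to the posted NIH Excel files.
import pandas as pd
import matplotlib.pyplot as plt

rpg = pd.read_excel("nih_rpg_pis.xlsx")          # hypothetical filename
r01eq = pd.read_excel("nih_r01eq_pis.xlsx")      # hypothetical filename

df = rpg.merge(r01eq, on="fiscal_year", suffixes=("_rpg", "_r01eq"))
df["non_r01_pis"] = df["pi_count_rpg"] - df["pi_count_r01eq"]

plt.plot(df["fiscal_year"], df["non_r01_pis"], marker="o")
plt.xlabel("Fiscal year")
plt.ylabel("RPG PIs without an R01-equivalent")
plt.show()
```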

Jeremy Berg has a new President’s Message up at ASBMB Today. It looks into a topic of substantial interest to me, i.e., the fate of investigators funded by the NIH. This contrasts with our more-usual focus on the fate of applications.

With that said, the analysis does place the impact of the sequester in relatively sharp focus: There were about a thousand fewer investigators funded by these mechanisms in FY13 compared with FY12. This represents more than six times the number of investigators who lost this funding from FY11 to FY12 and a 3.8 percent drop in the R-mechanism-funded investigator cohort.

Another tidbit addresses the usual claim from NIHlandia that R-mechs, and R01s in particular, are always prioritized.

In her post, Rockey notes that the total funding for all research project grants, or RPGs, dropped from $15.92 billion in FY12 to $14.92 billion in FY13, a decrease of 6.3 percent. The total funding going to the R series awards that I examined (which makes up about 85 percent of the RPG pool) dropped by 8.9 percent.

What accounts for this difference? U01 awards comprise the largest remaining portion of the RPG pool…The funds devoted to U01 awards remained essentially constant from FY12 to FY13 at $1.57 billion.
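Before moving on, the quoted numbers hang together arithmetically. A quick check, using only the figures quoted above:

```python
# Back-of-envelope checks on the figures quoted above; nothing here is new data.
rpg_fy12, rpg_fy13 = 15.92, 14.92               # total RPG funding, $ billions
print((rpg_fy12 - rpg_fy13) / rpg_fy12 * 100)   # ~6.3% drop, as Rockey reported

# Berg's numbers: ~1,000 fewer R-mechanism investigators was a 3.8% drop,
# which implies an FY12 cohort of very roughly 1000 / 0.038 ~ 26,000 funded PIs.
print(1000 / 0.038)
```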

Go read the whole thing.

This type of analysis really needs more attention at the NIH level. They’ve come a looooong way in recent years in terms of their willingness to focus on what they are actually doing in terms of applications, funding, etc. This is in no small part due to the efforts of Jeremy Berg, who used to be the Director of NIGMS. But tracking the fate of applications only goes so far, particularly when it is assessed only on a 1-2 year basis.

The demand on the NIH budget is related to the pool of PIs seeking funding. This pool is considerably less elastic than the submission of grant applications. PIs don’t submit grant applications endlessly for fun, you know. They seek a certain level of funding. Once they reach that, they tend to stop submitting applications. A lot of the increase in application churn over the past decade or so has to do with the relative stability of funding. When odds of continuing an ongoing project are high, a large number of PIs can just submit one or two apps every 5 years and all is well. Uncertainty is what makes her submit each and every year.

Similarly, when a PI is out of funding completely, the number of applications from this lab will rise dramatically….right up until one of them hits.

I argue that if solutions to the application churn and the funding uncertainty (which decreases overall productivity of the NIH enterprise) are to be found, they will depend on a clear understanding of the dynamics of the PI population.

Berg has identified two years in which the PI turnover is very different. How do these numbers compare with historical trends? Which is the unusual one? Or is this the expected range?

Can we see the 1,000-PI loss as a temporary situation or a permanent change? It is an open question how successive years without NIH funding affect a PI. Do these individuals tend to regain funding in 2, 3 or 4 years’ time? Do they tend to go away and never come back? More usefully, what proportion of the lost investigators will follow each of these fates?
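These are exactly the questions a per-PI funding history would answer. A hypothetical sketch of the analysis, with an invented table of one row per PI per funded fiscal year:

```python
# Hypothetical sketch: with one row per PI per fiscal year of NIH funding,
# measure how long funding gaps last. Caveat: a PI who loses funding and never
# returns produces no "gap" here, so the never-come-back group needs separate handling.
import pandas as pd

funded = pd.DataFrame({
    "pi_id":       [1, 1, 1, 2, 2, 3],
    "fiscal_year": [2010, 2011, 2014, 2010, 2011, 2012],
})

def gap_lengths(years):
    ys = sorted(years)
    return [b - a - 1 for a, b in zip(ys, ys[1:]) if b - a > 1]

gaps = [g for gs in funded.groupby("pi_id")["fiscal_year"].apply(gap_lengths) for g in gs]
print(gaps)                                       # [2] -> PI 1 spent two years unfunded
if gaps:
    print(sum(g <= 3 for g in gaps) / len(gaps))  # share of closed gaps lasting <= 3 years
```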

The same questions arise for the other factoids Berg mentions. The rate at which R00 awardees transition to other funding would seem incredibly important to know. A one-year gap hardly seems worth discussing; that can easily happen under current conditions. But if they are still not funded 2 or maybe 3 years after the R00 expires? That is of greater concern.

Still, a welcome first step, Dr. Berg. Let’s hope Sally Rockey is listening.

Virginia Hughes has a nice piece out on generational transmission of……experiences. In this case she focuses on a paper by Dias and Ressler (2014) showing that if you do fear conditioning to a novel odor in mice, the next two generations of offspring of these mice retain sensitivity to that odor.

This led me to mention that there is a story in substance abuse that has been presented at meetings in the past couple of years that is fascinating. Poking around I found out that the group of Yasmin Hurd (this Yasmin Hurd, yes) has a new paper out. I’ve been eagerly awaiting this story, to say the least.

Szutorisz H, Dinieri JA, Sweet E, Egervari G, Michaelides M, Carter JM, Ren Y, Miller ML, Blitzer RD, Hurd YL. Parental THC Exposure Leads to Compulsive Heroin-Seeking and Altered Striatal Synaptic Plasticity in the Subsequent Generation. Neuropsychopharmacology. 2014 Jan 2. doi: 10.1038/npp.2013.352. [Epub ahead of print] [PubMed, Neuropsychopharmacology]

This study was conducted with Long-Evans rats. The first step was to expose both male and female rats, during adolescence, to Δ9-tetrahydrocannabinol (THC) at a dose of 1.5 mg/kg, i.p., every third day from Postnatal Day (PND) 28-49. No detectable THC remained in the animals 16 (and 28) days later. The animals were bred at PND 64-68. Parallel vehicle-exposed rats served as the comparison group.

The resulting pups were fostered out to surrogate mothers in new “litters” consisting of approximately equal numbers of male and female pups and an equal number from the THC-exposed and vehicle-exposed parents. So this rules out any effects the adolescent THC might have on parenting behavior (that would affect the pups) and mutes any effect of littermates who are offspring of the experimental or control parents.

[Figure 1d from Szutorisz et al., 2014: heroin intravenous self-administration]

The paper shows a number of phenotypes expressed by the offspring of parents exposed to THC in adolescence. I’ve picked the one that is of greatest interest to me to show. Figure 1d from the paper depicts behavioral data for a heroin intravenous self-administration study conducted when the offspring had reached adulthood. As you can see, under Fixed-Ratio 5 (5 presses per drug infusion) the animals with parents who were exposed to THC pressed more for heroin than did the control group. They were equal in presses directed at the inactive lever and exhibited equal locomotor activity during the self-administration session. This latter shows that the drug-lever pressing was not likely due to a generalized activation or other nonspecific effect.

The paper contains some additional work: electrophysiology showing altered long-term depression in the dorsal striatum, differential behavior during heroin withdrawal, and alterations in glutamate- and dopamine-related gene expression. I’ll let you read the details for yourself.

But the implications here are stunning and much more work needs to be completed post-haste.

We’ve known for some time (centuries?) that substance abuse runs in families. The best studied case is perhaps alcoholism. The heritability of alcoholism has been established using human twin studies, family studies in which degree of relatedness is used and adoption studies. Establishing that alcoholism has a heritable component led to attempts to identify genetic variations that might confer increased risk.
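For orientation only: the classic back-of-the-envelope version is Falconer’s estimate, h^2 ≈ 2(r_MZ - r_DZ), where r_MZ and r_DZ are the phenotypic correlations within monozygotic and dizygotic twin pairs. The modern alcoholism literature uses far more sophisticated biometrical models, but that is the basic logic by which twin data yield a heritability number.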

The findings of Szutorisz and colleagues throw a new wrinkle into the usual human study designs. It may be possible to identify another factor, parental drug exposure, which explains additional variability in family outcomes. This would probably help to narrow the focus on the genetic variants that are important and also help to identify epigenetic mechanisms that change in response to actual drug use.

On the pre-clinical research side…..wow. Is it via the male or female…or is it both? Does the specific developmental window of exposure (this was adolescent) matter? Does the specific drug matter? Is the downstream effect limited to some substances but not others? Is there a general liability for affective disorder being wrought? Does the effect continue off into subsequent generations? Can it be amped up in magnitude for the F2 generation (and onward) if the F0 and F1 generations are both exposed?

I think if this finding holds up it will help to substantially advance understanding of how An Old Family Tradition can become established. As I posted before:

In his classic song the great philosopher and student of addictive disorders, Hank Williams, Jr., blames a traditional source for increasing the probability of developing substance abuse:

….Hank why do you drink?
(Hank) why do you roll smoke?
Why must you live out the songs you wrote?
Stop and think it over
Try and put yourself in my unique position
If I get stoned and sing all night long
It’s a family tradition!

A communication to the blog raised an issue that is worth exploring in a little more depth. The questioner wanted to know if I knew why a NIH Program Announcement had disappeared.

The Program Announcement (PA) is the most general of the NIH Funding Opportunity Announcements (FOAs). It is described with these key features:

  • Identifies areas of increased priority and/or emphasis on particular funding mechanisms for a specific area of science
  • Usually accepted on standard receipt (postmarked) dates on an on-going basis
  • Remains active for three years from date of release unless the announcement indicates a specific expiration date or the NIH Institute/Center (I/C) inactivates sooner

In my parlance, the PA means “Hey, we’re interested in seeing some applications on topic X“….and that’s about it. Admittedly, the study section reviewers are supposed to conduct review in accordance with the interests of the PA. Each application has to be submitted under one of the FOAs that are active. Sometimes, this can be as general as the omnibus R01 solicitation. That’s pretty general. It could apply to any R01 submitted to any of the NIH Institutes or Centers (ICs). The PAs can offer a greater degree of topic specificity, of course. I recommend you go to the NIH Guide page and browse around. You should bookmark the current-week page and sign up for email alerts if you haven’t already. (Yes, even grad students should do this.) Sometimes you will find a PA that seems to fit your work exceptionally well and, of course, you should use it. Just don’t expect it to be a whole lot of help.

This brings us to the specific query that was sent to the blog, i.e., why did the PA DA-14-106 go missing, only a week or so after being posted?

Sometimes a PA expires and is either not replaced or you have happened across it in between expiration and re-issue of the next 3-year version. Those are the more-common reasons. I’d never seen one be pulled immediately after posting, however. But the NOT-DA-14-006 tells the tale:

This Notice is to inform the community that NIDA’s “Synthetic Psychoactive Drugs and Strategic Approaches to Counteract Their Deleterious Effects” Funding Opportunity Announcements (FOAs) (PA-14-104, PA-14-105, PA-14-106) have been reposted as PARs, to allow a Special Emphasis Panel to provide peer review of the applications. To make this change, NIDA has withdrawn PA-14-104, PA-14-105, PA-14-106, and has reposted these announcements as PAR-14-106, PAR-14-105, and PAR-14-104.

This brings us to the key difference between the PA and a PAR (or a PAS):

  • Special Types
    • PAR: A PA with special receipt, referral and/or review considerations, as described in the PAR announcement
    • PAS: A PA that includes specific set-aside funds as described in the PAS announcement

Applications submitted under a PA are going to be assigned to the usual Center for Scientific Review (CSR) panels and thrown in with all the other applications. This can mean that the special concerns of the PA do not really influence review. How so? Well, the NIDA has a generic-ish and long-running PA on the “Neuroscience Research on Drug Abuse“. This is really general. So general that several entire study sections of the CSR fit within it. Why bother reviewing in accordance with the PA when basically everything assigned to the section is, vaguely, in this sphere? And even on the more-specific ones (say, Sex-Differences in Drug Abuse or HIV/AIDS in Drug Abuse, that sort of thing) the general interest of the IC fades into the background. The panel is already more-or-less focused on those being important issues.  So the Significance evaluation on the part of the reviewers barely budges in response to a PA. I bet many reviewers don’t even bother to check the PA at all.

The PAR means, however, that the IC convenes its own Special Emphasis Panel specifically for that particular funding opportunity. So the review panel can be tailored to the announcement’s goals much in the way that a panel is tailored for a Request for Applications (RFA) FOA. The panel can have very specific expertise for both the PAR and for the applications that are received and, presumably, have reviewers with a more than average appreciation for the topic of the PAR. There is no existing empaneled population of reviewers to limit choices. There is no distraction from the need to get reviewers who can handle applications that are on topics different from the PAR in question. An SEP brings focus. The mere fact of an SEP also tends to keep the reviewer’s mind on the announcement’s goals. They don’t have to juggle the goals of PA vs PA vs PA as they would in a general CSR panel.

As you know, Dear Reader, I have blogged about both synthetic cannabinoid drugs and the “bath salts” here on this blog now and again. So I can speculate a little bit about what happened here. These classes of recreational drugs hit the attention of regulatory authorities and scientists in the US around about 2009, and certainly by 2010. There have been a modest but growing number of papers published. I have attended several conference symposia themed around these drugs. And yet if you do some judicious searching on RePORTER you will find precious few grants dedicated to these compounds. It is no great leap of faith to figure that various PIs have been submitting grants on these topics and are not getting fundable scores. There are, of course, many possible reasons for this and some may have influenced NIDA’s thinking on this PA/PAR.
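If you want to run the same kind of RePORTER check yourself, something along these lines works against the public RePORTER API (v2). The payload field names are my best reading of that API’s documentation, not anything vouched for by NIH or this post, so verify them before trusting the counts.

```python
# Sketch of a RePORTER text search for funded projects mentioning a compound class.
# Endpoint and payload shape follow the public RePORTER API (v2); confirm the
# field names against the current documentation at https://api.reporter.nih.gov.
import requests

payload = {
    "criteria": {
        "advanced_text_search": {
            "operator": "and",
            "search_field": "projecttitle,terms,abstracttext",
            "search_text": "synthetic cathinone",   # or "synthetic cannabinoid"
        }
    },
    "limit": 50,
}
resp = requests.post("https://api.reporter.nih.gov/v2/projects/search", json=payload, timeout=30)
resp.raise_for_status()
data = resp.json()
print(data.get("meta", {}).get("total", "unknown"), "matching projects")
for hit in data.get("results", []):
    print(hit.get("fiscal_year"), hit.get("project_num"), hit.get("project_title"))
```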

It may be the case that NIDA felt that reviewers simply did not know that NIDA wanted to see some applications on these topics funded and were consequently not prioritizing the Significance of such applications. Or it may be that NIDA felt that their good PIs who would write competitive grants were not interested in the topics. Either way, a PA would appear to be sufficient encouragement.

The replacement of a PA with a PAR, however, suggests that NIDA has concluded that the problem lies with study section reviewers and that a mere PA was not going to be sufficient* to focus minds.

As one general conclusion from this vignette, the PAR is substantially better than the PA when it comes to enhancing the chances for applications submitted to it. This holds in a case in which there is some doubt that the usual CSR study sections will find the goals to be Significant. The caveat is that when there is no such doubt, the PAR is worse because the applications on the topic will all be in direct competition with each other. The PAR essentially guarantees that some grants on the topic will be funded, but the PA potentially allows more of them to be funded.

It is only “essentially” because the PAR does not come with set-aside funds as does the RFA or the PAS. And I say “potentially” because this depends on there being many highly competitive applications distributed across several CSR sections for a PA.

__

*This is a direct validation of my position that the PA is a rather weak stimulus, btw.

As always when it comes to NIDA specifics, see Disclaimer.

A flurry of Twitts from Doctor Zen last week drew my attention, eventually, to a report from The Clayman Institute for Gender Research at Stanford. The direct link to the report is here [PDF] and an executive summary style Dual Career Toolkit is provided as a PPT file.

There is all kinds of interesting stuff in here, including basic demographics on prevalence (36% of the American professoriate), career attitudes (50% of men say their career is primary, only 20% of women do) and impact of dual hires (performance measures of trailing spouses do not differ from those of single-hire peers). With respect to the last, the authors conclude:

Thus, our data suggest that productivity levels among second hires are not significantly different from those among their peers after data are disaggregated by field, and gender and rank are accounted for. (p72)
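To make “accounted for” concrete: the comparison amounts to regressing a productivity measure on second-hire status with field, gender, and rank as covariates. A hypothetical sketch, with an invented data file and column names (not the Clayman report’s actual variables):

```python
# Hypothetical sketch of an adjusted comparison of second hires vs. peers.
# The data file and column names are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

faculty = pd.read_csv("faculty_productivity.csv")   # hypothetical dataset
model = smf.ols(
    "publications ~ second_hire + C(field) + C(gender) + C(rank)",
    data=faculty,
).fit()
# The coefficient on second_hire is the field/gender/rank-adjusted difference;
# "not significantly different" means its confidence interval spans zero.
print(model.summary())
```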

The Executive Summary of the full report emphasizes that dual-hires are seen as both a growing reality and a thorny problem for Universities. It takes no great leap for those of us familiar with such cases to grasp that one of the biggest reasons for pushback and objections is the assertion or supposition that the trailing spouse would not deserve a hire in his or her own right. Analyses such as the above seem to be critical to this issue in my view.

I’ve written on this topic before:

Spousal Hiring is Unethical? Puhleeze.

It was one of my more extensively commented posts (107 comments), so I entirely endorse the idea that this is one of the thornier questions in academia at the moment.

__
By way of a disclaimer, I am in a dual-academic-career relationship. We have not yet had opportunity or need to press dual-hire issues, but this is always possible in the future.

Commenter mikka wants to know why:

I don’t get this “professional editors are not scientists” trope. All the professional editors I know were bench scientists at the start of their career. They read, write, look at and interpret data, talk to bench scientists and keep abreast of their fields. In a nutshell, they do what PIs do, except writing grants and deciding what projects must be pursued. The input some editors put in some of my papers would merit a middle authorship. They are scientists all right, and some of them very good ones.

Look, yes you are right that they are scientists. In a certain way. And yes, I regret the way that my opinion that they are 1) very different from Editors and Associate Editors who are primarily research scientists and 2) ruining science tends to be taken as a personal attack on their individual qualities and competence.

But there is simply no way around it.

The typical professional editor, usually at a Glamour(ish) Mag publication, is under-experienced in science compared with a real Editor.

Regardless of circumstances, if they have gone to the Editorial staff from a postdoc, without experience in the Principal Investigator chair, then they have certain limitations.

It is particularly bad that ass kissing from PIs who are desperate to get their papers accepted tends to persuade these people over time that they are just as important as those PIs.

“Input” merits middle authorship, eh? Sure, anyone with half a brain can suggest a few more experiments. And if you have the despotic power of a Nature editor’s keyboard behind you, sure…they damn well will do it. And ask for more. And tell you how uniquely brilliant of a suggestion it all was.

And because it ends up published in a Glamour Mag, all the sheep will bleat approvingly about what a great paper it is.

Pfaagh.

Professional editors are ruining science.

They have no loyalty to the science*. Their job is to work to aggrandize their own magazine’s brand at the cost of the competition. It behooves them to insist that six papers worth of work gets buried in “Supplemental Methods” because no competing and lesser journal will get those data. It behooves them to structure the system in a way that authors will consider a whole bunch of other interesting data “unpublishable” because it got scooped by two weeks.

They have no understanding or consideration of the realities of scientific careers*. It is of no concern to them whether scientific production should be steady, whether uninteresting findings can later be of significance, or whether any particular subfield really needs this particular kick in the pants. It is no concern to them that their half-baked suggestion requires a whole R01-scale project and two years of experiments. They do not have to consider any reality whatsoever. I find that real, working scientist Editors are much more reasonable about these issues.

Noob professional editors are star-struck and never, ever are able to see that the Emperor is, in fact, stark naked. Sorry, but it takes some experience and block circling time to mature your understanding of how science really works. Of what is really important over the long haul. Notice how the PLoSFail fans (to pick one recent issue) are heavily dominated by the wet-behind-the-ears types and the critics seem to mostly be established faculty? This is no coincidence.

Again, this is not about the personal qualities of the professional editors. The structure of their jobs, and typical career arc, makes it impossible for them to behave differently.

This is why it is the entire job category of professional editor that is the problem.

If you require authoritah, note that Nobel laureate Brenner said something similar.

It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists.

He was clearly not talking about peer review itself, but rather the professional Glamour Mag type editor.

_
*as well they should not. It is a structural feature of the job category. They are not personally culpable; the institutional limitations are responsible.

Do you decide whether to accept a manuscript for review based on the Journal that is asking?

To what extent does this influence your decision to take a review assignment?

Why?

Congress is losing it.

February 27, 2014

Just after we noticed that Congress has seen fit to add a special prohibition on anything done with Federal grant funds that might suggest gun control is in order, there’s another late breaking Congressional mandate notice.

NOT-OD-14-062:

FY 2014 New Legislative Mandate

Restriction of Pornography on Computer Networks (Section 528)
“(a) None of the funds made available in this Act may be used to maintain or establish a computer network unless such network blocks the viewing, downloading, and exchanging of pornography.

(b) Nothing in subsection (a) shall limit the use of funds necessary for any Federal, State, tribal, or local law enforcement agency or any other entity carrying out criminal investigations, prosecution, or adjudication activities.”

Really guys? That was a top priority item?

Interesting though, isn’t it? Including indirect cost expenditures, this would seem to apply to a very large number of universities in the US. And now Congress has demanded they adopt nanny pR0n filters.

I don’t see any exceptions for classwork here, either.

Pot kills?

February 25, 2014

Apparently pot CAN kill.

 

Hartung and colleagues conclude from two Cases:

After exclusion of other causes of death we assume that the young men died from cardiovascular complications evoked by smoking cannabis….The assumption of fatal heart failure in both cases is corroborated by the acute effects of marijuana, including a marked increase in heart rate that may result in cardiac ischemia in susceptible individuals, lesser increases in cardiac output, supine blood pressure and postural hypotension….We assume the deaths of these two young men occurred due to arrhythmias evoked by smoking cannabis; however this assumption does not rule out the presence of predisposing cardiovascular factors.

h/t:

The latest round of waccaloonery is the new PLoS policy on Data Access.

I’m also dismayed by two other things of which I’ve heard credible accounts in recent months. First, the head office has started to question authors over their animal use assurance statements, declining to take the statement of local IACUC oversight as valid because of the research methods and outcomes. On the face of it, robust concern about animal use isn’t terrible. However, in the case I am familiar with, they got it embarrassingly wrong. Wrong because any slight familiarity with the published literature would show that the “concern” was misplaced. Wrong because if they are going to try to sidestep the local IACUC and AAALAC and OLAW (and their worldwide equivalents) processes then they are headed down a serious rabbithole of expensive investigation and verification. At the moment this cannot help but be biased, and accusations are going to rain down on investigators from non-English-speaking and non-Western countries, I can assure you.

The second incident has to do with accusations of self-plagiarism based on the sorts of default Methods statements or Introduction and/or Discussion points that get repeated. Look, there are only so many ways to say “and thus we prove a new facet of how the PhysioWhimple nucleus controls Bunny Hopping”. Only so many ways to say “The reason BunnyHopping is important is because…”. Only so many ways to say “We used optogenetic techniques to activate the gertzin neurons in the PhysioWhimple nucleus by….”. This one is particularly salient because it works against the current buzz about replication and reproducibility in science. Right? What is a “replication” if not plagiarism? And in that case it is not just the way the Methods are described, or the reason for doing the study and the interpretation. No, in this case it is plagiarism of the important part. The science. This is why concepts of what is “plagiarism” in science cannot be aligned with concepts of plagiarism in a bit of humanities text.

These two issues highlight, once again, why it is TERRIBLE for us scientists to let the humanities-trained and humanities-blinkered wordsmiths running journals dictate how publication is supposed to work.

Data depository obsession gets us a little closer to home because the psychotics are the Open Access Eleventy waccaloons who, presumably, started out as nice, normal, reasonable scientists.

Unfortunately PLoS has decided to listen to the wild-eyed fanatics and to play in their fantasy realm of paranoid ravings.

This is a shame and will further isolate PLoS’ reputation. It will short circuit the gradual progress they have made in persuading regular, non-waccaloon science folks of the PLoS ONE mission. It will seriously cut down submissions…which is probably a good thing since PLoS ONE continues to suffer from growing pains.

But I think it a horrible loss that their current theological orthodoxy is going to blunt the central good of PLoS ONE, i.e., the assertion that predicting “impact” and “importance” before a manuscript is published is a fool’s errand and inconsistent with the best advance of science.

The first problem with this new policy is that it suggests that everyone should radically change the way they do science, at great cost of personnel time, to address the legitimate sins of the few. The scope of the problem hasn’t even been proven to be significant and we are ALL supposed to devote a lot more of our precious personnel time to data curation. Need I mention that research funds are tight and that personnel time is the most significant cost?

This brings us to the second problem. This Data Access policy requires much additional data curation, which will take time. We all handle data in the way that has proved most effective for us in our operations. Other labs have, no doubt, done the same. Our solutions are not the same as those of people doing very nearly the same work. Why? Because the PI thinks differently. The postdocs and techs have different skill sets. Maybe we are interested in a sub-analysis of a data set that nobody else worries about. Maybe the proprietary software we use differs and the smoothest way to manipulate data is different. We use different statistical and graphing programs. Software versions change. Some people’s datasets are so large as to challenge the capability of regular-old desktop computing and storage hardware. Etc, etc, etc ad nauseam.

Third problem- This diversity in data handling results, inevitably, in attempts at data orthodoxy. So we burn a lot of time and effort fighting over that. Who wins? Do we force other labs to look at the damn cumulative records for drug self-administration sessions because some old school behaviorists still exist in our field? Do we insist on individual subjects’ presentations for everything? How do we time bin a behavioral session? Are the standards for dropping subjects the same in every possible experiment? (Answer: no.) Who annotates the files so that any idiot humanities-major on the editorial staff of PLoS can understand that it is complete?

Fourth problem- I grasp that actual fraud and misleading presentation of data happens. But I also recognize, as the waccaloons do not, that there is a LOT of legitimate difference of opinion on data handling, even within a very old and well established methodological tradition. I also see a lot of will on the part of science denialists to pretend that science is something it cannot be in their nitpicking of the data. There will be efforts to say that the way lab X deals with their, e.g., fear conditioning trials, is not acceptable and they MUST do it the way lab Y does it. Keep in mind that this is never going to be single labs but rather clusters of lab methods traditions. So we’ll have PLoS inserting itself in the role of how experiments are to be conducted and interpreted! That’s fine for post-publication review but to use that as a gatekeeper before publication? Really PLoS ONE? Do you see how this is exactly like preventing publication because two of your three reviewers argue that it is not impactful enough?

This is the reality. Pushes for Data Access will inevitably, in real practice, result in constraints on the very diversity of science that makes it so productive. It will burn a lot of time and effort that could be more profitably applied to conducting and publishing more studies. It addresses a problem that is not clearly established as significant.

NIH Multi-PI Grant Proposals.

February 24, 2014

In my limited experience, the creation, roll-out and review of Multi-PI direction of a single NIH grant has been the smoothest GoodThing to happen in NIH-supported extramural research.

I find it barely draws mention in review and deduce that my fellow scientists agree with me that it is a very good idea, long past due.

Discuss.

While I’m getting all irate about the pathetic non-response to the Ginther report, I have been neglecting to think about the intramural research at NIH.

From Biochemme Belle:

In reflecting on the profound lack of association of grant percentile rank with the citations and quantity of the resulting papers, I am struck that it reinforces a point made by YHN about grant review.

I have never been a huge fan of the Approach criterion. Or, more accurately, of how it is reviewed in practice. Review of the specific research plan can bog down in many areas. A review is often derailed into critique of the applicant’s failure to appropriately consider all the alternatives, into disagreement over predictions that can only be resolved empirically, into endless ticky-tack kvetching over buffer concentrations, into a desire for exacting specification of each and every control….. I am skeptical. I am skeptical that identifying these things plays any real role in the resulting science. First, because much of the criticism over the specifics of the approach vanishes when you consider that the PI is a highly trained scientist who will work out the real science during the conduct of same. Like we all do. For anticipated and unanticipated problems that arise. Second, because much of this Approach review is rightfully the domain of the peer review of scientific manuscripts.

I am particularly unimpressed by the shared delusion that the grant revision process by which the PI “responds appropriately” to the concerns of three reviewers alters the resulting science in a specific way either. Because of the above factors and because the grant is not a contract. The PI can feel free to change her application to meet reviewer comments and then, if funded, go on to do the science exactly how she proposed in the first place. Or, more likely, do the science as dictated by everything that occurs in the field in the years after the original study section critique was offered.

The Approach criterion score is the one that is most correlated with the eventual voted priority score, as we’ve seen in data offered up by the NIH in the past.

I would argue that a lot of the Approach criticism that I don’t like is an attempt to predict the future of the papers. To predict the impact and to predict the relative productivity. Criticism of the Approach often sounds to me like “This won’t be publishable unless they do X…..” or “this won’t be interpretable, unless they do Y instead….” or “nobody will cite this crap result unless they do this instead of that“.

It is a version of the deep motivator of review behavior. An unstated (or sometimes explicit) fear that the project described in the grant will fail, if the PI does not write different things in the application. The presumption is that if the PI does (or did) write the application a little bit differently in terms of the specific experiments and conditions, that all would be well.

So this also says that when Approach is given a congratulatory review, the panel members are predicting that the resulting papers will be of high impact…and plentiful.

The NHLBI data say this is utter nonsense.

Peer review of NIH grants is not good at predicting, within the historical fundable zone of about the top 35% of applications, the productivity and citation impact of the resulting science.

What the NHLBI data cannot address is a more subtle question. The peer review process decides which specific proposals get funded. Which subtopic domains, in what quantity, with which models and approaches… and there is no good way to assess the relative wisdom of this. For example, a grant on heroin may produce the same number of papers and citations as a grant on cocaine. A given program on cocaine using mouse models may produce approximately the same bibliometric outcome as one using humans. Yet the real world functional impact may be very different.

I don’t know how we could determine the “correct” balance but I think we can introspect that peer review can predict topic domain and the research models a lot better than it can predict citations and paper count. In my experience when a grant is on cocaine, the PI tends to spend most of her effort on cocaine, not heroin. When the grant is for human fMRI imaging, it is rare the PI pulls a switcheroo and works on fruit flies. These general research domain issues are a lot more predictable than the impact of the resulting papers, in my estimation.

This leads to the inevitable conclusion that grant peer review should focus on the things that it can affect and not on the things that it cannot. Significance. Aka, “The Big Picture”. Peer review should wrestle over the relative merits of the overall topic domain, the research models and the general space of the experiments. It should de-emphasize the nitpicking of the experimental plan.

A reader pointed me to this News Focus in Science which referred to Danthi et al, 2014.

Danthi N, Wu CO, Shi P, Lauer M. Percentile ranking and citation impact of a large cohort of National Heart, Lung, and Blood Institute-funded cardiovascular R01 grants. Circ Res. 2014 Feb 14;114(4):600-6. doi: 10.1161/CIRCRESAHA.114.302656. Epub 2014 Jan 9.

[PubMed, Publisher]

I think Figure 2 makes the point, even without knowing much about the particulars:

[Figure 2 from Danthi et al., 2014]

and the last part of the Abstract makes it clear.

We found no association between percentile rankings and citation metrics; the absence of association persisted even after accounting for calendar time, grant duration, number of grants acknowledged per paper, number of authors per paper, early investigator status, human versus nonhuman focus, and institutional funding. An exploratory machine learning analysis suggested that grants with the best percentile rankings did yield more maximally cited papers.
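Operationally, the headline test is the kind of thing you could check in a few lines once you had the grant-level data in hand. A minimal sketch with an invented file; the actual paper uses considerably richer models:

```python
# Minimal sketch of the headline test: is percentile rank associated with a
# citation metric across funded grants? The data file is invented; Danthi et al.
# adjusted for grant duration, grants per paper, investigator status, and more.
import pandas as pd
from scipy.stats import spearmanr

grants = pd.read_csv("funded_r01s.csv")   # hypothetical: one row per funded grant
rho, p = spearmanr(grants["percentile_rank"], grants["citations_per_year"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
# Note: only funded grants appear here, which is exactly the caveat raised in
# the Addendum below about the unknown fate of unfunded applications.
```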

The only thing surprising in all of this was a quote attributed to the senior author Michael Lauer in the News Focus piece.

“Peer review should be able to tell us what research projects will have the biggest impacts,” Lauer contends. “In fact, we explicitly tell scientists it’s one of the main criteria for review. But what we found is quite remarkable. Peer review is not predicting outcomes at all. And that’s quite disconcerting.”

Lauer is head of the Division of Cardiovascular Research at the NHLBI and has been there since 2007. Long enough to know what time it is. More than long enough.

The take home message is exceptionally clear. It is a message that most scientists who have stopped to think about it for half a second have already arrived upon.


Science is unpredictable.

Addendum: I should probably point out for those readers who are not familiar with the whole NIH Grant system that the major unknown here is the fate of unfunded projects. It could very well be the case that the ones that manage to win funding do not differ much but the ones that are kept from funding would have failed miserably, had they been funded. Obviously we can’t know this until the NIH decides to do a study in which they randomly pick up grants across the entire distribution of priority scores. If I was a betting man I’d have to lay even odds on the upper and lower halves of the score distribution 1) not differing vs 2) upper half does better in terms of paper metrics. I really don’t have a firm prediction, I could see it either way.

A query came in through the email box:

Do you use ELNs in your lab? Is that something that you think would make a useful blog post? I haven’t found much elsewhere in the blogosphere about ELNs. Maybe you will find this to be a shining example of why you have stuck with paper and pen.

I don’t use one so I’m turning this over to you folks. Any recommendations for your fellow Reader?