We’ve previously discussed the NIH F32 Fellowship designed to support postdoctoral trainees. Some of the structural limitations of a system designed, on its face, to provide necessary support for necessary (additional) training overlap considerably with the problems of the F31 program designed to support graduate students.

Nevertheless, winning an individual NRSA training fellowship (graduate or postdoctoral) has all kinds of career benefits for the trainee and primary mentor, so these fellowships remain an attractive option.

A question arose on the Twitts today about whether it was worth it for a postdoc in a new lab to submit an application.

In my limited experience reviewing NRSA proposals in a fellowship-dedicated panel for the NIH, there is one issue that looms large in these situations.

Reviewers #1, #2 and #3: “There is no evidence in the application that sufficient research funds will be available to complete the work described during the proposed interval of funding.”

NRSA fellowships, as you are aware, do not come with money to pay for the actual research. The fellowship applications require a good deal of discussion of the research the trainee plans to complete during the proposed interval of training. In most cases that research plan involves a fair amount of work that requires a decent amount of research funding to complete.

The reviewers, nearly all of them in my experience, will be looking for signs of feasibility: that the PI is actually funded, funded to do something vaguely related* to the topic of the fellowship proposal, and funded for the duration over which the fellowship will be active.

When the PI is not obviously funded through that interval, eyebrows are raised. Criticism is leveled.

So, what is a postdoc in a newer lab to do? What is the PI of a newish lab, without substantial funding to do?

One popular option is to find a co-mentor for the award. A co-mentor who is involved, meaning the research plan needs to be written as a collaborative project between laboratories. Obviously, this co-mentor should have the grant support that the primary PI is lacking. It needs to be made clear that there will be some sort of research funds to draw upon to support the fellow doing some actual research.

The inclusion of “mentoring committees” and “letters of support from the Chair” are not sufficient. Those are needed, don’t get me wrong, but they address other concerns** that people have about untried PIs supervising a postdoctoral fellow.

It is essential that you anticipate the above referenced Stock Critique and do your best*** to head it off.

__
*I have seen several highly regarded NRSA apps for which the research plan looks to me to be of R01-quality writing and design.

**We’re in stock-critique land here. Stop raging about how you are more qualified than Professor AirMiles to actually mentor a postdoc.

***Obviously the application needs to present the primary mentor’s funding in as positive a light as possible. Talk about startup funds, refer to local pilot grants, drop promising R01 scores if need be. You don’t want to blow smoke, or draw too much attention to deficits, but a credible plan for acquiring funding goes a lot farther than ignoring the issue.

Question of the Day

May 28, 2014

Are your papers reporting “discoveries”?

Or “demonstrating” something?

Discuss.

This topic keeps coming up amongst the trainees and I have an area of confusion.

What does it mean that your mentoring has been bad? Is it all about *outcome*?

Is it about lab favoritism?

Does your mentor fail to advance the careers of everyone? Or is it just that you were not the favored one?

Are there specific things your mentor should and could have done for you that you can mention? Did you only recognize this in retrospect, or was it frustrating at the time?

A guest post from @iGrrrl


Like winter, changes to the biosketch are coming

Dr. Rockey spoke about the changes to the biographical sketch at the NORDP meeting this week, and I think I can at least offer a bit more depth about the thinking behind this, both from her comments and from what I’ve seen over the last few years. Certainly my knee-jerk negative opinion about this change has evolved upon reflection and listening to her presentation. This may not be as bad as it sounds. Maybe.

In her talk, of which this question of biosketches was one very small part, her short-hand way of referring to the reasoning behind this was to “reduce the reliance on publishing in one-word-named journals” as the basis for judging the quality of an investigator’s productivity. When the biosketch was changed in 2010, shortening the publication list to 15 seemed to me to be designed to reduce a senior investigator’s advantage of sheer numbers of publications. The rise of metrics and the h-index means that the impact factor of the journal in which the work was published now substitutes, in many a reviewer’s mind, as the quick heuristic for assessing the Investigator criterion.

The move to the shorter publication list was also borrowed from NSF’s limit of 10 products for the biosketch. This sounds good on paper, but didn’t account for the differences in culture. Researchers in NSF-type fields are just as conscious of the h-index, but you don’t find the same reliance on “glamour magazines” cutting across all NSF research. The result seems to be that many young investigators in biomedicine feel they have to wait to publish until they have a story worthy of C/N/S. I hear sometimes about young researchers failing to make tenure in part because they did not publish enough; not because they didn’t have data, but because they were trying for the high-level journal and didn’t simply get it out in field- or sub-field-specific journals.

And work that appears in those so-called lower-tier journals shouldn’t be dismissed, but it often is effectively ignored when a reviewer’s eyes are looking for the titles of the high-impact journals. If a young faculty member’s list maxes out at 15 and they are all solid papers in reasonable journals, that’s usually fine. But sometimes they have fewer than 15, so the reviewer relies more on the impact factor of the journals in which the work appears, and that in turn leads to reliance on C/N/S (JAMA, NEJM, etc.). But for the applicant, sometimes the work reflected in the papers is based on a study that simply takes a long time to run, so that one paper in that year might represent a great deal of effort and time with results highly relevant within the context of the subfield. Or a series of papers may have methods published in one journal, the study in two more, and none of them are top-tier, but the entire story is important. This new narrative gives the opportunity to give that context.

This appears to be the point of the change to the biosketch: the impact factor of the journal(s) in which the work appeared may not reflect the impact of the results. Some applicants were including a sentence after every paper on the biosketch to try to give the context and impact–the contribution to the field–but in my experience, reviewers did not like and did not read these sentences. Yet, when reviewers come from a diversity of backgrounds, they may not be able to appreciate the impact of a result on the sub-field. Many of these concerns have been vociferously expressed to Dr. Rockey through various social media, primarily comments here at Our Host’s blog, but also on the RockTalking blog.

The idea behind this new approach to discussing an applicant’s contributions has some reasonable foundations, but I don’t expect it will work. In the short term, applicants will likely struggle to assemble a response to this new requirement. I can’t imagine reviewers will enjoy reading the resulting narratives. It may be that a common rubric approach to writing these sections as a clear story will make them uniform enough for reviewers to quickly judge, but I fully expect they will still be looking for Cell, Nature, and Science.

In my view, once it is on The News Hour then it is really news.

Nature published a commentary by NIH Director Francis S. Collins and NIH Office of Research on Women’s Health Director Janine A. Clayton which warns us that the NIH will start insisting on the inclusion of more sex-difference comparisons. These are to extend from cells to animal models across many areas of pre-clinical work.

The NIH is now developing policies that require applicants to report their plans for the balance of male and female cells and animals in preclinical studies in all future applications, unless sex-specific inclusion is unwarranted, based on rigorously defined exceptions. These policies will be rolled out in phases beginning in October 2014, with parallel changes in review activities and requirements.

I cannot wait to see what the “rigorously defined exceptions” will be for several types of research in which I have an interest. Every rat self-admin study must now include both males and females? For all treatment conditions or will it be acceptable to just tack the sex-comparison on at the end?

Furthermore, the NIH will monitor compliance of sex and gender inclusion in preclinical research funded by the agency through data-mining techniques that are currently being developed and implemented. Importantly, because the NIH cannot directly control the publication of sex and gender analyses performed in NIH-funded research, we will continue to partner with publishers to promote the publication of such research results.

oooooh. “partner with publishers” eh? Of course this is because Clayton and Collins realize that higher JIF journals are entirely uninterested in things as pedestrian as sex-comparisons, particularly when the outcome of the study is “no difference”. Which, btw, is one of the reasons nobody* wants to waste their precious time and grant money doing something as low-return as sex-comparisons. So somehow the NIH is going to lean on publishers to be…friendlier….to such work. I do hope they realize that this is not going to work. The contingencies are not going to change because the NIH asks. Now, if they actually went all in and dismantled GlamourMagScience culture by the judicious use of grant award, grant auditing and rules about the ratio of publications to effort expended… then we might see some progress. That will never happen and thus there will be no change in the publication contingencies that fight against sex-comparison studies.

Dr. Clayton went on The News Hour where Judy Woodruff asked her (and Phyllis Greenberger of Society for Women’s Health Research) some pretty obvious questions. Woodruff wanted to know if there were any clear examples in which women were put at risk or their health suffered because of a lack of such research. She also wanted to know what the implications for research might be- would it be more difficult or more expensive. Finally, Woodruff asked if scientists would resist.

From the transcript:

JUDY WOODRUFF: But how hard is that? Does that mean — is it extra work, is it more expensive? What’s involved in making sure there’s a gender balance?

Now Greenberger snuck in a “Both” off camera but then Clayton went on to be ridiculous and fail to answer the question. The answer is indeed “both” and it is a serious one if the NIH expects to get results. It will be more expensive, progress will be slower and it will be “harder” in the sense of teasing out the right experimental designs and variables so that an interpretable result can be reached. It isn’t rocket science, exactly, but it is harder.

JUDY WOODRUFF: Phyllis Greenberger, were there — were there actually individuals who were harmed or where help wasn’t delivered because the research was done only on males?

Greenberger totally walked around this one and Woodruff, to her credit, fronted Clayton with the same question a bit later. Clayton referred to heart attack warning symptoms in women that might differ from men…of course this has nothing whatever to do with preclinical research. Gaaah! So frustrating. Greenberger chimed back in with talk of drugs being removed from the market for adverse effects in women….with no indication that these were adverse effects that would have been identified in female-specific PREclinical research. C’mon NIH! If you are going to take a run at this, please prepare your argument!


JUDY WOODRUFF: And is that the reason that it wasn’t done earlier, Dr. Clayton, that there was just pushback in the scientific community?

The answer is illustrative of the problem at the NIH….

DR. JANINE CLAYTON: It’s hard to say. There are probably a lot of factors that are involved.

And what’s really important now is right now we have been able to put the focus on getting this as a priority. As Phyllis mentioned, the Society and other advocacy groups and scientists and others have talked about this in the past. In fact, we are supporting scientists who are doing this research, but it wasn’t enough of a priority. In some way, it was like a blind spot. Scientists weren’t thinking about it.

Yes, there are a lot of factors. They aren’t all that complicated either, since they boil down to scientists who want to conduct sex-differences comparisons being able to win funding to do the work.

Clayton is right. The NIH does indeed support investigators doing sex-differences studies. Those scientists do not have a problem of “priority” from the perspective of their own intrinsic motivation.

With respect to whether scientists resist, I enjoin you to go over to PubMed and type in Sex Differences and see what fill-in choices are offered to you. Click on several of these searches and see what you find. You will find funded projects in many of your favorite domains of interest. If you bother to click on the papers and look at the grant attributions, you may even find that many of these investigations were completed under NIH funding!

So when Clayton (and in the Commentary she is joined by Director Collins) claims it isn’t a “priority”, it seems misplaced to put this on the shoulders of extramural scientists.

If the NIH wants more sex-differences studies then they need to deploy their tastiest carrot to greater effect. Put out some Funding Opportunity Announcements and see what happens! Fund a few Supplements to the people who are already doing sex-comparisons! Pick up a few grants that missed the payline…again, from the people who are already proposing sex-comparisons!

And if you want to lure in new converts that you didn’t get with an RFA or a Program Announcement? This is simple. Just put out a policy that any grant application with a credible stab at a sex-comparison component gets an extra 5 percentile points credit towards the payline for funding.

Just you wait and see how many sudden converts you make!

___
*of the GlamourMag class investigator

Thought of the Day

May 16, 2014

We non-cheatfucks have to stick together and remind each other that not everyone gets ahead in science by faking data, abusing trainees and generally being the ass.

Some people are actually trying to do science right. Never forget that.

Occasionally, Dear Reader, one or another of you solicits my specific advice on some NIH grant situation you are experiencing. Sometimes the issues are too specific to be of much general good but this one is at least grist for discussion of how to proceed.

Today’s discussion starts with the criterion scores for an R01/equivalent proposal. As a reminder, the five criteria are ordered as Significance, Investigator, Innovation, Approach and Environment. The first round for this proposal ended up with

Reviewer #1: 1, 1, 1, 3, 1
Reviewer #2: 3, 1, 1, 3, 1
Reviewer #3: 6, 2, 1, 8, 1
Reviewer #4: 2, 1, 3, 2, 1

From this, the overall outcome was…. Not Discussed. Aka, triaged.
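For concreteness, here is a minimal Python sketch summarizing those criterion scores per reviewer. To be clear, the NIH does not derive the overall impact score by averaging criterion scores; this is purely an illustrative way to see how far Reviewer #3 sits from the other three.

```python
# Criterion scores in the order Significance, Investigator, Innovation,
# Approach, Environment. NIH scores run 1 (best) to 9 (worst).
scores = {
    "Reviewer 1": [1, 1, 1, 3, 1],
    "Reviewer 2": [3, 1, 1, 3, 1],
    "Reviewer 3": [6, 2, 1, 8, 1],
    "Reviewer 4": [2, 1, 3, 2, 1],
}

# Mean criterion score per reviewer -- a crude summary, NOT how the
# overall impact score is actually computed in review.
means = {r: sum(s) / len(s) for r, s in scores.items()}

for reviewer, m in sorted(means.items(), key=lambda kv: kv[1]):
    print(f"{reviewer}: {m:.1f}")
```

On these numbers, Reviewer #3’s mean of 3.6 is double the next-worst reviewer’s 1.8, driven almost entirely by the 6 on Significance and the 8 on Approach.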

As you might imagine, the PI was fuming. To put it mildly. Three pretty decent looking reviews and one really, really unfavorable one. This should, in my opinion, have been pulled up for discussion to resolve the differences of opinion. It was not. That indicates that the three favorable reviewers were either somehow convinced by what Reviewer #3 wrote that they had been too lenient…or they were simply not convinced discussion would make a material difference (i.e. push it over the “fund” line). The two 3s on Approach from the first two reviewers are basically a “I’d like to see this come back, fixed” type of position. So they might have decided, screw it, let this one come back and we’ll fight over it then.

This right here points to my problem with the endless queue of the revision traffic pattern and the new A2-as-A0 policy that will restore it to its former glory. It should be almost obligatory to discuss significantly divergent scores, particularly when they make a categorical difference. The difference between triaged and discussed, and the difference between a maybe-fundable and a clearly-not-fundable score, is known to the Chair and the SRO of the study section. The Chair could insist on resolving these types of situations. I think they should be obliged to do so, personally. It would save some hassle and extra rounds of re-review. It seems particularly called-for when the majority of the scores are in the better direction, because that should be some minor indication that the revised version would have a good chance to improve in the minds of the reviewers.

There is one interesting instructive point that reinforces one of my usual soapboxes. This PI had actually asked me before the review, when the study section roster was posted, what to do about reviewer conflicts. This person was absolutely incensed (and depressed) about the fact that a scientific competitor in highly direct competition with the proposal had been brought on board. There is very little you can do, btw, 30 days out from review. That ship has sailed.

After seeing the summary statement, the PI had to admit that going by the actual criticism comments, the only person with the directly-competing expertise was not Reviewer #3. Since the other three scores were actually pretty good, we can see that I am right that you cannot assume what a reviewer will think of your application based on perceptions of competition or personal dis/like. You will often be surprised: the reviewer that you assume is out to screw your application over will be pulling for it. Or at least, will be giving it a score that is in line with the majority of the other reviewers. This appears to be what happened in this case.

Okay. So, as I may have mentioned I have been reluctantly persuading myself that revising triaged applications is a waste of time. Too few of them make it over the line to fund. And in the recently past era of A1 and out….well perhaps time was better spent on a new app. In this case, however, I think there is a strong case for revision. Three of four (and we need to wonder about why there even were four reviews instead of three) of these criterion score sets look to me like scores that would get an app discussed. The ND seems to be a bit of an unfair result, based on the one hater. The PI agreed, apparently, and resubmitted a revised application. In this case the criterion scores were:

Reviewer #1: 1, 2, 2, 5, 1
Reviewer #2: 2, 2, 2, 2, 1
Reviewer #3: 1, 1, 2, 2, 1
Reviewer #4: 2, 1, 1, 2, 1
Reviewer #5: 1, 1, 4, 7, 1

I remind you that we cannot assume any overlap in reviewers nor any identity of reviewer number in the case of re-assigned reviewers. In this case the grant was discussed at study section and ended up with a 26 voted impact score. The PI noted that a second direct competitor on the science had been included on the review panel this time in addition to the aforementioned first person in direct competition.

Oh Brother.

I assure you, Dear Reader, that I understand the pain of getting reviews like this. Three reviewers throwing 1s and 2s is not only a “surely discussed” outcome but is a “probably funded” zone, especially for a revised application. Even the one “5” from Reviewer #1 on Approach is something that perhaps the other reviewers might talk him/her down from. But to have two obviously triage numbers thrown on Approach? A maddening split decision, leading to a score that is most decidedly on the bubble for funding.

My seat of the pants estimation is that this may require Program intervention to fund. I don’t know for sure, I’m not familiar with the relevant paylines and likely success rates for this IC for this fiscal year.

Now, if this doesn’t end up winning funding, I think the PI most certainly has to take advantage of the new A2 as A0 policy and put this sucker right back in. To the same study section. Addressing whatever complaints were associated with Reviewer #1’s and #5’s criticisms of course. But you have to throw yourself on the mercy of the three “good” reviewers and anyone they happened to convince during discussion. I bet a handful of them will be sufficient to bring the next “A0” of this application to a fundable score even if the two less-favorable reviewers refuse to budge. I also bet there is a decent chance the SRO will see that last reviewer as a significant outlier and not assign the grant to that person again.

I wish this PI the best of luck in getting the award.

For some reason I am having a DOI error on the actual commentary from Clayton and Collins. So until that is resolved, the sourcing is from the journalists who got the embargoed version.

Apparently Janine Clayton and Francis Collins have issued a commentary on a new policy that the NYT describes as:

The N.I.H. is directing scientists to perform their experiments with both female and male animals and include both sexes in sufficient numbers to see statistically significant differences. Grant reviewers will be instructed to take the sex balance of each study design into account when awarding grants.

Yeah, that sounds pretty clear. My studies just doubled…which means really that they were just cut in half. I’m cool with that. I actually agree that it would be good if we did almost everything as a sex-differences study.

There’s the money though. Sex-differences studies in a behaving animal are not just a doubling, as it happens (and as I inaccurately described just above). From a prior post on this topic, entitled “The funding is the science II: Why do they always drop the females?”:

As nicely detailed in Isis’ post, the inclusion of a sex comparison doubles the groups right off the bat but even more to the point, it requires the inclusion of various hormonal cycling considerations. This can be as simple as requiring female subjects to be assessed at multiple points of an estrous cycle. It can be considerably more complicated, often requiring gonadectomy (at various developmental timepoints) and hormonal replacement (with dose-response designs, please) including all of the appropriate control groups / observations. Novel hormonal antagonists? Whoops, the model is not “well established” and needs to be “compared to the standard gonadectomy models”, LOL >sigh<.

The money and the progress.

Keep in mind, if you will, that there is always a more fundamental comparison or question at the root of the project, such as “does this drug compound ameliorate cocaine addiction?” So all the gender comparisons, designs and groups need to be multiplied against the cocaine addiction/treatment conditions. Suppose it is one of those cocaine models that requires a month or more of training per group? Who is going to run all those animals? How many operant boxes / hours are available? And at what cost?
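The multiplication being described can be made concrete with a toy Python sketch. Every factor name and count below is hypothetical; the point is only the arithmetic of crossing a sex comparison (plus the hormonal-status conditions it drags in) against the core treatment design.

```python
# Hypothetical design factors for a self-administration study.
# Adding a sex comparison rarely just doubles the work: the
# hormonal-status conditions multiply in as well.
sexes = ["male", "female"]
hormonal_conditions = ["intact", "gonadectomized", "gonadectomized+replacement"]
drug_conditions = ["vehicle", "low dose", "high dose"]
n_per_group = 10  # subjects per group (assumed)

# Males-only, gonadally intact: just the three drug conditions.
baseline_groups = len(drug_conditions)

# Full sex-comparison design: every factor crossed with every other.
n_groups = len(sexes) * len(hormonal_conditions) * len(drug_conditions)

print(baseline_groups)            # 3 groups
print(n_groups)                   # 18 groups
print(n_groups * n_per_group)     # 180 animals
```

Eighteen groups instead of three, and if the model needs a month or more of training per group, the operant-box hours and personnel costs scale right along with the animal count.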

Oh, don’t worry bench jockeys. According to the NYT article:

Researchers who work with cell cultures are also being encouraged to study cells derived from females as well as males, and to do separate analyses to tease out sex differences at the cellular level.

“Every cell has a sex,” Dr. Clayton said in a telephone interview. “Each cell is either male or female, and that genetic difference results in different biochemical processes within those cells.”

“If you don’t know that and put all of the cells together, you’re missing out, and you may also be misinterpreting your data,” Dr. Clayton added. For example, researchers recently discovered that neurons cultured from males are more susceptible to death from starvation than those from females, because of differences in the ways their cells process nutrients.

“Encouraged”. Okay, maybe you CultureClowns have an escape clause here. Animal model folks are facing “demanded” language.

Final observations are ridiculous:

But [the new policies] are likely to be met with resistance from scientists who fear increased costs and difficulty in performing their experiments. Studying animals of both sexes may potentially double the number required in order to get significant results.

“There’s incredible inertia among people when it comes to change, and the vast majority of people doing biological research are going to think this is a huge inconvenience,” Dr. Zucker said.

Margaret McCarthy, a neuroscientist at University of Maryland School of Medicine who studies sex differences, agreed. “The reactions will range from hostile — ‘You can’t make me do that’ — to ‘Oh, I don’t want to control for the estrous cycle,’” she said.

This has nothing to do with whether a scientist “wants” to or not.

Let me be clear, I want to do sex-differences studies. I am delighted that this will be a new prescription. I agree with the motivating sentiments.

What I “fear” is that grant applications will be kicked in the teeth if they include sex-differences comparisons. What I “fear” is that my research programs will be even less productive on the main area of interest, to the tune of a lot of extra work that will simply confirm a lot of what we already know. For example, female rats tend to self-administer more drug than males do. A lot of my colleagues have been working on these topics for a long time. The identification of those areas where it actually matters (i.e., sex-difference effects that haven’t yet been detected) is going to come along with a lot of negative findings. What I “fear” is that when we are interested in a certain thing for which there is a bit of sex-differences literature, and the hypothesis is going to be “males and females are the same” or even “females are more/less sensitive to drug”, this is going to bring down the holy hells of reviewer wrath over what hypothesis we are testing.

I fear a lot of things about this. What I don’t fear is my own interest in the topic. What I don’t fear is the “inconvenience”. I don’t even fear “difficulty”. It just isn’t that difficult to add female groups to my studies.

What it takes is additional grant funding. Or tolerance on the part of P&T committees, hiring committees and grant review panels for apparently reduced progress on a scientific topic of interest. And those things are not at all easy to come by.

The funny thing is, we’ve been taking steps in the lab toward this direction in the past year anyway. So I should be grateful I have at least that little tiny bit of a head start on this stuff.

The Tweep known as @dr24hours made a comment on grant strategy which I have to admit makes a lot of sense. Get your mental health together by not worrying about yet another grant deadline…you’ve earned it! Take the time to do what your job really is about…looking at data, planning studies, bringing them to fruition. Publishing. Have some science fun and bask in the sunshine*.

In a different age of the NIH grant game, this would have been a fine strategy for your career as well.

It no longer is fine.

We are in an era of boom and bust instability when it comes to NIH funding. It is the very rare flower indeed, in my estimation, that will be completely free of the cycle in the coming decade or two.

As always, my view is quite possibly colored by my experiences. But I have seen the boom and bust cycle play out across a large number of labs. Some of my close acquaintance. Some labs that I know only through the grant review process. Some labs that happen to make it to the scuttlebutt news channel for some reason or other.

It usually plays out like this. “Yeah, Dr. So-and-so is really well funded…..what? What do you mean they are on the ropes? [checks RePORTER]..how in the hell did THAT happen”. ….Two years later “Oh phew, glad to see So-and-so got another grant. ….what? TWO grants? and an R21? how in the hell did THAT happen?”…

Repeat.

There are a couple of ways the current uncertainty amps up the gain on the boom and bust cycle of grant funding.

Getting back to Dr24hours’ assertion in the Twitt, yes, a lot of PIs do their grant submitting in bursts. If you have enough funding, why submit applications? And if you are approaching the end of your current funding, well, you are going to start shooting those applications out on full-auto fire. This makes a lot of sense and I think a lot of us, including YHN, do this by reflex.

If you apply the current grant success odds to more (down cycle) versus fewer (up cycle) applications, well, you can see that a PI is herself contributing to the amplitude of the cycle.
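That feedback loop is easy to caricature in a few lines of Python. This is a toy model, not data: the flat success rate, the five-year award length, and the submit-in-bursts rule are all assumptions chosen only to show how the behavior itself produces oscillating award counts.

```python
import random

random.seed(1)
success_rate = 0.15   # assumed flat per-application success rate
years = 30
expiries = []         # expiry year of each active (hypothetical 5-year) award
history = []          # number of active awards at each year

for year in range(years):
    # drop awards that have run out
    expiries = [e for e in expiries if e > year]
    # burst behavior: submit little when comfortably funded,
    # fire on full-auto when the lab is running dry
    n_apps = 1 if len(expiries) >= 2 else 4
    awards = sum(random.random() < success_rate for _ in range(n_apps))
    expiries.extend([year + 5] * awards)
    history.append(len(expiries))

print(history)  # active-award count per year; note the swings
```

A more elaborate version could also make `success_rate` depend on how funded the PI looks (the reviewer-sympathy effect discussed next), which would amplify the swings further.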

Then we get to the way grant reviewers look at a PI and the application that is in front of them. Yes, Virginia, perceptions of “too much funding” do contaminate the reviewer’s mind. It is inevitable. Just look at all the screaming over at RockTalking and on my posts about how the “real solution” is to limit the amount of funding any one PI can secure. And when a PI appears to be on the ropes, grant-wise, the reviewers are likely to feel….sympathy. I know, I know, this sentiment is thin stuff compared with the negative value of the perception of “too much funding” but it is most certainly a contributing factor.

What this means is that if you take a PI’s grant proposals as more-or-less equally deserving on objective grounds, the ones that are submitted when she has a few grants are less likely to get a fundable score. This accelerates the downfall after an interval of healthy funding. On the other side, when the PI looks like she is at the end of her funding, the sympathy cred will make an application relatively more likely to fund. And since there will be a lengthy interval of time in which the lab looks to be running on fumes, it could be that several applications (perhaps in different study sections) will be pushed over the funding line by sympathy.

Finally, your friendly Program Officer plays a role in this as well. As we’ve seen over the past few years they have been very overt and explicit about “saving” long-running programs that are on the ropes. They will perhaps not admit it, but you know damn well that when it comes to making out-of-order funding decisions, if there is a perception that PI Jones is “healthily funded” and PI Smith is on the skids, the Smith app will get picked up and the Jones app will not. Again, the down cycle is accelerated and the up-cycle can be inflated. In some perfect world where such considerations didn’t matter, the deflation of the labs on the up cycle would be attenuated by a Program pick up about as often as the rescue of labs on the down cycle would be accomplished. This should work against the boom-and-bust (albeit potentially at the risk of increasing lab-death).

As you are well aware, Dear Reader, I pursue a two-pronged approach when it comes to this stuff.

First is always my advice on how the individual PI is best to navigate the system. I see no solution to this phenomenon other than to keep a steady flow of apps, even when you are relatively well funded. Five years elapses VERY rapidly and the confidence that even a very productive project, or one that hits all the expectations laid out in the original application, will be continued is low. Very low. There is still a degree to which the revise-and-resubmit cycle improves your odds. That can take, what, two years from original submission to eventual funding of the A1? Well two years out from the end of your current award, you still look like you are healthily funded. But what can you do? You sure as heck can’t count on sympathy to push your A0 version over the funding line if you wait until the last six months of your current award.

Second, I continue to discuss ways to stabilize funding. It certainly is all the rage with Rockey and friends. (Sally Rockey showed up to a symposium on “Bridging Career Pathways for an Evolving Biomedical Workforce” at Experimental Biology this year, btw.) As always, it is a long slow slog to get the powers that be to even understand how to ask the right questions, never mind how to arrive at the right answers. One of the problems is that official NIHdom never thinks about careers. It took a fair while for Rockey to blog about funding rates by PI (here) as opposed to by grant. And it was still hard for her / her data minions to grasp that the success rates of PIs had to be examined over longer intervals of time, five to ten years. Jeremy Berg has been doing a good job of starting the process of examining the career aspects, see here and here. I think understanding the PI dynamics is critical to achieving some sort of stabilization of the uncertainty of funding that is so paralytic to science right now.

Part of this understanding should be a recognition of the boom-and-bust cycle.

If it exists. My perceptions could be very skewed, I realize. There should be a way for Rockey to get her data miners to characterize periodicity of funding cycles versus relatively sustained funding. To see if the relative proportions of PIs enjoying sustained, relatively invariant levels of funding differ across time. My prediction is, of course, that Rockey would find that there is more variability in a given PI’s funding across time now than there was in the past.
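The analysis I’m suggesting isn’t complicated in principle. As a minimal sketch (assuming hypothetical per-PI yearly funding totals — this is an illustrative metric, not anything NIH actually reports), one could score each PI by the coefficient of variation of their funding across fiscal years and compare the distribution of that score across eras:

```python
from statistics import mean, pstdev

def funding_volatility(yearly_totals):
    """Coefficient of variation of a PI's yearly funding totals.
    Higher values suggest boom-and-bust; lower values suggest
    sustained, relatively invariant support. Illustrative only."""
    m = mean(yearly_totals)
    if m == 0:
        return float("inf")
    return pstdev(yearly_totals) / m

# Two hypothetical PIs over the same six fiscal years (total costs, $k)
steady = [250, 260, 255, 250, 245, 250]
boom_bust = [500, 520, 80, 60, 480, 90]
print(funding_volatility(steady) < funding_volatility(boom_bust))  # True
```

Compute this for every PI in, say, 1990–1999 and again for 2004–2013, and the prediction above becomes testable: the later distribution should be shifted toward higher volatility.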

As far as fixes for the boom / bust cycle go, I think we can’t do much about the PIs’ behavior or the way that study sections bias their reviews on the perception of how much funding a person has at that particular moment. This leaves it in the hands of Program to try to stabilize matters. So far their attention seems mainly focused on saving labs as they are approaching “bust”. This is admirable from many perspectives.

But I suspect that if they focused on arresting the “bust” before it actually happens it would have the same overall effect on the PI population while avoiding so much inefficiency of production that attends a lab plummeting toward the funding abyss.

__
*because Winter is Coming.

This puts it…clearly:

A common misunderstanding among early-career scientists is the thought that their passion is their research focus. A more careful examination reveals that their passion is not so much the subject but rather, the promise of the life that academia might offer.

Painfully so, but clearly.

I’ve had to ask myself several times in my career if I was interested in TopicX more than I was in having a career. And occasionally the question forms itself in the other direction. Am I so focused on maintaining my career that the actual science isn’t any fun anymore*?

At times I have felt as though I would walk away if I couldn’t do TopicX. At other times, working on Topics Y and Z has (apparently) been sufficient*.

I am by no means done asking myself the questions, nor is it the case that I always have any choice, really.

I suggest you read the post…if for nothing else, it may help you to think about how you make decisions about your career.

__
*In truth, perhaps one of the biggest surprises of my career arc is the degree to which I find I am interested in at least something in just about every project. As in: “Yeah, the overall goals here are cool and all but..whoa! What the heck is UP with this thing over here? Wow. Let’s get ON that for a few months…..[two years later]”

This is a fascinating read.

Grantome.com is a project of data scientists who have generated a database of grant funding information. This particular blog post focuses on a longitudinal analysis of some of the most heavily NIH-funded Universities and other research institutions. It shows those which are maintaining stable levels of support, those in decline and those which are grabbing an increased share of the extramural NIH pie.

The following graph was described thusly:

Each histogram bar represents the range in the percentage of grants that has been held between 1986 and 2013. The current, 2013 level is represented by a black vertical line. Finally, arrows inform on the latest trend in how these values are changing, where their length and direction reflect predictions in the level of funding that each institution will have over the next 3 years. These predictions were made from linear extrapolation of the average rate of change that was observed over the last 3 years.
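The prediction method they describe is straightforward to reproduce. A minimal sketch, assuming hypothetical yearly share values (the function name and data are mine, not Grantome’s): take the average year-over-year change across the last three years and extend it linearly three years forward.

```python
def linear_projection(shares, lookback=3, horizon=3):
    """Project a funding share forward by `horizon` years using the
    average year-over-year change observed over the last `lookback`
    years. `shares` is a chronological list of yearly percentages."""
    if len(shares) < lookback + 1:
        raise ValueError("need at least lookback + 1 yearly values")
    avg_rate = (shares[-1] - shares[-1 - lookback]) / lookback
    return shares[-1] + avg_rate * horizon

# Hypothetical institution holding a slowly growing share of NIH grants
history = [2.0, 2.1, 2.2, 2.3]  # percent of all grants, four recent years
print(round(linear_projection(history), 2))  # 2.6
```

Worth noting: a three-year linear extrapolation is exactly the kind of estimate that the boom-and-bust dynamics discussed above would make unreliable for any individual lab, even if it works tolerably at the institutional level.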

This serves as an interesting counterpoint to the discussion of the “real problem” at the NIH, which has typically centered on too many awards per PI, too much funding per PI, the existence of soft-money job categories, the overhead rates enjoyed by certain Universities, etc.

I am particularly fascinated by their searchable database, in which you can play around with the funding histories of various institutions. Searches by fiscal year and funding IC are illuminating, as is toggling between total costs and the number of awards on the graphical display.

This is an overview of a presentation in Symposium 491. Scientists versus Street Chemists: The Toxicity of Designer Marijuana presented Wed, Apr 30, 9:30 AM – 12:00 PM at the 2014 Experimental Biology meeting.

An analytical chemist’s approach to public health problems by J. H. Moran

Jeffrey Moran (PubMed) of the Arkansas Department of Public Health opened with an overview of synthetic cannabinoid products being sold and consumed by individuals seeking a marijuana-like high. The state of Arkansas has established a response to the emergence of synthetic cannabimimetic and stimulant drugs which includes government, academic, clinical and private resources. It is currently called the Center for Drug Detection and Response. He specifically mentioned his own analytic laboratory in the Department of Public Health, academic scientists at the University of Arkansas for Medical Sciences and Cayman Chemicals. The latter company has been essential in preparing analytical standards for their assessment of parent drugs and, in particular, the metabolites that might be found in human samples.

Dr. Moran briefly overviewed the history of the appearance of 3-gram packets of dried plant material selling for $20-$50 each. Rather than being boutique potpourri, such packets are laced with synthetic cannabinoid drugs. They started appearing in the US around 2008 or 2009 and Arkansas identified their first item in 2010.

From 2010 until the present (2014), Moran’s laboratory in the Department of Health has assessed over 4,300 synthetic drug items and 1,823 human samples. From this they have identified 47 different synthetic cannabinoids, 17 designer stimulants and 9 designer hallucinogens. From this diversity, how to triage? How to decide what to focus on? Well, Dr. Moran said that a half dozen cannabinoid compounds (of the 47) amounted to about 80-90% of the samples they’ve analyzed to date. If I had it right, his summary slide of top suspects included JWH-018, AM2201, UR-144, XLR-11, AB-PINACA, AB-FUBINACA and PB-22. (I may have missed a couple). Interestingly, although Dr. Moran mentioned some compounds dropping off the radar following specific DEA Scheduling actions, and new entities arising to replace them, JWH-018 has been making a comeback. This points, in my view, to an important reminder. Despite the fact that the diversity in designer cannabinoids and cathinones appears to be driven by legal status, it is good to remember there are plenty of highly popular recreational drugs which are clearly illegal and have been so for many decades. I would predict a winnowing process whereby a few highly attractive exemplars of the cannabinoid and cathinone classes of drugs remain with us, even when they have been placed under control by the DEA (or act of Congress).

Dr. Moran went on to mention that the appearance of the preparations available on the street is not entirely predictive of their content. That is, you might suspect that anything made of dried plant matter contains cannabinoids whereas tablets, capsules and loose powder or crystalline substances would be the cathinones. Although generally true, a few herbal material samples contained synthetic cathinones and a few “pills and powders” were found to contain synthetic cannabinoids.

Dr. Moran also described his role as a public health official by depicting statewide tracking data. The different streams of information can be synthesized and direct him / his office to the right population. If he sees a lot of activity through poison control lines with no corresponding alerts from law enforcement, then maybe he needs to reach out to local police jurisdictions. Likewise, a flurry of law enforcement activity without any from Emergency Departments may indicate a need to educate health care professionals on what to look for in that community.

One cannot help but walk away from this presentation with an appreciation for two things. First, we are fortunate that the Arkansas folks have taken a lead in generating a wealth of information on the use of cannabimimetic drugs. Second, it is always a pleasure to learn more about how someone with a job mandate that isn’t strictly academic responds to an emerging recreational drug situation like we have been experiencing of late.

__
Dr. Moran disclosed his participation in Pin Point Testing, LLC.