September 28, 2012

It is possible that I am even more motivated to write that grant, finish up that paper or take an experimental run at a project when the competition is someone who is personally a jerk.


September 28, 2012

I recently completed a streak of 32 days in which I got my behind out for a run of at least 1 mi per day.

This followed another streak earlier in the year in which I made it 24 days.

The earlier one, in particular, was sustained through a social media reinforcement meme (#RWRunStreak).

Sustained behavioral change is quite a hurdle for health care, particularly when it comes to exercising regularly, changing food intake and reducing the use of psychoactive substances.

There is a grant application or three in here somewhere.

Analysis II

September 28, 2012

…those who take the listed-second, alleged co-equal contribution author slot are like abused children or battered spouses with Stockholm Syndrome.

It is going to require professional help to bring them back to reality.


September 26, 2012

Junior scientists who have spent many formative years in GlamourMag-pursuing laboratories suffer from the academic equivalent of Stockholm syndrome.

It is really not kind of me to confront their illusions all at one go.

The always perspicacious Biochemme Belle noted that Francis Collins, boss of the NIH, is suggesting that they need to take steps to de-stigmatize the idea of alternate careers. I.e., careers outside of the traditional academic, grant-funded, professorially-appointed track.

At the NIH, we’re in the middle of analysing whether we have the right quality and quantity of training programmes, so people are well prepared for a satisfying and rewarding career. They don’t all have to become tenure-track scientists and clones of their mentors. We need to stop talking about alternative careers as if they are somehow second rate.

I have noted before that the NIH, if you view it as an entity which pays for a service, is making out like a bandit on loopholes to traditional worker protections. Actually, it isn’t all that different from any other white collar, salary style job loophole but it is still a loophole. The work of the NIH is primarily person-work. Lots of people conducting experiments, analyzing them and writing them up into manuscripts which will eventually be published. It is labor.

Correspondingly, the NIH benefits when it can get this labor done for less money than it would otherwise cost. Sounds familiar, doesn't it? Free enterprise, my friends, much beloved of all US politicians.


The way it has done this is to get as much of its labor done by “trainees”. That would be graduate students and postdocs. People who are paid less than what we think of as market rate (as indexed by professorial salaries, scientist salaries in industry and even academic technician salaries) to do the work. The way that these poor suckers are deluded into providing this underpaid service is by sticking a carrot in front of their faces.

The carrot is that of a subsequent job on EasyStreet.

Well, a highly desired job, anyway. Which the trainees are told can only be obtained by working their behinds off (oftentimes at well above standard 40 hr weeks), sacrificing many traditional life goals (like marriage, home ownership, childrearing) and the like. The carrot is tasty, and the working conditions for the trainee are hardly slavery, so the system works.

It should never, ever for one second be missed that the NIH is making out like a bandit from this situation and has every interest in continuing it. Otherwise their money would go nowhere near as far in research productivity. Because their labor force would cost them more if the approximately 75% of PhD students who should really be career techs just started as techs the month after their BS was awarded. It would cost them more if the postdocs who really are best suited for some sort of career staff-scientist, low-level project-directing type of position* likewise started such a full-benefits, COL-adjusted, regular-raises job right after the ink on the PhD dries.

I think Collins’ comment is consistent with trying to keep a good thing going a little longer.

The heat is on about the carrot. People are talking about how few of the donkeys ever get the carrot and how long they keep plodding away in pursuit of an empty goal. If the suckers, er, trainees don't believe in the carrot anymore, the scam, er, scheme, er, unplanned exploitation structure will start to collapse.

“Aha!”, thinks** the beast that is the NIH. “All we need to do is put up a carrot and three turnips and the poor fools will think they have better odds of getting something!”.

updated: also see this.
*Lest this come off as unduly dismissive from my lofty throne: but for a quirk of fate I very likely would have ended up in such a position, and would have been reasonably happy about it.

**no, I don’t think any of this is explicit and Machiavellian.

As everyone enjoys themselves predicting their h-index using this new tool, it returns us to talking about the measurement of science and the bean-counting of citations.

For those who are new: citations of your academic papers are good, the more you have the better, and all of this is well over 90% dependent on factors such as field size and vigor that have essentially nothing to do with the actual quality of your work itself.
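For the bean-counting-curious, the h-index itself is mechanical to compute: it is the largest h such that h of your papers have at least h citations each. A minimal sketch (the citation counts below are made up for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # counts are sorted, so no later paper can
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that the same mechanical simplicity is exactly the problem: the calculation sees only counts, never the field-size and vigor factors discussed above.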


Nevertheless, within some approximation of a related set of peer investigators who publish in roughly the same journals you do….well, the number of citations you get may have something to do with how cool your work is. So there’s that.

My point for today is an excessively narrow one. There are plenty of reasons why citation counts cannot be compared to each other. But I mention to you a reason that will forever be invisible to any bean-counting attempt to quantify your paper quality.

A citation is not a citation.

Sometimes a paper is cited in a fluffy or peripheral way. Mentioned once in the Intro along with four other citations as a general point. Maybe even overgeneralizing and getting it a bit wrong.

Sometimes, a paper is cited in a fundamental, formative way. It is an essential background motivation or concept around which the present work is constructed.

The latter is fantastic and means the paper really had impact.

The former can be little better than a marker for being in the game and communicates very little other than the mere fact that you published a paper. One that popped up on the first page of a PubMed search or something. Or happened to be lazily cascade-cited through a small thread of science.

The bean counting doesn’t give a rat’s patootie about which type of citation your paper received.

Middle income

September 14, 2012

Romney has really done it this time.

Stephanopoulos: Is $100,000 middle income?

Romney: No, middle income is $200,000 to $250,000 and less.

Where the middle is approximately the 96th-98th percentile apparently….

PLoS ONE and stupid CV tricks

September 13, 2012

One of my first posts on the academic CV noted that you should actively manage how it *looks*.

My suggestion is, if you expect to have a career you had better have a good idea of what the standards are. So do the research. Do compare your CV with those of other scientists. What are the minimum criteria for getting a job / grant / promotion / tenure in your area? What are you going to do about it? What can you do about it? Don’t misunderstand me- nobody is going to hand you a job / grant / etc just because you hit the modal publication numbers. But it will be very easy for you to be pushed out of the running if you do NOT hit the expected values. So do what you can to keep your CV as competitive as possible.

Meaning that more is better, yes, when it comes to publications. But beyond that, that you should have some idea of the expectations for your field. Especially when it comes to first-author vs. multi-author collabs, senior author vs penultimate vs communicating author, IF cachet, etc.

My advice was to seek balance and to work actively to fill the holes. A little down in one area, such as productivity? Then slice the sausage a little thinner. Have plenty of pubs but not enough first-author? Get a little more selfish in the lab this year. Etc.

One issue that requires longer-term planning is publication year consistency. I.e., all else equal, it is good to have a steady rate, with publications in each calendar year (the year is a major, very salient part of the citation). Obviously subfields vary and so do journals. You should have some idea of the lag time from acceptance to print publication so that you can predict what calendar year a given submission of yours might hit.

For many of the most rapid of my subfield’s usual journals, if you aren’t submitting by Apr-May there’s no chance for that year. For some, even Mar would be a stretch…and for others, 12-18 mo from accept to print is quite possible.

If you have a steady manuscript submit rate, are deep into your career and are the PI- none of this really matters. You have a steady pipeline going and all is well. For the rest of us however…

Sometimes you want to do what you can to shore up *this* calendar year on the CV.

Since PLoS goes to official pub date quite rapidly after acceptance (none of this pre-publication queuing business) this makes it attractive for submitting late in the year.

In the past, the ISI Web of Science appeared only to include those meeting abstracts that appeared in an actual print journal. That is, some academic societies will publish the text of the abstracts from their annual meeting in a Supplement to their captive journal.

I’ve recently noticed that ISI now seems to include presentation listings from societies which do not make their abstracts available in print anymore. Such as the Society for Neuroscience.

Among other things this will be a pain in the ass for looking at the citation report summary stats because they are included in the "items published" per year stats. Personally I'd like to see a default setting where you can exclude those…as it stands, you have to go and use the checkbox to exclude the abstracts.

Don’t get me wrong, there are reasons to like the inclusion of the meeting presentations. You can do things like look back at how often a person’s meeting stuff turns into a paper, perhaps find a half-remembered abstract and thereby remember who to email for details, etc.

But this brings up the question of whether to cite meeting presentations in your publications (the real ones). Will ISI index them and use them to contribute to your h-index? Then heck yes you should cite them. Is this an encouragement to cite the abstract as a way to regain priority for that paper that you got scooped on? To get credit for a project for which you ultimately didn't appear on the author line?

The possibilities are fascinating….

Some Tweep just gave me syncope.

If you are in graduate school or above, start your Full Monty CV.

Right now.

A paper in the October issue of the Journal of Psychopharmacology will be of interest to my readership. It looks at the consequences of exposure to an exogenous cannabinoid agonist.

Byrnes JJ, Johnson NL, Schenk ME, Byrnes EM. Cannabinoid exposure in adolescent female rats induces transgenerational effects on morphine conditioned place preference in male offspring. J Psychopharmacol October 2012; 26: 1348-1354. First published on April 19, 2012. doi:10.1177/0269881112443745 [PubMed]

In this study the authors exposed 23-day-old (adolescent) female Sprague-Dawley rats to a three-day, twice-per-day regimen of WIN 55,212-2, which is a full agonist at the CB1 receptor. The more familiar exogenous cannabinoid, Δ9-tetrahydrocannabinol (THC), is a partial agonist at the same site. The authors waited until the animals were adult (60 days), bred them and then examined the subsequent male offspring of these mothers. The assay of interest was the Conditioned Place Preference test, which is one common method to assess subjective drug liking in rats and mice.

The idea is to take a chamber which is divided into two or three sections by dividers and doors (in this case it was a three-chamber apparatus). The chambers are differentiated by salient stimuli such as the floor texture or type, wall stripes (horizontal vs vertical), etc. You let the subject explore at will in pre-conditioning baseline studies. Then, you conduct a series of conditioning sessions in which the animal is injected with a drug and then confined in one of the chambers. On other sessions the animal is injected with the drug vehicle only and confined to the other chamber. In this case, there were three active drug and three saline conditioning sessions. Finally, on a later test day the animal is allowed once again to freely explore all of the chambers. The amount of time it spends in each chamber is recorded and the relative preference for the drug-paired chamber over the saline-paired chamber can be expressed, typically as a difference in amount of time, or the percentage of the total time, spent exploring the drug-paired chamber.
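The scoring arithmetic just described is simple enough to sketch. The session times below are hypothetical, not values from the paper:

```python
def cpp_preference(drug_s, saline_s, total_s):
    """Express preference for the drug-paired chamber.

    Returns the difference score (seconds in drug-paired minus
    saline-paired chamber) and the percentage of total session
    time spent in the drug-paired chamber.
    """
    diff = drug_s - saline_s
    pct = 100.0 * drug_s / total_s
    return diff, pct

# Hypothetical test-day times: 420 s in the drug-paired chamber,
# 300 s in the saline-paired chamber, 900 s total session (the
# remainder spent in the neutral center chamber).
diff, pct = cpp_preference(420, 300, 900)
print(diff, round(pct, 1))  # 120 46.7
```

A positive difference score (or a drug-paired percentage above the no-preference baseline) is what gets reported as a conditioned place preference.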

The figure presents Conditioned Place Preference data for the adult male offspring (WIN-F1) of mothers which were exposed to WIN 55,212-2 in adolescence and in the control group (VEH-F1) of adult male offspring of mothers treated twice a day for three days with the drug vehicle. There were three different place conditioning levels with groups of animals from the VEH and WIN treated dams place conditioned (in adulthood) with saline, 1 or 5 mg/kg of morphine. As expected, the chamber preferences of animals “conditioned” with vehicle were indistinguishable, i.e., they spent approximately equal time in each chamber. Animals conditioned with morphine, however, spent more time in the drug-paired chamber than in the vehicle-paired chamber.

Interestingly, there was a group difference which depended on the maternal treatment. The offspring of the WIN treated mothers appeared more sensitive to the rewarding effects of morphine because they expressed a conditioned place preference after 1 mg/kg training, unlike the adult offspring of VEH exposed dams. Although I’m not showing it here, the study also looked at adolescent male offspring and found a similar enhancement of morphine place-preference conditioning in the offspring of WIN exposed dams.

The translational take-away is pretty clear and fairly frightening. It suggests that one of the reasons for familial patterns of substance abuse may not simply be down to genetic legacy but may have something to do with drug exposures of the mother.

Potnia Theron observed that journals which impose limits on the number of citations that can be included in a manuscript are getting it wrong.

I agree, totally ridiculous. If the manuscript is egregiously overcited, the editorial and review process can handle it.

The drawback of such a policy is palpable. It necessarily will prioritize particular papers (Glamour? First?) and obscure others. It hinders the process of citation thread-pulling which is an essential feature of scholarly reading. As such, it will slow the pace of science.

From the CHE:

In the statement, Hauser calls the five years of investigation into his research “a long and painful period.” He also acknowledges making mistakes, but seems to blame his actions on being stretched too thin. “I tried to do too much, teaching courses, running a large lab of students, sitting on several editorial boards, directing the Mind, Brain & Behavior Program at Harvard, conducting multiple research collaborations, and writing for the general public,” he writes.

Utterly ridiculous. This is the J.O.B. of an active, scientific-research-generating, field-leading Professor appointed at Harvard and many other Universities.

It is not part of the job to falsify data.

This comment at Rock Talk…

NIH should stop allowing multi-awarded PIs to double count publications on multiple grants. This leads to inflation in productivity, and is one reason well funded [PIs] can fool study sections into thinking they are more productive than they really are. If a publication is counted on multiple grants, its impact on each should be fractionally decreased so that its total impact is the same as the one publication on a single grant PI. Unless council does this, they will not have an accurate indication of whether the increased funding is actually leading to commensurate increase in productivity.

…has the right of it.
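The commenter's fractional-crediting proposal is easy to operationalize: each paper carries one unit of productivity credit, split evenly across every grant it acknowledges. A minimal sketch, with hypothetical grant labels rather than real NIH award numbers:

```python
def fractional_credit(papers):
    """Split each paper's single unit of credit evenly across
    all grants acknowledged on that paper."""
    credit = {}
    for grants in papers:
        share = 1.0 / len(grants)  # one paper = one unit, divided
        for g in grants:
            credit[g] = credit.get(g, 0.0) + share
    return credit

# Three papers: the first acknowledges one grant; the others spread
# the same unit of credit across two and three awards respectively.
papers = [["R01-AA"], ["R01-AA", "R01-BB"], ["R01-AA", "R01-BB", "R01-CC"]]
print(fractional_credit(papers))
```

Under naive counting, R01-AA would claim three papers; under fractional counting it gets about 1.83, and the total credit across all grants still sums to the three papers actually published.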

It took me about five minutes into reading my very first competing-continuation application when I first sat on a study section to realize that many people are very generous about crediting the grants they happen to be holding on the papers that they submit.

It took me perhaps half a day of study section discussion to figure out why they do this. Because “this amazingly productive researcher” has serious value in the discussion of a grant proposal. And this viewpoint on the part of a reviewer need not be objective in any way. All that is required is the thought that “gee, I see a lot of papers coming out of Dr. Squirrel’s laboratory”. And if ol’ Dr. Squirrel happens to fill up the first page of PubMed hits with papers from the current year and several pages of publications within the past 5 years, then everyone sitting around the table who cares to check will start nodding in agreement.

This is particularly important when it is a competing continuation application. This type of application (asking for another 5 year interval of support for a project that is already underway) has an explicit section for detailing productivity. Nowadays, I think most people are on board with the idea that they need to list specific NIH grant numbers on each paper (some time ago it was reasonably common to just say "NIH support" or "NIMH support" or something). So the progress report list had better be of papers that mention the grant under discussion. So the smart PI is thinking all along about this list and how long she would like it to be. So she cites as many of her grants as possible on each publication.

And nobody checks.

Well, this isn’t strictly accurate. I have heard people try to rein in a comment about “wonderful productivity under this award” with a close analysis of the listed publications. To point out where a publication appearing in print three months after the start of the award couldn’t possibly have been conducted with the support of that particular award. To show where the attribution of a paper to this particular grant, given the other attributed grants, was an overreach of epic scale. To argue that even if two or three grants might have contributed equally to the paper, it was necessary and fair to divide the credit accordingly.

I don’t think I’ve ever seen this actually work. I don’t think I’ve once seen a reviewer who stated “wonderfully productive” fully grasp what a critic was saying and reverse his/her opinion.

What I wonder about is the degree to which overall culture on study section can change with respect to this. (And, per usual, I throw this out to my readers for their respective experiences.)

My thought is that this sort of take on “productivity” is entirely dictated by the grant and seniority status of the reviewer. One-grant noobs are absolutely ENRAGED by this seeming disparity. Established PIs who do exactly this same thing in their own grant management strategy act like they don’t know what the youngsters are talking about.

The question is how the various Councils and POs will view this whenever “productivity” becomes an issue.

And I have to tell you, Dear Reader, my confidence that various Program types understand what is going on with this sort of gaming is not very high.

Additional Reading:
Your Grant in Review: Productivity on Prior Awards
Musing on NIGMS’ grant performance data
Another Look at Measuring the Scientific Output and Impact of NIGMS Grants
Productivity Metrics and Peer Review Scores
Mapping Publications to Grants
Comparing performance of within-payline and “select pay” pickup NIH grants at NIAID


September 1, 2012

Phew. We’re okay. For now.