February 28, 2012

Reader jekka asks:

Are you advocating Least Publishable Unit papers here DM?

My answer?


Naturally this comes with qualifiers. For now, however, I invite you to stretch yourself and

1) Define the Least Publishable Unit concept and manuscript type as cleanly as you can.


2) Explain to me what the cost/problem/drawback is in the PubMed era.

Finally, please assure me that you have never cited a paper you consider an LPU, never allowed* such a turd to shape, motivate or inform your research and for goodness sake never polluted a grant application with any such thing.

*naturally, I have done all these things. Repeatedly.

Oh yes, “again”*.

As mentors and lab heads we should make it emphatically clear to all members of our labs that “co-equal” is only equal in the Animal Farm sense. I.e., not. And to secure the specific understanding that it is a nearly valueless sop.

As reviewers, we should start criticizing the practice with some StockCritique action. I suggest “The co-equal credit is a lie and a sham and serves only to buy off the authors who are not listed first. Please explain in full how the contributions are equal”.

As Associate Editors, ditto. Only in spades and with the full weight of accept/not accept behind us.

As Journals, generically, there should be a required statement signed by all co-equal authors. To the effect that “I understand that despite the footnote about co-equal contribution, this will not be viewed as such by the academic community at large. I recognize that it is not permitted to re-order the author line on my CV or biosketch or website. I have made this decision to accept the author position of my own free will with full understanding of the career consequences.”

If you cannot sign onto this behavior you are admitting you are an exploiting jerk who is fully willing to lie to mentees and/or your fellow trainees about their best career moves and have nothing but your own** selfish interests at heart.

p.s. It is an absolute OUTRAGE that PubMed doesn’t include the symbols. This should be a trivial fix.
*for those who think this is a mere trifle, why does it keep coming up, eh? The websearch hits coming to our older posts on this topic never die down.
**if you are the lab head or listed-first author

A Sermon on this Sunday

February 26, 2012

…and now, Santorum saying this speech makes him “want to throw up”.

I wonder why we’ve fallen so far back into darkness.

A query on mentoring

February 24, 2012

One of the most fundamental roles the mentor plays in the development of a scientist is the introduction to the subfield. Making the trainee known to other scientists who make up the field. Publication is key. Proper crediting during seminars is another. Sending the trainees to meetings and introducing them around to the key players is good too.

As I said, in my view this is fundamental. Inescapable. Science is a human enterprise like any other and therefore interpersonal relationships matter. A lot. Even if they are not supposed to, we are unable to escape our biases related to “knowing” and “not knowing” other people.

My question for you Dear Reader, is whether you were Introduced to a subfield as a trainee. Did your mentor(s) make a specific point to enshroud you in a field? If you are a mentor, do you go out of your way to Introduce your trainees?

(If applicable, feel free to tell me that this is a mark of backwater, BunnyHopper dominated OldBoyzGirlz backslapping subfields and should be rigorously uprooted.)

Commenter Physician Scientist notes on a prior post that an individual scientist under suspicion for several dubious papers has retained his NIH funding.


This grant was RENEWED!!!!

Not quite. Or maybe the timeline is not quite what it seems. This would appear to be the most recent competing award in 2009. The budget is listed as ending in 2010 but then it continues onward under the “7R01” code in the next year (indicating a change of University) until 2014.

So the objection may not be all that direct, if the news of these alleged frauds, misdeeds and/or actual retractions and corrections wasn’t yet known when the grant was reviewed.

However, davebridges continues from my more general query about whether PIs should be viewed as innocent* until proven guilty of fraud.

innocent until proven guilty, but im sure reviewers sure can take into account historical accuracy of a lab. Better to renew a grant of a good lab with an instance sloppy record keeping (if thats the case) than a non-retraction lab whose data is never reproducible

This brings up a related, and more pernicious, issue. In my limited experience, “lab whose data is never reproducible” tends to be the stuff of rumor. Word around the campfire. Suspicion. Widespread far beyond those who might actually have tried seriously to replicate said data.

Correct or false, rarely is it a matter of well-explicated, scientific lack-of-evidence. Which, in itself, would still be problematic. There are many Nobel prizes and other fantastic scientific discoveries with a back story of “nobody believes his data”. At least at first, but that could have continued for years or decades. But if it is only suspicion? even if there are a couple of retracted papers….

Should the grant reviewers bust on an application on this basis? If there was one retracted paper would you refuse to issue a fundable score, even if the application had little to do with the topic of the retracted work?

What if you’ve read some internet clown detailing the “obvious” duplications of figures in papers but they’ve yet to be retracted or corrected? Would you mark the present application from that PI downward?

The flip side is that nobody deserves NIH funding. It is a privilege that is getting rarer all the time, going by the success rates. Seemingly the proposals that make it over the bar are held to the highest standard. As we’ve noted repeatedly, there are LOTS of great applications which are not going to get funded.

So why should we (the system) tolerate even a whiff of impropriety? Why not apply the one-strike and yer out principle?

As you know, we had one major ass retraction in the substance abuse fields in recent memory. Major because of the profile and public interest rather than because it had broad influence on the other scientists. I mean sure, maybe people were trying to replicate and follow up but the retraction came out within a year. Not too much damage was done**.

As far as I can tell Ricaurte kept his grants and kept getting more of them. Never paid any obvious price. Was this right? Should he have been busted out of the business for something over which we still do not know, and will never know, the extent of culpability? Should the reviewers simply have moved on to a less tainted individual?

I don’t know. All I know is what I would do as a reviewer which is to try to be as fair as possible and to rely on my fellow panel members to reach consensus over how retractions or more suspicions should be viewed.

What would you do?

*from what I can tell in the chatter, this Chu case is limited to suspicion and a few retractions so far?

**yes, if you were the one wasting a year of work it sucked.

Stephen Curry has asked a key question in the wake of his post on the Wellcome Trust’s Open Access policy. I was not previously aware that this policy launched an attack on GlamourMag science.

Specifically, the Wellcome Trust:

affirms the principle that it is the intrinsic merit of the work, and not the title of the journal in which an author’s work is published, that should be considered in making funding decisions.

Stephen eventually got around to asking how we scientists could reinforce this principle.

It is my view that merely asserting “there are great papers published in journals with more pedestrian IF ratings” will not be enough. This will require explaining just what is wrong with the enterprise of high-IF journal chasing “science”. Confrontation, if you will.

You may recognize that I have been pursuing this agenda on this blog for some time.

As I noted on the repost for Percy L. Julian, Ph.D., earlier this week, I’m swamped this month. So for Black History Month I’m offering up reposts. Today’s installment features a scientist who authored a paper I had occasion to blog a few weeks ago and my email box reports has just been elected to the Board of Directors for the academic society College on Problems of Drug Dependence. This post originally appeared on the Sb blog Feb 2, 2009.

Associate Professor Carl L. Hart, Ph.D. (PubMed; Department Website; ResearchCrossroads Profile) of the Psychology and Psychiatry Departments of Columbia University conducts research on several drugs of abuse with concentrations on cannabis and methamphetamine. In his studies he uses human subjects to determine many critical aspects of the effects of recreational and abused drugs including acute and lasting toxicities as well as dependence. Dr. Hart is also a contributing member of the New York State Psychiatric Institute Division on Substance Abuse.
In his academic research role, Professor Hart works within the highly respected and very well known Substance Use Research Center of Columbia University where he directs both the Methamphetamine Research Laboratory (Meth R01 Abstract) and the Residential Laboratory. The blurb for this latter will give you a good flavor for the workaday of Dr. Hart’s work:

The residential laboratory, designed for continuous observation of human behavior over extended periods of time, provides a controlled environment with the flexibility to establish a range of behaviors, and the ability to monitor simultaneously many individual and social behavior patterns. This laboratory is equipped with a closed circuit television and audio system encompassing each individual chamber for surveillance and measurement purposes, and to provide continuous monitoring for the participant’s protection. We believe that this relatively naturalistic environment can best meet the challenge of modeling the workplace to predict the interaction between drug use and workplace variables. Because our participants live in our laboratory with minimal outside contact, we are able to evaluate multiple aspects of the effects of drugs on workplace productivity in the same individuals.


Millions Not Served

February 21, 2012

Michael Eisen notes that Cambridge University Press is offering up a new rental access model, namely 24 hr view-only availability for £3.99.

More importantly, the CUP notes that their website logs “millions” of hits to their Abstract pages from visitors who then turn away empty-handed.

I’m no genius but the iTunes experience would seem to provide a simple path for publishers. Drop the price point (for full access, mind you) until the market responds. Maybe that is £3.99, maybe £0.99 or maybe even less. I don’t know but there is very likely a nominal rate that gets those millions who are currently turning away to pay for the article.
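The reasoning here is simple price-elasticity arithmetic. A minimal sketch of it, with the caveat that every demand number below is invented purely for illustration (the real conversion curve is exactly what publishers would need to measure by experimenting downward):

```python
# Hypothetical conversion rates of abstract-page visitors who would pay
# for full access at each price point. All figures are made up for
# illustration; only the visitor count echoes the "millions" above.
visitors = 5_000_000  # abstract hits that currently turn away empty-handed

demand = {
    3.99: 0.001,  # few will pay the 24 hr rental price
    0.99: 0.02,   # an iTunes-style nominal price
    0.25: 0.10,   # a near-impulse price
}

for price, conversion in demand.items():
    revenue = visitors * conversion * price
    print(f"£{price:.2f} -> £{revenue:,.0f} from {visitors * conversion:,.0f} buyers")
```

Under these made-up numbers the £0.25 price point wins outright. The point is not that any particular figure is right, only that a sufficiently low nominal rate can beat a high price paid by almost nobody.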

Or maybe the Netflix model would work better. Again, the cost is going to have to be reasonable. I read a lot of Elsevier content but still the barrier has to be low. £50.00 per year for (real) access to *every* journal? I might even do that just to cover my browsing when I don’t want to wrangle with VPN and proxy servers.

One of the interesting things to arise in this recent round of OpenAccess discussion, in my mind anyway, is the role of science blogging. Especially the Researchblogging.org style which focuses on explicating, you guessed it, research articles. What great advertising for publishers! Free product shilling from a small but generally dedicated class of folks.

Even Ed Yong may not be able to write purty enough to get the casual reader to part with £3.99. But to part with £0.25? Maybe that would be possible.

Sorry folks, I’m swamped lately. Kept meaning to do something for Black History Month that was new but I haven’t managed to get to it. So I’ll repost this from a few years back. It originally appeared on the SB blog Feb 17, 2009.

Percy Lavon Julian, Ph.D. (1899-1975) was a scientist who rose from humble beginnings, was trained and educated in an adverse cultural era and became a highly accomplished synthetic chemist and entrepreneur (Wikipedia; PubMed; ACS bio). From the American Chemical Society biography:

He was born in Montgomery, Alabama, on April 11, 1899, the son of a railway clerk and the grandson of slaves. From the beginning, he did well in school, but there was no public high school for African-Americans in Montgomery. Julian graduated from an all-black normal school inadequately prepared for college. Even so, in the fall of 1916, at the age of 17, he was accepted as a subfreshman at DePauw University in Greencastle, Indiana. This meant that in addition to his regular college courses he took remedial classes at a nearby high school. He also had to work in order to pay his college expenses. Nevertheless, he excelled. Julian was elected to Phi Beta Kappa and graduated with a B.A. degree in 1920 as valedictorian of his class.


Jonah Lehrer has a post up which reviews the now commonly understood wisdom that one Jeremy Lin is a fantastic basketball talent that was overlooked by the system.

Jeremy Lin is a reminder that similar problems almost certainly apply to the NBA, which is why we shouldn’t be so surprised that a benchwarmer cut from multiple teams is quickly becoming a star. There is talent everywhere. We just don’t know how to find it.

As you know, I am of the opinion that our current focus on Glamour Magazine publication success in science is similarly leading us to miss a lot of talent that would perhaps do even better than many selected by the Glamour Gaze.


Despite a recent bobble of a perfectly reasonable question from Comrade PhysioProf by new NIGMS Director Judith Greenberg, NIGMS continues to be our favorite IC on the grant geekery front because they post their funding outcome data.

The latest info is posted here and I’ve taken the liberty of grabbing the first figure. It depicts the competing R01/equivalent applications by priority score that emerged from the initial review and differentiates the ones that they funded versus the ones that they did not. I like these depictions because you can see the rarity of “skips” (those apps which are not funded despite being scored within the range of nearly certain funding) and the way “exceptions” (aka “pickups”) still have some relationship to score. Furthermore, you can maybe look across time and see whether the sharpness of the dropoff in chances of getting picked up as you step up away from the apparent payline (the point under which virtually everything gets funded) has changed. This latter might indicate the degree to which Program is meddling with the initial priority rankings.

The previous director of NIGMS, Jeremy Berg, continues his established interest in transmitting NIH career and grant award data to you, Dear Reader. This is, in essence, a guest post. I received the following email…

Hi DM: I was reading your recent post and remembered that I did some analysis at NIGMS that I presented at our advisory council and on other occasions, but it predated the Feedback Loop.

The study was to look at newly funded assistant professors or equivalent ranks from 2004-2006 (360 individuals) and to examine the times (in years) between when they received their BS/BA degrees, when they received their doctorates, when they started in their assistant professor positions, and when they got their R01s (manually from their biosketches). The results are attached. The median time from BS/BA to funding was 15 years. The average age of award was estimated to be 38 years.

Let me know if you have any questions. Feel free to post this if you want.

Best, Jeremy

I most certainly do want. Take a gander at this, folks (click to embiggen).

Now obviously the 360 individuals are but a tiny slice of the NIH-wide pool of PIs. And the 2004-06 window is just one point along the long trendline that we were just discussing. Still, it reflected the situation in one IC right at the time the NIH was getting exercised over the 42 years-to-R01 graph. So it hits the right note.

As a completely untethered personal opinion, I think graduate studies that last 6.7 years are far too long. If I was running the zoo I’d like to see that median back at 5.0 and be much, much peakier. That long-ish tail extending to 10 years and beyond is ridiculous. The post-doctoral training interval, I have less problem with. Five years doesn’t seem too horrible. Although I suppose that could be shaved back a little bit too.
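As a sanity check, Berg’s medians hang together arithmetically. In this back-of-envelope sketch the age-22 bachelor’s figure is my assumption; the other numbers come from the data discussed above:

```python
# Back-of-envelope reconciliation of the NIGMS medians quoted above.
# The age-22 BS/BA figure is my assumption, not part of Berg's data.
bs_age = 22            # assumed typical age at the bachelor's degree
grad_years = 6.7       # median time in graduate school (from the figure)
postdoc_years = 5.0    # rough postdoctoral interval discussed above
bs_to_award = 15       # median years from BS/BA to first R01 (Berg)

implied_gap = bs_to_award - grad_years - postdoc_years
print(f"implied years from appointment to first award: {implied_gap:.1f}")
print(f"implied age at first award: {bs_age + bs_to_award}")
```

An implied age of 37 sits within a year of Berg’s estimated 38, so the intervals are at least mutually consistent, with roughly three years elapsing between appointment and first award.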

The time to first award after appointment just puts a histogram on the problem that we already know about. Again, were I the Boss of Science, I’d want to get major funding in the hands of good people faster. If you work backwards from the population that eventually won an award, i.e. are “deserving” in some sense, wouldn’t you rather they had the $$ as early in their independent career as possible?

It would be really fascinating to know if the ESI hoopla has shifted this distribution back to the left by any significant amount.

Now thinking about these data some more….wow, take heart o ye of dismal training experience! I was thinking about the prediction for success/failure for a half second until I realized these data only capture those who were eventually successful at landing an R01 from the NIH. Look at those 10-18 (!) year grads. Look at the poor souls stuck for 10-15 years in postdoctoral hell. Sure, they are the exceptions to the distribution and no doubt the successful exceptions to the distribution of folks who got stuck for that length of time in graduate school or postdoctoral “training”. But it was possible for some to succeed at last. Wouldn’t you like to hear their stories? I know I would….

Way back in 2004, a News of the Week bit in Science by Jocelyn Kaiser [PDF] included the now infamous graphic showing the average age of first R01 award from the NIH had risen to 41.9 years of age. This led to a lot of “Jesus-is-coming-look-busy” behavior out of the NIH, as reflected in youth-focused initiatives, including the creation of the Early Stage Investigator definition (no more than 10 years past the PhD award) out of the New Investigator (no prior major/R01-equivalent award) class.

We now have an update from the Rock Talk blog which continues the trend up through the 2011 FY that just ended in October.

My initial reaction was “That’s it????”. After all these measures and attention, all they’ve managed to do is to flatline the trend at 42 years for the past 7 years?

My slightly more considered reaction is “Damn, I hate being right.”

I’ve maintained in both direct and indirect ways on this blog and elsewhere that the NIH approach to fixing the problem of younger investigators faring poorly at the NIH trough is too little and too indirect. The major input to the system has always been the study section. Peer review. Conducted by “peers”. Except, oopsie, the “peers” are diverse in sex, geographical region, job category, ethnicity/race, University/Institution type….but most certainly not diverse where it counts for the young. The NIH chose to correct the bias against younger investigators by using Programmatic Priority decision making to pick up awards out of the order of funding (which doesn’t compromise science, btw). While I am generally in support of the system by which Program staff of NIH ICs decide to fund grants mostly, but not entirely, in the order determined by peer review, I was not a fan of this as the only mechanism to fix the problem of young investigator hosage at the study section level.

As the prior NIH Director, The Great Zerhouni, noted [PDF] in 2007, immediately after implementing the exception funding (aka “pickups”) strategy, study sections started “punishing the young investigators with bad scores.”

This initiative fell smack into the truism that, despite the best attempts (and firmest admonishments) of the SRO that the study section is to rank applications in a smooth distribution and not pay one iota of attention to the “F-word”, people can’t help but benchmark their scoring to the Fund/Don’t Fund dichotomy. Which means they operate at some level on an internal prediction or expectation* about where paylines and/or eventual success rates are going to fall. When the reviewers got wind that special considerations were being extended to ESI or NI applications, they re-calibrated their perception of the payline and scored accordingly. It was not so much that people were being intentionally punitive, they were simply expressing their existing bias under new parameters.

Some may have even thought they were being super clever because now they could score one more Established Investigator application within the perceived payline and still give the ESI application a likely-fundable score, all the while appearing to spread out the scores in his or her pile of assignments. Bonus!

There is another post up at Rock Talk that provides further information on the longitudinal age trends of the NIH investigator pool. It smells to me of more fake learned-helplessness excuse-making along the lines of “what can we do? the demographics are relentless.” Sally Rockey seems particularly fond of this animation of slides they’ve put up in numerous powerpoint files, but I’m unimpressed. The trendlines for PIs over 66 and under 36 are more interesting to me.

Depressing to see, isn’t it? As a comment on that post noted:

http://nexus.od.nih.gov/all/2011/05/13/update-on-myth-busting-number-of-grants-per-investigator/ includes the total number of PIs on grants in 1986 and 2009. In 1986, there were 16,532 PIs of which around 16% were age 36 or younger. In 2009, there were 26,183 PIs of which around 3% were age 36 or younger. That means, despite the large increase in PIs, the raw number of young PIs plummeted from around 2645 to 785. This is depressing.
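The commenter’s arithmetic checks out, by the way; all of the inputs below are taken directly from the quoted comment:

```python
# Recompute the raw counts of young PIs from the quoted totals and shares.
pis_1986, frac_1986 = 16_532, 0.16   # total PIs in 1986, share age 36 or younger
pis_2009, frac_2009 = 26_183, 0.03   # total PIs in 2009, share age 36 or younger

young_1986 = round(pis_1986 * frac_1986)   # ~2645
young_2009 = round(pis_2009 * frac_2009)   # ~785
print(young_1986, young_2009)
```

So even as the total PI pool grew by nearly 60%, the raw count of young PIs fell by roughly 70%.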


“So, smart guy, what is your solution?”, I hear you ask. Well, I’ll tell you.

Number one, fix the diversity on study section issue. Strive for representation of those who are as-yet-unfunded that approximates their application numbers. The competition of biases is the only tried and true solution.

Two, continue the pickups. Do so in a balancing way that does not merely focus on “Young versus EveryoneElse”. Get serious about re-balancing the aging juggernaut of Baby Boomers against the two scientific generations that slot in just below them**.

Three, ditch the current definition of the ESI as being 10 years out from PhD. This was arbitrary, unfair and asinine from the beginning. It should have been tagged to the first professorial level appointment and/or the first time proposing applications to the NIH as a PI (any Research mechanism, not just R01).

Four, go after study section behavior. Hard. Train and educate. Provide measures of treatment of young versus established investigators, break it down by age if necessary. Give feedback and hold the section’s feet to the fire to do better. Expect SROs to ditch reviewers who just can’t get on board. Yes, they know. They can do it.

Five, snap the Universities into line. This is hard politically but duck soup in a practical sense. NIH awards funding to whom it chooses, as we’ve already discussed above, re: pickups. There is no “deserve”. There is no “right” to funding. There is ONLY what serves the interests of the NIH. So they are perfectly well within their rights to say things like “Yeaaah, we’re not going to fund any Training Grants at an institution that has a crappy record*** on hard money hires for Assistant Professors”. Or “No supplements for you!”. Or “We’re not going to use our discretionary exception funding for any of your**** applications until you get better.” Or even “Dudes, we’re totally serious. No new awards at all until you show us more Assistant Professor hires, better age distribution and yes we will be examining the deals just like we do for the R00 phase of the K99/R00 mechanism to make sure you aren’t pulling shenanigans”.

In my view, we will know the NIH is not really serious about these demographic trends until and unless they take a stab at item Five.
*A lack of calibration on this issue can lead to some sad results. Such as when those who are new to the section in question (or perhaps NIH review in general) lament at the dinner afterwards that they thought they had recommended “a good score” for some particular proposal.

**Self-serving alert!

***Define criteria however you like. Come up with something that you think will address the goal. This is not the point; the point is for NIH to actually deploy its considerable power of the purse.

****I know you didn’t forget that the applicant and the awardee for the NIH grant is the University and not the designated PI, Dear Reader. Right?


February 12, 2012

Field says it best

Girlfriend had some serious pipes.

Yes, she certainly did. Thank you for all the music, Whitney. RIP.

Do you cite/list the statistics package you use for analysis down to the version in every paper?

The reason I ask can be found here.