9 measly bucks per hour

February 13, 2013

$9

40 hr week (thank you liberal progressive commie America haters….almost 100 years on and America is still not destroyed)

52 weeks per year (yes I know but those should be 2 weeks of paid vacation, dammit)

$18,720

that’s pre-tax.

The current US Federal minimum wage of $7.25 per hour gets you $15,080 annually. Pre-tax. And let’s face it, without vacation time.
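The arithmetic, for anyone who wants to check it (a trivial sketch; the wage figures are the ones quoted above):

```python
def annual_pretax(hourly_wage, hours_per_week=40, weeks_per_year=52):
    """Gross annual pay: no taxes, no unpaid time off."""
    return hourly_wage * hours_per_week * weeks_per_year

print(annual_pretax(9.00))   # 18720.0
print(annual_pretax(7.25))   # 15080.0
```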

A new paper from the Fantegrossi laboratory examines the behavioral and physiological effects of the substituted cathinone drug, and “bath salts” constituent, 3,4-methylenedioxypyrovalerone (MDPV) [ Search PubMed ], the compound that has dominated US media reports of adverse consequences of bath salts intoxication. To the extent that verification of the drug has been provided in such reports, of course. Additional confirmation can be found here, here.

The current issue of Neuropsychopharmacology has a bath salts image on the cover and contains an article from Baumann and colleagues on MDPV pharmacology (I discussed it here) and this paper from Fantegrossi and colleagues.

William E Fantegrossi, Brenda M Gannon, Sarah M Zimmerman and Kenner C Rice In vivo Effects of Abused ‘Bath Salt’ Constituent 3,4-methylenedioxypyrovalerone (MDPV) in Mice: Drug Discrimination, Thermoregulation, and Locomotor Activity. Neuropsychopharmacology (2013) 38, 563–573; doi:10.1038/npp.2012.233; published online 5 December 2012 [ ArticleLink(free); PDF ]

This is a behavioral pharmacology study in male NIH Swiss mice which first uses drug discrimination techniques to show that when mice are trained to discriminate 0.3 mg/kg i.p. MDPV from saline the subsequent dose response curves for 0.01 to 0.3 mg/kg of MDPV, METH and MDMA are nearly identical. This article has been made freely available so I won’t belabor this part of the study.

What I wanted to focus on was the radiotelemetry studies of body temperature and locomotion. For reasons related to this classic paper on MDMA from Malberg and Seiden, most investigations of the effects of stimulant drugs in rodents should include some consideration of the role of ambient temperature. Fantegrossi and colleagues examined the effects of 0.3-30 mg/kg i.p. MDPV at both 20°C and 28°C. They showed, first of all, that MDPV produces no change in body temperature when administered at 20°C, but induces temperature elevations in a dose-dependent manner when animals are evaluated at 28°C. Even more interesting is what is shown in Figure 4 which I’ve included here. You can see that the locomotor stimulant effect (total activity counts over 6 hrs; left panel) of MDPV also is more pronounced at the higher ambient temperature with a peak differential observed after the 10 mg/kg i.p. dose (timecourse for this dose shown in right panel). There were also some other interesting phenomenological differences observed with the high ambient temperature condition.

At the highest tested dose of MDPV (30 mg/kg), significant focused stereotypy was observed at 28°C, but not at 20°C. Furthermore, four (of six) mice treated with 30 mg/kg MDPV at the high ambient temperature engaged in skin-picking and self-biting, which drew blood, and, in accordance with our IACUC approval, were removed from the study and euthanized. No signs of self-injurious behavior were observed at any dose of MDPV administered at 20°C.

Repetitive, stereotyped behavior is common with locomotor stimulants and can be observed following high doses of amphetamine, methamphetamine and cocaine, among other compounds. So this is probably an expected effect. What was interesting here was the dependency on ambient temperature. Off the top of my head, I can’t remember either the stimulant drug stereotypy literature (which focuses on characterizing the repetitive behaviors) or the locomotor studies (where the “inverted U” dose-effect function often reflects the emergence of stereotyped behavior after high doses) focusing too heavily on the ambient temperature issue. No doubt I could stand to go back and review some papers with a closer eye on ambient temperature.

This study, however, points a finger at environmental factors when trying to figure out the degree to which the drug MDPV might cause sensational, media-friendly outcomes in some users. Studies such as the present one may indicate that factors as subtle as how hot it is on the day a person takes a given drug can change the experience from relatively benign into something much more severe. Thus, a dose of a drug which has been taken before by the same user may have highly unpredictable effects based just on this one difference in the situation.

ADDITIONAL READING

Watterson et al 2012 demonstrated intravenous self-administration in rats.
Huang et al 2012 showed locomotor effects of MDPV on activity wheels in rats.
Fuwa et al 2007 showed dopamine responses with microdialysis and locomotor effects [in Japanese, but the Abstract is in English and the figures are easily interpreted].
Meltzer et al 2006 presented monoamine pharmacology on a series of pyrovalerone compounds.

__
Fantegrossi WE, Gannon BM, Zimmerman SM, & Rice KC (2012). In vivo Effects of Abused ‘Bath Salt’ Constituent 3,4-methylenedioxypyrovalerone (MDPV) in Mice: Drug Discrimination, Thermoregulation, and Locomotor Activity. Neuropsychopharmacology : official publication of the American College of Neuropsychopharmacology PMID: 23212455

NIH Notice NOT-OD-13-039 was just published, detailing the many data-faking offenses of one Bryan Doreian. There are 7 falsifications listed, which involve a number of different techniques but mostly fall into falsely describing the number of samples/repetitions that were performed (4 charges) and altering the numeric values obtained to reach a desired result (3 charges). The scientific works affected included:

Doreian, B.W. “Molecular Regulation of the Exocytic Mode in Adrenal Chromaffin Cells.” Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, August 2009; hereafter referred to as the “Dissertation.”

Doreian, B.W., Fulop, T.G., Meklemburg, R.L., Smith, C.B. “Cortical F-actin, the exocytic mode, and neuropeptide release in mouse chromaffin cells is regulated by myristoylated alanine-rich C-kinase substrate and myosin II.” Mol Biol Cell. 20(13):3142-54, 2009 Jul; hereafter referred to as the “Mol Biol Cell paper.”

Doreian, B.W., Rosenjack, J., Galle, P.S., Hansen, M.B., Cathcart, M.K., Silverstein, R.L., McCormick, T.S., Cooper, K.D., Lu, K.Q. “Hyper-inflammation and tissue destruction mediated by PPAR-γ activation of macrophages in IL-6 deficiency.” Manuscript prepared for submission to Nature Medicine; hereafter referred to as the “Nature Medicine manuscript.”

The ORI notice indicates that Doreian will request that the paper be retracted.

There were a couple of interesting points about this case. First, that Doreian has been found to have falsified information in his dissertation, i.e., the body of work that makes up the major justification for awarding him a PhD. From the charge list, it appears that the first 4 items were included both in the Mol Biol Cell paper and in his Dissertation. I will be very interested to see if Case Western Reserve University decides to revoke his doctorate. I tend to think that this is the right thing to do. If it were my Department, this kind of thing would make me highly motivated to seek a revocation.

Second, this dissertation was apparently given an award by the Case Western Reserve University School of Medicine:

The Doctoral Excellence Award in Biomedical Sciences is established to recognize exceptional research and scholarship in PhD programs at the School of Medicine. Nominees’ work should represent highly original work that is an unusually significant contribution to the field. A maximum of one student per PhD program will be selected, but a program might not have a student selected in a particular year. The Graduate Program Directors chosen by the Office of Graduate Education will review the nominations and select recipients of each Award.
Eligibility

Open to graduating PhD students in Biochemistry, Bioethics, Biomedical Engineering, Epidemiology and Biostatistics, Genetics, Molecular Medicine, Neurosciences, Nutrition, Pathology, Pharmacology, Physiology and Biophysics, and Systems Bio and Bioinformatics.

This sidebar indicates the 2010 winners are:

Biochemistry: Matthew Lalonde
Biomedical Engineering: Jeffrey Beamish
Epidemiology and Biostatistics: Johnie Rose
Neurosciences: Phillip Larimer
Nutrition: Charlie Huang
Pathology: Joshua Rosenblum
Pharmacology: Philip Kiser
Physiology and Biophysics: Bryan Doreian

Now obviously with such an award it is not a given that Mr. Doreian’s data faking prevented another deserving individual from gaining this recognition and CV item. It may be that there were no suitable alternatives from his Department that year; certainly his Department did not have a winner in 2011. It may also be the case that his apparent excellence had no impact on the selection of other folks from other Departments…or maybe he did set a certain level that prevented other folks from gaining an award that year. Hard to say. This is unlike the zero-sum nature of the NIH Grant game, in which it is overwhelmingly the case that if a faker gets an award, this prevents another award being made to the next grant on the list.

But still, this has the potential for the same problem with only discovering the fakes post-hoc. The damage to the honest scientist has already been done. There is another doctoral student who suffered at the hands of this fellow’s cheating. This is even before we get to the more amorphous effect of “raising the bar” for student performance in the department.

Now fear not, it does appear that this scientific fraudster has left science.

Interestingly, he appears to be engaging in a little bit of that Web presence massaging that we discussed in the case of alcohol research fraudster Michael Miller, Ph.D., late of SUNY Upstate. This new data-faking fraudster, Bryan Doreian, has set up a “brandyourself” page.

“Our goal is to make it as easy as possible to help anyone improve their own search results and online reputation.”

Why should Mr. Doreian need such a thing? Because he’s pursuing a new career in tutoring for patent bar exams. Hilariously, it has this tagline:

My name is Bryan and I am responsible for the operations, management and oversight of all projects here at WYSEBRIDGE. Apart from that some people say I am pretty good at data analysis and organization.

This echoes something on the “brandyourself” page:

Bryan has spent years in bio- and medical- research, sharpening his knack for data analysis and analytical abilities while obtaining a PhD in Biophysics.

Well, the NIH ORI “says” that he is pretty good at, and/or has sharpened his knack for, faking data analysis. So I wonder who those “some people” might be at this point? His parents?

His “About” page also says:

Doctoral Studies

In 2005, I moved to Cleveland, OH to begin my doctoral studies in Cellular and Molecular Biophysics. As typical for a doctoral student, many hours were spent studying, investigating, pondering, researching, the outer fringes of information in order to attempt to make sense of what was being observed. 5 years later, I moved further on into medical research. After 2+ years and the conclusion of that phase of research, I turned my sights onto the Patent Bar Exam.

At this point you probably just want to take this down, my friend. A little free advice: you don’t want people coming to your new business looking into the sordid history of your scientific career as a fraudster, do you?

On scientific fraud

February 11, 2013

Did you ever notice how small time bank or convenience store thieves only get caught after about the tenth heist using the exact same disguise and MO?

Read ORI findings and retraction notices with this in mind.

[H/t: a couple of peeps who made me think of this recently. You know who you are. ]

Repost: Study Section, Act I

February 11, 2013

I think it has been some time since I last reposted this. This originally appeared Jun 11, 2008.


Time: February, June or October
Setting: The Washington Triangle National Hotel, Washington DC

    Dramatis Personæ:

  • Assistant Professor Yun Gun (ad hoc)
  • Associate Professor Rap I.D. Squirrel (standing member)
  • Professor H. Ed Badger (standing member, second term)
  • Dr. Cat Herder (Scientific Review Officer)
  • The Chorus (assorted members of the Panel)
  • Lurkers (various Program Officers, off in the shadows)

Read the rest of this entry »

Occasionally during the review of careers or grant applications you will see dismissive comments on the journals in which someone has published their work. This is not news to you. Terms like “low-impact journals” are wonderfully imprecise and yet deliciously mean. Yes, it reflects the fact that the reviewer himself couldn’t be bothered to actually review the science IN those papers, nor to acquaint himself with the notorious skew of real-world impact that exists within and across journals.

More hilarious to me is the use of the word “tier”. As in “The work from the prior interval of support was mostly published in second tier journals…“.

It is almost always second tier that is used.

But this is never correct in my experience.

If we’re talking Impact Factor (and these people are, believe it) then there is a “first” tier of journals populated by Cell, Nature and Science.

In the Neurosciences, the next tier is a place (IF in the teens) in which Nature Neuroscience and Neuron dominate. No question. THIS is the “second tier”.

A jump down to the IF 12 or so of PNAS most definitely represents a different “tier” if you are going to talk about meaningful differences/similarities in IF.

Then we step down to the circa IF 7-8 range populated by J Neuroscience, Neuropsychopharmacology and Biological Psychiatry. Demonstrably fourth tier.

So for the most part when people are talking about “second tier journals” they are probably down at the FIFTH tier (IF 4-6), in my estimation.

I also argue that the run-of-the-mill society-level journals extend below this fifth tier to a “rest of the pack” zone in which there is a meaningful perception difference from the fifth tier. So…. six tiers.

Then we have the paper-bagger dump journals. Demonstrably a seventh tier. (And seven is such a nice number isn’t it?)
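If you wanted to take the scheme above literally, it amounts to a simple lookup. The IF cutoffs below are my own illustrative guesses at where the boundaries fall, not anything official:

```python
def journal_tier(impact_factor):
    """Map a journal Impact Factor onto the seven-tier scheme sketched above.

    Boundary values are illustrative guesses, not official cutoffs.
    """
    if impact_factor >= 25:   # Cell, Nature, Science
        return 1
    if impact_factor >= 13:   # IF in the teens: Nature Neuroscience, Neuron
        return 2
    if impact_factor >= 10:   # circa IF 12: PNAS
        return 3
    if impact_factor >= 7:    # IF 7-8: J Neuroscience, Neuropsychopharmacology
        return 4
    if impact_factor >= 4:    # IF 4-6: the usual target of the "second tier" sneer
        return 5
    if impact_factor >= 2:    # rest-of-the-pack society journals
        return 6
    return 7                  # paper-bagger dump journals

print(journal_tier(12))  # 3
print(journal_tier(5))   # 5
```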

So there you have it. If you* are going to use “tier” to sneer at the journals in which someone publishes, for goodness sake do it right, will ya?

___
*Of course it is people** who publish frequently in the third and fourth tier and only rarely in second tier, that use “second tier journal” to refer to what is in the fifth or sixth tier of IFs. Always.

**For those rare few that publish extensively in the first tier, hey, you feel free to describe all the rest as “second tier”. Go nuts.

…“what a drag”.

February 11, 2013

For my lovely persecution complex commenters….

NIH Sekrits

February 11, 2013

It is a little known #truefact of the NIH that every 500 logins or refreshes on your eRA Commons account improves your eagerly anticipated grant score by 1 percentile point.

I keep mulling over the data presented in this entry at the Rock Talk blog. I originally concluded that this, combined with the revelation that applications-per-PI only went from 1.2 to 1.5 across the FY98-FY11 interval, showed that the RealProblemTM at the NIH was the growth in the number of Principal Investigator mouths at the trough.
As a reminder, these are data for the investigator-initiated Research Project Grants only and exclude the ARRA largesse. The graph shows change data from the baseline of Fiscal Year 1998. As a brief summary of my prior thoughts:

the post indicates an increase from an average of 1.2 applications submitted per investigator to an average of 1.5 per investigator from 1998 to 2011…This surprised many of us on the Twitters. I don’t think I know of any active scientists who are submitting less than several NIH grant applications per year…if we harken back to some data on..the Rock Talk blog (maybe; UPDATED, it was RockTalk) which showed the average NIH PI had only about 2 grants concurrently then we must consider that there are still a LOT of folks out there on a single grant at a time. Especially if they have long-term continuations going, sure, maybe there are a lot of PIs who only have to submit an application once every 5 years. The post also indicates that there were 19,000 applicants in 1998 and this grew to 32,000 in 2011. Some 13,000 new mouths, a 68% increase in PIs seeking money from the NIH.

I’ve added emphasis to highlight what has been bothering me.

The notion that we have 68% more PIs seeking money from the NIH should have been more of a warning. The thing is, it dovetails nicely with one of the very truthy memes that we have going about the effects of the NIH Doubling interval. More people in the system made a certain sense. Particularly for those of us who were entering the system approximately during the doubling interval and did not feel as though it was easy to get a grant funded. Certainly, success rates did not double. In the historical sense the success rates were only moderately restored from a slide that ran from the late 80s (40% for experienced applicants. Think about that.) to about 1994 (25% for experienced applicants). So if the budget was doubled and success rates were far from doubled, there must be more people seeking funding. Right?

What never seemed possible to me was that traditional research-heavy Universities, who were already deep into NIH-addiction, were throwing up that many new jobs. Sure, they expanded their soft-money faculty positions a little bit…and let the occasional word-salad-position Assistant Adjunct Research Project Professor of Bunny Hopping upjumped postdoc submit a grant or two. But it didn’t seem likely to me that this explained the budget/payline disparity. Nor, in context of Rockey’s data, did a 68% increase in PIs at such places seem likely. So I was always asserting that the growth came in large part from the entry of new institutions into the system. In the sense that smaller, less research intensive Universities were, perhaps, putting on a big push to get in the NIH game. Perhaps this was by hiring new NIH-honcho faculty. Perhaps by pushing hard from the deanlet level to get the existing faculty to submit more grants, bring in more NIH moola. This latter hypothesis was fueled by rumor of this kind of behavior from some of my colleagues and friends so I was primed to believe it.

My new realization of the week is that the data from Rock Talking are misleading. The denominator for the grants-per-PI is calculated on a per-FiscalYear basis. It has to be, even though they don’t say this. So you only get counted if you’ve submitted at least one grant in the FY. Similarly, the growth in the number of PIs from the 1998 baseline is likewise a reflection of the number of PIs submitting at least one competing grant application in a given FiscalYear. Again, they don’t specify. I was perhaps assuming that this reflected the number of PIs in the system, i.e. submitting competing or noncompeting applications. In some senses, we also have to keep in mind the number of occasional applicants to the NIH…hard to believe from my perspective but sure, why not consider that there is a pool of PIs who may have repeated, but not continuous funding from the NIH across their careers?

Keep in mind that I’m eventually getting around to the consideration of the massive decrement in the purchasing power of the standard, $250K direct cost, full modular award.

As you can see, a full modular $250,000 year in 2011 has 69% of the purchasing power of that same award in 2001.
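That 69% figure implies annual erosion compounding over the decade; a back-of-the-envelope sketch (the 3.8% rate is my own guess chosen to roughly reproduce the quoted number, not an official BRDPI figure):

```python
def purchasing_power(years, annual_inflation=0.038):
    """Fraction of original purchasing power left after `years` of flat budgets."""
    return 1.0 / (1.0 + annual_inflation) ** years

# A flat $250K full-modular award, 2001 -> 2011:
print(round(purchasing_power(10), 2))  # ~0.69
```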

We’ll return to this.

Let us start with consideration of what appears to be, going by disgruntleprof comments on various blogs and opinion pieces, the shining virtue of the NIH system…the one-R01 small town grocer. This PI submitted a grant application once every five years to continue her R01…in the old days. So on average this person would be submitting 0.2 grants per FY but in the Rockey analysis would only count as 1 grant-submitted…every five years. Over time, however, she is now facing a decreased probability of getting funded the first time and, let us say, submits an application three times (the A2 scenario, not unlikely at all by the end of the doubling), a year apart, during her 5 year window. Her Rockey number is still 1 application per year but her 5 year average has increased to 0.6.

Similarly, since we’re dealing with the one-grant scenario, her appearance in the number-of-mouths data is likewise affected by the frequency of submissions. Taking the 3-tries case again, if she only had to apply twice every 10 years in the past but is now applying 6 times to maintain funding, she has tripled her presence in the Rockey way of looking at the number of applicants. If we’re talking about an overall 68% change over time…this kind of behavioral change is significant if it occurs in any appreciable part of the PI distribution. It makes it look as though additional PIs have been added to the system when, in fact, none have.
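The counting artifact described above is easy to demonstrate with toy submission records (hypothetical data, obviously, not anything from the NIH files):

```python
# One (pi, fiscal_year) tuple per submitted application.
old_days = [("grocer", 2000)]                                       # one try per 5-year cycle
a2_era   = [("grocer", 2008), ("grocer", 2009), ("grocer", 2010)]   # A0, A1, A2 tries

def avg_apps_per_year(records, window_years=5):
    """Average submissions per year over the whole multi-year window."""
    return len(records) / window_years

def distinct_pis(records):
    """How many actual PIs are in the system across the window."""
    return len({pi for pi, _ in records})

print(avg_apps_per_year(old_days))  # 0.2
print(avg_apps_per_year(a2_era))    # 0.6 -- same PI, tripled presence
print(distinct_pis(a2_era))         # 1  -- no new mouths actually added
```

A per-FY tally, as in the Rock Talk figures, would count this one PI once in each of three fiscal years; the multi-year window shows it is still a single mouth at the trough.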

Getting back up to my original thoughts on where the RealProblemTM lies, however, this is all critical. Is the NIH in fact supporting 68% more investigators in 2011 vs 1998? This is what Sally Rockey’s post would imply. It certainly implied this to me. However, it may simply reflect the same number of overall PIs in the NIH-funded extramural workforce who simply have to submit more grant applications to maintain the same number of grants.

Which brings me to my next point. Note that I said “same number of grants” but not “same amount of funding”. Because it is also clear that over this self-same interval when SmallTownGrocerPI was forced to submit applications more frequently to sustain her funding, she was also forced to try to get more awards simply to maintain the same level of operation. Because the purchasing power of the grant dollars had fallen by so much and yet the full-modular cap still imposed a de facto limit on budget escalation. Now true, the “myth-busting” data from Rockey show only a 4-5% shift in 1-grant to 2-grant PIs from FY1986 to FY2004 when the doubling was rolling hard. This is where the simple case we are discussing really breaks down. Obviously there are many varieties and mixtures of PIs in terms of the number of applications submitted, the stable-versus-growth aspirations, the amount of NIH funding that represents stable state, the mixture of R01 and “other” funding, etc.

So obviously it would be a complex modeling job in the NIH databases to get the best understanding.

But it strikes me that one of the simplest and most productive things for the NIH (read: Sally Rockey’s data mining minions) to do would be to take a closer look at the number of PIs applying instead of the number of applications. The number of PIs over an extended window of time, not just on a per-FY basis.

The reason this is important to know is that the success of any proposed fix to the NIH depends on this reality. If there has genuinely been an increase in the number of PIs, then shelling some of them out of the system permanently (including by preventing entry) is the only way to have a sustained effect. Within that category, it may be necessary to see if the growth in PIs has come from the top research Universities or from increases in the lower-tier Universities.

If the main trouble is the uncertainty of maintaining one award, then the solutions are to extend the interval of non-competing and/or give a much larger payline break to competing continuations versus new applications.

If the trouble is that the purchasing power of the full-modular has decreased, then boost the limit to $375K per year in direct costs. [ETA: per comment from Grumble, note that the purchasing power has also been eroded by habitual budget reductions upon funding. Some ICs cut a whole year. Some have made 1-2 module ($25K per module) reductions the SOP. Some hit even non-competing renewals with additional reductions because of budgetary uncertainty. They do this to artificially prop up the success rates. Take one module from 9 awards and you can fund 10.]
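The module-trimming arithmetic in that last parenthetical works out exactly (a sketch using the modular figures quoted in the post):

```python
MODULE = 25_000        # one module, annual direct costs
FULL_AWARD = 250_000   # full-modular annual direct costs

budget = 9 * FULL_AWARD            # enough money to fund 9 full awards
trimmed = FULL_AWARD - MODULE      # each award cut by one module
n_fundable = budget // trimmed

print(n_fundable)  # 10 -- take one module from 9 awards and you can fund 10
```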

It is incredibly frustrating for those of us who watch from the outside since these data are clearly available within the NIH databases and they simply seem to be looking* in the wrong direction.

__
*I realize that Sally Rockey may have a ton of analyses that she simply has not put up on the blog. Somehow, given her little oopsie with the alternative career fate of trainees, I doubt it.

Jean Lud Cadet, M.D. [ PubMed, GoogleScholar, DepartmentalPage ] is the Chief of the Molecular Neuropsychiatry Research Branch in the Intramural Research Program at the National Institute on Drug Abuse. Within this branch he heads the Molecular Neuropsychiatry Section, which has maintained major interests in dissecting the toxic effects of methamphetamine, cocaine and MDMA on the brain using rodent models. He has a recent review article, Epigenetics of Methamphetamine-Induced Changes in Glutamate Function, that you might find of interest.


According to an interview with the American Society for Biochemistry and Molecular Biology Dr Cadet received his MD degree from Columbia University and completed residencies in Psychiatry at Columbia University and in Neurology at Mount Sinai Medical Center. Dr. Cadet indicates in the interview that it was chance notice of an announcement for a fellowship in Pharmacology at the NIMH IRP (which he secured and spent time as a Neuropsychiatry Fellow) that cemented his interest in research. Going by the PubMed record, it was during this time that Dr. Cadet became interested in movement disorder related to dopamine disruptions which foreshadowed his eventual interest in damage to dopaminergic functions caused by stimulant drugs. After the Fellowship, Dr. Cadet became Assistant Professor of Neurology and Psychiatry at Columbia University and then subsequently moved to the NIDA IRP in 1992.

Dr. Cadet is also the Associate Director for Diversity and Outreach within the NIDA IRP and, per an interview with the ASBMB, Dr. Cadet states:

As the Associate Director for Diversity and Outreach, my greatest passion is the recruitment of young scientists from under-represented populations into various NIH programs. I have been in charge of recruiting summer students into the NIDA-IRP since 1995. I am also the chair of the Diversity and Outreach Committee (DOC) that is actively recruiting young scientists from under-represented groups. This committee has recently reached out to Patterson High School, a neighborhood high school. Two Patterson junior students are now serving internships in basic science laboratories at the NIDA-IRP. Using funds that were recently provided by the Scientific Director of NIDA-IRP, the DOC has also established a competitive application process that has helped to recruit 6 post-baccalaureate and/or post-doctoral fellows within the NIDA-IRP. I am relentless in my pursuit of Diversity within the NIDA-IRP and my activities together with those of DOC members are helping our intramural program to serve as a beacon to be followed by others.

I thank Dr. Cadet both for furthering our understanding of the ways in which exposure to stimulant drugs of abuse can disrupt the brain and for his efforts to extend opportunities within science to those of underrepresented racial or ethnic backgrounds.

__
Post-baccalaureate program at NIDA IRP

Prior entries in this series overview the contributions of Yasmin Hurd, Carl Hart, Chana Akins and Percy Julian.

via Bashir-

A new paper has been published that purports to refute the conclusion of the Ginther report (also see this, this, this, this) that there exists substantial bias in the awarding of NIH grants to white versus black PIs.

Jiansheng Yang, Michael W. Vannier, Fang Wang, Yan Deng, Fengrong Ou, James Bennett, Yang Liud, Ge Wang A bibliometric analysis of academic publication and NIH funding Journal of Informetrics 7 (2013) 318– 324 [ journal link ]

My biggest concern here has to do with the sampling…otherwise I guess we should view it as data that contributes to the overall picture. Much as Ginther et al drew a host of “oh it must really be…” alternative explanations, so should this.

The authors targeted 92 medical schools (1) and selected 31 odd-number-rank schools (2). They identified white and African American faculty members (from, ah, web page pictures and, um “names”. also “resumes as needed”.(3)) They then did a 1:2 pairing of black with white faculty in the same discipline, with the same degree and within the same medical school (4), same sex and title/academic rank.

So. They were able to identify 130 black professors, of which only 14 were funded by the NIH from 2008 to 2011 (5). Two were excluded because they couldn’t find matching white faculty and one for failing to have any SCI/Web of Science presence (this was used to generate h-index, citations, etc).

Eleven. Eleven faculty members (out of 130), plus an additional 22 matched white faculty, comprise the sample for the correlation of scientific productivity with grant award. Kinda thin.

They took the rankings of the medical schools from US News and World Report and divided the institutions into thirds (“Tiers”). Ten of the grant-sample pairs came from the top third of medical schools and one from the second tier (6).

In Table 2 the paper lists the mean (7) papers, citations and a couple of productivity indices they made up (8). Black investigators had fewer papers (but not significantly different), significantly fewer citations (9) and significantly lower Pc-index.

Second, the productivity measure in terms of peers’ citations, or the Pc-index, is the sum of the numbers of citations to one’s papers weighted by his/her a-indices respectively. While the Pr-index is useful for immediate productivity measurement, the Pc-index is retrospective and generally more relevant.

There was no difference in the PcXImpactFactor index. Interesting how they describe the one that identified a difference as “generally more relevant”, isn’t it?
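As I read their definition, the Pc-index is just a weighted sum of citation counts, with each paper’s citations scaled by the author’s a-index for that paper. A minimal sketch (the numbers are made-up placeholders, not values from the paper):

```python
def pc_index(papers):
    """papers: list of (citations, a_index) pairs for one author's publications.

    The a-index (per the paper) apportions author credit by byline position;
    here it is simply taken as a given per-paper weight.
    """
    return sum(citations * a for citations, a in papers)

# Hypothetical record: three papers with assumed citation counts and weights.
print(pc_index([(100, 0.5), (40, 1.0), (10, 0.25)]))  # 92.5
```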

Then we move on to Tables 3 and 4, in which the authors show that if you “normalize” the PIs’ award funding by the various performance measures (10), there is no difference between black and white professors.

There are a few more complaints about the earlier part of the study but that isn’t really focused on the grant-getting so I’ll leave it for now. It reflects the entire 130-pair sample and examines the productivity measures. There are interesting tidbits in the fact that they only found significant differences in the Asst Professor ranks. In the larger NIH-grant picture, perhaps their excuse of too few black Full and Associate Professors for analysis is highly meaningful for the overall disparity of grant award? Then there was the observation of differences only in the Assistant Professors at the top one-third of medical schools but not in the bottom two-thirds.

I’ll end with my observations:

1) why not academic departments? what proportion of the NIH PI population is at medical schools versus regular academic departments? what about non-University institutions?

2) why not all of them?

3) really? like they never heard of passing. Also “white”? What sort of “white” are we talking here? How do we know their sample of white medical school faculty matches the overall NIH sample of white PIs?

4) so the sample had to be really narrow here because they had to find disciplinary descriptions broad enough that they even had an African American professor represented. This will not be the case everywhere.

5) isn’t the whole issue that is at the heart of Ginther those investigators who were NOT funded by the NIH? That’s what assessing the disparity is about….figuring out if there are “missing” investigators who should have been funded but were not. Right? Determining whether those funded black investigators are as good as a sample of white investigators is beside the point. I really need to chase down the exact quote but one of the ERA-era leaders said something to the effect that women will enjoy true equality not when they can succeed by being better than all the men but when all they have to do is be as good as the worst men in a given workplace. The same logic applies here. The focus should be on the whole distribution of funded investigators. It is irrelevant if, say, black investigators who “should” be at a Tier 2b Med school are really employed at a Tier 1c Med school. What matters is whether there are black scientists who are just as good as Tier 3f Med school white investigators but are not getting the funding their counterparts are enjoying.

6) ok, whut? why this skew for the top end? if they sought to focus on the elite, why not just sample all of the schools in the top third? or once you get past this the NIH grants are few and thin on the ground? particularly for black investigators perhaps? or for both white and black professors?

7) all of a sudden the white sample is down to 11 when it should have been 22. I can’t figure out what they did here.

8) the a-index they base much of this on seems to be an attempt to parse author credit depending on position in the author list, number of authors, etc. yeah….that’s not resting on a bunch of subfield (see 9) practice equivalencies, is it?

9) yeah, the disciplinary “matching” isn’t working for me here. if the pairs were within Medical School and within discipline presumably this means within Department. This is almost certain to mean that the pairs differed in subdisciplinary issues like model, technical approaches, etc. Differences that can be even more significant contributors to citations than are the broad disciplinary labels. Now true, we’d want to know if there was any evidence that black investigators were more likely to be in lower citation, slower pub rate subfields…

10) This also depends on there being a direct and positive correlation between funding and “productivity”. As one example, human imaging research is really expensive, generates papers slowly, rarely ends up in CNS journals and probably isn’t cited that highly. People who do such work are living in the same pharmacology, psychiatry and neuroscience departments that contain bench jockey labs shiving each other in the back to race to the latest CNS scoop job. Same title, same department but….comparable? please. oh yeah, see 9) again.
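For what it’s worth, a crude positional-credit scheme of the general sort complained about in 8) might look like the following. The weighting function here is my own invention for illustration; it is not the actual a-index formula from the paper.

```python
def author_credit(position, n_authors):
    """Toy positional weight: first and last author get full credit,
    while middle authors get only 1/position. Invented for
    illustration only, not the paper's a-index."""
    if position == 1 or position == n_authors:
        return 1.0
    return 1.0 / position

# A middle author on a six-author paper gets far less credit than
# the first or last author, regardless of actual contribution.
print(author_credit(1, 6))  # 1.0
print(author_credit(3, 6))  # ~0.33
print(author_credit(6, 6))  # 1.0
```

Whatever the exact weights, the complaint stands: any scheme like this bakes in assumptions about authorship conventions that differ wildly across subfields.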

This first went up on the old SB blog in Feb of 2009.


Yasmin L. Hurd, Ph.D. is Professor of Pharmacology and Systems Therapeutics as well as Psychiatry at Mt. Sinai Medical Center (PubMed; Hurd Lab; Department; Research Crossroads).
As is overviewed on the “research” tab of her webpage, Professor Hurd has longstanding interests in mesocorticolimbic areas that are affected by drugs of abuse. Her areas of concentration include the in vivo neurochemical responses to drugs, the influence of drugs on fetal brain development and the molecular and biochemical changes that might be associated with dependence.
Professor Hurd obtained her doctorate in 1989 from the Karolinska…Okay, right there your brain should go ‘click’.


I pointed out some time ago that inflation “UnDoubled” the NIH budget rapidly in the wake of sustained Bush-era (now Obama-era) flatline budgets for the NIH. Nothing like a graph to make a point so I’ll repost it.

Figure 1. NIH Appropriations (Adjusted for Inflation in Biomedical Research) from 1965 through 2007, the President’s Request for 2008, and Projected Historical Trends through 2010.
All values have been adjusted according to the Biomedical Research and Development Price Index on the basis of a standard set of relevant goods and services (with 1998 as the base year).* The trend line indicates average real annual growth between fiscal years 1971 and 1998 (3.34%), with projected growth (dashed line) at the same rate. The red square indicates the president’s proposed NIH budget for fiscal year 2008, also adjusted for inflation in biomedical research.

Now, what I ran across today at Ethan Perlstein’s post on Postdocalypse now (go read) was this graph which makes the same point in a slightly different way. I like it. He didn’t link the source so I’m not certain of the inflation adjustment used…probably not the above BRDPI, I would think. But still…makes the point doesn’t it? At best the NIH purchasing power went up by 50%. It was never actually “doubled”.

UPDATE: Perlstein noted that he grabbed the figure from this article at dailykos by emptypockets which says this about the sourcing:

The Science column links to a study by Paula Stephan, an economist at Georgia State University (PDF of PowerPoint slides) that puts some numbers on exactly how the doubling affected young scientists.

Fascinating topic raised by @rxnm_ on the twitts today:

Hard to be excited that my work helped someone else get money they can use to pay me to continue being a temp. #postdocalypsenow

and

I am not ungrateful and don’t think PI’s don’t deserve grants on their own merit. It is just hard to feel it as a shared success.

This is one of the realities of the training arc. When you are a postdoc in a lab, part of what you are doing is servicing the grant game. Whether you realize this or not. You are going to be expected to work on topics related to the lab’s current funding (in most cases, biomed, ymmv, etc, etc). In this your work will be included in progress reports and in future grant applications as well. Your (“your”) papers will be used by the PI to support her reputation as a productive scientist. To shore up the appearance that when she proposes a given research plan, by glory some cool stuff will get published as a result!

But the NIH grant process can be lengthy. Submit a proposal in Feb/March for review in Jun/Jul…Council review in Sep…and first possible funding Dec 1. And we all know that means no budget, Continuing Resolution and good luck seeing your money until late Feb, early March. If the grant doesn’t score well the first time, it must be revised in Nov, reviewed in Feb…Council in May..for funding in Jul. Eighteen glorious months.

So chances are very good that the hard work of the postdoc will end up in tangible grant support results for the laboratory that only the PI is going to “enjoy”. Well, and the techs. And of course any more-junior trainees….and….FUTURE POSTDOCS aaaaarrrrrghhhhhhh YOUR COMPETITION!!!!!!! arrrgggghhh!!!! dammit!

How many postdocs think about the labors of the prior trainees when they enter a laboratory on a funded grant? And how they are benefiting from the work of those prior individuals? How many late-stage postdocs start to feel pretty damn exploited when that grant based on their work, the one they wrote half* of, gets funded just as they are leaving**? I wonder if any of them think about the grant that funded their first three years in the lab…
__
*hahahahaha.

**leaving for a professorial job, no biggie. but what if there has been a great laboratory shrinking due to grant loss and the timing is such that the new grant comes in too late for the current postdoc?

‘Tis the time of the year for interviewing graduate school candidates. The exact purposes vary from a significant selection process to “just make sure s/he isn’t completely bonkers, okay?”.

Michael Eisen asked on the Twitts:

what do people think are the most useful things to ask in a 30m grad school interview?

After a wisecrack or two I came up with a serious one.

“tell me about the moment you first realized you weren’t the smartest person in the room?”

What would you suggest, Dear Reader?