Thought of the Day

July 24, 2014

What fraction of the stuff proposed in funded grants actually gets done, once feasibility and the movement of the field come into play?

Dear Editor Whitehare,

Do you really expect us to complete the additional experiments that Reviewer #3 insisted were necessary? You DO realize that if we did those experiments the paper would be upgraded enough that we sure as hell would be submitting it upstream of your raggedy ass publication, right?

Collegially,
The Authors

Datahound has some very interesting analyses up regarding NIH-wide sex differences in the success of the K99/R00 program.

Of the 218 men with K99 awards, 201 (or 92%) went on to activate the R00 portion. Of the 142 women, 127 (or 89%) went on to the R00 phase. The difference between these percentages is not statistically significant.

Of the 201 men with R00 awards, 114 (57%) have gone on to receive at least 1 R01 award to date. In contrast, of the 127 women with R00 awards, only 53 (42%) have received an R01 award. This difference is jarring and is statistically significant (P value=0.009).
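
As a back-of-the-envelope check on the quoted counts, a Pearson chi-square test on the 2x2 tables reproduces the pattern Datahound reports (this is a sketch only; Datahound's actual method is not specified in the post):

```python
# Chi-square test of independence on the counts quoted above.
# Illustrative check only; not Datahound's actual analysis.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in [(a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)]:
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat

# R01 conversion: 114 of 201 men vs 53 of 127 women with at least one R01
r01 = chi_square_2x2(114, 201 - 114, 53, 127 - 53)

# K99 -> R00 activation: 201 of 218 men vs 127 of 142 women
k99 = chi_square_2x2(201, 218 - 201, 127, 142 - 127)

print(round(r01, 2))  # ~7.0, above the 6.63 cutoff for p < 0.01 at df = 1
print(round(k99, 2))  # ~0.8, well below the 3.84 cutoff for p < 0.05
```

The R01 comparison clears the p < 0.01 threshold, consistent with the quoted P = 0.009, while the K99-to-R00 comparison is nowhere near significance.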

Yowza.

So, per my usual, I’m very interested in what the ICs that are closest to my lab’s heart have been up to with this program. Looking at K99 awardees from 07 to 09, I find women PIs to constitute 3/3, 1/3 and 2/4 in one Institute and 1/7, 2/6 and 5/14 in the other Institute. One of these is doing better than the other, and I will just note that this was before the arrival of a Director who has been very vocal about sex discrimination in science and academia.

In terms of the conversion to R01 funding that is the subject of Datahound’s post, the smaller Institute has decent outcomes* for K99 awardees from 07 (R01, R21, nothing), 08 (R01, R01, R01) and 09 (P20 component, U component, nothing, nothing).

In the other Institute, the single woman from 07 did not appear to convert to the R00 phase but, Google suggests, made Assistant Professor rank anyway. No additional NIH funding. The rest of the 07 class contains four with R01s and two with nothing. In 08, the women PIs are split (one R01, one nothing), similar to the men (two R01s, two with nothing). In 09 the women PIs have two with R01s, one with an R03 and two with nothing.

So from this qualitative look, nothing is out of step with Datahound’s NIH-wide stats. Women constitute 14 of the 37 K99 PIs (38%), similar to the NIH-wide 39% Datahound quoted, although there may be a difference between these two ICs (30% vs 60%) that could stand some inquiry. One of the 37 K99 awardees failed to convert from the K99 to the R00 (but seems to be faculty anyway). Grant conversion past the R00 is looking to be roughly half, or a bit better.

I didn’t do the men for the 2009 cohort in the larger Institute, but otherwise the sex differences in terms of getting or not getting additional funding beyond the R00 seem pretty similar.

I do hope Datahound’s stats open some eyes at the NIH, however. Sure, there are reasons to potentially excuse away a sex difference in the rates of landing additional research funding past the R00. But I am reminded of a graph Sally Rockey posted regarding the success rate on R01-equivalent awards. It showed that men and women PIs had nearly identical success rates on new (Type 1) proposals but that women had slightly lower success on Renewal (Type 2) applications. This maps onto the rates of conversion to R00 and the acquisition of additional funding, if you squint a bit.

Are women overall less productive once they’ve landed some initial funding? Are they viewed negatively on the continuation of a project but not on the initiation of it? Are women too humble about what they have accomplished?
__
*I’m counting components of P or U mechanisms but not pilot awards.

This is a guest post from someone who wishes to remain anonymous.

[UPDATE March 2017: I have received a letter from a lawyer purporting to represent Mr. Galli. This letter expressed distress with alleged “defamatory” statements in this post and the ensuing comments. I have consequently gone through to edit this post, and comments, to make it as clear as possible that opinions are being offered so that they might not be misconstrued as a statement of fact by the average reader. -DM]

This week, the Society for Neuroscience opened its website allowing attendees to book their hotels for the annual meeting. The timing could not have been worse for the Vanderbilt neuroscience community, given that on Monday a former graduate student of the program leveled a disturbing series of accusations against neuroscientist Aurelio Galli. [UPDATE: The lawyer purporting to represent Mr. Galli has noted that this lawsuit was “dismissed with prejudice in December 2014”. This seems to be a pertinent fact for readers to consider. -DM] At least 10 of the 60+ alleged events of harassment occurred at SfN meetings. The year before the plaintiff claims she was subjected to harassment, the Society for Neuroscience named Vanderbilt its ‘Neuroscience Training Program of the Year’.


In a 20 million dollar harassment suit filed in Nashville, sordid details were laid out of alcohol-fueled harassment both in the lab and at the Society for Neuroscience’s annual meetings in 2012 and 2013. The student, a recovering alcoholic, alleges she was subjected to unwelcome and embarrassing commentary from Galli about her perceived lesbianism, her sex life and her looks, both in the lab and in front of male professors.


Vanderbilt fired back, saying it had investigated the claims and would vigorously defend itself. The medical center director and the chancellor were named as defendants, as were Mark Wallace, head of the Vanderbilt Brain Institute, and Roger Cone, National Academy member and Chair of the Department of Molecular Biology and Physiology. Wallace and Cone were included for their alleged failure to act on the student’s claims and to protect her career.


For those outside the field, the neuroscience community seems to hold down opposite poles in gender and racial equality at once. The leadership of both the Journal of Neuroscience and the Society has been enviably gender balanced over the last decade. SfN was one of the first national societies to initiate meaningful career-long mentorship for women and minorities. Thanks in part to this commitment, women constitute 50% of most neuroscience graduate training programs. Yet the national attrition of women from academic science is also evident in Vanderbilt’s neuroscience program, which has an all-male leadership and women making up just over 30% of its training faculty. The vast majority of these female faculty members are assistant professors.


Sending a female graduate student from a heavily male-influenced neuroscience graduate program to SfN presents many sources of potential conflict. The first SfN meeting at which the student claims she was harassed was in New Orleans, a city proud of its tradition of asking women to show their breasts for beads.


The female graduate student alleges that at SfN, her PI required her to attend a cocktail party on a boat where senior male scientists “became intoxicated and were allowed to make romantic and sexual advances on the students”. <I’ll insert my editorial opinion that this news does not surprise me, especially in light of the report this week from Kate Clancy that the majority of women in her survey of field scientists say they have been harassed, with more than 20% reporting that they have been assaulted.>


Why would anyone attend a boat party, or any other kind of party where alcohol is flowing freely and fun is a much clearer objective than science? For many trainees, this is often the only chance they have to spend time talking to well-published PIs. Presumably, at a party like this, senior investigators would be amenable to laid-back conversations with trainees, providing a rare chance to judge the character of potential future mentors.


These parties are the products of the bygone era of much larger gatherings held a decade or more ago by men who were SfN officers and investigators. Hosts had ample institutional ‘slush’ funds and open bar was the norm. [UPDATE: I have edited out a sentence in the original post that the lawyer contends “inappropriately conflates” allegations against Mr. Galli with the actions of another neuroscientist. I didn’t read the authors opinion that way but in an excess of caution am removing it. -DM]


[UPDATE: I have edited out a paragraph in the original post that is related to the lawyer’s contention about the “inappropriately conflates” issue mentioned above. I didn’t read the authors opinion that way but in an excess of caution am removing it. -DM]


From the Vanderbilt lawsuit, “networking” was the reported benefit Galli touted as a reason for the trainee to attend the boat party. [UPDATE: I have edited out a half-sentence in the original post that is related to the lawyer’s contention about the “inappropriately conflates” issue mentioned above. -DM] …so these kinds of parties probably did help him advance his career. [UPDATE: The lawyer asserts this is “demonstrably false” but since this is a speculative opinion by the original author, I don’t see how this could possibly be true. -DM] The expectation that a female recovering alcoholic would likewise benefit underscores a clear cultural clash that needs to be addressed by both the Vanderbilt community and the Society for Neuroscience.

Maia Szalavitz has penned a new article on addiction that has been circulated, credulously and uncritically, on social media by people who should know better. So, once more into the breach, Dear Reader.

The article in question, Most of Us Still Don’t Get It: Addiction is a Learning Disorder, is posted at substance.com.

We can start with the sub-header:

Addiction is not about our brains being “hijacked” by drugs or experiences—it’s about learned patterns of behavior. Our inability to understand this leads to no end of absurdities.

From whence comes learning if not from experiences? And what is the ingestion of a psychoactive drug if not an experience? She is making no sense here. The second sentence is pure straw-man, particularly when you read the entire piece and see that her target is science, scientists and the informed public rather than the disengaged naive reader.

Academic scientists focused on drug abuse have talked about the learning aspect, of habits and of the lasting consequences of drug experiences since forever. This is not in the least little bit unknown or novel.

From
https://twitter.com/LSU_FISH/status/489505604218933248

we see that Science magazine has really made a mistake with a cover picture.

The Science Careers subsection Editor Jim Austin wondered:

and then later wondered some more:
https://twitter.com/SciCareerEditor/status/489528456783224833

to which I observed that no, he had plenty of company:

Look, I am sure there will be plenty of folks more expert and concise and, dare I say it, civil than I am who will weigh in. But here we are.

This cover is bullshit. It objectifies the female form, whether one considers the subjects to be female or not. It is designed explicitly to draw the infamous male gaze.

A cautionary tale from University System Professor Emeritus (RIP) John R. Cash:

try not to forget your past undergraduate interns.

Excellent comment from eeke:

My last NIH grant application was criticized for not including a post-doc at 100% effort. I had listed two techs, instead. Are reviewers under pressure to ding PI’s for not having post-docs or some sort of trainee? WTF?

My answer:

I think it is mostly because reviewers think that a postdoc will provide more intellectual horsepower than a tech, and that when you have two techs, you could have one of each.

I fully embrace this bias, I have to admit. I think a one-tech, one-PD full modular R01 is about the standardest of standard lineups. Anchors the team. Best impact for the money and all that.

A divergence from this expectation would require special circumstances to be explained (and of course there are many projects imaginable where two-tech, no-postdoc *would* be best; you just have to explain it).

What do you think, folks? What arrangement of personnel do you expect to see on a grant proposal in your field, for your agencies of closest interest? Are you a blank slate until you see what the applicant has proposed or do you have….expectations?

from University System Professor Emeritus (RIP) John R. Cash:
“Well, first you gotta want to get off bad enough to want to get on in the first place.”

Some guy has written a blog post asking “Is it morally acceptable to hire postdocs?”

This is not an absurd question on the face of it and one of his points appears to be that hiring postdocs is done in preference to hiring longer-term staff-scientist type people.


Hire permanent researchers instead of postdocs. This I think is closer to a fundamental resolution of the problem. Rather than hiring a short-term postdoc by dangling a future faculty job in front of them, it is far more fair to hire a researcher permanently with a salary and benefits adequate to their experience. Although the current funding system is not particularly suitable for this – obviously, permanent researchers should be paid by the university not by grants – it can be done. A permanent researcher also becomes a great asset for the lab as they accumulate valuable skills.

I agree that if you can manage to do this, in preference to a series of 3-5 year cheap ‘trainees’ doing the same job, this is a morally superior place to be. Totally.

The blog post starts, however, with the following figure

sourced from Schillebeeckx et al (2013) in Nature Biotechnology.

See how the production of new PhDs each year leads to an ever-increasing disconnect between the number of available PhDs and the number of faculty jobs? So yes, there is an increasing body of postdocs being exploited and not being able to get the faculty jobs that they started graduate training to obtain.
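
The arithmetic behind that ever-increasing disconnect is brutally simple. The numbers below are purely hypothetical placeholders, not the Schillebeeckx et al. data:

```python
# Illustrative only: made-up steady rates, not the figures from the paper.
# If new PhDs outnumber faculty openings every year, the backlog just grows.
phds_per_year = 3000       # hypothetical annual PhD production in a field
openings_per_year = 300    # hypothetical annual faculty openings in that field
years = 10

backlog = (phds_per_year - openings_per_year) * years
print(backlog)  # 27000 PhDs with no faculty slot after a decade, at these rates
```

Any positive gap between the two rates compounds linearly year over year, which is exactly the widening wedge the figure shows.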

BUT THEY DIDN’T GET DROPPED OFF BY THE STORK OF SCIENCE!!!!!!!!!!!!!!!!!!!!

They were made. Intentionally. By faculty who benefit tremendously in their own careers from an inexpensive, unbelievably hard working, young and less-distracted, deluded and optimistic workforce.

Faculty who, as it happens, are unbelievably motivated to come up with excuses why their continued overproduction of PhDs year in, year out, is not any sort of problem.

You know, kind of like how any Western subpopulation that advocates family sizes of 8 or 15 or whatnot can’t find any sort of problem with overpopulation.

And kind of like the US Baby Boomers stopped talking about overpopulation the second they realized their comfy retirements were gonna depend on a lot bigger working population behind them, paying the taxes that they couldn’t even be bothered to pay in their own heyday.

But I digress.

The point is, that this blog post contains a big old howler:

One might object to this: Isn’t there the same problem with PhDs as with postdocs? In my view, the problem is not the same. I believe that entering a PhD program in natural sciences is not a commitment to an academic track, whereas entering a postdoc is, in most cases. Most jobs outside of academia do not require a postdoc experience, so a postdoc definitely narrows down one’s options. In contrast, a PhD generally widens the options. So, in my view, most PhDs should not go onto the academic track. But in general having more educated people in the non-academic world is good, especially given how many people do not believe in evolution or what idiots oversee science in Congress. A more detailed discussion of this subject is a topic for another day.

HAHAHAHAAH!!!!!!

Riiiiiiigghhhhhttt.

Bog-standard excuse making that I hear from every damn participant in a graduate program that simply cannot bear to see that their habit has been to exploit cut rate labor. At first they simply refused to admit that there was any overproduction whatsoever. Then, when the evidence became overwhelming, they clutched the excuse of “alt-careers” and “general good” like a man going down for the third time grasping a life-saver ring.

It’s laughable and pathetic.

One might even venture, immoral.

__
p.s. I don’t blame people directly for participating in this crappy system we are in. It demands that PIs exploit people to survive in the grant-funded rat race. Having a lab based exclusively on the work of ever more expensive career TurboTechs and Staff Scientists is a path to disaster. I grasp this. But for Glory’s sake people! Stop pretending it is something it isn’t. Stop pretending that your lab’s arrangements are totally free of exploitation but those other aspects of the system, over there, are immoral and evil.

Nailed it:

Ok, ok, I have no actual data on this. But if I had to pick one thing in substance abuse science that has been most replicated it is this.

If you surgically implant a group of rats with intravenous catheters, hook them up to a pump which can deliver small infusions of saline adulterated with cocaine HCl and make these infusions contingent upon the rat pressing a lever…

Rats will intravenously self-administer (IVSA) cocaine.

This has been replicated ad nauseam.

If you want to pass a fairly low bar to demonstrate you can do a behavioral study with accepted relevance to drug abuse, you conduct a cocaine IVSA study [Wikipedia] in rats. Period.

And yet. There are sooooo many ways to screw it up and fail to replicate the expected finding.

Note that I say “expected finding” because we must include significant quantitative changes along with the qualitative ones.

Off the top of my head, here are the types of factors that can reduce your “effect” to a null effect, or change the outcome to the extent that even a statistically significant result isn’t really the effect you are looking for:

  • Catheter diameter or length
  • Cocaine dose available in each infusion
  • Rate of infusion/concentration of drug
  • Sex of the rats
  • Age of rats
  • Strain of the rats
  • Vendor source (of the same nominal strain)
  • Time of day in which rats are run (not just light/dark* either)
  • Food restriction status
  • Time of last food availability
  • Pair vs single housing
  • “Enrichment” that is called-for in default guidelines for laboratory animal care and needs special exception under protocol to prevent.
  • Experimenter choice of smelly personal care products
  • Dirty/clean labcoat (I kid you not)
  • Handling of the rats on arrival from vendor
  • Fire-alarm
  • Cage-change day
  • Minor rat illness
  • Location of operant box in the room (floor vs ceiling, near door or away)
  • Ambient temperature of vivarium or test room
  • Schedule- weekends off? seven days a week?
  • Schedule- 1 hr? 2hr? 6 hr? access sessions
  • Schedule- are reinforcer deliveries contingent upon one lever press? five? does the requirement progressively increase with each successive infusion?
  • Animal loss from the study for various reasons

As you might expect, these factors interact with each other in the real world of conducting science. Some factors you can eliminate, some you have to work around and some you just have to accept as contributions to variability. Your choices depend, in many ways, on your scientific goals beyond merely establishing the IVSA of cocaine.
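
The schedule items in the list above (one press per infusion, five, or a progressively increasing requirement) can be sketched with a toy model. The parameters here are illustrative defaults, not any lab’s actual protocol:

```python
# Toy sketch of the reinforcement schedules mentioned in the list above.
# Starting requirement and step size are made-up illustrative values.

def fixed_ratio(presses, ratio=5):
    """Infusions earned when every `ratio` lever presses delivers one (FR schedule)."""
    return presses // ratio

def progressive_ratio(presses, start=5, step=2):
    """Requirement grows by `step` after each infusion (5, 7, 9, ... under defaults).
    Returns (infusions earned, final requirement, i.e. the 'breakpoint')."""
    requirement, earned = start, 0
    while presses >= requirement:
        presses -= requirement
        earned += 1
        requirement += step
    return earned, requirement

print(fixed_ratio(50))        # 10 infusions for 50 presses under FR5
print(progressive_ratio(50))  # (5, 15): fewer infusions as the requirement escalates
```

The point of the toy is just that the same number of lever presses yields very different infusion counts depending on the schedule, which is one reason two nominally similar IVSA studies can produce different "effects."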

Up to this point I’m in seeming agreement with that anti-replication yahoo, am I not? Jason Mitchell definitely agrees with me that there are a multitude of ways to come up with a null result.

I am not agreeing with his larger point. In fact, quite the contrary.

The point I am making is that we only know this stuff because of attempts to replicate! Many of these attempts were null and/or might be viewed as a failure to replicate some study that existed prior to the discovery that Factor X was actually pretty important.

Replication attempts taught the field more about the model, which allowed investigators of diverse interests to learn more about cocaine abuse and, indeed, drug abuse generally.

The heavy lifting in discovering the variables and outcomes related to rat IVSA of cocaine took place long before I entered graduate school. Consequently, I really can’t speak to whether investigators felt that their integrity was impugned when another study seemed to question their own work. I can’t speak to how many “failure to replicate” studies were discussed at conferences and less formal interactions. But given what I do know about science, I am confident that there was a little bit of everything. Probably some accusations of faking data popped up now and again. Some investigators no doubt were considered generally incompetent and others were revered (sometimes unjustifiably). No doubt. Some failures to replicate were based on ignorance or incompetence…and some were valid findings which altered the way the field looked upon prior results.

Ultimately the result was a good one. The rat IVSA model of cocaine use has proved useful to understand the neurobiology of addiction.

The incremental, halting, back and forth methodological steps along the path of scientific exploration were necessary for lasting advance. Such processes continue to be necessary in many, many other aspects of science.

Replication is not an insult. It is not worthless or a-scientific.

Replication is the very lifeblood of science.

__
*rats are nocturnal. check out how many studies**, including behavioral ones, are run in the light cycle of the animal.

**yes to this very day, although they are certainly less common now

I am no fan of the hysterical hand wringing about some alleged “crisis” of science whereby the small minded and Glam-blinded insist that most science is not replicable.

Oh, don’t get me wrong. I think replication of a prior result is the only way we really know what is most likely to be what. I am a huge fan of the incremental advance of knowledge built on prior work.

The thing is, I believe that this occurs down in the trenches where real science is conducted.

Most of the specific complaining that I hear about failures to replicate studies is focused on 1) Pharma companies trying to cherry pick intellectual property off the latest Science, Nature or Cell paper and 2) experimental psychology stuff that is super truthy.

With regard to the former, cry me a river. Publication in the highest echelons of journals, publication of a “first” discovery/demonstration of some phenomenon is, by design, very likely not easily replicated. It is likely to be a false alarm (and therefore wrong) and it is likely to be much less generalizable than hoped (and therefore not “wrong” but definitely not of use to Pharma vultures). I am not bothered by Pharma execs who wish that public funded labs would do more advancing of intellectual property and serve it up to them part way down the traditional company pipeline. Screw them.

Psych studies. Aaah, yes. They have a strong tradition of replication to rely upon. Perhaps they have fallen by the wayside in recent decades? Become seduced to the dark side? No matter. Let us return to our past, eh? Where papers in the most revered Experimental Psychology journals required several replications within a single paper. Each “Experiment” constituting a minor tweak on the other ones. Each paper firmly grounded in the extant literature with no excuses for shitty scholarship and ignoring inconvenient papers. If there is a problem in Psych, there is no excuse because they have an older tradition. Or possibly some of the lesser Experimental Psychology sects (like Cognitive and Social) need to talk to the Second Sect (aka Behaviorism).

In either of these situations, we must admit that replication is hard. It may take some work. It may take some experimental tweaking. Heck, you might spend years trying to figure out what is replicable or generalizable, what relies upon very…specific experimental conditions, and what is likely to have been a false alarm. And let us admit that in the competitive arena of academic science, we are often more motivated by productivity than by solving some esoteric problem that is nagging at the back of our minds. So we give up.

So yeah, sometimes practicalities (like grant money. You didn’t seriously think I’d write a post without mentioning that, did you?) prevent a thorough run at a replication. One try simply isn’t enough. And that is not a GoodThing, even if it is current reality. I get this.

But….

Some guy has written a screed against the replication fervor that is actually against replication itself. It is breathtaking.

All you need to hook your attention is conveniently placed as a bullet point pre-amble:

  • Recent hand-wringing over failed replications in social psychology is largely pointless, because unsuccessful experiments have no meaningful scientific value.
  • Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way. Unless direct replications are conducted by flawless experimenters, nothing interesting can be learned from them.
  • Three standard rejoinders to this critique are considered and rejected. Despite claims to the contrary, failed replications do not provide meaningful information if they closely follow original methodology; they do not necessarily identify effects that may be too small or flimsy to be worth studying; and they cannot contribute to a cumulative understanding of scientific phenomena.
  • Replication efforts appear to reflect strong prior expectations that published findings are not reliable, and as such, do not constitute scientific output.
  • The field of social psychology can be improved, but not by the publication of negative findings. Experimenters should be encouraged to restrict their “degrees of freedom,” for example, by specifying designs in advance.
  • Whether they mean to or not, authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues. Targets of failed replications are justifiably upset, particularly given the inadequate basis for replicators’ extraordinary claims.

Seriously, go read this dog.

This part seals it for me.

So we should take note when the targets of replication efforts complain about how they are being treated. These are people who have thrived in a profession that alternates between quiet rejection and blistering criticism, and who have held up admirably under the weight of earlier scientific challenges. They are not crybabies. What they are is justifiably upset at having their integrity questioned.

This is just so dang wrong. Trying to replicate another paper’s effects is a compliment! Failing to do so is not an attack on the authors’ “integrity”. It is how science advances. And, I dunno, maybe this guy is revealing something about how he thinks about other scientists? If so, it is totally foreign to me. I left behind the stupid game of who is “brilliant” and who is “stupid” long ago. You know, when I was leaving my adolescent arrogance (of which I had plenty) behind. Particularly in the experimental sciences, what matters is designing good studies, generating data, interpreting data and communicating the findings as best one can. One will stumble during this process…if it were easy it wouldn’t be science. We are wrong on a near-weekly basis. Given this day-to-day reality, we’re going to be spectacularly wrong on the scale of an entire paper every once in a while.

This is no knock on someone’s “integrity”.

Trying to prevent* anyone from replicating your work, however, IS a knock on integrity.

On the scientific integrity of that person who does not wish anyone to try to replicate his or her work, that is.

__
*whether this be by blocking publication via reviewer or editorial power/influence, torpedoing a grant proposal, interfering with hiring and promotion or by squelching intrepid grad students and postdoctoral trainees in your own lab who can’t replicate “The Effect”.

I wrote this awhile ago. Seems worth reposting for new readers:

I really should apologize to my readers who get their feelings hurt when 1) I bash GlamourMag science and 2) CPP bashes society-journal level science. I just couldn’t figure out how to make it something other than a nonpology. So the nonpology version is: sorry, dudes. Sorry that your feelings are hurt if there is some implication that you are a trivial, fame-chasing, probably data-faking GlamourHound. Sorry, also, if the ranting that I trigger from certain commenters has the effect of making you feel as though you are a trivial, meaningless speedbump who is wasting NIH dollars better spent on RealScientists who do RealGrandeWorkEleven.

The fact is, CPP and I are in relatively comfortable situations compared with many of our readers. It is no secret that we have jobs and grant funding. Although it is true that both of us are not above making an exaggerated point for dramatic, discussion-encouraging purposes, it is probably no surprise that we come from distinctly different points of view ForRealz on this particular issue.

Speaking only for myself in this case, I’ve been around long enough, and enjoyed enough of what I consider to be success in what I want to do as a scientist, that it tends to insulate me against criticism. I get that this is not true for all of you. If my intent in raising these issues (i.e., to show that the dominant meme is not reflective of the only way to have a career) backfires for some of you, I do regret that.