from an essay at Science Careers by Adam Rubin:

When I was there, about twenty people worked in the lab, including seven grad students, postdocs out the wazoo, and even an undergrad who used to whine—and these were his exact words—“Adam, the data are being weird!” I think he’s a medical doctor now. Anyway, it was known as the department’s largest lab, a bustling powerhouse facility that churned out grants and always dominated the annual holiday party dessert competition.

Now, however, it appears to have fallen victim to the same budget cuts that are killing science around the country. Research projects have been abandoned. Equipment sits idle. The lab of twenty has become a lab of five. And the five are scared.

The past five or six months have been a bit depressing on my campus too. The parking lots are noticeably less full. Sure, at first it was the end of the school year to blame. And then we hit the swing of early summer when the Americans with families went off…then it was vacation August for all the Eurohabituated scientists. It was easy to mouth all the excuses. And to refuse to recognize the reality.

September is done now and it is hard to maintain any sort of fiction.

The labs are empty. There are many fewer people around. Everything has shrunken in upon itself.

It hasn’t been a huge explosion, either. No orgy of dramatic dissolution wherein a faculty member cashes in all at once.

Just a sloooooow, gradual depressing attrition of people.

A recognition you haven’t seen anyone in that particular lab space in…well, quite some time.

The sad part is, my department is doing relatively…not well, but okay. We’ve had a number of grants land on the laboratories in the past nine months or so. Really hard to complain too much in these times of belt-tightening at the NIH.

But this may not be occurring with other departments around campus. I don’t know. I don’t really pay much attention to who funds them and how hard they all work at securing funding. I can’t see it and I don’t want to….not my pay grade. Still, my perception may be enhanced by those people, over there. Those losers not in my department. Those guys.

Still. Even within our own department, we’re in survival mode. Seemingly. We’re working….but it is less than vibrant. Not what I’d describe as bustling….which it has been before. And hopefully will again.

I don’t know who the Op/Ed author quoted above worked with, what sort of lab it was or where the PI was in career progression. But assuredly, folks. Assuredly. Some of these labs are not going to come back. The PI may be near enough to the end of the career to just pack it in. There is also the possibility of a death-spiral in which an interval of low production may lead to no more trainees having interest in the lab, and therefore no preliminary data for new proposals and therefore no new funding.

The University may run out of patience and shut a PI down against their will.

This is still the front end of the process, but make no mistake, we are fully engaged. Shrinking lab size is the first step, and it is utterly undeniable at present. It is a clear antecedent to the coming collapse of labs themselves.

My optimism that the NIH extramural-research enterprise is too big to fail is being sorely tested, people.

As far as I can tell, the British Journal of Pharmacology has taken to requiring that authors who use animal subjects conduct their studies in accordance with the “ARRIVE” (Animal Research: Reporting of In Vivo Experiments) principles. These are conveniently detailed in their own editorial:

McGrath JC, Drummond GB, McLachlan EM, Kilkenny C, Wainwright CL. Guidelines for reporting experiments involving animals: the ARRIVE guidelines. Br J Pharmacol. 2010 Aug;160(7):1573-6. doi: 10.1111/j.1476-5381.2010.00873.x.

and the paper on the guidelines:

Kilkenny C, Browne W, Cuthill IC, Emerson M, Altman DG; NC3Rs Reporting Guidelines Working Group. Animal research: reporting in vivo experiments: the ARRIVE guidelines. Br J Pharmacol. 2010 Aug;160(7):1577-9. doi: 10.1111/j.1476-5381.2010.00872.x.

The editorial has been cited 270 times. The guidelines paper has been cited 199 times so far and the vast, vast majority of these are in, you guessed it, the BRITISH JOURNAL OF PHARMACOLOGY.

One might almost suspect the journal now demands that authors indicate they have followed these ARRIVE guidelines by citing the 3-page paper listing them. The journal IF is 5.067 so having an item cited 199 times since it was published in the August 2010 issue represents a considerable outlier. I don’t know if a “Guidelines” category of paper (as this is described on the pdf) goes into the ISI calculation. For all we know they had to exempt it. But why would they?

And I notice that some other journals seem to have published the guidelines under the byline of the selfsame authors! Self-Plagiarism!!!

Perhaps they likewise demand that authors cite the paper from their own journal?

Seems a neat little trick to run up an impact factor, doesn’t it? Given the JIF and publication rate of real articles in many journals, a couple of hundred extra cites in the sampling interval can have an effect on the JIF.
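
For the arithmetic-minded, here is a toy sketch of the two-year impact factor calculation. Every number below is invented for illustration; these are not BJP’s actual counts, and the real JIF calculation has wrinkles (which items count as “citable”) that this ignores:

```python
# Toy sketch of the two-year journal impact factor (JIF).
# All counts below are made up for illustration; they are not BJP's figures.

def impact_factor(cites, citable_items):
    """JIF for year Y: citations received in Y by items published in
    Y-1 and Y-2, divided by the citable items published in Y-1 and Y-2."""
    return cites / citable_items

citable_items = 400   # hypothetical citable items over the two-year window
organic_cites = 2000  # hypothetical citations those items picked up

baseline = impact_factor(organic_cites, citable_items)
inflated = impact_factor(organic_cites + 200, citable_items)  # +200 guideline cites

print(f"baseline JIF: {baseline:.3f}")   # 5.000
print(f"inflated JIF: {inflated:.3f}")   # 5.500
```

With a denominator of a few hundred citable items, two hundred extra self-journal cites move the JIF by half a point.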

From the Science Careers section, Michael Price reports on a recent National Academy of Sciences symposium on the NIH foofaraw about biomedical career trajectories. The NAS, you will recall, is a society of very elite and highly established scientists in the US. It will not surprise you one bit to learn that they cannot fathom making changes in our system of research labor to benefit the peons any more than the NIH can:

First issued in June 2012, the working group’s report made a controversial proposal: that funding should gradually be moved away from R01 grants and toward new NIH training grants in an effort to decouple graduate student and postdoc stipends. But responses to this proposal were tepid at the June [Advisory Committee to the Director (ACD) of the National Institutes of Health (NIH)] meeting where the proposals were first presented. Such a move would reduce the number of graduate students and postdocs available to principal investigators (PIs), and make trainees more expensive to hire, some ACD members argued. That would reduce PIs’ autonomy and encumber the research enterprise. “One wants to be sure that the principal investigators, who are supposed to be doing the research, continue to have enough flexibility to be able to support the research they want to do,” offered biologist Robert Horvitz of the Massachusetts Institute of Technology in Cambridge.

Reduce the number of easily exploitable laborers and/or make them more expensive. Presumably by forcing PIs to conduct more of their work with a more-permanent workforce (at any degree level). Permanent employees* who have that nasty tendency to gain seniority and consequently cost more money compared with the constantly turning-over grad student and postdoc labor pool.

And reduce our autonomy to hire foreign workers to further suppress wages and expectations for the domestic PhD pool. (Individual and Institutional postdoctoral and graduate “training” fellowships from the NIH currently only extend to US citizens. So I imagine PIs are assuming a shift to more fellowships would “reduce PIs’ autonomy” to hire foreign PhDs.)

the Price article continues:

When the ACD convened in December to discuss implementing the working group’s recommendations, this one had vanished from the agenda. The discussions at the December meeting avoided controversial issues, centering on whether, in an era in which only a small minority of scientists can realistically expect academic research careers, universities were adequately training students for a range of careers beyond the tenure track.

So it isn’t just the NAS Greybearded and BlueHaired contingent. This is the NIH response to their own working group.

Pass the buck.

Really strong work there, NIH.

Anything better from the NAS meeting?

In contrast to the measured discussion at December’s ACD meeting, the attendees of last week’s NAS meeting—mostly researchers who have studied the academic labor market—were critical of the status quo, arguing that keeping things the way they are would be disastrous for the scientific workforce.

m’kay…..and…..

There aren’t enough permanent jobs in academia for the vast majority of science graduates—and yet little has been done to curtail the production of doctorates, Ginther argues. “Employment has been stagnant, but Ph.D. production has been zooming,” Ginther said.

Ginther? Remember her? Wonder how NIH is coming along on the R01 funding disparity issue? HAAHA, I crack myself up.

Anyway…is anyone at NAS or the ACD discussing how we need to shut down the PhD firehose in addition to functionally restricting the import of foreign labor? hell no….

At December’s ACD meeting, the discussion focused on tweaking graduate programs to better prepare students for jobs outside academia, and several ACD members pointed to the relatively low unemployment numbers among science Ph.D.s as reassurance about trainees’ professional prospects.

Oh, but the scuttlebutt. That’s a bright spot, right?

None of the presenters at last week’s meeting put forth any radical suggestions for how to overhaul the academic training system, but the tenor of the discussions was far more critical of established practices than the discussions heard at NIH in December 2012. After Ginther’s presentation, this reporter overheard a chat between two meeting attendees. One suggested that science professors cannot in good conscience encourage their students to pursue a Ph.D.,

Sigh. No “radical suggestions”, eh? So basically there is no real difference from the ACD meeting. Ok, so one overheard conversation is snarky….but this does not a “tenor” make. How do you know the ACD folks didn’t also say such things outside of the formal presentations and the journalist just didn’t happen to be there to eavesdrop? Lots of people are saying this, they just aren’t saying it very loud, from a big platform or in large numbers. When you start seeing the premier graduate training programs in a subarea of science trumpeting their 30% or 50% reductions in admissions, instead of the record increases**, then we’ll be making some strides on the “tenor”.

Remember though, the NIH is taking all this stuff very, very seriously.

the ACD moved forward with most of the working group’s other recommendations, including proposals that would: establish a new funding program to explore how to better train grad students and postdocs for nonacademic careers; require trainees funded by NIH to have an individual development plan; encourage institutions to limit time-to-graduation for graduate students to 5 years; encourage institutions to track the career outcomes of their graduates; and encourage NIH study sections to look favorably upon grant proposals from teams that include staff scientists

Right.

1) Nonacademic careers in science are also drying up. This is the ultimate in buck-passing and feigned ignorance of what time it is on the street.
2) IDPs? Are you kidding? What good does it do to lay out specifically “I’d like to take these steps to become a tenure-track faculty” when there are STILL no jobs and no research funding for those who manage to land them? IDPs are the very definition of rearranging deck chairs.
3) I totally support faster time to PhD awards for the individual. However on a broad basis, this just accelerates the problem by letting local departments up their throughput of newly minted PhDs. Worthless goal if it is not combined with throttling back on the number of PhD students being trained overall.
4) Making training departments track outcomes is good but..to what end? So that prospective graduate students will somehow make better choices? Ha. And last I checked, when PhD programs are criticized for job outcome they start waving their hands furiously and shout about the intervening postdoctoral years and how it is in no way their fault or influence that determines tenure-track achievement of their graduates.
5) “encourage” study sections? Yeah, just like the NIH has been encouraging study sections to treat tenure-track traditional hire Assistant Professors better. Since the early 80s at the least and all to no avail. As we know, the only way the NIH could make any strides on that problem was with affirmative action style quotas for younger PIs.

Tilghman, who headed the working group and I think has been around the NIH for a few rodeos before, is not impressed:

Yet, the working group’s chair, former Princeton University president Shirley Tilghman, told Science Careers that she couldn’t “help but go back to [her] cynicism” so long as NIH merely “encouraged” many of these measures.

Where “cynicism” is code for “understanding that NIH has no intention whatsoever in changing and is merely engaging in their usual Kabuki theater to blunt the fangs of any Congressional staff that may happen to get a wild hair over any of this career stuff”.

Score me as “cynical” too.

[ h/t: DJMH ]
__
*and yeah. It sucks to have a 5-year grant funding cycle and try to match that on to supporting permanent employees. I get that this is not easy. I deal with this myself, you know. My convenience doesn’t excuse systematic labor exploitation, though.

**Dude I can’t even. Bragging about record admits for several recent years now, followed finally this year by some attempt to figure out if the participating faculty can actually afford to take on graduate students. FFS.

There should be a rule that you can’t write a review unless you’ve published at least three original research papers in that topic/area of focus.

Also a rule that your total number of review articles cannot surpass your original research articles.

Someone or other on the Twitts, or possible a blog comment, made a remark about academic citation practices that keeps eating at me.

It boils down to this.

One of the most fundamental bits of academic credit that accrues to authors is the citation of their research papers. Citations form the ballyhooed h-index (X papers with at least X cites each), go into the “Highly Cited” measure of awesomeness and are generally viewed as an important indication of your impact on science.
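
As an aside, the h-index definition in that parenthetical is mechanical enough to sketch in a few lines (the citation counts here are invented):

```python
# Minimal h-index sketch: the largest h such that the author has at
# least h papers with at least h citations each.

def h_index(citation_counts):
    h = 0
    # Walk the papers from most-cited to least-cited; at rank r we have
    # r papers with at least `cites` citations each.
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts for one author's papers:
print(h_index([25, 8, 5, 3, 3, 1]))
# 3 -- at least 3 papers have >= 3 cites, but fewer than 4 papers have >= 4
```

Which is exactly why authors care where those cites land: every cite that goes to a review instead of the original paper is a cite that never feeds this number.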

Consequently, when you choose to cite a review article to underline a point you are making in your own article, you are taking the credit that rightfully goes to the people who did the actual work, and handing it over to some review author.

Review authors are extracting surplus value from the people who did the actual creating. Kind of like a distributor of widgets extracts value from those people who actually made them by providing the widgets in an easy/efficient location for use. Good for them but…..

So here’s the deal. If you are citing a review only as a sort of collected works, stop doing that. I can make an exception when you are citing the review for the unique theoretical or synthetic contribution made by the review authors. Fine. But when you are just doing it because you want to make a general “..it is well established that Bunnies make it to the hedgerow in 75% of baseline time when they are given amphetamine” type of point, don’t do that. Cite some of the original authors!

If you really need to, you can cite (Jo et al, 1954, Blow et al 1985, Moe et al 2005; see Pig and Dog, 2013 for recent review).

Look at it this way. Would you rather your papers were cited directly? Or are you okay with the citations for something to which you contributed fundamentally being meta-cites of some review article?

exhibit a:

h/t retractionwatch blog and PhysioProffe.

As we all know, much of the evaluation of scientists for various important career purposes involves the record of published work.

More is better.

We also know that, at any given point in time, one might have work that will eventually be published that is not, quiiiiiite, actually published. And one would like to gain credit for such work.

This is most important when you have relatively few papers of “X” quality and this next bit of work will satisfy the “X” demand.

This can mean first-author papers, papers from a given training stint (like a 3-5 yr postdoc) or the first paper(s) from a new Asst Professor’s lab. It may mean papers associated with a particular grant award or papers conducted in collaboration with a specific set of co-authors. It could mean the first paper(s) associated with a new research direction for the author.

Consequently, we wish to list items that are not-yet-papers in a way that implies they are inevitably going to be real papers. Published papers.

The problem is that of vaporware. Listing paper titles and authors with an indication that it is “in preparation” is the easiest thing in the world. I must have a half-dozen (10?) projects at various stages of completion that are in preparation for publication. Not all of these are going to be published papers and so it would be wrong for me to pretend that they were.

Hardliners, and the NIH biosketch rules, insist that published is published and all other manuscripts do not exist.

In this case, “published” is generally the threshold of receiving the decision letter from the journal Editor that the paper is accepted for publication. In this case the manuscript may be listed as “in press“. Yes, this is a holdover term from the old days. Some people, and institutions requiring you to submit a CV, insist that this is the minimum threshold.

But there are other situations in which there are no rules and you can get away with whatever you like.

I’d suggest two rules of thumb. Try to follow the community standards for whatever the purpose and avoid looking like a big steaming hosepipe of vapor.

“In preparation” is the slipperiest of terms and is to be generally avoided. I’d say if you are anything beyond the very newest of authors with very few publications then skip this term as much as possible.

I’d suggest that “in submission” and “under review” are fine and it looks really good if that is backed up with the journal’s ID number that it assigned to your submission.

Obviously, I suggest this for manuscripts that actually have been submitted somewhere and/or are out for review.

It is a really bad idea to lie. A bad idea to make up endless manuscripts in preparation, unless you have a draft of a manuscript, with figures, that you can show on demand.

Where it gets tricky is what you do after a manuscript comes back from the journal with a decision.

What if it has been rejected? Then it is right back to the in preparation category, right? But on the other hand, whatever perception of it being a real manuscript is conferred by “in submission” is still true. A manuscript good enough that you would submit it for consideration. Right? So personally I wouldn’t get too fussed if it is still described as in submission, particularly if you know you are going to send it right back out essentially as-is. If it’s been hammered so hard in review that you need to do a lot more work, then perhaps you’d better stick it back in the in preparation stack.

What if it comes back from a journal with an invitation to revise and resubmit it? Well, I think it is totally kosher to describe it as under review, even if it is currently on your desk. This is part of the review process, right?

Next we come to a slightly less kosher thing which I see pretty frequently in the context of grant and fellowship review. Occasionally from postdoctoral applicants. It is when the manuscript is listed as “accepted, pending (minor) revision“.

Oh, I do not like this Sam I Am.

The paper is not accepted for publication until it is accepted. Period. I am not familiar with any journals which have accepted pending revision as a formal decision category and even if such exist that little word pending makes my eyebrow raise. I’d rather just see “Interim decision: minor revisions” but for some reason I never see this phrasing. Weird. It would be even better to just list it as under review.

Final note is that the acceptability of listing less-than-published stuff on your CV or biosketch or Progress Report varies with your career tenure, in my view. In a fellowship application where the poor postdoc has only one middle author pub from grad school and the two first author works are just being submitted…well I have some sympathy. A senior type with several pages of PubMed results? Hmmmm, what are you trying to pull here. As I said above, maybe if there is a clear reason to have to fluff the record. Maybe it is only the third paper from a 5 yr grant and you really need to know about this to review their continuation proposal. I can see that. I have sympathies. But a list of 8 manuscripts from disparate projects in the lab that are all in preparation? Boooo-gus.

In Science, from Sandra L. Schmid, Ph.D. [PubMed] who is Chair of Cell Biology at UT Southwestern.

The problem:

CVs provide a brief description of past training—including the researcher’s pedigree—as well as a list of awards, grants, and publications. A CV provides little insight into attributes that will ensure future success in the right environment. For example, a CV is unlikely to reflect the passion, perseverance, and creativity of individuals who struggled with limited resources and created their own opportunities for compelling research. Nor is a CV likely to identify bold and imaginative risk-takers who might have fallen—for the moment—just short of a major research success. The same is true for those who found, when they realized their goal, that their results exceeded the imaginations of mainstream reviewers and editors, the gatekeepers of high-profile journals. Finally, for junior hires at early stages of their careers, a CV is unlikely to reveal individuals who are adept at recombining knowledge and skills gained from their graduate and postdoctoral studies to carve out new areas of research, or those able to recognize and take advantage of unique opportunities for collaboration in their next position.

Her Department’s solution:

We will be asking applicants to write succinct cover letters describing, separately and briefly, four elements: (1) their most significant scientific accomplishment as a graduate student; (2) their most significant scientific accomplishment as a postdoc; (3) their overall goals/vision for a research program at our institution; and (4) the experience and qualifications that make them particularly well-suited to achieve those goals. Each of the cover letters will be read by several faculty members—all cell biology faculty members will have access to them—and then we will interview, via video conferencing technologies, EVERY candidate whose research backgrounds and future interests are a potential match to our departmental goals.

She closes with what I see as a deceptively important comment:

Let’s run this experiment!

You have probably gleaned, Dear Reader, that one of my greatest criticisms of our industry is that the members of it throw all of their scientific training out the window when it comes to the actual behavior OF the industry. Paper review, grant review, assessment of “quality”, dealing with systematic bias and misdirection…… MAN we are bad at this.

Above all, we are reluctant to run experiments to test our deep-seated beliefs. Our beliefs that GRE quantitative or verbal or subject scores predict grad school performance. Our beliefs that undergraduate GPA is the key or maybe it is research experience in a lab of some DewD we’ve heard of. Our belief that what makes the postdoc is X number of first author pubs in journals of just exactly this Impact Factor. Our confidence that past performance predicts future success of our new Asst Professor hire….or tenure candidate.

So often we argue, viciously, our biases. So infrequently do we test them.

So bravo to Chair Schmid for actually running an experiment.

Thought of the Day

August 29, 2013

What “best predicts” the success of a junior scientist is handing her a laboratory and R01 level funding.

The notion that past publication record predicts anything independently from these two factors is arrant nonsense.


BikeMonkey Post
Once upon a time I used to try to go fast on a mountain bike. Now and again. The picture here is not of “some random white guy you pulled off the internet” as a certain ex-intern once remarked to me. This is towards the end of a race held on a ski mountain where the cross-country course was basically UP, across, DOWN and zig zagging down the ski slope face was always a blast. I don’t do the stupid stuff anymore. In no small part because of mini-waccaloons that will depend on my brain functioning more or less normally for another several years. But…..I never was an idiot and if you look at this picture with an educated eye you’ll see that I have the rear locked up a bit too much and could have been making better time. Oh, hell, take a listen and I’ll meet you after the jump


Terminated

August 23, 2013

The Twitt @TellDrTell wondered:

This brings up the question of what is meant by the “terminal degree“, and this way of phrasing it focuses on one aspect of the concept, namely the “highest” degree.

For many fields of endeavor, some sort of degree that includes the word “Doctor” is the terminal degree. These ones are familiar to my audience.

  • Doctor of Philosophy (PhD or DPhil if you are a Brit)
  • Doctor of Medicine (MD)
  • Doctor of Veterinary Medicine (DVM)
  • Doctor of Dental Surgery (DDS)

These terminal degrees happen to predominate in our research fields and in the population of PIs who secure major grant awards. There are also others of potential interest to this audience, including

  • Juris Doctor (JD. Did you know lawyers can call themselves “Doctor”? Why don’t they?)
  • Doctor of Education (Ed.D.; fraught with implications)
  • Doctor of Psychology (PsyD)
  • Doctor of Pharmacy (PharmD)

If you hold only one Doctoral degree then presumably most folks would agree this is the highest one. But @TellDrTell wondered which to consider the highest one if a person holds two doctoral degrees.

Wikipedia and other sources tend to distinguish research degrees from professional degrees. In our usual pool of Doctoral letters, the Ph.D.s are research degrees and most of the other ones are professional degrees. This is underlined by the fact that most of the dual Doctoral degree subpopulation holds a PhD and one of the so-called professional degrees.

Being a research degree, obviously the PhD is higher, better and/or more terminal.

But wait. The Wikipedia lists a whole other bunch of research doctorates, like Doctor of Management and Doctor of Modern Languages, that you’ve never heard of and sound like some scam to avoid doing a Doctor of Philosophy in the respective subjects. In more familiar terms, there are PhDs in both Pharmacology and Psychology, so the PsyD and PharmD seem like lesser degrees to some folks. More limited.

Obviously those are lesser than the professional doctorates in Medicine, Dental Surgery, Veterinary Medicine and Juris. Wait, Juris? Is that law degree more “terminal” than an Ed.D. that was awarded after 6 years* of painstaking thesis research?

Gaaah!

Okay, let’s just say the Ph.D. is the best, all others are lesser and you should list your Ph.D. as your highest degree if you are also an M.D. or a D.V.M.

Unless you went to a combined M.D./Ph.D. program, in which case I think you are this, but not separately either an M.D. or a Ph.D. And yes, unsurprisingly, I have heard at least one M.D., Ph.D. speak of how awesomely better this is than those lesser M.D./Ph.D. folks**.

And since it is usually a Doctor of Philosophy in [Subject], and the sciences are the most awesome, I think we can safely say that if you have two degrees in which one is a Ph.D. [Science] and the other is Ph.D. [Philosophy], the latter*** is the higher one. And you win the entire world’s respect.

__
*I don’t actually know the duration of Ed.D. programs.
**Gawd, I love academics.

***Because Philosophy squared

GMP has an observation up at Academic Jungle that resonates:

2) Nobody ever pats you on the back and tells you “Good job.” Ever. Except perhaps the people whose approval in the professional arena doesn’t mean much, like your partner or your parents. …The fact that you are supposed to forever go on based on your own convictions and some internal source of energy (must be nuclear, eh?), without ever expecting to get a little energy back in the form of praise from colleagues in the professional community is a really tall order. … I never expected that I would have to be the sole engine propelling myself and all my group members for the next 40+ years. I praise my students when they do a good job, but for us grownups there is no such thing. I suppose you get an award every now and then, but what’s that, a pat on the back every few years? That’s a lean affirmation diet.

It’s totally true. Frequently so, anyway. Those who are supposed to be reviewing and helping with your career locally, such as a Chair or even Dean type of person, are universally motivated to tell you that you are not good enough so that you will work harder. Grant review, paper review…there are some warm fuzzy comments made but somehow the criticisms seem to loom larger. Peers who want to talk about your papers like to bring up the stuff you didn’t do or the flaws or explicate the methods. This is science and a healthy part of it, but it can be hard on the ego since there is never any impression of universal acclaim followed by more and more and more unquestioning approval of your work.

There is a subtle feeling we encourage in science about expectations as well. Sure, you just published a paper, got a grant from the NIH or graduated a PhD student…..but here’s the trick. If you really belong, if you are really one of us…that is expected! So why should there be any special notice for your accomplishment?

Each novel accomplishment for your career simply raises you to a new level of expectation. Just scored your first Nature paper from your own laboratory? Hey, that’s great. But now you are a CNS Glamour Lab and, well, of course that is what you do. (Hey, when’s the next one coming out?)

Over the years I have tried to go out of my way to congratulate my peers, especially the more junior ones, when I see they got a new grant award. Tried to take special notice of their papers and congratulate them on trainees flying the coop. Say something about their selection for study section. I’ve tried to remind some of my closer peers more directly when I see them as an important part of the field and our overall endeavor. And I don’t just limit it to the plebes like me or extramural scientists either. SROs and POs in the NIH need to get some positive feedback too. Your senior faculty won’t be hurt to know you think of them as the best person to serve as Chair or even to make a run at a Deanship (should they be so crazy).

I am not natively a person who is effusive in praise. So I’ve had to make a conscious effort. I’ve done so ever since coming into contact with the Imposter Syndrome in blog-discussions, which was a big factor in crystallizing my thoughts on this.

tl;dr version: Your Humble Narrator is a sexist pig apologist for the old school heteronormative stultifying patriarchal system, hates women, resents his spouse and would leave his kids with the dogcatcher at the slightest excuse.

More after the jump….

There is an entry up on the Scientific American Blog Network’s Guest Blog by two of the principals of uBiome. In Crowdfunding and IRBs: The Case of uBiome Jessica Richman and Zachary Apte address prior criticism of their approach to the treatment of human subjects. In particular, the criticism over their failure to obtain approval from an Institutional Review Board (IRB) prior to enrolling subjects in their study.

In February, there were several posts about the ethics of this choice from a variety of bloggers. (See links from Boundary Layer Physiology (here, here, here) Comradde Physioprof (here, here, here), Drugmonkey (here), Janet Stemwedel (here), Peter Lipson (here).) We greatly appreciate the comments, suggestions and criticisms that were made. Some of the posts threw us off quite a bit as they seemed to be personal attacks rather than reasoned criticisms of our approach.

If you follow the linked blog posts, you will find that when Richman and/or Apte engaged with the arguments, they took a wounded tone. This is a stance they continue.

We thought it was a bit… much, shall we say, to compare us to the Nazis (yes, that happened, read the posts) or to the Tuskegee Experiment because we funded our project without first paying thousands of dollars for IRB approval for a project that had not (and might never have) happened.

I was one of the ones who brought up the Tuskegee Syphilis Experiment. Naturally, this was by way of making an illustrative example of why we have modern oversight of research experiments. I did not anticipate that any of the research planned by the uBiome folks would border on this sort of horrible mistreatment of research subjects. Not at all. And mentioning that older history does not so accuse them either.

PhysioProf made this point very well.

UPDATE 2: The need for IRB review has little to do with researchers’ intentions to behave ethically–nowadays it is rare that we are talking about genuinely evil exploitative abusive shitte–but rather that it is surprisingly complicated to actually implement processes, procedures, and protocols that thoroughly safeguard human subjects’ rights and safety, even with the best of intentions. This inquiry has absolutely nothing to do with whether the uBiome people are nice guys who just want to do cool science with the best of intentions. That is irrelevant.

IRBs are there exactly to ensure that earnest scientists with the best of intentions in their hearts are forced to think through all of the possible ramifications of their proposed human subjects research projects in a thorough and systematic manner before they embark on their research. The evidence we are in possession of as of now suggests strongly that uBiome has not done so.

This is a critical reason why scientists using human or animal subjects need to adhere to the oversight/protection mechanisms. The second critical reason is that the people doing the research are biased. Again, it is not the case that one thinks all scientists are setting out to do horrible Mengele type stuff in pursuit of their obsessions. No. It is that we all are subject to subtle influences on our thinking. And we, as humans, have a very well documented propensity to see things our own way, so to speak. Even when we think we are being totally objective and/or professional. By the very nature of this, we are unable to see for ourselves where we are going astray.

Thus, external oversight and review provides a needed check on our own inevitable bias.

We can all grumble about our battles with IRBs (and Institutional Animal Care and Use Committees for animal subject research). The process is far from perfect, so a little bit of criticism is to be expected.

Nevertheless I argue that we should all embrace the oversight process unreservedly and enthusiastically. We should be proud, in fact, that we conduct our research under such professional rules. And we should not operate grudgingly, ever seeking to evade or bypass the IRB/IACUC process.

Richman and Apte of uBiome need to take this final step in understanding. They are not quite there yet:

Before we started our crowdfunding campaign, we consulted with our advisors at QB3, the startup incubator at UCSF, and the lawyers they provided us. We were informed (correctly) that IRBs are only required for federally funded projects, clinical trials, and those who seek publication in peer-reviewed journals. That’s right — projects that don’t want federal money, FDA approval, or to publish in traditional journals require no ethical review at all as far as we know.

Well, that is just plain wrong. Being a professional scientist is what “requires” us to seek oversight of our experiments. I believe I’ve used the example in the past of someone like me buying a few operant chambers out of University Surplus, setting them up in my garage and buying some rats from the local pet store. I could do this. I could do this without violating any laws. I could dose them* with all sorts of legally-obtainable substances, very likely. Sure, no legitimate journal would take my manuscript but heck, aren’t we in an era where the open access wackaloons are advocating self-publishing everything on blogs? I could do that. Or, more perniciously, this could be my little pilot study incubator. Once I figured I was on to something, then I could put the protocols through my local IACUC and do the “real” study and nobody would be the wiser.

Nobody except me, that is. And this is why such a thing is never going to happen. Because I know it is a violation of my professional obligations as I see them.

Back to Richman and Apte’s excuse making:

Although we are incubated in the UCSF QB3 Garage, we were told that we could not use UCSF’s IRB process and that we would have to pay thousands of dollars for an external IRB. We didn’t think it made sense (and in fact, we had no money) to pay thousands of dollars on the off chance that our crowdfunding campaign was a success.

and whining

We are happy to say that we have completed IRB review and that our protocol has been approved. The process was extremely time-consuming, and expensive. We went back and forth for months to finally receive approval, exchanging literally hundreds of pages of documents. We spent hundreds of hours on the project.

First, whatever the UCSF QB3 Garage is, it was screwing up if it never considered such issues. Second, crying poverty is no excuse. None whatsoever. Do we really have to examine how many evils could be covered under “we couldn’t afford it”? Admittedly, this is a problem for this whole idea of crowd-funded science but…so what? Solve it. Just like they** had to solve the mechanisms for soliciting the donations in the first place. Third…yeah. Doing things ethically does require some effort. Just like conducting experiments and raising the funds to support them requires effort. Stop with the whining already!

The authors then go on in a slightly defensive tone about the fact they had to resort to a commercial IRB. I understand this and have heard the criticisms of such Pay-for-IRB-oversight entities. From my perspective this is much, much less of a concern. The absolute key is to obtain some oversight that is independent of the research team. That is first-principles stuff to my view. They also attempt to launch a discussion of whether novel approaches to IRB oversight and approvals need to be created to deal with citizen-science and crowd-funded projects. I congratulate them on this and totally agree that it needs to be discussed amongst that community.

What I do not appreciate is their excuse making. Admitting their error and seeking to generate new structures which satisfy the goal of independent oversight for citizen-science in the future is great. But all the prior whinging and excuse making, combined with the hairsplitting over legal requirements, severely undercuts progress. That aspect of their argument is telling their community that the traditional institutional approaches do not apply to them.

This is wrong.

UPDATE: Read uBiome is determined to be a cautionary tale for citizen science over at thebrokenspoke blog.
__
*orally. not sure civilians can get a legal syringe needle anywhere.

**(the global crowdfund ‘they’)

Additional Reading:

Animals in Research: The conversation begins
Animals in Research: IACUC Oversight
Animals in Research: Guide for the Care and Use of Laboratory Animals
Animals in Research: Mice and Rats and Pigeons…Oh My!
Virtual IACUC: Reduction vs. Refinement
Animals in Research: Unnecessary Duplication

Academic comfort levels

July 23, 2013

I have a question for you, Dear Reader.

During what fraction of the time you spent at each major career stage in academics – undergrad, grad school, postdoc, faculty level (TT or no, plz specify) – did you feel comfortable?

Not whether you felt it was easy, exactly, but did you feel as though you had it handled? As though there was little doubt you were doing a good job of what you were expected to do.

For me, undergrad all 4 yrs, grad school 3 yrs, postdoc maybe a scattered 2 yrs total time.

At the faculty level maybe my first three years and again for maybe 6 mo last year.

For faculty, make special note of the tenure decision: were you feeling comfortable in the few years leading up to that?