February 20, 2014
February 19, 2014
Glad to see Zerhouni walk this one back…
Originally posted on Speaking of Research:
On June 4th 2013, Elias Zerhouni, a former Director of the National Institutes of Health (NIH) made some comments at a Scientific Management and Review Board (SMRB) meeting that were reported in NIH Record as follows:
“We have moved away from studying human disease in humans,” he lamented. “We all drank the Kool-Aid on that one, me included.” With the ability to knock in or knock out any gene in a mouse—which “can’t sue us,” Zerhouni quipped—researchers have over-relied on animal data. “The problem is that it hasn’t worked, and it’s time we stopped dancing around the problem…We need to refocus and adapt new methodologies for use in humans to understand disease biology in humans.”
This comment has been used by many animal rights activists to claim that animal research does not work. Here is a selection (many more examples exist):
February 19, 2014
ResearchGate is, as you are undoubtedly aware, the latest and most annoying version of “The Facebook/LinkedIn for Scientists™”.
I just noticed that they have some sort of request for you to “confirm” that your publication indeed cited their publication.
What POSSIBLE goal does this serve? I mean, just look at the damn paper! Did it cite yours? Yes/No. Done.
January 31, 2014
An article in Slate makes the case, a bit excitedly, that popular college/university ranking entities should present the ratio of permanent to temporary faculty more prominently. I agree wholeheartedly that this is information the consumer needs to know. The relative adjunctification is highly pertinent to the quality of education on offer.
The simple ratio of teaching bodies is not enough, though it is probably the only thing Deans and Presidents are willing to report.
Ideally, the percentage of student contact hours, including labs and sections, would be reported by the tenure-track status of the instructor.
January 29, 2014
I had a thought occur to me over the past few days. It’s been growing along at the back of my mind and is only partially crystallized.
What if PIs of a given class of interest, whether that be sex, ethnicity, nation of origin or whatever, are not randomly distributed across the various topic domains supported by the NIH? What if a PI of characteristic X tends to work on Topic B using Model M whereas a PI of characteristic Y tends to work on Topic A using Model H?
What if the funding rates for Topic A differed from those for Topic B? Or if applications using Model M consistently succeeded at different rates than applications using Model H?
I didn’t see any covariates for topic domain or even the funding IC in the Ginther report.
Surely someone at NIH is thinking about this. Surely?
I have two anecdotes for your consideration.
First, as with many areas of science, the ones dear to me suffer from a sex bias. There is a huge tendency to do the animal studies in male animals. Any study using female animals is very frequently a sex-comparison study and is proposed, explicitly or implicitly, as a comparison with the default, i.e., male. I've talked about this before. The NIH also takes pains to fix the generalized reluctance via their most functional technique: the call for applications for a dedicated pool of money. In theory, the awarding of grants on sex differences or on issues specific to women's health will then spur additional work. Perhaps create a sustained program or even a career of work on this topic.
My anecdote is that I've noticed over the years (possible confirmation bias here) that women in my field have a greater representation than men in these sorts of studies. Sex-differences models and women's health issues in my fields of interest seem to have women as the driving investigators more often than their overall representation would predict.
If this generalizes, then we will want to know whether the competitive success of such grant applications, driven by topic, is contaminating our estimates of women PIs' success.
The second anecdote is older and comes from my long history participating on the "Diversity" committees of various academic institutions. Back in the dark ages I recall an incident where a Prof in the experimental sciences had to go to war with a Dean who was in charge of undergraduate summer research funds for underrepresented individuals. The Prof had a candidate who wanted to work in the experimental sciences, but the awards were generally being made to kids who wanted to work on academic topics related to underrepresented groups. The Dean thought this was the most important thing to do. In this case the Prof won his battle in the second year of trying, over the objections of the Dean.

I keep in touch with some of my undergraduate professors, and I can say that said undergrad went on to become an NIH-funded investigator (who still fails to work on issues directly related to underrepresentation). I have no idea if any of the other underrepresented summer research students went on to glorious academic careers in their respective disciplines; perhaps they did. But this is not the point.

The point is that perhaps I am a little too glib about the pipeline implications of Ginther. Perhaps the grooming of underrepresented minority undergrads for a career in academics is itself not topic neutral. And the shaping and shifting from that very early stage may dictate field of study and therefore the eventual success rate at the NIH game.
Assuming, of course, that Topic A enjoys a different success rate from Topic B when the grants are under review at the NIH.
January 29, 2014
Should I cite my research articles “diversely”?
That is, should I give the slightest thought to whether the people I cite, the lab heads in particular, represent the full diversity of my field? Of my country? The world?
If I consider this at all, am I compromising the purity and integrity of my research manuscripts?
January 22, 2014