When disaster strikes

November 30, 2012

NIH Director Collins is very sad about laboratories stricken by megastorm Sandy.

He suggests we should all “help”.

Well sure. We should all help other scientists when disaster strikes.

But you know….? Disaster is striking all over the country. One lab at a time. When they can’t keep their funding. And yeah, many of these result in the loss of “cutting edge” work. The loss of “a decade of” samples or mouse lines or other valuables.

What makes the victims of a natural disaster any more worthy than any of the rest of us?

I’ve been entertaining myself in a twitscussion with my good friend @mrgunn, a dyed-in-the-wool altmetrics wackanut obsessive. It all started because he RT’d a reference to an article by Taylor and Thorisson entitled “Fixing authorship – towards a practical model of contributorship” which includes subsections such as “Authorship broken, needs fixing” and “Inadequate definitions of authorship”.

These were the thrusts of the article that annoyed me since I feel there is this whole area of interest that is based on a footing of disgruntled sand. In short, there IS no problem with authorship that “needs fixing”. This has not been proven by the people advancing this agenda to any believable degree and you see an awful lot of “everyone knows” type of assertion.

Some other headings in the bit are illustrative; let’s start with “Varied authorship conventions across disciplines”. This is true. But it is not a problem. My analogy of the day is different languages spoken by different people. You do not tell someone speaking a language other than the one you understand that they are doing it wrong and that we all just need to learn Esperanto. What you do is seek a translation. And if you feel like that is not giving you a “true” understanding, by all means, take the time to learn the language with all of its colloquial nuance. Feel free.

Heck, you can even write a guide book. For all the effort these “authorship is broken” wackaloons take to restate the unproven, they could write up a whole heck of a lot of style-guidage.

“….the discipline of Experimental Psychology is heavily driven by Grand Theory Eleventy approaches. Therefore the intellectualizing and theorizing is of relatively greater importance and the empirical data-making is lesser. The data may reflect only a single, rather simple model for producing it. This is why you see fewer authors, typically just a trainee and a supervisor. Or even single-author papers. In contrast, the more biological disciplines in the Neuroscience umbrella may be more empirical. Credit is based on who showed something first, and who generated the most diverse sets of data, rather than any grand intellectualizing. Consequently, the author lists are long and filled with people who contributed only a little bit of data to each publication….”

Done. Now instead of trying to force a review of a person’s academic contributions into a single unified framework, one can take the entirely easy step of understanding that credit accrues differently across scientific disciplines.

ahhhh, but now we come to the altmetrics wackaloons who are TrueBelievers in the Church of Universal Quantification. They insist that somehow “all measures” can be used to create….what? I suppose a single unified evaluation of academic quality, impact, importance, etc. And actually, they don’t give a rat’s patootie about the relevance, feasibility or impact of their academic endeavor to capture all possible measures of a journal article or a contributing author. It doesn’t matter if the measure they use entails further misrepresentations. All that they care about is that they have a system to work with, data to geek over and eventually papers to write. (some of them wish to make products to sell to the Flock, of course).

This is just basic science, folks. How many of us have veeeeeery thin justifications for our research topics and models? Not me of course, I work on substance abuse…but the rest of y’all “basic” scientists….yeah.

The wackaloon justifications sound hollow and rest on very shifty support because they really don’t care. They’ve landed on a few trite, truthy and pithy points to put in their “Introduction” statements and moved on. Everyone in the field buys them, nods sagely to each other and never. bothers. to. examine. them. further. Because they don’t even care if they believe it themselves, their true motivation is the tactical problem at hand. How to generate the altmetrics data. Perhaps secondarily how to make people pay attention to their data and theories. But as to whether there is any real world problem (i.e., with the conduct of science) to which their stuff applies? Whether it fixes anything? Whether it just substitutes a new set of problems for an old set? Whether the approach presents the same old problems with a new coat of paint?

They don’t care.

I do, however. I care about the conduct of science. I am sympathetic to the underlying ideas of altmetrics as it happens, so far as they criticize the current non-altmetric, the Journal Impact Factor. On that I agree that there is a problem. And let’s face it, I like data. When I land on a PLoS ONE paper, sure, I click on the “metrics” tab. I’m curious.

But make no mistake. Tweets and Fb likes and blog entries and all that crapola just substitute a different “elite” in the indirect judging of paper quality. Manuscripts with topics of sex and drugs will do relatively better than ones with obscure cell lines faked up to do bizarre non-biological shit on the bench. And we’ll just end up with yet more debates about what is “important” for a scientist to contribute. Nothing solved, just more unpleasantness.

Marrying these two topics together we get down to the discussion of the “Author Contribution” statement, increasingly popular with journals. Those of us in the trenches know that these are really little better than the author position. What does it tell us that author #4 in a 7-author paper generated Fig 3 instead of Fig 5? Why do we need to know this? So that the altmetrics wackaloons can eventually tot up a score of “cumulative figures published”? Really? This is ridiculous. And it just invites further gaming.

The listed-second, co-equal contribution is an example. Someone dreamed this up as a half-assed workaround to the author-order crediting assumptions. It doesn’t work, as we’ve discussed endlessly on this blog, save to buy off the extra effort of the person listed not-first with worthless currency. So in this glorious future in which the Author Contribution is captured by the altmetrics wackaloons, there will be much gaming of the things that are said on this statement. I’ve already been at least indirectly involved in some discussion of who should be listed for what type of contribution. It was entirely amiable but it is a sign of the rocky shoals ahead. I foresee a solution that is exactly as imprecise as what the critics are on about already (“all authors made substantial contributions to everything, fuck off”) and we will rapidly return to the same place we are now.

Now, is there harm?

I’d say yes. Fighting over irrelevant indirect indicators of “importance” in science is already a huge problem because it is inevitably trying to fit inherently disparate things into one framework. It is inevitably about prescribing what is “good” and what is “bad” in a rather uniform way. This is exemplified by the very thing these people are trying to criticize, the Journal Impact Factor. It boggles my mind that they cannot see this.

The harms will be similar. Scientists spending their time and effort gaming the metrics instead of figuring out the very fastest and best way to advance science*. Agencies will fund those who are “best” at a new set of measures that have little to do with the scientific goals….or will have to defend themselves when they violate** these new and improved standards. Vice Deans and P&T committees will just have even more to fight over, and more to be sued about when someone is denied tenure and the real reasons*** are being papered over with quantification of metrics. Postdocs agonizing over meeting the metrics instead of working on what really matters, “fit” and “excitement”.

__
*Which is “publish all the data as quickly as possible” and let the hive/PubMed sort it out.
**see complaints from disgruntled NIH applicants about how they “deserve” grants because their publications are more JIF-awesome or more plentiful than the next person.
***”an asshole that we don’t want in our Dept forever”

So one of the Twitts was recently describing a grant funding agency that required listing the Impact Factor of each journal in which the applicant had published.

No word on whether or not it was the IF for the year in which the paper was published, which seems most fair to me.

It also emerged that the applicant was supposed to list the Journal Impact Factor (JIF) for subdisciplines, presumably the “median impact factor” supplied by ISI. I was curious about the relative impact of listing a different ISI journal category as your primary subdiscipline of science. A sample of ones related to the drug abuse sciences would be:

Neurosciences 2.75
Substance Abuse 2.36
Toxicology 2.34
Behavioral Sciences 2.56
Pharmacology/Pharmacy 2.15
Psychology 2.12
Psychiatry 2.21

Fascinating. What about…
Oncology 2.53
Surgery 1.37
Microbiology 2.40
Neuroimaging 1.69
Veterinary Sciences 0.81
Plant Sciences 1.37

aha, finally a sub-1.0. So I went hunting for some usual suspects mentioned, or suspected, as low-citation-rate disciplines…
Geology 0.93
Geosciences, multidisc 1.33
Forestry 0.87
Statistics and Probability 0.86
Zoology 1.06
Meteorology 1.67

This is a far from complete list of the ISI subdisciplines (and please recognize that many journals can be cross-listed), just a non-random walk conducted by YHN. But it suggests that the range is really restricted, particularly when it comes to closely related fields, like the ones that would fall under the umbrella of substance abuse.

I say the range is restricted because as we know, when it comes to journals in the ~2-4 IF range within neuroscience (as an example), there is really very little difference in subjective quality. (Yes, this is a discussion conditioned on the JIF, deal.)

It requires, I assert, at least the JIF ~6+ range to distinguish a manuscript acceptance from the general herd below about 4.

My point here is that I am uncertain that the agency which requires listing disciplinary median JIFs is really gaining an improved picture of the applicant. Uncertain if cross-disciplinary comparisons can be made effectively. You still need additional knowledge to understand if the person’s CV is filled with Journals that are viewed as significantly better than average within the subfield. About all you can tell is that they are above or below the median.

A journal which bests the Neurosciences median by a point (3.75) really isn’t all that impressive. You have to add something on the order of 3-4 IF points to make a dent. But maybe in Forestry if you get to only a 1.25 this is a smoking upgrade in the perceived awesomeness of the journal? How would one know without further information?
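If you want to see why knowing only that a journal sits above its subdiscipline median tells you so little, here is a minimal sketch in Python. The category medians are the values quoted above; the two example journals and their JIFs are entirely hypothetical, made up for illustration.

```python
# Rough sketch: compare a journal's JIF to its ISI subdiscipline median.
# The category medians are the values quoted above; the example journals
# and their JIFs are hypothetical, purely for illustration.

CATEGORY_MEDIANS = {
    "Neurosciences": 2.75,
    "Substance Abuse": 2.36,
    "Forestry": 0.87,
}

def compare_to_median(jif, category):
    """Return (difference, ratio) of a journal's JIF versus its category median."""
    median = CATEGORY_MEDIANS[category]
    return jif - median, jif / median

examples = [
    ("Hypothetical Neuro Journal", 3.75, "Neurosciences"),
    ("Hypothetical Forestry Journal", 1.25, "Forestry"),
]

for name, jif, category in examples:
    diff, ratio = compare_to_median(jif, category)
    print(f"{name}: JIF {jif}, {diff:+.2f} vs. the {category} median ({ratio:.2f}x)")
```

The neuroscience journal beats its median by a full point yet is only about 1.4 times the median; the forestry journal beats its median by less than half a point and is also about 1.4 times its median. Neither the difference nor the ratio tells you whether the journal is viewed as significantly better than average within its subfield, which is exactly the additional-knowledge problem described above.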

PSA on Journal selection

November 27, 2012

Academic trainees should not be publishing in journals that do not yet have Impact Factors. Likewise they should not be publishing in journals that are not indexed by the major search database (like PubMed) used in their field.

The general science journal Nature has an interesting editorial up:


Earlier this year, we published a Correspondence that rightly took Nature to task for publishing too few female authors in our News and Views section (D. Conley and J. Stadmark Nature 488, 590; 2012). Specifically, in the period 2010–11, the proportions of women News and Views authors in life, physical and Earth sciences were 17%, 8% and 4%, respectively. The authors of the Correspondence had taken us to task in 2005 with a similar analysis for the authorship of our Insight overview articles, and gave us slight credit for having improved that position.

they then went on to perform some additional reviews of their performance.


Our performance as editors is much less balanced.
Of the 5,514 referees who assessed Nature’s submitted papers in 2011, 14% were women.
Of the 34 researchers profiled by journalists in 2011 and so far in 2012, 6 (18%) were women.
Of externally written Comment and World View articles published in 2011 and so far in 2012, 19% included a female author.

then, after the inevitable external blaming, they actually get down to it.

We therefore believe that there is a need for every editor to work through a conscious loop before proceeding with commissioning: to ask themselves, “Who are the five women I could ask?”

Under no circumstances will this ‘gender loop’ involve a requirement to fulfil a quota or to select anyone whom we do not know to be fully appropriate for the job, although we will set ourselves internal targets to help us to focus on the task.

HAHHAHAAH. “We’re going to have quotas but we’re not using quotas!” Good one Nature!

What a load of crap. People in academia and other places that are dealing with representativeness need to just stop falling for this right-wing, anti-affirmative-action, anti-diversity bullshit talking point. Quotas are just fine. Numbers are the way clearly discriminatory and unequal practices are revealed and they are the only way we’re going to know when we’ve improved.

But…regardless. Good on Nature for this one.

For the rest of you, keep the spotlight shining brightly upon them. Because they admit themselves that this gender inequality of their pages has been brought to their awareness as long ago as 2005 and. they. still. haven’t. really. improved. Make no mistake, improving diversity on any measure is not easy. It takes highly sustained attention, effort and force of will to change entrenched, unthinking* cultural biases. Not everyone in the organization will even agree with the goals expressed in this editorial; some will work harder to find excuses not to change than they do to make improvements. So I don’t expect miracles.

But Nature, you are a premier venue of scientific publication which gives you a very high platform from which to enact cultural change. I do hope you are not blowing smoke on this one.

__
*which they are for the most part.

The ORI blog has an entry up about a new program which attempts to rehabilitate those who have committed academic misconduct.

RePAIR’s premise is that an intense period of intervention, with multiple participants from different institutions who spend several days together at a neutral site, followed by a lengthy period of follow up activities back at their home institution, will rebuild their ethical views. ORI doesn’t know whether RePAIR will work and cannot formally endorse it. But ORI staff do find RePAIR an intriguing and high-minded experiment that research institutions may wish to consider as a resource.

I like the idea of experimenting. But I have to admit I’m skeptical. I do not believe that academic misconduct at the grad student, postdoc and professorial level is done out of ignorance. I believe that it occurs because someone is desperate and/or weak and allows the pressures of this career path to nudge them down the slippery slope.

Now true, many cognitive defenses are erected to convince themselves that they are justified. Perhaps the “everyone is doing it” one is something that can be addressed with these re-education camps. But many of the contingencies won’t go away. There is no weekend seminar that can change the reality of the NIH payline or the GlamourMag chase.

I suspect this will be a fig leaf that Universities use to cover up the stench should they choose to retain a convicted fraudster or to hire one.

Speaking of which, a Twitt yesterday alleged that Marc Hauser has been reaching out to colleagues, seeking collaboration. It made me wonder if anyone with an ORI finding against them has ever returned in a meaningful way? Whether any University would hire them, whether they would be able to secure funding and whether the peer review process would accept their data for publication.

Can anyone think of such a person?

Holy Moly! By way of the CPDD blog, an announcement from NIH Director Collins:

After rigorous review and extensive consultation with stakeholders, I have concluded that it is more appropriate for NIH to pursue functional integration, rather than major structural reorganization, to advance substance use, abuse, and addiction-related research. To that end, the National Institute on Drug Abuse (NIDA) and the National Institute on Alcohol Abuse and Alcoholism (NIAAA) will retain their institutional identities, while strengthening their ongoing efforts to work more closely with each other and with related research programs at other institutes and centers.

And that’s all she wrote, folks. Like I’ve always said, if you can’t merge these two ICs then there is no point in any talk about merging any existing ICs.

For some idea of what the “functional integration” means, see this site.

Background reading:

    The Merger of NIDA and NIAAA: Here We Go!
    Grrrrrrrrr. CPDD and RSoA annoyance
    Your academic society is working for (or against?) you under the NIH Grant waterline
    The Gender Smog We Breathe: The NIH Edition
    NIH Director Collins moves forward with NIAAA/NIDA merger
    Is NIAAA a better steward of NIH grant monies than is NIDA?
    Update on the NIAAA/NIDA Merger
    Beverage industry is not enthusiastic about merging NIAAA with NIDA
    The NIDA/NIAAA merger and the newly proposed NIH Center for Translational Research.

Co-Communicating Author

November 15, 2012

What is “communicating author” attribution for in your field? How is it interpreted?

(and do you note it on a CV or job application in any way?)

First it was NIAMS which joined NIGMS in publishing funding outcome data that is of high interest to applicants. I’m referring to the number of grants funded/not funded by priority score. NIGMS has presented these data for ages and the National Institute of Arthritis and Musculoskeletal and Skin Diseases put theirs up for the first time just this summer.

The new data from NINDS for Fiscal Year 2011 are remarkably consistent with the datasets posted by NIGMS and by NIAMS.

This graph presents the R01 data, inclusive of new and competing continuation applications and of both experienced and New Investigators/Early Stage Investigators. You can visit their page for further graphical breakdowns.

What is readily apparent from this graph, as with all similar graphs that I’ve seen to date, is that there is a readily perceptible percentile-rank payline under which essentially all grants are funded. Above this apparent payline, funding is possible but comparatively rare. Furthermore, in this zone of exception funding (aka grey zone, aka “pickups”) the chances of getting funded are best for apps with scores closest to the apparent payline. The probability of the exception funding decreases as a function of this distance.

This tells you, as always, that the major input to the system continues to be the score ranks (and therefore percentiles) decided by the study section review. The relative impact of Program decisions to fund grants out of order is comparatively small. In the broader scheme.

But yes, if you are one of those with an application in the 15-18%ile that didn’t get funded by NINDS while they were picking up 25+%ile scores this is a HUGE impact. I’ve been on both sides of this fence (I assume) in the past so….yeah.

The rather interesting bits in all of this have to do with my comment about “remarkably consistent” and “apparent payline”. As you know, some of the NIH ICs publish their paylines formally and some do not. In fact, many of the latter insist that they do not in fact have a payline. As it happens, most ICs from whom I tend to seek money fall into the latter category. What you realize over the years of calling Program Officers to beg for some small indication of your grant’s chances of funding is that they all have paylines. There is always some internal sense of what kind of score is going to be a near-certainty, what is a “no-way” and what might be arguable depending on the final details. What you also gain, through this and through talking to your peers about what funded and what did not, is a vague picture of a distribution much like the above figure. So, as we start to see more and more ICs (with different alleged payline policies) post their data, we can see that they all behave more or less the same when it comes to funding grants by initial review scores/percentiles.
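For anyone who wants to put a rough number on an “apparent payline” from one of these posted charts, a minimal sketch along these lines will do it, assuming you transcribe the funded/not-funded counts per percentile bin off the chart yourself. The counts below are invented placeholders, not the actual NINDS FY2011 numbers.

```python
# Sketch: read an "apparent payline" off funded / not-funded counts binned
# by percentile rank. The counts below are invented placeholders, NOT the
# actual NINDS FY2011 numbers; substitute values transcribed from an IC chart.

counts = {
    # upper edge of percentile bin: (funded, not_funded)
    5: (40, 0),
    10: (38, 2),
    15: (20, 20),
    20: (6, 30),
    25: (2, 38),
    30: (0, 40),
}

def funding_rates(counts):
    """Fraction funded within each percentile bin, in percentile order."""
    return {p: f / (f + nf) for p, (f, nf) in sorted(counts.items())}

def apparent_payline(counts, threshold=0.95):
    """Last consecutive bin (from the best scores down) funded at >= threshold."""
    line = None
    for p, rate in funding_rates(counts).items():
        if rate < threshold:
            break
        line = p
    return line

for p, rate in funding_rates(counts).items():
    print(f"bin ending at {p}th percentile: {rate:.0%} funded")
print(f"apparent payline: around the {apparent_payline(counts)}th percentile")
```

The falloff above the line, rather than a hard cutoff, is the grey zone / pickup pattern: exception funding that thins out with distance from the apparent payline.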

There are also certain ICs that bruit it about that they stick strictly to the order of peer review. I.e., that they never skip a grant under the payline and, more importantly, that they never fund differentially above the payline. They imply quite strongly, or state outright, that grants with equal ranks above the payline will all be funded or not funded.

I have been immensely skeptical of such claims and insisted that I wouldn’t believe it until I see the funding data.

NINDS is one of these ICs that claims/is claimed to fund in “strict order”.

These data show that is totally false.

I LOVE being right.

__
[h/t: PhysioProf]

There’s a new Case Report in the Journal of Analytical Toxicology

Adamowicz P, Tokarczyk B, Stanaszek R, Slopianka M. Fatal Mephedrone Intoxication–A Case Report. J Anal Toxicol. 2012 Nov 7. [Epub ahead of print] [PubMed][DOI]

The victim was a 30-year-old male, found in a stairwell in a “critical state”. Emergency response was ineffectual and the individual died at the scene. The toxicology testing found his blood and vitreous humour positive for mephedrone (5.5 and 7.1 ug/mL, respectively). There was no alcohol in the individual, no positives on “routine screening analysis”, nor any sign of amphetamine, methamphetamine or MDMA. The 2C-B compound initially suspected by police (based on some field assay, it looks like) was not confirmed in the powder in his possession or in tissue samples.

That’s it, short and sweet. The mephedrone (aka 4-methylmethcathinone) killed him.

__
Additional reading on the substituted cathinone designer drugs of abuse can be found in my archive.

Related reading on MDMA-induced fatality can be found in the MDMA Archive.

Hoppity Hop Hop!

November 9, 2012

Only one day (today) left to donate through my challenge page in the Science Bloggers for Students drive to a classroom project at DonorsChoose, folks.

How about a nice frog project?

Do you remember the first time you participated in a dissection in your science class? Well, my students have not had that experience yet but I would like to change that for them.
I have great students! Our school is a Title 1 school and most of our students come from a high poverty area in South Carolina. They absolutely love the idea of having the opportunity to dissect a frog and learn more about animal anatomy. They are VERY bright students and are eager to learn.

Maybe a little gene wrangling?

I have conducted this laboratory activity in the last two years of AP Biology, and most students cite it as THE reason they choose to study science or continue education and training in biotechnology. This lab turns students into scientists–it allows them to take samples of their own DNA, use a technique called PCR to make billions of copies of identical DNA, and then analyze their own genes to determine their genotype for a specific trait.

Every little bit counts. And if you enter the matching code SCIENCE, your donation up to $100 will be doubled.

Thanks again to all who have donated to the Scientopia and other blog challenges.

Confession

November 7, 2012

I actually believed, until recently, that the “our internal polling shows…” routine was just part of the spin machine. That at worst the campaign might keep the candidate fooled to keep his confidence up.

But that for realzies the campaigns wanted to operate on the best possible polling data.

So they could know where to devote resources, where to fly the candidate for speechifying and all that.

Voting Day in the US

November 6, 2012

Did you, my US Readers?

I hope so.

I voted on a number of things this year. For Obama to continue as President, obviously. A few nearly-uncontested other races and a few where it matters. The odd ballot initiative.

My Congress Critter race was made easier for me…my professional interests are increasingly at odds with my usual preferences and luckily I don’t have to deal with my recent quandary over voting for a Critter that I think is an idiot anymore. Small favors.

A local election or two were so bad that I had to write in someone else. I don’t like to protest vote all that much but it really was called for in this case. My own party’s candidate sucked that bad.

Ballot initiative type stuff….to raise $$ support for stuff that should be coming from taxes. Sound familiar? Naturally, this can’t happen anymore in these here U-nited States so we’re left with this. Stupid ballot initiatives begging one special flower government role after another. Hoping to twang enough heartstrings to push this one over the line to get some funds for an overdue reason. Yeah, I vote for these anyway.

Only two real hold-your-nose votes for me. Social issues, that kind of thing. Biggies, really. So it matters. But the options on each side are suboptimal. Nobody said voting was supposed to be easy though.

If you are a USian and are reading this, make sure you’ve voted, eh? It all matters, even the easy stuff.

First, my thanks to those of you who have found the time, generosity and money to donate to classrooms that are in need. Not just for my challenge

…but all of you donating through Scientopia blog challenges or other Science Bloggers for Students efforts.

There is some news as we near the end of the challenge period, originally planned to end 11/6. From DonorsChoose honchos:

Due to the massive power outages on the eastern seaboard this week, we are going to extend the Science Bloggers for Students campaign through next Friday, November 9. The SCIENCE match code will also be extended.

Oh yeah. So you have another week to throw down a little scratch. Also, remember to enter SCIENCE in the “Match or gift code” box when you are checking out with your payment information for your donation. There is a match in effect up to a $100 donation so you get to leverage your contribution. Pretty cool.

I have heard a very dispiriting rumour floating about and I raise it on the blog to see if any of my Readers have seen similar things happening.

Once upon a time, the NIH came to the realization that the peer review process for grant applications had a bias against the less-established, newer, younger, etc Principal Investigator. That is, their proposals did not score as well and were not being funded at the same rate as those applications on which more senior and established investigators were the PI.

Someone clearly came to the conclusion, which I share, that this difference was not due to any meaningful difference in the chance that the ensuing science would be valuable and productive. So the NIH set about a number of steps to redress the situation.

One of the pathetic bandaid solutions they came up with was to ensure that the burden of triage did not fall disproportionally upon the younger PI applications.

As you are aware, approximately half of applications do not get discussed at the meeting. This is based on preliminary scores issued by the three assigned reviewers, generated prior to the actual meeting date. Without being fully considered by the entire committee during discussion, an application cannot be “rescued” from various sources of unfair or bad review. Although we all recognize that a full rescue from nearly-triaged to a fundable score is rare, it is at least possible. And since the NIH is really just looking at aggregate scores when it comes to the bias-against-noobPI-apps stuff, the movement in the positive direction is still a desired goal.

What someone in the halls of the Center for Scientific Review at the NIH realized was that if noobPI apps were generally scoring worse than those of established PIs, then they were more likely to be triaged. So if the general triage line was 50% of applications, then perhaps 75% of Early Stage Investigator or New Investigator apps were being triaged.

The solution was to put down an edict to the SROs that “the burden of triage should not fall unfairly upon the ESI/NI applications”. Meaning that when the triage lines were originally drawn based on the preliminary scores, the SRO had to specifically review the population of ESI/NI apps and make sure that an equal proportion of them were being discussed, let’s say 50% for convenience. This meant that sometimes ESI apps were being dragged up for discussion with preliminary scores that were worse than scores of several apps from established PIs which were being triaged/not discussed.

You will anticipate my skepticism. At the time, and this was years ago by now, I thought it was a ridiculous and useless move. Because once the preliminary scores were in that range, they were very unlikely to move. And it did nothing to address the presumed bias that put those ESI/NI scores so much lower, on average, than they should have been. It was a silly dodge to keep the aggregate numbers up without doing anything about the fundamental outcome: fundable? or not-fundable?

HOWEVER.

The rumour I have heard is that some SROs have been interpreting this rule to mean that only 50% of ESI/NI apps should be discussed. A critical distinction from “at least”, which was my prior understanding of the policy and certainly how it was used in any study sections I participated in. In such a new interpretation of the policy, there would potentially be established investigator applications being discussed which had preliminary scores worse than some ESI applications that were not being discussed.
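To make the distinction concrete, here is a minimal sketch of the two readings. Everything in it is hypothetical: the preliminary scores, the ESI flags and the 50% discussion fraction are made up for illustration, and this is obviously not how CSR actually implements triage.

```python
# Sketch contrasting two readings of the "burden of triage" policy.
# Each application is (preliminary_score, is_esi); LOWER scores are better.
# All scores, the ESI flags and the 50% fraction are hypothetical.
import math

apps = [
    (15, False), (18, True), (22, False), (25, True), (28, True),
    (31, False), (34, False), (37, True), (41, False), (45, True),
]

def overall_line(apps, fraction=0.5):
    """Split into (discussed, triaged) by the best-scoring `fraction` overall."""
    ranked = sorted(apps, key=lambda a: a[0])
    cut = int(len(ranked) * fraction)
    return ranked[:cut], ranked[cut:]

def at_least_reading(apps, fraction=0.5):
    """Discuss the top half overall; if FEWER than `fraction` of ESI apps made
    the cut, pull up the best-scoring triaged ESI apps until they do."""
    discussed, triaged = overall_line(apps, fraction)
    target = math.ceil(sum(a[1] for a in apps) * fraction)
    for app in sorted((a for a in triaged if a[1]), key=lambda a: a[0]):
        if sum(a[1] for a in discussed) >= target:
            break
        discussed.append(app)
    return discussed

def rumored_reading(apps, fraction=0.5):
    """Rumored reading: never discuss MORE than `fraction` of ESI apps, even if
    they cleared the overall line; backfill the freed slots with the
    next-best-scoring established-PI apps."""
    discussed, triaged = overall_line(apps, fraction)
    cap = int(sum(a[1] for a in apps) * fraction)
    esi_kept = sorted((a for a in discussed if a[1]), key=lambda a: a[0])[:cap]
    kept = [a for a in discussed if not a[1]] + esi_kept
    backfill = sorted((a for a in triaged if not a[1]), key=lambda a: a[0])
    kept += backfill[: len(discussed) - len(kept)]
    return kept

print("'At least' reading discusses:", sorted(at_least_reading(apps)))
print("Rumored reading discusses:   ", sorted(rumored_reading(apps)))
```

With these made-up scores, three of the five ESI apps clear the overall line, so the “at least” reading changes nothing. The rumored reading drops the (28, ESI) application back into triage and backfills with the established (31) application: an established-PI app with a worse preliminary score gets discussed while a better-scoring ESI app does not.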

This is so backwards that it burns.