When disaster strikes

November 30, 2012

NIH Director Collins is very sad about laboratories stricken by megastorm Sandy.

He suggests we should all “help”.

Well sure. We should all help other scientists when disaster strikes.

But you know….? Disaster is striking all over the country. One lab at a time. When they can’t keep their funding. And yeah, many of these result in the loss of “cutting edge” work. The loss of “a decade of” samples or mouse lines or other valuables.

What makes the victims of a natural disaster any more worthy than any of the rest of us?

I’ve been entertaining myself in a twitscussion with my good friend @mrgunn, a dyed-in-the-wool altmetrics wackanut obsessive. It all started because he RT’d a reference to an article by Taylor and Thorisson entitled “Fixing authorship – towards a practical model of contributorship”, which includes subsections such as “Authorship broken, needs fixing” and “Inadequate definitions of authorship”.

These were the thrusts of the article that annoyed me, since I feel this whole area of interest is based on a footing of disgruntled sand. In short, there IS no problem with authorship that “needs fixing”. The people advancing this agenda have not demonstrated one to any believable degree, and you see an awful lot of “everyone knows” type assertions.

Some other headings in the bit are illustrative; let’s start with “Varied authorship conventions across disciplines”. This is true. But it is not a problem. My analogy of the day is different languages spoken by different people. You do not tell someone speaking a language other than the one you understand that they are doing it wrong and that we all just need to learn Esperanto. What you do is seek a translation. And if you feel like that is not giving you a “true” understanding, by all means, take the time to learn the language with all of its colloquial nuance. Feel free.

Heck, you can even write a guide book. For all the effort these “authorship is broken” wackaloons take to restate the unproven, they could write up a whole heck of a lot of style-guidage.

“….the discipline of Experimental Psychology is heavily driven by Grand Theory Eleventy approaches. Therefore the intellectualizing and theorizing is of relatively greater importance and the empirical data-making is lesser. The data may reflect only a single, rather simple model for producing it. This is why you see fewer authors, typically just a trainee and a supervisor. Or even single-author papers. In contrast, the more biological disciplines in the Neuroscience umbrella may be more empirical. Credit is based on who showed something first, and who generated the most diverse sets of data, rather than any grand intellectualizing. Consequently, the author lists are long and filled with people who contributed only a little bit of data to each publication….”

Done. Now instead of trying to force a review of a person’s academic contributions into a single unified framework, one can take the entirely easy step of understanding that credit accrues differently across scientific disciplines.

ahhhh, but now we come to the altmetrics wackaloons who are TrueBelievers in the Church of Universal Quantification. They insist that somehow “all measures” can be used to create….what? I suppose a single unified evaluation of academic quality, impact, importance, etc. And actually, they don’t give a rat’s patootie about the relevance, feasibility or impact of their academic endeavor to capture all possible measures of a journal article or a contributing author. It doesn’t matter if the measure they use entails further misrepresentations. All that they care about is that they have a system to work with, data to geek over and eventually papers to write. (some of them wish to make products to sell to the Flock, of course).

This is just basic science, folks. How many of us have veeeeeery thin justifications for our research topics and models? Not me of course, I work on substance abuse…but the rest of y’all “basic” scientists….yeah.

The wackaloon justifications sound hollow and rest on very shifty support because they really don’t care. They’ve landed on a few trite, truthy and pithy points to put in their “Introduction” statements and moved on. Everyone in the field buys them, nods sagely to each other and never. bothers. to. examine. them. further. Because they don’t even care if they believe it themselves, their true motivation is the tactical problem at hand. How to generate the altmetrics data. Perhaps secondarily how to make people pay attention to their data and theories. But as to whether there is any real world problem (i.e., with the conduct of science) to which their stuff applies? Whether it fixes anything? Whether it just substitutes a new set of problems for an old set? Whether the approach presents the same old problems with a new coat of paint?

They don’t care.

I do, however. I care about the conduct of science. I am sympathetic to the underlying ideas of altmetrics as it happens, so far as they criticize the current non-altmetric, the Journal Impact Factor. On that I agree that there is a problem. And let’s face it, I like data. When I land on a PLoS ONE paper, sure, I click on the “metrics” tab. I’m curious.

But make no mistake. Tweets and Fb likes and blog entries and all that crapola just substitute a different “elite” in the indirect judging of paper quality. Manuscripts with topics of sex and drugs will do relatively better than ones with obscure cell lines faked up to do bizarre non-biological shit on the bench. And we’ll just end up with yet more debates about what is “important” for a scientist to contribute. Nothing solved, just more unpleasantness.

Marrying these two topics together we get down to the discussion of the “Author Contribution” statement, increasingly popular with journals. Those of us in the trenches know that these are really little better than the author position. What does it tell us that author #4 in a 7 author paper generated Fig 3 instead of Fig 5? Why do we need to know this? So that the altmetrics wackaloons can eventually tot up a score of “cumulative figures published”? Really? This is ridiculous. And it just invites further gaming.

The listed-second, co-equal contribution is an example. Someone dreamed this up as a half-assed workaround to the author-order crediting assumptions. It doesn’t work, as we’ve discussed endlessly on this blog, save to buy off the extra effort of the person listed not-first with worthless currency. So in this glorious future in which the Author Contribution is captured by the altmetrics wackaloons, there will be much gaming of the things that are said on this statement. I’ve already been at least indirectly involved in some discussion of who should be listed for what type of contribution. It was entirely amiable but it is a sign of the rocky shoals ahead. I foresee a solution that is exactly as imprecise as what the critics are on about already (“all authors made substantial contributions to everything, fuck off”) and we will rapidly return to the same place we are now.

Now, is there harm?

I’d say yes. Fighting over irrelevant indirect indicators of “importance” in science is already a huge problem because it is inevitably trying to fit inherently disparate things into one framework. It is inevitably about prescribing what is “good” and what is “bad” in a rather uniform way. This is exemplified by the very thing these people are trying to criticize, the Journal Impact Factor. It boggles my mind that they cannot see this.

The harms will be similar. Scientists spending their time and effort gaming the metrics instead of figuring out the very fastest and best way to advance science*. Agencies will fund those who are “best” at a new set of measures that have little to do with the scientific goals….or will have to defend themselves when they violate** these new and improved standards. Vice Deans and P&T committees will just have even more to fight over, and more to be sued about when someone is denied tenure and the real reasons*** are being papered over with quantification of metrics. Postdocs agonizing over meeting the metrics instead of working on what really matters, “fit” and “excitement”.

__
*Which is “publish all the data as quickly as possible” and let the hive/PubMed sort it out.
**see complaints from disgruntled NIH applicants about how they “deserve” grants because their publications are more JIF-awesome or more plentiful than the next person’s.
***”an asshole that we don’t want in our Dept forever”

So one of the Twitts was recently describing a grant funding agency that required listing the Impact Factor of each journal in which the applicant had published.

No word on whether or not it was the IF for the year in which the paper was published, which seems most fair to me.
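
(For reference, the standard two-year JIF is just a ratio of recent citations to recent citable items, which is why the year matters. A minimal sketch of the calculation, with made-up counts purely for illustration:)

    # Minimal sketch of the standard two-year Journal Impact Factor calculation.
    # The counts below are made up purely for illustration.
    def two_year_jif(citations_to_prior_two_years, citable_items_prior_two_years):
        """JIF for year Y = citations received in Y to items published in Y-1 and Y-2,
        divided by the number of citable items published in Y-1 and Y-2."""
        return citations_to_prior_two_years / citable_items_prior_two_years

    # e.g., a journal whose 2010-2011 papers drew 500 citations in 2012,
    # from 180 citable items, has a 2012 JIF of about 2.78; the next year it changes.
    print(round(two_year_jif(500, 180), 2))  # 2.78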

It also emerged that the applicant was supposed to list the Journal Impact Factor (JIF) for subdisciplines, presumably the “median impact factor” supplied by ISI. I was curious about the relative impact of listing a different ISI journal category as your primary subdiscipline of science. A sample of ones related to the drug abuse sciences would be:

Neurosciences 2.75
Substance Abuse 2.36
Toxicology 2.34
Behavioral Sciences 2.56
Pharmacology/Pharmacy 2.15
Psychology 2.12
Psychiatry 2.21

Fascinating. What about…
Oncology 2.53
Surgery 1.37
Microbiology 2.40
Neuroimaging 1.69
Veterinary Sciences 0.81
Plant Sciences 1.37

aha, finally a sub-1.0. So I went hunting for some usual suspects mentioned, or suspected, as low-cite-rate disciplines…
Geology 0.93
Geosciences, multidisc 1.33
Forestry 0.87
Statistics and Probability 0.86
Zoology 1.06
Meteorology 1.67

This is a far from complete list of the ISI subdisciplines (and please recognize that many journals can be cross-listed), just a non-random walk conducted by YHN. But it suggests that the range is really restricted, particularly when it comes to closely related fields, like the ones that would fall under the umbrella of substance abuse.

I say the range is restricted because as we know, when it comes to journals in the ~2-4 IF range within neuroscience (as an example), there is really very little difference in subjective quality. (Yes, this is a discussion conditioned on the JIF, deal.)

It requires, I assert, at least the JIF ~6+ range to distinguish a manuscript acceptance from the general herd below about 4.

My point here is that I am uncertain that the agency which requires listing disciplinary median JIFs is really gaining an improved picture of the applicant. I am uncertain whether cross-disciplinary comparisons can be made effectively. You still need additional knowledge to understand whether the person’s CV is filled with journals that are viewed as significantly better than average within the subfield. About all you can tell is that they are above or below the median.

A journal which bests the Neurosciences median by a point (3.75) really isn’t all that impressive. You have to add something on the order of 3-4 IF points to make a dent. But maybe in Forestry, if you get to only a 1.25, this is a smoking upgrade in the perceived awesomeness of the journal? How would one know without further information?
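
To make the field-normalization point concrete, here is a minimal sketch of the sort of comparison the agency seems to be implying. The medians are the ones quoted above; the example journal JIFs (3.75, 1.25) are hypothetical, purely for illustration.

    # Minimal sketch: compare a journal's JIF to its ISI subdiscipline median,
    # using the median values quoted above. The example journal JIFs (3.75, 1.25)
    # are hypothetical, purely for illustration.
    FIELD_MEDIANS = {
        "Neurosciences": 2.75,
        "Substance Abuse": 2.36,
        "Forestry": 0.87,
    }

    def relative_to_median(journal_jif, field):
        """Return the (absolute difference, ratio) of a journal's JIF vs. its field median."""
        median = FIELD_MEDIANS[field]
        return journal_jif - median, journal_jif / median

    # A 3.75 journal in Neurosciences is only +1.0 over the median (ratio ~1.36),
    # while a 1.25 journal in Forestry is a ~1.44x ratio despite the "low" number.
    print(relative_to_median(3.75, "Neurosciences"))  # (1.0, ~1.36)
    print(relative_to_median(1.25, "Forestry"))       # (~0.38, ~1.44)

Even normalized this way, you still cannot tell whether a ratio of ~1.4 means the same thing in Forestry as it does in Neurosciences, which is rather the point.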

PSA on Journal selection

November 27, 2012

Academic trainees should not be publishing in journals that do not yet have Impact Factors. Likewise they should not be publishing in journals that are not indexed by the major search database (like PubMed) used in their field.

The general science journal Nature has an interesting editorial up:


Earlier this year, we published a Correspondence that rightly took Nature to task for publishing too few female authors in our News and Views section (D. Conley and J. Stadmark Nature 488, 590; 2012). Specifically, in the period 2010–11, the proportions of women News and Views authors in life, physical and Earth sciences were 17%, 8% and 4%, respectively. The authors of the Correspondence had taken us to task in 2005 with a similar analysis for the authorship of our Insight overview articles, and gave us slight credit for having improved that position.

they then went on to perform some additional reviews of their performance.


Our performance as editors is much less balanced.
Of the 5,514 referees who assessed Nature’s submitted papers in 2011, 14% were women.
Of the 34 researchers profiled by journalists in 2011 and so far in 2012, 6 (18%) were women.
Of externally written Comment and World View articles published in 2011 and so far in 2012, 19% included a female author.

then, after the inevitable external blaming, they actually get down to it.

We therefore believe that there is a need for every editor to work through a conscious loop before proceeding with commissioning: to ask themselves, “Who are the five women I could ask?”

Under no circumstances will this ‘gender loop’ involve a requirement to fulfil a quota or to select anyone whom we do not know to be fully appropriate for the job, although we will set ourselves internal targets to help us to focus on the task.

HAHHAHAAH. “We’re going to have quotas but we’re not using quotas!” Good one Nature!

What a load of crap. People in academia and other places that are dealing with representativeness need to just stop falling for this right-wing, anti-affirmative-action, anti-diversity bullshit talking point. Quotas are just fine. Numbers are the way clearly discriminatory and unequal practices are revealed and they are the only way we’re going to know when we’ve improved.

But…regardless. Good on Nature for this one.

For the rest of you, keep the spotlight shining brightly upon them. Because they admit themselves that this gender inequality of their pages has been brought to their awareness as long ago as 2005 and. they. still. haven’t. really. improved. Make no mistake, improving diversity on any measure is not easy. It takes highly sustained attention, effort and force of will to change entrenched, unthinking* cultural biases. Not everyone in the organization will even agree with the goals expressed in this editorial and will work harder to find excuses not to change than they do to make improvements. So I don’t expect miracles.

But Nature, you are a premier venue of scientific publication which gives you a very high platform from which to enact cultural change. I do hope you are not blowing smoke on this one.

__
*which they are for the most part.

The ORI blog has an entry up about a new program which attempts to rehabilitate those who have committed academic misconduct.

RePAIR’s premise is that an intense period of intervention, with multiple participants from different institutions who spend several days together at a neutral site, followed by a lengthy period of follow up activities back at their home institution, will rebuild their ethical views. ORI doesn’t know whether RePAIR will work and cannot formally endorse it. But ORI staff do find RePAIR an intriguing and high-minded experiment that research institutions may wish to consider as a resource.

I like the idea of experimenting. But I have to admit I’m skeptical. I do not believe that academic misconduct at the grad student, postdoc and professorial level is done out of ignorance. I believe that it occurs because someone is desperate and/or weak and allows the pressures of this career path to nudge them down the slippery slope.

Now true, the perpetrators erect many cognitive defenses to convince themselves that they are justified. Perhaps the “everyone is doing it” one is something that can be addressed with these re-education camps. But many of the contingencies won’t go away. There is no weekend seminar that can change the reality of the NIH payline or the GlamourMag chase.

I suspect this will be a fig leaf that Universities use to cover up the stench should they choose to retain a convicted fraudster or to hire one.

Speaking of which, a Twitt yesterday alleged that Marc Hauser has been reaching out to colleagues, seeking collaboration. It made me wonder whether anyone with an ORI finding against them has ever returned in a meaningful way. Whether any University would hire them, whether they would be able to secure funding, and whether the peer review process would accept their data for publication.

Can anyone think of such a person?

Holy Moly! By way of the CPDD blog, an announcement from NIH Director Collins:

After rigorous review and extensive consultation with stakeholders, I have concluded that it is more appropriate for NIH to pursue functional integration, rather than major structural reorganization, to advance substance use, abuse, and addiction-related research. To that end, the National Institute on Drug Abuse (NIDA) and the National Institute on Alcohol Abuse and Alcoholism (NIAAA) will retain their institutional identities, while strengthening their ongoing efforts to work more closely with each other and with related research programs at other institutes and centers.

And that’s all she wrote, folks. Like I’ve always said, if you can’t merge these two ICs then there is no point in any talk about merging any existing ICs.

For some idea of what the “functional integration” means, see this site.

Background reading:

    The Merger of NIDA and NIAAA: Here We Go!
    Grrrrrrrrr. CPDD and RSoA annoyance
    Your academic society is working for (or against?) you under the NIH Grant waterline
    The Gender Smog We Breathe: The NIH Edition
    NIH Director Collins moves forward with NIAAA/NIDA merger
    Is NIAAA a better steward of NIH grant monies than is NIDA?
    Update on the NIAAA/NIDA Merger
    Beverage industry is not enthusiastic about merging NIAAA with NIDA
    The NIDA/NIAAA merger and the newly proposed NIH Center for Translational Research.