LPU Redux

April 12, 2013

Another round of trying to get someone blustering about literature “clutter” and “signal to noise ratio” to really explain what he means.

Utter failure to gain clarity.

Again.

Update 1:

It isn’t as though I insist that each and every published paper everywhere and anywhere is going to be of substantial value. Sure, there may be a few studies, now and then, that really don’t ever contribute to furthering understanding. For anyone, ever. The odds favor this and do not favor absolutes. Nevertheless, it is quite obvious that the “clutter”, “signal to noise”, “complete story” and “LPU=bad” dingdongs feel that it is a substantial amount of the literature that we are talking about. Right? Because if you are bothering to mention something under 1% of what you happen across in this context then you are a very special princess-flower indeed.

Second, I wonder about the day-to-day experiences of people that bring them to this. What are they doing and how are they reacting? When I am engaging with the literature on a given topic of interest, I do a lot of filtering even with the assistance of PubMed. I think, possibly I am wrong here, that this is an essential ESSENTIAL part of my job as a scientist. You read the studies and you see how they fit together in your own understanding of the natural world (or unnatural one if that’s your gig). Some studies will be tour-de-force bravura evidence for major parts of your thinking. Some will provide one figure’s worth of help. Some will merely sow confusion…but proper confusion to help you avoid assuming something is more likely to be so than it is. In finding these, you are probably discarding many papers on reading the title, on reading the Abstract, on the first quick scan of the figures.

So what? That’s the job. That’s the thing you are supposed to be doing. It is not the fault of those stupid authors who dared to publish something of interest to themselves that your precious time had to be wasted determining it was of no interest to you. Nor is it any sign of a problem of the overall enterprise.

UPDATE 2:
Thoughts on the Least Publishable Unit

LPU

Authors fail to illuminate the LPU issue

Better Living Through Least Publishable Units

Yet, publishing LPUs clearly hasn’t harmed some prominent people. You wouldn’t be able to get a job today if you had a CV full of LPUs and shingled papers, and you most likely wouldn’t get promoted either. But perhaps there is some point at which the sheer number of papers starts to impress people. I don’t completely understand this phenomenon.

Avalanche of Useless Science

Our problem is an “Avalanche of Low Quality Research”? Really?

Too Many Papers

We had some incidental findings that we didn’t think worthy of a separate publication. A few years later, another group replicated and published our (unpublished) “incidental” results. Their paper has been cited 12 times in the year and a half since publication in a field-specific journal with an impact factor of 6. It is incredibly difficult to predict in advance what other scientists will find useful. Since data is so expensive in time and money to generate, I would much, much rather there be too many publications than too few (especially given modern search engines and electronic databases).

For reference, Scicurious’ proposal

What if manuscript submission could be as good as a one-shot?

Like this: you submit a paper to a large umbrella of journals of several “tiers”. It goes out for review. The reviewers make their criticisms. Then they say “this paper is fine, but it’s not impactful enough for journal X unless major experiments A, B, and C are done. However, it could fit into journal Y with only experiment A, or into journal Z with only minor revisions”. Or they have the option to reject it outright for all the journals in question. Where there is discrepancy (as usual) the editor makes the call.

and the Neuroscience Peer Review Consortium.

The Neuroscience Peer Review Consortium is an alliance of neuroscience journals that have agreed to accept manuscript reviews from other members of the Consortium. Its goals are to support efficient and thorough peer review of original research in neuroscience, speed the publication of research reports, and reduce the burden on peer reviewers.

I think these schemes are flawed for a simple reason. As I noted in a comment at Sci’s digs….

Nobody bucking for IF immediately goes (significantly) down. They go (approximately) lateral and hope to get lucky. The NPRC is a classic example. At several critical levels there is no lateral option. And even if there was, the approximately equal IF journals are in side-eyeing competition…me, I sure as hell don’t want the editors of Biol Psychiatry, J. Neuro and Neuropsychopharmacology knowing that I’ve been rejected by one or two of the other ones first.

I also contest the degree to which a significantly “lower” journal thinks that it is, indeed, lower and a justifiable recipient of the leavings. Psychopharmacology, for example, is a rightful next stop after Neuropsychopharmacology but somehow I don’t think ol’ Klaus is going to take your manuscript any easier just because the NPP decision was “okay, but just not cool enough”. Think NPP and Biol Psych are going to roll over for your Nature Neuroscience reject? hell no. Not until their reviewers say “go”.

This NPRC thing has been around since about 2007. I find myself intensely curious about how it has been going. I’d like to see some data in terms of how many authors choose to use it (out of the total manuscripts rejected from each participating journal), how many papers are subsequently accepted at another consortium journal, the network paths between journals for those that are referred, etc.

My predictions are that referrals are very rare, that they are inevitably downward in journal IF and that they don’t help very much. With respect to this latter, I mean that I bet it is a further minority of the manuscripts that use this system that are subsequently accepted by the second journal editor on the strength of the original reviews and some stab by the authors at a minimal revision (i.e., as if they’d gotten minor-revisions from the original editor instead of rejection).
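
To make those predictions concrete, here is a minimal sketch of the summary statistics I’d want the consortium to report. Every number in it is a made-up placeholder; as far as I know the NPRC has released nothing of the sort:

```python
# Hypothetical NPRC-style numbers -- placeholders only, not real data.
rejected_total = 1000        # manuscripts rejected across consortium journals in a year
referred = 40                # authors who chose to forward their reviews to a second journal
accepted_on_referral = 12    # referred manuscripts accepted largely on the original reviews

referral_rate = referred / rejected_total
acceptance_rate = accepted_on_referral / referred if referred else 0.0

print(f"referral rate: {referral_rate:.1%}")                 # my bet: a very small number
print(f"acceptance after referral: {acceptance_rate:.1%}")   # my bet: a minority of these
```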

One fascinating, unknowable curiosity is the desk reject factor. The NPRC could possibly know how many of the second editors did desk rejects of the referred manuscripts based on the forwarded reviews. That would be interesting to see. But what they can’t know is how many of those would have been sent out for review if the reviews had not been forwarded. And if they had been sent out for review, what fraction would have received good enough reviews (for the presumptively more pedestrian journal) that they would have made it in.

Idle thought

March 6, 2013

Relevant to Sci’s recent ranting about the paper chase in science…

Sorry reviewers, I am not burning a year and $250K to satisfy your curiosity about something stupid for a journal of this IF.

Occasionally during the review of careers or grant applications you will see dismissive comments on the journals in which someone has published their work. This is not news to you. Terms like “low-impact journals” are wonderfully imprecise and yet deliciously mean. Yes, it reflects the fact that the reviewer himself couldn’t be bothered to actually review the science IN those papers, nor to acquaint himself with the notorious skew of real world impact that exists within and across journals.

More hilarious to me is the use of the word “tier”. As in “The work from the prior interval of support was mostly published in second tier journals…“.

It is almost always second tier that is used.

But this is never correct in my experience.

If we’re talking Impact Factor (and these people are, believe it) then there is a “first” tier of journals populated by Cell, Nature and Science.

In the Neurosciences, the next tier is a place (IF in the teens) in which Nature Neuroscience and Neuron dominate. No question. THIS is the “second tier”.

A jump down to the IF 12 or so of PNAS most definitely represents a different “tier” if you are going to talk about meaningful differences/similarities in IF.

Then we step down to the circa IF 7-8 range populated by J Neuroscience, Neuropsychopharmacology and Biological Psychiatry. Demonstrably fourth tier.

So for the most part when people are talking about “second tier journals” they are probably down at the FIFTH tier (4-6 IF in my estimation).

I also argue that the run-of-the-mill society-level journals extend below this fifth tier into a “rest of the pack” zone in which there is a meaningful perception difference from the fifth tier. So…. Six tiers.

Then we have the paper-bagger dump journals. Demonstrably a seventh tier. (And seven is such a nice number isn’t it?)
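
For the pedants, the scheme boils down to something like the sketch below. The IF cutoffs are my own rough reading of the ranges above, not official numbers of any kind:

```python
# A toy encoding of the seven-tier scheme above. Cutoff values are guesses
# at where the IF ranges discussed in the post fall.
def journal_tier(impact_factor: float) -> int:
    if impact_factor >= 25:    # Cell, Nature, Science
        return 1
    if impact_factor >= 13:    # Nature Neuroscience, Neuron (IF in the teens)
        return 2
    if impact_factor >= 9:     # PNAS territory, circa IF 12
        return 3
    if impact_factor >= 6.5:   # J Neuroscience, Neuropsychopharmacology, Biol Psychiatry
        return 4
    if impact_factor >= 4:     # where the "second tier" sneer usually points
        return 5
    if impact_factor >= 2:     # run-of-the-mill society journals
        return 6
    return 7                   # paper-bagger dump journals

print(journal_tier(5.0))  # -> 5, i.e. fifth tier, not "second tier"
```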

So there you have it. If you* are going to use “tier” to sneer at the journals in which someone publishes, for goodness sake do it right, will ya?

___
*Of course it is people** who publish frequently in the third and fourth tiers, and only rarely in the second, who use “second tier journal” to refer to what is in the fifth or sixth tier of IFs. Always.

**For those rare few that publish extensively in the first tier, hey, you feel free to describe all the rest as “second tier”. Go nuts.

@mbeisen is on fire on the Twitts:

@ianholmes @eperlste @dgmacarthur @caseybergman and i’m not going to stop calling things as they are to avoid hurting people’s feelings

Why? Open Access to scientific research, naturally. What else? There were a couple of early assertions that struck me as funny including

@eperlste @ianholmes @dgmacarthur @caseybergman i think the “i should have to right to choose where to publish” argument is bullshit

and

@eperlste @ianholmes @dgmacarthur @caseybergman funding agencies can set rules for where you can publish if you take their money

This was by way of answering a Twitt from @ianholmes that set him off, I surmise:

@eperlste @dgmacarthur how I decide where to pub is kinda irrelevant. The point is, every scientist MUST have the freedom to decide for self

This whole thing is getting ridiculous. I don’t have the unfettered freedom to decide where to publish my stuff and it most certainly is an outcome of the funding agency, in my case the NIH.

Here are the truths that we hold to be self-evident at the present time. The more respected the journal in which we publish our work, the better the funding agency “likes” it. This encompasses the whole process from initial peer review of the grant applications, to selection for funding (sometimes via exception pay), to the ongoing review of program officers. It extends not just to the present award, but to any future awards I might be seeking to land.

Where I publish matters to them. They make it emphatically clear in ever-so-many-ways that the more prestigious the journal (which generally means higher IF, but not exclusively this), the better my chances of being continuously funded.

So I agree with @mbeisen about the “I have the right to choose where I publish is bullshit” part, but it is for a very different reason than seems to be motivating his attitude. The NIH already influences where I “choose” to publish my work. As we’ve just seen in a prior discussion, PLoS ONE is not very high on the prestige ladder with peer reviewers…and therefore not very high with the NIH.

So quite obviously, my funder is telling me not to publish in that particular OA venue. They’d much prefer something of a lower IF that is better respected in the field, say, the journals that have longer track records, happen to sit on the top of the ISI “substance abuse” category or are associated with the more important academic societies. Or perhaps even the slightly more competitive rank of journals associated with academic societies of broader “brain” interest.

Even before we get to the Glamour level….the NIH funding system cares where I publish.

Therefore I am not entirely “free” to choose where I want to publish and it is not some sort of moral failing that I haven’t jumped on the exclusive OA bandwagon.

@ianholmes @eperlste @dgmacarthur @caseybergman bullshit – there’s no debate – there’s people being selfish and people doing the right thing

uh-huh. I’m “selfish” because I want to keep my lab funded in this current skin-of-the-teeth funding environment? Sure. The old one-percenter-of-science monster rears its increasingly ugly head on this one.

@ianholmes @eperlste @dgmacarthur @caseybergman and we have every right to shame people for failing to live up to ideals of field

What an ass. Sure, you have the right to shame people if you want. And we have the right to point out that you are being an asshole from your stance of incredible science privilege as a science one-percenter. Lecturing anyone who is not tenured, doesn’t enjoy HHMI funding, isn’t comfortably ensconced in a hard money position, isn’t in a highly prestigious University or Institute, may not even have achieved her first professorial appointment yet about “selfishness” is being a colossal dickweed.

Well, you know how I feel about dickweeds.

I do like @mbeisen and I do think he is on the side of angels here*. I agree that all of us need to be challenged and I find his comments to be this, not an unbearable insult. Would it hurt to dip one toe in the PLoS ONE waters? Maybe we can try that out without it hurting us too badly. Can we preach his gospel? Sure, no problem. Can we ourselves speak of PLoS ONE papers on the CVs and Biosketches of the applications we are reviewing without being unjustifiably dismissive of how many notes Amadeus has included? No problem.

So let us try to get past his rhetoric, position of privilege and stop with the tone trolling. Let’s just use his frothing about OA to examine our own situations and see where we can help the cause without it putting our labs out of business.

__
*ETA: meaning Open Access, not his attacks on Twitter

Dump Journals

January 17, 2013

To be absolutely clear, I use the term “dump journal” without malice. Some do, I know, but I do not. I use it to refer to journals of last resort. The ones where you and your subfield are perfectly willing to publish stuff and, more importantly, perfectly willing to cite other papers. Sure, it isn’t viewed as awesome, but it is….respectable. The Editor and sub-editors, probably the editorial board, are known people. Established figures who publish most of their own papers in much, much higher IF journals. It is considered a place where the peer review is solid, conducted by appropriate experts who, btw, review extensively for journals higher up the food chain.

What interests me today, Dear Reader, are the perceptions and beliefs of those people who are involved in the dump journal. Authors who submit work there, the Editor and any sub-editors….and the reviewers. Do we all commonly view the venue in question as a “dump journal”? Or are there those that are surprised and a bit offended that anyone else would consider their solid, society level journals as such a thing?

Are there those who recognize that others view the journal as a dump journal but wish to work to change this reputation? By being harsher during the review process than is warranted given the history of the journal? That approach is a game of chicken though…if you think a dump journal is getting too uppity for its current IF then you are going to just move on to some other journal for your data-dumping purposes, are you not? If a publisher or journal staff wanted to make a serious move up the relative rankings, they’d better have a plan and a steely nerve if you ask me.

This brings me around to my fascination with PLoS ONE and subjective notions of its quality and importance. What IS this journal? Is it a dumping ground for stuff you had rejected elsewhere on “importance” and “impact” grounds and you just want the damn data out there already? That would qualify as a dump journal in my view. Or do you view it as a potential primary venue…because it enjoys an IF in the 4s and that’s well into run-of-the-mill decent for your subfield?

Furthermore, how does this color your interaction with the journal? I know we have a few folks around here who function as Academic Editors. Are you one of those that thinks PLoS ONE should be ever upping its “quality” in an attempt to improve the reputation? Do you fear it becoming a “dump journal”? Or do you embrace that status?

Are you involved with another journal that some might consider a dump journal for your field? Do you think of it this way yourself? Or do you see it as a solid journal and it is that other journal, 0.245 IF points down, which is the real dump journal?

For some reason the response on Twittah to the JSTOR downloader guy killing himself has been a round of open access bragging. People are all proud of themselves for posting all of their accepted manuscripts on their websites, thereby achieving personal open access.

But here is my question…. How many of you are barraged by requests for reprints? That’s the way open access on the personal level has always worked. I use it myself to request things I can’t get to by the journal’s site. The response is always prompt from the communicating author.

Seems to me that the only reason to post the manuscripts is when you are fielding an inordinate number of reprint requests and simply cannot keep up. Say…more than one per week?

So are you? Are you getting this many requests?

First, let’s all enjoy the bliss of, count ’em, EIGHT authors who….

D.S., A.B., M. Maroteaux, T.J., C.P.M., R.S., J.-A.G., and G.S. contributed equally to this work.

To make it extra hilarious please note that the first four are listed first authors and the last four are…listed last authors.

This is ridiculous. Going by the affiliations of the first four and the last four (and knowing a little something about the career status of several of the last four) it looks very much like typical trainee-PI pairings in a multi-group collaboration. Consequently it would make considerably more sense to identify the four trainees and the four PIs as contributing equally compared with each other…but not across the trainee/PI divide.

But really, the discussion of the day is raised by a troll communication to the blog.

As you know there are style guides for journals as to how previous studies are to be cited and how they are to be referred to in the text. One typical style guide might suggest that you use “As shown by Gun et al (2009), the PhysioWhimple nucleus is critical in…“. You might also resort to the more conversational “Gun and colleagues demonstrated…“.

Very good, right?

Now what about when the paper in question indicates co-equal contribution, eh? Then you should say “Genedog, Tideliar and colleagues showed….“. Right? You should absolutely insist on including the name of the co-equal authors, should you not?

Especially if you are one of those who insists that this designation is meaningful…

h/t: a certain troll

Ahh, reviewers

December 13, 2012

One thing that cracks me up about manuscript review is the reviewer who imagines that there is something really cool in your data that you are just hiding from the light of day.

This is usually expressed as a demand that you “consider” a particular analysis of your data. In my work, behavioral pharmacology, it can run to the extent of wanting to parse individual subjects’ responses. It may be a desire that you approach your statistical analysis differently, changing the number of factors included in your ANOVA, a suggestion that you should group your data differently (sometimes if you have extended timecourses such as sequential drug self-administration sessions, etc, you might summarize the timecourse in some way), or perhaps a desire to see a bunch of training, baseline or validation behavior that…

….well, what?

Many of these cases that I’ve seen show the reviewer failing to explain exactly what s/he suspects would be revealed by this new analysis or data presentation. Sometimes you can infer that they are predicting that something surely must be there in your data and for some reason you are too stupid to see it. Or that you are pursuing some agenda and are (again, this is usually only a hint) suspected of covering up the “real” effect.

Dudes! Don’t you think that we work our data over with a fine toothed comb, looking for cool stuff that it is telling us? Really? Like we didn’t already think of that brilliant analysis you’ve come up with?

Didya ever think we’ve failed to say anything about it because 1) the design just wasn’t up to the task of properly evaluating some random sub-group hypothesis or 2) the data just don’t support it, sorry. or 3) yeah man, I know how to validate a damn behavioral assay and you know what? nobody else wants to read that boring stuff.

and finally..my friends, the stats rules bind you just as much as they do us. You know? I mean think about it. If there is some part of a subanalysis or series of different inferential techniques that you want to see deployed you need to think about whether this particular design is powered to do it. Right? I mean if we reported “well, we just did this ANOVA, then that ANOVA…then we transformed the data and did some other thing…well maybe a series of one-ways is the way to go….hmm. say how about t-tests? wait, wait, here’s this individual subject analysis!” like your comments seem to be implying we should now do…yeah that’s not going to go over well with most reviewers.
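
A back-of-the-envelope power calculation is what I mean by “powered to do it”. Here is a minimal sketch, assuming a modest effect size for the reviewer’s pet subgroup comparison; the d of 0.5 and the use of statsmodels are purely illustrative choices on my part:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical: a reviewer wants a post hoc subgroup comparison.
# If the plausible effect size in that subgroup is modest (Cohen's d ~ 0.5),
# how many subjects per group would the comparison need for 80% power?
n_needed = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_needed))  # ~64 per group -- far larger than typical behavioral pharmacology groups
```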

So why do some reviewers seem to forget all of this when they are wildly speculating that there must be some effect in our data that we’ve not reported?

I’ve been entertaining myself in a twitscussion with my good friend @mrgunn, a dyed-in-the-wool altmetrics wackanut obsessive. It was all started because he RT’d a reference to an article by Taylor and Thorisson entitled “Fixing authorship – towards a practical model of contributorship” which includes subsections such as “Authorship broken, needs fixing” and “Inadequate definitions of authorship“.

These were the thrusts of the article that annoyed me since I feel there is this whole area of interest that is based on a footing of disgruntled sand. In short, there IS no problem with authorship that “needs fixing”. This has not been proven by the people advancing this agenda to any believable degree and you see an awful lot of “everyone knows” type of assertion.

Some other headings in the bit are illustrative, let’s start with “Varied authorship conventions across disciplines“. This is true. But it is not a problem. My analogy of the day is different languages spoken by different people. You do not tell someone speaking a language other than the one you understand that they are doing it wrong and that we all just need to learn Esperanto. What you do is seek a translation. And if you feel like that is not giving you a “true” understanding, by all means, take the time to learn the language with all of its colloquial nuance. Feel free.

Heck, you can even write a guide book. For all the effort these “authorship is broken” wackaloons take to restate the unproven, they could write up a whole heck of a lot of style-guidage.

“….the discipline of Experimental Psychology is heavily driven by Grand Theory Eleventy approaches. Therefore the intellectualizing and theorizing is of relatively greater importance and the empirical data-making is lesser. The data may reflect only a single, rather simple model for producing it. This is why you see fewer authors, typically just a trainee and a supervisor. Or even single-author papers. In contrast, the more biological disciplines in the Neuroscience umbrella may be more empirical. Credit is based on who showed something first, and who generated the most diverse sets of data, rather than any grand intellectualizing. Consequently, the author lists are long and filled with people who contributed only a little bit of data to each publication….”

Done. Now instead of trying to force a review of a person’s academic contributions into a single unified framework, one can take the entirely easy step of understanding that credit accrues differently across scientific disciplines.

ahhhh, but now we come to the altmetrics wackaloons who are TrueBelievers in the Church of Universal Quantification. They insist that somehow “all measures” can be used to create….what? I suppose a single unified evaluation of academic quality, impact, importance, etc. And actually, they don’t give a rat’s patootie about the relevance, feasibility or impact of their academic endeavor to capture all possible measures of a journal article or a contributing author. It doesn’t matter if the measure they use entails further misrepresentations. All that they care about is that they have a system to work with, data to geek over and eventually papers to write. (some of them wish to make products to sell to the Flock, of course).

This is just basic science, folks. How many of us have veeeeeery thin justifications for our research topics and models? Not me of course, I work on substance abuse…but the rest of y’all “basic” scientists….yeah.

The wackaloon justifications sound hollow and rest on very shifty support because they really don’t care. They’ve landed on a few trite, truthy and pithy points to put in their “Introduction” statements and moved on. Everyone in the field buys them, nods sagely to each other and never. bothers. to. examine. them. further. Because they don’t even care if they believe it themselves, their true motivation is the tactical problem at hand. How to generate the altmetrics data. Perhaps secondarily how to make people pay attention to their data and theories. But as to whether there is any real world problem (i.e., with the conduct of science) to which their stuff applies? Whether it fixes anything? Whether it just substitutes a new set of problems for an old set? Whether the approach presents the same old problems with a new coat of paint?

They don’t care.

I do, however. I care about the conduct of science. I am sympathetic to the underlying ideas of altmetrics as it happens, so far as they criticize the current non-altmetric, the Journal Impact Factor. On that I agree that there is a problem. And let’s face it, I like data. When I land on a PLoS ONE paper, sure, I click on the “metrics” tab. I’m curious.

But make no mistake. Tweets and Fb likes and blog entries and all that crapola just substitute a different “elite” in the indirect judging of paper quality. Manuscripts with topics of sex and drugs will do relatively better than ones with obscure cell lines faked up to do bizarre non-biological shit on the bench. And we’ll just end up with yet more debates about what is “important” for a scientist to contribute. Nothing solved, just more unpleasantness.

Marrying these two topics together we get down to the discussion of the “Author Contribution” statement, increasingly popular with journals. Those of us in the trenches know that these are really little better than the author position. What does it tell us that author #4 in a 7 author paper generated Fig 3 instead of Fig 5? Why do we need to know this? So that the altmetrics wackaloons can eventually tot up a score of “cumulative figures published”? Really? This is ridiculous. And it just invites further gaming.

The listed-second, co-equal contribution is an example. Someone dreamed up this as a half-assed workaround to the author-order crediting assumptions. It doesn’t work, as we’ve discussed endlessly on this blog, save to buy off the extra effort of the person listed not-first with worthless currency. So in this glorious future in which the Author Contribution is captured by the altmetrics wackaloons, there will be much gaming of the things that are said on this statement. I’ve already been at least indirectly involved in some discussion of who should be listed for what type of contribution. It was entirely amiable but it is a sign of the rocky shoals ahead. I foresee a solution that is exactly as imprecise as what the critics are on about already (“all authors made substantial contributions to everything, fuck off”) and we will rapidly return to the same place we are now.

Now, is there harm?

I’d say yes. Fighting over irrelevant indirect indicators of “importance” in science is already a huge problem because it is inevitably trying to fit inherently disparate things into one framework. It is inevitably about prescribing what is “good” and what is “bad” in a rather uniform way. This is exemplified by the very thing these people are trying to criticize, the Journal Impact Factor. It boggles my mind that they cannot see this.

The harms will be similar. Scientists spending their time and effort gaming the metrics instead of figuring out the very fastest and best way to advance science*. Agencies will fund those who are “best” at a new set of measures that have little to do with the scientific goals….or will have to defend themselves when they violate** these new and improved standards. Vice Deans and P&T committees will just have even more to fight over, and more to be sued about when someone is denied tenure and the real reasons*** are being papered over with quantification of metrics. Postdoctoral job seekers agonizing over meeting the metrics instead of working on what really matters, “fit” and “excitement”.

__
*Which is “publish all the data as quickly as possible” and let the hive/PubMed sort it out.
**see complaints from disgruntled NIH applicants about how they “deserve” grants because their publications are more JIF-awesome or more plentiful than the next person.
***”an asshole that we don’t want in our Dept forever”

So one of the Twitts was recently describing a grant funding agency that required listing the Impact Factor of each journal in which the applicant had published.

No word on whether or not it was the IF for the year in which the paper was published, which seems most fair to me.

It also emerged that the applicant was supposed to list the Journal Impact Factor (JIF) for subdisciplines, presumably the “median impact factor” supplied by ISI. I was curious about the relative impact of listing a different ISI journal category as your primary subdiscipline of science. A sample of ones related to the drug abuse sciences would be:

Neurosciences 2.75
Substance Abuse 2.36
Toxicology 2.34
Behavioral Sciences 2.56
Pharmacology/Pharmacy 2.15
Psychology 2.12
Psychiatry 2.21

Fascinating. What about…
Oncology 2.53
Surgery 1.37
Microbiology 2.40
Neuroimaging 1.69
Veterinary Sciences 0.81
Plant Sciences 1.37

aha, finally a sub-1.0. So I went hunting for some usual suspects mentioned, or suspected, as low-citation-rate disciplines…
Geology 0.93
Geosciences, multidisc 1.33
Forestry 0.87
Statistics and Probability 0.86
Zoology 1.06
Meteorology 1.67

This is a far from complete list of the ISI subdisciplines (and please recognize that many journals can be cross-listed), just a non-random walk conducted by YHN. But it suggests that the range is really restricted, particularly when it comes to closely related fields, like the ones that would fall under the umbrella of substance abuse.

I say the range is restricted because as we know, when it comes to journals in the ~2-4 IF range within neuroscience (as an example), there is really very little difference in subjective quality. (Yes, this is a discussion conditioned on the JIF, deal.)

It requires, I assert, at least the JIF ~6+ range to distinguish a manuscript acceptance from the general herd below about 4.

My point here is that I am uncertain that the agency which requires listing disciplinary median JIFs is really gaining an improved picture of the applicant. Uncertain if cross-disciplinary comparisons can be made effectively. You still need additional knowledge to understand if the person’s CV is filled with journals that are viewed as significantly better than average within the subfield. About all you can tell is that they are above or below the median.

A journal which bests the Neurosciences median by a point (3.75) really isn’t all that impressive. You have to add something on the order of 3-4 IF points to make a dent. But maybe in Forestry if you get to only a 1.25 this is a smoking upgrade in the perceived awesomeness of the journal? How would one know without further information?
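
One crude way to make the cross-field comparison explicit is to express a journal’s IF as a multiple of its field median. A minimal sketch, using the medians quoted above and the two hypothetical journal IFs from this paragraph:

```python
# ISI subdiscipline median JIFs quoted earlier in the post
medians = {"Neurosciences": 2.75, "Forestry": 0.87}

def times_field_median(journal_if: float, field: str) -> float:
    """How many times the field-median IF is this journal?"""
    return journal_if / medians[field]

print(f"{times_field_median(3.75, 'Neurosciences'):.2f}x")  # ~1.36x: unimpressive in neuro
print(f"{times_field_median(1.25, 'Forestry'):.2f}x")       # ~1.44x: a bigger relative jump
```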

PSA on Journal selection

November 27, 2012

Academic trainees should not be publishing in journals that do not yet have Impact Factors. Likewise they should not be publishing in journals that are not indexed by the major search database (like PubMed) used in their field.

The general science journal Nature has an interesting editorial up:


Earlier this year, we published a Correspondence that rightly took Nature to task for publishing too few female authors in our News and Views section (D. Conley and J. Stadmark Nature 488, 590; 2012). Specifically, in the period 2010–11, the proportions of women News and Views authors in life, physical and Earth sciences were 17%, 8% and 4%, respectively. The authors of the Correspondence had taken us to task in 2005 with a similar analysis for the authorship of our Insight overview articles, and gave us slight credit for having improved that position.

they then went on to perform some additional reviews of their performance.


Our performance as editors is much less balanced.
Of the 5,514 referees who assessed Nature’s submitted papers in 2011, 14% were women.
Of the 34 researchers profiled by journalists in 2011 and so far in 2012, 6 (18%) were women.
Of externally written Comment and World View articles published in 2011 and so far in 2012, 19% included a female author.

then, after the inevitable external blaming they actually get down to it.

We therefore believe that there is a need for every editor to work through a conscious loop before proceeding with commissioning: to ask themselves, “Who are the five women I could ask?”

Under no circumstances will this ‘gender loop’ involve a requirement to fulfil a quota or to select anyone whom we do not know to be fully appropriate for the job, although we will set ourselves internal targets to help us to focus on the task.

HAHHAHAAH. “We’re going to have quotas but we’re not using quotas!” Good one Nature!

What a load of crap. People in academia and other places that are dealing with representativeness need to just stop falling for this right-wing, anti-affirmative-action, anti-diversity bullshit talking point. Quotas are just fine. Numbers are the way clearly discriminatory and unequal practices are revealed and they are the only way we’re going to know when we’ve improved.

But…regardless. Good on Nature for this one.

For the rest of you, keep the spotlight shining brightly upon them. Because they admit themselves that this gender inequality of their pages has been brought to their awareness as long ago as 2005 and. they. still. haven’t. really. improved. Make no mistake, improving diversity on any measure is not easy. It takes highly sustained attention, effort and force of will to change entrenched, unthinking* cultural biases. Not everyone in the organization will even agree with the goals expressed in this editorial and will work harder to find excuses not to change than they do to make improvements. So I don’t expect miracles.

But Nature, you are a premier venue of scientific publication which gives you a very high platform from which to enact cultural change. I do hope you are not blowing smoke on this one.

__
*which they are for the most part.

Co-Communicating Author

November 15, 2012

What is “communicating author” attribution for in your field? How is it interpreted?

(and do you note it on a CV or job application in any way?)