Some self-congratulatory meeting of the OpenAccess Illuminati* took place recently and a summary of takeaway points has been posted by Stephen Curry (the other one).

These people are exhausting. They just keep bleating away with their talking points and refuse entirely to ever address the clear problems with their plans.

Anonymous peer review exists for a reason.

To hear them tell it, the only reason is so hateful incompetent reviewers can prevent their sterling works of genius from being published right away.

This is not the reason for having anonymous peer review in science.

Their critics regularly bring up the reason we have anonymous peer review and the virtues of such an approach. The OA Illuminati refuse to address this. At best they will vaguely acknowledge their understanding of the issue and then hand wave about how it isn’t a problem just …um…because they say so.

It’s also weird that 80%+ of their supposed problems with peer review as we know it are attributable to their own participation in the Glamour Science game. Some of them also see problems with GlamHumping but they never connect the dots to see that Glamming is the driver of most of their supposed problems with peer review as currently practiced.

Which tells you a lot about how their real goals align with the ones that they talk about in public.

Edited to add:
Professor Curry weighed in on twitter to insist that the goal is not to force everyone to sign reviews. See, his plan allows people to opt out if they choose. This is probably even worse for the goal of getting an even-handed and honest review of scientific papers. And, even more tellingly, it designs the experiment so that it cannot do anything other than provide evidence in support of their hypothesis. Neat trick.

Here’s how it will go down. People will sign their reviews when they have “nice, constructive” things to say about the paper. BSDs, who are already unassailable and are the ones self-righteously saying they sign all their reviews now, will continue to feel free to be dicks. And the people** who fear that attaching their name to their true opinion will come back to haunt them will still feel pressure. To not review, to soft-pedal and sign, or to supply an unsigned but critical review. All of this is distorting.

Most importantly for the open-review fans, it will generate a record of signed reviews that seem wonderfully constructive or deserved (the Emperor’s, sorry BSDs, critical pants are very fine indeed) and a record of seemingly unconstructive critical unsigned reviews (which we can surely dismiss because they are anonymous cowards). So you see? It proves the theory! Open reviews are “better” and anonymous reviews are mean and unjustified. It’s a can’t-miss bet for these people.

The choice to not-review is significant. I know we all like to think that “obvious flaws” would occur to anyone reading a paper. That’s nonsense. Having been involved in manuscript and grant review for quite some time now, I am here to tell you that the assigned reviewers (typically 3) all provide unique insight. Sometimes during grant review other panel members see things the three assigned people missed, and in manuscript review the AE or EIC sees something. I’m sure you could run parallel sets of three reviewers and it would take quite a large sample before every single concern had been identified. Compare this experience with the number of comments made in the various open-commenting systems (the PubMed Commons commenting system was just shuttered for lack of general interest, by the way) and we simply cannot believe claims that any reviewer can be omitted*** with no loss of function. Not to mention the fact that open commenting systems are just as subject to the opt-in problems discussed above as are signed official systems of peer review.
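The arithmetic behind that intuition is easy to sketch. Assume (toy numbers, entirely mine) that each potential concern with a manuscript would be raised by any single reviewer with some probability; independent reviewers then miss it at a rate that shrinks with panel size but never reaches zero:

```python
# Toy model (all numbers hypothetical): each potential concern with a
# manuscript is spotted by any one reviewer with probability p, so a
# panel of k independent reviewers misses it with probability (1-p)^k.

def fraction_missed(p: float, k: int) -> float:
    """Probability that none of k independent reviewers raises a concern
    that any single reviewer would raise with probability p."""
    return (1.0 - p) ** k

# Hypothetical mix of concerns: a couple of obvious ones, many subtle ones.
concerns = [0.9, 0.9, 0.5, 0.5, 0.3, 0.2, 0.1, 0.1]

for k in (1, 3, 10):
    expected_missed = sum(fraction_missed(p, k) for p in concerns)
    print(f"{k:2d} reviewers -> ~{expected_missed:.1f} of {len(concerns)} concerns missed")
```

Even with these made-up probabilities the point comes through: three reviewers miss a meaningful fraction of subtle concerns, and even ten do not drive the miss rate to zero.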
__
*hosted at HHMI headquarters which I’m sure tells us nothing about the purpose

**this is never an all-or-none choice associated with reviewer traits. It will be a manuscript-by-manuscript choice process, which makes it nearly impossible to assess the quelling and distorting effect this will have on high-quality review of papers.

***yes, we never have an overwhelmingly large sample of reviewers. The point here is the systematic distortion.

Elsevier has a new ….journal? I guess that is what it is.

Data in Brief

From the author guidelines:

Data in Brief provides a way for researchers to easily share and reuse each other’s datasets by publishing data articles that:

  • Thoroughly describe your data, facilitating reproducibility.

  • Make your data, which is often buried in supplementary material, easier to find.

  • Increase traffic towards associated research articles and data, leading to more citations.

  • Open up doors for new collaborations.
Because you never know what data will be useful to someone else, Data in Brief welcomes submissions that describe data from all research areas.

At the moment they only list Section Editors in Proteomics, Materials Science, Molecular Phylogenetics and Evolution, Engineering and Genomics. So yes, there will apparently be peer review of these datasets:

Because Data in Brief articles are pure descriptions of data they are reviewed differently than a typical research article. The Data in Brief peer review process focuses on data transparency.

Reviewers review manuscripts based on the following criteria:
  • Do the description and data make sense?

  • Do the authors adequately explain its utility to the community?

  • Are the protocol/references for generating data adequate?

  • Data format (is it standard? potentially re-usable?)

  • Does the article follow the Data in Brief template?

  • Is the data well documented?

Data in Brief that are converted supplementary files submitted alongside a research article via another Elsevier journal are editorially reviewed….

Wait. What’s this part now?

Here’s what the guidelines at a regular journal, also published by Elsevier, have to say about the purpose of Data in Brief:

Authors have the option of converting any or all parts of their supplementary or additional raw data into one or multiple Data in Brief articles, a new kind of article that houses and describes their data. Data in Brief articles ensure that your data, which is normally buried in supplementary material, is actively reviewed, curated, formatted, indexed, given a DOI and publicly available to all upon publication. Authors are encouraged to submit their Data in Brief article as an additional item directly alongside the revised version of their manuscript. If your research article is accepted, your Data in Brief article will automatically be transferred over to Data in Brief where it will be editorially reviewed and published in the new, open access journal, Data in Brief. Please note an open access fee is payable for publication in Data in Brief.

emphasis added.

So, for those of you who want to publish the data underlying your regular research article, instead of having it go unheeded in a Supplementary Materials pdf you now have the opportunity to pay an Open Access fee to get yourself a DOI for it.

Someone forwarded me what appears to be credible evidence that Wiley is considering taking Addiction Biology Open Access.

To the tune of $2,500 per article.

At present this title has no page charges for articles within its standard size limits.

This is interesting because Wiley purchased this title quite a while ago at a JIF that was at or below my perception of my field’s dump-journal level.

They managed to march the JIF up the ranks and get it into the top position in the ISI Substance Abuse category. This, IMO, then stoked a virtuous cycle in which people submit better and better work there.

At some point in the past few years the journal went from publishing four issues per year to six. And the JIF remains atop the category.

As a business, what would you do? You build up a service until it is in high demand and then you try to cash in, that’s what.

Personally I think this will kill the golden goose. It will be a slow process, however, and Wiley will make some money in the meantime.

The question is, do most competitors choose to follow suit? If so, Wiley wins big because authors will eventually have no other option. If the timing is good, Addiction Biology makes money early and then keeps on going as the leader of the pack.

All y’all Open Access wackaloons believe this is inevitable and are solidly behind Wiley’s move, no doubt.

I will be fascinated to see how this one plays out.

From the author guidelines:

eNeuro uses a double-blind review process, which means the identities of both the authors and reviewers are concealed throughout the review process. In order to facilitate this, authors must ensure their manuscripts are prepared in a way that does not reveal their identity.

And how do they plan to accomplish this feat?

Eliminate author names and contact information from anyplace in the paper. See Title page for more information.

Make sure to use the third person to refer to personal work e.g. replace any phrases like ‘as we have shown before’ with ‘has been shown before (Anonymous, 2007)’

Make sure that the materials and methods section does not refer to personal work. Do not include statements such as “using the method described in (XXX, 2007).” See Materials and Methods for more information.

Ensure that figures do not contain any affiliation-related identifier.

Depersonalize the work by using anonymous text where necessary. Do not include statements such as “as we have reported before”.

Remove self-citations and citations to unpublished work.

Do not eliminate essential self-references or other references, but limit self-references only to papers that are relevant for those reviewing the submitted paper.

Remove references to funding sources
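The checklist above is mechanical enough that one could imagine screening a draft for the most obvious giveaways automatically. Nothing like this appears in the eNeuro guidelines; the phrase patterns and function name below are my own invention, for illustration only:

```python
import re

# Hypothetical phrase patterns loosely drawn from the checklist above; a
# real screen would need a far longer list and, ultimately, human judgment.
SELF_ID_PATTERNS = [
    r"\bas we have (shown|reported|described)\b",
    r"\bour (previous|prior|earlier) (work|study|studies|report)\b",
    r"\bwe (previously|recently) (showed|reported|demonstrated)\b",
    r"\bsupported by .* grant\b",  # funding-source statements
]

def flag_self_identification(text: str) -> list:
    """Return the checklist-style phrases found in a manuscript draft."""
    hits = []
    for pattern in SELF_ID_PATTERNS:
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append(match.group(0))
    return hits

draft = ("As we have shown before (Anonymous, 2007), bunny hopping is "
         "modulated by the PhysioWhimple nucleus. This work was supported "
         "by a generous grant from the agency.")
print(flag_self_identification(draft))
```

A screen like this only catches boilerplate phrasing, of course; it says nothing about whether the science itself identifies the lab, which is the harder problem.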

I will be fascinated to see what procedures they have in place to determine if the blinding is actually working.

Will reviewers asked for their top five guesses as to the identity of the group submitting the manuscript do better than chance?

Will identification depend on the fame and status (and productivity) of the group submitting the paper?

Will it correlate with relatedness of scientific expertise?

What fraction of authors are identified all the time versus never?
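The first of these questions is at least easy to formalize. Suppose (numbers entirely hypothetical) each reviewer lists five guesses from a field of about fifty plausible labs, so a hit occurs by chance roughly 10% of the time; a one-sided binomial tail, computable with only the standard library, tells you whether an observed hit rate beats chance:

```python
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value for
    observing k or more correct identifications in n attempts."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 40 reviews, 5 guesses each from ~50 plausible labs
# (so p = 0.1 per review under the null); 12 reviews named the authors.
p_value = binom_tail(12, 40, 5 / 50)
print(f"P(>=12 hits by chance) = {p_value:.4f}")
```

Under these made-up numbers the blinding would be demonstrably leaky; the point is only that the question is testable, should anyone care to test it.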

Somehow, I suspect the staff of eNeuro will not really be interested in testing their assumptions.

The latest round of wackaloonery is the new PLoS policy on Data Access.

I’m also dismayed by two other things of which I’ve heard credible accounts in recent months. First, the head office has started to question authors over their animal use assurance statements, refusing to take the statement of local IACUC oversight as valid because of the research methods and outcomes. On the face of it, it isn’t terrible to be robustly concerned about animal use. However, in the case I am familiar with, they got it embarrassingly wrong. Wrong because any slight familiarity with the published literature would show that the “concern” was misplaced. Wrong because if they are going to try to sidestep the local IACUC and AAALAC and OLAW (and their worldwide equivalents) processes then they are headed down a serious rabbithole of expensive investigation and verification. At the moment this cannot help but be biased, and accusations are going to rain down on the non-English-speaking and non-Western-country investigators, I can assure you.

The second incident has to do with accusations of self-plagiarism based on the sorts of default Methods statements or Introduction and/or Discussion points that get repeated. Look, there are only so many ways to say “and thus we prove a new facet of how the PhysioWhimple nucleus controls Bunny Hopping”. Only so many ways to say “The reason BunnyHopping is important is because…”. Only so many ways to say “We used optogenetic techniques to activate the gertzin neurons in the PhysioWhimple nucleus by….”. This one is particularly salient because it works against the current buzz about replication and reproducibility in science. Right? What is a “replication” if not plagiarism? And in this case it is not just the way the Methods are described, the reason for doing the study or the interpretation that gets copied. No, in this case it is plagiarism of the important part. The science. This is why concepts of what is “plagiarism” in science cannot be aligned with concepts of plagiarism in a bit of humanities text.

These two issues highlight, once again, why it is TERRIBLE for us scientists to let the humanities trained and humanities-blinkered wordsmiths running journals dictate how publication is supposed to work.

Data repository obsession gets us a little closer to home because the psychotics are the Open Access Eleventy wackaloons who, presumably, started out as nice, normal, reasonable scientists.

Unfortunately PLoS has decided to listen to the wild-eyed fanatics and to play in their fantasy realm of paranoid ravings.

This is a shame and will further isolate PLoS’ reputation. It will short-circuit the gradual progress they have made in persuading regular, non-wackaloon science folks of the PLoS ONE mission. It will seriously cut down submissions…which is probably a good thing since PLoS ONE continues to suffer from growing pains.

But I think it a horrible loss that their current theological orthodoxy is going to blunt the central good of PLoS ONE, i.e., the assertion that predicting “impact” and “importance” before a manuscript is published is a fool’s errand and inconsistent with the best advance of science.

The first problem with this new policy is that it suggests that everyone should radically change the way they do science, at great cost of personnel time, to address the legitimate sins of the few. The scope of the problem hasn’t even been proven to be significant and we are ALL supposed to devote a lot more of our precious personnel time to data curation. Need I mention that research funds are tight and that personnel time is the most significant cost?

This brings us to the second problem. This Data Access policy requires much additional data curation, which will take time. We all handle data in the way that has proved most effective for us in our operations. Other labs have, no doubt, done the same. Our solutions are not the same as those of people doing very nearly the same work. Why? Because the PI thinks differently. The postdocs and techs have different skill sets. Maybe we are interested in a sub-analysis of a data set that nobody else worries about. Maybe the proprietary software we use differs and the smoothest way to manipulate data is different. We use different statistical and graphing programs. Software versions change. Some people’s datasets are so large as to challenge the capability of regular old desktop computing and storage hardware. Etc, etc, etc ad nauseam.

Third problem- This diversity in data handling results, inevitably, in attempts at data orthodoxy. So we burn a lot of time and effort fighting over that. Who wins? Do we force other labs to look at the damn cumulative records for drug self-administration sessions because some old-school behaviorists still exist in our field? Do we insist on individual subjects’ presentations for everything? How do we time-bin a behavioral session? Are the standards for dropping subjects the same in every possible experiment? (answer: no) Who annotates the files so that any idiot humanities-major on the editorial staff of PLoS can understand that they are complete?

Fourth problem- I grasp that actual fraud and misleading presentation of data happens. But I also recognize, as the wackaloons do not, that there is a LOT of legitimate difference of opinion on data handling, even within a very old and well established methodological tradition. I also see a lot of will on the part of science denialists to pretend that science is something it cannot be in their nitpicking of the data. There will be efforts to say that the way lab X deals with their, e.g., fear conditioning trials, is not acceptable and they MUST do it the way lab Y does it. Keep in mind that this is never going to be single labs but rather clusters of lab methods traditions. So we’ll have PLoS inserting itself in the role of deciding how experiments are to be conducted and interpreted! That’s fine for post-publication review but to use that as a gatekeeper before publication? Really PLoS ONE? Do you see how this is exactly like preventing publication because two of your three reviewers argue that it is not impactful enough?

This is the reality. Pushes for Data Access will inevitably, in real practice, result in constraints on the very diversity of science that makes it so productive. It will burn a lot of time and effort that could be more profitably applied to conducting and publishing more studies. It addresses a problem that is not clearly established as significant.

An email from current president of the Society for Neuroscience announced the intent of the society to launch a new Open Access journal. They are seeking an Editor in Chief, so if you know any likely candidates nominate them.

The Society for Neuroscience Council has appointed a Search Committee to recommend candidates to serve as editors-in-chief for two Society-published journals:

The Editor-in-Chief of The Journal of Neuroscience, to be appointed for a 5-year term beginning Jan. 1, 2015, after a period of transition with the current editor; and
The first Editor-in-Chief of a new online, open access neuroscience journal, expected to launch in late 2014, and temporarily referred to herein as “New Journal.” Please see the announcement here for more information about New Journal. This 5-year appointment will commence in the spring of 2014, to allow the new editor to be involved in decisions connected with the start-up of New Journal and the organizing of an initial editorial board.

The members of the Search Committee are: Moses Chao, Chair; Holly Cline; Barry Everitt; David Fitzpatrick; and Eve Marder.

The list of evaluation criteria may help you to think about who you should nominate.

In evaluating candidates for the editor-in-chief positions, the Search Committee will consider the following criteria:

  • previous editorial experience

  • adequate time flexibility to take on the responsibilities of editor-in-chief

  • a distinguished record of research in neuroscience

  • familiarity with online submission, peer review and manuscript tracking systems

  • ideas about novel approaches and receptivity to innovation during a time of great change in the scientific publishing field

  • service to and leadership in the neuroscience community (e.g., SfN committees)

  • evidence of good management skills and the ability to lead colleagues on an editorial board

  • for New Journal: the capacity to proactively engage on a start-up venture, and to innovate and lead in the creation of a high quality open access neuroscience journal, and guide it on a path to success

  • for The Journal of Neuroscience: the capacity to build on an established record of success, while continuing to evolve a leading journal in the field and take it to the next level

Interesting next step for the SfN. It obviously reflects some thinking that they may be left behind (even further; see the diminishing reputation after the launch of Nature Neuroscience and Neuron) in the glorious New World Order of Open Access publication. It might just be a recognition that Open Access fees for a new journal, when all the infrastructure is already there, are going to be a cash cow for the Society from the beginning.

What I will be fascinated to see is where they pitch the New Journal* in terms of impact. Are they just trying to match JNeuro? Will they deliberately go a little lower down the feeding chain to avoid undercutting the flagship journal?

__
*my suggestion of Penfield must have been too esoteric a reference…..

PubMed has finally incorporated a comment feature: PubMed Commons.

NCBI has released a pilot version of a new service in PubMed that allows researchers to post comments on individual PubMed abstracts. Called PubMed Commons, this service is an initiative of the NIH leadership in response to repeated requests by the scientific community for such a forum to be part of PubMed. We hope that PubMed Commons will leverage the social power of the internet to encourage constructive criticism and high quality discussions of scientific issues that will both enhance understanding and provide new avenues of collaboration within the community.

This is described as being in beta test version and for now is only open to authors of articles already listed in PubMed, so far as I can tell.

Perhaps not as Open as some would wish but it is a pretty good start.

I cannot WAIT to see how this shakes out.

The Open-Everything, RetractionWatch, ReplicationEleventy, PeerReviewFailz, etc acolytes of various strains would have us believe that this is the way to save all of science.

This step by PubMed brings the online commenting to the best place, i.e., where everyone searches out the papers, instead of the commercially beneficial place. It will link, I presume, the commentary to the openly-available PMC version once the 12-month embargo elapses for each paper. All in all, a good place for this to occur.

I will be eager to see if there is any adoption of commenting, to see the type of comments that are offered and to assess whether certain kinds of papers get more commentary than others. All in all, this is going to be a neat little experiment for the conduct-of-science geeks to observe.

I recommend you sign up as soon as possible. I’m sure the devout and TrueBelievers would beg you to make a comment on a paper yourself so, sure, go and comment on some paper.

You can search out commented papers with this string, apparently.
has_user_comments[sb]

In case you are interested in seeing what sorts of comments are being made.
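For the curious, that filter can be combined with any ordinary PubMed query via NCBI’s E-utilities `esearch` endpoint. A minimal sketch (URL construction only, no request is actually sent; the example query term is my own):

```python
from urllib.parse import urlencode

# Base URL for the NCBI E-utilities esearch endpoint.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def commented_papers_url(query: str, retmax: int = 20) -> str:
    """Build an esearch URL restricted to PubMed records carrying user
    comments, via the has_user_comments[sb] subset filter."""
    term = f"({query}) AND has_user_comments[sb]"
    params = {"db": "pubmed", "term": term, "retmax": retmax}
    return f"{ESEARCH}?{urlencode(params)}"

print(commented_papers_url("optogenetics"))
```

Fetching the resulting URL (with `urllib.request` or the like) would return an XML list of matching PMIDs, per the E-utilities documentation.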

@mbeisen is on fire on the Twitts:

@ianholmes @eperlste @dgmacarthur @caseybergman and i’m not going to stop calling things as they are to avoid hurting people’s feelings

Why? Open Access to scientific research, naturally. What else? There were a couple of early assertions that struck me as funny including

@eperlste @ianholmes @dgmacarthur @caseybergman i think the “i should have to right to choose where to publish” argument is bullshit

and

@eperlste @ianholmes @dgmacarthur @caseybergman funding agencies can set rules for where you can publish if you take their money

This was by way of answering a Twitt from @ianholmes that set him off, I surmise:

@eperlste @dgmacarthur how I decide where to pub is kinda irrelevant. The point is, every scientist MUST have the freedom to decide for self

This whole thing is getting ridiculous. I don’t have the unfettered freedom to decide where to publish my stuff and it most certainly is an outcome of the funding agency, in my case the NIH.

Here are the truths that we hold to be self-evident at the present time. The more respected the journal in which we publish our work, the better the funding agency “likes” it. This encompasses the whole process from initial peer review of the grant applications, to selection for funding (sometimes via exception pay), to the ongoing review of program officers. It extends not just to the present award, but to any future awards I might be seeking to land.

Where I publish matters to them. They make it emphatically clear in ever-so-many-ways that the more prestigious the journal (which generally means higher IF, but not exclusively this), the better my chances of being continuously funded.

So I agree with @mbeisen about the “I have the right to choose where I publish is bullshit” part, but it is for a very different reason than seems to be motivating his attitude. The NIH already influences where I “choose” to publish my work. As we’ve just seen in a prior discussion, PLoS ONE is not very high on the prestige ladder with peer reviewers…and therefore not very high with the NIH.

So quite obviously, my funder is telling me not to publish in that particular OA venue. They’d much prefer something of a lower IF that is better respected in the field, say, the journals that have longer track records, happen to sit on the top of the ISI “substance abuse” category or are associated with the more important academic societies. Or perhaps even the slightly more competitive rank of journals associated with academic societies of broader “brain” interest.

Even before we get to the Glamour level….the NIH funding system cares where I publish.

Therefore I am not entirely “free” to choose where I want to publish and it is not some sort of moral failing that I haven’t jumped on the exclusive OA bandwagon.

@ianholmes @eperlste @dgmacarthur @caseybergman bullshit – there’s no debate – there’s people being selfish and people doing the right thing

uh-huh. I’m “selfish” because I want to keep my lab funded in this current skin-of-the-teeth funding environment? Sure. The old one-percenter-of-science monster rears its increasingly ugly head on this one.

@ianholmes @eperlste @dgmacarthur @caseybergman and we have every right to shame people for failing to live up to ideals of field

What an ass. Sure, you have the right to shame people if you want. And we have the right to point out that you are being an asshole from your stance of incredible science privilege as a science one-percenter. Lecturing anyone who is not tenured, doesn’t enjoy HHMI funding, isn’t comfortably ensconced in a hard money position, isn’t in a highly prestigious University or Institute, may not even have achieved her first professorial appointment yet about “selfishness” is being a colossal dickweed.

Well, you know how I feel about dickweeds.

I do like @mbeisen and I do think he is on the side of angels here*. I agree that all of us need to be challenged and I find his comments to be this, not an unbearable insult. Would it hurt to dip one toe in the PLoS ONE waters? Maybe we can try that out without it hurting us too badly. Can we preach his gospel? Sure, no problem. Can we ourselves speak of PLoS ONE papers on the CVs and Biosketches of the applications we are reviewing without being unjustifiably dismissive of how many notes Amadeus has included? No problem.

So let us try to get past his rhetoric and position of privilege, and stop with the tone trolling. Let’s just use his frothing about OA to examine our own situations and see where we can help the cause without it putting our labs out of business.

__
*ETA: meaning Open Access, not his attacks on Twitter

For some reason the response on Twittah to the JSTOR downloader guy killing himself has been a round of open access bragging. People are all proud of themselves for posting all of their accepted manuscripts on their websites, thereby achieving personal open access.

But here is my question…. How many of you are barraged by requests for reprints? That’s the way open access on the personal level has always worked. I use it myself to request things I can’t get via the journal’s site. The response is always prompt from the communicating author.

Seems to me that the only reason to post the manuscripts is when you are fielding an inordinate amount of reprint requests and simply cannot keep up. Say…more than one per week?

So are you? Are you getting this many requests?

Remember when Nature offered us a completely objective and unbiased review of PLoS?

Public Library of Science (PLoS), the poster child of the open-access publishing movement, is following an haute couture model of science publishing — relying on bulk, cheap publishing of lower quality papers to subsidize its handful of high-quality flagship journals.

drdrA alerts us to the fact that Nature Publishing Group seems to have changed their minds about dirty, gutter, bulk publication of lower quality papers.

Nature Scientific Reports

Commentary from Martin Fenner over at PLoS blogs and from Bjorn Brembs.
This is why NPG cracks me up. Totally unembarrassed to say whatever, whenever, no matter how inconsistent with their supposed other goals (see: goals for robust online discussion of published papers), with their prior statements, or with their other actions (see: hand-wringing about Impact Factors). Just like a good business should, I suppose.

In case anyone missed this, The Brain Observatory at UCSD is slicing perhaps the most well known brain in cognitive neuroscience. That of Henry Molaison, aka “HM”.
http://thebrainobservatory.ucsd.edu/hm_live.php
DAY 2 UPDATE: They are slicing again after quite a bit of time to get a new microtome blade going. You can follow Twitter commentary on the #HM hashtag (even if you don’t have a Twitt account).

While wallowing in the murkily polluted wading pool* that is the blogospheric discussion of Unscientific America, I noticed that Uncertain Chad and Aunt Janet have returned to the more fundamental, and therefore more interesting, question. It touches on the larger topic of OpenAccess Science, the Congressional mandate for deposition of NIH-funded manuscripts in PubMed Central and yes, Obama’s inaugural call to restore science to its rightful place.


Secret Science, Again

June 3, 2009

My usual preamble is that I don’t really get on board with the OpenScienceEverything!!!! types but I do back some essential principles. One, that if the taxpayer funds our work then that taxpayer has a right to the usual output of our work (i.e., the papers) without a lot of additional hassle or charge. Two, our usual output is intended to be public. Meaning that while various interests may want to make money from our output, the goal would be to make it available (again, at a charge) to as many people as would want it. Three, our usual output is also intended to be archival for history.
Well, a while back some colleagues and I were discussing a situation that was initially sort of amusing. Then I realized that the situation was complicated and I’m not really sure where I stand.
Should people be allowed to blog and Tweet and otherwise discuss results that are presented at scientific conferences?


Browsing over DamnGoodTechnician’s recent posts for the one I was going to excoriate, er, gently discuss, I ran across this gem:

Part of my project has been to recapitulate the results from a fairly recent Nature paper. I’m not sure how many of you have attempted this feat, but I believe deciphering the Rosetta Stone may have been simpler. What concentration of these ingredients did you use? WHICH of these ingredients did you use? How long? How many media changes? Transfection? Infection? Gack. The kicker is that the protocol induces a switch in cell fate, and the timecourse for that change is more or less two weeks, so any conditions I set up today as a “Let’s see if this set of conditions proves you guys weren’t lying” experiment won’t be ready to go until nearly May.
I’ve been banging my head on this protocol for about two months now

Word.


An editorial in Nature tells its readership that It’s good to blog. And more specifically:

More researchers should engage with the blogosphere, including authors of papers in press.

This is a very strange little editorial. It isn’t really what it seems to be about. Or it is about more than it seems. Something like that.
Let us start with the bloggy part.

Indeed, researchers would do well to blog more than they do. The experience of journals such as Cell and PLoS ONE, which allow people to comment on papers online, suggests that researchers are very reluctant to engage in such forums. But the blogosphere tends to be less inhibited, and technical discussions there seem likely to increase.
Moreover, there are societal debates that have much to gain from the uncensored voices of researchers. A good blogging website consumes much of the spare time of the one or several fully committed scientists that write and moderate it. But it can make a difference to the quality and integrity of public discussion.

Sounds pretty good. Nice little bit of endorsement from one of the science world’s two premier general-science magazines. All y’all bloggers who are on the science paths will want to keep a copy of this editorial in your little file (along with such items as this, this, this and this) to brandish to the Chair or Dean or tenure committee once your blogging habits are discovered.
The observation that discussions at official journal sites are likely to be less vigorous and useful in comparison to more informal forums, such as blogs, is to be congratulated. Too true. We cannot rely on publishers to create discussion mechanisms because they are inevitably leery of the free-flowing, anonymous-comment-powered, occasionally offensive or profane discussions that abound on blogs. So they try to control and civilize the discussion. This never goes well.
