The sidebar to McKnight’s column at ASBMB Today this month is hilarious.

Author’s Note

I’ve decided it’s prudent to take a break from the debate about the quality of reviewers on National Institutes of Health study sections. The American Society for Biochemistry and Molecular Biology governing council met in mid-November with Richard Nakamura, director of the NIH’s Center for Scientific Review. The discussion was enlightening, and the data presented will inform my future columns on this topic.

HAHAHHAA. Clearly it is news to McKnight that his opinions might actually be on topics for which there are data in support or contradiction? And now he has to sit down and go through actual facts to try to come up with a better argument that study sections today are populated with riff-raff who are incompetent to review science.

Never fear though, he still leaves us with some fodder for additional snickering at his….simple-minded thinking. He would like his readers to answer some poll questions…

The first question is:
Should the quality of the proposed research and researcher be the most important criteria dictating whether an NIH-sponsored grant is funded?

The response item is Yes/No so of course some 99% of the responses are going to be Yes. Right? I mean jeepers, what a stupid question. Particularly without any sense of what he imagines might be a possible alternative to these two considerations as “the most important criteria”. Even more hilariously, he has totally conflated the two things that are actual current items of debate (i.e., project versus person) and that tie directly into his two prior columns!

The next question:
The review process used to evaluate NIH grant applications is:

has three possible answers:
essentially perfect with no room for improvement
slightly sub-optimal but impossible to improve
suboptimal with significant room for improvement

Again, simple-minded. Nobody thinks the system is perfect, this is a straw-man argument. I predict that once again, he’s going to get most people responding on one option, the “suboptimal, room for improvement” one. This is, again, trivial within the discussion space. The hard questions, as you my Readers know full well, relate to the areas of suboptimality and the proposed space in which improvements need to be made.

What is he about with this? Did Nakamura really tell him that the official CSR position is that everything is hunky-dory? That seems very unlikely given the number of initiatives, pilot studies, etc that they (CSR) have been working through ever since I started paying attention about 7-8 years ago.

Ah well, maybe this is the glimmer of recognition on the part of McKnight that he went off half-cocked without the slightest consideration that perhaps there are actual facts here to be understood first?

Thought of the day

December 5, 2014

One thing that always cracks me up about manuscript review is the pose struck* by some reviewers that we cannot possibly interpret data or studies that are not perfect.

There is a certain type of reviewer that takes the stance* that we cannot in any way compare treatment conditions if there is anything about the study that violates some sort of perfect, Experimental Design 101 framing, even if there is no reason whatsoever to suspect a contaminating variable. Even if, and this is more hilarious, there are reasons in the data themselves to think that there is no effect of some nuisance variable.

I’m just always thinking….

The very essence of real science is comparing data across different studies, papers, paradigms, laboratories, etc and trying to come up with a coherent picture of what might be a fairly invariant truth about the system under investigation.

If the studies that you wish to compare are in the same paper, sure, you’d prefer to see less in the way of nuisance variation than you expect when making cross-paper comparisons. I get that. But still….some people.

Note: this in some way relates to the alleged “replication crisis” of science.
__
*Having nothing to go on but their willingness to act like the manuscript is entirely uninterpretable and therefore unpublishable, I have to assume that some of them actually mean it. Otherwise they would just say “it would be better if…”. Right?

I can’t even imagine what they are thinking.

This Notice informs the applicant community of a modification for how NIH would like applicants to mark changes in their Resubmission applications. NIH has removed the requirement to identify ‘substantial scientific changes’ in the text of a Resubmission application by ‘bracketing, indenting, or change of typography’.

Effective immediately, it is sufficient to outline the changes made to the Resubmission application in the Introduction attachment. The Introduction must include a summary of substantial additions, deletions, and changes to the application. It must also include a response to weaknesses raised in the Summary Statement. The page limit for the Introduction may not exceed one page unless indicated otherwise in the Table of Page Limits.

First of all, “would like” and “removed the requirement” do not align with each other. If the NIH “would like” something, that is not just a “we don’t care whether you do it or not”. So why not make it a mandate?

Next up…WHY?

Finally: How in all that is holy do they really expect the applicant to (“must”) summarize “substantial additions, deletions, and changes” and to “include a response to weaknesses” in just one page?

I am starting to suspect Rockey is planning on burning the OER down to the ground before leaving for greener pastures.

A tweet from @babs_mph sent me back to an older thread where Rockey introduced the new Biosketch concept. One “Senior investigator” commented:

For those who wonder where this idea came from, please see the commentary by Deputy Director Tabak and Director Collins (Nature 505, 612–613, January 2014) on the issue of the reproducibility of results. One part of the commentary suggests that scientists may be tempted to overstate conclusions in order to get papers published in high profile journals. The commentary adds “NIH is contemplating modifying the format of its ‘biographical sketch’ form, which grant applicants are required to complete, to emphasize the significance of advances resulting from work in which the applicant participated, and to delineate the part played by the applicant. Other organizations such as the Howard Hughes Medical Institute have used this format and found it more revealing of actual contributions to science than the traditional list of unannotated publications.”

Here’s Collins and Tabak, 2014 in freely available PMC format. The lead-in to the above-referenced passage is:

Perhaps the most vexed issue is the academic incentive system. It currently overemphasizes publishing in high-profile journals. No doubt worsened by current budgetary woes, this encourages rapid submission of research findings to the detriment of careful replication. To address this, the NIH is contemplating…

Hmmm. So by changing this, the ability on grant applications to say something like:

“Yeah, we got totally scooped out of a Nature paper because we didn’t rush some data out before it was ready but look, our much better paper that came out in our society journal 18 mo later was really the seminal discovery, we swear. So even though the entire world gives primary credit to our scoopers, you should give us this grant now.”

is supposed to totally alter the dynamics of the “vexed issue” of the academic incentive system.

Right guys. Right.

Snowflakes falling

December 5, 2014

We’ve finally found out, thanks to Nature News, that the paltry academic salary on which poor Jim Watson has been forced to rely is $375,000 per year as “chancellor emeritus” at Cold Spring Harbor Laboratory. The current NIH salary limitation is $181,500, which is the maximum amount that can be charged to Federal grants. I’m here to tell you, most of us funded by NIH grants do not make anything like this as an annual salary.

#MontyPythonidae


The Office of Extramural Research blog, RockTalking, has 73 comments posted to the discussion of the new NIH biosketch format. I found one that expressed no concern and apparently the rest range from opposed to outraged. One of the things that people seem particularly enraged by is the report of the supposed pilot study they ran. The blog entry reports on how many people found the new format helpful and, as the many commenters point out, the real question is whether this new format is better or worse than the old format. This you will recognize, OER watchers, as a common ploy for the NIH: carefully construct the “study” or the data mining inquiry so as to almost guarantee an outcome that puts the NIH’s activities and initiatives in a favorable light. We are not fooled.


Ruth Coker Burks’ StoryCorps piece is a must-listen. Jesus effing Christ we failed the fuck out of everything in the early days of the HIV/AIDS crisis. Thank goodness there were a few people like Ms. Burks around.


Phoenix AZ police can’t stand being overshadowed in this critical measure of awesome policiness.


Apparently Cerebral Cortex is the latest academic journal to play shenanigans with the pre-print queue. Looks like there is an article by Studer and colleagues that was first published online Aug 7, 2013. I can find no information on the submission and acceptance dates. Perhaps I am just overlooking it, but I have noticed a couple of times that journals with terrible timeline issues don’t seem to publish this information like most journals do these days. Go figure.


According to some guy on the internet Jim Watson also has an awesome house that he doesn’t have to pay for.

https://twitter.com/noahWG/status/539484800860844032

(In case you were worried about substantial amounts of his paltry $375K per year salary being eaten up in housing costs, just like most other academics’ salaries.)


What is even more worrying about the NIH Office of Extramural Research is that even when they set out a pretty clear goal they are so bad at reaching it.

Marcia McNutt in Science:

Consider a rather outrageous proposal. Perhaps there has been too much emphasis on bibliometric measures that either distort the process or minimally distinguish between qualified candidates. What if, instead, we assess young scientists according to their willingness to take risks, ability to work as part of a diverse team, creativity in complex problem-solving, and work ethic? There may be other attributes like these that separate the superstars from the merely successful. It could be quite insightful to commission a retrospective analysis of former awardees with some career track record since their awards, to improve our understanding of what constitutes good selection criteria. One could then ascertain whether those qualities were apparent in their backgrounds when they were candidates for their awards.