Preprint Ratio

June 30, 2018

We’re approximately one year into the NIH policy that encouraged deposition of manuscripts into preprint servers. My perception is that the number of labs taking the time to do so is steadily increasing.

It is rather slow compared to what I would have expected, going by the grant applications I have reviewed in the past year.

Nevertheless, preprint deposition is getting popular enough that the secondary questions are worth discussing.

How many preprints are too many?

Meaning, is there a ratio of still-unpublished preprints to now-published-in-Journalofology preprints that should be of concern?

It is sort of like the way I once viewed listing conference abstracts on a CV. It’s all good if you can see a natural progression leading up to eventual publication of a paper. If there are a lot of conference presentations that never led to papers then this seems….worrisome?

So I’ve been thinking about how preprints may be similar. If one has a ton of preprints that never ever seem to get published, this may be an indication of certain traits. Bad traits. Inability-to-close type of traits.

So I have been thinking that one of the things guiding my preprint behavior is how many my lab has at a given time that have not advanced to publication yet. And maybe there are times when waiting to upload more preprints is advisable.

Thoughts, Dear Reader?

There’s a thread on faculty retention (or lack thereof, really) on the twitts today:
https://twitter.com/BatesPhysio/status/1009797654115647488

I know this is a little weird for the segments of my audience that are facing long odds even to land a faculty job and for those junior faculty who are worried about tenure. Why would a relatively secure Professor look for a job in a different University? Well…..reasons.

As is typical, the thread touched on the question of why Universities won’t work harder in advance to keep their faculty so delightfully happy that they would never dream of leaving.
https://twitter.com/BatesPhysio/status/1009840055077294082

Eventually I mentioned my theory of how Administration views retention of their faculty.

I think Administration (and this is not just academics, I think this applies pretty broadly) operates from the suspicion that workers always complain and most will never do anything about it. I think they suppose that for every 10 disgruntled employees, only 5 will even bother to apply elsewhere. Of these maybe three will get serious offers. Ultimately only one will leave*.

So why invest in 10 to keep 1?

This, anyway, is what I see as motivating much of the upper management thinking on what appear to be inexplicably wasteful faculty departures.

Reality is much more nuanced.

I think one of the biggest mistakes being made is that by the time a last-ditch, generally half-arsed retention ploy is attempted it can be psychologically too late. The departing faculty member is simply too annoyed at the current Uni and too dazzled by the wooing from the new Uni to let any retention offer sway their feelings. The second biggest mistake is that if there is an impression created that “everybody is leaving” and “nobody is being offered reasonable retention” this can spur further attempts to exit the building before the roof caves in.

Yes, I realize some extremely wealthy private Universities all covered in Ivy have the $$ to keep all their people happy all of the time. This is not in any way an interesting case. Most Universities have to be efficient. Spending money on faculty that are going to stay anyway may be a waste, better used elsewhere. Losing too many faculty that you’ve spent startup costs on is also inefficient.

So how would you strike the right balance if you were Dean at an R1 University solidly in the middle of the pack with respect to resources?
__
*Including by method of bribing one or more of the “serious offers” crowd to stay via the mysteries of the RetentionPackageTM

There has been a working group of the Advisory Committee to the Director (of NIH, aka Francis Collins) which has been examining the Moderate Alcohol and Cardiovascular Health Trial in the wake of a hullabaloo that broke into public earlier this year. Background on this from Jocelyn Kaiser at Science, from the NYT, and the WaPo. (I took up the sleazy tactics of the alleged profession of journalism on this issue here.)

The working group’s report is available now [pdf].

Page 7 of that report:

There were sustained interactions (from at least 2013) between the eventual Principal Investigator (PI) of the MACH trial and three members of NIAAA leadership prior to, and during development of, FOAs for planning and main grants to fund the MACH trial

These interactions appear to have provided the eventual PI with a competitive advantage not available to other applicants, and effectively steered funding to this investigator

Page 11:

NIH Institutes, Centers, and Offices (ICOs) should ensure that program staff do not inappropriately provide non-public information, or engage in deliberations that either give the appearance of, or provide, an advantage to any single, or subset of, investigator(s)

The NIH should examine additional measures to assiduously avoid providing, or giving the appearance of providing, an advantage to any single, or subset of, investigator(s) (for example, in guiding the scientific substance of preparing grant applications or responding to reviewer comments)

The webcast of the meeting of the ACD on Day 2 covers the relevant territory but is not yet available in archived format. I was hoping to find the part where Collins apparently expressed himself on this topic, as described here.

In the wake of the decision, Collins said NIH officials would examine other industry-NIH ties to make sure proper procedures have been followed, and seek out even “subtle examples of cozy relationships” that might undermine research integrity.

When I saw all of this I could only wonder if Francis Collins is familiar with the RFA process at the NIH.

If you read RFAs and take the trouble to see what gets funded out of them, you come to the firm belief that there are a LOT of “sustained interactions” between the PO(s) that are pushing the RFA and the PI that is highly desired to be the lucky awardee. The text of the RFAs in and of themselves often “giv(es) the appearance of providing, an advantage to any single, or subset of, investigator(s)”. And they sure as heck provide certain PIs with “a competitive advantage not available to other applicants”.

This is the way RFAs work. I am convinced. It is going to take a huge mountain of evidence to the contrary to counter this impression, which can be reinforced by looking at some of the RFAs in your closest fields of interest and seeing who gets funded and for what. If Collins cares to include the failed grant applications from those PIs that led up to the RFA being generated (in some cases), I bet he finds that this also supports the impression.

I really wonder sometimes.

I wonder if NIH officialdom is really this clueless about how their system works?

…or do they just have zero compunction about dissembling when they know full well that these cozy little interactions between PO and favored PI working to define Funding Opportunity Announcements are fairly common?

__
Disclaimer: As always, Dear Reader, I have related experiences. I’ve competed unsuccessfully on more than one occasion for a targeted FOA where the award went to the very obvious suspect lab. I’ve also competed successfully for funding on a topic for which I originally sought funding under those targeted FOAs- that takes the sting out. A little. I also suspect I have at least once received grant funding that could fairly be said to be the result of “sustained interactions” between me and Program staff that provided me “a competitive advantage” although I don’t know the extent to which this was not available to other PIs.

Twitter Cloud

June 16, 2018

Sounds just about right

There is a cautionary tale in the allegations against three Dartmouth Professors who are under investigation (one retired as the Dean reached a recommendation to fire him) for sexual harassment, assault and/or discrimination. From The Dartmouth:

several students in the PBS department described what they called an uncomfortable workplace culture that blurred the line between professional and personal relationships.

Oh, hai, buzzkill! I mean it’s just normal socializing. If you don’t like it nobody is forcing you to do it man. Why do you object to the rest of us party hounds having a little fun?

They said they often felt pressured to drink at social events in order to further their professional careers, a dynamic that they allege promoted favoritism and at times inappropriate behavior.

The answer is that this potential for nastiness is always lurking in these situations. There are biases within the laboratory that can have very lasting consequences for the trainees. Who gets put on what projects. Who gets preferential resources. Who is selected to attend a fancy meeting with a low trainee/PI ratio? Who is introduced around as the amazing talented postdoc and who is ignored? This happens all the time to some extent, but why should willingness (and ability; many folks have family responsibilities after normal working hours) to socialize with the lab affect this?

Oh, come on, buzzkill! It’s just an occasional celebration of a paper getting accepted.

Several students who spoke to The Dartmouth said that Kelley encouraged his lab members to drink and socialize at least weekly, often on weeknights and at times during business hours, noting that Whalen occasionally joined Kelley for events off-campus.

Or, you know, constantly. Seriously? At the very least the PI has a drinking problem* and is covering it up with invented “lab” reasons to consume alcohol. But all too often it turns sinister and you can see the true slimy purpose revealed.

At certain social events, the second student said she sometimes refused drinks, only to find another drink in her hand, purchased or provided by one of the professors under the premise of being “a good host.”

Yeah, and now we get into the area of attempted drug-assisted sexual assault. Now sure, it could just be the PI thinking the grad student or postdoc can’t afford the drinks and wants to be a good chap. It could be. But then…..

She described an incident at a social event with members of the department, at which she said everyone was drinking, and one of the professors put his arm around her. She said his arm slid lower, to the point that she was uncomfortable and “very aware of where his hand [was] on [her] body,” and she said she felt like she was being tested.

Ugh. The full reveal of the behavior.

Look, as always, there is a spectrum here. The occasional lab celebration that involves the consumption of alcohol, and the society meeting social event that involves consumption of alcohol, can be just fine. Can be. But these traditions in the academic workplace are often co-opted by the creeper to his own ends. So you can end up with that hard-partying PI who is apparently just treating his lab like “friends” or “family” and believes that “everyone needs to blow off steam” to “build teamwork” and this lets everyone pull together….but then the allegations of harassment start to surface. All of the “buddies” who haven’t been affected (or more sinisterly have been affected for the good) circle the wagons.
Bro 1: Oh, he’s such a good guy.
Bro 2: Why are you being a buzzkill?
Bro 3: Don’t you think they are misinterpreting?

He isn’t such a good guy, because people are being harmed, and no, the victims are not “misinterpreting” the wandering arm/hand.

Keep a tight rein on the lab-based socializing, PIs. It leads to bad places if you do not.

__
*And that needs to be considered even when there is not the vaguest shred of sexual assault or harassment in evidence.

There has been a case of sexual harassment, assault and/or workplace misconduct at Dartmouth College that has been in the news this past year.

In allegations that span multiple generations of graduate students, four students in Dartmouth’s department of psychological and brain sciences told The Dartmouth this week that three professors now under investigation by the College and state prosecutors created a hostile academic environment that they allege included excessive drinking, favoritism and behaviors that they considered to be sexual harassment.

It was always a little bit unusual because three Professors from the same department (Psychological and Brain Sciences) were seemingly under simultaneous investigation and the NH State AG launched an investigation at the same time. It is not at all clear to me yet, but it seems to be a situation in which the triggering behaviors are not necessarily linked.

The news of the day (via Valley News) is that one of the professors under investigation has retired, “effective immediately”.

Professor Todd Heatherton has retired, effective immediately, following a recommendation by the dean of the faculty of arts and sciences, Elizabeth Smith, that his tenure be revoked and that he be terminated, Hanlon said in the email.

“In light of the findings of the investigation and the dean’s recommendation, Heatherton will continue to be prohibited from entering campus property or from attending any Dartmouth-sponsored events, no matter where they are held,” Hanlon wrote.

This comes hard on the heels of Inder Verma retiring from the Salk Institute just before their institutional inquiry was set to conclude.

I understand the role of plea bargains in normal legal proceedings. I am not sure I understand the logic of the approach when it comes to busting sexual harasser/discriminator individuals in academia. I mean sure, it may avoid a protracted legal fight between the alleged perpetrator and the University or Institute as the former fights to retain a shred of dignity, membership in the NAS or perhaps retirement benefits. But for the University or Institute, in this day and age of highly public attention, they just look like they are, yet again, letting a perp off the hook*. So any fine statements they may have made about taking sexual discrimination seriously and having zero tolerance ring hollow. I am mindful that what we’ve seen in the past is that the Universities and Institutes are fully willing to deploy their administrative and legal apparatus to defend an accused perpetrator, often for years and across repeated incidents, when they think it in their interest to do so. So saving money can’t really be the reason. It really does seem to be further institutional protection- they cannot be accused of having admitted to defending and harboring the perp over the past years or decades of his harassing behavior.

It is all very sad for the victims. The victims are left with very little. There is no formal finding of guilt to support their allegations. There is often no obvious punishment in a guy who should probably have long since retired (Verma is 70) simply retiring. There is not even any indirect apology from the University or Institution. I wish we could do better.

__
*At least in the Verma case, the news reporting made it very clear that the Salk Board of Trustees formally accepted Verma’s tender of resignation which apparently then halted any further consideration of the case. They could have chosen not to accept it, one presumes.

Inder Verma has resigned his position at the Salk Institute before a formal conclusion was reached in their internal investigation. One can only imagine they were moving toward a finding of guilt and he was tipped to resign.

http://www.sciencemag.org/news/2018/06/leading-salk-scientist-resigns-after-allegations-harassment

A bit in Science authored by Jocelyn Kaiser recently covered the preprint posted by Forscher and colleagues which describes a study of bias in NIH grant review. I was struck by a response Kaiser obtained from one of the authors on the question of range restriction.

Some have also questioned Devine’s decision to use only funded proposals, saying it fails to explore whether reviewers might show bias when judging lower quality proposals. But she and Forscher point out that half of the 48 proposals were initial submissions that were relatively weak in quality and only received funding after revisions, including four that were of too low quality to be scored.

They really don’t seem to understand NIH grant review, where about half of all proposals are “too low quality to be scored”. Their inclusion of only 8% ND applications simply doesn’t cut it. Thinking about this, however, motivated me to go back to the preprint, follow some links to associated data and download the excel file with the original grant scores listed.

I do still think they are missing a key point about restriction of range. It isn’t, much as they would like to think, only about the score. The score on a given round is a value with considerable error, as the group itself described in a prior publication in which the same grant reviewed in different ersatz study sections ended up with a different score. If there is a central tendency for true grant score, which we might approach with dozens of reviews of the same application, then sometimes any given score is going to be too good, and sometimes too bad, as an estimate of the central tendency. Which means that on a second review, the scores for the former are going to tend to get worse and the scores for the latter are going to tend to get better. The authors only selected the ones that tended to get better for inclusion (i.e., the ones that reached funding on revision).
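A minimal simulation sketch of this selection effect (mine, not the authors’ analysis; the quality distribution, scoring noise and payline are all invented for illustration):

```python
# Illustrative toy model: every number here is invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

true_quality = rng.normal(50, 9, n)   # hypothetical central tendency per application
noise = 6.0                           # review-to-review scoring error
round1 = true_quality + rng.normal(0, noise, n)
round2 = true_quality + rng.normal(0, noise, n)

payline = 35                          # NIH-style: lower scores are better

# Applications with essentially the same discussed-but-unfunded round-1 score...
band = (round1 >= 36) & (round1 <= 40)
# ...split by whether the second review crossed the payline.
revised_to_funded = band & (round2 < payline)
never_funded = band & (round2 >= payline)

print("mean true quality, revised-to-funded:", round(true_quality[revised_to_funded].mean(), 1))
print("mean true quality, never funded:     ", round(true_quality[never_funded].mean(), 1))
```

Nothing “improves” between rounds in this toy model; the gap in true quality between the two groups comes entirely from selecting on the second score. That is the regression-to-the-mean point.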

Another way of getting at this is to imagine two grants which get the same score in a given review round. One is kinda meh, with mostly reasonable approaches and methods from a pretty good PI with a decent reputation. The other grant is really exciting, but with some ill-considered methodological flaws and a missing bit of preliminary data. Each one comes back in revision with the former merely shined up a bit and the latter with awesome new preliminary data and the methods fixed. The meh one goes backward (enraging the PI who “did everything the panel requested”) and the exciting one is now in the fundable range.

The authors have made the mistake of thinking that grants that are discussed, but get the same score well outside the range of funding, are the same in terms of true quality. I would argue that the fact that the “low quality” ones they used were revisable into the fundable range makes them different from the similar-scoring applications that did not eventually win funding.

In thinking about this, I came to realize another key bit of positive control data that the authors could provide to enhance our confidence in their study. I scanned through the preprint again and was unable to find any mention of them comparing the original scores of the proposals with the values that came out of their study. Was there a tight correlation? Was it equivalently tight across all of their PI name manipulations? To what extent did the new scores confirm the original funded, low quality and ND outcomes?

This would be key to at least partially counter my points about the range of applications that were included in this study. If the test reviewer subjects found the best original scored grants to be top quality, and the worst to be the worst, independent of PI name then this might help to reassure us that the true quality range within the discussed half was reasonably represented. If, however, the test subjects often reviewed the original top grants lower and the lower grants higher, this would reinforce my contention that the range of the central tendencies for the quality of the grant applications was narrow.
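For what it is worth, the check itself would be trivial if the per-application scores were shared. A hypothetical sketch, assuming a table of original score, experiment score and PI-name condition (the file and column names here are my inventions, not the authors’ actual data layout):

```python
import pandas as pd

# Hypothetical file and columns: original_score, study_score, pi_condition.
df = pd.read_csv("forscher_scores.csv")

# Correlate the original CSR score with the score from the experiment,
# separately for each PI-name (race/gender) condition.
for condition, grp in df.groupby("pi_condition"):
    r = grp["original_score"].corr(grp["study_score"])
    print(f"{condition}: r = {r:.2f} (n = {len(grp)})")
```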

So how about it, Forscher et al? How about showing us the scores from your experiment for each application by PI designation along with the original scores?
__
Patrick Forscher, William Cox, Markus Brauer & Patricia Devine. No race or gender bias in a randomized experiment of NIH R01 grant reviews. PsyArXiv, posted May 25, 2018 (last edited May 25, 2018).

Self plagiarism

June 8, 2018

A journal has recently retracted an article for self-plagiarism:

Just going by the titles, this may appear to be a case where review or theory material is published over and over in multiple venues.

I may have complained on the blog once or twice about people in my fields of interest that publish review after thinly updated review year after year.

I’ve seen one or two people use this strategy, in addition to a high rate of primary research articles, to blanket the world with their theoretical orientations.

I’ve seen a small cottage industry do the “more reviews than data articles” strategy for decades in an attempt to budge the needle on a therapeutic modality that shows promise but lacks full financial support from, eg NIH.

I still don’t believe “self-plagiarism” is a thing. To me plagiarism is stealing someone else’s ideas or work and passing them off as one’s own. When art critics see themes from prior work being perfected or included or echoed in the masterpiece, do they scream “plagiarism”? No. But if someone else does it, that is viewed as copying. And lesser. I see academic theoretical and even interpretive work in this vein*.

To my mind the publishing industry has a financial interest in this conflation because they are interested in novel contributions that will presumably garner attention and citations. Work that is duplicative may be seen as lesser because it divides up citation to the core ideas across multiple reviews. Given how the scientific publishing industry leeches off content providers, my sympathies are…..limited.

The complaint from within the house of science, I suspect, derives from a position of publishing fairness? That some dude shouldn’t benefit from constantly recycling the same arguments over and over? I’m sort of sympathetic to this.

But I think it is a mistake to give in to the slippery slope of letting the publishing industry establish this concept of “self-plagiarism”. The risks for normal science pubs that repeat methods are too high. The risks for “replication crisis” solutions are too high- after all, a substantial replication study would require duplicative Introductory and interpretive comment, would it not?

__

*although “copying” is perhaps unfair and inaccurate when it comes to the incremental building of scientific knowledge as a collaborative endeavor.

MeToo STEM

June 4, 2018

There is a new blog at MeTooSTEM.wordpress.com that seeks to give voice to people in STEM disciplines and fields of work that have experienced sexual harassment.

Such as Jen:

The men in the lab would read the Victoria’s Secret catalog at lunch in the break room. I could only wear baggy sweatshirts and turtlenecks to lab because when I leaned over my bench, the men would try to look down my shirt. Then came the targeted verbal harassment of the most crude nature

or Sam:

I’ve been the victim of retaliation by my university and a member of the faculty who was ‘that guy’ – the ‘harmless’ one who ‘loved women’. The one who sexually harassed trainees and colleagues.

or Anne:

a scientist at a company I wanted to work for expressed interest in my research at a conference. … When I got to the restaurant, he was 100% drunk and not interested in talking about anything substantive but instead asked personal questions, making me so uncomfortable I couldn’t network with his colleagues. I left after only a few minutes, humiliated and angry that he misled about his intentions and that I missed the chance to network with people actually interested in my work

Go Read.

A recent twitt cued a thought.

Don’t ask your staff for a meeting without giving an indication of what it is about.

“Hey, I need to see you” can be very anxiety provoking.

“Come see me about the upcoming meeting Abstracts deadline” is not that hard to do.

“We need to talk about the way we’re doing this experiment” is duck soup.

Try to remember this when summoning your techs or trainees.

Grinders

June 1, 2018

I cracked wise

and then Tweeps came out of the woodwork to say they had night AND day guards.

Is this normal life under Trump?

Is this a risk of academic science?