This week's NOT-OD-18-197 seeks to summarize policy on the submission of revised grant applications that had previously been spread across multiple prior notices. Part of this deals with the evolved compromise in which applicants are only allowed to submit a single formal revision (the -xxA1 version) but are not prohibited from submitting a new one (-01, aka another A0 version) with identical content, Aims, etc.

Addendum A emphasizes rules for compliance with Requirements for New Applications. The first one is easy. You are not allowed an extra Introduction page. Sure. That is what distinguishes the A1, the extra sheet for replying.

After that it gets into the weeds. Honestly I would have thought all of this stuff was completely legal and might have tried using it, if the necessity ever came up.

The following content is NOT allowed anywhere in a New A0 Application or its associated components (e.g., the appendix, letters of support, other attachments):

Introduction page(s) to respond to critiques from a previous review
Mention of previous overall or criterion scores or percentile
Mention of comments made by previous reviewers
Mention of how the application or project has been modified since its last submission
Marks in the application to indicate where the application has been modified since its last submission
Progress Report

I think I might be most tempted to include the prior review outcome? Not really sure and I've never done this to my recollection. Mention of prior comments? I mean I think I've seen this before in grants. Maybe? Some sort of comment about prior review that was not part of the formal revision series.

Obviously you can accomplish most of this stuff within the letter of the law by not making explicit mention or marking of revision or of prior comments. You just address the criticisms and if necessary say something about “one might criticize this for…but we have proposed….”.

The Progress Report prohibition is a real head scratcher. The Progress Report is included as a formal requirement with a competing continuation (renewal in modern parlance) application. But it has to fit within the page limits, unlike either an Introduction or a List of Publications Resulting (also an obligation of renewal apps), each of which gets you extra pages.

But the vast majority of NIH R01s include a report on the progress made so far. This is what is known as Preliminary Data! In the 25-page days, I tended to put Preliminary Data in a subsection with a header. Many other applications that I reviewed did something similar. It might as well have been called the Progress Report. Now, I sort of spread Preliminary Data around the proposal, but there is a degree to which the Significance and Innovation sections do more or less form a report on progress to date.

There are at least two scenarios where grant writing behavior that I’ve seen might run afoul of this rule.

There is a style of grant writer that loves to place the proposal in the context of their long, ongoing research program. “We discovered… so now we want to explore….”. or “Our lab focuses on the connectivity of the Physio-Whimple nucleus and so now we are going to examine…”. The point being that their style almost inevitably requires a narrative that is drawn from the lab as a whole rather than any specific prior interval of funding. But it still reads like a Progress Report.

The second scenario is a tactical one in which a PI is nearing the end of a project and chooses to continue work on the topic area with a new proposal rather than a renewal application. Maybe there is a really big jump in Aims. Maybe the lab hasn't been productive on the previously proposed Aims. Maybe they just can't trust the timing and surety of the NIH renewal proposal process and need to get a jump on the submission date. Given that this new proposal will have some connection to the ongoing work under a prior award, the PI may worry that the review panel will balk at overlap. Or at anticipated overlap, because reviewers might assume the PI will also be submitting a renewal application for that existing funding. In the old days you could get 2 or 3 R01s more or less on the same topic (dopamine and stimulant self-administration, anyone?) but I think review panels are unkeen on that these days. They are alert to signs of multiple awards on too-closely-related topics. IME anyway. So the PI might try to establish the lack of overlap and/or assure the reviewers that there is not going to be a renewal of the other award in some sort of modestly subtle way. This could take the form of a Progress Report. "We made the following progress under our existing R01 but now it is too far from the original Aims and so we are proposing this as a new project…" is something I could totally imagine writing.

But as we know, what makes sense to me for NIH grant applications is entirely beside the point. The NOT clarifies the rules. Adhere to them.

I’ve been seeing a few Twitter discussions that deal with a person wondering if their struggles in the academy are because of themselves (i.e., their personal merit/demerit axis) or because of their category (read: discrimination). This touches on the areas of established discrimination that we talk about around these parts, including recently the NIH grant fate of ESI applicants, women applicants and POC applicants.

In any of these cases, or the less grant-specific situations of adverse outcome in academia, it is impossible to determine on a case by case basis if the person is suffering from discrimination related to their category. I mean sure, if someone makes a very direct comment that they are marking down a specific manuscript, grant or recommendation only because the person is a woman, or of color or young then we can draw some conclusions. This never* happens. And we do all vary in our treatments/outcomes and in our merits that are intrinsic to ourselves. Sometimes outcomes are deserved, sometimes they vary by simple statistical chance and sometimes they are even better than deserved. So it is an unanswerable question, even if the chances are high that sometimes one is going to be treated poorly due to one’s membership in one of the categories against which discrimination has been proven.

These questions become something other than unanswerable when the person pondering them is doing “fine”.

“You are doing fine! Why would you complain about mistreatment, never mind wonder if it is some sort of discrimination you are suffering?”

I was also recently struck by a Tweeter comment about suffering a very current discrimination of some sort that came from a scientist who is by many measures “doing fine”.

Once, quite some time ago, I was on a seminar committee charged with selecting a year's worth of speakers. We operated under a number of constraints, financial and topic-wise; I'm sure many of you have been on similar committees. I immediately noticed we weren't selecting a gender balanced slate and started pushing explicitly for us to include more women. Everyone sort of ruefully agreed with me and admitted we needed to do better. Including a nonzero number of female faculty on this panel, btw. We did try to do better. One of the people we invited one year was a not-super-senior person (one of our supposed constraints was seniority) at the time with a less than huge reputation. We had her visit for seminar and it was good, if perhaps not as broad as some of the ones from more-senior people. But it all seemed appropriate and fine. The post-seminar kvetching was instructive to me. Many folks liked it just fine but a few people complained about how it wasn't up to snuff and we shouldn't have invited her. I chalked it up to the lack of seniority, maybe a touch of sexism, and let it go. I really didn't even think twice about the fact that she's also a person of color.

Many years later this woman is doing fine. Very well respected member of the field, with a strong history of contributions. Sustained funding track record. Trainee successes. A couple of job changes, society memberships, awards and whatnot that one might view as testimony to an establishment type of career. A person of substance.

This person went on to have the type of career and record of accomplishment that would have any casual outsider wondering how she could possibly complain about anything given that she’s done just fine and is doing just fine. Maybe even a little too fine, assuming she has critics of her science (which everyone does).

Well, clearly this person does complain, given the recent Twitt from her about some recent type of discrimination. She feels this discrimination. Should she? Is it really discrimination? After all, she’s doing fine.

Looping back up to the other conversations mentioned at the top, I’ll note that people bring this analysis into their self-doubt musings as well. A person who suffers some sort of adverse outcome might ask themselves why they are getting so angry. “Isn’t it me?”, they think, “Maybe I merited this outcome”. Why are they so angered about statistics or other established cases of discrimination against other women or POC? After all, they are doing fine.

And of course, even more reliably than their internal dialog, we hear the question from white men. Or whoever doesn't happen to share the characteristics under discussion at the moment. There are going to be a lot of these folks that are of lesser status. Maybe they didn't get that plum job at that plum university. Or had a more checkered funding history. Fewer highly productive collaborations, etc. They aren't doing as "fine". And so anyone who is doing better, and accomplishing more, clearly could not have ever suffered any discrimination personally. Even those people who admit that there is a bias against the class will look at this person who is doing fine and say "well, surely not you. You had a cushy ride and have nothing to complain about".

I mused about the seminar anecdote because it is a fairly specific reminder to me that this person probably faced a lot of implicit discrimination through her career. Bias. Opposition. Neglect.

And this subtle antagonism surely did make it harder for her.

It surely did limit her accomplishments.

And now we have arrived. This is what is so hard to understand in these cases. Both in the self-reflection of self-doubt (imposter syndrome is a bear) and in the assessment of another person who is apparently doing fine.

They should be doing even better. Doing more, or doing what they have done more easily.

It took me a long while to really appreciate this**.

No matter how accomplished the woman or person of color might be at a given point of their career, they would have accomplished more if it were not for the headwind against which they always had to contend.

So no, they are not “doing fine”. And they do have a right to complain about discrimination.

__
*it does. but it is vanishingly rare in the context of all cases where someone might wonder if they were victim of some sort of discrimination.
**I think it is probably my thinking about how Generation X has been stifled in their careers relative to the generations above us that made this clearest to me. It’s not quite the same but it is related.

We have just learned that in addition to the bias against black PIs when they try to get research funding (Ginther et al., 2011), Asian-American and African-American K99 applicants are also at a disadvantage. These issues trigger my usual remarks about how NIH has handled observed disparities in the past. In the spirit of pictures being worth more than words, we can look up the latest update on success rates for RPGs (research project grants, a laundry list of research grant support mechanisms) broken down by two key factors.

First up is the success rate by the gender of the PI. As you can see very clearly, something changed in 2003. All of a sudden a sustained advantage for men disappeared. Actually two things happened. This disparity was "fixed" and the year after, success rates went in the tank for everyone. There are a couple of important observations. The NIH didn't suddenly fix whatever was going on in study section, I guaranfrickentee it. I guarantee there also weren't any magic changes in the pipeline or the female PI pool or anything else. I guarantee you that the NIH decided to equalize success rates by heavy-handed, top-down affirmative action policies in the nature of "make it so" and "fix this". I do not recall ever seeing anything formal so, hey, I could be way off base. If so, I look forward to any citation of information showing a change in the way they do business that coincided directly with the grants submitted for the FY2003 rounds.

The second thing to notice here is that women's success rates never exceeded those for men. Not for fifteen straight Fiscal Years. This further supports my hypothesis that the bias hasn't been fixed in some fundamental way. If it had been fixed, this would be random from year to year, correct? Sometimes the women's rates would sneak above the men's rates. That never happens. Because of course when we redress a bias, it can only ever just barely reach statistically indistinguishable parity, and if god forbid the previously privileged class suffers even the tiniest little bit of disadvantage it is an outrage.

Finally, the fact that success rates went in the tank in 2004 should remind you that men enjoyed the advantage all during the great NIH doubling! The salad days. Lots of money available and STILL it was being disproportionately sucked up by the advantaged group. You might think that when there is an interval of largesse, systems would be more generous. Good time to slip a little extra to women, underrepresented individuals or the youth, right? Ha.

Which brings me to the fate of first-time investigators versus established investigators. Oh look, the never-funded were instantly brought up to parity in 2007. In this case a few years after the post-doubling success rates went in the toilet, but more or less the same pattern. Including the failure of the statistically indistinguishable success rates for the first timers ever, in 11 straight years of funding, to exceed the rates for established investigators. Because of affirmative action instead of fixing the bias. As you will recall, the head of the NIH at that time made it very clear that he was using "make it so" top-down, heavy-handed, quota-based affirmative action to accomplish this goal.

Zerhouni created special awards for young scientists but concluded that wasn’t enough. In 2007, he set a target of funding 1500 new-investigator R01s, based on the previous 5 years’ average.

Some program directors grumbled at first, NIH officials say, but came on board when NIH noticed a change in behavior by peer reviewers. Told about the quotas, study sections began “punishing the young investigators with bad scores,” says Zerhouni.

“quotas”.

I do not recall much in the way of discussing the "pipelines" and how we couldn't possibly do anything to change the bias of study sections until a new, larger and/or better class of female or not-previously-funded investigators could be trained up. The NIH just fixed it. ish. permanently.

For FY2017 there were 16,954 applications with women PIs. 3,186 awards. If you take the ~3% gap from the interval prior to 2003, this means that the NIH is picking up some 508 research project grants from women PIs via their affirmative action process. Per year. If you apply the ~6% deficit suffered by first-time investigators in the salad days, you end up with 586 research project grants picked up by affirmative action. Now there will be some overlap of these populations. Women are PI of about 31% of applications in the data for the first graph and first timers are about 35% for the second. So very roughly, women might be 181 of the affirmative action newbie apps and newbies might be 178 of the affirmative action women's apps. The estimates are close. Subtract that overlap once and you get something like 913 unique grants picked up by the NIH just for these two overt affirmative action purposes. Each and every Fiscal Year.

Because African-American PIs of research grants or K99 apps represent such tiny percentages of the total (2% in both cases), the number of pickups that would be necessary to equalize success rate disparities is tiny. In the K99 analysis, it was a mere 23 applications across a decade. Two per year. I don't have research grant numbers handy, but if we use the data underlying the first graph, this means there were about 1,080 applications with African-American PIs in FY2017. If they hit the 19% success rate this would be about 205 awards. Ginther reported about a 13% success rate deficit, working out to 55% of the success rate enjoyed by white applicants at the time. This would correspond to a 10.5% success rate for black applicants now, or about 113 awards. So 92 pickups would be needed to make up the difference for African-American PIs, assuming the Ginther disparity still holds. This would be less than one percent of the awards made.
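
Just to make the back-of-envelope arithmetic explicit, here is a quick sketch in Python. The inputs are the figures quoted in this post (the 586 number is taken as given, since the first-timer application count isn't shown here), and the inclusion-exclusion step for the overlap is my reconstruction of the reasoning, not anything the NIH publishes.

# Back-of-envelope sketch of the arithmetic above; all inputs are the figures quoted in the post.

women_apps = 16954            # FY2017 applications with women PIs
women_gap = 0.03              # ~3% success rate gap prior to 2003
women_pickups = women_apps * women_gap      # ~508 pickups per year

newbie_pickups = 586          # from the ~6% first-time investigator deficit (as given above)
women_share = 0.31            # women PIs on ~31% of applications
newbie_share = 0.35           # first-timers on ~35% of applications

# Overlap: some pickups are both women and first-timers; estimate it from either direction.
overlap_a = newbie_pickups * women_share    # ~181 of the newbie pickups are women
overlap_b = women_pickups * newbie_share    # ~178 of the women pickups are newbies

unique_pickups = women_pickups + newbie_pickups - overlap_a   # ~913 unique grants per year

# African-American RPG deficit, same kind of arithmetic:
black_apps = 1080             # ~2% of total RPG applications
overall_rate = 0.19           # overall success rate
ginther_rate = 0.55 * overall_rate          # ~10.5%, i.e. 55% of the white rate per Ginther
deficit = black_apps * (overall_rate - ginther_rate)          # ~92 awards

print(round(women_pickups), round(overlap_a), round(overlap_b),
      round(unique_pickups), round(deficit))
# -> 509 182 178 913 92 (matching the ~508, ~181, ~178, ~913 and 92 in the text, modulo rounding)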

Less than one percent. And keep in mind these are not gifts. These are making up for a screwjob. These are making up for the bias. If any applicants from male, established or white populations go unfunded to redress the bias, they are only losing their unearned advantage. Not being disadvantaged.

Oh, what a shocker.

In the wake of the 2011 Ginther finding [see archives on Ginther if you have been living under a rock] that there was a significant racial bias in NIH grant review, the concrete response of the NIH was to blame the pipeline. Their only real-dollar, funded initiatives were attempts to get more African-American trainees into the science pipeline. The obvious subtext here was that the current PIs, against whom the grant review bias was defined, must be the problem, not the victim. Right? If you spend all your time insisting that, since there were no red-fanged, white-hooded peer reviewers overtly proclaiming their hate for black people, peer review can't be the problem, and you put your tepid money initiatives into scraping up more trainees of color, you are saying the current black PIs deserve their fate. Current example: NIGMS trying to transition more underrepresented individuals into faculty ranks, rather than funding the ones that already exist.

Well, we have some news. The Rescuing Biomedical Research blog has a new post up, "Examining the distribution of K99/R00 awards by race," authored by Chris Pickett.

It reviews success rates of K99 applicants from 2007 to 2017. Application PI demographics broke down to nearly 2/3 White, ~1/3 Asian, 2% multiracial and 2% black. Success rates: White, 31%; Multiracial, 30.7%; Asian, 26.7%; Black, 16.2%. Conversion to R00 phase rates: White, 80%; Multiracial, 77%; Asian, 76%; Black, 60%.

In terms of Hispanic ethnicity, it was 26.9% success for the K99 and a 77% conversion rate, neither significantly different from the non-Hispanic rates.

Of course, seeing as how the RBR people are the VerySeriousPeople considering the future of biomedical careers (sorry Jeremy Berg but you hang with these people), the Discussion is the usual throwing up of hands and excuse making.

“The source of this bias is not clear…”. ” an analysis …could address”. “There are several potential explanations for these data”.

and of course
“put the onus on universities”

No. Heeeeeeyyyyyuuullll no. The onus is on the NIH. They are the ones with the problem.

And, as per usual, the fix is extraordinarily simple. As I repeatedly observe in the context of the Ginther finding, the NIH responded to a perception of a disparity in the funding of new investigators with immediate heavy-handed, top-down, quota-based affirmative action for many applications from ESI investigators. And now we have Round 2, where they are inventing new quota-based affirmative action policies for the second round of funding for these self-same applicants. Note well: the statistical beneficiaries of ESI affirmative action policies are white investigators.

The number of K99 applications from black candidates was 154 over 10 years. 25 of these were funded. To bring this up to the success rate enjoyed by white applicants, the NIH need only have funded 23 more K99s. Across 28 Institutes and Centers. Across 10 years, aka 30 funding cycles. One more per IC per decade to fix the disparity. Fixing the Asian bias would be a little steeper, they’d need to fund another 97, let’s round that to 10 per year. Across all 28 ICs.
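
Same arithmetic for the K99 numbers, in a tiny sketch (the Asian figure of 97 is taken from the post rather than recomputed, since the Asian application count isn't given here, and the 28 ICs is the count used in this post):

# K99 disparity arithmetic, using the figures quoted above.
black_k99_apps = 154      # applications from black candidates, 2007-2017
black_k99_funded = 25
white_rate = 0.31         # K99 success rate for white applicants

needed_black = white_rate * black_k99_apps - black_k99_funded   # ~23 more awards over the decade
per_ic_per_decade = needed_black / 28                           # well under one per IC per decade

asian_needed = 97         # as given in the post
asian_per_year = asian_needed / 10                              # ~10 per year across the whole NIH

print(round(needed_black), round(per_ic_per_decade, 2), round(asian_per_year))
# -> 23 0.81 10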

Now that they know about this, just as with Ginther, the fix is duck soup. The Director pulls each IC Director aside in a quiet moment and says 'fix this'. That's it. That's all that would be required. And the Directors just commit to pick up one more Asian application every year or so and one more black application every, checks notes, decade and this is fixed.

This is what makes the NIH response to all of this so damn disturbing. It’s rounding error. They pick up grants all the time for reasons way more biased and disturbing than this. Saving a BSD lab that allegedly ran out of funding. Handing out under the table Administrative Supplements for gawd knows what random purpose. Prioritizing the F32 applications from some labs over others. Ditto the K99 apps.

They just need to apply their usual set of glad handing biases to redress this systematic problem with the review and funding of people of color.

And they steadfastly refuse to do so.

For this one specific area of declared Programmatic interest.

When they pick up many, many more grants out of order of review for all their other varied Programmatic interests.

You* have to wonder why.
__
h/t @biochembelle

*and those people you are trying to lure into the pipeline, NIH? They are also wondering why they should join a rigged game like this one.

Zealots

July 12, 2018

One of my favorite things about this blog, as you know, Dear Reader, is the way it exposes me (and you) to the varied perspectives of academic scientists. Scientists who seemingly share a lot of workplace and career commonalities but who, on examination, turn out to differ in both expected and unexpected ways. I think we all learn a lot about the conduct of science in the US and worldwide (to a lesser extent) in this process.

Despite numerous pointed discussions about differences of experience and opinion for over a decade now, it still manages to surprise me that so many scientists cannot grasp a simple fact.

The way that you do science, the way the people around you do science and the way you think science should be done are always but one minor variant on a broad, broad distribution of behaviors and habits. Much of this is on clear display from public evidence. The journals that you read. The articles that you read. The ones that you don't read but can't possibly miss knowing that they exist. Grant funding agencies. Who gets funded. Universities. Med schools within Universities. Research Institutions or foundations. Your colleagues. Your mentors and trainees. Your grad school drinking buddies. Conference friends and academic society behaviors.

It is really hard to miss. IMO.

And yet.

We still have this species of dumbass on the internet who can't get it through his* thick head that his experiences and opinions and, yes, those of his circle of reflecting room buddies and acolytes, are but a drop in the bucket.

And they almost invariably start bleating on about how their perspective is not only the right way to do things but that some other practice is unethical and immoral. Despite the evidence (again, often quite public evidence) that large swaths of scientists do their work in this totally other, and allegedly unethical, way.

The topic of the week is data leeching, aka the OpenAccessEleventy perspective that every data set you generate in your laboratory should be made available in easily understood, carefully curated format for anyone to download. These leeches then insist that anyone should be free to use these data in any way they choose with barely the slightest acknowledgment of the person who generated the data.

Nobody does this. Right? It’s a tiny minority of all academic scientific endeavor that meets this standard at present. Limited in the individuals, limited in the data types and limited in the scope even within most individuals who DO share data in this way. Maybe we are moving to a broader adoption of these practices. Maybe we will see significant advance. But we’re not there right now.

Pretending we are, with no apparent recognition of the relative proportions across academic science, verges on the insane. Yes, like literally delusional insanity**.

__
*94.67% male

**I am not a psychiatrist™

From the email bag:

My question is: Should institutions pull back start-up funds from new PIs if R01s or equivalents are obtained before funds are burned? Should there be an expiration date for these funds?

Should? Well no, in the best of all possible worlds of course we would wish PIs to retain all possible sources of support to launch their program.

I can, however, see the institutional rationale that startup is for just that, starting. And once in the system by getting a grant award, the thinking goes, a PI should be self-sustaining. Like a primed pump.

And those funds would be better spent on starting up the next lab’s pump.

The expiration date version is related, and I assume is viewed as an inducement for the PI to go big or go home. To try. Hard. Instead of eking it out forever to support a lab that is technically in operation but not vigorously enough to land additional extramural funding.

Practically speaking the message from this is to always check the details for a startup package. And if it expires on grant award, or after three years, this makes it important to convert as much of that startup into useful Preliminary Data as possible. Let it prime many pumps.

Thoughts, folks? This person was wondering if this is common. How do your departments handle startup funds?

Trophy collaborations

July 5, 2018

Jason Rasgon noted a phenomenon where one is asked to collaborate on a grant proposal but is jettisoned after funding of the award.

I’m sure there are cases where both parties amicably terminate the collaboration but the interesting case is where the PI or PD sheds another investigator without their assent.

Is this common? I can’t remember hearing many cases of this. It has happened to me in a fairly minor way once but then again I have not done a whole lot of subs on other people’s grants.

Scientific premise has become the latest headache of uncertainty in NIH grant crafting and review. You can tell because the NIH keeps having to issue clarifications about what it is, and is not. The latest is from Office of Extramural Research honcho Mike Lauer at his blog:

Clarifying what is meant by scientific premise
Scientific premise refers to the rigor of the prior research being cited as key support for the research question(s). For instance, a proposal might note prior studies had inadequate sample sizes. To help both applicants and reviewers describe and assess the rigor of the prior research cited as key support for the proposal, we plan to revise application instructions and review criteria to clarify the language.

Under Significance, the applicant will be asked to describe the strengths and weaknesses in the rigor of the prior research (both published and unpublished) that serves as the key support for the proposed project. Under Approach, the applicant will be asked to describe plans to address weaknesses in the rigor of the prior research that serves as the key support for the proposed project. These revisions are planned for research and mentored career development award applications that come in for the January 25, 2019 due date and beyond. Be on the lookout for guide notices.

My first thought was…great. Fan-friggin-tastic.

You are going to be asked to be more pointed about how the prior research all sucks. No more just saying things about too few studies, variances between different related findings or a pablum offer that it needs more research. Oh no. You are going to have to call papers out for inadequate sample size, poor design, bad interpretation, using the wrong parameters or reagents or, pertinent to a recent twitter discussion, running their behavioral studies in the inactive part of the rodent daily cycle.

Now I don’t know about all of y’all, but the study sections that review my grants have a tendency to be populated with authors of papers that I cite. Or by their academic progeny or mentors. Or perhaps their tight science homies that they organize symposia and conferences with. Or at the very least their subfield collective peeps that all use the same flawed methods/approaches.

The SABV (sex as a biological variable) requirement has, quite frankly, been bad ENOUGH on this score. I really don't need this extra NIH requirement to be even more pointed about the limitations of the prior literature that we propose to set about addressing with more studies.

The latest Journal Citation Reports has been released, updating us on the latest JIF for our favorite journals. New for this year is….

…..drumroll…….

provision of the distribution of citations per cited item. At least for the 2017 year.

The data … represent citation activity in 2017 to items published in the journal in the prior two years.

This is awesome! Let's dive right in. The JIF, btw, is 5.970.

Oh, now this IS a pretty distribution, is it not? No nasty review articles to muck it up and the "other" category (editorials?) is minimal. One glaring omission is that there doesn't appear to be a bar for 0 citations; surely some articles are not cited. This makes interpretation of the article citation median (in this case 5) a bit tricky. (For one of the distributions that follows, I came up with the missing 0 citation articles constituting anywhere from 17 to 81 items. A big range.)
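
If you are wondering how you back out that range: given the visible bars (the number of items at 1, 2, 3, … citations) and the reported median, you just ask how many uncited items could be tacked on without moving the median. A little sketch of the idea, with entirely made-up bar heights since the actual figure isn't reproduced here:

import numpy as np

# Hypothetical bar heights: citation count -> number of items (the real figure isn't shown here).
visible_bars = {1: 20, 2: 35, 3: 48, 4: 55, 5: 60, 6: 50, 7: 40, 8: 30,
                9: 20, 10: 15, 12: 10, 15: 6, 20: 3, 40: 1}
reported_median = 5

visible = np.repeat(list(visible_bars.keys()), list(visible_bars.values()))

# Try every possible count of uncited (zero-citation) items and keep the ones
# that leave the overall median sitting at the reported value.
feasible = [z for z in range(len(visible) + 1)
            if np.median(np.concatenate([np.zeros(z), visible])) == reported_median]

print(min(feasible), max(feasible))   # the range of zero-citation counts consistent with the median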

Still, the skew in the distribution is clear and familiar to anyone who has been around the JIF critic voices for any length of time. Rare highly-cited articles skew just about every JIF upward from what your mind thinks it represents, i.e., the median for the journal. Still, no biggie, right? 5 versus 5.970 is not all that meaningful. If your article in this journal from the past two years got 4-6 citations in 2017 you are doing great, right there in the middle.

Let’s check another Journal….

Ugly. Look at all those “Other” items. And the skew from the highly-cited items, including some reviews, is worse. JIF is 11.982 and the article citation median is 7. So among other things, many authors are going to feel like they impostered their way into this journal since a large part of the distribution is going to fall under the JIF. Don’t feel bad! Even if you got only 9-11 citations, you are above the median and with 6-8 you are right there in the hunt.

Final entry of the day:

Not too horrible looking, although clearly the review articles contribute a big skew, possibly even more than in the second journal where the reviews are seemingly more evenly distributed in terms of citations. Now, I will admit I am a little surprised that reviews don't do even better compared with primary research articles. It seems like they would get cited more than this (for both of these journals) to me. The article citation median is 4 and the JIF is 6.544, making for a slightly greater gap than the first one, if you are trying to bench race your citations against the "typical" for the journal.

The first takeaway message from these new distributions, viewed along with the JIF, is that you can get a much better idea of how your articles are faring (in your favorite journals; these are just three) compared to the expected value for that journal. Sure, sure, we all knew at some level that the distribution contributing to the JIF was skewed and that the median would be a better number to reflect the colloquial sense of typical, average performance for a journal.

The other takeaway is a bit more negative and self-indulgent. I do it so I’ll give you cover for the same.

The fun game is to take a look at the articles that you've had rejected at a given journal (particularly when rejection was on impact grounds) but subsequently published elsewhere. You can take your citations in the "JCR" (aka second) year of the two years after it was published and match that up with the citation distribution of the journal that originally rejected your work. In the past, if you met the JIF number, you could be satisfied they blew it and that your article indeed had impact worthy of their journal. Now you can take it a step further because you can get a better idea of whether your article beat the median. Even if your actual citations are below the JIF of the journal that rejected you, your article may still have been one that would have boosted their JIF relative to the typical (median) article they published instead.
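
The comparison itself is trivial; a little sketch for playing along at home (the journal numbers are whatever you pull from the new JCR distributions for the journal that rejected you):

def bench_race(my_citations, journal_jif, journal_median):
    """Compare your article's citations in the JCR year against the rejecting journal's numbers."""
    if my_citations >= journal_jif:
        return "met or beat the JIF itself; they clearly blew it"
    if my_citations > journal_median:
        return "beat the median article there; would have pulled the JIF up relative to their typical paper"
    return "at or below their median"

# e.g., rejected at the second journal above (JIF 11.982, median 7) but got 9 citations elsewhere
print(bench_race(9, 11.982, 7))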

Still with me, fellow axe-grinders?

Every editorial staff I’ve ever seen talk about journal business in earnest is concerned about raising the JIF. I don’t care how humble or soaring the baseline, they all want to improve. And they all want to beat some nearby competitors. Which means that if they have any sense at all, they are concerned about decreasing the uncited dogs and increasing the articles that will be cited in the JCR year above their JIF. Hopefully these staffs also understand that they should be beating their median citation year over year to improve. I’m not holding my breath on that one. But this new publication of distributions (and the associated chit chat around the campfire) may help with that.

Final snark.

I once heard someone concerned with the JIF of a journal insist that they were not "systematically overlooking good papers", meaning, in context, those that would boost their JIF. The rationale for this was that the manuscripts they had rejected were subsequently published in journals with lower JIFs. This is a fundamental misunderstanding. Of course most articles rejected at one JIF level eventually get published down-market. Of course they do. This has nothing to do with the citations they eventually accumulate. And if anything, the slight downgrade in journal cachet might mean that the actual citations slightly under-represent what would have occurred at the higher JIF journal, had the manuscript been accepted there. If Editorial Boards are worried that they might be letting bigger fish get away, they need to look at the actual citations of their rejects, once published elsewhere. And, back to the story of the day, those actual citations need to be compared with the median for article citations rather than the JIF.

Light still matters

July 2, 2018

In the midst of all this hoopla about reliability, repeatability, the replication crisis and whatnot, the Editorial Board of the Journal of Neuroscience has launched an effort to recommend best practices. The first one was about electrophysiology. To give you a flavor:

There is a long tradition in neurophysiology of using the number of neurons recorded as the sample size (“n”) in statistical calculations. In many cases, the sample of recorded neurons comes from a small number of animals, yet many statistical analyses make the explicit assumption that a sample constitutes independent observations. When multiple neurons are recorded from a single animal, however, either sequentially with a single electrode or simultaneously with multiple electrodes, each neuron’s activity may not, in fact, be independent of the others. Thus, it is important for researchers to account for variability across subjects in data analyses.

I emphasize the “long tradition” part because clearly the Editorial Board does not just mean this effort to nibble around the edges. It is going straight at some long used practices that they think need to change.
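
To make the statistical point concrete: one common way to "account for variability across subjects" when many neurons come from a handful of animals is a mixed-effects model with animal as a grouping factor, rather than treating every neuron as an independent n. A minimal sketch of that idea (my illustration, not anything prescribed by the editorial), assuming a tidy table of per-neuron measurements with hypothetical column names:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical tidy data: one row per recorded neuron, several neurons per animal.
df = pd.DataFrame({
    "firing_rate": [4.2, 5.1, 3.8, 6.0, 5.5, 7.1, 6.4, 2.9, 3.5, 4.0, 6.8, 7.3],
    "condition":   ["ctl", "ctl", "ctl", "drug", "drug", "drug",
                    "drug", "ctl", "ctl", "ctl", "drug", "drug"],
    "animal_id":   ["A1", "A1", "A2", "A1", "A2", "A2",
                    "A3", "A3", "A4", "A4", "A3", "A4"],
})

# The "long tradition": every neuron counted as an independent observation.
naive = smf.ols("firing_rate ~ condition", data=df).fit()

# Mixed model: a random intercept per animal absorbs animal-to-animal variability,
# so the condition effect is not inflated by pseudo-replication within animals.
mixed = smf.mixedlm("firing_rate ~ condition", data=df, groups=df["animal_id"]).fit()

print(naive.summary())
print(mixed.summary())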

There was a long and very good Twitter thread that dealt in part with unreliability related to when one chooses to conduct behavioral tasks in rodents with respect to their daily light cycle. As a reminder, rodents are nocturnal and are most active (aka "awake") in the dark. Humans, as a reminder, are not. So, as you might imagine, there is a lot of rodent research (including behavioral research) that fails to grasp this difference and simply runs the rats in their light cycle. Also known as their inactive part of the day. Aka "asleep".

I am being totally honest when I say that the response has been astonishing to me. The pushback!

It's totally surprising that we not only got a lot of "it doesn't matter" responses but also a lot of implication that running in the light cycle is actually better (without quite saying so directly). I'm not going to run down everything but players include @svmahler, @StephenMaren, @aradprof, @DennisEckmeier, @jdavidjentsch, and @sleepwakeEEG.

There are just too many ludicrous things being said to characterize them all. But one species of argument is "it doesn't matter [for my endpoint]". The last part is implied. But early in this thread I posted a link to my prior post which discusses two of my favorite papers on this topic. Scheving et al. (1968) showed a four-fold difference in mortality rate after a single dose of amphetamine depending on when it was administered. Roberts and colleagues showed that cocaine self-administration changes all across the day in a very nice circadian pattern. I also noted a paper I had discussed very indirectly in a post on contradicting your own stuff. Halberstadt and colleagues (2012) played around with some variables in a very old model from the Geyer lab and found that time of day interacted with other factors to change results in a rat locomotor assay. I mean c'mon, how many thousands of papers use locomotor assays to assess psychomotor stimulant drugs?

There’s some downshifting and muttering in the tweet discussion about “well if it doesn’t matter who cares” but nobody has started posting published papers showing where light cycle doesn’t matter for their assays (as a main factor or as an interaction). Yet. I’m sure it is just an oversight. Interestingly the tone of this seems to be arguing that it is ridiculous to expect people to do their rat assays in reverse light unless it is proven (I guess by someone else?) that it changes results.

This, my friends, is very much front and center in the "reproducibility crisis" that isn't. Let us return to the above comment at J Neuro about "long traditions". Do you know how hard it is to fight long traditions in scientific subareas? Sure you do. Trying to get funding for, or to publish, studies that deal with the seemingly minor choices that have been made for a long time is very difficult. Boring and incremental. Some of these things will come out to be negative, i.e., it won't matter what light cycle is used. Good luck publishing those! It's no coincidence that the aforementioned Halberstadt paper is published in a very modest journal. So we end up with a somewhat random assortment of some people doing their work in the animals' inactive period and some in the active period. Rarely is there a direct comparison (i.e., within lab). So who knows what that choice contributes…until you try to replicate it yourself. Wasting time and money and adding potential interactions…very frustrating.

So yes, we would like to know it all, kind of like we'd like to know everything in male and female animals. But we don't. The people getting all angsty over their choice to run rodents in the light next tried to back and fill with a "can't we all get along" type of approach that harmonizes with this sentiment. They aren't wrong, exactly. But let us return to the J Neuro Editorial effort on best practices. There IS a best option here, if we are not going to do it all. The choice of which condition is your default and which one you merely check is not a neutral one. And for behavioral experiments that are not explicitly looking at sleepy rats or mice, the best option is running in their active cycle.

There is lots of fun ridiculousness in the thread. I particularly enjoyed the argument that because rats might be exposed briefly to light in the course of trying to do reverse-cycle experiments, we should just default to light cycle running. Right? Like if you walk from the light into a darkened room you suddenly fall asleep? Hell no. And if you are awakened by a light in your face in the middle of the night you are suddenly as awake as if it were broad noon? HAHAHHAHAA! I love it.

Enjoy the threads. Click on those tweeter links above and read the arguments.

__
Roberts DC, Brebner K, Vincler M, Lynch WJ. Patterns of cocaine self-administration in rats produced by various access conditions under a discrete trials procedure. Drug Alcohol Depend. 2002 Aug 1;67(3):291-9. [PubMed]

Scheving LE, Vedral DF, Pauly JE. Daily circadian rhythm in rats to D-amphetamine sulphate: effect of blinding and continuous illumination on the rhythm. Nature. 1968 Aug 10;219(5154):621-2. [PubMed]