By now most of you are familiar with the huge plume of vapor emitted by a user of an e-cigarette device on the streets. Maybe you walked through it and worried briefly about your second-hand vape exposure risk. Some of you may even have been amused to hear your fellow parents tell you with a straight face that their kids “only vape the vehicle for the flavor”. Sure. Ahem.

Nicotine is one thing, but there is also a growing trend to use e-cigarettes to vape marijuana and, allegedly, stimulants such as flakka (alpha-PVP).

As with many emerging drug trends, it can be difficult to put solid, peer-reviewed epidemiology on the table to verify these behaviors.

A recent paper reports some initial estimates of these practices among middle- and high-school students.

High School Students' Use of Electronic Cigarettes to Vaporize Cannabis. Morean ME, Kong G, Camenga DR, Cavallo DA, Krishnan-Sarin S. Pediatrics. 2015 Oct;136(4):611-6. doi: 10.1542/peds.2015-1727. Epub 2015 Sep 7. [PubMed]

The authors surveyed 5 high schools and 2 middle schools in Connecticut in the spring of 2014. Apparently insufficient middle school data were obtained, so the paper focuses on the high school respondents only.

There were three key questions for the purposes of assessing behavior rates. Students were classified as "never used" or "lifetime used" (having tried at least once) for e-cigarette use, for cannabis use (any method) and for cannabis use with an e-cigarette device.

Out of the total sample of 3847 HS students who completed the entire survey (52% female), about 5.4% had used an e-cigarette to self-administer cannabis. If, however, the sample was limited to those who had ever used an e-cigarette, then 18% had used one to administer cannabis. Among lifetime cannabis users the rate was 18.4%, and among dual e-cigarette and cannabis users, 26.5%.
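
To make the arithmetic explicit: the numerator is the same group of students throughout (those who ever vaped cannabis with an e-cigarette), and each higher percentage simply reflects a narrower denominator. Here is a minimal sketch with counts back-calculated from the reported percentages (the paper reports the exact ns; these are approximations):

```python
# Approximate counts reconstructed from the reported percentages; illustrative only.
total = 3847
vaped_cannabis = round(0.054 * total)                 # ~208 students ever vaped cannabis

# Same numerator, progressively narrower denominators:
ecig_ever_users = round(vaped_cannabis / 0.18)        # ~1156 lifetime e-cigarette users
cannabis_ever_users = round(vaped_cannabis / 0.184)   # ~1130 lifetime cannabis users
dual_users = round(vaped_cannabis / 0.265)            # ~785 dual e-cig + cannabis users

print(vaped_cannabis, ecig_ever_users, cannabis_ever_users, dual_users)
```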

So while the majority of high school students who have ever tried cannabis have never tried using an e-cigarette to dose themselves, nearly 20% is a sizeable minority.

As always, it will be most interesting to see where these trends go and how they extend to older user groups. It could be that it is something that kids try and abandon (perhaps due to not learning the different inhalation topography necessary for the desired high, as with nicotine). It may be that older users are loath to change their established patterns or see no advantages to e-cigarettes. I anticipate that solid data on these trends will be slow to emerge but I'll be keeping an eye out.


Relatedly, the research community has been responding to this trend, and I wanted to draw two new papers to your attention.

Marusich and colleagues report from the Wiley group at RTI that they have a new model of flakka (and methamphetamine) delivery that increases locomotor activity and induces place preference in mice.

Pharmacological Effects of Methamphetamine and Alpha-PVP Vapor and Injection. Julie A. Marusich, Timothy W. Lefever, Bruce E. Blough, Brian F. Thomas, Jenny L. Wiley, 2016, Neurotoxicology, doi:10.1016/j.neuro.2016.05.015

Nguyen and colleagues report from the Taffe group at TSRI that they have a new model of THC delivery that induces hypothermia, hypolocomotion and anti-nociception in rats.

Inhaled delivery of Δ9-tetrahydrocannabinol (THC) to rats by e-cigarette vapor technology. Jacques D. Nguyen, Shawn M. Aarde, Sophia A. Vandewater, Yanabel Grant, David G. Stouffer, Loren H. Parsons, Maury Cole, Michael A. Taffe, 2016, Neuropharmacology, doi:10.1016/j.neuropharm.2016.05.021

I am starting to suspect that the Scientific Premise review item will finally communicate overall excitement/boredom to the applicant. This will be something to attend to closely when deciding whether to revise an application or just to start over.

superkash started it on Twitter and then amplified it; there was a bit of chatter and then eventually AListScientist asserted:
https://twitter.com/AListScientist/status/736240678557093889

First, I addressed Independence of an NIH Grant PI to some extent way back in 2007, reposted in 2009.

I, as well as several other colleagues who review grants, have noticed a seemingly sharp uptick in the number of applications coming in from PIs who are more “transitioning” than “transitioned”. PIs whose job titles might be something other than “Assistant Professor” and ones who are still in or around the same laboratory or research group in which they have done a big chunk of postdoctoral work. In extreme cases the PI might still be titled “Postdoc” or have trained in the same place essentially since graduate school!

Readers of this blog might conclude that this trend, which I've been noticing for at least the past 3-4 rounds, delights me. And to the extent that it represents a recognition of the problems with junior scientists making the career transition to independence, this does appear a positive step. To the extent that it removes artificial barriers blocking the next generation of scientists- great.

The slightly more cynical view expressed by colleagues and, admittedly, myself is that this trend has been motivated by IC Program behavior both in capping per-PI award levels and in promoting grant success for New Investigators. In other words, the well-established PIs with very large research groups are thinking that grants for which they would otherwise be the PI will now be more successful with some convenient patsy long-term postdoc at the helm. The science, however, is just the same old stuff of the established group and PI.

I surmise that the tweeting of @superkash was related to this conundrum. I would suggest to newcomers to the NIH system that these issues are still alive and well and contribute in various ways to grant review outcome. We see very clearly in various grant/career related discussions on twitter, this blog and commentary to various NIH outlets that peer scientists have strong ideas on categories of PI that deserve or don't deserve funding. For example, in the recent round of comments on CSR's Peer Review website, commenters suggest we should keel the yuge labs, keel the ESIs, keel the riffraff noobs and save the politically disconnected. The point being that peer reviewers come with biases for and against the PI(s) (and to a lesser extent the other investigators).

The fact that the Investigator criterion is one of the five biggies (and there is no official suggestion that it is of any lesser importance than Approach, Significance or Innovation) permits (and one might say requires) the reviewers to exercise these biases. It also shows that AListScientist's apparent belief that Investigators are not to be evaluated because the applicant University has certified them is incorrect.

The official CSR Guidance on review of the Investigator criterion is posed as a series of questions:

Are the PD/PIs, collaborators, and other researchers well suited to the project? If Early Stage Investigators or New Investigators, or in the early stages of independent careers, do they have appropriate experience and training? If established, have they demonstrated an ongoing record of accomplishments that have advanced their field(s)? If the project is collaborative or multi-PD/PI, do the investigators have complementary and integrated expertise; are their leadership approach, governance and organizational structure appropriate for the project?

well suited
appropriate experience
Right there you can see where the independence of the PI might be of interest to the reviewer.

have they demonstrated an ongoing record of accomplishments
We want to know what they personally have accomplished. Or caused to be accomplished, if you want to natter about PIs not really doing hands-on science. The point is, can this PI make the proposed studies happen? Is there evidence that she has done so before? Or is there merely evidence that he has existed as glorified hands in the PI's lab up to this point in time?

are their leadership approach, governance and organizational structure appropriate for the project?

Can they lead? Can they boot all the tails hard enough to get this project accomplished? I say that this is an entirely appropriate consideration.

I hope you do as well, and I would be interested to hear a counterargument.

I suspect that most of the pushback on this comes from the position of thinking about the Research Assistant Professor who IS good enough. Who HAS operated more or less independently and led projects in the SuperLab.

The question for grant review is, how are we to know? From the record and application in front of us.

__
I am unable to leave this part off: If you are a RAP or heading to be one as a mid to late stage postdoc, the exhortation to you is to lay down evidence of your independence as best you are able. Ask Associate Professor peers that you know what possible steps you can take to enhance the optics of you-as-PI on this.

Well this is certainly exciting news!

Jeremy Berg, a biochemist and administrator at the University of Pittsburgh (Pitt) in Pennsylvania, will become the next editor-in-chief of Science magazine on 1 July. A former director of the National Institute of General Medical Sciences (NIGMS) who has a longstanding interest in science policy, Berg will succeed Marcia McNutt, who is stepping down to become president of the National Academy of Sciences.

I am a big fan of Jeremy Berg and his efforts to use data to drive policy when heading one of the NIH ICs and his efforts as a civilian to extract NIH grant data via FOIA requests for posting on his DataHound blog.

I look forward to a new era at Science magazine with an EIC who prefers that institutions make their decisions based on data and that they be as transparent as possible about their processes.

The big news of the day is that Santa Cruz Biotech has been punished for their malfeasance.

Buzzfeed News reports:

After years of allegations of mistreated research goats and rabbits, a settlement agreement (pdf) announced late on Friday will put Santa Cruz Biotechnology out of the scientific antibody business. The company will also pay a $3.5 million fine, the largest ever issued for this type of violation.

The settlement is only three pages so go ahead and read it. It is pretty much to the point.

Santa Cruz Biotech neither admits nor denies the allegations, blah, blah, but it is settling. They are to be penalized $3.5 million, payable by the end of May, 2016. Their Animal Welfare Act registration is revoked effective Dec 31, 2016. They will not use any inventory of the blood or serum they have on hand collected prior to Aug 21, 2015 to make, sell, transport, etc. anything from May 20, 2016 to Dec 31, 2016 (after which they still cannot, I assume, since the license will be revoked). They agree to cease all activity as a research facility and will request cancellation of their registration with APHIS as such as of May 31, 2016.

I don't know how easy it will be for the overall company to get around this by starting up some other entity, possibly offshore, but it sure as heck looks like Santa Cruz Biotech is out of business.

Hoo-ray!!

There are several specific allegations of animal use violations under the Animal Welfare Act at play. But for me there was one really big deal issue, and I assume this was why the hammer came down so hard and why Santa Cruz Biotech decided they had no choice but to settle in this manner.

As Nature reported in early 2013, Santa Cruz Biotech hid an animal facility from Federal inspectors.

A herd of 841 goats has kicked up a stir for one of the world’s largest antibody suppliers after US agricultural officials found the animals — including 12 in poor health — in an unreported antibody production facility owned by California-based Santa Cruz Biotechnology.

“The existence of the site was denied even when directly asked” of employees during previous inspections, according to a US Department of Agriculture (USDA) report finalised on 7 December, 2012. But evidence gathered on a 31 October inspection suggested that an additional barn roughly 14 kilometres south of the company’s main animal facility had been in use for at least two and a half years, officials said.

This is mind-bogglingly bad, in my view. Obviously criminal behavior. The Nature bit described this as "another setback". To me this should have been game over right here. They were obviously trying to cover up misuse of animals, so my thought is that even if the cover-up worked and you can't actually observe the misuse, well, the "get Capone on taxes even if you can't prove the crime" theory applies.

But then there was more. In the midst of all the inspecting and reporting and what not….

In July 2015, the major antibody provider Santa Cruz Biotechnology owned 2,471 rabbits and 3,202 goats. Now the animals have vanished, according to the US Department of Agriculture (USDA).

the company seems to have done away with its entire animal inventory. When the USDA inspected the firm’s California facility on 12 January, it found no animal-welfare violations, and listed “no animals present or none inspected”. USDA spokesman Ed Curlett says that no animals were present during the inspection.

The fate of the goats and rabbits is unclear. The company did not respond to questions about the matter, and David Schaefer, director of public relations for the law firm Covington & Burling in Washington DC, which is representing Santa Cruz Biotechnology, declined to comment on the animals’ fate.

This sounds like an outrage, I know. But the bottom line is that a company in good standing with animal use regulatory authorities could in fact decide to euthanize all of its animals. It could decide to transfer or sell them to someone else under the appropriate regulations and procedures. It is really suspicious that the company won't say what it did with the animals, but still.

It’s the concealment of the animal facility mentioned in the Dec 7, 2012 report that is the major violation in my view. They deserve to be put out of business for that.

One issue I’ve heard raised is that some PIs like to use salary differentials to reward the “good postdocs” with bonus pay.

Given the behaviorist education that lurks in my background, I am theoretically* in support of this notion.

The new salary rules may minimize such flexibility in the future.

Are you aware of labs in which merit of postdocs as interpreted by the PI leads to salary differentials?

Is this a legitimate complaint about the overtime rules?

Will PIs use the permission to work overtime (and be paid for it) as a workaround for merit pay?
__
*Given my distaste for workplace bias and desire to be a fair manager, I have never used merit to decide postdoc pay. I stick to NRSA schedules and to institutional adjustments as appropriate.

In a piece on HuffPo, NIH Director Francis Collins announces the NIH response to Obama’s new rules on overtime for salaried employees. Collins:

Under the new rule, which was informed by 270,000 public comments, the threshold will be increased to $47,476 effective December 1, 2016. ….In response to the proposed FLSA revisions, NIH will increase the awards for postdoctoral NRSA recipients to levels above the threshold.

“levels”. Meaning, presumably the entire scale will start around $47.5K and move upward with years of postdoctoral experience, as the NRSA scale usually does.

What about the larger population of postdocs that are paid from non-NRSA funds, Dr. Collins?

..we recognize that research institutions that employ postdocs will need to readjust the salaries they pay to postdocs that are supported through other means, including other types of NIH research grants. While supporting the increased salaries will no doubt present financial challenges to NIH and the rest of the U.S. biomedical research enterprise, we plan to work closely with leaders in the postdoc and research communities to find creative solutions to ensure a smooth transition.

Imprecise and highly disappointing when it comes to the postdocs supported on “other types of NIH research grants”. This would have been a great opportunity to state that the NIH expects any postdocs paid from RPGs to be on the NRSA scale, wouldn’t it? Most postdocs are supported on NIH grants. This Rock Talk post shows in FY2009 something like 11,000 basic biomed postdocs on Federal research grants and only 1,000 on Federal fellowships and training grants (and ~7,800 on nonFederal support). So Francis Collins is talking the happy talk about 10% of the postdocs who work for him and throwing 90% into the storm.

The OER head, Michael Lauer, has a few more interesting points on the Open Mike blog.

Institutions that employ postdocs through non-NRSA support can choose how to follow the new rule. They may choose to carefully track their postdocs’ hours and pay overtime. Or, keeping with the fact that biomedical research – as in many professional and scientific careers – does not fit into neatly defined hourly shifts, institutions can choose to raise salaries to the new FLSA salary threshold or above it, if they do not yet pay postdocs at or above that level.

This would imply that Dr. Collins’ supposed plan to “work closely with” and “ensure a smooth transition” is more realistically interpreted as “hey, good luck with the new Obama regs, dudes”.

Before we get at it in the comments, a few lead off points from me:

The current NRSA scale pays 0 year postdocs $43,692, so in December the brand new postdoc will see a roughly $4,000 raise. There are currently increases on the order of $1,800 for each successive year of experience; this estimate is close enough for discussion purposes. If this yearly raise interval is maintained, we can expect to see that same $4,000 pay rise applied to every salary level. Remember to apply your local benefits rate for the cost to a grant, if you are paying your postdocs at NRSA scale from RPG funds. That could turn this into a $5,000-$6,000 cost to the grant.
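
As a back-of-the-envelope sketch of that grant-cost arithmetic (the benefits rates below are hypothetical placeholders; substitute your local rate):

```python
# Numbers from the post; the benefits rates are hypothetical placeholders.
OLD_YEAR0 = 43_692   # current NRSA year-0 postdoc salary
NEW_YEAR0 = 47_476   # new FLSA threshold, effective December 1, 2016

raise_amount = NEW_YEAR0 - OLD_YEAR0    # $3,784, i.e. "roughly $4,000"
for benefits_rate in (0.25, 0.50):      # hypothetical local benefits rates
    cost_to_grant = raise_amount * (1 + benefits_rate)
    print(f"benefits {benefits_rate:.0%}: ~${cost_to_grant:,.0f} per postdoc per year")
# Using the rounded $4,000 figure, this is the $5,000-$6,000 range cited above.
```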

Postdocs getting paid more is great. Everyone in science should be paid more but there is something specific here. Postdocs frequently work more than 40 h per week for their salaried positions. This is right down the middle of the intent of Obama’s change for the overtime rules. He is right on this. Period.

With that said, there is a very real disconnect here between the need to pay postdocs more and the business model which funds them. As mentioned above, 90% of Federally funded postdocs are supported by research grants, and 10% on fellowships or traineeships. (A population almost 8 times as large as the latter are supported by nonFederal funds- the percentage of these working on Federal research projects is likely to be substantial.) A grant may have one or two postdocs on it so adding another $5,000-$10,000 per year isn’t trivial. Especially since the research grant budgets are constrained in a number of ways.

First, in time. We propose grants in a maximum of 5 year intervals but often the budget is designed one or two years prior to funding. These grant budgets are not supplemented in the middle of a competitively-awarded interval just because NRSA salary levels are increased. Given the way NRSA raises have been handed down unpredictably over the years, it is already the case that budgets are stretched. Despite what people seem to think (including at NIH), we PIs do not pad the heck out of our proposed research budgets. We can't. Our peers would recognize it on review and ding us accordingly.

Second, grants are constrained by the modular budgeting process which limits direct costs to $250,000 per year. This is a soft and nebulous limit which depends on the culture of grant design, review and award. Formally speaking, one can choose a traditional budget process at any time if one needs to request funds in excess of $250,000 per year. Practically speaking, a lot of people choose to use the modular budget process. For reasons. The purchasing power of the full modular budget has been declining for 15 years and there is no sign of a change in the expectations for per-grant scientific output.

Third, grant budgets are often limited by reductions to the requested budget that are imposed by the NIH. These can be levied upon original funding of the award or upon the award of each of the annual non-competing intervals of funding. They can often range up to 10%; for argument's sake let's keep that $25,000 figure in mind when assessing the impact of such a reduction on paying the salary of a staff member, such as a postdoc. Point being, it's a big fraction of a salary. This new postdoc policy isn't going to result in fewer cuts or shallower cuts. Believe me.

I will be watching the way that local Universities choose to deal with the new policy with curiosity. I think we all see that trying to limit postdocs to 40 h a week of work so as to avoid raising the base salary is a ridiculous plan*. The other competitive motivations will continue to drive some postdocs to work more. This will put Universities (and PIs) in the extremely distasteful position of creating this elaborate fiction about working hours.

One potential upside for the good PI, who is already maintaining postdocs at NRSA levels even when funded from the RPG, is that it will force the bad PIs into line. This should narrow the competitive disadvantage that comes with trying to treat your postdocs well.

Final point. This will take away jobs. Fewer postdocs will be hired. Whether this is good or bad….well, opinions vary. But the math is unmistakable.

[UPDATE: The modular budget grant limit of $250,000 was established for R01s in FY2000, when the NRSA 0 year postdoc salary was $26,916. That is 10.8% of the direct costs of a full modular R01. In FY2017, when this new NRSA adjustment takes effect, the 0 year postdoc will be 19% of the direct costs of a full modular R01. In short, the postdoc now consumes 76% more of a full modular R01 than the postdoc did in FY2000.]
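
A quick sketch verifying the update's arithmetic:

```python
# Year-0 postdoc salary as a share of a full modular R01 (direct costs).
MODULAR_CAP = 250_000   # per-year direct cost cap, unchanged since FY2000
fy2000 = 26_916         # NRSA year-0 postdoc salary in FY2000
fy2017 = 47_476         # new FLSA-driven salary floor

share_2000 = fy2000 / MODULAR_CAP    # ~10.8%
share_2017 = fy2017 / MODULAR_CAP    # ~19.0%
print(f"{share_2000:.1%} -> {share_2017:.1%}, up {share_2017 / share_2000 - 1:.0%}")
# 10.8% -> 19.0%, up 76%
```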
__
*It is, however, a failed opportunity to attempt to normalize academic science’s working conditions. I see no reason we shouldn’t take a stab at enforcing a 40 h work week in academic science, personally. Particularly for the grad student and post-doc labor force who are realistically not very different from the technicians who do, btw, enjoy most labor protections.

[Figure: Journal Impact Factor trends through 2014 for the Journal of Neuroscience, Addiction Biology, Biological Psychiatry and Neuropsychopharmacology]

I select these journals for comparison for a reason, of course. First, I'm in the addiction fields and Addiction Biology tops the JIF list of ISI Journal Citation Reports for the subcategory of Substance Abuse. Second, Biological Psychiatry and Neuropsychopharmacology publish a lot of behavioral pharmacology, another superset under which my work falls.

The timeline is one of convenience; do note that I was in graduate school long before this.

When I entered graduate school, it was clear that publishing in the Journal of Neuroscience was considered something special. All the people presenting work from the platform at the Annual Meeting of the SfN were publishing relentlessly in JNeuro. People with posters drawing a crowd five people deep and spilling over the adjacent posters in an arc? Ditto.

I was in graduate school to study behavior, first, and something about the way the body accomplished these cool tasks second. This is still pretty much true, btw. For various reasons, I oriented toward the chemical communication and information transmission processes of the brain as my favored level of analysis. In short, I became a behavioral pharmacology person in orientation.

In behavioral pharmacology, the specificity of the analysis depends on three overarching factors. First, the components of the nervous system which respond to given drug molecules. Second, the specificity with which any given exogenous drug manipulation may act. Third, the regional constraints under which the drug manipulation is applied. By the time I entered graduate school, the scope of manipulations was relatively well developed. Sure, not all tools ended up having exactly the specificity that they were assumed to have. New receptor and transporter and intracellular chemical recognition sites were discovered frequently. Still are. But on the whole, we knew a lot about the interpretive space within which new experiments were being conducted.

I contrast this with lesion work. Because at the time I was in graduate school, there was another level of analysis that was also popular- the brain lesion. This involved a set of techniques in which regions of the brain were surgically deactivated or removed as the primary manipulation. The interpretive space tended to include fierce debate over the specificity with which the lesion had been produced. The physical area removed was rarely consistent in extent even within one study. Different approaches to the target might entail various collateral damages that were essentially ignored within a paper. The regions that were ablated contained, of course, a multitude of neuronal and glial subtypes and occasionally axonal tracts that were just passing through the neighborhood. Specificity was, in a word, poor.

I noticed very early in my days of grinding reading of my areas of interest that the Journal of Neuroscience just LOOOOOOOVED them a lesion study. And absolutely hated behavioral pharmacology.

I was, for a time, dismayed.

I couldn't believe it. The level of confidence in the claims relative to the experimental evidence was ridiculously inflated for lesions versus pharmacology. The designs were less comprehensive and less well controlled. The inconvenient bits of evidence provided early on were entirely forgotten in a later rush to claim lesion/behavior impairment specificity. The rapid-fire exchange of data in publications from the competing labs was exciting but really pointed out the flaws in the whole premise.

At the very least, you could trade one level of uncertainty in the behavioral pharmacology for an equally troublesome uncertainty in the lesion world.

It boggled my mind that one of these technique domains and levels of analysis was considered The Awesome for the flagship journal of the very prestigious and large Society for Neuroscience and the other was considered unworthy*.

Particularly when I would see the broad stretch of interpretive domains that enjoyed space and an audience at the Annual Meeting of the Society for Neuroscience. It did not escape my attention that the SfN was delighted to take dues and Annual Meeting fees from people conducting a whole host of neuroscience investigations (far, far beyond the subject of this post, btw. I have another whole rant on the topic of the behavioral specificity and lack thereof.) that would never be considered for publication in J Neuro on a categorical basis.

It has been a long time since my dawning realization of these issues and I have survived just fine, so far, doing the things that interest me in science. I may have published work once or twice in J Neuro but I generally do not, and cannot. They are still no fans of what I think is most interesting in science.

It turns out that journals that are fans of behavioral pharmacology (see Figure above) do publish some of the stuff that I think is most interesting. They are accepting of the levels of analysis that are most interesting to me, in addition to considerable overlap with the J Neuro-acceptable analyses of the present day. And as time has gone by, the JIF of these journals has risen while that of J Neuro has fallen. Debate the reasons for this as you like; we all know there are games to be played to change the JIF calculation. But ultimately, papers are cited or not, and this has a role in driving the JIF.
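
For reference, the calculation being gamed is simple arithmetic. A minimal sketch of the standard two-year JIF definition (the generic JCR formula, nothing specific to these journals):

```python
# Two-year Journal Impact Factor: citations received in year Y to items
# published in years Y-1 and Y-2, divided by the number of "citable items"
# (articles and reviews) the journal published in Y-1 and Y-2.
def two_year_jif(citations_to_prior_two_years: int, citable_items: int) -> float:
    return citations_to_prior_two_years / citable_items

# Hypothetical illustration: 30,000 citations to 5,000 citable items.
print(two_year_jif(30_000, 5_000))   # 6.0
```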

I also watch the JIF numbers for a whole host of journals that publish a lot more pedestrian work than these journals do. The vast majority are on slight upward trends. More science is being published and more citations are available for distribution, so this makes a lot of sense.

J Neuro tends to stand out as the only one on a long and steady downward trend.

If J Neuro doesn't halt this slide, it will end up down in the weeds of the 3-5 JIF range pretty soon. It will have a LOT more company down there. And its pretensions to being the venue for the very best neuroscience work will be utterly over.

I confess I am a little bit sad about this. It is very hard to escape the imprinting of my undergraduate and graduate school education years. Not too sad, mind you; I definitely enjoy the schadenfreude of their demise.

But I am a little sad. This Journal is supposed to be awesome in my mind. It still publishes a lot of good stuff. And it deserves a lot of credit for breaking the Supplemental Materials cycle a few years ago. I still like the breadth and excitement of the SfN Annual Meeting which gives me a related warm fuzzy for the Journal.

But still. If they go down they have nothing but themselves to blame. And I'm okay being the natterer who gets to sneer that he told 'em so.

__
*There is an argument to be made, one that is made by many, that the real problem at J Neuro is not the topic domains, per se, but rather a broader issue of the insider club that runs SfN and therefore the Journal**. I am not sure I really care about this too much because the result is the same.

**One might observe that publications which appear to be exceptions to the technique-domain rules usually come with insider-club authors.

Thought of the Day

May 13, 2016

I think I have made incremental progress in understanding you all “complete story” muppets and in understanding the source of our disagreement.

There are broader arcs of stories in scientific investigation. On this I think we all agree.

We would like to read the entire arc. On this, I think, we all agree.

The critical difference is this.

Is your main motivation that you want to read that story and find out where it goes?

Or is your main motivation that you want to be the one to discover, create and/or tell that story, all by your lonesome, so you get as much credit for it as possible?

While certainly subject to scientific ego, I conclude that I lean much more toward wanting to know the story than you “complete story” people do.

Conversely, I conclude that you "shows mechanism", "complete story" people lean more towards your own ego burnishing for participation in telling the story than you do towards wanting to know how it all turns out as quickly as possible.

There’s a new post up at The Ideal Observer.

Many times you find people talking about how many papers a scientist has published, but does anyone seriously think that that is a useful number? One major factor is that individual researchers and communities have dramatically different ideas about what constitutes a publication unit.

Go read and comment.

In my world, when you are about to conduct a between-groups study you do what you can to ensure that there is nothing about the group assignment that might produce a result because of this assignment, rather than your Group treatment.

Let’s say we are using the Hedgerow Dash model of BunnyHopping. If you test a population of 16 Bunnies for their speed, you are going to find some are faster and some are slower on a relatively consistent basis. So if you happen to put the 8 fastest ones in the Methamphetamine group and the 8 slowest ones in your Vehicle group, you are potentially going to have an apparent effect of Drug Treatment that is really associated with individual differences in Hedgerow Dash performance.

There are two basic ways to deal with this.

The first is random assignment from a relatively homogeneous pool of subjects. For example, you order all the Bunnies from the vendor in one large group and treat them all identically right up until you assign them to Groups. The idea is that you are unlikely to assign, by chance, Bunnies most likely to produce one particular category of outcome (independent of the treatment) into one Group and those destined for the opposite outcome in another Group.

The second is balanced assignment. For this, you are likely taking your homogeneous pool of Bunnies and testing them on a key variable or two. The individual differences that may potentially produce an apparent result where none exists can thereby be directly minimized. So perhaps you run a pre-test for assignment purposes. Maybe you use a loud noise as the stimulus instead of Coyote pee, or maybe you've found that Bobcat pee can work. Baddaboom, baddabing, you can rank your Bunnies on Hedgerow Dash speed and assign them to groups such that the starting means are equivalent, as in the sketch below.
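
A minimal sketch of what that balanced assignment might look like, assuming a 16-Bunny cohort and an ABBA counterbalancing scheme down the ranked pre-test scores (one common approach, not the only one):

```python
import random

def balanced_assignment(pretest_speeds):
    """Rank subjects on the pre-test, then deal them out in ABBA order
    down the ranking so the two groups start with equivalent means."""
    ranked = sorted(pretest_speeds, reverse=True)
    group_a, group_b = [], []
    for rank, speed in enumerate(ranked):
        # ABBA pattern: ranks 0,3,4,7,... -> group A; ranks 1,2,5,6,... -> group B
        (group_a if rank % 4 in (0, 3) else group_b).append(speed)
    return group_a, group_b

random.seed(1)
speeds = [random.gauss(10.0, 2.0) for _ in range(16)]   # 16 hypothetical Bunnies
vehicle, meth = balanced_assignment(speeds)
print(sum(vehicle) / 8, sum(meth) / 8)   # starting group means should be close
```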

In my world of behavioral pharmacology, the random assignment approach is the baseline. If you don’t at least do this, you had better have a good reason. Doing balanced assignment, I would assert, is generally considered even better. A cleaner and superior design leading to more clearly interpretable outcomes.

I am looking at a reviewer comment on one of our manuscripts with disbelief.

This person appears to think that random assignment would have been "surely" better than the balanced assignment we used. Because, you see, the Reviewer asserts that exposure to Bobcat pee must surely confound the response to Coyote pee. This is despite the fact that this is a repeated measures design in which Bunnies are tested daily for longitudinal changes in Hedgerow Dash performance. With Coyote pee. The Group variable you can think of as the time of day at which they were tested, Bunnies being crepuscular and all. The focus is on this Group variable, not the assay (i.e., longitudinal Dash performance changes). Prior literature has established clearly that there are large individual differences in Dash performance, particularly over time with repeated Coyote pee exposure. The rationale for good balancing of groups is overwhelming. And yet. And yet. This reviewer is certain that random assignment would have been better.

Some days, people. Some days.

The Aims shall be Three, and Three shall be the number of Aims.

Four shalt there not be, nor Two except as they precede the Third Aim.

Five is right out.