Your Grant in Review: Competing Continuation, aka Renewal, Apps
January 28, 2016
In the NIH extramural grant funding world, the maximum duration for a project is 5 years. At the end of a 5-year interval of support it is possible to apply to continue that project for another interval. The application for the next interval is, in general, competitively reviewed alongside new project proposals in the relevant study sections.
Comradde PhysioProffe addressed the continuation application at his Ftb joint. NIAID has a FAQ page.
The NIH Success Rate data shows that RPG success rates were 16.8% in 2013 and 18.1% in 2014. Comparable rates for competing continuation RPG applications were 35% in 2013 and 39% in 2014. So you can see why this is important.
I visited these themes in a prior post. I think I covered most of the issues, but in a slightly different way.
Today I want to try to get you folks to talk about prescriptives. How should a competing continuation / renewal NIH grant application be reviewed?
Now in my experience, the continuation application hinges on past productivity in a way that a new application does not. Reviewers are explicitly considering the work that has been conducted under the support of the prior award. The application is supposed to include a list of publications that have resulted from the prior award, and to detail a Progress Report that overviews what has been accomplished. So today I will be focusing on review mostly as it pertains to productivity. For reference, Berg’s old post on the number of papers per grant dollar is here and shows an average output of 6 papers (IQR about 4-11) per $250K full modular award*.
Quoted bits are from my prior post.
Did you knock our socks off? This could be amazing ELEVENTY type findings, GlamourPub record (whether “expected” for your lab or not), unbelievably revolutionary advances, etc. If you have a record of this, nobody is going to think twice about what your Aims may have been. Probably won’t even give a hoot whether your work is a close match to the funding IC, for that matter.
We should probably separate these for discussion because, after all, how often is a panel going to recognize that a Nobel Prize type of publication has been supported by the award in the past 5 years? So maybe we should consider Glamour publications and amazing advances as two different scenarios. Are these going to push any renewal application over the hurdle for you even if the remaining items below are lacking? Does GlamMag substitute for direct attention to the experiments that were proposed or the Aims that guided the plan? In the extreme case, should we care if the work bears very little on the mission of the IC that has funded it?
Were you productive? Even if you didn’t WOW the world, if you’ve pumped out a respectable number of papers that have some discernible impact on a scientific field, you are in good shape. The more, the merrier. If you look “fabulously productive” and have contributed all kinds of interesting new science on the strength of your award(s), this is going to go down like gangbusters with the review panels. At this level of accomplishment you’d probably be safest to at least be doing stuff that is vaguely in line with the IC that has funded your work.
Assuming that Glam may not be in the control of most PIs but that pedestrian, workaday scientific output is, should this be a major credit for the continuation application? We don’t necessarily have to turn this into an LPU sausage-slicing discussion. Let’s assume a quality of paper commensurate with the kind of work that most PIs with competitive applications in that particular study section publish. Meets the subfield standard. How important should raw productivity be?
Were you productive in addressing your overall goals? This is an important distinction from the Specific Aims. It is not necessary, in my view, that you hew closely to Aims first dreamed up 7 years prior to the conclusion of the actual study. But if you have moderate, or disappointing, productivity it is probably next most-helpful that you have published work related to the overall theme of the project. What was the big idea? What was mentioned in the first three sentences of your Specific Aims page? If you have published work related to this broad picture, that’s good.
This one is tricky. The reviewers do not have the prior grant application in front of them. They have the prior Summary Statement and the Abstract as published on RePORTER. It is a decent bet the prior Aims can be determined but broader themes may or may not come across. So for the most part, if the applicant expects the reviewers to see that productivity has aligned with overarching programmatic goals, she has to tell them what those were, presumably in the Progress Report part of the continuation application. How would you approach this as a reviewer? Suppose the project wasn’t overwhelmingly productive and didn’t obviously address all of the Aims, but at least generated some solid work along the general themes. Are you going to be satisfied? Or are you going to downgrade the failure to address each Aim? What if the project had to can an entire Aim or two? Would it matter? Is getting “stuck” in a single Aim a death knell when it comes time to review the next interval of support? As a related question, what if the same exact Aim returns with the argument of “We didn’t get to this in the past five years but it is still a good idea”? Neutral? Negative? AYFK?
Did you address your original Specific Aims? …this can be a big obsession of certain reviewers. Not saying it isn’t a good idea to have papers that you can connect clearly to your prior Aims. … A grant is not a contract. It is quite natural in the course of actual science that you will change your approaches and priorities for experiments. Maybe you’ve been beaten to the punch. Maybe your ongoing studies tell you that your original predictions were bad and you need to go in a whole new direction. Maybe the field as a whole has moved on. … You might want to squeeze a drop out of a dry well to meet the “addressed Aims” criterion but maybe that money, effort and time would be better spent on a new direction which will lead to three pubs instead of one?
My original formulation of this isn’t quite right for today’s discussion. The last part is actually more relevant to the preceding point. For today, expand this to a continuation application that shows the prior work covers essentially exactly what the application proposed, with data either published or included as ready-to-submit Preliminary Data in the renewal. Maybe this was accomplished with only a few papers in pedestrian journals (Lord knows just about every one of my manuscript reviews these days gets at least one critique that calls for anywhere from 2 to 5 Specific Aims’ worth of data) so we’re not talking about Glam or fabulous productivity. But should addressing all of the Aims and most if not all of the proposed experiments be enough? Is this a credit to a competing continuation application?
It will be unsurprising to you that by this point of my career, I’ve had competing continuation applications to which just about all of these scenarios apply, save Glam. We’ve had projects where we absolutely nailed everything we proposed to do. We’ve had projects get distracted/sidelined off onto a subsection of the proposal that nevertheless generated about the same number and quality of publications that would have otherwise resulted. We’ve had low productivity intervals of support that addressed all the Aims and ones that merely covered a subset of key themes. We’ve had projects with reasonably high productivity that have….wandered….from the specifics of the awarded proposal due to things that are happening in the subfield (including getting scooped). We’ve never been completely blanked on a project with zero related publications to my recollection, but we’ve had some very low productivity ones (albeit with excellent excuses).
I doubt we’ve ever had a perfect storm of sky-high productivity, all Aims addressed and the overarching themes satisfied. Certainly I have the review comments to suggest this**.
I have also been present during review panel discussions of continuation applications where reviewers have argued bitterly over the various productivity attributes of a prior interval of support. The “hugely productive” arguments are frequently over an application from a PI who has more than one award and tends to acknowledge more than one of them on each paper. This can also involve debates about so-called “real scientific progress” versus papers published. This can be about the Aims, the overall theme, or just the sneer of “they don’t really do any interesting science”.
I have for sure heard from people who are obsessed during review with whether each proposed experiment has been conducted (this was back in the days when summary statements could be fairly exhaustive and revealed what was in the prior application to a broader extent). More generally, I have heard from reviewers who want to match publications up to the scope of the general scientific terrain described by the prior application.
I’ve also seen arguments about suggested controls or key additional experiments which were mentioned in the summary statement of the prior review, never addressed in the resulting publications and may still be a criticism of the renewal application.
Final question: Since the reviewers of the competing continuation see the prior summary statement, they see the score and percentile. Does this affect you as a reviewer? Should it? Especially if in your view this particular application should never have been funded at that score and is likely a Programmatic pickup? Do you start steaming under the collar about special ESI paylines or bluehair/graybeard insider PO backslapping?
DISCLAIMER: As per usual, I may have competing continuation applications under current or near-future review by NIH study sections. I am an interested party in how they are reviewed.
__
*This probably speaks to my point about how multi-award PIs acknowledge more than one grant on each paper. My experience has not been that people in my field view 5 papers published per interval of support (and remember the renewal application is submitted with the final year of funded support yet to go, if the project is to continue uninterrupted) as the expected value. It is certainly not viewed as the kind of fabulous productivity that of course would justify continuing the project. It is more in line with the bare minimum***. Berg’s data are per-grant-dollar of course and are not exactly the same as per-grant, but it is a close estimate. This blog post estimates “between 0.6 and 5 published papers per $100k in funding,” which works out to one to 12 per year of a full-modular NIH R01. Big range, and that high number seems nigh on impossible to me without other funding (like free trainee labor or data parasitism).
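For anyone who wants to check my conversion, here is a minimal sketch of the arithmetic. It assumes a full-modular R01 at $250K in direct costs per year; the 0.6-5 papers per $100K range is the one quoted from the linked blog post.

```python
# Back-of-envelope conversion: papers per $100K of funding -> papers per year
# on a full-modular NIH R01 (assumed here to be $250K direct costs per year).

FULL_MODULAR_DIRECTS_PER_YEAR = 250_000  # assumption: full-modular cap, direct costs only

def papers_per_year(papers_per_100k: float) -> float:
    """Scale a papers-per-$100K rate up to one year of full-modular funding."""
    return papers_per_100k * (FULL_MODULAR_DIRECTS_PER_YEAR / 100_000)

low, high = 0.6, 5.0  # range quoted in the linked blog post
print(papers_per_year(low), papers_per_year(high))  # -> 1.5 12.5, i.e. roughly "one to 12" per year
```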
**and also a pronounced lack of success renewing projects to go with it.
***I do not personally agree. At the point of submitting a competing continuation in year 4 a brand new research program (whether b/c noob PI or very new lab direction) may have really only been rocking for 2 years. And large integrated projects like a big human subjects effort may not even have enrolled all the subjects yet. Breeding, longitudinal development studies, etc – there are many models that can all take a long time to get to the point of publishing data. These considerations play….let us say variably, with reviewers. IME.
Your Grant in Review: Skin in the Game
January 25, 2016
Should people without skin in the game be allowed to review major research grants?
I mean those who are insulated from the results of the process. HHMI stalwarts, NIH intramural, national labs, company scientists…
On one hand, I see the argument that they provide needed outside opinions. To keep an insular, self-congratulating process honest.
On the other, one might observe that those who cannot be punished for bad behavior have license to be biased, jerky and driven by personal agenda.
Thoughts?
Would you prefer review by those who are subject to the funding system? Or doesn’t it matter?
Data stream
January 21, 2016
Yes, I realize the science these days is super collaborative and needs expensive tools, models and techniques to be cool.
However.
Strategically as a lab, you need to have a bread and butter data stream that you produce in house. Data that you generate, interpret, understand and publish without the input of any other lab groups. Data that is, in and of itself, capable of generating publications that meet at least the lower bound expectations of your department, subfield and whomever else is evaluating you.
This may not be the same thing over the long haul, either. Interests change. But the thing that never changes is that nobody is going to find your publication goals, demands or needs as critical as you do. And in this game, not publishing is simply not an option.
So figure out your data stream and protect it.
Papers
January 21, 2016
January is a great time to look at yourself in the mirror and ask what your plan is for improving your record of publication.
What are your usual hurdles that get in the way? What are the current hurdles?
What works to get you moving?
My biggest problem is me.
We’re at the point in my lab where available data are not really the issue; we have many dishes cooking along in parallel at most times. Something is always ready or close to being ready to serve up.
The problem is almost always the wandering of my attention and my energy to kick something over the final step to submission.
The game I have taken to playing with myself is to see how long I can go with at least one manuscript under review. I made it something like 14 mo a few years ago. Of course I then promptly fell into another extended dry spell but….
The other game I play with myself is to see how many manuscripts we can have under review simultaneously. That is, of course, much more subject to the ebb and flow of project maturation and the review process. But if we happen to have a few stacking up, sure I’ll use the extra motivation to keep my attention pegged to finishing a draft.
When all else fails there is always “We need this published in order to help get this next grant funded, aiieeeee!”
Where will your lab be in 5 years?
January 12, 2016
Scientifically, that is.
I like the answer Zoe gave for her own question.
I, too, just hope to be viable as a grant funded research laboratory. I have my desires but my confidence in realizing my goals is sharply limited by the fact I cannot count on funding.
Edited to add:
When I was a brand new Assistant Professor I once attended a career-stage talk by a senior scientist in my field. It wasn’t an Emeritus wrap-up but it was certainly later career. The sort of thing where you expect a broad sweeping presentation of decades of work focused around a fairly cohesive theme.
The talk was “here’s the latest cool finding from our lab”. I was…..appalled. I looked over this scientist’s publication record and grant funding history and saw that it was….scattered. I don’t want to say it was all over the place, and there were certain thematic elements that persisted. But this was when I was still dreaming of a Grande Arc for my laboratory. The presentation was distinctly not that.
And I thought “I will be so disappointed in myself if I reach that stage of my career and can only give that talk”.
I am here to tell you people, I am definitely headed in that direction at the moment. I think I can probably tell a slightly more cohesive story but it isn’t far away.
I AM disappointed. In myself.
And of course in the system, to the extent that I think it has failed to support my “continuous Grande Arc Eleventy” plans for my research career.
But this is STUPID. There is no justifiable reason for me to think that the Grande Arc is any better than just doing a good job with each project, 5 years of funding at a time.
Top Powerball winner fantasy (of academic scientists)
January 11, 2016
“If I win the Powerball, I can afford all kinds of new domestic help and maybe even figure out how to issue grants to my lab so that I can spend more time publishing papers.”
Top one of one Powerball winner fantasy (of most people)
January 11, 2016
Minions
January 8, 2016
I don’t care what stage of the doctoral arc you inhabit, having science minions helps move your science forward.
The more minions, the better, assuming you have the resources available to fill their time productively.
If you don’t want moar data than you can generate with your own two hands, this is not the right career for you.
I’m 14 carat……want to look good for the PI, mmm
January 6, 2016
Have you ever been in a lab with a golden-child trainee?
Was it you?
What is “neuroscience”?
January 5, 2016
At the end of December when everyone was out of the lab on vacation the Journal of Neuroscience twitterers ran an episode of Ask Me Anything, Neuroscience. I had responded to an earlier teaser on this and asked the acting Editor in Chief of the Journal of Neuroscience the question which titles this post, figuring she should know. Obviously, I shaded the question….a little.
She replied:
..which is fascinatingly imprecise. Particularly for an EIC who has to decide categorically what is and is not appropriate material for the Journal she Edits. If we were talking about the range of investigation covered by the presentations at the annual meeting of the Society for Neuroscience, this would be a great answer. The breadth of science at that meeting is tremendous and I can buy that it covers almost everything “to do with neurons”. This is not the case for the Journal of Neuroscience. Which should probably be re-named the “Journal of Some Neuroscience but not other Neuroscience”.
As you will recall, Dear Reader, I have observed on more than one occasion that as a wee graduate student trainee I realized this fact with some dismay. I was outraged! How can this type of science be okay and this other type of science not, when the only difference is the techniques involved?!??, I wondered. How can these people not see that the Emperor’s New Clothes are not better, more precise or more mechanistically insightful results, they are just different levels of analysis?
Over the *cough*cough*decades this attitude has turned to bemusement, particularly as the Journal of Neuroscience’s JIF has slid inexorably* down (currently 6.3) into just-barely-above-the-herd levels (25th in the Neuroscience category). Just ahead of such titles as Glia and Brain Behavior and Immunity. It is behind the Journal of Pineal Research, ffs! Yes, yes, JNeuro still punches above its JIF in reputational terms with the cognoscenti but there are many JIF-equivalent-or-better journal options. And after all, we all realize that the JIF still rules where it counts: when people aren’t assessing the science from an informed perspective. So the cost to those who do that other type of science involving neurons that is not acceptable for JNeuro has lessened considerably. The gains of sneaking one into JNeuro have likewise lessened. Better to try at a less technique-limited venue that has a higher JIF.
There was followup from the JNeuro twitter intern:
https://twitter.com/JNeuroscience/status/681927316193165312
and a related reply from the acting EIC.
Also particularly amusing given the place that “shows mechanism” holds in the mind of the average bio-scientist type, most certainly including neuroscientists, these days. I’d like to see an accounting of how many J Neuro articles in a given year reasonably qualify as “New observation without mechanism”. I’m betting the number is so low as to falsify this claim in any reasonable mind.
Then later there was this claim during an unrelated exchange:
Which I think is bizarre buck-passing for an Editor or Associate Editor of a Journal to engage in. At the least, it illustrates how and why it is bogus to claim “New observation without mechanism” is welcome: if one only selects reviewers who will not buy this for a second then where are we? Also, I am curious if AEs use the presumption of what reviewers might say to desk-reject said manuscripts. See also the above comments about what qualifies as “neuroscience” and whether or not certain approaches and techniques are ruled in/out at this particular journal. Speaking as a reviewer, when “appropriate for this journal” has to be recommended, I try to follow the Editorial lead and rely on what they have actually been publishing**.
In closing, I’ll point out that I write this for the current version of younger-me. Those of you who aspire some day to publish in J Neuro, because you are a proud neuroscientist and proud member of the Society for Neuroscience. You who bring your posters to the Annual Meeting and then notice, chillingly, that science like yours never seems*** to get published in J Neuro. Take heart. Leave your Imposter Syndrome behind. There are many so-called “more specialized” (that’s meant to be an insult when reviewers or AEs say that, btw) journals which have better JIFs. Get your work published there. Keep coming to the SfN meeting and chatting with the folks who appreciate what you do.
Keep on with the science that satisfies you.
And feel free to snicker about those people who do cell biology accidentally in neurons and call themselves neuroscientists.
__
*all snark aside, I do lament this. J Neuroscience is a great journal and resisted Glamming it up and JIF chasing in response to the invention of Neuron and Nature Neuroscience. It is unfortunate it is being punished for this. And of course, before the aforementioned baby Glams, it really did shine as a pinnacle for a Society published journal.
**Unless, of course, I am engaging in a rather intentional pushback along the lines of what the JNeuro EIC is suggesting, i.e., putting my marker down that I think the journal in question should be publishing a certain kind of paper.
***Yes, there will be the occasional paper that gets into a given journal. And you will think “aha, we have something very similar so let’s submit!”. Give it a try for sure. But don’t be too amped when you get desk-rejected. Often enough you will find out that the relationship between the editorial staff and the authors is slightly closer than the one you enjoy. Shrug and move on. Or, if you are a PI and you DGAF about your reputation at that particular journal, write a pointed inquiry to the AE to see what they say. I had one of these at a journal that rhymes with Serebral Kortex a while ago. The new editorial staff tried to slam the old editorial staff and basically said, well, that would never get in anymore. I was amused. And we published that paper somewhere else and moved on. As one does.
What is a scientific “observation”?
January 5, 2016
Reference to this https://t.co/hc9YYH8Myr popped up on the Twitter recently.
So what constitutes an “observation” to you?
To me, I think I’d need the usual minimum group size, say N=8, and at least two conditions or treatments to compare to each other. This could be either a between-groups or within-subject design.
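If it helps to make that concrete, here is a minimal illustrative sketch (mine, not anything from the linked reference) of those two designs with the N=8 floor:

```python
# A minimal sketch of what I'd call an "observation": N=8 per group and at least
# two conditions, compared either between-groups or within-subject.
# (Illustration only; the N=8 floor and the two designs are from the post above.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=8)   # condition A, n=8
treated = rng.normal(loc=12.0, scale=2.0, size=8)   # condition B, n=8

# Between-groups design: independent samples
print(stats.ttest_ind(control, treated))

# Within-subject design: the same 8 subjects measured under both conditions
print(stats.ttest_rel(control, treated))
```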
Dissection of sleazy, dishonest AR shillery posing as journalism
January 4, 2016
Notice what I did there? Setting a bias right from the start with click-bait headlining?
Well, that is just how a BuzzFeed piece entitled “The Silent Monkey Victims Of The War On Terror” starts.
“Victims”.
I called this piece out for being sleazy and dishonest in a tweet and the author, one Peter Aldhous, Buzzfeed News Reporter, took exception. He emailed me asking how I could possibly accuse him of being a shill for the AR agenda, asserting he has no allegiance whatsoever to animal rights and complaining about how someone as allegedly influential as me could damage his professional reputation.
So I felt I owed him an explanation.
First, I make no apology for my distaste for AR adherents. They are terrorists, yes, terrorists, and they inhabit a nihilist, anti-social ideology. Of terrorism.
Second, I’ve written a few posts about the use of animals in research (see below for Additional Reading). There is a pretty good dose of information at Speaking of Research as well. I mention this not so much to draw specifics as to show that there is information available on the web, readily searchable, for a journalist to quickly find an alternative viewpoint to the AR nonsense. That is, if they are interested in researching a story. I’ll also point out that the Science Editor at Buzzfeed is someone who spent years pounding the floors at the annual meeting of the Society for Neuroscience and has even written on the use of nonhuman primates in autism models. Again, the point is that this journalist has a route to further education and balance, if he had only chosen to use it. The piece does not reflect any such background, in my reading of it.
What I want to dissect today, however, is the way this piece by Aldhous is carefully crafted to attack nonhuman primate research, as opposed to providing a reasonable discussion of the use of animals in specific research.
The article starts with “victims” and has chosen to describe this as resulting from “The War on Terror”. Right away, we see a sleazy link between something that many Americans oppose (the Bush agenda and its framing as a war on terror) and the use of animals in research. It is a typical tactic of the AR position. If you can establish that one area of research is unneeded in the eyes of your audience then you are three quarters of the way home.
And this is AR thinking, make no bones about it. Why? Follow the logic. There are sizable swaths of Americans who disagree that we should spend public money investigating any number of health conditions. From infectious disease like HIV (although see this) to obesity to diabetes to depression to substance abuse. Simply because they do not agree that these are topics worthy of investigation. Anthrax, botulism and nerve gas are no different in this respect. Some people feel that the war on terror is overblown, the risks of a bio or chemo weapon attack are small and we should not put any public money into this topic whatsoever, from research to law enforcement.
So if you argue that your particular agenda should rule the day when it comes to research, you are saying that everyone’s agenda in a pluralistic democratic society should rule the day. This leaves us with very little science conducted and certainly no animal science. This is why I call this a bit of AR shillery. The logic leads to no animal research on any health topic.
Note, it is fine to hold that belief in a pluralistic democratic society but let us be honest about what you are about, eh? And sure, I can see that there would be some agenda so narrowly focused, so out of the mainstream, that we cannot reasonably credit it as being a legitimate concern of the American people. It should be self-evident from the support for the Bush administration’s war on terror (and our public discussion over bioterror) that this is not the case here.
Ok. But what about the converse? Is just any use of animals in research okay then? No, no it is not. Certainly, we have a cascade of federal law, federal regulation and widely adopted guidelines of behavior. We have rules against unnecessary duplication. We wrangle, sometimes at long length, over reduction and refinement of the research that uses animals. Even an apparent exemption from the full weight of the Animal Welfare Act for certain experimental species doesn’t really exempt them from oversight.
Getting back to the article, it next pursues two themes: that there are a “lot” of monkeys being used, and that they are all “suffering” and in pain. The article includes this pull quote:
“Wow, that’s a lot of monkeys,” said Joanne Zurlo of the Johns Hopkins Bloomberg School of Public Health, who studies alternatives to animal experimentation. “It’s quite disturbing.”
It is? How do we know this? How are we to evaluate this with any sort of context? How is it “disturbing” unless we have already decided we are against the use of monkeys in this, or for that matter any, research?
The piece brags about some exclusive review Buzzfeed has conducted to examine the publicly available documents, showing about 800 nonhuman primates used in “Column E” (the most painful/distressful category) US research in the 1999-2006 interval and a jump to about 1400 in the 2009-2014 interval.
Speaking of Research maintains an animal-use statistics page. The US page shows that of the species not exempted from tracking by the Helms Amendment, non-human primates account for 7% of the total (all categories of research use) in 2014. This is 57,735 individuals. Note that, given that non-human primates can be used for years if not decades in some kinds of research, this does not equate to a per-year number the way it would for a species that only lives 2 years, like a rat. But at any rate, the 600 extra [ETA: “E” category monkeys] that the Buzzfeed piece seems to be charging to the war on terror amount to only about a 1% increase in the total annual use of non-human primates.
This is “disturbing”? Again, I think this alone shows how disingenuous the piece really was. A “one percent increase in the use of monkeys for bioweapon research” doesn’t really have the same punch, does it?
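For the record, here is the back-of-envelope arithmetic behind that framing as I read it. This is a sketch only; it assumes the ~600 extra Column E monkeys are set against the 57,735 total non-human primates used in 2014, per the Speaking of Research figure above.

```python
# Rough sanity check of the "about 1%" framing (assumptions: 1400 - 800 = 600 extra
# Column E primates per year, compared against 57,735 total NHPs used in 2014).
extra_column_e = 1400 - 800          # BuzzFeed's reported jump in annual Column E use
total_nhp_2014 = 57_735              # Speaking of Research, all categories, 2014
print(f"{extra_column_e / total_nhp_2014:.1%}")  # -> about 1.0%
```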
What about other frames of reference? From the Speaking of Research page:
Scientists in the US use approximately 12-25 million animals in research, of which only less than 1 million are not rats, mice, birds or fish. We use fewer animals in research than the number of ducks eaten per year in this country. We consume over 1800 times the number of pigs than the number used in research. We eat over 340 chickens for each animal used in a research facility, and almost 9,000 chickens for every animal used in research covered by the Animal Welfare Act. For every animal used in research, it is estimated that 14 more are killed on our roads.
Or what about the fact that Malaysia culled 97,119 macaque monkeys (long-tailed, i.e. M. fascicularis, and pig-tailed, i.e. M. nemestrina; common research lab species) in 2013? Culled. That means killed, by rough means (by the reporting), without any humane control of pain or suffering. No use for them, no scientific advances, no increase in knowledge…probably not even used for food. Just…..killed. 167 times the number scored as used in bioweapons research were just eliminated in a single year in a single country.
Failing to provide these contexts, and writing a piece that is majorly focused on the number of research monkeys used for bioweapons studies is dishonest, in my view.
Okay, so what about the pain and suffering part of the piece? Well, Aldhous writes:
BuzzFeed News has calculated the number of primates used each year for what the USDA calls “Column E” experiments, in which animals experience pain or distress that is not fully alleviated with painkillers, tranquilizers, or other drugs. Because monkeys are emotionally complex creatures that are thought to experience suffering similarly to how we do, such experiments are especially controversial.
The number of primates used in these ethically fraught experiments
Notice the slant? First of all, human introspection about the “pain and suffering” of nonhumans is suspect, to say the least. Yes, including monkeys, dolphins or whathaveyou. The statement about monkeys being “emotionally complex creatures” is pure AR theology. The idea that nonhuman suffering is identical to human suffering is entirely unproven and there are large numbers of people who disagree with this characterization (see the Malaysian culling, above, for an example). If you try to get people to define terms and provide evidence you devolve into really bad eye-of-the-beholder anecdata on the one hand up against a profound lack of evidence on the other. Humans are demonstrably different from all other species we know to date. And efforts to view nonhumans as “like us” invariably involve some very convenient definitions, goal post moving, blindness to the quality or universality or ease of the human trait, etc.
Calling it “especially controversial” and “ethically fraught” is hardly even-handed journalism. Where is the balance here? The people who shout loudest about the use of monkeys being “controversial” don’t believe in any animal research. Seriously, probe them. What use of animals isn’t ethically fraught? Hammering this idea over and over throughout the piece is poisoning the well. It is acting like this is established fact that everyone agrees with. Not so. And the slant of these terms is certainly on the side of “this research is bad”. You use other terms when you want to describe a neutral disagreement of sides.
One very important point is the lie of the truncated distribution. We know perfectly well that there is a big part of the American distribution that is essentially unconcerned about animal use and animal suffering. If you know anyone who uses sticky traps to deal with unwanted household rodents…they are doing Category E research. Catch and release fishing? Ditto. People who own large dogs in city apartments and walk them just twice a day….well it isn’t Category E but it sure doesn’t sound humane to me. The point is that research and researchers do not operate in this part of the distribution. They operate in the well-regulated part of the distribution that is explicitly concerned with the welfare of animal subjects in research. Notice all the pull quotes he included from researchers seem to express caution? Obviously I can’t know how selected and cherry picked those comments were (I suspect very) but they do testify to the type of caution expressed by most, if not all, animal researchers. We are always looking to reduce and refine. And look, individual scientists may view different research priorities differently…but it is hardly fair to only present the skeptics. Where are the full throated defenders of the bioweapons research in this article? Well, they wouldn’t talk on record* due in very large part, I assert, to a well-informed skepticism that journalists ever care to be balanced on these topics.
The Aldhous piece goes on to a very sleazy sleight of hand by mentioning a violation report in which an animal research facility was cited for failing to follow care protocols. He picks out three institutions:
three institutes have dominated the most ethically contentious primate experiments: the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) at Fort Detrick, Maryland, the Lovelace Respiratory Research Institute in Albuquerque, New Mexico, and the Battelle Memorial Institute in Columbus, Ohio.
Since 2002, these three institutions have collectively used more than 6,400 Column E primates. In 2014, they accounted for almost two-thirds of the monkeys used in these experiments.
Again with the “most ethically contentious” charge. Nice. But the point of this is…what? Many bioweapons pathogens can only be studied at very high cost isolation facilities. It is good and right that there are not many of them and that they account for the majority of the animal use. It is also good and right that they are subject to regulatory oversight in case any slip ups need to be corrected, yes?
After a routine inspection in March, Lovelace was cited for failing to provide monkeys with the care that was supposed to be delivered — including intravenous fluids, Tylenol for fever, and antidiarrheal drugs.
The report shows that three animals did not receive Tylenol when they should have, and three did not receive anti-diarrheals when they should have, for 2-4 days of symptoms. There were 57 animals in a Cohort that did not receive injections of fluids but there is no indication that this resulted in any additional pain or distress, and we can’t even tell from this brief protocol language whether this was supposed to be as-needed, per veterinarian recommendation, or not. There are two additional Cohorts mentioned for which it is noted the animals were treated according to protocol, and the table in the Aldhous piece lists 431 animals used at Lovelace in 2014, probably the year to which the above citation refers. Naturally, Aldhous fails to mention these citation numbers, leaving the reader free to assume the worst. This is classic misdirection and smearing at work. Which is why I call it dishonest. “Loose stools or fever for 2 to 4 days in less than 1% of individuals” sounds more like an over-the-counter medication warning or a threshold for when to finally call the doctor to the average ear.
Aldhous next diverts into a fairly decent discussion of how animal models may or may not fully predict human outcome but I think that in the context, and with his shading, it falls short of the mark. I’m not going to step through all of his examples because there are certain fundamental truths about research.
1) If we knew the result in advance, the experiments wouldn’t be necessary. So if we sometimes find out that animal models are limited, we only come to this conclusion in the doing. There is no way to short circuit this process.
2) We use animals, even monkeys, as proxies and models. Sometimes, they are going to come up short of full prediction for human health. This does not mean they are not valuable to use as models. Again, see 1. We only find this out in the doing and most research is novel territory.
3) The overlap between animal testing and research is fuzzy in this discussion. If you want to evaluate medications, your research may not be dedicated to, or idealized for, novel discovery about the disease process itself. This doesn’t make it less valuable. Both have purposes.
4) It is dishonest to point to places where animal research failed to predict some adverse outcome of a medication in humans without discussing the many-X more potential medications that were screened out with animal models. Protection from harm is just as important as, maybe more important than, identification of a helpful medication, is it not?
So as you can see, I think this piece in Buzzfeed is written from start to finish to advance the AR agenda. It is not by any means fair or balanced. This is relatively common with journalism but that is no excuse. It is sleazy. It is dishonest. There is every reason to expect that balanced information and opinion is readily available to a journalist, even one who has no scientific background whatever.
I do not know the heart and mind of the author and as I mentioned at the outset, he protested vehemently that my take was not his intent. Which is why I have tried to focus on the piece and what was included and written. I will suggest that if Aldhous is sincere, he will read what I have written here, follow the links and take a very hard editor’s look at what he has written and the impact it has on the average reader.
__
*I don’t know the solution to this problem. A piece like this one Aldhous wrote is the type of thing that hardens attitudes. Which makes it harder for the balanced story to get out. It’s a vicious cycle and I have no idea how to break it until and unless science journalists stop with this sleazy and biased AR shillery on their own.
Additional Reading
Logothetis driven out of monkey research
UCLA scientists have been under attack for over a decade
Repost: Insightful Animal Behavior: A “Sufficiently Advanced Technology”
Aphorism
January 2, 2016
Found this on the Facebooks. It seems appropriate for a science-careers audience:
There was a farmer who grew excellent quality corn. Every year he won the award for the best grown corn. One year a newspaper reporter interviewed him and learned something interesting about how he grew it. The reporter discovered that the farmer shared his seed corn with his neighbors. “How can you afford to share your best seed corn with your neighbors when they are entering corn in competition with yours each year?” the reporter asked.
“Why sir,” said the farmer, “Didn’t you know? The wind picks up pollen from the ripening corn and swirls it from field to field. If my neighbors grow inferior corn, cross-pollination will steadily degrade the quality of my corn. If I am to grow good corn, I must help my neighbors grow good corn.”
Lab Goals for 2016
January 1, 2016
Sometweep or other mentioned career goals for the year. I don’t really set lab goals at all….too busy just keeping on with whatever is in front of me? Maybe this is a bad idea?
Dunno.
I came up with the above as an off-the-cuff response.
Anyway, now I am curious if you set goals for yourself in the academic career and science profession space?