The many benefits of the LPU
June 29, 2016
A Daniel Sarewitz wrote an opinion piece in Nature a while back arguing that the pressure to publish regularly has driven down the quality of science. Moreover, he claims to have identified
…a destructive feedback between the production of poor-quality science, the responsibility to cite previous work and the compulsion to publish.
Sarewitz ends with an exhortation of sorts. To publish much less.
Current trajectories threaten science with drowning in the noise of its own rising productivity, a future that Price described as “senility”. Avoiding this destiny will, in part, require much more selective publication. Rising quality can thus emerge from declining scientific efficiency and productivity. We can start by publishing less, and less often, whatever the promotional e-mails promise us.
Interestingly, this “Price” he seemingly follows in thought wrote in 1963, long before modern search engines were remotely conceivable, and was, as Sarewitz himself observes, “an elitist”.
Within a couple of generations, [Derek de Solla Price] said, it would lead to a world in which “we should have two scientists for every man, woman, child, and dog in the population”. Price was also an elitist who believed that quality could not be maintained amid such growth. He showed that scientific eminence was concentrated in a very small percentage of researchers, and that the number of leading scientists would therefore grow much more slowly than the number of merely good ones, and that would yield “an even greater preponderance of manpower able to write scientific papers, but not able to write distinguished ones”.
Price was worried about “distinguished”, but Sarewitz has adopted this elitism to claim that the pressure to publish is actually causing the promotion of mistaken or bad science. And so we should all, I surmise, slow down and publish less. It is unclear what Sarewitz thinks about the “merely good” scientists identified by Price and whether they should be driven away or not. Though not explicitly stated, this piece has a whiff of ol’ Steve McKnight’s complaints about the riff-raff to it.
Gary S. McDowell and Jessica K. Polka wrote in to observe that slowing the pace of publication is likely to hurt younger scientists who are trying to establish themselves.
In today’s competitive arena, asking this of scientists — particularly junior ones — is to ask them to fall on their swords.
Investing more effort in fewer but ‘more complete’ publications could hold back early-career researchers, who already face fierce competition. To generate a first-author publication, graduate students on average take more than a year longer than they did in the 1980s (Proc. Natl Acad. Sci. USA 112, 13439–13446; 2015). Introducing further delays for junior scientists is not an option as long as performance is rated by publication metrics.
One Richard Ebright commented thusly:
Wrong. All publications, by all researchers, at all career stages, should be complete stories. No-one benefits from publication of “minimum publishable units.”
This is as wrong as wrong can be.
LPU: A Case Study
Let’s take the case of W-18. It hit the mainstream media, following identification in a few human drug overdose cases, as “This new street drug is 10,000 times more potent than morphine” [WaPo version].
Obviously, this is a case for the pharmacological and substance abuse sciences to leap into action and provide some straight dope, er, information, on the situation.
In the delusional world of the “complete story” tellers, this should be accomplished by single labs or groups, beavering away in isolation, not reporting their findings on W-18 until they have it all. That might incorporate wide-ranging in vitro pharmacology to describe activity or inactivity at major suspected sites of action. Pharmacokinetic data in one small and at least one large experimental species, maybe some human data if possible. Behavioral pharmacology on a host of the usual assays for dose-effects, toxicity, behavioral domains, dependence, withdrawal, reward or drug liking, liability for compulsive use patterns, and cognitive impact with chronic use. The list goes on. And for each in vivo measure, we may need to parse the contribution of several signaling systems that might be identified by the in vitro work.
That is a whole lot of time, effort and money.
In the world of the complete-story tellers, these efforts might be going on in parallel in multiple lab groups that are duplicating each other’s work in whole or in part.
Choices or assumptions made that lead to blind alleys will waste everyone’s time equally.
Did I mention the funding yet?
Ah yes, the funding. Of course a full-bore effort on this requires a modern research lab to have the cash to conduct the work. Sometimes it can be squeezed in alongside existing projects, or initial efforts excused as the pursuit of Preliminary Data. But at some point, people are going to have to propose grants. Which are going to take fire for a lack of evidence that:
1) There is any such thing as a W-18 problem. Media? pfah, everyone knows they overblow everything. [This is where even the tiniest least publishable unit from epidemiologists, drug toxicologists or even Case Reports from Emergency Department physicians goes a loooooong way. And not just for grant skeptics. Any PI should consider whether a putative new human health risk is worth pouring effort and lab resources into. LPU can help that PI to judge and, if warranted, to defend a research proposal.]
2) There isn’t anything new here. This is just a potent synthetic opiate, right? That’s what the media headlines claim. [Except it is based on the patent description of a mouse writhing task. We have NO idea from the media and the patent if it is even active at endogenous opiate receptors. And hey! Guess what? The UNC drug evaluation core run by Bryan Roth found no freaking mu, delta or kappa opioid receptor activity for W-18!!! Twitter is a pretty LPU venue. And yet think of how much work this saves. It will potentially head off a lot of mistaken assays looking for opioid activity across all kinds of lab types. After all, the logical progression listed above is not what happens. People don’t necessarily wait for comprehensive in vitro pharmacology to be available before trying out their favored behavioral assay.]
3) Whoa, totally unexpected turn for W-18 already. So what next? [Well, it would be nice if there were Case Reports of toxic effects, eh? To point us in the right direction: are there hints of the systems that are affected in medical emergency cases? And if some investigators had launched pilot experiments in their own favored domains before finding out the results from Roth, wouldn’t it be useful to know what they have found? Why IS it that W-18 is active in writhing…or can’t this patent claim be replicated? Is there an active metabolite formed? That obviously wouldn’t have come up in the UNC assays, as they focus on the parent compound in vitro.]
Etcetera.
Science is iterative and collaborative.
It generates knowledge best and with the most efficiency when people are aware of what their peers are finding out as quickly as possible.
Waiting while several groups pursue a supposed “complete story” in parallel, only for one to “win” and be able to publish while the other ones shake their scooped heads in shame and fail to publish such mediocrity, is BAD SCIENCE.
JIF notes 2016
June 27, 2016
If it’s late June, it must be time for the latest Journal Impact Factors to be announced. (Last year’s notes are here.)
Nature Neuroscience is confirming its dominance over Neuron with upward and downward trends, respectively, widening the gap.
Biological Psychiatry continues to skyrocket, up to 11.2. All pretensions from Neuropsychopharmacology to keep pace are over; a third straight year of declines for the ACNP journal lands it at 6.4. Looks like the 2011-2012 inflation was simply unsustainable for NPP. BP is getting it done, though. No sign of a letup for the past four years. Nicely done, BP, and any of y’all who happen to have published there in the past half-decade.
I’ve been taking whacks at the Journal of Neuroscience all year, so I almost feel like this is a pile-on. But the long steady trend has dropped it below a 6, listed at 5.9 this year. Oy vey.
Looks like Addiction Biology has finally overreached with its JIF strategy. It jumped up to the 5.9 level in 2012-2013 but couldn’t sustain it: two consecutive years of declines lower it to 4.5. Even worse, it has surrendered the top slot in the Substance Abuse category. As we know, this particular journal maintains an insanely long pre-print queue, with some papers being assigned to print issues two whole calendar years after appearing online. Will anyone put up with this anymore, now that the JIF is declining and it isn’t even best-in-category? I think this is not good for AB.
A number of journals in the JIF 4-6 range that I follow are holding steady over the past several years; that’s good to see.
Probably the most striking observation is what appears to be a relatively consistent downward trend for the JIF 2-4 journals that I watch. These JIFs had generally trended upward (slowly, slowly) from 2006 or so until the past couple of years. I assumed this was a reflection of more scientific articles being published and therefore more citations available. Perhaps this deflationary period is temporary. Or perhaps it reflects the journals that I follow not keeping up with the times in terms of content?
As always, interested to hear what is going on with the journals in the fields you follow, folks. Have at it in the comments.
There should be only three categories of review outcome.
Accept, Reject and Minor Revisions.
Part of the Editorial decision-making will have to be whether the experiments demanded by the reviewers are reasonable as “minor” or not. I suggest leaning toward accepting only the most minimal demands for additional experimentation as “minor revisions” and otherwise choosing to reject.
And no more of this back and forth with Editors about what additional work might make it acceptable for the journal as a new submission either.
We are handing over too much power to direct and control the science to other people. It rightfully belongs within your lab and within your circle of key peers.
If J Neuro could take a stand against Supplemental Materials, they and other journals can take a stand on this.
I estimate that the greatest advantage will be the sharp decline in reviewers demanding extra work just because they can.
The second advantage will be that Editors themselves have to select from what is submitted to them, instead of trying to create new papers by holding acceptance at bay until the authors throw down another year of person-work.
1) Behavior is plural
2) No behavioral assay is a simple readout of the function of your favorite nucleus, neuronal subpopulation, receptor subtype, intracellular protein or gene.
Newly funded NIH PIs
June 18, 2016
It would be fascinating if NIH did audits of who gets funded with their first grant, broken down by pedigree and productivity measures.
I’d want to see categorization by number of first- and last-author pubs. Of course. Some sort of measure of productivity in the time since being appointed to the independent investigator title. Mediated by JIF.
Then pedigree by the grant wealth and productivity of the pre-independence mentors.
I wonder if you can get away with crap productivity if you are tied into the network. And if you can overcome your Outsider status by generating a ton of pubs.
I wonder how likely a newb is to be funded as the years elapse from the time of first appointment without senior author publications.
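If NIH’s data miners ever took this on, the skeleton of the analysis is trivial. A minimal sketch with an entirely hypothetical table layout; no such dataset is public, and every column name below is invented for illustration:

```python
import pandas as pd

# Hypothetical records for newly appointed PIs; all fields are invented.
pis = pd.DataFrame({
    "first_author_pubs":    [4, 9, 2, 6],
    "senior_author_pubs":   [1, 5, 0, 3],           # since independent appointment
    "mean_jif":             [3.1, 11.2, 2.4, 5.9],  # crude JIF weighting
    "mentor_total_award":   [2e6, 9e6, 5e5, 4e6],   # pedigree proxy
    "years_to_first_grant": [3.0, 1.0, None, 2.0],  # None = not yet funded
})

# Crude cut at the question: does Insider pedigree buy slack on productivity?
pis["insider"] = pis["mentor_total_award"] > 1e6
print(pis.groupby("insider")[["senior_author_pubs", "years_to_first_grant"]].mean())
```

The hard part, as ever, would not be the code. It would be prying the pedigree and application data out of NIH in linked form.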
Power in the NIH review trenches
June 18, 2016
That extensive quote from a black PI who had participated in the ECR program is sticking with me.
Insider status isn’t binary, of course. It is very fluid within the grant-funded science game. There are various spectra along multiple dimensions.
But make no mistake it is real. And Insider status is advantageous. It can be make-or-break crucial to a career at many stages.
I’m thinking about the benefits of being a full reviewer with occasional/repeated ad hoc status or full membership.
One of those benefits is that other reviewers in SEPs or closely related panels are less likely to mess with you.
Less likely.
It isn’t any sort of quid pro quo guarantee. Of course not. But I guarantee that a reviewer who thinks this PI might be reviewing her own proposal in the near future has a bias. A review cant. An alerting response. Whatever.
It is different. And, I would submit, generally to the favor of the applicant who possesses this Mutually Assured Destruction power.
The Ginther finding arose from a thousand cuts, I argue. This is possibly one of them.
Nakamura reports on the ECR program
June 17, 2016
If I stroke out today it is all the fault of MorganPhD.
Jeffrey Mervis continues with coverage of the NIH review situation as it pertains to the disparity for African-American PIs identified in 2011 (that’s five years and fifteen funding rounds ago, folks) by the Ginther report.
The main focus for this week is on the Early Career Reviewer program. As you will recall, this blog has advocated continually and consistently for the participation of more junior PIs on grant review panels.
The ECR program was created explicitly to deal with underrepresented groups. However, there was immediate opposition insisting that the ECR program be open to all junior faculty/applicants, regardless of representation in the NIH game.
One-quarter of researchers in ECR’s first cohort were from minority groups, he notes. “But as we’ve gone along, there are fewer underrepresented minorities coming into the pool.”
…
Minorities comprise only 13% of the roughly 5100 researchers accepted into the program (6% African-American and 7% Hispanic), a percentage that roughly matches their current representation on study sections.
Ok, but how have the ECR participants fared?
[Nakamura] said ECR alumni have been more than twice as successful as the typical new investigator in winning an R01 grant.
NIIIIIICE. Except they didn’t flog the data as hard as one might hope. This is against the entire NI (or ESI?) population.
The pool of successful ECR alumni includes those who revised their application, sometimes more than once, after getting feedback on a declined proposal. That extra step greatly improves the odds of winning a grant. In contrast, the researchers in the comparison group hadn’t gone through the resubmission process.
Not sure if this really means “hadn’t” or “hadn’t necessarily“. The latter makes more sense if they are just comparing to aggregate stats. CSR data miners would have had to work harder to get this isolated to those who hadn’t revised yet, and I suspect if they had gone to that effort, they could have presented the ESIs who had at least one revision under their belt. But what about the underrepresented group of PIs that are the focus of all this effort?
It’s also hard to interpret the fact that 18% of the successful ECRs were underrepresented minorities because NIH did not report the fraction of minorities among ECR alumni applicants. So it is not clear whether African-Americans participating in the program did any better than the cohort as a whole—suggesting that the program might begin to close the racial gap—or better than a comparable group of minority scientists who were not ECR alumni.
SERIOUSLY Richard Nakamura? You just didn’t happen to request your data miners do the most important analysis? How is this even possible?
How on earth can you not be keeping track of applicants to ECR, direct requests from SROs, response rate and subsequent grant and reviewing behavior? It is almost as if you want to look like you are doing something but have no interest in it being informative or in generating actionable intelligence.
Moving along, we get a further insight into Richard Nakamura and his position in this situation.
Nakamura worries that asking minority scientists to play a bigger role in NIH’s grantsmaking process could distract them from building up their lab, finding stable funding, and earning tenure. Serving on a study section, he says, means that “those individuals will have less time to write applications. So we need to strike the right balance.”
Paternalistic nonsense. The same thing that Scarpa tried to use to justify his purge of Assistant Professors from study sections. My answer is the same. Let them decide. For themselves. Assistant Professors and underrepresented PIs can decide for themselves if they are ready and able to take up a review opportunity when asked. Don’t decide, paternalistically, that you know best and will refrain from asking for their own good, Director Nakamura!
Fascinatingly, Mervis secured an opinion that echoes this. So Nakamura will surely be reading it:
Riggs, the only African-American in his department, thinks the program is too brief to help minority scientists truly become part of the mainstream, and may even exacerbate their sense of being marginalized.
“After I sat on the panel, I realized there was a real network that exists, and I wasn’t part of that network,” he says. “My comments as a reviewer weren’t taken as seriously. And the people who serve on these panels get really nervous about having people … that they don’t know, or who they think are not qualified, or who are not part of the establishment.”
If NIH “wants this to be real,” Riggs suggests having early-career researchers “serve as an ECR and then call them back in 2 years and have them serve a full cycle. I would have loved to do that.”
The person in the best position to decide what is good or bad for their career is the investigator themselves.
This comment also speaks to my objection to the ECR as a baby-intro version of peer review. It isn’t necessary. I first participated on study section in my Asst Prof years as a regular ad hoc with a load of about six grants, iirc. Might have been two fewer than the experienced folks had, but it was not a baby-trainee experience in the least. I was treated as a new reviewer, but that was about the extent of it. I thought I was taken seriously and did not feel patronized.
__
Related Reading:
Toni Scarpa to leave CSR
More on one Scientific Society’s Response to the Scarpa Solicitation
Your Grant In Review: Junior Reviewers Are Too Focused on Details
CPDD 2016: Thought of the Day
June 14, 2016
There is a lot of focus on cannabis this year. Much more than usual, seemingly.
And everyone talks about how it is the growing legalization (medical and recreational) that is the driving justification.
I find this to be interesting.
NIH Director Collins and CSR Director Nakamura continue to kick the funding disparity can down the road
June 11, 2016
A News piece in Science by Jeffrey Mervis details the latest attempt of the NIH to kick the Ginther can down the road.
Armed with new data showing black applicants suffer a 35% lower chance of having a grant proposal funded than their white counterparts, NIH officials are gearing up to test whether reviewers in its study sections give lower scores to proposals from African-American applicants. They say it’s one of several possible explanations for a disparity in success rates first documented in a 2011 report by a team led by economist Donna Ginther of the University of Kansas, Lawrence.
Huh. 35%? I thought Ginther estimated more like a 13% difference? Oh wait. That’s the award probability difference: about 16% versus 29% for white applicants, which would be about a 45% lower chance. And this shows “78-90% the rate of white…applicants”. And there was Nakamura quoted in another piece in Science:
At NIH, African-American researchers “receive awards at 55% to 60% the rate of white applicants,” Nakamura said. “That’s a huge disparity that we have not yet been able to seriously budge,” despite special mentoring and networking programs, as well as an effort to boost the number of scientists from underrepresented minorities who evaluate proposals.
Difference vs rate vs lower chance…. Ugh. My head hurts. Any way you spin it, African-American applicants are screwed. Substantially so.
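Since my head hurts, here is the arithmetic laid out, using the approximate award probabilities quoted above (round numbers, not the exact Ginther figures):

```python
black, white = 0.16, 0.29  # approximate award probabilities quoted above

print(f"{white - black:.2f}")      # 0.13 -> the "13%" percentage-point difference
print(f"{black / white:.2f}")      # 0.55 -> awards at "55% the rate" of white PIs
print(f"{1 - black / white:.2f}")  # 0.45 -> a "45% lower chance" of funding
```

Same gap, three framings. Which number sounds scariest depends entirely on which framing you pick.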
Back to the Mervis piece for some factoids.
Ginther…noted…black researchers are more likely to have their applications for an R01 grant—the bread-and-butter NIH award that sustains academic labs—thrown out without any discussion…black scientists are less likely to resubmit a revised proposal…whites submit at a higher rate than blacks…
So, what is CSR doing about it now? OK HOLD UP. LET ME REMIND YOU IT IS FIVE YEARS LATER. FIFTEEN FUNDING ROUNDS POST-GINTHER. Ahem.
The bias study would draw from a pool of recently rejected grant applications that have been anonymized to remove any hint of the applicant’s race, home institution, and training. Reviewers would be asked to score them on a one-to-nine scale using NIH’s normal rating system.
It’s a start. Of course, this is unlikely to find anything. Why? Because the bias at grant review is a bias of identity. It isn’t that reviewers are biased against black applicants, necessarily. It is that they are biased for white applicants. Or at the very least they are biased in favor of a category of PI (“established, very important”) that just so happens to be disproportionately white. Also, there was this interesting simulation by Eugene Day showing that a bias smaller than the non-biased variability in a measurement can have large effects on something like a grant funding system [JournalLink].
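I don’t have Day’s actual model, so here is only a toy version of the idea: give one group a mean score penalty far smaller than the review noise, apply a payline, and watch the award rate move. The 0.1 SD bias and 10% payline below are assumptions for illustration, not Day’s parameters.

```python
import random

def award_rate(bias, n=200_000, payline=0.10):
    """Fraction funded when one group's scores carry a small mean
    penalty (bias, in SD units) on top of unit review noise."""
    # Scores from an unbiased pool set the payline cutoff (lower = better).
    pool = sorted(random.gauss(0, 1) for _ in range(n))
    cutoff = pool[int(n * payline)]
    funded = sum(random.gauss(bias, 1) < cutoff for _ in range(n))
    return funded / n

random.seed(42)
print(award_rate(0.0))  # ~0.100 for the unbiased group
print(award_rate(0.1))  # ~0.084: a 0.1 SD bias costs roughly a sixth of awards
```

A bias far too small to detect in any single review cycle still shifts who gets paid, systematically, round after round.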
Ok, so what else are they doing?
NIH continues to wrestle with the implications of the Ginther report. In 2014, in the first round of what NIH Director Francis Collins touted as a 10-year, $500 million initiative to increase the diversity of the scientific workforce, NIH gave out 5-year, $25 million awards to 10 institutions that enroll large numbers of minority students and created a national research mentoring network.
As you know, I am not a fan of these pipeline-enhancing responses. They say, in essence, that the current population of black applicant PIs is the problem. That they are inferior and deserve to get worse scores at peer review. Because what else does it mean to say the big money response of the NIH is to drum up more black PIs in the future by loading up the trainee cannon now?
This is Exhibit A of the case that the NIH officialdom simply cannot admit that there might be unfair biases at play that caused the disparity identified in Ginther and reinforced by the other mentioned analyses. They are bound and determined to prove that their system is working fine, nothing to see here.
So….what else ?
A second intervention starting later this year will tap that fledgling mentoring network to tutor two dozen minority scientists whose R01 applications were recently rejected. The goal of the intervention, which will last several months, is to prepare the scientists to have greater success on their next application. A third intervention will educate minority scientists on the importance of resubmitting a rejected proposal, because resubmitted proposals are three times more likely to be funded than a de novo application from a researcher who has never been funded by NIH.
Oh ff….. More of the same. Fix the victims.
Ah, here we go. Mervis finally gets around to explaining that 35% number:
NIH officials recently updated the Ginther study, which examined a 2000–2006 cohort of applicants, and found that the racial disparity persists. The 35% lower chance of being funded comes from tracking the success rates of 1054 matched pairs of white and black applicants from 2008 to 2014. Black applicants continue to do less well at each stage of the process.
I wonder if they will be publishing that anywhere we can see it?
But here’s the kicker. Even faced with the clear evidence from their own studies, the highest honchos still can’t see it.
One issue that hung in the air was whether any of the disparity was self-inflicted. Specifically, council members and NIH officials pondered the tendency of African-American researchers to favor certain research areas, such as health disparities, women’s health, or hypertension and diabetes among minority populations, and wondered whether study sections might view the research questions in those areas as less compelling. Valantine called it a propensity “to work on issues that resonate with their core values.” At the same time, she said the data show minorities also do less well in competition with their white peers in those fields.
Collins offered another possibility. “I’ve heard stories that they might have been mentored to go into those areas as a better way to win funding,” he said. “The question is, to what extent is it their intrinsic interest in a topic, and to what extent have they been encouraged to go in that direction?”
Look, Ginther included a huge host of covariate analyses conducted to try to make the disparity go away. Now they’ve done a study with matched pairs of investigators. Valantine’s quote may refer to this or to some other analysis I don’t know about, but obviously the data are there. And Collins is STILL throwing up blame-the-victim chaff.
Dude, I have to say, this kind of denialist/crank behavior has a certain stench to it. The data are very clear and very consistent. There is a funding disparity.
This is a great time to remind everyone that the last time a major funding disparity came to the attention of the NIH, it was the fate of the early career investigators. The NIH invented the ESI designation, to distinguish it from the well-established New Investigator population, and immediately started picking up grants out of the order of review. Establishing special quotas and paylines to redress the disparity. There was no talk of “real causes”. There was no talk of strengthening the pipeline with better trainees so that one day, far off, they magically could better compete with the established. Oh no. They just picked up grants. And a LOT of them.
I wonder what it would take to fix the African-American PI disparity…
Ironically, because the pool of black applicants is so small, it wouldn’t take much to eliminate the disparity: Only 23 more R01 applications from black researchers would need to be funded each year to bring them to parity.
Are you KIDDING me? That’s it?????
Oh right. I already figured this one out for them. And I didn’t even have the real numbers.
In that 175 bin we’d need 3 more African-American PI apps funded to get to 100%. In the next higher (worse) scoring bin (200 score), about 56% of White PI apps were funded. Taking three awards from this bin and giving them to three more AA PI apps in the next better scoring bin would plunge the White PI award probability from 56% to 55.7%. Whoa, belt up cowboy.
Moving down the curve with the same logic, we find that about 9 more AA PI applications are needed to bring the 200 score bin to 100%. Looking down to the next worse scoring bin (225) and pulling these 9 awards from white PIs, we end up changing the award probability for those apps from 22% to…wait for it….20.8%.
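You can check my arithmetic yourself. A sketch with bin sizes reverse-engineered to match the percentages above; the application counts below are hypothetical placeholders, not the real CSR numbers:

```python
def rate_after_reallocation(n_apps, n_awards, n_moved):
    """Award probability in a score bin after n_moved awards are
    reallocated to AA PI applications in the next better bin."""
    return (n_awards - n_moved) / n_apps

# Bin sizes invented only to reproduce the quoted percentages.
print(rate_after_reallocation(1000, 560, 3))  # 0.557: 56% -> 55.7%
print(rate_after_reallocation(750, 165, 9))   # 0.208: 22% -> 20.8%
```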
Mere handfuls. I had probably overestimated how many black PIs were seeking funding. If this Mervis piece is to be trusted and it would only take 23 pickups across the entire NIH to fix the problem….
I DON’T UNDERSTAND WHAT FRANCIS COLLINS’ PROBLEM IS.
Twenty-three grants is practically rounding error. This is going to shake out to one or maybe three grants per year for each IC, depending on size and whatnot.
Heck, I bet they fund this many grants every year by mistake. It’s a big system. You think they don’t have a few whoopsies sneak by every now and again? Of course they do.
But god forbid they should pick up 23 measly R01s to fix the funding disparity.
Higher education in the US weaves, for many students, a fantastical dream.
You can do what you want and people will pay you for it!
Any intellectual pursuit that interests your young brain will end up as a paying career!
This explains why there are so many English majors who can’t get jobs upon graduation. I know, an easy target. Also see Comm majors.
But we academic scientists are the absolute worst at this.
It results in a pool of postdoc scientist PhDs who are morally outraged to find out the world doesn’t actually work that way.
Yes. High JIF pubs and copious grant funding are viewed as more important than excellent teaching reviews and six-sigma chili peppers or wtfever.
In another context, yeah, maybe translational research is a tiny bit easier to fund than your obsession with esoteric basic research questions.
Thought of the Day
June 8, 2016
Hillary (H-Rod, as Isis the Scientist puts it) gave one heck of a General election speech last night.
She is going to mop the floor with Trump all through the coming campaign.
This will be a bigger landslide win than Reagan’s.
Outcome of SABV initiative
June 8, 2016
I was just thinking the coming 9 months should reveal a steady trickle of one- to two-figure stabs at sex-differences comparisons.
I’m predicting that some of the people who generated their first studies as Preliminary Data to head off SABV grant critique are going to publish.
Yes, even if the results were negative. They’ll need this for the next round, to excuse the failure to include female animals in the next proposals.
The latest Open Mike blogpost from Mike Lauer, NIH Deputy Director for Extramural Research, ventures into analysis of TheRealProblem at last.
The setup, in and of itself, is really good information.
We first looked at all research project grants (RPGs) funded between 2003 and 2015. For each year, we identified unique principal investigators who were named on at least one RPG award in that year. Figure 1 shows that the number of NIH-supported investigators has increased only slightly, and has remained fairly constant at about 27,500 over the past thirteen years.
Burn that one into your brain, people. There are about 27,500 unique PIs funded at any given time and this number has been rock steady for at least thirteen years. Sure, it is crazy-making stuff that they do not go back past the doubling interval to see what is really going on but hey, this is a significant improvement. At last the NIH is grappling with their enterprise by funded-investigators instead of funded-applications. This is a key addition and long, long overdue. I approve.
There are some related analyses from DataHound that lead into these considerations as well. I recommend you go back and read Longitudinal PI Analysis: Distributions, Mind the Gap and especially A longitudinal analysis of NIH R-Funded Investigators: 2006-2013. This latter one estimated a similar number of unique PIs but it also estimated the churn rate, that is, the number in each fiscal year that are new and the number who have left the funded-PI distribution (it was about 5,300 PIs per FY).
Back to Lauer’s post for the supplicant information that DataHound couldn’t get:
To determine how many unique researchers want to be funded, we identified unique applicants over 5-year windows. We chose to look at a multi-year window for two reasons: most research grants last for more than one year and most applicants submit applications over a period of time measured in years, not just 12 months, that may overlap with their periods of funding, if they are funded. Figure 2 shows our findings for applicants as well as awardees: the number of unique applicants has increased substantially, from about 60,000 investigators who had applied during the period from 1999 to 2003 to slightly less than 90,000 who had applied during the period from 2011 to 2015.
The too-many-mouths problem is illustrated. Simply. Cleanly. We can speculate about various factors until the cows come home but this is IT.
Too Many Mouths At The Trough And Not Enough Slops.
The blogpost then goes on to calculate a Cumulative Investigator Rate, which is basically how many PIs get funded over a 5-year interval out of those who wish to be funded. In 2003 it was 43%, and by 2015 it had declined to 31%. This was for RPGs. If you limit it to R01s only, the CIR goes from 45% to 34% over this interval. For R21s, the CIR was 20% in 2003 and is down around 11% for 2015. Newsbreak: funding rates for R21s are terrible, despite what you would imagine should be the case for this mechanism.
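The metric itself is dead simple. A minimal sketch of the definition Lauer describes, with made-up PI identifiers:

```python
def cumulative_investigator_rate(awardee_ids, applicant_ids):
    """Unique funded PIs as a share of unique applicants over the
    same multi-year window, per Lauer's definition."""
    return len(set(awardee_ids)) / len(set(applicant_ids))

# Toy 5-year window: 100 unique applicants, 31 of whom were ever funded.
applicants = [f"pi{i}" for i in range(100)]
awardees = applicants[:31]
print(cumulative_investigator_rate(awardees, applicants))  # 0.31, the 2015 RPG figure
```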
Now we get to the hard part. Having reviewed these data, the person responsible for the entire Extramural Research enterprise of NIH boots the obvious. Hard. First, he tries to offload the responsibility by citing Kimble et al and Pickett et al. Then he basically endorses their red-herring distractions (when it comes to this particular issue).
NIH leadership is currently engaged in efforts to explore which policies or policy options best assure efficient and sustainable funding given the current hypercompetitive environment. These efforts include funding opportunity announcements for R35 awards which focus on programs, rather than highly specific projects; new models for training graduate students and postdoctoral fellows; establishment of an office of workforce diversity;
Right? It’s right there in front of you, dude, and you can’t even say it as one of a list of possible suggestions.
We need to stop producing so many PhD scientists.
This is the obvious solution. It is the only thing that will have a sustained and systematic effect, while retaining some thin vestige of decency toward the people who have already devoted years and decades to the NIH extramural enterprise.
Oh and don’t get me too wrong. From a personal perspective, clearly Lauer is not completely idiotic:
and even what we are doing here, namely drawing attention to numbers of unique investigators and applicants.
HAHAHHAHAHAAH. What a bureaucratic weasel. He sees it all right. He does. And he’s trying to wink it into the conversation without taking any responsibility whatsoever. I see you, man. I see you. Okay. I’ll take up the hard work for you.
We need to stop training so many PhDs. Now. Yesterday, in fact. All of us. Stop pretending that your high-falutin program gets to keep all its students while those inferior jerks over there close up shop. Significant reductions are called for.
Personally, I call for a complete moratorium on new PhD admits for 5 years.
Go.