In the Twittersation that accompanied my post suggesting a 20% reduction in graduate admissions for this cycle, @EugeneDay objected to the paternalism of it all.

Feels awfuly elitest. Do you have a motivation other than lowering the NIH funding line? More PhDs != Bad Science.

and

People get to try to follow their own aspirations! Sometimes they fail. That’s sad, but doesn’t warrant national policy.

This echoes PhysioProf’s usual comment that academic science is like elite sports, media entertainment and a number of other professions where a lot of people strive for the Major Leagues but never make it. While I agree with the analysis, I don’t think we need to embrace it fully and enthusiastically. This aspect definitely makes me uncomfortable. As a disclaimer, there are probably several points in my career where I would have washed out under harsher (but probably reasonable) filter conditions. I happen to think I eventually made a contribution as an NIH-funded PI, so this makes me…sympathetic to the notion that everyone deserves a chance at the NIH extramurally-funded Prof/PI prize.

But this is not just about making the competition for NIH grants better for me personally; these steps would only affect my chances, what, some 15 years from now? This is about trying to restructure the labor market in our industry.

Getting back to the individual and their “rights”, look, we most emphatically do not extend this chance to everyone, as admitted by @EugeneDay:

Of course not, but determined by program capacity, quality of applicant. Not fiats. RT @drugmonkeyblog Open admission?

and

Remember context: departments should restrict to high quality applicants, etc. Not advocating slinging PhDs out window to passersby.

Right? So obviously the principle is established. We already restrict graduate admissions below the level of “all who express desire for training”. We do this at the elite, not so elite and (I presume) even the lowliest ranked programs. It didn’t come up on the Twitters but we also wash people out after they have been admitted. The loss rate in the first year or two of most graduate programs that I am aware of is consistently nonzero. I’d say it is rarer for people to wash out at the qualifying exam/dissertation proposal stage and rarer still at the doctoral defense. But it does happen now and again.

So how can we say that my proposal to reduce graduate training classes by 20% (or even 50%) is any different in principle? It is not. Unless we compare to some arbitrarily selected prior interval and argue that the success rate for seeking graduate training is lower. But that seems silly to me. Competition for various job sectors is always in flux. For that matter, “program capacity” is one of the things on the table in this discussion. The ability to pay stipends factors into “capacity”, as does the amount of research funding to support the scientific efforts of graduate students, the amount of time the Profs have for “training” versus “getting the research done efficiently” and, one might argue, the ability of any given program to place their students in various occupations post-PhD.

Otherwise what? What would happen in a peachy, let-the-customer-decide grad school market?

I prefer to let people filter themselves. Other’s decisions are not my business. RT @drugmonkeyblog: Where to put filter?

and they will filter themselves and all will be peachy, right? Let the market correct?

Individuals should be informed by their advisors. I have nothing against programs contracting. Just no orders from on high.

and

Whoever pays them now. Each case is unique, right? I’m not advocating more, just in favor of case by case decisions.

So he’s right back to having a filter…but he just has this pipe dream that the market will correct itself. While emitting a suspicious indication that this is all about personal discomfort in telling people “no”.

Whose job is it to ruin an aspiring scientist’s dreams? Only mine if I’m the advisor. Everything case by case.

No. A thousand times no. Our business leads to a lot of pain and wasted time for many precisely because we refuse to be engaged in the career aspects of the profession. We evince a hands-off attitude, as if we need not be concerned with such tawdry matters. “Just do great science, young Jedi, and all will be well,” we say. If all is not well for a given person, clearly they did not do good science and we don’t want them anyway! They are not capable and therefore not deserving. Alternate careers? Not our problem? Too many PhDs being produced? Hey, who are we to restrict the entry point?

These go together. And it is just ever so convenient that many of us make out like bandits, professionally, by exploiting the desperation of graduate students. By exploiting the statistically unlikely hope of the eventual Professorial entry card to extract a lot of labor for relatively little compensation.

I think it is time for us, as a profession, to take more responsibility. To remember that our left-wing dominant socio-political orientations should apply to us as well.

We are the exploiters of graduate student labor and it is time for us to restructure our profession.

Some commenter is under the impression that we academics are avoiding discussing the pipeline problem in science. No, not the part that leaks women, nor the part that screens out underrepresented minorities.

The problem of the sheer volume of PhD trained individuals.

Personally, I hear less discussion than I think is necessary, but the notion that it is undiscussed is ludicrous.

I have been astonished by at least one program I know that has not seriously discussed the notion of shrinking yet. Amazed. But I also know other programs are talking about the issue.

But for blog discussion purposes, here’s my position. I think all PhD programs should admit 20% fewer students, starting this cycle. No weasel room for the “top” programs to claim they get a special exception either. (’cause that is what they are going to do, you betcha)

Slow down the flow, people.

Michael Eisen has an interesting post up today on a topic which comes up occasionally here on this blog. He blames peer review, but really it is an indictment of GlamourMag science. A criticism of the conflation of journal reputation with the quality of any article published therein.

One finger is pointed at reviewer/editor demands for more data/studies/proof before a paper can be accepted. I agree with much of Eisen’s critique on this point.

What I am pondering today, however, is the tight NIH grant supply.

It strikes me that this is going to be a damn good thing if it stomps down on authors’ willingness to put up with unnecessary* reviewer demands for more work.

*The controls appropriate to evaluate the data as presented are fair game. “Gee, it would be cool if you also showed blahdeeblah…” is typically not.

This whole storify thing seems intriguing so I’m doing a test case. Nothing fancy and no editorializing. Just the stream at present.

from Nature:

The 2012 spending bill would cut the salary cap by 17%, from US$199,700 to $165,300, for extramural scientists funded by the National Institutes of Health…
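For what it’s worth, the quoted dollar figures are consistent with the stated 17% cut; a trivial back-of-the-envelope check (my arithmetic, not Nature’s):

```python
# Sanity check on the quoted salary-cap cut (figures from the Nature excerpt above)
old_cap = 199_700
new_cap = 165_300
cut_fraction = (old_cap - new_cap) / old_cap
print(f"Reduction: ${old_cap - new_cap:,} ({cut_fraction:.1%})")
# Reduction: $34,400 (17.2%)
```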

I was wondering when some Congress Critter would figure out s/he can make some hay out of attacking scientists for their exorbitant salaries.

Here’s my question though. Since $250,000 per year is “middle class” according to the last round of political rhetoric which addressed the salary/class issue…by what justification should scientists be under attack?

[bit of a Twittersation going on as well, start here]
__
p.s. The vast majority of NIH-funded PIs are way, waaaaaay under the salary cap, going by my experience. I would estimate that a disproportionate number of those at the cap are MDs as well. The theory on the latter is that they need to be bribed, I mean equivalently compensated, away from purely clinical careers. Agree or not, it needs to be considered.

p.p.s. While this sounds good on paper, in the immediate and medium term the pain would roll downhill onto those of us who are not BSD investigators making cap. Why? Because the Unis would have to come up with the difference. Money being fungible, this means less cash for startup packages, bridging support, faculty senate pilot awards, paying for administrative staff, graduate student salaries….

p.p.p.s. Despite the pain, and the fact that someday I’d love to be at the cap as it stands right now, I’m actually in support of this. In the abstract. And if there were some way to stave off the immediate pain for junior folks (there isn’t), I’d be a lot happier about it.

Well, well, well. How timely. We were just discussing the situation in which some ICs of the NIH fund some subset of their grant applications out of the order of initial peer review. And what should I stumble upon (thanks to writedit) but some actual data which bear on the matter.

The NIAID website has an interesting analysis up that compares productivity measures for R01 grants from FY01-FY04. It divides the grants into those that were funded after receiving a score within their operating payline(s) and those that were funded via “Select Pay”. This is the term for out-of-order, exception-funded proposals. Colloquially known as “pickups”.

NIAID describes the approach as:

Here’s how we conducted the study.

To measure productivity, we analyzed the number of publications from 2,104 applications that ranked within the payline (the WP cohort) and from 122 select pay applications (the SP cohort) shown in Figure 1.

For each indicator, we show only the middle 80 percent of the distribution (we removed the top and bottom 10 percent to make the figures easier to read). The horizontal line within each box represents the median.

Numbers for total publications, impact factor, and citations were 16,389, 102,786 and 196,117, respectively, for the WP cohort, and 860, 5,407 and 11,158 for the SP cohort.

Each indicator was scored for six years; for example, grants issued in FY 2002 were scored from FY 2002 to FY 2007.

Not entirely sure what they are graphing here; in a typical box-and-whiskers plot the box describes the 25th-75th percentiles. The whiskers, however, can represent any number of descriptors. I guess the NIAID is putting the whiskers on the 20th and 80th percentiles…lot of room between 20% and 25% and between 75% and 80% if this is the case. [update: 10%ile and 90%ile of course; on reflection, I guess I should be less worried about the distance between 10% and 25% and between 75% and 90%.]
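For anyone who wants to see concretely what whiskers pinned at the 10th and 90th percentiles look like, here is a minimal sketch. The cohort sizes match NIAID’s text, but the publication counts are randomly generated for illustration; this is not their data.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Hypothetical publications-per-grant counts; only the cohort sizes come from the NIAID text
wp = rng.poisson(lam=8, size=2104)   # within-payline (WP) cohort
sp = rng.poisson(lam=7, size=122)    # select pay (SP) cohort

fig, ax = plt.subplots()
# whis=(10, 90) pins the whiskers at the 10th and 90th percentiles,
# i.e., the "middle 80 percent" described in the NIAID methods
ax.boxplot([wp, sp], whis=(10, 90), showfliers=False)
ax.set_xticks([1, 2])
ax.set_xticklabels(["WP", "SP"])
ax.set_ylabel("Publications per grant (simulated)")
plt.show()
```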

At any rate, the take home message is “no difference”. Same for Journal Impact Factor and number of actual citations of the papers.
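One crude way to see the “no difference” conclusion straight from the totals quoted above is to divide through by the cohort and publication counts. A back-of-the-envelope sketch (this assumes the “impact factor” total is a sum over papers, which is my reading of the NIAID description, and it ignores the per-grant distributions the figures actually show):

```python
# Totals quoted in the NIAID excerpt above (WP = within payline, SP = select pay)
cohorts = {
    "WP": {"grants": 2104, "pubs": 16389, "jif_total": 102786, "cites": 196117},
    "SP": {"grants": 122,  "pubs": 860,   "jif_total": 5407,   "cites": 11158},
}

for name, c in cohorts.items():
    print(f"{name}: "
          f"{c['pubs'] / c['grants']:.1f} pubs/grant, "
          f"{c['jif_total'] / c['pubs']:.1f} mean JIF, "
          f"{c['cites'] / c['pubs']:.1f} citations/pub")
# WP: 7.8 pubs/grant, 6.3 mean JIF, 12.0 citations/pub
# SP: 7.0 pubs/grant, 6.3 mean JIF, 13.0 citations/pub
```

Roughly similar on every crude per-grant measure, consistent with the plotted distributions.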

So far as we can take such objective measures of grant productivity as relevant* to a fuzzier concept of “excellent science” or “impactful project”, this confirms what many of us familiar with grant review insist. Within that zone of payline and near-payline scores, there is no way to say the one grant is going to be much better than the other. Different, sure. But they are all going to be approximately as productive as each other, considering the groups as a whole.

Thus, the kvetching about how horrible it is that the NIH ICs fund some subset of their awards out of the order emerging from peer review is not really well justified. The “performance” of the NIH’s funded extramural research** is unlikely to be negatively affected by doing this.
__
*yes, I realize. But c’mon. Better something somewhat objective than continuing to shoot off our half-baked opinions without any evidence, no?

**extrapolating from NIAID’s data and with the same caveat about such measures of “performance”

…and if I have it right, the Director of a given IC is really the one who makes the decision to fund or not fund a given grant proposal…and even the input of their own Program Staff is only advisory.

Some commenter over at writedit’s thread on paylines is incensed about the NCI:

how are the NCI officials better qualified then the peer reviewers to judge the importance of these “selected” studies?

Because They. Have. A. Different. Perspective. on the fields of science. A broader perspective. In the best of all worlds, the Program Staff have the opportunity to step back and look at the larger trends within the science that is within their jurisdiction. To see where current fads have left holes in their portfolio. To perhaps take a risk where the peers are conservative. To identify the duplications and overlaps within their own portfolio and adjust accordingly. To worry about the next generation of scientists. Etc.

How can individual scientists, or even a panel of 25-30 of them, possibly take the long view? They cannot. So there is a role for Program. We can debate whether a funding agency should be sensitive to the long view, balance of effort, inherent self-referential conservatism that emerges in science now and again, etc. I’ll come down squarely in favor of breadth on that one. But let us not pretend they have no functional role.

They should not be make funding decisions. They are messing up the once well established system.

Yes, yes they should be making funding decisions. This is the job of Program, actually. They are part and parcel of the “well established system”.

I’ve touched on this ever so briefly before, see Program Interferes with…. and NIH Administrators Ignore…

Oh glory…since I started writing this post, the commenter doubled down:

If the grants do not fit the NCI portofolio they should not be sent forward for peer review in the first instance; and more importantly these same grants that do not fit the portofolio in the first place should not be sent back for an A1 revision which consumes a whole lot of funds to generate additional preliminary data. I think this suggests that not all the decisions are made by well qualified individuals at NCI.

and an echo from yet another commenter….

If NCI had told me that my app. did not fit their portofolio, I wouldn’t have wasted freaking 4 yrs submitting A0 and A1( plus wasting peer review efforts). And before submitting my A0, I had even consulted my SRO. I’m pissed that NCI changed rules in the middle of the game

HAHAHAHAAHA!!!! This commenter no doubt would be raising a big stink if his/her grant got rejected without even going out to peer review. But the underlying principle that Program should be highly proactive about refusing grants before they even get reviewed is stupid. Under that scheme there is no chance for reviewers who are more expert in the science to point out whether it fits the mission or not. Program staff are not omniscient. They need their extramural scientists to educate them. Not to dictate their job priorities, not at all. To educate. To provide a portion of the knowledge, information and evidence that Program staff require to do their part of the job. This is at the root of the investigator-initiated science funding system, is it not? It is our job to make our case. The job of peer review is to provide one viewpoint on that case. The job of Program Staff is to provide another set of inputs.

I like this. I like the ability to make my case for what I am interested in studying. I would be far less enthralled to have to always fit into some pre-existing set of Programmatic interests. I think our friends over at writedit’s blog would be similarly distressed if there were a more heavy-handed triage of applications prior to review.

The only way that would work is if someone at the Program level does a lot of triaging on the basis of the Abstract. Because I guarantee you the POs are not going to be reading all the apps in detail under such a scenario. Too many applications and too few POs. And if there were enough of them to do so? We’d all be crying foul about needing 10X as many Program staff as they have now! Think of how many grants’ worth of funding that would suck up. And they still couldn’t offer the kind of specific expertise that the current peer-review system can muster.

and back to the original commenter…..

I would scream bloody murder if the grant was peer reviewed, fell in the upper 10th percentile and then was told that it did not fit the portofolio…….to an extent that is what is happening at the NCI….but you are entitled to your opinion

The grey zone has, of course, long been familiar to those of us who seek funding from some of the other institutes. I have my doubts about assertions that some of the ICs have or had a policy of sticking strictly to the outcome of the initial peer review. But…perhaps this has indeed been the case at NCI. And the Chicken Littles can be excused, a trifle, for not knowing how it goes down. We have only the hard data from NIGMS, but these graphs fit very well with my subjective experiences as an applicant, as a friend and colleague of applicants, from watching what emerges from a study section on which I served, from talking with POs, etc. Take this FY2010 R01 funding outcome graph from NIGMS as an example:

Open rectangles depict the number of grants reviewed, the dark bars the number funded. The X axis depicts percentile scores that emerged from initial peer review. What this shows is that genuine “skips” are relatively rare. There is a clear payline (formally published or not, you can see where it lies) below which almost everything gets funded. Above that line is the grey area. A zone above the payline (I note for those who are thinking that they “deserve” to get funded) in which only a subset of the grants get funded. Notice the trendline though? Even this is influenced strongly by the priority score/percentile rank, right? The chances of an application being funded as an exception to the payline increases as the score moves closer to that payline.
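To make that trendline concrete, here is a toy version of the kind of tabulation the NIGMS chart summarizes. The bin counts are invented, not the actual FY2010 numbers; the point is just the shape: near-certain funding below the payline, then a pickup rate that falls off as scores drift away from it.

```python
# Toy illustration of the payline / grey-zone pattern (counts are invented,
# NOT the real NIGMS FY2010 data)
bins     = ["1-5", "6-10", "11-15", "16-20", "21-25", "26-30"]  # percentile bins
reviewed = [60, 65, 70, 75, 80, 85]
funded   = [59, 63, 38, 18, 6, 1]

for b, r, f in zip(bins, reviewed, funded):
    rate = f / r
    print(f"{b:>6} %ile  {f:3d}/{r:3d} funded  ({rate:4.0%})  " + "#" * round(rate * 20))
```

In a sketch like this, the hypothetical payline sits around the 10th percentile: nearly everything below it funds, and the bins above it are the grey zone where pickups thin out as the scores worsen.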

The NCI has apparently been talking about a payline in the 7-8%ile zone and mollifying investigators that they will “consider” everything up to some 25%ile for exception funding. Pickups. So the scenario raised by the commenter is asking about apps in the 8-10%ile range and their chances of being skipped over. Unless NCI takes a very different approach to exception funding, the chances for such a score are probably still quite good. Once you get a couple of bins away from the payline in the NIGMS data above (and check the sidebar at the NIGMS page for prior Fiscal Year trends), the chances of funding for any given application get pretty slim.

It is difficult for me to understand how anyone can look at the distribution of funded/unfunded grant applications in this grey area and think that they “deserve” to be funded with scores that are above the payline. I say rather that people should feel lucky to get the nod, given how few of the apps are being funded.