MWE&G notes that NIAID is particularly upfront about funding strategies, in substantial contrast to most ICs. I don’t like the opacity of most of the ICs on funding strategies either. But one reason they do it is to minimize certain study section behavior. There is a natural and perhaps inescapable psychology to grant review in which the reviewer is, at some level, thinking “fund it” or “don’t fund it”. This results in scores clustering around the “perceived” funding line.

ICs don’t like this because they want a nice flat distribution of scores so that no matter where the funding line is drawn, there aren’t a ton of “hard calls” to make. The more applications with the same score, the harder the decisions. (Actually, applicants should favor this approach too, because in theory it decreases arbitrary IC behavior in selecting apps for funding.)

Fortunately, from the IC perspective, there is some lack of calibration in the “perceived funding line” across the typical study section. (SRAs also fight this tendency by urging reviewers to distribute their scores across the entire available range.) This introduces variance into the result of the same psychological process, namely funding-line seeking, in reviewers. I think that if all Institutes were highly vocal about their funding lines, hard and soft alike, the problem of score clustering would get worse. I think you would also start to see mean scores drift around to match each Institute’s line. “Oh, NIMH is at 135 and NIAAA is at 140? Well, I can assign a 130 to this one, a 140 to that one, and the SRA can’t say I’m not spreading my scores!” Over tens of thousands of apps I think you would start to see effects. Then the ICs would have to cycle back on the funding line by saying “well, our grants average 5 pts higher so our cut line is going up”. So the process would cycle around recursively. Not to mention that ICs do compare themselves on things like scores and percentiles, I have no doubt. So they aren’t really interested in doing things that might put their scores at a disadvantage relative to other ICs, because their percentiles would start rising, creating the impression that they fund substandard science.
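The clustering claim can be put in a toy simulation. This is a sketch under invented assumptions, not anything NIH actually models: suppose each reviewer anchors an application’s score near the funding line they believe exists, plus some noise. When the line is published, everyone anchors to the same number; when it’s opaque, each reviewer’s guess at the line varies. All numbers below (old-style priority scores, payline of 150, noise widths) are made up for illustration.

```python
import random
import statistics

random.seed(0)

# Hypothetical numbers for illustration only (old-style NIH priority
# scores, where lower = better). PAYLINE is the IC's actual funding line.
PAYLINE = 150.0
N_APPS = 1000

def review(line_is_public: bool) -> list:
    """Each reviewer scores relative to the funding line they perceive."""
    scores = []
    for _ in range(N_APPS):
        if line_is_public:
            perceived = PAYLINE                    # everyone anchors to the same line
        else:
            perceived = random.gauss(PAYLINE, 20)  # miscalibrated guesses at the line
        # the reviewer places the app just above or below "the line"
        scores.append(perceived + random.gauss(0, 10))
    return scores

spread_public = statistics.pstdev(review(line_is_public=True))
spread_opaque = statistics.pstdev(review(line_is_public=False))
print(f"score spread, line public: {spread_public:.1f}")
print(f"score spread, line opaque: {spread_opaque:.1f}")
```

In this toy model the opaque case produces a visibly wider score distribution, because the reviewers’ disagreement about where the line sits adds variance on top of their scoring noise. That miscalibration is exactly what (inadvertently) spreads the scores out for the IC.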

It gets complicated.

To return to the applicant: unfortunately, from the individual perspective, variance in the perceived funding line can create categorical problems. A less experienced or less knowledgeable reviewer may assign a “good” score that is not, in fact, a good score at the present time. So the actual intent of the reviewer is not realized, because s/he thinks a 170 is a great score, which it might have been five years ago. You might get hosed because you were, essentially at random, assigned a reviewer who is less well calibrated than those reviewing another application.