IFCN Clustering: A CRISP analysis.

December 7, 2007

We’ve been discussing the degree to which insular sub-groupings of scientists protect and maintain themselves and their peers through the grant review process. We’re using the term “bunny hopping” thanks to whimple; the NIH CSR calls this “clustering”. Note upfront that this analysis and discussion does not necessarily require overt malicious intent on anyone’s part. The presentation at the recent PRAC meeting from Don Schneider identified the IFCN (Integrative, Functional and Cognitive Neuroscience) group of study sections as top suspects in the “clustering” phenomenon. Can we derive a little more information, one wonders?

The IFCN sections are:

If we ignore the last three special-mechanism sections, we have 11 general R-mech sections, some of which are intimately familiar to the DM, the BM, apparently the PP and possibly you, DearReader.

Let’s go to the CRISP, shall we? I’ll search on 1R% and 2R% for each section. Note that the former will include some non-renewable mechs like the R21 and R03, and will double-count R15s, each year of which is a Type 1 award. Also, this covers only the awards that start in FY2007; noncompeting Type 5s are not included. (A sketch of how one might mechanize these tallies follows the data below.)
NMB: 28 Type 1, 15 from DA, 5 MH, 4 DK, 1 DC, 1 NS. 13 Type 2, 8 DA, 2 MH, 1 each DK, NS, HD.

NNB: 19 Type 1, 5 MH, 5 DK, 3 NS, 2 DA, 2 HD. 15 Type 2, 7 MH, 3 NS, 2 HD, 1 each DK, DA, HL.

BRS: 998 Type 1 and 283 Type 2 returned, more investigation needed. Is this the “standing SEP”? With multiple actual sections perhaps? [Update 12/10/07: My browser and CRISP were apparently not getting along. 11 Type 1, 5 MH, 4 NS, 1 DA, 1 HL. 7 Type 2, 3 each MH, NS, 1 HD.]

SCS: 46 Type 1, 22 NS, 16 DC, 4 DA, 2 AR, 1 DE, 1 NR. 20 Type 2, 12 DC (inc 1 in yr 40!), 3 DE, 3 NS, 1 CA, 1 DA.

SMI: 19 Type 1, 13 NS, 4 DC, 1 MH, 1 EY. 12 Type 2, 7 NS, 3 MH, 1 HL, 1 DC.

AUD: 31 Type 1, 31 DC. 40 Type 2, 39 DC, 1 HD.

LAM: 16 Type 1, 9 MH, 2 each RR, AG, 1 each DA, EY, NS. 22 Type 2, 15 MH, 4 NS, 2 AG, 1 DA.

COG: 20 Type 1, 6 NS, 4 EY, 3 AG, 2 MH, 1 DC. 12 Type 2, 4 DC, 3 EY, 3 MH, 2 NS.

NAL: 33 Type 1, 18 from AA, 14 ES, 1 NS. 17 Type 2, 14 AA, 2 ES, 1 DA.

CVP: 17 Type 1, 17 EY. 26 Type 2, 26 EY.
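
A note on method: CRISP is a web form with no export facility, so the tallies above were done by hand. If you were to paste the hits into a CSV, a short script along the following lines could reproduce the counts. The file name and column names are hypothetical stand-ins; the grant-number pattern is the standard NIH prefix (type digit, activity code, two-letter IC, serial number).

```python
import csv
import re
from collections import Counter

# Standard NIH grant number prefix, e.g. "1R01DA012345": type digit,
# activity code (R01, R21, R15, ...), two-letter IC, six-digit serial.
# Some activity codes (e.g. DP1) won't match this simple pattern.
GRANT_RE = re.compile(r"^(?P<type>\d)(?P<act>[A-Z]\d{2})(?P<ic>[A-Z]{2})\d{6}")

def tally(rows):
    """Count Type 1 and Type 2 awards per study section, by funding IC."""
    counts = {}  # (section, type) -> Counter of ICs
    for grant_number, section in rows:
        m = GRANT_RE.match(grant_number)
        if m and m.group("type") in ("1", "2"):  # skip Type 5s etc.
            key = (section, m.group("type"))
            counts.setdefault(key, Counter())[m.group("ic")] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical export with columns "grant_number" and "study_section".
    with open("crisp_fy2007.csv", newline="") as f:
        rows = [(r["grant_number"], r["study_section"])
                for r in csv.DictReader(f)]
    for (section, gtype), ics in sorted(tally(rows).items()):
        breakdown = ", ".join(f"{n} {ic}" for ic, n in ics.most_common())
        print(f"{section}: {sum(ics.values())} Type {gtype} ({breakdown})")
```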

Anyone spot the bunny hoppers yet? 🙂

20 Responses to “IFCN Clustering: A CRISP analysis.”

  1. bikemonkey Says:

    Dang, and I thought that my fav I’s were going to come up as the bad guys. Maybe we’re not as inbred as I thought.

    One obvious question comes up for this analysis, though: the differential in “in-house” study sections. My fav I’s still have in-house sections, and those would come up as “clustered”.

    Still, makes you wonder if it would be possible to come up with a bunny-hopper quotient, doesn’t it? Fraction of Institute dollars in X number of PI hands, percent competing continuations, some sort of turnover or PI-freshness number…
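
    Just to make that concrete, here is a purely hypothetical sketch of such a quotient built from those ingredients; the equal weighting and the top-20 cutoff are arbitrary choices, not anything CSR actually computes:

    ```python
    def bunny_hopper_quotient(pi_dollars, n_type1, n_type2, top_n=20):
        """Toy 'bunny-hopper quotient' from the ingredients above:
        dollar concentration plus competing-continuation share.

        pi_dollars: dict of PI -> total Institute dollars
        n_type1, n_type2: counts of new awards and competing renewals
        """
        total = sum(pi_dollars.values())
        top = sum(sorted(pi_dollars.values(), reverse=True)[:top_n])
        concentration = top / total            # $$ share of the top_n PIs
        renewal_share = n_type2 / (n_type1 + n_type2)
        return (concentration + renewal_share) / 2  # arbitrary weighting
    ```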

  2. physioprof Says:

    “BRS: 998 Type 1 and 283 Type 2 returned, more investigation needed. Is this the ‘standing SEP’? With multiple actual sections perhaps?”

    I think you did something wrong. I searched 2007 fiscal year for “1R%” in BRS, and got 11 hits, 8 of which are R01s. “2R%” gives 7 hits, all of which are R01s.

    The Institutes are pretty mixed, with a roughly equal number of MH and NS, and a very few from others.

  3. physioprof Says:

    Two other interesting points:

    (1) If you search on %A3, you find out that 13 grants were funded in 2007 as A3 applications. I guess some applicants got special permission to submit A3s. (No A4s in 2007, though.) A local equivalent of that suffix match is sketched after point (2).

    (2) Competing awards funded in 2008 fiscal year come up in the search on 2007 fiscal year, presumably by default, because the CRISP UI has not been updated to allow one to select 2008 for searching.
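
    Since the %A3 trick is just a suffix match on the full grant number, here is a sketch of the equivalent local filter, assuming the standard “-YY” support-year suffix plus an optional “A#” amendment (and possibly “S#” supplement) designation:

    ```python
    import re

    # e.g. "2R01MH012345-06A3" -> amendment 3; "1R01DA054321-01" -> 0
    # (grant numbers here are made up for illustration)
    AMEND_RE = re.compile(r"-\d{2}A(\d)(?:S\d)?$")

    def amendment(grant_number):
        m = AMEND_RE.search(grant_number)
        return int(m.group(1)) if m else 0

    assert amendment("2R01MH012345-06A3") == 3
    assert amendment("1R01DA054321-01") == 0
    ```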

  4. bikemonkey Says:

    CRISP always behaves a little funny in the current year when I use it. In fact, it may be the case that you never really get fiscal year but calendar year.

  5. neurolover Says:

    I am very confused. Can someone connect the dots for me?

  6. physioprof Says:

    Tell me what you don’t understand, and I’ll clarify.

  7. neurolover Says:

    I don’t understand the conclusion you’re trying to draw and how it relates to the breakdown you show in the data. Is the clustering phenomenon the same as “bunny hopping”, meaning self-serving incestuous scientific fields? Or is it the captive study section (i.e., all the grants being funded by the same institute)? And if the second, are the institutes “captive” to the sections, or are the sections captive to the institutes? And, finally, how does this all relate to the status of NIs (i.e., the proposal that NIs are particularly damaged by “bunny-hopping” sections)?

  8. physioprof Says:

    “Is the clustering phenomenon the same as ‘bunny hopping’, meaning self-serving incestuous scientific fields?”

    Pretty much. If bunny-hopping applications and reviewers numerically dominate a study section, then substantial numbers of bunny-hopping applications are guaranteed funding.

    I’m not sure what the significance of the “captive” thing is.

  9. whimple Says:

    I define “bunny-hopping” pretty much as “self-serving incestuous scientific fields”.

    Clustering seems to be not quite the same thing. Clustering is more generally review of similar applications together by the same review group. Clustering could be bad, but isn’t necessarily so. PRAC said they “aim for 30% clustering”. I have no idea what that means, other than 30% does not equal 0%, or 100%.

    If by “captive” you mean that the institute is “forced” to fund proposals that the study section scores well, regardless of how crappy they are or how poorly they fit the goals of the institute, that seems to vary from institute to institute. Some say “we’re funding everything that scores within our payline, plus some other stuff that didn’t make the payline but that we think is important.” Other institutes seem to reserve the right to do whatever they want funding-wise, regardless of the percentile score from the study section. It’s hard to say to what degree they actually exercise this right. In this regard it was interesting to see from the PRAC review that one possibility for “fixing” peer review was the “just fund it” concept, which I’m guessing means totally ignoring the study section score for particularly programmatically appropriate applications.

  10. physioprof Says:

    “we’re funding everything that scores within our payline”

    I have been told by program officers that this is NINDS policy. NIMH, on the other hand, is known to not do this.

  11. drugmonkey Says:

    “Can someone connect the dots for me?”
    That’s what we’re trying to do. We can’t get a view of the whole picture from the limited data available, so we try to read the tea leaves.

    “Is the clustering phenomenon the same as “bunny hopping”, meaning self-serving incestuous scientific fields? Or is the captive study section (i.e. all the grants being funded by the same institute)?”

    I think “clustering” can be viewed as an attempt to generalize the principle of “bunny-hoppers”. The latter gets a bit too specific and away from thinking about the various dimensions through which the circular back-slapping of grant review can work. I think of the “bunny hopper” space as a Venn diagram layout. We can think of groupings associated with techniques (behavioral hopping speed/fluidity/gait analysis), behaviors (hopping) or animal models. Each will have dimensions and degrees of insularity. For example, I imagine the kangaroo-hoppers would be in the “hopper” crowd somewhat but be particularly defensive of the kangaroo as a model. Said kangaroo-hoppers might have affiliations that are tighter across traditional fields than within them. Say they are more expert in kangaroo boxing and neonatal development, heck even kangaroo foraging, than they are in bunny hopping.

    Going in the other direction, a funding Institute is a collection of Bunny-Hoppers, broadly writ. Even “neuroscience” can be viewed as a bunny-hopper domain in the context of all biomedical science! All we are talking about is the scope of bunny-hopper-ness.

  12. neurolover Says:

    And do Type 1 & Type 2 mean a new grant vs. a competing renewal? If so, I think one of the factors that needs to be considered in this analysis is what it takes to “complete” a project in different fields. I think folks have been touching on this question in other comments (for example, the standards in different fields for publication). But the IFCN sections are particularly prone to research programs designed around big complicated questions (the neural basis of locomotion, or even worse, the neural basis of consciousness). Clearly a career can be spent on different related questions about such a topic. They are not questions that will be answered in one set of experiments.

    I think there is a defense of “insularity” (as opposed to biased insularity) that’s not being made here. Insular systems are ones that judge individuals, find them worthy, and then support them. The R01 is not supposed to be designed that way (it’s supposed to be about the project). But clearly there’s some amount of that going on at NIH (along w/ program projects, etc.). Judging an individual can be prone to biases of many sorts, but it also has benefits for the field (and not just for those who have been judged worthy). Of course, it’s devastating to those who have been judged unworthy.

  13. neurolover Says:

    I think whimple’s comment (bunny-hopping, and badness) is a really important one that needs to be hashed out, because it’s the one that complains that the good is being lost at the expense of the bad (i.e., insular fields that should just be shut down). My general take on the NIH and its crises is that we just have no way of separating the top 1% from the top 10% from the top 20%. I think that we’re decent at selecting the top 50% rather than the bottom 50%. But if we’re funding the bad at the expense of the good, the enterprise actually suffers, as opposed to just the individual.

    (To explain what I mean by hurting individuals vs. the enterprise, let me cite a story I heard about journal publishing. An editor bemoaned the fact that they had passed on a very high-profile article, which then turned into one of the most important articles in the field. Of course, that was unfortunate for the editor and the journal, but it didn’t hurt the enterprise at all. The paper was published elsewhere, and everyone still got to read it.)

  14. drugmonkey Says:

    “Of course, it’s devastating to those who have been judged unworthy.”

    Well, that’s the rub, innit? And all of us who hold grants are a bit suspect in our views since we have passed the most critical hurdle. Ofttimes I try to overtly account for this in my thinking. It is possible that I undervalue the arguments for “in-ness” in some situations. Reading YFS is helpful there!

    It also helps that in a couple of my “fields” I take a slightly nontraditional approach. Not that I’m some sort of quixotic iconoclast, just that what interests me is not necessarily the mainstream of popularity. I have a tendency to stand back a bit and view how we have large numbers of people working on what is a narrow corner of the larger issue, how we get methodologically sclerotic because “this is the way it has to be done”, how large-scale theoretical approaches may just possibly have missed the entire boat, etc. So I tend to have specific examples of the kind of science in which I am interested where bunny-hopper-ness may just have prevented the data I’d like to see from having been generated in the first place. That sucks.

  15. drugmonkey Says:

    Neurolover, there is a very fundamental difference here which is that without the funding the science won’t get done at all. Not that it will get published elsewhere or that some other lab will do it instead of Dr. FreshFaced. The worry is that those data simply will not be generated.

  16. whimple Says:

    “The worry is that those data simply will not be generated.”

    Bingo. Specifically, I think the “innovation” criterion is being ignored. I would phrase “innovation” something like this:
    Q: If we don’t fund this proposed work, and this lack of funding prevents this lab from doing the proposed work, how long will it take for another (funded) lab to get around to doing these experiments? (The actual worthiness of the experiments in question having already been addressed in the “significance” and “approach” criteria.)
    My feeling is that lots of really well-funded labs would have “innovation” evaluated very badly if it were measured this way. Particularly in “hot” fields there’s a pile of labs doing nearly the exact same work, just racing to be first. If PI #1 were to be hit by a bus, PIs #2, #3, #4 and #5 would do the work anyway, and if it comes out two weeks later, so be it.

  17. neurolover Says:

    Oh, I totally agree. Not generating data that _should_ be obtained does damage the field, and not just the individual who proposed to do it. But that’s what one needs to show: not that Dr. FreshFaced didn’t get to do the research, but that it doesn’t get done. I think there’s something like that going on, but I don’t think it’s going to be solved by trying to address innovation in grant proposals. I think the only way to make it work is to invest in the people, not the project, and hope that they innovate. Take the K99, for example: they’re trying to make it fair, so they make it, officially, about the project. In practice, it should be about the person: this post-doc has potential; we’re going to set him up to help him succeed. It’s terribly biased though (the “he” isn’t a mistake). I don’t think we can do any better, though, because I think we’re inherently unable to see the innovative new idea before it pans out (the whole paradigm-shift thing). Only the innovator can see it, not the anti-innovators working in their own paradigms.

    I don’t actually see that the best-funded labs don’t innovate. It seems to me that it is the “fat” labs that actually get an opportunity to innovate, not the lab of the new investigator PI who needs to get some pubs out if they’re going to make tenure. But I don’t think I’m intimately familiar with the fields where everyone is working on the same thing. I sometimes admire those fields, ’cause at least things get replicated, but it’s tough on the individuals.

  18. drugmonkey Says:

    “It seems to me that it is the “fat” labs that actually get an opportunity to innovate, not the lab of the new investigator PI who needs to get some pubs out if they’re going to make tenure.”

    True as far as it goes. This is one of the reasons I rail against the idea of “starter” grants and the idea that a younger investigator shouldn’t pile up too many grants too fast (a colleague just had this happen, as a matter of fact; the PO was pretty upfront about this set of comments). It is also the reason I think that MERIT should be applied to those younger investigators who are scoring well. However, if the $$ are equal, do you really think the older, more established labs are on average going to be more scientifically innovative?

  19. bikemonkey Says:

    “Take the K99 … we’re going to set him up to help him succeed. It’s terribly biased though (the “he” isn’t a mistake).”

    Since I know 3 women K99ers and one male K99er, I thought I’d CRISP on this. Just going by the reasonably good-bet first-name analysis, I’m thinking women are overrepresented in K99s versus R01s. Maybe still underrepresented versus eligible post-docs, but this category seems to be a lot better than most grant award categories.
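
    For the record, the kind of first-name tally I mean looks something like this; the little name table is a stand-in, and a real pass would want a big list (e.g. Census name frequencies):

    ```python
    from collections import Counter

    # Stand-in table; only unambiguous "good-bet" names get a guess.
    NAME_GUESS = {"mary": "F", "susan": "F", "linda": "F",
                  "john": "M", "robert": "M", "michael": "M"}

    def tally_gender(pi_names):
        counts = Counter()
        for name in pi_names:
            first = name.split()[0].lower()
            counts[NAME_GUESS.get(first, "unknown")] += 1
        return counts

    # tally_gender(["Mary Smith", "John Doe", "Pat Lee"])
    # -> Counter({'F': 1, 'M': 1, 'unknown': 1})
    ```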


  20. […] do something since perhaps the other competing, well-funded labs will just do it anyway (start with this one). I would argue that this is wishful thinking. While there is some truth to the idea that only by […]
