Open Access Grantsmanship
October 17, 2007
I was reading one of the summaries from the CSR Peer Review open house roundtables, the “Neuroscience” one.
A quick Google is coming up dry, so if anyone recalls a summary that includes survey data, please comment. The thing that struck me was that over 50% of participants had never had a grant triaged.
Now, the first thing that comes to mind is the people NIH usually drags in to give it an “authoritative” opinion on various topics of interest. Opinions are most frequently sought from research luminaries, heads of institutions and society officialdom, i.e., (very) senior scientists. These PIs are utterly unrepresentative of the pool of NIH applicants (and potential applicants). In this particular case, however, there is a possible alternate explanation: triage is not in fact the norm for “good” scientists, and those of us getting streamlined with any regularity are just writing bad grants. Or are bad scientists. Etc. This is the sort of thing that keeps junior (and not so junior) scientists awake at night.

It would be nice to have some current and specific information on triage rates across the NIH. Trouble is, this is not a data set that is easily obtained. The specific questions of the day being: What proportion of scientists (as opposed to applications) applying to the NIH are triaged? Have these numbers changed over time? What do the per-PI triage rate distributions look like?
I was discussing this a little bit with a colleague and came to the realization that this is the sort of information we don’t even get a clear bead on through our usual collegial chit-chat. Mostly because, across a bunch of casual conversations, it is hard to keep track of how many grants someone has put in, how many times they’ve bitched about a triage, etc. Also, I realize, because it is ever so slightly taboo to really ask someone these sorts of things. I, for one, wouldn’t feel comfortable asking, “So, how many grants have you put in and how many triages?” The few (two?) times someone has asked me for similar specifics on how many grants I’ve put in, I recall having a slight negative reaction. Like, “Hey, that’s private, dude!” There’s another issue, which I realized after quashing the first sentiment: even I don’t keep track of the numbers very well. I know this sounds strange to someone in year 2 of grant writing, but after a while…
So, for today’s lesson and in the spirit of OpenScience, I’ve bothered to pull my grant submission data from Commons for your entertainment and derision. Be kind. YMMV, of course. And naturally, this only counts the stuff where I have been the PI for the submission.
I’ve been putting proposals in since early 2000. I count up 20 Type 1 and 2 Type 2 applications submitted. Yikes, has it really been that many? In that list I have 8 A1s and 2 A2s. Fifteen of the applications were scored and 7 were streamlined. Three were funded so far. (I will note that this is not sufficient in soft-money land, of course and the answer to “how?” is that I’ve acquired at least an equivalent portfolio through sub-components, pilots and the like.)
Of course, the question of individual “success rate” beyond simple definitional purposes is complicated. I abandoned my Type 2 after two triages, so there would theoretically have been a third chance that I chose not to take. I’ve lost interest in pursuing a particular line with two grants because a competitor in the field (and now at least two, I note) has gotten funded to do very similar stuff. I have at least three that are in active revision mode, although these will trickle across several rounds as I get to them. I also have a few more that are somewhat promising after the -01 or A1 review but are a bit down my priority list. Never gone, of course. Especially the ideas. Nevertheless, it is hard to determine what the denominator should be. As the DM is fond of pointing out, if you haven’t revised you haven’t really submitted an application.
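For anyone keeping a similar tally, the denominator problem can be made concrete with a quick script. A minimal sketch, with entirely made-up records; the field names and the two denominator conventions are my own illustration, not anything Commons actually exports:

```python
from collections import Counter

# Hypothetical per-application records: (project, revision, outcome),
# where outcome is one of "funded", "scored", "triaged".
submissions = [
    ("projA", "01", "triaged"),
    ("projA", "A1", "scored"),
    ("projA", "A2", "funded"),
    ("projB", "01", "triaged"),   # abandoned after a single triage
    ("projC", "01", "scored"),
    ("projC", "A1", "scored"),
]

outcomes = Counter(o for _, _, o in submissions)

# Denominator choice 1: every submission counts.
per_submission = outcomes["triaged"] / len(submissions)

# Denominator choice 2: only projects carried through at least one
# revision count (the DM rule: if you haven't revised, you haven't
# really submitted).
revised = {p for p, rev, _ in submissions if rev != "01"}
pursued = [s for s in submissions if s[0] in revised]
per_pursued = sum(1 for s in pursued if s[2] == "triaged") / len(pursued)

print(f"triage rate, all submissions: {per_submission:.0%}")
print(f"triage rate, revised projects only: {per_pursued:.0%}")
```

On these toy numbers the two conventions give 33% versus 20%, which is the point: the same record looks quite different depending on whether abandoned one-shot applications stay in the denominator.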
I note something a little more subtle which is that I have, for the most part, taken at least two lines of attack on a given research area of interest to me. This means that one of the two usually gets abandoned if the other is funded or sidelined if the other has a more promising score. Sometimes a sidelined one later gets dropped because my own work has moved on, the field indicates different directions or a competitor gets a related grant funded.
What other fun things does this sort of review reveal?
- So far, I’ve never had a grant triaged in an SEP review, even though a couple of scores in the 250-280 range show it was a close thing.
- In the salad days of the early noughties, a 170 was a 19%-ile and fundable; in this most recent round it was a 28.6%-ile and not even close.
- I have 5 scores in the 160-175 range over the years. I might encourage people to view this range as a good indicator that you are being taken seriously as a scientist, have good ideas and can write a grant. Improving beyond this range may be a matter of luck. For example, it is hard to see how my one application scored 120/1.6% is dramatically or categorically different from my 160-170 scored ones.
- Of the 7 applications reviewed in my most-frequent study section, I have 4 triages as well as a personal-best score. This is relevant to theories that “that particular study section hates me”.
- None of my proposals funded to date have gotten there after an initial triage. This stat is contaminated by the reduced chance that I will have revised a triaged proposal, of course.
So, nothing too surprising here, since I had a pretty decent seat-of-the-pants recollection of how I’ve been doing. I think the most interesting thing is the fate of triaged applications. Mostly because a very common question from new applicants is “Should I abandon a triaged application?”. My default response to this is “No way”, mostly because of the way revisions are treated; DM has similar attitudes posted here and there. Also because of a sort of back-of-the-head suspicion that we’ve had applications triaged in our section which were then revised into highly competitive applications. But are they? Again, this is an area where some hard CSR data could be useful. My own anecdote suggests, at this point, that perhaps it is wise to abandon a triaged application. This counters my gut feeling, but the data are what they are.
The link regarding the 50% never-triaged also suggests that 23% of respondents had a previously triaged grant eventually win funding.
[Editorial post-posting comment: This is the singularly worst-prepped and proofed entry I've posted yet. Apologies to those who read this pre-correcting. There would be a StockCritique for the initial version...]