Managing your CV and why pre-print waccaloons should be ignored
February 2, 2016
For whatever reasons were on my mind at the time, I was motivated to twttr this:
What I mean by this is that somewhere on the pile of motivations you have to finish that manuscript off and get it submitted, you should have “keeping your calendar years populated”.
It may not always be a big deal and it certainly pales in comparison to the JIF, but all else equal you want to maintain a consistent rate of output of papers. Because eventually someone will look at that in either an approving or disapproving way, depending on how your publication record looks to them.
Like it or not, one way that people will consider your consistency over the long haul is by looking at how many papers you have published in each calendar year. Published. Meaning assigned to a print issue that is dated in a particular calendar year. You cannot go back and fill these in when you notice you have let a gap develop.
If you can avoid gaps*, do so. This means that you have to have a little bit of knowledge about the typical timeline from submission of your manuscript for the first time through until the hard publication date is determined. This will vary tremendously from journal to journal and from case to case because you don’t know specifically how many times you are going to have to revise and resubmit.
But you should develop some rough notion of the timeline for your typical journals. Some have long pre-print queues. Some have short ones. Some move rapidly from acceptance to print issue. Some take 18 mo or more. Some journals have priority systems for their pre-print queue and some just go in strict chronological order.
And in this context, you need to realize something very simple and clear. Published is published.
Yes, mmhmm, very nice. Pre-print archives are going to save us all. Well, this nonsense does nothing for the retrospective review of your CV for publication consistency. At present the culture of scientific career evaluation in the biomedical disciplines does not pay attention to pre-print archives. It doesn’t really even respect the date of first appearance online in a pre-publication journal queue. If your work goes up in 2016 but doesn’t make it into a print issue until 2017, history will cite it as 2017.
__
*Obviously it happens sometimes. We can’t always dictate the pace of everything in terms of results, funding, trainee life-cycles, personal circumstances and whatnot. I’m just saying you should try to keep as consistent as possible. Keep the gaps as short as possible and try to look like you are compensating. An unusually high number of pubs following a gap year goes a long way, for example.
CSR Head Nakamura Makes Bizarre Pronouncement
February 2, 2016
An email from the CSR of the NIH hit a few days ago, pointing to a number of their Peer Review Notes, including one on the budget bump that we are about to enjoy.
Actually that should be “some selected few of us will enjoy” because
“While $2 billion is a big increase, it is less than a 10 percent increase, and a large portion of it is set aside for specific areas and initiatives,” said Dr. Nakamura. “Competition for funding is still going to be intense, and paylines will not return to historic averages . . .
Yeah, as suspected, that money is already accounted for.
The part that has me fired up is the continuation after that ellipsis, plus the header item that follows.
So make sure you put your best effort into your application before you apply.”
Counterproductive Efforts
“We know some research deans have quotas and force their PIs to submit applications regularly,” said Dr. Nakamura. “It’s important for them to know that university submission rates are not correlated with grant funding. Therefore, PIs should be encouraged to develop and submit applications as their research and ideas justify the effort to write them and have other scientists review them.”
As usual, I do not know if this is coming from ignorance or from a calculated strategy to make their numbers look better. I fear both possibilities. I’m going from memory here because I can’t seem to rapidly find the related blog post or data analysis, but I think I recall an illustration that University-total grant submission rates did not predict University-total success rates.
At a very basic level, Nakamura is relying on the lie of the truncated distribution. If you don’t submit any grant applications, your success rate is going to be zero. I’m sure he’s excluding those cases, because including them would produce a nice correlation.
But more importantly, he is trying to use university-wide measures to convince the individual PI what is best for her to do.
Wrong. Wrong. Wrong.
Not everyone’s chances at that institution are the same. The more established investigators will probably, on average, enjoy a higher success rate. They can therefore submit fewer applications. Lesser folk enjoy lower success rates, and therefore they have to keep pounding out the apps to get their grants.
By extension, it takes very little imagination to understand that depending on your ratio of big important established scientists to noobs, and based somewhat on subfields, the apparent University-wide numbers are going to swamp out the information that is needed for each individual PI.
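Since I’m hand-waving about aggregation here, a toy simulation may make the point concrete. Everything in it is invented for illustration: the per-application hit rates (25% for established PIs, 8% for newer ones) and the submission counts are my assumptions, not NIH data. The point is only that a university’s mix of established and new investigators can push its total submissions and its overall success rate in opposite directions, so the university-wide correlation tells an individual PI nothing about whether submitting more helps her.

```python
# Toy simulation (made-up numbers): university-level submission totals vs.
# success rates can look uncorrelated, or even inversely related, simply
# because universities differ in their mix of established and new PIs.
import random

random.seed(1)

def simulate_university(n_established, n_new):
    """Established PIs submit few apps at a high per-app hit rate;
    newer PIs submit many apps at a low per-app hit rate."""
    submissions = 0
    awards = 0
    for _ in range(n_established):
        apps = random.randint(2, 4)            # established PIs submit less
        awards += sum(random.random() < 0.25 for _ in range(apps))
        submissions += apps
    for _ in range(n_new):
        apps = random.randint(6, 10)           # newer PIs must submit more
        awards += sum(random.random() < 0.08 for _ in range(apps))
        submissions += apps
    return submissions, awards / submissions

# Universities differ mainly in their ratio of established to new PIs.
for established, new in [(40, 10), (25, 25), (10, 40)]:
    subs, rate = simulate_university(established, new)
    print(f"established={established:2d} new={new:2d} "
          f"total submissions={subs:4d} success rate={rate:.2f}")
```

In this sketch the universities that submit the most have the worst success rates, yet every individual PI in it still increases her expected number of awards by submitting more.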
In short, this is just another version of the advice to young faculty to “write better grants, just like the greybeards do”.
The trick is, the greybeards DO NOT WRITE BETTER GRANTS! I mean sure, yes, there is a small experience factor there. But the major driver is not the objective quality but rather the established track record of the big-deal scientist. This gives them little benefits of the doubt all over the place as we have discussed on this blog endlessly.
I believe I have yet to hear from a newcomer to NIH grant review who has not had the experience, within 1-2 rounds, of a reviewer ending his/her review of a clearly lower-quality grant proposal with “….but it’s Dr. BigShot and we know she does great work and can pull this off”. Or similar.
I have been on a study section round or two in my day and I am here to tell you. My experience is not at all consistent with the idea that the “best” grants win out. Merit scores are not a perfect function of objective grant quality at all. Imperfectly prepared or boring grants get funded all the time. Really exciting and nearly-perfect grants get unfundable scores or triaged. Frequently.
This is because grant review hinges on the excitement of the assigned reviewers for the essence of the project. All else is detail.
You cannot beat this system by writing a “perfect” grant. Because it may not be perfect for all three reviewers no matter how well it has been prepared and how well vetted by whatever colleagues you have rounded up to advise you.
Nakamura should know this. He probably does. Which makes his “advice” a cynical ploy to decrease submissions so that his success rate will look better.
One caveat: I could simply be out of touch with all of these alleged Dean-motivated crap apps. It is true that I have occasionally seen people throw up grant applications that really aren’t very credible from my perspective. They are very rare. And it has occasionally been the case that at least one other reviewer liked something about an application I thought was embarrassingly crappy. So go figure.
I also understand that there are indeed Deans or Chairs that encourage high submission rates and maybe this leads to PIs writing garbage now and again. But this does not account for the dismal success rates we are enjoying. I bet that magically disappearing all apps that a PI submitted to meet institutional vigor requirements (but didn’t really mean to make a serious play for an award) would have no perceptible effect on success rates for the rest of us. I just haven’t ever seen enough non-credible apps for this to make a difference. Perhaps you have another experience on study section, DearReaders?
Finally, I really hate this blame-the-victim attitude on the part of the CSR and indeed many POs. There are readily apparent and demonstrable problems with how some categories of PIs’ grants are reviewed. Newer and less experienced applicants. African-American PIs. Women. Perhaps, although this is less well-explicated lately, those from the wrong Universities.
For the NIH to avoid fixing their own problems with review (for example the vicious cycle of study sections punishing ESI apps with ever-worsening scores when the NIH used special paylines to boost success rates) and then blame victims of these problems by suggesting they must be writing bad grants takes chutzpah. But it is wrong. And demoralizing to so many who are taking it on the chin in the grant review game.
And it makes the problems worse. How so? Well, as you know, Dear Reader, I am firmly convinced that the only way to succeed in the long term is to keep rolling the reviewer dice, hoping to get three individuals who really get what you are proposing. And to take advantage of the various little features of the system that respond to frequent submissions (reviewer sympathy, PO interest, extra end-of-year money, ARRA, sudden IC initiatives/directions, etc). Always, always you have to send in credible proposals. But perfect vs really good guarantees you nothing. And when chasing perfect keeps you from submitting another really good grant? You are not helping your chances. So when Nakamura tells people to sacrifice the really good for the perfect, he is worsening their chances. Particularly when those people are in groups who are already at a disadvantage and need to work even harder* to make up for it.
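To put rough numbers on that trade-off (and these are entirely made-up probabilities, not anything NIH publishes): if perfecting one application costs you the chance to submit a second really good one, the arithmetic can easily favor volume.

```python
# Invented numbers, purely for illustration: suppose a maximally polished
# "perfect" application has a 25% chance of funding, while each merely
# "really good" application carries an 18% chance, and outcomes across
# submissions are roughly independent.
p_perfect = 0.25
p_good = 0.18

chance_one_perfect = p_perfect
chance_two_good = 1 - (1 - p_good) ** 2    # at least one of two tries hits

print(f"one perfect app: {chance_one_perfect:.1%}")   # 25.0%
print(f"two good apps:   {chance_two_good:.1%}")      # ~32.8%
```

The exact numbers don’t matter; the point is that a modest per-application edge from endless polishing rarely beats an extra credible roll of the reviewer dice.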
__
*Remember, Ginther showed that African-American PIs had to submit more revisions to get funded.