This is my query of the day to you, Dear Reader.
We’ve discussed the basics in the past, but here’s a quick overview.
1) Since the priority score and percentile rank of your grant application are all-important (not exclusively so, but HEAVILY so), it is critical that it be reviewed by the right panel of reviewers.
2) You are allowed to request in your cover letter that the CSR route your NIH grant application to a particular study section for review.
3) Standing study section descriptions are available at the CSR website, as are the standing rosters and the rosters for the prior three rounds of review (i.e., including any ad hoc reviewers).
4) RePORTER allows you to search for grants by study section, which gives you a pretty good idea of what they really, really like (see the sketch after this list).
5) You can, therefore, use this information to slant your grant application towards the study section in which you hope it will be reviewed.
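On point 4, a minimal sketch of a programmatic RePORTER query is below. The v2 web API endpoint is real, but the `full_study_sections` criteria field and its `srg_code` key are my assumption about the payload shape; check the documentation at https://api.reporter.nih.gov before relying on them (the browser interface works fine for this too).

```python
# Sketch: list recent awards reviewed by a given study section via the NIH
# RePORTER v2 API. The criteria field names below are an assumption; see
# https://api.reporter.nih.gov for the current schema.
import json
import urllib.request

payload = {
    "criteria": {
        # hypothetical filter: study section by its SRG code
        "full_study_sections": [{"srg_code": "BRLE"}],
    },
    "limit": 25,
    "offset": 0,
}

req = urllib.request.Request(
    "https://api.reporter.nih.gov/v2/projects/search",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    for project in json.load(resp).get("results", []):
        print(project.get("project_title"))
```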
A couple of Twitts from @drIgg today raised the question of study section “fit”. Presumably this is related to an applicant concluding that despite all the above, he or she has not managed to get many of his or her applications reviewed by the properly “fitting” panel of reviewers.
This was related to the observation that despite one’s request, and despite hitting what seem to be the right keywords, it is still possible that CSR will assign your grant to some other study section. It has happened to me a few times and it is very annoying. But does this mean these applications didn’t get the right fit?
I don’t know how one would tell.
As I’ve related on occasion, I’ve obtained the largest number of triages from a study section that has also handed me some fundable scores over the past *cough*cough*cough* years. This usually comes up when addressing people’s conclusion, after the first 1, 2 or maybe 3 submissions, that “this study section HATES me!!!“. In my case I really think this section is a good fit for a lot of my work, and therefore my proposals, so the logic is inescapable. Send a given section a lot of apps and they are going to triage a lot of them. Even if the “fit” is top notch.
It is also the case that there can be a process of getting to know a study section. Of getting to know the subtleties of how they tend to feel about different aspects of the grant structure. Is it a section that is really swayed by Innovation and couldn’t give a fig about detailed Interpretations, Alternatives and Potential Pitfalls? Or is it an orthodox StockCritiqueSpewing type of section that prioritizes structure over content? Do they like to see it chock full of ideas or do they wring their hands over feasibility? On the other side, I assert there is a certain sympathy vote that emerges after a section has reviewed a half dozen of your proposals and never found themselves able to give you a top score. Yeah, it happens. Deal. Less perniciously, I would say that you may actually convince the section of the importance of something that you are proposing through an arc of many proposal rounds*.
This leaves me rather confused as to how one would be able to draw strong conclusions about “fit” without a substantial number of summary statements in hand.
It also speaks to something that every applicant should keep in the back of his or her head. If you can never find what you think is a good fit with a section, there are only a few options that I can think of.
1) You do this amazing cross-disciplinary shit that nobody really understands.
2) Your applications actually suck and nobody is going to review them well.
3) You are imagining some Rainbow Fairy Care-a-lot Study section that doesn’t actually exist.
What do you think are the signs of a good or bad “fit” with a study section, Dear Reader? I’m curious.
__
*I have seen situations where a proposal was explicitly noted to be on its fourth or fifth round of review (this was in the A2 days) in a section.
Additional Reading:
Study Section: Act I
Study Section: Act II
No Return
June 27, 2013

President Obama and first lady Michelle Obama look out of a doorway that slaves departed from on Goree Island in Dakar, Senegal on June 27, 2013. (REUTERS/Gary Cameron)
via Washington Post article by David Nakamura
Will you be an angel?
June 26, 2013
via a Reader
Grumpy reviewer is….
June 25, 2013
grumpy.
Honestly, people. What in the hell happened to old-fashioned scholarship when constructing a paper? PubMed has removed all damn excuse you might possibly have had. Especially when the relevant literature comprises only about a dozen or two score papers.
It is not too much to expect some member of this healthy author list to have 1) read the papers and 2) understood them sufficiently to cite them PROPERLY, i.e., with some modest understanding of what is and is not demonstrated by the paper being cited.
Who the hell is training these kids these days?
__
Yes, I am literally shaking my cane.
Rockey explains the percentiling of tied NIH scores
June 24, 2013
Admittedly I hadn’t looked all that hard, but I was previously uncertain as to how NIH grants with tied scores were percentiled. Since the percentiles are incredibly important* for funding decisions, this was a serious question after the new scoring approach (reviewers vote integer values from 1 to 9; the average is multiplied by 10 for the final score; lower is better), which was designed in a way that generates more tied scores.
The new system raises the chance that a lot of “ranges” for an application are going to be 1-2 or 2-3 and, in some emerging experiences, a whole lot more applications where the three assigned reviewers agree on a single number. Now, if that is the case and nobody from the panel votes outside the range (which they rarely do), you are going to end up with a lot of tied 20 and 30 priority scores. That was the prediction anyway.
NIAID has data from one study section that verifies the prediction.
As a bit of an aside, we also learned along the way that percentile ranks are always rounded UP.
Percentiles range from 1 to 99 in whole numbers. Rounding is always up, e.g., 10.1 percentile becomes 11.
So you should be starting to see that the number of applications assigned to your percentile base** and the number that receive tied scores are going to occasionally throw some whopping discontinuities into the score-percentile relationship.
Rock Talk explains:
However, as you can see, this formula doesn’t work as is for applications with tied scores (see the highlighted cells above) so the tied applications are all assigned their respective average percentile
In her example, the top applications in a 15-application pool received impact scores of 10, 11, 19, 20, 20, 20…. This is a highly pertinent example, btw, since reviewers concurring on a 2 Overall Impact is very common and represents a score that is potentially in the hunt for funding***.
In Rockey’s example, these tied applications block out the 23, 30 and 37 percentile ranks in this distribution of 15 possible scores. (The top score gets a 3%ile rank, btw. Although this is an absurdly small base for the calculation, you can see the effect of base size…10 is the best possible score and in an era of 6-9%ile paylines the rounding-up takes a bite.) The average is assigned, so all three get a 30%ile. Waaaaay out of the money for an application that has the reviewers concurring on the next-to-best score? Sheesh. In this example, the next-best-scoring application averaged a 19, only just barely below the three tied 20s, and yet it got a 17%ile compared with their 30%ile.
You can just hear the inchoate screaming in the halls as people compare their scores and percentiles, can’t you?
Rockey lists the next score above the ties as a 28 but it could have just as easily been a 21. And it garners a 43%ile.
Again, cue screaming.
Heck, I’m getting a little screamy myself, just thinking about sections which are averse to throwing 1s for Overall Impact and yet send up a lot of 20 ties. Instead of putting all those tied apps in contention for consideration, they are basically guaranteeing none of them get funded, because they are all kicked up to their average percentile rank. I don’t assert that people are intentionally putting up a bunch of tied scores so that they will all be considered. But I do assert that there is a sort of mental or cultural block against going below (better than) a 2, and that for many reviewers, when they vote a 2 they think the application should be funded.
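To make the arithmetic concrete, here is a minimal sketch of the percentiling scheme as I understand it from the Rock Talk post: the raw percentile for rank r in a base of n is 100 × (r − 0.5) / n, ties receive the average of the raw percentiles for the ranks they occupy, and the result is rounded. Note that rounding to the nearest integer reproduces the 3, 17, 30 and 43 figures quoted above, while the NIAID aside says rounding is always up, so I’ve left both options in; the scores after the 28 are placeholders I invented to fill out the 15-application pool.

```python
import math

def percentiles(impact_scores, round_up=False):
    """Percentile each impact score (lower is better) against its pool.

    Raw percentile for rank r in a pool of n applications is
    100 * (r - 0.5) / n; tied scores all receive the average of the raw
    percentiles for the ranks they occupy (per the Rock Talk post).
    """
    n = len(impact_scores)
    ranked = sorted(impact_scores)
    out = []
    r = 1
    while r <= n:
        r2 = r
        # extend r2 through the block of ranks tied at this score
        while r2 < n and ranked[r2] == ranked[r - 1]:
            r2 += 1
        raw = [100 * (rank - 0.5) / n for rank in range(r, r2 + 1)]
        avg = sum(raw) / len(raw)
        pct = math.ceil(avg) if round_up else round(avg)
        out.extend([(ranked[r - 1], pct)] * (r2 - r + 1))
        r = r2 + 1
    return out

# Rockey's pool: 10, 11, 19, three tied 20s, then a 28. Everything after
# the 28 is a placeholder to round out the 15 applications.
pool = [10, 11, 19, 20, 20, 20, 28, 35, 42, 49, 55, 61, 68, 74, 80]
for score, pct in percentiles(pool)[:7]:
    print(score, pct)   # 10->3, 11->10, 19->17, 20->30 (x3), 28->43
```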
In closing, I am currently breaking my will to live by trying to figure out the possible percentile base sizes that let X number of perfect scores (10s) receive 1%iles versus being rounded up to 2%iles, and then what would be associated with the next-best few scores. NIAID has posted an 8%ile payline and rumors have NCI working at 5%ile or 6%ile for next year. The percentile increments that are permitted, given the size of the percentile base and the round-up policy, become acutely important.
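For what it’s worth, that base-size bookkeeping is easy to script: k tied perfect 10s occupy ranks 1 through k, and their raw percentiles average to 50k/n for a base of n, so the question is just where that crosses the rounding boundary. A toy version, under the strict round-up rule described above, with arbitrary example base sizes:

```python
import math

# For a percentile base of n, k tied perfect 10s occupy ranks 1..k and
# average a raw percentile of 50k/n. Under strict round-up, they keep a
# 1%ile only while ceil(50k/n) <= 1.
for n in (50, 100, 150, 200):
    k = 1
    while math.ceil(50 * (k + 1) / n) <= 1:
        k += 1
    print(f"base of {n}: up to {k} tied perfect scores can hold a 1%ile")
```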
__
*Rumor of a certain IC director who “goes by score” rather than percentile becomes a little more understandable with this example from Rock Talk. The swing of a 20 Overall Impact score from 10%ile to 30%ile is not necessarily reflective of a tough versus a softball study section. It may have been due to the accident of ties and the size of the percentile base.
**typically the grants in that study section round and the two prior rounds for that study section.
***IME, review panels have a reluctance to throw out impact scores of 1. The 2 represents a hesitation point for sure.
The 2012 Journal Impact Factors are out
June 24, 2013
Naturally this is a time for a resurgence of blathering about how Journal Impact Factors are a hugely flawed measure of the quality of individual papers or scientists. Also it is a time of much bragging about recent gains….I was alerted to the fact that they were out via a society I follow on Twitter bragging about their latest number.
whoo-hoo!
Of course, one must evaluate such claims in context. Seemingly the JIF trend is for unrelenting gains year over year. Which makes sense, of course, if science continues to expand. More science, more papers and therefore more citations seems to me to be the underlying reality. So the only thing that matters is how much a given journal has changed relative to other peer journals, right? A numerical gain, sometimes ridiculously tiny, is hardly the stuff of great pride.
So I thought I’d take a look at some journals that publish drug-abuse type science. There are a ton more in the ~2.5-4.5 range but I picked out the ones that seemed to actually have changed at some point.
Neuropsychopharmacology, the journal of the ACNP and subject of the above-quoted Twitt, has closed the gap on arch-rival Biological Psychiatry in the past two years, although each of them trended upward in the past year. For NPP, putting the sadly declining Journal of Neuroscience (the Society for Neuroscience’s journal) firmly behind them has to be considered a gain. J Neuro is more general in topic and, as PhysioProf is fond of pointing out, does not publish review articles, so this is expected. NPP invented a once-annual review journal a few years ago and it counts in their JIF, so I’m going to score the last couple of years of gain to this, personally.
Addiction Biology is another curious case. It is worth special note both for the large gains in JIF and for the fact that it sits atop the ISI Journal Citation Reports (JCR) category for Substance Abuse. The first jump in IF was associated with a change in publisher, so perhaps it started getting promoted more heavily and/or guided for JIF gains more heavily. There was a change in editor in there somewhere as well, which may have contributed. The most recent gains, I wager, have a little something to do with the self-reinforcing virtuous cycle of having topped the category listing in the ISI JCR and having crept to the top of a large heap of ~2.5-4.5 JIF behavioral pharmacology / neuroscience type journals. This journal had been quarterly up until about two years ago, when it started publishing bimonthly, and its pre-print queue is ENORMOUS. I saw some articles published in a print issue this year that had appeared online two years before. TWO YEARS! That’s a lot of time to accumulate citations before the official JIF window even starts counting. There was news of a record number of journals being excluded from the JCR for self-citation type gaming of the index….I do wonder why the pre-print queue length is not of concern to ISI.
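As a refresher on why that queue matters: the 2012 JIF is the citations received in 2012 by items whose official publication year is 2010 or 2011, divided by the citable items from those two years. A paper that sits online for two years before its official print date therefore enters the counting window already known and already being cited. A schematic sketch, with invented numbers:

```python
# Schematic JIF arithmetic: citations in the JIF year to items officially
# published in the prior two years, over the citable items from those years.
def jif(cites_to_window_items, citable_items):
    return cites_to_window_items / citable_items

# Invented numbers for illustration: the same 1,000 papers draw more
# in-window citations if they were discoverable online long before their
# official publication dates started the clock.
print(jif(4500, 1000))   # 4.5 with a long online-ahead-of-print queue
print(jif(3000, 1000))   # 3.0 if papers enter the window cold
```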
PLoS ONE is an interest of mine, as you know. Phil Davis has an interesting analysis up at Scholarly Kitchen which discusses the tremendous acceleration in papers published per year in PLoS ONE and argues a decline in JIF is inevitable. I tend to agree.
Neuropharmacology and British Journal of Pharmacology are examples of journals near the top of the aforementioned mass of journals that publish normal scientific work in my fields of interest. Workmanlike? I suppose the non-pejorative use of that term would be accurate. These two journals bubbled up slightly in the past five years but seem to be enjoying different fates in 2012. It will be interesting to see if these are just wobbles or if the journals can sustain the trends. If real, it may show how easily one journal can suffer a PLoS ONE type of fate, whereby a slightly elevated JIF draws more papers of lesser eventual impact, while BJP may be showing the sort of virtuous cycle that I suspect Addiction Biology has been enjoying. One slightly discordant note for this interpretation is that Neuropharmacology has managed to get its online-to-print publication lag down to among the lowest of its competition. This is a plus for authors who need to pad their calendar-year citation numbers but it may be a drag on the JIF, since articles don’t enjoy as much time to acquire citations.
Regulatory Science at NIH
June 21, 2013
One of the more fascinating things I attended at the recent meeting of the College on Problems of Drug Dependence was a Workshop on “Novel Tobacco and Nicotine Products and Regulatory Science”, chaired by Dorothy Hatsukami and Stacey Sigmon. The focus on tobacco is of interest, of course, but what was really fascinating for my audience was the “Regulatory Science” part.
As background, the Family Smoking Prevention and Tobacco Control Act became law on June 22, 2009 (sidebar, um…four years later and..ahhh. sigh.). This Act gave “the Food and Drug Administration (FDA) the authority to regulate the manufacture, distribution, and marketing of tobacco products to protect public health.”
As the Discussant, David Shurtleff (up until recently Acting Deputy Director at NIDA and now Deputy Director at NCCAM), noted, this is the first foray of the NIH into “Regulatory Science”. I.e., the usual suspect ICs of the NIH will be overseeing the conduct of scientific projects designed directly to inform regulation. I repeat, SCIENCE conducted EXPLICITLY to inform regulation! This is great. [R01 RFA; R21 RFA]
Don’t get me wrong, regulatory science has existed in the past. The FDA has whole research installations of its very own to do toxicity testing of various kinds. And we on the investigator-initiated side of the world interact with such folks. I certainly do. But this brings all of us together, brings all of the diverse expert laboratory talents together on a common problem. Getting the best people involved doing the most specific study has to be for the better.
In terms of the specifics of tobacco control, there were many points you would find interesting. The Act doesn’t permit the actual banning of all tobacco products and it doesn’t permit reducing the nicotine in cigarettes to zero. However, it can address questions of nicotine content, the inclusion of adulterants (say, menthol flavor) in tobacco and what comes out of a cigarette (Monoamine Oxidase Inhibiting compounds that increase the nicotine effect, minor constituents, etc). It can do something about the proliferation of nicotine-containing consumer products, which range from explicit smoking replacements to alleged dietary supplements.
Replacing cigarette smoking with some sort of nicotine inhaler would be a net plus, right? Well…..unless it lured in more consumers or maintained dependence in those who might otherwise have quit. Nicotine “dietary supplements” that function as agonist therapy are coolio….again, unless they perpetuate and expand cigarette use. The same goes for nicotine exposure itself…while the drug is a boatload less harmful than the smoking of cigarettes, it is not benign.
There are already some grants funded for this purpose.
NIH administers several of them, and there was a suggestion that this is new money coming into the NIH from the FDA. There was also a comment that this was non-appropriated money, taken from some tobacco-tax fund. So don’t think of this as competing with the rest of us for funding.
I was enthused. One of the younger guns of my fields of interest has received a LARGE mechanism to captain. The rest of the people who seem to be involved are excellent. The science is going to be very solid.
I really, really (REALLY) like this expansion of the notion that we need to back regulatory policy with good data. And that we are willing as a society to pay to get it. Sure, in this case we all know that it is because the forces *opposing* regulation are very powerful and well funded. And so it will take a LOT of data to overcome their objections. Nevertheless, it sets a good tone. We should have good reason for every regulatory act even if the opposition is nonexistent or powerless.
That brings me to cannabis.
I’m really hoping to see some efforts along these lines [hint, hmmmm] to address both the medical marijuana and the recreational marijuana policy moves that are under experimentation by the States. In the past some US States have used state cigarette tax money (or settlement money) to fund research, so this doesn’t have to be at the Federal level. Looking at you, Colorado and Washington.
__
As always, see Disclaimer. I’m an interested party in this stuff as I could very easily see myself competing for “regulation science” money on certain relevant topics.
Tweep @biochemprof pointed to a story of the day about a judicial ruling that unpaid interns on a movie production should have been paid. The story via NBC:
In the decision, Judge William H. Pauley III ruled that Fox Searchlight should have paid two interns on the movie “Black Swan,” because they were essentially regular employees.
The judge noted that these internships did not foster an educational environment and that the studio received the benefits of the work. The case could have broad implications. Young people have flocked to internships, especially against the backdrop of a weak job market.
“Weak job market”, my eye. I still recall my disbelief toward the end of my senior year in college when my friends described how they “had to” take unpaid internships. There were several industries (I can’t recall the specifics at this far remove) for which my fellow newly bachelor-degree’d worker drones were convinced they had to start their careers by working for free. Having secured what I thought was a pretty good gig, being paid the 2013 equivalent of $23,000 per year to earn my PhD, I felt comparatively fortunate. There is no way in hell, or so I thought at the time, that I could have followed such a path. I needed to do something that was going to put a roof over my head and at least some cheap pasta on the table. As I’ve mentioned in the past, I grew up in an academic household. So the parental support for me going into academics was pretty good. However, it was by no means a fantastically well-off household either, being academic, and there was no way in hell my parents were going to pay all my bills deep into my 20s. I had to get a job that was going to pay me something. So I did.
As far as I can tell, the phenomenon of “unpaid internships” for both recent college grads and other long term or temporary would-be-workers has not diminished substantially.
Unpaid internships are a labor-exploitation scam.
Period.
In any industry.
And according to the NBC bit, this is the beginning of a long slog of court cases making exactly this point.
The “Black Swan” case was the first in a series of lawsuits filed by unpaid interns.
In February 2012, a former Harper’s Bazaar intern sued Hearst Magazines, asserting that she regularly worked 40 to 55 hours a week without being paid. Last July, a federal court ruled that the plaintiff could proceed with her lawsuit as a collective action, certifying a class of all unpaid interns who worked in the company’s magazines division since February 2009. This February, an unpaid intern sued Elite Model Management, seeking $50 million.
After a lawsuit brought by unpaid interns, Charlie Rose and his production company announced last December that they would pay back wages to as many as 189 interns. The settlement called for many of the interns to receive about $1,100 each — amounting to roughly $110 a week in back pay, for a maximum of 10 weeks, the approximate length of a school semester.
As part of his ruling on Tuesday, Judge Pauley also granted class certification to a group of unpaid interns in New York who worked in several divisions of the Fox Entertainment Group.
Good.
Look, obviously there will be much legal parsing about the relative benefit of unpaid work to both the employer and the employee. But the basic principles should be clear and easily understood in plain language, and we should be highly attentive to where the putative “educational” or “training” benefit to the employee is being oversold, and the relative work-product benefit to the employer intentionally undersold, to justify the exploitation.
This brings me to us, Dear Reader. By which I mean my academic science peers, our research laboratories and the phenomenon of undergraduate or high-school “interns” who work without financial compensation. It is wrong, exploitative and immoral. We, you…our industry as a whole, should knock it off.
I am not swayed by arguments that you and your lab put more effort into summer interns than you get back in return. If this is so, stop taking them. Clearly, if you do take them, then you get some sort of benefit. Even if that benefit is only that you can brag that you have trained numerous undergraduates or “provided a research experience” to several. But in many cases, these freebie interns do much that is of value, work you would otherwise have to pay someone else in the lab to do. At minimum, this saves your lab on technician salaries or frees up the time of the betters in the lab to work on the more complicated stuff instead of washing glassware or making up buffers. In better situations the intern produces data that moves the lab forward on a project.
If this is the case, ever, then you have exploited the internship scam. You have accepted someone working for you for free. This is almost mind-bogglingly immoral to me and I do not know how my fellow left-leaning academic types can bring themselves to ignore it.
I don’t care one whit that you have 10 or 20 requests each and every Spring from some undergrad on campus, or some undergrad from another University who happens to live in your town and is home for the summer. I get them myself. They make it clear that they expect no compensation…all this tells me is that our business has successfully created a system of exploitation. We have convinced the suckers that they “have to” take these positions to advance their own career goals.
This is absolutely no different from times in the past, prior to labor protections, in which workers “had to” accept dangerous working conditions, longer than 40 hour weeks, no breaks, employment of juveniles, low pay, company stores/towns that stole back much of the wages, etc, etc. The list is lengthy. In every case the industry had fantastic reasons for why they “had to” treat their employees in such a way. The workers themselves were often convinced things “had to” be that way. And what do you know? After hard fought labor protections were put in place the industries got along just fine.
So far, I have gotten along just fine without exploiting unpaid interns in my laboratory. If they are not getting compensated in some way, they don’t work in my lab. I plan to stick with this principle. In my book, training, recommendation letters and the nebulous concept of experience do not qualify as compensation. There should be an hourly wage that is at least as great as the local minimum wage. In some cases, under the formal structure of an undergraduate institution, course credit can be acceptable compensation. I would recommend keeping this to a minimum, particularly when it comes to summer internships and/or work conducted outside of the academic semester. With respect to this latter, no, you can’t skate on the scam that they are just finishing up what they started under a for-credit stint during the regular academic calendar.
In addition to the general immorality of science labs exploiting the powerless (those desiring to enter the career) there is another factor for you to consider. The unpaid internship scam has the effect of blocking the financially disadvantaged from entering a particular career. Think about your mental (or your department’s formal) graduate admissions schema. Does it prioritize those who have had some prior experience working in a research laboratory, preferably in a closely related field of work? Of course it does. Which means it prioritizes those who could afford to gain such experiences. Those who had parents who were willing to float their rent and food bills over the summer months instead of making them find a real job, such as installing itchy insulation in scorching hot attics for 10 hr days, digging ditches, busing tables or changing oil filters. (As I have come to hear postdocs making upwards of $35,000 per year and graduate students $29,000 per year — Federal minimum wage is about $15,000 at present — complain about their treatment, I am certainly coming to reconsider which type of undergraduate summer experience is really the best way to select doctoral students.)
Even if we do not apply an admissions filter, how would the latter type of undergraduate student even come to appreciate that a laboratory career might be for them?
Clearly the solution is to find a way to pay our scientific interns. Much of the time, the mechanisms exist and it is mere laziness on the part of the PI that keeps the intern from being paid. There are administrative supplements to NIH grants for disadvantaged students that are, from what I hear, pretty much there for the asking, as they are underutilized. There are also local summer-experience programs, small-scale philanthropy and academic senate funds. Even if you cough up some grant money, what does 10 weeks cost you? Not that much (at the 2013 federal minimum of $7.25/hour, 40 hours a week for 10 weeks comes to about $2,900). Can you look yourself square in the mirror and tell yourself honestly that you can’t afford the outlay from your grant and that you are not getting any value out of this prospective intern?
I can’t.
Unpaid internships are as much a scam and a labor exploitation in academic science labs as they are at Fox Searchlight Pictures.
Knock it off people.
Unfunded Overhead
June 6, 2013
It struck me today, thanks to the referenced comment from Jim Woodgett, that we’ve never really had a discussion of unfunded overhead situations, despite several discussions of overhead rates in the ongoing effort to determine TheRealProblemTM with NIH budgets these days. It is worth bringing up, particularly for anyone who might be job seeking or negotiating in the near future. As we continue, you’ll see what you need to ask about, and what you need to get in writing along with your job offer.
As a brief introduction, the overhead (or Indirect Costs; IDC) associated with a research grant award is the amount that disappears into the University, research institution (or what have you) instead of going into the PI’s account to spend.
When it comes to federal awards from the NIH (and some other agencies beloved of my Readership) the IDC rate varies across Universities, research institutes and varied other applicant institutions. For discussion’s sake, I’ll throw out that the general rate for larger public Universities is about 56%. Smaller (private) Universities and not-for-profit research institutes tend to have higher ones, with overhead rates of over 80% not uncommon. Rumors abound of 100% overhead rates but I’ve not directly seen one of those myself. To my recollection. This research crossroads site used to have a handy database of the federally-negotiated overhead rates but it has been down for some time now and I suspect it is defunct. I don’t know where they were scraping their data from but presumably these overhead rates are public info.
There are numerous non-federal sources of funding that a given PI might see as appropriate to pursue for her laboratory. Contracts with biotech or Big Pharma companies. Larger or smaller disease focused foundations (American Heart, Michael J Fox). Less-focused foundations (like Bill and Melinda Gates Foundation). Local philanthropic donors. State foundations or funds (like those diverted from tobacco or alcohol taxes). In many, if not most, cases these funding streams do not wish to pay your University the federally-negotiated overhead rate.
The differential can be large. Such as a foundation that will pay 10% maximum…and your federal rate sits at 70%. Perhaps a donor doesn’t want to pay any overhead at all and expects the full donation to go into the research lab’s coffers.
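To put rough numbers on that differential, here is a minimal sketch using the hypothetical rates from the paragraph above; the dollar figure is likewise invented for illustration:

```python
# The "unfunded overhead" gap: what the institution's federally negotiated
# rate implies, minus what the sponsor will actually pay.
def unfunded_overhead(direct_costs, federal_rate, sponsor_rate):
    return direct_costs * (federal_rate - sponsor_rate)

# Hypothetical example: $100,000 direct costs, 70% federal rate, 10% cap.
# Someone has to find the missing $60,000.
print(unfunded_overhead(100_000, 0.70, 0.10))   # -> 60000.0
```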
The ways that Universities and research institutions deal with this issue varies considerably. Across institutions, of course. But also within an institution depending on the money source, the amount of funding involved, the identity of the PI, etc.
The best case scenario for PIs is the institution that doesn’t care. Money is money and….they’ll take it. I’ve heard rumor of such things but it is fantasy as far as I am concerned.
What is more common is that the University has a way to cover the “unfunded overhead” situation to make it appear that the full federally negotiated rate is being applied to each and every grant of consequence*. Sometimes this is accomplished through the mumbo-jumbo of money being fungible and the University simply using their endowment proceeds or some other source of funds not easily connected to a grant to “cover” the overhead. This is good, if you can get it. That is, if your University has a default, no-questions-asked way to do this for a given source of grant support. That’s a supportive place to be.
Considerably less-good is the situation where the PI is supposed to “cover” this for herself. Now sometimes it is the case that the Chair of the Department covers it through a slush fund and, obviously, this would be a more limited pool of money. Consequently, the Chair has to balance who gets the slush. This leaves a lot of room for shenanigans having to do with departmental politics. A lot of room for problems based on how many faculty are trying to tap this pot of slush money in a given year. This is why you, as a prospective new hire, need to ask how these situations are covered and get as much in writing as you can.
There are two remaining horrible options which I hesitate to rank.
Some Universities will pull the overhead out of the new hire’s startup funds. That’s a dicey game for a new faculty member to play. It might be worth it, it might not. Why would it be worth it? Well, that startup is a fixed, nonrenewable pool of money that is supposed to get you launched, right? Which means, in essence, to help you secure a grant. Having grant funding awarded to your lab is a good thing and catapults you into the “funded investigator” category. Depending on the size of the award, using your startup to secure it, instead of continuing the uncertain game of generating more preliminary data, may be advisable. You just have to weigh the leverage that contributing startup to the unfunded overhead will give you.
Some places (and here I find the very high overhead, small not-for-profit research institutes to raise their heads) simply refuse to let faculty (even new hires) apply for anything that doesn’t come with full overhead.
Yes, this seems an unbelievably stupid policy and a way to cripple the prospects of your newly-hired faculty, but there you have it.
For anybody on the job market that is reading this, the conclusions are clear. If the unfunded overhead policies of your prospective institutions are not handed to you when you visit, ask. Determining what grants you will and will not be allowed to apply for in your first few years (or across your career) should not be left up to the (entirely logical) assumption that any grant available is attractive to your University.
ETA: A comment from Jim Woodgett
In essence, NIH subsidizes those agencies and philanthropists that don’t allow or who restrict overhead.
reminded me I forgot to address why the Universities are doing this. My assumption is that if the federal negotiators thought this statement sufficiently true, they would lower the IDC rate for that University. As I said, my assumption. I’ve never been able to get an institutional official to verify this directly though.
Additional reading: Cost principles, Proflike Substance on what overhead pays for.
__
*there can be blanket exceptions for trainee fellowships or exclusions based on an upper limit on the “award”.
Post-publication peer review and preprint fans
June 6, 2013
Anyone who thinks this is a good idea for the biomedical sciences has to have served as an Associate Editor for at least 50 submitted manuscripts or there is no reason to listen to their opinion.
LinkedIn: yea or nay?
June 4, 2013
It’s been a few years but I still have about the same approach to LinkedIn. I’m on there mostly for the networking that might extend to my trainees and other junior scientists in the field. I don’t find it that useful for me in any direct sense.
How about you, Dear Reader*?
UPDATE 06/05/2013: Arlenna points to a page on creepy LinkedIn behavior and a privacy setting you might want to check.
__
*and PhysioProf
A big thanks to all of you who have donated to DonorsChoose in recent days. As a reminder, DonorsChoose has come up with a $25,000 matching fund to double the donations of all of you who read the Scientopia blogs!
This promotion expires at midnight Hawaii time Friday, June 7 (6am Eastern June 8) and is good for up to $100 per donor for a total of $25,000 in matching funds. This is a great time to pat a teacher on the back after a long school year and let them know that people out there value their work and value their students as much as they do. Not to mention get them all set to hit the ground running in the Fall.
So click on over to the Scientopia blogs’ DonorsChoose Giving page and see if there is any project that catches your eye. If not, just use the project browser on the sidebar to find a classroom or project that gains your sympathy. There is no need to stick to the giving page suggestions. Maybe you have a certain topic that is dear to your heart? Perhaps there is a geographical location that you want to support in some way? Browse around, there are many very needy projects.
Then, when you are in the payment checkout page, enter SCIENTOPIA in the “match or gift code” field. See here for screenshot.
I urge you, even if you can’t donate at the moment, to post a link on your Tweeterers, Facebooks and what not. I find my friends and family who have never heard of DonorsChoose before to be grateful to be made aware of this place for their charitable giving.