Answering my question from yesterday, it appears that I have done relatively little bashing of the Impact Factor in recent months. Odd, that. And since our beloved commenter whimple is stirring up trouble, I thought I’d repost something that appeared Sept 21, 2007 on the old blog. I also ran across this post, relevant to the malleability of the IF.


People argue back and forth over whether the Impact Factor of journals, the h-index, Total Cites, specific paper cites, etc., should be used as the primary assessment of scientific quality. Many folks talk out of both sides of their mouths, bemoaning the irrelevance of journal Impact Factor while beavering away to get their papers into those journals and using the criterion to judge others. In this you will note that people argue the case that makes their own CV look the best. I have a proposal:

Read the rest of this entry »

Finally.
An opinion bit written by a senior investigator who actually seems to have a brain in his head and is not blinded by selfishness.

The argument that grants should be funded only on the basis of priority scores is fallacious. There is only a rough correlation between the quality of the science in an application and the priority score. As anyone who has ever served on a study section will attest, a host of different–and sometimes scientifically irrelevant–criteria can creep into play when arriving at a priority score, such as whether there are lots of typos in a grant (even the most accomplished scientists are not always great spellers). This is not because reviewers are vindictive or evil. Just that they are emotional and human. Until human judgment is perfected, granting agencies will always need to consider more than the priority score in making funding decisions.

Sounds familiar, doesn’t it?
Go Play.
[h/t: @BoraZ]

There’s a new entry up over at the Golden Thoughts blog (she’s a nephrologist, so…yes) that talks about the all-important journal Impact Factor, Harold Varmus’ opinion of same, and journals gaming the system.


Dr. Varmus pointed out that many of his most significant works appeared in “lesser” journals that served the appropriate audience for the science.

However, like all numbers, the IF can be gamed, and its validity has been questioned:

Dr. Varmus pleaded for an end to IF insanity.

The IF works about as well right now as the Bowl Championship Series algorithm does for college football.

Ouch, that last one is an insult that goes further than I ever have.
Go Read.
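For anyone who has never looked under the hood of the IF, here is a minimal sketch (with invented numbers, not any real journal's data) of how the two-year figure is computed and one familiar way the denominator can be nudged, which is part of what the "gamed" complaint above is about:

```python
# Minimal sketch of the two-year Journal Impact Factor (invented numbers).
# The numerator counts citations to everything the journal published in the
# prior two years; the denominator counts only "citable items" (articles and
# reviews). That asymmetry is one well-known place the number gets massaged.

def impact_factor(citations_to_prior_two_years: int, citable_items: int) -> float:
    """Two-year IF for year Y: citations received in Y to items published in
    Y-1 and Y-2, divided by citable items published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items

# Hypothetical journal: 2,000 citations to material from the prior two years,
# 500 of those items classified as citable.
print(impact_factor(2000, 500))  # 4.0

# Same citations, but 100 items reclassified as editorials/commentary
# (they can still be cited, they just no longer count in the denominator):
print(impact_factor(2000, 400))  # 5.0
```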

Some time ago SciWo laid out her approach to developing a proposal for research funding in “Eight Easy Steps”. You can dash over and read it yourself; I won’t try to summarize it. What emerged in the comments is that people have very different approaches to this topic. For example, Comrade PhysioProf opined:

Read the rest of this entry »

Discussing Talent and Luck

November 16, 2009

Some Twitt chain or other that I was following had me eventually landing on a NYT book review by Steven Pinker, which takes a critical approach to Malcolm Gladwell’s new book of essays, “What the Dog Saw: and other adventures”. I was particularly struck by this passage:

The common thread in Gladwell’s writing is a kind of populism, which seeks to undermine the ideals of talent, intelligence and analytical prowess in favor of luck, opportunity, experience and intuition. For an apolitical writer like Gladwell, this has the advantage of appealing both to the Horatio Alger right and to the egalitarian left. Unfortunately he wildly overstates his empirical case. It is simply not true that a quarterback’s rank in the draft is uncorrelated with his success in the pros, that cognitive skills don’t predict a teacher’s effectiveness, that intelligence scores are poorly related to job performance or (the major claim in “Outliers”) that above a minimum I.Q. of 120, higher intelligence does not bring greater intellectual achievements.

This struck me not only because it is the source of some of my own queasiness when reading (and trying to discuss) Gladwell, but also because I fall into this trap myself when talking about science careers.

Read the rest of this entry »

Not a very "SMART Plan", no.

November 13, 2009

PhysioProf has the call on a letter published in Science Magazine. One Professor Debomoy K. Lahiri, Ph.D. (Univ website; Research Crossroads) is kvetching about the NIH policy to support previously unfunded investigators and as usual comes off looking idiotic.

Increasing the grants funded below the quality cutoff to nearly one-fifth of all funded grants will not serve the goal of helping new investigators. If such applicants are not held to the stringent process of producing a grant that meets R01 quality requirements, what will happen to them when they are no longer new investigators and are then subject to the same rigors as the rest of the field?

Read the rest of this entry »

Commenter qaz raised an issue that I think I last took up following an observation of Larry Moran. That was also in the context of discussing the so-called over-production of PhDs. The new comment from qaz frames the issue as follows:

I AM advocating graduate PhD-level science training for the rest of the population – imagine if our politicians actually understood science (or even critical thinking) for example. A lot of professions would be improved by having scientific training. (But they don’t need it, you say. I say, why can’t they have it? Why can’t spending five years doing some good science not be a part of someone’s path in life, even if they don’t go on to do NIH-R01-Research?)

Read the rest of this entry »

Oh this is rich. More money to consolidate boondoggle service cores?
Consolidation…to me that connotes a cost savings. Efficiency. And somehow even more cash is required to gain this efficiency?

So the new bullet-point NIH grant review format has been in place for two rounds and I am finally hearing a bit of feedback from friends and colleagues. I also have had a chance to be subjected to a nonzero number of reviews as an applicant, instead of only as a reviewer.
Some of the chatter I am hearing reflects confusion, with a lot of comment that the applicant can’t tell how to interpret things. Even more frightening is one report of a Program Officer making the same complaint. After all, the new format was supposed to help the POs make their decisions! That’s not a good thing.
From my small sample, I think it is perfectly fine. I mean, I used to go through the old summary statements with two highlighters, one for positive comments and one for negative comments. This just distills the process. The comments I’ve received are no more confusing than in the past, and a lot of extraneous nattering has been left off.
I like it.

A recent post noted the decision by the NHLBI to adopt a payline policy that varies by grant revision status. New R01 submissions would be subject to a 16% payline, first revisions (A1) to a 9% payline, and any left-over grandfathered second-revision (A2) applications to a 7% payline.
In the course of discussion, a reader proposed that what we really need is for the NIH to grade the payline based on how many grants a given PI already has. Commenter qaz said:

Maybe it would be enough to share the funding around better – make the first R01 easy to fund, the second harder, etc. If we made it possible for people to be funded at the 25% range (or even below that) if they didn’t have any other grants, then maybe it wouldn’t be a problem.

This idea was seconded by Principle Investigator.
Knowing a landmined topic when I see it, I had a few observations.

Read the rest of this entry »

As I noted previously, the Society for Neuroscience encouraged its members to blog and Twitt the annual meeting in Chicago (Oct 17-22, 2009). The experiment was far from a smashing success, although I do believe there were some hints of what could and should be for the future. The main problem* was, I wager, one of numbers. The meeting registered some 30,000 attendees, and I counted maybe on the order of 30 people actively trying to Twitt or blog it. I think you have to have a bit higher participation for the conversation to really take off, but that’s just speculation.
At any rate, I had a thought today. The USC Annenberg School for Communication & Journalism is holding an event that provides an interesting contrast.

USC Annenberg’s California Endowment Health Journalism Fellowships program is holding a day-long brainstorming event aimed at helping Annenberg leaders launch a new, all-expenses-paid, professional seminar series to educate and encourage dialogue among health professional bloggers and Health 2.0 visionaries. The attendees, who include leading Health 2.0 professionals Matthew Holt of The Health Care Blog and Dr. Val Jones of BetterHealth.com, will discuss the best ways to promote transparency, credibility, accuracy and journalistic principles for the emerging health blogosphere, as well as exposure to larger public health and community health policy issues. This event is by invitation only.

Follow the Twittering on this meeting via the #uscblogcon hashtag. I think this may give you some idea of what could be, if you are on the fence as to whether Twittering/blogging scientific meetings would have value.
__
*apart from some technical difficulties with WiFi coverage and too many iPhoners loading up the AT&T network.

Well, well, well. As my more dedicated readers are well aware, one of my ongoing criticisms of the peer review of NIH grants is the seeming obsession with the revision status of the application. I’ve just reposted this old entry from 2007.
I was, quite naturally, sensitized to this issue originally as a grant applicant. As with many of you, I developed a sneaking suspicion that complaints and bad scores on the original submission of some of my proposals were not in good faith. I mean this in the usual sense: the same or essentially unchanged parts of the proposal were stomped on very hard on the first submission and essentially ignored later. The suspicion also grew from the realization that essentially none of my more-junior colleagues and friends had received an award unrevised.
Some years down the road, I entered service on a study section and, in hearing the way grants at the three different revision stages were reviewed, well, I started to suspect that the bias was real.

Read the rest of this entry »

As I have noted before, if there is one modal complaint of the newly hired Assistant Professor in the laboratory sciences…


…(i)t boils down to a failure of the hiring University to live up to the spirit (and even letter) of what was promised during the recruiting phase. The space that magically becomes “shared space”. The startup funds that get reduced or restricted. The surprises that one is supposed to pay for “out of your startup”. The new building renovations that are slow: “Oh, just use this temporary space for now” becomes “Well, you have a lab; we promised that to the next sucker”. Etc. The excuse is almost always “The dean won’t go for it”, “The dean denied it” and the like, while the Chair insists s/he went to the mat for you. Everyone has problems, doncha know….

This brings me to today’s edition of “Ask DrugMonkey”.

Read the rest of this entry »

I have a post I’m working on that references a topic I’ve been talking about on the blog for a long time. I was about to quote extensively from this one, but I figured I’d better just repost the whole thing. This originally appeared on 10 Sep 2007.


I’ve made reference a time or two to what I describe as a “bias” for amended (revised) applications. In the lifecycle of the standard, investigator-initiated research project grant (the R01) application, it is initially submitted and reviewed and, if not funded, can be revised/amended once (the A1) or twice (the A2). (Thereafter the PI must submit a substantially new proposal.) First, the evidence that revised applications score better and are more likely to be funded relative to initial submissions is readily available.

Read the rest of this entry »

Our good blogfriend, Scibling and scientist-artist BioE! has a post up discussing the intersection of drug abuse health care, drug abuse science, research funding and the political process. I recommend you start with:

Double standards, politics, and drug treatment research

But there’s a huge double standard in the media, and in society in general, when it comes to drug abuse treatment…Maybe it’s because these other addicts are meth addicts, or potheads, or heroin addicts – probably not people you relate to or approve of. That makes it pretty easy for the media to take cheap shots at crack, etc. addicts, and question whether we should waste money trying to help them…But here’s an even easier target than pot smokers: drug-using Thai transgendered prostitutes!

That last is not a joke.

Read the rest of this entry »