Scientopia transitions

April 18, 2012

You will notice a new PayPal button in the sidebar of this blog, and possibly on other blogs around the Scientopia collective. Very likely you did not notice the new amendments to our Code. The short version: we finally have a way to handle money. This is good because it lets us try to cover the operational costs of this blog collective, which had heretofore been borne by a single member. It was thought that we should get our financial/tax/LLC/blahdeblah in order first, and this apparently prevented even donations from ourselves to the cause.

We have apparently negotiated those rough waters. There are three issues on the table at the moment for your understanding and consideration.

First the button and my sidebar text: Your donation helps to support the operation of Scientopia – thanks for your consideration.

This is provided for anyone who would care to support Scientopia. Our expenses are the hosting and bandwidth charges; at the moment nobody is getting paid to do anything service-related for the upkeep.

Second, the Scientopia schwag shop items now have a modest markup. Said markup will be routed into the Scientopia coffers.

Third, at some point in the future there will be ads on the blogs. Not sure who/what/how just yet, but it is in the works.

I won’t make the NPR style plea, you folks can do the maths for yourselves.

In a recent post, Comradde PhysioProffe supplies a necessary correction to the oft-repeated claim that scientific fraud is on the rise. Science blog legend Carl Zimmer's bit in the NYT is only a reflection of a constant drumbeat, which you can see in comments posted after many accounts of fraud, say, over at Retraction Watch. As CPP puts it:

I keep hearing this asserted, but I see zero evidence that it is the case. What is clearly the case is that there is now an all-time vastly greater ability for interested sleuths to reveal failures of research integrity (e.g., by image analysis, sophisticated statistical analysis, etc).

Even so, his position tends to ignore some basic reality about the contingencies that influence human behavior in this instance. His first commenter notes this, and in fact Zimmer had a follow-up NYT bit that talks mainly about the fact that scientists busted for fraud never admit their wrongdoing. Like one Michael W. Miller, a scientific fraudster discussed on this blog. One counter-example is cited by Zimmer:

One notable exception to this pattern…Eric Poehlman, was convicted of lying on federal grant applications and was sentenced to a year in jail. For the previous decade, he had fabricated data in papers he published on obesity, menopause and aging.

During his sentencing hearing, Dr. Poehlman apologized for his actions and offered an explanation.

“I had placed myself, in all honesty, in a situation, in an academic position which the amount of grants that you held basically determined one’s self-worth,” Dr. Poehlman said. “Everything flowed from that.”

Unless he could get grants, he couldn’t pay his lab workers, and to get those grants, he cut corners on his research and then began to fabricate data.

It’s just reality. Grant getting is harder, and yet laboratory heads are still expected to land plenty of funds. More mouths to feed and fewer grant dollars to throw into them means the pressure is on. And the choices are sometimes stark, or seemingly so. Failing to get a grant can mean losing your job. Take the case of Peter Francis, previously of OHSU. He had a foundation award which notes that his faculty appointment was in 2006. A RePORTER search pulls up just the one award, funded in 2011, and this R01 was the one that included falsified data and was the (sole) subject of the ORI finding of research misconduct.

As always, I assert my possibly naive belief: nobody sets out on a science career because they want to fake data and publish made-up results.

It follows that the data fakers must stray from the path at some point. And the reasons for straying are not random cerebral infarcts. The reasons for straying are heavily influenced by contingencies. Facing a failure to secure grant funding is a pretty big contingency. Thinking that faking up a preliminary result (hey, it’s just pilot data, we shouldn’t take it as true until a full study follows it, right?) will make the difference in a fundable score is a pretty big contingency.

People like CPP can insist that contingencies were always at play. But they simply were not the same. The success rates for NIH grant getting show a clear difference in the difficulty of getting funded across scientific generations.

Those who are our older and more established scientists have been shaped by three cycles of NIH budget woes forcing down grant success rates: the early 80s; the late 80s into the early 90s (which caused the political pressure leading to the doubling); and the present one, starting about 2004 (after the decade-of-the-double). Some of them may have only been trainees for the first one, but the campfire lore and attitudes were transmitted. The graph gives us a point of reference. For established investigators in the mid-80s, a success rate of about 37% represented the dismal landing from a down cycle! Then just one cycle later the success rates were down at 25%: OMMFG we have to DO SOMETHING!! The doubling was great, and indeed success rates started to climb back toward the 35% mark.

Yeah well the success rate was 17.7% in FY2011.
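To put those success rates in concrete terms, here is a back-of-the-envelope sketch of what the decline means for an applicant. It treats each submission as an independent coin flip at the published success rate, which is obviously a simplification (real applications are not independent and rates vary by mechanism and career stage), but it illustrates the scale of the shift:

```python
# Rough expected number of grant submissions per funded award, modeling
# each submission as an independent Bernoulli trial at the published
# success rate. A simplification for illustration only; real NIH
# applications are not independent draws.
def expected_submissions(success_rate: float) -> float:
    # Mean of a geometric distribution: 1 / p
    return 1.0 / success_rate

for era, rate in [("mid-1980s trough", 0.37),
                  ("early-1990s trough", 0.25),
                  ("FY2011", 0.177)]:
    print(f"{era}: ~{expected_submissions(rate):.1f} submissions per award")
```

Under this crude model, an investigator facing the FY2011 rate has to write roughly twice as many applications per funded award as one facing the mid-80s "dismal" 37%: that is the change in contingencies in a nutshell.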

The contingencies are most assuredly different. And to think this plays no role in the rates of data faking and scientific fraud is dangerously naive.

People who suspect that non-scientific shenanigans (of the political or craven variety) have blocked the acceptance of their paper, or a fundable score for their grant, often cry for double-blind review.

At present the reviewers are not blinded as to the identity of the authors or grant proposer(s).

The thought is that not providing reviewers with author/Investigator identity will prevent reputation or other seemingly irrelevant social cachet/hand/power from contaminating a sober, objective evaluation of the science.

It can’t work. There are simply too many clues in the topic, cited literature, approaches and interpretive framework. In the fields that I inhabit, anyway.

Even if in some cases the reviewers were unable to nail down which lab was involved, the blinding would be uneven. And let me tell you, the chances of detection would be highest for well-known groups.

All this would do would be to reinforce the reputation bias!

Please, I beg you, my idealistic friends, tell me how this is supposed to work? Think hard about all the manuscripts you’ve reviewed and tell me how they could be restructured so as to provide a near guarantee (p<0.05) of blinding.

Oh, you can yammer on about how you were done dirty, too. Sure, you can get all red about how I am an apologist for the status quo, a defeatist, and all that. And by all means rant about the horrible effects on science.

But try, this once, not to sidestep my main point: how is blinding supposed to work?