Detection is up, yes, but contingencies still matter

April 18, 2012

In a recent post, Comraddde PhysioProffe supplies a necessary correction to the oft-repeated claim that scientific fraud is on the rise. Science blog legend Carl Zimmer’s bit in the NYT is only a reflection of a constant drumbeat that you can see in comments posted after many accounts of fraud, say, over at Retraction Watch. As CPP puts it:

I keep hearing this asserted, but I see zero evidence that it is the case. What is clearly the case is that there is now an all-time vastly greater ability for interested sleuths to reveal failures of research integrity (e.g., by image analysis, sophisticated statistical analysis, etc).

Even so, his position tends to ignore some basic reality about the contingencies that influence human behavior in this instance. His first commenter notes this, and in fact Zimmer had a follow-up NYT bit which talks mainly about the fact that scientists busted for fraud never admit their wrongdoing. Like one Michael W. Miller, a scientific fraudster discussed on this blog. One counter-example is listed by Zimmer:

One notable exception to this pattern…Eric Poehlman…was convicted of lying on federal grant applications and was sentenced to a year in jail. For the previous decade, he had fabricated data in papers he published on obesity, menopause and aging.

During his sentencing hearing, Dr. Poehlman apologized for his actions and offered an explanation.

“I had placed myself, in all honesty, in a situation, in an academic position which the amount of grants that you held basically determined one’s self-worth,” Dr. Poehlman said. “Everything flowed from that.”

Unless he could get grants, he couldn’t pay his lab workers, and to get those grants, he cut corners on his research and then began to fabricate data.

It’s just reality. Grant getting is harder and yet laboratory heads are still expected to land plenty of funds. More mouths to feed and fewer grant dollars to throw into them means the pressure is on. And the choices are sometimes stark, or seemingly so. Failing to get a grant can mean losing your job. Take the case of Peter Francis, previously of OHSU. He had a foundation award which notes that his faculty appointment was in 2006. A RePORTER search pulls up just the one award funded in 2011…this R01 was the one that included falsified data and was the (sole) subject of the ORI finding of research misconduct.

As always, I assert my possibly naive belief: Nobody sets out in a science career because they want to fake data and publish made-up results.

It proceeds from this that the data fakers must stray from the path at some point. And the reasons for straying are not due to random cerebral infarct. The reasons for straying are heavily influenced by contingencies. Facing a failure to secure grant funding is a pretty big contingency. Thinking that faking up a preliminary result (hey, it’s just pilot data, we shouldn’t take it as true until a full study follows it, right?) will make the difference in a fundable score is a pretty big contingency.

People like CPP can insist that contingencies were always at play. But they were not always this severe. The success rates for NIH grant getting show a clear difference in the difficulty of getting funded across scientific generations.

Those who are our older and more established scientists have been shaped by three cycles of NIH budget woes forcing down grant success rates: the early 80s, the late 80s into the early 90s (which caused the political pressure leading to the doubling) and the present one starting about 2004 (after the decade-of-the-double). Some of them may have only been trainees for the first one, but the campfire lore and attitudes were transmitted. The graph gives us a point of reference. For established investigators in the mid-80s, a success rate of about 37% represented the dismal landing from a down cycle! Then just one cycle later the success rates were down at 25%: OMMFG we have to DO SOMETHING!! The doubling was great and indeed success rates started to go back up towards the 35% value.

Yeah well the success rate was 17.7% in FY2011.

The contingencies are most assuredly different. And to think this plays no role in the rates of data faking and scientific fraud is dangerously naive.

Responses to “Detection is up, yes, but contingencies still matter”

  1. Grumble Says:

    Of course this kind of pressure is a problem. I’d go so far as to say it’s exactly this sort of pressure that leads not only to willful faking of results, but to what most of us call sloppiness or loose interpretation of results. (For instance, you do two replicates; one shows the “exciting” results you want and the other doesn’t, so you show the first set in a paper and don’t bother mentioning the second.) Maybe this is another reason why (as I commented before), when a pharma company tried to replicate a bunch of basic science papers, they could only successfully do so for 11% of them.

    If that number is reflective of biomedicine in general, and if pressure causing sloppiness/fraud/whatever is the cause, I’d say we have a *huge* problem.


  2. bill Says:

    Most of the examples that I notice were not detected by sophisticated means. You don’t need the latest version of Photoshop to see that some asshole is publishing the same gel lanes in different combinations, or highfalutin’ statistics to see basic patterns that arise from faux-randomization, and so on. You also don’t need much of anything when a disgruntled lab peon blows the whistle.

    I’d love to see the known fraud cases broken down by method and the “level of sophistication” needed to detect them, to see just what impact improved methods have had on the increasing rate of detection.

    What does get a bit chicken-and-egg is the question of whether more fraud is being detected because more people are looking for it (regardless of method). Journals routinely run manuscripts through plagiarism checkers, reviewers are on the alert for figures that don’t look right, etc etc. You’ll find more if you look harder — why are we looking harder? Could it be because we’re well aware of the changing contingencies of which DM writes?


  3. drugmonkey Says:

    why are we looking harder? Could it be because we’re well aware of the changing contingencies of which DM writes?

    I hope so. I hope that people are finally understanding that cheating has real benefits for one individual and real costs for another.

    Someone fakes a CNS paper as a postdoc, well, this can be the difference between being hired and not. Five or eight years later when the ORI finding finally emerges…well that’s far too late for those people who made the short list.

    A PI gets a grant, those dollars are not going to someone else who didn’t falsify preliminary data…by the time it is caught, those $$ have been expended. Even if the NIH recovered everything from the University…those $$ aren’t going to go to the person who was outcompeted in the first place, are they? (hmm, action item for NIH: maybe they should go back and pick up the next application in line).

    And actually, as we’ve learned from both the MW Miller and PJ Francis cases, the NIH DOESN’T EVEN BOTHER TO STOP FUNDING THE AWARD!!!!!

    Another PI took over Miller’s center, and Francis’ R01 has new PI(s?) as well.

    GRRRR. Okay, now I’m working up a fair bit of pissed-offed-ness at the NIH about this.



  4. “As always, I assert my possibly naive belief: Nobody sets out in a science career because they want to fake data and publish made-up results.”

    I agree: no one starts down the path of fraud because they want to fake data, etc. But I do wonder if, after the first couple of successful frauds (and what I find interesting is that, in most cases, there are multiple instances of fraud), they get used to it. After all, they got away with it once (or twice, or…), so why not do it again? As Bill (#2) notes, most frauds are easy to detect–it’s a lot of work to make up ‘rigorous’ fake data–but if the ‘standards’ for fake data are low, fraud is easier and less work.

    What’s sad is that, for one reason or another, they have completely forgotten why they got into science in the first place.


  5. DrLizzyMoore Says:

    @Grumble: I don’t know many folks who only do an experiment twice, particularly when one gets disparate experimental results…


  6. neuromusic Says:

    DM – why should the NIH terminate the award? It’s awarded to the institution, not the PI, right?


  7. drugmonkey Says:

    After all, they got away with it once (or twice, or…), so why not do it again?

    oh, I agree entirely. Slippery slope for sure.

    DM – why should the NIH terminate the award? It’s awarded to the institution, not the PI, right?

    I think in the case where a scientist commits fraud only in the course of pursuing a project that was itself fairly won, you have a point.

    When it was won based on the fruits of fraud, then I have a problem. The Francis case revolves around falsification of data in the grant. We can’t say for sure, but it is likely that without such data the grant would have scored more poorly and not been funded. We know it was not a slam dunk score because the A0 was not funded and an A1 had to be submitted; the faked figure was in both versions. This means that some other proposal didn’t get funded.

    In the Miller fraud, the P50 was mentioned as part of the misconduct finding. It is more complicated because there were other component investigators who, we’ll assume, didn’t fake. However, a Center is not awarded to just anyone. And part of the good review HAD to be the reputation of Miller as the PD (overall Center PI). Had to be. And that reputation, we can only assume, rested on his immediately prior record of (fake) productivity.

    Do we feel sorry for the innocent victims that worked with Miller (and possibly Francis, hard to tell if the replacement PI(s) were already on the project)? Well, they didn’t deserve the award in the first place if it depended in any way on the faked bit….


  8. bill Says:

    On a positive note, between looking harder and using more sophisticated methods, we are clearly catching more frauds. That’s necessary but not sufficient: we also need effective methods for dealing with those caught cheating.

    We need to reach the point where the consequences of being caught are bad enough, and the likelihood of being caught is high enough, that cheating simply isn’t worth it (conversely: the effort required to not get caught is more than the effort required to just do honest science).


  9. AcademicLurker Says:

    I agree entirely. Slippery slope for sure.

    One interesting exception to the slippery slope principle seems to be the Schon scandal. From the book that I read on that case, it seems that Schon never took a single legitimate measurement from the time he arrived at Bell Labs to begin his postdoc. As far as anyone can tell, he fabricated everything out of whole cloth from day 1.



  10. [W]hy are we looking harder?

    Because it is a kajillion times easier to look, now that everything is on the motherfucken internet and can be downloaded in milliseconds by the enthusiastic sleuth.


  11. MudraFinger Says:

    The reasons for straying are heavily influenced by contingencies.

    Re: the Zimmer NYTimes piece – the referenced Fang & Casadevall commentaries in Infection and Immunity were even more interesting. They correctly identify many of the contingencies in operation, and offer several sensible (but never in a million years will they be undertaken) reforms.
    http://iai.asm.org/content/80/3/891.abstract
    http://iai.asm.org/content/80/3/897.abstract

    Paula Stephan also had a nice piece recently, in the April 5 issue of Nature, addressing these contingencies:
    http://www.nature.com/nature/journal/v484/n7392/full/484029a.html
    It’s a commentary, but certainly informed by the mountains of data she has amassed, reviewed, analyzed and created. Particularly interesting was her chart showing the rapidly increasing “footprint” of the biomedical sciences relative to others over a recent 20 yr period, with a notable upward bend right around the time of the NIH budget doubling (with a few years lag – c’mon, it takes time to get the building contracts in place after all!). If the intent of NIH in seeking the doubling was truly to increase success rates, the VPs and PUs seemed to view it more as an opportunity to grow the ranks of their cash cows by building them bigger stables.


  12. HCA Says:

    Coming from a non-biomedical background, I have to wonder what percentage of these fraudsters were in soft-money appointments and had grad students who had to be supported only by grant funds.

    In my field (comparative organismal biology), everyone has hard-money positions, and grad students are usually funded off TA positions (unless paid for via grant money), so the idea of a soft-money position (for PIs and students) seems nothing short of terrifying to me, especially with the current funding rates. I can definitely see how that sort of sink-or-swim environment would lead to a tremendous amount of psychological pressure, and being honest, I know I wouldn’t be able to handle it (even if I was getting grants, it’d be hell on my mental health).


  13. drugmonkey Says:

    I have to wonder what percentage of these fraudsters were in soft-money appointments

    Miller was a newly hired Chair, so one presumes there was hard money involved. Who knows what the ratio / grant expectations were, but I bet there was a great deal of expectation, if not a formal requirement, to support his salary.

    Francis was clinical…hard to know what percentage of clinical versus research was expected or required. From what I know of dual clinical/research appointments there is flexibility to up the clinical duties to make up for lack of grant funds. Nevertheless, of *course* there was a lot of pressure on the guy.

    OTOH, many hard money appointments have de facto requirements to land grant funding; tenure may depend on at least one R01 or equivalent major funding for many people on seemingly hard money contracts.

    I can definitely see how that sort of sink-or-swim environment would lead to a tremendous amount of psychological pressure

    It does. But so does trying to balance excellent teaching with reasonable research output. Those soft-money slackers don’t have to worry about making lectures sound half-way coherent or grading or any of that jazz. They often have far fewer expectations for University service as well.


  14. Grumble Says:

    “But so does trying to balance excellent teaching with reasonable research output. Those soft-money slackers don’t have to worry about making lectures sound half-way coherent or grading or any of that jazz. They often have far fewer expectations for University service as well.”

    Maybe. But I’d argue that the psychological stress coming from fear of losing one’s funding (as a soft-money faculty) is probably far greater than the fear of getting crappy teaching reviews or… ahem, failing to be perceived as providing service (as any kind of research faculty). If a hard money prof loses her funding, she just has to teach more, and the chair isn’t going to give a shit whether she’s a good teacher or not. If a soft money faculty loses her funding, she can’t put food in her baby’s mouth.

    HCA is right. The pressure is far greater on soft-money profs. Get rid of soft money positions, and I’ll bet the amount of fraud would go down.


  15. drugmonkey Says:

    I bet you are wrong. Among other things your suggestion would make Assistant Professor slots even more rare and thereby increase the competition for them.


  16. drugmonkey Says:

    Oh, and you seem to be ignoring the whole tenure thing…


  17. zb Says:

    Of course the soft money thing affects the contingencies. But so does working towards tenure, postdoc success, immigration status, the proportion of salary that must be raised by grants, the number of people supported off grants…

    My solution does involve getting rid of 100% soft money PIs (though haven’t they already, given the prohibition on lobbying for funds when you’re being paid from public funds?). But you also have to hold PIs rigidly responsible for fraud that occurs under their watch (i.e., desperate postdocs who rid PIs of their turbulent data).

    I think post hoc detection and punishment is insufficient because too much of science depends on the honor system. And, if the system becomes generally corrupt (or is perceived that way), honest people get weeded out (or become cheaters too). (As in the English schoolboy novel where it turns out that everyone cheats at Latin, so our hero cheats too, but not enough to do well, just enough to pass.)


  18. zb Says:

    Haven’t figured out a way to decrease the pressure to cheat pre-tenure.


  19. MudraFinger Says:

    As one of those “soft money slackers” to whom you so compassionately refer, DM (I’m required to bring in 90% of my own salary at a 50% indirect rate), I agree with Grumble that doing away with positions such as the one I hold could go a long way towards mitigating some of the more nefarious contingencies in the stew. It’s true, I don’t have the worries of lecturing and grading, but Grumble is correct in noting that I do worry pretty regularly about whether I’ll have a job in a year, and whether I’ll be able to put food in my baby’s mouth. And BTW, what is this “tenure” of which you speak?

    “Among other things your suggestion would make Assistant Professor slots even more rare and thereby increase the competition for them.”

    Aside from the usual answer of “more,” what IS the optimal number of Assistant Professor slots? Are those slots all in biomedical sciences, or should a few of them also be in fields such as physics, math, engineering or, heaven forfend, the social sciences? What’s the right composition of faculty positions across fields? Who decides this, and based on what criteria? Based on data I’ve seen Paula Stephan cite (https://www.nber.org/~sewp/Early%20Careers%20for%20Biomedical%20Scientists.pdf), most recent growth in biomedical faculty positions has been in medical colleges, and most of that has been in non-tenure-track positions (or tenure-track without salary – which is what, other than more of the same?). I rather like Pierre Azoulay’s recent suggestion that we apply some science to answering these kinds of questions.
    http://www.nature.com/nature/journal/v484/n7392/full/484031a.html

    If soft-money positions are artificially inflating the number of faculty positions beyond what is truly sustainable, then yes, there is going to be pain in coming back to reality – just as there has been pain in coming back to reality after the bursting of the housing bubble, another unsustainable perpetual growth machine. Most objectionable to me personally, I would likely lose my current job, and I’d surely have plenty of company! Yet stepping outside of a purely self-interested frame, which is worse: dealing with the pain of a necessary and appropriate correction now, or dealing with the ongoing slaughter of the lambs, corrosion of standards of behavior and loss of integrity (and public trust) that is fostered by continuing to inflate the bubble and further fetishizing competition for competition’s sake?


  20. Grumble Says:

    Right on, Mudra. Because Congress is, at the moment, unwilling to provide serious increases in NIH funding, maybe a slowdown in new positions per annum would help restore balance.

    Soft money has opened up something of an arms race among universities. They want their hands on all that delicious overhead, and to get it, they need to have lots of grant writing faculty. They attract the “best” to come with generous start-up, renovated labs, etc. Then, when a new AssProf can’t get grants within a few years, they cut their losses by firing him.

    But if universities were prevented from firing faculty for not having funding, they would be far more cautious with expanding their ranks. And that would eventually make the competition for NIH funds less intense – in other words, it would restore balance to a system that is seriously out of whack at the moment. Explain to me why that’s a bad thing.


  21. drugmonkey Says:

    Well hell, Grunbie, you and I will probably survive the GreatPurge so great idea. Screw all those postdocs who are deep in the training pipeline. Gotta break a few eggs….


  22. Grumble Says:

    Is it fair to those post-docs to invite them to set up labs and then throw them out when they can’t get funding? All because universities have so many soft-money positions and the NIH doesn’t have enough money for all of them?

    A long term solution (assuming the NIH budget doesn’t increase) would involve fewer grad students and post-docs being trained to begin with, and hence fewer eggs broken.


  23. bill Says:

    As an almost-permadoc escapee from the academic hamster wheel*, I gotta say many of those eggs might be better off getting broken sooner rather than later, DM.

    As Grumble says, the postdocs to whom you refer are fucked anyway — at least most of ’em are — and NOT stopping the soft money shuffle only means that they take a few more years to realize it. Those years are additional opportunity cost.

    *I’m not as bitter as that sounds. My failure to make it up the academic food chain was largely my own fault. My sympathies lie with the many postdocs of my acquaintance who didn’t fare any better than me despite being smarter, harder working and more career focused.


