Can a fraudster be rehabilitated?

November 20, 2012

The ORI blog has an entry up about a new program which attempts to rehabilitate those who have committed academic misconduct.

“RePAIR’s premise is that an intense period of intervention, with multiple participants from different institutions who spend several days together at a neutral site, followed by a lengthy period of follow up activities back at their home institution, will rebuild their ethical views. ORI doesn’t know whether RePAIR will work and cannot formally endorse it. But ORI staff do find RePAIR an intriguing and high-minded experiment that research institutions may wish to consider as a resource.”

I like the idea of experimenting. But I have to admit I’m skeptical. I do not believe that academic misconduct at the grad student, postdoc and professorial level is done out of ignorance. I believe that it occurs because someone is desperate and/or weak and allows the pressures of this career path to nudge them down the slippery slope.

Now, true, fraudsters erect many cognitive defenses to convince themselves that they are justified. Perhaps the “everyone is doing it” defense is something that can be addressed with these re-education camps. But many of the contingencies won’t go away. There is no weekend seminar that can change the reality of the NIH payline or the GlamourMag chase.

I suspect this will be a fig leaf that Universities use to cover up the stench should they choose to retain a convicted fraudster or to hire one.

Speaking of which, a Twitt yesterday alleged that Marc Hauser has been reaching out to colleagues, seeking collaboration. It made me wonder whether anyone with an ORI finding against them has ever returned in a meaningful way: whether any University would hire them, whether they would be able to secure funding and whether the peer review process would accept their data for publication.

Can anyone think of such a person?

Responses to “Can a fraudster be rehabilitated?”

  1. Some people can actually be scared straight … the punishment itself was likely enough, and this program is just to allow institutions to justify re-hiring/funding them.

  2. DrugMonkey Says:

    An interesting point. It has much appeal. Gaining a new lease on professional life may be enough to put the fraudster back on the path.

  3. DJMH Says:

    I just don’t see the point. It’s not as though there are scads of PI positions going unfilled for lack of qualified applicants. Why let back in someone with a proven record of fraud? *Especially* because whatever remains on their CV may well have been fake, just not yet identified as such?

    Also, way to send the message down the food chain: Go ahead, fake that data. IF we catch you for it, it’s a couple of years of lying low and saying Sorry, and you’re back on track!

  4. Given the lack of sufficient resources, such as grant funding, to support all of the ethical scientists in the community, why not just tell those researchers who have engaged in misconduct to find something else to do with their lives?

  5. DJMH Says:

    Yeah, the whole prospect of devoting EXTRA resources to these people (the time and money required to maintain the “repair” system) is sort of mind-boggling.

  6. drugmonkey Says:

    You raise very good points indeed, DJMH and PP. I wonder this myself and I cannot bring to mind any cases where a person with an ORI conviction (or whatever you call it) has ever returned to research.

  7. becca Says:

    There are no unethical or ethical scientists*, there are only more and less desperate ones.
    The only question I have is which effect will be greater:
    1) the possibility of redemption from fraud removes the deterrent effect of any punishments that might be available for fraud (keeping in mind that punishments need to be both severe and likely in order to be effective deterrents)
    2) the possibility of redemption removes any hesitations people have about reporting someone for fraud, thus bringing much more incorrect science to light much more quickly

    *NB: Based on the Stanford prison experiment etc., I think it’s always misleading to view virtue as an intrinsic character trait. There is a small percentage of people who will do the right thing in very difficult circumstances, and we’d always like to think we’d be that person. Most of the time, statistically, we’re wrong. Pretty much every scientist would cheat if the incentives were right.

  8. Grad students, sure. Early stage postdocs, too, maybe, if there’s evidence that they were led astray by a toxic PI / lab culture. Beyond that? No way.

  9. eeke Says:

    Cath@VWXYNot? – grad students? I know of at least two students who were dismissed from their programs immediately after misconduct was discovered. One ended up with a job as a tech and never got a PhD. I don’t know what happened to the other one – they were deported soon after the incident. I can’t see graduate programs having much tolerance for students who fake data. I agree with the position here that there are better ways to spend the limited resources we have than on rehab for fraudsters.

  10. Dave Bridges Says:

    This is a larger question: is it worth rehabilitating someone relative to their crime? I agree with most posters above – why take the risk? So much of science is trusting that we are being presented with representative information. If you distrust someone a little bit, then their argument will always be weak. Especially when someone has a limited auditable dataset (a dozen papers or so), one fraudulent paper will make all the others seem suspicious.

    On the other hand, if someone was brilliant but a fraud, is it in our best interests to banish that person from science? If someone wants to cheat, get caught, and make the obstacles to their success greater, that’s fine by me. But if they want to keep at it, who am I to tell them to find another career?

  11. anonymous postdoc Says:

    When a fraudster is caught:

    The person who is most rehabilitate-able, the graduate student, gets drummed out of their program (with perhaps an honorary master’s if they are fortunate), as this person is obviously of no worth as a scientist.

    In contrast, the person who has been at it the longest, the principal investigator, is a valuable scientist whom we should retain, as their years of training and other productivity speak to their ability to recover from this lapse and produce valuable knowledge.

    I do not believe the premises of these statements. They are illogical in the extreme. Nevertheless, these statements would appear to illustrate the mindset and the impetus behind the “RePAIR” program.

    Does social psychology tell us that we are more likely to forgive the PI because we can identify with them more than the student? Or is there actually a logic to the above statements because the greater crime is cheating when the stakes are lower?

  12. But do we really need to ban someone from science FOR LIFE? When someone commits another type of crime, we have a justice system that determines how long they have to go to prison, and afterwards they get a second chance and go back into society. I feel it should be the same for scientists who commit misconduct: depending on the severity of their crime, they deserve a second chance. But I do agree that it is not right for this to cost society extra by means of a rehabilitation program. Also, I don’t believe they should get a third chance.

  13. odyssey Says:

    I’m not sure any PIs are banned for life. Typically, at least in the US, they’re ineligible for Federal funding for X years. This guy, for example, is ineligible for 7 years (one might say nowhere near enough given the extent of the fraud). After that, they’re free to try again.

    If anyone will give them a job.

    Not so different to the penal system in that regard.

  14. Dr Becca Says:

    “On the other hand, if someone was brilliant but a fraud, is it in our best interests to banish that person from science? If someone wants to cheat, get caught, and make the obstacles to their success greater, that’s fine by me. But if they want to keep at it, who am I to tell them to find another career?”

    If someone is brilliant but a fraud, then their brilliance has little value. As has been pointed out, there is no dearth of excellent scientists out there, and purging academia of convicted fraudsters is unlikely to result in a slowing of scientific progress.

  15. Dave Bridges Says:

    What about Mendel?

  16. Dave Bridges Says:

    Think about it another way: for the PIs in the audience, are there any conditions under which you would ever hire and/or trust someone previously caught in some scientific malarkey?

  17. DrugMonkey Says:

    I don’t think so.

  18. whimple Says:

    Stupid idea. Total waste of resources. I bet someone got a grant to try it; that’s the only reason anyone would do it. Like there’s a shortage of scientists, proposals or papers. Sheesh!

  19. Alex Says:

    It’s hard to have any sympathy for senior people who commit fraud, so I won’t. However, for students, part of me wants to believe that they can be “fixed.” Isn’t the whole point of a career in an educational institution the belief that people can (at least sometimes) be made better than they were? I don’t like the idea of writing somebody off young, so there’s something appealing about a place to send them to be fixed.

    The problem is that I just don’t know how I could bring myself to work with a student who got caught doing something fraudulent. I mean, I suppose that I could stand over them and demand a level of documentation and demonstration that I would never demand of anybody else, but that is time that could be spent on having more productive scientific interactions with students who have earned some trust. I’d rather focus my time with students on discussing what they are doing and testing their understanding and execution of it, not on getting proof that they actually did it.

    A more productive exercise for everyone would be to design research ethics training sessions that don’t insult anybody’s intelligence and that actually delve into truly hard questions that come up in real research. Some productive discussions might save a lot of people from wandering into gray areas where they shouldn’t wander, or help people spot the mistakes and manipulations of others sooner. That sort of approach seems like it would be useful for a lot of people in a lot of ways. There are some good ethics training modules out there, but there are also some useless ones, and it would be nice to have more good ones.

  20. Beaker Says:

    If they promise not to do it again, can’t we just ignore previous crimes and remember the awesomeness of their other stuff?

    But seriously, DJMH and CPP: word.

  21. DrugMonkey Says:

    Srs question Alex…do you really think people don’t know what is the right thing to do? Even graduate students? Of course they know, even in your “truly hard” scenarios.

  22. Alex Says:

    I can’t think of any “truly hard” scenarios involving fabricated data.

    I can think of a scenario where a student saw a very subtle mistake that the rest of us missed, but he lacked confidence in his take and didn’t say anything…we had to print an erratum. Fortunately all of the results still held after we re-ran everything, but we had to write “Here’s what we did wrong, here’s how we fixed it, and here’s the revised Figure 2 showing the same trends as before.” That’s never fun. Everybody involved (me most especially) learned something, but I wish that I had done more to build up his confidence about how to speak up in group meeting.

    To be clear, the fault lies with me, not the student, because I’m the guy in charge, I failed to see the mistake myself, and I didn’t establish the right communication culture in the group. He’s learned a lot and has taken leadership roles on projects since then, and I’ve made a point of having a lot more conversations, both as a group and one-on-one, to make sure that everyone is on board with what everyone is doing. We are much better now at looking over each other’s shoulders in a helpful way.

    Still, maybe it would have saved us an erratum if he’d been given more encouragement to demand explanations (from me and everyone else) for anything that doesn’t seem right, rather than assuming that he must be the one who’s wrong because he’s less experienced. Maybe it would have saved us an erratum if he’d been through something that gave him the confidence to say “You know, even though the professor already approved this, I’m still not quite getting why it works.”

    I think that good training materials (for me and him) could help. I think that safe but candid conversations among colleagues, on a regular basis, could elicit educational examples like this from other colleagues, so that I could get useful reminders from their experiences and they could get useful reminders from mine. All of this would be better than a presentation on why we shouldn’t fake data.

  23. Siveal Says:

    Grad students and postdocs (at early stages) can be coerced / tricked / forced into some kind of misconduct. They can (and should) be rehabilitated. Above that level I cannot imagine it happening.

  24. Alex Says:

    Oh, I can think of a situation where somebody didn’t know half as much as they thought they knew of the relevant rules on IP, and wanted to go talking to sharks. That person got talked down, eventually, and the good news is that their project wasn’t terribly valuable anyway, so the stakes were low. Still, it was a mess that could have been avoided with proper training.

  25. DrugMonkey Says:

    I just don’t see how a training class on speaking up in lab meeting is anything but obvious…and it would be as useless as all other scenario training when it comes to the reality. It doesn’t embolden the noob, doesn’t change the contingencies of a really overbearing PI…

  26. eeke Says:

    Alex, I think you are talking about honest mistakes made by students, which is a lot different from deliberate fraud. By the time someone reaches the grad student stage, they ought to know better and deserve to be kicked out. There is already a filtering process in which students are selected from a pool of applicants to join a program. Whatever the selection process, it’s not perfect: there are probably excellent candidates who are being missed and, likewise, some who make it through who end up being duds. The student fraudsters will hopefully learn their lesson and still have some sort of future ahead of them. They could, for example, rehabilitate outside of their former program and come back later into another one.

    As for the older fraudsters, I think prison time should be considered. Seriously. Using government funds to fake data, which could ultimately cause harm to patients (depending on the level of fraud), or waste funds as others chase after a faked result, is criminal. Suspension from funding eligibility is no more than a slap on the wrist and isn’t enough of a punishment to fit the crime. I have no sympathy for these people.

  27. Alex Says:

    Maybe it isn’t the student who needs any training. Maybe I do. Maybe the best thing is regular, safe, but candid discussions among faculty, where people are encouraged to open up about their mistakes and near-mistakes. Maybe that would be better than watching a video: regularly hearing those humbling, fear-inducing stories from people in similar circumstances.

    FWIW, the hard question here is not “should a student ask questions in group meeting?” The hard question is “how do I know if I am doing enough to get students to communicate?” I spent a lot of time going over the data and methods with the students and asked them a lot of hard questions. But somehow he still kept that reservation to himself. He only came out of his shell when he got validation of his concern from a third party. I don’t know an easy recipe for fixing that and building confidence, but it is an important conversation for faculty to have.

  28. Alex Says:

    And I know that we are talking about honest mistakes rather than fraud. That is the point: I consider the prevention of honest mistakes a hard problem (indeed, a timeless one) and hence worth discussing. I consider most (all?) questions around fraud fairly easy.

  29. Virgil Says:

    In terms of examples of people who’ve done naughty things, then gone on to have normal careers, two names spring to mind – Lou Ignarro and David Baltimore. Their Wikipedia entries detail their infractions.

    There are, of course, several other examples where people get fired from a place for undisclosed reasons, and surface somewhere else, but it all gets hushed up and no-one can write about it through fear of being sued. I know of at least 3 such cases which can’t be talked about because the people/institutions involved have gag orders in place.

  30. Emily Says:

    I just went through responsible conduct of research training, and I thought it was moderately useful and could help trainees understand some of the less obvious danger zones. For example, everyone knows faking data is bad, but data get processed in ways that can cross from legit to sketchy. The student’s mentor knows where the line is, but the student has to learn it somehow, which can be difficult with an absentee, what-have-you-done-for-me-lately PI. I’m not sure you can expect a student to know that automatically, especially if they haven’t had formal research experience prior to grad school.

    The section on a mentor’s responsibility to trainees was pretty interesting, too.

  31. toto@club-med.so Says:

    Lou Ignarro and David Baltimore? The Wiki says Ignarro failed to disclose a conflict of interest. Baltimore was embroiled in a suspected case of fraud (not of his own doing) which was eventually dropped.

    That’s not quite the same as being convicted of actually fabricating stuff.

    It will be interesting to see what happens to Marc Hauser’s career.

  32. MudraFinger Says:

    “It made me wonder if anyone with an ORI finding against them has ever returned in a meaningful way?”

    I’m aware of a single article, published in Science in 2008, that attempted to address this specific question.

    https://www.sciencemag.org/content/321/5890/775.summary?sid=3ec917ac-5714-46ea-a51c-1a9c9bd60443

    FTA:
    “We found that 43% of academic scientists whom we could trace remained employed in academia after being found guilty of misconduct, and overall 19 of 37 scientists (51%) found to have committed misconduct continued to publish at least an average of one paper per year after their cases were decided.”

  33. DrugMonkey Says:

    COI cases don’t go through ORI, do they? Nemeroff springs to mind…

  34. becca Says:

    “I just don’t see how a training class on speaking up in lab meeting is anything but obvious”
    HAHAHAHA. Yeah, because this is totally not culturally or context specific, and questioning people up the totem pole is obviously always the wise career move that all trainees will instinctively know how to (diplomatically) make. HAHAHAHAHAHAHHAHAHAHAHAHAHAHHA

    Anyway, that ridiculousness aside: Virgil, David Baltimore at least already had a Nobel before the fraud case (which wasn’t exactly in his lab). Though he did resign as president of Rockefeller. If there is a case for training about fraud, perhaps it is how to set up structural incentives whereby Bigwigs pay attention to the complaints of postdocs in their collaborators’ labs (and university presidents pay attention to the complaints of grad students working for the very lucrative football coach) – *THIS* is apparently the hardest thing to get right (or at least, the thing that bites people in the rear most spectacularly when they get it wrong).

  35. MudraFinger Says:

    “COI cases don’t go through ORI do they? Nemeroff springs to mind…”

    That’s correct. Moreover, ORI has of late adopted a stance of NOT investigating MOST plagiarism accusations, limiting itself primarily to questions of fabrication and falsification of data. Apparently just too many accusations of plagiarism turn out to be what it considers “authorship disputes.”

  36. Alex Says:

    Becca, given your comment, I would be genuinely appreciative of your feedback on the situation that I described.

  37. Monisha Says:

    I can’t help but think about some of the iatrogenic effects of group therapy for adolescents with various behavioral problems (work by the Oregon group that includes Dishion). Basically, if you get juvenile delinquents together to talk, sometimes the effects are opposite to those you hoped for. There are all kinds of reasons why this isn’t quite the same as the program described above, but being in a GROUP of people who have done the same thing may inadvertently serve to normalize the very behaviors the program is supposed to prevent.

    All that is of course a separate issue from the one of whether it makes sense to spend additional resources from a limited pool on people who already abused their privileges.

  38. becca Says:

    Alex – one of the phenomena I was getting at is encapsulated by Malcolm Gladwell’s description of a spectacular Korean airplane crash as attributable directly to a co-pilot who was clearly uncomfortable directly alerting the pilot to an error. Whether this attitude is truly enriched in Korean culture compared to, e.g., US culture is open for debate (there are also transcripts from US flights that went down where crew members noticed a lack of fuel but did not make sure the pilot noticed), but there likely are statistical associations between (sub)culture and habits surrounding questioning authority. And of course there are all the myriad differences in power and privilege that may influence who is an “authority” – there are a lot of things to consider about WHO is in your lab that may give you some hints about how to encourage them to raise concerns.

    I would note that either the plane crash thing or your own history with the erratum would be a good anecdote to open a dialog about people’s different expectations and the tone you hope to set for the lab. I suppose you could flat out ask your lab members what norms they grew up with and do some scenario discussions, like in ethics training, though some people find those things annoying. And it’s also important to respond well to people questioning things.

    All that said, it’s one thing to have that little “maybe I should say something” inkling and quite another to actually be assertive in correcting the work of others. My understanding of human psychology suggests that it’s surprisingly hard for most of us to go against a group. Perhaps especially a group of smart people, and especially a group we identify with. Nurturing healthy conflict is one of the trickiest aspects of group dynamics, and it’s not an aspect of the job of “PI” that I necessarily envy (to my dismay, it doesn’t even appear to be on every PI’s radar, perhaps because conflict has such a bad rap).

  39. […] spurred by the uptick in reported cases of fraud in science over the past few years, and in part by DrugMonkey's post on the RePAIR program for rehabilitating fraudsters. I have no idea whether this apparent uptick in fraud is due to more fraud occurring (probably) or […]

  40. University Staff Employee Says:

    Unfortunately, some ethics problems are related to the atmosphere created by those who think they are above the rules. This is perpetuated from PI to trainee and advanced trainee to new trainee.

    Despite the fact that our university has experienced some public airing of dirty laundry, the things that are actually done do not change. We have more paperwork and “assurances” but they are not worth the paper they are written on without the will of the researchers.

    I will be leaving the field I have been in for two decades, because I am burnt out. Part of the problem is dealing with those above the rules. I look forward to not hearing that a rule, regulation or decree from an assurance committee “doesn’t matter” or “doesn’t mean anything”.

  41. Julie Williamson Says:

    What about faking sources (a certain New York Times reporter)? Faking strength and endurance (certain athletes)? Maybe the issue is more about why people feel they aren’t good enough, and why they feel they need to cross the line to get ahead. Is it really the system (the university, the newspaper, the institution, the coaching) or really just the person?
