The only way to survive is to fake data

October 8, 2013

I hope this commenter was being facetious.

With paylines around the 5th percentile, the only way to have a shot at having a proposal approved is, quite simply, to fake data.

And I hope this other commenter was just wising off in frustration.

Certainly in my field the proportion of cheaters at the top venues seems to have increased as it has become harder to get in. In fact, in one specific venue that shall remain nameless, over half of the papers in my estimation contain some fake data.

Don’t get me wrong. I am concerned about cheating in science. I am convinced that the contingencies that affect the careers of individual scientists are a significant motivating factor in data fraud. I am not naive.

But for today, I wish to object to this normalization. It is not normal to cheat in science. Data faking is NOT standard old stuff that everybody is doing.

“Everybody does it.”

This is one of the standard defenses of the cheater pants. It is the easy justification we have seen time and time again in the revelations of performance-enhancing drug use in professional sports. It is the excuse of the data faker as well.

Consequently it is imperative that we do not leave the impression of normalcy unchallenged.

It is not the norm. Faking is not endemic to science. It may be more common than we would like. It may be more common than we estimate. But it is not normal.

Despite claims, it is not necessary. I have more than one grant score that was better than the 5th percentile and I didn’t have to fake any data to get those. So that first claim is wrong for sure. It is not required to fake data.

84 Responses to “The only way to survive is to fake data”

  1. Dave Says:

    The more I become “one of the lads”, the more people are letting their guard down around me and making some shocking admissions/statements regarding…..well…..cheating. Someone even said to me recently that a p-value of 0.08 is basically significant, so it’s OK in a grant app to change it to 0.05 (I’m very serious). We can argue about the somewhat arbitrary nature of a 0.05 cutoff in some experiments, but this is not kosher in my opinion, and it creates an uneven playing field for those of us who are painfully honest.

    I don’t doubt that cheating or heavy manipulation is rampant. No doubt whatsoever.

  2. drugmonkey Says:

    Maybe I’m just not “one of the lads” then.

  3. mikka Says:

    The NIH gravy train is like one of those Indian trains packed to capacity, plus people on the roof and precariously hanging on to the sides. Moreover, it turns out that some people didn’t buy a ticket. I feel like it doesn’t matter to me anyway because I was late to the station and missed it, and besides, it’s heading towards a sure derailment because sections of the track are gone due to budget cuts.

    Sorry, I got carried away with the metaphor.

  4. Dave Says:

    ^and you have to take a shit through a hole in the train floor as it is moving…….?

  5. kevin. Says:

    But just think how that second grant could have fared if you’d just dropped an outlier or two. Huh? Yeah??? Think about it. It’s just so…. easy. You didn’t need to do those last two animals really, the data were just fine without them…
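
    (kevin. is joking, but the move he describes is easy to quantify. A minimal simulation sketch, assuming a two-sample t-test; the group sizes, the trimming rule, and the choice of Python with numpy/scipy are all invented for illustration, not anyone’s actual analysis:)

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        trials, hits = 10_000, 0
        for _ in range(trials):
            a = rng.normal(size=10)  # both groups drawn from the same
            b = rng.normal(size=10)  # distribution, so every "hit" is spurious
            # drop the one "outlier" per group that most opposes the observed trend
            if a.mean() > b.mean():
                a, b = np.delete(a, a.argmin()), np.delete(b, b.argmax())
            else:
                a, b = np.delete(a, a.argmax()), np.delete(b, b.argmin())
            if stats.ttest_ind(a, b).pvalue < 0.05:
                hits += 1
        print(f"false-positive rate after trimming: {hits / trials:.2f}")
        # prints a rate well above the nominal 0.05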

  6. EcoRV Says:

    http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0005738

    According to this study, faking data in science is quite common these days. Also, just as DM claims to be SO pure, there is the belief among scientists that “others fake” but “I don’t fake.” I guess what constitutes faking is in the eye of the beholder.

    Bottom line, DATA FAKING IS WIDESPREAD IN SCIENCE, but scientists refuse to admit it.

  7. Ola Says:

    Cheating is fucking rampant. Don’t let anyone convince you otherwise. As a reviewer for 30+ journals and a member of 6 editorial boards, I would put the number at somewhere around 40% of all papers containing at least one ethical transgression. At a Department/career level, one of the key reasons I have tended to adopt an “in your face” attitude about image manipulation/creative data management (whatever you want to call it) is that I simply refuse to play on an uneven field. It doesn’t make any difference if you faked data in a lowly IF 2 journal or in C/N/S; it gives you an advantage at funding time, and nobody likes a greedy fuckface, so I will take you down at all costs so I can get funding. In the past this has involved legal threats against me, being thrown off grants (and having to explain to my chair why he needs to pick up more of my salary now), resigning from editorial boards at journals, resigning from thesis committees, and other things which are probably not good for my career. But hey, I sleep great at night!

  8. Physician Scientist Says:

    I am an associate editor at a society-level journal. At one point last spring, I had 5, yes 5, manuscripts in a row that showed image manipulation, repeated figures between papers, or stats that couldn’t possibly be correct. This was reported, and none of those papers was accepted.

    With this being said, I don’t think faking data helps at grant time. I’ve been at study section where questionable results have been called out. I’ve been in study section meetings where it was stated that somebody was “publishing too fast” for the size of their lab. I’ve been in study sections where the mere possibility of impropriety has disqualified a candidate (much like the reason Jeff Bagwell will never be in the hall of fame). Good, well-controlled science always wins out in the end.

  9. runforsushi Says:

    Thanks for this. I’m a PhD candidate and my roommate said to me yesterday “the sooner you start fudging your data, the sooner you graduate.” -___-. And she has (what I believed to be) a really cool 1st-author paper in PLOS Pathogens! So. I’ll hold on to my integrity, no matter what lowly journal my thesis ends up in.

  10. IluvScience Says:

    And there is also this story:

    http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328

    I don’t know where DM lives, but it seems that in the US data faking in academia is fairly widespread.

  11. Mum Says:

    Right next to outright fraud is knowingly doing poor work. But what about doing poor work unknowingly? Where does this fit? It is clearly poor science, and a waste of money, of course.

    I worked at a lab that was, and is, regarded as excellent. Their grant reviews come back glowing with comments on their achievements. When I joined I was ecstatic because I would learn from the best. Then I saw how the sausage is made… There was some open cheating, and a lot of dishonest stuff. But what surprised me most was the number of things that were just bad science, and they didn’t know. The PI’s scientific skills were poor. He had moved up through the system by being “street smart” (his words). When someone pushed to do things better, it got nasty: “we don’t have time for running that control… after the renewal…”

    I wonder what is more common, fake data or junk data.

  12. Joe Says:

    Putting fake data in a grant is just stupid. It’s like stealing petty cash. You’re risking your reputation, your career, everything you have worked for up to that point, for a small gain.

    Putting fake data in a paper is anathema to what it means to be a scientist. You have to think that the work you do in your career will have some important discoveries, but really it is the aggregate of your work and that of others that comes together to lead to advances in knowledge and understanding of the world. If you put out fake data you do harm to the aggregate data, and you may make others spend years going down the wrong paths.

  13. 380347 Says:

    @Joe

    “Men do not despise a thief, if he steal to satisfy his soul when he is hungry.”

    When one’s neck is on the guillotine, I think few people would even consider your idealistic views. Nowadays, in my opinion, the funding situation is so bad that faking data has become one more tool in the toolkit.

  14. rs Says:

    DM, I don’t think people outright fake data, but they definitely ignore data which do not match their hypothesis, adding up to more noise. As someone suggested, ignoring outliers, ignoring statistics, ignoring proper controls (so the data are actually not reliable) and finally ignoring published studies which might undermine your proposal/paper are all too rampant.

  15. Busy Says:

    I’m actually dead serious about 50% of the papers at the top venue in my field containing some form of fake data. I’m not talking about making up an entire experiment that didn’t happen (even though we have some of those too), I mean “minor” transgressions such as those pointed out here, like removing an outlier, never talking about a data series that not only didn’t match the hypothesis but actually raised questions about the conclusions altogether, or some crossing of the Ts and dotting of the Is that should have been done but there wasn’t time, so they just fake some of the additional controls. There is also synthetic data hidden in there (we couldn’t measure one of the dimensions so we just assumed it grows linearly and silently used this in the charts without ever acknowledging that there is no experiment behind these numbers).

    Again, I think the ratio of cheating is higher at the ultra-competitive top venues than at the middle-of-the-road ones.

  16. DrugMonkey Says:

    Bad science because you don’t know is not quite the same as intentional fraud. If you don’t add the right control or use the wrong kind of Bunny or something…it isn’t fraud just so long as you report what you did accurately. Ditto the wrong statistical analysis. The reader is free to evaluate the quality of a paper that is honestly reported.

    It is also best to realize that subfield practices that differ from our own are not automatically data faking either. There is no single standard for many of the things that an uncharitable eye might view as either cherry-picking or data massaging. These kinds of minor issues (field practices) distract from the real issue of intentional deception. IMO.

  17. DrugMonkey Says:

    EcoRV-

    I’m not getting “widespread” out of that paper at all. Less than 2% admit it. 14% claim direct knowledge of others’ fakery. Since a whole lab might know about a fake… this doesn’t mean 14% have faked at all. Plus you have the rumour-mill effect. Then there is the fact that knowledge of one fraud doesn’t mean that everything produced by that particular scientist has been fraudulent.

    Fakery is by no means vanishingly insignificant, but it sure isn’t “WIDESPREAD IN SCIENCE” either. At least not by the evidence of that meta-review.

  18. DrugMonkey Says:

    Busy- I think you are conflating things that are not fraud with things that are.

    Badly conducted science is not fraud. In most cases, failing to state something is not fraud. Fraud is saying “we did X and observed Y” when this never happened.

  19. Cynric Says:

    I just had a student come and talk to me about their summer placement in a highly prestigious institution. It was an ultracompetitive lab, with a PI who demands a certain result and keeps sending the trainees back to the HPLC until they get it.

    The PI gets the result they want, oftentimes with real data (if you run the experiment a hundred times, the outlier will eventually arrive), but otherwise with plausible deniability (and a convenient scapegoat trainee).

    Outright fraud? Bad science?

    Either way, the lab is very well funded.

  20. eeke Says:

    “I’ve been in study sections where the mere possibility of impropriety has disqualified a candidate”

    This does not seem fair to me. I think faked data is something that is extremely difficult to detect, unless, for some reason, it seems too good to be true, or there is a similar grant proposal/manuscript being considered that shows contradictory results. Still, this does not prove fraud. As a reviewer, I take things at face value (is there an appropriate control, etc.?), and give the author the benefit of the doubt. Innocent until proven guilty…

    How infuriating would it be to be accused of fakery or dismissed because you’ve been “too productive”? wtf?

  21. dsks Says:

    Just for balance (because I think there’s a tendency for only those who have experienced controversy to comment), I’ve no firsthand anecdotes of outright fraud myself. About the height of questionable ethics in re massaging stats has been the usual, “Damn, that’s almost down to 0.05… maybe if we do another 3-4 experiments and drop this outlier…”. It’s wrong, and I’m prepared to believe that this much is widespread… but outright fiddling of figures and making up data is (I hope) still a minority infraction.
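
    (The “another 3-4 experiments” routine described above has a measurable cost on its own. A minimal sketch of that optional-stopping effect, again assuming a two-sample t-test with invented sample sizes and a true null, so every “significant” result is false:)

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def peek_until_significant(n_start=6, n_max=12):
            # both groups sampled from the same distribution: no real effect
            a = list(rng.normal(size=n_start))
            b = list(rng.normal(size=n_start))
            while True:
                if stats.ttest_ind(a, b).pvalue < 0.05:
                    return True  # a false positive
                if len(a) >= n_max:
                    return False
                a.append(rng.normal())  # "do another experiment" and re-test
                b.append(rng.normal())

        trials = 10_000
        rate = sum(peek_until_significant() for _ in range(trials)) / trials
        print(f"false-positive rate with peeking: {rate:.2f} (nominal 0.05)")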

    It’s surely going on out there though, I don’t deny that. There’s a sentiment that competition is encouraging this, and there’s definitely something to that argument, but competition can also be a good force pushing the other way, imho. I’m writing a review right now, and I’ve been pondering how the rival factions in our field have commonly duplicated each other’s experiments. In some small part because it’s, y’know, good science and all… but probably in larger part there’s the hope of sticking the boot into another group by suggesting that they’re incompetent/wrong/full-of-shit. Either way, I know that has always been a pressure on my work in terms of trying to be sure that I’m seeing what I think I’m seeing.

  22. Ola Says:

    I want to add to the noise here by stating that a lot of fraud does get detected at the paper review and grant review stages, and never makes it into the public domain of the literature. If you’re going to come up with numbers on the occurrence of fraud, you have to use a measure based on something other than just the material that makes it through peer review. Unfortunately, because of all the confidentiality problems, what goes on behind closed doors at journals and grant-awarding bodies is often overlooked when quantifying misconduct.

    Perhaps an example/anecdote, to illustrate the kind of shit that goes on…
    I reviewed an early career grant proposal for a large charity whose name sounds like a Norwegian pop band from the ’80s. I found problems in the images, so I let the study section chair know before the meeting, and they said we had to discuss it anyway, and at the end of the scientific discussion the ethical problem would be raised. At that point the whole committee would vote on whether the problem reached the level of an ethical violation worthy of triage. A unanimous vote would be required to triage the proposal. In the end, the proposal did not get high enough scores to come up for discussion, so nobody found out about the problem. The reviews received by the individual consisted of only scientific comments with initial scores, no mention of the ethical issue.

    I then went to look at a few of this individual’s other work, and rapidly discovered almost identical problems in 3 papers. The manipulations were verified by ORI forensic droplets, and were of a type that would be impossible to occur by accident. There is absolute certainty this is manipulation with intent to deceive. The work in the papers was published from the grant applicant’s time as a post-doc’, and the work was supported by NIH funding. I wrote it all up (including the grant application example) and sent it to the US Office of Research Integrity.

    A year later I got an email from the grant applicant’s mentor, thanking me for my discoveries and informing me the individual had been fired. Turns out the ORI had simply forwarded my email, non-redacted, to the institution. This is default ORI policy; you have to specifically request anonymity if you want it.

    Almost 3 years have now passed. There has been no official ORI report. The 3 papers are still in the literature, despite more letters to the editors of the journals. The institutions have made no official reports. The charity has released no report, and has not issued any sanctions against the individual (another one of their grants came up at study section just this year). The individual concerned is still in science – they found another job at the same University and I saw them at a scientific conference earlier this year, presenting a poster which itself contained manipulated data.

    So, here we have a case in which the ORI, a major grant awarding charity, and several journals, as well as officials at an institution, are all in possession of bona-fide evidence of data manipulation with intent to deceive. Yet, the perpetrator continues to enjoy their job, and the public is none the wiser. From the outside, this person is a respected scientist with absolutely nothing whatsoever to indicate that they committed scientific fraud. Right now it’s my word against theirs, and of course if I name the individual in public then I could get sued for defamation. Are there other people out there like this? Oh fuck yes! Lots.

    TL;DR – there are plenty of individuals out there who have faked data, and you are unlikely to ever find out about it. Fraud is a lot more widespread than what’s already published and what the ORI reports on.

  23. Ola Says:

    Another point:

    If you do find data manipulation in a grant or paper, there’s something to be said for letting it fly and then taking the perp’ down later on. If you point it out at the review stage, there’s no telling how the journal or grant-awarding body may react, but oftentimes they’ll just back off for fear of raising a ruckus and engaging with expensive lawyers. This often results in the individual “getting away with it”.

    At least if you let it get published, then go after the data in the published paper, you get the public shaming associated with a retraction or correction, and in the long run this might draw more attention to the individual and help uncover more shady practices. As has emerged on PubPeer recently, there are some cases where peer reviewers have come forward and said they had problems with a paper during peer review, but somehow it got published anyways.

    Is letting known fraud through the peer review system, for the sole reason of a bigger show-down later on, a form of entrapment? Is this ethical? I’m only asking because I’ve yet to take that option, although I’ve been tempted on occasion.

  24. The Other Dave Says:

    http://www.economist.com/news/china/21586845-flawed-system-judging-research-leading-academic-fraud-looks-good-paper

    “As China tries to take its seat at the top table of global academia, the criminal underworld has seized on a feature in its research system: the fact that research grants and promotions are awarded on the basis of the number of articles published, not on the quality of the original research. This has fostered an industry of plagiarism, invented research and fake journals that Wuhan University estimated in 2009 was worth $150m, a fivefold increase on just two years earlier.”

    The same perverse incentives are increasingly common in the U.S. And I think they will have / are having the same result.

    Anyone wanna buy a quickie review paper? How about a nice study for Neuron? For a larger fee, I can produce a Cell paper for you introducing some immense new database of high-throughput brain connectome interactions. Don’t worry, no one will criticize it because, like all the other giant high-throughput databases published in Cell, it’ll be incredibly impressive-sounding but impossible to figure out how to get any useful specific information out of it.

  25. The Other Dave Says:

    I’ve reviewed two papers in the last few years that contained suspicious ‘things’, which I pointed out politely in my reviews.

    At J. Neuroscience, the editor rejected the paper immediately with a stern letter. I have never seen another paper from that group in J. Neuroscience.

    At PNAS, my entire review was ignored and the paper was published relatively quickly. That PI is doing very well. Just saw that he is on some fancy new advisory board.

  26. Evelyn Says:

    During graduate school, one of my lab mates had what our PI called “golden hands.” I could never figure out how she was able to get every protocol to work the first time. She had a couple of impressive high-profile publications, and a good portion of her work was used in a grant application. It was only in my last two years there (she had stayed on as an incredibly well-paid post-doc), when I had some confidence and was doing some parallel work, that I figured out she was faking/fudging/ignoring data. At the same time, a younger student was unable to repeat some of her experiments. We brought this to the PI’s attention, who then promptly did nothing. I refused to share any authorship with her, and the younger student was asked to leave the lab a year after I had moved on.

    I tell all my PIs that the only thing you have as a scientist is your integrity. Once you jeopardize that, you may as well be looking for a new career.

  27. pablito Says:

    “Good, well-controlled science always wins out in the end.”

    Totally agree, but I am mindful that the “end” may occur after your career is over.

    As John Maynard Keynes said, “In the long run we are all dead.”

  28. 291643 Says:

    @Physician Scientist

    “I’ve been in study section meetings where it was stated that somebody was “publishing too fast” for the size of their lab.”

    WTF!!! That sounds like a cheap excuse to shift funds away from young, newer PIs to old, established ones. The selfishness of the Boomers never ceases to amaze me. It’s limitless.

    —-

    And about data faking, it is out of control and widespread. DM must be high or in denial to believe otherwise.

    Actually, DM, can you create a poll? If so, ask your readers to vote on whether scientific faking/fudging/pimping/etc is rampant/widespread/endemic/etc.

  29. Evelyn Says:

    I wouldn’t say it is out of control: I worked in three labs (one rotation in grad school, my PhD lab, my post-doc lab) and the only person I knew whom I would say was cheating was the one I described earlier. Everyone else I encountered worked hard and was honest.
    I see all of the raw data in my department. They bring me actual gel films, instrument read-outs, send me raw files. I have yet to catch anyone doing anything even resembling cheating. I see more problems from lack of proper planning (missing controls are common) or execution (used the wrong reagent) than anything malicious. Let’s not use this as an excuse for why our careers/papers/grants are faltering.

  30. dsks Says:

    “I’m not getting “widespread” out of that paper at all. Less than 2% admit. 14% claim direct knowledge of others’ fakery. “

    14% isn’t the end of the world. Hell, it would always be nice to get that number lower but there’s no way in hell the entire endeavor is going to go to shit on the basis of this level of chicanery. The genuine concern is that any sort of fakery could potentially lead to a good, honest scientist getting nudged out by a charlatan, and I think that’s what drives the paranoia in these cash-strapped times. It’s not a healthy state of mind to be in, though.

    Arguably, the pervasive belief that fakery is endemic and widespread creates an environment of suspicion and cynicism that has the potential to be even more damaging than the actual level of fakery itself.

  31. Optimus Prime Says:

    “There are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don’t know. But there are also unknown unknowns – there are things we do not know we don’t know.”

    —United States Secretary of Defense, Donald Rumsfeld

    ——-

    I bet that what we suspect or know to be fake data is only the tip of the iceberg, which is already fairly prominent (14%). There could be lots of data faking that only the faker knows about, and thus it is unknown.

  32. DrugMonkey Says:

    Exactly, dsks.

    Evelyn- PIs who are seduced by the “golden” trainee are seriously fucking up. Seriously.

    Ola- that’s stupid. You make your comments during review. Then if the Editor doesn’t take your side, you can move on to the takedown. Post-pub review IS peer review.

    Ola- on the ORI case, how many individuals have you reviewed work for and *not* seen any evidence of a problem? Several hundred?

  33. Miguel Says:

    If others are doing it and you are not, you are putting yourself at a competitive disadvantage.

  34. The Other Dave Says:

    dsks: “14% isn’t the end of the world. Hell, it would always be nice to get that number lower but there’s no way in hell the entire endeavor is going to go to shit on the basis of this level of chicanery. “

    DM: “Exactly, dsks.”

    This little exchange is frightening. 1 in 7 scientists admits direct knowledge of scientific fraud, dsks says that’s OK, and DM agrees. The complacency is more shocking than the number itself.

  35. DrugMonkey Says:

    Claims knowledge. And you are missing the point about multiple people knowing about a single fraud.

    Nobody is saying *any* fraud is good. But we are saying that the overwrought and unsupported memes that “everyone” is cheating and that it is required to succeed are, in the long term, a worse problem than 2% admitting to ever faking.

  36. dsks Says:

    “1 in 7 scientists admits direct knowledge of scientific fraud”

    Which means the actual fraud fraction will be substantially lower (unless, by sheer chance, they’re all talking about a different fraudster).
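
    (A back-of-envelope sketch of that arithmetic; the witness counts here are invented purely for illustration:)

        # If each faker's misconduct is directly known to k colleagues, a 14%
        # "I know of a faker" survey rate implies a much smaller fraction of
        # actual fakers (ignoring overlap and the rumor-mill effect).
        survey_rate = 0.14
        for k in (1, 2, 5, 10):
            print(f"witnesses per faker k={k:>2}: implied faker fraction ~ {survey_rate / k:.1%}")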

    And who said anything about complacency? I’m all for a response to the problem Ola outlined above in re rigorous and timely execution of justice for those who are guilty of fraud. But it’s a far cry from being proactive about stopping genuine fraud to believing that 10% of the people we know in our department, 10% of the people we hang with at meetings, 10% of grant applicants, study section personnel, article co-authors &c are scheming shysters.

    I just don’t buy that it’s quite time to nominate a Witchfinder General at the NIH. We’ve got bigger and more immediate problems in re funding science period, let alone funding a small fraction of cheaters.

  37. DrugMonkey Says:

    Miguel- honesty and ethical behavior almost always place one at a competitive disadvantage. In many walks of life. It is never an excuse. Ever.

  38. odyssey Says:

    How many of you who have seen all this data faking have done anything about it?

  39. Czar-Ivan Says:

    I think DM and dsks are doing what PIs normally do: when there is a problem that would jeopardize or taint academia, instead of dealing with it, they simply sweep the issue under the rug and say, “Please move on, there is nothing to see.”

    And to all the other PIs here I say: just because you don’t know about it or suspect it does not mean it did not happen. PIs are so removed from the raw, primary data these days that they are blissfully unaware of what happens in dark rooms, luciferase machines, specs, gels, etc… 😉

  40. Busy Says:

    Sorry DM, but I don’t see any of the cases I listed as bad science. They all involve fudging data to achieve a desired result at a level of confidence that is presently not there. In my book that qualifies as fraud of the minor variety.

    Major fraud is outright faking of the whole thing. We also have a few of those papers, but that is a much smaller set; around 5% in my estimation at that highly competitive venue. In the venues just below, the percentage drops rapidly, because you do not need picture-perfect experimental proof of your hypothesis to get the paper in. Those venues tolerate the odd outlier in the data, publish the paper, and let the subsequent literature resolve it one way or another.

  41. Jim Woodgett Says:

    There is enormous pressure to publish and significant inertia in reporting or acting on fraud. Universities often treat whistleblowers (especially websites reporting dodgy data/images) with disdain. There is a strong benefit of the doubt – especially afforded to well-known scientists – a reluctance to offend, and suspicion of malicious intent on the part of the complainer.

    Evidence of fraud in grant applications (where rigour is usually lower, access to figures compromised, etc.) is often either ignored or overlooked and, at worst, merely reduces the score. Yet if someone is willing to fudge data there, it’s a small step to fudging data elsewhere.

    Call it out when you see it.

  42. Anonydoc Says:

    One problem is that the people in the best position to recognize the fraud (i.e., grad student and postdoc colleagues of the faker, whether the faker is a PI or not) are in the worst position to report it, in terms of being low in the hierarchy as well as less confident about what constitutes fraud. We had a major fraudster kicked out, and later questioning revealed that many people from his prior walks of life had found him suspicious, but hadn’t had the confidence or the standing to do anything about it.

  43. The Other Dave Says:

    DM, dsks: You guys are definitely delusional ‘glass is half full’ people.

    Let’s take an example…

    The U.S. Commerce department estimates that 30-40% of people cheat on their tax returns. 7% of people admit to doing so.

    There’s a big discrepancy there. What makes you think the discrepancy would be different in science?

    Besides that, there are all the people who do bad/dishonest science and don’t even know it. Is it OK to ignore ‘outliers’? Is it OK to keep gathering replicates until your P value drops below 0.05? Is it OK to not report experiments that yielded ‘negative’ results?

    Scroll back up and read Ola’s comments. Is she wrong?

  44. Mikka Says:

    I don’t think that piling up the replicates is bad in itself. If you need a lot of replicates for a p below whatever cutoff you name, that usually means that the effect you are measuring is small compared to the variability due to all other factors combined. So all you are saying is that your factor affects the variable in some way, which isn’t too impressive by itself unless you quantify the effect and show it to be sizable. This is one of those cases where the reader should be able to spot if this is missing and arch a skeptical eyebrow.

    But sometimes the mere presence of an effect is interesting enough. Just look at how many replicates the CERN people had to run to be confident about the presence of the Higgs boson. That meant the signal was small with respect to noise, but that’s the magic of the central limit theorem: the more data, the less noise matters.
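
    (A minimal sketch of that trade-off, with an invented effect size and a two-sample t-test assumed: as n grows, the p-value collapses below any cutoff you name, while the measured effect stays just as small:)

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        true_shift = 0.05  # tiny effect relative to the noise (sd = 1)
        for n in (20, 200, 20_000):
            control = rng.normal(0.0, 1.0, size=n)
            treated = rng.normal(true_shift, 1.0, size=n)
            p = stats.ttest_ind(control, treated).pvalue
            diff = treated.mean() - control.mean()
            print(f"n={n:>6}: p={p:.4f}, observed effect={diff:+.3f}")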

  45. dsks Says:

    TOD
    “The U.S. Commerce department estimates that 30-40% of people cheat on their tax returns. 7% of people admit to doing so.

    There’s a big discrepancy there. What makes you think the discrepancy would be different in science?”

    I don’t think there necessarily is a difference, but I think you’re making a fruit salad with the number comparisons here.

    7% admit to tax evasion, which is different from the 14% of scientists who have heard about someone else cooking their lab books. As it is, the fraction of people who know of or have heard about someone evading taxes is probably something nearer 100%.

    But, taking your tax evasion theme and using it as an analogy, let’s take a step back and look at the big picture. Tax evasion has existed in the US ever since the IRS started collecting taxes. The same can be said for Greece and its revenue service. Greece is fucked; the US continues to dominate by every meaningful economic metric.

    What’s the difference? Basically, tax evasion is policed in the US to a sufficient degree to prevent it from overly harming the spending power of the government. That’s more or less it (differences in credit rating are no major factor here, as Greece was allowed to borrow as much as she wanted in the period leading up to the collapse, as was the US). In Greece, virtually unrestricted tax evasion paired with excessive government borrowing and spending essentially collapsed the economy. (I think tax evasion over there was something thoroughly insane, like 80%, but I’d have to check.)

    Tax evasion, as you state, has not even come close to being eradicated in the US (although some of that 40% is going to be due to error rather than fraud). But the point is that it doesn’t need to be. It just has to be acknowledged to exist and restricted to a sufficient degree that the efforts to police it don’t cause more harm than good, e.g. the paranoia and extra bureaucracy and oversight that would likely cost as much in time and money as the fraud does anyway.

    (BTW, overzealous attempts to eradicate tax evasion certainly can end up causing more harm than good economically, which is what’s happening in Portugal right now, where a crackdown has had the effect of shutting down a substantial number of small businesses that can’t afford to pay for the tax-evasion-proof tills and similar government-mandated mechanisms. But this is perhaps letting the metaphor stray a bit…)

    There is cheating in science. The nature of modern academic career incentives certainly provides a driving force that likely means cheating is worse now than it used to be. I’m just not yet convinced that it is so thoroughly pervasive that it’s substantially harming the overall enterprise, which I still think is largely self-correcting anyway. The evidence that we’re moving forward seems pretty compelling for most fields.

    The two things we can do are: 1) call it out when we see it; 2) lobby for a more rapid turnaround on investigations into ethical violations and, more importantly, make sure that fraud is punished sufficiently to provide a genuine disincentive to perpetrating it.

  46. The Other Dave Says:

    DM, I think you missed my point: whatever the percentage of fraud we think there might be, the true percentage is likely to be much higher. I think Optimus Prime above was trying to say the same thing with their Donald Rumsfeld quote.

    This is consistent with social psychology studies. Dan Ariely’s book ‘The Honest Truth about Dishonesty’ is a reasonable (albeit somewhat annoying) introduction to the topic. According to him, the number of actual cheaters is way above the number who admit it, and usually closer to the number that people estimate.

    Now read the first paragraph of the discussion of the PLoS ONE paper:

    “It found that, on average, about 2% of scientists admitted to have fabricated, falsified or modified data or results at least once –a serious form of misconduct by any standard [10], [36], [37]– and up to one third admitted a variety of other questionable research practices including “dropping data points based on a gut feeling”, and “changing the design, methodology or results of a study in response to pressures from a funding source”. In surveys asking about the behaviour of colleagues, fabrication, falsification and modification had been observed, on average, by over 14% of respondents, and other questionable practices by up to 72%.”

    No matter how you slice it, it’s bad. Really bad.

    And that doesn’t even take into account the fact that increased competition for biomedical science funding is likely to make it worse.

    Nor does it take into account that, even excluding fraud, most reported results are likely to be wrong just because of the way science is fundamentally practiced and results are reported.
    http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124

    This is all pretty serious business. How do we justify to the American taxpayer that investment in science is a good thing? We might be no better than the fuckers at Goldman Sachs!

  47. DrugMonkey Says:

    “What makes you think the discrepancy would be different in science?”

    Maybe because you don’t have about 45% of the population (maybe more) identifying with, and a huge media sound machine constantly supporting, a line of thought that says paying taxes is a horrible and terrible violation of basic rights?

    Also, very few people overtly and laboriously choose to become taxpayers in the face of an array of alternative options. It is forced upon them.

    Finally, failing to pay taxes only passively undermines the enterprise of government and shifts the burden onto others. Scientific fraud does more active harm to the enterprise. It doesn’t only require someone else to do the real work; it may cause many people to go down blind alleys on a false assumption.

    Now, if we’re “delusional”, how can you support that with evidence? Against your anecdotal estimates (and have you provided the data on how many scientists’ work you know or have reviewed in which you have *not* detected fraud?) we have mine and those of dsks. The meta-review paper gives a range from 2% admitted to 14% claimed knowledge. The glass is not even half empty from this. The people who are claiming “WIDESPREAD” are the ones who are delusional here.

  48. drugmonkey Says:

    “Whatever the percentage of fraud we think there might be, the true percentage is likely to be much higher.”

    Since we have paranoid shouters talking about “WIDESPREAD” in this thread and your fevered analogies and descriptions, I disagree. I would agree that the percentage is higher than the ORI conviction rate and the paper retraction rate. It *may be* higher than the 2% admission rate in the PLoS paper.

    I observe that there is a tendency to glide over the distinction between a scientist who admits *ever* committing fraud and the idea that this represents the fraud rate “in science”. Similar problem for the estimates of first-person knowledge of other scientists’ fraudulent behavior.

    I am familiar with subfields* in which there are a large number of people arguing the “everybody does it” line, people who have first-person knowledge of fakers and, coincidentally or not, a substantial number of paper retractions and suspicious corrections of “placeholder” figures. I am likewise aware of other subfields in which retractions seem very rare and accusations of first-person knowledge of fraud are infrequent.

    This makes it hard to assess anecdata derived from single individuals and promotes the use of broader-scope data sets (like retractions, meta reviews, ORI findings, etc) to anchor our estimates.

    *I find it telling that the most specific accusations in this thread seem to involve certain types of data.

  49. Unable To Reproduce Says:

    Doesn’t the Reuters/Nature article say that Amgen could NOT reproduce results from 47/53 landmark publications?

    And doesn’t it also say that Bayer could only reproduce “less than one-quarter” of published findings?

    Faked or not, those figures are pathetic. Could someone please explain to me why the US taxpayer should keep funding academia when its generated and peer-reviewed data is so useless?

    If the data coming from academia is totally unreliable or even misleading, we might as well just stop funding the NIH and save ourselves $30 billion per year.

  50. The Other Dave Says:

    @DM:

    ?!? Dude, quit cherry-picking. Read the paragraph I quoted for you. A THIRD admitted to bogus practices. This is consistent with Ola’s seemingly extensive experience.

    In contrast, you have provided NOTHING in support of your contention that science fraud is not a real problem.

    As for the tax analogy… Yes, people are forced to pay taxes. No, that does not mean that it’s OK to cheat there either. But you think fewer people will cheat on taxes compared to a grant proposal? Cheating on taxes can get you massive fines and jail time. Cheating on a grant proposal… makes you ineligible to submit for 5 years, maybe. The motivation to cheat on taxes is to save a few bucks. The motivation to cheat on a grant proposal is to save your career.

    Honestly, DM, I am beginning to wonder if you are willfully ignorant on this issue.

  51. The Other Dave Says:

    If you got this far without reading Unable To Reproduce’s comment above, go back and read it. It is the pithiest, most important thing I have read on the subject in the past year.

    I am embarrassed, as a professional biomedical scientist, to not have a good answer. We all should be embarrassed.

  52. DrugMonkey Says:

    Reports from drug companies that they “can’t reproduce” studies are entirely uninformative without knowing how hard they worked at it and how good they are.

    I have numerous examples in my fields of interest in which “can’t replicate” turned out to be a version of RTFM problems.

  53. The Other Dave Says:

    “I have numerous examples in my fields of interest in which “can’t replicate” turned out to be a version of RTFM problems”

    That is also an example of wasted research money.

  54. DrugMonkey Says:

    Sure. I’d be much happier if people asked questions of the experts, and those experts responded in a timely and helpful fashion, before *any* experiments were done. But that doesn’t happen. People think they can figure it out. Or their hope springs eternal that these particular differences won’t make a qualitative difference.

    Sometimes the small stuff matters. Or different people assume different things about experimental conditions.

    The upside is that, as a whole, we get a better overall estimate of how much a phenomenon generalizes across contexts and how much it does not. That in and of itself is helpful.

    But when you are in science long enough, and do enough stuff that should be similar to what someone else has published, you find that simply saying “hey, I tried it once just like the paper said and it didn’t work, so it isn’t replicable” is bullshit.

    Not every attempt to replicate is a methodological replication. Not every failure to replicate is evidence of fraud or cherry picking or getting (un)lucky from a statistical perspective.

  55. Junk Data Everywhere Says:

    DM, What a pathetic answer!

    Reproducibility is **the** bedrock of science. And when asked why the vast majority of academic studies cannot be reproduced, your answer basically is,

    Scientists at Amgen and Bayer must be either lazy or incompetent.

    May I remind you that Amgen and Bayer are among the most successful biomedical companies in the world, and that they each have life-saving products on the market that have passed rigorous testing showing safety and efficacy, as judged by multiple regulatory bodies in multiple countries. So, one would think that they have their shit together!!!

    Do you believe that scientists at MD Anderson are also lazy and incompetent? Well, it seems they are also having trouble reproducing the junk data generated by their fellow academicians:

    http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0063221

  56. DrugMonkey Says:

    “when its generated and peer-reviewed data is so useless?”

    “If the data coming from academia is totally unreliable or even misleading,”

    There’s another thing that annoys me about such stupid statements, particularly when motivated by industry trying to use free information to advantage themselves economically.

    No single study is a done deal. Properly done, science moves in gradual steps towards a successively better approximation of the truth. A single amazing finding requires, wait for it, replication and extension before reasonable scientists conclude that it is most likely true. There *is* the chance that the very first report of a phenomenon was down to luck. A false-alarm error. There *is* the chance that a real phenomenon is dependent on highly specific experimental conditions that may render it essentially meaningless *for another person’s purpose*. There is the chance there was some sort of material screw-up and it will never be replicated, ever.

    This is the way science works.

    If a company is trying to skim off the very latest and greatest results and run with that into a new approved medication….well, they run the same risk anyone else does when assuming support that hasn’t yet been generated. They can feel free to throw a whole new expensive development program together but I’m not feeling sorry for them when it doesn’t pan out.

    So on these estimates of “not replicable” studies, I’d like to know more information. How many of these are one-offs, how many are reported only by a single lab and how many have been verified by multiple labs?

    If the BigPharma whiners are complaining that the most amazing new stuff reported in the last issue of Science or Nature doesn’t broadly generalize well….that’s their own failing to understand science.

    Not one whit a problem with the conduct of academic science under the funding of NIH or any other agency.

  57. Junk Data Everywhere Says:

    Also, you should go back and read the Reuters story. My favorite part is this one:

    . . . Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.

    “We went through the paper line by line, figure by figure,” said Begley. “I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning. . . .”
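
    (Rough arithmetic on the anecdote quoted above, assuming the conventional 5% false-positive rate per run; the numbers are illustrative, not from the study in question:)

        # Even with NO real effect, reporting the best of six runs "works" often.
        alpha, runs = 0.05, 6
        p_hit = 1 - (1 - alpha) ** runs
        print(f"P(at least one spurious hit in {runs} runs) = {p_hit:.2f}")  # ~0.26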

  58. Junk Data Everywhere Says:

    And yet another article:

    Cancer research in crisis: Are the drugs we count on based on bad science?

    http://www.salon.com/2013/09/01/is_cancer_research_facing_a_crisis/

    —-

    Bottom line, the quality of what’s published by academia is JUNK.

  59. DrugMonkey Says:

    JDE-

    Not at all. What I am saying is that industry scientists don’t have any magic carpet that the rest of us don’t have. It’s hard to replicate stuff sometimes. And the rawer and newer the result, the more likely that it won’t replicate at all, and the more likely that it depends on highly specific circumstances. This is not a failure, this is *science*.

    So without knowing just how hard someone worked at replicating something, and without knowing how much support there was for the finding in the first place, these estimates of unreplicatable studies are worthless.

    I’d also be curious to know if the definition of “replicable” requires the same effect size or does merely obtaining a significant result suffice?
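
    (One reason the definition matters is selection: results that clear p < 0.05 partly on luck overstate the effect, so even honest replications of a real effect tend to come back smaller. A minimal sketch, with the effect size and sample size invented for illustration:)

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        true_effect, n = 0.3, 20
        published, replications = [], []
        for _ in range(20_000):
            a, b = rng.normal(0, 1, n), rng.normal(true_effect, 1, n)
            if stats.ttest_ind(a, b).pvalue < 0.05:  # only "hits" get written up
                published.append(b.mean() - a.mean())
                a2, b2 = rng.normal(0, 1, n), rng.normal(true_effect, 1, n)
                replications.append(b2.mean() - a2.mean())  # honest re-run
        print(f"true effect: {true_effect}")
        print(f"mean published effect:   {np.mean(published):.2f}")  # inflated
        print(f"mean replication effect: {np.mean(replications):.2f}")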

    The MD Anderson paper you cite is interesting, but it is hardly the sweeping indictment you seem to think it is. The fact that we have a lot of incentives not to be helpful in some corners of biomedical science cannot be taken as evidence of fraud. Nor can the “unresolved” ones all be scooped up as definitive evidence of fraud either. It’s unknown.

    OTOH, the approximately half of the cases in which a response from the other lab was obtained, providing “resolution”, support the notion that *not every failure to replicate is because the original finding was incorrect*. Right? This right here supports my contention that you have to know more about how trite “failure to replicate” numbers were generated before you go brandishing them as the true estimate of error (or fraud) in the literature.

    As a sidebar, I’ve heard complaints before from industry scientist colleagues about how “Professor X won’t talk to us at all” when I personally know that Professor X behaves in a reasonably collegial manner with academic colleagues. I’m not saying it is right, but there are people who feel like they have no obligation to talk to industry yet seem to feel the usual professional obligation to talk to fellow academics. The failure-to-respond-to-queries issue raised in the MD Anderson paper may be a significant issue for industry.

  60. DrugMonkey Says:

    JDE-

    “The Amgen researchers had deliberately chosen highly innovative cancer research papers”

    duh.

    “Unfortunately, the Amgen scientists were bound by confidentiality agreements that they had entered into with the scientists whose work they attempted to replicate. They could not reveal which papers were irreproducible or specific details regarding the experiments,”

    Oh, how convenient. So they are just making shit up. Their paper itself “cannot be replicated”. C’mon now.

    The Salon article goes on to assert:

    “The list of scientific journals in which some of the irreproducible papers were published includes the “elite” of scientific publications: The prestigious Nature tops the list with ten mentions, but one can also find Cancer Research (nine mentions), Cell (six mentions), PNAS (six mentions) and Science (three mentions).”

    Yeah, you all know how I feel about GlamourMags. Keep the analysis to real science journals if you want me to believe your conclusions about the lack of replicability of real science.

    “Second, grant-funding agencies should provide adequate research funding for scientists to conduct replication studies. Currently, research grants are awarded to those who propose the most innovative experiments, but few — if any — funds are available for researchers who want to confirm or refute a published scientific paper. While innovation is obviously important, attempts to replicate published findings deserve recognition and funding because new work can only succeed if it is built on solid, reproducible scientific data.”

    No argument whatsoever from me here. It speaks to what I’m saying above about our confidence in a given thing being “true” from a single experimental study. Real scientists understand that our confidence should grade with the number of replications and extensions, not with the IF of the journal in which something is published, nor the sexiness of the result, nor with how much we want it to be true (so we can make a drug from it, for example).

    The Salon article skirts another error of assumption you are making with the survey study. The fact that someone has, in a long career, failed to replicate *one* result is all it takes to count in that bin. How many successful replications has that person conducted? Those are the sorts of numbers that are necessary for your broadly sweeping conclusions.

  61. DrugMonkey Says:

    “I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story.”

    If they did this for each result, and the 50 times included all of the most obvious tweaks to try to get it to work using input from field experts to help them…. then I am convinced.

    I would also like to see the companion confession from the labs of each result but that’s probably asking for too much.

  62. drugmonkey Says:

    “One problem is that the people in the best position to recognize the fraud (i.e., grad student and postdoc colleagues of the faker, whether the faker is a PI or not) are in the worst position to report it, in terms of being low in the hierarchy as well as less confident about what constitutes fraud. We had a major fraudster kicked out, and later questioning revealed that many people from his prior walks of life had found him suspicious, but hadn’t had the confidence or the standing to do anything about it.”

    Absolutely.

    And this reminds us that anyone in a position of authority in the laboratory or department who does not take allusions, hints and rumours about data faking seriously is screwing up. The hesitations on the part of junior scientists should be taken into account.

    Now if literally nobody ever brings it to the attention of anyone higher up…well I’m not sure what can be done. From a certain institutional standpoint, if the system isn’t informed, there is no way to address the situation.

    Obviously I think that one solution is to do as I am trying to do with this post and to state clearly that it is NOT normal to fake and commit fraud.

    I think that a PI who goes along with this meme and rolls her eyes along with the “everybody does it” crowd is herself increasing the chances someone will pull shenanigans in her own lab, and that the honest junior scientists will be less likely to say anything.

    It is important to make it clear that the PI would rather not get that fancy journal acceptance or not get a publication at all if the cost of doing so is fake data. Ditto for grants. And promotions. etc.

  63. Ola Says:

    DM: “Ola- on the ORI case, how many individuals have you reviewed work for and *not* seen any evidence of a problem? Several hundred?”

    Overall, for the past decade or so, both as a standing study section member as well as ad hoc on some other NIH panels, and a couple of other big grant-awarding bodies, I’d say I review about 30-40 proposals a year, and lay eyes on a couple hundred a year (i.e., not as a reviewer, but flicking through it during the review session while the primary reads their blurb). Of those, usually 2 or 3 a year contain problem data, so 1% would be a best estimate. That might not sound like a lot, but multiply one dodgy PI per study section per cycle by the # of sections at CSR, and you begin to see the magnitude of the problem.

    Is 1% worth getting riled up about? When billions of dollars of taxpayer and charitable foundation money are involved, you bet your fucking life it is! Yeah yeah, I get the point that it pales in significance compared to other “waste streams” in the system. However, this is an addressable entity. We know what the problem is, we have the tools to deal with it, and the results/rewards of addressing this problem are beneficial to all. I am frequently asked by my chair and others in my department “why are you so ticked off about this stuff?” My only response is “why are you not?”

  64. dsks Says:

    “Doesn’t the Reuters/Nature article say that Amgen could NOT reproduce results from 47/53 landmark publications?”

    It certainly does say that. In a commentary not subjected to peer review, and in which it is not clear that even the editors of the journal were able to access and review the company’s data.

    You see the paradox here, no? If ~90% of peer-reviewed data is actually “junk”, then what stock does this allow us to put in the conclusions of the single, secret, unverified, non-peer-reviewed study making this assertion? Surely, following the logical path from the authors’ own conclusions, we should be supremely skeptical of their findings? An order of magnitude more so, given that nobody has been allowed to see them.

    They did that experiment 50 times, eh? But we’re not allowed to see the data because – and this is a kicker for me – they have an agreement with the original investigator not to go public? WTF? Since when do scientists need permission to reproduce another scientist’s work and publish the resulting data? What a startlingly confusing admission! One you would expect to raise a red flag among the “It’s all lies!” crowd. But confirmation bias can be so compelling…

  65. Cynric Says:

    “Unfortunately, the Amgen scientists were bound by confidentiality agreements that they had entered into with the scientists whose work they attempted to replicate.”

    Yeah, got to agree with dsks on this. WTF? A confidentiality agreement about what? They’re purportedly trying to reproduce published work. Is this an admission that the original methods write-up was deliberately inadequate, so that when Amgen are told about the super-secret ingredient that makes things work they have to sign a gagging order?

    This is all tangled up with the bizarre and unscientific acceptance that journal “prestige” and research quality are synonymous, and clearly Pharma is just as much in thrall as academia.

  66. DrugMonkey Says:

    I bet they are talking about latest greatest stuff that is in the midst of IP negotiations and the company comes to find they’ve bought a pig in a poke. Their lookout, way I see it.

  67. Dave Says:

    This stuff about poor old industry scientists not being able to reproduce academic results is pure nonsense and a distraction. We work closely with pharma – big and small – and I can assure you that their "in-house" R&D is, in general, no more "reproducible" than academic work. Reproducibility depends on many factors, and it is more than conceivable to me that their scientists just didn't have the time or skills to reproduce certain experiments properly. Just because one or two labs can get a result from a technique does not mean it is bogus.

  68. DrugMonkey Says:

    their scientists just didn’t have the time

    It is always good to remember that industry bosses are not the same as academy bosses. Let’s say a PI puts a grad student or postdoc on replicating some key result. S/he may not give a crap if one of their lab of 10 is working for a year trying to get one damn thing going. Especially if they are on a fellowship and the money cannot be devoted anywhere else.

    The divisional director of some Pharma unit might take a little more practical view since any employee working on a dry well can’t be working on one of the other critical projects. Their patience level for tweaking on 50 replication attempts may not be that good.

  69. The Other Dave Says:

    Dave: "Just because one or two labs can get a result from a technique does not mean it is bogus."

    I agree (and DM was saying the same thing earlier with his RTFM comment), but that’s sort of beside the point.

    It is easy to imagine many reasons why research results could not be reproduced. 1) The original results are outright lies, 2) The reproducers are incompetent, 3) The original results are atypical, 4) The methodology used to obtain the original results is poorly- or mis-described, etc.

    Every single one of these things is a problem. They all represent wasted research money. The public doesn’t care whether we scientists are liars, incompetent, cherry-pickers, or bad writers. They are paying us billions of dollars a year, and want us to do a good job. They TRUST us to do a good job.

    When we screw up, it is a breach of that trust. And the teeny tiny wheeniest whiff of misconduct represents the most serious breach of that trust. We are like the guy on the corner asking for spare change because he is a war vet trying to get some food for his handicapped son and coping with the recent death of his wife who was killed by a drunk driver …who then turns out to be a never married serial thief who habitually spends the money on crack and beats his wife when high. It. just. looks. bad. So no wonder science funding gets cut.

    Our scientific house needs not just to be in order, but to be a shining example of what a house should be. We all need to be more like Ola. She is my newest hero. I am in love.

  70. Boehninglab Says:

    Wow, this was an eye-opening comment thread. Why the hell go into science if you are not seeking the truth? Certainly we didn't choose this career for the money. I try my best to emulate Ola and go out of my way to bust cheaters, and I have done it as a journal reviewer, reader, committee member, and study section member. We should all do this.

  71. CD0 Says:

    I think that we have to distinguish grant proposals from peer-reviewed publications. Both are of course anonymously scrutinized, but I can see how there would be more fraud in low-quality publications.
    Firstly, the very few grant proposals that get fundable scores (at least in NIH study sections) typically report impressive preliminary results, yes, but what differentiates them from the pack is the ideas, methods, innovative concepts, and important questions that they propose before doing the experiments – in many cases with reagents that are available to other applicants who do not have the same high-quality ideas. And the ideas need to be feasible. This is at least my experience. Top grants are funded because there is an amazing intellectual exercise behind them, not because there is much possibility of fraud. The work has not been done yet.
    Regarding publications, the most cited, most impactful manuscripts are so because they are reproduced and because they report seminal discoveries that provide the basis for new scientific avenues. Well-respected scientists are famous because they discovered this or that, or have a trajectory in a field in which others follow their lead. There may be fellows in the lab willing to cut corners, but the most valuable possession of these guys is their reputation, won through a long-term effort convincing others that they are right.
    Then there are resentful, limited people in the profession who are better candidates for cheating. But they cannot go too far. If their work has some impact because they became too ambitious, they eventually get caught. Sometimes it takes a while (e.g., van Parijs), but sooner or later their nakedness becomes obvious.
    And then there is a lot of paranoia and jealousy. We all have an obligation to scrutinize our peers' work, but assuming that everybody cheats is outrageous to me. Because I do not cheat, and I do not believe that most of my colleagues, who have become successful through decades of perseverance, deserve some of the comments that I read above.

  72. WTF Says:

    DM,

    I read your post and all of the comments, and I cannot figure out what the point of your post is, other than to be some sort of pro-science propaganda. I say so because we don't know how much scientists are faking/fudging data. We may know of a few obvious cases that can be readily discovered, but what about all the cases we don't know about? Since we have no data, conclusions on this matter are useless.

    What were you expecting? That fakers were going to come here and admit their wrongdoings?

    On the subject of data reproducibility, I think you are being silly. There are indeed many reasons that data cannot be readily reproduced, but instead of making excuses for that, we should be trying to find solutions. You should not defend data irreproducibility so vehemently. The onus should always be on the reported data and the scientists behind it. Otherwise, it becomes really easy to fudge data and then say, “Oh, well, that’s how *real* science works; sometimes things cannot be reproduced. Sorry.”

    I also take issue with your views about "GlamourMags" and them not being "real science." That's your opinion, fine. But you should not let your views blind you to the fact that most of the scientific world does not view these GlamourMags that way. Most scientists treat them as one of the highest achievements in science. Data from them not being reproducible is not a-OK. In fact, irreproducible data from them should raise serious red flags, precisely because of their high impact on science and society.

  73. Mum Says:

    TOD, you are way off again with your metaphor of the drunk beggar. Scientific funding, misused and abused as it is, IS ALREADY a shining example of good use of public funds. Compare it with a lot of other expenditures. How much is the public getting for the billion-dollar B-2 bombers (not counting development)? How much for all the money spent on politicians and lawyers? The process of review, evaluation, and accountability in science, flawed as it is, is way better than anything outside it. So stop all the whining about misused dollars. If the taxpayers are getting robbed, it isn't by the scientists funded by NIH.

  74. Industry Scientist Says:

    “This stuff about poor old industry scientists not being able to reproduce academic results is pure nonsense and is a distraction.”

    Wrongo.

    The one truly surprising thing to me about going from academia to industry is just how many academic papers can't be reproduced. Industry has the capability to move quickly to develop therapies based on academic research, but typically doesn't, because 1) before any drug target enters a development program it needs to be "internally validated," i.e. the data needs to be repeated in house, and 2) most of the time (and I do mean most), it can't be replicated. Even using the exact same commercially available reagents. It's one of the most incredibly frustrating things to read all of this cool science, try multiple times to repeat it, and find it's all bupkis.

    One of my first big projects when I joined was to try to validate a hot new drug target which had been published in Nature. I spent about seven months trying to replicate the results in multiple ways, and while we saw some small effects of the protein, the results were wildly inconsistent from experiment to experiment and nowhere near the magnitude reported in the published paper. I showed the results to our VP and told him basically it was crap, so we parked the program. Then we started hearing whispers in the field that this lab (which really is one of the top labs in this area of research) was having trouble replicating its own data. Now two papers have come out basically saying there's no way the original paper can be true.

    Industry research certainly isn’t infallible, but there’s much less incentive here to commit fraud – there’s no pressure to publish and, in contrast, there’s definite pressure to be absolutely 100% sure your data is rock solid before presenting it to upper management. Data is always repeated in house and notebooks are signed and countersigned – the incentive here is to make sure your data is as good as can be and everyone reviews the raw data. After all, raw data tables need to be included in any FDA filing, along with all the stats on said data.

    All data is gone over with a fine-tooth comb at multiple levels, and I think it's this sort of peer review that really needs to be implemented in academia.

  75. Cynric Says:

    there’s no pressure to publish and, in contrast, there’s definite pressure to be absolutely 100% sure your data is rock solid before presenting it to upper management.

    I think this is the heart of the issue. Because academics are under pressure to publish in high-impact journals, they need to rush out initial reports of exciting preliminary data, which cause a big stir. It takes years to determine whether the preliminary results hold up, and if not, the field moves on and drops them. Industry, meanwhile, may have wasted a lot of time and money trying to reproduce them (and the failures, likewise, go unpublished).

    However, for the academic, sending off a paper with "this looks good, wonder if it's broadly applicable and reproducible" will not get you a slot in a glamour journal, and so the pressure to sex up and oversell results to establish your career is intense.

    In a vicious circle, industry and academia then overestimate the robustness of the data by conflating it with the prestige of the journal. Time and effort are the only way to determine how reproducible ANY result is, regardless of where it was published. It suits the glamour journals to be viewed as the home of the "best" science, and we are all buying into their game.

  76. dsks Says:

    “… the results were wildly inconsistent from experiment to experiment and nowhere near the magnitude of the published paper. I showed the results to our VP and told him basically it was crap, so we parked the program.”

    And this contrary article was published, I presume? Otherwise it's all just insider hearsay, isn't it? Because here in academia, in most of the lively and competitive fields I've been in contact with, folk rarely waste an opportunity to show that a competitor is incompetent or just plain full of shit. This is what Cynric is getting at when he says that so long as people publish their findings one way or the other, the truth will out. You can't build science and make progress on bullshit, so bullshit is invariably left behind by a process of selection.

    Of course, Nature commentaries aside, industry hasn’t remotely lost faith in academia. On the contrary, we need only look at the actions of Big Pharma over the last decade to see that they actually appear to be increasing their stock in academic science, paring back their own in-house basic science and subcontracting it out to small biotechs and… academic labs!

    Why the hell would Pfizer go all-in with Wash U STL if it thought the latter couldn’t be trusted?

  77. Dave Says:

    Industry research certainly isn’t infallible, but there’s much less incentive here to commit fraud….

    Never said anything about fraud. My fairly extensive experience with pharma in-house research is that it is just not that good or rigorous, signed lab books or not.

    And we should clarify that industry scientists ARE under lots of pressure to get certain results, but for very different reasons. This is very evident in my field right now. Because patents have run out for many "blockbuster" drugs, many firms are simply reinventing the wheel and coming out with (in our opinion) identical compounds that do exactly the same thing and hit exactly the same mechanisms. They call them something different, of course, and claim different mechanisms, but in our hands in the lab it's the same drug. Now, when they come to us looking to do human studies, we want to do it properly and compare their new golden drug to the (now generic) version which has been on the market for a decade or more. Of course, they never want to do that, citing "financial reasons" for the decision. But we all know the real reason. This kind of stuff is not fraud per se, but it certainly is unethical in my opinion. Not to mention that it is just bad science.

    So let's not pretend that industry is the gatekeeper of what is good science and what is not. I would submit that it is the exact opposite.

  78. univ Says:

    10 points for that video :). Anyway, good luck with this cause.

  79. miko Says:

    A lot of this shit would stop in a hurry if PIs were not allowed to blame the most convenient author-who-now-resides-in-another-country and wash their hands of fraud.

    With both of my PIs, I would never in a million years have been able to get manipulated images or obviously cooked data out the door (the kind of fraud that is routinely caught by readers and reviewers), because we always sat side by side, looking together at the source data and analysis for every single data point in every single figure. Not because there was a lack of trust, but because mistakes DO happen, and we wanted to make super sure everything was right. Because we are scientists.

    If you're not doing this as a PI – which is just good practice and would stop so many kinds of EASILY PREVENTABLE bullshit from going out of your lab – you're an idiot, or your lab is too big for your time and abilities to manage effectively. And you should bear the consequences of preventable cheatfuckery published with your name on it as corresponding author. If there is fraud a reader can catch, you have absolutely no excuse for not catching it.

    I grant there are kinds of fraud that a PI would NOT be able to detect, but nor would a reader. It is impossible to know the numbers on this. I think self-reporting and rumor-reporting are so wildly inaccurate as to be useless.

    Finally, the ROI on NIH-funded research is absurdly, fantastically huge compared to almost any other category of government spending, end of story.

  80. Kati Says:

    Maybe cheating is relatively common in the biological sciences, but I don't know anyone in physics who has fabricated or inappropriately manipulated data.

  81. CD0 Says:

    Sure, sure. The biggest scandal in recent times – the retraction of 8 papers by Science and 7 papers by Nature (among many others) of Jan Hendrik Schön's work on semiconductors – must have been the doing of a marine biologist…

  82. Robert L Bell Says:

    It's not just academia, I'm sorry to say.

    In my work at an industrial laboratory, my manager has been known to step in and change reports to make them more friendly towards the answer the customer wants to hear. I have to grit my teeth and take what comfort I can from knowing that my hands are not directly dirty, but the whole system stinks to high heaven.

  83. Dave Says:

    …..but your reports are signed, so the data must be correct.

  84. Anonymous Scientist Says:

    Awesome thread. More relevant now than ever.

    I know this thread is old and dusty, but I just read it now, so I'll share my experiences. I'm a tenured, funded faculty member at a US medical school. What I've learned is that if you want the truth, listen to the graduate students… I've seen everything already described: students being told what data to get before the experiment; students not being allowed to graduate because their data doesn't fit the PI's previously published models; PIs trying to fail students on qualifying exams because they couldn't reproduce previously published data. And those are just the obvious examples. To be fair, it's not rampant, but these things happen more often than they should. And I do agree, sloppy science is everywhere, not because people are cheating, but because they are always under the gun to publish. I can't hate a scientist for that…

    From my vantage point, down in the trenches, the system is horribly broken – probably amplified in recent years by the extreme duress caused by the lack of funding, which auto-selects for those willing to do whatever it takes to be successful, and by the hoarding of grant $$$ by "elite" labs (one lab in my department has more R01s than half the department combined). If you are a younger PI and didn't get into the system early enough to become established before all this crap started, or aren't part of a powerful lineage/mafia that can protect you, you either play the game the bad-PI way, maybe get lucky and discover something cool that is actually reproducible, or accept that you are going to barely survive and/or wind up being the teacher…

    Reminds me of the Lance Armstrong documentary. They were interviewing a French rider who said there were two types of riders on the Tour de France back then: those who took EPO and those who didn't. I believe he claimed those who didn't take EPO never placed in the top 50%. Turns out Lance was on no fewer than 5 different PEDs, so he was the superstar… sound familiar? And oh, how people defended Lance for the longest time…

    I think on some level scientists struggle to admit just how low the system has sunk, because many of us got into this business in the first place because we believed in the "purity" of science: a discipline that could provide concrete answers to the mysteries of the Universe, discoveries that could subsequently be used to produce functional products that improve human quality of life. Because of this, many scientists defend the system (or maybe the system has been good to them, so they don't care), when in fact they should just be defending science itself. Science, like everything else in human society, only works when the system holds people accountable and is willing to discipline offenders, and this quality is almost entirely absent in our current NIH-funded system, for all the reasons stated in the posts above. Good scientists are not trained to be cops… most of them are pretty nice people, at least the ones I've met, and they have a hard time being the heavy hand of the law. How many of you actually fail PhD students anymore??? We haven't failed one in the 10+ years I've been on the faculty…

    My solution? No more than 2 R01s per lab. Spread the wealth. Make it impossible to become rich on NIH $$$, and the crooks will leave the system to go searching for the bags of money somewhere else. This age war is a lame distraction. There are good old scientists and there are bad old scientists; same for the young. Spreading the wealth and capping R01s will let scientists spend more time doing careful experiments, rather than worrying about surviving and writing grant after grant after grant. Look at sports leagues. Those that have instituted salary caps have ensured that all teams have an equal chance to win each year, and that their success depends entirely on how well they manage their money, make decisions, etc., not on who their friends are, what group they belong to, whether they can outspend the competition, or how much they cheat/bend the rules. And we need refs: scientists who know the system, were successful at it, and have the ability to watch for and call fouls/penalties. Maybe older scientists who have already made their contribution, are winding down, and now care about the future of science. Why not?

    I believe deep down most scientists are good people, but most people in a hypercompetitive, unregulated environment will do whatever they have to do to keep their jobs, support their families, feed their children, pay their mortgage, etc…

    Wishing you all the best of luck.
