Ethical Manifestos and the Culture of Science
June 1, 2007
Dr. Free-Ride has been grappling with the question of scientific ethics, which has been a fantastic series of reads. Now she’s calling me out on some of my commentary. The call-out also comes with a handy Manifesto. I have some thoughts.
The Free-Ride manifesto starts off with assumptions:
1. All scientists appreciate the need for honesty in reporting scientific findings and the wrongness of fabrication, falsification, and plagiarism.
2. Despite (1), a certain (alarming?) number of scientists nevertheless engage in fabrication, falsification, and plagiarism with some regularity.
3. A certain (even larger?) number of scientists are aware of the scientists who engage in fabrication, falsification, and plagiarism.
4. The known bad actors seem to get rewarded, rather than smacked down, for committing fabrication, falsification, and plagiarism.
Regarding (1), yes, it can safely be assumed that “all” (meaning the meaningful majority, although we may return to the minority of sociopaths later) scientists, at least starting out, believe that science is supposed to be about truth. A comment suggests that unselected undergraduate populations may not express any personal commitment to ethics, but I think the pool of pre-scientists is different. Sure, there are aspects of our current business and legal models that suggest personal ethics are dead; I submit that this is not our problem in science.
We move on to (2), which is my ongoing point in my comments on Dr. Free-Ride’s posts. We all have examples from our favorite fields. Retractions in journals and ORI findings of violations just keep on coming. The critical question, which should shape our approach to ethics training, is WHY. What explains why scientists start out honest (assumption 1) but become dishonest?
(3). Indeed. Although it is a good question whether the average estimate of data faking is a good match for reality, whether too high or too low. Does it matter? Yes. Cynicism that “everyone is doing it” is likely part of the answer to the question of why. There are subtler aspects too, including false accusation and false belief. It is somewhat in vogue to say “but I don’t believe their data.” This is a serious problem, because it is all too easy to engage in low-level rumor mongering that has eventual effects. It feeds the sense that “everyone is doing it,” etc.
The biggie, (4). I’ve outlined a case for direct reaping of rewards from something that was, at best, a serious scientific mistake that should never have happened. In some senses this is a case study in what happens when a PI, whom we might assume to be well-intentioned, sees “Science paper” and “grant renewal” written all over an unusual result from the laboratory. There are other contingencies at play. A publication in a very high-impact journal such as Science, Nature and the like means a LOT and has an incredibly lasting effect on a career. The mere fact of getting an article published in these journals trumps the actual content of the paper six ways to Sunday. Really. It puts one in the club. Makes the editorial staff take your call next time. Gives one weight when arguing to one set of editors that their journal had better take your paper because otherwise the competition will. Puts you on the list to review for that journal. Gives you a leg up getting the next paper into the journal. Against all these benefits to a career, well, the embarrassment of having to retract is pretty small stuff. In most cases, one just submits a revised version of the alleged figure saying “we mistakenly used the wrong one, our bad” or, at worst, blames some chump pre/postdoc. It is no accident that there are more retractions from higher-impact journals, no matter what excuses those with a stake in the matter might argue.
Okay, now onto the Free-Ride solutions:
- Reiterate to scientists (in some mandatory ethics training) that fabrication, falsification, and plagiarism are wrong.
Despite appearances, I actually agree. One should not abandon the basics. There is value in community assertions of beliefs, expectations, and rules of conduct. This should not be overlooked, even if it is not sufficient. The key is recognizing that a couple of hours in a snooze-inducing class (not the good Dr. F-R’s, I have no doubt) is not enough.
- Cut off funding to scientists who engage in fabrication, falsification, and plagiarism.
We can dream, can’t we? This is where these cases get sticky, because the accused, guilty or not, is going to fight tooth and nail over this very issue. The institution is going to fight tooth and nail to defend its Indirect Costs. This is why it takes a very long and, in retrospect, obvious series of violations to really impact the career of an independent investigator. Postdocs, grad students, and techs? Sure, they get shelled with little fuss because there is no cost to the powers that be in dumping them. PIs, on the other hand… But let’s be positive, shall we? Go ahead. Take your favorite case of faking and ask program officers at the funding Institute why he/she is still drenched in NIH funds. See how that turns out, eh?
- Fed up with the bad actors and the lack of real consequences for their fabrication, falsification, and plagiarism, honest scientists quit in disgust.
I’m not so worried about good people quitting in disgust. What I am worried about is that honest scientists might be forced out because they can’t get their grants funded in tight times: their labs are not “hot” enough, and all the money is going to the cheaters. That’s the real problem here.
- Fed up with the bad actors and the lack of real consequences for their fabrication, falsification, and plagiarism, honest scientists stay in science but distrust everyone.
We have some trends this way. See above regarding “I don’t trust their data”. But this is epiphenomenal. Get the cheating out and this will fix itself.
- Change the reward structure within the institutions where science is practiced to undercut the ability of bad actors to profit from their fabrication, falsification, and plagiarism.
Yeah, right. Everyone says how flawed the Impact Factor is, how it shouldn’t matter, how what really counts is showing your “seminal contribution” and the like. Bull. Pubs in high Impact Factor journals mean a great deal on the CV, and all signs point toward this getting worse. Talk to some journal editors and you’ll find that even the lowly 1.x impact factor journals think of nothing other than how to upgrade their rating. Senior scientists who have gotten by on the previous model (keep plugging away in the “good” Society type of journal) are now publishing in whatever seems to be a one-point-higher Impact Factor journal than the last one. Getting your work in front of the right audience has been completely overtaken by Impact Factor seeking. Case in point: I sit on a study section which doesn’t typically get Science/Cell/Nature-type applicants. Well, we got one, and everyone was amazed by the proposal and the science. Decent score, didn’t fund, the revision came in and…oh dang. No extra preliminary data to address that nagging issue of controls, leading to the conclusion in some people’s minds that what we have here is a fake: there is no effect, and the lack of supporting controls in the preliminary data is not due to a lack of time getting the proposal together but because, in fact, there is no effect there. So the score went down. Whaddaya know, Program chose to pull up the prior version with the better score and fund it! Why? Well, they salivate over getting labs that are likely to publish in Cell/Nature/Science into their portfolio. Contingencies.
- Stop freakin’ tolerating other scientists engaged in fabrication, falsification, and plagiarism, and other practices that undermine the honesty needed to do good science.
We’re trying. Honestly. But what is there to do? Those poor, poor fools at Wisconsin paid a huge price for blowing the whistle on their PI’s faking. In one huge, high-Impact-Factor-publishing lab with which I’m familiar, there’s a cultural split between the real scientists and the chronic data fakers. (If you are in bench work and every single experiment comes out exactly as the PI hypothesized at lab meeting…you might be a data faker. If you get great results with an antibody that nobody else can get to work…you might be a data faker. If you are a Photoshop expert who creates “data” figures by cutting gels and blots into minuscule sections…you might be a data faker.) Guess which ones are the apples of the PI’s eye? Guess which postdocs get jobs at high-falutin’ universities? Ask the luminaries who now edit journals some tough questions and you’ll get the usual pablum about how people “should” act.