The Hippocratic Oath for Graduate Students? Really?

June 24, 2008

A letter printed in the 20 June 2008 issue of Science magazine proposes an oath of integrity, professionalism, and ethical conduct for graduate students. The authors review the well-known Hippocratic Oath, one of the foundational traditions of the medical profession:

The Hippocratic Oath, recited by medical school graduates worldwide, is arguably the best-known professional honor code. This centuries-old oath instills a commitment to altruism, professionalism, honesty, skill, knowledge, duty, loyalty, and fraternity among medical doctors.

Is it time for scientists to adopt something similar?


The authors, a group from the University of Toronto, propose that it is indeed time.

We created an oath to be recited voluntarily at the first meeting of each year’s new graduate student body in IMS. We specifically chose to hold the oath ceremony at the entry point to graduate studies rather than at graduation day in order to introduce students to these concepts early.
-snip-
We propose that a graduate student oath should constitute a standard requirement of life science graduate programs. This oath should be the cornerstone of a programmatic series of information modules addressing issues of community, professionalism, and ethical conduct provided by the graduate department and reinforced throughout the student’s training by their faculty mentor.

A bold pronouncement indeed. So what is this oath anyway?

“I, [NAME], have entered the serious pursuit of new knowledge as a member of the community of graduate students at the University of Toronto.
“I declare the following:
“Pride: I solemnly declare my pride in belonging to the international community of research scholars.
“Integrity: I promise never to allow financial gain, competitiveness, or ambition cloud my judgment in the conduct of ethical research and scholarship.
“Pursuit: I will pursue knowledge and create knowledge for the greater good, but never to the detriment of colleagues, supervisors, research subjects or the international community of scholars of which I am now a member.
“By pronouncing this Graduate Student Oath, I affirm my commitment to professional conduct and to abide by the principles of ethical conduct and research policies as set out by the University of Toronto.”

I will close by noting that I was led to this article by a discussion on this topic underway over at MWE&G.
__
Update 06/24/08: GrrlScientist has a more thoughtful post on this.

35 Responses to “The Hippocratic Oath for Graduate Students? Really?”

  1. bob koepp Says:

    While I applaud the sentiments expressed, the fact remains that talk is cheap.
    While it’s still in the realm of “talk”, I’d be more impressed if, rather than having newbies take an oath, they had to write expository essays competently explaining the various ethical principles supported by the suggested oath.

  2. Bob O'H Says:

    It’ll be about as effective as a virginity pledge.

  3. Nat Blair Says:

    Yeah, it’s a nice idea, but why only for the incoming graduate students?
    Why not get all the faculty, postdocs, and existing students together to come up one by one and do the same thing? Seems quite one-sided in my view.
    And as bob koepp says (and as my dad always said), talk is cheap. Yes, it’s a nice gesture, and maybe it makes people feel good, but it’s hard to imagine it having any real effect.
    A better focus on the underlying pressures that lead to misconduct, along with improved methods to deal with misconduct as it occurs, seems much more worthwhile in my opinion.

  4. PhysioProf Says:

    In the absence of detailed guidance on specific ethical quandaries, these kinds of vague hortatory maxims are a total fucking joke. If scientists are serious about this shit, they should take a lesson from the legal profession. Typical state legal codes of ethics in the United States are ~35,000 words of detailed guidance on how to handle a variety of situations that arise in the context of legal practice.

  5. neurolover Says:

    And, the lawyers write that detailed guidance because they know that maxims like “do no evil” are all well and good, but that it’s the implementation that matters. Implementation matters because it can be sincerely difficult even for an honorable person to tell when perfectly acceptable becomes gray and then becomes fraud (take the manipulation of western blots, or the removal of outliers from data, as examples). I don’t know what you’re allowed to do to your western blots, but I know that you’re allowed to remove certain kinds of outliers (reaction times are almost always filtered to exclude the really, really big numbers where the subject took a break and which skew the means unbearably). When does your data smoothing become fraud?
    Historically, science has relied on the mentor to provide this guidance, but do mentors do it, when what they really want to hear is that their student got the wow result that gets them the press?
    And therein lies the second implementation rub. If the system is set up so that unethical behavior ends up resulting in significant reward, no amount of education is going to really prevent fraud. We have to set up the system so that we don’t reward the people who fudge the numbers (or the people who supervise the people who fudge the numbers).
    (I worry, actually. In the end, replication is supposed to be the bottom-line guard against the kind of fraud that hurts society, rather than the scientists, but in some fields replication takes a very very very long time, and then never really changes the past).

  6. bleh Says:

    it’s sad when one has to spell these things out. first, aren’t all grad students supposed to take some sort of ethics course? and second, it harms one’s reputation and the mentor’s reputation when unethical actions are brought to light. if the sheer embarrassment isn’t deterrent enough, such exposure isn’t far from (academic) career suicide.

  7. PhysioProf Says:

    And, the lawyers write that detailed guidance because they know that maxims like “do no evil” are all well and good, but that it’s the implementation that matters. Implementation matters because it can be sincerely difficult even for an honorable person to tell when perfectly acceptable becomes gray and then becomes fraud[.]

    That’s what I’m sayin’! Having scientists take an oath to “be honest” is a waste of fucking time, and just diverts attention from what would really need to be done to provide useful guidance.

    it’s sad when one has to spell these things out.

    It may be sad to have to tell scientists to “be honest”. It is, however, far from trivial to figure out how to be honest in complex fact-intensive subjective situations. Therefore, it is far from sad to provide scientists with specific guidance on what “honesty” demands in specific situations.
    Where the fuck’s Dr. Free-Ride, anyway!?

  8. BikeMonkey Says:

    And, the lawyers write that detailed guidance because they know that maxims like “do no evil” are all well and good, but that it’s the implementation that matters.

    Or the lawyers need all that detailed guidance because…
    oh, it’s just too easy, PP.
    Look, even knowing what one is supposed to do in specific situations only gets you so far. I assert that at some point in time, the data fakers knew what was right and what was wrong. It just isn’t a business for people who think in advance, “Hey, I can just fake up some shit and coast along…”
    What happens is that the forces at work edge otherwise well-intentioned people down the slippery slope. And that is the sort of thing that I find most important. What structural factors of our industry reinforce (instead of punish) cheating? I would argue that prioritizing “first” over “best” is one such factor. What forces at work within the lab reinforce cheating? Do we need to take a hard and broad-ranging look at PIs in situations where the blame is apparently on the trainee? Did s/he encourage a culture of credulous acceptance of data as long as it fit the hypothesis? Etc.
    I think that focusing on the moral failings of individual bad actors is just the wrong way to go.

  9. DrugMonkey Says:

    but I know that you’re allowed to remove certain kinds of outliers (reaction times are almost always filtered to exclude the really really big numbers where the subject took a break and which skew the means unbearably). When does your data smoothing become fraud?
    One real big watershed is the extent to which you do/do not describe what you did to select your data. It’s been a while, but IIRC the RT folks would say pretty explicitly, “We used a 3 SD cutoff to exclude outliers,” and perhaps give an indication of how many values (or subjects, or sessions, or whatnot) were excluded. To refer to a situation I’ve mentioned before, in drug self-admin studies some people drop rats who fail initial acquisition in cases where this is not the point of the study. Again, so long as you say something about your criteria, it is on the up and up. Sure, you can debate the scientific merits of one data-handling approach or another, but there isn’t really any fraud going on.
    As readers know, one of the trickiest things I grapple with in terms of cherry-picking / excluding / analyzing data is that field-specific situations may not really be comparable. When I first started to understand what bench scientists mean by experiments “working” or “not working”, and the accompanying failure to describe the population of outcomes, I was appalled. Because my experience is in datasets for which a varying distribution is expected, and one is expected to describe and deal with that variation, and to put together an experiment that provides a representative sampling of the likely outcome. I still think some classic bench techniques and approaches to “an experiment” could learn from our side of the world, but I’m partway to understanding why they see it the way they do.
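    To make the “describe what you did” point concrete, here is a minimal sketch (Python; the reaction-time data and the exclude_outliers helper are invented for illustration, not taken from any actual lab’s pipeline) of a declared 3 SD exclusion rule that also reports how many trials it dropped:

    ```python
    import numpy as np

    def exclude_outliers(rts, n_sd=3.0):
        """Drop reaction times more than n_sd standard deviations above
        the mean, returning the kept values and the number excluded so
        the count can be stated explicitly in the methods section."""
        rts = np.asarray(rts, dtype=float)
        cutoff = rts.mean() + n_sd * rts.std()
        kept = rts[rts <= cutoff]
        return kept, rts.size - kept.size

    # Hypothetical session: ~300-500 ms responses plus one 12-second
    # trial where the subject apparently took a break.
    rts_ms = [312, 405, 388, 450, 297, 366, 421, 333, 478, 402,
              355, 440, 391, 310, 425, 367, 398, 444, 372, 12000]
    kept, n_excluded = exclude_outliers(rts_ms)
    print(f"Excluded {n_excluded} of {len(rts_ms)} trials (3 SD cutoff)")
    ```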

  10. PhysioProf Says:

    What happens is that the forces at work edge otherwise well-intentioned people down the slippery slope.

    The existence of relatively clear-cut rules that define a limit on that slope, beyond which one is clearly behaving improperly, is considered very helpful in the legal context. The legal profession also has come to the conclusion that the best way to deal with the possibility of unethical behavior is not the pipe dream of trying to eliminate the positive rewards of cheating (provided one doesn’t get caught), but rather to increase the perceived likelihood of negative consequences of cheating: getting caught and being severely punished.
    The existence of clear-cut specific rules, the violation of which is perceived as having a reasonable likelihood of being found out and punished, goes a long way towards “clarifying the mind”.
    Trying to eliminate the incentive to cheat is doomed to failure. What are you gonna do, try to convince post-docs that it’s just as good to publish a paper in the Journal of Glomerular Filtration as in Cell? Try to convince PIs that it’s ok if they don’t get tenure?
    If you think unethical scientific conduct is a problem, the way to deal with it is to increase the perception of scientists that if they behave unethically, they will be caught and punished severely. An important component of any program to increase this perception is the formulation of specific rules that define improper conduct.
    It is a lot easier to convince oneself that one is not violating an injunction to “be honest” as one fakes data (while “knowing” that if the experiment were done, the results would come out as faked) than it is to convince oneself that one is not violating a specific injunction to, for example, not differentially apply contrast adjustments to different panels in a figure.

  11. JSinger Says:

    What structural factors of our industry reinforce (instead of punish) cheating?
    It’s the Least Publishable Unit. For grad students and postdocs, years of work boil down to whether they were able to get a paper to gel once, twice, three, or four times. (And because they don’t understand the informal politics, they don’t understand that an advisor does have some room to pull strings for someone whose PubMed output doesn’t adequately reflect his/her merit.) It creates an all-or-nothing mentality where a handful of data points make the difference between a PhD right now and having to wait some unspecified number of additional years.
    The same goes for reporting misconduct by others. Remember those poor grad students at Wisconsin who turned in their PI, despite everyone telling them they’d be the ones paying for it? And the university shrugged and said, yeah, their integrity was great but, gosh you have to earn your doctorate, so you all have to start from scratch in whatever labs will take you?
    As PhysioProf says, it’s hard to know how to begin changing that, which is why this “oath” and its aggrandizing of faculty and demeaning of new students seems like a good alternative.

  12. DrugMonkey Says:

    It’s the Least Publishable Unit. For grad students and postdocs, years of work boil down to whether they were able to get a paper to gel once, twice, three or four times. …It creates an all-or-nothing-mentality where a handful of data points make the difference between a PhD right now and having to wait some unspecified number of additional years.
    Well, this is certainly a vote in favor of the now considerably out-of-fashion approach in which the PhD is awarded for a body of dissertation work regardless of whether anything ends up being publishable. That approach might lean more toward the educational process of making sure a candidate knows how to work through a problem in a reasonable way, irrespective of whether their hypothesis was right or wrong, or of whether the state of objective reality led to apparently successful or unsuccessful results.
    While I am generally critical, for careerism reasons, of programs which hew to the monolithic thesis over the incremental-publication approach, you raise an interesting consideration.
    I’m not sure why you use the term “Least Publishable Unit”, however. This would seem to be a good thing, in that perhaps the benefits of getting some data published as a first-author graduate student are more important than the impact of that particular work. The knowledge of grad students that they are very likely to end up with one or more first-author pubs might have a calming effect versus situations in which the grad student knows they will have to fight like hell to get the first-author slot on a CNS pub of uncertain expectancy…

  13. bob koepp Says:

    PhysioProf says, “If you think unethical scientific conduct is a problem, the way to deal with it is to increase the perception of scientists that if they behave unethically, they will be caught and punished severely.”
    If the basis of ethical behavior is fear of punishment (or, for that matter, hope for rewards), we’re in big trouble — and not just in the scientific sphere, but generally.

  14. DrugMonkey Says:

    If the basis of ethical behavior is fear of punishment (or, for that matter, hope for rewards), we’re in big trouble — and not just in the scientific sphere, but generally.
    Explain the persistence of religious traditions then, my friend.

  15. bob koepp Says:

    Ummm… what does the persistence of religious traditions have to do with inculcating ethical behavior?
    I don’t know if you’ve got kids, but if so, and if you’re reasonably satisfied that they are ethically inclined people, when did you stop beating them (or, for that matter, bribing them)?

  16. PhysioProf Says:

    I’m no child psychologist, but I thought the basic idea was that you can stop punishing and rewarding children when they internalize and conceptualize the actual punishments and rewards they have been receiving, and guide their behaviors anticipatorily in relation to what punishments and/or rewards would be expected.
    I’m a pragmatic person, and tend to subscribe to the same pragmatic view of human nature as the founders of the United States and framers of our Constitution, which is that if you expect people to follow some abstract moral code simply because it is “the right thing to do”, you are dooming yourself to chaos, disorder, and disappointment.

  17. DrugMonkey Says:

    It has to do with the recognition of what our species IS and how it behaves, not how you might like it to be.
    The behaviorists may have been a little over the top, but they got the fundamental approach right. We are products of the contingencies that operate upon us now, and those that have operated upon us in the past. In the context of environmental sources of reinforcing and punishing stimuli and outcomes, the role of some nebulous concept of will and a priori ethical behavior is unconvincing. At best, we produce apparent “free will” ethical behavior by inculcating decreasingly specific sources of reinforcement and/or punishment, such that you can persuade yourself that these have left the picture. They have not.
    Yes, I’m a parent, and I’ll stop applying reinforcing and punishing stimuli when I last interact with them before I die, I suspect.
    Your cartoon version of “beating” and “bribing” betrays a fundamental lack of appreciation for the subtle and nearly unconscious ways we interact with each other on a minute-by-minute basis. Not to fear, you are far from alone in this. It is one of my greater fascinations that people who are steeped in the behaviorist perspective and can identify these reinforcer/punisher relationships are inevitably assumed to be uniquely Machiavellian users of these principles.

  18. DrugMonkey Says:

    …and I failed to directly address religion. So, in case it is not obvious.
    Religious traditions are a continual feature of human societies. Many, if not most, have as a primary theme telling people how to behave (morally/ethically/in accordance with God, etc). The traditions also back this up with a variety of threats and rewards. In Christian parlance this boils down to the simple dichotomy of ending up in Heaven or Hell depending on one’s actions.
    I am unaware of many religious traditions which argue for certain behaviors without any trappings of eventual reward or punishment. The dominance of such approaches should tell you something about our species’ characteristics.

  19. bob koepp Says:

    Where did I introduce a nebulous concept of will or a priori ethical behavior? If potted S-R psychology is the best explanation you can offer for why people behave morally/immorally, then you really shouldn’t be telling others that they lack an appreciation for the subtleties of human interaction.
    Here’s a bit of pragmatism. Smart people (including at least a few scientists) can pretty effectively hide a lot of serious ethical transgressions; with a small expenditure of effort, they can make it very unlikely that they will be caught. Plug that small probability of getting caught into a decision matrix, turn the crank, and then tell me how a regime of rewards and punishments is an effective method to promote ethical behavior.
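    Concretely, turning the crank on a toy version of that decision matrix (a minimal sketch in Python; all numbers are invented for illustration):

    ```python
    # Toy expected-value version of the "decision matrix" above.
    def expected_payoff(p_caught, reward, penalty):
        """Expected payoff of cheating: gain the reward if undetected,
        pay the penalty if caught."""
        return (1 - p_caught) * reward - p_caught * penalty

    # Even with a penalty 100x the reward, cheating "pays" in expectation
    # whenever the perceived chance of being caught is below 1/101 (~1%).
    for p in (0.001, 0.005, 0.01, 0.05):
        print(f"p(caught)={p}: E[cheating]={expected_payoff(p, 1.0, 100.0):+.3f}")
    ```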

  20. DrugMonkey Says:

    then you really shouldn’t be telling others that they lack an appreciation for the subtleties of human interaction.
    oh, I’m passing familiar with philo of ethics and all that crap. I just don’t see a whole lot of evidence for it that cannot be better explained by behavioral analysis. Some of that nonsense is indistinguishable from religion when it comes right down to it.
    The point is that when you philo types want to explain some human behavior that has no immediately obvious sources of quotidian reward or punishment on casual inspection, you throw up your hands on the scientific front of identifying more subtle chains of behavior and start inventing all sorts of dualist concepts. For which there is no evidence, save your lack of imagination in analyzing chained behaviors and our capacity for indirect reinforcers and punishers. You see a demonstration that an ape or monkey exhibits “altruistic” or “fair” behavior and start arguing about how they must be “like us” instead of concluding that it is we who are “like them”. Animals. Animals subject to stimulus-reward contingencies yes, no matter the complexity of said contingencies that have been established over a complex life.

  21. PhysioProf Says:

    Smart people (including at least a few scientists) can pretty effectively hide a lot of serious ethical transgressions — with a small expenditure of effort, they can make it very unlikely that they will be caught.

    I’m pretty sure that this is far from the case, unless you are talking about the type of fraudsters that publish solely in

  22. JSinger Says:

    I’m not sure why you use the term “Least Publishable Unit” however.
    I’d meant it from the side of the minimum threshold for a publication (by that lab’s standards of what is publishable), below which the person has nothing. (Or less than nothing, in that a fourth-year with nothing arguably looks worse than a first-year with nothing.) In cases where you have more than the LPU and are trying to see how many publications you can dice it into — sure, I agree.
    If the basis of ethical behavior is fear of punishment (or, for that matter, hope for rewards), we’re in big trouble — and not just in the scientific sphere, but generally.
    I think all of you are missing that it’s implicit that most scientists are reasonably ethical most of the time without needing constant positive or negative reinforcement. The question facing the scientific community is how to reduce the remaining number of cases, which is where you, I, they, Stemwedel, the U of Toronto, and everyone else part company.

  23. bob koepp Says:

    DrugMonkey – I won’t burden you with philo of ethics and all that crap, since nothing in this discussion comes within a mile of any topic in philosophical ethics. Where, as a philo type, have I proffered _any_ explanation of human behavior, or appealed to any dualistic concepts? And did I say something about apes or monkeys being “like us” rather than us being “like them” (and isn’t ‘like’ symmetrical?)? Why, instead of addressing the simple points I’ve raised, do you go off on these tangents, attacking positions that I’ve never taken? Why, instead of offering reasons for your stated position, do you spin your insulting caricature of philosophy? Perhaps you have a tidy S-R explanation of your behavior.
    Now, if you really are interested in how moral behavior might be inculcated without relying primarily on a regime of reward and punishment, consider mentors and teaching by modeling the behaviors you want to nurture. And if you want to give this some neuropsych substance, consider how mirror neurons might figure in the teaching of ethical behavior. There’s actual empirical evidence that bears on these issues.
    Oh, as for your digression on religious traditions… Perhaps we take a different view of the actual evidence, but it seems to me that religions have been remarkably unsuccessful in getting people to adhere to religious precepts by offers of eternal rewards and threats of eternal punishments. So I don’t see how this provides any support for the view you’ve expressed.

  24. JSinger Says:

    Perhaps we take a different view of the actual evidence, but it seems to me that religions have been remarkably unsuccessful in getting people to adhere to religious precepts by offers of eternal rewards and threats of eternal punishments.
    I certainly take a different view — how saintly do you expect people to be that their actual behavior is so disappointing to you? The vast majority of people are in accordance with most of their respective codes of ethics most of the time, just like most scientists are.
    Also, while I’m no authority on this, it seems like compliance with ethics in the absence of incentives is different from compliance with ethics despite contrary incentives, which is arguably what we’re talking about.

  25. bleh Says:

    i’d love to see rules in writing as per PP’s comment (#7 above). do you anticipate scientific fraud decreasing as a result? i don’t but let’s try the experiment. can’t hurt.

  26. DrugMonkey Says:

    Why, instead of addressing simple points I’ve raised, do you go off on these tangents, attacking positions that I’ve never taken?
    Because when you make oblique comments without actually saying what you mean, one is obliged to make assumptions about the most likely alternatives. So we can either proceed with a series of “dude, wtf are you saying” exchanges, or we can proceed by making a substantive comment that is of interest whether or not that was where you were going.
    Now, if you really are interested in how moral behavior might be inculcated without relying primarily on a regime of reward and punishment, consider mentors and teaching by modeling the behaviors you want to nurture. And if you want to give this some neuropsych substance, consider how mirror neurons might figure in the teaching of ethical behavior. There’s actual empirical evidence that bears on these issues.
    ‘k, now we’re getting somewhere. So you are asserting that “modeling the behaviors” does not rely on a history of experiences showing that respected authority figures’ advice is frequently a good way to avoid bad outcomes (punishments) and to gain good outcomes (rewards)? Really?
    If not, how or why does “modeling” work? Why does the individual pay attention to what any other individual is doing and mimic it? Is this some sort of innate human property? These mirror neurons of which you speak are a genetic imperative to mimic? So if someone has failed to mimic the modeled ethical behavior… what? They have a genetic deficiency in mirror neuron function?
    Perhaps we take a different view of the actual evidence, but it seems to me that religions have been remarkably unsuccessful in getting people to adhere to religious precepts by offers of eternal rewards and threats of eternal punishments. So I don’t see how this provides any support for the view you’ve expressed.
    On the contrary. Religions are remarkably successful in manipulating the behavior of huge numbers of people, many of whom will overtly swear that the eternal rewards and/or eternal punishments are critical.
    What we might disagree over is the base rate in the absence of religion, and of course you cannot perform that experiment over historical and global timescales.
    Which is why I couched my remarks as a query as to why such reward/damnation structures pervade religion, which itself pervades human culture. This has little to do with whether it actually works or not… although I would argue that it does.

  27. linx Says:

    don’t we already have this? for neuroscience, anyway…
    http://www.sfn.org/index.cfm?pagename=responsibleConduct

  28. JSinger Says:

    Over at Young Spaghetti Monster, Gibbiex illustrates both of the points I’m flogging. As with all news, what you hear about is the people who screw up, not all the quiet heroes like him.
    i’d love to see rules in writing as per PP’s comment (#7 above). do you anticipate scientific fraud decreasing as a result?
    I mentioned in a discussion of image editing policies somewhere that I used to engage in behavior that’s now considered inappropriate, and as soon as norms emerged I embraced them. When people are out of compliance due to sincere misunderstanding, I’m sure that education will be extremely valuable. The people putting acrylamide in labmates’ coffee aren’t doing so because no one told them not to, though.

  29. bob koepp Says:

    I don’t know what the genetic background of mirror neurons is, but it seems likely that a predisposition to mimic high-status models is a fairly widely distributed trait among humans (and other apes). Is that a matter of dispute? Or are you saying that the distribution of this trait is a product of operant conditioning? I think that would require some evidence.
    If I’ve been oblique, I apologize. But could you provide an example where I didn’t say what I actually meant? I haven’t proposed any grand philosophical theories; just questioned whether the threat of penalties is the best way to reduce the incidence of violations. I haven’t even maligned S-R psychology, even if I don’t think it’s viable as a global theory of behavior. But that’s old hat.

  30. S. Rivlin Says:

    The code of ethical conduct is an unwritten one, and every scientist and wannabe scientist knows it very well. With or without the Hippocratic Oath, some medical doctors will ignore it, just as some scientists will ignore the rules of ethical conduct of science. A Hippocratic Oath for scientists will do nothing to reduce scientific misconduct. The Hippocratic Oath for medical doctors is mainly a ceremonial event, just as the Pledge of Allegiance is; those who do not recite it or do not participate in it are not crooked doctors, or citizens ready to commit treason.
    Crooks exist in every walk of life. I do not believe that there are fewer crooks among scientists than there are among postal workers or military personnel.
    The only difference between science and any other walk of life is the lightness of the penalties, if any, that crooked scientists face compared to other professionals. In many ways, the crooked scientist is just like George W. Bush; he is secretive and in almost full control of anything that could expose him.
    linx, the SFN, like most academic societies and institutions, has a document that describes the dos and don’ts and spells out the steps to be taken when a possible misconduct has been committed. This and similar documents are just “white papers”. When the SFN document was put to the test, it proved not worth the paper it was printed on (personal experience).

  31. bob koepp Says:

    Are people who profess some religious belief significantly less likely than the irreligious or the population at large to lie, cheat, kill, … and/or significantly more likely to be kind and respectful toward their fellows? If so, can this be attributed to fear of divine punishment and/or hope for divine rewards?

  32. S. Rivlin Says:

    No, no, yes and no! Religious people are not less likely than secular people to lie, cheat, or kill; they are not significantly more likely to be kind and respectful; but more than a few who are kind and respectful are so out of fear of punishment rather than hope of a divine reward.

  33. thanks for changing the subject Says:

    S. Rivlin @#31, you totally missed the point of my post. my post was a response to PP’s call for explicit guidelines in scientific research. i don’t know what your “personal experience” was. have you seen so many cases personally that you think it’s of no use? how about doing something about it instead of turning up your nose at it? implementation is a different topic altogether.

  34. neurolover Says:

    I think there should be a rule like Godwin’s law for the mention of mirror neurons in any cognitive neuroscience discussion.
    ugh — mirror neurons feature in the teaching of ethical behavior?
    quoting “chaff” at the McDonnell Foundation: “And then… we remembered our Neuro 101! – Neurons don’t actually sing or dance or think or feel – individual neurons – even super-duper mirror neurons – either fire action potentials or they don’t. Now we know that writing about neurons like they are little people with their own thoughts and actions pulling the strings that make we people-puppets do what we do is just a journalistic invention – still it makes us a little squeamish. Wait…could it be these feelings are simply the jealous reactions of our squirm neurons? Well, maybe when everyone else understands it – so will we.”
    http://www.jsmf.org/badneuro/chaff.htm

  35. S. Rivlin Says:

    thanks for changing the subject,
    Although I have seen my share of scientific misconduct cases, one does not have to see them with one’s own eyes to understand that misconduct in science is a major problem that will not be solved by making people stand, raise their right hand, and promise to behave. Scientists, as is true of all people, have different thresholds where temptation is concerned. The ones with the low threshold all know what is allowed and what is not. All those who commit scientific misconduct do so after weighing all the risks, including the risk of being caught, and deciding that their actions are worth the risks. Many of these fraudsters are repeat offenders, just like most other criminals, and they repeat their offenses because they got away with their previous ones.
    As long as the scientific community continues to treat scientists who commit misconduct with kid gloves, or simply denies that such misconduct is a serious offense, and as long as institutions prefer to brush misconduct cases under the rug, these offenders will get away with their misconduct.
    At this very moment, a federal criminal investigation is taking place at my university in the case of a dean who allegedly misused federal funds. Have you heard in the past of a criminal investigation such as this? Two years ago a PI here was caught and convicted of a similar offense. Unfortunately, severe punishment is the only language criminals understand and the only tool, if used correctly, that will curtail scientific misconduct.
