How To Read A Retraction, Number Fucktillion

July 10, 2009

A retraction was published today in Nature Structural & Molecular Biology that Comrade PhysioProf does not know how to interpret:

Retraction: Cocrystal structure of synaptobrevin-II bound to botulinum neurotoxin type B at 2.0 Å resolution
Michael A Hanson & Raymond C Stevens
In this paper, we described both the three-dimensional crystal structure of a botulinum toxin catalytic domain separated from the holotoxin (BoNT/B-LC, PDB 1F82) and a structure of the toxin catalytic domain in complex with a peptide (Sb2-BoNT/B-LC, PDB 1F83). The complex was later refined and deposited in the Protein Data Bank (PDB 3G94). The apo structure (PDB 1F82) remains valid. However, because of the lack of clear and continuous electron density for the peptide in the complex structure, the paper is being retracted. We apologize for any confusion this may have caused.

What the fuck does “because of the lack of clear and continuous electron density for the peptide in the complex structure, the paper is being retracted” mean? Any crystallomotherfuckinographers care to weigh in? Honest mistake? Fake data? Peer review FAIL? Wut?

42 Responses to “How To Read A Retraction, Number Fucktillion”


  1. ambivalent academic Says:

    I’m not a crystallomotherfuckingographer, but I interpret that to mean:
    “Our model was based on data that provided insufficient resolution for us to determine fit in the way that we reported. We fit the model the way we did not because the data could actually support it, but because the resolution was so poor that we had lots of wiggle room, and well hell, this was the sexiest possible outcome that we could imagine. The data were inconclusive, but we reported them anyway.”
    I would think this is analogous to reporting a statistically significant difference on a sample size of n=2. Sure, when you get enough samples to actually run statistics, that whopping difference you see between your groups might actually hold. But you can’t count on it, so you shouldn’t report it until you can.


  2. B. Says:

    I’m not a crystallographer either, but it sounds to me like they were not sure about the synaptobrevin-II peptide electron density, so the “co-crystal” structure was not correct. The apo (or unbound) structure was correct, though. They probably miscalculated and then went back and realized they’d made a mistake. I don’t think that is so uncommon in protein structures.


  3. S. Rivlin Says:

    Clearly a failure of peer review. The data and their presentation could not support the conclusion regarding the three-dimensional structure; hence, the reviewers should have rejected the paper.


  4. Dr. Zeek Says:

    Somewhat of a crystallomotherfuckingographer (I am an enzymologist trained by a crystallographer). Basically, they over-interpreted their motherfucking data: the blobs of electron density that they saw, while possibly due to the peptide, were most likely due to noise, especially if they are saying the density wasn’t continuous, since it should come in long bumps somewhat matching the backbone, etc. You can model a structure, a very pretty structure, based on shitty electron density if you set the parameters of the program low enough. It got past peer review since you aren’t required to deposit the structure coordinates (i.e., electron density, etc.) until later, while the paper is being published.
    (Sorry for any typos and ramblings; it’s been a long day.)
    This is why, when we peer review crystal structure papers, we ask the authors to send us the PDB files so we can make sure the data are somewhat legit; but if you aren’t a crystallographer or familiar with the programs, you generally look at the structure pictures and assume them to be true.
    With graphs and error bars you can see the shitty data; with protein structures you sometimes have to dig way too deep to uncover the shadiness. Was it dishonest? Possibly; I would lean toward yes. In a case like this, it’s better to err in the paper on the side of “we see a blob; it could be this structure/ligand/piece of shit, or it could be just noise.” Just my two cents.


  5. Jewbacca Says:

    Also not a crystallomotherfuckingographer, but I think you might find your answer here: “Limitations and lessons in the use of X-ray structural information in drug design,” which cites this paper as an example, natch. Sounds like someone followed up (nine years after the original paper) and found deviations from the assumed e- density that resulted in systematic errors in the crystal reconstruction. I guess nobody thought to second-guess this until now.


  6. Jewbacca Says:

    Dr. Zeek,
    Thanks for offering some informed commentary on this. All I know about crystallography is what I remember from undergrad (i.e., not much). It sounds like you’re saying they saw features that were basically aliasing and didn’t think, or didn’t bother, to consider that possibility or look at the data for signs of it. That makes sense, given the relative lack of other crystallography that turns up when skimming the authors’ PubMed records.


  7. mcmillan Says:

    Structure person here, though I don’t do crystallography myself. I’d say ambivalent academic has it pretty much right. They didn’t have good density, so they fit whatever they wanted in there. I think this is a little more than just the miscalculation that B. suggests.
    From a quick skim of the original paper, I’d say it’s not exactly clear whether peer review should have caught it, or whether the reviewers made the legitimate assumption that the authors didn’t screw things up. I don’t know if it’s typical for reviewers to get to see the actual density themselves, rather than just what’s reported in the paper (more than just coordinates), and from what Dr. Zeek says it sounds like they probably didn’t get it. There are some numbers in the statistics given in table 1 that look a little off, but not enough to raise much more suspicion than anything else I’d see in Nature S&MB. But like Dr. Zeek says, if you could look at the actual density you probably would have spotted blobs that the authors were calling parts of the structure that are more likely noise.


  8. Structuremonkey Says:

    PhysioProf,
    This is a classic example of FAILing to remove model bias from density map calculations while performing crystallographic refinement. Plenty of tools exist to prevent this sort of thing, but these authors were too sloppy and/or incompetent to use them. Amazingly, the authors continued to stand by this result for years even as it became totally obvious (at least to crystallographers who examined their primary data) that it was bogus. They withdrew the original structure, 1F83.pdb, without retracting the corresponding publication. Later, they even went so far as to deposit a second structure (3g94.pdb) and included a phony (or badly miscalculated) bias-reduction map along with the revised structure. Methinks this crosses a line between mere incompetence and scientific misconduct. In any case it represents the second retraction from these authors on this topic–see J. Am. Chem. Soc., 2002, 124 (34), p 10248 for the other one. Take home message: there are a few bad apples in the crystallomotherfuckingography world that must be *heavily* scrutinized.


  9. bioephemera Says:

    Useful Modesty Abbreviations:
    “IANAL” = I am not a lawyer
    “IANACMFG” = I am not a crystallomotherfuckingographer
    “IASWOACMFG” = I am somewhat of a crystallomotherfuckingographer
    “WTF” = I can’t seem to force my eyes to read this post and comments all the way through because I never studied crystallomotherfuckingography


  10. Russell Says:

    I’m also not a crystaloMFographer, but that sounds like a high-falutin’ way of saying, “we have better data and we now think we were wrong about some of what we said then, and want to publish this mealy-mouthed retraction so that no one thinks we’re sticking to the errors.”


  11. S. Rivlin Says:

    Structuremonkey,
    I believe that the majority of scientists would fight for their ideas, results, models, conclusions and hypotheses no matter how much evidence exists to refute them. The scientific literature is loaded with papers that over the years were proven to be wrong in one aspect or another; if every wrong paper had to be retracted, libraries would shrink significantly, volume-wise. That said, I think the present retracted paper is an example of a case in which the authors probably knew soon after its publication that the structure they proposed was wrong, yet, despite possibly many requests by peers, editors and others to retract it, the fact that it was published in Nature probably made it much more painful to do. Thus the length of time that passed from publication to retraction.


  12. RobC Says:

    Crystallographer here. Two things first:
    First, the lab in question has done some really nice work. Second, there are limitations to all structures: resolution, taking a static image of a dynamic object, and experimental uncertainty.
    My take is either:
    1) Willful over-interpretation of data: they saw globby, discontinuous electron density (from water, ions, or partially bound peptide), tossed in a peptide, and proceeded to describe its orientation and binding.
    2) Semi-incompetence, or use of the wrong/poor programs. Crystallography involves interpreting maps combined with a model. To refine phases, you combine model and experimental phases. If you aren’t careful, you can get a situation where what you put in is what you see. This is why papers often present “omit maps” of the density: such a map omits a key part of the model, and if you leave the peptide out and the density comes back, it might be real. (Though how you generate the map is an issue; totally naive or simulated-annealing omit maps are best. If you take refined data from a workup that has already seen the “omitted” part and don’t do simulated annealing, the bias persists.) I’m recently trained, and I really don’t know what the status of the programs was 9-10 years ago. Better programs and techniques are still being developed: http://scripts.iucr.org/cgi-bin/paper?wd5088
    To me, the real test is always what comes after the structure. Lesson learned in almost all crystallography kerfuffles is that if the biochemistry or biology doesn’t match the structure, someone better take a second look.
    I’m not sure you can blame the peer reviewers; the statistics, figures and maps they saw may have looked fine (i.e., nice density from a biased map). People can deposit their data, in addition to the model, in the PDB. Maybe the peer review process needs to start calling for specific types of maps, better descriptions of how those maps were generated, and public deposition of reflection files or structure factors. (There’s a toy sketch of the phase-bias problem just below.)
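    For the non-CMFGs, here is a toy one-dimensional illustration of that phase-bias problem, in plain numpy rather than any real crystallography package. The “experiment” gives you amplitudes only; phases come from the model, so a phantom feature built into the model can reappear in the map even though the data contain no trace of it. Leaving the phantom out of the phasing model (a naive omit map, not the simulated-annealing kind) makes it vanish:

        # Toy 1-D model-bias demo: amplitudes from the "true" structure,
        # phases from a model that may contain a phantom "peptide".
        import numpy as np

        grid = np.linspace(0.0, 1.0, 256, endpoint=False)

        def peak(center, width=0.01):
            return np.exp(-((grid - center) ** 2) / width)

        true_density = peak(0.3) + peak(0.6)        # the actual "protein"
        model = peak(0.31) + peak(0.59)             # imperfect protein model
        biased_model = model + 0.5 * peak(0.8)      # plus a phantom "peptide"

        f_obs = np.abs(np.fft.fft(true_density))    # experiment: amplitudes only

        def density_map(phase_source):
            # Combine measured amplitudes with phases calculated from a model.
            phases = np.angle(np.fft.fft(phase_source))
            return np.fft.ifft(f_obs * np.exp(1j * phases)).real

        site = np.argmin(np.abs(grid - 0.8))        # where the phantom was built
        print("density at phantom site, biased phases:", density_map(biased_model)[site])
        print("density at phantom site, omit phases:  ", density_map(model)[site])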



  13. Lesson learned in almost all crystallography kerfuffles is that if the biochemistry or biology doesn’t match the structure, someone better take a second look.

    In the case of that Chang dude that fucked up about three Science/Nature structures with some kind of erroneous minus sign in his homebrew computer programs, hadn’t the actual biochemists and biologists been saying all along that his structures couldn’t possibly be correct?


  14. Lab Lemming Says:

    How important is it to draw a line between sloppy science, bad science, and willful fakery?
    If anyone wants to weigh in on the topic of how to catch bad science as a peer reviewer, swing by the lounge at:
    http://lablemminglounge.blogspot.com/2009/07/should-reviewers-phone-friend.html



  15. “I think that the present retracted paper is an example of a case in which the authors probably knew early after its publication that the structure they proposed is wrong yet, despite possibly many requests by peers, editors and others to retract it, the fact that it was published in Nature probably made it much more painful to do. Thus the length of time that passed from publication to retraction.”


  16. expat postdoc Says:

    you guys need to hang around structural biologists more often.
    Locher, who somewhat blew the whistle on Chang, is an interesting guy, much more so than Chang himself. But I find that the whole subfield (membrane-protein crystallographers) is a unique subculture among life scientists. No (absolutely no) exchange of scientific information. And their talks suck, a lot … ugh.
    I find it more interesting that the group involved in the incident spends more than 1M USD per month on GPCR stuff … ugh.
    Guys, time for some new targets.


  17. biochem belle Says:

    Not a CMFG, but I was taught how to tell the difference between a structure that’s as good as the authors claim, one that’s not as good as the authors claim but still probably reasonable, and one that should be approached with skepticism. (As an aside, I think this is something that every biomedical scientist, regardless of their field, should learn given the boom in macromolecular structures being published.)
    Also of interest (though there is a little disagreement about the interpretation) are findings summarized in this CEN article. From this and other sources, it’s not so uncommon for resolution (and interpretation) to be overextended in high-impact journals like C/N/S, because let’s face it, how often are they willing to accept 3-3.5 Å structures these days unless they’re of massive complexes or transmembrane proteins?
    And yes, PhysioProf, you’re right about the Chang dude. I heard him speak at a conference on those same structures about 6 months before the retraction. When someone in the audience inquired about the seeming inconsistencies between those structures and the body of biochemical and biophysical research, the guy asking the question was basically told–We’re right. You’re wrong. Shut up and sit down.
    (I took a few liberties with the paraphrasing but the sentiment was clear-even to a grad student.)


  18. Batman Says:

    Why are so many sketchy results coming out of Scripps these days? At least in the case of Chang’s ‘great pentaretraction’ they withdrew the results quickly after Locher’s structure made it obvious they were wrong. As for Stevens, this is consistent with a broader pattern of ethically questionable behavior. Waiting 9 years to retract a bad result (and trying to remove the PDB entry without retracting the paper) is totally irresponsible and inexcusable. But for an example of *extreme* weaselly behavior, you may want to look into how he hijacked the GPCR project from the Kobilka group. Now that’s one of the more depressing stories I’ve heard…


  19. msphd Says:

    LOL.
    I have it on good authority that people in the know pointed this out at the time; see, for example, this comment in NSMB.
    Personally, I think the low occupancy (30-40%? You’ve got to be kidding) and high B-factors alone are major red flags (there’s a quick way to scan for those, sketched after this comment), but the Rupp and Segelke letter also says the stereochemical restraints are not within reasonable ranges… That’s enough to make me think somebody did not know what they were doing.
    Always makes me sad to see these things happening. I think it says as much about the reviewers as it does about the authors, and a lot about the climate in science these days. Publication reigns supreme, and is almost irreversible unless you’re willing to risk putting your own neck on the guillotine.
    On the other hand, I find it heartening when something gets retracted even way after the fact. It actually gives me hope that some people still care about getting the right answers.
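    Along the lines of msphd’s red flags, here is a quick scan for low-occupancy or high-B atoms in a deposited model. It’s a minimal sketch: the column positions are the standard fixed-width PDB format, but the cutoffs (0.5 and 80.0) are arbitrary illustrative thresholds, not field-standard values.

        # Flag atoms whose refined occupancy is suspiciously low or whose
        # B-factor is suspiciously high, straight from a PDB-format file.

        def flag_suspect_atoms(pdb_path, min_occ=0.5, max_b=80.0):
            suspects = []
            with open(pdb_path) as fh:
                for line in fh:
                    if line.startswith(("ATOM", "HETATM")) and len(line) >= 66:
                        occ = float(line[54:60])   # occupancy, PDB columns 55-60
                        b = float(line[60:66])     # B-factor, PDB columns 61-66
                        if occ < min_occ or b > max_b:
                            suspects.append(line.rstrip())
            return suspects

        # e.g.: for atom in flag_suspect_atoms("1f83.pdb"): print(atom)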



  20. Comrade PhysioProf Says:

    “Regardless of the procedure or programs used (wARP, sigmaA, CNS, XtalView/SHELXL), none of the maps that we have produced reveals electron density of the apparent clarity shown in the published report at the reported contour levels (Fig. 1c).”

    Thanks for pointing out this letter to NS&MB. Does this sentence imply that they just faked the shit? Would a PI in a structural biology lab typically look at this level of detail if a post-doc brings her a fancy-ass picture, or only look at the picture?


  21. Structuremonkey Says:

    S. Rivlin,
    While I agree that the original reviewers of this manuscript probably should have been a bit more rigorous, I would say this case represents an ultimate success for the peer-review process. In the end, the scientific method is designed to expose and marginalize fraudulent or erroneous results regardless of whether the problem is discovered before or after publication. A responsible crystallographer should retract a structure as soon as it becomes apparent that it is fundamentally wrong, as was the case here. Failing to do so would violate the trust non-structure people place in crystallographers–thereby making the offending investigators huge douches.


  22. ponderingfool Says:

    Thanks for pointing out this letter to NS&MB. Does this sentence imply that they just faked the shit? Would a PI in a structural biology lab typically look at this level of detail if a post-doc brings her a fancy-ass picture, or only look at the picture?
    **********************************************
    Depends on the lab. One junior faculty member, who over-refined her own model as a trainee, would look at this level of detail from her trainees; she would also point out her mistake in class, show the images, and explain why it is wrong to do. Another faculty member, a full professor with a large lab and lots of other commitments, did not look at such detail unless a trainee asked, and even then that faculty member was not well versed with the computers.


  23. DSKS Says:

    “A responsible crystallographer should retract a structure as soon as it becomes apparent that it is fundamentally wrong, as was the case here.”
    I think a paper should be retracted if its conclusions are flawed as a result of gross error or fraud, but not simply for being wrong. The legitimacy of findings presented in good faith is sufficiently determined by subsequent publications.
    I’d like to reiterate the sentiments of #21 that peer review doesn’t end but properly begins with publication. Some dodgy papers manage to squeak through the first line of critiquing, but so what? Letters and subsequent publications are supposed to rectify that.
    I’m not convinced that peer review is a failure, and I’d sooner continue to see crap papers slipping through the net than have heightened, self-righteous pedantry cause v. good papers to get held up in the works by overzealous reviewers and paranoid editors.


  24. msphd Says:

    CPP, I don’t think they faked it. I think they just misinterpreted noise (as someone else pointed out). It is possible in most fields that involve modeling and visualization to over-process data to the point where noise looks like signal. Especially if you don’t really know what you’re doing!
    One of the major problems with crystallography is that even published data is rarely ever reproduced. There is no requirement for robustness. To solve a structure, you generally need two crystals, total. Can you imagine if the rest of biology worked that way? If we only did two samples and then published a paper on that? There’s no way that would ever fly!
    DSKS, I’m really on the fence about this question of whether peer review should be more stringent or less so.
    I’ve seen my field set back what might end up being 10 years (like this field) by a couple of crappy papers that should not have gone through but were held up as The Truth, with everyone else forced to justify why our results are different. As if Precedence (i.e., prior publication) makes their results True and ours somehow automatically Wrong.
    What kills me is that the original reviewers brought up all the relevant points, and the authors just went to different journals to avoid having to address them. And, perhaps worse than that, most people just put in citations without having read carefully or thought about the work to the same extent as the reviewers at those first journals apparently did.
    I think we have only two choices at this point: either we have to go to completely open publishing, or we have to systematically enforce some kind of uniform, quantitative standards and format (e.g., a minimum number of samples, and a requirement that the primary data be shown for all quantitative reporting).
    Anything in between, namely arbitrary standards applied arbitrarily, which is what we have now, just leads to unfair bias, power plays, and slows down scientific progress.


  25. Anonymous Says:

    “One of the major problems with crystallography is that even published data is rarely ever reproduced. There is no requirement for robustness. To solve a structure, you generally need two crystals, total. Can you imagine if the rest of biology worked that way? If we only did two samples and then published a paper on that? There’s no way that would ever fly!”
    This is the standard in more fields than just crystallography. Primate neurophys comes to mind, for example. I’m sure there are others. They all share a characteristic, which is that the experiments are *** difficult to do, and there are lots and lots of reasons why an attempt at replication would fail. So, merely being unable to replicate a published study doesn’t by itself undermine the published study. Thus, the rigor of the publication itself matters more than in fields where a “true” result is expected to be replicated (as a base for future experimentation, for example).


  26. Structuremonkey Says:

    For the crystalloMFographers out there–it’s actually an interesting exercise to examine the pdb entry 3G94 structure factor file. As long as this entry is still up, you can download the mmCIF and calculate your own model bias reduction maps for the ligand by any method you prefer. See how your peptide density compares to the “OMIT map” Hanson & Stevens included in the mmCIF. You may be surprised by the amount of discrepancy–which seems to shed some light on the issue of whether or not this was an “honest mistake” or something more pathological. Maybe it started as a mistake (and I generally like to give people the benefit of the doubt) but here it seems there was some deliberate slithering going on. Since the entire focus of the manuscript was on a protein-protein complex, and one of the proteins is clearly not present, it’s a no-brainer that this entry should be removed from the PDB and the paper retracted.
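    If you want to try that exercise, grabbing the structure-factor file is the easy part. A minimal sketch follows; the RCSB download URL pattern below is an assumption about the current server layout, and since the entry may be removed, expect this to start failing at some point.

        # Fetch the deposited structure factors for PDB entry 3G94 (if the
        # entry is still available) so you can calculate your own maps.
        import urllib.error
        import urllib.request

        url = "https://files.rcsb.org/download/3G94-sf.cif"
        try:
            with urllib.request.urlopen(url) as resp:
                with open("3G94-sf.cif", "wb") as out:
                    out.write(resp.read())
            print("Got it; now calculate your own bias-reduction maps.")
        except urllib.error.HTTPError as err:
            print(f"Entry unavailable (HTTP {err.code}); it may have been removed.")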


  27. RobC Says:

    Anon presents an indictment of crystallography I’ve heard before: that n is low, with data collected on only 1 or 2 crystals. You should know that data collection on a few crystals involves hundreds to thousands of independent images, each containing many, many reflections (data points); see the back-of-the-envelope numbers after this comment. Collecting more would enhance redundancy, but probably wouldn’t do much for the structure. The number of cases where multiple independent crystal forms have been solved is understandably limited.
    I really do think people need to have more respect for the biochemistry that should come with any structure. Treat a crystal structure as a data-backed model; structure-based mutagenesis had better fit the structure. That said, I think the field worked here. A bad structure slipped through; very quickly, a notice that the structure was suspect was published, followed by structures from another group. I think crystallography lends itself to retractions because it is one of the few fields in which primary data can be stored, shared, and reprocessed, and in which something can be shown to be objectively wrong.
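    A back-of-the-envelope version of RobC’s point, with purely illustrative numbers (not taken from any particular data set): even “n = 2 crystals” amounts to a very large number of independent measurements.

        # Rough arithmetic: two crystals' worth of diffraction images still
        # yields hundreds of thousands of measured reflections.
        images_per_crystal = 360      # e.g., 1-degree oscillations over 360 degrees
        reflections_per_image = 500   # a typical order of magnitude
        unique_reflections = 30_000   # depends on cell size and resolution limit

        observations = 2 * images_per_crystal * reflections_per_image
        redundancy = observations / unique_reflections
        print(f"{observations} observations, ~{redundancy:.0f}-fold redundancy")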


  28. Anonymous Says:

    “I think crystallography lends itself to retractions because it is one of the few fields that primary data can be stored, shared, and reprocessed-and that something can be shown to be objectively wrong.”
    I think we really need to be moving towards more sharing of primary data. Structuremonkey’s description of downloading the data and calculating your own “model bias reduction maps” (“IANACMFG”) is fascinating, and could be done in other fields. It used to be difficult to do (because standardized data sets were difficult to develop and distribute), but now I think there are fields where data could be shared, but isn’t, because people have gotten used to not sharing.
    Sharing primary data allows a kind of “replication” in fields where real replication is going to be difficult — allowing the primary data to be examined as absolutely thoroughly as possible.


  29. Anonymous Says:

    If the results are inconclusive, then the paper is weak. Hence the retraction.


  30. msphd Says:

    Anon wrote: They all share a characteristic, which is that the experiments are *** difficult to do, and there are lots and lots of reasons why an attempt at replication would fail. So, merely being unable to replicate a published study doesn’t by itself undermine the published study.
    Um, this makes no sense. It’s *too hard* to reproduce? Are you fucking kidding me??
    The whole idea of the scientific method, as far as I understand it, is that it should be REPRODUCIBLE???!!!
    Otherwise, it’s just anecdata!
    In other words, if YOU can’t reproduce it, how can it be verifiable? Falsifiable? It can’t. Therefore, it shouldn’t be publishable.
    However, it could still be deposited in a database and shared, and it could still be counted as scientific productivity.
    I totally agree that we need to share primary data. Any and all of it. I don’t understand why we continue to submit our work to restriction by the rules laid out by antiquated publishing company costs. Online databases are the way to go. Sooner, please.


  31. whimple Says:

    No, please don’t bury me under all your primary data. Be a good scientist, publish clear work, and make your data / published reagents available upon request. That’ll be fine.


  32. Nat Says:

    No, please don’t bury me under all your primary data. Be a good scientist, publish clear work, and make your data / published reagents available upon request. That’ll be fine.

    Are you currently trapped under the combined weight of every PubMed abstract in the database? Or every web page on the internet?


  33. whimple Says:

    Are you currently trapped under the combined weight of every PubMed abstract in the database? Or every web page on the internet?
    Yes. Badly trapped. Can barely keep up just screening the titles of weekly pubmed automated searches returning 300 – 400 papers per week.


  34. Anonymous Says:

    poor whimple. perhaps you should give up the primary *literature* and just read the New York Times for all your science information.


  35. expat postdoc Says:

    I think that anyone involved with crystallography would admit that it’s not really SCIENCE … especially anyone at an SGC.
    It’s not hypothesis-driven and has its origins in engineering and technical physics.
    Therefore, if we agree that it’s not hypothesis-driven research, which in most cases it isn’t … especially with membrane proteins … then we can agree that it doesn’t need to be REPRODUCIBLE, just OBSERVABLE.


  36. Nat Says:

    Can barely keep up just screening the titles of weekly pubmed automated searches returning 300 – 400 papers per week.

    If you’re barely getting through the titles, then why would adding raw data behind the figures matter?


  37. whimple Says:

    If you’re barely getting through the titles, then why would adding raw data behind the figures matter?
    It matters because asking for raw data to be added in creates more work but doesn’t add significant value. Only a small fraction of people who thoroughly read a paper are going to have any interest in the raw data, and it’s simple enough to provide the raw data to those people upon request. For everyone else, the published figures and tables are perfectly fine. Essentially, the published work should contain all the information necessary and sufficient for reproduction of the work and to build upon the work, and that doesn’t include raw data. Requests for “supplemental online information” are already getting out of control. Why make it worse?


  38. neurolover Says:

    “Requests for “supplemental online information” are already getting out of control. Why make it worse?”
    I don’t see the primary-data request as a form of additional analysis, as in the supplemental online information; in fact, I see the primary data as a substitute for the endless reanalysis sometimes requested in supplemental analyses.
    The extra work involved in making the data available to everyone is exactly the reason why making it available upon request doesn’t fulfill the mission behind having data easily available. The only reason data-on-request creates less work is that one relies on people not asking for it, because they know it’s going to take work (and thus time) and, furthermore, create a reciprocal relationship between the provider (who needs to do extra work to get you the data) and the user. That makes it harder to use the data to show that someone’s statistics are wrong (compared to downloading them from a database).
    The extra work involved is field specific, but I’m convinced that with the changes in data analysis (for example, the use of Matlab rather than proprietary programs to analyze a lot of data) a couple of committees could work with, for example, fMRI folks to produce standardized data sets. We know that doesn’t happen because people want to retain exclusive rights to their data sets. That motivation makes the “data available” less meaningful, since no one really believes that you’re going to get the data quickly in a format you can really use without a lot of work.


  39. Nat Says:

    It matters because asking for raw data to be added in creates more work but doesn’t add significant value.

    OK, it’s just that now you’ve shifted the perspective. Initially it seemed like your complaint came from a literature consumer being buried under the literature. Now your objection is that having raw data available is too much work for the producer.
    This range of producer-side complaints is definitely worth discussing. Most seem to fall in the “it’s too much work for too little gain” vein, and it may well be too much work. Currently, I don’t have a strong opinion either way.
    On the other hand, the consumer-side complaints about linking raw data to final publications (which CPP inveighed against some time ago) seem ungrounded to me.


  40. Eric Toth Says:

    msphd,
    “One of the major problems with crystallography is that even published data is rarely ever reproduced. There is no requirement for robustness. To solve a structure, you generally need two crystals, total. Can you imagine if the rest of biology worked that way? If we only did two samples and then published a paper on that? There’s no way that would ever fly!”
    Do me a favor and find the person who told you that you “generally need two crystals total” to solve a structure and punch them in the mouth. Then go ask a crystallographer how it’s actually done. What you generally need is actually two fantastic data sets, which might have come after collecting data on 500 crystals.
    All macromolecular crystallographic data are bad, which is why crystallography is still hard despite some excellent technological advances in the recent past. It’s an imaging technique, so you’re essentially taking a picture. If stuff moves, you can’t see it so well. If the stuff you can’t see so well is limited to the end of a surface lysine whose precise conformation doesn’t contribute much to the story of how that protein works, it’s no biggie: just don’t include what you can’t see very well in the model and proceed. The problem in this paper is that what they couldn’t see very well was the most important part, so their model building devolved into wishful thinking. The fact that the fantasy part contributed so little to the overall scattering by mass, and was downweighted by the low occupancy, made some of the objective statistical criteria that flag mistakes in crystallography less useful (the numbers after this comment give a feel for just how little such atoms contribute). Of course, some crystallographers noticed early on that other statistical criteria (high temperature factors, bad geometry) indicated a problem. Crystallography worked in this case, but the peer review and retraction process didn’t, because the authors didn’t pull the plug when they should have. In their very slight defense, they might have been periodically trying to improve on this structure for the last nine years, and the PI finally decided he’d had enough: they couldn’t get a clear picture of the peptide, so they retracted the paper.
    And don’t kid yourself about the data not being reproduced. If someone spots a crystal structure paper like this where things look fishy, that’s the first thing they’ll do (provided, of course, they work on that particular system and have the ability to produce the reagents; I doubt that group was the only one on the planet working on the project). Think about it: the authors just gave you the info required to produce and crystallize this protein and the peptide complex. Working out those conditions can take years, so the impetus for a competitor to go and either disprove or improve upon the data is strong. It’s not as common now, because statistical validation of experimentally determined protein structures is pretty robust, but many in the past have gone back and re-solved suspect structures (sometimes their own) and published the new results.
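    To put rough numbers on that downweighting: an atom’s contribution to each structure factor scales with its occupancy and with the Debye-Waller factor exp(-B·(sinθ/λ)²). The occupancy and B values below are ballpark figures taken from what’s quoted upthread (30-40% occupancy, high B), not from the actual deposition.

        # How much does a low-occupancy, high-B atom contribute to the
        # measured scattering, relative to a full-occupancy atom at rest?
        import math

        def relative_contribution(occupancy, b_factor, d_spacing):
            s = 1.0 / (2.0 * d_spacing)   # sin(theta)/lambda at resolution d
            return occupancy * math.exp(-b_factor * s * s)

        # "Peptide" atom: ~35% occupancy, B = 80, at the 2.0 A resolution limit.
        print(relative_contribution(0.35, 80.0, 2.0))   # ~0.002
        # Well-ordered protein atom: full occupancy, B = 20, same resolution.
        print(relative_contribution(1.0, 20.0, 2.0))    # ~0.29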


  41. PhilJ Says:

    As a card carrying CMFG:
    Truth lies somewhere between 2 xtals and 500 xtals. And in general life is considered hard if I end up shooting more than 100. (The number of *structures* I’ve done lies between 2 and 100, as well).
    “All macromolecular crystallographic data are bad” is accurate when comparing them to well-defined physics data, and wildly inaccurate compared to pretty much every other piece of data in biology. The data I am working with on this computer right now, for instance, are actually relatively good in terms of defining a pretty accurate structure (1.75 Angstrom resolution). Better than any other method for determining 3D structures of macromolecules. Period. When I’m done with it and it gets published, the model (PDB file) and the data are made publicly available so that anyone can look at the fit and decide for themselves. Try doing that with most other branches of biology.
    Reproducing crystals is still tough, and potentially time- and money-consuming for anything other than small soluble proteins, however. It is by no means a normal approach to validation.


  42. Eric Toth Says:

    PhilJ,
    I gots my xtal card too, done my time with good crystals and lousy ones, easy structures and rough ones. 500 crystals was, admittedly, an exaggeration, but probably not for the ribosome or RNA Pol II. I’ve never been fortunate enough to solve a structure with “two crystals total”. Hell, I wouldn’t even send that few to the synchrotron unless that’s all I had.
    All macromolecular crystallographic data are bad, in the statistical sense, compared to small-molecule data. Just because your free R is acceptable doesn’t mean that much of the data that went into the structure-solution process weren’t of sub-optimal quality; it just means we have a robust way of making a model that agrees with the data (most of the time). You’re reading too much into that sentence. I was merely trying to emphasize the point that the data we’re interested in, the data that give the details we want, are generally weak, and thus sometimes it’s hard to interpret the important parts of the structure. I mean, what’s your I/sig and Rsym between 2 and 1.75 Å, where all of the important info, and most of the reflections, are? (Rsym is sketched below for the non-CMFGs.) Not stellar, I bet, but likely fine for the field, and you can still get the info you need to tell a compelling story about your protein. In the case of the retraction, it was probably a nightmare.
    As far as reproducing crystals, I was thinking of Chang’s structure for a recent example, and Rubisco (chain traced backwards) and photoactive yellow protein (really wrong) for older examples that sprang to mind without doing a literature search. Also, I was speaking specifically of competitors, who would have the resources and incentive to reproduce crystals and prove someone wrong, and mostly in the past tense. I wasn’t saying it was normal, but it happened. If your grant funding depended on structural work on a certain protein, and your main competitor scooped you, and you knew they completely blew it, you’re telling me you wouldn’t set out to prove them wrong?
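    Since Rsym keeps coming up: it is the merging R-factor, a measure of how well repeated measurements of the same reflection agree. A minimal version, assuming a dict that maps each unique reflection index to its list of observed intensities:

        # Rsym = sum |I_i - <I>| / sum I_i, over all repeated measurements.
        def r_sym(measurements):
            num = den = 0.0
            for intensities in measurements.values():
                mean_i = sum(intensities) / len(intensities)
                num += sum(abs(i - mean_i) for i in intensities)
                den += sum(intensities)
            return num / den

        # Tight agreement gives a low Rsym; noisy outer-shell data push it up.
        example = {(1, 0, 0): [100.0, 104.0, 98.0], (2, 1, 3): [11.0, 7.0, 9.0]}
        print(r_sym(example))   # ~0.03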


