News Article | March 24, 2016
Molecular dynamics, quantum chemistry and quantum materials researchers use HPC resources to perform highly compute-intensive mathematical calculations and to simulate images of molecular structures. While HPC systems have been updated to increase processing speed and parallelization, many HPC codes are not optimized to take advantage of coming hardware advances, such as next-generation Intel Xeon Phi processors. This is the first in a series of articles, told from the perspective of the individuals who are updating the codes, that describes the changes various national laboratories and scientific research centers are making to improve code optimization and parallelization in order to reduce scientists’ time to discovery.

Quicker time to discovery. That’s what scientists focused on quantum chemistry are looking for. According to Bert de Jong, Computational Chemistry, Materials and Climate Group Lead, Computational Research Division, Lawrence Berkeley National Laboratory (LBNL), “I’m a computational chemist working extensively with experimentalists doing interdisciplinary research. To shorten time to scientific discovery, I need to be able to run simulations at near-real-time, or at least overnight, to drive or guide the next experiments.” Changes must be made in the HPC software used in quantum chemistry research to take advantage of advanced HPC systems and meet the research needs of scientists both today and in the future.

NWChem is a widely used open-source computational chemistry package that includes both quantum chemical and molecular dynamics functionality. The NWChem project started around the mid-1990s, and the code was designed from the beginning to take advantage of parallel computer systems. NWChem is actively developed by a consortium of developers and maintained by the Environmental Molecular Sciences Laboratory (EMSL) located at the Pacific Northwest National Laboratory (PNNL) in Washington State.
NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources, from high-performance parallel supercomputers to conventional workstation clusters. “Rapid evolution of the computational hardware also requires significant effort geared toward the modernization of the code to meet current research needs,” states Karol Kowalski, Capability Lead for NWChem Development at PNNL.

Both PNNL and LBNL are Intel Parallel Computing Centers (IPCCs) dedicated to optimizing the NWChem code to use the capability of the Intel Xeon Phi coprocessor effectively. The shared goal of the two teams is to optimize and modernize NWChem for the emerging hybrid parallel computer systems based on the Intel Many Integrated Core (Intel MIC) architecture, enabling the scientific community to pursue new frontiers in chemistry, materials and climate modeling. LBNL's research and development focuses on introducing greater parallelism into the code, starting with simple modifications such as adding or modifying OpenMP pragmas and refactoring to enable effective vectorization, all the way to exploring new algorithmic approaches that can better exploit manycore architectures.
Catalytic materials used in conversion of cellulose to sugars and bio-oils

In-silico design is critical in accelerating the development of new catalysts and chemical reaction and transformation processes that tackle key scientific and engineering challenges in the production of sustainable products in efficient, environmentally friendly and cost-effective ways at industrial scale. Heterogeneous catalysis has a rich history of facilitating energy-efficient selective chemical transformations and contributes to 90 percent of chemical manufacturing processes. Catalysts are central to overcoming the engineering and scientific barriers to economically feasible routes for the conversion of biomass-derived and solar-mediated fuel and chemicals into usable products. “An example is the conversion of cellulose into sugars and bio-oils, which through catalytic processes can be converted into biofuels or building blocks for industrial applications. Accurate simulations of the kinetics and thermodynamics of chemical transformations enable scientists to discover novel ways to predict, control and design optimal (industrially viable) catalytic activity and selectivity by rapidly scanning the large design space. It is crucial for the catalyst development process that compute-intensive computational chemistry simulations with NWChem run as efficiently as possible on the next-generation computing platforms, with the fastest time-to-solution,” states de Jong.

PNNL is rewriting NWChem algorithms to take advantage of the new HPC architectures.
Their work addresses the performance of several computational drivers in NWChem: Gaussian Density Functional Theory (DFT) methods, plane-wave DFT formulations and multi-reference coupled cluster methods. In particular, the PNNL Intel PCC project focuses on developing Intel MIC implementations of the multi-reference coupled cluster (MRCC) methods, which are among the most accurate methods in quantum chemistry. “These methods can be used to describe bond-breaking processes, transition states in molecules, open-shell low-spin electronic states, and excited states. Through combining novel algorithms for harnessing the combined power of Intel Xeon processors and Intel Xeon Phi coprocessors with several levels of parallelism, the PNNL team has achieved a considerable (3X) speedup of the MRCC codes. The achieved progress will allow us to apply expensive CC methodologies to model challenging problems in catalysis and to describe important enzymatic reactions,” indicates Kowalski.

The PNNL team is also using a directive-based offload model and OpenMP 4.0 in its work. The team uses an offload model that keeps large parts of the existing Fortran code unmodified by inserting Intel Language Extensions for Offloading (LEO) constructs. This approach significantly reduces the time needed to develop and validate the code while providing tangible performance speedups from the Intel Xeon Phi coprocessors. While the Intel Xeon Phi coprocessor supports several offload programming models, each with unique properties, only the Intel LEO constructs (a proprietary offload language that predates the availability of the target directives in OpenMP 4.0) provided the needed flexibility in transferring data and control while requiring only incremental changes to the existing Fortran source code.
“Novel offloading algorithms developed in the last two years at PNNL not only helped to reduce the time-to-solution of accurate methods, but also improved the scalability of the targeted implementations,” said Dr. Edoardo Aprà, the lead author of the Supercomputing 2014 paper reporting on the first Intel MIC implementation of the ubiquitous CCSD(T) formalism. PNNL is currently using the OpenMP multithreading features for the parallelization of floating-point-intensive computational kernels. The team plans to convert the Intel LEO directives currently used for offloading to the target directives that are part of the OpenMP 4.0 standard.

The LBNL team is modifying and optimizing the NWChem code to more fully utilize features of the Intel Xeon Phi processor. The team is focused on end-to-end threading of the plane-wave capability, the Hartree-Fock and Density Functional Theory (DFT) Fock build, and the coupled cluster triples correction on Intel Xeon Phi processors. The LBNL team's work is key to getting NWChem ready for the arrival of NERSC’s Cori supercomputer, which will feature the Intel Xeon Phi processor code-named Knights Landing. “Our goal is to at least double the performance of computational chemistry and materials software, specifically NWChem, to accelerate scientific discovery. We want to use the Intel processors as efficiently as possible to get the fastest time-to-solution for the science problems we are trying to simulate,” states de Jong. The LBNL team has found the integral computation to be poorly vectorizable and has encountered thread-safety issues with data structures in the legacy code; the team will integrate other, more vectorizable integral codes and will explore implementing a new algorithm, using AVX-512 instructions, to improve performance.
They are also looking at heterogeneous computing approaches, where different processors work on different operations, instead of simple OpenMP constructs, to increase the amount of parallelism available to the processors. For the Fock matrix build in the Hartree-Fock or DFT algorithm, they achieved a 1.6x speedup running natively on the Intel Xeon Phi coprocessor. LBNL achieved a 3.3x speedup for the triples correction in the coupled cluster algorithm using both the Intel Xeon processor and the Intel Xeon Phi coprocessor. The key to achieving this speedup was optimizing not just the expensive loops (for which the team integrated advances made by the joint PNNL-Intel efforts), but also integrating OpenMP directives in all parts of the triples code, such as those that set up data structures and the summations that occur in the post-processing for the triples energy.

“Our latest work is on the plane-wave algorithm, where the most time-consuming parts are the FFT (Fast Fourier Transform) and matrix multiplications on tall and skinny matrices (200x10,000, for example). Working together with PNNL, we have transformed the complete plane-wave algorithm to utilize OpenMP. Our work at LBNL has focused on the tall and skinny matrices. We developed a reduce algorithm for these special matrix multiplications using OpenMP teams, and we achieved a 5.5x speedup over Intel’s MKL library on the Intel Xeon processor E5-2680 v3, formerly code-named Haswell. We’re currently implementing this in our plane-wave algorithm and will be doing benchmarking on the whole code to assess the overall speedup,” states de Jong.

How HPC will aid chemistry and materials research

The goal at PNNL is to structure its optimization work so that it can easily be integrated into other open-source computational chemistry software efforts. PNNL, LBNL and Georgia Tech are working together on the project to update the NWChem code.
NWChem code optimized to run on advanced hardware, such as the Intel Xeon Phi coprocessor, will be reviewed and released so that it can be used by all NWChem users and researchers. Scientists are currently restricted in their research by the limitations of the supercomputers and HPC software models available to them; they often have to shrink their models to fit the HPC resources at hand. Researchers need next-generation computers to run large computations faster in research areas such as solar cells, or to develop novel complex organic-inorganic materials for batteries, photovoltaics or thermoelectrics that can serve as sustainable energy sources. “Many of our algorithms inherently do not have long vector loops that can be easily transformed using OpenMP. I am not sure OpenMP is the complete solution. We will have to develop models that utilize heterogeneous computing on nodes, with different threads/cores working on different parts of the algorithm simultaneously,” states de Jong.

Modernization of the underlying codes for efficient use of manycore architectures is required for these benefits to be realized. According to Kowalski, “The rapid development of computer technology and the emergence of petascale hybrid architectures offer an enormous increase in computational power, which will have a transformative effect on the whole area of computational chemistry, including a shift in the system-size limit tractable by many-body methods and the opportunity to use accurate many-body techniques, which are characterized by steep numerical scaling. The main goal is to enable accurate methods capable of describing the complicated electron correlation effects stemming from the instantaneous interactions between electrons in molecules.”

Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training and web design firm in Beaverton, OR.
Neeb M., University of Marburg |
Czodrowski P., Computational Chemistry |
Heine A., University of Marburg |
Barandun L.J., ETH Zurich |
And 3 more authors.
Journal of Medicinal Chemistry | Year: 2014
Drug molecules should remain uncharged while traveling through the body and crossing membranes, and should only adopt a charged state upon protein binding, particularly if charge-assisted interactions can be established in deeply buried binding pockets. Such a strategy requires careful pKa design and methods to elucidate whether and where protonation-state changes occur. We investigated the protonation inventory in a series of lin-benzoguanines binding to tRNA-guanine transglycosylase, showing pronounced buffer dependency during ITC measurements. Chemical modifications of the parent scaffold, along with ITC measurements, pKa calculations, and site-directed mutagenesis, allow elucidation of the protonation site. The parent scaffold exhibits two guanidine-type portions, both likely candidates for proton uptake. Even mutually compensating effects resulting from proton release of the protein and simultaneous uptake by the ligand can be excluded. Two adjacent aspartates induce a strong pKa shift at the ligand site, resulting in a protonation-state transition. Furthermore, an array of two parallel H-bonds avoiding secondary repulsive effects contributes to the high-affinity binding of the lin-benzoguanines. © 2014 American Chemical Society.
Loryan I., Uppsala University |
Sinha V., Janssen Pharmaceutical |
Mackie C., Janssen Pharmaceutical |
Van Peer A., Janssen Pharmaceutical |
And 6 more authors.
Molecular Pharmaceutics | Year: 2015
In the present work we sought to gain a mechanistic understanding of the physicochemical properties that influence the transport of unbound drug across the blood-brain barrier (BBB) as well as the intra- and extracellular drug exposure in the brain. Interpretable molecular descriptors that significantly contribute to the three key neuropharmacokinetic properties related to BBB drug transport (Kp,uu,brain), intracellular accumulation (Kp,uu,cell), and binding and distribution in the brain (Vu,brain) for a set of 40 compounds were identified using partial least-squares (PLS) analysis. The tailoring of drug properties for improved brain exposure includes decreasing the polarity and/or hydrogen bonding capacity. The design of CNS drug candidates with intracellular targets may benefit from an increase in basicity and/or the number of hydrogen bond donors. Applying this knowledge in drug discovery chemistry programs will allow designing compounds with more desirable CNS pharmacokinetic properties. © 2014 American Chemical Society.
News Article | April 13, 2016
Toyota, in partnership with the University of Michigan Transportation Research Institute (UMTRI), is working to transform the streets of Ann Arbor, Mich. into the world’s largest operational, real-world deployment of connected vehicles and infrastructure. The Ann Arbor Connected Vehicle Test Environment (AACVTE) is a real-world implementation of connected vehicle safety technologies being used by everyday drivers in Ann Arbor and around Southeast Michigan. Begun in 2014, it expands the Connected Vehicle Safety Pilot Model Deployment to include 9,000 equipped vehicles—including private automobiles, trucks, buses, motorcycles, bicycles and pedestrians—over 27 square miles of equipped infrastructure covering the greater Ann Arbor area. Ann Arbor is already the largest Dedicated Short Range Communication (DSRC) test bed in the world, and the Michigan State government is very active in expanding deployment throughout the State of Michigan. As part of its partnership agreement with UMTRI, Toyota will invite team members and their families to participate in the AACVTE initiative. The Toyota participants will allow their vehicles to be equipped with devices to support accelerated research and deployment of advanced Vehicle-to-Vehicle (V2V)/Vehicle-to-Infrastructure (V2I) systems in the region. The goal is to deploy 5,000 vehicles with vehicle awareness devices throughout the Ann Arbor area. The vehicle awareness device to be installed on participating vehicles is a small box, hidden out of sight in the vehicle’s trunk or rear area, with two small antennas—one on or near the rear windshield and another either on the trunk lid or the vehicle’s roof. 
The device continuously transmits speed and position data from the participating vehicle to other, similarly equipped vehicles, as well as into the surrounding environment, where this information can be recognized by research equipment located along the roadside and at intersections. The information transmission in this study occurs during the participant’s usual everyday driving. While the data broadcast by the vehicles does include a unique identifier, the data gathered in this experiment will be treated confidentially. The results of this study will provide UMTRI and the US Department of Transportation with valuable information for the development of future V2V/V2I communication-based driver assistance and safety systems for passenger vehicles. In 2012, UMTRI and the USDOT launched the Safety Pilot Model Deployment (SPMD), a $30-million connected vehicle research project. SPMD included nearly 3,000 vehicles, the majority of which were owned by families in the AAPS. The research area was situated in northeast Ann Arbor. Building on the success of SPMD, UMTRI and its partners will expand the existing infrastructure footprint from northeast Ann Arbor to the entire 27 square miles of the City of Ann Arbor. Additional vehicles also will be deployed at the rate of 1,500 per year. AACVTE will move from a model deployment to an early operational deployment.
Liu Q., Dana-Farber Cancer Institute |
Liu Q., Harvard University |
Sabnis Y., Computational Chemistry |
Sabnis Y., UCB Pharma |
And 7 more authors.
Chemistry and Biology | Year: 2013
Protein kinases are a large family of approximately 530 highly conserved enzymes that transfer a γ-phosphate group from ATP to a variety of amino acid residues, such as tyrosine, serine, and threonine, serving as a ubiquitous mechanism for cellular signal transduction. The clinical success of a number of kinase-directed drugs and the frequent observation of disease-causing mutations in protein kinases suggest that a large number of kinases may represent therapeutically relevant targets. To date, the majority of clinical and preclinical kinase inhibitors are ATP-competitive, noncovalent inhibitors that achieve selectivity through recognition of unique features of particular protein kinases. Recently, there has been renewed interest in the development of irreversible inhibitors that form covalent bonds with cysteine or other nucleophilic residues in the ATP-binding pocket. Irreversible kinase inhibitors have a number of potential advantages, including prolonged pharmacodynamics, suitability for rational design, high potency, and the ability to validate pharmacological specificity through mutation of the reactive cysteine residue. Here, we review recent efforts to develop cysteine-targeted irreversible protein kinase inhibitors and discuss their modes of recognizing the ATP-binding pocket and their biological activity profiles. In addition, we provide an informatics assessment of the potential "kinase cysteinome" and discuss strategies for the efficient development of new covalent inhibitors. © 2013 Elsevier Ltd.
Alberto Flores-Hidalgo M., International Advanced Research Center for Powder Metallurgy And New Materials |
Glossman-Mitnik D., International Advanced Research Center for Powder Metallurgy And New Materials |
Galvan D.H., National Autonomous University of Mexico |
Barraza-Jimenez D., Computational Chemistry
Journal of the Chinese Chemical Society | Year: 2013
Density Functional Theory employing hybrid and M06 functionals in combination with three different basis sets is used to calculate the ground state of a cage-like (ZnO)12 nanocluster, which has been consistently reported as the most stable cluster for its particular size. B3LYP and B3PW91 hybrid functionals combined with 6-31+G*, Lanl2dz and SDD basis sets are employed to treat the ZnO molecular system. Alternatively, three M06 functionals in combination with three basis sets are employed in the nanostructure calculations. Results obtained by treating the ZnO sodalite-cage nanocluster with M06 functionals demonstrated comparable quality to results obtained with hybrid functionals. Within this study, efficient theoretical DFT methods with the widely known hybrid and the recently created M06 meta-hybrid functionals are employed to study nanostructured ZnO. Our resulting parameters provide a fresh, performance-wise comparison of the different theoretical methods for treating transition metal nanostructures, particularly ZnO nanocluster geometry and electronic structure. © 2013 The Chemical Society Located in Taipei & Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
News Article | April 13, 2016
The 2016 SAE World Congress is again featuring a technical track (PFL 720) on advances in hydrogen fuel cell vehicle technology, followed by a technical expert panel discussion (PFL 799) on the commercialization of fuel cell vehicles and hydrogen infrastructure. The PFL 720 / 799 sessions are on Thursday, 14 April, in room 420A, Cobo Hall. During the PFL 720 sessions at last year’s Congress, Toyota presented a set of four technical papers describing some of the technology innovations used in its production fuel cell hybrid electric vehicle Mirai. (Earlier post.) As was the case last year, the 2016 PFL 720 / PFL 799 combination is chaired by Jesse Schneider of BMW. The PFL 799 panel, an annual wrap-up after the technical sessions, this year features:
News Article | April 13, 2016
CFD software company Convergent Science (earlier post) has formed the Computational Chemistry Consortium (C3). The goal of the consortium is to provide the most accurate and comprehensive computational chemistry combustion and emissions models, tools, and mechanisms to the entire scientific community. The consortium is being established in response to requests from several engine manufacturers to facilitate collaboration and reduce redundancy in their efforts to improve chemistry and emissions modeling. C3 will operate for three years with an optional renewal. Consortium money will be used to fund academic institutions, national laboratories, consortium members, and potentially non-consortium members to conduct R&D. The models, tools, and mechanisms developed by the consortium will be for the exclusive use of its members for the three-year duration of C3. An advisory board will be comprised of all paying consortium members. The board will vote on fuels and operating conditions of interest, with voting rights proportional to the amount of money invested by each member.
News Article | March 31, 2016
The process of photoexcitation can help convert solar energy into usable electricity, or drive molecular machines whose parts rotate when an infusion of light prompts the molecules to change shape. To propel these and other technologies forward, a University at Buffalo researcher has developed a new tool for studying photoexcitation: a computer program called Libra. The open-source software gives scientists the building blocks to design their own algorithms for studying how a material or electronic state evolves over time. Among the program's capabilities: simulating what happens over time when a specific material is exposed to light. Photoexcitation can alter molecules in ways that are invisible to microscopes, but important to understand. These changes, which can sometimes cause molecules to behave differently or adjust their shape, can be modeled using algorithms designed through Libra. "Libra gives researchers a way to study the properties of many novel materials without having to fabricate each one experimentally," said the program's developer, Alexey Akimov, an assistant professor of chemistry in the UB College of Arts and Sciences. "This helps save time and money, as it takes resources to synthesize new materials in the lab." So if a scientist is trying to build a molecular machine with a propeller that moves in a certain direction when exposed to light, he or she could use Libra to identify the molecular structures most likely to perform this task efficiently before deciding which structures to actually create in experiments. Libra's capabilities are described in an article published on March 26 in the Journal of Computational Chemistry. More information: Alexey V. Akimov. Libra: An open-source "methodology discovery" library for quantum and classical dynamics simulations, Journal of Computational Chemistry (2016). DOI: 10.1002/jcc.24367
News Article | November 9, 2015
These findings were revealed in the 6th blind test of crystal structure prediction, an exercise conducted by twenty-five international research groups and organised by the Cambridge Crystallographic Data Centre (CCDC). Crystal structures describe the periodically repeating arrangement of molecules in a material and determine many of a material's properties, such as solubility, dissolution rate, hardness, colour and external shape. The ability to predict crystal structures could therefore enable the design of materials with superior properties, for example the creation of brighter pigments, more effective pharmaceuticals, or even lower-calorie foodstuffs. In particular, the pharmaceutical industry would gain huge benefit from being able to reliably predict crystal structure, because pharmaceutical molecules are prone to crystallise in more than one crystal structure (or polymorph), depending on the conditions under which the molecule is crystallised. The specific polymorph that goes into a formulation must be strictly controlled to ensure consistency of delivery to the patient. The ability to predict crystal structures could save pharmaceutical companies time and money by allowing them to quickly identify and develop polymorphs with superior properties. It would also help pharmaceutical companies with patent protection and product life cycle management. Different approaches to the problem have been developed, and these have been evaluated over the years in international exercises known as the blind tests of crystal structure prediction. Twenty-five research groups who have been developing methods for predicting crystal structures of organic molecules took part in the latest test. Participants were challenged to predict nine recently determined crystal structures of five target compounds, given only the chemical diagram of the molecules and the conditions of crystallisation, with two sets of predictions allowed per target compound. 
Only one group managed to predict nearly all targets correctly. These very successful results were obtained by Dr Marcus Neumann of Avant-garde Materials Simulation and Prof Frank Leusen and Dr John Kendrick of the University of Bradford. Dr Marcus Neumann, author of the computer program GRACE for crystal structure prediction, which predicted eight out of nine targets correctly in this blind test and eight out of ten targets in the previous two blind tests, said: "Obviously, we are delighted with these results, in particular because unlike in earlier blind tests they have been obtained by a fully automated procedure that can be used as a black box in industrial working environments." Frank Leusen, Professor of Computational Chemistry at the University of Bradford, said: "I am particularly impressed that GRACE correctly predicted the crystal structure of a hydrated chloride salt, which poses a real challenge both in terms of the size of the search problem and in terms of the required accuracy. This result will be of particular interest to the pharmaceutical industry, as they often deal with this type of compound." Dr John Kendrick, University of Bradford, added: "Recent developments within the GRACE package meant that the process of predicting the crystal structures in the blind test was nearly automatic; very little intervention was required from the user." Although the whole problem is not solved (the predictions cannot yet explain the influence of solvent, impurities, additives or temperature on the outcome of a crystallisation experiment), these recent results demonstrate significant capabilities in the field. The results of previous blind tests, in 1999, 2001, 2004, 2007 and 2010, demonstrated that the crystal structures of small organic molecules can be predicted under favourable conditions. 
Success rates were low in the first three blind tests, but the fourth blind test in 2007 saw a major breakthrough, with one group predicting all four target crystal structures, each as their most likely prediction. This was achieved by the same group of Drs. Neumann, Kendrick and Leusen, who collaborated to predict eight out of nine targets in the latest blind test. The target compounds in the fifth blind test in 2010 became significantly more complex, but the success rate remained high, with particularly good results for a large flexible molecule which could be regarded as a prototype pharmaceutical compound. In the current, sixth, blind test there were five targets, including a small semi-rigid molecule, a medium-sized flexible molecule with five known polymorphs, a hydrated chloride salt, a co-crystal and a large flexible molecule. The results were discussed at the Blind Test workshop on 27 and 28 October 2015 at the Cambridge Crystallographic Data Centre and featured in the journal Nature.