A new way to display the 3-D structure of molecules: Metal-organic frameworks provide a new platform for solving the structure of hard-to-study samples

Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have created a sort of nanoscale display case that enables new atomic-scale views of hard-to-study chemical and biological samples. Their work, published online Aug. 18 in the journal Science, could help to reveal new structural details for a range of challenging molecules--including complex chemical compounds and potentially new drugs--by stabilizing them inside sturdy structures known as metal-organic frameworks (MOFs).

The researchers introduced a series of different molecules that were chemically bound inside these porous MOFs, each measuring about 100 millionths of a meter across, and then used X-ray techniques to determine the precise molecular structure of the samples inside the MOFs. The samples ranged from a simple alcohol to a complex plant hormone, and the new method, dubbed "CAL" for covalent alignment (the molecules form a type of chemical bond known as a covalent bond in the MOFs), enables researchers to determine the complete structure of a molecule from a single MOF crystal that contains the sample molecules in its pores.

The MOFs in the study, which are identical and easy to manufacture in large numbers, provided a sort of backbone for the sample molecules that held them still for the X-ray studies--the molecules otherwise can be wobbly and difficult to stabilize. The researchers prepared the samples by dipping the MOFs into solutions containing different molecular mixes and then heating them until they crystallized. "We wanted to demonstrate that any of these molecules, no matter how complex, can be incorporated and their structure determined inside the MOFs," said Omar Yaghi, a materials scientist at Berkeley Lab and chemistry professor at UC Berkeley who led the research.

The MOFs also possess a particular handedness known as "chirality"--like a left-handed person vs. a right-handed person--that selectively binds with molecular samples that share this handedness. The difference in a molecule's handedness is particularly important for pharmaceuticals, as it can mean the difference between a medicine and a poison. "This is one of the holy grails: how to crystallize complex molecules, and to determine their chirality," Yaghi said.

Seungkyu Lee and Eugene A. Kapustin, Berkeley Lab researchers and UC Berkeley graduate students who participated in the latest work, said hard-to-study proteins, such as those important for drug development, are high-priority targets for the new technique. "We are aiming for those molecules that have never been crystallized before," Kapustin said. "That's our next step. So we can not only show the arrangement of atoms, but also the handedness of molecules, in which pharmaceutical companies are interested."

One of the best methods for studying any molecule's 3-D structure in atomic detail is to form it into a crystal. Then, researchers point intense X-ray light at the crystal, which produces a pattern of spots--like light reflecting off a disco ball. Such patterns serve as a fingerprint for fully mapping the molecule's 3-D structure. Some molecules are difficult to form into crystals, though, and the process of crystallizing a single molecule can in some cases involve years of effort and expense.
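The diffraction measurement described above rests on Bragg's law, the standard relation between X-ray wavelength, diffraction angle, and the atomic spacing being resolved. The short sketch below is generic crystallography background rather than code or data from the study; the wavelength and angle are assumed, illustrative values.

```python
# Illustrative only: Bragg's law, n * wavelength = 2 * d * sin(theta).
# The smallest spacing resolvable at a given wavelength is d_min = wavelength / 2
# (the limiting case sin(theta) = 1). Values below are assumed examples.
import math

wavelength_nm = 0.1      # assumed hard X-ray wavelength (~1 angstrom), illustrative
theta_deg = 30.0         # example diffraction angle

d_spacing = wavelength_nm / (2 * math.sin(math.radians(theta_deg)))  # first order, n = 1
d_min = wavelength_nm / 2                                            # best-case resolvable spacing

print(f"d at 30 degrees: {d_spacing:.3f} nm")  # 0.100 nm
print(f"d_min:           {d_min:.3f} nm")      # 0.050 nm, i.e. hundredths of a nanometer
```

With X-rays of roughly one angstrom, spacings of a few hundredths of a nanometer become resolvable, which is consistent with the level of structural detail the researchers describe below.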
"To crystallize a molecule typically involves a trial-and-error method," Yaghi said. "Every chemist and biologist has to submit to this process. But in this MOF material you don't need all that--it traps the molecule and orders it. It's a way to bypass that trial-and-error approach to crystallography." Different types of MOFs, with different pore sizes, could be tested to find out which ones work best with different types of samples, Lee said. Importantly, the MOFs in the latest study did not appear to distort the natural, intact structure of the molecules. Researchers say it's possible to determine the complete 3-D structure of a molecule even if the samples only fill about 30 percent of a MOF's pores. Researchers determined the atomic structure of the MOFs and the bound molecules with X-rays at Berkeley Lab's Advanced Light Source (ALS), and they also studied the MOFs using a technique called nuclear magnetic resonance (NMR) at Berkeley Lab's Molecular Foundry. In all, the researchers studied 16 different molecules bound inside the MOF pores, including a plant hormone called jasmonic acid whose chiral structure had never been directly determined before, other plant hormone known as gibberellins, methanol, and other acids and alcohols. The metals in the MOF framework itself can actually serve to enhance the quality of the X-ray images, Kapustin said, adding that in one case the technique allowed researchers to distinguish between two nearly identical plant hormones based on the difference in a single atomic bond. Researchers could see structural details down to hundredths of a nanometer--less than the diameter of some atoms. "You can see with such precision whether it is a double bond or a single bond, or if this is a carbon atom or some other atom," Lee said. "Once you bind a molecule in the MOF, you can learn the absolute structure very precisely since the chirality of the MOF serves as a reference during the structure refinement." ### This work was supported by BASF SE in Germany and the King Abdulaziz City for Science and Technology Center of Excellence for Nanomaterials and Clean Energy Applications. The Advanced Light Source and Molecular Foundry are both DOE Office of Science User Facilities. About Lawrence Berkeley National Laboratory Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit www.lbl.gov. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov. For more information, please click If you have a comment, please us. Issuers of news releases, not 7th Wave, Inc. or Nanotechnology Now, are solely responsible for the accuracy of the content.


Noonan D.M., University of Insubria | Noonan D.M., Science and Technology Center
Journal of the National Cancer Institute | Year: 2015

It is impossible to predict exactly who will develop a cancer and who will not. We know that several "risk factors" may increase the chance of getting cancer and that risk increases with age. However, even with that in mind, we seem to be able to explain only a certain number of cancers. Recently, Tomasetti and Vogelstein published a provocative article in Science stating that a large percentage of cancers may be due to "bad luck" (stochastic mutation events during DNA replication) and only a few to carcinogens, pathogens, or inherited genes, and that this should impact public health policies. However, their intriguing analysis has numerous limitations, some of which have already been commented upon, including the likely biased subset of cancers analyzed and the fact that finding a correlation does not signify a cause-effect mechanism. Here, we point out that there may also be an alternative explanation for the data, the cancer stem cell hypothesis, which postulates that cancers are derived from tissue stem cells and not from somatic differentiated cells. We also highlight the importance of the tissue microenvironment in the growth of transformed cells and outline a table of concurrent factors for several cancers. The message communicated to the public should not be one of helplessness in avoiding cancers, particularly given the now extensive knowledge of known risk factors and of several agents and behaviors that can lower the risk for specific cancers. While some tumors will still be due to chance, prevention should remain a primary goal for public health policies. © 2015 The Author.


News Article | April 6, 2016
Site: http://www.scientificcomputing.com/rss-feeds/all/rss.xml/all

CAMBRIDGE, MA — Intel and the Broad Institute of MIT and Harvard announced at the Bio-IT World Conference & Expo that they are co-developing new tools, and advancing fundamental capabilities, so large genomic workflows can run at cloud scale. Broad Institute also announced collaborations with cloud providers to enable cloud-based access to its Genome Analysis Toolkit (GATK) software package. This is expected to expand access to the GATK Best Practices pipeline.

The new tools Broad is developing with Intel aim to simplify the execution of large genomic workflows such as GATK, and to improve the storage, scalability, and processing of genomic data. This has the potential not only to speed variant detection and biomarker discovery, but also to enable discoveries that would not have been possible with smaller cohorts.

Broad's workflow execution engine, called "Cromwell," is designed to launch genomic pipelines on private or public clouds in a portable and reproducible manner. Broad is working with Intel to extend Cromwell's capabilities to support multiple input languages and execute on multiple back ends simultaneously, enabling researchers to run jobs anywhere. This integrated workflow engine has built-in intelligence capable of finding the optimal way to execute tasks, the most appropriate hardware resources to run those tasks on, and methods to avoid redundant steps. "Orchestrating genomic workflows at cloud scale is complex," said Dr. Eric Banks, Senior Director of Data Sciences and Data Engineering at Broad and a creator of the GATK software package. "We wanted to simplify the execution of common genomic data types like reads and variants and to create an environment that allows any researcher to do this at scale in an easy-to-use way."

Another area of joint innovation is in the processing and storing of genomic variant datasets, which often consist of large, sparse data matrices. Gene sequence variation data is commonly stored as text files for bioinformatics. The declining cost of DNA sequencing has driven an increase in the volume of genomic data sets that researchers want to incorporate, making it increasingly difficult to jointly analyze large volumes of data from text files. Large-scale reads and writes of variant call data, joint genotyping, or variant recalibration require next-generation databases that are built and optimized for genomic data.

Broad and Intel are collaborating on a faster, more flexible, and scalable solution. GenomicsDB is a novel way to store vast amounts of patient variant data and to perform fast processing with unprecedented scalability. Built and optimized for the management of genomic variant data, GenomicsDB runs on top of an array database system optimized for sparse data called TileDB. TileDB was developed by MIT and Intel researchers working at the Intel Science and Technology Center for Big Data, which is based at MIT's Computer Science and Artificial Intelligence Lab. GenomicsDB is now used in the Broad's production pipeline running on an Intel Xeon processor-based cloud environment to perform joint genotyping.

"The time it now takes to perform the variant discovery process went from eight days to 18 hours," Banks said. "However, that's with 100 whole genomes. We routinely process projects with thousands of samples, so that speedup itself is truly transformative. We recently needed to abandon our attempt to run variant discovery on an eight-thousand-sample project, because we estimated it would take 90 days without GenomicsDB. With GenomicsDB, however, it should take under a week. This means we can say 'yes' to our researchers far more often, on far more ambitious projects."

"With the integration of these two tools into the genomic pipeline that we are running on a cloud environment, the orchestration and execution of the workflow is not only simplified but significantly accelerated," said Ben Neale, an institute member at the Broad Institute's Stanley Center for Psychiatric Research and the Broad's Program in Medical and Population Genetics. "We are excited that the research community will be able to start testing GenomicsDB and Cromwell."

Intel is releasing TileDB and GenomicsDB as open source tools. Engineers building the "Collaborative Cancer Cloud," a precision medicine network including Oregon Health & Science University (OHSU), the Dana-Farber Cancer Institute (DFCI), and the Ontario Institute for Cancer Research (OICR), are already using these tools across their collective data sets. Long-term goals are to expand upon these tools to enable joint genotyping with other large genomic research centers in a federated and secure model, regardless of the location of the data. Broad will continue to work with Intel on next-generation computing technologies that address the size, speed, security, and scalability challenges associated with large-scale genomic sequencing data and analytics.

"The progress that we're seeing in our development work with Broad represents another step in the moonshot goal of taming cancer and other maladies," said Eric Dishman, Intel Vice President, Health and Life Sciences. "Harnessing and analyzing massive amounts of genomic data may eventually be a key factor in enabling people around the world to live longer, healthier lives."
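The "large, sparse data matrices" mentioned above can be pictured as a samples-by-genomic-position grid in which most cells are empty, because most samples match the reference at most positions. The sketch below is a minimal conceptual illustration of that layout in Python; it is not the GenomicsDB or TileDB API, and the sample names and positions are invented for the example.

```python
# Conceptual sketch only: a sparse samples-by-position variant matrix.
# This illustrates why sparse, array-oriented storage (the idea behind
# TileDB/GenomicsDB) beats flat text files; it is NOT the GenomicsDB API.
from scipy.sparse import dok_matrix

samples = ["sample_A", "sample_B", "sample_C"]            # hypothetical sample IDs
positions = {1_000_123: 0, 1_000_456: 1, 2_003_789: 2}    # hypothetical positions -> column index

# Rows = samples, columns = positions; values encode genotypes (0 = reference, not stored).
calls = dok_matrix((len(samples), len(positions)), dtype="int8")
calls[0, positions[1_000_123]] = 1   # sample_A heterozygous at position 1,000,123
calls[2, positions[2_003_789]] = 2   # sample_C homozygous alternate at 2,003,789

# Only non-reference entries are stored, so memory grows with the number of
# variant calls rather than with samples x positions.
print(calls.nnz, "stored calls out of", len(samples) * len(positions), "cells")
```

For a sense of scale, the improvement quoted above from eight days (192 hours) to 18 hours on 100 whole genomes is roughly a tenfold speedup.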


News Article
Site: http://www.scientificcomputing.com/rss-feeds/all/rss.xml/all

In a breakthrough for energy-efficient computing, UC Berkeley engineers have shown for the first time that magnetic chips can actually operate at the lowest fundamental energy dissipation theoretically possible under the laws of thermodynamics. The findings, published in the peer-reviewed journal Science Advances, mean that dramatic reductions in power consumption are possible — down to as little as one-millionth the amount of energy per operation used by transistors in modern computers. This is critical for mobile devices, which demand powerful processors that can run for a day or more on small, lightweight batteries. On a larger, industrial scale, as computing increasingly moves into "the cloud," the electricity demands of the giant cloud data centers are multiplying, collectively taking an increasing share of the country's — and world's — electrical grid.

"We wanted to know how small we could shrink the amount of energy needed for computing," said senior author Jeffrey Bokor, a UC Berkeley professor of electrical engineering and computer sciences and a faculty scientist at the Lawrence Berkeley National Laboratory. "The biggest challenge in designing computers and, in fact, all our electronics today is reducing their energy consumption."

Lowering energy use is a relatively recent shift in focus in chip manufacturing, after decades of emphasis on packing greater numbers of increasingly tiny and faster transistors onto chips. "Making transistors go faster was requiring too much energy," said Bokor, who is also the deputy director of the Center for Energy Efficient Electronics Science, a Science and Technology Center at UC Berkeley funded by the National Science Foundation. "The chips were getting so hot, they'd just melt."

Researchers have been turning to alternatives to conventional transistors, which currently rely upon the movement of electrons to switch between 0s and 1s. Partly because of electrical resistance, it takes a fair amount of energy to ensure that the signal between the two states is clear and reliably distinguishable, and this results in excess heat. Magnetic computing emerged as a promising candidate because the magnetic bits can be differentiated by direction, and it takes just as much energy to get the magnet to point left as it does to point right. "These are two equal energy states, so we don't throw energy away creating a high and low energy," said Bokor.

Bokor teamed up with UC Berkeley postdoctoral researcher Jeongmin Hong, UC Berkeley graduate student Brian Lambson and Scott Dhuey at the Berkeley Lab's Molecular Foundry, where the nanomagnets used in the study were fabricated. They experimentally tested and confirmed the Landauer limit, named after IBM Research Lab's Rolf Landauer, who in 1961 found that in any computer, each single bit operation must expend an absolute minimum amount of energy. Landauer's discovery is based on the second law of thermodynamics, which states that, as any physical system is transformed, going from a state of higher concentration to lower concentration, it gets increasingly disordered. That loss of order is called entropy, and it comes off as waste heat. Landauer developed a formula to calculate this lowest limit of energy required for a computer operation. The result depends on the temperature of the computer; at room temperature, the limit amounts to about 3 zeptojoules, or one-hundredth the energy given up by a single atom when it emits one photon of light.
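Landauer's formula is simple enough to check directly: the minimum energy dissipated per irreversible bit operation is E = k_B T ln 2. The snippet below is a quick sanity check of the "about 3 zeptojoules" figure, assuming a room temperature of 300 K; it is not code from the study.

```python
# Landauer's bound on the energy dissipated per irreversible bit operation:
# E_min = k_B * T * ln(2), evaluated at an assumed room temperature of 300 K.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K

E_min = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {E_min:.2e} J")        # ~2.9e-21 J
print(f"                         {E_min * 1e21:.1f} zJ")  # about 3 zeptojoules, as quoted above
```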
The UC Berkeley team used an innovative technique to measure the tiny amount of energy dissipation that resulted when they flipped a nanomagnetic bit. The researchers used a laser probe to carefully follow the direction that the magnet was pointing as an external magnetic field was used to rotate the magnet from "up" to "down" or vice versa. They determined that it took only 15 millielectron volts of energy — the equivalent of 3 zeptojoules — to flip a magnetic bit at room temperature, effectively demonstrating the Landauer limit.

This is the first time that a practical memory bit has been manipulated and observed under conditions that would allow the Landauer limit to be reached, the authors said. Bokor and his team published a paper in 2011 showing that this could theoretically be done, but it had not been demonstrated until now. While this paper is a proof of principle, he noted that putting such chips into practical production will take more time. But the authors noted in the paper that "the significance of this result is that today's computers are far from the fundamental limit and that future dramatic reductions in power consumption are possible."

The National Science Foundation and the U.S. Department of Energy supported this research.


