News Article | May 10, 2017
Site: www.greencarcongress.com

Chemists from the US Department of Energy's Brookhaven National Laboratory and their collaborators have definitively identified the active sites of a catalyst commonly used for making methanol from CO2. The results, published in the journal Science, resolve a longstanding debate about exactly which catalytic components take part in the chemical reactions—and thus which should be the focus of efforts to boost performance.

The hydrogenation of carbon dioxide is a key step in the production of methanol; catalysts made from copper (Cu) and zinc oxide (ZnO) on alumina supports are often used. There has been a debate over the actual active sites for the reaction on the catalyst: different groups of scientists had proposed two different active sites—a portion of the system with just copper and zinc atoms, or a portion with copper zinc oxide.

To determine which part of the molecular structure binds the reactants and breaks and makes bonds to convert them to product—and how it does that—co-author Jose Rodriguez performed a series of laboratory experiments using well-defined model catalysts, including one made of zinc nanoparticles supported on a copper surface and another with zinc oxide nanoparticles on copper. To tell the two apart, he used an energetic x-ray beam to zap the samples and measured the properties of the emitted electrons. These electronic "signatures" contain information about the oxidation state of the atoms the electrons came from—whether zinc or zinc oxide.

Meanwhile, Liu, Jingguang Chen of Brookhaven Lab and Columbia University, and Shyam Kattel, the first author of the paper and a postdoctoral fellow co-advised by Liu and Chen, used computational resources at Brookhaven's Center for Functional Nanomaterials (CFN) and the National Energy Research Scientific Computing Center (NERSC)—two DOE Office of Science User Facilities—to model how these two types of catalysts would engage in the CO2-to-methanol transformations. These theoretical studies use calculations that take into account the basic principles of breaking and making chemical bonds, including the energy required, the electronic states of the atoms, and the reaction conditions, allowing the scientists to derive reaction rates and determine which catalyst will give the best rate of conversion.

Those predictions matched what Rodriguez observed in the laboratory: all the sites participating in these reactions were copper zinc oxide. Optimizing the copper/zinc oxide interface will therefore become the driving principle for designing a new catalyst, the scientists say.

Rodriguez said that the team will try different configurations of the atoms at the copper/zinc oxide interface to see how that affects the reaction rate. Additionally, they will move from studying the model system to systems that would be more practical for use by industry. An essential tool for this next step will be Brookhaven's National Synchrotron Light Source II (NSLS-II), another Office of Science User Facility. NSLS-II produces extremely bright beams of x-rays—about 10,000 times brighter than the broad-beam laboratory x-ray source used in this study. Those intense x-ray beams will allow the scientists to take high-resolution snapshots that reveal both structural and chemical information about the catalyst, the reactants, and the chemical intermediates that form as the reaction occurs.

An additional co-author, Pedro Ramírez of Universidad Central de Venezuela, made important contributions to this study by helping to test the activity of the copper zinc and copper zinc oxide catalysts. This research was supported by the DOE Office of Science.
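For reference, the overall reaction at issue, the hydrogenation of carbon dioxide to methanol over a Cu/ZnO/alumina catalyst, has the standard stoichiometry (the equation is added here for clarity and is not taken from the paper):

\[
\mathrm{CO_2} + 3\,\mathrm{H_2} \;\longrightarrow\; \mathrm{CH_3OH} + \mathrm{H_2O}
\]

The two competing proposals assigned the bond-breaking and bond-making steps of this reaction either to metallic copper-zinc sites or to the copper/zinc oxide interface; the new results point to the latter.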


News Article | May 15, 2017
Site: www.sciencedaily.com

Simulations run at the U.S. Department of Energy's Lawrence Berkeley National Laboratory as part of a unique collaboration with Lawrence Livermore National Laboratory and an industry consortium could help U.S. paper manufacturers significantly reduce production costs and increase energy efficiency.

The project is one of the seedlings for the DOE's HPC for Manufacturing (HPC4Mfg) initiative, a multi-lab effort to use high performance computing to address complex challenges in U.S. manufacturing. Through HPC4Mfg, Berkeley Lab and LLNL are partnering with the Agenda 2020 Technology Alliance, a group of paper manufacturing companies that has a roadmap to reduce their energy use by 20 percent by 2020. The papermaking industry ranks third among the country's largest energy users, behind only petroleum refining and chemical production, according to the U.S. Energy Information Administration.

To address this issue, the LLNL and Berkeley Lab researchers are using advanced supercomputer modeling techniques to identify ways that paper manufacturers could reduce energy and water consumption during the papermaking process. The first phase of the project targeted "wet pressing," an energy-intensive step in which water is squeezed out of the wood pulp by mechanical pressure into press felts, which absorb water from the system like a sponge, before the sheet is sent through a drying process.

"The major purpose is to leverage our advanced simulation capabilities, high performance computing resources and industry paper press data to help develop integrated models to accurately simulate the water papering process," said Yue Hao, an LLNL scientist and a co-principal investigator on the project. "If we can increase the paper dryness after pressing and before the drying (stage), that would provide the opportunity for the paper industry to save energy." If manufacturers could increase the paper's dryness by 10-15 percent, he added, it would save them up to 20 percent of the energy used in the drying stage—up to 80 trillion BTUs (British thermal units) per year—and as much as $400 million for the industry annually.

For the HPC4Mfg project, the researchers used a computer simulation framework developed at LLNL, which integrates mechanical deformation and two-phase flow models, together with a full-scale microscale flow model developed at Berkeley Lab to resolve the complex pore structures in the press felts. Berkeley Lab's contribution centered on a flow and transport solver for complex geometries developed by David Trebotich, a computational scientist in the Computational Research Division at Berkeley Lab and co-PI on the project. This solver is based on the Chombo software libraries developed in the lab's Applied Numerical Algorithms Group and is the basis for other application codes, including Chombo-Crunch, a subsurface flow and reactive transport code that has been used to study reactive transport processes associated with carbon sequestration and fracture evolution. This suite of simulation tools has been shown to run at scale on supercomputers at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility.

"I used the flow and transport solvers in Chombo-Crunch to model flow in paper press felt, which is used in the drying process," Trebotich explained. "The team at LLNL has an approach that can capture the larger scale pressing or deformation as well as the flow in bulk terms. However, not all of the intricacies of the felt and the paper are captured by this model, just the bulk properties of the flow and deformation. My job was to improve their modeling at the continuum scale by providing them with an upscaled permeability-to-porosity ratio from pore scale simulation data."

Trebotich ran a series of production runs on NERSC's Edison system and provided his LLNL colleagues with numbers from these microscale simulations at compressed and uncompressed pressures, which improved their model, he added. "This was true 'HPC for manufacturing,'" Trebotich said, noting that the team recently released its final report on the first phase of the pilot project. "We used 50,000-60,000 cores at NERSC to do these simulations. It's one thing to take a research code and tune it for a specific application, but it's another thing to make it effective for industry purposes. Through this project we have been able to help engineering-scale models be more accurate by informing better parameterizations from micro-scale data."

Going forward, to create a more accurate and reliable computational model and develop a better understanding of these complex phenomena, the researchers say they need to acquire more complete data from industry, such as paper material properties, high-resolution micro-CT images of paper and experimental data derived from scientifically controlled dewatering tests. "The scientific challenge is that we need to develop a fundamental understanding of how water flows and migrates," Hao said. "All the physical phenomena involved make this problem a tough one because the dewatering process isn't fully understood due to lack of sufficient data."

The study was conducted with funding from the DOE's Advanced Manufacturing Office within the Office of Energy Efficiency and Renewable Energy.
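As a rough illustration of the upscaling step Trebotich describes (a sketch, not the team's actual code), a volume-averaged pore-scale flow result can be reduced to an effective permeability via Darcy's law and handed to the continuum pressing model as a permeability-to-porosity ratio. All names and numbers below are hypothetical.

def upscale_felt_sample(flux_m_per_s, delta_p_pa, thickness_m, viscosity_pa_s, porosity):
    """Return (k, k/phi) from volume-averaged pore-scale results, via Darcy's law."""
    # Darcy's law in magnitudes: q = (k / mu) * (dP / L)  =>  k = q * mu * L / dP
    permeability_m2 = flux_m_per_s * viscosity_pa_s * thickness_m / delta_p_pa
    return permeability_m2, permeability_m2 / porosity

# Hypothetical values for an uncompressed and a compressed felt sample (water at ~20 C).
samples = {
    "uncompressed": dict(flux_m_per_s=1.0e-4, delta_p_pa=5.0e4, thickness_m=2.0e-3, porosity=0.55),
    "compressed":   dict(flux_m_per_s=4.0e-5, delta_p_pa=5.0e4, thickness_m=1.2e-3, porosity=0.35),
}
for label, s in samples.items():
    k, k_over_phi = upscale_felt_sample(viscosity_pa_s=1.0e-3, **s)
    print(f"{label}: k = {k:.2e} m^2, k/phi = {k_over_phi:.2e} m^2")

The point is only the division of labor: the pore-scale solver supplies effective permeability and porosity for each compression state, and the engineering-scale model consumes the ratio.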


News Article | April 20, 2017
Site: phys.org

With the help of an automated supernova-hunting pipeline and a galaxy sitting 2 billion light years away from Earth that is acting as a "magnifying glass," astronomers have captured multiple images of a Type Ia supernova—the brilliant explosion of a star—appearing in four different locations on the sky. So far this is the only Type Ia discovered that has exhibited this effect.

This phenomenon, called gravitational lensing, is an effect of Einstein's general theory of relativity—mass bends light. This means that the gravitational field of a massive object—like a galaxy—can bend light rays that pass nearby and refocus them somewhere else, causing background objects to appear brighter and sometimes in multiple locations. Astrophysicists believe that if they can find more of these magnified Type Ia's, they may be able to measure the rate of the Universe's expansion to unprecedented accuracy and shed some light on the distribution of matter in the cosmos.

Fortunately, by taking a closer look at the properties of this rare event, two Lawrence Berkeley National Laboratory (Berkeley Lab) researchers have come up with a method—a pipeline—for identifying more of these so-called "strongly lensed Type Ia supernovae" in existing and future wide-field surveys. A paper describing their approach was recently published in the Astrophysical Journal Letters. Meanwhile, a paper detailing the discovery and observations of the 4-billion-year-old Type Ia supernova, iPTF16geu, was published in Science on April 21.

"It is extremely difficult to find a gravitationally lensed supernova, let alone a lensed Type Ia. Statistically, we suspect that there may be approximately one of these in every 50,000 supernovae that we identify," says Peter Nugent, an astrophysicist in Berkeley Lab's Computational Research Division (CRD) and an author on both papers. "But since the discovery of iPTF16geu, we now have some thoughts on how to improve our pipeline to identify more of these events."

For many years, the transient nature of supernovae made them extremely difficult to detect. Thirty years ago, the discovery rate was about two per month. But thanks to the Intermediate Palomar Transient Factory (iPTF), a new survey with an innovative pipeline, these events are being detected daily, some within hours of when their initial explosions appear.

The process of identifying transient events, like supernovae, begins every night at the Palomar Observatory in Southern California, where a wide-field camera mounted on the robotic Samuel Oschin Telescope scans the sky. As soon as observations are taken, the data travel more than 400 miles to the Department of Energy's (DOE's) National Energy Research Scientific Computing Center (NERSC), which is located at Berkeley Lab. At NERSC, machine learning algorithms running on the facility's supercomputers sift through the data in real time and identify transients for researchers to follow up on.

On September 5, 2016, the pipeline identified iPTF16geu as a supernova candidate. At first glance, the event didn't look particularly out of the ordinary. Nugent notes that many astronomers thought it was just a typical Type Ia supernova sitting about 1 billion light years away from Earth. Like most supernovae that are discovered relatively early on, this event got brighter with time. Shortly after it reached peak brightness (19th magnitude), Stockholm University Professor in Experimental Particle Astrophysics Ariel Goobar decided to take a spectrum—or detailed light study—of the object. The results confirmed that the object was indeed a Type Ia supernova, but they also showed that, surprisingly, it was located 4 billion light years away. A second spectrum taken with the OSIRIS instrument on the Keck telescope on Mauna Kea, Hawaii, showed without a doubt that the supernova was 4 billion light years away, and also revealed its host galaxy and another galaxy located about 2 billion light years away that was acting as a gravitational lens, which amplified the brightness of the supernova and caused it to appear in four different places on the sky.

"I've been looking for a lensed supernova for about 15 years. I looked in every possible survey, I've tried a variety of techniques to do this and essentially gave up, so this result came as a huge surprise," says Goobar, who is lead author of the Science paper. "One of the reasons I'm interested in studying gravitational lensing is that it allows you to measure the structure of matter—both visible and dark matter—at scales that are very hard to get."

According to Goobar, the survey at Palomar was set up to look at objects in the nearby Universe, about 1 billion light years away. But finding a distant Type Ia supernova in this survey allowed researchers to follow up with even more powerful telescopes that resolved small-scale structures in the supernova host galaxy, as well as the lens galaxy that is magnifying it.

"There are billions of galaxies in the observable universe and it takes a tremendous effort to look in a very small patch of the sky to find these kind of events. It would be impossible to find an event like this without a magnified supernova directing you where to look," says Goobar. "We got very lucky with this discovery because we can see the small scale structures in these galaxies, but we won't know how lucky we are until we find more of these events and confirm that what we are seeing isn't an anomaly."

Another benefit of finding more of these events is that they can be used as tools to precisely measure the expansion rate of the Universe. One of the keys to this is gravitational lensing. When a strong gravitational lens produces multiple images of a background object, each image's light travels a slightly different path around the lens on its way to Earth. The paths have different lengths, so light from each image takes a different amount of time to arrive at Earth.

"If you measure the arrival times of the different images, that turns out to be a good way to measure the expansion rate of the Universe," says Goobar. "When people measure the expansion rate of the Universe now locally using supernovae or Cepheid stars, they get a different number from those looking at early universe observations and the cosmic microwave background. There is tension out there and it would be neat if we could contribute to resolving that quest."

According to Danny Goldstein, a UC Berkeley astronomy graduate student and an author of the Astrophysical Journal Letters paper, there have only been a few gravitationally lensed supernovae of any type ever discovered, including iPTF16geu, and they've all been discovered by chance. "By figuring out how to systematically find strongly lensed Type Ia supernovae like iPTF16geu, we hope to pave the way for large-scale lensed supernova searches, which will unlock the potential of these objects as tools for precision cosmology," says Goldstein, who worked with Nugent to devise a method for finding them in existing and upcoming wide-field surveys.

The key idea of their technique is to use the fact that Type Ia supernovae are "standard candles"—objects with the same intrinsic brightness—to identify ones that are magnified by lensing. They suggest starting with supernovae that appear to go off in red galaxies that have stopped forming stars. These galaxies host only Type Ia supernovae and make up the bulk of gravitational lenses. If a supernova candidate that appears to be hosted in such a galaxy is brighter than the "standard" brightness of a Type Ia supernova, Goldstein and Nugent argue that there is a strong chance the supernova does not actually reside in the galaxy, but is instead a background supernova lensed by the apparent host.

"One of the innovations of this method is that we don't have to detect multiple images to infer that a supernova is lensed," says Goldstein. "This is a huge advantage that should enable us to find more of these events than previously thought possible."

Using this method, Nugent and Goldstein predict that the upcoming Large Synoptic Survey Telescope should be able to detect about 500 strongly lensed Type Ia supernovae over the course of 10 years—about 10 times more than previous estimates. Meanwhile, the Zwicky Transient Facility, which begins taking data in August 2017 at Palomar, should find approximately 10 of these events in a three-year search. Ongoing studies show that each lensed Type Ia supernova image has the potential to yield a four percent, or better, measurement of the expansion rate of the universe. If realized, this could add a very powerful tool to probe and measure the cosmological parameters.

"We are just now getting to the point where our transient surveys are big enough, our pipelines are efficient enough, and our external data sets are rich enough that we can weave through the data and get at these rare events," adds Goldstein. "It's an exciting time to be working in this field."

iPTF is a scientific collaboration between Caltech; Los Alamos National Laboratory; the University of Wisconsin, Milwaukee; the Oskar Klein Centre in Sweden; the Weizmann Institute of Science in Israel; the TANGO Program of the University System of Taiwan; and the Kavli Institute for the Physics and Mathematics of the Universe in Japan. NERSC is a DOE Office of Science User Facility.

Daniel A. Goldstein et al., "How to Find Gravitationally Lensed Type Ia Supernovae," The Astrophysical Journal Letters (2016). DOI: 10.3847/2041-8213/834/1/L5
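A minimal sketch of the brightness-excess test described above, with assumed numbers and thresholds rather than the authors' actual pipeline values: compare a candidate's observed peak magnitude with the apparent magnitude an unlensed Type Ia would have at the apparent host's distance, and flag the candidate if it is far too bright.

import math

M_TYPE_IA = -19.3       # assumed absolute peak magnitude of a normal Type Ia
EXCESS_THRESHOLD = 1.0  # assumed: flag candidates more than ~1 magnitude too bright

def expected_apparent_mag(lum_distance_mpc):
    """Apparent peak magnitude of an unlensed Type Ia at a given luminosity distance."""
    distance_modulus = 5.0 * math.log10(lum_distance_mpc) + 25.0  # distance in Mpc
    return M_TYPE_IA + distance_modulus

def looks_lensed(observed_peak_mag, host_lum_distance_mpc):
    """True if the candidate is much brighter than a Type Ia in the apparent host should be."""
    excess = expected_apparent_mag(host_lum_distance_mpc) - observed_peak_mag
    return excess > EXCESS_THRESHOLD

# Hypothetical candidate: peaks at 19th magnitude, but its apparent (red, passive) host
# sits at a luminosity distance of roughly 2,000 Mpc.
print(looks_lensed(observed_peak_mag=19.0, host_lum_distance_mpc=2000.0))  # True -> follow up

In practice the comparison would be done in a specific band with redshifts and corrections drawn from a galaxy catalog; the sketch only captures the logic of flagging an over-bright Type Ia in a passive apparent host.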


News Article | April 19, 2017
Site: phys.org

It sounds like an easy task: after all, any animal with basic vision can see a moving object, decide whether it is food or a threat, and react accordingly. But what comes easily to a scallop is a challenge for the world's biggest supercomputers.

Crutchfield, along with physics graduate student Adam Rupe and postdoc Ryan James, is designing new machine learning systems to allow supercomputers to spot large-scale atmospheric structures, such as hurricanes and atmospheric rivers, in climate data. The UC Davis Complexity Sciences Center, which Crutchfield leads, was recently named an Intel Parallel Computing Center and is collaborating with Intel Research, the Department of Energy's National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory, Stanford University, and the University of Montreal. The entire Big Data Center project is led by Prabhat, leader of the Data and Analytics Services Group at Berkeley Lab. The team works on NERSC's Cori supercomputer, one of the top five fastest machines in the world, with over 600,000 CPU cores.

Modern science is full of "big data." For climate science, that includes both satellite- and ground-based measurements that span the planet, as well as "big" simulations. "We need new kind of machine learning to interpret very large data and planet-wide simulations," Crutchfield said. Climate and weather systems evolve over time, so the machines need to be able to find patterns not only in space but over time. "Dynamics are key to this," Crutchfield said. Humans (and other visual animals) recognize dynamic changes very quickly, but it's much harder for machines.

Pattern Discovery is more than Pattern Recognition

With existing technology, computers recognize patterns based on an existing template. That's how voice recognition systems work: by comparing your voice to an existing catalog of sounds. These pattern recognition systems can be very useful, but they can't identify anything truly new, that is, anything not represented in their template. Crutchfield and his team are taking a different approach, based on pattern discovery. They are working on algorithms that allow computers to identify structures in data without knowing what they are in advance. "Learning novel patterns is what humans are uniquely good at, but machines can't do it," he said.

Using pattern discovery, a supercomputer would learn how to identify hurricanes or other features in climate and weather data. It might also identify new kinds of structures that are too complex for humans to perceive at all. While this application is in global climate modeling, Crutchfield hopes to make it a new paradigm for analyzing very large datasets. "Usually, you apply known models to interpret the data. To say that you will extract your model directly from the data is a radical claim," he said.

The collaboration is part of the Intel Parallel Computing Centers program, which provides funding to universities, institutions, and research labs to modernize key community codes used across a wide range of disciplines to run on industry-standard parallel architectures.
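To make the distinction concrete, here is a toy contrast between the two ideas (an illustration only, not the UC Davis group's algorithms): template-based recognition scores a signal against a known pattern, while a crude stand-in for discovery looks for stretches of data whose statistics stand out even though no template was supplied.

import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(size=500)
signal[200:220] += 5 * np.hanning(20)   # hide one coherent "structure" in the noise

# Pattern recognition: slide a known template over the data and score the match.
template = 5 * np.hanning(20)
match_scores = np.correlate(signal, template, mode="valid")
print("best template match starts near index", int(match_scores.argmax()))

# Toy pattern "discovery": no template; flag windows whose local energy is unusually high.
window = 20
energy = np.array([np.sum(signal[i:i + window] ** 2) for i in range(len(signal) - window)])
outliers = np.flatnonzero(energy > energy.mean() + 3 * energy.std())
print("windows with unusual structure start near indices", outliers[:5])

The real systems work on spatiotemporal climate fields and far richer notions of structure, but the contrast between matching a catalog and letting the data define what counts as a pattern is the same.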


Swingedouw D., LSCE IPSL | Rodehacke C.B., Max Planck Institute for Meteorology | Behrens E., Leibniz Institute of Marine Science | Menary M., UK Met Office | And 5 more authors.
Climate Dynamics | Year: 2013

The recent increase in the rate of the Greenland ice sheet melting has raised with urgency the question of the impact of such a melting on the climate. As former model projections, based on a coarse representation of the melting, show very different sensitivity to this melting, it seems necessary to consider a multi-model ensemble to tackle this question. Here we use five coupled climate models and one ocean-only model to evaluate the impact of 0.1 Sv (1 Sv = 10^6 m^3/s) of freshwater equally distributed around the coast of Greenland during the historical era 1965-2004. The ocean-only model helps to discriminate between oceanic and coupled responses. In this idealized framework, we find similar fingerprints in the fourth decade of hosing among the models, with a general weakening of the Atlantic Meridional Overturning Circulation (AMOC). Initially, the additional freshwater spreads along the main currents of the subpolar gyre. Part of the anomaly crosses the Atlantic eastward and enters into the Canary Current constituting a freshwater leakage tapping the subpolar gyre system. As a consequence, we show that the AMOC weakening is smaller if the leakage is larger. We argue that the magnitude of the freshwater leakage is related to the asymmetry between the subpolar-subtropical gyres in the control simulations, which may ultimately be a primary cause for the diversity of AMOC responses to the hosing in the multi-model ensemble. Another important fingerprint concerns a warming in the Nordic Seas in response to the re-emergence of Atlantic subsurface waters capped by the freshwater in the subpolar gyre. This subsurface heat anomaly reaches the Arctic where it emerges and induces a positive upper ocean salinity anomaly by introducing more Atlantic waters. We found similar climatic impacts in all the coupled ocean-atmosphere models with an atmospheric cooling of the North Atlantic except in the region around the Nordic Seas and a slight warming south of the equator in the Atlantic. This meridional gradient of temperature is associated with a southward shift of the tropical rains. The free surface models also show similar sea-level fingerprints notably with a comma-shape of high sea-level rise following the Canary Current. © 2012 Springer-Verlag.
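For scale, a back-of-the-envelope conversion of the imposed hosing flux (not a figure from the paper):

\[
0.1\,\mathrm{Sv} = 0.1 \times 10^{6}\,\mathrm{m^{3}\,s^{-1}} = 10^{5}\,\mathrm{m^{3}\,s^{-1}} \approx 3.2 \times 10^{12}\,\mathrm{m^{3}\,yr^{-1}} \approx 3{,}200\,\mathrm{km^{3}}\ \text{of freshwater per year,}
\]

distributed around the Greenland coast over the four decades of the experiment.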


Thorne P., NERSC Inc | Thorne P., National University of Ireland
Proceedings of the Institution of Civil Engineers: Forensic Engineering | Year: 2015

This briefing outlines the evidence from instrumental records that leads to an unequivocal finding that the world has warmed. It then goes on to address the underlying causes, showing that only through invoking the effects of humans can the last 50 years be adequately explained. Finally, it addresses the recent hiatus/pause in warming of global surface temperatures and the implications. © Thomas Telford Services Ltd. All rights reserved.
