Los Alamos, NM, United States

Los Alamos National Laboratory is one of two laboratories in the United States where classified work towards the design of nuclear weapons is undertaken. The other, since 1952, is Lawrence Livermore National Laboratory. LANL is a United States Department of Energy national laboratory, managed and operated by Los Alamos National Security and located in Los Alamos, New Mexico. The laboratory is one of the largest science and technology institutions in the world. It conducts multidisciplinary research in fields such as national security, space exploration, renewable energy, medicine, nanotechnology, and supercomputing. LANL is the largest institution and the largest employer in northern New Mexico, with approximately 9,000 direct employees and around 650 contractor personnel. Additionally, roughly 120 DOE employees are stationed at the laboratory to provide federal oversight of LANL's work and operations. Approximately one-third of the laboratory's technical staff members are physicists, one-quarter are engineers, one-sixth are chemists and materials scientists, and the remainder work in mathematics and computational science, biology, geoscience, and other disciplines. Professional scientists and students also come to Los Alamos as visitors to participate in scientific projects. The staff collaborates with universities and industry in both basic and applied research to develop resources for the future. The annual budget is approximately US$2.2 billion.


Carlson J.,Los Alamos National Laboratory | Gandolfi S.,Los Alamos National Laboratory | Pederiva F.,University of Trento | Pederiva F.,National Institute of Nuclear Physics, Italy | And 5 more authors.
Reviews of Modern Physics | Year: 2015

Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents. © 2015 American Physical Society.
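As a schematic illustration of the Monte Carlo machinery this review surveys (not the authors' nuclear codes, which carry spin-isospin, tensor, spin-orbit, and three-body operators), here is a minimal variational Monte Carlo sketch for a 1D harmonic oscillator; the trial wave function and all parameters are assumptions chosen only to show the method.

```python
# Minimal variational Monte Carlo (VMC) sketch for a 1D harmonic oscillator.
# Generic illustration of Metropolis sampling of |psi|^2 and a local-energy
# estimator; not the nuclear QMC codes discussed in the review.
import numpy as np

rng = np.random.default_rng(0)

def psi(x, alpha):
    """Trial wave function exp(-alpha * x^2 / 2)."""
    return np.exp(-0.5 * alpha * x**2)

def local_energy(x, alpha):
    """E_L = -(1/2) psi''/psi + x^2/2 for the trial state above (hbar = m = omega = 1)."""
    return 0.5 * alpha - 0.5 * (alpha**2 - 1.0) * x**2

def vmc_energy(alpha, n_steps=50_000, step=1.0):
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + step * rng.uniform(-1, 1)
        # Metropolis acceptance on |psi|^2
        if rng.uniform() < (psi(x_new, alpha) / psi(x, alpha)) ** 2:
            x = x_new
        energies.append(local_energy(x, alpha))
    # Naive error bar (ignores autocorrelation)
    return np.mean(energies), np.std(energies) / np.sqrt(n_steps)

for alpha in (0.8, 1.0, 1.2):   # alpha = 1 is the exact ground state, E = 0.5
    e, err = vmc_energy(alpha)
    print(f"alpha = {alpha:.1f}:  E = {e:.4f} +/- {err:.4f}")
```

At alpha = 1 the trial state is exact, the local energy is constant at 0.5, and the statistical error collapses; detuned values give higher mean energies, which is the variational principle the nuclear calculations exploit at vastly larger scale.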


Plank T.,Lamont Doherty Earth Observatory | Kelley K.A.,University of Rhode Island | Zimmer M.M.,Los Alamos National Laboratory | Hauri E.H.,Carnegie Institution of Washington | Wallace P.J.,University of Oregon
Earth and Planetary Science Letters | Year: 2013

The last 15 yr have seen an explosion of data on the volatile contents of magmas parental to arc volcanoes. This has occurred due to the intense study of melt inclusions trapped in volcanic phenocrysts, aliquots of magma that have presumably escaped degassing during eruption. The surprising first-order result is the narrow range in H2O concentrations in the least degassed melt inclusions from each volcano. Nearly all arc volcanoes are sourced with mafic magmas that contain 2-6 wt% H2O. The average for each arc varies even less, from 3.2 (for the Cascades) to 4.5 (for the Marianas), with a global average of 3.9 ± 0.4 wt% H2O. Significant variations occur from volcano to volcano within each arc, but the means are indistinguishable within one s.d. The narrow range and common average value for H2O are in stark contrast to the concentrations of most other subduction tracers, such as Nb or Ba, which vary by orders of magnitude. A modulating process, either in the crust or mantle, is likely responsible for the restricted range in the H2O contents of arc melt inclusions. One possibility is that melt inclusion H2O values reflect vapor saturation at the last storage depth in the crust prior to eruption. In this scenario, magmas rise from the mantle with variable H2O contents (>4 wt%), become vapor-saturated and start degassing, and continue to degas up until the depth at which they stall. If the stalling depths are ~6 km, which is common for storage depths beneath volcanoes, magmas would be saturated at ~4 wt% H2O, and melt inclusions, most of which become closed during further ascent, would thus record ≤4 wt% H2O. Another possibility is that the mantle melting process modulates water content in the melt such that magmas rise out of the mantle with ~4 wt% H2O. A strong relationship between the water content of the source, H2O(o), and the degree of melting (F) maintains nearly constant water contents in the melt for a restricted range in mantle temperature. Magmas with 3-4 wt% H2O can be generated at ~50° below the dry solidus for a wide range in F and H2O(o). The narrow range in wedge temperatures may be another manifestation of a planet with average upper mantle of 1400 °C potential temperature. The characteristic mean and range of H2O contents of arc magmas has implications for both the volatile fuel for explosive eruptions and the mass balance of H2O recycled through subduction zones. © 2012 Elsevier B.V.
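The vapor-saturation argument can be made concrete with a back-of-the-envelope calculation: convert a crustal stalling depth to pressure, then to an approximate dissolved-H2O content. The square-root solubility law and its coefficient below are illustrative assumptions (chosen so that ~150 MPa gives ~4 wt%), not the model used in the paper.

```python
# Depth -> lithostatic pressure -> rough H2O saturation content.
rho_crust = 2700.0      # kg/m^3, assumed mean crustal density
g = 9.81                # m/s^2

def pressure_MPa(depth_km):
    return rho_crust * g * depth_km * 1e3 / 1e6

def h2o_saturation_wt(P_MPa, k=0.32):
    """Very rough square-root solubility law, wt% H2O at saturation (assumed)."""
    return k * P_MPa ** 0.5

for depth in (2, 4, 6, 8):
    P = pressure_MPa(depth)
    print(f"{depth} km  ->  {P:6.1f} MPa  ->  ~{h2o_saturation_wt(P):.1f} wt% H2O")
```

With these assumed numbers, a ~6 km stalling depth corresponds to roughly 160 MPa and about 4 wt% dissolved H2O, which is the logic behind the ≤4 wt% ceiling described in the abstract.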


Xia W.,Peking University | Zou R.,Peking University | An L.,Peking University | Xia D.,Peking University | Guo S.,Los Alamos National Laboratory
Energy and Environmental Science | Year: 2015

Rational design of non-noble metal catalysts with an electrocatalytic activity comparable or even superior to Pt is extremely important for future fuel cell-based renewable energy devices. Herein, we demonstrate a new concept that a metal-organic framework (MOF) can be used as a novel precursor to in situ encapsulate Co@Co3O4@C core@bishell nanoparticles (NPs) into a highly ordered porous carbon matrix (CM) (denoted as Co@Co3O4@C-CM). The central cobalt ions from the MOF are used as a metal source to produce Co metal cores, which are later transformed into a fancy Co@Co3O4 nanostructure via a controlled oxidation. The most notable feature of our Co@Co3O4@C-CM is that the highly ordered CM can provide much better transport pathways than the disordered pure MOF derived nanostructure that can facilitate the mass transport of O2 and an electrolyte. As a result, the well-designed Co@Co3O4@C-CM derived from the MOF shows almost identical activity but superior stability and methanol tolerance for the ORR relative to the commercial Pt/C in alkaline medium. Our work reports a novel Co@Co3O4@C nanostructure from a MOF for the first time and also reveals the important role of the introduction of a highly ordered carbon matrix into the MOF derived catalyst in enhancing the ORR activity and stability. To the best of our knowledge, the Co@Co3O4@C-CM is the most efficient non-noble metal nanocatalyst ever reported for the ORR. © 2015 The Royal Society of Chemistry.


Kang Z.-B.,Los Alamos National Laboratory
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2012

We study the single transverse spin asymmetry of prompt photon production in high energy proton-proton scattering. We include the contributions from both the direct and fragmentation photons. While the asymmetry for direct photon production receives only the Sivers type of contribution, the asymmetry for fragmentation photons receives both the Sivers and Collins types of contributions. We make a model calculation for quark-to-photon Collins function, which is then used to estimate the Collins asymmetry for fragmentation photons. We find that the Collins asymmetry for fragmentation photons is very small, thus the single transverse spin asymmetry of prompt photon production is mainly coming from the Sivers asymmetry in direct and fragmentation photons. We make predictions for the prompt photon spin asymmetry at RHIC energy, and emphasize the importance of such a measurement. The asymmetry of prompt photon production can provide a good measurement for the important twist-three quark-gluon correlation function, which is urgently needed in order to resolve the "sign mismatch" puzzle. © 2012 Elsevier B.V.
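For orientation, the single transverse spin asymmetry discussed here has the standard definition below; this is textbook notation, not a formula quoted from the paper:

```latex
A_N \;\equiv\; \frac{d\sigma\!\left(S_T^{\uparrow}\right) - d\sigma\!\left(S_T^{\downarrow}\right)}
                    {d\sigma\!\left(S_T^{\uparrow}\right) + d\sigma\!\left(S_T^{\downarrow}\right)}
```

In the decomposition the abstract describes, the direct-photon piece of A_N carries only a Sivers-type contribution, while the fragmentation-photon piece carries both Sivers- and Collins-type terms, with the Collins term found to be numerically small.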


Ji Z.,Chemistry Division | Doorn S.K.,Los Alamos National Laboratory | Sykora M.,Chemistry Division
ACS Nano | Year: 2015

Polycyclic aromatic hydrocarbons, also called Graphene Molecules (GMs), with chemical composition C132H36(COOH)2 were synthesized in situ on the surface of transparent nanocrystalline indium tin oxide (nc-ITO) electrodes, and their electronic structure was studied electrochemically and spectro-electrochemically. Variations in the potential applied to the nc-ITO/GM electrodes induce only small changes in the observed current, but they produce dramatic changes in the absorption of the GMs, which are associated with their oxidation and reduction. Analysis of the absorption changes using a modified Nernst equation is used to determine standard potentials associated with the individual charge transfer processes. For the GMs prepared here, these were found to be E°1,ox = 0.77 ± 0.01 V and E°2,ox = 1.24 ± 0.02 V vs NHE for the first and second oxidations, and E°1,red = −1.50 ± 0.04 V for the first reduction. The charge transfer processes are found to be nonideal. The nonideality factors associated with the oxidation and reduction processes are attributed to strong interactions between the GM redox centers. Under the conditions of potential cycling, GMs show rapid (seconds) color change with high contrast and stability. An electrochromic application is demonstrated wherein the GMs are used as the optically active component. © 2015 American Chemical Society.
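The spectroelectrochemical analysis described above amounts to fitting the potential dependence of the oxidized fraction (inferred from absorbance) with a Nernst-type sigmoid that includes a nonideality factor. The sketch below shows that kind of fit on synthetic data; the data, parameter values, and variable names are assumptions for illustration, not the paper's dataset or code.

```python
# Fit a modified Nernst sigmoid (nonideality factor nu; nu = 1 is an ideal
# one-electron process) to a synthetic fraction-oxidized curve.
import numpy as np
from scipy.optimize import curve_fit

F, R, T = 96485.0, 8.314, 298.15

def nernst_fraction(E, E0, nu):
    """Fraction oxidized vs applied potential E (V)."""
    return 1.0 / (1.0 + np.exp(-F * (E - E0) / (nu * R * T)))

# Fake "measured" data, as would be derived from absorbance at one wavelength
E = np.linspace(0.5, 1.1, 25)
rng = np.random.default_rng(1)
f_obs = nernst_fraction(E, 0.77, 2.0) + rng.normal(0, 0.02, E.size)

(E0_fit, nu_fit), _ = curve_fit(nernst_fraction, E, f_obs, p0=[0.8, 1.0])
print(f"E0 = {E0_fit:.3f} V vs NHE, nonideality factor nu = {nu_fit:.2f}")
```

A fitted nu greater than 1 broadens the transition, which is the signature the authors attribute to interactions between neighboring redox centers.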


Mu Q.,University of Montana | Zhao M.,University of Montana | Kimball J.S.,University of Montana | McDowell N.G.,Los Alamos National Laboratory | Running S.W.,University of Montana
Bulletin of the American Meteorological Society | Year: 2013

A new global drought severity index (DSI) uses operational satellite remote sensing as its primary input and enhances near real-time drought monitoring and mitigation efforts. The DSI algorithm was developed using satellite-derived evapotranspiration (ET), potential evapotranspiration (PET), and normalized difference vegetation index (NDVI) products to detect and monitor droughts on a global basis, and to overcome several limitations of earlier indices by exploiting the relative volume of operational satellite records and associated vegetation indicators. The input datasets and the DSI model are introduced, and DSI patterns and anomalies are evaluated against alternative global Palmer Drought Severity Index (PDSI) information and documented regional drought events. The MODIS operational net primary production (NPP) product was used as an indicator of vegetation productivity changes under documented severe droughts in the Amazon, Europe, and Russia, and to evaluate corresponding DSI- and PDSI-based vegetation drought responses.
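The general recipe for an index of this kind can be sketched in a few lines: standardize the ET/PET ratio and NDVI against their own climatologies, combine them, and re-standardize. This is a simplified sketch of the approach only; the operational MODIS DSI includes additional screening and scaling steps not shown here.

```python
# Toy drought-severity-index sketch from ET, PET, and NDVI climatologies.
import numpy as np

def zscore(x, axis=0):
    return (x - np.nanmean(x, axis=axis)) / np.nanstd(x, axis=axis)

def drought_severity_index(et, pet, ndvi):
    """et, pet, ndvi: arrays of shape (n_years, n_pixels) for one composite period."""
    ratio = et / pet                      # evaporative fraction, low when dry
    z = zscore(ratio) + zscore(ndvi)      # combine water-flux and greenness anomalies
    return zscore(z)                      # negative values indicate drought

# Toy example: 10 years, 5 pixels of made-up data
rng = np.random.default_rng(2)
et, pet = rng.uniform(1, 4, (10, 5)), rng.uniform(4, 6, (10, 5))
ndvi = rng.uniform(0.2, 0.8, (10, 5))
print(drought_severity_index(et, pet, ndvi).round(2))
```

In this convention, negative values flag drier-than-normal conditions for a given pixel relative to its own history.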


Li N.,Tongji University | Li N.,National University of Singapore | Li N.,Max Planck Institute for the Physics of Complex Systems | Ren J.,Max Planck Institute for the Physics of Complex Systems | And 9 more authors.
Reviews of Modern Physics | Year: 2012

The form of energy termed heat that typically derives from lattice vibrations, i.e., phonons, is usually considered as waste energy and, moreover, deleterious to information processing. However, in this Colloquium, an attempt is made to rebut this common view: By use of tailored models it is demonstrated that phonons can be manipulated similarly to electrons and photons, thus enabling controlled heat transport. Moreover, it is explained that phonons can be put to beneficial use to carry and process information. In the first part ways are presented to control heat transport and to process information for physical systems which are driven by a temperature bias. In particular, a toolkit of familiar electronic analogs for use of phononics is put forward, i.e., phononic devices are described which act as thermal diodes, thermal transistors, thermal logic gates, and thermal memories. These concepts are then put to work to transport, control, and rectify heat in physically realistic nanosystems by devising practical designs of hybrid nanostructures that permit the operation of functional phononic devices; the first experimental realizations are also reported. Next, richer possibilities to manipulate heat flow by use of time-varying thermal bath temperatures or various other external fields are discussed. These give rise to many intriguing phononic nonequilibrium phenomena such as, for example, the directed shuttling of heat, geometrical phase-induced heat pumping, or the phonon Hall effect, which may all find their way into operation with electronic analogs. © 2012 American Physical Society.
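The thermal-diode idea at the heart of this toolkit can be illustrated with a lumped two-segment model: one segment whose conductance depends strongly on temperature in series with a constant one, so that reversing the temperature bias changes the magnitude of the steady-state heat current. This toy model is an assumption for illustration; it is not one of the lattice models analyzed in the Colloquium.

```python
# Toy thermal rectifier: nonlinear segment A in series with linear segment B.
from scipy.optimize import brentq

k0 = 1.0                                   # arbitrary conductance units

def k_nonlinear(T):                        # segment A: conductance grows as T^3
    return k0 * (T / 300.0) ** 3

def heat_current(T_left, T_right):
    """Steady state: current through A (left) equals current through B (right)."""
    def residual(Tm):
        jA = k_nonlinear(0.5 * (T_left + Tm)) * (T_left - Tm)
        jB = k0 * (Tm - T_right)
        return jA - jB
    Tm = brentq(residual, min(T_left, T_right), max(T_left, T_right))
    return k0 * (Tm - T_right)

J_fwd = heat_current(400.0, 200.0)         # hot bath on the nonlinear side
J_rev = heat_current(200.0, 400.0)         # bias reversed
print(f"forward |J| = {J_fwd:.1f}, reverse |J| = {abs(J_rev):.1f}, "
      f"rectification ratio = {J_fwd / abs(J_rev):.2f}")
```

The printed ratio is greater than one: the junction conducts heat more readily in one direction than the other, which is the diode behavior that the phononic transistors, gates, and memories described above build upon.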


Grant
Agency: Department of Defense | Branch: Army | Program: STTR | Phase: Phase II | Award Amount: 749.90K | Year: 2010

We propose to realize a new class of microbolometer FPA technology with significantly reduced 1/f noise by replacing the sensing material (e.g. VOx) with colossal magnetoresistive (CMR) oxides. The program uniquely combines Agiltron's experience in novel IR imager development and leading academic research on CMR at Los Alamos National Lab (LANL). The LANL team has developed a breakthrough processing technique for CMR film deposition. Agiltron will further advance LANL's CMR film fabrication techniques and develop a film transfer technology, allowing formation of microbolometers on top of CMOS circuitry as practiced in current technology. We anticipate reducing the noise-equivalent temperature difference (NETD) of state-of-the-art bolometers based on VOx and amorphous silicon from 29 mK to 12 mK. Functional microbolometers based on CMR oxides, and a performance comparison with state-of-the-art devices, will be produced in Phase II.
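As simple context for that target (a back-of-the-envelope reading, not a figure from the proposal): if NETD scales roughly linearly with total detector noise at fixed responsivity and optics, the stated improvement implies the noise floor must drop by about a factor of 2.4.

```python
# Quick arithmetic on the stated NETD goal, under the linear-scaling assumption above.
netd_current_mK, netd_target_mK = 29.0, 12.0
print(f"required noise reduction factor ~ {netd_current_mK / netd_target_mK:.1f}x")
```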


Grant
Agency: Department of Energy | Branch: | Program: STTR | Phase: Phase I | Award Amount: 149.95K | Year: 2014

The ionomer dispersion technology invented at LANL has demonstrated great potential to significantly improve the lifetime of proton exchange membrane (PEM) fuel cells in accelerated stress tests. However, more validation and improvements are needed to make this technology commercially viable for PEM fuel cell-based hydrogen vehicles. The overall goal of the project is to further improve and validate this ionomer dispersion technology under more practical fuel cell conditions. The ionomer dispersion technology will be combined with dimension-stabilized membrane (DSM) technology to create more durable membranes and membrane and electrode assemblies for PEM fuel cells and electrolyzers. Low equivalent weight (EW) ionomers will be dispersed using selected non-aqueous solvents. The ionomer dispersions will be incorporated into a porous matrix to make DSMs and mixed with the catalysts to make electrodes. The resulting membrane and electrode assemblies will be evaluated for both fuel cells and electrolyzers under more practical operating conditions. Commercial Applications and Other Benefits: The successful completion of this project will improve the durability and performance of PEM fuel cells, significantly reducing the cost of hydrogen vehicles. This will lead to widespread deployment of hydrogen vehicles, which will relieve the nation's heavy dependence on imported oil and reduce air pollutants.


Patent
National University of Singapore and Los Alamos National Laboratory | Date: 2013-06-03

Disclosed herein are methods for producing compounds (such as 3,4-dihydroxybenzoate, catechol, cis,cis-muconate, or β-carboxy-cis,cis-muconic acid) utilizing biosynthetic pathways in prokaryotic organisms expressing one or more heterologous genes. In some embodiments, the method includes expressing a heterologous asbF gene (for example, a gene having dehydroshikimate dehydratase activity) in a prokaryotic cell under conditions sufficient to produce the one or more compounds and purifying the compound. In additional embodiments, the method further includes expressing one or more of a heterologous 3,4-DHB decarboxylase gene, a heterologous catechol 1,2-dioxygenase gene, and a heterologous 3,4-DHB dioxygenase gene in the prokaryotic cell and purifying the compound.
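As a reading aid, the pathway steps named in the abstract can be laid out as a simple data structure, one (substrate, enzyme/heterologous gene, product) tuple per step. The substrate assignments follow the standard roles of these enzymes; this is an illustrative summary, not a metabolic model or a claim map of the patent.

```python
# Pathway steps implied by the abstract, printed in order.
PATHWAY = [
    ("3-dehydroshikimate", "asbF (dehydroshikimate dehydratase)", "3,4-dihydroxybenzoate"),
    ("3,4-dihydroxybenzoate", "3,4-DHB decarboxylase", "catechol"),
    ("catechol", "catechol 1,2-dioxygenase", "cis,cis-muconate"),
    ("3,4-dihydroxybenzoate", "3,4-DHB dioxygenase", "beta-carboxy-cis,cis-muconate"),
]

for substrate, enzyme, product in PATHWAY:
    print(f"{substrate:24s} --[{enzyme}]--> {product}")
```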


News Article | August 31, 2016
Site: www.greencarcongress.com

Among the 43 small businesses participating in the newly announced second round of the US Department of Energy (DOE) Small Business Vouchers (SBV) pilot are 10 projects in the area of hydrogen and fuel cells. These selected projects are aimed at reducing the costs and improving the performance of hydrogen and fuel cell technologies, such as finding ways to lower costs of membrane electrode assemblies (MEAs), a critical component of fuel cell stacks and electrolyzers. Some of these collaborations will explore using lower-cost materials, reducing the quantity of pricey raw material required, and developing better and more precise methods for manufacturing MEAs. The ten companies receiving Round 2 SBVs for hydrogen and fuel cells projects include:
Altergy. Altergy will work with the National Renewable Energy Laboratory (NREL) to leverage lab expertise to reduce cost and improve performance in its proprietary fuel cell products. The objective of their work with NREL is to observe the fabrication of MEAs at NREL using non-proprietary methods, and evaluate MEAs made at NREL with ETFECS-based catalyst to reduce costs and/or enhance durability.
American Fuel Cell. AFC will work with Oak Ridge National Laboratory (ORNL) to optimize deposition techniques for roll-to-roll direct coating of electrodes on anode and cathode gas diffusion media, leading to improved quality and lower-cost manufacturing for various fuel cell applications.
Garmor Inc. Garmor, working with NREL, will utilize existing bipolar plate (BPP) technology to develop a BPP that will include composites used to form microstructured surface features to improve BPP surface tension for water droplet control.
Ion Power. Ion Power will work with Los Alamos National Laboratory (LANL) to analyze various performance improvement mechanisms to improve the durability of its fuel cell catalyst layer.
NanoSonic. NanoSonic, working with LANL, will receive assistance in the preparation of membrane electrode assemblies and in fuel cell testing and validation of new anion exchange fuel cell membranes.
Nzyme2HC LLC. Nzyme2HC has a novel acellular biological approach to scalable, clean, and potentially low-cost hydrogen generation, combining minimal electric current, bacterially extracted hydrogenase, and industrial waste as the feedstocks and production materials. RFA award monies would go toward building an end-to-end bench prototype at NREL for testing and benchmarking.
Oorja Fuel Cells. Oorja Fuel Cells will work with NREL to determine the efficacy and performance of the membrane electrode assembly (MEA) under Oorja's direct methanol fuel cell (DMFC) operating conditions and compare the results with the commercially available MEA used in the Oorja DMFC power system.
Opus 12. Working with Lawrence Berkeley National Laboratory, Opus 12 will receive assistance to develop a computational model of ion, product, reactant, and water transport within its PEM electrolyzer for the co-electrolysis of water and carbon dioxide to make syngas. The model will aid in increasing system performance to make Opus 12's syngas cost-competitive with traditional syngas sources.
Pajarito Powder. Pajarito will work with LANL to improve electrode structures and to implement electrode manufacturing techniques that allow full catalyst utilization. Ultimately, this work will increase customer confidence in the technology and accelerate commercial adoption.
Proton Energy Systems. Proton Energy Systems will work with NREL to develop, design and validate a small-scale advanced power converter to provide a pathway for reduced capital cost and improved efficiency of Proton's M-series electrolyzer, which offers a carbon-free source of hydrogen fuel or process gas.


News Article | February 15, 2017
Site: www.eurekalert.org

A star -- as big as or bigger than our sun -- in the constellation Pegasus is expanding and contracting in 3 directions at once every 2.5 hours, the result of heating and cooling of hydrogen fuel burning at 28 million degrees Fahrenheit at its core.
Astronomers are reporting a rare star as big as -- or bigger than -- the Earth's sun that is expanding and contracting in a unique pattern in three different directions. The star is one that pulsates and so is characterized by varying brightness over time. It's situated 7,000 light years away from the Earth in the constellation Pegasus, said astronomer Farley Ferrante, a member of the team that made the discovery at Southern Methodist University, Dallas. Called a variable star, this particular star is one of only seven known stars of its kind in our Milky Way galaxy. "It was challenging to identify it," Ferrante said. "This is the first time we'd encountered this rare type." The Milky Way has more than 100 billion stars. But just over 400,900 are catalogued as variable stars. Of those, a mere seven -- including the one identified at SMU -- are the rare intrinsic variable star called a Triple Mode 'high amplitude delta Scuti' (pronounced SKOO-tee) or Triple Mode HADS(B), for short. "The discovery of this object helps to flesh out the characteristics of this unique type of variable star. These and further measurements can be used to probe the way the pulsations happen," said SMU's Robert Kehoe, a professor in the Department of Physics who leads the SMU astronomy team. "Pulsating stars have also been important to improving our understanding of the expansion of the universe and its origins, which is another exciting piece of this puzzle." The star doesn't yet have a common name, only an official designation based on the telescope that recorded it and its celestial coordinates. The star can be observed through a telescope, but identifying it was much more complicated. A high school student in an SMU summer astronomy program made the initial discovery upon culling through archived star observation data recorded by the small but powerful ROTSE-I telescope formerly at Los Alamos National Laboratory in New Mexico. Upon verification, the star was logged into the International Variable Star Index as ROTSE1 J232056.45+345150.9 by the American Association of Variable Star Observers at https:/ . SMU's astrophysicists discovered the variable star by analyzing light curve shape, a key identifier of star type. Light curves were created from archived data procured by ROTSE-I during multiple nights in September 2000. The telescope generates images of optical light from electrical signals based on the intensity of the source. Data representing light intensity versus time is plotted on a scale to create the light curves. Plano Senior High School student Derek Hornung first discovered the object in the ROTSE-I data and prepared the initial light curves. From the light curves, the astronomers knew they had something special. It became even more challenging to determine the specific kind of variable star. Then Eric Guzman, a physics graduate from the University of Texas at Dallas, who is entering SMU's graduate program, solved the puzzle, identifying the star as pulsating. "Light curve patterns are well established, and these standard shapes correspond to different types of stars," Ferrante said. "In a particular field of the night sky under observation there may have been hundreds or even thousands of stars.
So the software we use generates a light curve for each one, for one night. Then -- and here's the human part -- we use our brain's capacity for pattern recognition to find something that looks interesting and that has a variation. This allows the initial variable star candidate to be identified. From there, you look at data from several other nights. We combine all of those into one plot, as well as add data sets from other telescopes, and that's the evidence for discerning what kind of variable star it is." That was accomplished conclusively during the referee process with the Variable Star Index moderator. The work to discover and analyze this rare variable star was carried out in conjunction with analyses by eight other high school students and two other undergraduates working on other variable candidates. The high school students were supported by SMU's chapter of the Department of Energy/National Science Foundation QuarkNet program. Of the stars that vary in brightness intrinsically, a large number exhibit amazingly regular oscillations in their brightness, which is a sign of some pulsation phenomenon in the star, Ferrante said. Pulsation results from expanding and contracting as the star ages and exhausts the hydrogen fuel at its core. As the hydrogen fuel burns hotter, the star expands, then cools, then gravity shrinks it back, and contraction heats it back up. "I'm speaking very generally, because there's a lot of nuance, but there's this continual struggle between thermal expansion and gravitational contraction," Ferrante said. "The star oscillates like a spring, but it always overshoots its equilibrium, doing that for many millions of years until it evolves into the next phase, where it burns helium in its core. And if it's about the size and mass of the sun -- then helium fusion and carbon is the end stage. And when helium is used up, we're left with a dying ember called a white dwarf." Within the pulsating category is a class of stars called delta Scuti, of which there are thousands. They are named for a prototype star whose characteristic features -- including short periods of pulsating on the scale of a few hours -- are typical of the entire class. Within delta Scuti is a subtype of which hundreds have been identified, called high amplitude delta Scuti, or HADS. Their brightness varies to a particularly large degree, registering more than 10 percent difference between their minimum and maximum brightness, indicating larger pulsations. Common delta Scuti pulsate along the radius in a uniform contraction like blowing up a balloon. A smaller sub-category is the HADS, which show asymmetrical-like pulsating curves. Within HADS, there's the relatively rare subtype called HADS(B), of which there are only 114 identified. A HADS(B) is distinguished by its two modes of oscillation -- different parts of the star expanding at different rates in different directions -- but the ratio of those two periods is always the same. For the SMU star, two modes of oscillation weren't immediately obvious in its light curve. "But we knew there was something going on because the light curve didn't quite match known light curves of other delta Scuti and HADS objects we had studied. The light curves -- when laid on top of each other -- presented an asymmetry," Ferrante said. "Ultimately the HADS(B) we discovered is even more unique than that though -- it's a Triple Mode HADS(B) and there were previously only six identified in the Milky Way.
So it has three modes of oscillation, all three with a distinct period, overlapping, and happening simultaneously." So rare, in fact, there's no name yet for this new category nor a separate registry designation for it. Guzman, the student researcher who analyzed and categorized the object, recalled how the mystery unfolded. "When I began the analysis of the object, we had an initial idea of what type it could be," Guzman said. "My task was to take the data and try to confirm the type by finding a second period that matched a known constant period ratio. After successfully finding the second mode, I noticed a third signal. After checking the results, I discovered the third signal coincided with what is predicted of a third pulsation mode." The SMU Triple Mode HADS(B) oscillates on a scale of 2.5 hours, so it will expand and contract 10 times in one Earth day. It and the other known six HADS(B)'s are in the same general region of the Milky Way galaxy, within a few thousand light years of one another. "I'm sure there are more out there," Ferrante said, "but they're still rare, a small fraction." SMU's Triple Mode HADS(B) is unstable and further along in its stellar evolution than our sun, which is about middle-aged and whose pulsating variations occur over a much longer period of time. SMU's Triple Mode HADS(B) core temperature, heated from the burning of hydrogen fuel, is about 15 million Kelvin or 28 million degrees Fahrenheit. Someday, millions of years from now, SMU's Triple Mode HADS(B) will deplete the hydrogen fuel at its core, and expand into a red giant. "Our sun might eventually experience this as well," Ferrante said. "But Earth will be inhospitable long before then. We won't be here to see it." Funding was through the Texas Space Grant Consortium, an affiliate of NASA; SMU Dedman College; and the Department of Energy/National Science Foundation QuarkNet program. ROTSE-I began operating in late 1997, surveying the sky all night, every clear night of the year for three years. It was decommissioned in 2001 and replaced by ROTSE-III. SMU owns the ROTSE-IIIb telescope at McDonald Observatory, Fort Davis, Texas.
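The period-finding workflow the students describe can be sketched with a standard periodogram. The example below builds a synthetic three-mode light curve with periods of a few hours and recovers the dominant frequency with a Lomb-Scargle periodogram; the periods, amplitudes, and sampling are made up for illustration, not the measured values of ROTSE1 J232056.45+345150.9.

```python
# Period search on an unevenly sampled, multi-mode synthetic light curve.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 30, 600))                 # 30 days of uneven sampling (days)
periods = np.array([0.104, 0.080, 0.066])            # assumed pulsation periods, days
amps = np.array([0.10, 0.04, 0.02])                  # assumed amplitudes, mag
y = sum(a * np.sin(2 * np.pi * t / p) for a, p in zip(amps, periods))
y += rng.normal(0, 0.01, t.size)                     # photometric noise

frequency, power = LombScargle(t, y).autopower(minimum_frequency=2, maximum_frequency=25)
best = frequency[np.argmax(power)]
print(f"strongest period ~ {24 / best:.2f} hours")   # expect ~2.5 h for the 0.104 d mode
```

In practice, multi-mode work then pre-whitens (subtracts the fitted dominant mode) and repeats the search to pull out the second and third periods, which is essentially the sequence Guzman describes above.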


News Article | December 23, 2016
Site: phys.org

Agency officials said the move confirms that numerous corrective actions uncovered during a recent review of the Waste Isolation Pilot Plant in southern New Mexico have been completed. A team of Energy Department experts from around the nation had found paperwork and documentation issues that needed to be fixed. It also had found inadequate procedures for new requirements on accepting waste from national laboratories and other defense sites. "Safety has and will continue to be our number one priority," Energy Department spokeswoman Bridget Bartol said, noting that workers will complete minor maintenance on the walls and floors of the underground disposal area before waste-handling work resumes. The Waste Isolation Pilot Plant has been closed since February 2014 when a radiation release contaminated a significant portion of the repository, where disposal rooms are carved out of a thick salt formation deep underground. The shutdown put shipments from around the country on indefinite hold as the federal government poured hundreds of millions of dollars into recovery efforts and policy overhauls. New Mexico regulators cited the government and the contractor for numerous permit violations related to the radiation release and an unrelated fire involving a mining truck weeks earlier. That resulted in a multimillion-dollar settlement with the state. Los Alamos National Laboratory, in northern New Mexico, also was cited because the breached container of waste from 2014 had been inappropriately packed there before being shipped to the repository. Investigators said the incident could have been avoided had managers not ignored existing policies and procedures for handling and treating the waste, which includes gloves, tools, clothing and other materials from decades of bomb-making and research. The Energy Department released a slew of documents Friday related to reviews of the facility's readiness to reopen that state and federal officials conducted in recent weeks. State officials gave their approval for work to resume in letters sent to the federal agency late last week. Some watchdog groups had asked the department not to rush, citing the previous incidents and the importance of the facility to the nation's multibillion-dollar cleanup of Cold War-era waste. Explore further: Feds: No 'show-stoppers' found in nuke readiness review


News Article | October 23, 2015
Site: news.mit.edu

When MIT Plasma Science and Fusion Center (PSFC) graduate students Hans Rinderknecht and Alex Zylstra applied for a prestigious Lawrence Postdoctoral Fellowship, they knew they were competing with each other, and over 200 other outstanding applicants. Because Lawrence Livermore National Laboratory (LLNL) awards only two to four fellowships per year, the chances of winning were slim; the chances of both winning seemed almost impossible. But the student colleagues, members of the MIT Physics Department who both work in the PSFC’s High-Energy-Density (HEDP) division, were thrilled to learn they had each received the award. The three-year fellowship, named after Ernest Orlando Lawrence, who received the Nobel Prize in 1939 for inventing the cyclotron, is given to candidates who have exceptional talent, scientific track records, and potential for significant achievements. The Lawrence fellowship is given out to a wide range of disciplines in the physical and biological sciences. HEDP division head Richard Petrasso notes, “To the best of my knowledge, this is the first time two students from MIT have concurrently been selected for this fellowship. I confess that it took my breath away when I learned that both Hans and Alex were awardees. So difficult for one, almost unheard of for two!” Adds PSFC director Dennis Whyte, “These awards to Hans and Alex are a testament to the outstanding science education efforts at the multi-departmental PSFC, and to the combined excellence of the HEDP research staff and students, who are producing impactful results on the national stage.” For the last six years, both Rinderknecht and Zylstra have worked side-by-side on inertial confinement fusion and high-energy-density-physics diagnostics. They implemented their diagnostics at the Omega Laser Facility (at the University of Rochester in New York) and the National Ignition Facility at LLNL in Livermore, California, deriving their principal data and results from these two major experimental facilities. Important progress on the road to fusion ignition has been achieved in the last 12 months, and Rinderknecht and Zylstra have played important roles in that success. Like other MIT students who do research with the PSFC’s HEDP division, both have been involved in these experiments since the ignition campaign began six years ago. Since learning of their success, Keith LeChien, director of the Office of Inertial Confinement Fusion at the National Nuclear Security Administration, has invited them to give talks on their research in Washington, D.C. Both students will have to make the choice between the Lawrence Award and other prestigious fellowships they have been offered. Alex Zylstra has also received a Los Alamos National Laboratory Fredrick Reines Postdoctoral Fellowship. Hans Rinderknecht has received the Harry S.Truman Fellowship in National Security Science and Engineering from Sandia National Laboratory.


News Article | December 12, 2016
Site: www.eurekalert.org

HOUSTON - (Dec. 12, 2016) - Rice University astronomers and their colleagues have for the first time mapped gases in three dark rings around a distant star. The rings mark spaces where planets are thought to have formed from dust and gas around the star. All the rings around HD 163296 are devoid of dust, and the international team of researchers led by Rice astronomer Andrea Isella is sure that planets, probably gas giants with masses comparable to Saturn, are responsible for clearing the outermost ones. But the inner ring has far more carbon monoxide than the other two, leading them to believe no planet exists there. That remains unexplained, he said. "The inner gap is mysterious," Isella said. "Whatever is creating the structure is removing the dust but there's still a lot of gas." The work appears this week in Physical Review Letters. Only 20 years after the first exoplanet was spotted, astronomers are beginning to learn how planets form. Thousands of planets are now in the database, and scientists continue to improve their ability to analyze them for life-supporting characteristics. One aim of the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile, the world's largest radio telescope used in the research, is to study protoplanetary systems. Their mechanics will help scientists understand how the planets like Earth formed. (Radio telescopes detect the unique electromagnetic waves emitted by elements and compounds in space, allowing them to be identified and measured.) The Rice-led team's target star, nearly 400 light years away and best observed from the Southern hemisphere, is one of many known to have a large disk of dust and gas, Isella said. "Of the material that formed this disk, about 1 percent is dust particles and 99 percent is gas," Isella said. "So if you only see the dust, you cannot tell if a ring was formed by a planet or another phenomenon. In order to distinguish and really tell if there are planets or not, you need to see what the gas is doing, and in this study, for the first time, we can see both the dust and the gas." The outer rings are 100 and 160 astronomical units from HD 163296. (One unit is the distance from the center of the sun to Earth.) That's much farther from the star than previously thought possible for planet formation. The star is too far from Earth for direct observation of the planets, yet evidence from the new study shows they are likely to be there, clearing dust and gas from the outer rings much like orbiting asteroids called shepherd satellites clear space between the rings around Saturn, Isella said. The inner ring, 60 astronomical units from its star, showed a much greater concentration of the three carbon monoxide isotopes measured relative to dust. "Theoreticians have proposed other phenomena that can form dark rings without planets," Isella said. The researchers suspect one in particular: a lack of turbulence among non-ionized gas molecules in a magnetorotational instability "dead zone" that allows gas and dust to condense into a Saturn-like ring at the edge of the dark zone rather than a planet. The ring may also appear at the carbon monoxide frost line where the gas becomes cold enough to condense. The results laid the foundation for a new round of observations just starting at ALMA. Isella's initiative to study 20 stars with protoplanetary disks is one of two "large programs" (from 24 proposals) accepted by the telescope. "'Large program' means that you ask for a lot of telescope time, more than 50 hours," Isella said. 
"The other one they accepted will look at the emissions from a galaxy at the beginning of the universe. "It means our topic is considered by the astronomical community to be one of its highest priorities," he said. Isella said the researchers will also return to HD 163296 to learn what other elements are in the disk and rings. Taking inventory will help them determine what kind of a planet might form from material that's readily available. "If we know the chemistry of the material forming a planet, we can understand the chemistry of the planet," he said. Proximity to the star is also important. For example, water has to be far enough from a star to freeze around grains and allow them to aggregate. There is a chance that systems like HD 163296 and HL Tauri, two of the first three (with TW Hydrae) observed by ALMA to show protoplanetary rings, are anomalies, he said. "Now the question is whether all the protoplanetary disks are like this. Do they all have this structure? There is the concern that this object and HL Tauri are freaks," Isella said. Answers should come when the 20-star survey is completed. "We won't begin to get the data until next September, but it will tell us a lot," he said. Co-authors are Greta Guidi of the Arcetri Astrophysical Observatory in Firenze, Italy; Leonardo Testi of the Arcetri Observatory and the European Southern Observatory, Munich; Shangfei Liu of Rice and Los Alamos National Laboratory; Hui Li and Shengtai Li of Los Alamos; Erik Weaver and Yann Boehler of Rice; John Carperter and Itziar De Gregorio-Monsalvo of the Joint ALMA Observatory; Carlo Manara of the European Space Research and Technology Center, Noordwijk, The Netherlands; Antonella Natta of Arcetri and the Dublin Institute for Advanced Studies, Ireland; Laura Pérez of the Max Planck Institute for Radio Astronomy, Bonn, Germany; Luca Ricci of the Harvard-Smithsonian Center for Astrophysics, Cambridge, Mass.; Anneila Sargent of the California Institute of Technology, Pasadena, Calif.; Marco Tazzari of the European Southern Observatory, and Neal Turner of the Jet Propulsion Laboratory, California Institute of Technology. The National Science Foundation, NASA, the National Radio Astronomy Observatory, the European Space Agency and Los Alamos National Laboratory supported the research. This news release can be found online at http://news. NRAO press release: ALMA Finds Compelling Evidence for Pair of Infant Planets around Young Star Rice University researchers led the first mapping of gases in rings around a distant star with the powerful ALMA radio telescope. Clockwise from upper left: Yann Boehler, Shanfei Liu, Andrea Isella, Luca Ricci and Erik Weaver. (Credit: Jeff Fitlow/Rice University) An ALMA image of the star HD 163296 and its protoplanetary disk as seen in dust. New observations suggested that two planets, each about the size of Saturn, are in orbit around the star. These planets, which are not yet fully formed, revealed themselves in the dual imprint they left in both the dust and the gas portions of the star's protoplanetary disk. (Credit: ALMA [ESO/NAOJ/NRAO]/Andrea Isella/B. Saxton [NRAO/AUI/NSF]) A composite image of the star HD 163296. The inner red area shows the dust of the protoplanetary disk. The broader blue disk is the carbon monoxide gas in the system. ALMA observed that in the outer two gaps in the dust, there was a significant dip in the concentration of carbon monoxide, suggesting two planets are forming there. (Credit: ALMA [ESO/NAOJ/NRAO]/Andrea Isella/B. 
Saxton [NRAO/AUI/NSF]) An artist's impression of the protoplanetary disk surrounding the young star HD 163296. By studying the dust (ruddy brown) and carbon monoxide gas (light blue) profiles of the disk, astronomers led by Rice University's Andrea Isella discovered tantalizing evidence that two planets are forming in the outer two dust gaps in the disk. (Credit: B. Saxton/NRAO/AUI/NSF) Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation's top 20 universities by U.S. News & World Report. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy. With 3,910 undergraduates and 2,809 graduate students, Rice's undergraduate student-to-faculty ratio is 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for happiest students and for lots of race/class interaction by the Princeton Review. Rice is also rated as a best value among private universities by Kiplinger's Personal Finance. To read "What they're saying about Rice," go to http://tinyurl. .
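A quick orbital-mechanics aside on the ring radii quoted above (60, 100, and 160 astronomical units): Kepler's third law gives the orbital period at each radius as P[yr] = sqrt(a[AU]^3 / M[Msun]). The stellar mass used below (roughly twice the Sun's) is an outside assumption, not a figure taken from this article.

```python
# Orbital period at each ring radius around HD 163296 (assumed ~2 solar masses).
M_star = 2.0                                    # assumed stellar mass, solar masses
for a_AU in (60, 100, 160):                     # ring radii quoted in the article
    P_yr = (a_AU ** 3 / M_star) ** 0.5
    print(f"a = {a_AU:3d} AU  ->  P ~ {P_yr:5.0f} yr")
```

Even the innermost ring takes centuries per orbit, which is why such systems are studied through their disk structure rather than by watching orbital motion directly.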


News Article | October 31, 2016
Site: www.rdmag.com

Earth will not be the only place with pumpkins this Halloween. Recently, astronomers using observations from NASA's Kepler and Swift missions have unveiled a batch of 18 stars that spin so fast they've been squashed into "pumpkin-like" shapes. The stars produce X-rays at more than 100 times the peak levels ever seen from the sun and are thought to be the result of close binary systems where two sun-like stars merge. Steve Howell, a senior research scientist at NASA's Ames Research Center in Moffett Field, California and leader of the team, explained the behavior of the stars. "These 18 stars rotate in just a few days on average, while the sun takes nearly a month," he said in a statement. "The rapid rotation amplifies the same kind of activity we see on the sun, such as sunspots and solar flares and essentially sends it into overdrive." Astronomers dubbed the most extreme member of the group KSw 71, a K-type orange giant that is more than 10 times larger than the sun, rotates in just 5.5 days and produces X-ray emission 4,000 times greater than the sun does at solar maximum. The rare stars were found as part of the Kepler–Swift Active Galaxies and Stars Survey (KSwAGS) of the original Kepler field of view, which measured the brightness of more than 150,000 stars between 2009 and 2013 to detect the regular dimming from planets passing in front of their host stars. "A side benefit of the Kepler mission is that its initial field of view is now one of the best-studied parts of the sky," team member Padi Boyd, a researcher at NASA's Goddard Space Flight Center in Greenbelt, Md., who designed the Swift survey, said in a statement. "Our group was looking for variable X-ray sources with optical counterparts seen by Kepler, especially active galaxies, where a central black hole drives the emissions." Using X-ray and ultraviolet/optical telescopes aboard Swift, the researchers conducted the survey, imaging about 6 square degrees, or 12 times the apparent size of a full moon, in the Kepler field. "With KSwAGS we found 93 new X-ray sources, about evenly split between active galaxies and various types of X-ray stars," team member Krista Lynne Smith, a graduate student at the University of Maryland, who led the analysis of Swift data, said in a statement. "Many of these sources have never been observed before in X-rays or ultraviolet light." The researchers determined the rotational periods and size of 10 of the stars, which range from 2.9 to 10.5 times larger than the sun, using Kepler measurements. Ames manages the Kepler and K2 missions for NASA's Science Mission Directorate. NASA's Jet Propulsion Laboratory in Pasadena, Calif., managed Kepler mission development. Ball Aerospace & Technologies Corp. operates the flight system with support from the Laboratory for Atmospheric and Space Physics at the University of Colorado in Boulder. Goddard manages the Swift mission in collaboration with Pennsylvania State University in University Park, the Los Alamos National Laboratory in New Mexico and Orbital Sciences Corp. in Dulles, Va. Other partners include the University of Leicester and Mullard Space Science Laboratory in the United Kingdom, Brera Observatory and the Italian Space Agency in Italy, with additional collaborators in Germany and Japan. A paper detailing the findings will be published in the Nov. 1 edition of the Astrophysical Journal and is now available online.
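The "pumpkin" shape follows from simple arithmetic on the figures quoted for KSw 71 (about 10 solar radii, rotating in 5.5 days): its equatorial rotation speed is tens of times the Sun's. Treating the star as a simple sphere of exactly 10 solar radii is, of course, an approximation for illustration.

```python
# Equatorial rotation speed from radius and rotation period.
import math

R_SUN_M = 6.957e8  # solar radius in meters

def equatorial_speed_kms(radius_rsun, period_days):
    return 2 * math.pi * radius_rsun * R_SUN_M / (period_days * 86400.0) / 1e3

print(f"KSw 71: ~{equatorial_speed_kms(10, 5.5):.0f} km/s")
print(f"Sun:    ~{equatorial_speed_kms(1, 25.0):.0f} km/s")
```

That roughly fifty-fold difference in equatorial speed is what drives both the flattened shape and the amplified magnetic activity described above.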


News Article | March 4, 2016
Site: news.mit.edu

Amelia Trainer can recite the Seamus Heaney poem, “Scaffolding,” by memory. It starts: Masons, when they start upon a building, Are careful to test out the scaffolding; Make sure that planks won't slip at busy points, Secure all ladders, tighten bolted joints. Trainer, a sophomore majoring in physics and nuclear science and engineering, calls “Scaffolding” “probably the most true and meaningful love poem I’ve read.” But its metaphorical take on a relationship might also resonate for Trainer around her work with the Computational Reactor Physics Group (CRPG), where Trainer is deeply committed to learning about strengthening structures in a nuclear realm. Since early in her freshman year, Trainer has been part of a team helping refine OpenMC Monte Carlo, a powerful computational code for modeling neutron behavior inside nuclear reactors. Developed by the CRPG, with support from the U.S. Department of Energy, OpenMC offers the possibility of “simulations that recreate the world inside a reactor,” Trainer says. “This means we can make operating reactors safer, improve future designs, and avoid doing costly experiments. It’s the coolest thing ever.” Her engagement with this research has already earned recognition: She won an outstanding Undergraduate Research Opportunity Project (UROP) award, and presented a paper to the American Nuclear Society Student Conference in 2015, all based on work she performed in freshman year. In OpenMC simulations, scientists are interested in testing a variety of reactor parameters, including operating temperature. But it can be time-consuming to generate simulation data for a large number of nuclides at specific temperatures, and research suggested it might be possible to achieve comparably accurate results by instead interpolating between specific temperatures. Trainer’s research tested this hypothesis by comparing different interpolation models with actual experiments. “Instead of generating data for nuclear material at 293 kelvins, we could say the data roughly equals some material at 250 K and some at 300 K while preserving accuracy,”  Trainer says. “We can save time with a shortcut like this, and make everyone’s life easier.” Trainer got a running start on this project. “I began pestering Jon Walsh [a PhD candidate working in the CRPG group under the direction of professors Kord Smith and Benoit Forget] with emails looking for a research project before freshman orientation,” she says. “I can’t emphasize how much I like this project and the research group as a whole.” Given her avid interest in this work, and her double major, it is somewhat surprising to learn that as late as the summer before senior year in high school Trainer was uncertain about attending college. In her Florida high school, some students didn’t bother, and Trainer, while a top scholar and math team member, was happy serving up ice cream at a beach shop. “I enjoyed food service,” she says. “I had a million things in my head to think about and I got paid for a job that left my mind free to wander all day.” But Trainer soon shifted course. She’d had a revelation in physics class: “The most excited I’d been in high school was when a teacher described how nuclear reactors worked, what E=mc2 is, and I thought I should learn more about this stuff.” She also won a scholarship to Embry-Riddle Aeronautical University to learn computer coding, and discovered she had a knack for programming. 
Then, after a friend in a fantasy basketball league suggested she was accomplished enough to attend college, Trainer decided to apply to MIT. Today, not even halfway through her undergraduate career, Trainer believes she has found her calling: “I’m very happy with what I’m working on now,” she says. “Why change a good thing?” Her software writing proficiency has proved essential: She interned at Los Alamos National Laboratory last summer as a code developer, one of only a few undergraduates in her division. Coding is also central to her current CRPG research, which involves processing giant data libraries generated by the national laboratories into compact versions whose parameters are used by CRPG codes for the purposes of performing reactor simulations. She is also comparing U.S. data libraries to those of other countries, which are based on different assumptions and experiments. At this point, Trainer says she would like nothing better than to pursue an academic career “just trying slowly to add to the bank of what people understand about nuclear energy, furthering computational tools.” She would be especially pleased to do this at MIT. “I have an opportunity to be at the forefront, studying under the most brilliant minds, and if I don’t take advantage of this, I’d be completely missing out,” Trainer says.
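The temperature-interpolation idea described in this story can be sketched numerically: rather than generating cross-section data at every temperature of interest, blend data tabulated at two bracketing temperatures. The toy cross-section model and the sqrt(T) weighting below are assumptions chosen for illustration; they are not the CRPG implementation Trainer worked on.

```python
# Blend cross sections tabulated at 250 K and 300 K to approximate 293 K.
import math

def toy_xs(T):
    """Stand-in for a Doppler-broadened cross section's smooth T dependence."""
    return 10.0 + 2.0 * math.sqrt(T / 300.0) + 0.5 * math.log(T / 300.0)

def interpolated_xs(T, T_lo, T_hi):
    w = (math.sqrt(T) - math.sqrt(T_lo)) / (math.sqrt(T_hi) - math.sqrt(T_lo))
    return (1 - w) * toy_xs(T_lo) + w * toy_xs(T_hi)

T, T_lo, T_hi = 293.0, 250.0, 300.0
exact, approx = toy_xs(T), interpolated_xs(T, T_lo, T_hi)
print(f"exact = {exact:.4f}, interpolated = {approx:.4f}, "
      f"relative error = {abs(exact - approx) / exact:.2e}")
```

Weighting in sqrt(T) is a common choice because Doppler broadening scales with the thermal speed, which goes as the square root of temperature; whether this matches the exact scheme Trainer tested is not stated in the article.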


News Article | September 23, 2016
Site: www.sciencenews.org

A summertime cold snap can, quite literally, take the bloom off the rose. Not so for Scotch heather — and now scientists know why. Thick cell walls and narrow plumbing in the alpine shrub’s stems stop deadly ice crystals from spreading to its fragile flowers during sudden summer freezes, researchers report September 15 in PLOS ONE. That lets the flowers survive and the plant make seeds even if temperatures dip below freezing. Once ice crystals start to form inside of a plant, they can spread very quickly, says Gilbert Neuner, a botanist at the University of Innsbruck in Austria who led the study. Those sharp crystals can destroy plant cells — and flowers are particularly sensitive. So plants living in cold climes have developed strategies to confine ice damage to less harmful spots. Neuner and his team used infrared imaging to measure heat given off by Scotch heather (Calluna vulgaris) plants as they freeze. That technique revealed where and when ice was forming. And looking at thin slices of the plant under a microscope let the scientists pick apart the structure of the plant’s ice barrier. Cells at the base of the flower stalks had thicker walls and were packed more closely together than elsewhere in the plant, the team found. In the same area, the pipelines that carry water up the plant — called xylem — were narrower and had fewer points where ice could potentially sneak through. Those modifications let the plants “supercool” their flowers. That is, even when the flowers chilled to below zero degrees Celsius, they contained liquid water instead of ice. Ice didn’t form in the Scotch heather flowers until far below normal freezing temperatures, ‒22° C, and ice that formed elsewhere in the plant didn’t spread to the flowers. Other species sometimes put up temporary ice blockades, for instance to protect overwintering buds. But that usually cuts off the flow of water through the xylem — fine if a plant is dormant over the winter, but flowers facing a sudden summer freeze need a continuous supply of water. Scotch heather gets around this problem by threading its xylem right through the icy barrier. Membranes let water pass between the xylem cells, and these membranes might ultimately control the spread of ice crystals in C. vulgaris, Neuner suspects. Tiny pores in the membranes are too small to let ice crystals through the barrier. And when water molecules are found inside such small holes, the molecules are bound so tightly to the structures around them that they behave more like a gel instead of crystalizing into ice. The team hopes to test the idea in future studies. Other flowering alpine plants could use a similar strategy. “I don’t think that this is unique to this plant,” says Sanna Sevanto, a tree physiologist at Los Alamos National Laboratory in New Mexico who wasn’t involved in the study. “It’s just that nobody has looked at it.”


News Article | December 13, 2016
Site: astrobiology.com

NASA's Curiosity rover is climbing a layered Martian mountain and finding evidence of how ancient lakes and wet underground environments changed, billions of years ago, creating more diverse chemical environments that affected their favorability for microbial life. Hematite, clay minerals and boron are among the ingredients found to be more abundant in layers farther uphill, compared with lower, older layers examined earlier in the mission. Scientists are discussing what these and other variations tell about conditions under which sediments were initially deposited, and about how groundwater moving later through the accumulated layers altered and transported ingredients. Effects of this groundwater movement are most evident in mineral veins. The veins formed where cracks in the layers were filled with chemicals that had been dissolved in groundwater. The water with its dissolved contents also interacted with the rock matrix surrounding the veins, altering the chemistry both in the rock and in the water. "There is so much variability in the composition at different elevations, we've hit a jackpot," said John Grotzinger, of Caltech in Pasadena, California. He and other members of Curiosity's science team presented an update about the mission Tuesday, Dec. 13, in San Francisco during the fall meeting of the American Geophysical Union. As the rover examines higher, younger layers, researchers are impressed by the complexity of the lake environments when clay-bearing sediments were being deposited, and also the complexity of the groundwater interactions after the sediments were buried. "A sedimentary basin such as this is a chemical reactor," Grotzinger said. "Elements get rearranged. New minerals form and old ones dissolve. Electrons get redistributed. On Earth, these reactions support life." Whether Martian life has ever existed is still unknown. No compelling evidence for it has been found. When Curiosity landed in Mars' Gale Crater in 2012, the mission's main goal was to determine whether the area ever offered an environment favorable for microbes. The crater's main appeal for scientists is geological layering exposed in the lower portion of its central mound, Mount Sharp. These exposures offer access to rocks that hold a record of environmental conditions from many stages of early Martian history, each layer younger than the one beneath it. The mission succeeded in its first year, finding that an ancient Martian lake environment had all the key chemical ingredients needed for life, plus chemical energy available for life. Now, the rover is climbing lower on Mount Sharp to investigate how ancient environmental conditions changed over time. "We are well into the layers that were the main reason Gale Crater was chosen as the landing site," said Curiosity Deputy Project Scientist Joy Crisp of NASA's Jet Propulsion Laboratory, in Pasadena, California. "We are now using a strategy of drilling samples at regular intervals as the rover climbs Mount Sharp. Earlier we chose drilling targets based on each site's special characteristics. Now that we're driving continuously through the thick basal layer of the mountain, a series of drill holes will build a complete picture." Four recent drilling sites, from "Oudam" this past June through "Sebina" in October, are each spaced about 80 feet (about 25 meters) apart in elevation. This uphill pattern allows the science team to sample progressively younger layers that reveal Mount Sharp's ancient environmental history. 
One clue to changing ancient conditions is the mineral hematite. It has replaced less-oxidized magnetite as the dominant iron oxide in rocks Curiosity has drilled recently, compared with the site where Curiosity first found lakebed sediments. "Both samples are mudstone deposited at the bottom of a lake, but the hematite may suggest warmer conditions, or more interaction between the atmosphere and the sediments," said Thomas Bristow of NASA Ames Research Center, Moffett Field, California. He helps operate the Chemistry and Mineralogy (CheMin) laboratory instrument inside the rover, which identifies minerals in collected samples. Chemical reactivity occurs on a gradient of chemical ingredients' strength at donating or receiving electrons. Transfer of electrons due to this gradient can provide energy for life. An increase in hematite relative to magnetite indicates an environmental change in the direction of tugging electrons more strongly, causing a greater degree of oxidation in iron. Another ingredient increasing in recent measurements by Curiosity is the element boron, which the rover's laser-shooting Chemistry and Camera (ChemCam) instrument has been detecting within mineral veins that are mainly calcium sulfate. "No prior mission has detected boron on Mars," said Patrick Gasda of the U.S. Department of Energy's Los Alamos National Laboratory, Los Alamos, New Mexico. "We're seeing a sharp increase in boron in vein targets inspected in the past several months." The instrument is quite sensitive; even at the increased level, boron makes up only about one-tenth of one percent of the rock composition. Boron is famously associated with arid sites where much water has evaporated away -- think of the borax that mule teams once hauled from Death Valley. However, environmental implications of the minor amount of boron found by Curiosity are less straightforward than for the increase in hematite. Scientists are considering at least two possibilities for the source of boron that groundwater left in the veins. Perhaps evaporation of a lake formed a boron-containing deposit in an overlying layer, not yet reached by Curiosity, then water later re-dissolved the boron and carried it down through a fracture network into older layers, where it accumulated along with fracture-filling vein minerals. Or perhaps changes in the chemistry of clay-bearing deposits, such as evidenced by the increased hematite, affected how groundwater picked up and dropped off boron within the local sediments. "Variations in these minerals and elements indicate a dynamic system," Grotzinger said. "They interact with groundwater as well as surface water. The water influences the chemistry of the clays, but the composition of the water also changes. We are seeing chemical complexity indicating a long, interactive history with the water. The more complicated the chemistry is, the better it is for habitability. The boron, hematite and clay minerals underline the mobility of elements and electrons, and that is good for life." Curiosity is part of NASA's ongoing Mars research and preparation for a human mission to Mars in the 2030s. Caltech manages JPL, and JPL manages the Curiosity mission for NASA's Science Mission Directorate in Washington.


News Article | February 3, 2016
Site: www.nature.com

No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during outcome assessment. The three study subjects were enrolled into a clinical protocol where treatment was started after we obtained peripheral blood by venepuncture, inguinal lymph nodes by excisional biopsy, and ileum and rectal biopsies through colonoscopy13. Two subjects were antiretroviral drug naive (1727 and 1679) and one subject (1774) had not received any drugs for at least 1 year before enrolment. Subjects 1679 and 1774 received emtricitabine (FTC), tenofovir (TFV; as the disoproxil fumarate (TDF)), and atazanavir with ritonavir (ATV/R). Subject 1727 received FTC, TDF and efavirenz (EFV). In all three subjects, conventional typing methods confirmed that the plasma virus was sensitive to the potent antiretroviral regimen that they received. Subjects 1727 and 1679 were well-suppressed patients. Subject 1774 continued to have measureable amounts of HIV-1 RNA in plasma after 3, but not 6 months of treatment. We obtained peripheral blood, inguinal lymph node, and terminal ileum and rectum biopsies once again at 3 and 6 months after starting treatment. Laboratory procedures for tissue management, in situ hybridization, and analytical pharmacology are described elsewhere13. The Institutional Review Board of the University of Minnesota approved the study. All subjects provided written informed consent. Nucleic acids were extracted from frozen cells obtained from blood or lymphoid tissue using the MasterPure Complete DNA and RNA Purification Kit (Epicentre). Viral RNA was isolated from plasma using the PureLink Viral RNA/DNA Mini Kit (Life Technologies). HIV-1 was quantified using a quantitative reverse transcription PCR assay. The relative amount of HIV-1 target-DNA was normalized to the quantification cycle for a concentration calibrator by using an external standard curve of serial tenfold dilutions of reference DNA for the Gag region of HIV-1 derived from the plasmid pNL-43. All reactions were performed in triplicate on the ABI 7900HT sequence detector (Applied Biosystems). As the number of viral templates in the sampled material is low, we used an amplicon-based deep sequencing strategy. To minimize biased amplification of the target sequence, primer locations in the Gag and Pol regions of HIV-1 were selected on the basis of the alignment positional entropy in the multiple sequences aligned from the Los Alamos National Laboratory HIV-1 sequence database (http://www.hiv.lanl.gov/). The primers were computationally screened for cross-dimer interactions and the concentration of each primer was optimized for amplification. For each sample, blanks were included to screen for contamination. We used designated, physically separated areas within the laboratory to set-up PCR, which avoided contact with potentially contaminating amplicons. To generate the long read-length PCR amplicons sequenced in this study (range, 509–587 bp read-lengths per run depending on the gene region being analysed), we amplified the Gag and Pol regions of HIV-1. For the Gag region of HIV-1, we used forward primer gag_632F_EK (5′-GCAGTGGCGCCCGAAC-3′ (corresponding to HXB2 nucleic acid sequence numbering positions 632–647) and reverse primer gag_1788R_EK (5′-AATAGTCTTACAATCTGGGTTCGC-3′ (1788–1765)). 
For the Pol region of HIV-1 that spanned the genomic region encoding the viral enzyme protease and reverse transcriptase, we used forward primer HIV-Pro1_2137F (5′-CAGAGCAGACCAGAGCCAAC-3′, corresponding to positions 2137–2156) and reverse primer HIV-RT1_3531R (5′-CTGCTATTAAGTCTTTTGATGGGTC-3′ (3531–3507)). PCR amplification was performed using the High Fidelity Platinum Taq DNA Polymerase (Invitrogen) with thermal cycling conditions of 94 °C for 2 min, followed by 35 cycles of 94 °C for 15 s, 54 °C for 15 s, 68 °C for 1 min, with a final extension step at 68 °C for 5 min. We used an integrated sequencing pipeline for library construction, template amplification, and DNA sequencing as in ref. 20. Multiplex Identifiers were included during library preparation for sample barcoding. For the Gag region of HIV-1, we used the forward primer A-Gag_977F_degEK 5′-primer A-GCTACAACCAKCCCTYCAGACAG-3′ (977–1000) and the reverse primer B-Gag_1564R_degEK 5′-primer B-CTACTGGGATAGGTGGATTAYKTG-3′ (1564–1541) to generate a 587 bp amplicon (977–1564). For the Pol region of HIV-1 that spanned the genomic region encoding the viral enzyme protease, we used forward primer A-Pol1_2235F 5′-primer-A-ACTGTATCCTTTAGCTTCCCTCA-3′ (2235–2262) and the reverse primer B-Pol1_2744R 5′-primer B-TTTCTTTATGGCAAATACTGGAG-3′ (2744–2721) to generate a 509 bp amplicon (2235–2744). For the Pol region of HIV-1 that spanned the genomic region encoding the viral enzyme reverse transcriptase, we used forward primer A-Pol2_2700F 5′-primer-A-GGGCCTGAAAATCCATACAAT-3′ (2700–2721) and the reverse primer B-Pol2_3265R 5′-primer-B-CATTTATCAGGATGGAGTTCATA-3′ (3265–3242) to generate a 565 bp amplicon (2700–3265). PCR was performed as detailed above. DNA amplicon libraries were resolved on a pre-cast 2% agarose gel and purified with QIAquick Gel Extraction kit (Qiagen) and AMPure XP SPRI beads (Beckman Coulter). Libraries were quantified with KAPA Library Quant Kit (Kapa Biosystems) on Agilent 2100 Bioanalyzer High Sensitivity DNA chip (Agilent) for concentration and size distribution. The concentration of the product DNA was normalized before pooling to achieve sequence uniformity across amplicons. Controls were spiked into the reaction to monitor the library construction process and potential index cross-contamination. The known internal control sequence (clonal sequence of 456 bp) introduced into the reaction was used to calculate the single nucleotide error rate and set the cut-off for the sequence analysis where no control errors could be detected. PCR errors are more common in later cycles of amplification, and being limited to small copy numbers they have little impact on the haplotype distribution. Bidirectional sequencing using the 454 Life Sciences' GS-FLX sequencing platform (Roche) provided independent confirmation of sequence information. The long read-length sequences were sorted based on index sequences, trimmed to remove residual adaptor bases from the ends of the reads, and filtered for length and duplicates before alignment. Read depth and coverage estimation that met predetermined coverage thresholds were performed as in ref. 20. Sequencing quality metrics were calculated for all samples using FastQC and only high-quality sequencing libraries were used in the ensuing analyses. Experimental precision along with deep coverage allows for accurate estimation of the underpinning diversity of the virus population. 
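As a quick check on the amplicon coordinates listed above, the read lengths quoted in the text follow from simple HXB2 position arithmetic (using the end-minus-start convention implied by the quoted figures). The short sketch below reproduces the 587, 509 and 565 bp values; it is illustrative only and assumes nothing beyond the coordinates given in the text.

```python
# HXB2 coordinates of the three sequencing amplicons described in the text.
amplicons = {
    "Gag (977-1564)": (977, 1564),
    "Pol/protease (2235-2744)": (2235, 2744),
    "Pol/reverse transcriptase (2700-3265)": (2700, 3265),
}

for name, (start, end) in amplicons.items():
    # Length convention implied by the quoted figures: end - start.
    print(f"{name}: {end - start} bp")  # prints 587, 509 and 565 bp
```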
We binned the sequence reads by multiplex identifier barcodes, and identified and excluded sequencing errors and misaligned regions from the analysis by computational methods. A small number of reads had a disproportionate number of errors that accounted for most of the inaccuracy in the full data set. After quality filtering and trimming to a uniform length, we proceeded to build the different haplotypes present in each of the samples. We began by collapsing the identical sequences into haplotypes using reference-guided assembly to avoid the use of uninformative sequence repeats. Haplotypes at prevalence above the error threshold (defined by the internal control spiked in the sequencing runs), which corresponded to variants present above 0.04% of the total existing variants in the collapsed alignments, were used in the analysis. For the processing of the temporally and spatially linked deep-sequencing data for studying viral diversity, the viral sequences were first aligned against the HXB2 reference sequence (GenBank accession number K03455) using Segminator II (version 0.1.1). We then generated a consensus viral sequence for each patient as a reference for assembly to improve the alignment quality. A statistical model that utilizes platform error rates in conjunction with patterns within nucleotide frequencies derived from data obtained from related samples (that is, different compartments within the host or temporally linked samples) was used to separate low frequency platform error from true variation. This model is termed ‘probabilistic read error detection across temporally obtained reads’ (PREDATOR) and is implemented within Segminator II. This statistical framework maintains the reading frame and corrects for deep sequencing errors19. We corroborated the number of haplotypes and the frequency of haplotypes that explain the data using a second reconstruction algorithm based on combinations of multinomial distributions to analyse the k-mer frequency spectrum of the sequencing data implemented with QuRe40. We screened the sequence alignments for recombinant sequences using the GARD algorithm implemented in HyPhy41, 42. The high coverage of massively parallel sequencing is necessary to ensure reliable detection of low-frequency viral variants. A simple calculation based on the geometric distribution shows that in order to guarantee that a viral template occurring at frequency f is detected with probability p or better, it is necessary to sequence at least log(1 − p)/log(1 − f) − 1 templates. A variant of frequency 0.01 (that is, 1%), for example, would require 450 sequences to ensure its detection at probability 0.99 (that is, 99%) or better. Conversely, a study using 100 single-genome sequences would detect a variant of frequency 0.01 with probability 0.64. A low number of input DNA templates derived from one compartment that catches the spillover from another does not account for the complexities of partial observation and spatial heterogeneity that could lead to measurement error elsewhere16. This complication emphasizes the challenge in trying to extrapolate from single-template sequencing the magnitude and character of the virus population that comprises the viral reservoir. Maximum-likelihood phylogenies were created with PhyML using the general time-reversible model with the proportion of invariant sites and gamma distribution of among-site rate variation (GTR + I + Γ ) nucleotide substitution model43 applying an approximate likelihood ratio test for branch support44. 
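The sampling-depth argument above is easy to verify numerically. Below is a minimal sketch of the geometric-distribution calculation exactly as quoted in the text (minimum templates of log(1 − p)/log(1 − f) − 1, and the corresponding detection probability); the printed values are consistent with the figures given in the text (roughly 450 templates for a 1% variant at 99% confidence, and a detection probability of about 0.64 for 100 single-genome sequences).

```python
import math

def templates_needed(f, p):
    """Minimum templates from the formula in the text: log(1 - p) / log(1 - f) - 1."""
    return math.log(1.0 - p) / math.log(1.0 - f) - 1.0

def detection_probability(f, n):
    """Detection probability for n templates under the same geometric convention:
    a variant at frequency f is seen with probability 1 - (1 - f)**(n + 1)."""
    return 1.0 - (1.0 - f) ** (n + 1)

# A variant at 1% frequency, detected with probability 0.99 or better:
print(f"templates needed: >= {templates_needed(0.01, 0.99):.0f}")

# Conversely, a study using 100 single-genome sequences:
print(f"P(detect 1% variant) = {detection_probability(0.01, 100):.2f}")  # ~0.64
```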
We estimated trees on viral sequence sets from which gaps in the alignment were removed and considered as missing data for the reason that maximum-likelihood tree error may increase with inclusion of unreliable sites. We assessed the temporal structure of the trees by performing linear regression on the root-to-tip distances of samples versus the time of sampling and tested the validity of the time-dependency of the evolution rate estimates with the assumption of a strict molecular clock using the program Path-O-Gen v1.4 (http://tree.bio.ed.ac.uk/software/pathogen/). We used the Highlighter sequence visualization tool (http://www.HIV.lanl.gov) to trace commonality between sequences in an alignment based on individual nucleotide changes. We tested for subdivision of viral sequences into sub-populations in the different compartments at each time point. We calculated genetic distances using the Wright’s F and S test statistics45, 46. We used a bootstrap test to determine the confidence of the estimates and performed a permutation test (1000 iterations) to assess the significance levels of the obtained scores. We used a modification of a random effects branch-site model to detect positive selection and test whether the phylogeny diverged over time47. A likelihood ratio hypothesis test compared the fit of the model using a 3-bin ω distribution (ω and ω in [0,1], ω unrestricted) to describe the evolution of all branches in the tree, to the fit of the model where ω is also restricted to be in [0,1]. We tested whether or not a proportion of sites along internal branches of the intra-host viral phylogeny have been subject to episodic selection (ω > 1)24, restricting the test to internal branches to lessen the biasing effects of neutral or deleterious mutations on ω estimates48, 49 and serve as a proxy for population level selection25, 26. To resolve the phyloanatomy, we reconstructed the temporal and spatial dynamics of the viral haplotype lineages with a Bayesian statistical framework using Markov chain Monte Carlo sampling for evolutionary hypothesis testing, as implemented in BEAST version 2.1.2 (ref. 27). This approach was used to sample phylogenies from their joint posterior distribution, in which the viral haplotypes are restricted by their known date of sampling, using a simple substitution model described by Hasegawa–Kishino–Yano (HKY) to avoid over-parameterization50. Models differing in assumptions on mutation rate and effective population size were run for 100 million generations each and compared using the Bayes factor as implemented in Tracer version 1.6. We determined that the best-fit model included a strict molecular clock and assumed a constant population size. We used a symmetric transition model with constant rates over time that considered a discretized diffusion process among the different compartments. This was formalized as a continuous time Markov chain model to reconstruct the spatial dynamics between compartments. All chains were run for sufficient length and convergence of the relevant parameters was assessed using Tracer version 1.6, ignoring 10% of the chain as burn-in. We summarized the connections between virus evolution and anatomical compartment history using an annotated MCC phylogenetic tree estimated with BEAST. The model and its parameters were chosen after computing the posterior probability of several models to obtain the discriminatory Bayes factors. 
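As a rough illustration of the likelihood-ratio comparison described above (the fit of the unrestricted 3-bin ω model versus the restricted one), the sketch below computes a likelihood-ratio-test p-value from two fitted log-likelihoods against a chi-squared reference distribution. The log-likelihood values and the degrees of freedom are invented placeholders, not values from the study, and the branch-site machinery itself (as implemented in HyPhy) is not reproduced here.

```python
from scipy.stats import chi2

def likelihood_ratio_test(lnL_null, lnL_alt, df):
    """Standard LRT: 2 * (lnL_alt - lnL_null) compared to a chi-squared
    distribution with df equal to the difference in free parameters."""
    stat = 2.0 * (lnL_alt - lnL_null)
    return stat, chi2.sf(stat, df)

# Hypothetical fits: the null model restricts the third omega bin to [0, 1];
# the alternative model leaves it unrestricted (one extra free parameter).
stat, p_value = likelihood_ratio_test(lnL_null=-10234.7, lnL_alt=-10228.9, df=1)
print(f"LRT statistic = {stat:.2f}, p = {p_value:.4f}")
```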
Because population structure, whether due to spatial segregation or limitations to gene flow, may affect evolutionary dynamics, we confirmed that the direction of flow was not due to oversampling of a particular environment, by running a two-deme Bayesian inference under a structured coalescent model with a HKY substitution model assuming a strict molecular clock29, which is less susceptible to sampling issues than our trait-based analysis. For completeness, we conducted a search for topologies and divergence times assuming a relaxed molecular clock as well. In the analyses performed, HIV-1 showed a high degree of clock-like evolution and a mean nucleotide substitution rate expected to be within the bounds necessary to obtain meaningful phyloanatomic information from sequence data. Using the location of each of the haplotypes, a discrete trait was included in the inference. We used BEAST to estimate the probabilities of each of the possible states. Standard descriptive statistics were performed with the use of the STATA, GraphPad, or R packages.


News Article | November 20, 2015
Site: news.mit.edu

One of the main challenges faced by the nuclear industry is the long-term confinement of nuclear waste. Concrete is one of the barrier materials commonly used to contain radionuclides, both in nuclear reactors and nuclear waste-storing facilities. For this reason, it is extremely important that researchers and industry professionals understand the chemical and structural stability of cement (the basic binding ingredient in concrete) containing radioactive materials. A new study by researchers from the MIT Concrete Sustainability Hub and the joint MIT-French National Center for Scientific Research (CNRS) lab known as Multi-Scale Materials Science for Energy and Environment (MSE2) is the first to show that cement is effective for nuclear containment of radioactive materials. The paper, coauthored by MIT postdoc Lucile Dezerald, visiting scientist Andres Saul, senior research scientist Roland J.-M. Pellenq, and Professor Franz-Josef Ulm, presents research that, for the first time, offers a quantum modeling of cement hydrate at the nano scale. “In short, what the research showed is that cement is a good choice for storing nuclear waste from the fission reaction in nuclear plants,” Pellenq says. One of the study’s key findings is that cement could be a good material to store radioactive strontium-90 and its daughter elements (yttrium and zirconium) from its radioactive beta decay. This could be highly relevant for improving waste storage conditions in accidental cases, such as the Fukushima Daiichi disaster in March 2011. The study may also have an impact on considerations around the decommissioning of nuclear plants. “We disconnect the plant from the grid, but the nuclear waste is still there,” Pellenq says, noting that the cores of decommissioned nuclear plants from the 1950s in the United Kingdom and France are still intact, as are cores at even older decommissioned plants in the United States. Knowing now that cement can effectively contain nuclear waste materials may buy researchers enough time to sort out the technology and lead to potential long-term solutions. This is the first nano-metric study using quantum physical chemistry to understand how cement is effective for the storage of intermediate half-life fission products. The paper’s other authors are: Jorge J. Kohanoff of the Atomistic Simulation Centre of Queen’s University Belfast; Alfredo A. Correa of the Lawrence Livermore National Laboratory; and Alfredo Caro of Los Alamos National Laboratory. The paper is available from the journal Environmental Science and Technology.


NASA's Curiosity rover, currently studying a layered Martian mountain for evidence of ancient environmental change, has zeroed in on wet environments and their evolution over billions of years. The new data indicate that the past environment on Mars was supportive of microbial life, though the exact time frame is still up for debate. These findings were presented by NASA and Los Alamos National Laboratory researchers at the American Geophysical Union meeting in San Francisco. "There is so much variability in the composition at different elevations, we've hit a jackpot," said John Grotzinger of Caltech in Pasadena, California. His science team presented an update on the mission at the meeting. The claim gained weight after the detection of boron in the ancient surface of Mars, and researchers are hopeful that the now-arid planet's ancient climate was indeed habitable. Boron is a telling signature of evaporated water on Earth and is found in many arid regions, including Death Valley on the California-Nevada border. This new finding by NASA's Curiosity rover at Mount Sharp, within Gale Crater, implies that ancient Martian groundwater was liquid and potentially habitable. This is leading scientists to the conclusion that Mars was conducive to microbial life billions of years ago. "If the boron that we found in calcium sulfate mineral veins on Mars is similar to what we see on Earth, it would indicate that the groundwater of ancient Mars would have been 32-140 degrees Fahrenheit with neutral-to-alkaline pH," said Patrick Gasda of Los Alamos National Laboratory in New Mexico. The robotic Curiosity rover has been exploring Gale Crater on Mars for 1,590 days since it landed on August 6, 2012. The scientists attributed the variations in mineral composition to the conditions under which the sediments were formed. One main factor is the movement of groundwater through the layers, which altered and transported the ingredients. The latest updates from Curiosity also report more hematite, boron and clay minerals in the uphill layers on which the rover is now focused. According to Grotzinger, the sedimentary basin is like a chemical reactor where elements get rearranged, new minerals form and electrons become redistributed. On Earth, these reactions can support life. Until now, there has been no evidence of Martian life, even though the Curiosity mission was sent to determine whether the Gale Crater area was ever conducive to microbial life. This was endorsed by Curiosity Deputy Project Scientist Joy Crisp of NASA's Jet Propulsion Laboratory in Pasadena. Crisp indicated that the choice of Gale Crater stemmed from its central mound, Mount Sharp, which offers varied geological layering and access to rocks from different environmental phases of Martian history. Crucial to the Curiosity mission was finding an ancient Martian lake environment that was rich in the chemical ingredients needed for life and offered a source of chemical energy. A clue to changing ancient conditions is that hematite has replaced magnetite as the dominant iron oxide in recently drilled rocks, which may suggest warmer conditions, noted Thomas Bristow of NASA Ames Research Center, Moffett Field. Regarding the increasing presence of boron, detected by the rover's ChemCam in calcium sulfate mineral veins, the scientists said boron is associated with aridity and signals evaporated water. Scientists see two possibilities for the source of the boron. One is that evaporation of a lake left a boron-containing deposit in an overlying layer.
The second is that boron dissolved in groundwater was carried down into older layers, where it accumulated in fracture-filling veins.


News Article | December 13, 2016
Site: www.eurekalert.org

Los Alamos, N.M., Dec. 13, 2016 - Boron has been identified for the first time on the surface of Mars, indicating the potential for long-term habitable groundwater in the ancient past. This finding and others from NASA's Curiosity rover science team will be discussed in a press conference today in San Francisco during the American Geophysical Union conference. "No prior mission to Mars has found boron," said Patrick Gasda, a postdoctoral researcher at Los Alamos National Laboratory. "If the boron that we found in calcium sulfate mineral veins on Mars is similar to what we see on Earth, it would indicate that the groundwater of ancient Mars that formed these veins would have been 0-60 degrees Celsius [32-140 degrees Fahrenheit] and neutral-to-alkaline pH." The temperature, pH, and dissolved mineral content of the groundwater could make it habitable. The boron was identified by the rover's laser-shooting Chemistry and Camera (ChemCam) instrument, which was developed at Los Alamos National Laboratory in conjunction with the French space agency. Los Alamos' work on discovery-driven instruments like ChemCam stems from the Laboratory's experience building and operating more than 500 spacecraft instruments for national defense. Boron is famously associated with arid sites where much water has evaporated away--think of the borax that mule teams once hauled from Death Valley. However, the environmental implications of the boron found by Curiosity are still open to debate. Scientists are considering at least two possibilities for the source of boron that groundwater left in the veins: It could be that the drying out of part of the lake in Gale Crater resulted in a boron-containing deposit in an overlying layer, not yet reached by Curiosity. Some of the material from this layer could have later been carried by groundwater down into fractures in the rocks. Or perhaps changes in the chemistry of clay-bearing deposits and groundwater affected how boron was picked up and dropped off within the local sediments. The discovery of boron is only one of several recent findings related to the composition of Martian rocks. Curiosity is climbing a layered Martian mountain and finding rock-composition evidence of how ancient lakes and wet underground environments changed, billions of years ago, in ways that affected their favorability for microbial life. As the rover has progressed uphill, compositions trend toward more clay and more boron. These and other variations can tell us about the conditions under which sediments were initially deposited and about how groundwater later moving through the accumulated layers altered and transported ingredients. Groundwater, and the chemicals dissolved in it, left their effects most clearly in mineral veins that filled cracks in older layered rock, but they also affected the composition of the rock matrix surrounding the veins, and the fluid was in turn affected by the rock. "There is so much variability in the composition at different elevations, we've hit a jackpot," said John Grotzinger, of Caltech, Pasadena, Calif. As the rover climbs farther uphill, researchers are impressed by the complexity of the lake environments when clay-bearing sediments were being deposited and also by the complexity of the groundwater interactions after the sediments were buried. "A sedimentary basin such as this is a chemical reactor," Grotzinger said. "Elements get rearranged. New minerals form and old ones dissolve. Electrons get redistributed.
On Earth, these reactions support life." Whether Martian life has ever existed is still unknown. No compelling evidence for it has been found. When Curiosity landed in Mars' Gale Crater in 2012, the mission's main goal was to determine whether the area ever offered an environment favorable for microbes. Four recent drilling sites, from "Oudam" this past June through "Sebina" in October, are spaced about 80 feet (about 25 meters) apart in elevation. This uphill pattern allows the science team to sample progressively younger layers that reveal Mount Sharp's ancient environmental history. "Variations in these minerals and elements indicate a dynamic system," Grotzinger said. "They interact with groundwater as well as surface water. The water influences the chemistry of the clays, but the composition of the water also changes. We are seeing chemical complexity indicating a long, interactive history with the water. The more complicated the chemistry is, the better it is for habitability. The boron and clay underline the mobility of elements and electrons, and that is good for life."
[Image caption: ChemCam target Catabola is a raised, resistant calcium sulfate vein with the highest abundance of boron observed so far. Remote micro images (inset) show the location of each individual ChemCam laser point and the boron chemistry associated with each point. The scale bar is 9.2 mm, or about 0.36 inches.]
Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, BWXT Government Group and URS, an AECOM company, for the Department of Energy's National Nuclear Security Administration. Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction and solving problems related to energy, environment, infrastructure, health and global security concerns.


News Article | November 9, 2015
Site: www.scientificamerican.com

Two decades ago, The Sciences, a now-defunct magazine, asked me to review Fire in the Mind by science journalist George Johnson (with whom I have jawed for the past decade on Bloggingheads.tv). Fire alarmed me, because it challenged a fundamental premise of The End of Science, which I was just finishing. “Do the patterns found by science hold some claim to universal truth,” Johnson’s book jacket asked, “or would a visitor from another galaxy find them as quaint and culturally determined, as built on faith, as the world's religion?” I strove mightily in my review to rebut Johnson’s view of science and defend my “naive realism” (as a friend describes it). I’m reprinting my review here, because Fire in the Mind has just been re-published with a new preface, and it remains the most eloquent expression of postmodernism I’ve encountered (which is why it’s one of my 25 favorite science books). George and I also talk about his book and my review on a new segment of Bloggingheads.tv. -- John Horgan In New Mexico, the mountains' naked strata compel historical, even geological, perspectives. The human culture, too, is stratified. Scattered in villages throughout the region are Native Americans such as the Tewa, whose creation myths rival those of modern cosmology in their intricacy. Exotic forms of Christianity thrive among both the Indians and the descendants of the Spaniards who settled here several centuries ago. In the town of Truchas, a sect called the Hermanos Penitentes seeks to atone for humanity's sins by staging mock crucifixions and practicing flagellation. Lying lightly atop these ancient belief systems is the austere but dazzling lamina of science. Slightly more than half a century ago, physicists at the Los Alamos National Laboratory demonstrated the staggering power of their esoteric formulas by detonating the first atomic bomb. Thirty miles to the south, the Santa Fe Institute was founded in 1985 and now serves as the headquarters of the burgeoning study of complex systems. At both facilities, some of the world's most talented investigators are seeking to extend or transcend current explanations about the structure and history of the cosmos. The mix of nature, ancient culture and contemporary science makes New Mexico an ideal place in which to contemplate the future of science. The precipitous pace of research in New Mexico and elsewhere at the close of the millennium has given rise, paradoxically, to two opposing views of the scientific enterprise. On the one hand, physicists such as the Nobel laureate Steven Weinberg of the University of Texas have proclaimed that researchers are converging on a "final theory," one that will encompass the basic laws governing the physical realm. On the other hand, many philosophers and others question whether science can ever arrive at the truth. Given the speed with which one theory has succeeded another, these skeptics argue, how can scientists ever be sure that their current theories are right? In this struggle between belief and disbelief, scientists have always had a potent ally: science writers. Most journalists covering science assume that the laws of the universe are "like veins of gold, and that scientists are extracting the ore," as George Johnson puts it. Now Johnson, a reporter for The New York Times, has veered away from the pack with a brilliant new book, one that raises unsettling questions about science's claims to truth. Fire in the Mind is on one level a conventional--albeit exceptionally well-executed--work of science journalism. 
Johnson provides an up-to-the-minute survey of the most exciting and philosophically resonant fields of modern research. This achievement alone would make his book worth reading. His accounts of particle physics, cosmology, chaos, complexity, evolutionary biology and related developments are both lyrical and lucid. They made me realize, somewhat to my consternation, how poorly I had grasped David Bohm's pilot-wave interpretation of quantum mechanics, or the links between information theory and thermodynamics. What sets Fire in the Mind apart from other science books is its profound questioning of such theories. Science can also be viewed as "a man-made edifice that is historical, not timeless," Johnson suggests, "one of many alternative ways of carving up the world." His reports on the latest attempts to explain the origin of life or of the entire universe alternate with descriptions of New Mexico's primordial terrain and of the religious views of the Penitentes and the Tewas. These theologies, Johnson notes, stem from the same human passion for order--and the same faith that such order exists--that quantum mechanics or the big bang theory do. To aliens from an advanced civilization, or to human scientists a thousand years hence, might not both sets of beliefs appear almost equally primitive, equally distant from The Truth (assuming that classical concept remains viable)? It is a testament to Johnson's rhetorical skills, and to his intimate knowledge of both science and philosophy, that he forces us to take such possibilities seriously. Fire in the Mind is a subversive work, all the more so because it is so subtle. Johnson's style is less polemical than poetic: he advances his position through analogy, implication, innuendo. That may be why previous reviewers of Fire in the Mind, including the evolutionary biologist Stephen Jay Gould, seem not to have appreciated just how serious an assault Johnson has mounted against the concept of objective knowledge. Johnson chips away at science's foundations with tools drawn from science itself. Physicists have demonstrated that even some apparently simple systems are chaotic; that is, minute perturbations of nature (the puff of the proverbial butterfly's wing in Iowa) can trigger a cascade of utterly unpredictable consequences (a monsoon in Indonesia). These arguments also apply to our own mental faculties. Neuroscientists often emphasize that the brain, far from being a perfect machine for problem solving, was cobbled together by natural selection out of whatever happened to be at hand. Johnson proposes that science, too, might be "a construction of towers that just might have been built another way." Although he never mentions The Structure of Scientific Revolutions, Johnson's view evokes the one set forth in 1962 by the philosopher Thomas Kuhn. In that book's coda, Kuhn compared the evolution of science to the evolution of life. Biologists, he noted, have banished the teleological notion that evolution advances toward anything--including the clever, featherless biped known as Homo sapiens. In the same way, Kuhn suggested, scientists should eschew the illusion that science is evolving toward a perfect, true description of nature. In fact, Kuhn asserted, neither life nor science evolve toward anything, but only away from something; science is thus as contingent, as dependent on circumstance, as life is. Johnson espouses a similar view, though more eloquently than Kuhn did. Fire in the Mind serves as a provocation rather than a definitive statement. 
It challenges readers to reconsider their assumptions about what is true, what merely imagined. It challenged this reader, at any rate. I first encountered Fire in the Mind while completing my own book, one that takes the quite different view that scientific truth is virtually at hand. Reading Johnson, I would often nod in agreement and then realize that in doing so I was violating one of my own convictions. Conversely, I would reflexively object to something he had written and then question my own position. Other readers will surely come to terms with Johnson's thesis in their own way. But unless they are radical relativists to begin with, they are unlikely to finish the book without undergoing a crisis of faith. My main disagreement with Johnson is that he plies his doubts too even-handedly. The inevitable result is that all theories, from the most empirically substantiated to those that are untestable even in principle, seem equally tentative. I also cannot accept Johnson's evolutionary, Kuhnian model of scientific progress. Just because the objects of science's scrutiny--including the human mind, from which science springs--are shaped by contingent factors does not mean that science is equally contingent. Natural selection has inclined us to see the world in terms of nouns and verbs, objects and actions, matter and energy. There may well be other ways to categorize reality, but this is the mode of perception we have been granted, and it has been remarkably successful. Given our perceptual predilections and the world in which we live, it seems inevitable that once we invented science we would stumble upon certain facts. The genetic code may be the product of contingent events, but its discovery seems inevitable. Astronomers have established that our sun is just one of many stars, our Milky Way one of billions of galaxies. It is a discovery, as irrefutable as the roundness of the earth, that all matter consists of substances called elements, all elements are made of atoms, and atoms are in turn composed of even smaller entities, the electrons, protons and neutrons. Surely Johnson accepts these findings as true. Or does he? Early on in his book, Johnson offers a refresher course on modern physics, beginning with general relativity and quantum mechanics and culminating in the phantasmagoria of superstring theory. While doing full justice to the power and beauty of these achievements, Johnson also suggests that modern physics may be in some sense an ingenious confabulation--an effective one, to be sure, but merely one of many possible alternatives. Time and again, Johnson points out, physicists postulated and then found a particle satisfying some theoretical requirement. "A theory requires a particle and there is a race to find it. The detector is built and then tuned and retuned until, lo and behold, the predicted effect is observed--the effect, not the particle itself, which might not live long enough to leave a track." Take the neutrino. Theorists first postulated its existence to ensure conservation of energy in certain forms of nuclear decay, and twenty-five years later experimenters found clear-cut evidence for it. In part because its mass and other properties have been so difficult to pin down, the neutrino now plays a central role in speculations about the constitution and ultimate fate of the universe. "Once accepted as real, neutrinos could be used to make sense of other phenomena," Johnson writes. 
"And so they became woven tighter and tighter into the mesh, the gauze of theory that lay between us and the nuclear world." What Johnson downplays is that some hypothesized particles and processes were never discovered. Theorists have lusted after magnetic monopoles, for instance, and proton decay, but experiments have failed to provide evidence for either one. Moreover, experiments have also uncovered countless phenomena that no one had anticipated, such as x-rays, radioactivity and superconductivity. Johnson further attempts to undermine the reader's faith in physics by reviewing ongoing efforts to make sense of quantum mechanics. One of the most puzzling of the notorious quantum paradoxes is the measurement problem. Quantum theory suggests that a particle such as an electron follows many different possible paths, all of which are embodied in the particle's so-called wave function. But the ambiguity suddenly ceases when the particle is measured; at that moment, the wave function "collapses" and the particle assumes one of its possible values. The measurement problem implies, unsettlingly, that the physical realm is defined in some sense by our perception of it. To solve that and other conundrums, some physicists are now attempting to recast quantum mechanics in the mold of information theory, a discipline created in the 1940s by the mathematician Claude Shannon. Information, as defined by Shannon, can be viewed as the capacity of a system to surprise an observer. Wojciech H. Zurek of Los Alamos and others have proposed that the apparent dichotomy between the observer and the object under observation can be dissolved if all physical processes are viewed as information rather than as strictly physical phenomena. But resolving the measurement problem in this way is analogous to solving the mind-body problem by assuming that all matter is a manifestation of mind. Information, after all, only exists if there are conscious information-processors to apprehend it. Johnson, although he seems sympathetic to the information-based paradigm, acknowledges its fatal flaw. "If the human race were wiped from the earth, the computers would keep going until the energy supplies ran down. But without an interpreter, could they really be said to be processing something called information?" The universe, as far as we know, was devoid of life until 4 billion years ago; intelligent, self-aware life, capable of creating science, may have existed for only the last split second of cosmic time. How, then, can information be a fundamental property of reality? Other interpretations of quantum mechanics also strain credulity, as Johnson shows. According to the many-worlds interpretation, for example, a quantum entity such as an electron actually follows all the paths allowed it by quantum mechanics--in separate universes. Worse yet, none of the most popular interpretations can be empirically differentiated. When physicists care to interpret at all (and many do not, preferring to retreat to instrumentalism), they choose one over the other primarily for aesthetic reasons. Johnson's real purpose in probing the philosophical underpinnings of quantum mechanics is to cast doubt on its status as an absolute truth, one that physicists have plucked directly from the Platonic realm. Have physicists discovered quantum mechanics, Johnson asks, or imagined it? Are other, perhaps more sensible, models of the microworld possible? 
One way to address such questions is to accept Johnson's own description of scientific theories as maps of nature. Road maps rely on certain conventions and formalisms, and so do the maps that scientists construct of an atomic nucleus, or the Milky Way, or the human genome. But the fact that these formalisms are invented does not make highways, neutrons, galaxies and genes equally imaginary. Formalisms, in and of themselves, may not always make much sense . Newton's formalism for gravity, for instance, smacked of the occult to his contemporaries. How can one thing possibly tug at another across vast distances of empty space? (In 1995, an editorial in Nature questioned whether Newton would have been able to publish his theory today, given its self-evident preposterousness.) But the important thing is that the formalism works; its truth is not threatened by mere weirdness. So it is with quantum mechanics--and with Einstein's formalisms. What is more shocking to common sense than curved space and compressible time? But those concepts work too, even better than Newton's version of gravity. The question is: Do those theories of gravity correspond to something real in nature? Are physicists reifying their theories when they speak of gravity as something real rather than invented? After all, we cannot see gravity in the same way that we can see neurons, or bats, or red giant stars; we can only infer gravity's existence. Nevertheless, I believe--and most scientists and even nonscientists believe--that there is something real in nature that both Newton's theory and Einstein's can be said to be about. Newton can justly be said to have discovered rather than invented gravity, although his mathematical formalism is itself an invention. One may legitimately ask, Are better mathematical and conceptual formalisms possible? Perhaps. But quantum mechanics and general relativity, the premier formalisms of modern physics, already describe the behavior of matter under all but the most extreme circumstances. These formalisms are so uncannily effective that they can be considered as virtual discoveries; that is, their creation seems to be in retrospect every bit as inevitable, and irrevocable, as our discernment of atoms, elements and galaxies. Johnson is quite right that physicists often become overly enamored of their formalisms. He justly questions whether concepts such as symmetry--which has become the guiding light of particle physics--represent truths in and of themselves or merely useful mathematical tools. An object is said to be symmetrical if it can undergo certain mathematical transformations--such as rotation around an axis--and remain essentially unchanged. It is symmetry that has allowed physicists to show that, just as electricity and magnetism are two aspects of a single force, so are electromagnetism and the weak nuclear force manifestations of a unified electroweak force. In the quest for similar platonic properties, some physicists have speculated that the cosmos had its beginnings in symmetry-rich particles called superstrings. These almost infinitely malleable loops of ur-stuff allegedly gave rise to all matter, forces, space and time. "In the beginning," Johnson writes, "was a world of mathematical purity that shattered to give birth to the world in which we find ourselves. How is this belief so different from the Fall from the Garden of Eden, or the emergence of the Tewa from the heavenly underworld?" Comparing the myths of the Tewa to superstring theory is not entirely unfair. 
Superstrings are thought to wriggle about in a microrealm more distant from human intervention, in practical terms, than the quasars lurking at the edge of the visible universe, and they exist not just in the four dimensions we inhabit (three spatial dimensions plus time) but in at least six extra ones as well. There is no direct empirical evidence supporting the existence of superstrings, and there almost certainly never will be. Probing the distance scales and energies where superstrings are thought to dwell would require a particle accelerator 1,000 light years around. On the issue of superstrings, Johnson and I are in complete agreement. Superstring theory seems to be a product not of empirical investigation of nature but of a kind of religious conviction about reality's symmetrical structure. Some particle physicists have the same view. Sheldon Glashow, one of the architects of the electroweak theory, has ridiculed superstring researchers as the "equivalents of medieval theologians." Such skepticism is essential. Without it, there can be no progress in science, or in the humanities or in any mode of knowledge. But just how skeptical should we be? When pushed too far, skepticism culminates in the ultimate conversation-stopper, solipsism. In a late-night discussion fueled by controlled substances, a college sophomore may suggest that he is nothing but a brain sitting in a vat of chemicals somewhere, programmed by alien scientists to imagine all his experiences. But as amusing as that assumption may be, it is untestable and therefore unscientific. Religious myths, in much the same way, are either demonstrably wrong or untestable. Geologists have proven beyond a reasonable doubt that the earth formed not 6,000 years ago (as some fundamentalists believe) but roughly 4.5 billion years ago. Scientists cannot prove that God does not exist (though as the biologist Richard Dawkins has argued, the concept of a benign God certainly seems contradicted by the evidence). But fundamentalists cannot show that God does exist, either. God is thus an unscientific concept. Johnson is well aware of all that, and yet he still asks us to consider that at some level science, too, may be as much a matter of faith as of reason. His doubts, moreover, are shared by many scientists. The physicist Freeman Dyson, for example, is fond of asserting that our current knowledge will seem as primitive to future scientists as the physics of Aristotle seems to us. Why do people so obviously knowledgeable and respectful--indeed, even worshipful--of science as Dyson and Johnson veer so close to relativism? My own theory is that the alternative, for them, is much worse: the end of science. They recognize that science, if it is capable of absolute truth, might soon become a victim of its own success. After all, researchers have already mapped out the entire universe, from the microrealm of quarks and electrons to the macrorealm of galaxies and quasars. Physicists have shown that all matter is composed of a few elementary particles ruled by a few basic forces. Scientists have also woven their knowledge into an impressive, albeit incomplete, narrative of how we came to be. The universe exploded into existence 15 billion years ago, give or take five billion years, and is still expanding. Some 4.5 billion years ago, the detritus of a supernova condensed into our solar system. During the next few hundred million years, single-celled organisms bearing an ingenious molecule called DNA emerged on this planet. 
These primordial microbes gave rise, by means of natural selection, to an extraordinary array of more complex creatures, including Homo sapiens. My guess is that this modern myth of creation will be as viable 100 or even 1,000 years from now as it is today. Why? Because it is true. In fact, given how far science has already come, and given the physical, social and cognitive limits constraining further research, science may have little to add to the knowledge it has so far bequeathed. To the extent that our current knowledge is true, it is that much more difficult to transcend. In 1965 Richard Feynman, prescient as always, prophesied that science would reach this impasse. "The age in which we live is the age in which we are discovering the fundamental laws of nature, and that day will never come again." After the great truths are revealed, Feynman continued, "there will be a degeneration of ideas, just like the degeneration that great explorers feel is occurring when tourists begin moving in on a new territory." As Johnson himself points out, science has left some rather important questions unanswered: How, exactly, did the universe begin? Could it have come into being in some other way? How inevitable was the emergence of life on earth and its subsequent evolution? Does life exist elsewhere in the universe? Efforts to answer those questions--and to transcend the received wisdom--have prompted some ambitious, creative scientists to vault past empirical studies, landing them in theories more akin to philosophy or art than to science in the conventional sense. If superstring theory and other untestable hypotheses signal the demise of true, empirical science, it is a tragedy for humanity. For all its achievements, science has been something of a disappointment. It has left us still plagued with warfare, disease, poverty, ethnic conflict and a multitude of lesser evils. It has crushed the superstitions that made life meaningful without offering us a comforting substitute. As Steven Weinberg himself once put it, "The more the universe becomes comprehensible, the more it seems pointless." Johnson, I suspect, would agree. For seekers such as him and Dyson, the possibility that science might be nearing closure is terrifying, because truth-seeking, not truth itself, is what makes life meaningful. The skeptical stance allows Johnson, Dyson and others to maintain hope that the great age of discovery is not over, that more revelations and revolutions lie ahead. They are willing to sacrifice the notion of absolute truth so that the truth can be sought forever. Whether or not these science-loving skeptics turn out to be right, books like Johnson's are precious. Wittgenstein wrote, in his prose-poem Tractatus Logico-Philosophicus: "Not how the world is, is the mystical, but that it is." The most exalted of all human emotions, Wittgenstein knew, consists of pure dumbfoundment before the mystery of existence. By pointing to all the inadequacies of science, to all the questions that it raises but leaves unanswered, Johnson has done his part to ensure that our sense of wonder does not soon vanish.


News Article | November 8, 2016
Site: www.prweb.com

Over 400 attendees from 20 countries gathered from October 16-20, 2016 at the Sheraton® San Diego in San Diego, CA, for the 2016 Laser Institute of America's International Congress on Applications of Lasers & Electro-Optics (ICALEO®). Gathering a highly engaged group of field veterans, new registrants and students, this year's ICALEO featured more than 200 presentations, 59 peer-reviewed talks, comprehensive biophotonics coverage, and the introduction of the new ICALEO mobile app. Returning Congress General Chair Silke Pflueger was back at the helm, helping compile the most highly rated Opening Plenary presentations delivered at an ICALEO conference thus far. "We worked so hard this year to ensure ICALEO 2016 once again exceeded all expectations," said Pflueger. "Our opening plenary session is a great example. From visiting Mars, to self-driving cars and a LIGO revisit, we inspired new outlooks and forged new relationships, which is what ICALEO is all about." Opening plenary speakers included Nina Lanza from Los Alamos National Laboratory, who linked humanity together in her discussion about the laser used aboard the Curiosity rover on Mars, and Jim McBride from Ford Motor Company, who talked about the challenges of sensing on fully autonomous vehicles. Albert Lazzarini, Deputy Director of the LIGO Laboratory at the California Institute of Technology, presented breaking results on the black holes revealed by the first gravitational waves detected by LIGO. This year's ICALEO also featured a variety of laser research and experimentation revelations, from the use of lasers in emerging areas, like paint stripping and dairy, to microprocessing and several new opportunities in wearables and medicine. Other highlights included LIA Executive Director Peter Baker's honor as the first recipient of the new LIA Leadership Award. Retiring next April, Baker commented on his meaningful career: "At LIA we're saving eyesight, preventing skin damage, and helping create laser technologies, products, and services that make the world a better place." The 2016 Arthur L. Schawlow Award was presented to Yongfeng Lu, the Lott Distinguished Professor of Engineering at the University of Nebraska-Lincoln, LIA Board Member, Past President, Treasurer and Fellow. LIA also honored Silke Pflueger and Neil Ball by elevating them to the highest level of membership as LIA Fellows. By unanimous decision, the first place ICALEO Poster Award went to Kohei Asano and his colleagues from Osaka University, the Industrial Research Institute of Ishikawa, and Yamazaki Mazak Corporation in Japan for their poster Copper Layer Formation Produced with 100W Blue Direct Diode Laser System, while the first place Student Paper Award winner was Christian Hagenlocher from IFSW in Stuttgart, Germany, for his paper Space and Time Resolved Determination of Thermomechanical Deformation Adjacent to the Solidification Zone during Hot Crack Formation in Laser Welding. As the 35th ICALEO ended, Neil Ball, newly honored LIA Fellow, called the breakthrough laser event "bar none, the best networking opportunity and the best opportunity to look forward and see what applications are on the horizon." Ken Dzurko, General Manager of SPI Lasers, said, "LIA does a great job creating a comfortable, relaxed mood right for exchanging ideas at this one-of-a-kind event that's really the world's premier gathering of scientists interested in laser applications." ICALEO 2016 proceedings are available for sale at http://www.lia.org/store. 
For more information on ICALEO 2017, to be held Oct 22-26 in Atlanta, GA, visit http://www.lia.org/conferences/icaleo. The Laser Institute of America (LIA) has been the professional society for laser applications and safety, serving the industrial, educational, medical, research and government communities throughout the world since 1968. http://www.lia.org, 13501 Ingenuity Drive, Ste 128, Orlando, FL 32826, +1.407.380.1553.


News Article | November 12, 2015
Site: www.scientificcomputing.com

PALO ALTO, CA — D-Wave Systems, the world's first quantum computing company, announced that Los Alamos National Laboratory will acquire and install the latest D-Wave quantum computer, the 1000+ qubit D-Wave 2X system. Los Alamos, a multidisciplinary research institution engaged in strategic science on behalf of national security, will lead a collaboration within the Department of Energy and with select university partners to explore the capabilities and applications of quantum annealing technology, consistent with the goals of the government-wide National Strategic Computing Initiative. The National Strategic Computing Initiative, created by executive order of President Barack Obama in late July, is intended "to maximize [the] benefits of high-performance computing (HPC) research, development, and deployment." "Eventually, Moore's Law (the prediction that the number of transistors on an integrated circuit doubles roughly every two years) will come to an end," said John Sarrao, associate director for Theory, Simulation, and Computation at Los Alamos. "Dennard Scaling (the prediction that performance per watt of computing would grow exponentially at roughly the same rate) already has. Beyond these two observations lies the end of the current 'conventional' computing era, so new technologies and ideas are needed." D-Wave's quantum annealing technology leverages quantum effects to quickly find the lowest points in a virtual "energy landscape." These low points correspond to optimal or near-optimal solutions to a given problem. In addition, D-Wave's superconducting processor generates almost no heat, so the system's power requirements, currently less than 25 kW, will remain low as the computer continues to scale. This is in stark contrast to current supercomputers, which can require many megawatts of power, a huge impediment as the need for computational resources continues to grow. "As conventional computers reach their limits in terms of scaling and performance per watt, we need to investigate new technologies to support our mission," said Mark Anderson of the Laboratory's Weapons Physics Directorate. "Researching and evaluating quantum annealing as the basis for new approaches to address intractable problems is an essential and powerful step, and will enable a new generation of forward thinkers to influence its evolution in a direction most beneficial to the nation." "Los Alamos is a global leader in high-performance computing and a pioneer in the application of new architectures to solve critical problems related to national security, energy, the environment, materials, health and earth science," said Robert "Bo" Ewald, president of D-Wave U.S. "As we work jointly with scientists and engineers at Los Alamos, we expect to be able to accelerate the pace of quantum software development to advance the state of algorithms, applications and software tools for quantum computing." The D-Wave 2X system is expected to be installed at Los Alamos in early 2016.
About D-Wave Systems
D-Wave Systems is the first quantum computing company. Its mission is to integrate new discoveries in physics, engineering, manufacturing and computer science into breakthrough approaches to computation to help solve some of the world's most complex challenges. The company's quantum computers are built using a novel type of superconducting processor that uses quantum mechanics to massively accelerate computation.
D-Wave's customers include some of the world's most prominent organizations including Lockheed Martin and Google, whose system is hosted at NASA's Ames Research Center. With headquarters near Vancouver, Canada, D-Wave U.S. is based in Palo Alto, California. D-Wave has a blue-chip investor base including Bezos Expeditions, BDC Capital, DFJ, Goldman Sachs, Growthworks, Harris & Harris Group, In-Q-Tel, International Investment and Underwriting, and Kensington Partners Limited.
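To make the "energy landscape" picture above concrete, here is a minimal classical sketch, in Python, of the kind of problem an annealer is asked to solve: a small Ising model whose lowest-energy spin configuration encodes the answer to an optimization problem. The coefficients are invented for illustration and the search is brute force rather than annealing; nothing here uses D-Wave's actual hardware or software interfaces.

    # A toy Ising "energy landscape": E(s) = sum_i h_i*s_i + sum_(i<j) J_ij*s_i*s_j,
    # with each spin s_i in {-1, +1}. The h and J values below are made up.
    import itertools

    h = {0: 0.5, 1: -0.3, 2: 0.2, 3: 0.0}
    J = {(0, 1): -1.0, (1, 2): 0.8, (2, 3): -0.6, (0, 3): 0.4}

    def energy(spins):
        """Ising energy of one spin configuration."""
        e = sum(h[i] * s for i, s in enumerate(spins))
        e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
        return e

    # Four spins are few enough to check every configuration; an annealer is aimed
    # at problems far too large for this kind of exhaustive search.
    best = min(itertools.product((-1, 1), repeat=4), key=energy)
    print("lowest-energy configuration:", best, "energy:", round(energy(best), 3))

The low points of this energy function are exactly the "optimal or near-optimal solutions" the article refers to; scaling the same search to thousands of coupled variables is what motivates specialized hardware.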


The star is one that pulsates and so is characterized by varying brightness over time. It's situated 7,000 light years away from the Earth in the constellation Pegasus, said astronomer Farley Ferrante, a member of the team that made the discovery at Southern Methodist University, Dallas. Called a variable star, this particular star is one of only seven known stars of its kind in our Milky Way galaxy. "It was challenging to identify it," Ferrante said. "This is the first time we'd encountered this rare type." The Milky Way has more than 100 billion stars. But just over 400,900 are catalogued as variable stars. Of those, a mere seven—including the one identified at SMU—are the rare intrinsic variable star called a Triple Mode 'high amplitude delta Scuti' (pronounced SKOO-tee) or Triple Mode HADS(B), for short. "The discovery of this object helps to flesh out the characteristics of this unique type of variable star. These and further measurements can be used to probe the way the pulsations happen," said SMU's Robert Kehoe, a professor in the Department of Physics who leads the SMU astronomy team. "Pulsating stars have also been important to improving our understanding of the expansion of the universe and its origins, which is another exciting piece of this puzzle." The star doesn't yet have a common name, only an official designation based on the telescope that recorded it and its celestial coordinates. The star can be observed through a telescope, but identifying it was much more complicated. A high school student in an SMU summer astronomy program made the initial discovery while culling through archived star observation data recorded by the small but powerful ROTSE-I telescope formerly at Los Alamos National Laboratory in New Mexico. Upon verification, the star was logged into the International Variable Star Index as ROTSE1 J232056.45+345150.9 by the American Association of Variable Star Observers. How in the universe was it discovered? SMU's astrophysicists discovered the variable star by analyzing light curve shape, a key identifier of star type. Light curves were created from archived data procured by ROTSE-I during multiple nights in September 2000. The telescope's camera converts the optical light it collects into electrical signals that track the intensity of the source. Data representing light intensity versus time are then plotted to create the light curves. Plano Senior High School student Derek Hornung first discovered the object in the ROTSE-I data and prepared the initial light curves. From the light curves, the astronomers knew they had something special. It became even more challenging to determine the specific kind of variable star. Then Eric Guzman, a physics graduate from the University of Texas at Dallas, who is entering SMU's graduate program, solved the puzzle, identifying the star as pulsating. "Light curve patterns are well established, and these standard shapes correspond to different types of stars," Ferrante said. "In a particular field of the night sky under observation there may have been hundreds or even thousands of stars. So the software we use generates a light curve for each one, for one night. Then—and here's the human part—we use our brain's capacity for pattern recognition to find something that looks interesting and that has a variation. This allows the initial variable star candidate to be identified. From there, you look at data from several other nights. 
We combine all of those into one plot, as well as add data sets from other telescopes, and that's the evidence for discerning what kind of variable star it is." That was accomplished conclusively during the referee process with the Variable Star Index moderator. The work to discover and analyze this rare variable star was carried out in conjunction with analyses by eight other high school students and two other undergraduates working on other variable candidates. The high school students were supported by SMU's chapter of the Department of Energy/National Science Foundation QuarkNet program. Of the stars that vary in brightness intrinsically, a large number exhibit amazingly regular oscillations in their brightness, which is a sign of some pulsation phenomenon in the star, Ferrante said. Pulsation results from expanding and contracting as the star ages and exhausts the hydrogen fuel at its core. As the hydrogen fuel burns hotter, the star expands, then cools, then gravity shrinks it back, and contraction heats it back up. "I'm speaking very generally, because there's a lot of nuance, but there's this continual struggle between thermal expansion and gravitational contraction," Ferrante said. "The star oscillates like a spring, but it always overshoots its equilibrium, doing that for many millions of years until it evolves into the next phase, where it burns helium in its core. And if it's about the size and mass of the sun—then helium fusion and carbon is the end stage. And when helium is used up, we're left with a dying ember called a white dwarf." Within the pulsating category is a class of stars called delta Scuti, of which there are thousands. They are named for a prototype star whose characteristic features—including short periods of pulsating on the scale of a few hours—are typical of the entire class. Within delta Scuti is a subtype of which hundreds have been identified, called high amplitude delta Scuti, or HADS. Their brightness varies to a particularly large degree, registering a more than 10 percent difference between their minimum and maximum brightness, indicating larger pulsations. Common delta Scuti pulsate radially, expanding and contracting uniformly like a balloon being inflated and deflated. A smaller sub-category are the HADS, which show asymmetrical pulsation curves. Within HADS, there is the relatively rare subtype called HADS(B), of which only 114 have been identified. A HADS(B) is distinguished by its two modes of oscillation, with different parts of the star expanding at different rates and in different directions, but the ratio of those two periods is always the same. For the SMU star, two modes of oscillation weren't immediately obvious in its light curve. "But we knew there was something going on because the light curve didn't quite match known light curves of other delta Scuti and HADS objects we had studied. The light curves—when laid on top of each other—presented an asymmetry," Ferrante said. "Ultimately the HADS(B) we discovered is even more unique than that though—it's a Triple Mode HADS(B) and there were previously only six identified in the Milky Way. So it has three modes of oscillation, all three with a distinct period, overlapping, and happening simultaneously." So rare, in fact, there's no name yet for this new category nor a separate registry designation for it. Guzman, the student researcher who analyzed and categorized the object, recalled how the mystery unfolded. "When I began the analysis of the object, we had an initial idea of what type it could be," Guzman said. 
"My task was to take the data and try to confirm the type by finding a second period that matched a known constant period ratio. After successfully finding the second mode, I noticed a third signal. After checking the results, I discovered the third signal coincided with what is predicted of a third pulsation mode." The SMU Triple Mode HADS(B) oscillates on a scale of 2.5 hours, so it will expand and contract 10 times in one Earth day. It and the other known six HADS(B)'s are in the same general region of the Milky Way galaxy, within a few thousand light years of one another. "I'm sure there are more out there," Ferrante said, "but they're still rare, a small fraction." SMU's Triple Mode HADS(B) is unstable and further along in its stellar evolution than our sun, which is about middle-aged and whose pulsating variations occur over a much longer period of time. SMU's Triple Mode HADS(B) core temperature, heated from the burning of hydrogen fuel, is about 15 million Kelvin or 28 million degrees Fahrenheit. Someday, millions of years from now, SMU's Triple Mode HADS(B) will deplete the hydrogen fuel at its core, and expand into a red giant. "Our sun might eventually experience this as well," Ferrante said. "But Earth will be inhospitable long before then. We won't be here to see it."


News Article | February 21, 2017
Site: www.theenergycollective.com

NucNet: Private companies have invested over $1bn in the development of small modular reactors (SMRs), but more investment through public-private partnerships is needed to ensure that SMRs are a viable option in the mid-2020s, the US-based SMR Start industry consortium said in a policy statement. SMR Start is urging Congress to authorize sufficient funds for an SMR commercial deployment program and is calling for the continuation and expansion of the existing licensing technical support program to include the design and engineering, regulatory review and approval of SMR technologies and facilities. The statement said that in addition to accomplishing the public benefit from SMR deployment, the federal government would receive a return on investment through taxes associated with investment, job creation and economic output over the lifetime of the SMR facilities that would otherwise not exist without federal investment. It also called for continuation of the loan guarantee program to support financing for the design and construction of SMR facilities and SMR component manufacturing facilities. The policy statement is online: http://bit.ly/2kQM1vG Members of the consortium include AREVA, Bechtel, BWXT, Dominion, Duke Energy, Energy Northwest, Fluor, Holtec International, NuScale Power, Ontario Power Generation, PSEG Nuclear, Southern Nuclear, TVA, and UAMPS.
Plant Vogtle could use new fuel when MOX is complete
Augusta Chronicle: Proponents of the Savannah River Site’s mixed-oxide (MOX) fuel fabrication facility are touting the possibility that once in production, its output, PWR-type fuel assemblies in the form of MOX fuel, could be used at Units 3 and 4 at Southern Nuclear’s Plant Vogtle. According to Areva Nuclear Materials LLC, one of the companies involved in MOX design and construction, the Energy Department planned for a subsidized cost structure to make the MOX fuel more attractive. Plant Vogtle, which could have four operating reactor units when the MOX plant is finished, could make the switch to the mixed-oxide fuel. Up to one-third of the reactor’s fuel assemblies could be MOX fuel. The advantage of the fuel is that it allows longer periods between refueling outages, the times when the reactor is not generating electricity or making any money for its investors. “In order to accommodate the potential use of MOX fuel, modifications would be required for the plant’s physical structure, as well as the processes and procedures used to operate the facility.” He might have also added that the plant would have to modify the NRC license for each reactor based on the change in fuel type. The agency has no experience with this kind of modification of a license. AP: A federal appeals court has rejected a Virginia company’s bid to end the state’s decades-long ban on uranium mining. A panel of the 4th U.S. Circuit Court of Appeals in Richmond has upheld the ruling of a district judge who threw out a lawsuit from Virginia Uranium Inc. challenging the ban. The Pittsylvania County company wants to mine a 119-million-pound deposit of the radioactive ore. It argued that a federal law should pre-empt state regulations, but the courts disagreed.
Does India still want the Westinghouse reactors despite Toshiba meltdown?
Reuters, PTI: In a burst of what can charitably be characterized as wishful thinking, an Indian government official said he does not expect fallout from the financial meltdown at Toshiba Corp to halt plans to buy six nuclear reactors from the Japanese company’s U.S. nuclear unit Westinghouse. 
Indian wire services added details to the report. This statement was not made by NPCIL, which is the main actor in all matters related to building new nuclear power plants. This raises the question of how credible the statement is as an expression of the Indian government’s views on Toshiba’s financial troubles. India has been in talks for years to build six Westinghouse AP1000 reactors in the southern state of Andhra Pradesh under its drive to expand nuclear generation and to move the economy off polluting fuels like coal. “As for the technical execution of the project, I do not see many problems,” Sekhar Basu, secretary of the Department of Atomic Energy, which reports directly to Prime Minister Narendra Modi, told Reuters in a short telephone interview. The wire service reported that negotiations on the technical and commercial terms of the reactor deal have reached an advanced stage. Not mentioned in the report is any measure of relief from the terms of the supplier liability law that has kept U.S. firms out of the Indian market. Industry experts said that, if the project is still at all viable, the main logistical challenge would be to locate civil engineering contractors, since Westinghouse would only provide the reactors. India has not yet signed a contract with Westinghouse, nor has cash changed hands. Basu said that talks on financing had not yet begun in earnest. Significantly, he also said the state-owned Nuclear Power Corporation of India (NPCIL) had yet to be updated by Westinghouse on recent developments. Westinghouse and NPCIL did not respond to requests for comment from wire service reporters. Separately, there is considerable pressure within India’s nuclear industry to abandon the Westinghouse reactor project and build 700 MW PHWR reactors based on an Indian adaptation of the CANDU technology. Further, an amendment to the enabling legislation for NPCIL allows it to pursue joint development efforts with heavy industry, such as steel and petrochemicals, and even to provide electricity for India’s vast electrified rail network. This is seen as an advantage because it would build domestic capabilities and supply chains rather than leave India relying on western technology. Albuquerque Journal: WIPP said it expects to begin accepting shipments of nuclear waste from storage sites around the country in April. Feb 14th marked three years since a radiation accident contaminated the Waste Isolation Pilot Plant outside Carlsbad. After struggling to clean up the deep underground repository, WIPP commenced waste emplacement last month. WIPP has been moving waste drums underground from an above-ground warehouse, where waste was being temporarily held when a drum of radioactive material burst underground on Feb. 14, 2014, and WIPP was shut down. WIPP has started off slowly, making just two emplacements per week from the waste handling building, according to a spokesman. When shipments begin, the facility is aiming to make about four emplacements per week by the end of the year, compared with an average of 17 per week before the accident. Los Alamos National Laboratory is on the list of those sites expected to begin shipping in April, along with Department of Energy facilities at Idaho, Oak Ridge and Savannah River. LANL faced its own issues after investigators discovered that the drum that burst at WIPP had been improperly packed by a LANL subcontractor. 
The Idaho Falls Post Register reported that the Idaho Cleanup Project will send an estimated 61 shipments of radioactive waste to New Mexico’s Waste Isolation Pilot Plant over the next year, more than any other site, U.S. Department of Energy officials said. While Idaho cleanup contractor Fluor Idaho will send more than twice as many shipments as any other site, that will not be nearly enough to make the necessary progress toward meeting a deadline set under the 1995 Settlement Agreement with the state of Idaho. Idaho has more than 900 shipments — or more than 20,000 individual containers — of transuranic waste that are supposed to leave by the end of next year.


News Article | February 15, 2017
Site: cerncourier.com

Tom Dombeck, an innovative and versatile physicist and project manager, passed away in Kāneʻohe, Hawaii, on 4 November 2016. His legacy includes many measurements in particle physics, the development of new techniques for the production of ultra-cold neutrons and substantial contributions to the management of several major scientific projects. Tom received a BA in physics from Columbia University in 1967 and a PhD in particle physics from Northwestern University in 1972, and his career saw him hold prominent roles at numerous institutes. He was a research associate at Imperial College London from 1972 to 1974 and visiting scientist at Dubna in the former USSR in 1975. Following six years at the University of Maryland, from 1981 to 1988 Tom held various roles at Los Alamos National Laboratory (LANL) after which he spent a year working in the US Department of Energy in the office of the Superconducting Supercollider (SSC). Afterwards he became a staff physicist and ultimately deputy project manager for operations at the SSC laboratory in Texas, where he led the successful “string test”. In 1994 he moved to a role as project manager for the Sloan Digital Sky Survey at the University of Chicago. Tom was deputy head for the technical division at Fermilab from 1997 to 1999, and project manager for the Next Linear Collider project at Fermilab between 2000 and 2002. From 2003 to 2006 he was project manager for the Pan-STARRS telescope at the University of Hawaii and an affiliated graduate faculty member there until 2016. Tom began his scientific research with bubble chambers and was a key participant in the experiment that observed the first neutrino interaction in a hydrogen filled bubble chamber in 1970 at the ZGS at Argonne National Laboratory. For many years he pursued measurements of the electric dipole moment (EDM) of the neutron and was also involved in the development of ultra-cold neutrons by Doppler shifting at pulsed sources. He proposed a new method for a neutron EDM measurement that involved Bragg scattering polarised neutrons from a silicon crystal and led an initial effort at the Missouri University Research Reactor, after which he initiated an experiment using the reactor at the NIST Center for Neutron Research. While at LANL, Tom led a neutrino-oscillation search that involved constructing a new beamline and neutrino source at LAMPF and provided improved limits on muon-neutrino to electron-neutrino oscillations. He carried these fundamental physics interests and abilities to his later work as a highly effective scientific programme manager. Tom was able to see the connections between disparate scientific areas and bring together new ideas and approaches that moved the field forwards. He could inspire people around him with his enthusiasm and kindness, and his wry sense of humour and wicked smile were trademarks that will long be remembered by his friends and colleagues. Tom was a devoted family man and is missed greatly by his wife Bonnie, his two children, Daniel and Heidi, and his four grandchildren. Sidney David Drell, professor emeritus of theoretical physics at SLAC National Accelerator Laboratory, senior fellow at Stanford’s Hoover Institution and a giant in the worlds of both academia and policy, died on 21 December 2016 at his home in Palo Alto, California. He was 90 years old. Drell made immense contributions to his field, including uncovering a process that bears his name and working on national and international security. 
His legacy as a humanitarian includes his friendship and support of Soviet physicist and dissident Andrei Sakharov, who won the Nobel Peace Prize in 1975 for his opposition to the abuse of power in the Soviet Union. Drell was also known for his welcoming nature and genuine, albeit perhaps unwarranted, humility. Drell’s commitment to arms control spanned more than 50 years. He served on numerous panels advising the US Congress, the intelligence community and the military. He was an original member of JASON, a group of academic scientists created to advise the government on national security and defence issues, and from 1992 to 2001 he was a member of the President’s Foreign Intelligence Advisory Board. He was also the co-founder of the Center for International Security and Cooperation at Stanford, and in 2006 he and former Secretary of State George Shultz began a programme at the Hoover Institution dedicated to developing practical steps towards ridding the world of nuclear weapons. In 1974, Drell met Sakharov at a conference hosted by the Soviet Academy of Sciences and they quickly became friends. When Sakharov was internally exiled to Gorky from 1980 to 1986 following his criticism of the Soviet invasion of Afghanistan, Drell exchanged letters with him and called on Soviet leader Mikhail Gorbachev for his release. He also organised a petition to allow another Soviet physicist and dissident, Nohim Meiman, to emigrate to Israel, and obtained the signatures of 118 members of the US National Academy of Sciences. Having graduated with a bachelor’s degree from Princeton University in 1946, Drell earned a master’s degree in 1947 and a PhD in physics in 1949 from the University of Illinois, Urbana-Champaign. He began at Stanford in 1950 as an instructor in physics, leaving to work as a researcher and assistant professor at the Massachusetts Institute of Technology and then returning to Stanford in 1956 as a professor of physics. He served as deputy director of SLAC from 1969 until his retirement from the lab in 1998. Drell’s research was in the fields of quantum electrodynamics and quantum chromodynamics. While at SLAC, he and research associate Tung-Mow Yan formulated the famous Drell–Yan process, which has become an invaluable tool in particle physics. His theoretical work was critical in setting SLAC on the course that it took. As head of the SLAC theory group, Drell brought in a host of younger theoretical physicists who began creating the current picture of the structure of matter. He played an important role in developing the justification for experiments and turning the results into what became the foundation of the Standard Model of particle physics. For his research and lifetime of service to his country, Drell received many prestigious awards, including: the National Medal of Science; the Enrico Fermi Award; a fellowship from the MacArthur Foundation; the Heinz Award for contributions in public policy; the Rumford Medal from the American Academy of Arts and Sciences; and the National Intelligence Distinguished Service Medal. Drell was one of 10 scientists honoured as the founders of satellite reconnaissance as a space discipline by the US National Reconnaissance Office. He was elected to the National Academy of Sciences, the American Academy of Arts and Sciences and the American Philosophical Society, and was president of the American Physical Society in 1986. Drell was also an accomplished violinist who played chamber music throughout his life. 
He is survived by his wife, Harriet, and his children, Daniel, Virginia, Persis and Joanna. Persis Drell, a former director of SLAC who is also a physicist at Stanford and dean of the School of Engineering, will be the university’s next provost. • Based, with permission, on the obituary published on the Stanford University website on 22 December 2016. Mambillikalathil Govind Kumar Menon, a pioneer in particle physics and a distinguished statesman of science, passed away peacefully on 22 November at his home in New Delhi, India. He graduated with a bachelor of science from Jaswant College, Jodhpur, in 1946, and, inspired by Chandrasekhara Venkata Raman, studied under the tutelage of spectroscopist Nanasaheb R Tawde before joining Cecil Powell’s group at the University of Bristol, UK, in 1949. Menon’s first important contribution was to establish the bosonic character of the pion through a study of fragments emerging from π-capture by light nuclei. He then focused his attention on the emerging field of K-meson physics. Along with his colleagues at Bristol, notably Peter Fowler, Cecil Powell and Cormac O’Ceallaigh, Menon discovered K+ → π+ π0 and K+ → π+ π– π+ events in nuclear emulsion indicating parity non-conservation (the τ–θ puzzle). He also identified a sizeable collection of events showing the associated production of kaons and hyperons. In 1955 Menon joined the Tata Institute of Fundamental Research (TIFR), where he worked on cosmic-ray research programmes initiated by Homi Bhabha. Following Bhabha’s death in an air crash over Mont Blanc in 1966, the responsibility of the directorship of TIFR fell squarely on his shoulders, along with the wide-ranging initiatives for national development that Bhabha had started. Notwithstanding these additional demands on his time, his focus on particle physics never wavered. He continued with his research, establishing a collaboration with Arnold W Wolfendale at the University of Durham, UK, and Saburo Miyake of Osaka City University, Japan, for the study of particle physics with detectors deployed deep underground; he detected events induced by cosmic-ray neutrino interactions; and he also launched a dedicated effort to test the early predictions of violation of baryon-number conservation leading to proton decay. During his Bristol years, Menon established a close friendship with William O Lock, who had moved to CERN in 1959. This facilitated collaboration between TIFR and CERN, leading to the development of bubble-chamber techniques to study mesons produced in proton–antiproton collisions. These initial studies eventually led to highly successful collaborations between Indian researchers and the L3 experiment at LEP, and the CMS, ALICE and ATLAS experiments at the LHC. Menon won several awards including the Cecil F Powell and C V Raman medals, and was elected to the three scientific academies in India. He was elected as a fellow of the Royal Society in 1970, and subsequently to the Pontifical Academy of Sciences, the American Academy of Arts and Sciences, the Russian Academy of Sciences and as an honorary fellow of the Institute of Physics and the Institution of Electrical & Electronics Engineers. He also served two terms as president of the International Council of Scientific Unions, and stimulated its participation in policy issues, including climate change. 
Menon held a firm conviction that science can bring about technological development and societal progress, which motivated him to work with Abdus Salam in founding the Third World Academy of Sciences. He held several high positions in the Indian government, and thus contributed to the growth of science and technology in India. Alongside his scientific achievements, M G K Menon was also very close to his wife Indumati and their two children Preeti and Anant Kumar. Our warmest thoughts go out to them and to innumerable others whose lives he touched in so many important ways. Helmut Oeschler, an active member of the ALICE collaboration, passed away from heart failure on 3 January while working at his desk. Born in Southern Germany, he received his PhD from the University of Heidelberg in 1972 and held postdoc positions at the Niels Bohr Institute in Copenhagen, and in Strasbourg, Saclay and Orsay in France. From 1981 he was at the Institute for Nuclear Physics of TU Darmstadt. He held a Doctorate Honoris Causa from Dubna University, Russia, and in 2006 he received the Gay-Lussac-Humboldt prize. Oeschler’s physics interests concerned the dynamics of nuclear reactions over a broad energy range, from the Coulomb barrier to ultra-relativistic collisions. He was a driving force for building the kaon spectrometer at the GSI in Darmstadt, which made it possible to measure strange particles in collisions of heavy nuclei. From the late 1990s he was actively involved in addressing new aspects of equilibration in relativistic nuclear reactions. Oeschler became a member of the ALICE collaboration at CERN in 2000 and made important contributions to the construction of the experiment. Together with his students, he was involved in developing track reconstruction software for measuring the production of charged particles in lead–lead collisions at the LHC. He also led the analysis efforts for the measurements of identified charged hadrons in the LHC’s first proton–proton collisions. From 2010 to 2014 he led the ALICE editorial board, overseeing the publication of key results relating to quark-gluon matter at the highest energy densities. His deep involvement in the data analysis and interpretation continued unabated and he made important contributions to several research topics. Advising and working in close collaboration with students was a much loved component of Helmut’s activity and was highly appreciated among the ALICE collaboration. Helmut Oeschler was a frequent visitor of South Africa and served there on numerous international advisory committees. He was instrumental in helping the South African community develop the physics of heavy-ion collisions and collaboration with CERN. With Helmut Oeschler we have lost an internationally renowned scientist and particular friend and colleague. His scientific contributions, especially on the production of strange particles in high-energy collisions, are important achievements.


News Article | December 16, 2016
Site: phys.org

Officials outlined the findings of the recent review during a public forum and reiterated their intended goal of resuming limited operations at the Waste Isolation Pilot Plant by the end of the month. Some in the audience expressed skepticism after hearing the laundry list of issues that the Energy Department and the manager of the underground facility need to address before reopening. Those 21 findings range from simple paperwork and documentation issues to procedural inadequacies that involve new requirements for accepting waste from national laboratories and other defense sites around the country. Another 15 findings have been identified that can be addressed over time as operations resume, such as an aging public address system that warns workers of emergencies. It's been nearly three years since workers at the repository have handled any containers of waste. That's because a chemical reaction inside an inappropriately packed container resulted in a radiation release and forced the facility's indefinite closure. Investigators determined the mishap would have been preventable had policies and procedures been followed. The repository and Los Alamos National Laboratory, where the container was packed, were cited for numerous violations by state regulators. Officials have yet to publicly release the readiness report, but they said corrective action plans have been developed for all of the findings and that six of the issues have already been taken care of. "Again, safety is a priority and doing it right is a priority for us," said Todd Shrader, head of the Energy Department's field office in Carlsbad, New Mexico. "If it takes a little bit beyond that then we will take the time, but our goal does still remain December and we are at the last steps of the readiness process." Watchdog groups have been critical of the Energy Department, saying the agency shouldn't rush given the critical role the facility plays in the nation's multibillion-dollar cleanup program and the implications if something were to go wrong again. "The circumstances surrounding reopening WIPP need to be considered fully by the public prior to the reopening," the groups Concerned Citizens for Nuclear Safety and Citizen Action New Mexico wrote in a recent letter to agency officials. Shrader said Thursday the recovery process at the repository has cost around a quarter-billion dollars and the price tag will grow by just as much when a new ventilation system is installed. Once the findings are addressed and state and federal officials sign off, the repository plans to move waste already at the site into the first of five disposal rooms carved from a thick layer of salt deep underground. Officials expect it will take between two and three years to fill up the space.


News Article | August 23, 2016
Site: www.rdmag.com

The solar storm of late May 1967 brought the U.S. and the Soviet Union to the brink of global thermonuclear war, since the sun’s interference had jammed radar and some vital Cold War communications technologies. An even bigger solar storm, the so-called Carrington Event, set telegraph wires on fire in September 1859, and caused enough panic that some believed it was the end of the world. The energy-intensive 21st century world has not dealt with these kinds of challenges from our sun – yet. But eventually, there will be a solar event that could compromise large swaths of the power grid, experts told Laboratory Equipment, a sister publication of R&D. Blackouts of that kind could cause chaos at large in an increasingly wired world, they explained. Next month Los Alamos National Laboratory launches a new investigation of how those solar events could affect a grid that, like an ever-longer string of Christmas lights, is increasingly susceptible to a cascade of problems. The three-year Los Alamos program will be funded internally for about $5 million – and will determine what transformers, circuits, stations and conduits could be fried by a flare-up from the sun, said Mike Henderson, leader of the national security-focused program. “Impacts of Extreme Space Weather Events on Power Grid Infrastructure: Physics-Based Modeling of Geomagnetically-Induced Currents During Carrington-Class Geomagnetic Storms” will first lay a foundation to understand the stress the sun can exert on terra firma. But the work will also extend to computer models of what exactly will happen with a wide variety of solar flares, bombardments of radiation, and ionospheric effects – and what circuits would be fried. The predictive exercise will also explore what a recovery in the aftermath would look like. “Right now we really don’t know what the effects would be,” said Henderson, in an interview. “So far, it’s been very difficult to predict what an event like that would look like.” The Los Alamos team will base its investigation on the documented effects of storms like the 1859 and 1967 storms, along with events like the March 1989 shockwave that blacked out Quebec for about nine hours. But such powerful storms, if directed in a slightly different way, could knock out much more – and for longer. A complete loss of power could produce a civil breakdown along the lines of a Hurricane Katrina disaster, Henderson said. Emergency response, traffic lights, hospitals and almost every public service rely on electrical connections – so a breakdown could prove a “logistical nightmare.” Among the weak spots in the grid acknowledged so far are the step-up and step-down transformers, which form crucial junctures within the interconnected grid, said Henderson. Since most of those transformers are not standardized, if they are knocked out by a surge for whatever reason, repairing or rebuilding them could be a long, chaotic process. Just getting hardware from one side of the country to the other could be a serious conundrum. Worst-case scenarios posit that a region could be rendered powerless for months at a time, he said. Power grid vulnerabilities have been addressed at a national policy level for about a decade. A National Academy of Sciences report found that the entire national power grid was uniquely susceptible to terrorist attacks – in addition to a long list of natural disasters, including solar storms. The Los Alamos team has been at the vanguard of understanding space weather’s effects on the energy infrastructure worldwide. 
Geoffrey Reeves, a Los Alamos space physicist, said in another interview that space weather has not yet tested the vital pressure points of a computerized and electricity-intensive grid. “There are many types of space weather. We haven’t really seen a storm that affects the New York area in a major way,” said Reeves. “There’s been a lot of progress, but not as much as we’d like,” added Reeves. “We’re a long way from looking at the sun and predicting exactly what the effects will be here back on Earth.” Henderson said the goal is to have a better game plan ready on standby as soon as possible. No less than the White House Situation Room has taken a keen interest in getting more information from the space physicists before the next big one. “Nobody knows exactly what’s going to happen,” said Henderson. “Everyone’s taking this a little more seriously at this point.” “Impacts of Space Weather Events on Power Grid Infrastructure” begins in September and is expected to run through most of 2019.
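As a rough illustration of the quantity at the heart of the modeling effort described above, the sketch below estimates a geomagnetically induced current for a single transmission line: a storm-time geoelectric field integrated along the line gives a quasi-DC driving voltage, and Ohm's law gives the current flowing through the line and its grounded transformer neutrals. Every number is a hypothetical placeholder, and real studies solve this over an entire network with measured ground conductivities; this is a back-of-envelope sketch, not the Los Alamos model.

    # Back-of-envelope geomagnetically induced current (GIC) for one transmission line.
    # All parameter values are assumed for illustration only.

    e_field_v_per_km = 6.0          # storm-time geoelectric field along the line, V/km (assumed)
    line_length_km = 300.0          # line length, km (assumed)
    line_resistance_ohm = 3.0       # DC resistance of the conductors, ohms (assumed)
    grounding_resistance_ohm = 1.0  # two substation grounds in series, ohms (assumed)

    driving_voltage = e_field_v_per_km * line_length_km               # volts induced along the line
    loop_resistance = line_resistance_ohm + grounding_resistance_ohm  # total quasi-DC loop resistance
    gic_amps = driving_voltage / loop_resistance

    print(f"Induced driving voltage: {driving_voltage:.0f} V")
    print(f"Quasi-DC current through the transformer neutrals: {gic_amps:.0f} A")

Quasi-DC currents of this kind flowing through transformer neutrals drive the saturation and heating that make the step-up and step-down transformers the weak spot Henderson describes.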


News Article | January 13, 2016
Site: www.nature.com

More than 600 metres below ground near Carlsbad, New Mexico, is the world's only operating deep geological repository currently accepting transuranic nuclear waste: that contaminated by elements heavier than uranium. The Waste Isolation Pilot Plant (WIPP), run by the US Department of Energy (DOE), is used to dispose of laboratory equipment, clothing and residues from the nation's nuclear-defence programme. In the past 15 years, around 91,000 cubic metres (equivalent to covering a soccer field to a depth of about 13 metres) of such transuranic waste, mostly of relatively low radiation levels, has been placed there. The main contaminants are long-lived isotopes of plutonium (mainly plutonium-239, with a half-life of 24,100 years, and plutonium-240, with a half-life of 6,560 years) and shorter-lived isotopes of americium and curium. In rooms carved out of a 250-million-year-old salt bed, the waste is stored in hundreds of thousands of plastic-lined steel drums. The repository is now at about half of its planned capacity and is to be sealed in 2033. The DOE is responsible for performing safety assessments to ensure that WIPP will not exceed limits on exposure to radioactivity, as set by the US Environmental Protection Agency (EPA), for 10,000 years. But new demands are emerging. An arms-control agreement with Russia made in 2000 obliges the United States to dispose of 34 tonnes of plutonium from dismantled nuclear weapons1. Following the terms of the agreement, the United States planned to convert the material into a fuel — mixed (uranium and plutonium) oxide, or MOX — to burn in commercial nuclear-power plants. But faced with soaring construction costs for a MOX fabrication facility at the Savannah River Site in South Carolina, the DOE has commissioned evaluations of alternatives2. The most recent report3, published in August 2015, recommends burying the weapons' plutonium at WIPP. Judging the repository's performance to have been “successfully demonstrated”, the DOE's Red Team expert panel proposes that the 34 tonnes of weapons plutonium can be added to WIPP once it has been diluted to low concentrations comparable to that of the transuranic waste at WIPP. In fact, WIPP's safety record is mixed. On 14 February 2014, a burst drum released small quantities of plutonium and americium to the surface (with a radioactivity of around 100 millicuries, or 3.7 gigabecquerels)4. Airborne radioactive material reached the surface through the ventilation system and spread 900 metres from the repository's exhaust shaft. Twenty-one workers were exposed to low levels of radioactivity, the highest dose equivalent to that from a chest X-ray. Nine days earlier, smoke from a burning truck filled the underground workings and shaft, damaging mechanical, electrical and ventilation systems. The DOE says that such accidents do not compromise the long-term performance of the repository. We agree that they need not — if lessons are learned. Our concern is not the events' severity but that they were unanticipated. These accidents illustrate how difficult it is to predict potential failures of such a disposal system over millennia. For example, assumptions about the repository's geochemistry or the likelihood of drilling into it can lead to underestimates of the risks. Before expanding WIPP's plutonium inventory, the DOE must examine more carefully its safety assessment for performance that stretches to 10,000 years and beyond. The 2014 radioactive leak at WIPP was caused by heat from a chemical reaction in a drum4. 
Plutonium-contaminated nitrate salts, a waste product of plutonium purification at Los Alamos National Laboratory (LANL) in New Mexico, reacted with an organic, wheat-based commercial cat litter used as an absorbent for liquid wastes. The heat popped the lid. Although sensors detected the released radioactivity and diverted exhaust air through filters, some radioactive material leaked through. WIPP operators sealed the leak in the filtration system and sealed off the room in which the leak occurred. The breached drum remains in the repository. Analyses of the accidents4 by the DOE have documented a lack of a 'safety culture' at WIPP. The facility's successful operation for 15 years had bred complacency. The failures were wide-ranging: in safety assessments, control of drum contents, installation and maintenance of equipment, and preparation for an accident. An investigation of the drum-packaging procedure, for example, found “no evidence that any type of technical evaluation occurred” when selecting the organic absorbent material, even though its incompatibility with nitrate salts had been raised at LANL during waste packaging4. From a systems-analysis perspective, the drum breach was a 'normal' accident5 — a human mistake that led to a cascade of errors and breakdowns, exacerbated by a failure to enforce safety protocols. Complex technologies are prone to unanticipated failures that can progress quickly; examples include the 1979 Three Mile Island nuclear-plant meltdown in Pennsylvania and the 1986 Challenger space-shuttle explosion. Such accidents cannot be easily predicted, but a system designed with failure in mind can mitigate the risk. The WIPP accident can be taken as a positive — it presents an opportunity to learn. The DOE has aggressively identified its causes and implemented corrective actions; incompatible chemicals are no longer mixed in the drums. But once the repository is closed, its contents cannot be monitored or problems fixed. We cannot be certain that future inhabitants of the area will even know that WIPP is there. To put the timescales in perspective, agriculture was developed just over 10,000 years ago. WIPP's present safety assessment addresses two scenarios: first, undisturbed performance and, second, human intrusion, such as inadvertently drilling through the repository in search of oil and gas6. The first foresees that after closure, the salt into which the repository is built will deform and flow around the drums to encase the waste. The model assumes that no fluids, such as brine, are present and that the site remains geologically isolated. Although the drums will be crushed, the radioactive material will be locked in the dry, solid salt, with no way to release radioactivity to the biosphere. Reliance on the geological barrier is so great that the form and composition of the waste is assumed to be unimportant; it need not even be treated. Human intrusion could release radioactivity to the environment6. Salt deposits, layered as sediments or as salt domes, are often associated with mineral and energy resources, such as potash and hydrocarbons — oil and gas. In southeastern New Mexico, exploration for and extraction of these fuels has led to extensive drilling in the Permian Basin, where WIPP is located. The probability of a borehole piercing the repository in the next 10,000 years is significant. 
If a borehole were to puncture the repository and one of the brine pockets that are known to exist in the Castile geological formation below the Salado salt formation in which the repository sits, fluid may reach the transuranic waste. To assess the risk of radioactive release, one must first establish the probability of borehole penetration and determine how the pressurized brine will react with the waste. In forecasting future drilling rates, the EPA has used a 100-year historical average rate for the region, which predicts 67.3 boreholes per square kilometre over the 10,000-year regulated period6. But drilling near WIPP has risen sharply in recent years. As horizontal drilling and hydraulic-fracturing techniques have made new areas of hydrocarbon-bearing rocks accessible, the Permian Basin has become the most prolific oil-producing area in the United States. A recent 10-year historical average (2002–12) yields 148 boreholes per square kilometre over 10,000 years, more than doubling the projected risk of repository intrusion. Drilling rates, the effects of new technologies, and supply and demand pressures on hydrocarbon production are difficult if not impossible to predict centuries ahead. The concentration of transuranic elements leached by intruding brine is also hard to estimate because of the complexity of the waste: a typical drum contains a variety of materials, such as lab coats, gloves and other laboratory equipment. Different micro-geochemical environments will develop around different waste types. Chemically organic materials, such as plastic bags, may degrade by microbial action and generate carbon dioxide. In brine, CO2 forms stable carbonate and bicarbonate complexes with plutonium and other actinides (elements 89–103), raising their concentrations in solution. Large bags of magnesium oxide powder, amounting to more than 31,000 tonnes, have been placed in WIPP disposal rooms as an 'engineered barrier'. The magnesium oxide should react with the CO2 to form stable magnesium carbonates, thereby removing CO2 from solution and reducing the solubility of actinides. This presumes that the reactions proceed to completion and all the CO2 is consumed. The safety analysis calculations for WIPP assume that there is no CO2 present, dramatically lowering actinide concentrations in the brine and thus the risk of release of radioactivity. But reliance on magnesium oxide and a series of idealized reactions to constrain the repository's geochemistry is problematic, particularly if the amount of plutonium stored at WIPP increases. As made clear by the 2014 accidents, complex interactions of materials must be carefully considered when predicting the repository's performance now and in the future. The Red Team report proposes diluting the weapons plutonium, before its disposal, in an “inert adulterant” — a classified mixture of cementing, gelling, thickening and foaming agents known as stardust. The report is unclear on what is meant by 'inert'; however, inert materials are rare, particularly those that must remain so for thousands of years. In the case of plutonium-bearing solids, demonstrating chemical inertness presents a huge challenge. In near-surface conditions, plutonium can assume a variety of oxidation states — up to four, each with different solid-state and geochemical behaviours7. Its decay product uranium-235 has two principal oxidation states, U4+ and U6+, each with different geochemical mobility7. 
This complexity makes it difficult to predict how the actinides will react or be transported. Also, actinides decay mainly by the emission of α particles (energetic helium nuclei). During each decay, the daughter nucleus recoils and displaces thousands of atoms in the surrounding solid. Over time, this damage accumulates and changes the properties and chemical stability of the material. Radiation effects in actinide-bearing materials have been well documented over the past 20 years8, but are not considered in the Red Team's evaluation. The 'dilute-and-dispose' proposal to convert weapons-plutonium pits to plutonium oxide for burial in WIPP3 immediately raises safety issues. The extra plutonium nearly triples the currently projected plutonium inventory (around 12 tonnes) at closure. The design and safety assessment did not envision such a large amount. WIPP's capacity would have to expand by 15%1, increasing the likelihood that a borehole will one day intersect it. And the changed inventory of actinides demands new assessments of interactions with the materials present, including brine and CO2. The amount of plutonium mobilized in brine depends on its solubility, which depends on its form and the amount of CO2 present after reaction with the bags of magnesium oxide. The current regulatory period of 10,000 years is short relative to the 24,100-year half-life of plutonium-239, let alone that of uranium-235, which has a half-life of 700 million years. To accommodate the extra plutonium, the regulatory period might be lengthened, meaning that the probability of human intrusion during this period increases. Some of these issues and others were raised in two 2015 reviews9, 10 of the Red Team report by the consultancy High Bridge Associates of Greensboro, Georgia. But the analysis did not consider the possibility of human intrusion. WIPP is fulfilling an important national need — the disposal of legacy transuranic waste from US defence programmes. Its opening was the culmination of 20 years of scientific research, engineering design and public engagement. Despite the accidents, WIPP can still fulfil its mission. However, proposals to substantially increase the plutonium inventory, combined with a failure to revise the safety assessment, particularly regarding the possibility of human intrusion, bear witness to the ease with which policy decisions can disregard the fundamental science — and risk yet another failure. The Red Team report shows a limited effort to consider or manage inherent risks. The shortcomings of proposals to dispose of weapons plutonium at WIPP mirror the operational failings that led to the 2014 accidents. Before the DOE considers implementing these recommendations, it should look to the repository's record over the past 15 years of operation and reassess its confidence in the safe performance of the facility over the next 10,000 years.
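To see what the borehole densities quoted earlier imply, the short calculation below treats future drilling as a random (Poisson) process and converts each density into an expected number of penetrations and the probability of at least one, for a hypothetical repository footprint. The footprint value is an assumption chosen for illustration, not a figure from the article.

    # Expected borehole penetrations over 10,000 years under a simple Poisson assumption.
    import math

    footprint_km2 = 0.5  # assumed repository footprint, square kilometres (illustrative)

    for label, boreholes_per_km2 in [("100-year historical average", 67.3),
                                     ("2002-12 historical average", 148.0)]:
        expected_hits = boreholes_per_km2 * footprint_km2   # mean number of penetrations
        p_at_least_one = 1.0 - math.exp(-expected_hits)     # Poisson P(at least one penetration)
        print(f"{label}: expected penetrations = {expected_hits:.1f}, "
              f"P(at least one) = {p_at_least_one:.6f}")

Under either rate and this illustrative footprint, the expected number of penetrations is well above one, which is consistent with the authors' point that human intrusion is a significant scenario rather than a remote contingency.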


News Article | December 7, 2016
Site: www.scientificamerican.com

A surprising opportunity to explore something new in chemistry and physics has emerged. In March 1989, electrochemists Martin Fleischmann and Stanley Pons, at the University of Utah, announced that they had "established a sustained nuclear fusion reaction" at room temperature. By nearly all accounts, the event was a fiasco. The fundamental reason was that the products of their experiments looked nothing like the products of deuterium-deuterium (D+D) fusion. In the following weeks, Caltech chemist Nathan Lewis sharply criticized Fleischmann and Pons in a symposium, a press release, a one-man press conference at the American Physical Society meeting in Baltimore, Maryland, and during his oral presentation at the APS meeting. Despite Lewis' prominence in the media spotlight, he never published a peer-reviewed critique of the peer-reviewed Fleischmann-Pons papers, and for good reason. Lewis' critique of the Fleischmann-Pons experiment was based on wrong guesses and assumptions. Richard Petrasso, a physicist at MIT, took Fleischmann and Pons to task for their claimed gamma-ray peak. Petrasso and the MIT team, after accusing Fleischmann and Pons of fraud in the Boston Herald, later published a sound and well-deserved peer-reviewed critique of what had become multiple versions of the claimed peak. From this dubious beginning, to the surprise of many people, a new field of nuclear research has emerged: It offers unexplored opportunities for the scientific community. Data show that changes to atomic nuclei, including observed shifts in the abundance of isotopes, can occur without high-energy accelerators or nuclear reactors. For a century, this has been considered impossible. In hindsight, glimpses of the new phenomena were visible 27 years ago. In October 1989, a workshop co-sponsored by the Electric Power Research Institute took place at the National Science Foundation headquarters, in Washington, D.C. Among the 50 scientists in attendance was the preeminent physicist Edward Teller. After hearing from scientists at the Lawrence Livermore National Laboratory and the Naval Research Laboratory who had observed isotopic shifts in room-temperature experiments, Teller concluded that nuclear effects were taking place. He even had a hunch about a possible mechanism, involving some sort of charge-neutral particle. By October, tritium production and low levels of neutrons in such experiments had been reported from a few reputable laboratories, including Los Alamos National Laboratory and the Bhabha Atomic Research Center in India. Moreover, BARC researchers observed that the tritium production and neutron emissions were temporally correlated. Outside reviewers selected by the Department of Energy and tasked with examining the worldwide claims included this data in a draft of their report. Before the document was finalized, however, they removed the tables containing that data. In the early 1990s, several researchers in the field strongly favored neutron-based explanations for the phenomena. By the mid-1990s, a vocal contingent of scientists attempting to confirm Fleischmann and Pons' claims promoted the room-temperature fusion idea. Other scientists in the field, however, observed evidence—isotopic shifts and heavy-element transmutations—that pointed not to fusion but to some sort of neutron-induced reaction. 
In 1997, theorist Lewis Larsen looked at some of this data and noticed a similarity to elemental abundances he had learned about while a student in Subrahmanyan Chandrasekhar's astrophysics class at the University of Chicago. Larsen suspected that a neutronization process was occurring in low-energy nuclear reactions (LENR). Physicist Allan Widom joined Larsen's team in 2004, and in 2006 they published a theory in the European Physical Journal C - Particles and Fields. The Widom-Larsen theory has nothing to do with fusion; the key steps are based on weak interactions and are consistent with existing physics. The theory explains how nuclear reactions can occur at or near room temperature through the creation of ultra-low-momentum neutrons and subsequent neutron-capture processes. Such neutrons, according to the theory, have a very large de Broglie wavelength and therefore have a huge capture cross-section, explaining why so few neutrons are detected. Many-body collective quantum and electromagnetic effects are fundamental to Widom and Larsen's explanation for the energy required to create neutrons in LENR cells. Crucially, such reaction-rate calculations are based not on few-body interactions but on many-body interactions. After 2006, the scientists who remained wedded to their belief in the idea of room-temperature fusion rejected the Widom-Larsen theory. A few of these fusion believers began making unsupported claims of commercially viable energy technologies. Hidden in the confusion are many scientific reports, some of them published in respectable peer-reviewed journals, showing a wide variety of experimental evidence, including transmutations of elements. Reports also show that LENRs can produce local surface temperatures of 4,000-5,000 K and boil metals (palladium, nickel and tungsten) in small numbers of scattered microscopic sites on the surfaces of laboratory devices. For nearly three decades, researchers in the field have not observed the emission of dangerous radiation. Heavy shielding has not been necessary. The Widom-Larsen theory offers a plausible explanation—localized conversion of gamma radiation to infrared radiation. The implication is that immense technological opportunities may exist if a practical source of energy can be developed from these laboratory curiosities. Perhaps most surprising is that, in the formative years of atomic science in the early 20th century, some scientists reported inexplicable experimental evidence of elemental transmutations. In the 1910s and 1920s, this research was reported in popular newspapers and magazines, and papers were published in the top scientific journals of the day, including Physical Review, Science and Nature. The experiments, using relatively simple, low-energy benchtop apparatus, did not use radioactive sources so the results defied prevailing theory. Several researchers independently detected the production of the gases helium-4, neon, argon, and an as-yet-unidentified element of mass 3, which we now identify as tritium. Two of these researchers were Nobel laureates. In 1966, physicist George Gamow wrote, "Let us hope that in a decade or two or, at least, just before the beginning of the 21st century, the present meager years of theoretical physics will come to an end in a burst of entirely new revolutionary ideas similar to those which heralded the beginning of the 20th century." LENR may very well be such an opportunity to explore new science.
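A rough sketch of the textbook physics behind the "ultra-low-momentum neutron" argument summarized above (standard relations, not a derivation taken from the Widom-Larsen papers): the de Broglie wavelength grows as momentum shrinks, and capture cross-sections for slow neutrons generally scale inversely with neutron speed,

\lambda = \frac{h}{p} = \frac{h}{m_n v}, \qquad \sigma_{\mathrm{capture}} \propto \frac{1}{v},

so a neutron created with nearly zero momentum would have an enormous wavelength and a correspondingly high probability of being absorbed close to where it was created rather than escaping to be detected.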


Researchers at the US Department of Energy’s (DOE) Argonne and Los Alamos national laboratories have teamed up to support a DOE initiative through the creation of the Electrocatalysis Consortium (ElectroCat), a collaboration devoted to finding an effective but cheaper alternative to platinum in hydrogen fuel cells. ElectroCat is one of four consortia that make up DOE’s new Energy Materials Network (EMN). About half of the total cost of a typical automotive fuel cell stack comes directly from the cost of the platinum metal in the electrode catalysts. ElectroCat is dedicated to finding new ways to replace rare and costly platinum group metals in fuel cell cathodes with more accessible and inexpensive substitutes such as materials based on the earth-abundant metals iron and cobalt. In order to make hydrogen fuel cell cars an affordable reality, we need to find a way to either significantly reduce the amount of platinum needed or completely replace platinum with less expensive materials. —Debbie Myers, an Argonne senior chemist and Argonne lead in the consortium The partnership between Argonne and Los Alamos that forms the core of ElectroCat involves the study, creation and implementation of possible alternatives to platinum-based electrodes through material-development efforts headed by Los Alamos and accelerated by the high-throughput combinatorial characterization and electrode-performance modeling capabilities at Argonne, as well as by applying a high-performance supercomputer to model new catalyst structures at Los Alamos. In developing new materials to be explored, researchers at Los Alamos will bring to bear 15 years of experience in platinum-free catalyst design, synthesis, characterization and testing. In addition, Los Alamos can apply multi-scale modeling techniques that leverage world-class computing facilities to design catalysts with optimal activity, selectivity and durability. Once the potential replacement candidate materials are identified, scientists in the consortium can examine materials using a number of different methods, including X-ray imaging and spectroscopy techniques at Argonne. Argonne researchers can also investigate samples in a number of different environments that replicate how they would function in real-world scenarios. By combining the expertise and capabilities at Argonne and Los Alamos, in partnership with the private sector and universities, researchers expect to accelerate the development and implementation of platinum-free catalysts in fuel cells. The acceleration of progress in electrocatalysis without platinum will depend on focused catalyst design, guided by multi-scale modeling methods and facilitated by high-throughput methods for synthesis and screening. This partnership will integrate the strengths of both laboratories for this purpose. —Piotr Zelenay, a Los Alamos National Laboratory fellow and lead scientist on the Los Alamos component of the consortium


News Article | February 28, 2017
Site: www.marketwired.com

The Locus waste management application is fully integrated with the dynamic Locus Platform and will automate waste management for small and large enterprises. SAN FRANCISCO, CA--(Marketwired - February 28, 2017) - Locus Technologies (Locus), the leader in cloud-based environmental compliance and sustainability management software, introduces a powerful waste management application to Locus Platform. Our new waste management app redefines how companies organize and manage their waste profiles and supports EPA reporting requirements for hazardous and non-hazardous waste. With the click of a button, the platform will be able to generate waste manifests, shipping labels or other documents populated with data from your records. The Locus Platform, a true multi-tenant SaaS, presents a highly flexible, user-friendly interface to meet individual organizations' environmental management needs. Nearly everything that doesn't leave your facility as a product eventually becomes waste that needs to be tracked, managed, disposed of, and reported. Whether the waste stream is hazardous, universal, or recyclable, Locus Platform is ready. Locus' waste module brings an organized approach and workflow process to scheduling, sampling, and analyzing results. One of the standout features that Locus Platform offers is the ability to generate new waste profiles. The flexibility of the platform allows users to create these manually or to link them to Locus' EIM system or a third party's testing results. You will be able to quickly reuse existing profile sheets, only updating the changed information. The platform will help consolidate your profiles in containers or whole areas, and features built-in maps to help your transporter know exactly where to go to pick up the ready waste. Users can manage the composition of profiles by adding multiple chemical components and accounting for each component's share of the whole profile. You can add or remove waste containers, storage locations, and other items from your generating facilities, and any user can organize their data in their own way by generating filtered dashboards. At the enterprise level, powerful dashboards will help you understand how each facility is generating waste, and how it is being handled. With the platform's flexibility, facility information can be automatically populated based on user credentials, saving your team time and frustration. With Locus Platform's waste management module you will also have real-time insight into your generator status. If your facility is expecting growth, we can help you understand when you may reach small quantity generator status, setting up notifications when you approach the threshold limit. Forms will also automatically adapt to a change in status, prompting users to provide different data points when your generator status has changed. The new waste application is fully mobile-enabled to track container location and ensure storage compatibility in satellite accumulation areas. When paired with Locus' EIM system, the tool can also track all of the waste sampling and characterization processes, and use that information directly to generate waste profiles. "Waste management processes require coordination between EHS managers, field staff, laboratories, TSD facilities, and other parties. For compliance purposes, it is critical to make sure that information is exchanged correctly and comprehensively between all of these entities in a timely manner.
The Locus Platform gives you the notifications, mobile tools, and reporting tools to make waste management processes seamless for any organization. For companies that need to characterize hazardous waste and track detailed analytical data for profiling, this solution integrates directly with the Locus EIM system, giving an expanded toolset for validating and reviewing laboratory data," said J. Wesley Hawthorne, President of Locus. Locus Technologies is a leading environmental and sustainability software company that provides standard and configurable cloud-based solutions for companies to organize, validate, analyze, visualize, and report data for environmental compliance and sustainability. The software is used to manage emissions to water, air, and soil, the impact of greenhouse gases, as well as remediation and environmental content. Founded in 1997, Locus has served commercial companies and government organizations, helping them achieve their EH&S and sustainability goals; customers include Chevron, Honeywell, Monsanto, DuPont, and Los Alamos National Laboratory. For more information, visit www.locustec.com or email info@locustec.com.
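The release above mentions tracking a facility's generator status and sending notifications as waste accumulation approaches a regulatory threshold. The following is only a minimal sketch of what such a check could look like; it is not Locus Platform code, and the kilogram thresholds are simplified placeholders loosely modeled on U.S. RCRA generator categories rather than regulatory guidance.

# Minimal illustrative sketch of a generator-status check with a
# notification trigger. Thresholds are simplified placeholders, not
# regulatory advice, and none of this is Locus Platform code.

SMALL_QUANTITY_LIMIT_KG = 100.0    # illustrative monthly threshold
LARGE_QUANTITY_LIMIT_KG = 1000.0   # illustrative monthly threshold
WARNING_FRACTION = 0.8             # notify at 80% of the next threshold

def generator_status(monthly_kg: float) -> str:
    """Classify a facility by hazardous waste generated this month."""
    if monthly_kg < SMALL_QUANTITY_LIMIT_KG:
        return "very small quantity generator"
    if monthly_kg < LARGE_QUANTITY_LIMIT_KG:
        return "small quantity generator"
    return "large quantity generator"

def approaching_next_threshold(monthly_kg: float) -> bool:
    """True when accumulation is within WARNING_FRACTION of the next limit."""
    for limit in (SMALL_QUANTITY_LIMIT_KG, LARGE_QUANTITY_LIMIT_KG):
        if monthly_kg < limit:
            return monthly_kg >= WARNING_FRACTION * limit
    return False  # already in the top category

if __name__ == "__main__":
    for kg in (40.0, 85.0, 450.0, 950.0, 1200.0):
        status = generator_status(kg)
        flag = " (notify: nearing next threshold)" if approaching_next_threshold(kg) else ""
        print(f"{kg:7.1f} kg/month -> {status}{flag}")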


News Article | December 13, 2016
Site: phys.org

Hematite, clay minerals and boron are among the ingredients found to be more abundant in layers farther uphill, compared with lower, older layers examined earlier in the mission. Scientists are discussing what these and other variations tell about conditions under which sediments were initially deposited, and about how groundwater moving later through the accumulated layers altered and transported ingredients. Effects of this groundwater movement are most evident in mineral veins. The veins formed where cracks in the layers were filled with chemicals that had been dissolved in groundwater. The water with its dissolved contents also interacted with the rock matrix surrounding the veins, altering the chemistry both in the rock and in the water. "There is so much variability in the composition at different elevations, we've hit a jackpot," said John Grotzinger, of Caltech in Pasadena, California. He and other members of Curiosity's science team presented an update about the mission Tuesday, Dec. 13, in San Francisco during the fall meeting of the American Geophysical Union. As the rover examines higher, younger layers, researchers are impressed by the complexity of the lake environments when clay-bearing sediments were being deposited, and also the complexity of the groundwater interactions after the sediments were buried. "A sedimentary basin such as this is a chemical reactor," Grotzinger said. "Elements get rearranged. New minerals form and old ones dissolve. Electrons get redistributed. On Earth, these reactions support life." Whether Martian life has ever existed is still unknown. No compelling evidence for it has been found. When Curiosity landed in Mars' Gale Crater in 2012, the mission's main goal was to determine whether the area ever offered an environment favorable for microbes. The crater's main appeal for scientists is geological layering exposed in the lower portion of its central mound, Mount Sharp. These exposures offer access to rocks that hold a record of environmental conditions from many stages of early Martian history, each layer younger than the one beneath it. The mission succeeded in its first year, finding that an ancient Martian lake environment had all the key chemical ingredients needed for life, plus chemical energy available for life. Now, the rover is climbing through lower Mount Sharp to investigate how ancient environmental conditions changed over time. "We are well into the layers that were the main reason Gale Crater was chosen as the landing site," said Curiosity Deputy Project Scientist Joy Crisp of NASA's Jet Propulsion Laboratory, in Pasadena, California. "We are now using a strategy of drilling samples at regular intervals as the rover climbs Mount Sharp. Earlier we chose drilling targets based on each site's special characteristics. Now that we're driving continuously through the thick basal layer of the mountain, a series of drill holes will build a complete picture." Four recent drilling sites, from "Oudam" this past June through "Sebina" in October, are each spaced about 80 feet (about 25 meters) apart in elevation. This uphill pattern allows the science team to sample progressively younger layers that reveal Mount Sharp's ancient environmental history. One clue to changing ancient conditions is the mineral hematite. It has replaced less-oxidized magnetite as the dominant iron oxide in rocks Curiosity has drilled recently, compared with the site where Curiosity first found lakebed sediments.
"Both samples are mudstone deposited at the bottom of a lake, but the hematite may suggest warmer conditions, or more interaction between the atmosphere and the sediments," said Thomas Bristow of NASA Ames Research Center, Moffett Field, California. He helps operate the Chemistry and Mineralogy (CheMin) laboratory instrument inside the rover, which identifies minerals in collected samples. Chemical reactivity occurs on a gradient of chemical ingredients' strength at donating or receiving electrons. Transfer of electrons due to this gradient can provide energy for life. An increase in hematite relative to magnetite indicates an environmental change in the direction of tugging electrons more strongly, causing a greater degree of oxidation in iron. Another ingredient increasing in recent measurements by Curiosity is the element boron, which the rover's laser-shooting Chemistry and Camera (ChemCam) instrument has been detecting within mineral veins that are mainly calcium sulfate. "No prior mission has detected boron on Mars," said Patrick Gasda of the U.S. Department of Energy's Los Alamos National Laboratory, Los Alamos, New Mexico. "We're seeing a sharp increase in boron in vein targets inspected in the past several months." The instrument is quite sensitive; even at the increased level, boron makes up only about one-tenth of one percent of the rock composition. Boron is famously associated with arid sites where much water has evaporated away—think of the borax that mule teams once hauled from Death Valley. However, environmental implications of the minor amount of boron found by Curiosity are less straightforward than for the increase in hematite. Scientists are considering at least two possibilities for the source of boron that groundwater left in the veins. Perhaps evaporation of a lake formed a boron-containing deposit in an overlying layer, not yet reached by Curiosity, then water later re-dissolved the boron and carried it down through a fracture network into older layers, where it accumulated along with fracture-filling vein minerals. Or perhaps changes in the chemistry of clay-bearing deposits, such as evidenced by the increased hematite, affected how groundwater picked up and dropped off boron within the local sediments. "Variations in these minerals and elements indicate a dynamic system," Grotzinger said. "They interact with groundwater as well as surface water. The water influences the chemistry of the clays, but the composition of the water also changes. We are seeing chemical complexity indicating a long, interactive history with the water. The more complicated the chemistry is, the better it is for habitability. The boron, hematite and clay minerals underline the mobility of elements and electrons, and that is good for life."


A Los Alamos National Laboratory team, in collaboration with Yoong-Kee Choe at the National Institute of Advanced Industrial Science and Technology in Japan and Cy Fujimoto of Sandia National Laboratories, has discovered that fuel cells based on a new phosphate-quaternary ammonium ion-pair membrane can be operated between 80 °C and 200 °C with and without water, enhancing the fuel cells’ usability under a range of conditions. The research is published in the journal Nature Energy. These fuel cells exhibit stable performance at 80–160 °C with a conductivity decay rate more than three orders of magnitude lower than that of a commercial high-temperature PEM fuel cell. By increasing the operational flexibility, this class of fuel cell can simplify the requirements for heat and water management, and potentially reduce the costs associated with the existing fully functional fuel cell systems. Low-temperature PEM fuel cells that use Nafion are at present being commercialized in fuel cell vehicles, but these cells can operate only at relatively low temperatures and high hydration levels; therefore, they require humidified inlet streams and large radiators to dissipate waste heat. High-temperature PEM fuel cells that use phosphoric acid (PA)-doped polybenzimidazole (PBI) could address these issues, but these PBI-based cells are difficult to operate below 140 °C without suffering loss of PA. The limited operating temperature range makes them unsuitable for automotive applications, where water condensation from frequent cold start-ups and from oxygen reduction reactions at the fuel cell cathode occurs during normal vehicle drive cycles. Currently, two main classes of polymer-based fuel cells exist. One is the class of low-temperature fuel cells that require water for proton conduction and cannot operate above 100 °C. The other type is high-temperature fuel cells that can operate up to 180 °C without water; however, the performance degrades under water-absorbing conditions below 140 °C. The operating temperature window of a PEM fuel cell is determined by the interactions between the acid (for example, tethered sulfonic acid or free phosphoric acid) and the base moieties (for example, free water or tethered QA) in the membrane, the researchers explained. For low-temperature fuel cells, the hydrogen-bonding interaction between the sulfonic acid group and water molecules in Nafion is only 15.4 kcal/mol; this does not provide enough stability above the boiling temperature of water, leading to membrane dehydration. For high-temperature fuel cells, the intermolecular interaction energy between benzimidazole and PA is 17.4 kcal/mol—only 4.8 kcal/mol greater than that between PA and one water molecule, about 12.6 kcal/mol. Due to the relatively weak interaction, benzimidazole tends to lose PA easily with water absorption. PA-doped PBI requires a high concentration of base moieties and a high acid content to impart sufficient anhydrous conductivity. The research team found that a phosphate-quaternary ammonium ion pair has a much stronger interaction, which allows protons to be transported effectively even under water-condensing conditions. The Los Alamos team collaborated with Fujimoto at Sandia to prepare quaternary ammonium functionalized polymers.
The prototype fuel cells made from the ion-pair-coordinated membrane demonstrated excellent fuel-cell performance and durability at 80-200 °C, which is unattainable with existing fuel cell technology. The performance and durability of this new class of fuel cells could be improved even further by high-performing electrode materials, said Los Alamos researcher Yu Seung Kim, who cited such an advance, expected within five to ten years, as another critical step toward replacing the low-temperature fuel cells currently used in vehicle and stationary applications. Researchers on this project include Kwan-Soo Lee (Los Alamos National Laboratory, Chemistry Division), Jacob Spendelow, Yu Seung Kim (Los Alamos National Laboratory, Materials Physics and Applications Division), Yoong-Kee Choe (National Institute of Advanced Industrial Science & Technology, Japan), and Cy Fujimoto (Sandia National Laboratories). Los Alamos has been a leader in fuel-cell research since the 1970s. Fuel cell technologies can significantly benefit the nation’s energy security, the environment and economy through reduced oil consumption, greenhouse gas emissions, and air pollution. The current research work supports the Laboratory’s missions related to energy security and materials for the future.
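The energetics quoted above come down to a small margin (a restatement of the article's numbers, not new data):

\Delta E = E_{\mathrm{PBI\text{-}PA}} - E_{\mathrm{PA\text{-}H_2O}} \approx 17.4 - 12.6 = 4.8\ \mathrm{kcal/mol}

That narrow gap is why PA-doped PBI readily loses phosphoric acid to absorbed water below about 140 °C, and why an ion pair with a substantially stronger interaction can hold the acid in place across the full 80-200 °C window.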


News Article | February 15, 2017
Site: www.marketwired.com

HANOVER, MD--(Marketwired - Feb 15, 2017) - D-Wave Systems Inc., the leader in the development and delivery of quantum computing systems and software, announced the addition of two new directors to the board of D-Wave Government Inc., a subsidiary formed to provide D-Wave's quantum computing systems to the U.S. government. Joining the board are Letitia "Tish" Long, former Director of the National Geospatial-Intelligence Agency and the first woman to lead a major U.S. intelligence agency, and Steven M. West, D-Wave Systems Inc. board member and a 30-year veteran of the information technology marketplace. Ms. Long has had a distinguished and decorated career in government. She served as a civilian in the U.S. Navy and the Intelligence Community from 1978 to 2014, retiring as the fifth director of the National Geospatial-Intelligence Agency. Her other senior executive roles included Deputy Director of the Defense Intelligence Agency, Deputy Undersecretary of Defense for Intelligence (Policy, Requirements and Resources), and Deputy Director of Naval Intelligence. Ms. Long is currently the Chairman of the Board of the Intelligence and National Security Alliance, and sits on the boards of Raytheon Company, Urthecast Corporation, Noblis, Inc., the Virginia Tech School of Public and International Affairs and the United States Geospatial Intelligence Foundation. Ms. Long is the recipient of the Department of Defense Medal for Distinguished Civilian Service, the Presidential Rank Award of Distinguished Executive, the Navy Distinguished Civilian Service Award, the Presidential Rank Award of Meritorious Executive (two awards) and the National Intelligence Distinguished Service Medal (two awards). Steven M. West is the founder and a partner in Emerging Company Partners LLC, a capital investor and a provider of advisory services to technology firms. Over the course of his career he has held many executive positions, including CEO of nCUBE Corporation, a provider of on-demand media systems; president and CEO of Entera, an Internet content delivery firm (acquired by Blue Coat Systems, Inc.); CEO of Hitachi Data Systems; and Group Executive of EDS. Currently Mr. West serves on the boards of directors of Cisco Systems and D-Wave. He was formerly on the boards of Autodesk, a 3D design software company; Delta-Q Technologies, a maker of power management and power conversion solutions; and Bycast Inc., a provider of advanced storage virtualization software for large-scale digital archives and storage clouds. "Adding such impressive executive leadership to the board of D-Wave Government Inc. speaks to the growing importance of quantum computing for government," said Robert "Bo" Ewald, president of D-Wave International. "Ms. Long's career includes nearly four decades of outstanding service to the U.S. government. Mr. West is a highly regarded technology executive and has been a valued member of the D-Wave board since 2008. We are very fortunate and appreciative that both have agreed to join the D-Wave Government board to lend their expertise to our government operations." "Quantum computing will provide an unprecedented new resource to solve important problems we face," said Ms. Long. "As the industry leader in quantum computing, D-Wave will help shape how this exciting technology will aid the defense and intelligence communities. I look forward to working with the D-Wave team to make continued progress supporting our national priorities." Long and West join current board members with deep experience at U.S. 
government agencies including the Department of Energy, the Department of Defense and the National Science Board. These include Jeffrey K. Harris, chairman of the U.S. Geospatial Intelligence Foundation and former president of Lockheed Martin Missiles and Space; Dr. Delores Etter, who served as the Deputy Under Secretary of Defense for Science and Technology and Assistant Secretary of the Navy; Frances Fleisch, former Executive Director of the National Security Agency and special advisor to the U.S. Strategic Command; and Dr. Donald M. Kerr, former Principal Deputy Director of National Intelligence and Director of the National Reconnaissance Office. D-Wave is the leader in the development and delivery of quantum computing systems and software, and the world's only commercial supplier of quantum computers. Our mission is to unlock the power of quantum computing to solve the most challenging national defense, scientific, technical, and commercial problems. D-Wave's systems are being used by some of the world's most advanced organizations, including Lockheed Martin, Google, NASA Ames and Los Alamos National Laboratory. With headquarters near Vancouver, Canada, D-Wave's U.S. operations are based in Palo Alto, CA and Hanover, MD. D-Wave has a blue-chip investor base including Goldman Sachs, Bezos Expeditions, DFJ, In-Q-Tel, BDC Capital, Growthworks, Harris & Harris Group, International Investment and Underwriting, and Kensington Partners Limited. For more information, visit: www.dwavesys.com


News Article | November 23, 2016
Site: www.24-7pressrelease.com

SANTA FE, NM, November 23, 2016-- Diana MacArthur has been included in Marquis Who's Who. As in all Marquis Who's Who biographical volumes, individuals profiled are selected on the basis of current reference value. Factors such as position, noteworthy accomplishments, visibility, and prominence in a field are all taken into account during the selection process. Mrs. MacArthur's professional and volunteer lives have, over the years, taken parallel paths, each reinforcing the other. She was Co-founder, Chair, and CEO of Dynamac International Inc.; CEO, Research Analysis and Management Corp.; President, Consumer Dynamics, Inc.; Program Manager, Space Division, General Electric; Vice President, Thomas J. Deegan Co. (a public relations firm in the InterPublic group); and Director, Office of Private and International Organizations, the Peace Corps. She has served as a consultant to Westinghouse, Eastman Kodak, Lever Brothers, the Hearst Corporation, the National Youth Conference on Natural Beauty and Conservation, and the National Recreation and Park Association. Complementing her professional accomplishments, Mrs. MacArthur was a member of or served on the boards of various organizations in the realms of public policy, science, education, and business. These include the following: President's Committee of Advisors on Science and Technology; Citizen's Advisory Board, President's Council on Youth Opportunity; Center for Strategic and International Studies; University of Maryland Biotechnology Institute; Smithsonian Environmental Research Center; Chancellor's Advisory Council, University System of Maryland; Tech Council of Maryland; Council on Competitiveness; Business-Higher Education Forum; Atlantic Council; Santa Fe Institute; Los Alamos National Laboratory Foundation; Science and Technology Corp., University of New Mexico; Lady Bird Johnson Wildflower Center; Menninger-Baylor College of Medicine-The Methodist Hospital Foundation; Business Advisory Committee, Center for China-U.S. Cooperation, University of Denver Graduate School of International Studies. Now retired and drawing on her wealth of experience, Mrs. MacArthur has embarked on a project in one of the most beautiful and daunting regions of the world, the Karakoram Mountains of Pakistan. On a visit to the village of MurtazaAbad, she was asked for help in constructing a building for the local school, and she agreed because her daughter had taught there. Mrs. MacArthur is now undertaking a remarkable effort to build a Center for Educational Excellence along the newly developed China-Pakistan Economic Corridor, bringing modernity to a small but economically and strategically important area of the world. She serves as the principal donor and volunteer project director of the Center, working with the local community and its representative, the Al-Murtaza Educational and Social Welfare Organization. The Center's co-ed primary/secondary schools, now under construction, will be housed in a state-of-the-art, earthquake-resistant, high-tech school building, and will provide low-cost education, without discrimination based on religion, gender, ethnicity, or social status. Because a school is more than a building, the outstanding educational precepts of the Aga Khan Educational Network are being followed, and emphasis is being placed on teacher training, curriculum development, parental involvement, and efficient management and administration. Working closely with local families, Mrs. 
MacArthur has successfully encouraged the bringing together of the Ismaili, Shia, and Sunni Muslim communities, with the goal of providing an outstanding education to the children of the region and creating a vital civil society. Mrs. MacArthur's contributions, in time, expertise, and funding, demonstrate just how much she cares about the world and its citizens, especially children. Mrs. MacArthur has been recognized through features in Who's Who in Finance and Business, Who's Who in Science and Engineering, Who's Who of American Women, Who's Who in the East, Who's Who in the West, Who's Who in America, and Who's Who in the World. About Marquis Who's Who: Since 1899, when A. N. Marquis printed the First Edition of Who's Who in America, Marquis Who's Who has chronicled the lives of the most accomplished individuals and innovators from every significant field of endeavor, including politics, business, medicine, law, education, art, religion and entertainment. Today, Who's Who in America remains an essential biographical source for thousands of researchers, journalists, librarians and executive search firms around the world. Marquis now publishes many Who's Who titles, including Who's Who in America, Who's Who in the World, Who's Who in American Law, Who's Who in Medicine and Healthcare, Who's Who in Science and Engineering, and Who's Who in Asia. Marquis publications may be visited at the official Marquis Who's Who website at www.marquiswhoswho.com


News Article | December 3, 2015
Site: www.cemag.us

Physicists at the Technical University of Munich, the Los Alamos National Laboratory, and Stanford University have tracked down semiconductor nanostructure mechanisms that can result in the loss of stored information — and halted the amnesia using an external magnetic field. The new nanostructures comprise common semiconductor materials compatible with standard manufacturing processes. Quantum bits, qubits for short, are the basic logical elements of quantum information processing (QIP) that may represent the future of computer technology. Since they process problems in a quantum-mechanical manner, such quantum computers might one day solve complex problems much more quickly than currently possible, or so researchers hope. In principle, there are various possibilities of implementing qubits: photons are an option equally as viable as confined ions or atoms whose states can be altered in a targeted manner using lasers. The key questions regarding their potential use as memory units are how long information can be stored in the system and which mechanisms might lead to a loss of information. A team of physicists headed by Alexander Bechtold and Professor Jonathan Finley at the Walter Schottky Institute of the Technical University of Munich and the Excellence Cluster Nanosystems Initiative Munich (NIM) have now presented a system comprising a single electron trapped in a semiconductor nanostructure. Here, the electron’s spin serves as the information carrier. The researchers were able to precisely demonstrate the existence of different data loss mechanisms and also showed that stored information can nonetheless be retained using an external magnetic field. The TUM physicists evaporated indium gallium arsenide onto a gallium arsenide substrate to form their nanostructure. As a result of the different lattice spacing of the two semiconductor materials, strain is produced at the interface between the crystal lattices. The system thus forms nanometer-scale “hills” — so-called quantum dots. When the quantum dots are cooled down to liquid helium temperatures and optically excited, a single electron can be trapped in each of the quantum dots. The spin states of the electrons can then be used as information stores. Laser pulses can read and alter the states optically from outside. This makes the system ideal as a building block for future quantum computers. Spin up or spin down correspond to the standard logical information units 0 and 1. But, on top of this come additional intermediate states of quantum mechanical up and down superpositions. However, there is one problem: “We found out that the strain in the semiconductor material leads to a new and until recently unknown mechanism that results in the loss of quantum information,” says Alexander Bechtold. The strain creates tiny electric fields in the semiconductor that influence the nuclear spin orientation of the atomic nuclei. “It’s a kind of piezoelectric effect,” says Bechtold. “It results in uncontrolled fluctuations in the nuclear spins.” These can, in turn, modify the spin of the electrons, i.e. the stored information. The information is lost within a few hundred nanoseconds. In addition, Alexander Bechtold’s team was able to provide concrete evidence for further information loss mechanisms, for example that electron spins are generally influenced by the spins of the surrounding 100,000 atomic nuclei. “However, both loss channels can be switched off when a magnetic field of around 1.5 tesla is applied,” says Bechtold. 
“This corresponds to the magnetic field strength of a strong permanent magnet. It stabilizes the nuclear spins and the encoded information remains intact.” “Overall, the system is extremely promising,” according to Jonathan Finley, head of the research group. “The semiconductor quantum dots have the advantage that they harmonize perfectly with existing computer technology since they are made of similar semiconductor material.” They could even be equipped with electrical contacts, allowing them to be controlled not only optically using a laser, but also using voltage pulses. The research was funded by the European Union (S3 Nano and BaCaTeC), the US Department of Energy, the US Army Research Office (ARO), the German Research Foundation DFG (excellence cluster Nanosystems Munich (NIM) and SFB 631), and the Alexander von Humboldt Foundation, as well as the TUM Institute for Advanced Study (Focus Group Nanophotonics and Quantum Optics). Release Date: December 2, 2015 Source: Technical University of Munich
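For a sense of scale (a rough, illustrative estimate, not a number from the study): the electron Zeeman splitting in a 1.5-tesla field is

E_Z = |g|\,\mu_B B \approx |g| \times (5.8 \times 10^{-5}\ \mathrm{eV/T}) \times 1.5\ \mathrm{T} \approx |g| \times 8.7 \times 10^{-5}\ \mathrm{eV}

so for a g-factor of order one the electron spin is pinned by an energy far larger than the small, fluctuating effective field produced by the roughly 100,000 surrounding nuclear spins, which is qualitatively how the external magnet switches off the loss channels described above.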


News Article | December 8, 2015
Site: www.scientificcomputing.com

Physicists at the Technical University of Munich, the Los Alamos National Laboratory and Stanford University (USA) have tracked down semiconductor nanostructure mechanisms that can result in the loss of stored information — and halted the amnesia using an external magnetic field. The new nanostructures comprise common semiconductor materials compatible with standard manufacturing processes. Quantum bits, qubits for short, are the basic logical elements of quantum information processing (QIP) that may represent the future of computer technology. Since they process problems in a quantum-mechanical manner, such quantum computers might one day solve complex problems much more quickly than currently possible, or so researchers hope. In principle, there are various possibilities of implementing qubits: photons are an option equally as viable as confined ions or atoms whose states can be altered in a targeted manner using lasers. The key questions regarding their potential use as memory units are how long information can be stored in the system and which mechanisms might lead to a loss of information. A team of physicists headed by Alexander Bechtold and Professor Jonathan Finley at the Walter Schottky Institute of the Technical University of Munich and the Excellence Cluster Nanosystems Initiative Munich (NIM) have now presented a system comprising a single electron trapped in a semiconductor nanostructure. Here, the electron’s spin serves as the information carrier. The researchers were able to precisely demonstrate the existence of different data loss mechanisms and also showed that stored information can nonetheless be retained using an external magnetic field. The TUM physicists evaporated indium gallium arsenide onto a gallium arsenide substrate to form their nanostructure. As a result of the different lattice spacing of the two semiconductor materials, strain is produced at the interface between the crystal lattices. The system thus forms nanometer-scale “hills” — so-called quantum dots. When the quantum dots are cooled down to liquid helium temperatures and optically excited, a single electron can be trapped in each of the quantum dots. The spin states of the electrons can then be used as information stores. Laser pulses can read and alter the states optically from outside. This makes the system ideal as a building block for future quantum computers. Spin up or spin down correspond to the standard logical information units 0 and 1. But, on top of this come additional intermediate states of quantum mechanical up and down superpositions. However, there is one problem: “We found out that the strain in the semiconductor material leads to a new and, until recently, unknown mechanism that results in the loss of quantum information,” says Alexander Bechtold. The strain creates tiny electric fields in the semiconductor that influence the nuclear spin orientation of the atomic nuclei. “It’s a kind of piezoelectric effect,” says Bechtold. “It results in uncontrolled fluctuations in the nuclear spins.” These can, in turn, modify the spin of the electrons, i.e. the stored information. The information is lost within a few hundred nanoseconds. In addition, Alexander Bechtold’s team was able to provide concrete evidence for further information loss mechanisms, for example that electron spins are generally influenced by the spins of the surrounding 100,000 atomic nuclei. “However, both loss channels can be switched off when a magnetic field of around 1.5 tesla is applied,” says Bechtold. 
“This corresponds to the magnetic field strength of a strong permanent magnet. It stabilizes the nuclear spins, and the encoded information remains intact.” “Overall, the system is extremely promising,” according to Jonathan Finley, head of the research group. “The semiconductor quantum dots have the advantage that they harmonize perfectly with existing computer technology, since they are made of similar semiconductor material.” They could even be equipped with electrical contacts, allowing them to be controlled not only optically using a laser, but also using voltage pulses. The research was funded by the European Union (S3 Nano and BaCaTeC), the US Department of Energy, the US Army Research Office (ARO), the German Research Foundation DFG (excellence cluster Nanosystems Munich (NIM) and SFB 631), the Alexander von Humboldt Foundation as well as the TUM Institute for Advanced Study (Focus Group Nanophotonics and Quantum Optics). Citation: Three-stage decoherence dynamics of an electron spin qubit in an optically active quantum dot; Alexander Bechtold, Dominik Rauch, Fuxiang Li, Tobias Simmet, Per-Lennart Ardelt, Armin Regler, Kai Müller, Nikolai A. Sinitsyn and Jonathan J. Finley; Nature Physics, 11, 1005-1008 (2015) – DOI: 10.1038/nphys3470


News Article | July 9, 2015
Site: www.techtimes.com

The United States Air Force and the National Nuclear Security Administration have jointly completed the first test of the B61-12, a controversial update to the B61 nuclear bomb, a weapon family that has been in service since the Cold War era of the 1960s. The B61-12 nuclear bomb that was tested was a safe version, as it did not contain a nuclear warhead. The testing occurred at the Tonopah Test Range, which is located in Nevada. The tests are being done to achieve a longer lifespan for the nuclear bomb, with upgrades being done on some of the weapon's parts. "This test marks a major milestone for the B61-12 Life Extension Program, demonstrating end-to-end system performance under representative delivery conditions," said Don Cook, deputy administrator for defense programs at the NNSA. Cook added that the completion of the first flight test for the B61-12 shows the commitment of the U.S. to maintaining the nuclear weapon and providing assurance to its allies. The B61 was known as the TX-61 before 1968. The weapon was designed back in 1963 in New Mexico at the Los Alamos National Laboratory. The completed test is only the first among a series of three flight tests scheduled for the program to extend the life of the B61-12. The two other tests have been scheduled to be carried out later within the year. According to the U.S. Air Force, the first test was successful in demonstrating performance during realistic flight situations, along with an efficient release of the safe version of the nuclear weapon. The U.S. Air Force added that it was successful in collecting tracking, telemetry and video data for the test, which will provide confidence in the B61-12 and its design, along with the current state of hardware, before the weapon system undergoes a review next year. The nuclear weapon's Life Extension Program was launched on Feb. 12 and aims to refurbish both the nuclear and non-nuclear parts of the B61-12, extending the bomb's service life while also improving security, safety and reliability. With a tail-kit assembly that the U.S. Air Force has provided, the B61-12 will be replacing the current B61-3, B61-4, B61-7 and B61-10 nuclear bombs.


News Article | November 14, 2016
Site: www.sciencedaily.com

Isotopes -- atoms of an element that are chemically identical but vary in neutron number and mass -- are essential to nuclear medicine. In an effort to reach a stable state, isotopes known as radioisotopes emit radiation that can damage diseased tissue and can be traced in certain environments, making them useful for medical imaging and cancer therapy, as well as tracking environmental change in oceans and soil, studying the fundamental science of nuclei and safeguarding national security. Of the thousands of radioisotopes identified so far, a few occur in nature, but most must be produced in nuclear reactors or particle accelerators. The most widely used medical radioisotope is technetium-99m (Tc-99m), the daughter isotope (meaning the product of radioactive decay) of molybdenum-99 (Mo-99). The 66-hour half-life of Mo-99, which is currently produced by the fission of uranium targets in research reactors, makes it a challenge to produce and transport around the world before it spontaneously decays into Tc-99m. Built 30 to 50 years ago at the height of the Atoms for Peace era, several of the world's key nuclear reactors producing Mo-99 have experienced unplanned shutdowns that have disrupted the Mo-99 radioisotope supply. With the possibility of an unstable supply, there is a strong incentive to develop alternative production methods for this essential isotope. To address this issue, the National Nuclear Security Administration (NNSA) is supporting commercial partners to accelerate the establishment of a domestic, commercial Mo-99 production capability that does not use highly enriched uranium. At the same time, the NNSA is working to significantly reduce the risk of nuclear proliferation by supporting international Mo-99 producers in converting their Mo-99 production processes from highly enriched uranium (HEU) to low-enriched uranium (LEU). The U.S. Department of Energy's (DOE's) Argonne National Laboratory actively supports both of these efforts. "Beginning in the 1980s, Argonne focused on converting targets for the production of molybdenum-99 from highly enriched uranium to low-enriched uranium and provided the technology to do so," said George Vandegrift, a Distinguished Fellow in the Nuclear Engineering Division at Argonne. In the past five years, Argonne has partnered with industry and other national laboratories under NNSA stewardship to help establish U.S. production of Mo-99. Now, in addition to continuing this technology development for Mo-99 production methods for the DOE and NNSA, Argonne is also advancing new production and processing approaches for other promising radioisotopes with the encouragement and support of the DOE Isotope Program. Whereas Tc-99m, derived from its parent isotope Mo-99, is used primarily for diagnostic procedures, there is great interest in the development of efficient production methods for isotopes that can be used for cancer therapy. Argonne is using its recently upgraded electron linear accelerator (LINAC), along with its historical expertise in developing targets and recovering and purifying radioisotopes, to explore a variety of such therapeutic isotopes. Argonne's electron LINAC operates in a "sweet spot" for making particular radioisotopes. At 30-50 mega-electron volts and relatively high currents, the LINAC has the capability to produce potentially important radioisotopes. 
Examples include copper-67 (Cu-67) and scandium-47 (Sc-47), whose use in pharmaceutical applications has been impeded by a lack of reliable supply or low specific activity. More than 80 percent of nuclear diagnostic procedures (40 million per year) worldwide use Tc-99m. The United States alone uses half of the world's supply, yet imports 100 percent of the parent isotope Mo-99 from foreign countries. Vandegrift said technetium is widely used because of its radiological properties and rich chemistry. Tc-99m has a short half-life, emits gamma rays at energies that are easy to detect and binds to many kinds of proteins that allow the radioisotope to image specific tissues and functions of the body. "When Mo-99 faced a crisis in supply, NNSA expanded the HEU to LEU conversion program to include development of domestic supplies of Mo-99," Vandegrift said. Funded by the NNSA, Argonne is collaborating with two U.S. companies, SHINE Medical Technologies and NorthStar Medical Technologies, LLC, to establish a scalable, safe and reliable way to produce, recover and purify Mo-99 that meets the required pharmaceutical standards for use of Tc-99m as a radiopharmaceutical. Argonne demonstrated each company's production pathway with its LINAC and provided expertise in target chemistry and post processing. Both companies are in the process of establishing production facilities that will allow them to make and distribute enough Mo-99 to each meet 50 percent of U.S. demand without the use of HEU. The proposed SHINE production method uses a low-energy accelerator-based neutron generator to induce fission in a low-enriched uranyl sulfate solution, producing hundreds of isotopes, including Mo-99. Argonne researchers have demonstrated the feasibility of this method using the electron LINAC to produce neutrons via a converter to irradiate a uranyl sulfate solution. They also developed technologies for the recovery and purification of Mo-99, which makes up six percent of the fission products present in the irradiated solution. Argonne recently produced and processed a significant quantity of Mo-99 using this technology developed for SHINE. The Mo-99 was sent to GE Healthcare for testing and passed all purity and utilization requirements. The U.S. Nuclear Regulatory Commission recently authorized issuance of a construction permit for SHINE. "SHINE has benefited tremendously from the work Argonne has done to help design and test our process. Argonne's unique capabilities meant they were able to demonstrate the SHINE process years before our manufacturing facility is built, and that has greatly reduced our startup risks," said Katrina Pitas, SHINE Vice President of Business Development. "We're grateful to George Vandegrift, a pioneer in the LEU Mo-99 world, and his entire team for the shrewd insights and hard work they've contributed to the SHINE project." The NorthStar process, on the other hand, uses a high-energy electron beam to produce photons in a converter target to irradiate a set of stable Mo-100 sintered disks. In this process, photons hit atoms of Mo-100, discharging neutrons and leaving a significant yield of Mo-99 for processing. The Argonne team optimized disk chemistry and structure and developed a chemical recycling pathway that will enable NorthStar to reuse costly enriched-molybdenum target material. 
The test facility built at Argonne, in collaboration with Los Alamos National Laboratory, is a pilot-scale operation with all of the components of NorthStar's proposed production unit. In the last production test run at Argonne, several curies of Mo-99 were produced and sent to NorthStar for evaluation. "NorthStar has enjoyed significant support and contribution from Argonne National Laboratory over the past five or more years as we developed and optimized our production process for Mo-99 via an electron accelerator," said Jim Harvey, NorthStar Senior Vice President and Chief Science Officer. "The development of chemical processes to dissolve and recover enriched target material combined with the ability to now run production of meaningful irradiations using Argonne's electron accelerator are directly contributing to our successful future deployment of this Mo-99 production process in the United States." With successful demonstrations of both processes developed at Argonne's LINAC, SHINE and NorthStar have continued plans to use accelerators for production. SHINE will use an accelerator-based neutron generator at its Janesville, Wis., facility for its neutron-induced-fission process, and NorthStar has long-term plans to install its own electron LINAC at its Beloit, Wis., facility as the basis of its gamma-induced-reaction process. Back at Argonne, researchers, with support and encouragement of the DOE Isotope Program, are expanding their radioisotope program with the goal of performing groundbreaking research and carrying out the development and demonstration needed for supplying a range of key radioisotopes. Nontoxic with suitable emissions for radiotherapy and imaging and a 2.6-day half-life, Cu-67 has been recognized for decades for its suitable properties as a "theranostic" -- a combined therapeutic and diagnostic radioisotope. "Cu-67 could allow doctors to monitor therapy in real-time," said David Rotsch, Argonne chemist. "The beta particles it emits have been shown to penetrate a few millimeters into tissues, killing cancer cells, but it also emits gamma radiation that can be used to track how well it is eradicating those cancer cells." Using the LINAC, researchers produced high specific activity Cu-67 and demonstrated small-scale production of the radioisotope. Now, with funding from the DOE Isotope Program, they are preparing to produce enough Cu-67 to meet estimated demand for external R&D and future clinical trials. Argonne will distribute Cu-67 through the DOE's National Isotope Development Center as early as November of this year. "We're starting at 100 milliCurie batches and can increase to two Curie batches depending on demand," Rotsch said. The team also received Argonne Laboratory Directed Research and Development (LDRD) funding in fall 2015 to investigate the production of various other radioisotopes with the electron LINAC. One focus of the LDRD is creating a production mode for Sc-47 that will enable quick and simple processing for distribution. Like Cu-67, Sc-47 has characteristics that make it a good theranostic candidate. Argonne researchers are developing a process for the production and purification of Sc-47 to minimize processing times and maximize yields. This isotope would also be made available through the DOE Isotope Program. "We're in a good position because, right now, our LINAC is unique in having a higher power and energy beam," said Sergey Chemerisov, Accelerator Facility Manager in the Nuclear Engineering Division. 
"We're the only DOE LINAC that can do this job right now."


Abstract: Physicists at the Technical University of Munich, the Los Alamos National Laboratory and Stanford University (USA) have tracked down semiconductor nanostructure mechanisms that can result in the loss of stored information - and halted the amnesia using an external magnetic field. The new nanostructures comprise common semiconductor materials compatible with standard manufacturing processes. Quantum bits, qubits for short, are the basic logical elements of quantum information processing (QIP) that may represent the future of computer technology. Since they process problems in a quantum-mechanical manner, such quantum computers might one day solve complex problems much more quickly than currently possible, or so researchers hope. In principle, there are various possibilities of implementing qubits: photons are an option equally as viable as confined ions or atoms whose states can be altered in a targeted manner using lasers. The key questions regarding their potential use as memory units are how long information can be stored in the system and which mechanisms might lead to a loss of information. A team of physicists headed by Alexander Bechtold and Professor Jonathan Finley at the Walter Schottky Institute of the Technical University of Munich and the Cluster of Excellence Nanosystems Initiative Munich (NIM) have now presented a system comprising a single electron trapped in a semiconductor nanostructure. Here, the electron's spin serves as the information carrier. The researchers were able to precisely demonstrate the existence of different data loss mechanisms and also showed that stored information can nonetheless be retained using an external magnetic field. Electrons trapped in a quantum dot The TUM physicists evaporated indium gallium arsenide onto a gallium arsenide substrate to form their nanostructure. As a result of the different lattice spacing of the two semiconductor materials, strain is produced at the interface between the crystal lattices. The system thus forms nanometer-scale "hills" - so-called quantum dots. When the quantum dots are cooled down to liquid helium temperatures and optically excited, a single electron can be trapped in each of the quantum dots. The spin states of the electrons can then be used as information stores. Laser pulses can read and alter the states optically from outside. This makes the system ideal as a building block for future quantum computers. Spin up or spin down correspond to the standard logical information units 0 and 1. But, on top of this come additional intermediate states of quantum mechanical up and down superpositions. Hitherto unknown memory loss mechanisms However, there is one problem: "We found out that the strain in the semiconductor material leads to a new and until recently unknown mechanism that results in the loss of quantum information," says Alexander Bechtold. The strain creates tiny electric fields in the semiconductor that influence the nuclear spin orientation of the atomic nuclei. "It's a kind of piezoelectric effect," says Bechtold. "It results in uncontrolled fluctuations in the nuclear spins." These can, in turn, modify the spin of the electrons, i.e. the stored information. The information is lost within a few hundred nanoseconds. In addition, Alexander Bechtold's team was able to provide concrete evidence for further information loss mechanisms, for example that electron spins are generally influenced by the spins of the surrounding 100,000 atomic nuclei. 
Preventing quantum mechanical amnesia: "However, both loss channels can be switched off when a magnetic field of around 1.5 tesla is applied," says Bechtold. "This corresponds to the magnetic field strength of a strong permanent magnet. It stabilizes the nuclear spins, and the encoded information remains intact." "Overall, the system is extremely promising," according to Jonathan Finley, head of the research group. "The semiconductor quantum dots have the advantage that they harmonize perfectly with existing computer technology since they are made of similar semiconductor material." They could even be equipped with electrical contacts, allowing them to be controlled not only optically using a laser, but also using voltage pulses. The research was funded by the European Union (S3 Nano and BaCaTeC), the US Department of Energy, the US Army Research Office (ARO), the German Research Foundation DFG (Cluster of Excellence Nanosystems Initiative Munich (NIM) and SFB 631), the Alexander von Humboldt Foundation, as well as the TUM Institute for Advanced Study (Focus Group Nanophotonics and Quantum Optics).
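The stabilizing role of the external field can be illustrated with a toy Monte Carlo estimate: a spin precessing about the sum of an applied field and a random, quasi-static nuclear (Overhauser) field keeps most of its polarization once the applied field dominates the random one. The sketch below is only a minimal illustration under that assumption, in arbitrary field units; it is not the TUM group's model or data.

# Toy estimate: polarization retained by a spin precessing about the sum of an
# external field (along z) and a random quasi-static nuclear field.
# Illustrative only; parameters and units are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def retained_polarization(b_ext, sigma=1.0, n_samples=100_000):
    """Ensemble- and time-averaged S_z (initial spin along z) for precession
    about B_total = b_ext * z_hat + delta_B, with Gaussian random delta_B."""
    delta_b = rng.normal(0.0, sigma, size=(n_samples, 3))
    b_total = delta_b + np.array([0.0, 0.0, b_ext])
    n_hat = b_total / np.linalg.norm(b_total, axis=1, keepdims=True)
    # the long-time average of S_z for precession about axis n_hat is n_z**2
    return float(np.mean(n_hat[:, 2] ** 2))

for b in (0.0, 0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"external field = {b:4.1f} x sigma -> retained polarization ~ {retained_polarization(b):.2f}")

At zero external field only about a third of the polarization survives on average, while a field much larger than the nuclear spread pins the precession axis and the stored spin orientation is essentially preserved, in the spirit of the 1.5-tesla stabilization described above.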


Los Alamos National Laboratory Orders a 1000+ Qubit D-Wave 2X Quantum Computer: Los Alamos to explore new methods of computing as part of new U.S. Government National Strategic Computing Initiative

Abstract: D-Wave Systems Inc., the world's first quantum computing company, announced that Los Alamos National Laboratory will acquire and install the latest D-Wave quantum computer, the 1000+ qubit D-Wave 2X™ system. Los Alamos, a multidisciplinary research institution engaged in strategic science on behalf of national security, will lead a collaboration within the Department of Energy and with select university partners to explore the capabilities and applications of quantum annealing technology, consistent with the goals of the government-wide National Strategic Computing Initiative. The National Strategic Computing Initiative, created by executive order of President Obama in late July, is intended "to maximize [the] benefits of high-performance computing (HPC) research, development, and deployment." "Eventually Moore's Law (which predicted that the number of transistors on an integrated circuit would double every two years) will come to an end," said John Sarrao, associate director for Theory, Simulation, and Computation at Los Alamos. "Dennard Scaling (which predicted that performance per watt of computing would grow exponentially at roughly the same rate) already has. Beyond these two observations lies the end of the current 'conventional' computing era, so new technologies and ideas are needed." "As conventional computers reach their limits in terms of scaling and performance per watt, we need to investigate new technologies to support our mission," said Mark Anderson of the Laboratory's Weapons Physics Directorate. "Researching and evaluating quantum annealing as the basis for new approaches to address intractable problems is an essential and powerful step, and will enable a new generation of forward thinkers to influence its evolution in a direction most beneficial to the nation." "Los Alamos is a global leader in high performance computing and a pioneer in the application of new architectures to solve critical problems related to national security, energy, the environment, materials, health and earth science," said Robert "Bo" Ewald, president of D-Wave U.S. "As we work jointly with scientists and engineers at Los Alamos, we expect to be able to accelerate the pace of quantum software development to advance the state of algorithms, applications and software tools for quantum computing." D-Wave's quantum annealing technology leverages quantum effects to quickly find the lowest points in a virtual "energy landscape." These low points correspond to optimal or near-optimal solutions to a given problem. In addition, D-Wave's superconducting processor generates almost no heat, so the system's power requirements, currently less than 25 kW, will remain low as the computer continues to scale. This is in stark contrast to current supercomputers, which can require many megawatts of power, a huge impediment as the need for computational resources continues to grow. The D-Wave 2X system is expected to be installed at Los Alamos in early 2016.

About D-Wave Systems Inc. D-Wave Systems is the first quantum computing company. Its mission is to integrate new discoveries in physics, engineering, manufacturing and computer science into breakthrough approaches to computation to help solve some of the world's most complex challenges.
The company's quantum computers are built using a novel type of superconducting processor that uses quantum mechanics to massively accelerate computation. D-Wave's customers include some of the world's most prominent organizations including Lockheed Martin and Google, whose system is hosted at NASA's Ames Research Center. With headquarters near Vancouver, Canada, D-Wave U.S. is based in Palo Alto, California. D-Wave has a blue-chip investor base including Bezos Expeditions, BDC Capital, DFJ, Goldman Sachs, Growthworks, Harris & Harris Group, In-Q-Tel, International Investment and Underwriting, and Kensington Partners Limited.
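The "energy landscape" picture in the announcement above corresponds to minimizing an Ising (or QUBO) cost function over binary spin variables. The sketch below runs classical simulated annealing on a small random Ising instance purely to illustrate that picture; it is a toy, not D-Wave's quantum annealing hardware, its programming interface, or any Los Alamos workload, and the problem instance is invented.

# Classical simulated annealing on a random Ising cost function
#   E(s) = sum_{i<j} J_ij * s_i * s_j + sum_i h_i * s_i,  with s_i in {-1, +1}.
# Toy illustration of "finding low points in an energy landscape".
import math
import random

random.seed(1)
N = 12
J = {(i, j): random.choice([-1.0, 1.0]) for i in range(N) for j in range(i + 1, N)}
h = [random.uniform(-0.5, 0.5) for _ in range(N)]

def energy(s):
    e = sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e + sum(h[i] * s[i] for i in range(N))

s = [random.choice([-1, 1]) for _ in range(N)]
e_cur = energy(s)
best_e = e_cur
T = 5.0
while T > 0.01:
    for _ in range(200):
        i = random.randrange(N)
        s[i] *= -1                              # propose flipping one spin
        e_new = energy(s)
        if e_new <= e_cur or random.random() < math.exp(-(e_new - e_cur) / T):
            e_cur = e_new                       # accept the move
            best_e = min(best_e, e_cur)
        else:
            s[i] *= -1                          # reject: undo the flip
    T *= 0.95                                   # gradually cool the system
print("lowest energy found:", round(best_e, 3))

A quantum annealer tackles the same kind of cost function, but uses quantum fluctuations rather than thermal ones to escape local minima.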


Harris & Harris Group Portfolio Company, D-Wave Systems, Announces the Sale of Its Latest Quantum Computer to Los Alamos National Laboratory Abstract: Harris & Harris Group, Inc. (NASDAQ:TINY), an investor in transformative companies enabled by disruptive science, notes that its portfolio company, D-Wave Systems, Inc., announced that Los Alamos National Laboratory will acquire and install the latest D-Wave quantum computer, the 1000+ qubit D-Wave 2X™ system. Los Alamos, a multidisciplinary research institution engaged in strategic science on behalf of national security, will lead a collaboration within the Department of Energy and with select university partners to explore the capabilities and applications of quantum annealing technology, consistent with the goals of the government-wide National Strategic Computing Initiative. The D-Wave 2X system is expected to be installed at Los Alamos in early 2016. D-Wave's press release can be viewed at www.dwavesys.com/press-releases/los-alamos-national-laboratory-orders-1000-qubit-d-wave-2x-quantum-computer. About Harris & Harris Group Harris & Harris Group is a publicly traded venture capital firm that is also a business development company. Detailed information about Harris & Harris Group and its holdings can be found on its website at www.HHVC.com, on Facebook at www.facebook.com/harrisharrisvc and by following on Twitter @harrisandharrisgroup. This press release may contain statements of a forward-looking nature relating to future events. These forward-looking statements are subject to the inherent uncertainties in predicting future results and conditions. These statements reflect the Company's current beliefs, and a number of important factors could cause actual results to differ materially from those expressed in this press release. Please see the Company's Annual Report on Form 10-K for the fiscal year ended December 31, 2014, as well as subsequent filings, filed with the Securities and Exchange Commission for a more detailed discussion of the risks and uncertainties associated with the Company's business, including, but not limited to, the risks and uncertainties associated with venture capital investing and other significant factors that could affect the Company's actual results. Except as otherwise required by Federal securities laws, the Company undertakes no obligation to update or revise these forward-looking statements to reflect new events or uncertainties. The references and links to the websites www.HHVC.com, www.dwavesys.com, and www.Facebook.com have been provided as a convenience, and the information contained on such websites is not incorporated by reference into this press release. Harris & Harris Group is not responsible for the contents of third party websites.


News Article | November 23, 2016
Site: www.businesswire.com

SALT LAKE CITY--(BUSINESS WIRE)--SC16, the 28th annual international conference of high performance computing, networking, storage and analysis, celebrated the contributions of researchers and scientists - from those just starting their careers to those whose contributions have made lasting impacts. The conference drew more than 11,100 registered attendees and featured a technical program spanning six days. The exhibit hall featured 349 exhibitors from industry, academia and research organizations from around the world. “There has never been a more important time for high performance computing, networking and data analysis,” said SC16 General Chair John West from the Texas Advanced Computing Center. “But it is also an acute time for growing our workforce and expanding diversity in the industry. SC16 was the perfect blend of research, technological advancement, career recognition and improving the ways in which we attract and retain that next generation of scientists.” According to Trey Breckenridge, SC16 Exhibits Chair from Mississippi State University, the SC16 Exhibition was the largest in the history of the conference. The overall size of the exhibition was 150,000 net square feet (breaking the 2015 record of 141,430). The 349 industry and research-focused exhibits included 44 first-timers and 120 organizations from 25 countries outside the United States. During the conference, Salt Lake City also became the hub for the world’s fastest computer network: SCinet, SC16’s custom-built network which delivered 3.15 terabits per second in bandwidth. The network featured 56 miles of fiber deployed throughout the convention center and $32 million in loaned equipment. It was all made possible by 200 volunteers representing global organizations spanning academia, government and industry. For the third year, SC featured an opening “HPC Matters” plenary that this year focused on Precision Medicine, which examined what the future holds in this regard and how advances are only possible through the power of high performance computing and big data. Leading voices from the frontlines of clinical care, medical research, HPC system evolution, pharmaceutical R&D and public policy shared diverse perspectives on the future of precision medicine and how it will impact society. The Technical Program again offered the highest quality original HPC research. The SC workshops set a record with more than 2,500 attendees. There were 14 Best Paper Finalists and six Gordon Bell Finalists. These submissions represent the best of the best in a wide variety of research topics in HPC. “These awards are very important for the SC Conference Series. They celebrate the best and the brightest in high performance computing,” said Satoshi Matsuoka, SC16 Awards Chair from Tokyo Institute of Technology. “These awards are not just plaques or certificates. They define excellence. They set the bar for the years to come and are powerful inspiration for both early career and senior researchers.” Following is the list of Technical Program awards presented at SC16: SC16 received 442 paper submissions, of which 81 were accepted (18.3 percent acceptance rate). Of those, 13 were selected as finalists for the Best Paper (six) and Best Student Paper (seven) awards. The Best Paper Award went to “Daino: A High-Level Framework for Parallel and Efficient AMR on GPUs” by Mohamed Wahib Attia and Naoya Maruyama, RIKEN; and Takayuki Aoki, Tokyo Institute of Technology. 
The Best Student Paper Award went to “Flexfly: Enabling a Reconfigurable Dragonfly Through Silicon Photonics” by Ke Wen, Payman Samadi, Sebastien Rumley, Christine P. Chen, Yiwen Shen, Meisam Bahadori, and Karen Bergman, Columbia University, and Jeremiah Wilke, Sandia National Laboratories. The ACM Gordon Bell Prize is awarded for outstanding team achievement in high performance computing and tracks the progress of parallel computing. This year, the prize was awarded to a 12-member Chinese team for their research project, “10M-Core Scalable Fully-Implicit Solver for Nonhydrostatic Atmospheric Dynamics.” The winning team presented a solver (a calculation method) for atmospheric dynamics. In the abstract of their presentation, the winning team writes, “On the road to the seamless weather-climate prediction, a major obstacle is the difficulty of dealing with various spatial and temporal scales. The atmosphere contains time-dependent multi-scale dynamics that support a variety of wave motions.” To simulate the vast number of variables inherent in a weather system developing in the atmosphere, the winning group presents a highly scalable fully implicit solver for three-dimensional nonhydrostatic atmospheric simulations governed by the fully compressible Euler equations, a set of equations frequently used to describe fluid dynamics (liquids and gases in motion). Winning team members are Chao Yang, Chinese Academy of Sciences; Wei Xue, Weimin Zheng, Guangwen Yang, Ping Xu, and Haohuan Fu, Tsinghua University; Hongtao You, National Research Center of Parallel Computer Engineering and Technology; Xinliang Wang, Beijing Normal University; Yulong Ao and Fangfang Liu, Chinese Academy of Sciences; Lin Gan, Tsinghua University; and Lanning Wang, Beijing Normal University. This year, SC received 172 detailed poster submissions that went through a rigorous review process. In the end, 112 posters were accepted and five finalists were selected for the Best Poster Award. As part of its research poster activities, SC16 also hosted the ACM Student Research Competition for both undergraduate and graduate students. In all, 63 submissions were received and 26 Student Research Competition posters were accepted – 14 in the graduate category and 12 in the undergraduate category. The Best Poster Award went to “A Fast Implicit Solver with Low Memory Footprint and High Scalability for Comprehensive Earthquake Simulation System” with Kohei Fujita from RIKEN as the lead author. In the undergraduate category, First Place: “Touring Dataland? Automated Recommendations for the Big Data Traveler” by William Agnew and Michael Fischer, Advisors: Kyle Chard and Ian Foster. Second Place: “Analysis of Variable Selection Methods on Scientific Cluster Measurement Data” by Jonathan Wang, Advisors: Wucherl Yoo and Alex Sim. Third Place: “Discovering Energy Usage Patterns on Scientific Clusters” by Matthew Bae, Advisors: Wucherl Yoo, Alex Sim and Kesheng Wu. In the graduate category, First Place: “Job Startup at Exascale: Challenges and Solutions” by Sourav Chakraborty, Advisor: Dhabaleswar K. Panda. Second Place: “Performance Modeling and Engineering with Kerncraft,” by Julian Hammer, Advisors: Georg Hager and Gerhard Wellein. Third Place: “Design and Evaluation of Topology-Aware Scatter and AllGather Algorithms for Dragonfly Networks” by Nathanael Cheriere, Advisor: Matthieu Dorier. The Scientific Visualization and Data Analytics Award featured six finalists. The award went to “Visualization and Analysis of Threats from Asteroid Ocean Impacts” with John Patchett as the lead author.
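For reference, the fully compressible Euler equations mentioned in the Gordon Bell abstract can be written in a standard conservation form with a gravity source term (a textbook form; the winning team's discretized formulation adds further terms and details):

\partial_t \rho + \nabla \cdot (\rho \mathbf{u}) = 0,
\qquad
\partial_t (\rho \mathbf{u}) + \nabla \cdot (\rho \mathbf{u} \otimes \mathbf{u} + p \mathbf{I}) = \rho \mathbf{g},
\qquad
\partial_t E + \nabla \cdot \left[ (E + p)\, \mathbf{u} \right] = \rho \mathbf{g} \cdot \mathbf{u},

closed by an equation of state such as p = (\gamma - 1)\left( E - \tfrac{1}{2} \rho |\mathbf{u}|^2 \right). A fully implicit solver advances all of these tightly coupled variables in a single nonlinear solve per time step, which is what makes extreme scalability difficult.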
The Student Cluster Competition returned for its 10th year. The competition, which debuted at SC07 in Reno and has since been replicated in Europe, Asia and Africa, is a real-time, non-stop, 48-hour challenge in which teams of six undergraduates assemble a small cluster at SC16 and race to complete a real-world workload across a series of scientific applications, demonstrate knowledge of system architecture and application performance, and impress HPC industry judges. The students partner with vendors to design and build a cutting-edge cluster from commercially available components, not to exceed a 3120-watt power limit, and work with application experts to tune and run the competition codes. For the first time ever, the team that won top honors also won the award for achieving highest performance for the Linpack benchmark application. The team “SwanGeese” is from the University of Science and Technology of China. In traditional Chinese culture, the rare Swan Goose stands for teamwork, perseverance and bravery. This is the university’s third appearance in the competition. Also, an ACM SIGHPC Certificate of Appreciation is presented to the authors of a recent SC paper to be used for the SC16 Student Cluster Competition Reproducibility Initiative. The selected paper was “A Parallel Connectivity Algorithm for de Bruijn Graphs in Metagenomic Applications” by Patrick Flick, Chirag Jain, Tony Pan and Srinivas Aluru from Georgia Institute of Technology. The George Michael Memorial HPC Fellowship honors exceptional Ph.D. students. The first recipient is Johann Rudi from the Institute for Computational Engineering and Sciences at the University of Texas at Austin for his project, “Extreme-Scale Implicit Solver for Nonlinear, Multiscale, and Heterogeneous Stokes Flow in the Earth’s Mantle.” The second recipient is Axel Huebl from Helmholtz-Zentrum Dresden-Rossendorf at the Technical University of Dresden for his project, “Scalable, Many-core Particle-in-cell Algorithms to Simulate Next Generation Particle Accelerators and Corresponding Large-scale Data Analytics.” The SC Conference Series also serves as the venue for recognizing leaders in the HPC community for their contributions during their careers. Here are the career awards presented at SC16: The IEEE-CS Seymour Cray Computer Engineering Award recognizes innovative contributions to high performance computing systems that best exemplify the creative spirit demonstrated by Seymour Cray. The 2016 IEEE-CS Seymour Cray Computer Engineering Award was presented to William J. Camp of Los Alamos National Laboratory “for visionary leadership of the Red Storm project, and for decades of leadership of the HPC community.” Camp previously served as Intel’s Chief Supercomputing Architect and directed Intel’s Exascale R&D efforts. Established in memory of Ken Kennedy, the founder of Rice University's nationally ranked computer science program and one of the world's foremost experts on high-performance computing, the ACM/IEEE-CS Ken Kennedy Award recognizes outstanding contributions to programmability or productivity in high-performance computing together with significant community service or mentoring contributions. The 2016 Ken Kennedy Award was presented to William D.
Gropp “for highly influential contributions to the programmability of high-performance parallel and distributed computers, and extraordinary service to the profession.” Gropp is the acting director of the National Center for Supercomputing Applications, director of the Parallel Computing Institute, and the Thomas M. Siebel Chair in Computer Science at the University of Illinois at Urbana-Champaign. The IEEE-CS Sidney Fernbach Memorial Award is awarded for outstanding contributions in the application of high performance computers using innovative approaches. The 2016 IEEE-CS Sidney Fernbach Memorial Award was presented to Vipin Kumar “for foundational work on understanding scalability, and highly scalable algorithms for graph partitioning, sparse linear systems and data mining.” Kumar is a Regents Professor at the University of Minnesota. The Supercomputing Conference Test of Time Award recognizes an outstanding paper that has appeared at the SC conference and has deeply influenced the HPC discipline. It is a mark of historical impact and recognition that the paper has changed HPC trends. The winning paper is “Automatically Tuned Linear Algebra Software” by Clint Whaley from the University of Tennessee and Jack Dongarra from the University of Tennessee and Oak Ridge National Laboratory. The IEEE TCHPC Award for Excellence in Scalable Computing for Early Career Researchers recognizes individuals who have made outstanding and potentially long-lasting contributions to the field within five years of receiving their Ph.D. The 2016 awards were presented to Kyle Chard, Computation Institute, University of Chicago and Argonne National Laboratory; Sunita Chandrasekaran, University of Delaware; and Seyong Lee, Oak Ridge National Laboratory. SC17 will be held November 12-17, 2017, in Denver, Colorado. For more details, go to http://sc17.supercomputing.org/. SC16, sponsored by the IEEE Computer Society and ACM (Association for Computing Machinery), offers a complete technical education program and exhibition to showcase the many ways high performance computing, networking, storage and analysis lead to advances in scientific discovery, research, education and commerce. This premier international conference includes a globally attended technical program, workshops, tutorials, a world-class exhibit area, demonstrations and opportunities for hands-on learning. For more information on SC16, visit: http://sc16.supercomputing.org.


News Article | October 27, 2016
Site: phys.org

Astronomers using observations from NASA's Kepler and Swift missions have discovered a batch of rapidly spinning stars that produce X-rays at more than 100 times the peak levels ever seen from the sun. The stars, which spin so fast they've been squashed into pumpkin-like shapes, are thought to be the result of close binary systems where two sun-like stars merge.
"These 18 stars rotate in just a few days on average, while the sun takes nearly a month," said Steve Howell, a senior research scientist at NASA's Ames Research Center in Moffett Field, California, and leader of the team. "The rapid rotation amplifies the same kind of activity we see on the sun, such as sunspots and solar flares, and essentially sends it into overdrive." The most extreme member of the group, a K-type orange giant dubbed KSw 71, is more than 10 times larger than the sun, rotates in just 5.5 days, and produces X-ray emission 4,000 times greater than the sun does at solar maximum. These rare stars were found as part of an X-ray survey of the original Kepler field of view, a patch of the sky comprising parts of the constellations Cygnus and Lyra. From May 2009 to May 2013, Kepler measured the brightness of more than 150,000 stars in this region to detect the regular dimming from planets passing in front of their host stars. The mission was immensely successful, netting more than 2,300 confirmed exoplanets and nearly 5,000 candidates to date. An ongoing extended mission, called K2, continues this work in areas of the sky located along the ecliptic, the plane of Earth's orbit around the sun. "A side benefit of the Kepler mission is that its initial field of view is now one of the best-studied parts of the sky," said team member Padi Boyd, a researcher at NASA's Goddard Space Flight Center in Greenbelt, Maryland, who designed the Swift survey. For example, the entire area was observed in infrared light by NASA's Wide-field Infrared Survey Explorer, and NASA's Galaxy Evolution Explorer observed many parts of it in the ultraviolet. "Our group was looking for variable X-ray sources with optical counterparts seen by Kepler, especially active galaxies, where a central black hole drives the emissions," she explained. Using the X-ray and ultraviolet/optical telescopes aboard Swift, the researchers conducted the Kepler-Swift Active Galaxies and Stars Survey (KSwAGS), imaging about six square degrees, or 12 times the apparent size of a full moon, in the Kepler field. "With KSwAGS we found 93 new X-ray sources, about evenly split between active galaxies and various types of X-ray stars," said team member Krista Lynne Smith, a graduate student at the University of Maryland, College Park who led the analysis of Swift data. "Many of these sources have never been observed before in X-rays or ultraviolet light." For the brightest sources, the team obtained spectra using the 200-inch telescope at Palomar Observatory in California. These spectra provide detailed chemical portraits of the stars and show clear evidence of enhanced stellar activity, particularly strong diagnostic lines of calcium and hydrogen. The researchers used Kepler measurements to determine the rotation periods and sizes for 10 of the stars, which range from 2.9 to 10.5 times larger than the sun. Their surface temperatures range from somewhat hotter to slightly cooler than the sun, mostly spanning spectral types F through K. Astronomers classify the stars as subgiants and giants, which are more advanced evolutionary phases than the sun's caused by greater depletion of their primary fuel source, hydrogen. All of them eventually will become much larger red giant stars. A paper detailing the findings will be published in the Nov. 1 edition of the Astrophysical Journal and is now available online. 
Forty years ago, Ronald Webbink at the University of Illinois, Urbana-Champaign noted that close binary systems cannot survive once the fuel supply of one star dwindles and it starts to enlarge. The stars coalesce to form a single rapidly spinning star initially residing in a so-called "excretion" disk formed by gas thrown out during the merger. The disk dissipates over the next 100 million years, leaving behind a very active, rapidly spinning star. Howell and his colleagues suggest that their 18 KSwAGS stars formed by this scenario and have only recently dissipated their disks. To identify so many stars passing through such a cosmically brief phase of development is a real boon to stellar astronomers. "Webbink's model suggests we should find about 160 of these stars in the entire Kepler field," said co-author Elena Mason, a researcher at the Italian National Institute for Astrophysics Astronomical Observatory of Trieste. "What we have found is in line with theoretical expectations when we account for the small portion of the field we observed with Swift." The team has already extended their Swift observations to additional fields mapped by the K2 mission. Ames manages the Kepler and K2 missions for NASA's Science Mission Directorate. NASA's Jet Propulsion Laboratory in Pasadena, California, managed Kepler mission development. Ball Aerospace & Technologies Corp. operates the flight system with support from the Laboratory for Atmospheric and Space Physics at the University of Colorado in Boulder. Goddard manages the Swift mission in collaboration with Pennsylvania State University in University Park, the Los Alamos National Laboratory in New Mexico and Orbital Sciences Corp. in Dulles, Virginia. Other partners include the University of Leicester and Mullard Space Science Laboratory in the United Kingdom, Brera Observatory and the Italian Space Agency in Italy, with additional collaborators in Germany and Japan. More information: Steve B. Howell et al., "Rapidly Rotating, X-ray Bright Stars in the Kepler Field," The Astrophysical Journal (2016). DOI: 10.3847/0004-637X/831/1/27
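As a rough cross-check of the rotation figures quoted above, the implied equatorial speeds follow from v = 2πR/P. The sketch below evaluates this for KSw 71, taken as 10 solar radii with a 5.5-day period per the article, and for the sun, taken as roughly a 25-day equatorial period; the solar radius is the standard value, and everything else is back-of-the-envelope rather than taken from the paper.

# Back-of-the-envelope equatorial rotation speeds implied by the figures above:
# KSw 71 at ~10 solar radii and a 5.5-day period, versus the sun at ~25 days.
import math

R_SUN_KM = 6.957e5      # nominal solar radius in km
DAY_S = 86400.0         # seconds per day

def v_eq(radius_in_rsun, period_days):
    """Equatorial speed v = 2*pi*R / P, returned in km/s."""
    return 2.0 * math.pi * radius_in_rsun * R_SUN_KM / (period_days * DAY_S)

print(f"Sun    : ~{v_eq(1.0, 25.0):5.1f} km/s")
print(f"KSw 71 : ~{v_eq(10.0, 5.5):5.1f} km/s")

The factor of roughly 45 in surface speed is one way to picture why the magnetic activity of these merger products is "sent into overdrive."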


News Article | October 23, 2015
Site: news.mit.edu

Twelve new faculty members have been invited to join the ranks of the School of Engineering at MIT. Drawn from institutions and industry around the world, and ranging from distinguished senior researchers to promising young investigators, they will contribute to the research and educational activities of six academic departments in the school and a range of other labs and centers across the Institute. “This year we are welcoming another exceptionally strong group of new faculty to engineering,” says Ian A. Waitz, Dean of the School of Engineering. “They are remarkably accomplished, and their research spans some of the most important and pressing challenges in the world. I can’t wait to see what they do.” The new School of Engineering faculty members are: Mohammad Alizadeh will join the faculty as an assistant professor in the Department of Electrical Engineering and Computer Science in September 2015. He was a principal engineer at Cisco, which he joined through the acquisition of Insieme Networks in 2013. Alizadeh completed his undergraduate degree in electrical engineering at Sharif University of Technology and received his PhD in electrical engineering in 2013 from Stanford University, where he was advised by Balaji Prabhakar. His research interests are broadly in the areas of networked systems, data-center networking, and cloud computing. His dissertation focused on designing high-performance packet-transport mechanisms for data centers. His research has garnered significant industry interest: The Data Center TCP congestion control algorithm has been integrated into the Windows Server 2012 operating system; the QCN algorithm has been standardized as the IEEE 802.1Qau standard; and most recently, the CONGA adaptive load-balancing mechanism has been implemented in Cisco’s new flagship Application Centric Infrastructure products. Alizadeh is a recipient of a SIGCOMM best-paper award, a Stanford Electrical Engineering Departmental Fellowship, the Caroline and Fabian Pease Stanford Graduate Fellowship, and the Numerical Technologies Inc. Prize and Fellowship. Tamara Broderick will start as an assistant professor in electrical engineering and computer science in January 2015. She received a BA in mathematics from Princeton in 2007, a master of advanced study for completion of Part III of the Mathematical Tripos from the University of Cambridge in 2008, an MPhil in physics from the University of Cambridge in 2009, and an MS in computer science and a PhD in statistics from the University of California at Berkeley in 2013 and 2014, respectively. Her recent research has focused on developing and analyzing models for scalable, unsupervised learning using Bayesian nonparametrics. She has been awarded the Evelyn Fix Memorial Medal and Citation (for the PhD student on the Berkeley campus showing the greatest promise in statistical research), the Berkeley Fellowship, a National Science Foundation Graduate Research Fellowship, and a Marshall Scholarship. Michael Carbin will join the Department of Electrical Engineering and Computer Science as an assistant professor in January 2016. His research interests include the theory, design, and implementation of programming systems, including languages, program logics, static and dynamic program analyses, run-time systems, and mechanized verifiers. His recent research has focused on the design and implementation of programming systems that deliver improved performance and resilience by incorporating approximate computing and self-healing. 
Carbin’s research on verifying the reliability of programs that execute on unreliable hardware received a best-paper award at a leading programming languages conference (OOPSLA 2013). His undergraduate research at Stanford received the Wegbreit Prize for Best Computer Science Undergraduate Honors Thesis. As a graduate student at MIT, he received the MIT-Lemelson Presidential and Microsoft Research Graduate Fellowships. James Collins joined the faculty in the Department of Biological Engineering and as a core member of the Institute for Medical Engineering and Science. Collins received a PhD in mechanical engineering from the University of Oxford and was formerly the William F. Warren Distinguished Professor, university professor, professor of biomedical engineering, and director of the Center of Synthetic Biology at Boston University. He is a world leader in bringing together engineering principles and fundamental biology to make new discoveries and invent systems that can improve the human condition. Collins is among the founders of the field of synthetic biology. Otto X. Cordero will join the Department of Civil and Environmental Engineering as an assistant professor. He received a BS in computer and electrical engineering from the Polytechnic University of Ecuador, and an MS in artificial intelligence and PhD in theoretical biology from Utrecht University. For his dissertation, Cordero worked with Paulien Hogeweg on the scaling laws that govern the evolution of genome size in microbes. While a Netherlands Organization for Scientific Research Postdoctoral Fellow working with Martin Polz, he pursued a study of ecological and social interactions in wild populations of bacteria, and demonstrated the importance of these interactions in generating patterns of diversity and sustaining ecological function. In 2013 Cordero was awarded the European Research Council Starting Grant, the most prestigious career award in Europe, to reconstruct and model networks of ecological interactions that form between heterotrophic microbes in the ocean. Since November 2013, he has been an assistant professor at the Swiss Federal Institute of Technology in Zurich. The main goal of Cordero’s lab is to develop the study of natural microbial communities as dynamical systems, using a combination of experimental and computational approaches. Areg Danagoulian joined the faculty in the Department of Nuclear Science and Engineering (NSE) as an assistant professor in July 2014. He received a BS in physics from MIT and a PhD in experimental nuclear physics from the University of Illinois at Urbana-Champaign. He was a postdoctoral associate at the Los Alamos National Laboratory and subsequently worked as a senior scientist at Passport Systems Inc. Danagoulian’s research interests are focused in nuclear security. He works on problems in the areas of nuclear nonproliferation, technologies for arms-control treaty verification, nuclear safeguards, and nuclear-cargo security. Specific projects include the development of zero-knowledge detection concepts for weapon authentication, and research on monochromatic, tunable sources that can be applied to active interrogation of cargoes. Other areas of research include nuclear forensics and the development of new detection concepts. Danagoulian’s research and teaching will contribute to NSE’s growing program in nuclear security. Ruonan Han joined the electrical engineering and computer science faculty in September as an assistant professor. 
He is also a core member of the Microsystems Technology Laboratories. He earned his BS from Fudan University in 2007, an MS in electrical engineering from the University of Florida in 2009, and his PhD in electrical and computer engineering from Cornell University in 2014. Han’s research group aims to explore microelectronic-circuit and system technologies to bridge the terahertz gap between microwave and infrared domains. They focus on high-power generation, sensitive detection and energy-efficient systems. Han is the recipient of the Electrical and Computer Engineering Director’s Best Thesis Research Award and Innovation Award from Cornell, the Solid-State Circuits Society Pre-Doctoral Achievement Award and Microwave Theory and Techniques Society Graduate Fellowship Award from IEEE, as well as the Best Student Paper Award from the IEEE Radio-Frequency Integrated Circuits Symposium. Juejun (JJ) Hu joined the faculty in the Department of Materials Science and Engineering in January 2015 as an assistant professor and as the Merton C. Flemings Career Development Professor of Materials Science and Engineering. He comes to MIT from the University of Delaware, where he was a tenure-track assistant professor. Previously, he was a postdoc in MIT’s Microphotonics Center. As the Francis Alison Young Professor, Hu initiated and led research projects involving environmental monitoring, renewable energy, biological sensing, and optical communications. He received the 2013 Gerard J. Mangone Young Scholars Award, which recognizes promising and accomplished young faculty and is the University of Delaware’s highest faculty honor. His research is in three main areas: substrate-blind multifunctional photonic integration, mid-infrared integrated photonics, and 3-D photonic integrated circuits. Hu’s group has applied photonic technologies to address emerging application needs in environmental monitoring, renewable energy harvesting, communications, and biotechnology. He earned a BS in materials science and engineering from Tsinghua University, and a PhD from MIT. Rafael Jaramillo will join the materials science and engineering faculty as an assistant professor and the Toyota Career Development Professor in Materials Science and Engineering in the summer of 2015. He has a BS summa cum laude and an MEng, both in applied and engineering physics, from Cornell University. He also holds a PhD in physics from the University of Chicago. Jaramillo is currently a senior postdoctoral fellow at MIT in the Laboratory of Manufacturing and Productivity (LMP). His interests in renewable energy and accomplishments in developing materials systems and techniques for energy applications led to him receiving the Energy Efficiency and Renewable Energy Postdoctoral Research Fellowship from the U.S. Department of Energy. Prior to his appointment in LMP, Jaramillo was a postdoctoral fellow at the Harvard University Center for the Environment. His research interests lie at the intersection of solid-state physics, materials science, and renewable energy technologies. Stefanie Jegelka joined the faculty in electrical engineering and computer science in January 2015.
Formerly a postdoctoral researcher in the Department of Electrical Engineering and Computer Science at the University of California at Berkeley, she received a PhD in computer science from the Swiss Federal Institute of Technology in Zurich (in collaboration with the Max Planck Institute for Intelligent Systems in Tuebingen, Germany), and a diploma in bioinformatics with distinction from the University of Tuebingen in Germany. During her studies, she was also a research assistant at the Max Planck Institute for Biological Cybernetics and spent a year at the University of Texas at Austin. She conducted research visits to Georgetown University, the University of Washington, the University of Tokyo, the French Institute for Research in Computer Science and Automation, and Microsoft Research. She has been a fellow of the German National Academic Foundation and its College for Life Sciences, and has received a Google Anita Borg Fellowship, a Fellowship of the Klee Foundation, and a Best Paper Award at the International Conference on Machine Learning. Jegelka organized several workshops on discrete optimization in machine learning, and has held three tutorials on submodularity in machine learning at international conferences. Her research interests lie in algorithmic machine learning. In particular, she is interested in modeling and efficiently solving machine-learning problems that involve discrete structure. She has also worked on distributed machine learning, kernel methods, clustering, and applications in computer vision. Aleksander Madry is a former assistant professor in the Swiss Federal Institute of Technology in Lausanne (EPFL) School of Computer and Communication Sciences and started as an assistant professor in electrical engineering and computer science in February 2015. His research centers on tackling fundamental algorithmic problems that are motivated by real-world optimization. Most of his work is concerned with developing new ideas and tools for algorithmic graph theory, with a particular focus on approaching central questions in that area with a mix of combinatorial and linear-algebraic techniques. He is also interested in understanding uncertainty in the context of optimization — how to model it and cope with its presence. Madry received his PhD in computer science from MIT in 2011 and, prior to joining EPFL, spent a year as a postdoctoral researcher at Microsoft Research New England. His work was recognized with a variety of awards, including the Association for Computing Machinery Doctoral Dissertation Award Honorable Mention, the George M. Sprowls Doctoral Dissertation Award, and a number of best paper awards at Foundations of Computer Science, Symposium on Discrete Algorithms, and Symposium on Theory of Computing meetings. Xuanhe Zhao joined the Department of Mechanical Engineering faculty in September 2014 as an assistant professor. Before joining MIT, he was an assistant professor in the Department of Mechanical Engineering and Materials Science at Duke University. He earned his PhD at Harvard University in 2009. Zhao conducts research on the interfaces between solid mechanics, soft materials, and bio-inspired design. His current research goal is to understand and design new soft materials with unprecedented properties for impactful applications. 
His current research projects are centered on three bio-inspired themes: artificial muscle (dielectric polymers and electromechanics), tough cartilage (tough and bioactive hydrogels and biomechanics), and transformative skin (functional surface instabilities and thin-film mechanics). Zhao’s discovery of new failure mechanisms of dielectric polymers in 2011 and 2012 can potentially enhance electric energy densities of dielectric elastomers and gels by a factor of 10. In 2012, he designed a new synthetic biocompatible hydrogel with hybrid crosslinking, which achieved fracture toughness multiple times higher than articular cartilage — unprecedented by previous synthetic gels. With fiber reinforcements, Zhao further controlled the modulus of the tough hydrogel over a wide range from a few kilopascals to over 10 megapascals in 2013 and 2014. By harnessing surface instabilities such as wrinkles and creases in 2014, he dynamically varied both the surface textures and colors of electro-mechano-chemically responsive elastomers to achieve the dynamic-camouflage function of cephalopods. This work was highlighted by Nature News, reported by The Washington Post, and featured on the MIT homepage: “How to hide like an octopus.” Xuanhe is a recipient of the National Science Foundation CAREER Award, the Office of Naval Research Young Investigator Program Award, and the Early Career Researchers Award from the AVS Biomaterial Interfaces Division.


Hammond G.E.,Pacific Northwest National Laboratory | Lichtner P.C.,Los Alamos National Laboratory
Water Resources Research | Year: 2010

High-resolution, three-dimensional, reactive flow and transport simulations are carried out to describe the migration of hexavalent uranium [U(VI)] at the Hanford 300 Area bordering the Columbia River and to better understand the persistence of the uranium plume at the site. The computer code PFLOTRAN, developed under a DOE SciDAC-2 project, is employed in the simulations, which are executed on ORNL's Cray XT4/XT5 supercomputer Jaguar. The conceptual model used in the simulations is based on the recognition of three distinct phases or time periods in the evolution of the U(VI) plume. These correspond to (1) initial waste emplacement; (2) initial presence of both labile and nonlabile U(VI) with an evolved U(VI) plume extending from the source region to the river boundary, representing present-day conditions; and (3) the complete removal of all nonlabile U(VI) and labile U(VI) in the vadose zone. This work focuses primarily on modeling Phase II using equilibrium and multirate sorption models for labile U(VI) and a continuous source release of nonlabile U(VI) in the South Process Pond through dissolution of metatorbernite as a surrogate mineral. For this case, rapid fluctuations in the Columbia River stage combined with the slow release of nonlabile U(VI) from contaminated sediment are found to play a predominant role in determining the migration behavior of U(VI), with sorption only a second-order effect. Nevertheless, a multirate model was essential in explaining breakthrough curves obtained from laboratory column experiments using the same sediment and is demonstrated to be important in Phase III. The calculations demonstrate that U(VI) is discharged to the river at a highly fluctuating rate in a ratchet-like behavior as the river stage rises and falls. The high-frequency fluctuations must be resolved in the model to calculate the flux of U(VI) at the river boundary. By time averaging the instantaneous flux to average out noise superimposed on the river stage fluctuations, the cumulative U(VI) flux to the river is found to increase approximately linearly with time. The flow rate and U(VI) flux are highly sensitive to the conductance boundary condition that describes the river-sediment interface. By adjusting the conductance coefficient to give a better match to the measured piezometric head, good agreement was obtained with field studies for both the mean flux of water of 10^9 kg/yr and of U(VI) of 25 kg/yr at the river-aquifer boundary for a computational domain encompassing the South Process Pond. Finally, it is demonstrated that, through global mass conservation, the U(VI) leach rate from the source region is related to the U(VI) flux at the river boundary. Copyright 2010 by the American Geophysical Union.
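The multirate sorption concept referenced in the abstract can be illustrated with a minimal batch model in which a single aqueous concentration exchanges with several sorption sites, each relaxing toward local equilibrium at its own first-order rate. The sketch below is a toy with invented rate constants and dimensionless units; it is not PFLOTRAN's U(VI) sorption model or any Hanford-calibrated parameter set.

# Toy multirate sorption: aqueous concentration c exchanges with several sorbed
# compartments s_k, each with its own first-order rate alpha_k and equilibrium
# ratio kd_k. Fast sites equilibrate quickly; slow sites keep taking up or
# releasing mass long afterwards, which is the essence of the multirate behavior.
import numpy as np

alphas = np.array([10.0, 1.0, 0.05])   # 1/day, fast to slow sites (invented)
kds    = np.array([0.5, 1.0, 2.0])     # equilibrium ratios s_k/c (invented)

def simulate(c0=1.0, t_end=50.0, dt=0.001):
    c = c0
    s = np.zeros_like(alphas)
    times = np.arange(0.0, t_end, dt)
    history = []
    for _ in times:
        ds = alphas * (kds * c - s) * dt   # first-order mass transfer toward equilibrium
        c -= ds.sum()                      # aqueous phase loses what the sorbed phases gain
        s += ds
        history.append(c)
    return times, np.array(history)

t, c = simulate()
print("aqueous concentration after  1 day :", round(float(c[np.searchsorted(t, 1.0)]), 3))
print("aqueous concentration after 50 days:", round(float(c[-1]), 3))

In a transport code this exchange term is evaluated in every grid cell and coupled to the flow field, which is how slowly exchanging sites can control long-term release of the kind described in the abstract.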


Hraber P.,Los Alamos National Laboratory | Seaman M.S.,Beth Israel Deaconess Medical Center | Bailer R.T.,National Institute of Allergy and Infectious Diseases | Mascola J.R.,National Institute of Allergy and Infectious Diseases | And 2 more authors.
AIDS | Year: 2014

Objective: Studies of neutralizing antibodies in HIV-1 infected individuals provide insights into the quality of the response that should be possible to elicit with vaccines and ways to design effective immunogens. Some individuals make high titres of exceptional broadly reactive neutralizing antibodies that are of particular interest; however, more modest responses may be a reasonable goal for vaccines. We performed a large cross-sectional study to determine the spectrum of neutralization potency and breadth that is seen during chronic HIV-1 infection. Design: Neutralization potency and breadth were assessed with genetically and geographically diverse panels of 205 chronic HIV-1 sera and 219 Env-pseudotyped viruses representing all major genetic subtypes of HIV-1. Methods: Neutralization was measured by using Tat-regulated luciferase reporter gene expression in TZM-bl cells. Serum-neutralizing activity was compared with a diverse set of human mAbs that are widely considered to be broadly neutralizing. Results: We observed a uniform continuum of responses, with most sera displaying some level of cross-neutralization, and approximately 50% of sera neutralizing more than 50% of viruses. Titres of neutralization (potency) were highly correlated with breadth. Many sera had breadth comparable to several of the less potent broadly neutralizing human mAbs. Conclusion: These results help clarify the spectrum of serum-neutralizing activity induced by HIV-1 infection and that should be possible to elicit with vaccines. Importantly, most people appear capable of making low to moderate titres of broadly neutralizing antibodies. Additional studies of these relatively common responses might provide insights for practical and feasible vaccine designs. © 2014 Wolters Kluwer Health.
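The breadth and potency summaries described above can be tabulated from a serum-by-virus titre matrix along the lines sketched below. The numbers here are synthetic and the 1:100 cutoff is an assumption made for illustration; this is not the study's dataset or analysis code.

# Sketch: breadth and potency summaries from a serum x virus ID50 titre matrix.
# Synthetic log-normal titres stand in for real neutralization data.
import numpy as np

rng = np.random.default_rng(42)
n_sera, n_viruses = 205, 219                     # panel sizes quoted in the abstract
titres = np.exp(rng.normal(np.log(50.0), 1.2, size=(n_sera, n_viruses)))

CUTOFF = 100.0                                   # ID50 above which a virus counts as neutralized (assumed)
breadth = (titres > CUTOFF).mean(axis=1)         # fraction of viruses neutralized, per serum
potency = np.exp(np.log(titres).mean(axis=1))    # geometric mean titre, per serum

print("median breadth:", round(float(np.median(breadth)), 2))
print("median geometric mean titre:", round(float(np.median(potency)), 1))
r = np.corrcoef(breadth, np.log(potency))[0, 1]  # breadth-potency association examined in the study
print("correlation of breadth with log potency:", round(float(r), 2))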


Stroppa A.,CNR Institute of Neuroscience | Barone P.,CNR Institute of Neuroscience | Jain P.,Los Alamos National Laboratory | Perez-Mato J.M.,University of the Basque Country | Picozzi S.,CNR Institute of Neuroscience
Advanced Materials | Year: 2013

On the basis of first-principles calculations, we design a novel Cr-based metal-organic framework to be both multiferroic and magnetoelectric. The compound shows a "double-hybrid" nature: it is a hybrid organic-inorganic compound and it shows hybrid improper ferroelectricity. Here, the coupling of non-polar distortions, such as Jahn-Teller pseudo-rotations and tilting, paves the way to a polar behavior, with the coupling being realized through hydrogen bonds. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
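For context, the "hybrid improper ferroelectricity" invoked here is commonly summarized by a trilinear term in a Landau-type free-energy expansion, in which two non-polar modes jointly couple to the polarization. The form below is the generic textbook expression, not the specific expansion derived in the paper:

F(P, Q_1, Q_2) \supset \frac{P^2}{2\chi} + \lambda\, P\, Q_1 Q_2
\quad \Longrightarrow \quad
P \simeq -\lambda\, \chi\, Q_1 Q_2 ,

so a polarization P is induced whenever both non-polar distortions (here the Jahn-Teller pseudo-rotation Q_1 and the tilt Q_2) condense, even though neither mode is polar by itself.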


Lindquist N.C.,University of Minnesota | Lindquist N.C.,Bethel University | Nagpal P.,Los Alamos National Laboratory | McPeak K.M.,ETH Zurich | And 2 more authors.
Reports on Progress in Physics | Year: 2012

Metallic nanostructures now play an important role in many applications. In particular, for the emerging fields of plasmonics and nanophotonics, the ability to engineer metals on nanometric scales allows the development of new devices and the study of exciting physics. This review focuses on top-down nanofabrication techniques for engineering metallic nanostructures, along with computational and experimental characterization techniques. A variety of current and emerging applications are also covered. © 2012 IOP Publishing Ltd.


Kang Z.-B.,Los Alamos National Laboratory | Qiu J.-W.,Brookhaven National Laboratory | Qiu J.-W.,State University of New York at Stony Brook
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2013

In anticipation of the upcoming proton-lead run at the LHC in the near future, we present predictions for the nuclear modification factor of the transverse momentum spectrum of Z0 production and the transverse momentum broadening of vector boson (J/ψ, Υ, W/Z0) production in proton-lead collisions at √s = 5 TeV, respectively. We find that the measurement of the nuclear modification factor of Z0 production provides a clean and unambiguous test of the nuclear anti-shadowing proposed in the recent EPS09. In addition, the dramatic difference in transverse momentum broadening between the heavy quarkonium and W/Z0 production could be a signature prediction of QCD parton multiple scattering, and it provides valuable information on color neutralization of a produced heavy quark pair when it transmutes into a physical quarkonium. © 2013 Elsevier B.V.
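For reference, the two observables discussed in the abstract are conventionally defined as follows (schematic definitions, not reproduced from the paper):

R_{pA}(p_T) = \frac{1}{A}\,\frac{d\sigma_{pA}/dp_T}{d\sigma_{pp}/dp_T},
\qquad
\Delta\langle p_T^2\rangle = \langle p_T^2\rangle_{pA} - \langle p_T^2\rangle_{pp},

where A is the mass number of the lead nucleus. R_{pA} = 1 in the absence of nuclear effects, so an enhancement of R_{pA} for Z0 production in the probed kinematic range would be a direct signature of the anti-shadowing of nuclear parton distributions mentioned above.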


Wehling T.O.,University of Bremen | Black-Schaffer A.M.,Uppsala University | Balatsky A.V.,NORDITA | Balatsky A.V.,Los Alamos National Laboratory
Advances in Physics | Year: 2014

A wide range of materials, like d-wave superconductors, graphene, and topological insulators, share a fundamental similarity: their low-energy fermionic excitations behave as massless Dirac particles rather than fermions obeying the usual Schrödinger Hamiltonian. This emergent behavior of Dirac fermions in condensed matter systems defines the unifying framework for a class of materials we call "Dirac materials." In order to establish this class of materials, we illustrate how Dirac fermions emerge in multiple entirely different condensed matter systems and we discuss how Dirac fermions have been identified experimentally using electron spectroscopy techniques (angle-resolved photoemission spectroscopy and scanning tunneling spectroscopy). As a consequence of their common low-energy excitations, this diverse set of materials shares a significant number of universal properties in the low-energy (infrared) limit. We review these common properties including nodal points in the excitation spectrum, density of states, specific heat, transport, thermodynamic properties, impurity resonances, and magnetic field responses, as well as discuss many-body interaction effects. We further review how the emergence of Dirac excitations is controlled by specific symmetries of the material, such as time-reversal, gauge, and spin-orbit symmetries, and how by breaking these symmetries a finite Dirac mass is generated. We give examples of how the interaction of Dirac fermions with their distinct real material background leads to rich novel physics with common fingerprints such as the suppression of back scattering and impurity-induced resonant states. © 2014 Taylor & Francis.
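The common low-energy description referred to throughout the abstract is the massless Dirac Hamiltonian. In two dimensions a schematic form, together with the mass term generated once the protecting symmetry is broken, reads:

H(\mathbf{k}) = \hbar v_F\,(\sigma_x k_x + \sigma_y k_y) + m v_F^2\,\sigma_z,
\qquad
E_\pm(\mathbf{k}) = \pm\sqrt{(\hbar v_F k)^2 + (m v_F^2)^2},

with m = 0 as long as the relevant symmetry (time-reversal, gauge, or spin-orbit, depending on the material) is intact. The gapless linear spectrum then yields the linearly vanishing density of states, N(E) \propto |E| in two dimensions, that underlies many of the universal properties listed above.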


Das T.,Los Alamos National Laboratory | Balatsky A.V.,Los Alamos National Laboratory | Balatsky A.V.,KTH Royal Institute of Technology
Nature Communications | Year: 2013

Topological insulators represent a new class of quantum phase defined by invariant symmetries and spin-orbit coupling that guarantees metallic Dirac excitations at the surface. The discoveries of these states have sparked the hope of realizing non-trivial excitations and novel effects such as a magnetoelectric effect and topological Majorana excitations. Here we develop a theoretical formalism to show that a three-dimensional topological insulator can be designed artificially via stacking bilayers of two-dimensional Fermi gases with opposite Rashba-type spin-orbit coupling on adjacent layers, and with interlayer quantum tunneling. We demonstrate that in the stack of bilayers grown along a (001)-direction, a non-trivial topological phase transition occurs above a critical number of Rashba bilayers. In the topological phase, we find the formation of a single spin-polarized Dirac cone at the Γ-point. This approach offers an accessible way to design artificial topological insulators in a setup that takes full advantage of the atomic layer deposition approach. This design principle is tunable and also allows us to bypass limitations imposed by bulk crystal geometry. © 2013 Macmillan Publishers Limited. All rights reserved.
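A minimal sketch of the building block described above (generic notation, not the paper's exact model): each layer of a bilayer carries a Rashba-coupled two-dimensional Fermi gas, with the sign of the coupling reversed between the two layers and the layers connected by tunneling,

\[
H_\pm(\mathbf{k}) \;=\; \frac{\hbar^2 k^2}{2m} \;\pm\; \alpha\,(\sigma_x k_y - \sigma_y k_x),
\qquad
H_{\text{bilayer}}(\mathbf{k}) \;=\;
\begin{pmatrix} H_+(\mathbf{k}) & t \\ t & H_-(\mathbf{k}) \end{pmatrix},
\]

where α is the Rashba strength and t the interlayer tunneling; stacking such bilayers with additional inter-bilayer coupling provides the tuning knob for the topological transition described in the abstract.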


Nisoli C.,Los Alamos National Laboratory | Moessner R.,Max Planck Institute for the Physics of Complex Systems | Schiffer P.,University of Illinois at Urbana - Champaign
Reviews of Modern Physics | Year: 2013

Frustration, the presence of competing interactions, is ubiquitous in the physical sciences and is a source of degeneracy and disorder, which in turn gives rise to new and interesting physical phenomena. Perhaps nowhere does it occur more simply than in correlated spin systems, where it has been studied in the most detail. In disordered magnetic materials, frustration leads to spin-glass phenomena, with analogies to the behavior of structural glasses and neural networks. In structurally ordered magnetic materials, it has also been the topic of extensive theoretical and experimental studies over the past two decades. Such geometrical frustration has opened a window to a wide range of fundamentally new exotic behavior. This includes spin liquids in which the spins continue to fluctuate down to the lowest temperatures, and spin ice, which appears to retain macroscopic entropy even in the low-temperature limit where it enters a topological Coulomb phase. In the past seven years a new perspective has opened in the study of frustration through the creation of artificial frustrated magnetic systems. These materials consist of arrays of lithographically fabricated single-domain ferromagnetic nanostructures that behave like giant Ising spins. The nanostructures' interactions can be controlled through appropriate choices of their geometric properties and arrangement on a (frustrated) lattice. The degrees of freedom of the material can not only be directly tuned, but also individually observed. Experimental studies have unearthed intriguing connections to the out-of-equilibrium physics of disordered systems and nonthermal "granular" materials, while revealing strong analogies to spin ice materials and their fractionalized magnetic monopole excitations, lending the enterprise a distinctly interdisciplinary flavor. The experimental results have also been closely coupled to theoretical and computational analyses, facilitated by connections to classic models of frustrated magnetism, whose hitherto unobserved aspects have here found an experimental realization. Considerable experimental and theoretical progress in this field is reviewed here, including connections to other frustrated phenomena, and future vistas for progress in this rapidly expanding field are outlined. © 2013 American Physical Society.
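The macroscopic low-temperature entropy of spin ice mentioned above can be made concrete with Pauling's counting argument (a textbook estimate, not a result of this review): each tetrahedron of four Ising spins has 2^4 = 16 configurations, of which only 6 satisfy the two-in/two-out ice rule, giving a residual entropy per spin of roughly

\[
\frac{S}{N k_B} \;\approx\; \frac{1}{2}\ln\frac{3}{2} \;\approx\; 0.20,
\]

in good agreement with the entropy deficit measured in pyrochlore spin ice at low temperature.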


Del Campo A.,Los Alamos National Laboratory | Egusquiza I.L.,University of the Basque Country | Plenio M.B.,University of Ulm | Huelga S.F.,University of Ulm
Physical Review Letters | Year: 2013

Bounds to the speed of evolution of a quantum system are of fundamental interest in quantum metrology, quantum chemical dynamics, and quantum computation. We derive a time-energy uncertainty relation for open quantum systems undergoing a general, completely positive, and trace preserving evolution which provides a bound to the quantum speed limit. When the evolution is of the Lindblad form, the bound is analogous to the Mandelstam-Tamm relation which applies in the unitary case, with the role of the Hamiltonian being played by the adjoint of the generator of the dynamical semigroup. The utility of the new bound is exemplified in different scenarios, ranging from the estimation of the passage time to the determination of precision limits for quantum metrology in the presence of dephasing noise. © 2013 American Physical Society.
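For context, the closed-system Mandelstam-Tamm bound referenced above takes the standard form (the open-system bound derived in the paper is its Lindblad analog, not reproduced here):

\[
\tau \;\ge\; \frac{\pi\hbar}{2\,\Delta E},
\qquad
\Delta E \;=\; \sqrt{\langle H^2\rangle - \langle H\rangle^2},
\]

the minimum time for a pure state to evolve into an orthogonal one under unitary dynamics; in the open-system generalization the role of H is played by the adjoint of the generator of the dynamical semigroup, as stated in the abstract.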


Li J.,Environment Canada | Chylek P.,Los Alamos National Laboratory
Journal of Climate | Year: 2012

Atmospheric entropy and its association with climate dissipation are investigated. The balance equation for entropy is derived through the mean and transient thermal and moisture equations. The entropy production contains internal and external parts. The external entropy production, due to small-scale diabatic heating, can be evaluated by the surface entropy flux. Using NCEP data from 1998 to 2007, it is found that the surface entropy flux is much larger in the tropics than in the extratropics. In the Northern Hemisphere during December-February (DJF), there are two strong positive centers of boundary layer supply of entropy: one in the northwestern Pacific and the other in the western Atlantic. The external entropy production, due to large-scale eddy flow, can be evaluated by the convergence of eddy entropy flow. It is found that the large-scale eddy entropy flow is divergent in the midlatitudes and convergent in the higher latitudes. The internal entropy production reflects the dissipation of the orderly thermal structure. For the internal entropy production due to a large-scale eddy, it is shown that in the Northern Hemisphere during DJF there are three maxima, located in the western Pacific, western Atlantic, and northern polar regions. This illustrates the dissipation of the highly organized thermal structure in such regions. An interesting finding is that the large-scale eddy internal entropy production is negative in the lower stratosphere. It is found that the long-time-averaged global mean of the internal entropy production is 0.03749 W m⁻² K⁻¹. By including the entropy sink from radiation, the total entropy production is close to balance. © 2012 American Meteorological Society.
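A schematic statement of the boundary-flux evaluation used above (a generic thermodynamic relation, not the paper's exact formulation): the entropy supplied to an atmospheric column by a surface heat flux Q_s delivered at surface temperature T_s is approximately

\[
F_s \;\approx\; \frac{Q_s}{T_s},
\]

so the geographic pattern of surface sensible and latent heating, weighted by the local surface temperature, traces the boundary-layer entropy-supply centers described in the abstract.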


Kang Z.-B.,Brookhaven National Laboratory | Vitev I.,Los Alamos National Laboratory
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2011

We study photon-triggered light and heavy meson production in both p+p and A+A collisions. We find that a parton energy loss approach that successfully describes inclusive hadron attenuation in nucleus-nucleus reactions at RHIC can simultaneously describe well the experimentally determined photon-triggered light hadron fragmentation functions. Using the same framework, we generalize our formalism to study photon-triggered heavy meson production. We find that the nuclear modification of photon-tagged heavy meson fragmentation functions in A+A collisions is very different from that of the photon-tagged light hadron case. While photon-triggered light hadron fragmentation functions in A+A collisions are suppressed relative to p+p, photon-triggered heavy meson fragmentation functions can be either enhanced or suppressed, depending on the specific kinematic region. The anticipated smaller energy loss for b-quarks manifests itself as a flatter photon-triggered B-meson fragmentation function compared to that for the D-meson case. We make detailed predictions for both RHIC and LHC energies. We conclude that a comprehensive comparative study of both photon-tagged light and heavy meson production can provide new insights into the details of the jet quenching mechanism. © 2011 American Physical Society.
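For orientation, the photon-triggered observables above are conventionally constructed as follows (standard definitions of the kind used at RHIC, assumed here rather than quoted from the paper): with the trigger photon setting the hard scale,

\[
z_T \;=\; \frac{p_T^{h}}{p_T^{\gamma}},
\qquad
D^{\gamma h}(z_T) \;=\; \frac{1}{N_\gamma}\,\frac{dN^{\gamma h}}{dz_T},
\qquad
I_{AA}(z_T) \;=\; \frac{D^{\gamma h}_{AA}(z_T)}{D^{\gamma h}_{pp}(z_T)},
\]

so I_AA < 1 signals suppression of the away-side fragmentation function, and the enhancement-or-suppression pattern predicted for heavy mesons corresponds to I_AA crossing unity in different z_T regions.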


Atanasov V.,Telecommunications Institute of Portugal | Saxena A.,Los Alamos National Laboratory
Physical Review B - Condensed Matter and Materials Physics | Year: 2010

It is shown that for monolayer graphene electrons are confined on a perfect two-dimensional surface. The implications for the electronic properties of corrugated graphene are discussed in view of a derivation of the constrained relativistic dynamics for the massless carriers in two dimensions in accord with the Heisenberg uncertainty principle. Surface curvature is related to a series of phenomena with practical applications such as curvature induced p-n junctions, band-gap opening, and decoherence. We also establish a bending free energy by treating graphene as a soft electronic membrane. © 2010 The American Physical Society.


Kudekar S.,Ecole Polytechnique Federale de Lausanne | Kudekar S.,Los Alamos National Laboratory | Richardson T.J.,Flarion Technologies | Urbanke R.L.,Ecole Polytechnique Federale de Lausanne
IEEE Transactions on Information Theory | Year: 2011

Convolutional low-density parity-check (LDPC) ensembles, introduced by Felström and Zigangirov, have excellent thresholds, and these thresholds are rapidly increasing functions of the average degree. Several variations on the basic theme have been proposed to date, all of which share the good performance characteristics of convolutional LDPC ensembles. We describe the fundamental mechanism that explains why convolutional-like or spatially coupled codes perform so well. In essence, the spatial coupling of individual codes increases the belief-propagation (BP) threshold of the new ensemble to its maximum possible value, namely the maximum a posteriori (MAP) threshold of the underlying ensemble. For this reason, we call this phenomenon threshold saturation. This gives an entirely new way of approaching capacity. One significant advantage of this construction is that one can create capacity-approaching ensembles with an error-correcting radius that increases with the blocklength. Although we prove threshold saturation only for a specific ensemble and for the binary erasure channel (BEC), empirically the phenomenon occurs for a wide class of ensembles and channels. More generally, we conjecture that for a large range of graphical systems a similar saturation of the dynamical threshold occurs once individual components are coupled sufficiently strongly. This might give rise to improved algorithms and new techniques for analysis. © 2006 IEEE.
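To make the BP-threshold notion concrete, the following is a minimal sketch (an illustration written for this summary, not code from the paper) of density evolution for an uncoupled (dv, dc)-regular ensemble on the BEC; threshold saturation says the spatially coupled version of the same ensemble has its BP threshold pushed up to the MAP threshold.

# Density evolution for a (dv, dc)-regular LDPC ensemble on the binary
# erasure channel (BEC). The BP threshold is the largest channel erasure
# probability eps for which the erasure fraction x_l goes to zero under
#     x_{l+1} = eps * (1 - (1 - x_l)**(dc - 1))**(dv - 1).

def bp_converges(eps, dv=3, dc=6, iters=20000, tol=1e-10):
    """Return True if density evolution drives the erasure fraction to ~0."""
    x = eps
    for _ in range(iters):
        x_new = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x_new < tol:
            return True
        if abs(x_new - x) < 1e-15:  # stuck at a nonzero fixed point
            return False
        x = x_new
    return x < tol

def bp_threshold(dv=3, dc=6):
    """Bisection search for the BP threshold of the uncoupled ensemble."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if bp_converges(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    # (3,6)-regular: BP threshold about 0.4294; the MAP threshold is about
    # 0.4881 and the Shannon limit for rate 1/2 on the BEC is eps = 0.5.
    print(f"BP threshold of the (3,6)-regular ensemble: {bp_threshold():.4f}")

The gap between the printed BP threshold and the MAP threshold of the same ensemble is exactly what the spatially coupled construction closes.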


Beresnyak A.,Los Alamos National Laboratory | Beresnyak A.,KTH Royal Institute of Technology
Astrophysical Journal Letters | Year: 2015

Anisotropy of MHD turbulence has been studied extensively for many years, most prominently by measurements in the solar wind and high-resolution simulations. The spectrum parallel to the local magnetic field was observed to be steeper than the perpendicular spectrum, typically k⁻², consistent with the widely accepted Goldreich & Sridhar model. In this Letter, I look deeper into the nature of the relation between parallel and perpendicular spectra and argue that this k⁻² scaling has the same origin as the ω⁻² scaling of the Lagrangian frequency spectrum in strong hydrodynamic turbulence. This follows from the fact that Alfvén waves propagate along magnetic field lines. It has now become clear that the observed anisotropy can be explained without invoking the critical balance argument and is more robust than was previously thought. The relation between parallel (Lagrangian) and perpendicular (Eulerian) spectra is an inevitable consequence of strong turbulence of Alfvén waves, rather than a conjecture based on the uncertainty relation. I tested this using high-resolution simulations of MHD turbulence; in particular, I verified that the cutoff of the parallel spectrum scales as a Kolmogorov timescale, not a lengthscale. © 2015. The American Astronomical Society. All rights reserved.
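The dimensional argument sketched above can be written compactly (a standard scaling relation consistent with the abstract, not quoted from the paper): in hydrodynamic turbulence the Lagrangian frequency spectrum is

\[
E(\omega) \;\sim\; \varepsilon\,\omega^{-2},
\]

and because Alfvén wave packets propagate along field lines at the Alfvén speed v_A, the substitution ω ≈ k_∥ v_A maps this onto a parallel wavenumber spectrum

\[
E(k_\parallel) \;\sim\; \varepsilon\,v_A^{-1}\,k_\parallel^{-2},
\]

reproducing the observed k⁻² parallel scaling without invoking critical balance.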


Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Research Grant | Award Amount: 390.06K | Year: 2015

Ionotronic devices rely on charge effects based on ions instead of/or in addition to electrons. The field has begun to gain very wide attention recently. It has been applied mainly to oxide thin film memristors (resistance depends on voltage and can be switched between an on and an off state of high and low resistance). These devices are interesting for creating electrically switchable memory, but there are challenges with these structures including the requirement of a setting process and variable properties from one film to another. In this proposal, we have the new idea to utilise ionotronic effects to create a new kind of electrically switchable memory. Here ionic defects at vertical interfaces in vertical nanocomposite thin films charge couple to magnetism in a magnetic transition metal oxide. Since the cation valences in the metal oxide depend on oxygen concentration or charge state, and since the magnetic properties depend on cation valences, it should be possible to switch magnetism on and off by applying an electric field. This device is an ionotronic magnetoelectric, and it represents a completely new form of magnetoelectric RAM. Magnetoelectric RAM is where electric field controls magnetism instead of electric current doing so as in other forms of RAM, and it is a long sought-after goal. It offers the possibility of low power, very high density, high-speed reading and writing times, and non-volatility. Low energy, high performance computing is promised with this technology. However, while a range of structures and materials have been studied to date, none has proved practical in terms of ease of structure formation, stability, temperature of operation, or size of magnetoelectric effect. Making the ionotronic magnetoelectric a practical reality is not trivial, and relies on advanced materials science - the growth of very thin films, the creation of highly ordered materials combinations on a very small scale (1/0000 the thickness of a human hair), the movement of charges along interface nanochannels near to room temperature, the knowledge of which materials combine together in a compatible way, the imaging of materials at the atomic scale, etc. To attain the practical magnetoelectric dream we propose to create and measure new structures, we will use unique experimental capabilities and will also collaborate with world-leading researchers. Our starting point for the research is our ability to create, at the nanometre scale, ionic interface channels in perfect vertical nanocomposite films. We have also observed the first signs that ions can indeed charge couple to magnetic properties.


Liu H.,University of Illinois at Urbana - Champaign | Valocchi A.J.,University of Illinois at Urbana - Champaign | Kang Q.,Los Alamos National Laboratory
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2012

We present an improved three-dimensional 19-velocity lattice Boltzmann model for immiscible binary fluids with variable viscosity and density ratios. This model uses a perturbation step to generate the interfacial tension and a recoloring step to promote phase segregation and maintain surfaces. A generalized perturbation operator is derived using the concept of a continuum surface force together with the constraints of mass and momentum conservation. A theoretical expression for the interfacial tension is determined directly without any additional analysis and assumptions. The recoloring algorithm proposed by Latva-Kokko and Rothman is applied for phase segregation, which minimizes the spurious velocities and removes lattice pinning. This model is first validated against the Laplace law for a stationary bubble. It is found that the interfacial tension is predicted well for density ratios up to 1000. The model is then used to simulate droplet deformation and breakup in simple shear flow. We compute droplet deformation at small capillary numbers in the Stokes regime and find excellent agreement with the theoretical Taylor relation for the segregation parameter β=0.7. In the limit of creeping flow, droplet breakup occurs at a critical capillary number of 0.35.
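As a sketch of the Laplace-law validation step mentioned above (hypothetical post-processing with made-up numbers, not data from the paper), one simulates stationary bubbles of several radii, measures the pressure jump across the interface, and fits the interfacial tension from Δp = 2σ/R:

import numpy as np

# For a stationary spherical bubble in 3D the Laplace law gives
#     dp = p_in - p_out = 2 * sigma / R,
# so a linear fit of the measured pressure jump against 1/R recovers the
# interfacial tension sigma. The numbers below are illustrative placeholders.
radii = np.array([12.0, 16.0, 20.0, 24.0])           # bubble radii (lattice units)
dp = np.array([0.00168, 0.00126, 0.00100, 0.00084])  # measured p_in - p_out

slope, _ = np.polyfit(1.0 / radii, dp, 1)
sigma = slope / 2.0
print(f"fitted interfacial tension: {sigma:.5f} (lattice units)")

A fitted σ that matches the theoretically derived expression across density and viscosity ratios is the consistency check described in the abstract.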


Mereghetti E.,Los Alamos National Laboratory | Van Kolck U.,University Paris - Sud | Van Kolck U.,University of Arizona
Annual Review of Nuclear and Particle Science | Year: 2015

Thanks to the unnaturally small value of the QCD vacuum angle θ̄, time-reversal violation (T violation) offers a window into physics beyond the Standard Model (SM) of particle physics. We review the effective field theory framework that establishes a clean connection between (a) T-violating mechanisms, which can be represented by higher-dimensional operators involving SM fields and symmetries, and (b) hadronic T-violating interactions, which allow for controlled calculations of low-energy observables involving strong interactions. The chiral properties of T-violating mechanisms lead to a pattern that should be identifiable in measurements of the electric dipole moments of the nucleon and light nuclei. Copyright ©2015 by Annual Reviews. All rights reserved.
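For reference, the θ̄ term alluded to above has the standard form (textbook notation, up to sign conventions; not quoted from the review):

\[
\mathcal{L}_{\bar\theta} \;=\; -\,\bar\theta\,\frac{g_s^2}{32\pi^2}\,G^{a}_{\mu\nu}\tilde G^{a,\mu\nu},
\]

a dimension-four P- and T-violating operator whose experimentally inferred smallness (via the tight bound on the neutron electric dipole moment) is what makes the vacuum angle "unnaturally small" and motivates parameterizing additional sources of T violation by higher-dimensional operators.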


News Article | July 14, 2014
Site: www.techtimes.com

The U.S. government has just hired Cray Inc., a supercomputer company, to build a supercomputer that would help manage the U.S. nuclear arsenal. Cray announced July 10 that it had been awarded a $174 million contract by the National Nuclear Security Administration (NNSA). The system will advance the NNSA's mission and help ensure the safety of the U.S. nuclear stockpile. The computer will be named Trinity. Over the course of the contract, Cray will build a multipetaflop supercomputing system and a multipetabyte Cray Sonexion storage system. "Both Los Alamos and Sandia have a long history with Cray, going back to the beginning of the supercomputing era and most recently with the Cielo platform," said Gary Grider, High Performance Computing Division Leader at Los Alamos. "That history continues with the Trinity platform that will provide next-generation supercomputing in support of the U.S. nuclear security enterprise." The new supercomputer system will support all three of the NNSA national laboratories, which include Los Alamos National Laboratory, Sandia National Laboratories, and Lawrence Livermore National Laboratory. The NNSA is currently using a supercomputer designed by Cray and located at Los Alamos, a Cray XE6 model named Cielo. The new Trinity system is expected to deliver more than eight times greater applications performance than the Cielo system, according to the press release. "It is a real honor that one of the largest contracts in our company's history has come from one of our most important customers," said Peter Ungaro, president and CEO of Cray. "Our partnership and collaboration with the NNSA has led to the development of groundbreaking supercomputing systems, including the Cray XT3 system that resulted from the Red Storm project. The NNSA has consistently deployed the world's most advanced supercomputing systems to support their critical mission of ensuring the health of our nation's nuclear stockpile." The contract is valued at more than $174 million in total, which includes products and services offered by Cray. The contract has multiple phases and is expected to take several years to complete, continuing through at least 2016. The NNSA has not yet announced when it expects Trinity to be completed and running. This contract is one of Cray's biggest-ever contracts. It has not yet been stated whether the U.S. would use Cray's system to deploy nuclear weapons should the need ever arise.


News Article | September 12, 2016
Site: www.scientificcomputing.com

The U.S. Department of Energy's (DOE) Exascale Computing Project (ECP) today announced its first round of funding with the selection of 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations. Exascale refers to high-performance computing systems capable of at least a billion billion calculations per second, or a factor of 50 to 100 times faster than the nation's most powerful supercomputers in use today. The 15 awards being announced total $39.8 million, targeting advanced modeling and simulation solutions to specific challenges supporting key DOE missions in science, clean energy and national security, as well as collaborations such as the Precision Medicine Initiative with the National Institutes of Health's National Cancer Institute. Of the proposals announced that are receiving full funding, two are being led by principal investigators at the DOE's Argonne National Laboratory: 1. Computing the Sky at Extreme Scales equips cosmologists with the ability to design foundational simulations to create "virtual universes" on demand at the extreme fidelities demanded by future multi-wavelength sky surveys. The new discoveries that will emerge from the combination of sky surveys and advanced simulation provided by the ECP will shed more light on three key ingredients of our universe: dark energy, dark matter and inflation. All three of these concepts reach beyond the known boundaries of the Standard Model of particle physics. 2. Exascale Deep Learning and Simulation Enabled Precision Medicine for Cancer focuses on building a scalable deep neural network code called the CANcer Distributed Learning Environment (CANDLE) that addresses three top challenges of the National Cancer Institute: understanding the molecular basis of key protein interactions, developing predictive models for drug response and automating the analysis and extraction of information from millions of cancer patient records to determine optimal cancer treatment strategies. Rick Stevens, Principal Investigator, Argonne National Laboratory, with Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Oak Ridge National Laboratory and the National Institutes of Health's National Cancer Institute (NIH/NCI). Additionally, a third project led by Argonne will be receiving seed funding: Multiscale Coupled Urban Systems will create an integrated modeling framework comprising data curation, analytics, modeling and simulation components that will equip city designers, planners and managers to scientifically develop and evaluate solutions to issues that affect cities now and in the future. The framework will focus first on integrating urban atmosphere and infrastructure heat exchange and air flow; building energy demand at district or city-scale, generation and use; urban dynamics and socioeconomic models; population mobility and transportation; and hooks to expand to include energy systems (biofuels, electricity and natural gas) and water resources. The application efforts will help guide DOE's development of a U.S. exascale ecosystem as part of President Obama's National Strategic Computing Initiative. DOE, the U.S. Department of Defense and the National Science Foundation have been designated as lead agencies, and ECP is the primary DOE contribution to the initiative. The ECP's multi-year mission is to maximize the benefits of high-performance computing for U.S. 
economic competitiveness, national security and scientific discovery. In addition to applications, the DOE project addresses hardware, software, platforms and workforce development needs critical to the effective development and deployment of future exascale systems. Leadership of the Exascale Computing Project comes from six DOE national laboratories: the Office of Science's Oak Ridge, Argonne and Lawrence Berkeley national labs and NNSA's Lawrence Livermore, Los Alamos and Sandia national labs.


The US Department of Energy (DOE) selected 33 small businesses to work directly with DOE national labs to accelerate the commercialization of new clean energy technologies. The department’s Office of Energy Efficiency and Renewable Energy is investing nearly $6.7 million under Round 1 of the new Small Business Vouchers (SBV) pilot. For Round 1, the small businesses and laboratories will collaborate on advancing a number of clean energy technologies, including water, wind, bioenergy, solar, buildings, vehicles, fuel cells, geothermal technologies, and advanced manufacturing. The selected small businesses will work with scientists at nine department laboratories: Oak Ridge National Laboratory (ORNL); National Renewable Energy Laboratory (NREL); Lawrence Berkeley National Laboratory (LBNL); Sandia National Laboratories (SNL); Pacific Northwest National Laboratory (PNNL); Idaho National Laboratory (INL); Los Alamos National Laboratory (LANL); Argonne National Laboratory (ANL); and Lawrence Livermore National Laboratory (LLNL). SBV is a collaborative, national effort that provides $20 million for US companies to help improve industry awareness of national laboratory capabilities and provide small and mid-size businesses access to the resources available within the national laboratory system. Vouchers range from $50,000-300,000. The companies were competitively chosen from a pool of hundreds of applications. Almost 40% of the businesses represent new DOE-industry partnerships and relationships with the national labs. Building on the tremendous response of Round 1, the department also has begun accepting applications for Round 2 of the SBV pilot. A total of $13 million worth of funding remains; over the course of the year, up to 100 vouchers will be awarded.


News Article | October 10, 2016
Site: www.cemag.us

Reports of the non-existence of the so-called “sterile” neutrino are premature, say scientists at the U.S. Department of Energy’s Brookhaven National Laboratory — even as they release results from two experiments that further limit the places this elusive particle may be hiding. These results, described in three papers published in Physical Review Letters (PRL) by scientists working on the Daya Bay Neutrino Experiment in China and the Main Injector Neutrino Oscillation Search (MINOS) at Fermi National Accelerator Laboratory — like results recently announced by another neutrino experiment known as IceCube — greatly narrow the “phase space” where scientists must hunt. Whether you look at the new results as “zeroing in” on the elusive particles, or largely ruling out their existence, the scientists say the search for sterile neutrinos will and should go on. “Finding true evidence that sterile neutrinos exist would profoundly change our understanding of the universe,” says Brookhaven physicist Xin Qian, one of the neutrino hunters. These light, electrically neutral particles could be components of the mysterious “dark world,” including the dark matter that physicists know makes up about a quarter of the universe, but has never been directly detected. “In the Big Bang at the very beginning of the universe, some of the energy turned into particles that so far have proven to be completely invisible,” explains Milind Diwan, another Brookhaven neutrino physicist. “That there exists such matter that is dark — undetectable — is an established fact. What that is composed of is a big mystery. Every time we find a hint of a new neutral particle that could account for this missing energy — or a new search method — it offers us a window into the universe that we must explore.” Even the process of figuring out how to search for an elusive, undetectable particle — what to measure and how to sift through the data — is worthwhile, the scientists say, because it sparks creativity and the evolution of scientific thinking in unexpected ways. “These analyses are all side results to the main research programs at experiments where scientists realized they had the ability to look at the data in new and interesting ways,” says Diwan. “In this way, these programs are creating a group of young researchers who understand how to scrutinize data and question results, and they will become leaders with the types of skills that we need in science.” Brookhaven Lab scientists play important roles in both Daya Bay and MINOS, and were essential to understanding how the data collected for the experiments’ main research goals could be used to search for “signatures” of sterile neutrinos. For example, Brookhaven’s Qian led the development of the statistical methods for combining results from these experiments, working with a group of postdoctoral fellows and graduate students — about half from Daya Bay and half from MINOS. “It took some effort to finalize what kind of statistical method to use,” says Wei Tang, one of the Brookhaven postdocs. “Working with the two collaborations, with very different cultures, is a good example of how this work is done — how science brings people together,” he says. Both MINOS and Daya Bay are designed to explore details of neutrinos, tiny subatomic particles that carry no electrical charge and travel vast distances without interacting with anything. 
The Standard Model of particle physics — a list of known particles and the forces through which they interact — describes three known neutrino “flavors” (electron, muon, and tau), named for their relation with charged particles of those names. But the Standard Model also says that neutrinos have no mass — an idea recently refuted by experiments confirming these particles’ ability to change identities among the three flavors. These flavor-shifting “oscillations” are only possible if neutrinos exist in different mass states — indeed, a mixture of different masses for each known flavor. Therefore detecting neutrino oscillations, a discovery that garnered the 2015 Nobel Prize in physics, is proof that neutrinos do have mass — pointing out at least one hole in the Standard Model. “The fact that neutrinos have mass is a strong hint that we may discover other things we don’t know about these particles,” Qian says. “It would be earth-shaking if a fourth type, sterile neutrinos, really existed.” The search for sterile neutrinos is also motivated by theoretical ideas and hints from an earlier experiment at Los Alamos National Laboratory that may have observed a new kind of neutrino oscillation — approximately described as a two-step shift that transforms existing muon neutrinos into neutrinos of a fourth, undetectable (a.k.a. “sterile”) flavor, which then oscillate again to become electron neutrinos. The Los Alamos team measured the rate of this proposed two-step transformation through an undetectable intermediary flavor. The new results from MINOS and Daya Bay test this observation by measuring the rates of the individual steps. Multiplying the rates measured for each of the two individual steps that include the proposed sterile neutrino should equal the rate measured by the Los Alamos team — if the sterile neutrino indeed exists. In step one — muon neutrinos transforming into proposed sterile ones—we’re talking about measuring a transformation from a particle that scientists can create and detect to a particle they cannot “see” in any way. So instead of trying to measure something undetectable, they look for the rate at which the muon neutrinos disappear without transforming into either of the two other known types. This work was done by the MINOS experiment, which produces a beam of muon neutrinos to measure their oscillations to other flavors. To tackle the second step — the transformation from an undetectable particle to a detectable one — the scientists had an even bigger challenge. “We can’t make a beam of the undetectable sterile neutrinos — after all, we don’t even know if they exist!,” Diwan says. Instead, they take advantage of one of the basic laws of Nature — namely, that the rate of this process should be equal whether it’s running forward or in reverse — and measure the rate of the reverse process. They use a beam of electron neutrinos (actually electron antineutrinos, which for the purpose of this experiment are the same) and measure the rate at which they disappear, ruling out or accounting for those transforming into one of the two other known types. The electron antineutrinos are generated in the cores of nuclear reactors — in this case, a set of six that are part of the Daya Bay Neutrino Experiment in China. Detectors at varying distances from the reactors pick up signals that allow scientists to calculate oscillation rates. 
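For readers following the two-step logic described above, the standard 3+1 parameterization (conventional notation, not taken from these articles) makes the bookkeeping explicit:

\[
\sin^2 2\theta_{\mu e} \;=\; 4\,|U_{e4}|^2|U_{\mu 4}|^2,
\qquad
\sin^2 2\theta_{ee} \;=\; 4\,|U_{e4}|^2\bigl(1-|U_{e4}|^2\bigr),
\qquad
\sin^2 2\theta_{\mu\mu} \;=\; 4\,|U_{\mu 4}|^2\bigl(1-|U_{\mu 4}|^2\bigr),
\]

so for small mixings the appearance amplitude probed at Los Alamos satisfies sin² 2θ_{μe} ≈ (1/4) sin² 2θ_{ee} sin² 2θ_{μμ}: Daya Bay's electron-antineutrino disappearance constrains |U_{e4}|, MINOS's muon-neutrino disappearance constrains |U_{μ4}|, and the product of the two bounds the rate that the Los Alamos experiment reported.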
In the papers publishing in PRL — one from the Daya Bay collaboration, one from MINOS, and another presenting a combined analysis of the Daya Bay and MINOS results as well as a reanalysis of results from Bugey, an older experiment in France — the main conclusion is that the accounting doesn’t work out. Multiplying the rates of the two individual reactions did not match the Los Alamos data. Overlapping plots of results from the various analyses revealed only small remaining “parameter regions” where the sterile neutrinos could still be “hiding.” IceCube, an experiment that measures oscillations of neutrinos from the atmosphere with a detector deep under the ice of Antarctica, also recently reported new results. Its search for the disappearance of muon neutrinos to a proposed sterile flavor also came up empty handed, with no sterile neutrinos to be found. Future experiments at Fermilab will continue to search in the remaining channels. But even if they don’t find the elusive particles, the search and the competition and collaboration it fosters between and among scientists from different experiments and in different nations will create new opportunities. These teams are building the technologies — detectors, data infrastructure, and analysis tools — needed for the next generation of experiments, including the Deep Underground Neutrino Experiment to be hosted at Fermilab with widespread, international participation. “The physics of neutrino mass and the physics of the universe — its formation and structure — are intimately related,” says Diwan.  “The discoveries in the last few decades have shown that the universe is filled with electrically neutral invisible stuff including neutrinos that could have unknown and rich interactions. Unraveling the mysterious properties of neutrinos is proving to be one of the few possible windows into this dark world.” Brookhaven Lab’s role in this research is supported by the DOE Office of Science


News Article | February 16, 2017
Site: www.nanotech-now.com

Abstract: Francis (Frank) Alexander, a physicist with extensive management and leadership experience in computational science research, has been named Deputy Director of the Computational Science Initiative at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory, effective February 1. Alexander comes to Brookhaven Lab from DOE's Los Alamos National Laboratory, where he was the acting division leader of the Computer, Computational, and Statistical Sciences (CCS) Division. During his more than 20 years at Los Alamos, he held several leadership roles, including as leader of the CCS Division's Information Sciences Group and leader of the Information Science and Technology Institute. Alexander first joined Los Alamos in 1991 as a postdoctoral researcher at the Center for Nonlinear Studies. He returned to Los Alamos in 1998 after doing postdoctoral work at the Institute for Scientific Computing Research at DOE's Lawrence Livermore National Laboratory and serving as a research assistant professor at Boston University's Center for Computational Science. "I was drawn to Brookhaven by the exciting opportunity to strengthen the ties between computational science and the significant experimental facilities-the Relativistic Heavy Ion Collider, the National Synchrotron Light Source II, and the Center for Functional Nanomaterials [all DOE Office of Science User Facilities]," said Alexander. "The challenge of getting the most out of high-throughput and data-rich science experiments is extremely exciting to me. I very much look forward to working with the talented individuals at Brookhaven on a variety of projects, and am grateful for the opportunity to be part of such a respected institution." In his new role as deputy director, Alexander will work with CSI Director Kerstin Kleese van Dam to expand CSI's research portfolio and realize its potential in data-driven discovery. He will serve as the primary liaison to national security agencies, as well as develop strategic partnerships with other national laboratories, universities, and research institutions. His current research interest is the intersection of machine learning and physics (and other domain sciences). "We are incredibly happy that Frank decided to join our CSI team," said Kleese van Dam. "With his background in high-performance computing, data science, and computational and statistical physics, he is the ideal fit for Brookhaven." Throughout his career, Alexander has worked in a variety of areas, including nonequilibrium physics and computational physics. More recently, he has focused on the optimal design of experiments as part of the joint DOE/National Cancer Institute collaboration on cancer research, as well as on uncertainty quantification and error analysis for the prediction of complex systems' behavior. Alexander has served on many committees and advisory panels, including those related to DOE's Laboratory Directed Research and Development [http://science.energy.gov/lp/laboratory-directed-research-and-development/] program. Currently, he is on DOE's Computational Research Leadership Council and the editorial board of Computing in Science & Engineering Magazine. Alexander received his PhD in physics in 1991 from Rutgers University and a BS in mathematics and physics in 1987 from The Ohio State University. About Brookhaven National Laboratory Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. 
The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


News Article | September 8, 2016
Site: phys.org

Since then, scientists have found evidence that long noncoding RNA, or lncRNA, plays roles in many cellular processes, including guiding cell fate during embryonic development. However, it has been unknown exactly how lncRNA exerts this influence. Inspired by historical work showing that structure plays a role in the function of other classes of RNA such as transfer RNA, MIT biologists have now deciphered the structure of one type of lncRNA and used that information to figure out how it interacts with a cellular protein to control the development of heart muscle cells. This is one of first studies to link the structure of lncRNAs to their function. "Emerging data points to fundamental roles for many of these molecules in development and disease, so we believe that determining the structure of lncRNAs is critical for understanding how they function," says Laurie Boyer, the Irwin and Helen Sizer Career Development Associate Professor of Biology and Biological Engineering at MIT and the senior author of the study, which appears in the journal Molecular Cell on Sept. 8. Learning more about how lncRNAs control cell differentiation could offer a new approach to developing drugs for patients whose hearts have been damaged by cardiovascular disease, aging, or cancer. The paper's lead author is MIT postdoc Zhihong Xue. Other MIT authors are undergraduate Boryana Doyle and Sarnoff Fellow Arune Gulati. Scott Hennelly, Irina Novikova, and Karissa Sanbonmatsu of Los Alamos National Laboratory are also authors of the paper. Boyer's lab previously identified a mouse lncRNA known as Braveheart, which is found at higher levels in the heart compared to other tissues. In 2013, Boyer showed that this RNA molecule is necessary for normal development of heart muscle cells. In the new study, the researchers decided to investigate which regions of the 600-nucleotide RNA molecule are crucial to its function. "We knew Braveheart was critical for heart muscle cell development, but we didn't know the detailed molecular mechanism of how this lncRNA functioned, so we hypothesized that determining its structure could reveal new clues," Xue says. To determine Braveheart's structure, the researchers used a technique called chemical probing, in which they treated the RNA molecule with a chemical reagent that modifies exposed RNA nucleotides. By analyzing which nucleotides bind to this reagent, the researchers can identify single-stranded regions, double-stranded helices, loops, and other structures. This analysis revealed that Braveheart has several distinct structural regions, or motifs. The researchers then tested which of these motifs were most important to the molecule's function. To their surprise, they found that removing 11 nucleotides, composing a loop that represents just 2 percent of the entire molecule, halted normal heart cell development. The researchers then searched for proteins that the Braveheart loop might interact with to control heart cell development. In a screen of about 10,000 proteins, they discovered that a transcription factor protein called cellular nucleic acid binding protein (CNBP) binds strongly to this region. Previous studies have shown that mutations in CNBP can lead to heart defects in mice and humans. Further studies revealed that CNBP acts as a potential roadblock for cardiac development, and that Braveheart releases this repressor, allowing cells to become heart muscle. 
Scientists have not yet identified a human counterpart to the mouse Braveheart lncRNA, in part because human and mouse lncRNA sequences are poorly conserved, even though protein-coding genes of the two species are usually very similar. However, now that the researchers know the structure of the mouse Braveheart lncRNA, they plan to analyze human lncRNA molecules to identify similar structures, which would suggest that they have similar functions. "We're taking this motif and we're using it to build a fingerprint so we can potentially find motifs that resemble that lncRNA across species," Boyer says. "We also hope to extend this work to identify the modes of action of a catalog of motifs so that we can better predict lncRNAs with important functions." The researchers also plan to apply what they have learned about lncRNA toward engineering new therapeutics. "We fully expect that unraveling lncRNA structure-to-function relationships will open up exciting new therapeutic modalities in the near future," Boyer says.


News Article | September 9, 2016
Site: www.biosciencetechnology.com

Several years ago, biologists discovered a new type of genetic material known as long noncoding RNA. This RNA does not code for proteins and is copied from sections of the genome once believed to be “junk DNA.” Since then, scientists have found evidence that long noncoding RNA, or lncRNA, plays roles in many cellular processes, including guiding cell fate during embryonic development. However, it has been unknown exactly how lncRNA exerts this influence. Inspired by historical work showing that structure plays a role in the function of other classes of RNA such as transfer RNA, MIT biologists have now deciphered the structure of one type of lncRNA and used that information to figure out how it interacts with a cellular protein to control the development of heart muscle cells. This is one of first studies to link the structure of lncRNAs to their function. “Emerging data points to fundamental roles for many of these molecules in development and disease, so we believe that determining the structure of lncRNAs is critical for understanding how they function,” said Laurie Boyer, the Irwin and Helen Sizer Career Development Associate Professor of Biology and Biological Engineering at MIT and the senior author of the study, which appears in the journal Molecular Cell on Sept. 8. Learning more about how lncRNAs control cell differentiation could offer a new approach to developing drugs for patients whose hearts have been damaged by cardiovascular disease, aging, or cancer. The paper’s lead author is MIT postdoc Zhihong Xue. Other MIT authors are undergraduate Boryana Doyle and Sarnoff Fellow Arune Gulati. Scott Hennelly, Irina Novikova, and Karissa Sanbonmatsu of Los Alamos National Laboratory are also authors of the paper. Boyer’s lab previously identified a mouse lncRNA known as Braveheart, which is found at higher levels in the heart compared to other tissues. In 2013, Boyer showed that this RNA molecule is necessary for normal development of heart muscle cells. In the new study, the researchers decided to investigate which regions of the 600-nucleotide RNA molecule are crucial to its function. “We knew Braveheart was critical for heart muscle cell development, but we didn’t know the detailed molecular mechanism of how this lncRNA functioned, so we hypothesized that determining its structure could reveal new clues,” Xue says. To determine Braveheart’s structure, the researchers used a technique called chemical probing, in which they treated the RNA molecule with a chemical reagent that modifies exposed RNA nucleotides. By analyzing which nucleotides bind to this reagent, the researchers can identify single-stranded regions, double-stranded helices, loops, and other structures. This analysis revealed that Braveheart has several distinct structural regions, or motifs. The researchers then tested which of these motifs were most important to the molecule’s function. To their surprise, they found that removing 11 nucleotides, composing a loop that represents just 2 percent of the entire molecule, halted normal heart cell development. The researchers then searched for proteins that the Braveheart loop might interact with to control heart cell development. In a screen of about 10,000 proteins, they discovered that a transcription factor protein called cellular nucleic acid binding protein (CNBP) binds strongly to this region. Previous studies have shown that mutations in CNBP can lead to heart defects in mice and humans. 
Further studies revealed that CNBP acts as a potential roadblock for cardiac development, and that Braveheart releases this repressor, allowing cells to become heart muscle. “This is one of the first studies to relate lncRNA structure to function,” said John Rinn, a professor of stem cell and regenerative biology at Harvard University, who was not involved in the research. “It is critical that we move toward understanding the specific functional domains and their structural elements if we are going to get lncRNAs up to speed with proteins, where we already know how certain parts play certain roles. In fact, you can predict what a protein does nowadays because of the wealth of structure-to-function relationships known for proteins,” Rinn said. Scientists have not yet identified a human counterpart to the mouse Braveheart lncRNA, in part because human and mouse lncRNA sequences are poorly conserved, even though protein-coding genes of the two species are usually very similar. However, now that the researchers know the structure of the mouse Braveheart lncRNA, they plan to analyze human lncRNA molecules to identify similar structures, which would suggest that they have similar functions. “We’re taking this motif and we’re using it to build a fingerprint so we can potentially find motifs that resemble that lncRNA across species,” Boyer said. “We also hope to extend this work to identify the modes of action of a catalog of motifs so that we can better predict lncRNAs with important functions.” The researchers also plan to apply what they have learned about lncRNA toward engineering new therapeutics. “We fully expect that unraveling lncRNA structure-to-function relationships will open up exciting new therapeutic modalities in the near future,” Boyer said.


News Article | December 19, 2016
Site: www.eurekalert.org

LOS ALAMOS, N.M., Dec. 19, 2016 -- The virulent pathogen that causes the disease tularemia, or "rabbit fever," was weaponized during past world wars and is considered a potential bioweapon. Through a new study of the coccobacillus Francisella, Los Alamos National Laboratory researchers are working to use DNA markers to discern related but relatively harmless species as they are identified and to provide a means to distinguish them from the harmful F. tularensis. "This large study is particularly notable for having used 31 publicly available genomes plus select genes from about 90 additional isolates," said corresponding author Cheryl Kuske of the Bioenergy & Biome Sciences group at Los Alamos National Laboratory. The paper was published in the current issue of Applied and Environmental Microbiology (AEM). The group's work includes developing a comprehensive, genomics-based understanding of organisms and their environment as part of the Lab's energy-security research into energy sources and their impact. "We conducted standard single and multi-gene comparisons with whole-genome approaches and identified potential virulence factors to discriminate new species from among the more traditional set. Thanks to coordination with the Centers for Disease Control and Prevention (CDC), we now have a framework to identify new isolates or environmental detections even if only partial information is available," Kuske said. Humans can easily develop tularemia through incidental contact with infected rabbits, cats, rodents or other animals (and the ticks or fleas that may be on them). The disease manifests in a variety of forms: fever and chills with joint pain, cough or pneumonia, or abdominal pain and distress. Fatality rates range, depending on the form of the disease, from 2 percent to 24 percent, according to CDC data. Due to this severity, the CDC has made tularemia a reportable disease, and like the pathogens causing anthrax and plague, it is considered a bioweapon. Through detailed genome comparisons, sequence-alignment algorithms and other bioinformatics tools run on Los Alamos computers, the Los Alamos team identified features that differentiate among F. tularensis and other novel clinical and environmental Francisella isolates, providing a knowledge base for comparison of new sequences from clinical or environmental surveys. "For residents of areas such as the Southwest, where tularemia outbreaks are endemic and potentially life threatening, having more identification tools at hand is extremely valuable," Kuske said. The distribution of these isolates is global, however, suggesting a worldwide distribution of Francisella organisms that inhabit fish, ticks and a variety of environmental sources. The team designated four new species groups within the genus; Francisella opportunistica, an opportunistic pathogen of immune compromised patients; Francisella salina and Francisella uliginis, environmental isolates from coastal seawater near Galveston, TX; and Francisella frigiditurris from cooling-tower water in California. The paper: "Whole genome relationships among Francisella bacteria of diverse origin define new species and provide specific regions for detection." (http://aem. ) Funding: The genomic sequencing was funded by the U.S. Department of Homeland Security, Science and Technology Directorate.


News Article | December 19, 2016
Site: phys.org

Scanning electron micrograph of a murine macrophage infected with Francisella tularensis strain LVS. Macrophages were dry-fractured by touching the cell surface with cellophane tape after critical point drying to reveal intracellular bacteria. Bacteria (colorized in blue) are located either in the cytosol or within a membrane-bound vacuole. Credit: NIAID The virulent pathogen that causes the disease tularemia, or "rabbit fever," was weaponized during past world wars and is considered a potential bioweapon. Through a new study of the coccobacillus Francisella, Los Alamos National Laboratory researchers are working to use DNA markers to discern related but relatively harmless species as they are identified and to provide a means to distinguish them from the harmful F. tularensis. "This large study is particularly notable for having used 31 publicly available genomes plus select genes from about 90 additional isolates," said corresponding author Cheryl Kuske of the Bioenergy & Biome Sciences group at Los Alamos National Laboratory. The paper was published in the current issue of Applied and Environmental Microbiology (AEM). The group's work includes developing a comprehensive, genomics-based understanding of organisms and their environment as part of the Lab's energy-security research into energy sources and their impact. "We conducted standard single and multi-gene comparisons with whole-genome approaches and identified potential virulence factors to discriminate new species from among the more traditional set. Thanks to coordination with the Centers for Disease Control and Prevention (CDC), we now have a framework to identify new isolates or environmental detections even if only partial information is available," Kuske said. Humans can easily develop tularemia through incidental contact with infected rabbits, cats, rodents or other animals (and the ticks or fleas that may be on them). The disease manifests in a variety of forms: fever and chills with joint pain, cough or pneumonia, or abdominal pain and distress. Fatality rates range, depending on the form of the disease, from 2 percent to 24 percent, according to CDC data. Due to this severity, the CDC has made tularemia a reportable disease, and like the pathogens causing anthrax and plague, it is considered a bioweapon. Through detailed genome comparisons, sequence-alignment algorithms and other bioinformatics tools run on Los Alamos computers, the Los Alamos team identified features that differentiate among F. tularensis and other novel clinical and environmental Francisella isolates, providing a knowledge base for comparison of new sequences from clinical or environmental surveys. "For residents of areas such as the Southwest, where tularemia outbreaks are endemic and potentially life threatening, having more identification tools at hand is extremely valuable," Kuske said. The distribution of these isolates is global, however, suggesting a worldwide distribution of Francisella organisms that inhabit fish, ticks and a variety of environmental sources. The team designated four new species groups within the genus; Francisella opportunistica, an opportunistic pathogen of immune compromised patients; Francisella salina and Francisella uliginis, environmental isolates from coastal seawater near Galveston, TX; and Francisella frigiditurris from cooling-tower water in California. Explore further: What makes Francisella such a bad actor? More information: Jean F. 
Challacombe et al., Whole genome relationships among Francisella bacteria of diverse origin define new species and provide specific regions for detection, Applied and Environmental Microbiology (2016). DOI: 10.1128/AEM.02589-16


News Article | August 22, 2016
Site: www.scientificcomputing.com

A moment of inspiration during a wiring diagram review has saved more than $2 million in material and labor costs for the Trinity supercomputer at Los Alamos National Laboratory. The Laboratory’s High Performance Computing (HPC) facilities team, led by Data Center Manager Ron Velarde of HPC-Design (HPC-DES), discovered the potential to re-engineer Cray’s initial wiring diagram for the power feed to Trinity’s computing racks. As a result, the facilities team was able to perform a redesign, approved by Cray and the Laboratory, that ultimately saved the Lab an estimated $2.6 million in material and labor costs. Sandia National Laboratories adopted this redesign for their Trinity-like systems and Lawrence Berkeley National Laboratory is considering it as well. “My team and I are always looking to improve engineering means and methods for supercomputer installations in our data centers,” said Velarde. Trinity, Los Alamos’s latest major high-performance computer, is designed to provide increased computational capability for the NNSA Nuclear Security Enterprise in order to improve geometric and physics accuracy in calculations that can be completed in weeks — not years. The facilities team also used prefabricated copper tray cable rather than fabricating the 22,000 feet of cables on site, which was the plan originally proposed for the project. For high-performance computing systems in the Laboratory Data and Communications Center (LDCC), where power cables are being routed over the computers for the first time, the facilities team worked with the Lab’s electrical standards personnel to approve new aluminum power cables, saving 20 percent in materials cost and a factor of three in weight. This pioneering use of aluminum is also expected to generate cost and weight savings in other areas of the Laboratory in addition to the high-performance computing data centers. The Trinity Phase 1 system, based on the Haswell processor, is currently performing simulations. The Trinity Phase 2 system, based on the Knights Landing processor, is being delivered this summer. As an example of the scales involved, Phase 2 of the Trinity system, which began to arrive in June, weighs just under 100 tons. Trinity in its completed form will weigh about 175 tons and contain 12 miles of copper cable and 44 miles of optical cable. More than 16,000 gallons of water will circulate in the inner process loop to cool Trinity at a flow rate of around 10,000 gallons per minute.


News Article | February 22, 2017
Site: www.prweb.com

Cyon Research announced today that Melanie Mitchell, an expert in complex systems, will give the Friday keynote at COFES 2017: The Congress On the Future of Engineering Software. COFES 2017 is being held in Scottsdale, Arizona, April 6-9 at the Scottsdale Plaza Resort. This year at COFES 2017, attendees will take a deep dive into Complexity and Transformation. The growth of complexity is presenting businesses with new and difficult challenges — from the constantly changing business environment to conflicting requirements of more complex products that must be delivered more simply to the customer. New phenomena result from complexity, often requiring consideration of things that were not previously an issue. Rather than focus on how to design complex systems from scratch, COFES 2017 will challenge attendees to focus on interventions that can transform existing systems to mitigate the effects of complexity. In her keynote "An Introduction to Complexity" Mitchell will take participants on a tour of the sciences of complexity, a broad set of efforts that seek to explain how large-scale complex, organized, and adaptive behavior can emerge from simple interactions among a myriad of individual components. Based on her work at the Santa Fe Institute, Mitchell brings clarity to the workings of complexity across a broad range of biological, technological, and social phenomena. “The sciences of complexity have forced us to reexamine long-held assumptions about the way the world works, in disciplines ranging from genetics to economics, from insect colonies to human social networks,” Mitchell explains. "My talk will explore some of the important lessons we have learned from complex systems thinking.” “Your first encounter with the emerging science of complexity will alter your world view,” said Michael Tardif, Managing Partner, Building Informatics. “Melanie Mitchell and her peers are only the second generation of scientists to undertake the daunting task of understanding the world as it is, rather than trying to fit it into predefined theorems. One day, the history of science will be taught as B.C. (before complexity) and A.C. (after complexity). Dr. Mitchell’s COFES 2017 keynote is your chance to bear witness to and become a part of this seminal moment in the history of science.” (Building Informatics empowers building owners to optimize the financial and operational performance of their real property assets throughout their lifecycle). “Melanie Mitchell is an intellectual leader in the field of complex adaptive systems and one of the most effective communicators of complex systems ideas to scientists, engineers, and the public at large,” said C. C. Wood, External Professor and Former Vice President, Santa Fe Institute. “Melanie is the prize-winning author of ‘Complexity: A Guided Tour’ (Oxford, 2009) and a gifted teacher, having founded and led the Santa Fe Institute’s ‘Complexity Explorer’ series of online courses. Like it or not, increasing complexity in the systems we engineer, build, and sell is a fact of modern technological life, and COFES attendees will benefit significantly from Melanie’s perspectives.” (The Santa Fe Institute is the world headquarters for complexity science, operated as an independent, not-for-profit research and education center located in Santa Fe, New Mexico). “We live in a complex world, but our brains are wired for simplicity,” said Brad Holtz, president and CEO of Cyon Research Corporation. 
“When confronted with exponential growth, we are constantly surprised because our innate expectation that the world follows linear projections consistently leads us to the wrong conclusions. Similarly, the complexness and interconnectivity of our world continues to baffle us. Melanie’s talk will help us understand not just why, but also how to think. Which is what COFES is all about.” Melanie Mitchell is Professor of Computer Science at Portland State University and External Professor and Member of the Science Board at the Santa Fe Institute. She attended Brown University, where she majored in mathematics and did research in astronomy. She also attended the University of Michigan, where she received a Ph.D. in computer science. Her dissertation, in collaboration with her advisor Douglas Hofstadter, was the development of Copycat, a computer program that makes analogies. She has held faculty or professional positions at the University of Michigan, the Santa Fe Institute, Los Alamos National Laboratory, the OGI School of Science and Engineering, and Portland State University. She is the author or editor of five books and over 80 scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her most recent book, Complexity: A Guided Tour, won the 2010 Phi Beta Kappa Science Book Award. Now in its 18th year, COFES is the annual, invitation-only event for the design and engineering software industry. Widely recognized as a think-tank event, COFES (cofes.com) gathers design and engineering software leaders (vendors, users, press, and analysts) together to discuss the role engineering technology will play in the future survival and success of business. COFES is renowned for hosting leading keynote visionaries that provide a new perspective to the future of the industry. The by-invitation-only COFES is a Cyon Research event. Apply for an invitation at apply.cofes.com. Cyon Research Corporation is a think-tank providing key information, research, analysis, and consulting to its clients worldwide. COFES (The Congress on the Future of Engineering Software) and ASSESS (Analysis, Simulation, and Systems Engineering Software Strategies) are Cyon Research's primary events. cyonresearch.com.


News Article | November 16, 2015
Site: phys.org

But those concentrated beams of light have never been able to cool liquids. University of Washington researchers are the first to solve a decades-old puzzle—figuring out how to make a laser refrigerate water and other liquids under real-world conditions. In a study to be published the week of Nov. 16 in the Proceedings of the National Academy of Sciences, the team used an infrared laser to cool water by about 36 degrees Fahrenheit—a major breakthrough in the field. "Typically, when you go to the movies and see Star Wars laser blasters, they heat things up. This is the first example of a laser beam that will refrigerate liquids like water under everyday conditions," said senior author Peter Pauzauskie, UW assistant professor of materials science and engineering. "It was really an open question as to whether this could be done because normally water warms when illuminated." The discovery could help industrial users "point cool" tiny areas with a focused point of light. Microprocessors, for instance, might someday use a laser beam to cool specific components in computer chips to prevent overheating and enable more efficient information processing. Scientists could also use a laser beam to precisely cool a portion of a cell as it divides or repairs itself, essentially slowing these rapid processes down and giving researchers the opportunity to see how they work. Or they could cool a single neuron in a network—essentially silencing it without damaging it—to see how its neighbors bypass it and rewire themselves. "There's a lot of interest in how cells divide and how molecules and enzymes function, and it's never been possible before to refrigerate them to study their properties," said Pauzauskie, who is also a scientist at the U.S. Department of Energy's Pacific Northwest National Laboratory in Richland, Washington. "Using laser cooling, it may be possible to prepare slow-motion movies of life in action. And the advantage is that you don't have to cool the entire cell, which could kill it or change its behavior." The UW team chose infrared light for its cooling laser with biological applications in mind, as visible light could give cells a damaging "sunburn." They demonstrated that the laser could refrigerate saline solution and cell culture media that are commonly used in genetic and molecular research. To achieve the breakthrough, the UW team used a material commonly found in commercial lasers but essentially ran the laser phenomenon in reverse. They illuminated a single microscopic crystal suspended in water with infrared laser light to excite a unique kind of glow that has slightly more energy than the light absorbed. This higher-energy glow carries heat away from both the crystal and the water surrounding it. The laser refrigeration process was first demonstrated in vacuum conditions at Los Alamos National Laboratory in 1995, but it has taken nearly 20 years to demonstrate this process in liquids. Typically, growing laser crystals is an expensive process that requires lots of time and can cost thousands of dollars to produce just a single gram of material. The UW team also demonstrated that a low-cost hydrothermal process can be used to manufacture a well-known laser crystal for laser refrigeration applications in a faster, inexpensive and scalable way. The UW team also designed an instrument that uses a laser trap—akin to a microscopic tractor beam—to "hold" a single nanocrystal surrounded by liquid in a chamber and illuminate it with the laser. 
To determine whether the liquid is cooling, the instrument also projects the particle's "shadow" in a way that allows the researchers to observe minute changes in its motion. As the surrounding liquid cools, the trapped particle slows down, allowing the team to clearly observe the refrigerating effect. They also designed the crystal to change from a blueish-green to a reddish-green color as it cools, like a built-in color thermometer. "The real challenge of the project was building an instrument and devising a method capable of determining the temperature of these nanocrystals using signatures of the same light that was used to trap them," said lead author Paden Roder, who recently received his doctorate from the UW in materials science and engineering and now works at Intel Corp. So far, the UW team has only demonstrated the cooling effect with a single nanocrystal, as exciting multiple crystals would require more laser power. The laser refrigeration process is currently quite energy intensive, Pauzauskie said, and future steps include looking for ways to improve its efficiency. One day the cooling technology itself might be used to enable higher-power lasers for manufacturing, telecommunications or defense applications, as higher-powered lasers tend to overheat and melt down. "Few people have thought about how they could use this technology to solve problems because using lasers to refrigerate liquids hasn't been possible before," he said. "We are interested in the ideas other scientists or businesses might have for how this might impact their basic research or bottom line." More information: Laser refrigeration of hydrothermal nanocrystals in physiological media, Proceedings of the National Academy of Sciences, www.pnas.org/cgi/doi/10.1073/pnas.1510418112
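The cooling mechanism described above, in which the emitted glow carries slightly more energy than the absorbed pump light, can be summarized with the standard ideal estimate of anti-Stokes cooling efficiency. The sketch below is illustrative only: the 1020 nm pump and 1000 nm mean fluorescence wavelengths are typical of ytterbium-doped crystals, not the exact parameters reported in the PNAS paper.

```python
# Ideal anti-Stokes laser-cooling estimate: each absorbed pump photon at
# lambda_pump is re-emitted at a shorter mean fluorescence wavelength
# lambda_fluor, carrying away the energy difference as heat drawn from the
# crystal and the surrounding liquid. Wavelengths and absorbed power below
# are illustrative, not the UW experiment's exact values.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy(wavelength_nm):
    """Photon energy E = h*c/lambda, in joules."""
    return H * C / (wavelength_nm * 1e-9)

def cooling_efficiency(lambda_pump_nm, lambda_fluor_nm):
    """Heat removed per unit of absorbed pump energy (ideal limit)."""
    e_pump = photon_energy(lambda_pump_nm)
    e_fluor = photon_energy(lambda_fluor_nm)
    return (e_fluor - e_pump) / e_pump   # equals lambda_pump/lambda_fluor - 1

if __name__ == "__main__":
    eta = cooling_efficiency(lambda_pump_nm=1020.0, lambda_fluor_nm=1000.0)
    absorbed_mw = 10.0                   # hypothetical absorbed pump power, mW
    print(f"ideal cooling efficiency: {eta:.1%}")
    print(f"heat lift at {absorbed_mw:.0f} mW absorbed: {eta * absorbed_mw:.2f} mW")
```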


News Article | November 3, 2016
Site: www.eurekalert.org

LOS ALAMOS, N.M., November 4, 2016 -- A broad computational study of cancer genome sequences identifies telltale mutational signatures associated with smoking tobacco and demonstrates, for the first time, that smoking increases cancer risk by causing somatic mutations in tissues directly and indirectly exposed to tobacco smoke. The international study by Los Alamos National Laboratory with the UK's Wellcome Trust Sanger Institute and other collaborators was published in the November 4 issue of Science. "This study offers fresh insights into how tobacco smoke causes cancer," said Dr. Ludmil Alexandrov, Oppenheimer Fellow at Los Alamos National Laboratory and co-lead author of the study. "Our analysis demonstrates that tobacco smoking causes mutations that lead to cancer by multiple distinct mechanisms. Tobacco smoking damages DNA in organs directly exposed to smoke as well as speeds up a mutational cellular clock in organs that are both directly and indirectly exposed to smoke." Tobacco smoke is a complex witch's brew containing more than 7,000 chemicals including over 70 known to cause cancer (carcinogens). Previous large-scale epidemiological studies have associated tobacco smoking with increased risk for 17 different types of cancer, including cancer in tissue not directly exposed to smoke. However, the mechanisms by which tobacco smoking causes cancer have previously remained elusive. This study demonstrates that smoking increases cancer risk by causing somatic mutations that both directly damage DNA and increase the speed of an endogenous molecular clock. "This research brings together Big Data generated by international cancer consortia and the supercomputing and machine-learning capabilities of Los Alamos to address one of the leading public health issues of our time," said Laboratory Director Charlie McMillan. "Alexandrov's work leverages pattern-recognition software in genomic screening and represents a creative breakthrough in cancer research." The study, "Mutational signatures associated with tobacco smoking in human cancer," focused on identifying the mutation signatures and methylation changes in 5,243 genome sequences of smoking-related cancers by comparing the cancers of smokers to those of non-smokers. Professor Sir Mike Stratton, joint lead author from the Wellcome Trust Sanger Institute, Cambridge, UK, said: "The genome of every cancer provides a kind of 'archaeological record,' written in the DNA code itself, of the exposures that caused the mutations that lead to the cancer. Our research indicates that the way tobacco smoking causes cancer is more complex than we thought. Indeed, we do not fully understand the underlying causes of many types of cancer and there are other known causes, such as obesity, about which we understand little of the underlying mechanism. This study of smoking tells us that looking in the DNA of cancers can provide provocative new clues to how cancers develop and thus, potentially, how they can be prevented." All mutations--harmless or cancerous--are due to the activity of endogenous or exogenous mutation processes. Each process leaves a signature of scrambled DNA code in that cell's genome. The code is written in four molecular units, or bases, identified by the letters A, T, G, and C. The new study found more than 20 mutational signatures across the 17 cancer types associated with tobacco smoking. However, only five of these signatures were elevated in cancers from smokers. 
Some cancer types had only a single mutational signature elevated in smokers, while others had multiple. One signature, called signature 4, can be traced to DNA being damaged by direct exposure to tobacco smoke. "Signature 4 is likely the direct mutational consequence of misreplication of DNA damage induced by tobacco carcinogens," particularly benzo[a]pyrene, according to the study. Signature 5, found by previous Los Alamos research to occur in all cells and to trigger mutations with clock-like regularity, correlated with increased mutations in smokers versus non-smokers. Alexandrov explains that smoking accelerates the clock function most likely by altering the molecular machinery underlying this signature. Other signatures reinforce the theory that smoking increases the risk of several cancer types by raising the overall number of mutations, even in tissue not directly exposed to smoke. The authors note that for some of the mutational signatures, the underlying mechanisms are still unclear. Alexandrov modeled the cancer mutational processes as a blind-source-separation problem to distinguish coherent signals from a noisy background, a methodology used in other areas of the Laboratory's research related to its nuclear-security mission. The project drew on the Laboratory's high-performance computing resources and expertise, as well as expertise in numerical optimization problems. Alexandrov is the winner of the 2016 Carcinogenesis Young Investigator Award from the journal Carcinogenesis: Integrative Cancer Research, which recognizes a recent significant contribution to carcinogenesis research by an investigator under the age of 40. In 2014, he was recognized by Forbes as one of the "30 brightest stars under the age of 30" in the field of science and healthcare. In 2015, he was awarded the Science magazine and SciLifeLab Prize for Young Scientists in genomics and proteomics and a Harold M. Weintraub Award for outstanding achievement during graduate studies in the biological sciences. He is a program member of the University of New Mexico Cancer Center. Researchers representing 16 institutions in the United States, Europe, and Asia worked on the study published by Science. More information, including a copy of the paper, "Mutational signatures associated with tobacco smoking in human cancer," can be found online at the Science press package at http://www. . A brief video summarizing the research is available on YouTube: https:/ . Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, BWXT Government Group, and URS, an AECOM company, for the Department of Energy's National Nuclear Security Administration. Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction and solving problems related to energy, environment, infrastructure, health and global security concerns. The Wellcome Trust Sanger Institute is one of the world's leading genome centers. Through its ability to conduct research at scale, it is able to engage in bold and long-term exploratory projects that are designed to influence and empower medical science globally. 
Institute research findings, generated through its own research programs and through its leading role in international consortia, are being used to develop new diagnostics and treatments for human disease. http://www.
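The blind-source-separation framing mentioned in the article can be illustrated with a toy example. The sketch below applies non-negative matrix factorization to a synthetic mutation catalog (96 mutation types by 40 tumors); it shows the general technique only and is not the framework, data, or signature count used in the Science study.

```python
# Illustrative only: decompose a synthetic mutation catalog (96 trinucleotide
# mutation types x tumors) into "signatures" and per-tumor "exposures" with
# non-negative matrix factorization, a standard blind-source-separation method.
# Every number here is invented; this is not the pipeline used in the study.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_types, n_tumors, n_signatures = 96, 40, 3

# Build a synthetic catalog from 3 random "true" signatures plus Poisson noise.
true_sigs = rng.dirichlet(np.ones(n_types), size=n_signatures).T        # 96 x 3
exposures = rng.gamma(shape=2.0, scale=150.0, size=(n_signatures, n_tumors))
catalog = rng.poisson(true_sigs @ exposures).astype(float)              # 96 x 40

# Factor the catalog: catalog ~= W (signatures) @ H (exposures).
model = NMF(n_components=n_signatures, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(catalog)    # 96 x 3, candidate mutational signatures
Hmat = model.components_            # 3 x 40, per-tumor activity of each signature

W /= W.sum(axis=0, keepdims=True)   # normalize each signature to a probability profile
print("reconstruction error:", round(model.reconstruction_err_, 1))
print("signature matrix shape:", W.shape, "exposure matrix shape:", Hmat.shape)
```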


Marking a major innovation in muon detectors used to study the interiors of structures like pyramids and caves, a new muon detector has been developed that matches much larger instruments in performance despite its smaller size. Alain Bonneville, a geophysicist at Pacific Northwest National Laboratory, described the innovation in an address at the American Geophysical Union Fall Meeting in San Francisco on Thursday. Muons are elementary particles produced by the collision of cosmic rays with atmospheric molecules and are invisible to the naked eye. Earth receives a constant stream of muons at various angles, and they pass through earth and rock, so detecting them gives a broad picture of the internal structure of rock and other materials. These elementary particles have already proved quite effective at probing the deeper structures of pyramids and volcanoes. However, the method had a shortcoming: it could not track changes in the density of those internal structures. In a technological breakthrough, Pacific Northwest National Laboratory, or PNNL, has created a miniature version of the muon detector. (PNNL is operated for the U.S. Department of Energy's Office of Science, the single largest supporter of basic research in the physical sciences in the United States.) The mini muon detector answers the need for "seeing" density variations underground and within objects' internal structures. Made of plastic components and optical fibers, these detectors carry signals to electronics that count each muon passing through the device. A density change shows up as a variation in the number of muons hitting the detector over a given period, as in the case of a plume of carbon dioxide. PNNL's mini detector is just six inches in diameter and three feet long, and it is capable of going thousands of feet underground via horizontal boreholes to image and monitor carbon dioxide storage sites. The mini detector's output was tested against large detectors in a tunnel at Los Alamos National Laboratory, where the scientists found the results on par with those of the larger machines. With the data converted into an image, the device can be used to track carbon dioxide movement or leakage underground, and the approach extends to applications across a variety of subsurface imaging areas. Meanwhile, a muon-tracking telescope array in Ooty, India, detected cosmic ray showers. The GRAPES-3 facility recorded a surge in muon intensity and a corresponding weakening of the earth's magnetic field after a solar storm hit the earth on June 22, 2015. The details were published in the journal Physical Review Letters by the Indo-Japanese collaboration that made the study. According to the researchers, a coronal mass ejection (CME) reached earth on June 22, 2015, after solar flares erupted from sunspot 12371 at the sun's central disc, causing radio blackouts and aurora borealis. The data revealed that the CME weakened the earth's magnetic field following the burst of high-energy cosmic rays. Calling the study unique, the team said muon tracking offers the advantage of assessing the impact of solar storms and space weather over distances twice the radius of the earth. This is unlike satellite-based studies, which are constrained to yielding very localized information from their immediate vicinity. 
"Galactic cosmic rays producing a muon burst were bent in the space surrounding the Earth over a volume that is 7 times that of the Earth, and hence they serve as a monitor of the solar storm over this volume. This is in stark contrast to the satellite-based measurements that provide only in situ information," said S.K. Gupta, head of the GRAPES-3 experiment that is based in Tata Institute of Fundamental Research, Mumbai. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.


News Article | April 4, 2016
Site: phys.org

Yet certain medically useful isotopes can only be produced by nuclear reactions that rely on high-energy particle accelerators, which are expensive to build and operate. As such, these isotopes are not typically available from commercial vendors. Instead, they are produced at facilities with unique capabilities, like the national labs built and funded by the U.S. Department of Energy (DOE). The DOE Office of Science's Nuclear Physics Isotope Development and Production for Research and Applications program (DOE Isotope Program) seeks to make critical isotopes more readily available for energy, medical, and national security applications and for basic research. Under this program, scientists, engineers, and technicians at DOE's Brookhaven National Laboratory recently completed the installation of a beam raster (or scanning) system designed to increase the yield of critical isotopes produced at the Brookhaven Linac Isotope Producer (BLIP), the Lab's radioisotope production and research facility, in operation since 1972. The new raster system became operational in Jan. 2016 and is contributing to increased production of strontium-82 (Sr-82), an isotope used for cardiac imaging, and to research and development of actinium-225 (Ac-225), an isotope that may be a promising treatment for targeting many forms of cancer, including leukemia and melanoma. BLIP produces isotopes by directing a high-energy (up to 200 million electron volts), high-intensity narrow beam of protons at a water-cooled array of several target materials. When the protons collide with the target materials, a reaction occurs, producing radioactive products in the target. More than one isotope is being produced at any given time, so multiple targets are hit with the beam. As the beam passes through the first to last targets, its energy decays. The targets are lined up in an order that maximizes production of the desired isotopes: the targets needed to produce isotopes requiring higher energy are put in the front of the array. A small fraction of each of the products is the desired isotopes, which can be extracted and purified through chemical processing conducted inside lead-enclosed "hot" cells at Brookhaven Lab's Isotope Research Processing Laboratory. Inside these hot cells, which are designed for safely handling radioactive materials, the irradiated targets are cut open and their contents are dissolved to separate out the desired isotopes. Personnel use manipulators to remotely handle the radioactive material. The isotopes are then shipped in specialized containers to suppliers, who distribute them to hospitals for medical use or to universities for ongoing research. What makes BLIP unique is its high-energy capacity. "A minimum of 70 million electron volts is required to make Sr-82," explained Leonard Mausner, the head of research and development and facility upgrades within Brookhaven Lab's Medical Isotope Research and Production Program. Mausner proposed both the new beam raster system and a complementary effort to increase the beam intensity of the Brookhaven Linear Accelerator, or Linac, which provides the beam used at BLIP. "To date, there are no commercial accelerators in the United States with the power required to produce this critical isotope used to image 300,000 patients per year," said Mausner. However, the targets' ability to handle BLIP's high intensities has, until now, been largely hindered by the way the beam was positioned on the target. 
The beam, which significantly heats up the target, had always been narrowly focused on a single area of the target. As a result, the targets—such as the rubidium chloride (RbCl) used to produce Sr-82—became very hot only in the region of highest beam intensity. "The beam pulse, which is 450 microseconds long and happens every 150 milliseconds, would hit the targets in the same exact spot 24/7 over a period of weeks, potentially damaging the targets when high beam current was applied," explained Robert Michnoff, project manager for the BLIP beam raster system and an engineer in Brookhaven Lab's Collider-Accelerator Department. "Because the RbCl target was being heated unevenly, a small molten zone would form, then expand and push out into the cooler outer edges of the target area, where the material would solidify," said Mausner. "We want the target fully solid or fully molten, as a uniform density may improve isotope yield." Uneven heating causes the density of a target to vary across its diameter. As a result, the energy of the beam exiting the target will also vary across this diameter, impacting the energy that enters the next target in the array. "The distribution in beam energy increases as it travels through one target to the next, making it difficult to assess how much energy is transferred. We want to optimize production of isotopes in the downstream targets," said Cathy Cutler, director of Brookhaven Lab's Medical Isotope Research and Production Program. The new beam raster system provides a more uniform distribution of the beam on the target by rapidly "painting" the beam on the target in a circular raster fashion using two custom-designed magnets. But, rapidly sweeping the beam in one circular motion is not enough to uniformly distribute the beam across the target. "The beam intensity pattern would resemble a donut or a volcanic crater—a circle with a hollow center," said Mausner. Instead, the beam is moved in a circular pattern at two different radii, essentially creating a larger and smaller circle. The radius values and the number of beam pulses for each radius can be programmed to optimize beam distribution. "The rastering pattern allows us to achieve near-uniform beam current density on the target," said Michnoff, who mentioned plans to test a "middle" radius, which may help to provide even better beam distribution. Paving the way for higher beam intensities The new raster system provides an opportunity for researchers to safely apply increasingly higher beam currents on targets—an approach that Brookhaven scientists have been actively exploring to further increase isotope production yield. In a complementary effort to the BLIP raster upgrade, the scientists worked to increase the beam intensity of the Brookhaven Linac by optimizing operating parameters of the hydrogen ion source, neutralizing the electric charge generated by the beam, and increasing the length of the beam pulse by placing the beam earlier in the radio-frequency pulse used to accelerate the ions. "At higher currents, the beam's electric charge causes the beam to spread out, resulting in defocusing of the beam and ultimately a loss of current. We use Xenon gas to neutralize that charge," explained Deepak Raparia, head of the linac pre-injector system, project manager of the linac upgrade, and a scientist in Brookhaven Lab's Collider-Accelerator Department. 
"By increasing the length of the beam pulse, we can deliver more beam current to the BLIP target station and thus increase the quantity of isotopes produced." In 2015, the Brookhaven Linac's beam intensity was increased from 115 to 147 microamps, surpassing the initial project goal of 140 microamps. After the raster system was commissioned in 2016, the beam was further optimized, achieving 165 microamps. According to Mausner, yield could rise significantly at this higher intensity: "If the beam current can be maintained at 165 microamps, production levels could potentially increase by 40 percent." In a second proposed phase of the Linac upgrade, the Brookhaven team hopes to double beam current by doubling the beam pulse length from 450 to 900 microseconds. Accelerating and extracting significantly more beam current out of the linac and transporting that current to BLIP will require the linac's radio-frequency system and beam-focusing elements to be upgraded. "These upgrades will increase the duration of accelerating and focusing fields to cover the entire beam pulse," said Raparia. The capability to handle higher and higher beam intensities without targets failing not only increases the yield of routinely produced medical isotopes, such as Sr-82, but also enables scientists to investigate other promising isotopes for medical and other applications. Among these isotopes is Ac-225, a short-lived alpha emitter—it quickly releases a high amount of energy in the form of alpha particles that can only travel short distances. As such, Ac-225 has the ability to destroy targeted cancer cells without destroying healthy surrounding tissue. Traditionally, Ac-225 has been produced through the natural decay of thorium-229 from uranium-233, which is not readily available and has an extremely long half-life, or rate of decay. In collaboration with Los Alamos National Laboratory and Oak Ridge National Laboratory, Brookhaven Lab has been developing an accelerator-based capability to scale up the production of Ac-225, the demand for which is 50 to 100 times greater than the current supply. Scientists from all three laboratories are working to develop a new Ac-225 production approach in which readily available thorium metal targets are bombarded with high-energy protons. Initial experiments demonstrated that an irradiation lasting less than a week's time could produce the annual quantity of Ac-225 currently in supply. "The raster system will ultimately aid us in the research and development of Ac-225, which could eventually be manufactured into an active pharmaceutical ingredient used in cancer therapy," said Mausner. However, before Ac-225 can become a routine medical isotope, several challenges must be overcome. "Hundreds of isotopes are produced when thorium is hit with protons, so it is difficult to separate out Ac-225 during chemical processing of the target," explained Mausner. A small number of clinical cancer treatment trials are ongoing using the currently limited supply of Ac-225. Initial results indicate that Ac-225 has the potential to put terminal cancer patients with acute myeloid leukemia, a cancer of the blood and bone marrow, into remission. "The increased beam intensity, coupled with this new raster system, will enable higher production levels of desirable radioisotopes for the nuclear medicine community and industry, and will increase the number of patients who could benefit across our nation," said Mausner.


Flash Physics is our daily pick of the latest need-to-know developments from the global physics community selected by Physics World's team of editors and reporters The first optical clock to be operated in space has been launched by Matthias Lezius and colleagues at the Germany-based Menlo Systems. Based on a frequency-comb laser system, the optical clock operates at a frequency that is about 100,000 times higher than that of the microwave-based atomic clocks that are currently used on global-positioning-system (GPS) satellites. The optical clock is about 22 cm in size and weighs 22 kg. Its power consumption is about 70 W, which makes it suitable for satellite applications. Although this prototype optical clock can only operate at about one tenth the accuracy of today's GPS atomic clocks, Lezius' team is now working on a new version of the clock that promises to improve this accuracy by several orders of magnitude – which could boost the accuracy of GPS. The current clock was tested on board a research rocket that flew a 6 min parabolic flight. The next version of the optical clock is scheduled for testing in 2017. The research is described in Optica. The physicist and advocate of strategic nuclear-arms reduction Richard Garwin will receive a Presidential Medal of Freedom from US president Barack Obama. Garwin, who is 88, was a PhD student of Enrico Fermi at the University of Chicago before designing the first hydrogen bomb in 1952 under Edward Teller at Los Alamos National Laboratory. He then moved to IBM's Thomas J Watson Research Center, where he is an IBM fellow emeritus. At IBM he worked on a broad range of topics including condensed matter, particle physics and gravitation. He also applied his skills to the development of touch screens, laser printers and intelligence-gathering technologies. Garwin served as a scientific adviser to presidents Kennedy, Johnson and Nixon, which is when he developed his long-standing interest in nuclear non-proliferation (see video). The medal is the highest civilian honour in the US and it will be given to Garwin and 20 other winners at a ceremony at the White House on 22 November. A brilliant burst of radiation known as a fast radio burst (FRB) that has travelled over a billion light years has unexpectedly revealed information about the cosmic web – the large-scale structure of the universe. A team led by Ryan Shannon at the International Centre for Radio Astronomy Research (ICRAR) and Vikram Ravi of the California Institute of Technology says that the latest FRB – one of 18 to be detected to date – is one of the brightest seen. The flash was captured by CSIRO's Parkes radio telescope in New South Wales, Australia. FRBs are extremely rare, short but intense pulses of radio waves, each only lasting about a millisecond. "This particular FRB is the first detected to date to contain detailed information about the cosmic web – regarded as the fabric of the universe – but it is also unique because its travel path can be reconstructed to a precise line of sight and back to an area of space about a billion light-years away that contains only a small number of possible home galaxies," says Shannon. The cosmic web is very difficult to spot because most of the plasma and gas it contains is very faint. It is usually detected when large sections of it are lit up briefly, for example by a bright quasar or a FRB. This particular flash reached the Parkes radio telescope mid last year and is described in Science.
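The Flash Physics item above notes that the burst carries information about the intervening cosmic web but does not spell out the mechanism; the standard observable is the frequency-dependent dispersion delay, which scales with the column density of free electrons (the dispersion measure) along the line of sight. The sketch below uses an invented dispersion measure, not the value measured for this particular burst.

```python
# How an FRB probes intervening plasma (illustrative numbers only): lower radio
# frequencies arrive later by dt ~= 4.149 ms * DM / nu_GHz^2, where DM (pc cm^-3)
# is the column density of free electrons along the path.
K_DM_MS = 4.149  # dispersion constant, ms GHz^2 cm^3 pc^-1

def dispersion_delay_ms(dm_pc_cm3, freq_ghz):
    """Arrival delay (ms) relative to a signal at infinite frequency."""
    return K_DM_MS * dm_pc_cm3 / freq_ghz ** 2

if __name__ == "__main__":
    dm = 800.0  # hypothetical dispersion measure, pc cm^-3
    for f in (1.2, 1.4, 1.6):  # GHz, roughly the Parkes observing band
        print(f"{f:.1f} GHz: delay {dispersion_delay_ms(dm, f):8.1f} ms")
    # The sweep of arrival time across the band is what encodes the electron
    # content of the intervening gas, i.e. the cosmic web along the sight line.
```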


News Article | December 19, 2016
Site: www.eurekalert.org

Alain Bonneville, a geophysicist at Pacific Northwest National Laboratory, will present details on the muon detector and the comparative field tests at the American Geophysical Union Fall Meeting in San Francisco. His talk is Thursday, December 15, 2016 at 5:40 p.m. in Moscone South, Room 307. Muons, once used to explore the inside of pyramids and volcanoes alike, are enabling researchers to see deep underground with a technological breakthrough from PNNL. Invisible to the naked eye, muons are elementary particles created by the collisions of cosmic rays with molecules in the atmosphere. Muons are constantly raining down on the earth at various angles. They can pass through materials, such as earth and rock, and detecting these particles has helped researchers "see" the inside of structures such as the pyramids of Giza. But the detectors -- which measure the number and trajectories of muons hitting the detector -- are rather large, about the size of a small car. In order to be able to "see" changes in density underground, the detectors need to be much smaller. PNNL researchers and their partners have created a smaller -- just six inches in diameter and about three feet long -- and more rugged version. This mini detector will be able to go thousands of feet underground via horizontal boreholes. The borehole-sized detectors are made out of plastic components and optical fibers that carry signals to electronics to count each muon that passes through the device. This summer, researchers tested its output against two existing large detectors in a tunnel at Los Alamos National Laboratory. The results were the same as those of the larger machines developed by LANL and Sandia National Laboratories. Like its big brothers, the small detector can measure anomalies in the flux of muons that pass through. A change in the number of muons hitting the detector during a certain period and space indicates a change in density within the structure or object -- for instance a plume or reservoir of carbon dioxide underground. The data convert to an image that could help monitor CO2 movement or leakage underground at a sequestration site, and the technique has applications for a wide variety of subsurface imaging. Interdisciplinary teams at Pacific Northwest National Laboratory address many of America's most pressing issues in energy, the environment and national security through advances in basic and applied science. Founded in 1965, PNNL employs 4,400 staff and has an annual budget of nearly $1 billion. It is managed by Battelle for the U.S. Department of Energy's Office of Science. As the single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information on PNNL, visit the PNNL News Center, or follow PNNL on Facebook, Google+, LinkedIn and Twitter.
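The counting principle described above, where a change in the number of muons recorded over a period signals a density change along their paths, can be illustrated with simple Poisson statistics. The counts and observation times below are invented, not results from the PNNL/LANL field tests.

```python
# Toy illustration of the counting principle: compare muon counts from a
# baseline period and a monitoring period and ask whether the rate change is
# larger than Poisson counting noise. All numbers here are invented.
import math

def rate_change_significance(n_baseline, t_baseline_h, n_monitor, t_monitor_h):
    """Return (fractional rate change, significance in sigma)."""
    r0, r1 = n_baseline / t_baseline_h, n_monitor / t_monitor_h
    # Poisson errors on the two measured rates, propagated to their difference.
    err = math.sqrt(n_baseline / t_baseline_h ** 2 + n_monitor / t_monitor_h ** 2)
    return (r1 - r0) / r0, (r1 - r0) / err

if __name__ == "__main__":
    frac, sigma = rate_change_significance(n_baseline=52000, t_baseline_h=240.0,
                                           n_monitor=50300, t_monitor_h=240.0)
    print(f"muon rate changed by {frac:+.1%} ({sigma:+.1f} sigma)")
    # A sustained, significant change in flux points to a density change
    # (for instance a migrating CO2 plume) along the muon paths above the detector.
```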


News Article | February 15, 2017
Site: www.realclimate.org

The Fourth Santa Fe Conference on Global & Regional Climate Change will be held on Feb 5-10, 2017. It is the fourth in a series organized and chaired by Petr Chylek of Los Alamos National Laboratory (LANL) and takes place at intervals of 5 years or thereabouts. It is sponsored this year by LANL’s Center for Earth and Space Science and co-sponsored by the American Meteorological Society. I attended the Third in the series, which was held the week of Oct 31, 2011. I reported on it here in my essay “Climate cynicism at the Santa Fe conference”. In that report, I described my experiences and interactions with other attendees, whose opinions and scientific competence spanned the entire spectrum of possibility. Christopher Monckton represented one extreme end-member, with no scientific credibility, total denial of facts, zero acknowledgment of uncertainty in his position, and complete belief in a global conspiracy to promote a global warming fraud. At the opposite end were respected professional climate scientists at the top of their fields, such as Richard Peltier and Gerald North. Others, such as Fred Singer and Bill Gray, occupied different parts of the multi-dimensional phase space, having credentials but also having embraced denial—each for their own reasons that probably didn’t intersect. For me, the Third Conference represented an opportunity to talk to people who held contrary opinions and who promoted factually incorrect information for reasons I did not understand. My main motivation for attending was to engage in dialogue with the contrarians and deniers, to try to understand them, and to try to get them to understand me. I came away on good terms with some (Bill Gray and I bonded over our common connection to Colorado State University, where I was an undergraduate physics student in the 1970s) but not so much with others. I was ambitious and submitted four abstracts. My colleagues and I were pursuing uncertainty quantification for climate change in collaboration with other DOE labs. I had been collaborating on several approaches to it, including betting markets, expert elicitation, and statistical surrogate models, so I submitted an abstract for each of those methods. I had also been working with Lloyd Keigwin, a senior scientist and oceanographer at Woods Hole Oceanographic Institution and another top-of-his-field researcher. We submitted an abstract together about his paleotemperature reconstruction of Sargasso Sea surface temperature, which is probably the most widely reproduced paleoclimate time series other than the Mann et al. “Hockey Stick” graph. I had updated it with modern SST measurements, and in our abstract we pointed out that it had been misused by contrarians who had removed some of the data, replotted it, and mislabeled it to falsely claim that it was a global temperature record showing a cooling trend. The graph continues to make appearances. On March 23, 2000, ExxonMobil took out an advertisement in the New York Times claiming that global warming was “Unsettled Science”. The ad was illustrated with a doctored version of Lloyd’s graph (the inconvenient modern temperature data showing a warming trend had been removed). This drawing was very similar to one that had been generated by climate denier Art Robinson and his son for a Wall Street Journal editorial a couple of months earlier. It wasn’t long before other distorted versions started showing up elsewhere, such as the Albuquerque Journal opinion page. 
The 2000 ExxonMobil version was just entered into the Congressional Record last week by Senator Tim Kaine during the Tillerson confirmation hearings. In 2011, my abstracts on betting, expert elicitation, and statistical models were all accepted, and I presented them. But the abstract that Lloyd and I submitted was unilaterally rejected by Chylek who said, “This Conference is not a suitable forum for [the] type of presentations described in [the] submitted abstract. We would accept a paper that spoke to the science, the measurements, the interpretation, but not simply an attempted refutation of someone else’s assertions (especially when made in unpublished reports and blog site).” The unpublished report he spoke of was the NIPCC/Heartland Institute report, which Fred Singer was there to discuss. After the conference, I spoke to one of the co-chairs about the reasons for the rejection. He said that he hadn’t seen it and did not agree with the reasons for the rejection. He encouraged Lloyd and me to re-submit it again for the 4th conference. So we did. Lloyd sent the following slightly-revised version on January 4. Keigwin (Science 274:1504–1508, 1996) reconstructed the SST record in the northern Sargasso Sea to document natural climate variability in recent millennia. The annual average SST proxy used δ18O in planktonic foraminifera in a radiocarbon-dated 1990 Bermuda Rise box core. Keigwin’s Fig. 4B (K4B) shows a 50-year-averaged time series along with four decades of SST measurements from Station S near Bermuda, demonstrating that at the time of publication, the Sargasso Sea was at its warmest in more than 400 years, and well above the most recent box-core temperature. Taken together, Station S and paleotemperatures suggest there was an acceleration of warming in the 20th century, though this was not an explicit conclusion of the paper. Keigwin concluded that anthropogenic warming may be superposed on a natural warming trend. In a paper circulated with the anti-Kyoto “Oregon Petition,” Robinson et al. (“Environmental Effects of Increased Atmospheric Carbon Dioxide,” 1998) reproduced K4B but (1) omitted Station S data, (2) incorrectly stated that the time series ended in 1975, (3) conflated Sargasso Sea data with global temperature, and (4) falsely claimed that Keigwin showed global temperatures “are still a little below the average for the past 3,000 years.” Slight variations of Robinson et al. (1998) have been repeatedly published with different author rotations. Various mislabeled, improperly-drawn, and distorted versions of K4B have appeared in the Wall Street Journal, in weblogs, and even as an editorial cartoon—all supporting baseless claims that current temperatures are lower than the long term mean, and traceable to Robinson’s misrepresentation with Station S data removed. In 2007, Robinson added a fictitious 2006 temperature that is significantly lower than the measured data. This doctored version of K4B with fabricated data was reprinted in a 2008 Heartland Institute advocacy report, “Nature, Not Human Activity, Rules the Climate.” On Jan. 9, Lloyd and I got a terse rejection from Chylek: “Not accepted. 
The committee finding was that the abstract did not indicate that the presentation would provide additional science that would be appropriate for the conference.” I had also submitted an abstract with Stephen Lewandowsky and James Risbey called “Bets reveal people’s opinions on climate change and illustrate the statistics of climate change,” and a companion poster entitled “Forty years of expert opinion on global warming: 1977-2017” in which we proposed to survey the conference attendees: Forecasts of anthropogenic global warming in the 1970s (e.g. Broecker, 1975, Charney et al., 1979) were taken seriously by policy makers. At that time, climate change was already broadly recognized within the US defense and intelligence establishments as a threat to national and global security, particularly due to climate’s effect on food production. There was uncertainty about the degree of global warming, and media-hyped speculation about global cooling confused the public. Because science-informed policy decisions needed to be made in the face of this uncertainty, the US Department of Defense funded a study in 1977 by National Defense University (NDU) called “Climate Change to the Year 2000” in which a panel of experts was surveyed. Contrary to the recent mythology of a global cooling scare in the 1970s, the NDU report (published in 1978) concluded that, “Collectively, the respondents tended to anticipate a slight global warming rather than a cooling”. Despite the rapid global warming since 1977, this subject remains politically contentious. We propose to use our poster presentation to survey the attendees of the Fourth Santa Fe Conference on Global and Regional Climate Change and to determine how expert opinion has changed in the last 40 years. I had attempted a similar project at the 3rd conference with my poster “Comparison of Climate Forecasts: Expert Opinions vs. Prediction Markets” in which my abstract proposed the following: “As an experiment, we will ask participants to go on the record with estimates of probability that the global temperature anomaly for calendar year 2012 will be equal to or greater than x, where x ranges in increments of 0.05 °C from 0.30 to 1.10 °C (relative to the 1951-1980 base period, and published by NASA GISS).” I included a table for participants to fill in, and even printed extra sheets to tack up on the board with my poster so I could compile them and report them later. This idea was a spinoff of work I had presented at an unclassified session of the 2006 International Conference on Intelligence Analysis on my research in support of the US intelligence community for which a broad spectrum of opinion must be used to generate an actionable consensus with incomplete or conflicting information. That was certainly the case in Santa Fe, where there were individuals (e.g. Don Easterbrook) who were going on record with predictions of global cooling. By the last day of the conference, several individuals had filled in the table with their probabilistic predictions and I decided to leave my poster up until the end of the day, which was how long they could be displayed according to the conference program. I wanted to plug it during my oral presentation on prediction markets so that I could get more participation. Unfortunately when I returned to the display room, my poster had been removed. Hotel employees did not know where it was, and the diverse probability estimates were lost. This year I would be more careful, as announced in my abstract. 
But the committee would have no part of it. On Jan 10 I got my rejection letter: Of the hundreds of abstracts I’ve submitted, this is the only conference that’s ever rejected one. As a frequent session convener and program committee chair myself, I am accustomed to providing poster space for abstracts that I might question, misunderstand, or disagree with. It has never occurred to me to look at the publication list of a poster presenter. But if I were to do that, I would be more thorough and look at other information, including their coauthors’ publication lists and CVs as well. In this case, the committee might have discovered more than a few papers by one of them on the subject, such as Risbey and Kandlikar (2002) “Expert Assessment of Uncertainties in Detection and Attribution of Climate Change” in the Bulletin of the American Meteorological Society, or that Prof. Risbey was a faculty member in Granger Morgan’s Engineering and Public Policy department at CMU for five years, a place awash in expert elicitation of climate (I sent my abstract to Prof. Morgan–who I know from my AGU uncertainty quantification days–for his opinion before submitting it to the conference). At the very least, I would look at the previous work cited in the abstract. The committee would not have been puzzled by how to transform survey data into probabilistic projections if they had done so. They would have learned that the 1978 NDU study we cited had already established the methodology we were proposing to use. The NDU “Task I” was “To define and estimate the likelihood of changes in climate during the next 25 years…” using ten survey questions described in Chapter One (Methodology). The first survey question was on average global temperature. So the legitimacy of the method we were planning to use was established 40 years ago. I concluded after the 3rd Santa Fe conference that cynicism was the only attribute that was shared by the minority of attendees who were deniers, contrarians, publicity-seekers, enablers, or provocateurs. I now think that cynicism has something in common with greenhouse gases. Cynicism begets cynicism, to the detriment of society. There are natural-born cynics, and if they turn the rest of us into cynics then we are their amplifiers, just like water vapor is an amplifier of carbon dioxide’s greenhouse effect. We become part of a cynical feedback loop that generates distrust in science and the scientific method. I refuse to let that happen. I might have gotten a little steamed by an unfair or inappropriate rejection, but I’ve cooled off and my induced cynicism has condensed now. I am not going to assume that everyone is a cynic just because of a couple of misguided and misinformed decisions. As President Obama said in his farewell address, “If you’re tired of arguing with strangers on the Internet, try talking with one of them in real life.” So if you are attending the Santa Fe conference, I would like to meet with you. If you are flying into Albuquerque, where I live, drop me a line. Or meet me for a drink or dinner in Santa Fe. I can show you why Lloyd’s research really does provide additional science that is relevant to the conference. I can try to convince you that prediction markets are indeed superior to expert elicitation in their ability to forecast climate change. Maybe I can even talk you into going on record with your own probabilistic global warming forecast!
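For readers curious how the survey described in the post could be turned into a consensus forecast, here is one minimal possibility: a linear opinion pool over the exceedance probabilities collected at each threshold. The responses below are invented stand-ins (the real sheets were lost with the poster), and this is not necessarily the aggregation method the author had in mind.

```python
# Hypothetical illustration of pooling the kind of survey described above:
# each respondent gives P(anomaly >= x) for thresholds x = 0.30, 0.35, ..., 1.10 C
# (relative to the 1951-1980 base period). The three responses are invented.
import numpy as np

thresholds = np.arange(0.30, 1.10 + 1e-9, 0.05)       # deg C, per the abstract

# Each row: one respondent's exceedance probabilities (non-increasing in x).
responses = np.array([
    np.clip(1.0 - (thresholds - 0.45) / 0.4, 0, 1),   # expects roughly 0.5-0.6 C
    np.clip(1.0 - (thresholds - 0.60) / 0.3, 0, 1),   # expects roughly 0.6-0.75 C
    np.clip(1.0 - (thresholds - 0.30) / 0.5, 0, 1),   # cooler outlook
])

pooled = responses.mean(axis=0)                        # linear opinion pool
for x, p in zip(thresholds, pooled):
    print(f"P(anomaly >= {x:.2f} C) = {p:.2f}")
```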


News Article | November 10, 2015
Site: www.scientificcomputing.com

LOS ALAMOS, New Mexico and BERKELEY, CA — Three of the Department of Energy's leading national laboratories are working together to solve some of the world's most challenging problems by ensuring that the nation's scientific community has access to leading-edge computing systems to carry out their research. Los Alamos, Lawrence Berkeley and Sandia national laboratories have formed the Alliance for Application Performance at Extreme Scale (APEX) to focus on the design, acquisition and deployment of future advanced technology high performance computing systems. Over the years, each of the laboratories has independently deployed world-leading supercomputers to support their respective scientific missions. In joining together, they aim to work even more closely with vendors to shape the future of supercomputer design to deliver ever-more capable systems to solve problems of national importance. “The supercomputing community is entering a time that is both exciting and challenging as architectures evolve to move us closer to exascale systems,” said Gary Grider, High Performance Computing Division Leader at Los Alamos. “The supercomputers of the future will not only be extremely powerful, but will also need to be much more energy efficient and resilient than current designs. APEX is a critical force in achieving these goals for the nation.” APEX will develop future computing technology with two new advanced computing systems, “Crossroads” for the New Mexico Alliance for Computing at Extreme Scale (ACES, a Los Alamos and Sandia national laboratories partnership), and “NERSC-9” for the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab in the 2020 time-frame. Both new computing platforms will focus on providing increased application performance and capability in addition to the deployment of advanced technology concepts necessary to meet the mission needs of the Department of Energy/National Nuclear Security Administration's Advanced Simulation and Computing (ASC) Program and the DOE Office of Science Advanced Scientific Computing Research (ASCR) Program. This partnership allows the ASC Program and the ASCR Program to work together to leverage investments and increase the cost-effectiveness in acquisition of yet-to-be-deployed systems. APEX will also allow the Laboratories to pursue areas of collaboration with vendors providing substantial value to the Crossroads and NERSC-9 systems in the areas of increased application performance, increased workflow efficiency, and enhanced reliability of the systems. “APEX’s focus is on applications and enabling the solution of problems that can’t be solved using today’s supercomputers,” said NERSC Director Sudip Dosanjh, who led the extreme-scale computing program at Sandia and was co-director of ACES before assuming the leadership of NERSC in 2012. “At the same time, we must ensure that our users can effectively use these new systems. With the proven expertise of our three laboratories combined, we are confident that APEX will develop and deploy technologies that further DOE’s mission in science and national security.” The overarching, long-term goal is to develop systems and technology in order to enable the most complex computational simulations for the DOE. The complexity of questions being asked in fields such as national security, clean energy, materials science, climate change and astrophysics is driving the mission need for larger supercomputers. 
APEX is also charged with delivering the systems along the pathway to exascale as laid out by the DOE Exascale Computing Initiative, as well as the National Strategic Computing Initiative. ACES and NERSC began collaborating in 2010 with the procurement and deployment of two similar systems, “Cielo” at Los Alamos, and “Hopper” at NERSC, both Cray XE6 systems. The partnership was formalized and strengthened with the Trinity/NERSC-8 joint procurement, which resulted in the acquisition of two Cray XC40 systems: “Trinity” at Los Alamos, and “Cori” at NERSC. The APEX collaboration continues to build on this shared history between the two organizations. Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, The Babcock & Wilcox Company, and URS for the Department of Energy’s National Nuclear Security Administration. Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, NM, and Livermore, CA, Sandia has major R&D responsibilities in national security, energy and environmental technologies and economic competitiveness. About Berkeley Lab and the National Energy Research Scientific Computing Center Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab has scientific expertise that has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. As the leading supercomputing center for the DOE Office of Science, NERSC supports nearly 6,000 users at national laboratories and universities across the country. As each new system is planned, the center needs to ensure that it has the capability and capacity to meet the wide range of computing requirements from the center’s user community. NERSC is a division of the Lawrence Berkeley National Laboratory.


News Article | December 8, 2016
Site: www.prnewswire.co.uk

- Cybersecurity Ventures ranks EY 15th out of all global cybersecurity organizations LONDON, Dec. 8, 2016 /PRNewswire/ -- EY has been named one of the world's top cybersecurity consulting organizations to watch in 2017 by Cybersecurity Ventures, a global research and publishing company focused on the cybersecurity industry. Of the 500 global companies listed in the Cybersecurity 500 report, the EY organization was ranked number 15 overall, ahead of other consulting organizations that provide cybersecurity products and services. "We are proud to be one of the highest ranked cybersecurity consulting companies in the report. Our ranking reflects our deep understanding of cyber threats and our close collaboration with clients worldwide. Cybersecurity Ventures' recognition is testament to EY's industry-leading approach, which is focused on clients' entire cyber ecosystem, including the 'human element,' which is so critical to addressing cybersecurity challenges. This acknowledgement by Cybersecurity Ventures confirms the hard work of the EY cybersecurity and risk teams and the recent investments we have made in these services." Chad Holmes, Principal of Ernst & Young LLP in the U.S. and Cybersecurity Leader, says: "Cybersecurity breaches are always a matter of 'when', not 'if.' That requires an enterprise-wide approach with experienced people, efficient processes and innovative technology. The launch of our Managed Security Operations Center in June 2015 and the US firm's alliance with the US-based Los Alamos National Laboratory, formed in August 2015, have been significant investments over the past year." The EY risk and cybersecurity group has more than 14,500 practitioners in member firms in more than 150 countries. They advise on large, transformational cybersecurity improvement programs that focus on cyber threat management, identity and access management, data protection, privacy and "post-incident" services. EY services are end-to-end — from assessments to managed services — and underpinned by robotics and analytics capabilities. EY is a global leader in assurance, tax, transaction and advisory services. The insights and quality services we deliver help build trust and confidence in the capital markets and in economies the world over. We develop outstanding leaders who team to deliver on our promises to all of our stakeholders. In so doing, we play a critical role in building a better working world for our people, for our clients and for our communities. EY refers to the global organization, and may refer to one or more of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients. For more information about our organization, please visit ey.com. This news release has been issued by EYGM Limited, a member of the global EY organization that also does not provide any services to clients. To download a copy of Cybersecurity Ventures' Cybersecurity 500 report, please visit: cybersecurityventures.com/cybersecurity-500 To learn more about EY's global cybersecurity practice, please visit: ey.com/gl/en/services/advisory/ey-cybersecurity


News Article | November 3, 2016
Site: www.chromatographytechniques.com

Scientists have measured the catastrophic genetic damage caused by smoking in different organs of the body and identified several different mechanisms by which tobacco smoking causes mutations in DNA. Researchers at the Wellcome Trust Sanger Institute, the Los Alamos National Laboratory and their collaborators found smokers accumulated an average of 150 extra mutations in every lung cell for each year of smoking one packet of cigarettes a day. Reported in the journal Science, the study provides a direct link between the number of cigarettes smoked in a lifetime and the number of mutations in the tumor DNA. The highest mutation rates were seen in the lung cancers but tumors in other parts of the body also contained these smoking-associated mutations, explaining how smoking causes many types of human cancer. Tobacco smoking claims the lives of at least 6 million people every year and, if current trends continue, the World Health Organization predicts more than 1 billion tobacco-related deaths in this century. Smoking has been epidemiologically associated with at least 17 types of human cancer, but until now no one has seen the mechanisms by which smoking causes many of these cancer types. Cancer is caused by mutations in the DNA of a cell. In the first comprehensive analysis of the DNA of cancers linked to smoking, researchers studied over 5,000 tumors, comparing cancers from smokers with cancers from people who had never smoked. They found particular molecular fingerprints of DNA damage - called mutational signatures - in the smokers' DNA, and counted how many of these particular mutations were found in the different tumors. The authors found that, on average, smoking a pack of cigarettes a day led to 150 mutations in each lung cell every year. These mutations represent individual potential start points for a cascade of genetic damage that can eventually lead to cancer. The numbers of mutations within any cancer cell will vary between individuals, but this study shows the additional mutational load caused by tobacco. "Before now, we had a large body of epidemiological evidence linking smoking with cancer, but now we can actually observe and quantify the molecular changes in the DNA due to cigarette smoking," said Ludmil Alexandrov, first author from Los Alamos National Laboratory. "With this study, we have found that people who smoke a pack a day develop an average of 150 extra mutations in their lungs every year, which explains why smokers have such a higher risk of developing lung cancer." Other organs were also affected, with the study showing that a pack a day led to an estimated average 97 mutations in each cell in the larynx, 39 mutations for the pharynx, 23 mutations for the mouth, 18 mutations for the bladder and six mutations in every cell of the liver each year. Until now, it has not been fully understood how smoking increases the risk of developing cancer in parts of the body that don't come into direct contact with smoke. However, the study revealed different mechanisms by which tobacco smoking causes these mutations, depending on the area of the body affected. "The results are a mixture of the expected and unexpected, and reveal a picture of direct and indirect effects. Mutations caused by direct DNA damage from carcinogens in tobacco were seen mainly in organs that come into direct contact with inhaled smoke.
In contrast, other cells of the body suffered only indirect damage, as tobacco smoking seems to affect key mechanisms in these cells that in turn mutate DNA," said David Phillips, an author on the paper and professor of environmental carcinogenesis at King's College London. The study revealed at least five distinct processes of DNA damage due to cigarette smoking. The most widespread of these is a mutational signature already found in all cancers. In this case, tobacco smoking seems to accelerate the speed of a cellular clock that mutates DNA prematurely. "The genome of every cancer provides a kind of 'archaeological record,' written in the DNA code itself, of the exposures that caused the mutations that lead to the cancer. Our research indicates that the way tobacco smoking causes cancer is more complex than we thought. Indeed, we do not fully understand the underlying causes of many types of cancer and there are other known causes, such as obesity, about which we understand little of the underlying mechanism. This study of smoking tells us that looking in the DNA of cancers can provide provocative new clues to how cancers develop and thus, potentially, how they can be prevented," said Sir Mike Stratton, joint lead author from the Wellcome Trust Sanger Institute.
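To make the reported per-organ rates concrete, here is a minimal Python sketch that scales them by pack-years. The linear pack-year scaling and the helper function are illustrative assumptions for this sketch, not part of the study's methods.

```python
# Illustrative arithmetic only: scale the per-organ annual rates quoted in the
# article (extra mutations per cell, per year, for one pack a day) by pack-years.
# The linear scaling and this helper are assumptions for the sketch, not part of
# the published analysis.

REPORTED_RATES = {   # extra mutations per cell per pack-year (from the article)
    "lung": 150, "larynx": 97, "pharynx": 39,
    "mouth": 23, "bladder": 18, "liver": 6,
}

def estimated_extra_mutations(packs_per_day: float, years: float) -> dict:
    """Rough per-organ estimate of extra mutations accumulated per cell."""
    pack_years = packs_per_day * years
    return {organ: rate * pack_years for organ, rate in REPORTED_RATES.items()}

# Example: one pack a day for 30 years -> roughly 4,500 extra mutations per lung cell.
for organ, n in estimated_extra_mutations(1.0, 30).items():
    print(f"{organ:>8s}: ~{n:,.0f} extra mutations per cell")
```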


News Article | September 6, 2016
Site: www.theenergycollective.com

Southern California Gas Company (SoCalGas) can meet its summer and winter peak demand without the Aliso Canyon gas storage facility, despite the company’s repeated warnings that blackouts could occur if Aliso stays offline. This is according to a new report prepared for Food & Water Watch and the community group Save Porter Ranch. The report, Critical Review of Aliso Canyon Winter Risk Assessment and Action Plan, by Bill Powers, P.E. of Powers Engineering, finds that as long as existing mitigation measures remain in place, Aliso Canyon, one of the largest natural gas storage fields in the western U.S. and site of the largest methane leak in U.S. history, is not needed to guarantee either summer or winter gas supplies in the Los Angeles Basin. In Thursday’s report, Powers refutes SoCalGas claims that permanent closure of Aliso Canyon would threaten gas supplies during peak usage by critiquing two documents released this year by California state regulators. Powers says those documents, a winter risk assessment released in April and a winter action plan released last month, use faulty assumptions about usage and standard mitigation practices and then “editorialize” to create an impression that Aliso Canyon is needed. The April report, created by the California Public Utilities Commission, the California Energy Commission, the California Independent System Operator, the Los Angeles Department of Water and Power, and SoCalGas, was the more problematic of the two reports, Powers tells DeSmog. “[The agencies] were hammered back in April, and not just by me, for crying wolf by using obsolete numbers to do their modeling,” Powers says. “In the August report, which SoCalGas was not a part of, more accurate data is used but it’s presented with misleading language to reach the conclusions they want.” Both reports require more expertise to connect those dots than a layperson has, Powers adds. Powers points out one key way the winter action plan presents data in what he calls an “uncontrolled” manner. The plan’s scenario assumes, without Aliso Canyon, a usage of 5.2 billion cubic feet per day (bcfd) during a peak winter event in 2016-2017. But the same report also states that, factoring in the new winter mitigation measures and continuing those from summer 2016, the maximum demand would be much lower, in the range of 4.1 to 4.2 bcfd. “They frame mitigation factors like balancing supply and demand and limiting the generators as a curtailment. It’s not. It’s a good management technique,” says Powers. Powers also criticizes the winter action plan for its assumption that shifting generation outside of the L.A. Basin to minimize natural gas demand on forecasted peak winter days amounts to a “curtailment” of service. As he says in the report: Aliso Canyon contains less than 20 percent of its capacity after a four-month-long leak that began with the October 23, 2015 blowout, which forced the company to release much of the facility’s supply. The state then ordered a moratorium on new natural gas injections and storage at the plant while inspectors checked the wells for safety. Without Aliso Canyon, three other storage facilities owned by SoCalGas — La Goleta Gas Field west of Santa Barbara, Honor Rancho near Newhall, and Playa del Rey — still produce an “overabundance” of gas, according to Powers and Food & Water Watch.
State regulators and Southern California Gas warned in their April report — and continued to claim for months — that without Aliso Canyon, Southern Californians could face as many as 14 days of blackouts this summer, a threat that many residents of nearby Porter Ranch called “blackout blackmail.” As of September 2, the L.A. basin has seen none of the threatened blackouts. Powers says the company’s scare tactics were “bogus” from the beginning. SoCalGas maintains its position that Aliso Canyon must reopen to meet peak winter demands for gas in the region. A spokesperson for the company, Melissa Bailey, told DeSmog that SoCalGas executives have not seen the report by Powers and Food & Water Watch, but she pointed to separate and independent analysis conducted by Los Alamos National Laboratory that “concluded that the facility is an integral part of the SoCalGas system” and still necessary to handle potential shortages. “Restoring natural gas injection at Aliso Canyon to support energy reliability for the winter heating season is critical and without the facility there is an increased risk of natural gas curtailments affecting electric generation plants, hospitals, manufacturers, refineries, and other large users,” says Bailey. “That’s exactly the kind of muddy language I highlighted in my review,” Powers says about Bailey’s statement. “It’s politicizing with language. Their conclusions are not supported by their own numbers.” Powers’ review also shows that keeping Aliso Canyon closed could save ratepayers about $70 million a year. “The good news is, these documents by state agencies, no matter how they spin it, actually show a road map for not using Aliso,” Powers says. The Aliso Canyon storage facility leak, which released an estimated 100,000 metric tons of natural gas, caused 8,000 residents, mainly in the nearby Porter Ranch community, to be relocated for months. Some residents who have moved back still complain of headaches, nosebleeds, and nausea. Some have decided to leave permanently. Residents who want the facility shut down say Powers’ conclusions are vindication of their position. “There’s a greater chance of a big earthquake happening here than a blackout if Aliso Canyon is shut,” Save Porter Ranch President Matt Pakucko told dozens of protesters rallying near the facility on September 1. At the same rally, Alex Nagy, senior organizer with Food & Water Watch, told protesters that SoCalGas is still using scare tactics in order to keep Aliso Canyon open because it’s a huge money maker. “SoCalGas stores gas here and does commodities trading, selling it to the highest bidder, to other states and to Mexico,” she said. In a visit to the site during the leak this past winter, U.S. Secretary of Energy Ernest Moniz said Aliso Canyon showed the need for a fresh look at gas storage safety standards nationwide. One potential reform may be signed by California Governor Jerry Brown this month. On August 26, the California Senate sent SB 887, a bill establishing proactive safety standards for natural gas wells, to the governor. SB 887, approved overwhelmingly by both houses of the legislature, calls for continuous monitoring for leaks, regular inspections of wells to test their mechanical integrity, and that all wells be equipped with safeguards to ensure that no single point of failure can result in a leak. 
The standards in the bill are aligned with new regulations proposed by the Division of Oil, Gas & Geothermal Resources, and could become a national model for safety reforms, according to its author, Senator Fran Pavley. The Senate sent a companion bill to Gov. Brown last week. SB 888, authored by Senator Ben Allen, establishes the state Office of Emergency Services as the lead agency to coordinate responses to any future gas leaks. Gov. Brown has until September 30 to act on all bills sent to him by the legislature in August.


News Article | November 17, 2015
Site: www.rdmag.com

Since the first laser was invented in 1960, lasers have always given off heat -- either as a useful tool, a byproduct or a fictional way to vanquish intergalactic enemies. But those concentrated beams of light have never been able to cool liquids. Univ. of Washington researchers are the first to solve a decades-old puzzle -- figuring out how to make a laser refrigerate water and other liquids under real-world conditions. In a study to be published the week of Nov. 16 in the Proceedings of the National Academy of Sciences, the team used an infrared laser to cool water by about 36 degrees Fahrenheit -- a major breakthrough in the field. "Typically, when you go to the movies and see Star Wars laser blasters, they heat things up. This is the first example of a laser beam that will refrigerate liquids like water under everyday conditions," said senior author Peter Pauzauskie, UW assistant professor of materials science and engineering. "It was really an open question as to whether this could be done because normally water warms when illuminated." The discovery could help industrial users "point cool" tiny areas with a focused point of light. Microprocessors, for instance, might someday use a laser beam to cool specific components in computer chips to prevent overheating and enable more efficient information processing. Scientists could also use a laser beam to precisely cool a portion of a cell as it divides or repairs itself, essentially slowing these rapid processes down and giving researchers the opportunity to see how they work. Or they could cool a single neuron in a network -- essentially silencing without damaging it -- to see how its neighbors bypass it and rewire themselves. "There's a lot of interest in how cells divide and how molecules and enzymes function, and it's never been possible before to refrigerate them to study their properties," said Pauzauskie, who is also a scientist at the U.S. Department of Energy's Pacific Northwest National Laboratory in Richland, Washington. "Using laser cooling, it may be possible to prepare slow-motion movies of life in action. And the advantage is that you don't have to cool the entire cell, which could kill it or change its behavior." The UW team chose infrared light for its cooling laser with biological applications in mind, as visible light could give cells a damaging "sunburn." They demonstrated that the laser could refrigerate saline solution and cell culture media that are commonly used in genetic and molecular research. To achieve the breakthrough, the UW team used a material commonly found in commercial lasers but essentially ran the laser phenomenon in reverse. They illuminated a single microscopic crystal suspended in water with infrared laser light to excite a unique kind of glow that has slightly more energy than the amount of light absorbed. This higher-energy glow carries heat away from both the crystal and the water surrounding it. The laser refrigeration process was first demonstrated in vacuum conditions at Los Alamos National Laboratory in 1995, but it has taken nearly 20 years to demonstrate this process in liquids. Typically, growing laser crystals is an expensive process that requires lots of time and can cost thousands of dollars to produce just a single gram of material. The UW team also demonstrated that a low-cost hydrothermal process can be used to manufacture a well-known laser crystal for laser refrigeration applications in a faster, inexpensive and scalable way.
The UW team also designed an instrument that uses a laser trap -- akin to a microscopic tractor beam -- to "hold" a single nanocrystal surrounded by liquid in a chamber and illuminate it with the laser. To determine whether the liquid is cooling, the instrument also projects the particle's "shadow" in a way that allows the researchers to observe minute changes in its motion. As the surrounding liquid cools, the trapped particle slows down, allowing the team to clearly observe the refrigerating effect. They also designed the crystal to change from a blueish-green to a reddish-green color as it cools, like a built-in color thermometer. "The real challenge of the project was building an instrument and devising a method capable of determining the temperature of these nanocrystals using signatures of the same light that was used to trap them," said lead author Paden Roder, who recently received his doctorate from the UW in materials science and engineering and now works at Intel Corp. So far, the UW team has only demonstrated the cooling effect with a single nanocrystal, as exciting multiple crystals would require more laser power. The laser refrigeration process is currently quite energy intensive, Pauzauskie said, and future steps include looking for ways to improve its efficiency. One day the cooling technology itself might be used to enable higher-power lasers for manufacturing, telecommunications or defense applications, as higher-powered lasers tend to overheat and melt down. "Few people have thought about how they could use this technology to solve problems because using lasers to refrigerate liquids hasn't been possible before," he said. "We are interested in the ideas other scientists or businesses might have for how this might impact their basic research or bottom line."
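For readers who want the energy bookkeeping behind that "higher-energy glow" in numbers, the sketch below evaluates the textbook ideal limit for anti-Stokes fluorescence cooling, in which the fractional cooling efficiency is the pump wavelength divided by the mean fluorescence wavelength, minus one. The wavelengths and absorbed power used here are assumed, illustrative values for a ytterbium-doped crystal, not the UW experiment's actual parameters.

```python
# A minimal sketch of the ideal anti-Stokes cooling limit: each absorbed pump
# photon is re-emitted at a slightly shorter wavelength (higher energy), and the
# energy difference is drawn from the crystal and surrounding water as heat.
# The numbers below are illustrative assumptions, not the UW team's parameters.

def ideal_cooling_efficiency(pump_nm: float, mean_fluorescence_nm: float) -> float:
    """Fraction of absorbed pump power removable as heat in the ideal limit:
    eta = lambda_pump / lambda_fluorescence - 1 (unit quantum efficiency assumed)."""
    return pump_nm / mean_fluorescence_nm - 1.0

eta = ideal_cooling_efficiency(pump_nm=1020.0, mean_fluorescence_nm=1000.0)
absorbed_mw = 50.0  # assumed absorbed pump power, in milliwatts
print(f"ideal cooling efficiency: ~{eta:.1%}")        # ~2.0%
print(f"maximum heat lift: ~{eta * absorbed_mw:.1f} mW")
```

The small percentage is the point: only a few percent of the absorbed power can leave as heat even in the ideal case, which is one reason the process is energy intensive and took decades to demonstrate in liquids.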


3M will supply Plug Power Inc. with membrane electrode assemblies (MEAs) to be used in Plug Power designed proton exchange membrane (PEM) fuel cell stacks under a new strategic supply agreement. Through this strategic supply agreement, fuel cell stacks will be manufactured in Plug Power’s Latham, NY and Spokane, WA facilities to support the $20-billion material handling market. Additionally, this new fuel cell stack technology will be utilized to expand Plug Power’s presence into hydrogen-enabled electric vehicle applications outside of the material handling market. Fuel cell systems using the 3M MEAs and Plug Power stack design will begin shipments in the fourth quarter of 2015. 3M offers a broad portfolio of solutions for fuel cell construction—for low- and high-temperature applications—including fluoropolymers, membrane materials and advanced ceramics. 3M has applied its expertise in electrochemistry, fluoropolymer membrane, and high throughput manufacturing for advanced fuel cell MEAs. 3M Dyneon Fluoropolymers feature excellent chemical and temperature resistance, weatherability, permeation control and low surface energy—making them highly suited for PEM applications. For example, gaskets made from 3M Dyneon Ultra Low Viscosity Fluoroelastomers cure at low temperatures for seal-in-place solutions. Barrier films can be made from thermoplastics such as 3M Dyneon THV, which imparts oxidative stability; 3M thermoplastics also function as binders for bipolar plate applications. 3M’s fully integrated membrane electrode assembly is based on its catalyst coated membrane (CCM). The CCMs feature integrated, thrifted subgasketed construction, and combine low precious metal loadings with strong performance and durability. 3M researchers have been working for several years on a DOE-funded project to develop a durable, low-cost, robust, high-performance membrane electrode assembly (MEA) for transportation applications that is able to meet or exceed US Department of Energy 2020 MEA targets. Partners in the three-year, $4.6-million project include Johns Hopkins University, Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, Michigan Technological University, Freudenberg FCCT, Argonne National Laboratory, Los Alamos National Laboratory and GM. The approach 3M took with this advanced MEA research project was to optimize the integration of advanced anode and cathode catalysts, based on 3M’s nanostructured thin film (NSTF) catalyst technology platform, with next generation PFSA PEMs (polymer electrolyte membranes based on perfluorosulfonic acid); gas diffusion media; cathode interfacial layers; and flow fields to improve overall MEA performance, durability, robustness, and cost. Plug Power’s GenKey provides an all-inclusive package for customers, incorporating GenFuel hydrogen and fueling infrastructure, GenCare aftermarket service and either GenDrive or ReliOn fuel cell systems. GenDrive, a lead-acid battery replacement, is used in electric lift trucks in high-throughput material handling applications. With more than 9,000 GenDrive units deployed with material handling customers, GenDrive has been proven reliable with over 107 million hours of runtime.


News Article | January 22, 2016
Site: www.greencarcongress.com

The US Department of Energy (DOE) is establishing a new consortium to support a vision of developing and deploying lightweight materials twice as fast, at a fraction of the cost, while increasing US competitiveness in manufacturing. “It can take as long as 20 years to get a lightweight material from concept to market. If DOE can help cut that by 10 years, the savings and boost to US industry will be substantial. With our national laboratories’ expertise in lightweight materials built on a framework of industry engagement and support, LightMAT can put more of DOE’s resources to work and have a real impact,” said Reuben Sarkar, deputy assistant secretary for sustainable transportation in DOE’s Office of Energy Efficiency & Renewable Energy. DOE’s Pacific Northwest National Laboratory will manage the new Lightweight Materials National Lab Consortium (LightMAT)—a network of nine national labs with technical capabilities that are highly relevant to lightweight materials development and use. LightMAT will provide a single point of contact which will match industry-led research teams with specific expertise and equipment found only at the national labs. PNNL, in collaboration with the eight other DOE national laboratories, is developing a virtual storefront to showcase the capabilities of participating laboratories. Industry partners will now be able to engage LightMAT assistance through the LightMAT website. Any US company, large or small, is able to seek assistance to locate and use strategic resources to accelerate lightweight materials research & development. Along with access to national laboratory capabilities, all industrial partners will be able to use collaboration tools and data aggregation tools that are being developed as part of LightMAT. This includes simplified work and technology transfer agreements, and a LightMAT non-disclosure agreement that allows access to all participating laboratories through a single document. To date, the LightMAT network includes Ames Laboratory; Argonne National Laboratory; Idaho National Laboratory; Los Alamos National Laboratory; Lawrence Livermore National Laboratory; National Renewable Energy Laboratory; Oak Ridge National Laboratory; Pacific Northwest National Laboratory; and Sandia National Laboratories.


News Article | December 22, 2015
Site: phys.org

Plutonium-238 produces heat as it decays and can be used in systems that power spacecraft instruments. The new sample, which is in the same oxide powder form used to manufacture heat sources for power systems, represents the first end-to-end demonstration of a plutonium-238 production capability in the United States since the Savannah River Plant in South Carolina ceased production of the material in the late 1980s. Researchers will analyze the sample for chemical purity and plutonium-238 content, then verify production efficiency models and determine whether adjustments need to be made before scaling up the process. "Once we automate and scale up the process, the nation will have a long-range capability to produce radioisotope power systems such as those used by NASA for deep space exploration," said Bob Wham, who leads the project for the lab's Nuclear Security and Isotope Technology Division. The success of Wham and a team of engineers and technicians at ORNL comes two years after NASA began funding the DOE Office of Nuclear Energy through a roughly $15 million per year effort to revive the department's capability to make plutonium-238. Production begins at Idaho National Laboratory, which stores the existing inventory of neptunium-237 feedstock and ships it as needed to ORNL. Engineers mix the neptunium oxide with aluminum and press the mixture into high-density pellets. They use the High Flux Isotope Reactor, a DOE Office of Science User Facility at ORNL, to irradiate the pellets, creating neptunium-238, which quickly decays and becomes plutonium-238. The irradiated pellets are then dissolved and ORNL staff use a chemical process to separate the plutonium from remaining neptunium. The plutonium product is converted to an oxide and shipped to Los Alamos National Laboratory, where the material will be stored until needed for a mission. Remaining neptunium is recycled into new targets to produce more plutonium-238. There are currently only 35 kilograms, or about 77 pounds, of plutonium-238 set aside for NASA missions, and only about half of this supply meets power specifications. This is only sufficient to power two to three proposed NASA missions through the middle of the 2020s. Fortunately, the additional material that will be produced at ORNL can be blended with the existing portion that doesn't meet specifications to extend the usable inventory. With continued NASA funding, DOE's Oak Ridge and Idaho national laboratories can ensure that NASA's needs are met, initially by producing 300 to 400 grams of the material per year and then, through automation and scale-up processes, by producing an average of 1.5 kilograms per year. "With this initial production of plutonium-238 oxide, we have demonstrated that our process works and we are ready to move on to the next phase of the mission," Wham said. The next NASA mission planning to use a radioisotope thermoelectric generator is the Mars 2020 rover, due to be launched in July 2020. The mission seeks signs of life on Mars and will test technology for human exploration and gather samples of rocks and soil that could be returned to Earth.
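As a rough illustration of why this isotope suits multi-decade missions, the sketch below applies simple exponential decay using the commonly quoted half-life (about 87.7 years) and specific thermal power (about 0.57 W per gram) of plutonium-238. Those constants and the example quantity are assumptions for illustration, not figures from the article.

```python
# Back-of-the-envelope decay-heat estimate for a plutonium-238 heat source.
# Half-life and specific power are commonly quoted values, assumed here for
# illustration; the 1.5 kg example mirrors the scaled-up annual production
# target mentioned in the article.

HALF_LIFE_YEARS = 87.7   # assumed Pu-238 half-life
WATTS_PER_GRAM = 0.57    # assumed specific thermal power of Pu-238

def thermal_power_watts(grams: float, years_elapsed: float) -> float:
    """Thermal output remaining after simple exponential decay."""
    return grams * WATTS_PER_GRAM * 0.5 ** (years_elapsed / HALF_LIFE_YEARS)

for t in (0, 10, 20, 30):
    print(f"1.5 kg after {t:2d} years: ~{thermal_power_watts(1500.0, t):.0f} W thermal")
```

Because the half-life is so long relative to a mission, the heat source loses only a modest fraction of its output over several decades, which is what makes these generators attractive for deep space probes.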


News Article | November 23, 2016
Site: www.prweb.com

Kitware and Intel® updated the Visualization Toolkit (VTK) and ParaView as well as OpenSWR and OSPRay to push the Software Defined Visualization initiative. The work builds on efforts by Kitware, Intel® and the Texas Advanced Computing Center (TACC) to integrate OpenSWR and OSPRay with VTK and ParaView. “VTK and ParaView offer open-source solutions for analyzing and visualizing data, which the OpenSWR and OSPRay renderers enhance,” said David DeMarle, a lead developer of VTK at Kitware. “The combination of the solutions with the renderers provides an especially powerful path toward visualizing extreme-scale scientific problems with massive supercomputers.” Kitware released ParaView 5.0 in conjunction with The International Conference for High Performance Computing, Networking, Storage and Analysis (SC) in 2015 as the first official release to include OpenSWR and OSPRay. The successor to ParaView 5.0, version 5.1, more tightly coupled OSPRay with ParaView and added the ability for users to quickly swap between rasterizing and ray tracing images with the same type of surface mesh data. Version 5.1 also brought ray tracing to ParaView on Microsoft Windows. At this year’s SC event, Kitware not only demonstrated these advancements, but it discussed new versions of VTK and ParaView. “I am particularly excited about the OSPRay based volume rendering modes in VTK 7.1 and ParaView 5.2,” DeMarle said. “OSPRay is a lot faster than VTK’s internal software volume renderer, and the fact that surfaces and volumes coexist in the same ray-traced context opens up new possibilities for visual effects. OSPRay now enables volumetric shadows and, more generally, makes volumetric participating media feasible.” OpenSWR also features state-of-the-art rendering capabilities, which initial benchmarks on the Trinity supercomputer at Los Alamos National Laboratory demonstrated earlier this year. The run used only a fraction of the supercomputing machine during the system shakedown period to render scenes up to 1.1 trillion triangles. Since then, Intel® has contributed OpenSWR to the open-source Mesa implementation of the OpenGL standard. In the recently released Mesa 13.0.0, OpenSWR gained significant optimizations. “When evaluating whether to update to the recent Mesa 13.0.0 release from the 12 series, we measured nearly a two-times increase in rendering performance with OpenSWR on Intel® Haswell central processing units (CPUs) and almost a four-times increase on next-generation Intel® Xeon Phi™ coprocessor, or ‘Knights Landing’ (KNL), platforms,” said Chuck Atkins, a staff research and development engineer at Kitware who contributes to ParaView, CMake and Mesa. “The dramatic increase in measured performance made the decision an easy one. As soon as Mesa 13.0.0 became available, we updated our ParaView build processes to incorporate it. We are now actively deploying Mesa 13.0.0 at partner institutions and customer sites.” Under a new Intel® Parallel Computing Centers project, Kitware and TACC will collaborate to optimize the full code path of ParaView for in situ processing. This important use case directly couples visualization and rendering with simulation codes. “We are thrilled to partner with Kitware and Intel® to advance in situ capabilities for many-core processors,” said Dr. Paul Navrátil, the manager of scalable visualization technologies at TACC. 
“The Stampede-KNL system at TACC will be the perfect test bed for developing effective strategies that fully utilize the capabilities on upcoming KNL systems like Trinity, Cori and Stampede 2.” Kitware encourages those who want to learn more about ParaView and VTK to contact kitware(at)kitware(dot)com. About Kitware Kitware is an advanced technology, research and open-source solutions provider for research facilities, government institutions and corporations worldwide. Founded in 1998, Kitware specializes in research and development in the areas of HPC and visualization, medical imaging, computer vision, data and analytics and quality software process. Among its services, Kitware offers consulting and support for high-quality software solutions. Kitware is headquartered in Clifton Park, NY, with offices in Carrboro, NC; Santa Fe, NM; and Lyon, France. More information can be found on http://www.kitware.com. About Los Alamos National Laboratory Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, BWXT Government Group, and URS, an AECOM company, for the Department of Energy’s National Nuclear Security Administration. Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.
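For readers who want to try the rasterizer/ray-tracer swap described above, here is a minimal sketch using ParaView's Python scripting. It assumes a ParaView 5.1/5.2-era build compiled with OSPRay support, and the EnableOSPRay view property name is taken from that era (later releases renamed it), so treat that toggle as an assumption to check against your own build.

```python
# A minimal sketch of swapping a ParaView render view between the default
# rasterizer and OSPRay ray tracing from Python. Assumes a ParaView build with
# OSPRay support; the "EnableOSPRay" property name reflects the 5.1/5.2 era and
# may differ in other releases.

from paraview.simple import (GetActiveViewOrCreate, Render, SaveScreenshot,
                             Show, Sphere)

source = Sphere(ThetaResolution=64, PhiResolution=64)   # simple surface mesh
view = GetActiveViewOrCreate('RenderView')
Show(source, view)

Render(view)                                  # default rasterized rendering
SaveScreenshot('sphere_raster.png', view)

view.EnableOSPRay = 1                         # assumed property: switch to ray tracing
Render(view)
SaveScreenshot('sphere_ospray.png', view)
```

Running the script under pvpython, which ships with ParaView, avoids having to install the paraview module separately.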


News Article | March 30, 2016
Site: phys.org

"The accomplishment here is to demonstrate that we can analyze conventionally collected protein crystallography data and pull out background features that are usually thrown away," said Michael E. Wall, a scientist in the Computer, Computational, and Statistical Sciences Division at Los Alamos National Laboratory and co-corresponding author of the paper with James S. Fraser, Assistant Professor in the Department of Bioengineering and Therapeutic Sciences at University of California, San Francisco. "What's been reclaimed is information about how the protein moves, the more detailed dynamics within the sample," Wall said. Traditional crystallography data provide a blurred picture of how the protein moves, like overlaying the frames of a movie. The Laboratory's approach sharpens the picture, he noted, providing information about which atoms are moving in a concerted way, such as ones on the swinging arm of the protein or on opposite sides of a hinge opening or closing, and which ones are moving more independently. "This is a method that will eventually change the way X-ray crystallography is done, bringing in this data stream in addition to the sharply peaked Bragg scattering, which is the traditional analysis method," Wall said. "We're working toward using both data sets simultaneously to increase the clarity of the crystallography model and more clearly map how proteins are moving." In the work described in the paper, the 3-D diffuse scattering data were measured from crystals of the enzymes cyclophilin A (a type of protein) and trypsin (an enzyme that acts to degrade protein) at Stanford Synchrotron Radiation Lightsource (SSRL), a U.S. Department of Energy (DOE) Office of Science User Facility. The measurements were extracted and movements were modeled using computers at Los Alamos, Lawrence Berkeley National Laboratory, and the University of California, San Francisco. The ongoing computational work includes simulations on Conejo and Mustang, supercomputing clusters in the Institutional Computing Program at Los Alamos. Averaging relatively weak features in the data improves the clarity of the imaging of diffuse features, which has value as researchers have had an increasing interest in the role of protein motions. In designing a new drug, for example, one seeks to produce a small molecule that binds to a functional site on a specific protein and blocks its activity. With better modeling, adapted to more closely matched experimental diffuse data, the steps toward a new pharmaceutical product can be reduced by more accurately accounting for protein motions in drug interactions. The new approach can improve new and ongoing experiments and could potentially be used to explore data from previously conducted crystallography experiments if the level of background noise is not too severe. "Data coming off modern X-ray sources with the latest detectors are tending to be the type of data we can best analyze," Wall said, but "some of the older data could be reexamined as well." With this new method, scientists can experimentally validate predictions of detailed models of protein motions, such as computationally expensive all-atom molecular dynamics simulations, and less expensive "normal mode analysis," in which the protein motions are modeled as vibrations in a network of atoms interconnected by soft springs. 
A key finding is that normal mode models of both cyclophilin A and trypsin resemble the diffuse data; this creates an avenue for adjusting detailed models of protein motion to better agree with the data. "We are planning to add in the future a refinement step to increase accuracy even more," Wall said. A more detailed model provides a more direct connection between protein structure and biological mechanisms, which is desired for pharmaceutical applications. This use of diffuse scattering data illustrates the potential to increase understanding of protein structure variations in any X-ray crystallography experiment. Said Wall, "This represents a significant step toward moving diffuse scattering analysis into the mainstream of structural biology." More information: Andrew H. Van Benschoten et al. Measuring and modeling diffuse scattering in protein X-ray crystallography, Proceedings of the National Academy of Sciences (2016). DOI: 10.1073/pnas.1524048113
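As a generic illustration of the model-versus-data comparison described here, the sketch below scores agreement between a measured 3-D diffuse intensity map and one predicted from a model of protein motion using a Pearson correlation over valid grid points. The array names, the NaN-masking convention, and the choice of correlation as the score are assumptions for illustration, not the authors' exact pipeline.

```python
# A generic sketch of comparing a measured 3-D diffuse scattering map with one
# predicted from a model of protein motion (e.g., a normal-mode model). The
# masking convention and the correlation score are illustrative assumptions.

import numpy as np

def diffuse_map_correlation(measured: np.ndarray, modeled: np.ndarray) -> float:
    """Pearson correlation between two 3-D intensity maps, ignoring NaN voxels."""
    assert measured.shape == modeled.shape
    valid = np.isfinite(measured) & np.isfinite(modeled)
    return float(np.corrcoef(measured[valid], modeled[valid])[0, 1])

# Synthetic demonstration: a noisy "model" map should correlate strongly with
# the "measured" map it was derived from.
rng = np.random.default_rng(0)
measured = rng.random((32, 32, 32))
modeled = measured + 0.3 * rng.normal(size=measured.shape)
print(f"map correlation: ~{diffuse_map_correlation(measured, modeled):.2f}")
```

A score like this gives a single number to track while adjusting a motion model toward the experimental diffuse data, in the spirit of the refinement step the researchers say they plan to add.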


News Article | March 9, 2016
Site: motherboard.vice.com

On Wednesday, a solar eclipse gave people across a swath of Indonesia and the South Pacific the chance to see a generous 4 minutes and 9 seconds of totality: the awe-inspiring sight of the moon completely covering the sun, turning day into night and offering a rare glimpse of the corona, the gas swirling in the Sun’s outer atmosphere. But in 1973, a small group of astronomers from around the globe had a secret weapon for seeing a longer eclipse than ever before: a prototype Concorde, capable of chasing the eclipse across the Earth at twice the speed of sound. The future airliner—developed in a partnership between the French and British governments—had first flown in 1969 and was nearing the end of its successful test program. It had shown that supersonic passenger travel was very much possible, and proven easier to handle in the air and on the ground than had been theorized. Under the control and overall direction of French test pilot André Turcat, the French Concorde “001” prototype had set numerous speed and altitude records. An era of supersonic passenger flight was on the horizon, promising to cut trans-Atlantic jaunts to a breezy 3 and a half hours, less than half the time of a conventional jet. As French and British aviation engineers readied their Concordes for passenger flight, the solar astronomy world was gearing up for its own milestone: the longest total solar eclipse of a lifetime, on June 30, 1973. The eclipse promised a luxurious view if you stood at the right place on the planet: a maximum of 7 minutes and 4 seconds as the moon passed over the Sahara Desert. It would be just 28 seconds short of the longest possible eclipse viewable from Earth; in the preceding several hundred years, there had only been one eclipse longer than this one, and there would not be a longer total solar eclipse until June 2150. Independently of each other, astronomers in France and the UK saw a happy coincidence in the new aircraft and the upcoming eclipse: In Concorde, they could follow the shadow of the moon as it raced across the planet. In theory, a supersonic transport (SST) could give them over 70 minutes to watch the eclipse, ten times more observation time than they’d get on the ground, and high above any potential clouds and water vapor. Now they just had to get their hands on one. In London, British astrophysicist John Beckman had been using aircraft to do astronomy for years, despite being a nervous flyer himself. “I was working in a pioneer group of far infrared submillimeter physics,” he recalls. “At that time you could do very little work of any kind from the ground. You would have to go to very high mountains, or fly. My scientific curiosity allowed me to overcome my nerves.” Beckman put in a request to use the British Concorde “002” prototype but was turned down, possibly because the UK test program was lagging behind the French. In May of 1972, just over a year before the celestial encounter, an astronomer from the Paris Observatory called Pierre Léna took his own proposal to French test pilot André Turcat personally. A scientifically-curious pilot, he’d given the young researcher an hour of his time. Over lunch at a restaurant inside Toulouse Airport, Léna sketched out his vision on the paper tablecloth. “I don’t know if this kind of adventure would be possible today,” laughs Léna, now 78 but still an active member of the scientific community. “I think now all the structures are way more organized and hierarchical. I was lucky I got to do it, and of course I was young. 
In fact, I was very lucky people even took me seriously!” The plan seemed deceptively simple. Closing in at maximum velocity, Concorde would swoop down from the north and intercept the shadow of the moon over northwest Africa. Traveling together at almost the same speed, Concorde would essentially race the solar eclipse across the surface of the planet, giving astronomers an unprecedented opportunity to study the various phenomena made possible by an eclipse: the ethereal solar corona, the effect of sunlight on the darkened atmosphere, and the brief red flash of the chromosphere, a narrow region around the sun that’s usually washed out by the much brighter photosphere. Turcat was impressed. He pitched the idea to his bosses at Aérospatiale, who gave a tentative green light, and agreed to assume the cost of the mission. Turcat and chief navigation engineer Henri Perrier got to work on all the details, factoring in weather patterns and even ground temperatures in the places where Concorde could take off from, which would affect the fuel load. (The Concorde’s engines were optimized for the upper atmosphere, though this made them less efficient on the ground. Just taxiing for take-off, the Concorde burned more fuel than a 737 flying from London to Amsterdam, a fact which contributed to its eventual grounding.) Léna and his colleagues began thinking of what experiments they would carry with them into the stratosphere and working out the details of actually intercepting the eclipse. After deciding on Gran Canaria as a good starting point, the team planned a route south and then east along the eclipse line. Turcat and Perrier looked into which runways in Africa would be able to handle the 200-foot long aircraft, which didn’t exactly stop on a dime. They pushed as far east as possible, to N’Djamena in Chad, with Kano in Nigeria as a backup. The actual rendezvous would take place over Mauritania, which agreed to close its airspace to commercial air traffic at midnight the night before. “There was a lot to cover,” Léna recalls. First was the mechanical transformation of the plane. Occurring during the summer solstice and practically along the Tropic of Cancer, the eclipse would be at the zenith: directly above them. To view it properly, they would need to install portholes in the roof and design an apparatus to photograph through them. The windows would have to resist the pressure difference between the cabin and the outside, as well as the high temperatures—around 100 °C—felt on the outer surface of the fuselage in supersonic flight, due to friction. If the porthole were to blow out during flight, that wouldn't necessarily risk the safety of the flight, but it would require an emergency landing and an end to the experiment. The Queen Mary College system being tested outside the Aerospatiale factory in Toulouse. Left-to-right: Jim Lesurf, Jim Hall (RSRE), and Tony Marston. Photo: Jim Lesurf “As the research advanced and we got more excited, it became clear to us we couldn’t just do the flight without offering it to other scientists too. So we took the decision to use the Concorde’s maximum capacity and share it between five teams.” There would be scientists aboard from the French Institute of Astrophysics, Kitt Peak National Observatory, Los Alamos National Laboratory, Queen Mary University of London, and the University of Aberdeen. “So there were five different experiments, which had the advantage of spreading out the risk of failure.
Each had a very different objective.” When Beckman learned there was room onboard the special flight, “I was absolutely ecstatic,” he recalls. “And a bit worried.” For one thing, time was short. “I had the basic equipment, of course, which was a Michelson interferometer with a helium-cooled detector, but I didn’t have the infrastructure to link with Concorde and track the eclipse path. There was no time to automate it.” Instead, Beckman devised a way to track the eclipse manually, which would require “continuously turning a handle to keep the mirror aligned with one hand, while tilting it down into the observing hole with the other hand.” At last, in February 1973, with just four months until the eclipse, the scientists received word that the mission was a go. The researchers rushed into final preparations. Besides the time constraints of the rapidly approaching sun and moon, each experiment would have to cope with limitations imposed by the aircraft itself. Nothing could short-circuit—an onboard fire would be catastrophic—and everything had to be built to withstand the vibrations on takeoff: an afterburner-driven sprint that would accelerate Concorde to 250 miles per hour even before lift-off, pinning passengers firmly back against their seats. And, despite the aircraft's clean safety record at that point, in the back of the scientists' minds were legitimate concerns about flying in any experimental aircraft at supersonic speeds. Concorde 001 taking off on its eclipse mission. Photo: Jim Lesurf At precisely 10:08 am on the morning of June 30, the four twin-spool Olympus 593 engines under the Concorde’s sweeping white wings powered up to full afterburner and launched “001” down the runway of Gran Canaria's Las Palmas airport. Thousands of miles to the west, the shadow of the moon was already racing across the Atlantic at over 1,200 mph, tracing a path eastward from South America toward the African coast. Léna, Beckman, and the seven other astronomers aboard had no time to enjoy the joyride. “We were running through all the procedures in our minds,” recalls Léna, “and as soon as we got authorization from the cockpit to get out of our seats we got up and got to work. We had to double check the instruments, the pumps, the recorders… there was no room for emotion; that came later.” Two minutes after take-off, the aircraft hit Mach 1, or about 707 mph at altitude, and headed southeast toward the moving shadow. Climbing up into the stratosphere at an altitude of 56,000 ft., Turcat pushed the aircraft to Mach 2.05, more than twice the speed of sound. Even after a couple of test flights the atmosphere on board was tense—the timing and the equipment had to work perfectly. Helped by Concorde’s two onboard inertial guidance systems, the crew guided the aircraft along the carefully mapped trajectory and met the eclipse to within one second of the planned rendezvous. The chase was on. “Alone in the Mauritanian sky,” as a French film about the flight poetically put it, Concorde 001 hurtled east along the path of totality. With the eclipsed sun high overhead, Turcat switched on the night-time navigation lights in the midday darkness. Paintings and stamps issued by various African countries would depict the epic, sci-fi sight, and Turcat would later deliberate about whether to file the flight as a day or night one. Under the four specially-made portholes and along some of the side-facing ones, Léna and Beckman, and others on board focused on the science.
Léna and his team concentrated their efforts on detecting dust particles left over from comets in the solar halo, attempting to determine if there was a ring or sphere of them around the sun. Paul Wraith from Scotland used a side porthole to watch the effect produced by the sudden darkness on oxygen atoms in the Earth’s atmosphere, while veteran American airborne eclipse chaser Donald Liebenberg looked up at the eclipse to measure pulsations in light intensity. From a porthole added to the roof of the plane, he observed, he said, how “the moon appeared to work as a slow camera shutter, slicing off only a tiny piece of the atmosphere each second.” Beckman, meanwhile, was busy multitasking, with a tape recorder taped to his chest so he could make audio observations about the chromosphere. “Everybody was very concentrated on what they were doing,” he recalls, “but I did have time to glance out of the side window at one point and see the edge of the umbra, the penumbra and the daylight beyond. From the height we were at I could actually see the curvature of the earth, so it was pretty incredible. I was also able to gaze up at the corona and, as the limb of the moon slowly occulted the solar disk, I saw the chromosphere, which flashed out in bright red alpha light.” One of the specially-designed instruments for observing the composition of the sun's corona. Photo: Pierre Léna The science and the Concorde could have kept going, but the landing site in Chad was coming up fast. Each team wrapped up their observations and managed to steal a few moments gazing out over the sands of the Sahara at a sight few get to witness. In all, the experimenters observed the totality of the eclipse for a record 74 minutes. Turning south out of the darkness, Turcat began lining up for the approach. “I would have 10 tonnes of kerosene upon arrival,” he later wrote, giving him “forty minutes’ wait, and the right to a missed approach.” In any event, the aircraft landed smoothly, although Beckman remembers that “the aircraft was within meters of the end of the runway.” The aircrew and the astronomers arrived to a surreal scene, having descended from the stratosphere at supersonic speed in one of the world’s most advanced aircraft only to emerge under the strange half-light of the African sun, still partially eclipsed. An attempted coup d’état (possibly timed to coincide with the eclipse) meant that armored vehicles mingled with people on the street who were using smoky glass to gaze up at the sun. In one flight, Concorde had given astronomers more eclipse observing time than all the previous expeditions of the last century—generating three articles in Nature and a wealth of new data. But today Léna, who has recently published a book in French and English about the experiment, is modest about what it accomplished. “The five experiments all succeeded, but none of them revolutionized our understanding of the corona,” he says in a disarmingly honest way about the flight’s immediate impact. “They all played their role in the normal progression of scientific knowledge, but there were no extraordinary results, it has to be said.” For the 1999 solar eclipse over Europe, one French and two British Concordes briefly chased the moon’s shadow, but the only passengers on board were tourists.
The tragic crash of Concorde flight 4590 shortly after takeoff from Paris on July 25, 2000, which killed all 109 passengers and crew and 4 people on the ground, marked the beginning of the end for what Léna called the “great white bird.” Planned Concorde eclipse tours in 2001 were grounded, and in 2003 the final commercial Concorde flights touched down. NASA and Lockheed Martin have begun to explore designs for a new, quieter supersonic passenger jet (the Concorde's powerful supersonic booms limited its popularity). In England, a group called Club Concorde is trying to raise enough money to get a single Concorde flying again, possibly for the aircraft’s 50th anniversary in 2019. Still, Léna doubts their incredible flight will ever be repeated. Today, space-based satellites that can watch the sun 24/7 and create permanent artificial eclipses have revolutionized our understanding of the nearest star to Earth—although observing eclipses on Earth is still useful for astronomy. “At the time, our knowledge of the solar corona was very very limited,” explains Léna. “Today we have far less need for eclipse flights from a scientific point of view because we can put missions like SOHO in space, which is doing essentially what we were aboard Concorde. Our observation methods have changed a lot, so I doubt if today we’d redo a mission like that.” It’s often said that scientific inquiry leads to innovation, but the Concorde experiment is a reminder that sometimes innovation offers wild, unexpected dividends to science. Today, the exact plane that chased the eclipse in 1973 sits as a permanent exhibit at Le Bourget Air and Space Museum, complete with the special roof portholes and the eclipse mission logo on its fuselage. Léna, John Beckman and other engineers and astronomers were present for the 2013 unveiling, along with the late pilot André Turcat, who passed away just last month. “There’s no aircraft flying today that would let us do what we did, nothing that can fly that fast for that long,” says Léna. “There are military jets that can fly faster, but not with the endurance of Concorde—and of course they couldn’t carry the instruments. Our record is safe for the foreseeable future.” Correction: An earlier version of this story incorrectly said the Concorde took off from Tenerife; in fact, it left from Gran Canaria. An earlier version also incorrectly indicated the speed of Mach 1 at which "001" was traveling after two minutes of ascent. At that altitude, 10,000 feet, the speed of sound is ~700 mph.
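A rough calculation makes clear why a supersonic aircraft stretches totality so dramatically. The shadow speed (over 1,200 mph) and the maximum ground totality (7 minutes 4 seconds) come from the article above; the aircraft speed and the assumption of a straight, constant-speed chase are simplifications introduced here for illustration, which is why the sketch lands near, rather than exactly on, the 74 minutes actually recorded.

# Simplified eclipse-chase model: an observer stays in totality while inside
# the umbra, so time in darkness = (umbra length along the track) /
# (umbra speed relative to the observer).  "Assumed" values are not from the article.
GROUND_TOTALITY_S = 7 * 60 + 4     # 7 min 4 s maximum on the ground (article)
SHADOW_SPEED_MPH = 1200.0          # umbra ground speed quoted in the article
PLANE_SPEED_MPH = 1350.0           # assumed: roughly Mach 2.05 at 56,000 ft

umbra_length_miles = SHADOW_SPEED_MPH * GROUND_TOTALITY_S / 3600.0
relative_speed_mph = abs(SHADOW_SPEED_MPH - PLANE_SPEED_MPH)
airborne_totality_min = umbra_length_miles / relative_speed_mph * 60.0

print(f"Umbra length along track: ~{umbra_length_miles:.0f} miles")
print(f"Stationary observer:      ~{GROUND_TOTALITY_S / 60:.1f} minutes of totality")
print(f"Chasing aircraft:         ~{airborne_totality_min:.0f} minutes of totality")

With these round numbers the model gives roughly an hour of totality, the same order as the 74 minutes achieved; the real figure was higher because the umbra's ground speed varies along the track and the flight path was planned to match it as closely as possible.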


News Article | November 17, 2015
Site: www.cemag.us

Since the first laser was invented in 1960, lasers have always given off heat — either as a useful tool, a byproduct, or a fictional way to vanquish intergalactic enemies. But those concentrated beams of light have never been able to cool liquids. University of Washington researchers are the first to solve a decades-old puzzle — figuring out how to make a laser refrigerate water and other liquids under real-world conditions. In a study published this week in the Proceedings of the National Academy of Sciences, the team used an infrared laser to cool water by about 36 degrees Fahrenheit — a major breakthrough in the field. “Typically, when you go to the movies and see Star Wars laser blasters, they heat things up. This is the first example of a laser beam that will refrigerate liquids like water under everyday conditions,” says senior author Peter Pauzauskie, UW assistant professor of materials science and engineering. “It was really an open question as to whether this could be done because normally water warms when illuminated.” The discovery could help industrial users “point cool” tiny areas with a focused point of light. Microprocessors, for instance, might someday use a laser beam to cool specific components in computer chips to prevent overheating and enable more efficient information processing. Scientists could also use a laser beam to precisely cool a portion of a cell as it divides or repairs itself, essentially slowing these rapid processes down and giving researchers the opportunity to see how they work. Or they could cool a single neuron in a network — essentially silencing it without damaging it — to see how its neighbors bypass it and rewire themselves. “There’s a lot of interest in how cells divide and how molecules and enzymes function, and it’s never been possible before to refrigerate them to study their properties,” says Pauzauskie, who is also a scientist at the U.S. Department of Energy’s Pacific Northwest National Laboratory in Richland, Washington. “Using laser cooling, it may be possible to prepare slow-motion movies of life in action. And the advantage is that you don’t have to cool the entire cell, which could kill it or change its behavior.” The UW team chose infrared light for its cooling laser with biological applications in mind, as visible light could give cells a damaging “sunburn.” They demonstrated that the laser could refrigerate saline solution and cell culture media that are commonly used in genetic and molecular research. To achieve the breakthrough, the UW team used a material commonly found in commercial lasers but essentially ran the laser phenomenon in reverse. They illuminated a single microscopic crystal suspended in water with infrared laser light to excite a unique kind of glow that has slightly more energy than the amount of light absorbed. This higher-energy glow carries heat away from both the crystal and the water surrounding it. The laser refrigeration process was first demonstrated in vacuum conditions at Los Alamos National Laboratory in 1995, but it has taken nearly 20 years to demonstrate this process in liquids. Typically, growing laser crystals is an expensive process that requires lots of time and can cost thousands of dollars to produce just a single gram of material. The UW team demonstrated that a low-cost hydrothermal process can be used to manufacture a well-known laser crystal for laser refrigeration applications in a faster, inexpensive and scalable way.
The UW team also designed an instrument that uses a laser trap — akin to a microscopic tractor beam — to “hold” a single nanocrystal surrounded by liquid in a chamber and illuminate it with the laser. To determine whether the liquid is cooling, the instrument also projects the particle’s “shadow” in a way that allows the researchers to observe minute changes in its motion. As the surrounding liquid cools, the trapped particle slows down, allowing the team to clearly observe the refrigerating effect. They also designed the crystal to change from a blueish-green to a reddish-green color as it cools, like a built-in color thermometer. “The real challenge of the project was building an instrument and devising a method capable of determining the temperature of these nanocrystals using signatures of the same light that was used to trap them,” says lead author Paden Roder, who recently received his doctorate from the UW in materials science and engineering and now works at Intel Corp. So far, the UW team has only demonstrated the cooling effect with a single nanocrystal, as exciting multiple crystals would require more laser power. The laser refrigeration process is currently quite energy intensive, Pauzauskie says, and future steps include looking for ways to improve its efficiency. One day the cooling technology itself might be used to enable higher-power lasers for manufacturing, telecommunications or defense applications, as higher-powered lasers tend to overheat and melt down. “Few people have thought about how they could use this technology to solve problems because using lasers to refrigerate liquids hasn’t been possible before,” he says. “We are interested in the ideas other scientists or businesses might have for how this might impact their basic research or bottom line.” The research was funded by the Air Force Office of Scientific Research and the UW, and benefitted from additional support from the National Science Foundation, Lawrence Livermore National Laboratory and Pacific Northwest National Laboratory. Co-authors include UW doctoral students Bennett E. Smith in chemistry, Xuezhe Zhou in materials science and engineering, and Matthew Crane in chemical engineering. Release Date: November 16, 2015 Source: University of Washington
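The "glow with slightly more energy than the light absorbed" can be made concrete with a photon energy balance: in anti-Stokes fluorescence cooling, each absorbed pump photon is re-emitted at a slightly shorter mean wavelength, and the extra energy is drawn from the material as heat. The wavelengths below are illustrative values of the kind typical for ytterbium-doped laser crystals, not numbers reported by the UW team, and the sketch ignores parasitic absorption and non-unity quantum efficiency.

# Ideal anti-Stokes cooling efficiency: fraction of absorbed pump power removed
# as heat when every absorbed photon is re-emitted at the mean fluorescence
# wavelength.  The wavelengths are illustrative assumptions, not the paper's values.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def cooling_efficiency(pump_nm, mean_fluorescence_nm):
    e_pump = H * C / (pump_nm * 1e-9)
    e_emit = H * C / (mean_fluorescence_nm * 1e-9)
    return (e_emit - e_pump) / e_pump   # equals pump_nm / mean_fluorescence_nm - 1

eta = cooling_efficiency(pump_nm=1020.0, mean_fluorescence_nm=1000.0)
print(f"Ideal cooling efficiency: {eta:.1%}")            # about 2% of absorbed power
print(f"Heat lift per watt absorbed: ~{eta * 1000:.0f} mW")

The smallness of this number, a few percent at best, is one reason the process is described above as energy intensive and why improving its efficiency is listed as a next step.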


News Article | November 16, 2016
Site: www.prweb.com

Combing through thousands of pages in recently declassified files at the U.S. National Archives and the U.K. National Archives, attorney and author Larry Loftis has collected startling evidence regarding America's entrance into World War II. His recent book, INTO THE LION'S MOUTH, provides FBI memos and letters showing that in August 1941 J. Edgar Hoover and the FBI were not only warned of a surprise Japanese attack on Pearl Harbor, but given documentary proof. Appendix 1 of the book, Loftis noted, includes two recently declassified documents never seen before in public: 1) an August 19, 1941 letter from FBI Assistant Director Earl Connelley to FBI Director J. Edgar Hoover; and 2) an attachment to that letter, identified by Connelley as "Exhibit C." In the letter, Connelley informs Hoover that the prior day, August 18, he and FBI Special Agent Charles Lanman met with MI6 double agent Dusko Popov and British Security Coordination liaison, Dick Ellis, at New York City's Commodore Hotel. Connelley writes that the meeting lasted three hours, and that Popov described in detail what the Germans had tasked him to do while on assignment in the U.S. Connelley's attached Schedule C was the English translation of Popov's two-page German questionnaire, 40 percent of which pertains to investigation of Pearl Harbor's defenses. "Hoover told no one," Loftis stated, "not FDR, not Admiral Kimmel, not naval intelligence. The Director kept the information classified his entire life. There were eight investigations into the Pearl Harbor disaster. Not one mentions Popov's questionnaire. Not one called him or Dick Ellis to testify. Worse, Hoover actively hid the information from FDR, and concocted a canard for Reader's Digest in April 1946 to continue the subterfuge." Before releasing INTO THE LION'S MOUTH, Loftis advised Admiral James A. Lyons, Jr., a former Commander of the U.S. Pacific Fleet who will be giving a special 75th Pearl Harbor Anniversary address in Washington, D.C. on December 7. After reviewing the book, Admiral Lyons concluded: "The description of UK's successful attack on the Italian Fleet at Taranto and how that served as the model for the Japanese attack on Pearl Harbor provide an interesting insight. Most shocking, however, is the fact that J. Edgar Hoover, the FBI Director, was provided this key intelligence by Dusko Popov four months before the Japanese attack, and apparently didn't tell anyone. Many thousands of lives could have been saved and the war in the Pacific could have been brought to a successful conclusion much sooner." "Every textbook in America will have to change," Loftis concluded. In addition to the review by Admiral Lyons, three other high-ranking military/intelligence officials weighed in on the book: former CIA Director Michael Morell, former Los Alamos National Laboratory Director Dr. Robert Kuckuck, and former Commander of the U.S. Pacific Fleet, Admiral Ronald J. Zlatoper. Their reviews, and others, are provided below. "Larry Loftis has written a sensational book, telling a remarkable true story that few know, even those in the intelligence business. Written in the style of the best spy thrillers, the book is impossible to put down. A must read." —Michael Morell, CIA Director (acting 2011, 2012-13) "Excellent book – very impressive research! This is both a well-researched history book and a dramatic breath-taking spy story, all of it true. Mr.
Loftis has shown an extraordinary ability to combine the high-integrity research discipline of his legal training with a 'page-turner' writing talent to produce a scholarly thriller that is hard to put down." —Dr. Robert Kuckuck, Director, Los Alamos National Laboratory (ret.) “Sometimes truth is not only stranger, but more exciting than fiction. If one is an Ian Fleming aficionado and a World War II history buff, you will find Into the Lion's Mouth a merger of the most exciting and fascinating aspects of both genres. '007's' exploits in the fourteen James Bond books almost pale in comparison to the actual exploits of Dusko Popov. Like war history? Read this book. Like spy novels? Read this book." —Four-star Admiral R. J. Zlatoper, Commander, U.S. Pacific Fleet (ret.) "Spectacular—incredible research and writing. Nonfiction that reads like a page-turning novel. Riveting and moving." —Jonathan Braun, Parade "For those wanting to know the real-life inspiration for Ian Fleming's James Bond, here it is... A work of non-fiction that's more thriller than biography. It's well-researched, but provides just enough historical background to prep us for Popov's dangerous exploits into espionage, politics and warfare." —USA Today "A gripping story about a fascinating man of many talents and faces—and what makes it all the more intriguing is that it's all true! This one will keep you planted in your reading chair from start to finish." —Michael Connelly, #1 New York Times bestselling author "Loftis recounts the exploits of the model for Ian Fleming’s James Bond character with great skill.... Will have readers on the edge of their seats and immersed in this sometimes unbelievable tale." —Library Journal "Sifting through declassified documents from World War II—as well as hotel bills, letters and long-forgotten memoirs—Loftis painstakingly pinpoints the exact moment Bond’s creator, Ian Fleming, first encountered this real-life 007." —New York Post “Who needs fiction. Truth is a thousand times better, and this true-life adventure has it all. Action, history, secrets, conspiracies—a sizzling piece of entertainment that’s real.” —Steve Berry, New York Times bestselling author “The real-life inspiration for James Bond proves to be every bit as riveting as any of Ian Fleming's creations ...One of the most remarkable books we've read all year... Although Into the Lion's Mouth reads more like an intelligence caper than a biography, the book is impeccably researched and cited, making it an excellent read for even the most discriminating history buffs.” —BestThrillers.com INTO THE LION’S MOUTH The True Story of Dusko Popov: World War II Spy, Patriot, and the Real-Life Inspiration for James Bond Berkley Caliber Hardcover By Larry Loftis ISBN: 9780425281819; $27.00; 384 pages Larry Loftis is an author and AV-rated attorney. He has published legal articles in the University of Florida Law Review, Suffolk Transnational Law Journal, Georgia Journal of International and Comparative Law, Florida Bar Journal, National Law Journal, and Florida Banking. He received a BA and J.D. from the University of Florida, where he served on the Law Review as the senior executive editor and senior articles editor. He also served at the law school as a teaching fellow for Legal Research and Writing/Appellate Advocacy, and has taught law at Belhaven University.


News Article | October 28, 2016
Site: www.prweb.com

Nick Mezins carefully translates material he inherited from his parents into English. His parents were part of a small group who conversed with the Almighty's and God's spirits from 1943 to 1971. "The Tidings: Volume Six" (published by Trafford Publishing) contains information provided by dozens of individual spirits, describing the teaching of the prophets of the past. In this sixth part of the multi-volume series, the book presents the Almighty's religion for the universe, with an emphasis on the needs of planet Earth. "The Tidings: Volume Six" describes the creation of the universe, its purpose, how the universe is governed, the various intelligent beings in it, what is expected of humans on Earth and the important role that a human being is expected to play. "This book will not appeal to each and every individual. It is intended for those who are not satisfied with that which the currently existing religions on Earth give people," Mezins says. An excerpt from the book: Now that man has freed himself from slavery and kings, he should thoroughly familiarize himself with Christ's teachings, and discard everything that does not correspond to them. With respect to The Almighty, He does not recognize any worship and prayers. He recognizes only work; work that draws man closer to Him, and work that leads toward constructing the universe and achieving His - The Almighty's - ideals. He wants to see man as His assistant in this grand work. About the Author Nick Mezins was born in Riga, Latvia in 1936. He came to the U.S. in 1950 and started high school. Mezins graduated from Theodore Roosevelt High School in the Bronx, New York. He earned his Bachelor of Science in Electrical Engineering from the City College of New York in 1959 and received a commission in the regular Army. After more than 11 years on active duty, including three years in Vietnam, he became a civilian again. While working for a coal mining company, he attended the University of Pittsburgh and earned a Master of Science in Engineering in 1973. Los Alamos National Laboratory hired him in 1974, and he worked there as an engineer and section leader until his retirement in 1993. Mezins is a licensed Professional Engineer in New Mexico and is retired from the United States Army as a Lieutenant Colonel of Engineers. More information is available at http://www.TheAlmightysRevelations.org. Trafford Publishing, an Author Solutions, LLC, author services imprint, was the first publisher in the world to offer an "on-demand publishing service," and has led the independent publishing revolution since its establishment in 1995. Trafford was also one of the earliest publishers to utilize the Internet for selling books. More than 10,000 authors from over 120 countries have utilized Trafford's experience for self-publishing their books. For more information about Trafford Publishing, or to publish your book today, call 1-888-232-4444 or visit trafford.com.


The US Department of Energy has announced the 2016 Small Business Innovation Research and Small Business Technology Transfer (SBIR/STTR) Phase I Release 2 Topics, covering eight DOE research program organizations. Among the many topics listed are magnetocaloric materials development for hydrogen delivery; two hydrogen technology transfer opportunities (TTO); EV traction batteries and power electronics; new combustion engine technologies; and the co-utilization of CO2 and methane in biogas to produce higher hydrocarbon fuels. DOE plans to issue the full Funding Opportunity Announcement (FOA) on 30 November 2015. Hydrogen. The first hydrogen TTO is focused on a durable, high-activity electrocatalyst with low platinum content and low cost for polymer electrolyte membrane fuel cell applications. Conventional polymer electrolyte membrane (PEM) fuel cell technology requires a high content of platinum electrocatalyst, which raises manufacturing costs. Researchers at Brookhaven National Laboratory have developed an electrocatalyst design that decreases the platinum content of the cathode by an order of magnitude while maintaining cathode performance. The selected project will meet the critical need for core-shell electrocatalyst manufacturing processes for PEM fuel cell membrane electrode assembly (MEA) components and develop a plan to scale up production of the core-shell nanocatalysts, incorporate those catalysts into MEAs, and test the performance and durability of the MEAs under realistic fuel cell operating conditions. The second TTO is focused on safety sensors for hydrogen infrastructure applications. The selected project will use a unique class of electrochemical sensors created at Los Alamos National Laboratory (LANL) to develop low-cost electronics packaging that is manufacturable at high volume and will integrate the LANL sensor into a commercial package that can meet the codes and standards for being deployed at a hydrogen fueling station. The Magnetocaloric Materials Development project will develop novel magnetocaloric materials that optimize material properties for cooling and liquefaction of hydrogen. Magnetocaloric materials have great potential to lower the energy consumption and carbon footprint of technologies used in building cooling, refrigeration, and gas liquefaction. Electric drive vehicle batteries. DOE will seek projects to develop electrochemical energy storage technologies which support commercialization of micro-, mild-, and full-HEVs, PHEVs, and EVs. Some specific improvements of interest include, but are not limited to, new low-cost materials; high voltage and high temperature non-carbonate electrolytes; improvements in manufacturing processes, speed, or yield; novel SEI stabilization techniques for silicon anodes; improved cell/pack design minimizing inactive material; significant improvement in specific energy (Wh/kg) or energy density (Wh/L); and improved safety. Phase I feasibility studies must be evaluated in full cells (not half cells) greater than 200 mAh in size, while Phase II technologies should be demonstrated in full cells greater than 2 Ah.
DOE will reject applications as non-responsive if the proposed technology is high cost; requires substantial infrastructure investments or industry standardization to be commercially viable; and/or cannot accept high power recharge pulses from regenerative braking or has other characteristics that prohibit market penetration. SiC MOSFETs for electric drive vehicle power electronics. With large area (> 150 mm, or 6") Silicon Carbide (SiC) epitaxial wafer availability from a large number of qualified suppliers, the SiC device industry is approaching the state of the cost-competitive silicon (Si) power device industry, where the cost of fabrication is the primary driver for device cost, and high device yield allows for a low overall cost of devices. SiC switches, with either built-in free-wheeling Schottky diodes (lower cost) or used in conjunction with external Schottky diodes, are the devices crucial for vehicle inverters that can take advantage of these SiC epitaxial wafers; they offer significantly smaller on-state resistance than current Si switches and enable very high power density, modular inverters for use in electric drive vehicles. The extremely high speed of SiC switches also allows for increased efficiencies and reduced passive device requirements for power inverter applications. This topic seeks to address this barrier through demonstrating the successful production of > 100 A, > 600 V rated switches with either built-in Schottky diodes (lower cost) or used with external Schottky diodes suitable for use in electric drive vehicle traction motor inverters. Specifically, devices produced should show automotive application readiness through passing qualification specifications or standards and high yields. Where possible, applicants should show a relationship to, and demonstrate an understanding of, automotive application requirements and environments. Examples include surface and/or substrate treatments and processing, and compatibility with existing power module packaging and processing. Proposals should also describe the cost of manufacturing SiC switches compared to competing Si switches, including details such as costs and availability of commercial SiC substrates, epi-layers, and additional equipment needed. These costs should be linked to a commercially viable business model for large-scale manufacturing and should approach cost parity with Si switches on a cost-per-amp basis. Variable compression ratio or variable stroke internal combustion engine with real-time controllability. DOE is seeking a commercially viable control system design, including hardware and software, to enable the dynamic control of the compression ratio and/or piston stroke of operating internal combustion engines in passenger vehicles. Applications should propose the development of systems for either variable compression ratio control or variable stroke control. Alternative crank mechanisms for internal combustion engines leading to improved energy efficiency. Reciprocating internal combustion (e.g. gasoline or diesel) engines for automotive applications use slider/crank mechanisms to create torque on an engine's output shaft from forces applied to pistons as a result of the pressure created by the combustion of fuel. While direct mechanical losses of traditional slider/crank mechanisms are small, there is another indirect loss as a consequence of slider/crank use.
Early in an engine's power stroke, cylinder temperatures—and therefore convective and radiative heat losses—all peak. The engine's rate of performing work is still very low at that point, reducing energy efficiency. The net effect may be that slider/crank mechanisms indirectly lead to preventable energy losses and reduced energy efficiency. DOE is seeking projects for the development or demonstration of a functioning prototype of a mass-produced, commercially available reciprocating engine, modified with an alternative mechanical mechanism linking the piston to the engine's output shaft. Reporting must include fuel consumption test results over the entire engine map of the prototype compared with a second, unmodified, otherwise identical engine. All fuel consumption testing must be conducted according to engine industry norms. Statistically valid fuel economy improvements (95% confidence level) of at least 4.0% are desired. Reduction of PGM loading in automotive emission control systems. Modern automotive emission control catalyst systems utilize monolithic flow-through supports coated with high surface area inorganic oxides and, typically, platinum group metals (PGMs). These metals—palladium, platinum and rhodium—are suspended in the washcoat, a refractory oxide layer bonded to a ceramic or metal support surface. The North American market has the most stringent emissions regulations for the passenger car sector. The tightening of emissions standards has placed an ever-greater burden on catalyst performance, and compliance has come in part through higher PGM content with associated higher costs and market volatility risks. DOE is seeking strategies for reducing PGM loading in automotive catalyst systems through new techniques for dispersing the PGMs in the washcoat or through complete or partial substitution of PGMs with other, lower cost catalytic materials. Applications may include oxidation catalysts and three-way catalysts for gasoline engines, NOx adsorbers for lean-burn gasoline engines, as well as oxidation catalysts, urea selective catalytic reduction and NOx adsorbers for diesel engines. The prototype catalyst thrifting strategy developed under this subtopic must be capable of reducing PGM loading by 50% in the proposed automotive emission control system application with conversion performance and durability comparable to current production catalyst systems. (For reference, the average loading for three-way catalysts is 1.1 grams PGM per liter of engine displacement.) Co-utilization of CO2 and CH4 to produce biofuel and bioproduct precursors. Biogas is primarily comprised of methane and carbon dioxide. DOE seeks projects to convert the combination of biogenic CO2 and CH4 into higher hydrocarbons (C and above, or C and above with at least one double bond). Proposals that produce syngas, ethanol, or methanol as a final product will be considered non-responsive, although all of those substances are acceptable as process intermediates. Evaluation criteria will include the conversion efficiency of biogenic carbon, total energy balance of the proposed processes, and cost-effectiveness in terms of DOE BETO's 2017 and 2022 strategic targets. While biogas from wet organic waste streams is the primary target of this subtopic, proposals that utilize CO2 streams from other sources may be within scope, provided that they utilize non-photosynthetic biological conversion mechanisms.
While the primary goal of this solicitation is to further the development of drop-in biofuels from wet organic waste streams, proposals that include biochemical precursors as part of the overall value proposition are welcomed.
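As a concrete reading of the PGM-thrifting subtopic above, the sketch below applies the quoted reference loading (1.1 grams of PGM per liter of engine displacement) and the 50% reduction target to an example engine; the 2.0 L displacement and the blended PGM price are illustrative assumptions, not figures from the solicitation.

# Illustrative PGM-loading arithmetic for the catalyst-thrifting subtopic.
# The reference loading and 50% target come from the article; the engine
# displacement and PGM price are assumptions made for this example.
REFERENCE_LOADING_G_PER_L = 1.1   # grams of PGM per liter of displacement (article)
REDUCTION_TARGET = 0.50           # 50% reduction sought (article)

def pgm_mass_g(displacement_l, loading=REFERENCE_LOADING_G_PER_L):
    return displacement_l * loading

engine_l = 2.0            # assumed passenger-car engine displacement
price_usd_per_g = 30.0    # assumed blended PGM price

baseline_g = pgm_mass_g(engine_l)
thrifted_g = baseline_g * (1.0 - REDUCTION_TARGET)
print(f"Baseline: {baseline_g:.2f} g PGM (~${baseline_g * price_usd_per_g:.0f} per vehicle)")
print(f"Thrifted: {thrifted_g:.2f} g PGM (~${thrifted_g * price_usd_per_g:.0f} per vehicle)")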


Terwilliger T.C.,Los Alamos National Laboratory | Bricogne G.,Global Phasing Ltd
Acta Crystallographica Section D: Biological Crystallography | Year: 2014

Accurate crystal structures of macromolecules are of high importance in the biological and biomedical fields. Models of crystal structures in the Protein Data Bank (PDB) are in general of very high quality as deposited. However, methods for obtaining the best model of a macromolecular structure from a given set of experimental X-ray data continue to progress at a rapid pace, making it possible to improve most PDB entries after their deposition by re-analyzing the original deposited data with more recent software. This possibility represents a very significant departure from the situation that prevailed when the PDB was created, when it was envisioned as a cumulative repository of static contents. A radical paradigm shift for the PDB is therefore proposed, away from the static archive model towards a much more dynamic body of continuously improving results in symbiosis with continuously improving methods and software. These simultaneous improvements in methods and final results are made possible by the current deposition of processed crystallographic data (structure-factor amplitudes) and will be supported further by the deposition of raw data (diffraction images). It is argued that it is both desirable and feasible to carry out small-scale and large-scale efforts to make this paradigm shift a reality. Small-scale efforts would focus on optimizing structures that are of interest to specific investigators. Large-scale efforts would undertake a systematic re-optimization of all of the structures in the PDB, or alternatively the redetermination of groups of structures that are either related to or focused on specific questions. All of the resulting structures should be made generally available, along with the precursor entries, with various views of the structures being made available depending on the types of questions that users are interested in answering. © 2014 International Union of Crystallography.


Huang J.,Los Alamos National Laboratory | Liu T.,Hong Kong University of Science and Technology | Wang L.-T.,University of Chicago | Yu F.,Fermi National Accelerator Laboratory
Physical Review Letters | Year: 2014

We reveal a set of novel decay topologies for the 125 GeV Higgs boson in supersymmetry which are initiated by its decay into a pair of neutralinos, and discuss their collider search strategies. This category of exotic Higgs decays is characterized by the collider signature: visible objects plus missing transverse energy, with the missing energy dominantly arising from escaping dark matter particles. Their benchmark arises naturally in the Peccei-Quinn symmetry limit of the minimal supersymmetric standard model singlet extensions, which is typified by the coexistence of three light particles: singletlike scalar h1 and pseudoscalar a1, and singlinolike neutralino χ1, all with masses of ~10 GeV, and by the generic suppression of the exotic decays of the 125 GeV Higgs boson h2 → h1h1, a1a1 and χ1χ1. As an illustration, we study the decay topology h2 → χ1χ2, where the binolike χ2 decays to h1χ1 or a1χ1, and h1/a1 → ff̄, with ff̄ = μ+μ−, bb̄. In the dimuon case (mh1/a1 ~ 1 GeV), a statistical sensitivity of S/√B > 6 can be achieved easily at the 8 TeV LHC, assuming σ(pp → Wh2)/σ(pp → WhSM) × Br(h2 → μ+μ−χ1χ1) = 0.1. In the bb̄ case (mh1/a1 ~ 45 GeV), 600 fb−1 of data at the 14 TeV LHC can lead to a statistical sensitivity of S/√B > 5, assuming σ(pp → Zh2)/σ(pp → ZhSM) × Br(h2 → bb̄χ1χ1) = 0.5. These exotic decays open a new avenue for exploring new physics couplings with the 125 GeV Higgs boson at colliders. © 2014 American Physical Society.


Li Q.,University of Pennsylvania | Dong Y.,Purdue University | Perez D.,Los Alamos National Laboratory | Martini A.,Purdue University | Carpick R.W.,University of Pennsylvania
Physical Review Letters | Year: 2011

The atomic stick-slip behavior of a Pt tip sliding on a Au(111) surface is studied with atomic force microscopy (AFM) experiments and accelerated (i.e., reduced sliding speed) molecular dynamics (MD) simulations. The MD and AFM conditions are controlled to match, as closely as possible, the geometry and orientation, load, temperature, and compliance. We observe clear stick-slip without any damage. Comparison of both MD and AFM results with the thermally activated Prandtl-Tomlinson model shows that MD results at the highest speeds are not in the thermally activated regime. At lower speeds, within the thermally activated regime, AFM and MD provide consistent energetics, but attempt frequencies differ by orders of magnitude. Because this discrepancy lies in attempt frequencies and not energetics, atomistic details in MD simulations can be reliably used in interpreting AFM data if the MD speeds are slow enough. © 2011 American Physical Society.
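For readers less familiar with the Prandtl-Tomlinson picture invoked above, the sketch below integrates the standard one-dimensional model (a tip dragged by a spring across a sinusoidal surface potential, here in the overdamped, athermal limit) and reports the peak and mean lateral forces whose sawtooth time dependence constitutes atomic stick-slip. All parameter values are generic illustrations chosen to sit in the stick-slip regime; they are not the values fitted to the Pt/Au(111) experiments, and thermal activation is deliberately omitted.

import math

# Minimal (athermal, overdamped) Prandtl-Tomlinson model: a tip at position x
# is dragged through a sinusoidal corrugation potential by a spring whose other
# end moves at constant speed.  All parameters are generic illustrations.
a = 0.288e-9          # lattice period, m (of order an Au(111) spacing)
U0 = 0.5 * 1.602e-19  # corrugation amplitude, J (0.5 eV, assumed)
k = 2.0               # effective lateral stiffness, N/m (assumed)
gamma = 2.0e-6        # damping coefficient, kg/s (assumed)
v = 1.0e-6            # support speed, m/s (assumed, of order typical AFM speeds)
dt, steps = 4.0e-9, 500_000

x, spring_forces = 0.0, []
for i in range(steps):
    f_spring = k * (v * i * dt - x)
    f_surface = -U0 * (2.0 * math.pi / a) * math.sin(2.0 * math.pi * x / a)
    x += dt * (f_spring + f_surface) / gamma   # overdamped equation of motion
    spring_forces.append(f_spring)

print(f"Peak (stick) lateral force: {max(spring_forces) * 1e9:.2f} nN")
print(f"Mean friction force:        {sum(spring_forces) / steps * 1e9:.2f} nN")

Adding a Langevin noise term and sweeping the support speed over decades is what brings in the thermally activated behavior analyzed in the paper.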


Clark J.,University of Cambridge | Nelson T.,Los Alamos National Laboratory | Tretiak S.,Los Alamos National Laboratory | Cirmi G.,Massachusetts Institute of Technology | And 2 more authors.
Nature Physics | Year: 2012

Molecular conformational reorganization following photon absorption is a fundamental process driving reactions such as the cis-trans isomerization at the heart of the primary step of vision and can be exploited for switching in artificial systems using photochromics. In general, conformational change occurs on a timescale defined by the energy of the main vibrational mode and the rate of energy dissipation. Typically, for a conformational change such as a twist around the backbone of a conjugated molecule, this occurs on the tens of picoseconds timescale. However, here we demonstrate experimentally that in certain circumstances the molecule, in this case an oligofluorene, can change conformation over two orders of magnitude faster (that is sub-100 fs) in a manner analogous to inertial solvent reorganization demonstrated in the 1990s. Theoretical simulations demonstrate that non-adiabatic transitions during internal conversion can efficiently convert electronic potential energy into torsional kinetic energy, providing the 'kick' that prompts sub-100 fs torsional reorganization. © 2012 Macmillan Publishers Limited. All rights reserved.


Knezevic M.,University of New Hampshire | Drach B.,New Mexico State University | Ardeljan M.,University of New Hampshire | Beyerlein I.J.,Los Alamos National Laboratory
Computer Methods in Applied Mechanics and Engineering | Year: 2014

In this work, we use crystal plasticity finite element (CPFE) models of 2D and 3D polycrystalline microstructures to elucidate 3D topological effects on microstructural evolution during rolling deformation. The important capabilities of our CPFE framework are that it predicts not only texture evolution but also the evolution of intra-grain and inter-grain misorientations, grain shape and grain boundary character distribution. These abilities are possible because both grain structures and grain boundary surfaces are explicitly meshed. Both the 2D and 3D models predict heterogeneous deformation within the grains and across the polycrystal. They also predict similar evolution in grain shape and texture. However, we find that the inter-granular misorientations are higher, the intra-granular misorientations are lower, and the texture evolves faster in 3D compared to 2D, differences which increase with strain level. We attribute these growing differences to the fact that in the 3D microstructure, grains are allowed to reorient both in plane and out of plane to preferred orientations, unlike in 2D. Interestingly, we also find that in the 3D model, the frequency of Σ3 boundaries increases with rolling strain up to the largest strain studied, 1.0. The important 3D effects revealed here can help studies that use CPFE models for understanding microstructural evolution, localization, and damage. © 2014 Elsevier B.V.
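Since the abstract leans on intra-grain and inter-grain misorientations, a compact reminder of how a misorientation angle is obtained from two crystal orientations may be useful: form the misorientation rotation dg = gB gA^T and take theta = arccos((tr dg - 1)/2). The sketch below does exactly that for two arbitrary example orientations; it ignores the cubic symmetry operators that a full CPFE post-processing step would also apply, and the orientations are not taken from the paper.

import math

def rotmat_z(deg):
    # Rotation about the z axis; used only to build example orientations.
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def misorientation_deg(gA, gB):
    # Misorientation dg = gB * gA^T; its angle follows from tr(dg) = 1 + 2*cos(theta).
    dg = matmul(gB, transpose(gA))
    tr = dg[0][0] + dg[1][1] + dg[2][2]
    return math.degrees(math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0))))

grain_A = rotmat_z(10.0)   # example orientations (assumed, not from the paper)
grain_B = rotmat_z(37.5)
print(f"Misorientation: {misorientation_deg(grain_A, grain_B):.1f} degrees")   # 27.5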


Fernandes R.M.,Columbia University | Fernandes R.M.,Los Alamos National Laboratory | Schmalian J.,Karlsruhe Institute of Technology
Superconductor Science and Technology | Year: 2012

We investigate how emergent nematic order and nematic fluctuations affect several macroscopic properties of both the normal and superconducting states of the iron pnictides. Due to its magnetic origin, long-range nematic order enhances magnetic fluctuations, leaving distinctive signatures in the spin-lattice relaxation rate, the spin-spin correlation function, and the uniform magnetic susceptibility. This enhancement of magnetic excitations is also manifested in the electronic spectral function, where a pseudogap can open at the hot spots of the Fermi surface. In the nematic phase, electrons are scattered by magnetic fluctuations that are anisotropic in momentum space, giving rise to a non-zero resistivity anisotropy whose sign changes between electron-doped and hole-doped compounds. We also show that due to the magneto-elastic coupling, nematic fluctuations soften the shear modulus in the normal state, but harden it in the superconducting state. The latter effect is an indirect consequence of the competition between magnetism and superconductivity, and also causes a suppression of the orthorhombic distortion below Tc. We also demonstrate that ferro-orbital fluctuations enhance the nematic susceptibility, cooperatively promoting an electronic tetragonal symmetry-breaking. Finally, we argue that Tc in the iron pnictides might be enhanced due to nematic fluctuations of magnetic origin. © 2012 IOP Publishing Ltd.


Wolfle P.,Karlsruhe Institute of Technology | Dubi Y.,Los Alamos National Laboratory | Balatsky A.V.,Los Alamos National Laboratory
Physical Review Letters | Year: 2010

Recently observed tunneling spectra on clean heavy-fermion compounds show a lattice periodic Fano line shape similar to what is observed in the case of tunneling to a Kondo ion adsorbed at the surface. We show that the translation symmetry of a clean surface in the case of weakly correlated metals leads to a tunneling spectrum which shows a hybridization gap but does not have a Fano line shape. By contrast, in a strongly correlated heavy-fermion metal the heavy quasiparticle states will be broadened by interaction effects. The hybridization gap is completely filled in this way, and an ideal Fano line shape of width ∼2TK results. In addition, we discuss the possible influence of the tunneling tip on the surface, in (i) leading to additional broadening of the Fano line and (ii) enhancing the hybridization locally, hence adding to the impurity type behavior. The latter effects depend on the tip-surface distance. © 2010 The American Physical Society.
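The Fano line shape at the center of this discussion has a simple closed form that is handy for comparison with tunneling spectra: F(eps) = (q + eps)^2 / (1 + eps^2), with reduced energy eps = (E - E0)/Gamma and asymmetry parameter q. The short sketch below tabulates it for one arbitrary parameter choice; the values of q, E0 and Gamma are illustrative and not taken from the paper.

def fano(E, q, E0, gamma):
    # Normalized Fano profile F = (q + eps)**2 / (1 + eps**2), eps = (E - E0) / gamma.
    eps = (E - E0) / gamma
    return (q + eps) ** 2 / (1.0 + eps ** 2)

# Illustrative parameters: resonance at E0 = 0 meV, half-width gamma = 5 meV, q = 2.
q, E0, gamma = 2.0, 0.0, 5.0
for E in range(-20, 21, 5):   # sample energies in meV
    print(f"E = {E:+4d} meV   F = {fano(E, q, E0, gamma):6.3f}")

The asymmetry, a dip on one side of the resonance and a peak on the other, is what distinguishes the heavy-fermion spectra described above from a plain hybridization-gap line shape.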


Zou G.F.,Soochow University of China | Zhao J.,Soochow University of China | Luo H.M.,New Mexico State University | McCleskey T.M.,Los Alamos National Laboratory | And 2 more authors.
Chemical Society Reviews | Year: 2013

In this tutorial article, the recent development of polymer assisted deposition (PAD) for the growth of a wide range of materials, in particular in thin films, is reviewed. Specifically, we describe the unique chemistry and processes of PAD for the deposition of metals, metal-oxides, metal-nitrides, metal-carbides, and their derived composites. Many examples are given not only to illustrate the power of PAD for high quality coatings, but also to give readers an opportunity to evaluate this technique for specific applications. The challenging issues related to PAD, based on the authors' experience, are also discussed in this review article. © The Royal Society of Chemistry 2013.


Liu L.,Beihang University | Wang J.,Los Alamos National Laboratory | Gong S.K.,Beihang University | Mao S.X.,University of Pittsburgh
Physical Review Letters | Year: 2011

Using high-resolution transmission electron microscopy in polycrystalline Ag films, we have observed a new deformation-twinning mechanism: zero-strain twinning via the nucleation and migration of a Σ3{112} incoherent twin boundary (ITB). This twinning mechanism produces a near zero macroscopic strain because the net Burgers vectors either equal zero or are equivalent to a Shockley partial dislocation. This observation provides new insight into the understanding of deformation twinning and confirms a previous hypothesis: detwinning could be accomplished via the nucleation and migration of Σ3{112} ITBs. The zero-strain twinning mechanism may be unique to low stacking fault energy metals, with implications for their deformation behavior. © 2011 American Physical Society.
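The statement that the net Burgers vector can be exactly zero has a textbook check: the three Shockley partials of an fcc {111} plane, b1 = a/6[-2 1 1], b2 = a/6[1 -2 1] and b3 = a/6[1 1 -2], sum to zero, so a boundary built from a repeating set of all three carries no net shear. The sketch below is that arithmetic only; it is generic fcc crystallography used to illustrate the abstract's point, not a calculation reproduced from the paper.

from fractions import Fraction

# Sum of the three Shockley partial Burgers vectors on an fcc {111} plane,
# expressed in units of the lattice parameter a.
partials = [
    (Fraction(-2, 6), Fraction(1, 6), Fraction(1, 6)),   # a/6 [-2  1  1]
    (Fraction(1, 6), Fraction(-2, 6), Fraction(1, 6)),   # a/6 [ 1 -2  1]
    (Fraction(1, 6), Fraction(1, 6), Fraction(-2, 6)),   # a/6 [ 1  1 -2]
]

net = [sum(b[i] for b in partials) for i in range(3)]
print("Net Burgers vector (units of a):", [str(component) for component in net])   # ['0', '0', '0']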


Ullah G.,Los Alamos National Laboratory | Mak D.-O.D.,University of Pennsylvania | Pearson J.E.,Los Alamos National Laboratory
Journal of General Physiology | Year: 2012

The inositol 1,4,5-trisphosphate (IP3) receptor (IP3R) channel is crucial for the generation and modulation of intracellular Ca2+ signals in animal cells. To gain insight into the complicated ligand regulation of this ubiquitous channel, we constructed a simple quantitative continuous-time Markov-chain model from the data. Our model accounts for most experimentally observed gating behaviors of single native IP3R channels from insect Sf9 cells. Ligand (Ca2+ and IP3) dependencies of channel activity established six main ligand-bound channel complexes, where a complex consists of one or more states with the same ligand stoichiometry and open or closed conformation. Channel gating in three distinct modes added one complex and indicated that three complexes gate in multiple modes. This also restricted the connectivity between channel complexes. Finally, latencies of channel responses to abrupt ligand concentration changes defined a model with specific network topology between 9 closed and 3 open states. The model with 28 parameters can closely reproduce the equilibrium gating statistics for all three gating modes over a broad range of ligand concentrations. It also captures the major features of channel response latency distributions. The model can generate falsifiable predictions of IP3R channel gating behaviors and provide insights to both guide future experiment development and improve IP3R channel gating analysis. Maximum likelihood estimates of the model parameters and of the parameters in the De Young-Keizer model yield strong statistical evidence in favor of our model. Our method is simple and easily applicable to the dynamics of other ion channels and molecules. © 2012 Ullah et al.
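As a generic illustration of what a continuous-time Markov-chain gating model of the kind described here looks like in practice (and emphatically not the authors' 12-state, 28-parameter IP3R scheme), the sketch below runs a Gillespie simulation of a toy three-state channel, C1 <-> C2 <-> O, and reports its open probability; all rate constants are arbitrary.

import random

# Toy continuous-time Markov chain for channel gating: C1 <-> C2 <-> O.
# The rate constants (1/s) are arbitrary illustrations, not fitted IP3R values.
RATES = {
    "C1": {"C2": 50.0},
    "C2": {"C1": 20.0, "O": 10.0},
    "O":  {"C2": 80.0},
}

def gillespie_open_probability(t_end, state="C1", seed=1):
    rng = random.Random(seed)
    t, open_time = 0.0, 0.0
    while t < t_end:
        transitions = RATES[state]
        total_rate = sum(transitions.values())
        dwell = min(rng.expovariate(total_rate), t_end - t)   # exponential dwell time
        if state == "O":
            open_time += dwell
        t += dwell
        # pick the next state with probability proportional to its rate
        r, acc = rng.random() * total_rate, 0.0
        for next_state, rate in transitions.items():
            acc += rate
            if r <= acc:
                state = next_state
                break
    return open_time / t_end

print(f"Simulated open probability: {gillespie_open_probability(200.0):.3f}")

Fitting such a model to data, as the paper does, amounts to choosing the state graph and estimating the rate constants by maximum likelihood from recorded dwell times.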


Huang J.,Los Alamos National Laboratory | Liu T.,Hong Kong University of Science and Technology | Wang L.-T.,University of Chicago | Yu F.,Fermi National Accelerator Laboratory
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2014

We continue our exploration of the nearly Peccei-Quinn symmetric limit shared by common singlet extensions of the minimal supersymmetric standard model. This limit has been established as a viable framework for studying subelectroweak scale dark matter phenomenology and has interesting and direct connections to new exotic Higgs decay physics. We present analytic calculations to motivate the important phenomenological features mentioned above. We also discuss benchmark points in this model framework that accommodate the observed Galactic center gamma-ray excess. We emphasize connections between phenomenology of dark matter direct detection and indirect detection, and new exotic decay channels for the 125 GeV Higgs boson. We conclude by identifying two benchmark modes of exotic Higgs decays, h → τ+τ- + missing ET and h → bb̄ + missing ET final states, and estimate their sensitivity prospects at the LHC. © 2014 American Physical Society.


Li Q.,Los Alamos National Laboratory | Cao R.,Ulsan National Institute of Science and Technology | Cho J.,Ulsan National Institute of Science and Technology | Wu G.,Los Alamos National Laboratory
Advanced Energy Materials | Year: 2014

Alkaline oxygen electrocatalysis, targeting anion exchange membrane fuel cells, Zn-air batteries, and alkaline-based Li-air batteries, has become a subject of intensive investigation because of its advantages compared to its acidic counterparts in reaction kinetics and materials stability. However, significant breakthroughs in the design and synthesis of efficient oxygen reduction catalysts from earth-abundant elements instead of precious metals in alkaline media remain in high demand. Carbon composite materials have been recognized as the most promising because of their reasonable balance between catalytic activity, durability, and cost. In particular, heteroatom (e.g., N, S, B, or P) doping can tune the electronic and geometric properties of carbon, providing more active sites and enhancing the interaction between carbon structure and active sites. Importantly, involvement of transition metals appears to be necessary for achieving high catalytic activity and improved durability by catalyzing carbonization of nitrogen/carbon precursors to form highly graphitized carbon nanostructures with more favorable nitrogen doping. Recently, a synergetic effect was found between the active species in nanocarbon and the loaded oxides/sulfides, resulting in much improved activity. This report focuses on these carbon composite catalysts. Guidance for rational design and synthesis of advanced alkaline ORR catalysts with improved activity and performance durability is also presented. The development of highly efficient catalysts from earth-abundant elements rather than precious metals is crucial for advanced electrochemical energy conversion and storage technologies. Recent breakthroughs in the nanocarbon composite cathode catalysts in alkaline-based fuel cells and metal-air batteries are summarized and guidance for the rational design and synthesis of advanced catalysts with improved activity and performance durability is presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Liu X.H.,Sandia National Laboratories | Huang S.,Georgia Institute of Technology | Picraux S.T.,Los Alamos National Laboratory | Li J.,University of Pennsylvania | And 3 more authors.
Nano Letters | Year: 2011

Retaining the high energy density of rechargeable lithium ion batteries depends critically on the cycle stability of microstructures in electrode materials. We report the reversible formation of nanoporosity in individual germanium nanowires during lithiation-delithiation cycling by in situ transmission electron microscopy. Upon lithium insertion, the initial crystalline Ge underwent a two-step phase transformation process: forming the intermediate amorphous LixGe and final crystalline Li15Ge4 phases. Nanopores developed only during delithiation, involving the aggregation of vacancies produced by lithium extraction, similar to the formation of porous metals in dealloying. A delithiation front was observed to separate a dense nanowire segment of crystalline Li15Ge4 from a porous spongelike segment composed of interconnected ligaments of amorphous Ge. This front sweeps along the wire with a logarithmic time law. Intriguingly, the porous nanowires exhibited fast lithiation/delithiation rates and excellent mechanical robustness, attributed to the high rate of lithium diffusion and the porous network structure for facile stress relaxation, respectively. These results suggest that Ge, which can develop a reversible nanoporous network structure, is a promising anode material for lithium ion batteries with superior energy capacity, rate performance, and cycle stability. © 2011 American Chemical Society.


Lee H.,New Mexico State University | Rahn T.,Los Alamos National Laboratory | Throop H.,New Mexico State University
Global Change Biology | Year: 2012

Recent studies showed that photochemical breakdown (photodegradation) of plant material accounts for a substantial portion of litter decomposition and subsequent trace gas release in ecosystems under high radiative load and low precipitation. In the absence of solar radiation, thermal degradation may also cause trace gas release at temperatures below the ignition point. These observations suggest that the abiotic processes of photodegradation and thermal degradation of plant litter may be important in understanding global trace gas budgets. In a laboratory incubation study, we performed a simultaneous carbon (C) accounting of CO2, CO, and CH4 produced as a byproduct of photodegradation and thermal degradation of six different plant litter types that varied in chemical composition. The patterns of trace gas release during photodegradation and thermal degradation differed considerably across the six plant materials, suggesting that chemical composition of litter may influence the rates of abiotic degradation. There was a strong positive correlation between the rates of trace gas release during photodegradation and temperature. A significant portion of trace gases was produced during low-temperature (<100 °C) thermal degradation of litter in the absence of solar radiation, which was also positively correlated with temperature. In addition, both thermal degradation and photodegradation occurred in the absence of O2. This indicates that the mechanism formerly accepted as photo-oxidation may be only one of several photodegradation processes. We speculate that the direct breakdown of chemical groups such as carboxyl, carbonyl, and methoxyl groups may result in CO2, CO, and CH4 release. We suggest that the combined processes of thermal degradation and photodegradation of litter may be a previously under-accounted source of C-based trace gases from terrestrial systems. © 2011 Blackwell Publishing Ltd.


Grant
Agency: Department of Defense | Branch: Missile Defense Agency | Program: STTR | Phase: Phase I | Award Amount: 99.27K | Year: 2013

An emerging consensus in engineering holds that aleatory uncertainty should be propagated by traditional methods of probability theory but that epistemic uncertainty may require methods that do not confuse incertitude with variability by requiring every possibility be associated with a probability that it occurs. Therefore, although Monte Carlo shells that re-run calculations many times while varying input values according to specified distributions are useful, they are insufficient to properly account for epistemic and aleatory uncertainty. Three things are needed to build on the new consensus. The first is a clear and comprehensive roadmap for how various important problems (e.g., arithmetic and logical evaluations, backcalculations, sensitivity analyses, etc.) can be solved under this view. The second is software that enables analysts to automatically propagate through simulations the uncertainty implied by the significant digits of the numbers specifying a model. Such software would require very little training in uncertainty analysis to be useful to analysts. The third need is a software library of recommended methods for common calculations that is usable by modelers and analysts who may not themselves be experts in uncertainty quantification but who recognize the need for and benefits from it.
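As a minimal illustration of the distinction drawn above (not the proposed software), the sketch below propagates the same simple model, y = a * x, two ways: carrying a rigorous interval for a poorly known coefficient (incertitude) versus sampling everything in a Monte Carlo loop (variability). The model, bounds, and distributions are invented for the example.

```python
# Minimal sketch contrasting epistemic (interval) and aleatory (Monte Carlo)
# propagation through y = a * x. All numbers are assumed for illustration.
import random

def propagate_interval(a_lo, a_hi, x_lo, x_hi):
    """Epistemic: carry rigorous bounds, assigning no probabilities.
    Corner evaluation is exact here because y = a*x is monotone when all
    bounds are positive."""
    corners = [a * x for a in (a_lo, a_hi) for x in (x_lo, x_hi)]
    return min(corners), max(corners)

def propagate_monte_carlo(a_lo, a_hi, x_mean, x_sd, n=100_000):
    """Aleatory treatment that also (questionably) turns the interval on `a`
    into a uniform distribution, i.e., the shortcut of treating incertitude
    as if it were variability."""
    ys = sorted(random.uniform(a_lo, a_hi) * random.gauss(x_mean, x_sd)
                for _ in range(n))
    return ys[int(0.025 * n)], ys[int(0.975 * n)]   # central 95% interval

if __name__ == "__main__":
    random.seed(0)
    print("rigorous bounds :", propagate_interval(2.0, 3.0, 9.0, 11.0))
    print("Monte Carlo 95% :", propagate_monte_carlo(2.0, 3.0, 10.0, 0.5))
```

In this toy case the Monte Carlo interval comes out narrower than the rigorous bounds, which is exactly the kind of false confidence the hybrid methods described in the proposal are meant to avoid.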


Patent
Los Alamos National Laboratory and University of Maine, France | Date: 2015-06-19

The present disclosure relates to compositions and methods for increasing the leaf-to-root ratio of the signal metabolite 2-oxoglutaramate and related proline molecules in plants by modulating levels of ω-amidase to increase nitrogen use efficiency, resulting in enhanced growth, faster growth rates, greater seed and fruit/pod yields, earlier and more productive flowering, increased tolerance to high salt conditions, and increased biomass yields.


News Article | August 30, 2016
Site: phys.org

"This research combines theory for leaf energy flows with globally distributed temperature data for diverse plant taxa to show that leaves generally do not match air temperature, but instead thermoregulate," said Sean Michaletz, a plant ecologist at Los Alamos National Laboratory, which led the study. Los Alamos studies and models climate change and related impacts as part of its mission to maintain the nation's energy security. "The end result is that leaves are generally warmer than air in cold temperatures, and cooler than air in warm temperatures." In the paper recently published in Nature Plants, Michaletz and the team developed a novel theory that combined energy budgets, which account for incoming and outgoing thermal energy fluxes in a leaf, with the carbon economics theory, which posits that leaf form and function are ultimately constrained by the efficiency of the leaf's structure in processing carbon. By synthesizing these theories, the team showed how leaf thermoregulation helps to maximize leaf photosynthesis and, therefore, the total lifetime carbon gain of a leaf. The team's theory is key to developing a more quantitative plant ecology that examines the origins of leaf thermoregulation, or the process whereby leaf temperature varies from ambient air temperature. Their research shows that plant functions are decoupled from ambient temperatures, a finding that will support improved climate models. Most plants photosynthesize, converting light energy and carbon dioxide from the atmosphere into sugars that become leaves, stems and roots. Leaf thermoregulation is critical for plant carbon economics because leaf temperatures determine the speed of photosynthesis and respiration. Because it is generally assumed that plants take on the temperature of the environment, many current Earth system models for predicting plant-atmosphere feedbacks assume that plant physiology operates at the ambient air temperature. However, data from the team's study show that leaf temperature can differ dramatically from air temperatures. This decoupling weakens the link between the climate and plant functions, limiting climactic impacts on plant growth and the carbon budgets of an ecosystem. Michaletz, McDowell and their colleagues at the Laboratory conduct this research to help identify linkages among climate, plant traits and plant physiology rates. These recent research results may help to improve Earth system models and predict climate change impacts and feedbacks. Michaletz, a Los Alamos Director's Postdoctoral Fellow, and his mentor Nate McDowell published their Nature Plants paper in collaboration with authors from the University of Arizona, University of Oklahoma, Tsinghua University, Lawrence Berkeley Laboratory, Smithsonian Tropical Research Institute, University of Pennsylvania, The Santa Fe Institute, The iPlant Collaborative, Aspen Center for Environmental Studies. More information: Sean T. Michaletz et al, The energetic and carbon economic origins of leaf thermoregulation, Nature Plants (2016). DOI: 10.1038/nplants.2016.129


News Article | November 10, 2016
Site: phys.org

When Z fires, its huge electromagnetic field crushes pre-warmed fuel, forcing it to fuse. Tritium-enriched fuel should release many more neutrons than previous maximums at Z, already among the highest in the world. "This thing about creating energy where none existed before—we don't yet have a bonfire, but we're squirting starter on the grill," said Mike Cuneo, senior manager of Sandia's Pulsed Power Accelerator Science and Technology group. The output of Z has been used over decades to provide information for computer simulations that test the readiness of America's nuclear stockpile without exploding an actual weapon. It's also used by astrophysicists using the machine's momentarily astounding pressures and temperatures to understand conditions in stars and the cores of planets. And some hope that pressures created mainly by electricity and magnetism one day may reach nuclear fusion conditions suitable for energy production; this condition is called "high yield." The introduction of tritium is of high technical interest because a 50/50 mix of tritium and deuterium—the two isotopes of hydrogen—emits 80 times more neutrons, and 500 times more energy, than deuterium alone. Energy from deuterium—in a manner of speaking, a relatively low-octane fuel—has been the upper limit on output at Z. But it's still early days. A dry run in July, testing containment hardware and instrumentation, preceded Z's first tritium experiment three weeks later, when a fraction of a percent was cautiously introduced into the experiment's fuel. "We're going to crawl before we walk and run," said Cuneo. "We will gradually increase that fraction in contained experiments as we go." Only two other Department of Energy-supported, high-energy-density research sites, at Lawrence Livermore National Laboratory and the Laboratory for Laser Energetics at the University of Rochester, had been approved to use tritium, a potential environmental hazard. The Sandia experiments use electromagnetics to smash Z's more massive target and its entire target support area like they were hit by a sledgehammer. Unlike the laser facilities, the  Z chamber must be entered by personnel after each experiment to refurbish the facility for the next experiment. Under those conditions, introducing tritium into the target requires extreme care and forethought in the design, transport and containment of tritium to meet rigorous safety standards. "Tritium's like sand at the beach, it gets into everything," said Cuneo. "So for now, we can't let it go anywhere." The isotope is a small molecule with a lot of mobility, and the first big hurdle, he says, is to make sure the radioactive material with its 12-year half-life doesn't migrate to the million-gallon pools of water and oil that insulate Z's pulsed power components. "Laser facilities don't have these pools," he pointed out. Tritium also could bond to the metal walls of Z's central area, presenting a potential radioactive hazard where technicians enter daily to scrub after each shot. However, using the same unique design that has contained plutonium on more than a dozen previous Z shots, no tritium was released. Nearly 100 Sandia personnel contributed directly to the effort, funded through Sandia's Laboratory Directed Research and Development program. Also participating were researchers from General Atomics, Los Alamos National Laboratory, the University of New Mexico and Utah State University. Future work will be funded by the National Nuclear Security Administration (NNSA). 
"There was a high level of integration on facility containment and radiation protection, to do it right," said Brent Jones, facility integration lead. "The Sandia-California gas transfer group, with decades of experience dealing with tritium, developed a method of housing, delivering and containing the material. They built a device that could load a small but defined quantity of tritium; the neutron generator people filled the target with tritium; and the plutonium confinement folks contributed their shot expertise." The team now must evaluate whether tritium can be used safely in uncontained experiments, their ultimate goal. Confined tests can evaluate the compatibility of tritium with Z's materials and pressures, but don't accurately measure fusion outputs. "The use of contained tritium on Z is the first step on this journey," said Cuneo. "There is much more work to do. "Similar to what is done at the laser [fusion] facilities, one idea [for an uncontained experiment] is to purge the tritium immediately after a shot so that it doesn't stick to the walls of the Z chamber. We need to be able to efficiently purge the center section back to a safe level before technicians enter to refurbish it." Uncontained experiments will begin with very small levels of tritium and gradually ramp up in a several-year process. "We hope to find that we will able to safely handle 1-3 percent tritium in uncontained experiments, enough to advance Inertial Confinement Fusion applications, other weapons science applications and neutron effects testing," Cuneo said. It will be at least three years before experiments approach the 50/50 mix of tritium and deuterium, depending on funding and Sandia and NNSA priorities for Z. Explore further: Scientists create method to obtain the most precise data for thermonuclear reactors


News Article | February 17, 2017
Site: www.rdmag.com

You may not realize it, but alien subatomic particles raining down from outer space are wreaking low-grade havoc on your smartphones, computers and other personal electronic devices. When your computer crashes and you get the dreaded blue screen or your smartphone freezes and you have to go through the time-consuming process of a reset, most likely you blame the manufacturer: Microsoft or Apple or Samsung. In many instances, however, these operational failures may be caused by the impact of electrically charged particles generated by cosmic rays that originate outside the solar system. "This is a really big problem, but it is mostly invisible to the public," said Bharat Bhuva, professor of electrical engineering at Vanderbilt University, in a presentation on Friday, Feb. 17 at a session titled "Cloudy with a Chance of Solar Flares: Quantifying the Risk of Space Weather" at the annual meeting of the American Association for the Advancement of Science in Boston. When cosmic rays traveling at fractions of the speed of light strike the Earth's atmosphere, they create cascades of secondary particles including energetic neutrons, muons, pions and alpha particles. Millions of these particles strike your body each second. Despite their numbers, this subatomic torrent is imperceptible and has no known harmful effects on living organisms. However, a fraction of these particles carry enough energy to interfere with the operation of microelectronic circuitry. When they interact with integrated circuits, they may alter individual bits of data stored in memory. This is called a single-event upset or SEU. Since it is difficult to know when and where these particles will strike and they do not do any physical damage, the malfunctions they cause are very difficult to characterize. As a result, determining the prevalence of SEUs is not easy or straightforward. "When you have a single bit flip, it could have any number of causes. It could be a software bug or a hardware flaw, for example. The only way you can determine that it is a single-event upset is by eliminating all the other possible causes," Bhuva explained. There have been a number of incidents that illustrate how serious the problem can be, Bhuva reported. For example, in 2003 in the town of Schaerbeek, Belgium, a bit flip in an electronic voting machine added 4,096 extra votes to one candidate. The error was only detected because it gave the candidate more votes than were possible and it was traced to a single bit flip in the machine's register. In 2008, the avionics system of a Qantas passenger jet flying from Singapore to Perth appeared to suffer from a single-event upset that caused the autopilot to disengage. As a result, the aircraft dove 690 feet in only 23 seconds, injuring about a third of the passengers seriously enough to cause the aircraft to divert to the nearest airstrip. In addition, there have been a number of unexplained glitches in airline computers - some of which experts feel must have been caused by SEUs - that have resulted in cancellation of hundreds of flights resulting in significant economic losses. An analysis of SEU failure rates for consumer electronic devices performed by Ritesh Mastipuram and Edwin Wee at Cypress Semiconductor on a previous generation of technology shows how prevalent the problem may be. 
Their results were published in 2004 in Electronic Design News and provided the following estimates: Bhuva is a member of Vanderbilt's Radiation Effects Research Group, which was established in 1987 and is the largest academic program in the United States that studies the effects of radiation on electronic systems. The group's primary focus was on military and space applications. Since 2001, the group has also been analyzing radiation effects on consumer electronics in the terrestrial environment. They have studied this phenomenon in the last eight generations of computer chip technology, including the current generation that uses 3D transistors (known as FinFET) that are only 16 nanometers in size. The 16-nanometer study was funded by a group of top microelectronics companies, including Altera, ARM, AMD, Broadcom, Cisco Systems, Marvell, MediaTek, Renesas, Qualcomm, Synopsys, and TSMC. "The semiconductor manufacturers are very concerned about this problem because it is getting more serious as the size of the transistors in computer chips shrink and the power and capacity of our digital systems increase," Bhuva said. "In addition, microelectronic circuits are everywhere and our society is becoming increasingly dependent on them." To determine the rate of SEUs in 16-nanometer chips, the Vanderbilt researchers took samples of the integrated circuits to the Irradiation of Chips and Electronics (ICE) House at Los Alamos National Laboratory. There they exposed them to a neutron beam and analyzed how many SEUs the chips experienced. Experts measure the failure rate of microelectronic circuits in a unit called a FIT, which stands for failure in time. One FIT is one failure per transistor in one billion hours of operation. That may seem infinitesimal, but it adds up extremely quickly with billions of transistors in many of our devices and billions of electronic systems in use today (the number of smartphones alone is in the billions). Most electronic components have failure rates measured in hundreds and thousands of FITs. "Our study confirms that this is a serious and growing problem," said Bhuva. "This did not come as a surprise. Through our research on radiation effects on electronic circuits developed for military and space applications, we have been anticipating such effects on electronic systems operating in the terrestrial environment." Although the details of the Vanderbilt studies are proprietary, Bhuva described the general trend that they have found in the last three generations of integrated circuit technology: 28-nanometer, 20-nanometer and 16-nanometer. As transistor sizes have shrunk, they have required less and less electrical charge to represent a logical bit. So the likelihood that one bit will "flip" from 0 to 1 (or 1 to 0) when struck by an energetic particle has been increasing. This has been partially offset by the fact that as the transistors have gotten smaller, they have become smaller targets, so the rate at which they are struck has decreased. More significantly, the current generation of 16-nanometer circuits has a 3D architecture that replaced the previous 2D architecture and has proven to be significantly less susceptible to SEUs. Although this improvement has been offset by the increase in the number of transistors in each chip, the failure rate at the chip level has also dropped slightly. However, the increase in the total number of transistors being used in new electronic systems has meant that the SEU failure rate at the device level has continued to rise. 
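As a rough worked example of how FIT rates compound (all numbers here are assumed for illustration and are not the Vanderbilt team's proprietary results):

```python
# Back-of-the-envelope FIT arithmetic. 1 FIT = 1 failure per billion hours
# of operation, so rates scale with the number of parts and the fleet size.
# Every number below is a hypothetical placeholder.
fit_per_chip  = 500           # assumed soft-error rate of one chip, in FIT
chips_per_box = 4             # assumed chips per device
fleet_size    = 1_000_000     # assumed number of deployed devices

device_fit = fit_per_chip * chips_per_box                 # 2,000 FIT per device
fleet_failures_per_hour = device_fit * fleet_size / 1e9   # 2 upsets per hour

print(f"per device : {device_fit} FIT")
print(f"whole fleet: {fleet_failures_per_hour:.1f} upsets/hour "
      f"(~{fleet_failures_per_hour * 24:.0f} per day)")
```

A rate that looks negligible for a single part becomes a steady stream of upsets once millions of multi-chip devices are in the field, which is the point Bhuva makes above.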
Unfortunately, it is not practical to simply shield microelectronics from these energetic particles. For example, it would take more than 10 feet of concrete to keep a circuit from being zapped by energetic neutrons. However, there are ways to design computer chips to dramatically reduce their vulnerability. For cases where reliability is absolutely critical, you can simply design the processors in triplicate and have them vote. Bhuva pointed out: "The probability that SEUs will occur in two of the circuits at the same time is vanishingly small. So if two circuits produce the same result it should be correct." This is the approach that NASA used to maximize the reliability of spacecraft computer systems. The good news, Bhuva said, is that the aviation, medical equipment, IT, transportation, communications, financial and power industries are all aware of the problem and are taking steps to address it. "It is only the consumer electronics sector that has been lagging behind in addressing this problem." The engineer's bottom line: "This is a major problem for industry and engineers, but it isn't something that members of the general public need to worry much about."
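The benefit of the triplicate-and-vote approach can be seen with a line of arithmetic: if each copy is upset independently with some small probability, the voted result is wrong only when at least two copies are upset at once. A hedged sketch with an assumed per-copy probability:

```python
# Triple modular redundancy (TMR) arithmetic with an assumed, illustrative
# per-copy upset probability p for a given computation window.
p = 1e-6                               # assumed single-copy upset probability
p_tmr = 3 * p**2 * (1 - p) + p**3      # two or three copies wrong -> bad vote

print(f"single copy : {p:.1e}")
print(f"voted triple: {p_tmr:.1e}")    # roughly 3e-12, vastly smaller
```

With independent upsets, the voted failure probability scales as roughly 3p^2, which is why the "vanishingly small" characterization in the article holds.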


News Article | June 29, 2016
Site: www.techtimes.com

Was planet Mars once more Earth-like than we previously believed? It may, for instance, have had an atmosphere abundant with oxygen, according to new data from NASA's Curiosity rover. The Chemistry and Camera (ChemCam) instrument, a laser-firing instrument on Curiosity, detected high levels of manganese oxides in Martian rocks, which can be tied to an era when groundwater existed in the Gale Crater area under study. The hint adds to other findings — including evidence of ancient lakes — that reveal how Earth-like the Red Planet once was. "The only ways on Earth that we know how to make these manganese materials involve atmospheric oxygen or microbes," said lead author and planetary scientist Nina Lanza from New Mexico's Los Alamos National Laboratory. "Now we're seeing manganese oxides on Mars, and we're wondering how the heck these could have formed?" According to the researchers, microbes are not the likely reason behind the manganese oxides. High-manganese materials cannot form without abundant liquid water and strongly oxidizing conditions, Lanza explained, noting that Earth has had plenty of water but did not form widespread manganese oxide deposits until after its atmospheric oxygen levels rose. These minerals mark a major transition in Earth's atmosphere, from quite low oxygen levels to the oxygen-rich atmosphere it has today. On Mars, the presence of the same materials suggests that its atmosphere once held more oxygen than it does today — but how did those conditions arise? One likely factor is the breakdown of water as the planet was losing its magnetic field, since the high oxygen concentrations date to a period in Martian history when water was far more plentiful. Without a magnetic field shielding the surface, ionizing radiation began splitting water molecules into hydrogen and oxygen. Because Mars has relatively weak gravity, it could not hold on to the very light hydrogen atoms but retained the heavier oxygen. Some of that oxygen went into the rocks, forming the iron oxides that give Mars its familiar rusty reddish color. While iron oxides do not require much oxygen to form, manganese oxides do: their presence marks a time when the environment was strongly oxidizing and atmospheric oxygen was relatively abundant. Lanza cautioned that this oxygen scenario for the Martian atmosphere is difficult to confirm, but it could enrich our understanding of how planetary atmospheres become oxygenated. High atmospheric oxygen levels can be a biosignature, or sign of life, but the process itself does not necessarily require life. The findings were detailed in the journal Geophysical Research Letters. Data sent back by the Curiosity rover also recently showed evidence of tridymite on Mars — a surprising discovery given that, unlike Earth, Mars does not have shifting plates leading to volatile volcanoes and massive earthquakes. © 2016 Tech Times, All rights reserved. Do not reproduce without permission.


News Article | November 29, 2016
Site: www.eurekalert.org

LOS ALAMOS, N.M., Nov. 29, 2016 -- A new bioinformatics platform called Empowering the Development of Genomics Expertise (EDGE) will help democratize the genomics revolution by allowing users with limited bioinformatics expertise to quickly analyze and interpret genomic sequence data. Researchers at Los Alamos National Laboratory and their collaborators at the Naval Medical Research Center developed EDGE, which is described in a paper recently published in Nucleic Acids Research. "We realized that while next-generation sequencing instruments are becoming more widespread and more accessible to the average biologist or physician, the bioinformatics tools required to process and analyze the data were not as user-friendly or accessible," said Patrick Chain, of Los Alamos' Biosecurity and Public Health group and EDGE team lead. "Given the large number of applications where sequencing is now used, a robust bioinformatics platform that encapsulates a broad array of algorithms is required to help address questions a researcher may have. We sought to develop a web-based environment where non-bioinformatics experts could easily select what pipelines they need and rapidly obtain results and interact with their data." Stopping the spread of disease -- from naturally occurring or manmade threats -- requires an in-depth understanding of pathogens and how they work. To this end, the ability to characterize organisms through accurately and rapidly comparing genomic data is an important part of Los Alamos' national security mission. Technology advancements have fueled the development of new sequencing applications and will flood current databases with raw data. A number of factors limit the use of these data, including the large number of associated software and hardware dependencies and the detailed expertise required to perform this analysis. To address these issues, Chain and his team have developed an intuitive web-based environment with a wide assortment of integrated and pioneering bioinformatics tools in pre-configured workflows, all of which can be readily applied to isolate genome sequencing projects or metagenomics projects. EDGE is a user-friendly and open-source platform that integrates hundreds of cutting-edge tools and helps reduce data analysis times from days or weeks to minutes or hours. The workflows in EDGE, along with its ease of use, provide novice next-generation sequencing users with the ability to perform many complex analyses with only a few mouse clicks. This bioinformatics platform is described as an initial attempt at empowering the development of genomics expertise, as its name suggests, for a wide range of applications in microbial research. EDGE has already helped streamline data analysis for groups in Thailand, Georgia, Peru, South Korea, Gabon, Uganda, Egypt and Cambodia, as well as within several government laboratories in the United States. The paper "Enabling the democratization of the genomics revolution with a fully integrated web-based bioinformatics platform" was published in Nucleic Acids Research in partnership with the Defense Threat Reduction Agency, the Naval Medical Research Center-Frederick and the Henry M. Jackson Foundation. Patrick Chain earned his master of science in microbial genomics from McMaster University and his doctoral degree in molecular microbiology and molecular genetics at Michigan State University. 
He is currently leading the Bioinformatics and Analytics Team and the Metagenomics Program within the Biosecurity and Public Health group at Los Alamos National Laboratory. His background is in microbial ecology, evolution, genomics and bioinformatics, having spent the past 20 years using genomics to study various microbial systems, including the human microbiome, other environmental metagenomic communities, various isolate microbes or single cells, including bacterial and viral pathogens as well as fungal, algal, plant and animal systems. He currently leads a team of researchers whose charge is to devise novel methods, algorithms and strategies for the biological interpretation of massively parallel sequencing data. Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, BWXT Government Group, and URS, an AECOM company, for the Department of Energy's National Nuclear Security Administration. Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.


News Article | November 11, 2015
Site: www.scientificcomputing.com

WASHINGTON (AP) — More than 70 years ago, scientists working in secret created the atomic bomb that ended World War II and ushered the world into the nuclear age. On November 10, 2015, at a ceremony in a federal building where clandestine plans for the bomb were developed just blocks from the White House, Interior Secretary Sally Jewell and Energy Secretary Ernest Moniz formally established the Manhattan Project National Historical Park. The park preserves three sites where work on the bomb was completed: Oak Ridge, Tennessee; Hanford, Washington; and Los Alamos, New Mexico. Jewell, Moniz and other officials said the sites will not glorify war or nuclear weapons, but will tell the story of the three historical sites from a range of perspectives, including the cities in Japan where two nuclear bombs were dropped in 1945. "It certainly is a celebration that we will be telling the story of these three important historical sites," Jewell told reporters. "It's not necessarily a celebration of the consequences of that, but rather an opportunity to tell that story to a broader audience." The new park will bring greater awareness of the development of nuclear weapons and energy to a worldwide audience, Jewell and Moniz said. "The Manhattan Project is an incredible scientific and engineering feat and that's obviously part of the message" of the new park, Moniz said. "But there is also a message about nuclear weapons and Japan. We want to keep driving toward a world free of nuclear weapons, and I think that's a U.S.-Japan shared story." The complex emotions the new park evokes were evident at the ceremony. Reporters and photographers from around the world — including more than a dozen from Japan — attended the event at the Interior Department's South Auditorium, as did anti-nuclear activists and representatives of the three communities commemorated by the park. The building, which sits a few blocks from the White House, is where plans for the bomb were first developed in an isolated wooden structure on the roof. Senator Lamar Alexander, R-Tenn., recounted that President Franklin D. Roosevelt summoned former Tennessee Senator Kenneth McKellar to the White House in 1942. McKellar, a Democrat, was chairman of the Senate Appropriations Committee, responsible for federal spending. Roosevelt asked him to "hide a billion dollars in the appropriations bill for a secret project to win the war," Alexander said. No problem, McKellar responded, according to Alexander. But he had one question: "Just where in Tennessee will the project be located?" The answer, Alexander noted, was Oak Ridge, where the Manhattan Project was headquartered and uranium was enriched for the atomic bomb dropped on Hiroshima in August 1945. The Hanford site was used to develop the plutonium dropped days later on Nagasaki, and Los Alamos is where more than 6,000 scientists, engineers and other workers designed and built the atomic bombs. The bomb was tested in the New Mexico desert in July 1945. Jewell, who oversees the National Park Service, said officials are acutely aware of the need to "tell the complete story" of the Manhattan Project, "listening to all sides." The park will include the voices of people who experienced devastation in Japan, as well as those "whose lives were spared because the war came to an end," Jewell said. 
The park also will tell the story of hundreds of thousands of ordinary people who were recruited to work in secret — often far away from home — on a project they were told was vital to the war effort, but was never clearly defined. "It did mark the end of the war, but it left devastation in its wake," Jewell said. Jewell briefly teared up as she described her mother-in-law's work as a nurse in Hiroshima and Nagasaki. "It was a powerful experience for her," Jewell said. While Japan is now one of America's closest allies, that country "felt the consequences" of the Manhattan project, Jewell added. "Your story needs to be told as well," she said, addressing Japanese citizens in the audience. Greg Mello, director of the Los Alamos Study Group, an anti-nuclear watchdog group, called the new park "pure propaganda for Los Alamos National Laboratory and its enduring mission of creating weapons of global destruction." Mello, who did not attend the ceremony, said in an e-mail that the new park was celebrating "heinous war crimes," adding: "This park is not exactly about the past, because the Manhattan Project never ended." Nuclear work continues at all three sites commemorated in the new park, a fact Moniz cited as an example of the project's enduring importance in war and peace. The Energy Department's 17 national labs that "have grown out of the roots of the Manhattan Project are part of this country's science and technology powerhouse," Moniz said. "They drive innovation. They address critical problems and they provide the backbone for basic science research in this country." Copyright 2015 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


News Article | August 30, 2016
Site: www.chromatographytechniques.com

A new study has found that plants regulate their leaf temperature with some independence from the surrounding air temperature, a trait that increases carbon uptake through photosynthesis. The research offers promise for refining Earth system models that help predict climate change impacts and feedbacks. "This research combines theory for leaf energy flows with globally distributed temperature data for diverse plant taxa to show that leaves generally do not match air temperature, but instead thermoregulate," said Sean Michaletz, a plant ecologist at Los Alamos National Laboratory, which led the study. Los Alamos studies and models climate change and related impacts as part of its mission to maintain the nation's energy security. "The end result is that leaves are generally warmer than air in cold temperatures, and cooler than air in warm temperatures." In the paper recently published in Nature Plants, Michaletz and the team developed a novel theory that combined energy budgets, which account for incoming and outgoing thermal energy fluxes in a leaf, with the carbon economics theory, which posits that leaf form and function are ultimately constrained by the efficiency of the leaf's structure in processing carbon. By synthesizing these theories, the team showed how leaf thermoregulation helps to maximize leaf photosynthesis and, therefore, the total lifetime carbon gain of a leaf. The team's theory is key to developing a more quantitative plant ecology that examines the origins of leaf thermoregulation, or the process whereby leaf temperature varies from ambient air temperature. Their research shows that plant functions are decoupled from ambient temperatures, a finding that will support improved climate models. Most plants photosynthesize, converting light energy and carbon dioxide from the atmosphere into sugars that become leaves, stems and roots. Leaf thermoregulation is critical for plant carbon economics because leaf temperatures determine the speed of photosynthesis and respiration. Because it is generally assumed that plants take on the temperature of the environment, many current Earth system models for predicting plant-atmosphere feedbacks assume that plant physiology operates at the ambient air temperature. However, data from the team's study show that leaf temperature can differ dramatically from air temperatures. This decoupling weakens the link between the climate and plant functions, limiting climatic impacts on plant growth and the carbon budgets of an ecosystem. Michaletz, McDowell and their colleagues at the Laboratory conduct this research to help identify linkages among climate, plant traits and plant physiology rates. These recent research results may help to improve Earth system models and predict climate change impacts and feedbacks. Michaletz, a Los Alamos Director's Postdoctoral Fellow, and his mentor Nate McDowell published their Nature Plants paper in collaboration with authors from the University of Arizona, University of Oklahoma, Tsinghua University, Lawrence Berkeley Laboratory, Smithsonian Tropical Research Institute, University of Pennsylvania, The Santa Fe Institute, The iPlant Collaborative, and the Aspen Center for Environmental Studies.


News Article | February 17, 2017
Site: www.futurity.org

Alien subatomic particles raining down from outer space are wreaking low-grade havoc on your smartphone, computer, and other personal electronic devices. When your computer crashes and you get the dreaded blue screen or your smartphone freezes and you have to go through the time-consuming process of a reset, most likely you blame Microsoft or Apple or Samsung. In many cases, though, these operational failures may be caused by the impact of electrically charged particles generated by cosmic rays that originate outside the solar system. “This is a really big problem, but it is mostly invisible to the public,” says Bharat Bhuva, professor of electrical engineering at Vanderbilt University, in a presentation today at the annual meeting of the American Association for the Advancement of Science in Boston. When cosmic rays traveling at fractions of the speed of light strike the Earth’s atmosphere, they create cascades of secondary particles including energetic neutrons, muons, pions, and alpha particles. Millions of these particles strike your body each second. Despite their numbers, this subatomic torrent is imperceptible and has no known harmful effects on living organisms. However, a fraction of these particles carry enough energy to interfere with the operation of microelectronic circuitry. When they interact with integrated circuits, they may alter individual bits of data stored in memory. This is called a single-event upset or SEU. Since it is difficult to know when and where these particles will strike and they do not do any physical damage, the malfunctions they cause are very difficult to characterize. As a result, determining the prevalence of SEUs is not easy or straightforward. “When you have a single bit flip, it could have any number of causes. It could be a software bug or a hardware flaw, for example. The only way you can determine that it is a single-event upset is by eliminating all the other possible causes,” Bhuva explains. There have been a number of incidents that illustrate how serious the problem can be, Bhuva says. For example, in 2003 in the town of Schaerbeek, Belgium, a bit flip in an electronic voting machine added 4,096 extra votes to one candidate. The error was only detected because it gave the candidate more votes than were possible and it was traced to a single bit flip in the machine’s register. In 2008, the avionics system of a Qantas passenger jet flying from Singapore to Perth appeared to suffer from a single-event upset that caused the autopilot to disengage. As a result, the aircraft dove 690 feet in only 23 seconds, injuring about a third of the passengers seriously enough to cause the aircraft to divert to the nearest airstrip. In addition, there have been a number of unexplained glitches in airline computers—some of which experts feel must have been caused by SEUs—that have resulted in cancellation of hundreds of flights resulting in significant economic losses. An analysis of SEU failure rates for consumer electronic devices performed by Ritesh Mastipuram and Edwin Wee at Cypress Semiconductor on a previous generation of technology shows how prevalent the problem may be. Their results were published in 2004 in Electronic Design News and provided the following estimates: Bhuva is a member of Vanderbilt’s Radiation Effects Research Group, which analyzes, among other things, radiation effects on consumer electronics in the terrestrial environment. 
They have studied this phenomenon in the last eight generations of computer chip technology, including the current generation that uses 3D transistors (known as FinFET) that are only 16 nanometers in size. “The semiconductor manufacturers are very concerned about this problem because it is getting more serious as the size of the transistors in computer chips shrink and the power and capacity of our digital systems increase,” Bhuva says. “In addition, microelectronic circuits are everywhere and our society is becoming increasingly dependent on them.” To determine the rate of SEUs in 16-nanometer chips, the Vanderbilt researchers took samples of the integrated circuits to the Irradiation of Chips and Electronics (ICE) House at Los Alamos National Laboratory. There they exposed them to a neutron beam and analyzed how many SEUs the chips experienced. Experts measure the failure rate of microelectronic circuits in a unit called a FIT, which stands for failure in time. One FIT is one failure per transistor in one billion hours of operation. That may seem infinitesimal but it adds up extremely quickly with billions of transistors in many of our devices and billions of electronic systems in use today (the number of smartphones alone is in the billions). Most electronic components have failure rates measured in 100’s and 1,000’s of FITs. “Our study confirms that this is a serious and growing problem,” says Bhuva.”This did not come as a surprise. Through our research on radiation effects on electronic circuits developed for military and space applications, we have been anticipating such effects on electronic systems operating in the terrestrial environment.” Although the details of the Vanderbilt studies are proprietary, Bhuva describes the general trend that they have found in the last three generations of integrated circuit technology: 28-nanometer, 20-nanometer, and 16-nanometer. As transistor sizes have shrunk, they have required less and less electrical charge to represent a logical bit. So the likelihood that one bit will “flip” from 0 to 1 (or 1 to 0) when struck by an energetic particle has been increasing. This has been partially offset by the fact that as the transistors have gotten smaller they have become smaller targets so the rate at which they are struck has decreased. More significantly, the current generation of 16-nanometer circuits have a 3D architecture that replaced the previous 2D architecture and has proven to be significantly less susceptible to SEUs. Although this improvement has been offset by the increase in the number of transistors in each chip, the failure rate at the chip level has also dropped slightly. However, the increase in the total number of transistors being used in new electronic systems has meant that the SEU failure rate at the device level has continued to rise. Unfortunately, it is not practical to simply shield microelectronics from these energetic particles. For example, it would take more than 10 feet of concrete to keep a circuit from being zapped by energetic neutrons. However, there are ways to design computer chips to dramatically reduce their vulnerability. For cases where reliability is absolutely critical, you can simply design the processors in triplicate and have them vote. Bhuva points out: “The probability that SEUs will occur in two of the circuits at the same time is vanishingly small. 
So if two circuits produce the same result it should be correct.” This is the approach that NASA used to maximize the reliability of spacecraft computer systems. The good news, Bhuva says, is that the aviation, medical equipment, IT, transportation, communications, financial, and power industries are all aware of the problem and are taking steps to address it. “It is only the consumer electronics sector that has been lagging behind in addressing this problem.” The engineer’s bottom line: “This is a major problem for industry and engineers, but it isn’t something that members of the general public need to worry much about.” The 16-nanometer study received funding from a group of top microelectronics companies, including Altera, ARM, AMD, Broadcom, Cisco Systems, Marvell, MediaTek, Renesas, Qualcomm, Synopsys, and TSMC.


News Article | February 15, 2017
Site: www.technologyreview.com

Clean energy made critical strides in 2016. The Paris Climate accords went into effect, the price of solar installations continued to drop, investments in renewable energy soared, offshore wind finally got under way in the United States, and scientists made a series of technical advances that promise to make sustainable energy increasingly efficient and affordable. That last one is key, since invention is still the surest way to avoid the greatest impacts of climate change. Today's commercially available renewable technologies can't meet all of the world's energy demands, even if they're scaled up aggressively. The United States comes up about 20 percent short by 2050, according to a thorough analysis by the National Renewable Energy Laboratory. Meanwhile, the U.N.'s Intergovernmental Panel on Climate Change concluded the world must cut greenhouse gas emissions by as much as 70 percent by midcentury, and to nearly zero by 2100, to have any chance of avoiding warming levels that could ensure sinking cities, mass extinctions, and widespread droughts. So we need more highly efficient renewable energy sources, cheaper storage, smarter grids, and effective systems for capturing greenhouse gases. Here are some of the most promising scientific advances of 2016. One of the crucial missing pieces in the portfolio of renewable energy sources is a clean liquid fuel that can replace gasoline and other transportation fuels. One of the most promising possibilities is artificial photosynthesis, mimicking nature's own method for converting sunlight, carbon dioxide, and water into fuels. There have been slow if steady improvements in the field in recent years. But this summer, Harvard scientists Daniel Nocera and Pamela Silver, in partnership with their co-authors, developed a "bionic leaf" that could capture and convert 10 percent of the energy in sunlight, a big step forward for the field. It's also about 10 times better than the photosynthesis of your average plant. The researchers use catalysts made from a cobalt-phosphorus alloy to split the water into hydrogen and oxygen, and then set specially engineered bacteria to work gobbling up the carbon dioxide and hydrogen and converting them into liquid fuel. Other labs have also made notable strides in the efficiency and durability of solar fuel devices in recent months, including Lawrence Berkeley National Laboratory and the Joint Center for Artificial Photosynthesis. This year the latter lab created a solar-driven device that converted carbon dioxide to formate at 10 percent efficiency levels. Formate can be used as an energy source for specialized fuel cells. But the field still faces considerable technical challenges, as an earlier MIT Technology Review story explained, and any commercial products are still likely years away. This spring, a team of MIT researchers reported the development of a solar thermophotovoltaic device that could potentially push past the theoretical efficiency limits of the conventional photovoltaics used in solar panels. Those standard solar cells can only absorb energy from a fraction of sunlight's color spectrum, mainly the visual light from violet to red. But the MIT scientists added an intermediate component made up of carbon nanotubes and nanophotonic crystals that together function sort of like a funnel, collecting energy from the sun and concentrating it into a narrow band of light. 
The nanotubes capture energy across the entire color spectrum, including in the invisible ultraviolet and infrared wavelengths, converting it all into heat energy. As the adjacent crystals heat up to high temperatures, around 1,000 °C, they reëmit the energy as light, but only in the band that photovoltaic cells can capture and convert. The researchers suggest that an optimized version of the technology could one day break through the theoretical cap of around 30 percent efficiency on conventional solar cells. In principle at least, solar thermophotovoltaics could achieve levels above 80 percent, though that's a long way off, according to the scientists. But there's another critical advantage to this approach. Because the process is ultimately driven by heat, it could continue to operate even when the sun ducks behind clouds, reducing the intermittency that remains one of the critical drawbacks of solar power. If the device were coupled with a thermal storage mechanism that could operate at these high temperatures, it could offer continuous solar power through the day and night. Perovskite solar cells are cheap, easy to produce, and very efficient at absorbing light. A thin film of the material, a class of hybrid organic and inorganic compounds with a particular type of crystal structure, can capture as much light as a relatively thick layer of the silicon used in standard photovoltaics. One of the critical challenges, however, has been durability. The compounds that actually absorb solar energy tend to quickly degrade, particularly in wet and hot conditions. But research groups at Stanford, Los Alamos National Laboratory, and the Swiss Federal Institute of Technology, among other institutions, made considerable strides in improving the stability of perovskite solar cells this year, publishing notable papers in Nature,  Nature Energy, and Science. "At the start of the year, they just weren't stable for long periods of time," says Ian Sharp, a staff scientist at Lawrence Berkeley National Lab. "But there have been some really impressive advances in that respect. This year things have really gotten serious." Meanwhile, other researchers have succeeded at boosting the efficiency of perovskite solar cells and identifying promising new paths for further advances. Electricity generation is responsible for producing 30 percent of the nation's carbon dioxide, so capturing those emissions at the source is crucial to any reduction plan. This year saw advances for several emerging approaches to capturing carbon in power plants, including carbonate fuel cells,  as well as at least some promising implementations of existing technology in the real world. (Though, to be sure, there have been some starkly negative examples as well.) But most of these approaches leave open the question of what to do with the stuff after it's successfully captured. And it's not a small problem. The world produces nearly 40 billion tons of carbon dioxide annually. One method, however, appears more promising than initially believed: burying carbon dioxide and turning it into stone. Since 2012, Reykjavik Energy’s CarbFix Project in Iceland has been injecting carbon dioxide and water deep underground, where they react with the volcanic basalt rocks that are abundant in the region. An analysis published in Science in June found that 95 percent of the carbon dioxide had mineralized in less than two years, much faster than the hundreds of thousands of years many had expected. 
So far, it also doesn't appear to be leaking out greenhouse gases, which suggests it could be both cheaper and more secure than existing burial approaches. But further research will be required to see how well it works in other areas, notably including under the ocean floors, outside observers say. Another promising option for captured carbon dioxide is, essentially, recycling it back into usable fuels. Earlier this year, researchers at the U.S. Department of Energy's Oak Ridge National Laboratory stumbled onto a method for converting it into ethanol, the liquid fuel already used as an additive in gasoline. The team developed a catalyst made from carbon, copper, and nitrogen with a textured surface, which concentrated the electrochemical reactions at the tips of nano spikes, according to a study published in Chemistry Select in October. When voltage was applied, the device converted a solution of carbon dioxide into ethanol at a high level of efficiency. The materials were also relatively cheap and the process worked at room temperature, both critical advantages for any future commercialization. “We’re taking carbon dioxide, a waste product of combustion, and we’re pushing that combustion reaction backwards,” said lead author Adam Rondinone in a news release. In addition to converting captured carbon dioxide, the process could be used to store excess energy from wind and solar electricity generation. Some outside researchers, however, are skeptical about the initial results and are anxiously awaiting to see if other labs can verify the findings.


Nanotubes line up to form films: Rice University researchers discover way to make highly aligned, wafer-scale films Scientists at Rice, with support from Los Alamos National Laboratory, have made inch-wide films of densely packed, chirality-enriched single-walled carbon nanotubes through a process revealed today in Nature Nanotechnology. In the right solution of nanotubes and under the right conditions, the tubes assemble themselves by the millions into long rows that are aligned better than once thought possible, the researchers reported. The thin films offer possibilities for making flexible electronic and photonic (light-manipulating) devices, said Rice physicist Junichiro Kono, whose lab led the study. Think of a bendable computer chip, rather than a brittle silicon one, and the potential becomes clear, he said. "Once we have centimeter-sized crystals consisting of single-chirality nanotubes, that's it," Kono said. "That's the holy grail for this field. For the last 20 years, people have been looking for this." The Rice lab is closing in, he said, but the films reported in the current paper are "chirality-enriched" rather than single-chirality. A carbon nanotube is a cylinder of graphene, with its atoms arranged in hexagons. How the hexagons are turned sets the tube's chirality, and that determines its electronic properties. Some are semiconducting like silicon, and others are metallic conductors. A film of perfectly aligned, single-chirality nanotubes would have specific electronic properties. Controlling the chirality would allow for tunable films, Kono said, but nanotubes grow in batches of random types. For now, the Rice researchers use a simple process developed at the National Institute of Standards and Technology to separate nanotubes by chirality. While not perfect, it was good enough to let the researchers make enriched films with nanotubes of different types and diameters and then make terahertz polarizers and electronic transistors. The Rice lab discovered the filtration technique in late 2013 when graduate students and lead authors Xiaowei He and Weilu Gao inadvertently added a bit too much water to a nanotube-surfactant suspension before feeding it through a filter assisted by vacuum. (Surfactants keep nanotubes in a solution from clumping.) The film that formed on the paper filter bore further investigation. "Weilu checked the film with a scanning electron microscope and saw something strange," He said. Rather than drop randomly onto the paper like pickup sticks, the nanotubes – millions of them – had come together in tight, aligned rows. "That first picture gave us a clue we might have something totally different," He said. A year and more than 100 films later, the students and their colleagues had refined their technique to make nanotube wafers up to an inch wide (limited only by the size of their equipment) and of any thickness, from a few to hundreds of nanometers. Further experiments revealed that each element mattered: the type of filter paper, the vacuum pressure and the concentration of nanotubes and surfactant. Nanotubes of any chirality and diameter worked, but each required adjustments to the other elements to optimize the alignment. The films can be separated from the paper and washed and dried for use, the researchers said. They suspect multiwalled carbon nanotubes and non-carbon nanotubes like boron nitride would work as well.
Co-author Wade Adams, a senior faculty fellow at Rice who specializes in polymer science, said the discovery is a step forward in a long quest for aligned structures. "They formed what is called a monodomain in liquid crystal technology, in which all the rigid molecules line up in the same direction," Adams said. "It's astonishing. (The late Rice Nobel laureate) Rick Smalley and I worked very hard for years to make a single crystal of nanotubes, but these students have actually done it in a way neither of us ever imagined." Why do the nanotubes line up? Kono said the team is still investigating the mechanics of nucleation -- that is, how the first few nanotubes on the paper come together. "We think the nanotubes fall randomly at first, but they can still slide around on the paper," he said. "Van der Waals force brings them together, and they naturally seek their lowest-energy state, which is in alignment." Because the nanotubes vary in length, the researchers suspect the overhangs force other tubes to line up as they join the array. The researchers found their completed films could be patterned with standard lithography techniques. That's yet another plus for manufacturers, said Kono, who started hearing buzz about the discovery months before the paper's release. "I gave an invited talk about our work at a carbon nanotube conference, and many people are already trying to reproduce our results," he said. "I got so much enthusiastic response right after my talk. Everybody asked for the recipe." Co-authors are Rice graduate students Qi Zhang, Sidong Lei and John Robinson and postdoctoral researcher Bo Li; Lijuan Xie of Zhejiang University, who has a complementary appointment at Rice; Rice alumnus Erik Haroz and Stephen Doorn of Los Alamos National Laboratory; Robert Vajtai, a faculty research fellow at Rice; Pulickel Ajayan, chair of Rice's Department of Materials Science and NanoEngineering, the Benjamin M. and Mary Greenwood Anderson Professor in Engineering and a professor of chemistry; and the late Robert Hauge, distinguished faculty fellow in chemistry and in materials science and nanoengineering at Rice. Adams is a senior faculty fellow in materials science and nanoengineering. Kono is a Rice professor of electrical and computer engineering, of physics and astronomy and of materials science and nanoengineering. The Department of Energy and the Robert A. Welch Foundation supported the research. About Rice University Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation's top 20 universities by U.S. News & World Report. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy. With 3,910 undergraduates and 2,809 graduate students, Rice's undergraduate student-to-faculty ratio is 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for best quality of life and for lots of race/class interaction by the Princeton Review. Rice is also rated as a best value among private universities by Kiplinger's Personal Finance. To read "What they're saying about Rice," go to http://tinyurl.com/AboutRiceUniversity. Follow Rice News and Media Relations via Twitter @RiceUNews.


News Article | April 7, 2016
Site: www.materialstoday.com

Using a simple filtration process, researchers at Rice University have managed to create flexible, wafer-scale films of highly aligned and closely-packed carbon nanotubes. With support from scientists at Los Alamos National Laboratory, the Rice University researchers have developed a process for making inch-wide films of densely packed, chirality-enriched single-walled carbon nanotubes. In the right solution of nanotubes and under the right conditions, the tubes assemble themselves by the millions into long rows that are aligned better than once thought possible, the researchers report in a paper in Nature Nanotechnology. The thin films offer possibilities for making flexible electronic and photonic (light-manipulating) devices, said Rice physicist Junichiro Kono, whose lab led the study. Think of a bendable computer chip rather than a brittle silicon one, and the potential becomes clear, he said. "Once we have centimeter-sized crystals consisting of single-chirality nanotubes, that's it," Kono said. "That's the holy grail for this field. For the last 20 years, people have been looking for this." The Rice lab is now closing in on this goal, although the films reported in the current paper are ‘chirality-enriched’ rather than single-chirality. A carbon nanotube is a cylinder of graphene, with its atoms arranged in hexagons. How the hexagons are turned determines the tube's chirality, and that determines its electronic properties: some are semiconducting like silicon, and others are metallic conductors. A film of perfectly aligned, single-chirality nanotubes would have specific electronic properties. Controlling the chirality would allow for tunable films, Kono said, but nanotubes naturally grow in batches of random types. For now, the Rice researchers use a simple process developed at the US National Institute of Standards and Technology to separate nanotubes by chirality. While not perfect, it was good enough to let the researchers make enriched films with nanotubes of different types and diameters, which could form the basis for terahertz polarizers and electronic transistors. The Rice lab discovered the filtration technique in late 2013 when graduate students and lead authors Xiaowei He and Weilu Gao inadvertently added a bit too much water to a nanotube-surfactant suspension before feeding it through a filter assisted by vacuum. The film that formed on the paper filter bore further investigation. "Weilu checked the film with a scanning electron microscope and saw something strange," explained He. Rather than drop randomly onto the paper like pickup sticks, the nanotubes – millions of them –came together in tight, aligned rows. "That first picture gave us a clue we might have something totally different," He said. A year and more than 100 films later, the students and their colleagues had refined their technique to make nanotube wafers up to an inch wide (limited only by the size of their equipment) and of any thickness, from just a few nanometers to hundreds. Further experiments revealed that each element of the production process mattered: the type of filter paper, the vacuum pressure, and the concentration of nanotubes and surfactant. The process worked with nanotubes of any chirality and diameter, but each required adjustments to the various elements to optimize the alignment. The resultant films can be separated from the paper and washed and dried for use, the researchers said. 
They suspect that the process would also work with multi-walled carbon nanotubes and non-carbon nanotubes like boron nitride. Co-author Wade Adams, a senior faculty fellow at Rice who specializes in polymer science, said the discovery is a step forward in a long quest for aligned structures. "They formed what is called a monodomain in liquid crystal technology, in which all the rigid molecules line up in the same direction. It's astonishing. (The late Rice Nobel laureate) Rick Smalley and I worked very hard for years to make a single crystal of nanotubes, but these students have actually done it in a way neither of us ever imagined." Why do the nanotubes line up? Kono said the team is still investigating the mechanics of nucleation – that is, how the first few nanotubes on the paper come together. "We think the nanotubes fall randomly at first, but they can still slide around on the paper," he said. "Van der Waals force brings them together, and they naturally seek their lowest-energy state, which is in alignment." Because the nanotubes vary in length, the researchers suspect the overhangs force other tubes to line up as they join the array. The researchers also found that their completed films could be patterned with standard lithography techniques. That's yet another plus for manufacturers, said Kono, who started hearing buzz about the discovery months before the paper's release. "I gave an invited talk about our work at a carbon nanotube conference, and many people are already trying to reproduce our results," he said. "I got so much enthusiastic response right after my talk. Everybody asked for the recipe." This story is adapted from material from Rice University, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier. Link to original source.


News Article | February 19, 2017
Site: www.nanotech-now.com

Particles from outer space are wreaking low-grade havoc on personal electronics Abstract: You may not realize it but alien subatomic particles raining down from outer space are wreaking low-grade havoc on your smartphones, computers and other personal electronic devices. When your computer crashes and you get the dreaded blue screen or your smartphone freezes and you have to go through the time-consuming process of a reset, most likely you blame the manufacturer: Microsoft or Apple or Samsung. In many instances, however, these operational failures may be caused by the impact of electrically charged particles generated by cosmic rays that originate outside the solar system. "This is a really big problem, but it is mostly invisible to the public," said Bharat Bhuva, professor of electrical engineering at Vanderbilt University, in a presentation on Friday, Feb. 17 at a session titled "Cloudy with a Chance of Solar Flares: Quantifying the Risk of Space Weather" at the annual meeting of the American Association for the Advancement of Science in Boston. When cosmic rays traveling at fractions of the speed of light strike the Earth's atmosphere, they create cascades of secondary particles including energetic neutrons, muons, pions and alpha particles. Millions of these particles strike your body each second. Despite their numbers, this subatomic torrent is imperceptible and has no known harmful effects on living organisms. However, a fraction of these particles carry enough energy to interfere with the operation of microelectronic circuitry. When they interact with integrated circuits, they may alter individual bits of data stored in memory. This is called a single-event upset or SEU. Since it is difficult to know when and where these particles will strike and they do not do any physical damage, the malfunctions they cause are very difficult to characterize. As a result, determining the prevalence of SEUs is not easy or straightforward. "When you have a single bit flip, it could have any number of causes. It could be a software bug or a hardware flaw, for example. The only way you can determine that it is a single-event upset is by eliminating all the other possible causes," Bhuva explained. There have been a number of incidents that illustrate how serious the problem can be, Bhuva reported. For example, in 2003 in the town of Schaerbeek, Belgium, a bit flip in an electronic voting machine added 4,096 extra votes to one candidate. The error was only detected because it gave the candidate more votes than were possible and it was traced to a single bit flip in the machine's register. In 2008, the avionics system of a Qantas passenger jet flying from Singapore to Perth appeared to suffer from a single-event upset that caused the autopilot to disengage. As a result, the aircraft dove 690 feet in only 23 seconds, injuring about a third of the passengers seriously enough to cause the aircraft to divert to the nearest airstrip. In addition, there have been a number of unexplained glitches in airline computers - some of which experts feel must have been caused by SEUs - that have resulted in cancellation of hundreds of flights resulting in significant economic losses. An analysis of SEU failure rates for consumer electronic devices performed by Ritesh Mastipuram and Edwin Wee at Cypress Semiconductor on a previous generation of technology shows how prevalent the problem may be.
Their results were published in 2004 in Electronic Design News and provided the following estimates: A simple cell phone with 500 kilobytes of memory should only have one potential error every 28 years. A router farm like those used by Internet providers with only 25 gigabytes of memory may experience one potential networking error that interrupts their operation every 17 hours. A person flying in an airplane at 35,000 feet (where radiation levels are considerably higher than they are at sea level) who is working on a laptop with 500 kilobytes of memory may experience one potential error every five hours. Bhuva is a member of Vanderbilt's Radiation Effects Research Group, which was established in 1987 and is the largest academic program in the United States that studies the effects of radiation on electronic systems. The group's primary focus was on military and space applications. Since 2001, the group has also been analyzing radiation effects on consumer electronics in the terrestrial environment. They have studied this phenomenon in the last eight generations of computer chip technology, including the current generation that uses 3D transistors (known as FinFET) that are only 16 nanometers in size. The 16-nanometer study was funded by a group of top microelectronics companies, including Altera, ARM, AMD, Broadcom, Cisco Systems, Marvell, MediaTek, Renesas, Qualcomm, Synopsys, and TSMC. "The semiconductor manufacturers are very concerned about this problem because it is getting more serious as the size of the transistors in computer chips shrink and the power and capacity of our digital systems increase," Bhuva said. "In addition, microelectronic circuits are everywhere and our society is becoming increasingly dependent on them." To determine the rate of SEUs in 16-nanometer chips, the Vanderbilt researchers took samples of the integrated circuits to the Irradiation of Chips and Electronics (ICE) House at Los Alamos National Laboratory. There they exposed them to a neutron beam and analyzed how many SEUs the chips experienced. Experts measure the failure rate of microelectronic circuits in a unit called a FIT, which stands for failure in time. One FIT is one failure per transistor in one billion hours of operation. That may seem infinitesimal but it adds up extremely quickly with billions of transistors in many of our devices and billions of electronic systems in use today (the number of smartphones alone is in the billions). Most electronic components have failure rates measured in hundreds and thousands of FITs. "Our study confirms that this is a serious and growing problem," said Bhuva. "This did not come as a surprise. Through our research on radiation effects on electronic circuits developed for military and space applications, we have been anticipating such effects on electronic systems operating in the terrestrial environment." Although the details of the Vanderbilt studies are proprietary, Bhuva described the general trend that they have found in the last three generations of integrated circuit technology: 28-nanometer, 20-nanometer and 16-nanometer. As transistor sizes have shrunk, they have required less and less electrical charge to represent a logical bit. So the likelihood that one bit will "flip" from 0 to 1 (or 1 to 0) when struck by an energetic particle has been increasing. This has been partially offset by the fact that as the transistors have gotten smaller they have become smaller targets so the rate at which they are struck has decreased.
More significantly, the current generation of 16-nanometer circuits has a 3D architecture that replaced the previous 2D architecture and has proven to be significantly less susceptible to SEUs. Although this improvement has been offset by the increase in the number of transistors in each chip, the failure rate at the chip level has also dropped slightly. However, the increase in the total number of transistors being used in new electronic systems has meant that the SEU failure rate at the device level has continued to rise. Unfortunately, it is not practical to simply shield microelectronics from these energetic particles. For example, it would take more than 10 feet of concrete to keep a circuit from being zapped by energetic neutrons. However, there are ways to design computer chips to dramatically reduce their vulnerability. For cases where reliability is absolutely critical, you can simply design the processors in triplicate and have them vote. Bhuva pointed out: "The probability that SEUs will occur in two of the circuits at the same time is vanishingly small. So if two circuits produce the same result it should be correct." This is the approach that NASA used to maximize the reliability of spacecraft computer systems. The good news, Bhuva said, is that the aviation, medical equipment, IT, transportation, communications, financial and power industries are all aware of the problem and are taking steps to address it. "It is only the consumer electronics sector that has been lagging behind in addressing this problem." The engineer's bottom line: "This is a major problem for industry and engineers, but it isn't something that members of the general public need to worry much about."
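As a rough illustration of how the FIT arithmetic above compounds, the short Python sketch below converts an assumed per-transistor failure rate into an expected time between upsets for a single chip and for a large fleet of devices. The rate and transistor count are illustrative assumptions, not figures from the Vanderbilt study.

```python
# Back-of-the-envelope single-event-upset (SEU) arithmetic.
# The failure rate and transistor count below are illustrative assumptions,
# not measured values from the Vanderbilt study.

FIT_PER_TRANSISTOR = 1e-6   # assumed failures per transistor per billion device-hours
TRANSISTORS = 2e9           # assumed transistor count for a modern mobile chip
HOURS_PER_YEAR = 24 * 365

def mean_hours_between_upsets(fit_per_transistor, transistors):
    """One FIT is one failure per billion hours, so the chip-level rate is
    fit_per_transistor * transistors failures per billion hours."""
    chip_failures_per_billion_hours = fit_per_transistor * transistors
    return 1e9 / chip_failures_per_billion_hours

hours = mean_hours_between_upsets(FIT_PER_TRANSISTOR, TRANSISTORS)
print(f"One device: an upset roughly every {hours:,.0f} hours "
      f"(about {hours / HOURS_PER_YEAR:.0f} years)")

# The same rate spread over a large fleet looks very different:
fleet = 1_000_000
print(f"A fleet of {fleet:,} devices: an upset roughly every "
      f"{hours / fleet:.1f} hours somewhere in the fleet")
```

With these assumed numbers a single device sees an upset only every few decades, while a fleet of a million devices sees one every half hour, which is the scaling effect the article describes.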


News Article | April 4, 2016
Site: phys.org

Scientists at Rice, with support from Los Alamos National Laboratory, have made inch-wide films of densely packed, chirality-enriched single-walled carbon nanotubes through a process revealed today in Nature Nanotechnology. In the right solution of nanotubes and under the right conditions, the tubes assemble themselves by the millions into long rows that are aligned better than once thought possible, the researchers reported. The thin films offer possibilities for making flexible electronic and photonic (light-manipulating) devices, said Rice physicist Junichiro Kono, whose lab led the study. Think of a bendable computer chip, rather than a brittle silicon one, and the potential becomes clear, he said. "Once we have centimeter-sized crystals consisting of single-chirality nanotubes, that's it," Kono said. "That's the holy grail for this field. For the last 20 years, people have been looking for this." The Rice lab is closing in, he said, but the films reported in the current paper are "chirality-enriched" rather than single-chirality. A carbon nanotube is a cylinder of graphene, with its atoms arranged in hexagons. How the hexagons are turned sets the tube's chirality, and that determines its electronic properties. Some are semiconducting like silicon, and others are metallic conductors. A film of perfectly aligned, single-chirality nanotubes would have specific electronic properties. Controlling the chirality would allow for tunable films, Kono said, but nanotubes grow in batches of random types. For now, the Rice researchers use a simple process developed at the National Institute of Standards and Technology to separate nanotubes by chirality. While not perfect, it was good enough to let the researchers make enriched films with nanotubes of different types and diameters and then make terahertz polarizers and electronic transistors. The Rice lab discovered the filtration technique in late 2013 when graduate students and lead authors Xiaowei He and Weilu Gao inadvertently added a bit too much water to a nanotube-surfactant suspension before feeding it through a filter assisted by vacuum. (Surfactants keep nanotubes in a solution from clumping.) The film that formed on the paper filter bore further investigation. "Weilu checked the film with a scanning electron microscope and saw something strange," He said. Rather than drop randomly onto the paper like pickup sticks, the nanotubes - millions of them - had come together in tight, aligned rows. "That first picture gave us a clue we might have something totally different," He said. A year and more than 100 films later, the students and their colleagues had refined their technique to make nanotube wafers up to an inch wide (limited only by the size of their equipment) and of any thickness, from a few to hundreds of nanometers. Further experiments revealed that each element mattered: the type of filter paper, the vacuum pressure and the concentration of nanotubes and surfactant. Nanotubes of any chirality and diameter worked, but each required adjustments to the other elements to optimize the alignment. The films can be separated from the paper and washed and dried for use, the researchers said. They suspect multiwalled carbon nanotubes and non-carbon nanotubes like boron nitride would work as well. Co-author Wade Adams, a senior faculty fellow at Rice who specializes in polymer science, said the discovery is a step forward in a long quest for aligned structures. 
"They formed what is called a monodomain in liquid crystal technology, in which all the rigid molecules line up in the same direction," Adams said. "It's astonishing. (The late Rice Nobel laureate) Rick Smalley and I worked very hard for years to make a single crystal of nanotubes, but these students have actually done it in a way neither of us ever imagined." Why do the nanotubes line up? Kono said the team is still investigating the mechanics of nucleation—that is, how the first few nanotubes on the paper come together. "We think the nanotubes fall randomly at first, but they can still slide around on the paper," he said. "Van der Waals force brings them together, and they naturally seek their lowest-energy state, which is in alignment." Because the nanotubes vary in length, the researchers suspect the overhangs force other tubes to line up as they join the array. The researchers found their completed films could be patterned with standard lithography techniques. That's yet another plus for manufacturers, said Kono, who started hearing buzz about the discovery months before the paper's release. "I gave an invited talk about our work at a carbon nanotube conference, and many people are already trying to reproduce our results," he said. "I got so much enthusiastic response right after my talk. Everybody asked for the recipe." Explore further: Comprehensive look at the fundamentals of most desirable nanotubes More information: Xiaowei He et al. Wafer-scale monodomain films of spontaneously aligned single-walled carbon nanotubes, Nature Nanotechnology (2016). DOI: 10.1038/nnano.2016.44


News Article | November 11, 2016
Site: www.24-7pressrelease.com

LOS ALAMOS, NM, November 11, 2016-- Dr. Robert Macek has been included in Marquis Who's Who. As in all Marquis Who's Who biographical volumes, individuals profiled are selected on the basis of current reference value. Factors such as position, noteworthy accomplishments, visibility, and prominence in a field are all taken into account during the selection process. Dr. Macek is a physicist and scientific expert whose career has included contributions to the research community across the country, including research in electron cloud instabilities in proton storage rings, and successful development, operations and upgrades of the Los Alamos Proton Storage Ring. After retirement from Los Alamos National Laboratory in 2003, Dr. Macek continued his career as a senior scientist with TechSource in Santa Fe, N.M., and as guest scientist at LANL. Prior to retirement, he served as associate division leader, program manager and group leader at LANL, member of the DARHT Red Team at LANL, and postdoctoral research associate at The University of Pennsylvania. Additionally, he was a project leader at LANL, as well as a technical staff member, and chemistry lab assistant at South Dakota State University. In recognition of professional excellence, Dr. Macek was named to Who's Who in America and Who's Who in the World. Outside of his professional duties, Dr. Macek is regarded for his involvement with research initiatives on many levels. He served on a DOE panel on accelerator physics and technology, as an expert consultant for the Atomic Energy Control Board of Canada, and member of the advanced science board of HiEnergy Technology. Additionally, he was a review panel member for the European Organization for Nuclear Research and member of the accelerator advisory committee of Oak Ridge National Laboratory. Prior to establishing his career, Dr. Macek earned a Bachelor of Science from South Dakota University in 1958 and a Ph.D. from the California Institute of Technology in 1964. He maintains membership with the IEEE and the American Physical Society. Although retired, he intends to continue consulting in the years to come. About Marquis Who's Who: Since 1899, when A. N. Marquis printed the First Edition of Who's Who in America, Marquis Who's Who has chronicled the lives of the most accomplished individuals and innovators from every significant field of endeavor, including politics, business, medicine, law, education, art, religion and entertainment. Today, Who's Who in America remains an essential biographical source for thousands of researchers, journalists, librarians and executive search firms around the world. Marquis now publishes many Who's Who titles, including Who's Who in America, Who's Who in the World, Who's Who in American Law, Who's Who in Medicine and Healthcare, Who's Who in Science and Engineering, and Who's Who in Asia. Marquis publications may be visited at the official Marquis Who's Who website at www.marquiswhoswho.com


News Article | February 22, 2017
Site: www.eurekalert.org

LOS ALAMOS, N.M., Feb. 22, 2017--In a new study published today in the journal PLOS ONE, Los Alamos National Laboratory scientists have taken a condensed matter physics concept usually applied to the way substances such as ice freeze, called "frustration," and applied it to a simple social network model of frustrated components. They show that inequality of wealth can emerge spontaneously and more equality can be gained by pure initiative. It's a computer-modeling exploration of the 19th-century Horatio Alger theme, whereby a motivated young person overcomes poor beginnings and lives the "rags to riches" life thanks to strength of character. "Most theories of wealth inequality rely on social stratification due to income inequality and inheritance," said Cristiano Nisoli, of the Physics of Condensed Matter and Complex Systems group at Los Alamos and lead author of the study. "We consider, however, the possibility that in our more economically fluid world, novel, direct channels for wealth transfer could be available, especially for financial wealth." The work stems from Los Alamos research into computational materials science, with broader applications to materials physics, energy security and weapons physics. In this case, the study's authors used computer modeling to conceptualize the situation of a set of agents, endowed with opportunities to acquire available wealth. As Nisoli describes it, "we assume that the possession of wealth endows the user with the power to attract more wealth." The team of Benoit Mahault (visiting from Université Paris-Saclay), Avadh Saxena and Nisoli divided their analysis into three sets of problems: The first set of results shows that in a static society--where the allocation of opportunities does not change in time--the "law of the jungle" allows anyone to gain wealth from or lose it to anyone else. Relative chaos ensues. The second set of results also pertains to static societies, but ones in which transactions of wealth are regulated. People cannot gain or lose wealth from just anybody, but only from their neighbors in the network in which they are linked. This scenario leads to substantially more fairness in the mathematical benchmark cases of Erdős-Rényi models for random networks and of Barabási-Albert algorithms for scale-free networks. However, marked differences between the two appear when it comes to overall rather than subjective fairness. The third set of results pertains to dynamic societies. Maintaining the overall wealth level as fixed, the researchers allow agents to freely shift links among themselves as their own initiative drives them. This is where the concepts of power, frustration and initiative, previously benchmarked on static markets, become crucial. Their interplay results in a complex dynamic. At a low level of initiative, results converge to more or less ameliorated inequality where the power of wealth concentrates and wins. At high initiative levels, results converge to strong equality where power never concentrates. For initiative levels somewhere in between, we see the interplay of three emergent social classes: lower, middle and upper. Said Nisoli, "If driven by power alone, the market evolution reaches a static equilibrium characterized by the most savage inequality. Power not only concentrates wealth, but reshapes the market topology to concentrate the very opportunities to acquire wealth on only a few agents, who now amass all the wealth of the society." 
This equilibrium scenario, however, does not take into account personal frustration and initiative to act. If those elements are introduced, at sufficient initiative, a cyclical dynamic of three social classes emerges. "Periodically, a long 'time of inequality' is contrasted by the patient effort of the middle class to rise up, to bring down the upper class and to merge with it. When that finally happens, however, the situation proves unstable: a single egalitarian class forms for a brief time, only to be soon disrupted by the appearance of difficult-to-predict 'black swan' economic events. The power of the latter, now competing against unfrustrated and thus demotivated agents of an egalitarian class, wins easily and a new time of inequality is brought in as a new middle class emerges while the upper-class rises," Nisoli explained. To encapsulate the concept, he said, "We learn from this analysis that in our admittedly simplified model, equality can be improved either by proper engineering of a static market topology, which seems impracticable, or by dynamic emergent reshaping of the market via sufficient individual initiative to act upon frustration." But a successful society, with reduced frustration and improved equality, does not continue for long. "Equality is short lived, we find, as the disappearance of frustration that follows equality removes the fundamental drive toward equality. Perhaps a key element in preventing the cyclical return of inequality would be memory, which is absent from our framework. But then, is it present in a real society?" The paper, "Emergent Inequality and Self-Organized Social Classes in a Network of Power and Frustration," appears in this week's PLOS ONE. Link: http://journals. This research was funded by the U.S. Department of Energy, the Los Alamos National Laboratory Center for Nonlinear Studies, the Los Alamos Institute for Materials Science and Laboratory Directed Research and Development. Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, BWX Technologies, Inc. and URS Corporation for the Department of Energy's National Nuclear Security Administration. Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health and global security concerns.
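To make the flavor of such an agent-based model concrete, the sketch below runs a deliberately simplified wealth-exchange simulation on a random network and reports a Gini coefficient. The network parameters and the power-biased update rule are illustrative assumptions, not the dynamics actually defined in the PLOS ONE paper.

```python
import random

# Toy wealth-exchange simulation on a random network. Illustrative sketch only:
# the rule that wealth tends to flow toward the wealthier of two linked agents
# is an assumption standing in for the paper's notions of power and frustration.

N_AGENTS = 200      # number of agents
EDGE_PROB = 0.05    # chance that any pair of agents is linked (Erdos-Renyi-style)
STEPS = 20_000      # number of pairwise exchanges
TRANSFER = 1.0      # wealth moved per exchange
POWER_BIAS = 0.8    # assumed probability that the wealthier agent wins an exchange

random.seed(0)
wealth = [100.0] * N_AGENTS     # start from perfect equality
edges = [(i, j) for i in range(N_AGENTS)
         for j in range(i + 1, N_AGENTS) if random.random() < EDGE_PROB]

for _ in range(STEPS):
    i, j = random.choice(edges)                  # exchanges happen only along links
    rich, poor = (i, j) if wealth[i] >= wealth[j] else (j, i)
    winner, loser = (rich, poor) if random.random() < POWER_BIAS else (poor, rich)
    amount = min(TRANSFER, wealth[loser])        # nobody goes below zero
    wealth[winner] += amount
    wealth[loser] -= amount

def gini(values):
    """Gini coefficient: 0 is perfect equality, values near 1 are extreme inequality."""
    ordered = sorted(values)
    n = len(ordered)
    weighted_sum = sum((k + 1) * v for k, v in enumerate(ordered))
    return (2 * weighted_sum) / (n * sum(ordered)) - (n + 1) / n

print(f"Gini coefficient after {STEPS:,} exchanges: {gini(wealth):.2f}")
```

Even this stripped-down version shows inequality emerging spontaneously from an initially equal population once exchanges are biased toward the wealthier party, which is the qualitative point the study explores in far richer form.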


News Article | February 17, 2017
Site: www.eurekalert.org

You may not realize it but alien subatomic particles raining down from outer space are wreaking low-grade havoc on your smartphones, computers and other personal electronic devices. When your computer crashes and you get the dreaded blue screen or your smartphone freezes and you have to go through the time-consuming process of a reset, most likely you blame the manufacturer: Microsoft or Apple or Samsung. In many instances, however, these operational failures may be caused by the impact of electrically charged particles generated by cosmic rays that originate outside the solar system. "This is a really big problem, but it is mostly invisible to the public," said Bharat Bhuva, professor of electrical engineering at Vanderbilt University, in a presentation on Friday, Feb. 17 at a session titled "Cloudy with a Chance of Solar Flares: Quantifying the Risk of Space Weather" at the annual meeting of the American Association for the Advancement of Science in Boston. When cosmic rays traveling at fractions of the speed of light strike the Earth's atmosphere, they create cascades of secondary particles including energetic neutrons, muons, pions and alpha particles. Millions of these particles strike your body each second. Despite their numbers, this subatomic torrent is imperceptible and has no known harmful effects on living organisms. However, a fraction of these particles carry enough energy to interfere with the operation of microelectronic circuitry. When they interact with integrated circuits, they may alter individual bits of data stored in memory. This is called a single-event upset or SEU. Since it is difficult to know when and where these particles will strike and they do not do any physical damage, the malfunctions they cause are very difficult to characterize. As a result, determining the prevalence of SEUs is not easy or straightforward. "When you have a single bit flip, it could have any number of causes. It could be a software bug or a hardware flaw, for example. The only way you can determine that it is a single-event upset is by eliminating all the other possible causes," Bhuva explained. There have been a number of incidents that illustrate how serious the problem can be, Bhuva reported. For example, in 2003 in the town of Schaerbeek, Belgium, a bit flip in an electronic voting machine added 4,096 extra votes to one candidate. The error was only detected because it gave the candidate more votes than were possible and it was traced to a single bit flip in the machine's register. In 2008, the avionics system of a Qantas passenger jet flying from Singapore to Perth appeared to suffer from a single-event upset that caused the autopilot to disengage. As a result, the aircraft dove 690 feet in only 23 seconds, injuring about a third of the passengers seriously enough to cause the aircraft to divert to the nearest airstrip. In addition, there have been a number of unexplained glitches in airline computers - some of which experts feel must have been caused by SEUs - that have resulted in cancellation of hundreds of flights resulting in significant economic losses. An analysis of SEU failure rates for consumer electronic devices performed by Ritesh Mastipuram and Edwin Wee at Cypress Semiconductor on a previous generation of technology shows how prevalent the problem may be.
Their results were published in 2004 in Electronic Design News. Bhuva is a member of Vanderbilt's Radiation Effects Research Group, which was established in 1987 and is the largest academic program in the United States that studies the effects of radiation on electronic systems. The group's primary focus was on military and space applications. Since 2001, the group has also been analyzing radiation effects on consumer electronics in the terrestrial environment. They have studied this phenomenon in the last eight generations of computer chip technology, including the current generation that uses 3D transistors (known as FinFET) that are only 16 nanometers in size. The 16-nanometer study was funded by a group of top microelectronics companies, including Altera, ARM, AMD, Broadcom, Cisco Systems, Marvell, MediaTek, Renesas, Qualcomm, Synopsys, and TSMC. "The semiconductor manufacturers are very concerned about this problem because it is getting more serious as the size of the transistors in computer chips shrink and the power and capacity of our digital systems increase," Bhuva said. "In addition, microelectronic circuits are everywhere and our society is becoming increasingly dependent on them." To determine the rate of SEUs in 16-nanometer chips, the Vanderbilt researchers took samples of the integrated circuits to the Irradiation of Chips and Electronics (ICE) House at Los Alamos National Laboratory. There they exposed them to a neutron beam and analyzed how many SEUs the chips experienced. Experts measure the failure rate of microelectronic circuits in a unit called a FIT, which stands for failure in time. One FIT is one failure per transistor in one billion hours of operation. That may seem infinitesimal but it adds up extremely quickly with billions of transistors in many of our devices and billions of electronic systems in use today (the number of smartphones alone is in the billions). Most electronic components have failure rates measured in hundreds and thousands of FITs. "Our study confirms that this is a serious and growing problem," said Bhuva. "This did not come as a surprise. Through our research on radiation effects on electronic circuits developed for military and space applications, we have been anticipating such effects on electronic systems operating in the terrestrial environment." Although the details of the Vanderbilt studies are proprietary, Bhuva described the general trend that they have found in the last three generations of integrated circuit technology: 28-nanometer, 20-nanometer and 16-nanometer. As transistor sizes have shrunk, they have required less and less electrical charge to represent a logical bit. So the likelihood that one bit will "flip" from 0 to 1 (or 1 to 0) when struck by an energetic particle has been increasing. This has been partially offset by the fact that as the transistors have gotten smaller they have become smaller targets so the rate at which they are struck has decreased. More significantly, the current generation of 16-nanometer circuits has a 3D architecture that replaced the previous 2D architecture and has proven to be significantly less susceptible to SEUs. Although this improvement has been offset by the increase in the number of transistors in each chip, the failure rate at the chip level has also dropped slightly. However, the increase in the total number of transistors being used in new electronic systems has meant that the SEU failure rate at the device level has continued to rise.
Unfortunately, it is not practical to simply shield microelectronics from these energetic particles. For example, it would take more than 10 feet of concrete to keep a circuit from being zapped by energetic neutrons. However, there are ways to design computer chips to dramatically reduce their vulnerability. For cases where reliability is absolutely critical, you can simply design the processors in triplicate and have them vote. Bhuva pointed out: "The probability that SEUs will occur in two of the circuits at the same time is vanishingly small. So if two circuits produce the same result it should be correct." This is the approach that NASA used to maximize the reliability of spacecraft computer systems. The good news, Bhuva said, is that the aviation, medical equipment, IT, transportation, communications, financial and power industries are all aware of the problem and are taking steps to address it. "It is only the consumer electronics sector that has been lagging behind in addressing this problem." The engineer's bottom line: "This is a major problem for industry and engineers, but it isn't something that members of the general public need to worry much about."
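The triplicate-and-vote scheme Bhuva describes is known in fault-tolerant design as triple modular redundancy. The sketch below, with hypothetical function names, shows the basic majority-vote idea for a single computed value; it is an illustration of the concept, not NASA's or any vendor's actual implementation.

```python
from collections import Counter

# Minimal sketch of the triple-modular-redundancy (TMR) voting idea described
# above. compute_result() is a hypothetical stand-in for any calculation that
# three redundant copies of a processor would perform independently.

def compute_result(x):
    """Placeholder computation run on each of the three redundant units."""
    return x * x

def tmr_vote(results):
    """Return the majority value among the three redundant results.

    A single-event upset can corrupt at most one copy at a time, so the other
    two copies still agree and the vote masks the error."""
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: more than one copy disagreed")
    return value

clean = [compute_result(7) for _ in range(3)]   # all three units agree: 49
upset = [49, 48, 49]                            # one unit suffered a bit flip (49 -> 48)
print(tmr_vote(clean))   # 49
print(tmr_vote(upset))   # still 49: the corrupted copy is outvoted
```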


News Article | November 10, 2015
Site: www.rdmag.com

More than 70 years ago scientists working in secret created the atomic bomb that ended World War II and ushered the world into the nuclear age. On Tuesday, at a ceremony near the White House, in a federal building where clandestine plans for the bomb were developed, Interior Secretary Sally Jewell and Energy Secretary Ernest Moniz formally established the Manhattan Project National Historical Park. The park preserves three sites where work on the bomb was completed: Oak Ridge, Tennessee; Hanford, Washington; and Los Alamos, New Mexico. Jewell, Moniz and other officials said the park will not glorify war or nuclear weapons, but will tell the story of the three historical sites from a range of perspectives, including the cities in Japan where two nuclear bombs were dropped in 1945. "It certainly is a celebration that we will be telling the story of these three important historical sites," Jewell said. "It's not necessarily a celebration of the consequences of that, but rather an opportunity to tell that story to a broader audience." The new park will bring greater awareness of the development of nuclear energy and weapons to a worldwide audience, Jewell and Moniz said. Part of that audience was present at Tuesday's ceremony, as reporters and photographers from around the world - including more than a dozen from Japan - attended the event at the Interior Department's South Auditorium. The building, a few blocks from the White House, is where plans for the bomb were first developed in an isolated wooden structure on the roof. Jewell, who oversees the National Park Service, said officials are acutely aware of the need to "tell the complete story" of the Manhattan Project, "listening to all sides." The park will include the voices of people who experienced devastation in Japan, as well as those "whose lives were spared because the war came to an end," Jewell said. The park also will tell the story of hundreds of thousands of ordinary people who were recruited to work in secret - often far away from home - on a project they were told was vital to the war effort, but was never clearly defined. "It did mark the end of the war, but it left devastation in its wake," Jewell said. Jewell briefly teared up as she described her mother-in-law's work as a nurse in Hiroshima and Nagasaki, the two Japanese cities where atomic bombs were dropped in August 1945. "It was a powerful experience for her," Jewell said, noting that her mother-in-law brought home shards of pottery damaged by the bombs. While Japan is now one of America's closest allies, that country "felt the consequences" of the Manhattan Project, Jewell said. "Your story needs to be told as well," she said, addressing Japanese citizens in the audience. Greg Mello, director of the Los Alamos Study Group, an anti-nuclear watchdog group, called the new park "pure propaganda for Los Alamos National Laboratory and its enduring mission of creating weapons of global destruction." "This park is not exactly about the past because the Manhattan Project never ended," Mello said, noting that nuclear work continues at all three sites commemorated in the new park.


News Article | February 17, 2017
Site: www.chromatographytechniques.com

You may not realize it but alien subatomic particles raining down from outer space are wreaking low-grade havoc on your smartphones, computers and other personal electronic devices. When your computer crashes and you get the dreaded blue screen or your smartphone freezes and you have to go through the time-consuming process of a reset, most likely you blame the manufacturer: Microsoft or Apple or Samsung. In many instances, however, these operational failures may be caused by the impact of electrically charged particles generated by cosmic rays that originate outside the solar system. "This is a really big problem, but it is mostly invisible to the public," said Bharat Bhuva, professor of electrical engineering at Vanderbilt University, in a presentation on Friday, Feb. 17 at a session titled "Cloudy with a Chance of Solar Flares: Quantifying the Risk of Space Weather" at the annual meeting of the American Association for the Advancement of Science in Boston. When cosmic rays traveling at fractions of the speed of light strike the Earth's atmosphere, they create cascades of secondary particles including energetic neutrons, muons, pions and alpha particles. Millions of these particles strike your body each second. Despite their numbers, this subatomic torrent is imperceptible and has no known harmful effects on living organisms. However, a fraction of these particles carry enough energy to interfere with the operation of microelectronic circuitry. When they interact with integrated circuits, they may alter individual bits of data stored in memory. This is called a single-event upset or SEU. Since it is difficult to know when and where these particles will strike and they do not do any physical damage, the malfunctions they cause are very difficult to characterize. As a result, determining the prevalence of SEUs is not easy or straightforward. "When you have a single bit flip, it could have any number of causes. It could be a software bug or a hardware flaw, for example. The only way you can determine that it is a single-event upset is by eliminating all the other possible causes," Bhuva explained. There have been a number of incidents that illustrate how serious the problem can be, Bhuva reported. For example, in 2003 in the town of Schaerbeek, Belgium, a bit flip in an electronic voting machine added 4,096 extra votes to one candidate. The error was only detected because it gave the candidate more votes than were possible and it was traced to a single bit flip in the machine's register. In 2008, the avionics system of a Qantas passenger jet flying from Singapore to Perth appeared to suffer from a single-event upset that caused the autopilot to disengage. As a result, the aircraft dove 690 feet in only 23 seconds, injuring about a third of the passengers seriously enough to cause the aircraft to divert to the nearest airstrip. In addition, there have been a number of unexplained glitches in airline computers - some of which experts feel must have been caused by SEUs - that have resulted in cancellation of hundreds of flights resulting in significant economic losses. An analysis of SEU failure rates for consumer electronic devices performed by Ritesh Mastipuram and Edwin Wee at Cypress Semiconductor on a previous generation of technology shows how prevalent the problem may be.
Their results were published in 2004 in Electronic Design News. Bhuva is a member of Vanderbilt's Radiation Effects Research Group, which was established in 1987 and is the largest academic program in the United States that studies the effects of radiation on electronic systems. The group's primary focus was on military and space applications. Since 2001, the group has also been analyzing radiation effects on consumer electronics in the terrestrial environment. They have studied this phenomenon in the last eight generations of computer chip technology, including the current generation that uses 3D transistors (known as FinFET) that are only 16 nanometers in size. The 16-nanometer study was funded by a group of top microelectronics companies, including Altera, ARM, AMD, Broadcom, Cisco Systems, Marvell, MediaTek, Renesas, Qualcomm, Synopsys, and TSMC. "The semiconductor manufacturers are very concerned about this problem because it is getting more serious as the size of the transistors in computer chips shrink and the power and capacity of our digital systems increase," Bhuva said. "In addition, microelectronic circuits are everywhere and our society is becoming increasingly dependent on them." To determine the rate of SEUs in 16-nanometer chips, the Vanderbilt researchers took samples of the integrated circuits to the Irradiation of Chips and Electronics (ICE) House at Los Alamos National Laboratory. There they exposed them to a neutron beam and analyzed how many SEUs the chips experienced. Experts measure the failure rate of microelectronic circuits in a unit called a FIT, which stands for failure in time. One FIT is one failure per transistor in one billion hours of operation. That may seem infinitesimal but it adds up extremely quickly with billions of transistors in many of our devices and billions of electronic systems in use today (the number of smartphones alone is in the billions). Most electronic components have failure rates measured in hundreds and thousands of FITs. "Our study confirms that this is a serious and growing problem," said Bhuva. "This did not come as a surprise. Through our research on radiation effects on electronic circuits developed for military and space applications, we have been anticipating such effects on electronic systems operating in the terrestrial environment." Although the details of the Vanderbilt studies are proprietary, Bhuva described the general trend that they have found in the last three generations of integrated circuit technology: 28-nanometer, 20-nanometer and 16-nanometer. As transistor sizes have shrunk, they have required less and less electrical charge to represent a logical bit. So the likelihood that one bit will "flip" from 0 to 1 (or 1 to 0) when struck by an energetic particle has been increasing. This has been partially offset by the fact that as the transistors have gotten smaller they have become smaller targets so the rate at which they are struck has decreased. More significantly, the current generation of 16-nanometer circuits has a 3-D architecture that replaced the previous 2-D architecture and has proven to be significantly less susceptible to SEUs. Although this improvement has been offset by the increase in the number of transistors in each chip, the failure rate at the chip level has also dropped slightly. However, the increase in the total number of transistors being used in new electronic systems has meant that the SEU failure rate at the device level has continued to rise.
Unfortunately, it is not practical to simply shield microelectronics from these energetic particles. For example, it would take more than 10 feet of concrete to keep a circuit from being zapped by energetic neutrons. However, there are ways to design computer chips to dramatically reduce their vulnerability. For cases where reliability is absolutely critical, you can simply design the processors in triplicate and have them vote. Bhuva pointed out: "The probability that SEUs will occur in two of the circuits at the same time is vanishingly small. So if two circuits produce the same result it should be correct." This is the approach that NASA used to maximize the reliability of spacecraft computer systems. The good news, Bhuva said, is that the aviation, medical equipment, IT, transportation, communications, financial and power industries are all aware of the problem and are taking steps to address it. "It is only the consumer electronics sector that has been lagging behind in addressing this problem." The engineer's bottom line: "This is a major problem for industry and engineers, but it isn't something that members of the general public need to worry much about."


News Article | November 7, 2016
Site: www.scientificcomputing.com

Researchers at the Department of Energy's SLAC National Accelerator Laboratory are playing key roles in two recently funded computing projects with the goal of developing cutting-edge scientific applications for future exascale supercomputers that can perform at least a billion billion computing operations per second – 50 to 100 times more than the most powerful supercomputers in the world today. The first project, led by SLAC, will develop computational tools to quickly sift through enormous piles of data produced by powerful X-ray lasers. The second project, led by DOE's Lawrence Berkeley National Laboratory (Berkeley Lab), will reengineer simulation software for a potentially transformational new particle accelerator technology, called plasma wakefield acceleration. The projects, which will each receive $10 million over four years, are among 15 fully-funded application development proposals and seven proposals selected for seed funding by the DOE's Exascale Computing Project (ECP). The ECP is part of President Obama's National Strategic Computing Initiative and intends to maximize the benefits of high-performance computing for U.S. economic competitiveness, national security and scientific discovery. "Many of our modern experiments generate enormous quantities of data," says Alex Aiken, professor of computer science at Stanford University and director of the newly formed SLAC Computer Science division, who is involved in the X-ray laser project. "Exascale computing will create the capabilities to handle unprecedented data volumes and, at the same time, will allow us to solve new, more complex simulation problems." X-ray lasers, such as SLAC's Linac Coherent Light Source (LCLS), have been proven to be extremely powerful "microscopes" that are capable of glimpsing some of nature's fastest and most fundamental processes on the atomic level. Researchers use LCLS, a DOE Office of Science User Facility, to create molecular movies, watch chemical bonds form and break, follow the path of electrons in materials and take 3-D snapshots of biological molecules that support the development of new drugs. At the same time X-ray lasers also generate giant amounts of data. A typical experiment at LCLS, which fires 120 flashes per second, fills up hundreds of thousands of gigabytes of disk space. Analyzing such a data volume in a short amount of time is already very challenging. And this situation is set to become dramatically harder: The next-generation LCLS-II X-ray laser will deliver 8,000 times more X-ray pulses per second, resulting in a similar increase in data volumes and data rates. Estimates are that the data flow will greatly exceed a trillion data 'bits' per second, and require hundreds of petabytes of online disk storage. As a result of the data flood even at today's levels, researchers collecting data at X-ray lasers such as LCLS presently receive only very limited feedback regarding the quality of their data. "This is a real problem because you might only find out days or weeks after your experiment that you should have made certain changes," says Berkeley Lab's Peter Zwart, one of the collaborators on the exascale project, who will develop computer algorithms for X-ray imaging of single particles.
“If we were able to look at our data on the fly, we could often do much better experiments.” Amedeo Perazzo, director of the LCLS Controls & Data Systems Division and principal investigator for this “ExaFEL” project, says, “We want to provide our users at LCLS, and in the future LCLS-II, with very fast feedback on their data so that they can make important experimental decisions in almost real time. The idea is to send the data from LCLS via DOE’s broadband science network ESnet to NERSC, the National Energy Research Scientific Computing Center, where supercomputers will analyze the data and send the results back to us – all of that within just a few minutes.” NERSC and ESnet are DOE Office of Science User Facilities at Berkeley Lab. X-ray data processing and analysis is quite an unusual task for supercomputers. “Traditionally these high-performance machines have mostly been used for complex simulations, such as climate modeling, rather than processing real-time data,” Perazzo says. “So we’re breaking completely new ground with our project, and foresee a number of important future applications of the data processing techniques being developed.” This project is enabled by the investments underway at SLAC to prepare for LCLS-II, with the installation of new infrastructure capable of handling these enormous amounts of data. “At Berkeley Lab, we’ll be heavily involved in developing algorithms for specific use cases,” says James Sethian, a professor of mathematics at the University of California, Berkeley, and head of Berkeley Lab’s Mathematics Group and the Center for Advanced Mathematics for Energy Research Applications (CAMERA). “This includes work on two different sets of algorithms. The first set, developed by a team led by Nick Sauter, consists of well-established analysis programs that we’ll reconfigure for exascale computer architectures, whose larger computer power will allow us to do better, more complex physics. The other set is brand new software for emerging technologies such as single-particle imaging, which is being designed to allow scientists to study the atomic structure of single bacteria or viruses in their living state.” The “ExaFEL” project led by Perazzo will take advantage of Aiken’s newly formed Stanford/SLAC team, and will collaborate with researchers at Los Alamos National Laboratory to develop systems software that makes optimal use of the architecture of the new exascale computers. “Supercomputers are very complicated, with millions of processors running in parallel,” Aiken says. “It’s a real computer science challenge to figure out how to use these new architectures most efficiently.” Finally, ESnet will provide the necessary networking capabilities to transfer data between the LCLS and supercomputing resources. Until exascale systems become available in the mid-2020s, the project will use NERSC’s Cori supercomputer for its development and testing. The second project aims to use exascale computing capabilities to simulate plasma wakefield accelerators. In this technology, charged particles such as electrons and their antimatter siblings, positrons, gain energy as they “surf” on waves of hot, ionized gas (known as ‘plasma’). The plasma wave is created by passing either an extremely powerful laser beam or a very energetic beam of electrons through the plasma.
The first approach is being tested at facilities like Berkeley Lab’s BELLA Center, and the other at SLAC’s Facility for Advanced Accelerator Experimental Tests (FACET), also a DOE Office of Science User Facility. Plans for a FACET-II upgrade are currently under DOE review. Due to the extremely large accelerating electric fields in plasma waves, which are 1,000 times stronger than in conventional accelerators, the technology could lead to much more compact and potentially less expensive particle accelerators for X-ray laser science and particle physics. “To design plasma accelerators and better understand the complex processes inside plasmas, we frequently do demanding computer simulations,” says Jean-Luc Vay, a senior physicist at Berkeley Lab, who is the principal investigator for this exascale project. “Simulations of a single plasma stage, which do not even take into account all complexity, take days to weeks on today’s supercomputers. If we ever want to be able to run simulations for future particle colliders, which will require up to 100 stages chained together, exascale computing is absolutely necessary.” The goal of the project is to take Berkeley Lab’s existing accelerator simulation code “Warp” and combine it with a high-resolution refinement tool named “BoxLib” to form a new code “WarpX” designed for future exascale computing architectures. Until these become available, tests will be performed on NERSC’s Cori supercomputer. “The code developments will be largely done at Berkeley with contributions from Lawrence Livermore National Laboratory,” says Cho-Kuen Ng, head of the Computational Electrodynamics Department in SLAC’s Technology Innovation Directorate. “SLAC has decades of experience with high-performance computing in the simulation of accelerators, and we’ll contribute, among other activities, by helping optimize the new code and making sure it runs fast and efficiently.” Mark Hogan, who leads SLAC’s plasma acceleration efforts, says, “The approaches at SLAC and Berkeley of creating plasma waves are quite different. Our lab’s role in this project is also to ensure that Berkeley’s codes will be applicable to particle-driven plasma accelerators – a technology that we’ll continue to develop at FACET-II.”
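Returning to the LCLS-II data rates quoted earlier in this article, a rough back-of-the-envelope check is easy to write down. The sketch below only scales the quoted 120 pulses per second by the quoted factor of 8,000; the assumed one megabyte of detector data per pulse is an illustrative figure of my own, not a number from SLAC.

# Illustrative arithmetic only: what per-pulse data size would be consistent
# with a data flow of order a trillion bits per second at LCLS-II rates?
lcls_pulses_per_s = 120          # current LCLS repetition rate quoted above
scaling_factor = 8000            # quoted LCLS-II increase in pulses per second
lcls2_pulses_per_s = lcls_pulses_per_s * scaling_factor   # 960,000 pulses/s

assumed_bytes_per_pulse = 1e6    # hypothetical 1 MB of detector data per pulse
bits_per_s = lcls2_pulses_per_s * assumed_bytes_per_pulse * 8
print(f"{lcls2_pulses_per_s} pulses/s, ~{bits_per_s / 1e12:.1f} Tbit/s")
# prints: 960000 pulses/s, ~7.7 Tbit/s, i.e. well above a trillion bits per second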


News Article | March 1, 2017
Site: www.theenergycollective.com

If a U.S.-based researcher or reactor designer needs to irradiate fuel or material with fast neutrons for testing, their current options are extremely limited. No domestic test facility can provide enough fast neutrons to do anything more than slowly irradiate a small quantity of tiny samples. Anything more requires the full cooperation of either Russia or China. It doesn’t take too much expertise or imagination to realize both of those options are difficult, expensive and loaded with risk in terms of schedule, intellectual property protection, export control limitations and test conditions. Lack of a facility hasn’t stopped people from recognizing that fast reactors have sufficient attractions to make them worth a considerable effort. Well-resourced teams like Bill Gates’s TerraPower that are deeply interested in fast reactors have spent the money and taken the risks associated with performing tests in available facilities. Last summer, John Kotek, in his role as the Acting Assistant Secretary of Energy for Nuclear Energy, tasked the Department of Energy’s Nuclear Energy Advisory Committee with evaluating the mission and requirements for a facility that could provide a domestic source of enough fast neutrons to support the testing that will be needed to design and license fast reactors here. The committee completed its work in December and produced a draft report. At the recent Advanced Reactor Technical Summit, Dr. Al Sattelberger, the chairman of the NEAC and a participant in the evaluation effort, described the document and its conclusions. The financially unconstrained conclusion of the group of evaluators, most with long experience in the DOE’s National Lab complex, is that the U.S. needs a new test reactor. The report includes a set of capabilities that the new facility should have. There is no design effort in progress, no site identified, and no money in the budget for such a facility. I was in the audience and took the opportunity to ask the obvious question. “The U.S. owns something called the Fast Flux Test Facility. Did your committee consider restoring the FFTF?” Dr. Sattelberger, who had introduced himself as a chemist among mostly nuclear engineers, responded as follows. Granting that Dr. Sattelberger is an advisor and not a representative of the Department of Energy, his response was still troubling. It was roughly equivalent to the response of a privileged teenager who says he wants mobility but then holds out for a dream car with options that haven’t been invented yet as a preferred path over fixing up the classic Cadillac loaded with all of the available options that is gathering dust in Grandma’s garage. His more impatient and practical sister might decide to go kick the tires on the Cadillac, find out what it would take to restore the vehicle to a like-new condition and imagine its nearer-term potential and value. Dr. Sattelberger was right to note that there have been numerous studies evaluating the option of using the FFTF for its designed purpose. One of the most comprehensive studies was completed in April 2007 by the Columbia Basin Consulting Group (CBCG) for the Tri-City Industrial Development Council. That study – Siting Study For Hanford Advanced Fuels Test & Research Center – was funded by DOE as part of the Global Nuclear Energy Partnership (GNEP) program.
The evaluators were particularly well-suited to the task; several of the consultants were, at the time, relatively recently retired engineers and operators from the Energy Department who had deep experience at the FFTF during its operational lifetime and its subsequent deactivation. Bill Stokes, still with CBCG, led that study effort and shared a copy of the report. He emphasized the talent of the crew who did the evaluation and stated that they were not motivated by self-interest; they were beyond the point of needing a job. The 116-page document provides a detailed description of an amazing facility equipped with the kinds of capabilities affordable at a time when developing fast reactors was a national priority. Though some dismiss the FFTF as old, it is about 15 to 20 years newer than most of the other test reactors in the U.S. and only has about ten years’ worth of operational wear. It has largely been protected from any permanent damage. Fortunately, Grandma never got around to investing the money that destruction and cleanup of her “old” Cadillac would have required. Here is the pithy concluding statement from the report: Those numbers included a 20% contingency. Stokes said that very little has changed at the site during the past 10 years, though the numbers will probably need some revision. There are more than enough opportunities for young and midlevel engineers and scientists to get involved in pie-in-the-sky design efforts to develop a new digital reactor. [That is my term for what Rickover would have called a “paper reactor” in his less electronic era.] The FFTF is an existing facility with real materials, real pumps, real valves, real fuel handling devices. Most importantly for the future of U.S. nuclear technical leadership, the FFTF can provide 5 to 10 times the fast neutron flux of any existing facility and it has the testing location capacity to support numerous parallel experiments. Since it already exists, its siting process cannot become a new battleground for the ancient rivalries between the national labs, their local economic boosters and their congressional representatives. The facility has its required state and local permits and is covered by an active environmental impact statement. It might be operational before the first shovelful of dirt could be turned for a new facility whose requirements document isn’t even started. Stuart Maloy is the advanced materials test lead at Los Alamos National Laboratory. Here is how he responded when asked about the urgency of a fast neutron test reactor. That statement is applicable to conventional reactors as well as fast reactors. Much of the neutron flux that affects cladding materials hasn’t been moderated. The FFTF offers an almost immediately available place for a new generation of nuclear professionals to learn that fast neutron fission isn’t something for the distant future or forgotten past. Designing systems and making them work isn’t just a programming exercise. There’s a cadre of willing and available teachers and mentors, some of whom still reside in eastern Washington, who would eagerly accept the challenge of transferring their knowledge to a new generation. It’s time to accept reality, quit holding out for a new facility and begin taking full advantage of our inheritance. While writing the above, I contacted the Idaho National Laboratory (INL) for their comments. Unfortunately, I sent my information request to the wrong office.
The process of routing the request and obtaining a response thus took longer than usual, so the response missed the deadline for the edition of Fuel Cycle Week in which the article was run. Before simply republishing that article here, I asked INL to provide an updated response and provided a copy of the initial article. Here is the response provided by INL Public Affairs and Strategic Initiatives. Aside: It’s worth noting that the study mentioned in item #3 is the CBCG study conducted for the GNEP program that is mentioned earlier in this article. That study describes FFTF as an incredible asset. Here is another quote from the Executive Summary of the Siting Study for Hanford Advanced Fuels Test & Research Center. That quote introduces an additional facility – the FMEF – that makes the FFTF site even more attractive. This is how the report briefly describes the FMEF. There is one more facility – Maintenance and Storage Facility (MASF) – that is described in the report. It is an integral and important part of the currently idled FFTF complex. Here is the brief summary description of the MASF found on page 16 of the Siting Study. Any open-minded decision maker motivated to support development of advanced reactors with a capable fast neutron test facility would be impressed by the potential of the facility that already exists. Any reasonably experienced and knowledgeable nuclear project manager would recognize that the path to building a brand-new facility would be far more tortuous and fraught with the potential for serious delays or even cancellation somewhere along the 15-20 years the project would require starting today. INL’s response to my request for information contained an additional quote. DOE has a documented process for capital acquisitions that is as arduous and cumbersome as the major system acquisition process used by the Department of Defense. There are some pretty solid reasons why each milestone step is bureaucratically and politically important. Done correctly, the process can help avoid technical SNAFUs like the A-12 and political quagmires like the MOX facility. However, the process can be accelerated when there is a need and an obvious answer to that need sitting around in the land-based equivalent of a mothball fleet. With libraries’ worth of QA documents, the physical presence of the facilities and some subtle political pressure, it should be possible for a focused and motivated DOE to power through both CD-0 (Statement of Mission Need) and CD-1 (Analysis of Alternatives) in record time. This article, “FFTF restoration would provide the fastest, most efficient path to fast spectrum neutron testing,” first appeared on Atomic Insights.


News Article | December 15, 2016
Site: www.scientificamerican.com

SAN FRANCISCO—Anxiety among Earth and climate scientists has been mounting for weeks. The election of Donald J. Trump as U.S. president, a candidate who called human-driven climate change a hoax, was followed by Trump naming more and more climate-change doubters to run the government’s environment and energy agencies. So this Wednesday a throng of about 2,000 researchers packed a hall at the American Geophysical Union’s (AGU) annual meeting here to hear from politicians and leading scientists about whether science is about to be thrown out the window. Along with, possibly, their jobs. “I was definitely given a gut check this past week,” said a worried scientist from Los Alamos National Laboratory in New Mexico, which is part of the Department of Energy (DoE). But when California Gov. Jerry Brown strode to the microphone and delivered an animated pep talk, the crowd’s mood rapidly changed. “We have the scientists, we have the lawyers and we are ready to fight!” Brown said, to loud cheers. “And California is no stranger to this fight. Our emission standards, our energy rules, drove U.S. policy. Whatever Washington thinks it is doing, California is the future.” Brown urged scientists to take the long view. “This is not one battle. This is a long slog,” he said. One election and a presidential term do not set policy forever, he noted. Scientists, he said to more cheers, are truth-tellers and truth-seekers. “I’ve been doing this a long time,” he said, “and eventually the truth will prevail.” In fact, he added, the election should energize the research community in a needed way. A lot of scientists and policy makers may have been quietly pushing for actions to combat warming, he said, but not pushing very hard. “You know that sometimes you need a heart attack to stop smoking?” he asked. “Well, maybe we just got our heart attack.” A lot of scientists were fearful because the Trump transition team has asked the DoE for names of employees who worked on climate research and were members of certain science groups. On Tuesday the department said it would not reply to a list of 74 questions designed to root out such information. Brown told the crowd that his state would brook no political interference in the research done at the many national labs operated by the University of California. Alexandra Shultz, the AGU’s public affairs director, told the audience that there are opportunities to advance climate science in the new Congress. “We do have a great deal of bipartisan legislative support,” she said. She specifically named Rep. John Culberson, a Republican congressman from Texas, who chairs the House Subcommittee on Commerce, Justice, Science and Related Agencies, which oversees NASA, the National Science Foundation and related agencies. Culberson, she said, supported science funding. He has agreed that humans are affecting climate, although he questions the degree. Other opportunities can be seen with some Trump nominees, said scientist Daniel Kammen, who runs the energy and resources group at the University of California, Berkeley’s Goldman School of Public Policy. Rick Perry, named to head the DoE, was a big supporter of wind energy development when he was governor of Texas, Kammen said. And Rex Tillerson, the ExxonMobil chief nominated as secretary of state, has supported the idea of putting a price on carbon emissions. (Kammen did emphasize that he thinks Tillerson has too many conflicts of interest to serve, however.)
A cautionary note came from the AGU’s president-elect, Eric Davidson of the University of Maryland’s Center for Environmental Science. “It is great to know California has our backs,” he said. “But we need support from all 435 House districts and 100 senators across the entire country,” he continued. The Geophysical Union will be a force for what he called “safe places for science,” free from all forms of harassment. “But we have to work at building bridges,” he said.


Rauscher S.A.,Los Alamos National Laboratory | Kucharski F.,Abdus Salam International Center For Theoretical Physics | Enfield D.B.,University of Miami
Journal of Climate | Year: 2011

This paper addresses several hypotheses designed to explain why AOGCM simulations of future climate in the third phase of the Coupled Model Intercomparison Project (CMIP3) feature an intensified reduction of precipitation over the Meso-America (MA) region. While the drying is consistent with an amplification of the subtropical high pressure cells and an equatorward contraction of convective regions due to the "upped ante" for convection in a warmer atmosphere, the physical mechanisms behind the intensity and robustness of the MA drying signal have not been fully explored. Regional variations in sea surface temperature (SST) warming may play a role. First, SSTs over the tropical North Atlantic (TNA) do not warm as much as the surrounding ocean. The troposphere senses a TNA that is cooler than the tropical Pacific, potentially exciting a Gill-type response, increasing the strength of the North Atlantic subtropical high. Second, the warm ENSO-like state simulated in the eastern tropical Pacific could decrease precipitation over MA, as warm ENSO events are associated with drying over MA. The authors use the International Centre for Theoretical Physics (ICTP) AGCM to investigate the effects of these regional SST warming variations on the projected drying over MA. First, the change of SSTs [Special Report on Emissions Scenarios (SRES) A1B's Twentieth-Century Climate in Coupled Model (A1B-20C)] in the ensemble average of the CMIP3 models is applied to determine if the ICTP AGCM can replicate the future drying. Then the effects of 1) removing the reduced warming over the TNA, 2) removing the warm ENSO-event-like pattern in the eastern tropical Pacific, and 3) applying uniform SST warming throughout the tropics are tested. The ICTP AGCM can reproduce the general pattern and amount of precipitation over MA. Simulations in which the CMIP3 A1B-20C ensemble-average SSTs are added to climatological SSTs show drying of more than 20% over the MA region, similar to the CMIP3 ensemble average. Replacing the relatively cooler SSTs over the TNA excites a Gill response consistent with an off-equatorial heating anomaly, showing that the TNA relative cooling is responsible for about 16% (31%) of the drying in late spring (early summer). The warm ENSO-like SST pattern over the eastern Pacific also affects precipitation over the MA region, with changes of 19% and 31% in March-June (MMJ) and June-August (JJA), respectively. This work highlights the importance of understanding even robust signals in the CMIP3 future scenario simulations, and should aid in the design and analysis of future climate change studies over the region. © 2011 American Meteorological Society.


Whitford P.C.,Rice University | Sanbonmatsu K.Y.,Los Alamos National Laboratory | Onuchic J.N.,Rice University
Reports on Progress in Physics | Year: 2012

While the energy landscape theory of protein folding is now a widely accepted view for understanding how relatively weak molecular interactions lead to rapid and cooperative protein folding, such a framework must be extended to describe the large-scale functional motions observed in molecular machines. In this review, we discuss (1) the development of the energy landscape theory of biomolecular folding, (2) recent advances toward establishing a consistent understanding of folding and function, and (3) emerging themes in the functional motions of enzymes, biomolecular motors and other biomolecular machines. Recent theoretical, computational and experimental lines of investigation have provided a very dynamic picture of biomolecular motion. In contrast to earlier ideas, where molecular machines were thought to function similarly to macroscopic machines, with rigid components that move along a few degrees of freedom in a deterministic fashion, biomolecular complexes are only marginally stable. Since the stabilizing contribution of each atomic interaction is on the order of the thermal fluctuations in solution, the rigid body description of molecular function must be revisited. An emerging theme is that functional motions encompass order-disorder transitions and structural flexibility provides significant contributions to the free energy. In this review, we describe the biological importance of order-disorder transitions and discuss the statistical-mechanical foundation of theoretical approaches that can characterize such transitions. © 2012 IOP Publishing Ltd.


Boffetta G.,University of Turin | Ecke R.E.,Los Alamos National Laboratory
Annual Review of Fluid Mechanics | Year: 2011

In physical systems, a reduction in dimensionality often leads to exciting new phenomena. Here we discuss the novel effects arising from the consideration of fluid turbulence confined to two spatial dimensions. The additional conservation constraint on squared vorticity relative to three-dimensional (3D) turbulence leads to the dual-cascade scenario of Kraichnan and Batchelor with an inverse energy cascade to larger scales and a direct enstrophy cascade to smaller scales. Specific theoretical predictions of spectra, structure functions, probability distributions, and mechanisms are presented, and major experimental and numerical comparisons are reviewed. The introduction of 3D perturbations does not destroy the main features of the cascade picture, implying that 2D turbulence phenomenology establishes the general picture of turbulent fluid flows when one spatial direction is heavily constrained by geometry or by applied body forces. Such flows are common in geophysical and planetary contexts, are beautiful to observe, and reflect the impact of dimensionality on fluid turbulence.
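For orientation, the dual-cascade spectra referred to in this abstract take a standard form in the Kraichnan-Batchelor theory (quoted here from that standard theory rather than from the review itself): for forcing at wavenumber $k_f$,

$$E(k) = C\,\varepsilon^{2/3} k^{-5/3} \quad (k < k_f), \qquad E(k) \sim \eta^{2/3} k^{-3} \quad (k > k_f),$$

where $\varepsilon$ is the energy flux carried upscale by the inverse cascade, $\eta$ is the enstrophy flux carried downscale by the direct cascade, and the $k^{-3}$ range carries a logarithmic correction in Kraichnan's original treatment.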


Zhang Y.,Rice University | Grady N.K.,Rice University | Grady N.K.,Los Alamos National Laboratory | Ayala-Orozco C.,Rice University | Halas N.J.,Rice University
Nano Letters | Year: 2011

Plasmonic nanostructures enable the generation of large electromagnetic fields confined to small volumes, potentially providing a route for the development of nanoengineered nonlinear optical media. A metal-capped hemispherical nanoparticle, also known as a nanocup, generates second harmonic light with increasing intensity as the angle between the incident fundamental beam and the nanocup symmetry axis is increased. Nanoparticle orientation also modifies the emission direction of the second harmonic light. With conversion efficiencies similar to those of inorganic SHG crystals, these structures provide a promising approach for the design and fabrication of stable, synthetic second-order nonlinear optical materials tailored for specific wavelengths. © 2011 American Chemical Society.


Wang Z.Q.,University of North Texas | Beyerlein I.J.,Los Alamos National Laboratory
International Journal of Plasticity | Year: 2011

Atomistic simulations have shown that a screw dislocation in body-centered cubic (BCC) metals has a complex non-planar atomic core structure. The configuration of this core controls their motion and is affected not only by the usual resolved shear stress on the dislocation, but also by non-driving stress components. Consequences of the latter are referred to as non-Schmid effects. These atomic and micro-scale effects are the reason slip characteristics in deforming single and polycrystalline BCC metals are extremely sensitive to the direction and sense of the applied load. In this paper, we develop a three-dimensional discrete dislocation dynamics (DD) simulation model to understand the relationship between individual dislocation glide behavior and macro-scale plastic slip behavior in single crystal BCC Ta. For the first time, it is shown that non-Schmid effects on screw dislocations of both {110} and {112} slip systems must be implemented into the DD models in order to predict the strong plastic anisotropy and tension-compression asymmetry experimentally observed in the stress-strain curves of single crystal Ta. Incorporation of fundamental atomistic information is critical for developing a physics-based, predictive meso-scale DD simulation tool that can connect length/time scales and investigate the underlying mechanisms governing the deformation of BCC metals. © 2011 Elsevier Ltd. All rights reserved.


Wen X.-D.,Cornell University | Wen X.-D.,Los Alamos National Laboratory | Hoffmann R.,Cornell University | Ashcroft N.W.,Cornell University
Journal of the American Chemical Society | Year: 2011

In a theoretical study, benzene is compressed up to 300 GPa. The transformations found between molecular phases generally match the experimental findings in the moderate pressure regime (<20 GPa): phase I (Pbca) is found to be stable up to 4 GPa, while phase II (P43212) is preferred in a narrow pressure range of 4-7 GPa. Phase III (P21/c) is at lowest enthalpy at higher pressures. Above 50 GPa, phase V (P21 at 0 GPa; P21/c at high pressure) comes into play, slightly more stable than phase III in the range of 50-80 GPa, but unstable to rearrangement to a saturated, four-coordinate (at C), one-dimensional polymer. Actually, throughout the entire pressure range, crystals of graphane possess lower enthalpy than molecular benzene structures; a simple thermochemical argument is given for why this is so. In several of the benzene phases there nevertheless are substantial barriers to rearranging the molecules to a saturated polymer, especially at low temperatures. Even at room temperature these barriers should allow one to study the effect of pressure on the metastable molecular phases. Molecular phase III (P21/c) is one such; it remains metastable to higher pressures up to ∼200 GPa, at which point it too rearranges spontaneously to a saturated, tetracoordinate CH polymer. At 300 K the isomerization transition occurs at a lower pressure. Nevertheless, there may be a narrow region of pressure, between P = 180 and 200 GPa, where one could find a metallic, molecular benzene state. We explore several lower dimensional models for such a metallic benzene. We also probe the possible first steps in a localized, nucleated benzene polymerization by studying the dimerization of benzene molecules. Several new (C6H6)2 dimers are predicted. © 2011 American Chemical Society.


Wang X.,Donghua University | Ding B.,Donghua University | Yu J.,Donghua University | Wang M.,Tsinghua University | And 2 more authors.
Nano Today | Year: 2011

Biomimetics provides a model for the development of functional surfaces with special wettability. Recently, manufacturing bio-inspired superhydrophobic surfaces has become an increasingly hot research topic. The electrospinning technique is a versatile and effective method for manufacturing nanomaterials with controllable compositions and structures, and therefore provides an ideal strategy for construction of superhydrophobic surfaces on a large scale. After a brief description of several superhydrophobic surfaces inspired by nature, we highlight recent progress in the design and fabrication of these bio-inspired superhydrophobic surfaces via the electrospinning technique. The studies on the switchable wettability of nanofibrous surfaces brought about by external stimuli are also addressed. We conclude with a summary of current and future research efforts and opportunities in the development of electrospun nanomaterials for superhydrophobic applications. © 2011 Elsevier Ltd.


Kilina S.,North Dakota State University | Velizhanin K.A.,Theoretical Division and Center for Nonlinear Studies | Ivanov S.,Los Alamos National Laboratory | Prezhdo O.V.,University of Rochester | Tretiak S.,Los Alamos National Laboratory
ACS Nano | Year: 2012

Understanding the pathways of hot exciton relaxation in photoexcited semiconductor nanocrystals, also called quantum dots (QDs), is of paramount importance in multiple energy, electronics and biological applications. An important nonradiative relaxation channel originates from the nonadiabatic (NA) coupling of electronic degrees of freedom to nuclear vibrations, which in QDs depend on the confinement effects and complicated surface chemistry. To elucidate the role of surface ligands in relaxation processes of nanocrystals, we study the dynamics of the NA exciton relaxation in Cd33Se33 semiconductor quantum dots passivated by either trimethylphosphine oxide or methylamine ligands using explicit time-dependent modeling. The large extent of hybridization between electronic states of quantum dot and ligand molecules is found to strongly facilitate exciton relaxation. Our computational results for the ligand contributions to the exciton relaxation and electronic energy-loss in small clusters are further extrapolated to larger quantum dots. © 2012 American Chemical Society.


Yu R.,Rice University | Zhu J.-X.,Los Alamos National Laboratory | Si Q.,Rice University
Physical Review Letters | Year: 2011

The degree of electron correlations remains a central issue in the iron-based superconductors. The parent iron pnictides are antiferromagnetic, and their bad-metal behavior has been interpreted in terms of proximity to a Mott transition. We study such a transition in multiorbital models on modulated lattices containing an ordered pattern of iron vacancies, using a slave-rotor method. We show that the ordered vacancies lead to a band narrowing, which pushes the system to the Mott insulator side. This effect is proposed to underlie the insulating behavior observed in the parent compounds of the newly discovered (K,Tl)yFexSe2 superconductors. © 2011 American Physical Society.


Kato Y.,Los Alamos National Laboratory | Batista C.D.,Los Alamos National Laboratory | Vekhter I.,Louisiana State University
Physical Review Letters | Year: 2011

We develop a theory of the coexistence of superconductivity (SC) and antiferromagnetism (AFM) in CeCoIn5. We show that in Pauli-limited nodal superconductors the nesting of the quasiparticle pockets induced by Zeeman pair breaking leads to incommensurate AFM with the magnetic moment normal to the field. We compute the phase diagram and find a first order transition to the normal state at low temperatures, the absence of normal state AFM, and the coexistence of SC and AFM at high fields, in agreement with experiments. We also predict the existence of a new double-Q magnetic phase. © 2011 American Physical Society.


Palguta J.,University of California at Los Angeles | Schubert G.,University of California at Los Angeles | Travis B.J.,Los Alamos National Laboratory
Earth and Planetary Science Letters | Year: 2010

Carbonaceous chondrites are a class of primitive meteorites that likely represent early solar system materials, thus providing critical insights about planetary evolution. However, evidence shows that they have undergone considerable processing. Elucidating the complex formation history of aqueously altered chondrite groups has been a principal objective of cosmochemistry. Fulfilling this objective requires understanding the nature of the water-rock interactions involved. Many current alteration models are generally founded upon an assumption of uniform alteration in a closed system. In these models, different chondrite groups must originate on separate parent bodies. Contrary to this common view, we show that chondrite data are consistent with non-uniform alteration in a convecting system so that multiple chondrite groups can form within a single body. © 2010 Elsevier B.V.


Milonni P.W.,Los Alamos National Laboratory | Milonni P.W.,University of Rochester | Boyd R.W.,University of Rochester
Advances in Optics and Photonics | Year: 2010

We review different expressions that have been proposed for the stress tensor and for the linear momentum of light in dielectric media, focusing on the Abraham and Minkowski forms. Analyses of simple models and consideration of available experimental results support the interpretation of the Abraham momentum as the kinetic momentum of the field, while the Minkowski momentum is the recoil momentum of absorbing or emitting guest atoms in a host dielectric. Momentum conservation requires consideration not only of the momentum of the field and of recoiling guest atoms, but also of the momentum the field imparts to the medium. Different model assumptions with respect to electrostriction and the dipole force lead to different expressions for this momentum. We summarize recent work on the definition of the canonical momentum for the field in a dielectric medium. © 2010 Optical Society of America.
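For a single photon of angular frequency $\omega$ in a nondispersive medium of refractive index $n$, the two momenta at the center of this debate take the familiar textbook forms (standard expressions, stated here for orientation rather than drawn from the abstract):

$$p_{\mathrm{Abraham}} = \frac{\hbar\omega}{nc}, \qquad p_{\mathrm{Minkowski}} = \frac{n\hbar\omega}{c},$$

so the two differ by a factor of $n^2$, which is why measurements of the recoil of emitting or absorbing guest atoms embedded in a dielectric can in principle distinguish the interpretations.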


Wang M.,Los Alamos National Laboratory | Guo Z.-Y.,Tsinghua University
Physics Letters, Section A: General, Atomic and Solid State Physics | Year: 2010

The anomalous thermal transport properties of nanotubes may lead to many important applications, but the mechanisms are still unclear. In this work, we present new governing equations for non-Fourier heat conduction in nanomaterials based on the concept of thermomass. The resulting predictions of the effective thermal conductivities of nanotubes agree very well with the available experimental data. Analysis suggests that the inertial effect of heat and the confined heat flux by nanostructured surfaces are two key mechanisms causing the anomalous temperature and size dependences of the effective thermal conductivity of nanotubes. © 2010 Elsevier B.V.


Das A.,Abdus Salam International Center For Theoretical Physics | Das A.,Los Alamos National Laboratory
Physical Review B - Condensed Matter and Materials Physics | Year: 2010

We show that when a quantum many-body system is subjected to coherent periodic driving, the response may exhibit exotic freezing behavior in the high driving frequency (ω) regime. In a periodically driven classical thermodynamic system, freezing at high ω occurs when 1/ω is much smaller than the characteristic relaxation time of the system, and hence the freezing always increases there as ω is increased. Here, on the contrary, we see surprising nonmonotonic freezing behavior of the response with ω, showing a curious peak-valley structure. Quite interestingly, the entire system tends to freeze almost absolutely (the freezing peaks) when driven with a certain combination of driving parameter values (amplitude and ω) due to coherent suppression of the dynamics of the quantum many-body modes, which has no classical analog. We demonstrate this new freezing phenomenon analytically (supported by large-scale numerics) for a general class of integrable quantum spin systems. © 2010 The American Physical Society.


Nelson A.E.,University of Washington | Roy T.S.,Tata Institute of Fundamental Research | Roy T.S.,Los Alamos National Laboratory
Physical Review Letters | Year: 2015

We propose the framework of generalized supersoft supersymmetry breaking. "Supersoft" models, with D-type supersymmetry breaking and heavy Dirac gauginos, are considerably less constrained by the LHC searches than the well-studied MSSM. These models also ameliorate the supersymmetric flavor and CP problems. However, previously considered mechanisms for obtaining a natural size Higgsino mass parameter (namely, μ) in supersoft models have been relatively complicated and contrived. Obtaining a 125 GeV mass for the lightest Higgs boson has also been difficult. Additional issues with the supersoft scenario arise from the fact that these models contain new scalars in the adjoint representation of the standard model, which may obtain negative squared masses, breaking color and generating too large a T parameter. In this Letter, we introduce new operators into supersoft models which can potentially solve all these issues. A novel feature of this framework is that the new μ term can give unequal masses to the up and down type Higgs fields, and the Higgsinos can be much heavier than the Higgs boson without fine-tuning. However, unequal Higgs and Higgsino masses also remove some attractive features of supersoft supersymmetry. © 2015 American Physical Society.


Miao J.,University of California at Los Angeles | Sandberg R.L.,Los Alamos National Laboratory | Song C.,RIKEN
IEEE Journal on Selected Topics in Quantum Electronics | Year: 2012

For centuries, lens-based microscopy, such as optical, phase-contrast, fluorescence, confocal, and electron microscopy, has played an important role in the evolution of modern science and technology. In 1999, a novel form of microscopy, i.e., coherent diffraction imaging (also termed coherent diffraction microscopy or lensless imaging), was developed and transformed our conventional view of microscopy, in which the diffraction pattern of a noncrystalline specimen or a nanocrystal was first measured and then directly phased to obtain a high-resolution image. The well-known phase problem was solved by combining the oversampling method with iterative algorithms. In this paper, we will briefly discuss the principle of coherent diffraction imaging, present various implementation schemes of this imaging modality, and illustrate its broad applications in materials science, nanoscience, and biology. As coherent X-ray sources such as high harmonic generation and X-ray free-electron lasers are presently under rapid development worldwide, coherent diffraction imaging can potentially be applied to perform high-resolution imaging of materials/nanoscience and biological specimens at the femtosecond time scale. © 2006 IEEE.
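The "oversampling method with iterative algorithms" mentioned above is commonly realized as alternating projections between the measured Fourier magnitudes and a real-space support constraint. The sketch below is a minimal error-reduction-style loop in that spirit; the array names, the support mask and the random starting phases are illustrative assumptions, not code from the authors.

import numpy as np

# Minimal sketch of error-reduction phase retrieval for coherent diffraction
# imaging. 'magnitudes' holds the square roots of the oversampled diffraction
# intensities; 'support' is a boolean mask of where the object may be nonzero.
def error_reduction(magnitudes, support, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    # start from the measured magnitudes combined with random phases
    start_phases = np.exp(2j * np.pi * rng.random(magnitudes.shape))
    image = np.fft.ifft2(magnitudes * start_phases)
    for _ in range(n_iter):
        # Fourier-space constraint: keep current phases, impose measured magnitudes
        F = np.fft.fft2(image)
        F = magnitudes * np.exp(1j * np.angle(F))
        image = np.fft.ifft2(F)
        # real-space constraint: zero the estimate outside the support region
        image = np.where(support, image, 0.0)
    return image

In practice, variants such as Fienup's hybrid input-output algorithm are usually preferred for their better convergence, but the alternating-projection structure is the same.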


Wang X.,Rice University | Belyanin A.A.,Texas A&M University | Crooker S.A.,Los Alamos National Laboratory | Mittleman D.M.,Rice University | Kono J.,Rice University
Nature Physics | Year: 2010

Maximum modulation of light transmission occurs when an opaque medium is suddenly made transparent. This phenomenon occurs in atomic and molecular gases through different mechanisms, whereas much room remains for further studies in solids. A plasma is an illustrative system showing opacity for low-frequency light, and light-plasma interaction theory provides a universal framework to describe diverse phenomena including radiation in space plasmas, diagnostics of laboratory plasmas and collective excitations in condensed matter. However, induced transparency in plasmas remains relatively unexplored. Here, we use coherent terahertz magneto-spectroscopy to reveal a thermally and magnetically induced transparency in a semiconductor plasma. A sudden appearance and disappearance of transmission through electron-doped InSb is observed over narrow temperature and magnetic field ranges, owing to coherent interference between left- and right-circularly polarized terahertz eigenmodes. Excellent agreement with theory reveals long-lived coherence of magneto-plasmons and demonstrates the importance of coherent interference in the terahertz regime. © 2010 Macmillan Publishers Limited. All rights reserved.


Wen X.-D.,Cornell University | Wen X.-D.,Los Alamos National Laboratory | Hoffmann R.,Cornell University | Ashcroft N.W.,Cornell University
Advanced Materials | Year: 2013

Single layer two-dimensional CdSe nanosheets are explored theoretically, highlighting the role of the coordinating (and stabilizing) ligands. Four isomeric CdSe single sheets bonded to H atoms are studied (CdSeH2), as are four similar sheets with NH3 coordinated to the Cd atoms. These are quite corrugated, unlike the pristine sheets, and have a different stability order from their graphane analogs. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Wu H.-C.,Los Alamos National Laboratory | Meyer-Ter-Vehn J.,Max Planck Institute of Quantum Optics
Nature Photonics | Year: 2012

Half-cycle picosecond pulses have been produced from thin photoconductors when applying an electric field across the surface and switching on conduction using a short laser pulse. The transverse current in the wafer plane then emits half-cycle pulses in a normal direction, and pulses of 500 fs duration and 1×10^6 V m^-1 peak electric field have been observed. Here, we show that single half-cycle pulses with a duration of 50 as and up to 1×10^13 V m^-1 can be produced when irradiating a double foil target with intense few-cycle laser pulses. Focused onto an ultrathin foil, all electrons are blown out, forming a uniform sheet of relativistic electrons. A second layer, placed some distance behind, reflects the drive beam but lets electrons pass straight through. Under oblique incidence, beam reflection provides the transverse current, which emits intense half-cycle pulses. Such a pulse may completely ionize even heavier atoms. With these developments, new types of attosecond pump-probe experiments will become possible. © 2012 Macmillan Publishers Limited. All rights reserved.


"For the first time, we can begin to unravel the ultrafast transformation of a rock sample during a dynamic process like shock compression. By taking a series of snapshots, we can capture what is happening during very rapid processes," says Wendy Mao, an associate professor of geological sciences and of photon science. Using the Linac Coherent Light Source (LCLS) X-ray laser at SLAC National Accelerator Laboratory, Mao and Arianna Gleason, a postdoctoral researcher at Los Alamos National Laboratory and a visiting researcher in Mao's Extreme Environments Laboratory, are performing experiments that allow them to witness the swift transformations of shocked rocks. First, they use high-powered optical lasers to superheat a tiny part of a rock sample and create a plasma. "The part of the rock hit by the laser becomes a plasma that blows off, and it blows off so fast that it creates a shock wave traveling in the other direction," Gleason explains. The heat and compression generated by the moving shock wave changes the crystalline structure of the remaining rock sample, rearranging its atoms into a different mineral. In order to illuminate and record the properties of the changing mineral, the scientists fire a burst of focused X-ray beams from LCLS at the sample just nanoseconds after the first laser pulse. "The X-rays have a wavelength that is just right for letting us measure the distances between atoms," Gleason says. "Based on the position of the atoms, we can tell exactly what material it is." By varying the arrival time of the X-rays, the scientists can generate a series of snapshots of the mineral as it changes over time. Creating a time series that represents 50 nanoseconds of rock changes can require 6 to 12 hours in the lab. Using their technique, Mao and Gleason recently showed that the mineral stishovite, a rare, extremely hard and dense form of silica, can form in just a few nanoseconds, or billionths of a second – tens or even hundreds of times faster than previously thought. "The high-pressure behavior of silica has been studied extensively because of its application not only to planetary science but fundamental physics, chemistry and materials science as well," Mao says. "This study provides critical insight into the mechanism behind how different forms of silica transform from one structure to another." Explore further: Scientists use lasers to simulate shock effects of meteorite impact on silica


News Article | November 4, 2016
Site: www.newscientist.com

The numbers are in. We can now precisely count how many cancer-related DNA mutations accumulate in smokers’ organs over time. On average, there is one DNA mutation per lung cell for every 50 cigarettes smoked, according to a new analysis. People who smoke a pack of 20 a day for a year generate 150 mutations per lung cell, 97 per larynx cell, 39 per pharynx cell, 18 per bladder cell and six per liver cell. Epidemiological studies previously linked tobacco smoking with at least 17 classes of cancer, but this is the first time researchers have been able to quantify the molecular damage inflicted on DNA. Ludmil Alexandrov at Los Alamos National Laboratory in New Mexico and his colleagues achieved this by comparing tumour DNA from 2500 smokers and 1000 non-smokers. This allowed them to identify which mutations were associated with smoking. Theoretically, every DNA mutation has the potential to trigger a cascade of genetic damage that causes cells to become cancerous. However, we still don’t know what the probability is of a single smoking-related DNA mutation turning into cancer, or which mutation types are likely to be more malignant. “This is research we are currently pursuing,” Alexandrov says. Some smokers never develop cancer despite accruing thousands of mutations, but this is purely down to luck, Alexandrov says. “Smoking is like playing Russian roulette: the more you play, the higher the chance the mutations will hit the right genes and you will develop cancer,” he says. “However, there will always be people who smoke a lot but the mutations do not hit the right genes.” The team hopes their findings will deter people from taking up smoking and debunk the myth that social smoking is harmless. Every cigarette has the potential to cause genetic mutations, Alexandrov says. Quitting smoking will not reverse these mutations – they leave permanent scars on DNA – but it will prevent the added risk of more mutations, he says. There is good evidence that people who stop smoking have a significantly lower risk of premature death than those who continue, says Simon Chapman at the University of Sydney, Australia. For example, a UK study that followed 35,000 men for half a century found that smoking shaved 10 years off average life expectancy. But quitting at age 30 mostly erased the risk of premature death, and giving up at 50 halved it. “Many smokers believe there’s no point in quitting because the damage is already done,” says Chapman. “But if smokers quit by middle age, they can avoid nearly all the excess risk of tobacco-caused deaths.”
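The per-cell numbers quoted above are mutually consistent; the short sketch below simply restates the arithmetic, using only figures reported in the article plus the assumption, as described, that mutations accumulate roughly linearly with the amount smoked.

# One mutation per lung cell per ~50 cigarettes, and the reported per-organ
# counts for one pack-year (a pack of 20 a day for a year).
cigarettes_per_pack_year = 20 * 365                 # 7,300 cigarettes
lung_per_pack_year = cigarettes_per_pack_year / 50  # ~146, reported as ~150
print(f"~{lung_per_pack_year:.0f} mutations per lung cell per pack-year")

# Reported mutations per cell per pack-year for other tissues, scaled to a
# hypothetical 10 pack-year smoking history (assumes linear accumulation).
per_pack_year = {"larynx": 97, "pharynx": 39, "bladder": 18, "liver": 6}
for organ, rate in per_pack_year.items():
    print(f"10 pack-years: ~{10 * rate} mutations per {organ} cell")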


News Article | November 6, 2015
Site: phys.org

TED Fellows are individuals lauded for their innovative solutions to the world's most complex challenges today. For Boykin, the challenge came in the form of the whitefly, an insect that transmits a virus to cassava, a root vegetable that nearly 800 million people worldwide rely on for sustenance, some for income. In 2012, Boykin attended an Agricultural Research Connections workshop hosted by the Bill and Melinda Gates Foundation in Kenya. During that trip, she visited a smallholder farm in East Africa and saw the devastation caused by whiteflies. "At that moment, I was sold that this is what I needed to be doing," she said. "The situation was unacceptable, and my skills could be applied to the problem. I hadn't considered before that I could pass them on to the next generation of scientists. I decided the best use of my time on Earth was to make a difference here." A computational biologist, Boykin uses genomics, supercomputing and phylogenetics to identify whitefly species, gathering information necessary for researchers to modify cassava to resist both insect and virus. To accelerate progress, she launched WhiteFlyBase — the world's first database of whitefly genetic information — with the hope of eradicating whitefly and bringing food security to East Africa. According to Boykin, Cassava brown streak disease is caused by two devastating viruses, Cassava brown streak virus and Ugandan cassava brown streak virus, which are frequently found infecting cassava, one of sub-Saharan Africa's most important staple food crops. Each year these viruses cause losses of up to $100 million and can leave entire families without their primary food source for an entire year. "I became interested in Sub-Saharan Africa because, scientifically, it sits at the base of that evolutionary tree," Boykin said. "The fact that they've been around for millions of years fascinated me." For personal stories about the profound challenges African smallholder farmers face, Boykin recommends "The Last Hunger Season: A Year in an African Farm Community on the Brink of Change" by Roger Thurow. "It outlines the day in the life of the strongest people on the planet, the women smallholder farmers," she said. While the whitefly has not yet been fully eradicated, it is now highly regulated around the world. Countries have massive regulations on whitefly moving across borders. Whiteflies can transmit viruses to tomatoes, sweet potatoes and ornamental plants. They are dreaded worldwide.
The connection to New Mexico
Boykin has a long history in New Mexico. Her mother and father met in Villanueva while student teaching. "I'm sure that hearing the stories of them growing up, the beautiful scenery, the warm people and amazing food, all influenced my decision to attend UNM," she said. Boykin started her Ph.D. in 1998 with biology professor Timothy Lowrey. "I chose UNM because I wanted to learn more about molecular evolution and systematics and many in the department were on the cutting edge of these fields," she said. "Laura excelled in her Ph.D. research at UNM and was never afraid to tackle difficult problems with innovative methods," Lowrey said. "She was particularly adept at forming interdisciplinary research relationships to achieve results that were published in high-quality, peer-reviewed scientific journals.
I am extremely happy that she is applying her communication and research talents to the control of Cassava whitefly in Africa, which will impact and improve life for millions of people on the continent where I began my academic career." During her time at UNM, Boykin worked at Los Alamos National Laboratory, in the Theoretical Biology and Biophysics Group, studying the evolution of influenza and hepatitis C, where she got her first exposure to supercomputing. "At the time, many thought I was crazy to have two major research projects going," she said, "but looking back, it was one of the best decisions of my life. The skills from both my plant systematics and viral evolution knowledge set me up to be exactly where I am today—studying the speciation of whiteflies that transmit deadly viruses." Boykin said that she loved her time at UNM. "It was hard work but it has paid off," she said. "I must admit, I miss the Frontier and El Patio, as there is no comparable Mexican food in Perth, Western Australia, where I currently live. I look forward to visiting again in the near future."


News Article | January 27, 2016
Site: phys.org

"Using cerium allows us to gain a fundamental understanding of a wide range of phenomena related to the multiphase properties of materials, including how strength evolves following phase transitions," said Brian Jensen, who is a physicist and team leader in the Shock and Detonation Physics Group at the U.S. Department of Energy's (DOE) Los Alamos National Laboratory (LANL). "Studying the formation and evolution of jets in metals and, more generally, how materials at extreme conditions respond using X-ray phase contrast imaging is relevant to such things as meteorite impacts, the performance of explosives and detonators, understanding crack nucleation and propagation in materials, and the development of new materials with tailored properties whose applications include automotive and airplane components, lighter and more impact-resistant armor, and debris shields in space, to name a few." Scientists from Los Alamos, Argonne National Laboratory and National Security Technologies LLC generated shock waves in cerium samples using the state-of-the-art IMPact system for ULtrafast Synchrotron Experiments (IMPULSE), which was developed specifically for use at the Advanced Photon Source (APS), a DOE Office of Science User Facility located at Argonne. IMPULSE is a gas-gun system that launches projectiles at targets placed in the path of the APS's X-ray beam at the 32-ID beamline. Projectiles with copper impactors struck the cerium, creating shock waves inside that transformed it from the ambient γ phase into a higher pressure, higher temperature α state. Subsequently, the shock waves reflected from the back of the cerium sample, interacting with grooves that had been machined there. As the shocked cerium returned to its γ phase at a high temperature, metal jets formed and grew at the groove locations. Because cerium is opaque to X-rays at the X-ray energies used, the researchers could obtain images of the jets as they formed. Photonic Doppler Velocimetry probes simultaneously measured jet-growth velocities. By measuring jet heights as a function of time and velocity histories for a range of impact stresses, the scientists used several methods, including computer simulation, to estimate the yield stress for cerium following its reversion to the γ phase. Good agreement was found among the calculations and the data, illustrating the sensitivity of jet formation to yield stress values. "We are pursuing work to gain a fundamental understanding of the evolution of strength at extreme conditions," Jensen said. "That includes work using IMPULSE at the APS, experiments to be conducted at the new Dynamic Compression Sec