Oak Ridge, TN, United States

Oak Ridge National Laboratory is a multiprogram science and technology national laboratory managed for the United States Department of Energy by UT-Battelle. ORNL is the largest science and energy national laboratory in the Department of Energy system by acreage. ORNL is located in Oak Ridge, Tennessee, near Knoxville. ORNL's scientific programs focus on materials, neutron science, energy, high-performance computing, systems biology and national security. ORNL partners with the state of Tennessee, universities and industries to solve challenges in energy, advanced materials, manufacturing, security and physics. The laboratory is home to several of the world's top supercomputers, including Titan, ranked the world's second most powerful by the TOP500, and is a leading neutron science and nuclear energy research facility that includes the Spallation Neutron Source and the High Flux Isotope Reactor. ORNL hosts the Center for Nanophase Materials Sciences, the BioEnergy Science Center, and the Consortium for Advanced Simulation of Light-Water Reactors. (Source: Wikipedia)



Wilbanks T.J.,Oak Ridge National Laboratory | Kates R.W.,33 Popple Point
Annals of the Association of American Geographers | Year: 2010

Climate change impacts are already being experienced in every region of the United States and every part of the world, most severely in Arctic regions, and adaptation is needed now. Although climate change adaptation research is still in its infancy, significant adaptation planning in the United States has already begun in a number of localities. This article seeks to broaden the adaptation effort by integrating it with broader frameworks of hazards research, sustainability science, and community and regional resilience. To extend the range of experience, we draw from ongoing case studies in the Southeastern United States and the environmental history of New Orleans to consider the multiple threats and stresses that all communities and regions experience. Embedding climate adaptation in responses to multiple threats and stresses helps us to understand climate change impacts, themselves often products of multiple stresses; to achieve community acceptance of needed adaptations as cobenefits of addressing multiple threats; and to mainstream the process of climate adaptation through the larger envelope of social relationships, communication channels, and broad-based awareness of needs for risk management that accompany community resilience. © 2010 by Association of American Geographers.


Gao C.,Xi'an University of Science and Technology | Gao C.,University of California at Riverside | Hu Y.,Argonne National Laboratory | Wang M.,University of California at Riverside | And 2 more authors.
Journal of the American Chemical Society | Year: 2014

We report that fully alloyed Ag/Au nanospheres with high compositional homogeneity ensured by annealing at elevated temperatures show large extinction cross sections, extremely narrow bandwidths, and remarkable stability in harsh chemical environments. Nanostructures of Ag are known to have much stronger surface plasmon resonance than Au, but their applications in many areas have been very limited by their poor chemical stability against nonideal chemical environments. Here we address this issue by producing fully alloyed Ag/Au nanospheres through a surface-protected annealing process. A critical temperature has been found to be around 930 °C, below which the resulting alloy nanospheres, although significantly more stable than pure silver nanoparticles, can still gradually decay upon extended exposure to a harsh etchant. Nanospheres annealed above the critical temperature show a homogeneous distribution of Ag and Au, minimal crystallographic defects, and the absence of structural and compositional interfaces, which account for the extremely narrow bandwidths of the surface plasmon resonance and may enable many plasmonic applications with high performance and long lifetime, especially for those involving corrosive species. © 2014 American Chemical Society.


Yu Y.,National University of Singapore | Luo Z.,National University of Singapore | Chevrier D.M.,Dalhousie University | Leong D.T.,National University of Singapore | And 3 more authors.
Journal of the American Chemical Society | Year: 2014

The luminescence property of thiolated gold nanoclusters (Au NCs) is thought to involve the Au(I)-thiolate motifs on the NC surface; however, this hypothesis remains largely unexplored because of the lack of precise molecular composition and structural information of highly luminescent Au NCs. Here we report a new red-emitting thiolated Au NC, which has a precise molecular formula of Au22(SR)18 and exhibits intense luminescence. Interestingly, this new Au22(SR)18 species shows distinctively different absorption and emission features from the previously reported Au22(SR)16, Au22(SR)17, and Au25(SR)18. In stark contrast, Au22(SR)18 luminesces intensely at ∼665 nm with a high quantum yield of ∼8%, while the other three Au NCs show very weak luminescence. Our results indicate that the luminescence of Au22(SR)18 originates from the long Au(I)-thiolate motifs on the NC surface via the aggregation-induced emission pathway. Structure prediction by density functional theory suggests that Au22(SR)18 has two RS-[Au-SR]3 and two RS-[Au-SR]4 motifs, interlocked and capping on a prolate Au8 core. This predicted structure is further verified experimentally by Au L3-edge X-ray absorption fine structure analysis. © 2014 American Chemical Society.


Abou-Hanna J.,Bradley University | McGreevy T.E.,Oak Ridge National Laboratory
International Journal of Pressure Vessels and Piping | Year: 2011

Two effective approaches for obtaining the ratchet boundaries of a structure undergoing cyclic loads are presented. Both use limit analysis of a structure whose yield surface is modified according to the cyclic load. In the first approach, a Uniform Modified Yield (UMY) surface is used: the Mises-based cylindrical yield surface is reduced by the Mises equivalent stress of the cyclic stress amplitude. The UMY method was slightly conservative, and sometimes overly so, especially at high ratios of cyclic load to primary steady load. This conservatism, caused by the assumption that the modified yield surface remains isotropic, is eliminated by the anisotropic Load-Dependent Yield Modification (LDYM) approach, which reduces the yield strength based on the relative orientation of the steady primary and cyclic stress tensors. This work assumed elastic-perfectly-plastic material behavior, with no strain hardening for either the original or the modified yield surfaces. Ratchet boundaries of several structures published in the literature were obtained using the UMY and LDYM approaches and verified against published data and results of conventional methods. The numerical procedures for the UMY and LDYM approaches are extremely fast relative to conventional numerical schemes and are not restricted by complex geometry or loading. © 2010 Elsevier Ltd.
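The single step that defines the UMY construction, shrinking the Mises yield surface by the Mises equivalent of the cyclic stress amplitude before running a limit analysis, can be written compactly. The sketch below is only an illustration of that reduction under the abstract's elastic-perfectly-plastic assumption; the function names are invented for illustration, and the limit-analysis solve that would consume the reduced yield strength is not shown.

```python
import numpy as np

def von_mises(stress):
    """Von Mises equivalent of a 3x3 stress tensor."""
    dev = stress - np.trace(stress) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))

def umy_reduced_yield(sigma_y, cyclic_amplitude):
    """Uniform Modified Yield: isotropic reduction of the yield strength by the
    Mises equivalent of the cyclic stress amplitude (the UMY step only)."""
    return max(sigma_y - von_mises(cyclic_amplitude), 0.0)

# Example: 250 MPa yield strength, purely axial cyclic stress amplitude of 100 MPa.
cyclic = np.diag([100.0, 0.0, 0.0])                      # MPa
print(umy_reduced_yield(250.0, cyclic))                  # -> 150.0 MPa fed to the limit analysis
```

The LDYM variant described in the abstract would instead reduce the yield strength anisotropically, using the relative orientation of the steady and cyclic stress tensors rather than this single scalar subtraction.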


Berland K.,University of Oslo | Berland K.,Chalmers University of Technology | Cooper V.R.,Oak Ridge National Laboratory | Lee K.,Lawrence Berkeley National Laboratory | And 5 more authors.
Reports on Progress in Physics | Year: 2015

A density functional theory (DFT) that accounts for van der Waals (vdW) interactions in condensed matter, materials physics, chemistry, and biology is reviewed. The insights that led to the construction of the Rutgers-Chalmers van der Waals density functional (vdW-DF) are presented with the aim of giving a historical perspective, while also emphasizing more recent efforts which have sought to improve its accuracy. In addition to technical details, we discuss a range of recent applications that illustrate the necessity of including dispersion interactions in DFT. This review highlights the value of the vdW-DF method as a general-purpose method, not only for dispersion bound systems, but also in densely packed systems where these types of interactions are traditionally thought to be negligible. © 2015 IOP Publishing Ltd.


Osetsky Y.N.,Oak Ridge National Laboratory | Yip S.,Massachusetts Institute of Technology
Physical Review Letters | Year: 2012

The strain-rate response of flow stress in a plastically deforming crystal is formulated through a stress-sensitive dislocation mobility model that can be evaluated by atomistic simulation. For the flow stress of a model crystal of bcc Fe containing a ½⟨111⟩ screw dislocation, this approach describes naturally a non-Arrhenius upturn at high strain rate, an experimentally established transitional behavior for which the underlying mechanism has not been clarified. Implications of our findings regarding the previous explanations of strain-rate effects on flow stress are discussed. © 2012 American Physical Society.
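The standard bridge between single-dislocation mobility and a macroscopic strain rate in analyses of this kind is the Orowan relation; the expression below is offered only as a hedged illustration of how a stress-sensitive mobility law turns into a flow stress at a prescribed strain rate, not as a reproduction of the paper's specific model or parameters.

```latex
\dot{\varepsilon} = \rho_m \, b \, v(\tau)
\qquad\Longrightarrow\qquad
\tau_{\mathrm{flow}}(\dot{\varepsilon}) = v^{-1}\!\left(\frac{\dot{\varepsilon}}{\rho_m b}\right)
```

Here ρ_m is the mobile dislocation density, b the Burgers vector of the ½⟨111⟩ screw dislocation, and v(τ) the stress-dependent dislocation velocity evaluated from atomistic simulation; in this picture a non-Arrhenius upturn in the flow stress appears once v(τ) departs from the low-stress, thermally activated regime.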


Hagen G.,Oak Ridge National Laboratory | Papenbrock T.,Oak Ridge National Laboratory | Papenbrock T.,University of Tennessee at Knoxville | Hjorth-Jensen M.,University of Oslo
Physical Review Letters | Year: 2010

We perform coupled-cluster calculations of the energies and lifetimes of single-particle states around the doubly magic nucleus 16O based on chiral nucleon-nucleon interactions at next-to-next-to-next-to-leading order. To incorporate effects from the scattering continuum, we employ a Gamow-Hartree-Fock basis. Our calculations for the Jπ = 1/2+ proton halo state in 17F and the 1/2+ state in 17O agree well with experiment, while the calculated spin-orbit splitting between the 5/2+ and 3/2+ states is too small due to the lack of three-nucleon forces. Continuum effects yield a significant amount of additional binding energy for the 1/2+ and 3/2+ states in 17O and 17F. © 2010 The American Physical Society.


Lu Z.,University of California at Riverside | Gao C.,University of California at Riverside | Zhang Q.,University of California at Riverside | Chi M.,Oak Ridge National Laboratory | And 2 more authors.
Nano Letters | Year: 2011

We present a general process that allows convenient production of multifunctional composite particles by direct self-assembly of hydrophobic nanoparticles on host nanostructures containing high-density surface thiol groups. Hydrophobic nanoparticles of various compositions and combinations can be directly assembled onto the host surface through the strong coordination interactions between metal cations and thiol groups. The resulting structures can be further overcoated with a layer of normal silica to stabilize the assemblies and render them highly dispersible in water for biomedical applications. Because the fabrication process does not involve complicated surface modification procedures, the hydrophobic ligands on the nanoparticles are not significantly disturbed and retain their original properties, such as highly efficient luminescence. Many complex composite nanostructures with tailored functions can be efficiently produced with this versatile approach. For example, multifunctional nonspherical nanostructures can be produced by using mercapto-silica-coated nano-objects of arbitrary shapes as hosts for immobilizing functional nanoparticles, and multilayer structures can be achieved by repeating the mercapto-silica coating and nanoparticle immobilization steps. This assembly approach provides the research community with a highly versatile, configurable, scalable, and reproducible route to a variety of multifunctional structures. © 2011 American Chemical Society.


Zhang Q.,University of California at Riverside | Lima D.Q.,University of California at Riverside | Lee I.,University of California at Riverside | Zaera F.,University of California at Riverside | And 2 more authors.
Angewandte Chemie - International Edition | Year: 2011

Nicely decorated: A sandwich-structured photocatalyst shows an excellent performance in degradation reactions of a number of organic compounds under UV, visible light, and direct sunlight (see picture). The catalyst was synthesized by a combination of nonmetal doping and plasmonic metal decoration of TiO2 nanocrystals, which improves visible-light activity and enhances light harvesting and charge separation, respectively. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Hagen G.,Oak Ridge National Laboratory | Hagen G.,University of Tennessee at Knoxville | Papenbrock T.,Oak Ridge National Laboratory | Papenbrock T.,University of Tennessee at Knoxville | And 3 more authors.
Reports on Progress in Physics | Year: 2014

In the past decade, coupled-cluster theory has seen a renaissance in nuclear physics, with computations of neutron-rich and medium-mass nuclei. The method is efficient for nuclei with product-state references, and it describes many aspects of weakly bound and unbound nuclei. This report reviews the technical and conceptual developments of this method in nuclear physics, and the results of coupled-cluster calculations for nucleonic matter, and for exotic isotopes of helium, oxygen, calcium, and some of their neighbors. © 2014 IOP Publishing Ltd.


Gao C.,University of California at Riverside | Lu Z.,University of California at Riverside | Liu Y.,University of California at Riverside | Zhang Q.,University of California at Riverside | And 3 more authors.
Angewandte Chemie - International Edition | Year: 2012

An SPR biosensor was developed by employing highly stable Au-protected Ag nanoplates (NP) as enhancers (see picture). Superior performance was achieved by depositing a thin and uniform coating of Au on the Ag surface while minimizing disruptive galvanic replacement and retaining the strong surface plasmon resonance (SPR) of the silver nanoplates. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Koehler P.E.,University of Oslo | Guber K.H.,Oak Ridge National Laboratory
Physical Review C - Nuclear Physics | Year: 2013

192Pt is produced solely by the slow neutron capture (s) nucleosynthesis process and hence an accurate (n,γ) reaction rate for this nuclide would allow its use as an important calibration point near the termination of the s-process nucleosynthesis flow. For this reason, we have measured neutron capture and total cross sections for 192,194,195,196,natPt in the energy range from 10 eV to several hundred keV at the Oak Ridge Electron Linear Accelerator. Measurements on the other Pt isotopes were, in part, necessitated by the fact that only a relatively small 192Pt sample of modest enrichment was available. Astrophysical 192,194,195,196Pt(n,γ) reaction rates, accurate to approximately 3%-5%, were calculated from these data. No accurate reaction rates have been published previously for any of these isotopes. At s-process temperatures, previously recommended rates are larger (by as much as 35%) and have significantly different shapes as functions of temperature than our new rates. We used our new Pt results, together with 191,193Ir(n,γ) data, to calibrate nuclear statistical model calculations and hence obtain an improved rate for the unmeasured s-process branching-point isotope 192Ir. © 2013 American Physical Society.
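Converting measured (n,γ) cross sections into astrophysical reaction rates at s-process temperatures goes through a Maxwellian average over the stellar neutron spectrum. The snippet below is a generic sketch of that standard averaging step, using a made-up placeholder cross-section shape rather than the Pt data of the paper; the function name and the kT = 30 keV choice are illustrative assumptions.

```python
import numpy as np

def macs(E_keV, sigma_mb, kT_keV=30.0):
    """Maxwellian-averaged cross section (mb) at thermal energy kT.

    Evaluates <sigma> = (2/sqrt(pi)) * int(sigma(E) E exp(-E/kT) dE) / int(E exp(-E/kT) dE)
    by trapezoidal integration over the tabulated energy grid."""
    w = E_keV * np.exp(-E_keV / kT_keV)              # Maxwell-Boltzmann flux weight
    trapz = lambda y, x: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return (2.0 / np.sqrt(np.pi)) * trapz(sigma_mb * w, E_keV) / trapz(w, E_keV)

# Placeholder 1/v-like capture cross section, NOT the measured Pt data.
E = np.linspace(0.1, 500.0, 5000)                    # neutron energy, keV
sigma = 100.0 / np.sqrt(E)                           # mb
print(f"MACS(kT = 30 keV) ~ {macs(E, sigma):.1f} mb")
```

Multiplying such a Maxwellian-averaged cross section by the mean thermal neutron velocity then gives the reaction rate per particle pair quoted in analyses of this kind.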


Curtarolo S.,Duke University | Hart G.L.W.,Duke University | Hart G.L.W.,Brigham Young University | Nardelli M.B.,Duke University | And 8 more authors.
Nature Materials | Year: 2013

High-throughput computational materials design is an emerging area of materials science. By combining advanced thermodynamic and electronic-structure methods with intelligent data mining and database construction, and exploiting the power of current supercomputer architectures, scientists generate, manage and analyse enormous data repositories for the discovery of novel materials. In this Review we provide a current snapshot of this rapidly evolving field, and highlight the challenges and opportunities that lie ahead. © 2013 Macmillan Publishers Limited. All rights reserved.


Vautard F.,Oak Ridge National Laboratory | Ozcan S.,Oak Ridge National Laboratory | Meyer H.,Tech Lab Group
Composites Part A: Applied Science and Manufacturing | Year: 2012

Carbon fibers were surface treated by a continuous gas-phase thermo-chemical treatment. The surface and mechanical properties of the fibers were investigated before and after treatment and compared to the properties obtained with a conventional industrial electro-chemical surface treatment. With the thermo-chemical surface treatment, the increase in oxygen atomic content was much sharper, the surface chemistry was better controlled, and the tensile strength of the fibers increased slightly. The thermo-chemical surface treatment also created a topography with amplitudes below 10 nm, providing some mechanical interlocking with the matrix; the electro-chemical surface treatment did not generate such a topography. The improvement of interfacial adhesion with a vinyl ester matrix was nevertheless limited, revealing that oxidation of the carbon fiber surface alone cannot dramatically improve the mechanical properties of carbon fiber-vinyl ester composites. © 2012 Elsevier Ltd. All rights reserved.


Jiang D.-E.,Oak Ridge National Laboratory | Jin Z.,University of California at Riverside | Henderson D.,Brigham Young University | Wu J.,University of California at Riverside
Journal of Physical Chemistry Letters | Year: 2012

Organic electrolytes such as tetraethylammonium tetrafluoroborate dissolved in acetonitrile (TEA-BF4/ACN) are widely used in commercial supercapacitors and academic research, but conflicting experimental results have been reported regarding the dependence of surface-area-normalized capacitance on the pore size. Here we show from a classical density functional theory the dependence of capacitance on the pore size from 0.5 to 3.0 nm for a model TEA-BF4/ACN electrolyte. We find that the capacitance-pore size curve becomes roughly flat after the first peak around the ion diameter, and the peak capacitance is not significantly higher than the large-pore average. We attribute the invariance of capacitance with the pore size to the formation of an electric double-layer structure that consists of counterions and highly organized solvent molecules. This work highlights the role of the solvent molecules in modulating the capacitance and reconciles apparently conflicting experimental reports. © 2012 American Chemical Society.


Ullrich P.A.,University of California at Davis | Norman M.R.,Oak Ridge National Laboratory
Quarterly Journal of the Royal Meteorological Society | Year: 2014

The spectral element dynamical core has been demonstrated to be an accurate and scalable approach for solving the equations of motion in the atmosphere. However, it is also known that use of the spectral element method for tracer transport is costly and requires substantial parallel communication over a single time step. Consequently, recent efforts have turned to finding alternative transport schemes which maintain the scalability of the spectral element method without its significant cost. This article proposes a conservative semi-Lagrangian approach for tracer transport which uses upstream trajectories to solve the transport equation on the native spectral element grid. This formulation, entitled the Flux-Form Semi-Lagrangian Spectral Element (FF-SLSE) method, is highly accurate compared to many competing schemes, allows for large time steps, and requires fewer parallel communications over the same time interval than the spectral element method. In addition, the approach guarantees local conservation and is easily paired with a filter which can be used to ensure positivity. This article presents the dispersion relation for the 1D FF-SLSE approach and demonstrates stability up to a Courant number of 2.44 with a cubic basis. Several standard numerical tests are presented for the method in 2D to verify correctness, accuracy and robustness of the method, including a new test of a divergent flow in Cartesian geometry. © 2013 Royal Meteorological Society.
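The flux-form semi-Lagrangian idea, integrating the reconstructed tracer field between each cell edge and its upstream departure point so that mass is conserved locally and Courant numbers above one remain stable, can be illustrated in one dimension. The sketch below assumes a piecewise-constant (donor-cell) reconstruction on a periodic uniform grid purely for brevity; it is not the cubic spectral-element FF-SLSE discretization of the paper.

```python
import numpy as np

def ffsl_step(q, c):
    """One 1D flux-form semi-Lagrangian update on a periodic grid.

    q : cell-average tracer values
    c : Courant number u*dt/dx, assumed >= 0; values above 1 are allowed."""
    K = int(np.floor(c))
    frac = c - K
    # Mass (per dx) swept through the right edge of each cell:
    # the K whole upstream cells plus a fraction of the next one.
    F = sum(np.roll(q, k) for k in range(K)) + frac * np.roll(q, K)
    # Conservative update: difference of edge fluxes.
    return q - (F - np.roll(F, 1))

# Advect a bump with a Courant number of 2.4 (a large time step) and check conservation.
q = np.exp(-0.5 * ((np.arange(100) - 30.0) / 5.0) ** 2)
q_new = ffsl_step(q, 2.4)
print(np.isclose(q.sum(), q_new.sum()))   # True: total tracer mass is preserved
```

For an integer Courant number the update reduces to an exact shift by that many cells, the same large-time-step, locally conservative behaviour the FF-SLSE construction pursues with high-order reconstructions on the spectral element grid.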


Zhao Q.,Samuel Roberts Noble Foundation | Dixon R.A.,University of North Texas | Dixon R.A.,Oak Ridge National Laboratory
Annual Review of Phytopathology | Year: 2014

The individual sugars found within the major classes of plant cell wall polymers are dietary components of herbivores and are targeted for release in industrial processes for fermentation to liquid biofuels. With a growing understanding of the biosynthesis of the complex cell wall polymers, genetic modification strategies are being developed to target the cell wall to improve the digestibility of forage crops and to render lignocellulose less recalcitrant for bioprocessing. This raises concerns as to whether altering cell wall properties to improve biomass processing traits may inadvertently make plants more susceptible to diseases and pests. Here, we review the impacts of cell wall modification on plant defense, as assessed from studies in model plants utilizing mutants or transgenic modification and in crop plants specifically engineered for improved biomass or bioenergy traits. Such studies reveal that cell wall modifications can indeed have unintended impacts on plant defense, but these are not always negative. ©2014 by Annual Reviews. All rights reserved.


Aivazian G.,University of Washington | Gong Z.,University of Hong Kong | Jones A.M.,University of Washington | Chu R.-L.,University of Texas at Dallas | And 8 more authors.
Nature Physics | Year: 2015

Local energy extrema of the bands in momentum space, or valleys, can endow electrons in solids with pseudospin in addition to real spin. In transition metal dichalcogenides this valley pseudospin, like real spin, is associated with a magnetic moment that underlies the valley-dependent circular dichroism that allows optical generation of valley polarization, intervalley quantum coherence and the valley Hall effect. However, magnetic manipulation of valley pseudospin via this magnetic moment, analogous to what is possible with real spin, has not been shown before. Here we report observation of the valley Zeeman splitting and magnetic tuning of polarization and coherence of the excitonic valley pseudospin, by performing polarization-resolved magneto-photoluminescence on monolayer WSe2. Our measurements reveal both the atomic orbital and lattice contributions to the valley orbital magnetic moment; demonstrate the deviation of the band edges in the valleys from an exact massive Dirac fermion model; and reveal a striking difference between the magnetic responses of neutral and charged valley excitons that is explained by renormalization of the excitonic spectrum due to strong exchange interactions. © 2015 Macmillan Publishers Limited.


Hacquard S.,Max Planck Institute for Plant Breeding Research | Schadt C.W.,Oak Ridge National Laboratory | Schadt C.W.,University of Tennessee at Knoxville
New Phytologist | Year: 2015

Interactions between trees and microorganisms are tremendously complex and the multispecies networks resulting from these associations have consequences for plant growth and productivity. However, a more holistic view is needed to better understand trees as ecosystems and superorganisms, where many interacting species contribute to the overall stability of the system. While much progress has been made on microbial communities associated with individual tree niches and the molecular interactions between model symbiotic partners, there is still a lack of knowledge of the multi-component interactions necessary for holistic ecosystem-level understanding. We review recent studies in Populus to emphasize the importance of such holistic efforts across the leaf, stem and rooting zones, and discuss prospects for future research in these important ecosystems. © 2014 New Phytologist Trust.


Yang J.,Samuel Roberts Noble Foundation | Yang J.,Oak Ridge National Laboratory | Worley E.,Samuel Roberts Noble Foundation | Worley E.,Oak Ridge National Laboratory | And 2 more authors.
Plant Cell | Year: 2014

Chlorophyll degradation is an important part of leaf senescence, but the underlying regulatory mechanisms are largely unknown. Excised leaves of an Arabidopsis thaliana NAC-LIKE, ACTIVATED BY AP3/PI (NAP) transcription factor mutant (nap) exhibited lower transcript levels of known chlorophyll degradation genes, STAY-GREEN1 (SGR1), NON-YELLOW COLORING1 (NYC1), PHEOPHYTINASE (PPH), and PHEIDE a OXYGENASE (PaO), and higher chlorophyll retention than the wild type during dark-induced senescence. Transcriptome coexpression analysis revealed that abscisic acid (ABA) metabolism/signaling genes were disproportionately represented among those positively correlated with NAP expression. ABA levels were abnormally low in nap leaves during extended darkness. The ABA biosynthetic genes 9-CIS-EPOXYCAROTENOID DIOXYGENASE2, ABA DEFICIENT3, and ABSCISIC ALDEHYDE OXIDASE3 (AAO3) exhibited abnormally low transcript levels in dark-treated nap leaves. NAP transactivated the promoter of AAO3 in mesophyll cell protoplasts, and electrophoretic mobility shift assays showed that NAP can bind directly to a segment (−196 to −162 relative to the ATG start codon) of the AAO3 promoter. Exogenous application of ABA increased the transcript levels of SGR1, NYC1, PPH, and PaO and suppressed the stay-green phenotype of nap leaves during extended darkness. Overexpression of AAO3 in nap leaves also suppressed the stay-green phenotype under extended darkness. Collectively, the results show that NAP promotes chlorophyll degradation by enhancing transcription of AAO3, which leads to increased levels of the senescence-inducing hormone ABA. © 2014 American Society of Plant Biologists. All rights reserved.


Moitra P.,Vanderbilt University | Yang Y.,Vanderbilt University | Anderson Z.,School for Science and Mathematics at Vanderbilt | Kravchenko I.I.,Oak Ridge National Laboratory | And 2 more authors.
Nature Photonics | Year: 2013

Metamaterials offer unprecedented flexibility for manipulating the optical properties of matter, including the ability to access negative index, ultrahigh index and chiral optical properties. Recently, metamaterials with near-zero refractive index have attracted much attention. Light inside such materials experiences no spatial phase change and extremely large phase velocity, properties that can be applied for realizing directional emission, tunnelling waveguides, large-area single-mode devices and electromagnetic cloaks. However, at optical frequencies, the previously demonstrated zero- or negative-refractive-index metamaterials have required the use of metallic inclusions, leading to large ohmic loss, a serious impediment to device applications. Here, we experimentally demonstrate an impedance-matched zero-index metamaterial at optical frequencies based on purely dielectric constituents. Formed from stacked silicon-rod unit cells, the metamaterial has a nearly isotropic low-index response for transverse-magnetic polarized light, leading to angular selectivity of transmission and directive emission from quantum dots placed within the material. © 2013 Macmillan Publishers Limited.


Guy R.D.,University of California at Davis | Philip B.,Oak Ridge National Laboratory
Communications in Computational Physics | Year: 2012

Explicit time stepping schemes for the immersed boundary method require very small time steps in order to maintain stability. Solving the equations that arise from an implicit discretization is difficult. Recently several different approaches have been proposed, but a complete understanding of this problem is still emerging. A multigrid method is developed and explored for solving the equations in an implicit-time discretization of a model of the immersed boundary equations. The model problem consists of a scalar Poisson equation with conformation-dependent singular forces on an immersed boundary. This model does not include the inertial terms or the incompressibility constraint. The method is more efficient than an explicit method, but the efficiency gain is limited. The multigrid method alone may not be an effective solver, but when used as a preconditioner for Krylov methods, the speed-up over the explicit-time method is substantial. For example, depending on the constitutive law for the boundary force, with a time step 100 times larger than the explicit method allows, the implicit method is about 15-100 times more efficient than the explicit method. A very attractive feature of this method is that the efficiency of the multigrid preconditioned Krylov solver is shown to be independent of the number of immersed boundary points. © 2012 Global-Science Press.
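The multigrid-preconditioned Krylov pattern highlighted in the abstract can be reproduced with off-the-shelf tools on a generic elliptic model problem. The sketch below (assuming the pyamg and SciPy packages are available) applies one algebraic multigrid V-cycle as the preconditioner for conjugate gradients on a plain 2-D Poisson operator; that operator is only a stand-in assumption for the paper's model problem with singular immersed-boundary forces.

```python
import numpy as np
import pyamg                               # algebraic multigrid toolkit (assumed available)
from scipy.sparse.linalg import cg

# Stand-in elliptic operator: 2-D Poisson on a 256 x 256 grid.
A = pyamg.gallery.poisson((256, 256), format='csr')
b = np.random.rand(A.shape[0])

# Multigrid used as a preconditioner rather than as a stand-alone solver,
# mirroring the combination the paper reports as most effective.
ml = pyamg.ruge_stuben_solver(A)
M = ml.aspreconditioner(cycle='V')         # one V-cycle per Krylov iteration

x, info = cg(A, b, M=M)
print("converged" if info == 0 else f"cg stopped with info={info}")
```

In the same spirit as the paper's observation about immersed boundary points, the iteration count of a multigrid-preconditioned Krylov solver typically stays nearly flat as the problem is refined, which is what lets the implicit step pay off despite its larger per-step cost.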


Jiang D.-E.,Oak Ridge National Laboratory | Jin Z.,University of California at Riverside | Wu J.,University of California at Riverside
Nano Letters | Year: 2011

Porous carbons of high surface area are promising as cost-effective electrode materials for supercapacitors. Although great attention has been given to the anomalous increase of the capacitance as the pore size approaches the ionic dimensions, there remains a lack of full comprehension of the size dependence of the capacitance in nanopores. Here we predict from a classical density functional theory that the capacitance of an ionic-liquid electrolyte inside a nanopore oscillates with a decaying envelope as the pore size increases. The oscillatory behavior can be attributed to the interference of the overlapping electric double layers (EDLs); namely, the maxima in capacitance appear when superposition of the two EDLs is most constructive. The theoretical prediction agrees well with the experiment when the pore size is less than twice the ionic diameter. Confirmation of the entire oscillatory spectrum invites future experiments with a precise control of the pore size from micro- to mesoscales. © 2011 American Chemical Society.


Grant
Agency: Department of Defense | Branch: Navy | Program: STTR | Phase: Phase I | Award Amount: 80.00K | Year: 2011

The proliferation of complex electronic devices for use in warfare, ranging across handheld electronics, sensors, hybrid vehicles, unmanned aircraft systems, and directed energy systems, is increasing the demand for battery power needed for mission operations. The dramatic evolution in the inventory and use of these electrically-powered systems has stimulated new thrusts in energy harvesting and storage to enable continuous and autonomous battlefield operations of these systems. To meet the demand for portable electric power, effort is being directed at novel methods for coupling energy harvesting, such as flexible solar cells, with flexible batteries which can be integrated into UAS structures and other portable systems. As identified by the U.S. Navy, requirements for such systems include improvements in charge capacity, internal leakage on charging, shelf life, and cycle life. Applied Sciences, Inc. (ASI) and Oak Ridge National Laboratory (ORNL) propose to jointly develop low-cost, high-capacity, thin, flexible, conformable, lightweight batteries that are compatible with energy harvesting devices. The proposed battery will use flexible cathodes and anodes, developed at ORNL and ASI respectively, that can ultimately meet the goals of a conformal battery with >10 mAh/cm2, <100 A/cm2, 6-month shelf life, and cycle life requirements.


News Article | August 31, 2016
Site: www.greencarcongress.com

Among the 43 small businesses participating in the newly announced second round of the US Department of Energy (DOE) Small Business Vouchers (SBV) pilot are 10 projects in the area of hydrogen and fuel cells. These selected projects are aimed at reducing the costs and improving the performance of hydrogen and fuel cell technologies, such as finding ways to lower costs of membrane electrode assemblies (MEAs), a critical component of fuel cell stacks and electrolyzers. Some of these collaborations will explore using lower-cost materials, reducing the quantity of pricey raw material required, and developing better and more precise methods of manufacturing MEAs. The ten companies receiving Round 2 SBVs for hydrogen and fuel cells projects include:

Altergy. Altergy will work with the National Renewable Energy Laboratory (NREL) to leverage lab expertise to reduce cost and improve performance in its proprietary fuel cell products. The objective of their work with NREL is to observe the fabrication of MEAs at NREL using non-proprietary methods, and evaluate MEAs made at NREL with ETFECS-based catalyst to reduce costs and/or enhance durability.

American Fuel Cell. AFC will work with Oak Ridge National Laboratory (ORNL) to optimize deposition techniques for roll-to-roll direct coating of electrodes on anode and cathode gas diffusion media, leading to improved quality and lower-cost manufacturing for various fuel cell applications.

Garmor Inc. Garmor, working with NREL, will utilize existing bipolar plate (BPP) technology to develop a BPP incorporating composites that will be used to form microstructured surface features to improve BPP surface tension for water droplet control.

Ion Power. Ion Power will work with Los Alamos National Laboratory (LANL) to analyze various performance improvement mechanisms to improve the durability of its fuel cell catalyst layer.

NanoSonic. NanoSonic, working with LANL, will receive assistance in the preparation of membrane electrode assemblies and in fuel cell testing and validation of new anion exchange fuel cell membranes.

Nzyme2HC LLC. Nzyme2HC has a novel acellular biological approach to scalable, clean, potentially low-cost hydrogen generation, combining minimal electric current, bacterially-extracted hydrogenase, and industrial waste as the feedstocks/production materials. RFA award monies would go toward building an end-to-end bench prototype at NREL for testing and metrics.

Oorja Fuel Cells. Oorja Fuel Cells will work with NREL to determine the efficacy and performance of a membrane electrode assembly (MEA) under the operating conditions of Oorja's direct methanol fuel cells (DMFC) and compare the results with the commercially available MEA used in the Oorja DMFC power system.

Opus 12. Opus 12 will receive assistance to develop a computational model of ion, product, reactant, and water transport within its PEM electrolyzer for the co-electrolysis of water and carbon dioxide to make syngas. The model will aid in increasing system performance to make Opus 12's syngas cost-competitive with traditional syngas sources. Working with Lawrence Berkeley National Laboratory.

Pajarito Powder. Pajarito will work with LANL to improve electrode structures and to implement electrode manufacturing techniques that allow full catalyst utilization. Ultimately, this work will increase customer confidence in the technology and accelerate commercial adoption.

Proton Energy Systems. Proton Energy Systems will work with NREL to develop, design and validate a small-scale advanced power converter to provide a pathway for reduced capital cost and improved efficiency of Proton's M-series electrolyzer, which offers a carbon-free source of hydrogen fuel or process gas.


News Article | December 13, 2016
Site: www.eurekalert.org

TALLAHASSEE, Fla. -- Deep stores of carbon in northern peatlands may be safe from rising temperatures, according to a team of researchers from several U.S.-based institutions. And that is good news for now, the researchers said. Florida State University research scientist Rachel Wilson and University of Oregon graduate student Anya Hopple are the first authors on a new study published today in Nature Communications. The study details experiments suggesting that carbon stored in peat -- a highly organic material found in marsh or damp regions -- may not succumb to the Earth's warming as easily as scientists thought. That means if these northern peatlands -- found in the upper half of the northern hemisphere -- remain flooded, a substantial amount of carbon will not be released into the atmosphere. "We do see some breakdown of peat on the surface, but not below 2 feet deep, where the bulk of the carbon is stored," Wilson said. The study is part of a long-term look at how carbon stored in peat will respond to climate and environmental change. The team of researchers, led by Paul Hanson of the Oak Ridge National Laboratory, includes scientists from FSU, University of Oregon, Georgia Institute of Technology, the U.S. Department of Agriculture-Forest Service, Chapman University, Lawrence Livermore National Laboratory, Pacific Northwest National Laboratory and Oak Ridge National Laboratory. Researchers ran four different temperature simulations -- increasing the temperature of the peat by 2.25 degrees Celsius, 4.5 degrees Celsius, 6.25 degrees Celsius and 9 degrees Celsius -- to see how it would respond to increased heat. They found that the surface peat did emit more methane gas when warmed, but the deep peat did not break down and did not start emitting additional methane or carbon dioxide. "If the release of greenhouse gases is not enhanced by temperature of the deep peat, that's great news because that means that if all other things remain as they are, the deep peat carbon remains in the soil," said Joel Kostka, professor of microbiology at Georgia Institute of Technology. The Earth's soils contain 1,550 billion tons of organic carbon, and 500 billion tons of this carbon is stored in northern peatlands around the world. This quantity is roughly the same amount as carbon in the atmosphere. Scientists have been anxious to learn how these northern peatlands will respond to warming because a tremendous amount of carbon could be released into the atmosphere. Researchers worked at the Oak Ridge National Laboratory's experimental site known as SPRUCE in northern Minnesota to examine both surface peat and peat up to 6 feet deep. The majority of the carbon is stored deeper in the ground. Large environmental chambers were constructed by the Oak Ridge team to enclose portions of the peatlands. Within these chambers, scientists simulated climate change effects such as higher temperatures and elevated carbon dioxide levels. They also took some of the deep peat back to their labs to heat in additional studies. While scientists said they were surprised by the results, they also cautioned that this came only after one year of warming. "There are the necessary caveats that this was only for one year, and the experiment is planned to run for a decade, and other ecosystem feedbacks may become important in greenhouse gas emissions," said Scott Bridgham, director of the Institute of Ecology and Evolution at University of Oregon and Hopple's adviser. 
In the future, scientists also plan to look at how these peatlands respond to heightened carbon dioxide levels combined with the temperature increases. "In the future, we'll be warmer, but we'll also have more carbon dioxide in the atmosphere, so we need to understand how these deep stores of peat, which have all this carbon, respond to these conditions," said Jeff Chanton, professor of oceanography at Florida State University. This work was funded by the U.S. Department of Energy.


News Article | November 14, 2016
Site: www.theenergycollective.com

The development of new nuclear fuels is crucial to the success of new fast reactor designs. Examples include TRISO fuel for HTGRs and molten salt fuel for 21st-century iterations of the work done at Oak Ridge in the 1960s. (WNN) Russia has started testing its new type of nuclear fuel, REMIX, at the MIR research reactor at the Research Institute of Atomic Reactors in Dimitrovgrad, which is in the Ulyanovsk region. Rosatom said on 11/3 that REMIX fuel rods manufactured in July had been "immersed in the active zone" of MIR. Development of REMIX (from Regenerated Mixture) fuel is part of state nuclear corporation Rosatom's strategy to enable better use of recycled uranium and plutonium on an industrial scale in pressurized water reactors. Some of the plutonium may come from nuclear weapons decommissioned as part of international treaties. Russia is also using this inventory of surplus plutonium to make MOX fuel for its BN-800 fast reactor, which was recently connected to the grid to generate electricity. A loop-type research reactor, MIR is designed mainly for testing fuel elements, fuel assemblies and other core components of different types of operating and promising nuclear power reactors. The first data from testing the fuel in MIR will include the "swelling, gassing and distribution of fission products and, of course, the isotopic composition of the used fuel rods," the head of innovation at the Khlopin Radium Institute, Andrey Belozub, said in the Rosatom statement. Use of the MIR research reactor is an "extremely important step," Rosatom said, towards full implementation of the project to introduce REMIX into the Russian fuel cycle. According to World Nuclear News, REMIX fuel is produced directly from a non-separated mix of recycled uranium and plutonium from reprocessing used fuel, with a low-enriched uranium (LEU, up to 17% U-235) make-up comprising about 20% of the mix. This gives fuel initially with about 1% Pu-239 and 4% U-235, which can sustain burn-up of 50 GWd/t over four years. REMIX fuel can be repeatedly recycled with a 100% core load in current VVER-1000 reactors, and correspondingly reprocessed many times (up to five times, according to Russian nuclear fuel manufacturer Tenex), so that with less than three fuel loads in circulation a reactor could run for 60 years using the same fuel, with LEU recharge and waste removal on each cycle.

(WNN) Canadian reactor designer StarCore Nuclear has applied to the Canadian Nuclear Safety Commission (CNSC) to begin the vendor design review process for its Generation IV high temperature gas reactor (HTGR). The CNSC's pre-licensing vendor review process is an optional service to provide an assessment of a nuclear power plant design based on a vendor's reactor technology. The three-phase review is not a required part of the licensing process for a new nuclear power plant, but aims to verify the acceptability of a nuclear power plant design with respect to Canadian nuclear regulatory requirements and expectations. Earlier this year the CNSC agreed to conduct a phase 1 vendor design review for Terrestrial Energy's integral molten salt reactor design concept. StarCore CEO David Dabney said the company's application to the CNSC, lodged on 24 October, marked the culmination of eight years' work. "We are confident that our plant size and technology will enable us to bring safe, clean energy to the many remote sites in Northern Canada that currently have no choice other than to use costly, unreliable and polluting carbon-based fuels," he said.
Montréal-based StarCore, founded in 2008, is focused on developing small modular reactors (SMRs) to provide power and potable water to remote communities in Canada. Its standard HTGR unit would produce 20 MWe (36 MWth), expandable to 100 MWe, from a unit small enough to be delivered by truck. The helium-cooled reactor uses TRISO fuel, spherical particles of uranium fuel coated with carbon, which effectively gives each tiny particle its own primary containment system, manufactured by BWXT Technologies. Each reactor would require refueling at five-year intervals. StarCore describes its reactor as "inherently safe." The use of helium, which does not become radioactive, as a coolant means that any loss of coolant would be "inconsequential," the company says. The reactors would be embedded 50 metres underground in concrete silos sealed with ten-tonne caps.

DOE Inks Deal with GE-Hitachi for Laser Enrichment Plant at Paducah

The Department of Energy (DOE) has agreed to sell depleted uranium to GE-Hitachi Global Laser Enrichment, LLC (GLE) over a 40-year period, to be enriched at a proposed GLE state-of-the-art facility. DOE has agreed to sell 300,000 tonnes of depleted uranium hexafluoride (UF6) to GE Hitachi Global Laser Enrichment (GLE) for re-enrichment at a proposed plant to be built near DOE's Paducah site in Kentucky. The agreement paves the way for commercialization of Silex laser enrichment technology. The proposed new facility would use depleted uranium to produce natural uranium, which is used for production of fuel for civil nuclear reactors. The facility would be built near DOE's Paducah Gaseous Diffusion Plant in western Kentucky. The construction and operation of the billion-dollar facility at Paducah could bring approximately 800 to 1,200 jobs to the local community. "This agreement furthers the Energy Department's environmental cleanup mission while reducing cleanup costs, creating good local jobs, and supporting an economical enrichment enterprise for our energy needs," said Energy Secretary Ernest Moniz. GLE will finance, construct, own and operate the Paducah Laser Enrichment Facility (PLEF) adjacent to the Energy Department site. The facility will be a commercial uranium enrichment production facility under a Nuclear Regulatory Commission (NRC) license. DOE's inventory of depleted uranium is safely stored in approximately 65,000 specialized storage cylinders at the Department's Paducah and Portsmouth (Ohio) sites. The Paducah plant was constructed in the 1950s to enrich uranium for national security applications, and later enriched uranium for commercial nuclear power generation. The Energy Department resumed control of the plant enrichment facilities in 2014 after the operator ceased gaseous-diffusion enrichment operations in 2013. GLE is a joint business venture of GE (51%), Hitachi (25%) and Cameco (24%). Earlier this year GE Hitachi announced its desire to reduce its equity interest in GLE and in April signed a term sheet with Silex giving the Australian company an exclusive option to acquire GE Hitachi's entire 76% interest in GLE. In 2012, the US NRC granted GLE a combined construction and operating licence for a laser enrichment plant of up to 6 million separative work units at Wilmington, North Carolina. GLE has successfully demonstrated the concept in a test loop at Global Nuclear Fuel's Wilmington fuel fabrication facility but has not yet decided whether to proceed with a full-scale commercial plant there.
(NucNet): Russia is considering asking foreign partners to join its development of the Generation IV SVBR 100 reactor design, but has denied reports that the cost of the project has more than doubled. The original cost of the project was put at 15bn rubles (€209m, $226m) and this has not changed, Rosatom said. The SVBR 100 is one of six designs chosen by the Generation IV International Forum (GIF) for its program of research and development into the next generation of nuclear energy systems. GIF said the SVBR 100 is a lead-cooled fast reactor which features a fast neutron spectrum, high temperature operation, and cooling by molten lead or lead-bismuth. It would have multiple applications including production of electricity, hydrogen and process heat.

Molten Salt Reactors: IAEA to Establish New Platform for Collaboration

Experts from 17 countries laid the foundations last week for enhanced international cooperation on a technology that promises to deliver nuclear power with a lower risk of severe accidents, helping to decrease the world's dependence on fossil fuels and mitigate climate change. "It is the first time a comprehensive IAEA international meeting on molten salt reactors has ever taken place," said Stefano Monti, Head of the Nuclear Power Development Section at the IAEA. "Given the interest of Member States, the IAEA could provide a platform for international cooperation and information exchange on the development of these advanced nuclear systems." Molten salt reactor technology has attracted private funding over the last few years, and several reactor concepts are under development. One area under research is the compatibility between the salt coolant and the structural materials and, for some designs, the chemical processes related to the associated fuel cycle, Monti said. The challenges are not only technical. Nuclear regulators will need to review existing safety regulations to see how these can be modified, if necessary, to fit molten salt reactors, since they differ significantly from reactors in use today, said Stewart Magruder, senior nuclear safety officer at the IAEA. Participants, including researchers, designers and industry representatives, emphasized the need for an international platform for information exchange. "While the United States is actively developing both technology and safety regulations for molten salt reactors, the meeting is an important platform to exchange knowledge and information with Member States not engaged in the existing forums," said David Holcomb from the Oak Ridge National Laboratory. Molten salt reactors, nuclear power reactors that use liquid salt as primary coolant or a molten salt mixture as fuel, have many favorable characteristics for nuclear safety and sustainability. The concept was developed in the 1960s, but put aside in favor of what has become mainstream nuclear technology since. In recent years, however, technological advances have led to growing interest in molten salt technology and to the launch of new initiatives. The technology needs at least a decade of further intensive research, validation and qualification before commercialization. Molten salt reactors operate at higher temperatures, making them more efficient in generating electricity. In addition, their low operating pressure can reduce the risk of coolant loss, which could otherwise result in an accident. Molten salt reactors can run on various types of nuclear fuel and use different fuel cycles.
This conserves fuel resources and reduces the volume, radiotoxicity and lifetime of high-level radioactive waste. To help speed up research, it is essential to move from bilateral to multilateral cooperation, said Chen Kun from the Shanghai Institute of Applied Physics of the Chinese Academy of Sciences. "It is the first time China has the opportunity to share knowledge with India, Indonesia and Turkey on this technology." Indonesia is considering building its first nuclear power plant with molten salt reactor design, said Bob Soelaiman Effendi from Indonesia Thorium Energy Community.

(WNN) China and the UK have signed a joint R&D agreement which created their Joint Research and Innovation Centre (JRIC), to be opened soon in Manchester, England. Initial work is expected to include developing advanced manufacturing methods. JRIC will support innovation in nuclear research and development through UK-China collaboration. This will develop, it said, "leading-edge research and innovative technologies which will support safe and reliable nuclear energy around the globe." With NNL and CNNC each owning a 50% share, they will jointly pay for the centre's research and development expenses and plan to invest 422 million yuan ($65.1 million) over a five-year period, CNNC said.

(WNN) The UK's Nuclear Advanced Manufacturing Research Centre (AMRC) said it has signed a new agreement with the US Nuclear Infrastructure Council (USNIC) to work together on research and development to support the UK's civil nuclear program. The memorandum of understanding was signed by Jay Shaw, senior business development manager for the Nuclear AMRC, and David Blee, executive director of USNIC, during a visit to the Nuclear AMRC on 10/26.


News Article | November 14, 2016
Site: www.eurekalert.org

Scientists at Rice University have discovered that an atom-thick material being eyed for flexible electronics and next-generation optical devices is more brittle than they expected. The Rice team led by materials scientist Jun Lou tested the tensile strength of two-dimensional, semiconducting molybdenum diselenide and discovered that flaws as small as one missing atom can initiate catastrophic cracking under strain. The team's report appears this month in Advanced Materials. The finding may cause industry to look more carefully at the properties of 2-D materials before incorporating them in new technologies, he said. "It turns out not all 2-D crystals are equal," said Lou, a Rice professor of materials science and nanoengineering. "Graphene is a lot more robust compared with some of the others we're dealing with right now, like this molybdenum diselenide. We think it has something to do with defects inherent to these materials." The defects could be as small as a single atom that leaves a vacancy in the crystalline structure, he said. "It's very hard to detect them," he said. "Even if a cluster of vacancies makes a bigger hole, it's difficult to find using any technique. It might be possible to see them with a transmission electron microscope, but that would be so labor-intensive that it wouldn't be useful." Molybdenum diselenide is a dichalcogenide, a two-dimensional semiconducting material that appears as a graphene-like hexagonal array from above but is actually a sandwich of metallic atoms between two layers of chalcogen atoms, in this case, selenium. Molybdenum diselenide is being considered for use as transistors and in next-generation solar cells, photodetectors and catalysts as well as electronic and optical devices. Lou and colleagues measured the material's elastic modulus, the amount of stretching a material can handle and still return to its initial state, at 177.2 (plus or minus 9.3) gigapascals. Graphene is more than five times as elastic. They attributed the large variation to pre-existing flaws of between 3.6 and 77.5 nanometers. Its fracture strength, the amount of stretching a material can handle before breaking, was measured at 4.8 (plus or minus 2.9) gigapascals. Graphene is nearly 25 times stronger. Part of the project led by Rice postdoctoral researcher Yingchao Yang required moving molybdenum diselenide from a growth chamber in a chemical vapor deposition furnace to a microscope without introducing more defects. Yang solved the problem using a dry transfer process in place of a standard acid washing that would have ruined the samples. To test samples, Yang placed rectangles of molybdenum diselenide onto a sensitive electron microscope platform invented by the Lou group. Natural van der Waals forces held the samples in place on springy cantilever arms that measured the applied stress. Lou said the group attempted to measure the material's fracture toughness, an indicator of how likely cracks are to propagate, as they had in an earlier study on graphene. But they found that pre-cutting cracks into molybdenum diselenide resulted in it shattering before stress could be applied, he said. "The important message of this work is the brittle nature of these materials," Lou said. "A lot of people are thinking about using 2-D crystals because they're inherently thin. They're thinking about flexible electronics because they are semiconductors and their theoretical elastic strength should be very high. According to our calculations, they can be stretched up to 10 percent. 
"But in reality, because of the inherent defects, you rarely can achieve that much strength. The samples we have tested so far broke at 2 to 3 percent (of the theoretical maximum) at most," Lou said. "That should still be fine for most flexible applications, but unless they find a way to quench the defects, it will be very hard to achieve the theoretical limits." Co-authors of the paper are Rice graduate students Emily Hacopian, Weibing Chen, Jing Zhang and Bo Li, alumna Yongji Gong and Pulickel Ajayan, chair of Rice's Department of Materials Science and NanoEngineering, the Benjamin M. and Mary Greenwood Anderson Professor in Engineering and a professor of chemistry; Xing Li of Rice and Peking University, China; Minru Wen of Tsinghua University, China, and the Georgia Institute of Technology; Wu Zhou of Oak Ridge National Laboratory, Oak Ridge, Tenn.; Qing Chen of Peking University; and Ting Zhu of the Georgia Institute of Technology. The research was supported by the Air Force Office of Scientific Research, the Welch Foundation, the Department of Energy Office of Basic Energy Sciences, the National Science Foundation and the National Science Foundation of China. This news release can be found online at http://news. Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation's top 20 universities by U.S. News & World Report. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy. With 3,910 undergraduates and 2,809 graduate students, Rice's undergraduate student-to-faculty ratio is 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for happiest students and for lots of race/class interaction by the Princeton Review. Rice is also rated as a best value among private universities by Kiplinger's Personal Finance. To read "What they're saying about Rice," go to http://tinyurl. .


News Article | October 28, 2016
Site: www.eurekalert.org

OAK RIDGE, Tenn., Oct. 28, 2016--The gap between the computational science and open source software communities just got smaller - thanks to an international collaboration among national laboratories, universities and industry. The Eclipse Science Working Group (SWG), a global community for individuals and organizations who collaborate on commercially-friendly open source software, recently released five projects aimed at expediting scientific breakthroughs by simplifying and streamlining computational science workflows. While computational science has made a great impact across the scientific spectrum, from climate change to astrophysics to genomics, today's data-centric simulation environments require enhanced analysis, visualization, and workflow management to further reduce time to solution and increase the frequency of research breakthroughs. Hence Eclipse's software release. "Oak Ridge National Laboratory is renowned for its hardware--like Titan, the world's third fastest supercomputer--but we are also leaders in scientific software development," said Jay Billings, the SWG Chair and a research scientist at Oak Ridge National Laboratory. "The Science Working Group has not only helped us create software to simplify complex computational science but it has also allowed us to become a part of a thriving community that focuses on collaboration, interoperability, and cutting edge research." That community extends far beyond Oak Ridge into the wider national laboratory complex. "The Eclipse Science Working Group provides an open, international forum for design and development of critical components of scientific software, bringing together a diverse set of application requirements drivers and software architects to address the challenges of next-generation scientific computing," said Robert Clay of Sandia National Laboratory. Likewise, small businesses are also engaged as either members of the group or consumers of its software. Dr. Gerald Sabin from RNET Technologies, Inc., a Dayton Ohio-area firm focused on high-performance computing research noted: "The ICE workflow environment is an ideal platform for us to develop a cutting-edge, cloud-based collaboration environment to support scientific computing. The licensing model and openness of the Science Working Group is to our advantage." The open source projects, which represent years of development and thousands of users, are the product of intense collaboration among SWG members including ORNL, Diamond Light Source, Itema AS, iSencia, Kichwa Coders, Lablicate GmbH and others. The five projects are: "Open source is having tremendous impact on both productivity and innovation in industry, government, and academia," explained Eclipse Foundation Executive Director Mike Milinkovich. "The Eclipse Science Working Group is a great example of how world-leading scientific research organizations like Oak Ridge National Laboratory, Diamond Light Source, and others can collaborate on delivering free software that will enable the next major scientific breakthrough. This is an important advancement for the scientific research community, and we are proud to be hosting the collaboration at the Eclipse Foundation." More information on the Science Working Group is available on their website. ORNL's Eclipse projects are currently supported by DOE's Advanced Manufacturing Office and have previously received support from DOE's Office of Nuclear Energy. 
DOE's Advanced Manufacturing Office supports applied research, development and demonstration of new materials and processes for energy efficiency in manufacturing, as well as platform technologies for the manufacturing of clean energy products. UT-Battelle manages ORNL for DOE's Office of Science. The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.

Eclipse is a community for individuals and organizations who wish to collaborate on open source software. There are over 300 open source projects in the Eclipse community, spanning tools for software developers, geo-spatial technology, system engineering and embedded development tools, frameworks for IoT solutions, tools for scientific research, and much more. The Eclipse Foundation is a not-for-profit foundation that is the steward of the Eclipse community. More information is available at eclipse.org.

Caption 1: The Eclipse Integrated Computational Environment (ICE) is a scientific workbench and workflow environment developed to improve the user experience for computational scientists.

Caption 2: Additive manufacturing researchers at DOE's Manufacturing Demonstration Facility at ORNL use the ICE workbench and EAVP 3D visualizations to analyze pre- and post-processing in thermomechanical simulations of binder jet-printed cantilevers.


News Article | October 23, 2015
Site: news.mit.edu

Medical devices designed to reside in the stomach have a variety of applications, including prolonged drug delivery, electronic monitoring, and weight-loss intervention. However, these devices, often created with nondegradable elastic polymers, bear an inherent risk of intestinal obstruction as a result of accidental fracture or migration. As such, they are usually designed to remain in the stomach for a limited time. Now, researchers at MIT’s Koch Institute for Integrative Cancer Research and Massachusetts General Hospital (MGH) have created a polymer gel that overcomes this safety concern and could allow for the development of long-acting devices that reside in the stomach, including orally delivered capsules that can release drugs over a number of days, weeks, or potentially months following a single administration. This polymer is pH-responsive: It is stable in the acidic stomach environment but dissolves in the small intestine’s near-neutral pH, allowing for safe passage through the remainder of the gastrointestinal (GI) tract. The material is also elastic, allowing for the compression and folding of devices into easily ingestible capsules — meaning this polymer can be used to create safe devices designed for extremely prolonged residence in the stomach. “One of the issues with any device in the GI tract is that there’s the potential for an obstruction, which is a medical emergency potentially requiring surgical intervention,” says Koch Institute research affiliate Giovanni Traverso, also a gastroenterologist at MGH and an instructor at Harvard Medical School. “A material like this represents a real advance because it is both safe and stable in the stomach environment.” Traverso and Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute, are the senior authors of a paper in the July 27 issue of Nature Materials that describes the application of this new polymer gel for creating gastric devices. Shiyi Zhang, a postdoc at the Koch Institute, is the paper’s lead author. Designing devices for the stomach is a complicated matter of sizes and shapes. The stomach naturally empties its contents in a matter of hours, so for devices to be retained, they must be wider than the pylorus — the valve at the end of the stomach, about 1.5 to 2 centimeters in diameter, that allows gastric contents to pass into the small intestine. However, because the most convenient path to deliver these devices is through the esophagus, which is only marginally wider than the pylorus, the researchers were interested in developing a polymer with elastic properties. “An elastic device can be folded into something small, such as a capsule, and be easily ingested through the esophagus before expanding in the stomach, restoring its original shape,” Zhang says. “When you have a device that’s larger than the pylorus, there’s a much higher probability that it will be retained longer in the stomach.” But the size and shape of existing devices with elastic polymers have been limited by safety concerns, as there is a greater risk for fracture if a device is too large or too complex. Because of this, the researchers wanted their polymer to also be enteric — or have a mechanism that would enable it to pass through the stomach unaltered before disintegrating in the intestines. “To lower any possible risk of obstruction, we wanted a material that could dissolve in the intestines, thereby dissociating the device, and safely pass out of the body,” Zhang says. 
To create this new material, the researchers synthesized an elastic polymer and combined it in solution with a clinically utilized enteric polymer. Adding hydrochloric acid and centrifuging the solution resulted in a flexible, yet resilient, polymer gel that exhibits both elastic and enteric properties.

The researchers used the gel together with polycaprolactone (PCL), a nontoxic, degradable polyester, to construct several device prototypes. They first created ring-shaped devices by using the gel to link arcs of PCL in a circular mold. These elastic devices had a diameter of 3 centimeters — wider than the pylorus — before they were folded into orally ingestible capsules. In testing the capsules in pigs, the researchers found that the rings expanded into their original shape within 15 minutes of ingestion and remained in the stomach for up to seven days. After the device passed out of the stomach, the polymer gel dissolved, allowing for the safe passage of the small PCL pieces without obstruction.

The researchers also created larger devices in various shapes that could be folded and delivered through the esophagus with the assistance of an endoscope. These devices remained in the stomach for up to five days, after which the gel similarly dissolved, allowing the remnants of the device to pass safely.

The combined enteric and elastic properties of this polymer gel could significantly improve the design and adoption of gastric-resident devices. Devices that could incorporate this material include bariatric devices for the treatment of obesity, which control how hungry or full a person feels; ingestible electronics, which can diagnose and monitor a variety of conditions in the GI tract; or extended-release drug-delivery systems that could last for weeks or months after a single administration.

“This delivery system provides a flexible and smooth external covering that slowly disintegrates, which could release drugs or small devices for monitoring and imaging the GI tract,” says Edith Mathiowitz, a professor of medical science and engineering at Brown University who was not involved with this study. “This is a very smart approach.”

In particular, the authors say they are excited about the drug-delivery applications of this technology. With further work in adjusting the polymer composition or the design of the system, they say they could tailor devices to release drugs over a specific timeframe of up to weeks or months at a time. MIT is negotiating an exclusive license agreement with Lyndra, an early-stage biotechnology company developing novel oral drug-delivery systems, for this and other related technologies.

“I’m delighted to see these new oral systems provide an opportunity that I’ve not seen previously — enabling patients to swallow a single pill that can then act for whatever length of time is desired,” Langer says.

Such single-administration events could improve medication adherence, which remains a major clinical barrier. According to the World Health Organization, patients’ adherence to long-term therapies for chronic illnesses is only 50 percent in developed countries, with lower rates of adherence in developing nations. Medication nonadherence costs the U.S. an estimated $100 billion every year, the bulk of which comes in the form of unnecessary hospitalizations. The researchers also say that single-administration delivery systems for the radical treatment of malaria and other infections could significantly benefit from these technologies.
In a March 2015 commentary piece in Nature, Traverso and Langer wrote that the GI tract is an area rife with opportunity for prolonged drug delivery in tackling this global health problem. With this new material, which can be used to create extended-release systems via swallowable “pills,” they envision an emerging field of orally delivered devices that can maximize adherence and therapeutic efficacy. The paper’s other authors include Andrew Bellinger, Dean Glettig, Ross Barman, Young-Ah Lee, Cody Cleveland, Veronica Montgomery, and Li Gu from MIT; Jiahua Zhu from Oak Ridge National Laboratory; and Landon Nash and Duncan Maitland from Texas A&M University. This research was funded by the Bill and Melinda Gates Foundation, the National Institutes of Health, and the Alexander von Humboldt Foundation.


News Article | October 23, 2015
Site: news.mit.edu

For faster, longer-lasting water filters, some scientists are looking to graphene —thin, strong sheets of carbon — to serve as ultrathin membranes, filtering out contaminants to quickly purify high volumes of water. Graphene’s unique properties make it a potentially ideal membrane for water filtration or desalination. But there’s been one main drawback to its wider use: Making membranes in one-atom-thick layers of graphene is a meticulous process that can tear the thin material — creating defects through which contaminants can leak. Now engineers at MIT, Oak Ridge National Laboratory, and King Fahd University of Petroleum and Minerals (KFUPM) have devised a process to repair these leaks, filling cracks and plugging holes using a combination of chemical deposition and polymerization techniques. The team then used a process it developed previously to create tiny, uniform pores in the material, small enough to allow only water to pass through. Combining these two techniques, the researchers were able to engineer a relatively large defect-free graphene membrane — about the size of a penny. The membrane’s size is significant: To be exploited as a filtration membrane, graphene would have to be manufactured at a scale of centimeters, or larger. In experiments, the researchers pumped water through a graphene membrane treated with both defect-sealing and pore-producing processes, and found that water flowed through at rates comparable to current desalination membranes. The graphene was able to filter out most large-molecule contaminants, such as magnesium sulfate and dextran. Rohit Karnik, an associate professor of mechanical engineering at MIT, says the group’s results, published in the journal Nano Letters, represent the first success in plugging graphene’s leaks. “We’ve been able to seal defects, at least on the lab scale, to realize molecular filtration across a macroscopic area of graphene, which has not been possible before,” Karnik says. “If we have better process control, maybe in the future we don’t even need defect sealing. But I think it’s very unlikely that we’ll ever have perfect graphene — there will always be some need to control leakages. These two [techniques] are examples which enable filtration.” Sean O’Hern, a former graduate research assistant at MIT, is the paper’s first author. Other contributors include MIT graduate student Doojoon Jang, former graduate student Suman Bose, and Professor Jing Kong. “The current types of membranes that can produce freshwater from saltwater are fairly thick, on the order of 200 nanometers,” O’Hern says. “The benefit of a graphene membrane is, instead of being hundreds of nanometers thick, we’re on the order of three angstroms — 600 times thinner than existing membranes. This enables you to have a higher flow rate over the same area.” O’Hern and Karnik have been investigating graphene’s potential as a filtration membrane for the past several years. In 2009, the group began fabricating membranes from graphene grown on copper — a metal that supports the growth of graphene across relatively large areas. However, copper is impermeable, requiring the group to transfer the graphene to a porous substrate following fabrication. However, O’Hern noticed that this transfer process would create tears in graphene. What’s more, he observed intrinsic defects created during the growth process, resulting perhaps from impurities in the original material. 
To plug graphene’s leaks, the team came up with a technique to first tackle the smaller intrinsic defects, then the larger transfer-induced defects. For the intrinsic defects, the researchers used a process called “atomic layer deposition,” placing the graphene membrane in a vacuum chamber, then pulsing in a hafnium-containing chemical that does not normally interact with graphene. However, if the chemical comes in contact with a small opening in graphene, it will tend to stick to that opening, attracted by the area’s higher surface energy. The team applied several rounds of atomic layer deposition, finding that the deposited hafnium oxide successfully filled in graphene’s nanometer-scale intrinsic defects. However, O’Hern realized that using the same process to fill in much larger holes and tears — on the order of hundreds of nanometers — would require too much time. Instead, he and his colleagues came up with a second technique to fill in larger defects, using a process called “interfacial polymerization” that is often employed in membrane synthesis. After they filled in graphene’s intrinsic defects, the researchers submerged the membrane at the interface of two solutions: a water bath and an organic solvent that, like oil, does not mix with water. In the two solutions, the researchers dissolved two different molecules that can react to form nylon. Once O’Hern placed the graphene membrane at the interface of the two solutions, he observed that nylon plugs formed only in tears and holes — regions where the two molecules could come in contact because of tears in the otherwise impermeable graphene — effectively sealing the remaining defects. Using a technique they developed last year, the researchers then etched tiny, uniform holes in graphene — small enough to let water molecules through, but not larger contaminants. In experiments, the group tested the membrane with water containing several different molecules, including salt, and found that the membrane rejected up to 90 percent of larger molecules. However, it let salt through at a faster rate than water. The preliminary tests suggest that graphene may be a viable alternative to existing filtration membranes, although Karnik says techniques to seal its defects and control its permeability will need further improvements. “Water desalination and nanofiltration are big applications where, if things work out and this technology withstands the different demands of real-world tests, it would have a large impact,” Karnik says. “But one could also imagine applications for fine chemical- or biological-sample processing, where these membranes could be useful. And this is the first report of a centimeter-scale graphene membrane that does any kind of molecular filtration. That’s exciting.” De-en Jiang, an assistant professor of chemistry at the University of California at Riverside, sees the defect-sealing technique as “a great advance toward making graphene filtration a reality.” “The two-step technique is very smart: sealing the defects while preserving the desired pores for filtration,” says Jiang, who did not contribute to the research. “This would make the scale-up much easier. One can produce a large graphene membrane first, not worrying about the defects, which can be sealed later.” This research was supported in part by the Center for Clean Water and Clean Energy at MIT and KFUPM, the U.S. Department of Energy, and the National Science Foundation.
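As a quick arithmetic check on the thickness comparison O’Hern quotes above (an illustrative sketch, using only the numbers given in the article):

```python
# Quick check of the thickness comparison quoted by O'Hern above.
# Both values are taken from the article: a conventional membrane on the
# order of 200 nanometers and a graphene membrane of about 3 angstroms.

conventional_nm = 200.0
graphene_nm = 0.3          # 3 angstroms = 0.3 nanometers

print(f"Graphene is roughly {conventional_nm / graphene_nm:.0f} times thinner")  # ~667x

# For a given material and driving pressure, permeance scales roughly
# inversely with thickness (all else being equal), which is why the thinner
# membrane can pass a higher flow rate over the same area.
```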


News Article | November 6, 2015
Site: phys.org

The hybrid optical microscope/mass spectrometry-based imaging system developed at the Department of Energy's Oak Ridge National Laboratory operates under ambient conditions and requires no pretreatment of samples to analyze chemical compounds with sub-micron resolution. One micron is equal to about 1/100th the width of a human hair. Results of the work by postdoctoral associate Jack Cahill, Gary Van Berkel and Vilmos Kertesz of ORNL's Chemical Sciences Division are detailed in Analytical Chemistry.

"Knowing the chemical basis of material interactions that take place at interfaces is vital for designing and advancing new functional materials that are important for DOE missions such as organic photovoltaics for solar energy," Van Berkel said. "In addition, the new tool can be used to better understand the chemical basis of important biological processes such as drug transport, disease progression and response for treatment."

The hybrid instrument transfers tiny amounts of a material such as human tissue or an organic polymer from a sample by a laser ablation process in which material is captured and transported via a liquid stream to the ionization source of the mass spectrometer. In just seconds, a computer screen displays the results. Researchers noted that the resolution of less than one micron is essential to accurately distinguish between polymers and sub-components of similar-sized cells.

"Today's mass spectrometry imaging techniques are not yet up to the task of reliably acquiring molecular information on a wide range of compound types," Cahill said. "Examples include synthetic polymers used in various functional materials like light harvesting and emitting devices or biopolymers like cellulose in plants or proteins in animal tissue." This technology, however, provides the long-sought detailed chemical analysis through a simple interface between a hybrid optical microscope and an electrospray ionization system for mass spectrometry.

More information: John F. Cahill et al. Characterization and Application of a Hybrid Optical Microscopy/Laser Ablation Liquid Vortex Capture/Electrospray Ionization System for Mass Spectrometry Imaging with Sub-micrometer Spatial Resolution, Analytical Chemistry (2015). DOI: 10.1021/acs.analchem.5b03293


News Article | February 28, 2017
Site: www.cemag.us

It’s not enough to design new drugs. For drugs to be effective, they have to be delivered safely and intact to affected areas of the body. And drug delivery, much like drug design, is an immensely complex task. Cutting-edge research and development like that conducted at the U.S. Department of Energy’s Oak Ridge National Laboratory can help solve some of the challenges associated with drug delivery. In fact, ORNL researchers and collaborators at Wayne State University recently used a unique combination of experimentation and simulation to shed light on the design principles for improved delivery of RNA drugs, which are promising candidates in the treatment of a number of medical conditions including cancers and genetic disorders. Specifically, the research team discovered that the motions of a tRNA (or transfer RNA) model system can be enhanced when coupled with nanodiamonds, or diamond nanoparticles approximately 5 to 10 nanometers in size. Nanodiamonds are good delivery candidates due to their spherical shape, biocompatibility and low toxicity. And because their surfaces can be easily tailored to facilitate the attachment of various medicinal molecules, nanodiamonds have tremendous potential for the delivery of a vast range of therapies. The discovery involved ORNL’s Spallation Neutron Source, which provides the most intense pulsed neutron beams in the world for scientific research and industrial development, and ORNL’s Titan supercomputer, the nation’s most powerful for open science -- a one-two punch for illuminating the physical properties of potential drugs that inform new design principles for safer, improved delivery platforms. By comparing the SNS neutron scattering data with the data from the team’s molecular dynamics simulations on Titan, the researchers have confirmed that nanodiamonds enhance the dynamics of tRNA when in the presence of water. This cross-disciplinary research was profiled in Journal of Physical Chemistry B. The project began when ORNL’s P. Ganesh and Xiang-Qiang Chu of Wayne State University wondered how the water-phobic surfaces of nanoparticles alter the dynamics of biomolecules coated with water, and if it might be something that they could eventually control. They then formed a team including Gurpreet Dhindsa, Hugh O’Neill, Debsindhu Bhowmik, and Eugene Mamontov of ORNL and Liang Hong of Shanghai Jiao Tong University in China to observe the motions of hydrogen atoms from the model system, tRNA, in water using SNS’s BASIS neutron backscattering spectrometer, SNS beam line 2. Hydration is essential for biomolecules to function, and neutrons are excellent at distinguishing between the motions of hydration water molecules and the biomolecule they are surrounding. Therefore, by measuring the atoms’ neutron scattering signals, the team was able to discern the movement of tRNA in water, providing valuable insight into how the large molecule relaxes in different environmental conditions. After comparing the results of the individual atoms, it was clear that the nanodiamonds were having a profound effect on their companion RNA molecules. The results were somewhat baffling because similar experiments had demonstrated that companion solid materials (such as nanodiamonds) tended to dampen biomolecule dynamics. Surprisingly however, nanodiamonds did the opposite for tRNA. “Scientists are always interested in the bio-nano interactions,” says Chu. 
“While the interfacial layer of the bio-nano systems has very distinctive properties, it is very hard to study this mysterious zone without neutron scattering, which only sees hydrogen.” To realize the potential of nanodiamonds in the delivery of biomolecules using tRNA as a model, the team turned to Titan to shed a much-needed light on the underlying physics. “Molecular dynamics simulation can really tell those stories that current experimental advancement might not be able to,” says Bhowmik of ORNL’s Computational Science and Engineering Division, who set up and conducted the simulations alongside Monojoy Goswami of the laboratory’s Computer Science and Mathematics Division and Hong of Shanghai Jiao Tong University. “By combining these two techniques, you can enter a whole new world.” These simulations revealed that the “weak dynamic heterogeneity” of RNA molecules in the presence of nanodiamonds was responsible for the enhanced effect. In other words, the reactions among the nanodiamonds, water and the RNA molecule forms a water layer on the nanodiamond surface, which then blocks it and prevents strong RNA contact to the nanodiamond. Since RNA is hydrophilic, or “likes water,” the molecules on the nanodiamond surface swell with excess hydration and weaken the heterogeneous dynamics of the molecules. “You can fine-tune these dynamics with chemical functionalization on the nanodiamond surface, further enhancing its effectiveness,” says Goswami. The findings will likely guide future studies not only on the potential of nanodiamonds in drug delivery but also on fighting bacteria and treating viral diseases. Using simulation to confirm and gain insight into experiments is nothing new. But mimicking large-scale systems precisely is often a challenge, and the lack of quantitative consistency between the two disciplines makes data comparison difficult and answers more elusive to researchers. This lack of precision, and by extension lack of consistency, is largely driven by the uncertainty surrounding force-field parameters or the interaction criteria between different particles. The exact parameters are scarce for many macromolecules, often forcing researchers to use parameters that closely, but not exactly, match the experiment. Miscalculating the precision of these parameters can have major consequences for the interpretation of the experimental results. To ensure the calculations were correct, Goswami worked with Jose Borreguero and Vickie Lynch, both of ORNL’s Neutron Data Analysis and Visualization Division and Center for Accelerated Materials Modeling, to develop a workflow optimization technique known as Pegasus. This method compares molecular dynamics simulations with neutron scattering data and refines the simulation parameters to validate the results with the proper experimental precision. “Using the Pegasus workflow to run simulations sampling, the force-field parameter space saved time and eliminated input errors,” says Lynch. These parameters also helped researchers better characterize the nanodiamond-water interactions and tRNA dynamics in the presence of nanodiamonds. The researchers then developed an automated system capable of optimizing parameters across a wide spectrum of simulation systems and neutron experiments, an effort that will be of great worth to similar experiments going forward. This new workflow is also compatible with the laboratory’s Compute and Data Environment for Science (CADES), which assists experimentalists with the analysis of vast quantities of data. 
“Users of the CADES infrastructure can carry the optimization of the simulations within the Bellerophon Environment for the Analysis of Materials, in active development at ORNL,” says Borreguero. The Bellerophon Environment for the Analysis of Materials (BEAM) is an end-to-end workflow software system, developed at ORNL, enabling user-friendly, remote access to robust data storage and compute capabilities offered at CADES and the Oak Ridge Leadership Computing Facility, home of Titan, for scalable data analysis and modeling. It’s these in-house resources that make ORNL a world leader in experimentation, modeling, and the nexus in between and that make discoveries like this possible.
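To make the parameter-refinement idea concrete, here is a minimal sketch of the kind of loop described above, in which a simulated observable is compared against neutron scattering data and a force-field parameter is adjusted until the two agree. It is not the Pegasus workflow or BEAM code itself; the function names, parameter values and data below are placeholders.

```python
# Illustrative parameter-refinement loop: compare a simulated observable
# against neutron scattering data and keep the force-field parameter that
# fits best. Hypothetical sketch only: run_md_simulation() is a stand-in
# for a real MD engine, and the "experimental" curve is fabricated here
# purely for demonstration.
import numpy as np

q_values = np.linspace(0.3, 1.7, 8)            # momentum transfer, 1/angstrom
experimental_signal = 0.8 + 0.1 * q_values     # placeholder "measured" data

def run_md_simulation(epsilon):
    """Stand-in for an MD run; returns a simulated observable per q value."""
    return 0.8 + 0.1 * q_values * (epsilon / 0.25)   # toy response to the parameter

def chi_squared(simulated, measured, sigma=0.05):
    return float(np.sum(((simulated - measured) / sigma) ** 2))

# Sweep candidate values of a single force-field parameter and keep the best.
candidates = [0.15, 0.20, 0.25, 0.30, 0.35]
scores = {eps: chi_squared(run_md_simulation(eps), experimental_signal)
          for eps in candidates}
best = min(scores, key=scores.get)
print(f"Best-fit parameter: {best} (chi-squared = {scores[best]:.2f})")
```

In practice this kind of sweep is what a workflow manager automates: launching the simulations, collecting the scattering comparisons and narrowing the parameter space with the experimental precision the ORNL team describes.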


News Article | March 22, 2016
Site: phys.org

Now, researchers at the Department of Energy's Oak Ridge National Laboratory have made a better thermoplastic by replacing styrene with lignin, a brittle, rigid polymer that, with cellulose, forms the woody cell walls of plants. In doing so, they have invented a solvent-free production process that interconnects equal parts of nanoscale lignin dispersed in a synthetic rubber matrix to produce a meltable, moldable, ductile material that's at least ten times tougher than ABS. The resulting thermoplastic—called ABL for acrylonitrile, butadiene, lignin—is recyclable, as it can be melted three times and still perform well. The results, published in the journal Advanced Functional Materials, may bring cleaner, cheaper raw materials to diverse manufacturers. "The new ORNL thermoplastic has better performance than commodity plastics like ABS," said senior author Amit Naskar in ORNL's Materials Science and Technology Division, who along with co-inventor Chau Tran has filed a patent application for the process to make the new material. "We can call it a green product because 50 percent of its content is renewable, and technology to enable its commercial exploitation would reduce the need for petrochemicals." The technology could make use of the lignin-rich biomass byproduct stream from biorefineries and pulp and paper mills. With the prices of natural gas and oil dropping, renewable fuels can't compete with fossil fuels, so biorefineries are exploring options for developing other economically viable products. Among cellulose, hemicellulose and lignin, the major structural constituents of plants, lignin is the most commercially underutilized. The ORNL study aimed to use it to produce, with an eye toward commercialization, a renewable thermoplastic with properties rivaling those of current petroleum-derived alternatives. To produce an energy-efficient method of synthesizing and extruding high-performance thermoplastic elastomers based on lignin, the ORNL team needed to answer several questions: Can variations in lignin feedstocks be overcome to make a product with superior performance? Can lignin integrate into soft polymer matrices? Can the chemistry and physics of lignin-derived polymers be understood to enable better control of their properties? Can the process to produce lignin-derived polymers be engineered? "Lignin is a very brittle natural polymer, so it needs to be toughened," explained Naskar, leader of ORNL's Carbon and Composites group. A major goal of the group is producing industrial polymers that are strong and tough enough to be deformed without fracturing. "We need to chemically combine soft matter with lignin. That soft matrix would be ductile so that it can be malleable or stretchable. Very rigid lignin segments would offer resistance to deformation and thus provide stiffness." All lignins are not equal in terms of heat stability. To determine what type would make the best thermoplastic feedstock, the scientists evaluated lignin from wheat straw, softwoods like pine and hardwoods like oak. They found hardwood lignin is the most thermally stable, and some types of softwood lignins are also melt-stable. Next, the researchers needed to couple the lignin with soft matter. Chemists typically accomplish this by synthesizing polymers in the presence of solvents. 
Because lignin and a synthetic rubber containing acrylonitrile and butadiene, called nitrile rubber, both have chemical groups in which electrons are unequally distributed and therefore likely to interact, Naskar and Chau Tran (who performed melt-mixing and characterization experiments) instead tried to couple the two in a melted phase without solvents. In a heated chamber with two rotors, the researchers "kneaded" a molten mix of equal parts powdered lignin and nitrile rubber. During mixing, lignin agglomerates broke into interpenetrating layers or sheets of 10 to 200 nanometers that dispersed well in and interacted with the rubber. Without the proper selection of a soft matrix and mixing conditions, lignin agglomerates are at least 10 times larger than those obtained with the ORNL process. The product that formed had properties of neither lignin nor rubber, but something in between, with a combination of lignin's stiffness and nitrile rubber's elasticity. By altering the acrylonitrile amounts in the soft matrix, the researchers hoped to improve the material's mechanical properties further. They tried 33, 41 and 51 percent acrylonitrile and found 41 percent gave an optimal balance between toughness and stiffness. Next, the researchers wanted to find out if controlling the processing conditions could improve the performance of their polymer alloy. For example, 33 percent acrylonitrile content produced a material that was stretchy but not strong, behaving more like rubber than plastic. At higher proportions of acrylonitrile, the researchers saw the materials strengthen because of the efficient interaction between the components. They also wanted to know at what temperature the components should be mixed to optimize the material properties. They found heating components between 140 and 160 degrees Celsius formed the desired hybrid phase. Using resources at ORNL including the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility, the scientists analyzed the morphologies of the blends. Scanning electron microscopy, performed by Chau Tran, explored the surfaces of the materials. Jihua Chen and Tran characterized soft matter phases using transmission electron microscopy, placing a thin slice of material in the path of an electron beam to reveal structure through contrast differences in the lignin and rubber phases. Small-angle x-ray scattering by Jong Keum revealed repeated clusters of certain domain or layer sizes. Fourier transform infrared spectroscopy identified chemical functional groups and their interactions. Future studies will explore different feedstocks, particularly those from biorefineries, and correlations among processing conditions, material structure and performance. Investigations are also planned to study the performance of ORNL's new thermoplastic in carbon-fiber-reinforced composites. "More renewable materials will probably be used in the future," Naskar said. "I'm glad that we could continue work in renewable materials, not only for automotive applications but even for commodity usage." The title of the paper is "A New Class of Renewable Thermoplastics with Extraordinary Performance from Nanostructured Lignin-Elastomers."


The 2015 R&D 100 Awards Banquet took place on November 13, 2015, at Caesars Palace in Las Vegas, Nevada, and welcomed hundreds of executives, scientists and researchers. In the event’s 53rd year, leaders of science and technology were honored for their innovative, high-tech products and processes. The R&D 100 Awards is an international competition that recognizes the 100 most technologically significant products introduced into the marketplace over the past year, recognizing excellence across a wide range of industries, including telecommunications, optics, high-energy physics, materials science, chemistry and biotechnology. The full list of 2015 R&D 100 Award Winners is available here: www.rdmag.com/news/2015/11/2015-r-d-100-award-winners.

This year, the R&D 100 Awards also included a brand-new Special Recognition Awards category, as well as a two-day science and technology conference focused on the theme of innovation. Gold, Silver and Bronze Awards for the Special Recognition Awards were announced during the ceremony for Green Tech, Corporate Social Responsibility, Market Disruptor Services and Market Disruptor Product.

The R&D 100 Green Tech Special Recognition Gold Award went to the CO2 Memzyme by Sandia National Laboratories and the Univ. of New Mexico; the Silver Award was presented to Dow UF PURINZE and Its Application in Eco Washing Machine by Dow Water and Process Solutions; and the Bronze Award went to the High Efficient and Reliable Ocean Wave Energy Harvester by Virginia Polytechnic Institute and State Univ. The Corporate Social Responsibility Special Recognition Gold Award was presented to the KM-CDR Process by Southern Company and Mitsubishi Heavy Industries, and the Silver Award went to the Echo-Screen III Hearing Screener by Natus Medical Inc. The Market Disruptor Services Special Recognition Gold Award went to the ELUTE Fiber by TissueGen Inc., while the Silver Award was presented to the Infrared Nondestructive Weld Examination System by Oak Ridge National Laboratory. The Market Disruptor Product Special Recognition Gold Award was presented to HeliAct Muscles by Lintec and the Univ. of Texas at Dallas; the Silver Award went to the Solaris Open Air Fluorescence Imaging System by PerkinElmer; and the Bronze Award went to Airyscan by Carl Zeiss Microscopy.

“These special awards, new this year, highlight the incredible innovations that will have a direct impact on people’s daily lives, from health care to energy technologies, and demonstrate just how the pace of research and development affects how we live,” said Bea Riemschneider, Editorial Director of the Advantage Business Media Science Group, the parent company of the R&D 100 Awards and R&D Magazine.

The much-anticipated R&D Magazine Editor’s Choice Awards concluded the R&D 100 gala event as the winners came on stage to accept the crystal awards from Advantage Business Media CEO Jim Lonergan. This year, one winner was announced from each of the five R&D 100 Award categories—Analytical/Test, IT/Electrical, Mechanical/Materials, Process/Prototyping and Software/Services. “These five technologies had the ‘wow’ factor that our editors look for each year among the R&D 100 Awards, and they demonstrated once again the creativity and innovation in the marketplace,” said R&D Editor Lindsay Hock. R&D Magazine’s Editor’s Choice Award in Analytical/Test went to Thermo Fisher Scientific’s Thermo Scientific Gemini analyzer.
The IT/Electrical Editor’s Choice Award was presented to IBM’s TrueNorth neurosynaptic chip, and the Mechanical/Materials award went to National Renewable Energy Laboratory’s cyanobacterial bioethylene. The Process/Prototyping Editor’s Choice Award was awarded to Cincinnati Incorporated and Oak Ridge National Laboratory’s BAAM-CI, while the Software/Services award went to Applied Brain Research’s Nengo 2.0. Next year’s R&D 100 Awards will take place November 4, 2016, at the Gaylord National Resort and Convention Center in the National Harbor in Maryland in conjunction with the second annual R&D 100 Awards & Technology Conference, scheduled for November 3-4, 2016. Entries for the 2016 R&D 100 Awards are now open. Please go to www.rd100awards.com for more information.


News Article | October 28, 2016
Site: www.prweb.com

The Eclipse Science Working Group (SWG), a global community for individuals and organizations who collaborate on commercially-friendly open source software, today released five projects aimed at expediting scientific breakthroughs by simplifying and streamlining computational science workflows. While computational science has made a great impact across the scientific spectrum, from climate change to astrophysics to genomics, today’s data-centric simulation environments require enhanced analysis, visualization, and workflow management to further reduce time to solution and increase the frequency of research breakthroughs. Hence Eclipse’s software release.

“Oak Ridge National Laboratory is renowned for its hardware - like Titan, the world’s third fastest supercomputer - but we are also leaders in scientific software development,” said Jay Jay Billings, the SWG Chair and a research scientist at Oak Ridge National Laboratory. “The Science Working Group has not only helped us create software to simplify complex computational science but it has also allowed us to become part of a thriving community that focuses on collaboration, interoperability, and cutting edge research.”

Other U.S. National Laboratories are also benefiting from the Science Working Group. “The Eclipse Science Working Group provides an open, international forum for design and development of critical components of scientific software, bringing together a diverse set of application requirements drivers and software architects to address the challenges of next-generation scientific computing,” says Robert Clay of Sandia National Laboratory.

Likewise, small businesses are also engaged as either members of the group or consumers of its software. Dr. Gerald Sabin from RNET Technologies, Inc., a Dayton, Ohio-area firm focused on high-performance computing research, said, “The ICE workflow environment is an ideal platform for us to develop a cutting edge, cloud-based collaboration environment to support scientific computing. The licensing model and openness of the Science Working Group is to our advantage.”

The open source projects, which represent years of development and thousands of users, are the product of intense collaboration among SWG members including Oak Ridge National Laboratory (ORNL), Diamond Light Source, Itema AS, iSencia, Kichwa Coders, Lablicate GmbH and others. The five projects released today are:

“Open source is having tremendous impact on both productivity and innovation in industry, government, and academia,” explained Eclipse Foundation Executive Director Mike Milinkovich. “The Eclipse Science Working Group is a great example of how world-leading scientific research organizations like Oak Ridge National Laboratory, Diamond Light Source, and others can collaborate on delivering free software that will enable the next major scientific breakthrough. This is an important advancement for the scientific research community, and we are proud to be hosting the collaboration at the Eclipse Foundation.” More information on the Science Working Group is available on their website.

UT-Battelle manages ORNL for the DOE’s Office of Science. The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website. Eclipse is a community for individuals and organizations who wish to collaborate on open source software.
There are over 300 open source projects in the Eclipse community, spanning tools for software developers, geo-spatial technology, system engineering and embedded development tools, frameworks for IoT solutions, tools for scientific research, and much more. The Eclipse Foundation is a not-for-profit foundation that is the steward of the Eclipse community. More information is available at eclipse.org.


Nuclear-renewable hybrid energy systems (N-R HESs) can enable low-carbon, on-demand electricity while providing reduced-emission thermal energy for industrial processes. N-R HESs are managed by a single entity that links a nuclear reactor that generates heat, a thermal power cycle for heat-to-electricity conversion, at least one renewable energy source, and an industrial process that uses thermal and/or electrical energy. However, the economic feasibility of these systems may depend on future natural gas prices, electricity market structures, and clean energy incentives.

A series of new reports from the Joint Institute for Strategic Energy Analysis (JISEA) and Idaho National Laboratory (INL) examines various hybrid system configurations to provide a basis to identify opportunities for clean energy use and examine the most economically viable configurations. N-R HESs are physically coupled facilities that include both nuclear and renewable energy sources and produce electricity and another product such as a fuel, thermal energy, hydrogen, and desalinated water. Energy and materials flows among energy production and delivery systems are dynamically integrated so that the production rate of each product can be varied.

In one report, Generation and Use of Thermal Energy in the US Industrial Sector and Opportunities to Reduce its Carbon Emissions, researchers from INL and the Energy Department’s National Renewable Energy Laboratory (NREL) identify key greenhouse gas (GHG) emission sources in the industrial sector and propose low-emitting alternatives using targeted, process-level analysis of industrial heat requirements. The report examines emissions generated during process heat generation. The study focuses on the 14 industries with the largest emissions as reported under the Environmental Protection Agency’s Greenhouse Gas Reporting Program in 2014. Approximately 960 plants from those industries represent less than one half of one percent of all manufacturing in the US, but they emit nearly 25% of all industrial sector emissions—5% of total US GHG emissions in 2014. The report also identifies non-GHG-emitting thermal energy sources that could be used to generate heat without emissions. Those potential sources include small modular nuclear reactors, solar heat for industrial processes, and geothermal heat. The report identifies potential opportunities for each source, identifies implementation challenges, and proposes analyses to identify approaches to overcome the challenges.

In a second report, Status on the Component Models Developed in the Modelica Framework: High-Temperature Steam Electrolysis Plant & Gas Turbine Power Plant, INL details a modeling and simulation framework to assess the technical and economic viability of an N-R HES. INL, with support from Oak Ridge National Laboratory and Argonne National Laboratory, developed a dynamic, physics-based modeling capability of N-R HESs using the Modelica programming language. The report presents details on newly developed high-temperature steam electrolysis (for hydrogen production) and gas turbine power plant subsystems. Simulations of several case studies show that the suggested control scheme could maintain satisfactory plant operations even under rapid variations in net load.
The study finds that the N-R HESs modeled could provide operational flexibility to participate in energy management at the utility scale by dynamically optimizing the use of excess plant capacity. In a third report, The Economic Potential of Three Nuclear-Renewable Hybrid Energy Systems Providing Thermal Energy to Industry, NREL researchers explore the economics of an N-R HES that sells a thermal product (steam or a high-temperature heat transfer fluid) to one or more industrial customers. Under each scenario examined, the economically optimal system configuration includes a nuclear reactor generating a thermal product such as steam or a heat transfer fluid—a configuration that can economically reduce GHG emissions from industry. In addition, configurations that include a thermal power cycle can support resource adequacy for the electricity grid while maximizing production of the thermal energy product if the markets sufficiently incentivize that option. Together, these three reports indicate nuclear renewable hybrid energy systems can reduce industrial emissions and support the power system. The Joint Institute for Strategic Energy Analysis is operated by the Alliance for Sustainable Energy, LLC, on behalf of the US Department of Energy’s National Renewable Energy Laboratory, the University of Colorado-Boulder, the Colorado School of Mines, Colorado State University, Massachusetts Institute of Technology, and Stanford University.
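To illustrate what "dynamically optimizing the use of excess plant capacity" can mean in practice, here is a deliberately simplified, hypothetical hourly dispatch rule. It is not drawn from the JISEA/INL reports or their Modelica models; all numbers and names below are assumptions chosen only for illustration.

```python
# Hypothetical toy dispatch for a nuclear-renewable hybrid energy system.
# Illustrative sketch only; the actual JISEA/INL analyses use detailed
# Modelica models and market data, not this simple rule.

NUCLEAR_HEAT_MW = 600.0      # constant thermal output of the reactor (assumed)
POWER_CYCLE_EFF = 0.33       # heat-to-electricity conversion efficiency (assumed)
STEAM_PRICE = 20.0           # value of industrial steam, $/MWh-thermal (assumed)

def dispatch_hour(electricity_price, wind_mw):
    """Split reactor heat between electricity and industrial steam for one hour.

    If electricity is valuable, send heat to the power cycle; otherwise sell
    it as process steam. Wind output is always sold to the grid.
    """
    value_as_power = electricity_price * POWER_CYCLE_EFF   # $ per MWh of heat
    if value_as_power > STEAM_PRICE:
        heat_to_power, heat_to_steam = NUCLEAR_HEAT_MW, 0.0
    else:
        heat_to_power, heat_to_steam = 0.0, NUCLEAR_HEAT_MW
    electricity_mw = heat_to_power * POWER_CYCLE_EFF + wind_mw
    return electricity_mw, heat_to_steam

# Example: a low-price, windy hour vs. a high-price, calm hour
print(dispatch_hour(electricity_price=15.0, wind_mw=200.0))  # heat goes to steam
print(dispatch_hour(electricity_price=90.0, wind_mw=10.0))   # heat goes to power
```

A real analysis would co-optimize against electricity prices, renewable forecasts and plant ramping limits, which is what the Modelica-based models described above are built to capture.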


News Article | November 11, 2015
Site: www.nature.com

When Fiona Ingleby took to Twitter last April to vent about a journal’s peer-review process, she didn’t expect much of a response. With only around 100 followers on the social-media network, Ingleby — an evolutionary geneticist at the University of Sussex near Brighton, UK — guessed that she might receive a few messages of support or commiseration from close colleagues. What she got was an overwhelming wave of reaction. In four pointed tweets, Ingleby detailed her frustration with a PLoS ONE reviewer who tried to explain away her findings on gender disparities in the transition from PhD to postdoc. He suggested that men had “marginally better health and stamina”, and that adding “one or two male biologists” as co-authors would improve the analysis. The response was a full-fledged ‘Twitterstorm’ that spawned more than 5,000 retweets, a popular hashtag — #addmaleauthorgate — and a public apology from the journal. “Things went really mental,” Ingleby says. “I had to turn off the Twitter notifications on my e-mail.” Yet her experience is not as unusual as it may seem. Social media has enabled an increasingly public discussion about the persistent problem of sexism in science. When a male scientist with the European Space Agency (ESA) wore a shirt patterned with half-naked women to a major media event in November 2014, Twitter blazed with criticism. The site was where the first reports surfaced in June of Nobel Prizewinning biologist Tim Hunt’s self-confessed “trouble with girls” in laboratories. And in mid-October, many astronomers took to Twitter to register their anger and disappointment when the news broke that Geoffrey Marcy, an exoplanet hunter at the University of California, Berkeley, was found to have sexually harassed female subordinates for at least a decade. “I have been in [the] field for 15 years,” wrote Sarah Hörst, a planetary scientist at Johns Hopkins University in Baltimore, Maryland. “It is my field now too & we are not going to do things this way anymore if I have anything to do w/ it.” Scientists studying the rise of social media are still trying to understand the factors that can whip an online debate into a raging Twitterstorm. Such events often have far-reaching and unpredictable consequences — for participants as well as targets. Sometimes this continuing public discussion prompts action: PLoS ONE is re-reviewing Ingleby’s paper, and its original editor and reviewer no longer work for the journal, for example. But women who speak out about sexism often face a vicious backlash, ranging from insults to threats of physical violence. Although it is not yet clear whether the social-media conversation about sexism in science will help to create lasting change, some scientists think that it may provide a sense of solidarity for women across disciplines. “You may not be changing minds, but you may be finding people who have your back,” says Brooke Foucault Welles, a communications scientist at Northeastern University in Boston, Massachusetts. “And that’s powerful.” On 12 November 2014, the ESA Rosetta mission landed a spacecraft on a comet — a milestone for space exploration. But in certain corners of the Internet, Rosetta’s landing day may be best remembered for the scantily clad women on Matt Taylor’s shirt. Taylor, a Rosetta project scientist, sported the Hawaiian-style garment as he gave interviews to reporters at mission headquarters in Darmstadt, Germany, and answered questions on an ESA webcast. 
(His comments were also suggestive: Rosetta “is sexy, but I never said she was easy”, he told viewers.) It wasn’t long before people following the historic comet landing took notice — and took to Twitter. “What a lost opportunity to encourage girls into science,” tweeted Fernanda Foertter, a computer programmer at Oak Ridge National Laboratory in Tennessee. Others approached it with a bit more snark: “No no women are toooootally welcome in our community, just ask the dude in the shirt,” wrote New York-based science journalist Rose Eveleth, who linked to a Nature video interview with Taylor. What started as a trickle of tweets soon became a flood. By 14 November, the day that Taylor gave a tearful public apology on another ESA webcast, Twitter users had posted more than 3,100 messages using the #shirtstorm hashtag (see ‘Anatomy of a Twitterstorm’). In many ways, #shirtstorm and other Twitter conversations about sexism are not new. It is only the venue that has changed, says Hope Jahren, a geobiologist at the University of Hawaii at Manoa who is active on Twitter. “Guys have been wearing girly shirts forever,” she says. “The women around them have been rolling their eyes and going home and saying, ‘What a buffoon. I’m so sick of this crap.’ They’ve been doing it in the women’s room and doing it in the coffee room.” But now, Jahren says, “Twitter is that thought under your breath.” The social-media service is also an enormous megaphone that claims to have 320 million active users each month. Research suggests that hashtag-driven Twitter conversations can help to amplify the voices of people who are not powerful by conventional measures. One example comes from Foucault Welles and her colleagues’ analysis of a hashtag that arose after police in Ferguson, Missouri, shot an unarmed African American teenager in August 2014. The killing quickly became a national news story, and the #ferguson hashtag became part of a broader US debate over police violence. Yet more than a year later, one of the most retweeted #ferguson contributors was a teenager from the Ferguson area. “People who don’t have power really can have their voices heard,” Foucault Welles says. “They can reframe the story.” And that can make Twitter an important outlet for younger scientists, who often don’t know how to respond to instances of sexism or sexual harassment. One 2014 survey of 666 scientists — including 516 women — found that 64% had experienced sexual harassment, and only 20% of that group said that they knew how to report such behaviour1. Most were students or postdoctoral researchers at the time they were harassed. When scientists talk about sexism and harassment on Twitter, it presents younger researchers with a model for confronting such issues. “This way, they can see other people are going through it, and there is a positive effect to speaking out,” says Zuleyka Zevallos, a sociologist who manages the Science Australia Gender Equity Project at the Australian Academy of Science in Canberra. For Ingleby, venting about her sexist journal review on Twitter paid unexpected dividends. She and her co-author, both postdocs, had waited three weeks for PLoS ONE to decide whether to grant their appeal and re-examine their paper. By making their plight public, Ingleby drew public support from other scientists — and, privately, invaluable advice from more-experienced researchers about how to deal with the journal. 
“I did get some messages that called me a feminazi and all that stuff,” Ingleby says, “but that was by far the minority.” She has one crucial piece of advice for those who may follow in her footsteps: “Be a bit more prepared for things going viral. Maybe pick a few quiet days in your calendar.” Determining which factors can fan a handful of messages into an Internet firestorm, or what gives a hashtag staying power, is tricky. One study2, published in 2012 by researchers at the University of Pennsylvania in Philadelphia, suggests that Internet content goes viral when it elicits a strong emotional reaction. Marketing researcher Jonah Berger and decision scientist Katherine Milkman analysed the popularity of 6,956 news stories posted to the New York Times homepage between 30 August and 30 November 2008. The pair found that stories that inspired intense positive emotions, such as awe or amusement, were the most likely to go viral; anger, anxiety and other strong negative feelings also propelled articles to wide readership, but sadness seemed to reduce the chance that a reader would share a story with others. The recent science Twitterstorms, which are often fuelled by a combination of frustration, anger and black humour, fit with those ideas. Yet an element of randomness is also at play. Joseph Reagle, a communications researcher at Northeastern University, sees this in the story of Cecil, a lion killed by an American tourist in Hwange National Park in Zimbabwe in July. The animal’s death became an international cause célèbre, inspiring a hashtag (#CeciltheLion) that racked up 1.2 million tweets in one month — despite the fact that hunters kill dozens of lions in Zimbabwe each year3. To Reagle, Cecil’s tale also suggests that ‘hashtag activism’ is here to stay. “We are seeing the emergence of a genre,” he says. “And we will see it repeated.” The conversations sparked by popular hashtags can shift the focus of media coverage and broader public discussion. The #YesAllWomen hashtag began in May 2014, in response to a shooting spree in California in which the killer said that his motivation was a hatred of women. Women used the hashtag to connect this violent misogyny to examples of everyday sexism and harassment — giving rise to a new wave of media coverage4. “That’s one of the really interesting things that starts to happen with some hashtags — they become news in their own right,” says Samantha Thrift, a feminist media scholar at the University of Calgary in Canada. Hunt learned about the amplifying power of social media the hard way on 8 June. “You fall in love with them, they fall in love with you, and when you criticize them, they cry,” he said in a speech at the World Conference of Science Journalists in Seoul. His comments were tweeted by audience members, creating an Internet furore that quickly hit mainstream news outlets. On 10 June, Hunt told BBC Radio 4 that he was “really sorry”. But in later comments to The Observer, he said that he had been “hung out to dry” and forced to resign an honorary post at University College London. “It has done me lasting damage,” he said. “What they did was unacceptable.” For those in positions of power, such as Hunt, finding themselves at the centre of a Twitterstorm can be deeply unsettling, given social media’s ability to upend traditional hierarchies. But many women who talk about sexism, feminism and gender issues online face a harsher reception, from abusive comments to threats of physical harm. 
When Eveleth tweeted her criticism of Taylor’s shirt, she received death threats. When others joined the fray, such as Jacquelyn Gill, a palaeoecologist at the University of Maine in Orono, they became targets, too. “I stand with @roseveleth and others who are calling out sexism despite online harassment,” she tweeted. “I’m reporting abusive tweets as I’m able.” She added: “Free-speech apparently only applies to dudes threatening violence to women with an opinion — not the women with an opinion. #shirtstorm”. The reaction to her commentary was swift and punishing. “For the next 72 hours I got death and rape threats,” Gill says. “It was a non-stop barrage of people trolling those hashtags.” As the stream of vitriol became overwhelming, some of Gill’s colleagues wrote a computer program to scan Twitter for threatening messages that mentioned her username. That spared Gill from constantly monitoring her account for serious threats. But no program could spare her from the awkward conversations that she had with University of Maine officials after realizing that some of her harassers on Twitter were discussing how to get her fired in retaliation for her ‘Shirtgate’ activism. “I’ve run up against the real-world consequences of speaking as a woman on the Internet,” she says. This problem is not limited to science: in a study of 2,849 Internet users, the Pew Research Center in Washington DC reported that 40% had been harassed online. Although men are more likely to be called offensive names or purposefully embarrassed, women are more likely to be stalked or sexually harassed as a result of their Internet use. The survey also found that social media is the place where women are most vulnerable to harassment of all types, ranging from stalking to physical threats. Faced with such attacks, some scientists have begun to rethink how they participate in online discussions about sexism. Some retreat entirely; others, wary of being silenced by abuse, try to find safer ways to engage online. One female researcher who has suffered Internet harassment now tweets about feminist issues under a pseudonym while also maintaining an active Twitter account under her real name. “It makes me feel safer,” says the researcher, who asked not to be named. “Although, in a lot of these cases, if someone wants to find you, they will.” Researchers tracking the rise of social media are trying to understand whether intense discussions online translate into real-world change. The difficulty lies in deciding how to measure such effects. One approach draws on network analysis. A team of computer scientists at Carnegie Mellon University in Pittsburgh, Pennsylvania, tracked Twitter users’ interactions before, during and after 20 Twitterstorms between 2011 and 2014 — most centred on targets of broad interest, such as US late-night television host Stephen Colbert and fast-food chain McDonald’s5. The researchers found that these events did not create lasting links between participants, as measured by who these users follow or message on Twitter. This suggests that Internet dust-ups do not usually lead to sustained discussion or greater awareness of a given issue. But other studies show that intense Twitter discussions may affect contributors in ways that are harder to quantify. Mindi Foster, a social psychologist at Wilfrid Laurier University in Waterloo, Canada, decided to investigate the psychological effects of tweeting on the basis of her own experience using social media. 
After hearing an anti-Semitic remark on a television programme one night, Foster joined Twitter to vent her anger — and it felt good. Foster’s research seems to confirm her hunch: that when women tweet about sexism, it improves their sense of well-being6. The study involved 93 female university students who were presented with information about sexism in academia, politics and the media. One group of students was asked to tweet publicly about what they had learned, another to tweet privately and a third to tweet about the weather. (A fourth group was told to do nothing.) During the three-day study, each participant filled out a daily questionnaire on her emotional state. Those who were assigned to tweet publicly reported a greater sense of well-being, on average, by the end of the experiment; those in the other groups showed no change. These results, although preliminary, are in line with earlier research that shows that expressive writing — such as writing in a diary — can provide similar benefits. But Foster speculates that public tweeting may confer an extra boost because it spurs writers to think more deeply about what they are saying. Twitter can also help to build a sense of community among scientists in different disciplines who are confronting sexism and sexual harassment. Sometimes these bonds grow out of dark humour, such as the #distractinglysexy hashtag birthed in reaction to Hunt’s comments. Thousands of female researchers posted pictures of themselves in labs and at field sites, up to their knees in mud or swathed in shapeless biosafety suits. “Filter mask protects me from hazardous chemicals and muffles my woman cries,” wrote Amelia Cervera, a biochemist at the University of Valencia in Spain, who shared a photo of herself wearing the face-obscuring gear. Gill, a palaeoecologist, says that she has begun to connect with researchers in astronomy, anthropology, engineering and computer science, among other fields. Such links can help researchers to learn from each other’s experiences of confronting sexism. “Some of our disciplines have been better at gender equality than others,” she notes. “Some of us have been having these discussions for a long time.” But the ongoing Twitter conversation about sexism is also limited in some important ways. It often ignores the concerns of women whose experiences with sexism are exacerbated by discrimination on the basis of race, sexual orientation or disability. For example, a US survey of 557 female scientists from ethnic minority groups found that two-thirds felt pressure to prove themselves over and over again — beyond what was asked of white colleagues. And 48% of African American respondents said that they had been mistaken for janitors (caretakers) or administrative staff in their workplaces. “If you are a minority within a minority, you are actually dealing with multiple problems,” says Zevallos. That is just as true on Twitter as it is in the lab or office. And this can make women who are dealing with the effects of multiple forms of discrimination feel excluded from conversations that focus on sexism or sexual harassment. Such concerns surfaced recently in the wake of the Marcy sexual-harassment case, which had prompted a vigorous online debate under the #astroSH hashtag. “If you are not talking about and confronting racism with same vigilance as sexism, might as well hang ‘no Blacks’ signs,” tweeted Chanda Prescod-Weinstein, an astrophysicist at the Massachusetts Institute of Technology (MIT) in Cambridge. 
“And I say that as a victim of both sexual assault and sexual harassment.” Sarah Ballard, also an MIT astrophysicist, echoed the sentiment: “We can’t rely on crowdsourcing meting out of justice- (Mostly white) crowds will stand up for white women, *crickets* otherwise.” And although social media can help to create a community discussion about sexism and other forms of discrimination, fighting for equality requires the real-world cooperation of universities, governments and other institutions. Some of these have taken action in response to sexist incidents that online discussions helped to bring to wider attention. But although Twitter may be hard to ignore, it does not have the authority to set and enforce expectations for fair treatment. Despite those caveats, Thrift finds great value in the ongoing social-media conversations among scientists, which she sees as a form of public education — and the first step towards concrete change. “That’s hugely important,” she says. “If we don’t name something as sexist, as harassment, as misogyny, it will continue unchecked.”


News Article | March 31, 2016
Site: www.materialstoday.com

A team from the US DoE’s Oak Ridge National Laboratory has developed a moldable thermoplastic polymer made with 50% renewable content from biomass, which could replace the polymer currently used in the manufacture of many consumer products, such as car bumpers, kitchen appliances, and even Lego bricks. The research could lead to cleaner and cheaper raw materials for manufacturing, and reduce demand for the petrochemicals necessary for traditional polymers. The new polymer, acrylonitrile, butadiene and lignin (ABL), is light, strong and tough because it replaces the usual styrene with lignin, a brittle, hard polymer that helps form the cell walls of plants. This class of thermoplastic elastomers is based on a solvent-free production process that interconnects nanoscale lignin dispersed in a synthetic rubber matrix, nitrile rubber, to produce a meltable, moldable, ductile and stronger material, in which the rigid lignin sections provide stiffness and resistance to deformation. The thermoplastic is also recyclable, as it can be melted down a number of times and still retain its performance. The study, as reported in Advanced Functional Materials [Tran et al. Adv. Funct. Mater. (2016) DOI: 10.1002/adfm.201504990], had the aim of developing a high-value product from lignin. However, as it is very brittle, the lignin had to be toughened by chemically combining it with the soft matrix. Equal parts of lignin and synthetic nitrile rubber were heated, mixed and extruded. As lignins are not all the same with regard to heat stability, it was important to identify which would make the most effective thermoplastic feedstock, which was achieved by comparing lignin from wheat straw, softwoods and hardwoods, with the last being found to be the most thermally stable. The researchers then coupled the preferred lignin fraction with the soft matter in a melted phase. Using a heated chamber with two rotors, they “kneaded” a molten mix of equal amounts of powdered lignin and nitrile rubber. Lignin agglomerates broke down into interpenetrating layers that dispersed well in – and also interacted with – the rubber. The resulting product offered properties between those of lignin and rubber, with a combination of the stiffness of the former and the elasticity of the latter. Changing the acrylonitrile amounts in the soft matrix allowed them to improve the mechanical properties of the material further, including finding the best balance between toughness and stiffness, and improving the performance of the polymer alloy. Biorefineries view lignin as a commercially underutilized plant product, and hope it will help in the production of renewable thermoplastics that can compete with petroleum-derived alternatives. The team is now seeking to demonstrate the use of these plastics as matrix material for fiber-reinforced composite applications, and carry out further exploration into different feedstocks, as well as correlations among processing conditions, material structure and performance.


News Article | October 25, 2016
Site: www.materialstoday.com

This shows the simulated collision path of a krypton ion (blue) with a defected graphene sheet and the subsequent formation of a carbon vacancy (red); red color shading indicates local strain in the graphene. Image: Kichul Yoon, Penn State.

Researchers at Penn State, the US Department of Energy (DOE)'s Oak Ridge National Laboratory and Lockheed Martin Space Systems Company have developed novel methods for controlling defects in two-dimensional (2D) materials such as graphene. These methods could lead to the development of improved membranes for water desalination, energy storage and sensing, as well as advanced protective coatings. For a 2D, one-atom-thick material like graphene, defects such as small cracks or holes can make a big difference to its performance. Usually these defects are considered undesirable, but with some form of control they can also be used to engineer new, desirable properties into the material. "As long as you can control defects, you might be able to synthesize into graphene whatever kinds of response the graphene material will give you," said Adri van Duin, professor of mechanical engineering and chemical engineering at Penn State and corresponding author of a recent paper describing this work in ACS Nano. "But that does require that you have very good control over defect structure and defect behavior. What we have done here is a pretty strong step towards that." van Duin is the co-inventor and main developer of a mathematical modeling and simulation technique called the Reactive Force Field Method (ReaxFF). This technique is capable of predicting the interactions of thousands of atoms when they are perturbed by an external force, such as the bombardment of graphene by noble gas atoms. Noble gases such as helium, neon, argon, krypton and xenon are frequently used to create defects in graphene to modify its properties. By knocking one or more carbon atoms out of graphene's chicken wire-shaped structure, the resulting hole can be filled by other atoms or molecules in a process called doping. Doping can change the chemical or electrical properties of the graphene to, for example, allow water molecules to flow through a membrane while rejecting salt particles. "We have done a series of atomistic scale simulations where we accelerate noble gas ions into the graphene," said van Duin, who is also director of the Materials Computation Center, part of the user facility of Penn State's Materials Research Institute. "The simulations gave much the same defect patterns as experiments. That means our simulations can tell experimentalists what dose of atoms at which acceleration they need to get those types of defects." Because defects can morph into different shapes or move around in the seconds after creation, his group also simulates putting the graphene in an oven and annealing – heating at high temperature – to stabilize the structure. It’s unusual for an atomistic simulation to match the size, time and exposure range of an experiment, due to the computational expense of calculating interactions among thousands of atoms over the time scale required to stabilize a material, van Duin said. ReaxFF, developed by van Duin and Caltech's William Goddard, is able to model chemical and physical interactions in molecules and materials as the bonds between atoms form and break.
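To give a feel for what "which acceleration" means when setting up a bombardment run like the ones described above, here is a minimal sketch, not taken from the paper, that converts an ion's acceleration energy into the impact speed an MD simulation would use to initialize the projectile. The 1 keV value and the helper function are illustrative assumptions.

```python
# Minimal sketch (not from the paper): converting a noble-gas ion's
# acceleration energy into the impact speed used to initialize a
# bombardment run in a classical MD simulation, via E = (1/2) m v^2.
import math

EV_TO_J = 1.602176634e-19      # joules per electronvolt
AMU_TO_KG = 1.66053906660e-27  # kilograms per atomic mass unit

def impact_speed(energy_ev, mass_amu):
    """Return the ion speed in m/s for a given kinetic energy in eV."""
    energy_j = energy_ev * EV_TO_J
    mass_kg = mass_amu * AMU_TO_KG
    return math.sqrt(2.0 * energy_j / mass_kg)

# Example: a krypton ion (~83.8 amu) accelerated through 1 keV,
# a hypothetical value chosen only for illustration.
print(f"{impact_speed(1000.0, 83.8):.2e} m/s")  # roughly 4.8e4 m/s
```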
"This study provides insight into the atomistic scale details of graphene irradiation and is a preliminary step in designing functionalized carbon materials in two dimensions," said Kichuel Yoon, the paper's lead author and a graduate student in van Duin's group. "Clearly there is nothing that makes this exclusive to graphene," adds van Duin. "Any 2D material can be treated with the same simulations. Anyone who wants to dope a 2D material or understand defects will be interested in these simulations." The researchers intend to continue working with Lockheed Martin on aerospace applications and will also pursue the goal of graphene-based water desalination. Jacob Swett of Lockheed Martin prepared the samples used in the experiments and was crucial in moving the project forward. To correlate simulations with experiments, the researchers relied upon the Center for Nanophase Materials Sciences (CNMS), a DOE Office of Science User Facility at ORNL, to create defects using ion bombardment and then characterize those defects using atomic-resolution imaging. "At CNMS, we have state-of-the-art helium and neon ion beam and aberration-corrected scanning transmission electron microscopy instruments that permit atomistic scale characterization," said Raymond Unocic, an R&D staff scientist at ORNL. This story is adapted from material from Penn State, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier. Link to original source.


News Article | October 28, 2016
Site: phys.org

The Eclipse Science Working Group (SWG), a global community for individuals and organizations who collaborate on commercially-friendly open source software, recently released five projects aimed at expediting scientific breakthroughs by simplifying and streamlining computational science workflows. While computational science has made a great impact across the scientific spectrum, from climate change to astrophysics to genomics, today's data-centric simulation environments require enhanced analysis, visualization, and workflow management to further reduce time to solution and increase the frequency of research breakthroughs. Hence Eclipse's software release. "Oak Ridge National Laboratory is renowned for its hardware—like Titan, the world's third fastest supercomputer—but we are also leaders in scientific software development," said Jay Billings, the SWG Chair and a research scientist at Oak Ridge National Laboratory. "The Science Working Group has not only helped us create software to simplify complex computational science but it has also allowed us to become a part of a thriving community that focuses on collaboration, interoperability, and cutting edge research." That community extends far beyond Oak Ridge into the wider national laboratory complex. "The Eclipse Science Working Group provides an open, international forum for design and development of critical components of scientific software, bringing together a diverse set of application requirements drivers and software architects to address the challenges of next-generation scientific computing," said Robert Clay of Sandia National Laboratories. Likewise, small businesses are also engaged as either members of the group or consumers of its software. Dr. Gerald Sabin from RNET Technologies, Inc., a Dayton, Ohio-area firm focused on high-performance computing research, noted: "The ICE workflow environment is an ideal platform for us to develop a cutting-edge, cloud-based collaboration environment to support scientific computing. The licensing model and openness of the Science Working Group is to our advantage." The open source projects, which represent years of development and thousands of users, are the product of intense collaboration among SWG members including ORNL, Diamond Light Source, Itema AS, iSencia, Kichwa Coders, Lablicate GmbH and others. "Open source is having tremendous impact on both productivity and innovation in industry, government, and academia," explained Eclipse Foundation Executive Director Mike Milinkovich. "The Eclipse Science Working Group is a great example of how world-leading scientific research organizations like Oak Ridge National Laboratory, Diamond Light Source, and others can collaborate on delivering free software that will enable the next major scientific breakthrough. This is an important advancement for the scientific research community, and we are proud to be hosting the collaboration at the Eclipse Foundation." More information on the Science Working Group is available on their website. Explore further: New tool helps researchers using computers


News Article | April 19, 2016
Site: phys.org

That same energy also makes actinides highly radioactive. Scientists doing early research with these substances in the 1940s and 50s did not fully understand the risks of long-term handling and storage for the resulting waste products. Locations such as the Department of Energy's (DOE's) Hanford and Savannah River Sites still have radioactive materials from the Manhattan Project in storage awaiting processing and final disposition. Often stored in large holding containers, these volatile substances mix with one another over time. One of the major challenges for decontaminating these sites lies in separating radioactive compounds from their more inert counterparts for safe removal and storage. To this end, a multi-institution team has been using computing resources at DOE's Oak Ridge National Laboratory (ORNL) to understand actinide chemistry at the molecular level in hopes of designing methods to clean up contamination and safely store spent nuclear fuel. "We put this group together because we were already collaborating on scientific papers," said professor David Dixon of The University of Alabama. "Rather than just saying that we were going to do one reaction and needed computing time, we're using a team approach to try and get a broad, fundamental understanding of actinide science." Dixon serves as the principal investigator on the project, which includes collaborators from Lawrence Berkeley National Laboratory, Washington State University, The State University of New York at Buffalo, the University of Minnesota, and Rice University. The team is using the Titan supercomputer at the Oak Ridge Leadership Computing Facility—a DOE Office of Science User Facility located at ORNL—to work on a variety of research goals related to understanding the chemical reactions taking place with actinides. Supercomputing plays an essential role in understanding these elements. Because of their high radioactivity, actinides are difficult to handle, and those experiments that can be performed are expensive. "These elements are really radioactive, and in many cases, there is not much of it around," Dixon said. "The radioactivity part makes it very difficult to handle in a lab. We want to try and get as much information as we can about their properties before people do specific lab experiments to test out ideas and design new systems to help remediate contamination." The Dixon team gained access to Titan through DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. Within the larger allocation of computing time, the team is divided into several smaller groups looking at specific problems related to actinide chemistry. After spent nuclear fuel begins to dissolve within a storage tank, actinides still exist as ions in an aqueous mixture. To clean up this mixture, researchers must separate these ions into groups based on how they affect the remediation process and by the radioactive hazards they present. Currently, state-of-the-art methods revolve around a multistage process where researchers form a metal-ligand complex with an actinide ion, which then helps the ion migrate across the boundary between oil and water. Washington State University professor Aurora Clark leads a group that studies how solutions of actinides can be separated, primarily focusing on liquid-liquid interfaces. 
"One of the most critical components of actinide separation is the interface between water or an aqueous solvent system and an organic solvent (such as oil), and often these mixtures are incredibly complicated," Clark said. "We're doing simulations to understand the essential physics of what's going on at these liquid-liquid interfaces. We're trying to understand how the structure and dynamics at the oil-water interface can influence permeability of different compounds across these interfaces and can influence the ability of an actinide-containing molecule to migrate across the interface. Essentially, much of the storage for radioactive waste needs to be cleaned of actinides. Clark's group runs large-scale molecular dynamics (MD) simulations to understand this process with a level of detail unattainable through experiment. The team uses Titan to design a solution containing thousands to tens of thousands of individual atoms. During the simulation, the team charts the particles' interactions as the solution moves to a state of equilibrium—or where a reaction and any counter-reaction proceed at an equal rate—and then analyzes the data from these interactions. Clark's group uses some novel approaches for data analysis. One of the team's achievements was creating intermolecular network theory, which allows the team to focus less on the positions of each individual particle in the simulation (like many MD simulations) and more on how the interactions of various particles dictate the physical properties of these radioactive mixtures. Intermolecular network theory gets the team to take its data and analyze a "network" of the various chemical interactions. Researchers then analyze that network much like a network analyst would observe internet connectivity or monitor a network of computers—in fact, the team uses a modified version of Google's PageRank algorithm, which is instrumental in ranking the importance and relevance of web content, to help understand the immediate organization of solvents around ions. Through its suite of computational tools and access to Titan in the last year, the Clark group was able to observe how the molecule tributyl phosphate changes its conformation when at the interface of water and hexane. As the molecule changes, a massive reorganization at the interface makes the transfer of an actinide across the interface more permeable. In addition to working on liquid-liquid interfaces, Clark's group is also starting to simulate liquid-solid interactions, with a specific focus on metal organic frameworks—compounds that create nanoscale pores that attract and bind to actinides. As supercomputing power continues to increase, Clark hopes that metal organic frameworks may become the gold standard for scrubbing actinides from an environment. "The strategy [of employing metal organic frameworks] compliments our other research and provides a way to both separate and potentially store radioactive material in a single step," Clark said. Clark's group is generating large amounts of data in these simulations and has been working with OLCF support staff to develop methods for in situ data analysis techniques that would allow the team to drastically reduce the amount of data it would need to transport out of the OLCF. Other researchers on the Dixon team, such as Lawrence Berkeley National Laboratory's (LBNL's) Bert de Jong, are using their time on Titan and the OLCF's developmental and analysis cluster Eos to collaborate closely with experimentalists. 
De Jong's group works with teams at both LBNL and the University of Utah to understand how to change and control an element's oxidation state—or how many electrons an atom can shed or add to change its charge. By altering the behavior of elements like uranium—or its actinide relatives, thorium, neptunium, and plutonium—incorporating them in molecules and forming chemical bonds, researchers hope to extract radioactive elements from the environment or waste storage tanks and keep them in a more innocuous form, or to separate them for reuse as nuclear fuel. De Jong's group benefitted from Titan's hybrid architecture. "For our thorium work, we had to do very high-accuracy simulations, and for those, we have to do coupled cluster calculations," de Jong said. "The NWChem code that we use is GPU enabled, so that allows us to do these kinds of calculations with the fastest time-to-solution possible." Throughout the group's work on Titan and Eos, simulations have served as both the basis for driving experimental research and verification of certain experimental hypotheses. "I believe that computing is both predictive and interpretive," de Jong said. "Experiments often can only tackle aspects of the problem, but computing can help broaden insights and trends that can confirm or change interpretations and provide additional information very quickly and safely." De Jong pointed to research on chemical reactions involving ligands and proteins—molecules that bind to radioactive elements—where Titan simulations suggested that certain reactions would shed water molecules while reducing uranium. This phenomenon was subsequently observed experimentally in the Netherlands. Simulations on Titan also predicted the formation of an unusually bonded uranium-nitrosyl molecule, work that was published in the Journal of the American Chemical Society. "Simulations on elements on the bottom of the periodic table, such as uranium and plutonium, are a challenge, and we are using accurate experimental data to verify that our models and methods are accurate," de Jong said. The verification of experiments performed at the University of Utah, studying thorium reactions in a gas phase, lays the foundation for complex simulations of uranium or heavier, more volatile elements that are dangerous to work with in a laboratory. As the OLCF moves toward its next-generation supercomputer, Summit, the Dixon team plans to focus on optimizing its various codes for taking full advantage of GPUs and other computing methods. The team wants to continue expanding its work on metal organic frameworks and incorporate quantum Monte Carlo methods—a computational technique that uses statistical data and random numbers to accurately plot electrons in a simulation. Explore further: What, oh, what are those actinides doing? More information: Richard M Cox et al. Reactions of Th+ + H2, D2, and HD Studied by Guided Ion Beam Tandem Mass Spectrometry and Quantum Chemical Calculations, The Journal of Physical Chemistry B (2016). DOI: 10.1021/acs.jpcb.5b08008 Yasaman Ghadar et al. Solutes at the liquid:liquid phase boundary—Solubility and solvent conformational response alter interfacial microsolvation, The Journal of Chemical Physics (2015). DOI: 10.1063/1.4914142 Yu Gong et al. Gas Phase Uranyl Activation: Formation of a Uranium Nitrosyl Complex from Uranyl Azide, Journal of the American Chemical Society (2015). DOI: 10.1021/jacs.5b02420
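As a rough illustration of the PageRank-style network analysis Clark's group describes, here is a minimal, purely hypothetical sketch. The graph, node names and damping factor are assumptions for the example only; this is a plain power-iteration PageRank on a toy ion-solvent interaction network, not the team's own software or model.

```python
# Minimal sketch (not the team's code): ranking nodes of a toy
# molecular-interaction network with a basic PageRank power iteration.
# A real analysis would build the graph from ion-solvent contacts or
# hydrogen bonds extracted from an MD snapshot.

def pagerank(adjacency, damping=0.85, iters=100):
    """adjacency: dict mapping node -> list of neighboring nodes."""
    nodes = list(adjacency)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, neighbors in adjacency.items():
            if not neighbors:
                continue
            share = damping * rank[node] / len(neighbors)
            for nb in neighbors:
                new_rank[nb] += share
        rank = new_rank
    return rank

# Toy snapshot: an actinide ion coordinated by three water molecules,
# which in turn interact with one another (undirected edges listed twice).
network = {
    "ion": ["w1", "w2", "w3"],
    "w1":  ["ion", "w2"],
    "w2":  ["ion", "w1", "w3"],
    "w3":  ["ion", "w2"],
}
for node, score in sorted(pagerank(network).items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")
```

In such a ranking, solvent molecules (or ions) that sit at the center of many strong interactions score highest, which is one way to summarize the "immediate organization of solvents around ions" without tracking every coordinate.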


News Article | October 26, 2016
Site: www.nature.com

List of discoveries shows US contributions have declined, but Japanese, Russian and European work is on the rise.

When it comes to discovering nuclear isotopes, retired physicist Gottfried Münzenberg is top, with 222. His colleague Hans Geissel, from the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany, is next with 211. Then comes Francis Aston, the British chemist who won the 1922 Nobel Prize in Chemistry, with 207. These provisional rankings have been assembled by Michael Thoennessen, a physicist at the National Superconducting Cyclotron Laboratory at Michigan State University in East Lansing. With the help of several undergraduate students, he has assembled a database of the almost 3,100 known nuclear isotopes — atoms that share the same number of protons but have varying numbers of neutrons. He lists the discovery of each by researcher, the lab at which they worked and the method by which they produced the isotope. The vast majority of these isotopes only exist naturally for fleeting moments inside stars, formed during the production of heavy elements. They never exist on Earth at all, except for a few nanoseconds in elite nuclear-physics labs. The International Union of Pure and Applied Chemistry, headquartered in Zurich, Switzerland, assigns the official credit for discovering elements, but Thoennessen is now doing the same for all the isotopes of those elements. "I don't think it matters so much who gets the credit, but historically it's important to identify which countries and labs are driving the field," he says. The database lets Thoennessen rank labs as well as people, and track how different countries' rising and falling investments in nuclear technology have affected where and how isotopes are discovered. Top of the lab list is Lawrence Berkeley National Laboratory in Berkeley, California, with 634 isotopes. GSI Darmstadt is second with 368; the University of Cambridge, UK, is third with 222, its discoveries made mostly in the days of Aston; and the Joint Institute for Nuclear Research in Dubna, Russia, is fourth with 215. Berkeley may reign supreme in terms of sheer numbers, but the trend over time tells a different story. In the 1980s and 1990s the lab failed to upgrade its equipment and so fell behind other labs. It also lost some credibility in 2002, when allegations surfaced that physicist Victor Ninov had faked the discovery of elements 116 and 118 there. These days, most new finds come from GSI Darmstadt, Dubna and other locations, such as the Radioactive Isotope Beam Factory (RIBF) in Wako, part of Japan's RIKEN national network of labs. Although the decline of Berkeley means that the United States is losing ground, in the longer term Thoennessen expects the country to maintain its supremacy with the Facility for Rare Isotope Beams (FRIB), construction of which is expected to begin in 2012 at Michigan State University. Not everyone agrees that this ranking is the best way to capture the science of the field. Krzysztof Rykaczewski, a nuclear physicist at Oak Ridge National Laboratory in Tennessee, who according to Thoennessen's list has discovered 60 isotopes, would like to see the ranking include not only discoveries but also the first person to study the properties and nuclear structure of the atoms. "Identifying is only the first step," he says.
But Patrick Regan, a physicist at the University of Surrey in Guildford, UK, who has 20 isotope discoveries to his name, thinks that the discovery itself does reflect something important. "I'm proud of each one of those 20," he says, "each of them is like a child to science." Thoennessen set the bar high when deciding what counted as a discovery, in the hope of improving standards in the field. He considers an isotope "discovered" only if its mass and charge have been identified in a peer-reviewed paper. As he dug into the literature, he found that some generally accepted isotopes in fact appeared only in less rigorously vetted sources, such as PhD theses, conference proceedings or unpublished 'private communications'. He discounted such discoveries. "There are numbers people take for granted where one really shouldn't," says Thoennessen. One person who is pleasantly surprised by the ranking is Münzenberg. He was aware that he had made a large contribution, including discovering elements 107–112 and some of their isotopes, but he hadn't realized that he would be top in the world. "I didn't know how many we'd made," he says. GSI Darmstadt is expected to announce a further 60 isotopes in the next year, and as equipment Münzenberg designed is still in use he may well remain ahead for the foreseeable future.
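The ranking itself is straightforward bookkeeping over the discovery database: tally isotopes per discoverer and per lab, then sort. The sketch below is illustrative only; the records, names and counts are made up, not entries from Thoennessen's actual database of roughly 3,100 isotopes.

```python
# Illustrative sketch: tallying a discovery database into per-person and
# per-lab rankings. The records below are invented for the example.
from collections import Counter

records = [
    {"isotope": "A-1", "discoverer": "Researcher X", "lab": "Lab P"},
    {"isotope": "A-2", "discoverer": "Researcher X", "lab": "Lab P"},
    {"isotope": "B-1", "discoverer": "Researcher Y", "lab": "Lab Q"},
]

by_person = Counter(r["discoverer"] for r in records)
by_lab = Counter(r["lab"] for r in records)

print(by_person.most_common())  # e.g. [('Researcher X', 2), ('Researcher Y', 1)]
print(by_lab.most_common())     # e.g. [('Lab P', 2), ('Lab Q', 1)]
```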


News Article | December 14, 2016
Site: www.eurekalert.org

Findings by Berkeley Lab researchers mean the Arctic may be even less of a carbon sink than previously thought.

When the frozen Arctic tundra starts to thaw around June of each year, the snow melting and the ground softening, the soil may release a large pulse of greenhouse gases, namely, carbon dioxide and methane. Little has been known about such releases. Now scientists at the U.S. Department of Energy's (DOE) Lawrence Berkeley National Laboratory, in collaboration with a team of other scientists taking measurements both in the field and in the lab, have quantified the scale of such releases and explained the underlying mechanisms for the phenomenon. Their study was based on a spring pulse in northern Alaska, documented in 2014, that included CO2 emissions equivalent to 46 percent of the net CO2 absorbed in the summer months and methane emissions that added 6 percent to summer fluxes. What's more, recent climate trends may make such emissions more frequent, the scientists conclude. "We can see the effects of climate change happening more rapidly in the Arctic than in any other part of the world," said Berkeley Lab scientist Naama Raz-Yaseef. "So we need to understand the processes that are occurring and what to expect in the future. The amount of CO2 and methane (CH4) in the atmosphere determines the greenhouse effect--so we need to know more about these sources." Their study was recently published in the journal Geophysical Research Letters in a paper titled, "Large CO2 and CH4 Emissions from Polygonal Tundra During Spring Thaw in Northern Alaska." Raz-Yaseef, an ecohydrologist, was the lead author. Co-authors were Berkeley Lab researchers Margaret Torn, Yuxin Wu, and Tim Kneafsey; Dave Billesbach of the University of Nebraska; Anna Liljedahl and Vladimir Romanovsky of the University of Alaska; David Cook of Argonne National Laboratory; and Stan Wullschleger of Oak Ridge National Laboratory. The study was a project of DOE's Next-Generation Ecosystem Experiment (NGEE-Arctic), which seeks to gain a predictive understanding of the Arctic terrestrial ecosystem's feedback to climate. The team used two towers 4 meters high to collect gas flux readings. The towers are located about 5 km apart near Barrow, Alaska, the northernmost town in the U.S., with one tower operated by NGEE and the other by DOE's Atmospheric Radiation Measurement (ARM) Climate Research Facility. "Typically we just measure what happens during summer," Raz-Yaseef said. "We assume nothing happens during winter because it's frozen, so there's no biological activity. But in 2014 we measured a big flux of carbon and methane emitted from the ground, at the very beginning of thaw and snowmelt. At first we didn't know if it was real or a measurement error. Then we confirmed it with another tower 5 km away (the ARM tower), which measured a very similar flux of the same size during the same period." That spring they measured a total of three pulses, the longest lasting five days. These findings are important in understanding the carbon balance of ecosystems in the Arctic. So far, the Arctic is considered a carbon sink, meaning it absorbs more CO2 than it emits on an annual basis, thanks mainly to the vegetation that grows in the summer. (Oceans and forests are far larger carbon sinks.) "Given our findings, the Arctic is an even smaller carbon sink than we thought since during some years nearly half of the summer uptake of CO2 is offset with these spring emissions," Raz-Yaseef said.

Why is the soil releasing gases?
Soils everywhere are full of microbes, and microbial activity emits gases. What makes the Arctic different is the way the ground freezes. As temperatures start to drop in the late fall, the soil freezes from the permafrost up and from the surface down. The active layer in between is trapped in the middle. "For about a month after the surface freezes, the middle layer is still active," Raz-Yaseef explained. "The temperature is still around 0 (Celsius), so microbial activity continues to take place, producing carbon and methane, but it's trapped beneath the surface ice. It can't be emitted, so it's stored throughout the winter." When the ice cap thaws in spring, the gases escape to the atmosphere. The scientists confirmed these observations in several ways. First, they measured soil temperatures every 10 cm down through the permafrost 1.5 meters below ground every 5 minutes year-round. Second, they extracted large, frozen soil cores that were transported to the laboratory for experiments. "(Berkeley Lab scientist) Yuxin Wu thawed a core under controlled conditions, but just like in nature," Raz-Yaseef said. "Just when the surface ice melted in the lab, he measured a large flux of carbon and methane. This is formed from gases trapped in the soil over winter." Separately, Berkeley Lab scientist Timothy Kneafsey took CT scans of hundreds of frozen cores and found gas-rich channels and pockets near the surface, which the scientists posited could serve as accumulation zones and pathways for gas flow.

Spring pulses not annual but may become more frequent

In fact, it's these channels and pockets that enable the emission pulses, the scientists conclude in their paper. The spring pulses were not detected in 2013 or 2016, and 2015 could not be measured due to equipment malfunction. Since it's not an annual event, they analyzed the conditions under which the pulses might occur. "The process is more likely to occur when there are events of rain on ice," Raz-Yaseef said. "When it thaws and freezes repeatedly, that produces cracks in the frozen soil, and through these cracks the gas can be emitted." During warmer years, the scientists expect the spring pulses to be more frequent. "We expect there will be more gas built up due to longer and warmer fall seasons and more frequent pulse events due to more rain on ice in the spring," Raz-Yaseef said. One thing that's for sure is that the scientists now know not to ignore gas emissions in early spring. "Now we'll put the towers up much, much earlier, just in case there's another event, we'll be sure to catch it," Raz-Yaseef said. "It's one of those things, once you're aware of it, you open your eyes and look for it." NGEE-Arctic is supported by the DOE Office of Science. ARM is a DOE Office of Science User Facility. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | March 23, 2016
Site: www.greencarcongress.com

Researchers at Oak Ridge National Laboratory (ORNL) have developed a new class of high-performance thermoplastic elastomers for cars and other consumer products by replacing the styrene in ABS (acrylonitrile, butadiene and styrene) with lignin, a brittle, rigid polymer that, with cellulose, forms the woody cell walls of plants. In doing so, they have invented a solvent-free production process that interconnects equal parts of nanoscale lignin dispersed in a synthetic rubber matrix to produce a meltable, moldable, ductile material that’s at least ten times tougher than ABS. The resulting thermoplastic—called ABL for acrylonitrile, butadiene, lignin—is recyclable, as it can be melted three times and still perform well. The results, published in the journal Advanced Functional Materials, may bring cleaner, cheaper raw materials to diverse manufacturers. The ORNL team created the new thermoplastic elastomers by introducing nanoscale-dispersed lignin (a biomass-derived phenolic oligomer) into nitrile rubber. They used temperature to control the miscibility between the lignin and the rubber, which enabled tuning the material’s morphology and performance. The sustainable product has unprecedented yield stress (15–45 MPa), strain hardens at large deformation, and has outstanding recyclability. ORNL’s Amit Naskar and co-inventor Chau Tran have filed a patent application for the process to make the new material. The technology could make use of the lignin-rich biomass byproduct stream from biorefineries and pulp and paper mills. With the prices of natural gas and oil dropping, renewable fuels can’t compete with fossil fuels, so biorefineries are exploring options for developing other economically viable products. Among cellulose, hemicellulose and lignin, the major structural constituents of plants, lignin is the most commercially underutilized. The ORNL study aimed to use it to produce, with an eye toward commercialization, a renewable thermoplastic with properties rivaling those of current petroleum-derived alternatives. Broadly, the researchers noted, solutions that link the biorefinery and pulping industries face four challenges. All lignins are not equal in terms of heat stability. To determine what type would make the best thermoplastic feedstock, the scientists evaluated lignin from wheat straw, softwoods like pine and hardwoods like oak. They found hardwood lignin is the most thermally stable, and some types of softwood lignins are also melt-stable. Next, the researchers needed to couple the lignin with soft matter. Chemists typically accomplish this by synthesizing polymers in the presence of solvents. Because lignin and a synthetic rubber containing acrylonitrile and butadiene, called nitrile rubber, both have chemical groups in which electrons are unequally distributed and therefore likely to interact, Naskar and Chau Tran (who performed melt-mixing and characterization experiments) instead tried to couple the two in a melted phase without solvents. In a heated chamber with two rotors, the researchers “kneaded” a molten mix of equal parts powdered lignin and nitrile rubber. During mixing, lignin agglomerates broke into interpenetrating layers or sheets of 10 to 200 nanometers that dispersed well in and interacted with the rubber.
Without the proper selection of a soft matrix and mixing conditions, lignin agglomerates are at least 10 times larger than those obtained with the ORNL process. The product that formed had properties of neither lignin nor rubber, but something in between, with a combination of lignin’s stiffness and nitrile rubber’s elasticity. By altering the acrylonitrile amounts in the soft matrix, the researchers hoped to improve the material’s mechanical properties further. They tried 33%, 41% and 51% acrylonitrile and found 41% gave an optimal balance between toughness and stiffness. Next, the researchers wanted to find out if controlling the processing conditions could improve the performance of their polymer alloy. For example, 33% acrylonitrile content produced a material that was stretchy but not strong, behaving more like rubber than plastic. At higher proportions of acrylonitrile, the researchers saw the materials strengthen because of the efficient interaction between the components. They also wanted to know at what temperature the components should be mixed to optimize the material properties. They found heating components between 140 and 160 degrees Celsius formed the desired hybrid phase. Using resources at ORNL including the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility, the scientists analyzed the morphologies of the blends. Scanning electron microscopy, performed by Chau Tran, explored the surfaces of the materials. Jihua Chen and Tran characterized soft matter phases using transmission electron microscopy, placing a thin slice of material in the path of an electron beam to reveal structure through contrast differences in the lignin and rubber phases. Small-angle x-ray scattering by Jong Keum revealed repeated clusters of certain domain or layer sizes. Fourier transform infrared spectroscopy identified chemical functional groups and their interactions. Future studies will explore different feedstocks, particularly those from biorefineries, and correlations among processing conditions, material structure and performance. Investigations are also planned to study the performance of ORNL’s new thermoplastic in carbon-fiber-reinforced composites. ORNL’s Technology Innovation Program, which reinvests royalties from the lab’s patents in innovative, commercially promising projects, sponsored the study. The researchers conducted polymer characterization experiments (microscopy and X-ray scattering) at the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility at ORNL.
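The observation that the 50/50 blend's behavior falls between the stiffness of lignin and the elasticity of the rubber is what classical composite bounds would suggest. The sketch below is not the paper's analysis; it simply evaluates the textbook Voigt and Reuss bounds for a two-phase blend, and the modulus values are assumed, order-of-magnitude placeholders rather than measurements of the actual lignin or nitrile rubber used.

```python
# Rough sketch, not the paper's analysis: Voigt (parallel) and Reuss (series)
# bounds on the stiffness of a 50/50 two-phase blend. Input moduli are
# assumed placeholders, not measured values for this material system.

def voigt(f1, e1, e2):
    """Upper bound: volume-weighted arithmetic mean of the two moduli."""
    return f1 * e1 + (1.0 - f1) * e2

def reuss(f1, e1, e2):
    """Lower bound: volume-weighted harmonic mean of the two moduli."""
    return 1.0 / (f1 / e1 + (1.0 - f1) / e2)

E_STIFF_PHASE = 3000.0  # MPa, assumed glassy lignin-like phase
E_SOFT_PHASE = 5.0      # MPa, assumed rubbery phase

print(f"Voigt upper bound: {voigt(0.5, E_STIFF_PHASE, E_SOFT_PHASE):.0f} MPa")
print(f"Reuss lower bound: {reuss(0.5, E_STIFF_PHASE, E_SOFT_PHASE):.1f} MPa")
# A well-mixed blend should land between these bounds, i.e. somewhere
# between the stiffness of the rigid phase and the softness of the rubber.
```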


News Article | November 2, 2015
Site: phys.org

Conceptual art connects the atomic underpinnings of the neutron-rich calcium-48 nucleus with the Crab Nebula, which has a neutron star at its heart. Zeros and ones depict the computational power needed to explore objects that differ in size by 18 orders of magnitude. Image credit: Oak Ridge National Laboratory, U.S. Dept. of Energy; conceptual art by LeJean Hardin and Andy Sproles.

An international team led by Gaute Hagen of the Department of Energy's Oak Ridge National Laboratory used America's most powerful supercomputer, Titan, to compute the neutron distribution and related observables of calcium-48, an isotope with an atomic nucleus consisting of 20 protons and 28 neutrons. Computing the nucleus from first principles revealed that the difference between the radii of neutron and proton distributions (called the "neutron skin") is considerably smaller than previously thought. The collaboration also made predictions for physical quantities that are current targets of precision measurements. These calculations also bear on the size of a neutron star, thereby connecting objects that differ in size by 18 orders of magnitude. The work is published in the journal Nature Physics. "We built a nucleus from scratch from its basic constituents—protons and neutrons," said Hagen, a theoretical physicist who initiated and led the project with an award from DOE's Office of Science Early Career Research Program. "To solve this strongly correlated system of 48 nucleons is far from trivial, because it is a complicated quantum mechanical many-body problem. Many things had to come together—accurate nuclear forces, sophisticated computational algorithms and a powerful tool such as Titan at ORNL—to achieve these results." While the distribution of electric charge inside the atomic nucleus is well known from experiments involving electron scattering, the distribution of the neutrons, which have no electric charge, is difficult to measure. In the nucleus of calcium-48, which has eight more neutrons than protons, the neutron distribution extends beyond the charge distribution and thereby sets the actual size of this nucleus. "This is the first really reliable calculation of such a massive nucleus from first principles," said Hagen. "We reproduced basic observables for the first time by building this nucleus from scratch. We've answered a basic question—what is the size of the atomic nucleus?" The ORNL-led team calculated radii, binding energies and dipole polarizabilities for helium-4, oxygen-16 and calcium-40 and accurately reproduced measurements of these isotopes. "That was a quality check of our calculation," Hagen said. "It gives us confidence about our predictions for calcium-48." The team, which included partners from the University of Tennessee, Michigan State University, Chalmers University of Technology (Sweden), TRIUMF (Canada), Hebrew University (Israel), Technical University Darmstadt (Germany), University of Oslo (Norway) and University of Trento (Italy), turned to Titan, a Cray XK7 system at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at ORNL. The project ran the nuclear structure code NUCCOR (Nuclear Coupled Cluster at Oak Ridge) for about 15 million CPU hours, allocated through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. Researchers at Jefferson Lab are preparing to measure the neutron radius of calcium-48.
A team of physicists, the Darmstadt-Osaka Collaboration, has already measured its dipole polarizability and is analyzing the results. The findings could validate the work of Hagen's team and constrain future theoretical models. The title of the Nature Physics paper is "Neutron distribution, electric dipole polarizability and weak form factor of the nucleus 48Ca." Explore further: Physicists pin down the proton-halo state in Fluorine-17 More information: Neutron distribution, electric dipole polarizability and weak form factor of the nucleus 48Ca, Nature Physics, DOI: 10.1038/nphys3529
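For readers unfamiliar with the term, the "neutron skin" discussed above is conventionally defined from the root-mean-square radii of the point-neutron and point-proton distributions. The expression below is the standard textbook definition, not a formula reproduced from the paper itself:

```latex
% Standard definition of the neutron skin: the difference between the
% point-neutron and point-proton rms radii, each computed from the
% corresponding density distribution rho_q(r), q = n or p.
\[
  R_{\mathrm{skin}} = \sqrt{\langle r^{2}\rangle_{n}} - \sqrt{\langle r^{2}\rangle_{p}},
  \qquad
  \langle r^{2}\rangle_{q} = \frac{\int r^{2}\,\rho_{q}(\mathbf{r})\,d^{3}r}{\int \rho_{q}(\mathbf{r})\,d^{3}r},
  \quad q \in \{n, p\}.
\]
```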


News Article | February 15, 2017
Site: www.materialstoday.com

Ever since Italian physicist Alessandro Volta invented the first battery out of a stack of copper and zinc disks separated by moistened cardboard, scientists have been searching for better battery materials. Lithium-ion batteries, which are light, long-lasting and operate under a wide range of temperatures, now power everything from cell phones to aircraft carriers to electric cars. The ubiquity of lithium-ion batteries makes their stability, efficiency and safety important for businesses and consumers alike. One of the main challenges researchers face in improving lithium-ion batteries is finding novel, nonflammable materials for the electrolyte. The electrolyte is the crucial battery component that shuttles lithium ions during charging and discharging, thereby transferring energy. Scientists are continually looking for novel electrolytes that are not only stable but also conductive to lithium ions, a property that lithium-ion batteries require to maintain efficiency during charge cycles. A team led by Thomas Miller at the California Institute of Technology (Caltech) has now used the Cray XK7 Titan supercomputer at the US Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) to identify potential electrolyte materials and predict which ones could enhance the performance of lithium-ion batteries. Using Titan, the researchers ran hundreds of simulations – each comprising thousands of atoms – on new electrolyte candidates. This work, which is reported in a paper in the Journal of Physical Chemistry Letters, led them to identify new electrolytes with promising properties for lithium-ion conduction. According to Miller, a professor of chemistry at Caltech and principal investigator on the project, a leadership-class supercomputer was essential to meeting the project's goals. This is because the simulations ran on timescales that ranged from a femtosecond (one quadrillionth of a second) up to a microsecond (one millionth of a second), spanning nine orders of magnitude. "These calculations are extremely demanding in terms of computational resources," he explained. "We are dealing with – from a molecular perspective – very big systems and long timescales." In order to screen multiple candidate electrolytes, the team needed to rapidly describe a range of complex materials. Luckily, Titan – part of the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL – enabled them to do just that. All batteries contain an electrolyte, a liquid or solid material that blocks the flow of electrons but promotes the flow of ions between the anode and cathode, which are the two electrodes that conduct electrical current. Electrons instead leave the anode and travel through an external circuit, allowing them to power a device on the way to the cathode, the battery's positive electrode. When the electrons leave the anode, the positively-charged ions (lithium ions) travel across the electrolyte to the cathode. This process continues until the reactants are depleted (meaning the battery loses its charge) or the circuit is disconnected. In rechargeable batteries the cycle can be reversed, with the lithium ions traveling back to the anode during charging. The lithium ions are conserved during charging and discharging, continually switching back and forth between the electrodes. Typically, lithium-ion batteries feature liquid electrolytes, but new research is focusing on polymer-based electrolytes, which are known to be more stable, less flammable and less volatile.
Historically, the best polymer electrolyte for lithium-ion batteries has been polyethylene oxide (PEO), a versatile polymer with various applications in both medicine and science. The addition of lithium salts to polymers like PEO allows them to be used as solid polymer electrolytes; the salt contains both lithium cations and some negatively-charged anions to balance the charge. An ideal electrolyte is one that readily dissolves and then conducts lithium ions. The problem with PEO is that it conducts lithium ions poorly in comparison to liquid electrolytes. This means the ions travel slowly to the cathode, limiting the current the battery can produce. PEO also conducts the anions too quickly; while the anions are useful for balancing the charge of lithium cations, their rapid conduction produces a loss in battery voltage. Using this knowledge, Miller's team began a hunt for more efficient polymers. Brett Savoie, a postdoctoral fellow at Caltech, thought that reversing the typical solid polymer electrolyte dynamic could help Miller's team find more ideal polymers. "You want to conduct the positive lithium ions," Miller said. "You don't want to conduct the negative ions in the salt." The team set out in search of polymers that would conduct lithium ions more quickly than PEO. Using high-performance computing, Miller's team created the chemically-specific dynamic bond percolation model, a coarse-grained simulation that was developed and validated by the group, to screen electrolyte materials based on short molecular dynamics trajectories. They first screened a set of 500 diverse classes of polymers to find ones that were better conductors of lithium ions. One group of polymers, in particular, seemed to fit the bill: Lewis-acidic polymers. Lewis-acidic molecules hold a positive charge and strongly interact with anions. Miller's team designed simulations using Lewis-acidic polymers as the electrolyte in the hope they would slow down anion conduction. These polymers, Miller said, had not been simulated or studied experimentally before. In their simulation, the team found that this class of polymers not only conducted the anions more slowly than PEO but also conducted the positive lithium ions more quickly. Because Lewis-acidic chemical groups' positive regions are contained in a small amount of space and their negative regions are spread out over a large amount of space, the positive lithium ions have more opportunities to dissolve. "It was known that Lewis-acidic molecules slowed down anions," Savoie said. "What was surprising here was that by using a purely Lewis-acidic system, we also sped up the lithium." The simulations indicated that these polymers may be capable of producing an eight-fold increase in desired lithium conduction and a marked decrease in the unwanted anion conduction. This would be – given the historically slow pace of discovering new polymer materials – a very large jump. The team tracked the molecular evolution on timescales that ranged from a femtosecond to a microsecond. The ability to span this vast range of timescales was made possible with Titan, which can compute at a rate of 27 petaflops, or 27 quadrillion calculations per second. The team used LAMMPS, an open-source classical molecular dynamics code, to run its simulations, looking at several dozen polymer-salt combinations under different salt concentrations. Around 400 simulations at a time were run in parallel. 
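One standard way to turn short molecular dynamics trajectories like these into a lithium-conduction metric, independent of the team's own coarse-grained dynamic bond percolation model, is the Einstein relation, which extracts a diffusion coefficient from the mean-squared displacement of the ions. The sketch below assumes a hypothetical array of unwrapped Li-ion coordinates and is illustrative only, not the group's code.

```python
# Generic sketch (not the team's dynamic bond percolation model): estimating
# a lithium-ion diffusion coefficient from an MD trajectory via the Einstein
# relation, MSD(t) ~ 6 D t. The trajectory array here is a placeholder.
import numpy as np

def diffusion_coefficient(positions, dt):
    """positions: array of shape (n_frames, n_ions, 3) of unwrapped Li coordinates.
    dt: time between frames. Returns D from the slope of MSD versus time."""
    disp = positions - positions[0]                # displacement from the first frame
    msd = (disp ** 2).sum(axis=2).mean(axis=1)     # mean-squared displacement per frame
    times = np.arange(len(msd)) * dt
    slope = np.polyfit(times[1:], msd[1:], 1)[0]   # linear fit, skipping t = 0
    return slope / 6.0

# Placeholder random-walk trajectory of 50 ions over 1000 frames.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(scale=0.1, size=(1000, 50, 3)), axis=0)
print(diffusion_coefficient(traj, dt=1.0))
```

A faster-conducting candidate shows a steeper MSD slope for the lithium ions (and, ideally, a shallower one for the anions), which is the kind of contrast the screening described above looks for.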
Each simulation consisted of around 3000 atoms periodically replicated in three-dimensional space to create the effect of a bulk polymer material, with a certain concentration of ions per unit of periodic replication. Though Miller's team is continuing to screen promising polymer sequences with the goal of completing its 5000th candidate electrolyte by the end of 2017, the project has already led to the identification of polymers that may favor lithium-ion conduction. "These new polymers are exciting because they seem to overcome some of the main problems with other polymer materials," Miller said. "The predictions indicate that these polymers might exhibit a substantial increase in conductivity. It would be a tremendous improvement from the current lithium-ion conductivity that PEO affords." The researchers continue to run their simulations on Titan under an Innovative and Novel Computational Impact on Theory and Experiment award, for which they've been allocated 40 million core hours. According to Miller, next-generation supercomputers like the OLCF's Summit, scheduled to come online in 2018, will greatly expand their research capabilities, allowing his team to explore even larger areas of chemical space. "With faster computers we'll be able to do this with even better accuracy," he said. "We'll also be able to look at more polymers more reliably and on longer timescales. Improved computers are going to rapidly accelerate the pace of discovery for materials of this kind." This story is adapted from material from ORNL, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier. Link to original source.
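One simple way to turn short molecular dynamics trajectories into a screening number is the Einstein relation, in which the self-diffusion coefficient of each ion species is the long-time slope of its mean-squared displacement divided by 6. The sketch below is illustrative only and is not the chemically-specific dynamic bond percolation model used by the team; it assumes unwrapped ion coordinates are already available as NumPy arrays, and the array names are hypothetical.

```python
# Illustrative screening metric: ionic self-diffusion from the Einstein relation,
# D = lim_{t->inf} MSD(t) / (6 t) in three dimensions. Assumes unwrapped coordinates;
# this is not the dynamic bond percolation model described in the article.
import numpy as np

def self_diffusion(positions: np.ndarray, dt_fs: float, fit_fraction: float = 0.5) -> float:
    """positions: array of shape (n_frames, n_ions, 3) in angstroms, unwrapped.
    dt_fs: time between frames in femtoseconds.
    Returns D in cm^2/s from a linear fit to the late-time part of the MSD."""
    disp = positions - positions[0]                      # displacement from the first frame
    msd = (disp ** 2).sum(axis=2).mean(axis=1)           # average over ions -> MSD(t) in A^2
    t = np.arange(len(msd)) * dt_fs                      # time axis in fs
    start = int(len(msd) * (1.0 - fit_fraction))         # fit only the late-time, diffusive regime
    slope, _ = np.polyfit(t[start:], msd[start:], 1)     # slope in A^2 / fs
    d_a2_per_fs = slope / 6.0
    return d_a2_per_fs * 1e-16 / 1e-15                   # convert A^2/fs -> cm^2/s

# A candidate looks promising if lithium diffuses faster and the anion slower than in PEO:
# ratio = self_diffusion(li_traj, dt_fs) / self_diffusion(anion_traj, dt_fs)
```

Comparing the lithium and anion diffusion coefficients obtained this way against a PEO baseline gives exactly the kind of ranking described above: a promising candidate speeds up the lithium while slowing down the anion.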


News Article | November 14, 2016
Site: www.cemag.us

Scientists at Rice University have discovered that an atom-thick material being eyed for flexible electronics and next-generation optical devices is more brittle than they expected. The Rice team led by materials scientist Jun Lou tested the tensile strength of two-dimensional, semiconducting molybdenum diselenide and discovered that flaws as small as one missing atom can initiate catastrophic cracking under strain. The team’s report appears this month in Advanced Materials. The finding may cause industry to look more carefully at the properties of 2D materials before incorporating them in new technologies, he says. “It turns out not all 2D crystals are equal,” says Lou, a Rice professor of materials science and nanoengineering. “Graphene is a lot more robust compared with some of the others we’re dealing with right now, like this molybdenum diselenide. We think it has something to do with defects inherent to these materials.” The defects could be as small as a single atom that leaves a vacancy in the crystalline structure, he said. “It’s very hard to detect them,” he says. “Even if a cluster of vacancies makes a bigger hole, it’s difficult to find using any technique. It might be possible to see them with a transmission electron microscope, but that would be so labor-intensive that it wouldn’t be useful.” Molybdenum diselenide is a dichalcogenide, a two-dimensional semiconducting material that appears as a graphene-like hexagonal array from above but is actually a sandwich of metallic atoms between two layers of chalcogen atoms, in this case, selenium. Molybdenum diselenide is being considered for use as transistors and in next-generation solar cells, photodetectors and catalysts as well as electronic and optical devices. Lou and colleagues measured the material’s elastic modulus, a measure of stiffness that describes how much stress is needed to stretch the material elastically, at 177.2 (plus or minus 9.3) gigapascals. Graphene’s modulus is more than five times higher. They attributed the large variation in these measurements to pre-existing flaws of between 3.6 and 77.5 nanometers. Its fracture strength, the stress the material can withstand before breaking, was measured at 4.8 (plus or minus 2.9) gigapascals. Graphene is nearly 25 times stronger. Part of the project led by Rice postdoctoral researcher Yingchao Yang required moving molybdenum diselenide from a growth chamber in a chemical vapor deposition furnace to a microscope without introducing more defects. Yang solved the problem using a dry transfer process in place of a standard acid washing that would have ruined the samples. To test samples, Yang placed rectangles of molybdenum diselenide onto a sensitive electron microscope platform invented by the Lou group. Natural van der Waals forces held the samples in place on springy cantilever arms that measured the applied stress. Lou said the group attempted to measure the material’s fracture toughness, an indicator of how likely cracks are to propagate, as they had in an earlier study on graphene. But they found that pre-cutting cracks into molybdenum diselenide resulted in it shattering before stress could be applied, he says. “The important message of this work is the brittle nature of these materials,” Lou says. “A lot of people are thinking about using 2D crystals because they’re inherently thin. They’re thinking about flexible electronics because they are semiconductors and their theoretical elastic strength should be very high. According to our calculations, they can be stretched up to 10 percent. 
“But in reality, because of the inherent defects, you rarely can achieve that much strength. The samples we have tested so far broke at 2 to 3 percent (of the theoretical maximum) at most,” Lou says. “That should still be fine for most flexible applications, but unless they find a way to quench the defects, it will be very hard to achieve the theoretical limits.” Co-authors of the paper are Rice graduate students Emily Hacopian, Weibing Chen, Jing Zhang, and Bo Li, alumna Yongji Gong and Pulickel Ajayan, chair of Rice’s Department of Materials Science and NanoEngineering, the Benjamin M. and Mary Greenwood Anderson Professor in Engineering and a professor of chemistry; Xing Li of Rice and Peking University, China; Minru Wen of Tsinghua University, China, and the Georgia Institute of Technology; Wu Zhou of Oak Ridge National Laboratory, Oak Ridge, Tenn.; Qing Chen of Peking University; and Ting Zhu of the Georgia Institute of Technology. The research was supported by the Air Force Office of Scientific Research, the Welch Foundation, the Department of Energy Office of Basic Energy Sciences, the National Science Foundation and the National Science Foundation of China.
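The reported numbers can be cross-checked with a back-of-envelope calculation: the strain at failure is roughly the fracture strength divided by the elastic modulus, and a Griffith-type relation links that strength to a critical flaw size. The sketch below is not the authors' analysis; the surface (edge) energy value is an assumed placeholder chosen only to show the order of magnitude.

```python
# Back-of-envelope check of the reported MoSe2 numbers (not the authors' analysis).
import math

E = 177.2e9        # elastic modulus, Pa (reported)
sigma_f = 4.8e9    # fracture strength, Pa (reported)

# Strain at failure ~ strength / modulus (linear elasticity up to fracture)
strain_at_break = sigma_f / E
print(f"strain at break ~ {100 * strain_at_break:.1f} %")   # ~2.7 %, matching the quoted 2 to 3 percent

# Griffith estimate of the critical flaw half-length a for a centre crack (plane stress):
# sigma_f = sqrt(2 E gamma / (pi a))  ->  a = 2 E gamma / (pi sigma_f^2)
gamma = 1.5        # J/m^2, assumed surface/edge energy (placeholder, order-of-magnitude only)
a = 2 * E * gamma / (math.pi * sigma_f ** 2)
print(f"implied flaw size ~ {1e9 * 2 * a:.0f} nm")          # falls inside the reported 3.6-77.5 nm range
```

With these assumed inputs the implied flaw size comes out around 15 nm, comfortably inside the 3.6 to 77.5 nanometer range quoted above, which illustrates why nanometre-scale defects are enough to explain the measured shortfall from the theoretical strength.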


News Article | March 23, 2016
Site: www.rdmag.com

Your car's bumper is probably made of a moldable thermoplastic polymer called ABS, shorthand for its acrylonitrile, butadiene and styrene components. Light, strong and tough, it is also the stuff of ventilation pipes, protective headgear, kitchen appliances, Lego bricks and many other consumer products. Useful as it is, one of its drawbacks is that it is made using chemicals derived from petroleum. Now, researchers at the Department of Energy's Oak Ridge National Laboratory have made a better thermoplastic by replacing styrene with lignin, a brittle, rigid polymer that, with cellulose, forms the woody cell walls of plants. In doing so, they have invented a solvent-free production process that interconnects equal parts of nanoscale lignin dispersed in a synthetic rubber matrix to produce a meltable, moldable, ductile material that's at least ten times tougher than ABS. The resulting thermoplastic—called ABL for acrylonitrile, butadiene, lignin—is recyclable, as it can be melted three times and still perform well. The results, published in the journal Advanced Functional Materials, may bring cleaner, cheaper raw materials to diverse manufacturers. "The new ORNL thermoplastic has better performance than commodity plastics like ABS," said senior author Amit Naskar in ORNL's Materials Science and Technology Division, who along with co-inventor Chau Tran has filed a patent application for the process to make the new material. "We can call it a green product because 50 percent of its content is renewable, and technology to enable its commercial exploitation would reduce the need for petrochemicals." The technology could make use of the lignin-rich biomass byproduct stream from biorefineries and pulp and paper mills. With the prices of natural gas and oil dropping, renewable fuels can't compete with fossil fuels, so biorefineries are exploring options for developing other economically viable products. Among cellulose, hemicellulose and lignin, the major structural constituents of plants, lignin is the most commercially underutilized. The ORNL study aimed to use it to produce, with an eye toward commercialization, a renewable thermoplastic with properties rivaling those of current petroleum-derived alternatives. To produce an energy-efficient method of synthesizing and extruding high-performance thermoplastic elastomers based on lignin, the ORNL team needed to answer several questions: Can variations in lignin feedstocks be overcome to make a product with superior performance? Can lignin integrate into soft polymer matrices? Can the chemistry and physics of lignin-derived polymers be understood to enable better control of their properties? Can the process to produce lignin-derived polymers be engineered? "Lignin is a very brittle natural polymer, so it needs to be toughened," explained Naskar, leader of ORNL's Carbon and Composites group. A major goal of the group is producing industrial polymers that are strong and tough enough to be deformed without fracturing. "We need to chemically combine soft matter with lignin. That soft matrix would be ductile so that it can be malleable or stretchable. Very rigid lignin segments would offer resistance to deformation and thus provide stiffness." All lignins are not equal in terms of heat stability. To determine what type would make the best thermoplastic feedstock, the scientists evaluated lignin from wheat straw, softwoods like pine and hardwoods like oak. 
They found hardwood lignin is the most thermally stable, and some types of softwood lignins are also melt-stable. Next, the researchers needed to couple the lignin with soft matter. Chemists typically accomplish this by synthesizing polymers in the presence of solvents. Because lignin and a synthetic rubber containing acrylonitrile and butadiene, called nitrile rubber, both have chemical groups in which electrons are unequally distributed and therefore likely to interact, Naskar and Chau Tran (who performed melt-mixing and characterization experiments) instead tried to couple the two in a melted phase without solvents. In a heated chamber with two rotors, the researchers "kneaded" a molten mix of equal parts powdered lignin and nitrile rubber. During mixing, lignin agglomerates broke into interpenetrating layers or sheets of 10 to 200 nanometers that dispersed well in and interacted with the rubber. Without the proper selection of a soft matrix and mixing conditions, lignin agglomerates are at least 10 times larger than those obtained with the ORNL process. The product that formed had properties of neither lignin nor rubber, but something in between, with a combination of lignin's stiffness and nitrile rubber's elasticity. By altering the acrylonitrile amounts in the soft matrix, the researchers hoped to improve the material's mechanical properties further. They tried 33, 41 and 51 percent acrylonitrile and found 41 percent gave an optimal balance between toughness and stiffness. Next, the researchers wanted to find out if controlling the processing conditions could improve the performance of their polymer alloy. For example, 33 percent acrylonitrile content produced a material that was stretchy but not strong, behaving more like rubber than plastic. At higher proportions of acrylonitrile, the researchers saw the materials strengthen because of the efficient interaction between the components. They also wanted to know at what temperature the components should be mixed to optimize the material properties. They found heating components between 140 and 160 degrees Celsius formed the desired hybrid phase. Using resources at ORNL including the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility, the scientists analyzed the morphologies of the blends. Scanning electron microscopy, performed by Chau Tran, explored the surfaces of the materials. Jihua Chen and Tran characterized soft matter phases using transmission electron microscopy, placing a thin slice of material in the path of an electron beam to reveal structure through contrast differences in the lignin and rubber phases. Small-angle x-ray scattering by Jong Keum revealed repeated clusters of certain domain or layer sizes. Fourier transform infrared spectroscopy identified chemical functional groups and their interactions. Future studies will explore different feedstocks, particularly those from biorefineries, and correlations among processing conditions, material structure and performance. Investigations are also planned to study the performance of ORNL's new thermoplastic in carbon-fiber-reinforced composites. "More renewable materials will probably be used in the future," Naskar said. "I'm glad that we could continue work in renewable materials, not only for automotive applications but even for commodity usage." The title of the paper is "A New Class of Renewable Thermoplastics with Extraordinary Performance from Nanostructured Lignin-Elastomers."


News Article | January 20, 2016
Site: news.mit.edu

Two MIT researchers have developed a thin-film material whose phase and electrical properties can be switched between metallic and semiconducting simply by applying a small voltage. The material then stays in its new configuration until switched back by another voltage. The discovery could pave the way for a new kind of “nonvolatile” computer memory chip that retains information when the power is switched off, and for energy conversion and catalytic applications. The findings, reported in the journal Nano Letters in a paper by MIT materials science graduate student Qiyang Lu and associate professor Bilge Yildiz, involve a thin-film material called a strontium cobaltite, or SrCoO . Usually, Yildiz says, the structural phase of a material is controlled by its composition, temperature, and pressure. “Here for the first time,” she says, “we demonstrate that electrical bias can induce a phase transition in the material. And in fact we achieved this by changing the oxygen content in SrCoO .” “It has two different structures that depend on how many oxygen atoms per unit cell it contains, and these two structures have quite different properties,” Lu explains. One of these configurations of the molecular structure is called perovskite, and the other is called brownmillerite. When more oxygen is present, it forms the tightly-enclosed, cage-like crystal structure of perovskite, whereas a lower concentration of oxygen produces the more open structure of brownmillerite. The two forms have very different chemical, electrical, magnetic, and physical properties, and Lu and Yildiz found that the material can be flipped between the two forms with the application of a very tiny amount of voltage — just 30 millivolts (0.03 volts). And, once changed, the new configuration remains stable until it is flipped back by a second application of voltage. Strontium cobaltites are just one example of a class of materials known as transition metal oxides, which is considered promising for a variety of applications including electrodes in fuel cells, membranes that allow oxygen to pass through for gas separation, and electronic devices such as memristors — a form of nonvolatile, ultrafast, and energy-efficient memory device. The ability to trigger such a phase change through the use of just a tiny voltage could open up many uses for these materials, the researchers say. Previous work with strontium cobaltites relied on changes in the oxygen concentration in the surrounding gas atmosphere to control which of the two forms the material would take, but that is inherently a much slower and more difficult process to control, Lu says. “So our idea was, don’t change the atmosphere, just apply a voltage.” “Voltage modifies the effective oxygen pressure that the material faces,” Yildiz adds. To make that possible, the researchers deposited a very thin film of the material (the brownmillerite phase) onto a substrate, for which they used yttrium-stabilized zirconia. In that setup, applying a voltage drives oxygen atoms into the material. Applying the opposite voltage has the reverse effect. To observe and demonstrate that the material did indeed go through this phase transition when the voltage was applied, the team used a technique called in-situ X-ray diffraction at MIT’s Center for Materials Science and Engineering. The basic principle of switching this material between the two phases by altering the gas pressure and temperature in the environment was developed within the last year by scientists at Oak Ridge National Laboratory. 
“While interesting, this is not a practical means for controlling device properties in use,” says Yildiz. With their current work, the MIT researchers have enabled the control of the phase and electrical properties of this class of materials in a practical way, by applying an electrical charge. In addition to memory devices, the material could ultimately find applications in fuel cells and electrodes for lithium ion batteries, Lu says. “Our work has fundamental contributions by introducing electrical bias as a way to control the phase of an active material, and by laying the basic scientific groundwork for such novel energy and information processing devices,” Yildiz adds. In ongoing research, the team is working to better understand the electronic properties of the material in its different structures, and to extend this approach to other oxides of interest for memory and energy applications, in collaboration with MIT professor Harry Tuller. José Santiso, the nanomaterials growth division leader at the Catalan Institute of Nanoscience and Nanotechnology in Barcelona, Spain, who was not involved in this research, calls it “a very significant contribution” to the study of this interesting class of materials, and says “it paves the way for the application of these materials both in solid state electrochemical devices for the efficient conversion of energy or oxygen storage, as well as in possible applications in a new kind of memory devices.” The work was supported by the National Science Foundation.
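Yildiz's remark that voltage modifies the effective oxygen pressure can be made semi-quantitative with the Nernst relation for the O2/O2- couple: a bias V shifts the effective oxygen partial pressure by a factor of exp(4FV/RT). The sketch below is a rough estimate, not a calculation from the paper, and the 300 °C operating temperature is an assumption rather than a figure taken from the article.

```python
# Rough Nernst estimate of how a small bias shifts the effective oxygen pressure
# seen by the film (illustrative only; the 300 C operating temperature is an assumption,
# not a number taken from the article).
import math

F = 96485.0        # Faraday constant, C/mol
R = 8.314          # gas constant, J/(mol K)

V = 0.030          # applied bias, volts (the 30 mV quoted in the article)
T = 300.0 + 273.15 # assumed operating temperature, K

# O2 + 4e- -> 2 O2-: a bias V shifts the oxygen chemical potential by 4FV per mole of O2,
# i.e. multiplies the effective oxygen partial pressure by exp(4FV/RT).
ratio = math.exp(4 * F * V / (R * T))
print(f"effective pO2 changes by a factor of ~{ratio:.0f}")   # roughly an order of magnitude
```

Even a 30 millivolt bias therefore corresponds to roughly an order-of-magnitude swing in effective oxygen pressure at such temperatures, which is why a small voltage can stand in for a large change in gas atmosphere.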


News Article | December 16, 2015
Site: www.rdmag.com

As the year comes to a close, it’s only natural to speculate about what lies ahead. In the laboratory world, the “lab of the future” is always a hot topic. Augmented reality, robots as lab personnel, the complete elimination of paper—these subjects are all common discussion points around this time of year. But the fact is, we’re still years away from that version of a laboratory. Once the clock strikes 12:00 a.m. on Jan. 1, 2016, labs around the world will remain mostly unchanged. Spectrometers and chromatographs will still take up bench space, freezers and refrigerators will still be noisy and the lighting will still not be quite right. But that’s not to say the lab of 2016 is without its opportunities—and challenges. The discussion around the revolutionary CRISPR-Cas9 genome editing technique is expected to continue as scientists work to improve the system daily. Refined approaches and breakthroughs occurred frequently in 2015, even as scientists called for a moratorium on the controversial system. While there will always be scientific challenges in a laboratory, participants in a recent panel titled “Eye on the Future” at the R&D 100 Awards & Technology Conference pointed to two less traditional challenges the science and technology industry must overcome. Daryl Belock, VP of Innovation and R&D Collaboration at Thermo Fisher Scientific, said one of the biggest challenges in coming up with the next generation of technology is finding the right talent to do so. While Thomas Mason, Director of Oak Ridge National Laboratory (ORNL), and Dean Kamen, entrepreneur and inventor, agreed with Belock, they also pointed to another hardship facing the science industry—poor public perception. Challenge 1: Scientific workforce According to the National Science Board’s (NSB) biennial Science and Engineering Indicators 2014 report, the science and engineering (S&E) workforce has grown steadily over time. Between 1960 and 2011, the number of workers in S&E occupations grew at an average annual rate of 3.3 percent, greater than the 1.5 percent growth rate of the total workforce. Additionally, during the 2007-2009 recession, S&E employment compared favorably to overall employment trends. While the idea of a shortage in the field has been pervasive recently, the numbers seem to indicate otherwise. It’s not so much finding employees, it’s finding the right employee with the right skill set. “There always seems to be a small group of very talented people that stimulate breakthroughs in innovation,” said Belock. “It’s very easy to fall into a short-term vision of the future, where teams get into incrementalism in terms of what they are bringing to the market. Taking a longer-term view and developing talent, and rare talent, is critical.” Terry Adams, Shimadzu Scientific’s Vice President of Marketing, agrees that a long-term vision, and cultivating a culture that is committed to the long-term, is half the hiring battle these days. “We’re at a stage where we need experienced people, but they come with history and baggage based on their past jobs or what they perceive this job to be,” Adams told Laboratory Equipment. “[New employees] are surprised about our patience, how deliberate we are, how much planning we do—we work off of a very deliberate 9-year plan in 3-year increments. We’re not driven quarter to quarter or half to half. We’re thinking year to year.” Like every other industry, the aging of the baby boomer generation will have/is having significant implications in science and technology. 
Five years ago, the oldest baby boomers were 60, and still a productive part of the workforce. In fact, according to the NSB report, between 1993 and 2010, increasing percentages of scientists and engineers in their 60s reported they were still in the labor force. Whereas 54 percent of scientists and engineers between the ages of 60 and 69 were employed in 1993, the comparable percentage rose to 63 percent in 2010. However, more recent estimates from the Bureau of Labor Statistics indicate that the oldest baby boomers—of which about 10,000 turn 65 years old every day—have begun the wave of retirements, significantly shifting the country’s age demographics “Fifty percent of our staff has been with us less than 10 years,” said ORNL’s Mason. “It’s tremendous [and] exciting, lots of new people and ideas coming in. But how can we create in them the kind of culture that will allow us to continue to push the envelope? That’s probably the thing I feel the greatest need around.” In industry specifically, a popular approach to hiring young staff members is to train them as early as possible, even while they are still in school. For example, Shimadzu (and other equipment manufacturers) have established Centers of Excellence in research-heavy universities. These centers are typically gifted with millions of dollars of a manufacturer’s state-of-the-art equipment. Students working in the center are then heavily exposed to the company’s equipment and techniques well before they look to enter the workforce. “[Because] the students are already being trained on our instruments, we can pluck them right out of school to start the lower-tier jobs and move everyone else up through the ranks,” Shimadzu’s Adams explained. “We try to grow our own talent. [The students] come in already knowing our technology and science, now it’s time for them to understand the business world. If we can move them up through the corporate ladder, they do fit the culture. They are already a part of it.” Adams also commented that, recently, the pool of qualified candidates has featured more women and minorities than in the past. According to the 2014 NSB report, women remain underrepresented in the S&E workforce, but to a lesser degree than in the past. Additionally, the number and percentage of undergraduate and master’s degrees in most major S&E fields has increased for both woman and minorities since 2000. “The only thing that has yet to be commoditized, the only thing that will add value is innovation,” said Kamen, the inventor of the insulin pump and Segway. “To do that innovation, more than ever, will require people. Smart people with vision and courage that will focus on the right things.” Challenge 2: Public perception The general public and scientists express strikingly different views on science-related issues, according to a Pew Research Center report that was published January 2015. The report found significant differences in views on 13 science-related issues, the largest of which include: •    A 51-percentage point gap between scientists and the public about the safety of eating genetically modified foods– 88% of American Association for the Advancement of Science (AAAS) scientists think eating GM food is safe, while 37% of the public believes that. •    42-percentage point gap over the issue of using animals in research– 89% of scientists favor it, while 47% of the public backs the idea. 
•    40-percentage point gap on the question of whether it is safe to eat foods grown with pesticides– 68% of scientists say that it is, compared with 28% of citizens. •    37-percentage point gap over whether climate change is mostly caused by human activity– 87% of AAAS scientists say it is, while 50% of the public does. “Scientists get an A+ for the accelerated rate at which technology is happening, but we get a C- for engaging the global public in understanding why it’s so important, why we need to keep moving with it and how we can do it in a responsible way,” said Kamen. “If we don’t deliver that clear message over and over again until the world accepts it, we’re headed for a train wreck.” Overall, the report—based on two surveys performed in collaboration with AAAS—suggests science holds an esteemed place among citizens and professionals, but both groups are less upbeat about the scientific enterprise than they were in 2009—a concerning fact, and something to monitor closely in the coming years. From the public perspective, the trust decline could be due to a series of high-profile scientific papers found to be fraudulent, the most popular of which was a study claiming “acid baths” offered an easy pathway to the generation of new stem cells called STAP. The papers, along with the authors from RIKEN, immediately gained notoriety. However, no one could duplicate the results, and complaints started forming just a few days after the papers’ release in January 2014. Six months later, Nature retracted the papers, citing critical errors and falsified data. But the damage was already done. News of this “amazing” new stem cell had already hit mainstream media, and the scandal—which saw one co-author take his own life—became a story all to its own. “One of the concerns I have is you have instances where people have actually been doing deceitful things, and that gets discovered—in a sense you can say that is the system working or self-correcting. However, then you juxtapose that against some of the areas of science that are politically or social controversial, like climate change, for example. In the minds of the general public, it’s hard to extinguish fraudulent behavior. If you’re not an expert in the field, it is extremely difficult to distinguish those cases,” said Mason. Despite that, the Pew Research Center found there is still broad public support for government investment in scientific research. Seven in 10 adults say government investments in engineering and technology and basic scientific research usually pay off in the long run. Some 61 percent say government investment is essential for scientific progress, while 34 percent believe private investment is enough to ensure that scientific progress is made. Opportunity: CRISPR CRISPR-Cas9 is a genetic engineering technique invented by molecular biologist Jennifer Doudna that is capable of quickly, easily and inexpensively performing precise changes in DNA. Cas9, a naturally occurring protein in the immune system of certain bacteria, acts like a pair of molecular scissors to precisely cut or edit specific sections of DNA. Invented in 2012, interest in the technique has increased dramatically in the past two years, prompting scientists to gather at a summit earlier this month to establish guidelines for its responsible use in the future. The main concern with CRISPR (and similar techniques) is that it can be used to perform germline genetic modifications, which means making changes in a human egg, sperm or embryo. 
These modifications would be passed down for generations, impacting an entire lineage rather than just one person. However, at the same time, CRISPR’s ability to edit human DNA could eliminate genetic diseases. Fortunately, the recent uptick of interest in the technology has been accompanied by dramatic improvements in the past year—including multiple enhancements in just the last few months. While Cas9 is highly efficient at cutting its target site, a major drawback of the system has been that, once inside a cell, it can bind to and cut additional sites that are not targeted. This has the potential to produce undesired edits that can alter gene expression or knock a gene out entirely, which could lead to the development of cancer or other problems. But, researchers at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT have devised a way to dramatically reduce “off-target editing” to undetectable levels. Using previous knowledge about the structure of the Cas9 enzyme, Feng Zhang and colleagues were able to predict that replacing some of the positively charged amino acids in the Cas9 protein with neutral ones would decrease the binding of “off-target” sequences much more than “on-target” sequences. After experimenting with various changes, Zhang’s team found that mutations in just three of approximately 1,400 amino acids dramatically reduced “off-target” cuts. The newly engineered enzyme, which the team calls enhanced S. pyogenes Cas9, or eSpCas9, will be useful for genome editing applications that require a high level of specificity. The lab has made the eSpCas9 enzyme available for researchers worldwide. “We hope the development of eSpCas9 will help address some of [safety] concerns [related to off-target effects],” said Zhang, who was a speaker at the International Summit on Human Gene Editing. “But we certainly don’t see this as a magic bullet. The field is advancing at a rapid pace, and there is still a lot to learn before we can consider applying this technology for clinical use.” Additional research out of Harvard’s Wyss Institute has recently added another layer of safety to CRISPR with the ability to reverse the spread of imposed genetic traits. George Church and Kevin Esvelt developed molecular confinement mechanisms to prevent gene drives from functioning in the wild by manipulating CRISPR’s biological components. By separating the guide RNA and the Cas9 protein so they are not encoded together in the same organism, or by inserting an artificial sequence into the target gene, gene drives can only be activated in lab organisms. According to Harvard, using this safeguard, essentially any population-level change mediated by a gene drive could be overwritten if the need ever arose. In such a case, the originally imposed trait would be reversed and the biological machinery of the CRISPR gene drive system—the guide RNAs and the Cas9 protein—would remain present, albeit inactive, in the DNA of organisms. What’s more, the reversibility mechanism isn’t just a useful backup in case a gene drive ever had an unexpected side effect—the researchers believe the ability to impose or reverse gene drive effects could one day prove powerful for the management of disease-transmitting organisms, invasive species and crop-destroying insects. 
“The gene drive research community has been actively discussing what should be done to safeguard shared ecosystems, and now we have demonstrated that the proposed safeguards work extremely well and should therefore be used by every gene drive researcher in every relevant lab organism,” said Esvelt. Opportunity: CRISPR guidelines The International Summit on Human Gene Editing—hosted by the U.S. National Academies, UK Royal Society and the Chinese Academy of Sciences—concluded on Dec. 3 after three days of discussion on the scientific, ethical and governance issues associated with human gene editing technologies like CRISPR. The 12-member organizing committee acknowledged that basic and preclinical research is very much needed and should proceed; however, modified cells should not be used to establish a pregnancy. In fact, the committee cautioned against the clinical use of gene editing in regards to the germline, which can pass edits and mutations on to subsequent generations. “It would be irresponsible to proceed with any clinical use of germline editing unless and until (i) the relevant safety and efficacy issues have been resolved, based on appropriate understanding and balancing of risks, potential benefits, and alternatives, and (ii) there is broad societal consensus about the appropriateness of the proposed application,” the committee wrote in its statement. However, the green light was given to clinical applications of gene editing that are directed toward altering somatic cells only—or those cells whose genomes will not be transmitted to the next generation. Since the proposed clinical uses are intended to affect only the individual who receives care, the committee approved use of the technology within “existing and evolving regulatory framework.” Overall, the committee stopped short of calling for a permanent ban on editing human embryos and germline cells. “As scientific knowledge advances and societal views evolve, the clinical use of germline editing should be revisited on a regular basis,” committee members wrote. That regular basis will begin immediately, with the national academies that co-hosted the summit pledging to take the lead in creating an ongoing international forum. Over the next year, scientists and ethicists from the U.S., UK and China will convene to examine some of the issues raised at the meeting, and release an additional report with more concrete guidelines in late 2016.


High-quality YbMgGaO4 single crystals were synthesized using the optical floating zone technique19. A representative single crystal, which is optically transparent with mirror-like cleaved surfaces, is shown in Extended Data Fig. 1a. Our X-ray diffraction (XRD) measurements revealed that all of the reflections from the cleaved surface could be indexed by (0, 0, L) peaks of triangular YbMgGaO4; no impurity phases were observed (Extended Data Fig. 1b). The full-width at half-maximum (FWHM) of the rocking curve of the (0, 0, 18) peak was about 0.009°, indicating an extremely high crystallization quality (Extended Data Fig. 1c). This was confirmed by the sharp and clear diffraction spots in the X-ray Laue pattern (Extended Data Fig. 1d). Powder XRD patterns on ground single crystals also revealed no indication of impurity phases (Extended Data Fig. 1e). The Rietveld refinements31 confirm that the XRD pattern can be described by the R-3m space group. The refined structural parameters are given in Extended Data Table 1. These results suggested that the YbMgGaO4 single crystal possessed a perfect triangular lattice with no detectable impurities. This is consistent with previous measurements that have demonstrated that the impurity/isolated spins are less than 0.04% in similar samples18, 19. Although the Mg/Ga site disorder in the non-magnetic layers does not directly affect the exchange interaction between the Yb local moments, it may have an indirect effect and could lead to some exchange disorder. It seems that this disorder is not significant, because no signs of spin freezing were observed. A QSL is often stable against weak local perturbations, provided that the perturbation is irrelevant or not significant. Therefore, if a QSL is realized as the ground state for YbMgGaO4, then the possible exchange disorder will not destabilize this state if the disorder strength is not significant. In addition, the field dependence of magnetization in our single crystal displayed a linear behaviour above 12 T (Extended Data Fig. 1f), indicative of a fully polarized state. The Van Vleck susceptibility extracted from the linear-field-dependent magnetization data was subtracted in the inset of Fig. 1c. INS measurements were carried out on the ThALES cold triple-axis spectrometer at the Institut Laue-Langevin, Grenoble, France, and at the FLEXX cold triple-axis spectrometer in the BER-II reactor at Helmholtz-Zentrum Berlin, Germany32. For the ThALES experiment, silicon (111) was used as a monochromator and analyser; the final neutron energies were fixed at E_f = 3 meV (energy resolution of about 0.05 meV), E_f = 3.5 meV (energy resolution of about 0.08 meV) or E_f = 4 meV (energy resolution of about 0.1 meV). For the FLEXX experiment, pyrolytic graphite (002) was used as a monochromator and analyser. Contamination from higher-order neutrons was eliminated through a velocity selector installed in front of the monochromator. The final neutron energy was fixed at E_f = 3.5 meV (energy resolution of about 0.09 meV). Three (six) pieces of single crystals with a total mass of about 5 g (19 g) were coaligned in the (HK0) scattering plane for the ThALES (FLEXX) experiment. The FWHMs of the rocking curves of the coaligned crystals for the ThALES and FLEXX experiments were approximately 0.95° and 0.92°, respectively. 
The elastic neutron scattering experiment was carried out at the WAND neutron diffractometer at the High Flux Isotope Reactor, Oak Ridge National Laboratory, USA; one single crystal was used for the experiment, with the incident wavelength λ = 1.488 Å (Extended Data Fig. 2). For the low-temperature experiments, a dilution insert for the standard 4He cryostat was used to reach temperatures down to around 30–70 mK. Because of the non-uniform shape of the single crystal, the relatively large sample volume and the extremely broad spin-excitation spectrum, the neutron beam self-attenuation (by the sample) may require consideration. In most cases the self-attenuation is dependent on only the distance traversed by the neutrons through the sample. We observed the self-attenuation effect in an elastic incoherent scattering image of our sample at 20 K, which exhibited an anisotropic intensity distribution (Extended Data Fig. 3a). The self-attenuation effect was also observed in the raw constant-energy images (Extended Data Fig. 3b–f), which were shown to be anisotropic, with slightly higher intensities occurring at approximately the same direction as that observed in the elastic incoherent scattering images. The self-attenuation can be corrected by normalizing the data with the elastic incoherent scattering image; that is, the elastic incoherent scattering intensity, which is dependent on the sample position (ω) and scattering angle (2θ), is converted to a linear attenuation correction factor for the scattering images measured at different energies. The normalized constant-energy images are presented in Fig. 2a–e, revealing a nearly isotropic intensity distribution. Extended Data Fig. 4 shows the spin excitation spectrum at 20 K, which is broadened and weakened compared with that at 70 mK (discussed below). Here we explain the spinon mean-field state that is used to explain the dynamic spin structure factor of the neutron scattering experiments. As we proposed in the main text, a QSL with a spinon Fermi surface gives a compatible explanation for the INS results for YbMgGaO4. To describe the candidate spinon-Fermi-surface QSL state in YbMgGaO4, we formally express the Yb3+ effective spin as a bilinear combination of fermionic spinons with spin 1/2, S_i = (1/2) f†_{iα} σ_{αβ} f_{iβ}, subject to the Hilbert space constraint Σ_α f†_{iα} f_{iα} = 1, where σ is a vector whose three components are the Pauli matrices and f†_{iα} (f_{iα}) creates (annihilates) a spinon with spin α = ↑, ↓ at site i. For the QSL with a spinon Fermi surface, we propose a minimal mean-field Hamiltonian H_MF for the spinons on the triangular lattice. We consider a uniform spinon hopping with a zero background flux, H_MF = −t Σ_{⟨ij⟩,α} (f†_{iα} f_{jα} + h.c.) − μ Σ_{i,α} f†_{iα} f_{iα} (1), where t is the mean-field parameter, which represents the hopping amplitude between nearest-neighbour sites. The chemical potential μ is included to impose the Hilbert space constraint on average. Here, we have treated the spinons freely by neglecting the gauge fluctuations. This mean-field state gives a single spinon dispersion ε_k = −t Σ_{i=1}^{6} e^{ik·a_i} − μ, where {a_i} are the six nearest-neighbour vectors of the triangular lattice. Owing to the Hilbert space constraint, the spinon band is half-filled, leading to a large Fermi surface in the Brillouin zone (Extended Data Fig. 5a). The dynamic spin structure factor measured by INS is then computed as S(p, E) = (1/N) Σ_n |〈n|S_p|Ω〉|² δ(E − E_n(p) + E_Ω) (2), where N is the total number of lattice sites, the summation goes over all eigenstates |n〉, |Ω〉 refers to the spinon ground state with the spinons filling the Fermi sea, E_Ω is the energy of the ground state, E_n(p) is the energy of the nth excited state with momentum p, and S_p denotes the momentum-space spin operator. 
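To make the free-spinon calculation concrete, the following is a minimal numerical sketch of equations (1) and (2): it builds the triangular-lattice spinon dispersion, fills the band to half filling, and sums broadened spinon particle-hole excitations for a chosen momentum transfer. It is not the authors' code; the Lorentzian line shape, the 24 × 24 grid and the overall normalization are illustrative assumptions (the paper's calculation uses a 40 × 40 lattice and η = 1.2t).

```python
# Minimal free-spinon sketch of equations (1)-(2): triangular-lattice tight binding,
# half filling, and the particle-hole dynamic structure factor with broadened deltas.
# Illustrative only; the Lorentzian line shape, grid size and normalization are assumptions.
import numpy as np

t = 1.0          # spinon hopping (energy unit)
L = 24           # L x L momentum grid (the paper quotes a 40 x 40 lattice)
eta = 1.2 * t    # broadening quoted in the Methods (the line shape itself is assumed here)

# Triangular-lattice primitive vectors and the six nearest-neighbour vectors a_i
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3) / 2])
neighbours = [a1, a2, a2 - a1, -a1, -a2, a1 - a2]

# Allowed momenta k = (m/L) b1 + (n/L) b2, with reciprocal vectors b1, b2
b1 = 2 * np.pi * np.array([1.0, -1.0 / np.sqrt(3)])
b2 = 2 * np.pi * np.array([0.0, 2.0 / np.sqrt(3)])
m, n = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
kx = (m / L) * b1[0] + (n / L) * b2[0]
ky = (m / L) * b1[1] + (n / L) * b2[1]
kvecs = np.stack([kx.ravel(), ky.ravel()], axis=1)           # shape (L*L, 2)

# Spinon dispersion eps(k) = -t * sum_i cos(k.a_i); mu fixes half filling
eps = -t * np.sum([np.cos(kvecs @ a) for a in neighbours], axis=0)
mu = np.median(eps)                                          # half-filled band -> large Fermi surface
eps -= mu
occupied = eps < 0.0                                         # the spinon Fermi sea |Omega>

def structure_factor(q, energies):
    """S(q, E) from spinon particle-hole pairs: remove a spinon at an occupied k,
    add one at an unoccupied k+q; excitation energy is eps(k+q) - eps(k).
    For free spinons the matrix elements are momentum independent and absorbed
    into the (assumed) overall normalization."""
    kq = kvecs + q
    eps_kq = -t * np.sum([np.cos(kq @ a) for a in neighbours], axis=0) - mu
    allowed = occupied & (eps_kq >= 0.0)
    de = eps_kq[allowed] - eps[allowed]                      # particle-hole energies
    lor = (eta / np.pi) / ((energies[:, None] - de[None, :]) ** 2 + eta ** 2)
    return lor.sum(axis=1) / (L * L)

energies = np.linspace(0.0, 7.5 * t, 60)                     # energy window quoted for Fig. 2f
q_M = 0.5 * b1                                               # an example momentum transfer (an M point)
print(structure_factor(q_M, energies)[:5])
```

Because the free-spinon matrix elements carry no momentum dependence, the spectrum computed this way is essentially a broadened joint density of particle-hole states, which is what produces the broad continuum discussed below.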
In the actual calculation, owing to the energy resolution of the experiments, the δ function in equation (2) is broadened into a peak of finite width, where η is the broadening and ε is the measured energy. Because the spin operator S_p is bilinear in the spinon operators, the summation in equation (2) would be over all possible spin-1 excited states that are characterized by one spinon particle–hole pair crossing the spinon Fermi surface (Fig. 2g) with a total momentum p and a total energy E. As we show in Fig. 2f and Extended Data Fig. 5b, and discuss in the main text, this spinon-Fermi-surface QSL state gives the three crucial features of the INS results: (1) the broad continuum that covers the large portion of the Brillouin zone; (2) the broad continuum persisting from the lowest energy transfer to the highest energy transfer; and (3) the clear upper excitation edge near the Γ point. In our calculation of Fig. 2f and Extended Data Fig. 5b, we choose the lattice size to be 40 × 40 and η = 1.2t, in accordance with the energy and momentum resolution of the instruments. The energy scale of Fig. 2f is set to be 7.5t. Here we explain the details of the dynamic spin structure factor in Fig. 2f and Extended Data Fig. 5b, based on the particle–hole excitation of the spinon Fermi surface. For an infinitesimal energy transfer, the neutrons simply probe the spinon Fermi surface. Because the spinon particle and hole can be excited anywhere near the Fermi surface, the neutron spectral intensity appears from p = 0 to p = 2k_F, where k_F is the Fermi wavevector. Because |2k_F| already exceeds the first Brillouin zone, the neutron spectral intensity then covers the whole Brillouin zone including the Γ point. For a small but finite E, as we explain in the main text, a minimal momentum transfer p ≈ E/v_F (with v_F the spinon Fermi velocity) is required to excite the spinon particle–hole pairs. Therefore, the spectral intensity gradually moves away from the Γ point as E increases. Because it is always possible to excite the spinon particle–hole pair with the momenta near the zone boundary, the spectral intensity is not greatly affected at the zone boundary as E increases. Thus, the broad continuum continues to cover a large portion of the Brillouin zone at a finite E. With the free spinon mean-field model H_MF, we further calculate the spectral weight along the energy direction for fixed momenta. The discrepancy between the theoretical results in Extended Data Fig. 5d and the experimental results in Extended Data Fig. 5e occurs at low energies. We attribute this low-energy discrepancy to the fact that the free spinon theory ignores the gauge fluctuations. The enhancement of the low-energy spectral weight compared to the free spinon results is then identified as possible evidence of strong gauge fluctuations in the system; we elaborate on this in the following discussion of the heat capacity behaviour. To account for the heat capacity behaviour, we suggest that the candidate QSL is a spinon-Fermi-surface U(1) QSL. Here we elaborate on this point and discuss the U(1) gauge fluctuation of this state in detail. The stability of U(1) QSLs with a spinon Fermi surface against the spinon confinement has been addressed extensively33, 34. It was proposed and understood that the large densities of gapless fermionic spinons on the spinon Fermi surface help to suppress the instanton events of the compact U(1) gauge field in two-dimensional U(1) QSLs33, 34. The proliferation of instanton events is the cause of the gauge confinement of a U(1) lattice gauge theory for a U(1) QSL without gapless spinons35. 
Because the instanton event is suppressed here, the compactness of the U(1) gauge field is no longer an issue, and the low-energy property of our U(1) QSL is then described by gapless fermionic spinons coupled with a non-compact U(1) gauge field27, 33, 36. Owing to the coupling to the gapless spinons, the U(1) gauge photon is strongly Landau damped and becomes very soft. The soft gauge photon strongly scatters the fermionic spinons, provides a self-energy correction to the Green’s function of the spinon, and thus makes the quasi-particle weight for spinons equal to zero27, 29, 33, 36. The resulting spinon non-Fermi liquid state has an enhanced density of low-energy spinon states that results in a sublinear power-law temperature dependence for the low-temperature heat capacity27, 29, 33, 36. In addition to the heat capacity behaviour, we find that, owing to the spinon–gauge coupling and the U(1) gauge fluctuation, the enhanced density of the low-energy spinon states is consistent with the enhanced spectral intensities at low energies for the fixed momenta in Extended Data Fig. 5e. The stability of the spinon non-Fermi liquid against spinon pairing has also been considered theoretically37. When the spinons pair up, similar to the Cooper pairing of electrons in a superconductor, the continuous part of the U(1) gauge field becomes massive, owing to the Anderson–Higgs mechanism, leaving the Z_2 part of the gauge field unaffected. The resulting state from the spinon pairing of a spinon-Fermi-surface U(1) QSL is a Z_2 QSL. Such a spinon pairing scenario was proposed to account for the very low-temperature behaviour of the organic spin liquid κ-(BEDT-TTF)2Cu2(CN)3 (ref. 38). However, for YbMgGaO4, we do not find any evidence of spinon pairings in either thermodynamic or spectroscopic measurements. Although the INS measurement might be constrained by the energy resolution, the thermodynamic measurement did not find any suppression of the density of states down to the lowest temperatures18, 19. If the spinon pairing instability occurs for YbMgGaO4, it must be at a much lower temperature or energy scale than those of current and previous experiments18, 19. In any case, the presence of a spinon Fermi surface is the precondition for any spinon pairing instability. We now discuss the finite-temperature thermal effect of the QSL. For the spinon-Fermi-surface U(1) QSL in two dimensions, there is no line-like object in the excitation spectrum. Therefore, as the temperature is increased from this QSL ground state, there is no thermal phase transition caused by proliferating any extended line-like excitations. Moreover, the spinon-Fermi-surface U(1) QSL is not characterized by any symmetry breaking. Consequently, there is no symmetry-breaking transition as the temperature is increased. The absence of the thermal phase transition is consistent with what has been observed in YbMgGaO4. As the temperature is increased from the T = 0 K ground state of the QSL, the system involves more thermal superposition of excited states and gradually loses its quantum coherence. A temperature of 20 K is approximately the energy scale of the spin excitation bandwidth, which sets the interaction energy scale between the Yb local moments. At this temperature, the correlation between the local moments cannot be ignored. Its consequence is the diffusive feature in the INS spectrum. This is consistent with our data measured at 20 K shown in Extended Data Fig. 4a, b, in which the spectral weight becomes more diffusive. 
Finally, we comment on the weak spectral peak at the M points at low energies (Figs 2a, 3a). This non-generic feature of the neutron spectrum is not obtained in the theoretical calculation using the minimal spinon mean-field model in equation (1). This is because we did not include the effect of anisotropic spin interaction, which would break the spin rotational symmetry of equation (1). In the strongly anisotropic limit, the generic spin interaction for the Yb local moments favours stripe-like magnetic order, with the wavevectors at the M points20. In a recent calculation, it was shown that the anisotropic spin interaction enhances the spin correlation at the M points21. Despite the presence of a weak peak at M, the vast majority of the spectral weight is still dominated by a broad continuum across the Brillouin zone at the lowest energies measured (Figs 2a, 3a). As a comparison with the spinon-Fermi-surface QSL, we carry out the same calculation for the spinon mean-field Hamiltonian with a background π flux through each unit cell. This choice of the background flux gives a Dirac U(1) QSL. We fix the gauge according to the hopping parameters that are specified in Extended Data Fig. 6a. The spinon band structure of this mean-field Hamiltonian is obtained by diagonalizing the corresponding hopping matrix, with the lattice constant set to unity. We observe two Dirac nodes in the spinon spectrum (Extended Data Fig. 6b); the spinon Fermi energy is right at the Dirac nodes. At low energies, the only spin-1 excited states involve either an intra-Dirac-cone spinon particle–hole pair or an inter-Dirac-cone particle–hole pair. Therefore, the spectral intensity of the dynamic spin structure factor should be concentrated at the momentum transfer that corresponds to the intra-Dirac-cone and the inter-Dirac-cone processes. As shown in Extended Data Fig. 6c, d, the dynamic spin structure factor at low energies is peaked at the Γ point and at the momentum points (and their symmetry equivalents) connected by the inter-Dirac-cone process. This result differs from the broad continuum that is observed in the experiment. Therefore, the π-flux state is inconsistent with the experimental data, as are other Dirac spin liquids. The data that support the findings of this study are available from the corresponding author on reasonable request.


News Article | November 30, 2016
Site: www.eurekalert.org

OAK RIDGE, Tenn., Nov. 30, 2016 -- The recently discovered element 117 has been officially named "tennessine" in recognition of Tennessee's contributions to its discovery, including the efforts of the Department of Energy's Oak Ridge National Laboratory and its Tennessee collaborators at Vanderbilt University and the University of Tennessee. "The presence of tennessine on the Periodic Table is an affirmation of our state's standing in the international scientific community, including the facilities ORNL provides to that community as well as the knowledge and expertise of the laboratory's scientists and technicians," ORNL Director Thom Mason said. "The historic discovery of tennessine is emblematic of the contributions Tennessee institutions like Oak Ridge National Laboratory, the University of Tennessee and Vanderbilt University make toward a better world," Tennessee Gov. Bill Haslam said. "On behalf of all Tennesseans we thank this world body for honoring our state this way." The International Union of Pure and Applied Chemistry (IUPAC)--which validates the existence of newly discovered elements and approves their official names--gave its final approval to the name "tennessine" following a year-long process that began Dec. 30, 2015, when IUPAC and the International Union of Pure and Applied Physics announced verification of the existence of the superheavy element 117, more than five years after scientists first reported its discovery in April 2010. ORNL had several roles in the discovery, the most prominent being production of the radioisotope berkelium-249 for the search. The berkelium-249 used in the initial discovery and subsequent confirmatory experiments for element 117 was produced by ORNL and the Department of Energy's Isotope Program, and was provided as a U.S. contribution to those experiments. Superheavy elements, which do not occur naturally, are synthesized by exposing a radioisotope target to a beam of another specific isotope. In theory, the nuclei will in rare cases combine into a "superheavy" and heretofore unknown element. In tennessine's case, the atomic recipe for element 117 required the berkelium-249 target, which was available only from ORNL's High Flux Isotope Reactor (HFIR), which produces radioisotopes for industry and medicine in addition to its neutron scattering research mission, and the adjoining Radiochemical Engineering Development Center (REDC), where the radioisotopes are processed. Over a year-long campaign, ORNL produced and then shipped the 22 milligrams of berkelium-249 to Russia, where the experiment that would yield element 117 was carried out with a heavy-ion cyclotron at Russia's Joint Institute for Nuclear Research (JINR) in Dubna. After six months of relentless bombardment with a calcium-48 beam, researchers had detected six atoms in which the nuclei of the calcium and berkelium had fused to create element 117. Subsequent experiments confirmed the results. "The discovery of tennessine is an example of the potential that can be realized when nations come together to lend their unique capabilities toward a scientific vision," said ORNL's Jim Roberto, who helped put together the element 117 U.S.-Russia collaboration with JINR's Yuri Oganessian. Beyond producing the necessary radioisotope, ORNL has a long history in nuclear physics research that enabled the laboratory to contribute knowledge from researchers experienced in nuclear physics and international collaboration and tools in the form of detectors, instruments and electronics. 
The lab also has a history of partnership in physics research with Vanderbilt University in Nashville, Tenn., which initiated discussions that led to the historic collaboration, and the University of Tennessee, Knoxville, which participated in experiments that confirmed the discovery. DOE's Lawrence Livermore National Laboratory in California rounded out the element 117 team. Livermore has an accomplished record in superheavy element research and is the namesake of livermorium (element 116). The specific spelling of tennessine was chosen because the new element is classified as a halogen, a type of element that by convention ends in the suffix "-ine." Halogens include elements such as chlorine and fluorine. Tennessine's symbol on the Periodic Table will be Ts. Discoveries of new elements at ORNL began with the Manhattan Project. During World War II, researchers at ORNL's Graphite Reactor discovered promethium--element 61 on the Periodic Table. Because of the secrecy that enveloped the project to develop the first nuclear weapons, the discovery wasn't reported until after the war, in 1947. In addition to tennessine (element 117), ORNL-produced isotopes via the DOE Isotope Program have been used in the discoveries of superheavy elements 114, 115 and 118 through international collaborations. The discovery of superheavy elements, which typically exist for only fractions of seconds, is driven by a quest for the long-predicted "island of stability," in which new elements beyond the existing Periodic Table may survive for exceptionally long periods of time, opening up new and useful vistas of physics and chemistry. This research was supported by the DOE Office of Science. The High Flux Isotope Reactor is a DOE Office of Science User Facility. UT-Battelle manages ORNL for the Department of Energy's Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. Cutline: The new element tennessine is denoted by the symbol Ts on the Periodic Table.


News Article | February 27, 2017
Site: www.businesswire.com

FAIRFAX, Va.--(BUSINESS WIRE)--ICF (NASDAQ:ICFI), a consulting and technology services provider to government and commercial clients around the world, reported results for the fourth quarter and twelve months ended December 31, 2016. “2016 was a year of solid execution for ICF in which we achieved mid-single-digit organic revenue growth consistent with our expectations. Revenue from commercial clients increased 4.8 percent, driven by the strong performance of energy markets, which includes energy efficiency programs for utilities, and modest year-on-year growth in our marketing services. A similar 4.6 percent growth in our government business reflected a 4.1 percent increase in federal government revenue and double-digit growth in state and local government revenue, which more than offset the anticipated decline in revenue from international government business. Diluted earnings per share growth outpaced revenue growth by a factor of four, demonstrating the strength of our balanced portfolio and the benefit of reduced amortization expense, a lower share count and a lower tax rate,” said Sudhakar Kesavan, ICF’s Chairman and Chief Executive Officer. “In the fourth quarter, we experienced slower-than-expected work flow on certain federal government contracts and postponements of specific asset valuation and M&A assignments in our commercial energy advisory group. Strong performance in other areas, particularly energy efficiency and state and local infrastructure programs, mostly offset the revenue impact. However, reduced service revenue2 levels resulted in fourth quarter diluted earnings per share being below our guidance range, although earnings were markedly ahead of last year. Since the beginning of 2017, we have seen a return to our expected levels of activity in our federal government business and commercial energy advisory work. “ICF’s business development programs continued to yield positive results in 2016. It was a record year for us in terms of contract awards, with the majority representing new business wins. We ended 2016 with a record fourth quarter backlog, and the funded portion increased significantly from 2015 levels, in part reflecting the greater mix of commercial business,” Mr. Kesavan noted. Fourth quarter 2016 revenue was $289.6 million, a 3.1 percent increase from $280.8 million in the fourth quarter of 2015. Service revenue was stable at approximately $207 million. Net income was $12.7 million in the fourth quarter of 2016, or $0.65 per diluted share, up 18.2 percent from $0.55 per diluted share in the prior year period. Non-GAAP EPS increased 4.1 percent to $0.76 per diluted share in the fourth quarter of 2016 compared to $0.73 in the prior year. EBITDA3 was $29.5 million, up from $27.5 million in the fourth quarter of 2015. Fourth quarter 2016 EBITDA margin was 10.2 percent, a 40 basis point increase from the 9.8 percent reported in the comparable period last year. Adjusted EBITDA4, which excludes special charges related to severance for staff realignment, acquisition-related expenses and international office closures of $0.4 million, was $29.9 million, or 10.3 percent of revenue, up from last year’s $28.3 million or 10.1 percent of revenue. For 2016, revenue was $1.19 billion, up 4.7 percent over the $1.13 billion reported for full year 2015. Service revenue was $864.8 million, or 1.8 percent above the prior year. Net income was $46.6 million, or $2.40 per diluted share for full year 2016 compared to $39.4 million or $2.00 per diluted share in the prior year. 
The increase in diluted earnings per share represented a 20 percent year-on-year increase. Non-GAAP EPS was $2.87 per share in 2016, an increase of 8.7 percent from the $2.64 per share reported in 2015. For 2016, EBITDA was $111.9 million, a 3.0 percent increase as compared to $108.6 million for 2015. Adjusted EBITDA in 2016 was $113.9 million, a 2.8 percent increase as compared to $110.7 million for 2015. Adjusted EBITDA excluded special charges related to severance for staff realignment, acquisition-related expenses and international office closures of $2.0 million in 2016 and $2.1 million in 2015. Operating cash flow was $79.6 million for 2016 compared to $76.3 million in 2015, a 4.3 percent increase. During 2016, the company used $52.1 million in cash to pay down debt and $11.9 million to repurchase company shares. Total backlog was $2.1 billion at the end of the fourth quarter of 2016. Funded backlog was $1.0 billion, or approximately 48 percent of the total backlog. The total value of contracts awarded in the 2016 fourth quarter was $296 million, up 31 percent year-on-year. For full year 2016, contract awards were $1.5 billion, up 13 percent year-on-year and representing a book-to-bill ratio of 1.26. ICF was awarded more than 90 U.S. federal contracts and task orders and more than 200 additional contracts from state and local and international governments. The largest awards were: Other government contract wins with a value greater than $1 million included: training and technical assistance, and research and evaluation services, for the U.S. Department of Health and Human Services’ Administration for Children and Families; regulatory, operational, communications, training, and other administrative and analytic support for the U.S. Department of Homeland Security; policy and communications support for the U.S. Department of Energy Oak Ridge National Laboratory; program management services for the Defense Contract Management Agency; support for block grant evaluations and the diabetes prevention program at the Centers for Disease Control and Prevention; technology support services for the Office of the Secretary of the Navy; change management support for the U.S. Department of Veterans Affairs; and enterprise strategy and management services for the U.S. Department of State’s Bureau of Consular Affairs. Commercial sales were $158.7 million in the fourth quarter of 2016, and ICF was awarded more than 500 commercial projects globally during the period. The largest awards were: Other commercial contract wins with a value of at least $1 million included: customer loyalty program services for a national auto parts chain, two national retail chains, a financial services company, and a major international hotel chain; program support and communications services for a number of U.S. utilities; brand and digital solutions services for a national health insurer; marketing and digital solutions services for a regional financial institution; marketing services for a floor care product manufacturer; and public relations support and digital solutions for a major manufacturer of access control products. “ICF ended 2016 with key measures of future performance, notably, contract wins, backlog and business development pipeline at record levels. In setting expectations for 2017, we have made certain assumptions regarding the transition to a new administration, particularly after a two-term president, that could affect federal government revenue this year. 
Importantly, ICF is well positioned in several key government priority areas, including infrastructure renewal and resilience, energy, public health and veterans affairs. Similarly, in our commercial business, we are executing on more than 150 energy efficiency programs for over 40 utilities across the United States and Canada, and we continue to leverage our marketing and communications services across our client base. “Based on our current visibility, we expect full year 2017 diluted earnings per share to be in the range of $2.50 to $2.75 per share on revenue ranging from $1.20 billion to $1.24 billion. Underpinning this guidance are the following assumptions: “Operating cash flow for 2017 is expected to be in the range of $90 million to $100 million,” Mr. Kesavan concluded. ICF (NASDAQ:ICFI) is a global consulting and technology services provider with more than 5,000 professionals focused on making big things possible for our clients. We are business analysts, public policy experts, technologists, researchers, digital strategists, social scientists and creatives. Since 1969, government and commercial clients have worked with ICF to overcome their toughest challenges on issues that matter profoundly to their success. Come engage with us at www.icf.com. Statements that are not historical facts and involve known and unknown risks and uncertainties are "forward-looking statements" as defined in the Private Securities Litigation Reform Act of 1995. Such statements may concern our current expectations about our future results, plans, operations and prospects and involve certain risks, including those related to the government contracting industry generally; our particular business, including our dependence on contracts with U.S. federal government agencies; and our ability to acquire and successfully integrate businesses. These and other factors that could cause our actual results to differ from those indicated in forward-looking statements are included in the "Risk Factors" section of our securities filings with the Securities and Exchange Commission. The forward-looking statements included herein are only made as of the date hereof, and we specifically disclaim any obligation to update these statements in the future. 1 Non-GAAP EPS is a non-GAAP measurement. A reconciliation of all non-GAAP measurements is set forth below. 2, 3, 4 Service Revenue, EBITDA, and Adjusted EBITDA are non-GAAP measurements. A reconciliation of all non-GAAP measurements is set forth below.
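As a quick arithmetic cross-check of the headline ratios above (all inputs are the figures reported in the release; the snippet itself is only an illustration, not anything ICF publishes):

```python
# Recompute a few of the reported ICF ratios from the figures in the release.

def pct_change(new: float, old: float) -> float:
    """Percentage change from old to new."""
    return (new / old - 1.0) * 100.0

# Fourth-quarter 2016 figures (millions of dollars, except per-share amounts)
q4_revenue = 289.6
q4_ebitda = 29.5
q4_adj_ebitda = 29.9
q4_eps, q4_eps_prior = 0.65, 0.55

# Full-year 2016 figures
fy_awards = 1_500.0          # contract awards, $1.5 billion
fy_revenue = 1_190.0         # revenue, $1.19 billion
fy_eps, fy_eps_prior = 2.40, 2.00

print(f"Q4 EBITDA margin:          {q4_ebitda / q4_revenue:.1%}")             # ~10.2%
print(f"Q4 adjusted EBITDA margin: {q4_adj_ebitda / q4_revenue:.1%}")         # ~10.3%
print(f"Q4 diluted EPS growth:     {pct_change(q4_eps, q4_eps_prior):.1f}%")  # ~18.2%
print(f"FY diluted EPS growth:     {pct_change(fy_eps, fy_eps_prior):.1f}%")  # 20.0%
print(f"FY book-to-bill ratio:     {fy_awards / fy_revenue:.2f}")             # ~1.26
```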


In this edition of the clean transport and energy news roundup, we’ve got stories on remote EV charging, a cheaper carbon fiber production process, BYD’s EV profits, a water injection system from Bosch, Audi’s Formula E bid, Baidu and Nvidia’s autonomous car project, Portland’s EV showcase, solar shingles, Sri Lanka’s cash solar payouts, Pakistan’s wind energy additions, India’s solar boom, and more. [CleanTechnica isn’t the only Important Media site to cover clean energy & transport news, and if you’re looking for more stories on electric mobility, bicycles, solar and wind energy, and other related issues, we’ve got them at sites such as Solar Love, CleanTechies, Planetsave, Bikocity, Gas2, and EV Obsession. We also host a large cleantech group on LinkedIn, called CleanTechies Around The World.] Electric vehicle owners who have ever been worried about potentially running out of charge in a remote location may be happy to hear that the OEM power solutions firm Phihong will be releasing a new portable 10 kilowatt (kW) DC charger for use with vehicles. The new portable charger will also be able to provide 3.5 kW backup AC power. Delivery to electric vehicles is at 12V or 24V, depending on needs. New process could cut cost of carbon fiber production by 50%: Scientists at the Oak Ridge National Laboratory say they have invented a new process for making carbon fiber components that slashes the cost of manufacture by 50% or more. It also reduces the amount of energy needed by more than 60%. The process will be commercialized by a new start-up venture led by three-time Tour de France champion Greg LeMond. He has long been involved with carbon fiber technology as part of his quest for lighter, stronger racing bicycles. BYD EV profits up 400% in first half of 2016: China may not be the best place to talk about pure capitalism, since its government is so heavily involved in every aspect of the national economy. But it is the world’s largest automobile market, so what happens there is important. The news is that BYD doubled its electric car sales in the first half of 2016. The big news is that its profits were up nearly 400% for the same period. Apparently there is money to be made from selling electric cars. Who knew? Just last week, I wrote about my unsatisfactory experience with a water injection system I purchased from JC Whitney long ago. This week comes word that a new water injection system is included in the engine for the new BMW M4 GTS, a car that is as close to a street-legal race car as you can buy. Bosch says water injection helps cool the engine internally, leading to more power and higher fuel economy. Weird. That is almost exactly the way JC Whitney described its system in the catalog. Audi will have its own Formula E team in 2018: Audi has decided it wants to expand its involvement in Formula E, the open-wheel racing series for electric race cars. Since Audi is part of the Volkswagen group, VW has clearly decided to turn its back on Formula One. Baidu and Nvidia team up on self-driving cars: The two companies began working together on artificial intelligence projects in 2012. Speaking last week at the annual Baidu World Conference, Nvidia co-founder and Chief Executive Jen-Hsun Huang announced that his company is working with Baidu to build a self-driving car. The vehicle will use an Nvidia processor as part of an algorithm-based operating system that will include cloud-based, high-definition maps. 
It will also feature the ability to perceive and react to different road environments. We need autonomous cars because people are terrible drivers: Self-driving technologies don’t have to be perfect; they just have to be better at driving than most people are. That brings me to the subject of this article — a driver in a parking lot in Ann Arbor (not far from the offices of Autoblog) apparently somehow rolled a car. In a parking lot. Portland, OR, will showcase the benefits of EVs: With the help of a $1,000,000 grant from the US Department of Energy, Drive Oregon will create an electric car storefront located across the street from Portland’s Electric Avenue — a downtown area that offers many charging stations for local EV drivers. The facility will offer information about and test drives of a variety of electric cars. Drive Oregon says it will also host a number of public events and pop-up EV stores in other parts of Oregon and nearby Washington communities. Sri Lanka’s Cabinet has approved a proposal to introduce cash payouts to consumers for the excess electricity they generate. At present, the excess electricity generation by rooftop solar power systems is carried forward for up to 10 years. Under this net accounting system, Ceylon Electricity Board will pay LKR22.00/kWh (US¢15/kWh) for the first seven years while from the eighth year onward it will pay LKR15.50/kWh (US¢11/kWh). The payments will continue for a period of 20 years. Pakistan will add 1 GW of wind energy capacity by 2018: Pakistan is expected to see a huge jump in wind energy installed capacity over the next two years as 21 projects are lined up for commissioning. Government officials in Pakistan have told local media that almost two dozen wind energy projects are currently at various stages of development and 1,012 MW capacity is expected to be added to the grid by 2018. Adani Enterprises will begin solar module production at Indian factory by next year: Adani Enterprises recently stated that it plans to begin the construction of phase one of the planned 2 GW solar equipment manufacturing factory this year. The first phase is expected to be operationalized by March 2017 and will have production capacity of 1.2 GW; a second phase is expected to be operational by June 2017, increasing the production capacity to 2 GW. Pakistan gets $500 million in offers for renewable energy investments: Pakistan continues to receive investment offers for renewable energy projects as the current demand-supply situation in the country makes it attractive for developers to set up new projects. Solar Energy Corporation of India (SECI) auctioned 65 MW of solar PV capacity in the Charanka solar park, in Gujarat. The projects were auctioned under the viability gap funding scheme wherein developers are required to submit the minimum capital cost support they will require to set up the projects. Today, if you add a rooftop solar system to your house, the installers will bolt a metal rack through your existing roof and into the rafters of your house. Then the solar panels are attached to the frame. But there are other options available that simply replace your roof entirely with shingles that make electrical power from sunlight. Elon Musk has stated he expects such systems to be available from SolarCity shortly. There are companies that are already in the game, as he noted, but he says Tesla’s/SolarCity’s system will be much better. Here’s a look at what’s out there now. 
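Picking up the Sri Lanka net-accounting item above: a minimal sketch of how the two-tier tariff adds up over the 20-year payment window. The rates and year boundaries come from the item; the 1,200 kWh of exported electricity per year is a purely hypothetical example figure.

```python
# Illustrative payout under Sri Lanka's proposed net-accounting tariff:
# LKR 22.00/kWh for exported electricity in years 1-7, LKR 15.50/kWh from
# year 8 onward, with payments running for 20 years.

def payout_lkr(annual_export_kwh: float, years: int = 20) -> float:
    """Total payment in LKR for a constant annual export over the payment window."""
    total = 0.0
    for year in range(1, years + 1):
        rate = 22.00 if year <= 7 else 15.50   # LKR per kWh
        total += annual_export_kwh * rate
    return total

example_export = 1_200.0  # kWh exported per year (hypothetical)
print(f"Total over 20 years: LKR {payout_lkr(example_export):,.0f}")
# 7 years at 22.00 plus 13 years at 15.50 gives LKR 426,600 for this example
```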
India will add 4.8 GW of solar this year: According to a recent report from Mercom Capital Group, a global clean energy communications and research firm, India is likely to add a total of 4.8 gigawatts of new solar power this calendar year. Through August, new solar installations accounted for 2.8 gigawatts, bringing the total nationwide to 8.1 gigawatts.


News Article | November 4, 2016
Site: www.eurekalert.org

OAK RIDGE, Tenn., Nov. 4, 2016--Researchers at the Department of Energy's Oak Ridge National Laboratory have received seven R&D 100 Awards in recognition of their significant advancements in science and technology. The honorees were recognized on Nov. 3 at the 54th annual R&D 100 Conference, sponsored by R&D Magazine. The awards, known as the "Oscars of Invention," honor innovative breakthroughs in materials science, biomedicine, consumer products and more from academia, industry and government-sponsored research agencies. This year's seven honors bring ORNL's total of R&D 100 awards to 200 since their inception in 1963. Oak Ridge Graph Analytics for Medical Innovation was developed by a team of ORNL researchers led by Sreenivas Sukumar in collaboration with the National Library of Medicine. ORiGAMI is an artificial intelligence system designed to search, collect and connect medical literature resources to improve medical research discovery. The open source algorithm can search and process NLM's entire MEDLINE database about 1,000 times faster than a normal workstation, find unexplored connections in medical texts and reason with the vast amounts of available information added every day. While most search engines look for keywords in titles and abstracts, ORiGAMI learns language patterns from its users and literature and is able to search in-depth for hidden associations that other searches might miss. Already, the engine has helped researchers investigate possible carcinogens, explore drug-disease interactions and find other serendipitous connections between disparate medical sources. The development team was Sreenivas Sukumar, Larry Roberts, Sangkeun Lee, Alexandra Zakrezewksa (Yale University), Katherine Senter (University of Pennsylvania), Seokyong Hong (North Carolina State) and Seung-Hwan Lim. Funding for this project was provided by the Laboratory Directed Research and Development Program. The Roof Savings Calculator Suite was developed by a team of researchers from ORNL, Jacksonville State University and White Box Technologies. The Roof Savings Calculator is a Web-based tool for simulating energy flow and loss in businesses and homes and predicting the cost-effectiveness of cool roofing and attic technologies based on building type and location. The suite integrates the AtticSim engine with the DOE-2.1E whole-building engine to create hour-by-hour annual simulations of building heating and cooling loads and estimate the impact of cool roofing products on energy and cost savings. It is more accurate and customizable than the current DOE and EPA energy calculators and is designed to educate builders and consumers about the money saving potential of roofing upgrades. The development team was Joshua New, William Miller, Aaron Garrett (Jacksonville State University) and Yu Huang (White Box Technologies). This project was funded by the DOE's Building Technologies Office, the California Energy Commission, White Box Technologies, and CentiMark, with support from additional private companies and trade associations. G-Mode: Full Information Acquisition in Scanning Probe Microscopy and Spectroscopy was developed by a team of ORNL researchers led by Stephen Jesse. Scanning probe microscopy uses a rastering probe to map the shape of surfaces and capture physical and chemical properties of materials on the nanometer and atomic levels. 
However, this approach is traditionally limited to a minute fraction of available information, making interpretation of observed nanoscale phenomena much more difficult. G-Mode uses a custom controller to rapidly collect and thoroughly process the information flow from a microscope detector in its entirety, yielding all accessible information about minute changes sensed by the probe as it interacts with nanoscopic volumes of a sample. While competing technologies focus on recording a single, high-quality component of the probe signal, they can incidentally filter away or ignore information. G-Mode uses a fundamentally different approach by capturing and storing almost 200 million interactions streaming from the detectors per scan. It then processes the data through advanced machine learning algorithms to expose valuable, information-rich correlations and produce large high-resolution images of dynamic material properties 3,500 times faster than conventional state-of-the-art methods. The development team was Stephen Jesse, Liam Collins, Suhas Somnath, Sergei Kalinin and Alex Belianinov. G-Mode was funded by ORNL's Center for Nanophase Materials Sciences, a DOE Office of Science User Facility. Virtual Environment for Reactor Applications was developed by a consortium of researchers from ORNL, the Electric Power Research Institute, Westinghouse Nuclear and Idaho, Sandia and Los Alamos national laboratories. VERA is a physics simulation tool that visualizes the internal processes of commercial nuclear fission power plants and predicts reactor behavior in a number of potential scenarios. The engine allows nuclear technicians to produce 3-D, high-fidelity power distributions to simulate atomic particle behavior and core conditions on a reactor-wide scale. It has demonstrated the power-up process of a new plant, modeled accidents and emergency situations and examined the effects of corrosion on fuel rod performance. The development team consisted of 42 members. The ORNL researchers were Jess Gehin, John Turner, Kevin Clarno, Benjamin Collins, Gregory Davidson, Thomas Evans, Andrew Godfrey, Steven Hamilton, Douglas Kothe, Rose Montgomery, Robert Salko, Srdjan Simunovic, Stuart Slattery and Shane Stimpson. VERA was funded by the DOE's Consortium for Advanced Simulation of Light Water Reactors. U-Grabber was developed by a team of ORNL researchers led by Sheng Dai in collaboration with the University of Tennessee and 525 Solutions. The U-Grabber is an adsorbent material designed to extract uranium and other metals from water inexpensively and efficiently. The material is made from polyethylene fibers, similar to PVC, woven into braids and grafted with chains of a uranium-attractive chemical called amidoxime. The free-floating uranium in the water binds with the fibers and can be extracted, purified and sold as nuclear fuel. The fibers can be reconstituted and reused, are cheap to produce at scale and can bolster dwindling terrestrial supplies of uranium. They can also be customized to bind with other toxic or valuable aqueous metals, providing an environmentally sound method of cleaning bodies of water such as mines or fly ash ponds. The development team was Sheng Dai, Suree Brown (UT), Robin Rogers (525 Solutions), Christopher Janke, Richard Mayes, Tomonori Saito and Ronnie Hanes (525 Solutions). U-Grabber was funded by the DOE's Office of Nuclear Energy. Open Port Sampling Interfaces for Mass Spectrometry were developed by ORNL's Gary Van Berkel and Vilmos Kertesz. 
The most difficult usability barrier for mass spectrometry is producing and transferring viable samples into the device. The Open Port Sampling Interfaces remove this impediment and allow easier, universal input of samples via accessible intake ports. The ports, one a shallow swirling vortex, the other a conical solvent dome, use constantly flowing liquid streams to convey the sample into the ionization source of the spectrometer. They accept many types of samples, from organic oils to biological tissue, and produce results almost instantaneously. The interfaces make mass spectrometry easier for novices and experts alike, as samples do not require processing and the ports are self-cleaning, reducing the risk of cross contamination. Mass spectrometers modified with these new sampling interfaces combine speed, ease of use, and high imaging resolution to serve as a valuable tool in materials science, biology, chemistry and more. This technology is licensed to SCIEX. The development team was Gary Van Berkel and Vilmos Kertesz. This project was funded by the DOE's Office of Science. Waste Tire Derived Carbon was developed by a team of ORNL researchers led by Parans Paranthaman and the RJ Lee Group. ORNL material chemists devised a proprietary process for repurposing discarded car tires as a source of carbon powder, a sooty hydrocarbon byproduct that can be modified to incorporate into anodes of lithium-ion batteries. The recycled tires are shredded, cryogenically pulverized to powder and soaked in sulfuric acid, then roasted in a furnace to recover the carbon composite powders with a yield of more than 50 percent of the original mass. The carbon powder is chemically similar to graphite but has a unique microstructure that is superior in rechargeable batteries, with a higher electrochemical performance and longer cycle life than the best graphite anodes. A large-scale tire-to-battery conversion operation could reduce total battery production costs by 11-12 percent while improving electrical storage capacity and performance. This inexpensive, environmentally beneficial and seemingly unlimited resource is an opportunity to repurpose more than 1 billion car tires discarded globally every year while meeting the rising demand for cheap carbon as the battery market continues to grow. The development team was Parans Paranthaman, Richard Lee (RJ Lee Group), Amit Naskar, Yunchao Li (UT), Kokouvi Akato (UT) and Alan Levine (RJ Lee Group). This project was funded by the DOE's Office of Science and ORNL's Technology Innovation Program. ORNL also received a special recognition award from R&D Magazine for the Wireless Power Transfer Based Electric and Plug-In Vehicle Charging System, submitted by Toyota Motor Engineering & Manufacturing North America and co-developed by a team of ORNL researchers with support from Cisco Systems and the International Transportation Innovation Center. The team developed the world's first 20-kilowatt wireless charging system for passenger cars and achieved 90 percent efficiency at three times the rate of the plug-in systems commonly used for electric vehicles today. Providing the same speed with the convenience of wireless charging could increase consumer acceptance of electric vehicles and is considered a key enabler for hands-free, autonomous vehicles. Higher power levels are also essential for powering larger vehicles such as trucks and buses. 
The high-power wireless charging system relies on a unique architecture that includes an ORNL-built inverter, isolation transformer, vehicle-side electronics and coupling technologies. The ORNL researchers on the development team were Steven Campbell, Paul Chambon, Madhu Chinthavali, Omer Onar, Burak Ozpineci, Larry Seiber, David Smith, Lixin Tang, Cliff White and Randy Wiles as well as retired staff members Curt Ayers, Chester Coomer and John Miller. Funding for this project was provided by the DOE's Energy Efficiency and Renewable Energy Office, Office of Vehicle Technologies. UT-Battelle manages ORNL for the DOE's Office of Science. The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website. Cutline: A close-up look at the Open Port Sampling Interfaces for Mass Spectrometry, one of ORNL's seven 2016 R&D 100 Award winners.
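For a sense of what the reported charging numbers imply, here is a rough sketch. The 20 kW and 90 percent figures are from the release (treated here as end-to-end delivered power and efficiency); the 60 kWh battery pack is a hypothetical example, and real charging tapers near full, which is not modeled.

```python
# Back-of-the-envelope figures for a 20 kW wireless charger at 90% efficiency.
# The battery capacity below is hypothetical; real charge curves taper near full.

delivered_kw = 20.0       # power delivered to the vehicle (from the release)
efficiency = 0.90         # assumed end-to-end efficiency (from the release)
pack_kwh = 60.0           # example battery pack size (hypothetical)

drawn_kw = delivered_kw / efficiency     # power drawn from the supply
loss_kw = drawn_kw - delivered_kw        # power lost in transfer
hours_for_full_charge = pack_kwh / delivered_kw

print(f"Supply draw:      {drawn_kw:.1f} kW")               # ~22.2 kW
print(f"Transfer losses:  {loss_kw:.1f} kW")                # ~2.2 kW
print(f"0-to-full time:   {hours_for_full_charge:.1f} h")   # ~3.0 h for 60 kWh
```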


News Article | March 28, 2016
Site: www.materialstoday.com

Your car's bumper is probably made of a moldable thermoplastic polymer called acrylonitrile butadiene styrene (ABS). Light, strong and tough, ABS is also used to produce ventilation pipes, protective headgear, kitchen appliances, Lego bricks and many other consumer products. Useful as it is, one of its drawbacks is that it is made using chemicals derived from petroleum. Now, researchers at the US Department of Energy's Oak Ridge National Laboratory have made a better thermoplastic by replacing the styrene in ABS with lignin, a brittle, rigid polymer that is an important component of the woody cell walls of plants. To do this, they invented a solvent-free production process that disperses nanoscale lignin in a synthetic rubber matrix. The end result is a meltable, moldable, ductile material that's at least 10 times tougher than ABS. This novel thermoplastic, called acrylonitrile butadiene lignin (ABL), is also recyclable, able to be melted three times and still perform well. This work, reported in Advanced Functional Materials, may bring cleaner, cheaper raw materials to diverse manufacturers. "The new ORNL thermoplastic has better performance than commodity plastics like ABS," said senior author Amit Naskar from ORNL's Materials Science and Technology Division, who along with co-inventor Chau Tran has filed a patent application on the production process for the new material. "We can call it a green product because 50% of its content is renewable, and technology to enable its commercial exploitation would reduce the need for petrochemicals." The technology could make use of the lignin-rich by-product streams from biorefineries and pulp and paper mills. With the prices of natural gas and oil dropping, renewable fuels can't compete with fossil fuels, so biorefineries are exploring options for developing other economically-viable products. Among cellulose, hemicellulose and lignin, which are the major structural constituents of plants, lignin is the most commercially underutilized. The ORNL study aimed to use it as a feedstock for a renewable thermoplastic with properties rivaling those of current petroleum-derived alternatives. "Lignin is a very brittle natural polymer, so it needs to be toughened," explained Naskar, leader of ORNL's Carbon and Composites group. "We need to chemically combine soft matter with lignin. That soft matrix would be ductile so that it can be malleable or stretchable. Very rigid lignin segments would offer resistance to deformation and thus provide stiffness." All lignins are not equal in terms of heat stability. To determine what type would make the best thermoplastic feedstock, the scientists evaluated lignin derived from wheat straw, softwoods like pine and hardwoods like oak. They found that hardwood lignin is the most thermally stable, while some types of softwood lignins are also melt-stable. Next, the researchers needed to couple the lignin with soft matter. Chemists typically accomplish this by synthesizing polymers in the presence of solvents. However, lignin and a synthetic rubber containing acrylonitrile and butadiene, called nitrile rubber, both possess chemical groups in which the electrons are unequally distributed and likely to interact. So Naskar and Chau Tran (who performed melt-mixing and characterization experiments) tried to couple the two in a melted phase without solvents. In a heated chamber with two rotors, the researchers ‘kneaded’ a molten mix of equal parts powdered lignin and nitrile rubber. 
During mixing, lignin agglomerates broke into interpenetrating layers or sheets of 10–200nm that dispersed well in, and interacted with, the rubber. Without the proper selection of a soft matrix and mixing conditions, lignin agglomerates are at least 10 times larger than those obtained with the ORNL process. The product that formed had properties of neither lignin nor rubber, but something in between, with a combination of lignin's stiffness and nitrile rubber's elasticity. By altering the acrylonitrile amounts in the soft matrix, the researchers hoped to improve the material's mechanical properties further. They tried 33%, 41% and 51% acrylonitrile and found that 41% gave an optimal balance between toughness and stiffness. They also wanted to know at what temperature the components should be mixed to optimize the material properties. They found heating components between 140°C and 160°C produced the desired hybrid phase. Using resources at ORNL, including the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility, the scientists analyzed the morphologies of these blends. They used scanning electron microscopy to explore the surfaces of the materials and transmission electron microscopy to explore the soft matter phases. They also used small-angle x-ray scattering to reveal repeated clusters of certain domain or layer sizes and Fourier transform infrared spectroscopy to identify chemical functional groups and their interactions. Future studies will explore different feedstocks, particularly those from biorefineries, and correlations among processing conditions, material structure and performance. Investigations are also planned to study the performance of ORNL's new thermoplastic in carbon-fiber-reinforced composites. "More renewable materials will probably be used in the future," Naskar said. "I'm glad that we could continue work in renewable materials, not only for automotive applications but even for commodity usage." This story is adapted from material from ORNL, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier. Link to original source.
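As a compact reference to the processing window reported above (all values are taken from the article; the data structure itself is only an illustrative way to record them):

```python
# Structured restatement of the ABL melt-mixing conditions described in the article.

from dataclasses import dataclass

@dataclass
class BlendCondition:
    lignin_wt_frac: float          # weight fraction of lignin in the blend
    acrylonitrile_pct: int         # acrylonitrile content of the nitrile rubber, %
    mix_temp_c: tuple[int, int]    # melt-mixing temperature range, degrees C

# Equal-parts lignin/nitrile-rubber blends, mixed between 140 and 160 degrees C;
# 41% acrylonitrile gave the reported balance of toughness and stiffness.
tested = [BlendCondition(0.5, an, (140, 160)) for an in (33, 41, 51)]
optimal = next(c for c in tested if c.acrylonitrile_pct == 41)
print(optimal)
```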


News Article | December 3, 2015
Site: news.yahoo.com

Constance Walter is the communications director for the Sanford Underground Research Facility. She explores the stars vicariously through the physics experiments running nearly a mile underground in the former Homestake Gold Mine. She contributed this article to Space.com's Expert Voices: Op-Ed & Insights. In 1969, Neil Armstrong fired my imagination when he took "a giant leap" onto the moon. I was 11 years old as I watched him take that first step, and like millions around the world, I was riveted to the screen. Today I wonder how I would have reacted if the news anchor had simply described this incredible moment. Would I have been so excited? So inspired? So eager to learn more? I don't think so. It was seeing the story unfold that made it magical, that pulled me into the story. How we see the world impacts how we view it: That first glimpse of outer space sparked an interest in science. And although I didn't become a scientist, I found a career in science, working with researchers at Sanford Underground Research Facility in Lead, South Dakota, explaining the abstract and highly complex physics experiments in ways the rest of us can appreciate. It isn't always easy. Ever heard of neutrinoless double-beta decay? Probably not. If I told you this rare form of nuclear decay could go a long way in helping us understand some of the mysteries of the universe, would you get the picture? Maybe. The words are important, but an illustration or animation might give you a better idea. Kathryn Jepsen, editor-in-chief of the physics magazine Symmetry, captured this need for the visual in this way: In trying to create images for her readers, she is never sure if her intent is what readers "see" in their mind's eye — so she works with illustrators, videographers and photographers to create the images she wants them to see. "Videos and animations show them exactly what we want to get across," Jepsen said. And such visualizations can be profound. Take a look at this operatic animation from Oak Ridge National Laboratory. Created using simulations run on the supercomputers at the National Center of Computational Sciences, it shows the expected operation of the ITER fusion reactor. The video clearly outlines the objectives of the experiment, but the animation allows greater understanding as to how the fusion reactor could be used to create energy. The Sanford Lab has many stories to tell: complex research experiments, a Nobel Prize, and a 126-year history as a mine, to name a few. We write stories for a newsletter called Deep Thoughts, the Sanford Lab website and other publications. But we don't rely solely on words. Photographs and video play a big role in how we present the lab to the world. Researchers at Sanford Lab go deep underground to try to answer some of the most challenging physics questions about the universe. What is the origin of matter? What is dark matter and how do we know it exists? What are the properties of neutrinos? Going deep underground may help them answer these fundamental questions about the universe. [Where Is All the Dark Energy and Dark Matter?] Here's how: Hold out your hand. On the Earth's surface, thousands of cosmic rays pass through it every day. But nearly a mile underground, where these big physics experiments operate, it's more than a million times quieter. The rock acts as a natural shield, blocking most of the radiation that can interfere with sensitive physics experiments. 
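A rough order-of-magnitude illustration of that shielding argument (the "thousands per day" and "million times quieter" figures are the article's round numbers; the exact surface rate used below is an assumption made only for the arithmetic):

```python
# Rough illustration of the underground cosmic-ray shielding argument.

surface_rate_per_day = 5_000     # cosmic rays through a hand per day (assumed "thousands")
reduction_factor = 1_000_000     # "more than a million times quieter" underground

underground_per_day = surface_rate_per_day / reduction_factor
print(f"Underground: about {underground_per_day:.3f} per day "
      f"(roughly one every {1 / underground_per_day:.0f} days)")
# ~0.005 per day, i.e. roughly one every 200 days with these assumptions
```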
It turns out Sanford Lab is particularly suited to large physics experiments for another reason — the hard rock of the former Homestake Gold Mine is perfect for excavating the large caverns needed for big experiments. From 1876 to 2001, miners pulled more than 40 million ounces of gold and 9 million ounces of silver from the mine. In the beginning they mined with pickaxes, hammers and shovels — often in the dark with only candles for light. As they dug deeper, they brought wagons and mules underground to haul ore. Some animals were born, raised and died without ever seeing the sunshine. By the early 1900s, Homestake was using locomotives, drills and lights. By the early 1980s, the mine reached 8,000 feet, becoming the deepest gold mine in North America, with tunnels and drifts pocketing 370 miles of underground. In its heyday, Homestake employed nearly 2,000 people, but as gold prices plummeted and operation costs soared, the company began decreasing operations and reducing staff. Finally, in 2001, the Barrick Gold Corp., which owned the mine, closed the facility. Five years later, the company donated the property to South Dakota for use as an underground laboratory. That same year, philanthropist T. Denny Sanford donated $70 million to the project. Since then, the state has committed more than $45 million in funds to the project. Early on, South Dakota received a $10 million Community Development Block Grant to help rehabilitate the aging facility. Part of the glamor of using Homestake to build a deep underground science laboratory was its history as a physics landmark. Starting in the mid-1960s, nuclear chemist Ray Davis operated his solar neutrino experiment 4,850 feet underground (designated the 4850 Level) in the Homestake mine. Using a 100,000-gallon tank full of perchloroethylene (fluid used in dry cleaning), Davis looked for interactions between neutrinos and the chlorine atoms, believing they would change into argon atoms. Far from the mining activity, Davis worked for nearly three decades to prove the theory developed with his collaborator John Bahcall, professor of astrophysics in the School of Natural Sciences at the Institute for Advanced Study at Princeton. The two proposed that the mysteries of the sun could be examined by measuring the number of neutrinos arriving on Earth from the sun. By the 1970s, Davis proved the theory worked; however, there was a slight problem: Davis found only one-third of the neutrinos predicted based on the standard solar and particle physics model. This led to the solar neutrino problem. "The solar neutrino problem caused great consternation among physicists and astrophysicists," Davis said years later. "My opinion in the early years was that something was wrong with the standard solar model; many physicists thought there was something wrong with my experiment." Scientists at underground laboratories around the world wanted an answer to this riddle. Eventually, the mystery was solved by researchers in two separate experiments: one at SNOLab in Canada, the other by the Super-Kamiokande Collaboration in Japan. As it turns out, neutrinos are pretty tricky characters, changing "flavors" as they travel through space, oscillating between electron, muon and tau neutrinos. Davis's detector was only able to see the electron neutrino. Because the electron flavor is just one of three, that helps explain why Davis counted only about a third of the predicted neutrinos. In 2002, Davis's groundbreaking research earned him the Nobel Prize in Physics — energizing physicists to lobby for a laboratory on the hallowed ground at the abandoned Homestake Mine. 
(This year, Takaaki Kajita of Super-Kamiokande and Arthur McDonald of SNOLab shared the Nobel Prize in Physics for their discoveries of neutrino oscillation.) Because of the site's rich physics history and unique structure, South Dakota and many scientists lobbied to have a billion-dollar deep underground laboratory at the mine, as deep as 7,400 feet — and in 2007 the U.S. National Science Foundation (NSF) selected it as the preferred site for a proposed Deep Underground Science and Engineering Laboratory (DUSEL). But in 2010, the National Science Board decided not to fund further design of DUSEL. Physicists, citizens and politicians immediately began seeking other funding sources, and in 2011, the U.S. Department of Energy (DOE), through the Lawrence Berkeley National Laboratory, agreed to support ongoing science operations at Sanford Lab, while investigating how to use the underground research facility for other longer-term experiments. Today, Sanford Lab hosts three large physics experiments nearly a mile underground on the 4850 Level. The Large Underground Xenon (LUX) experiment is looking for dark matter, which makes up most of the matter in the universe, but has yet to be detected. We can't see or touch dark matter, but we know it exists because of its gravitational effects on galaxies and clusters of galaxies. Scientists with LUX use a vessel filled with one-third of a ton of liquid xenon, hoping that when a weakly interacting massive particle, or WIMP, strikes a xenon atom, detectors will recognize the signature. In October 2013, after an initial run of 80 days, LUX was named the most sensitive dark-matter detector in the world. [Dark Matter, Dark Energy, Mystery Explained: A Reader's Guide (Infographic)] The Majorana experiment brings us back to that obscure-sounding neutrinoless double-beta decay. Neutrinos, among the most abundant particles in the universe, are often called "ghost" particles because they pass through matter like it isn't there. Scientists with the Majorana experiment hope to spot the rare neutrinoless double-beta decay phenomenon, which could reveal if neutrinos are their own antiparticles. The answer to this question could help us understand why humans — and, indeed, the universe — exist. Majorana needs an environment so clean it was built almost entirely out of copper, electroformed deep underground, and it uses dozens of detectors made of enriched germanium crystals (76Ge) in its quest. The detectors are built in an ultraclean "glove box," which is purged periodically with nitrogen gas, to ensure not even a single speck of dust will touch the highly sensitive detectors. When completed, the strings of detectors are placed inside a copper vessel that goes into a layered shield for extra protection against the environment. CASPAR (Compact Accelerator System for Performing Astrophysical Research) researchers are studying the nuclear processes in stars. Essentially, the goal is to create the same reactions that happen in stars a bit "older" than our sun. If researchers can do that, it could help complete the picture of how the elements in our universe are built. The experiment is undergoing calibration tests and will go online in early 2016. Do you have a picture in your mind of each of these experiments? Is it the right picture? It's not easy. Writers want the public to clearly understand why the science is important. And so we look for images that will complement our stories. 
Matt Kapust is the creative services developer at Sanford Lab (the two of us make up the entire communications team). Since 2009, Kapust has been documenting the conversion of the mine into a world-leading research laboratory, using photography and video to record each stage of construction and outfitting. "Video is one of the most important tools we have in our tool belt," Kapust said. "As content developers, we need to find creative ways to explain esoteric science concepts to mainstream audiences in ways that get them excited about science." Film is important for other reasons, as well. "Massive science projects like the ones we have at Sanford Lab are not privately funded, they are not corporate run," Kapust said. "They are funded by the public and need public support. Film's mass appeal allows us to tell the stories in new ways and generate that widespread support." Sanford Lab receives $15 million for operations each year from the DOE. In addition to the $40 million given to support the lab in 2007, South Dakota recently gave the lab nearly $4 million for upgrades to one of the shafts. The individual experiments receive millions of dollars in funding from NSF and DOE, and a proposed future experiment, the Long Baseline Neutrino Facility and associated Deep Underground Neutrino Experiment (LBNF/DUNE), is expected to cost $1 billion. All of this comes from taxpayers. And they want to know where their money is going — and why. Our stories, if we do them right, create excitement and spur the public's collective imagination — I mean, we're talking about possibly discovering the origins of the universe! When you think about it in those terms, a picture — or video — could be worth a million words, or a billion dollars. Kapust points to that billion-dollar experiment as an example. LBNF/DUNE, currently in the planning stages, will be an internationally designed, coordinated and funded collaboration that will attempt to unlock the mysteries of the neutrino. Billions of neutrinos pass through our bodies every second. Billions. They are formed in nuclear reactors, the sun (a huge nuclear reactor) and other stars, supernovae and cosmic rays as they strike the Earth's atmosphere. In particular, researchers with LBNF/DUNE want to more fully understand neutrino oscillations, determine the mass of these ghostly particles, and solve the mystery of the matter/antimatter imbalance in the universe. To do this, they will follow the world's highest-intensity neutrino beam as it travels 800 miles through the Earth, from Fermilab in Batavia, Illinois, to four massive detectors on the 4850 Level of Sanford Lab. And should a star go supernova while the experiment is running, the researchers could learn a lot more. LBNF/DUNE will be one of the largest international megascience experiments to ever occur on U.S. soil. The sheer scale of the experiment is mind-boggling. For example, the detectors are filled with 13 million gallons of liquid argon, an element used in the SNOLab experiment that discovered neutrino oscillation. And more than 800,000 tons of rock will be excavated to create three caverns — two for the detectors and one for utilities. Each cavern will be nearly the length of two football fields. That will require a lot of blasting, and engineers at Sanford Lab want to document the test blasts for a couple of reasons: They want a graphic representation of what the blast will look like and they hope to catch any visual appearance of dust going down the drift. 
The huge experiment is being built near existing experiments and dust could have a negative impact. Capturing the event on video could help them determine better ways to blast the rock to route the dust away from other sensitive physics experiments. As the experiment moves forward, our team will document each stage. We can't bring visitors underground, but we can show them our progress. Katie Yurkewicz, head of communications at Fermi National Accelerator Laboratory (Fermilab), said, "If words are our only tools, it can be extremely difficult (if not impossible) to get people to that 'Aha!' moment of understanding. Video and animations are invaluable in communicating those complex construction and physics topics." In our field, it's important to seek the expertise and interest of other communicators and the media. "We often rely on documentary filmmakers, news organizations and public broadcasting to help us tell our stories," Kapust said, citing RAW Science, the BBC and South Dakota Public Broadcasting among those entities. "It's important for us to be able to work with these groups because we have limited resources. We need the assistance and networking opportunities they offer." In May 2015, a team from PRI's Science Friday arrived at Sanford Lab to do a story about LUX and the search for dark matter. The team spent three days filming underground and on the surface. They interviewed scientists, students and administrators. The story was told on radio, of course, but the program also included a 17-minute video on Science Friday's website. The radio program used sound, tone and words to great effect. But the video takes viewers onto the cage and down the shaft, into a modern, well-lit laboratory, and on a locomotive ride through the dark caverns of the underground. (Science Friday submitted the video for competition in the RAW Science Film Festival, which takes place Dec. 4-5 in Los Angeles.) Producing film at Sanford Lab isn't easy. Trips underground require careful planning, and even a trip action plan, part of a log that keeps track of everyone working underground. Should an emergency arise, the underground will be evacuated; the log ensures everyone gets to the surface safely. Because we are required to spend a lot of time underground, we undergo regular safety training that adds up to several hours a year. For every trip, we don restrictive clothing — hardhats as a safety measure and coveralls to keep dust from our clothing — then take an 11-minute ride in a dark cage, or elevator, to laboratories nearly a mile down. We lug our heavy lighting, sound and camera equipment with us, and shoot video in tight spaces. If we forget something, we can't turn around and go back — the cage only runs at certain times of the day. Bringing our lunch is a definite must. Once underground, we enter the cart wash area, where we remove our coveralls, don clean hardhats, and clean all of the equipment with alcohol wipes — we don't want to bring any dirt into the lab. Finally, we put booties over our shoes, then enter the laboratory area. One big perk? There's an espresso machine and a panini press. Recently, we did a story about the innermost portion of the six-layered shield around the Majorana Demonstrator project. The shield gives the experiment extra protection from the radiation that permeates through the surrounding rock, especially radon, which can create noise in the experiment. The inner shield is special — it was made with ultrapure electroformed copper grown on the 4850 Level of Sanford Lab. 
We interviewed physicist Vincent Guiseppe, the mastermind behind the shield, inside the deeply buried class-100 clean room where all the work is done. Despite our precautions, we couldn't go into the clean room without putting on a "bunny suit": Tyvek clothing that includes a hood, booties, two pairs of gloves and a face mask, and we had to maneuver carefully as the research continued around us. It was a challenge, but it was worth it to get the story and a stunning image of the shield. While the lunar landing inspired my generation to look to the cosmos — and inspired me to want to fly to distant planets, see the Milky Way from a distant galaxy, and learn the secrets of the universe — none of us expected to be looking up from nearly a mile underground. But with the right mix of sights and stories, science is inspiring a new generation, while searching for answers to universal questions using tools that are only now reaching for the stars. Follow all of the Expert Voices issues and debates — and become part of the discussion — on Facebook, Twitter and Google+. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Space.com. Copyright 2015 SPACE.com, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


News Article | January 20, 2016
Site: www.rdmag.com

Two MIT researchers have developed a thin-film material whose phase and electrical properties can be switched between metallic and semiconducting simply by applying a small voltage. The material then stays in its new configuration until switched back by another voltage. The discovery could pave the way for a new kind of “nonvolatile” computer memory chip that retains information when the power is switched off, and for energy conversion and catalytic applications. The findings, reported in the journal Nano Letters in a paper by MIT materials science graduate student Qiyang Lu and associate professor Bilge Yildiz, involve a thin-film material called a strontium cobaltite, or SrCoOx. Usually, Yildiz says, the structural phase of a material is controlled by its composition, temperature, and pressure. “Here for the first time,” she says, “we demonstrate that electrical bias can induce a phase transition in the material. And in fact we achieved this by changing the oxygen content in SrCoOx.” “It has two different structures that depend on how many oxygen atoms per unit cell it contains, and these two structures have quite different properties,” Lu explains. One of these configurations of the molecular structure is called perovskite, and the other is called brownmillerite. When more oxygen is present, it forms the tightly-enclosed, cage-like crystal structure of perovskite, whereas a lower concentration of oxygen produces the more open structure of brownmillerite. The two forms have very different chemical, electrical, magnetic, and physical properties, and Lu and Yildiz found that the material can be flipped between the two forms with the application of a very tiny amount of voltage — just 30 millivolts (0.03 volts). And, once changed, the new configuration remains stable until it is flipped back by a second application of voltage. Strontium cobaltites are just one example of a class of materials known as transition metal oxides, which is considered promising for a variety of applications including electrodes in fuel cells, membranes that allow oxygen to pass through for gas separation, and electronic devices such as memristors — a form of nonvolatile, ultrafast, and energy-efficient memory device. The ability to trigger such a phase change through the use of just a tiny voltage could open up many uses for these materials, the researchers say. Previous work with strontium cobaltites relied on changes in the oxygen concentration in the surrounding gas atmosphere to control which of the two forms the material would take, but that is inherently a much slower and more difficult process to control, Lu says. “So our idea was, don’t change the atmosphere, just apply a voltage.” “Voltage modifies the effective oxygen pressure that the material faces,” Yildiz adds. To make that possible, the researchers deposited a very thin film of the material (the brownmillerite phase) onto a substrate, for which they used yttria-stabilized zirconia. In that setup, applying a voltage drives oxygen atoms into the material. Applying the opposite voltage has the reverse effect. To observe and demonstrate that the material did indeed go through this phase transition when the voltage was applied, the team used a technique called in-situ X-ray diffraction at MIT’s Center for Materials Science and Engineering. The basic principle of switching this material between the two phases by altering the gas pressure and temperature in the environment was developed within the last year by scientists at Oak Ridge National Laboratory. 
“While interesting, this is not a practical means for controlling device properties in use,” says Yildiz. With their current work, the MIT researchers have enabled the control of the phase and electrical properties of this class of materials in a practical way, by applying an electrical charge. In addition to memory devices, the material could ultimately find applications in fuel cells and electrodes for lithium ion batteries, Lu says. “Our work has fundamental contributions by introducing electrical bias as a way to control the phase of an active material, and by laying the basic scientific groundwork for such novel energy and information processing devices,” Yildiz adds. In ongoing research, the team is working to better understand the electronic properties of the material in its different structures, and to extend this approach to other oxides of interest for memory and energy applications, in collaboration with MIT professor Harry Tuller. José Santiso, the nanomaterials growth division leader at the Catalan Institute of Nanoscience and Nanotechnology in Barcelona, Spain, who was not involved in this research, calls it “a very significant contribution” to the study of this interesting class of materials, and says “it paves the way for the application of these materials both in solid state electrochemical devices for the efficient conversion of energy or oxygen storage, as well as in possible applications in a new kind of memory devices.” The work was supported by the National Science Foundation.
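Yildiz's remark that “voltage modifies the effective oxygen pressure that the material faces” can be made concrete with a standard Nernst-type estimate. This is textbook electrochemistry rather than anything taken from the Nano Letters paper: a bias E applied across an oxygen-ion conductor such as the yttria-stabilized zirconia substrate corresponds to an effective oxygen-pressure ratio of exp(4FE/RT), since four electrons are transferred per O2 molecule. The short sketch below evaluates that factor for the 30 mV figure quoted above; the temperatures are illustrative assumptions, not values from the article.

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol*K)
E = 0.030     # applied bias quoted in the article, volts


def effective_po2_ratio(bias_volts, temp_kelvin):
    """Nernst-type factor by which a bias across an oxygen-ion conductor
    shifts the effective oxygen partial pressure (4 electrons per O2 molecule)."""
    return math.exp(4.0 * F * bias_volts / (R * temp_kelvin))


# Illustrative temperatures only (assumptions, not from the article):
for label, T in [("room temperature (298 K)", 298.0),
                 ("a few hundred degrees C (573 K)", 573.0)]:
    print(f"{label}: a 30 mV bias shifts effective pO2 by ~{effective_po2_ratio(E, T):.0f}x")
```

Even under those rough assumptions, a few tens of millivolts stands in for one to two orders of magnitude of change in oxygen pressure, which is consistent with the researchers' argument that a small bias can replace the slow gas-atmosphere changes used in the earlier Oak Ridge work.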


News Article | November 11, 2015
Site: cen.acs.org

How is a new type of liquid like the London Underground? In both, one must mind the gaps. Stuart James of Queen’s University Belfast and coworkers have devised a liquid with permanent porosity (Nature 2015, DOI: 10.1038/nature16072). Such holey fluids could be useful in gas separation, process chemistry, and other applications, if they can be made economically. Their persistent porosity comes from hollow organic cage molecules coated with solvent-soluble surface groups. The cage openings are too small to be clogged by the surface groups or by large solvent molecules in the surrounding solvent. Liquids contain spaces between their molecules, but they are tiny. Bubbles can be blown into liquids, but they will quickly float to the surface and dissipate—air bubbles in glass being a rare exception. On the other hand, solids such as zeolites and metal-organic frameworks have permanent pores. They can be used to separate molecules by size and can host added catalysts to drive chemical reactions. But unlike liquids, these materials can’t flow through channels or be smoothed onto surfaces. James got the idea for holey liquids a decade ago, when a colleague, Queen’s University chemical engineer David W. Rooney, wondered whether mixtures of porous solids and liquids might be pumped through pipes and used in continuous processes more easily than is generally possible with porous solids alone. Porous solids expert Russell Morris of the University of St. Andrews comments that the concept of liquids with permanent porosity has actually been around for decades, but “the practicalities of designing such a system take skill and perseverance. This work delivers real liquids that show that property, which is a great achievement.” The researchers initially achieved liquid porosity by dissolving a hollow organic core structure coated with crown-ether surface groups in a crown-ether solvent. To optimize gas absorption capacity, the team packed the fluid with as many cage molecules as possible—one for every 12 solvent molecules. When exposed to methane, the liquid absorbs eight times as much gas as the solvent alone. But the crown ethers were hard to synthesize and the liquid thick and slow-flowing. So collaborators Andrew I. Cooper and Rebecca L. Greenaway at the University of Liverpool developed another porous liquid by coating a hollow organic cage with a mixture of diamines and dissolving it in the solvent hexachloropropene. The resulting porous liquid flows ten times as readily as the crown-ether-based material. The researchers synthesize the coated cages in a single step, and the solvent is available commercially. In earlier work, Shannon M. Mahurin and Sheng Dai at Oak Ridge National Laboratory and coworkers created hollow colloidal silica nanoparticles with a fluidlike polymer surface coating (Angew. Chem. Int. Ed. 2015, DOI: 10.1002/anie.201409420), but the new porous liquids are easier to molecularly modify. Dai comments that “the liquids will open up new frontiers in how we think about porosity.” The surface area and gas uptake of the new materials are too modest for them to compete with porous solids immediately, says cage compound specialist Michael Mastalerz of Germany’s Heidelberg University in a Nature commentary. “They should instead be seen as a prototype of a new class of material [that] will undoubtedly find technological applications” if these properties can be improved.


News Article | October 17, 2015
Site: www.techtimes.com

The White House recently announced its Climate Action Plan, which aims to replace the use of super greenhouse gases called hydrofluorocarbons (HFCs) in air-conditioning and refrigeration with new methods that require less HFC. The U.S. Environmental Protection Agency (EPA) has teamed up with other agencies and industry leaders all over the world to set this plan into motion. In November this year, world leaders will attend the 27th Meeting of the Parties to the Montreal Protocol in Dubai. The Montreal Protocol is a treaty that focuses on phasing out the production of substances that deplete the ozone layer, and parties are now discussing an amendment that would add HFCs to it. HFCs are 10,000 times more potent than carbon dioxide in trapping heat, and even tiny amounts of them in the atmosphere can have an outsized warming effect, according to the U.S. Department of Energy (DOE). The White House mentioned that the EPA, acting under the Clean Air Act, had proposed standards to control leakage and intentional releases from equipment that contains HFCs. The EPA will also pursue a second set of rules under the Significant New Alternatives Policy, or SNAP, a program that identifies safer, newer alternatives for air-conditioning and refrigeration. White House energy and climate adviser Dan Utech explained that reducing HFC emissions in the United States would have a huge impact on the climate. "[The new commitments] will reduce consumption of HFCs by the equivalent of 1 billion tons of CO2 between now and 2025," Utech said. "That's the equivalent of taking 210 million cars off the road." Meanwhile, the DOE plans to develop new heating and air-conditioning technologies that will have zero or minimal effect on the environment. One example is a technology that uses magnets to create a cooling effect similar to the one that traditional compressors produce. The department has invested about $8 million to advance research and development in these technologies. One problem raised by world leaders is whether refrigerants with low global warming potential (GWP) can be efficient in high ambient temperature conditions of up to 131 degrees Fahrenheit. To address this, the DOE worked with Oak Ridge National Laboratory to examine the issue. Researchers discovered that there are viable replacements for these refrigerants. Energy officials hope that the results of the research can inform the amendment talks under the Montreal Protocol. If the replacements are adopted, they could significantly limit further increases in global temperature, experts say.
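The car-equivalence figure in the item above can be sanity-checked with back-of-the-envelope arithmetic. The per-car emissions value below (about 4.7 metric tons of CO2 per year for a typical passenger vehicle) is an outside assumption in line with commonly cited EPA-style estimates, not a number from this article.

```python
# Rough consistency check of the quoted figures (illustrative only).
total_reduction_t = 1.0e9        # 1 billion tons of CO2-equivalent, from the article
cars_off_road = 210.0e6          # 210 million cars, from the article
assumed_t_per_car_year = 4.7     # assumed annual CO2 emissions per typical car (not from the article)

implied_t_per_car = total_reduction_t / cars_off_road
print(f"Implied reduction per car: {implied_t_per_car:.2f} t CO2")   # ~4.76 t
print(f"Assumed per-car annual emissions: {assumed_t_per_car_year} t CO2")
```

On that assumption, the 1-billion-ton reduction works out to roughly one year of driving for each of the 210 million cars, so the two quoted numbers are mutually consistent.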


News Article | February 15, 2017
Site: www.eurekalert.org

OAK RIDGE, Tenn., Feb. 9, 2017 - Four technologies developed at the Department of Energy's Oak Ridge National Laboratory have earned Federal Laboratory Consortium awards for excellence in technology transfer. The FLC is a network of more than 300 federal laboratories, facilities and research centers dedicated to promoting and strengthening the commercialization of federal laboratory-developed technologies and expertise. The honors place ORNL in elite company, according to Donna Bialozor, FLC Awards committee chair, who wrote: "This year your laboratory is one of a select number of recipients, an indication that your nominations were truly of the highest caliber." Mike Paulus, ORNL's director of technology transfer, noted that it is especially significant that ORNL won four of the 19 FLC awards for excellence in technology transfer. "With these latest awards, ORNL has now won 55 since 1986, reaffirming the importance and relevance of our research and development." ORNL earned recognition for the following technologies:
Piranha Text Mining Tool, licensed to Vortex Analytics. Piranha, the recipient of an R&D 100 Award in 2007, is an agent-based technology to analyze text data with unprecedented speed and accuracy. The software suite enables the evaluation of large volumes of unstructured text-based data across many different formats cost-effectively and efficiently. The software provides unparalleled processing speeds and is ideal for data mining applications in any situation in which large amounts of text-based data must be analyzed quickly and effectively. Prior to the technology transfer effort that is the subject of this honor, ORNL researchers used the technology in their work with the FBI to mine large amounts of text-based data. Piranha was invented by Thomas Potok, Robert Patton and James Treadwell of ORNL. Others named on the award were David Sims, Marc Filigenzi and Barry Goss.
Open Port Sampling Interfaces for Mass Spectrometry, licensed by SCIEX. This innovation dramatically simplifies mass spectrometry, enabling its wider use. Scientists use mass spectrometry to perform chemical analysis with high selectivity and sensitivity, detecting and differentiating millions of chemicals in quantities down to 1 part per billion. The simplicity of the device gives even novices the means to introduce unprocessed solid or liquid samples into a mass spectrometer without fear of instrument contamination. To identify a compound, researchers simply touch the object of interest to the solvent dome at the sampling end of the probe, which is connected to a mass spectrometer. Almost instantaneously, a screen displays data that identifies the chemical and its approximate concentration. Because this approach requires no sample preparation and the device is self-cleaning, the system is especially attractive for a wide range of applications. The Open Port Sampling Interfaces for Mass Spectrometry technology was invented by Gary Van Berkel and Vilmos Kertesz of ORNL and Tom Covey and Kevin Smith of SCIEX. Also named on the award were Jennifer Caldwell, Edna Gergel and Marc Filligenzi of ORNL.
Superhydrophobic Transparent Glass Thin Film Innovation, licensed to Samsung. This transparent glass thin film coating provides glass displays that are extremely water repellant, durable and fingerprint-proof for products such as smart phones and tablets, solar panels, optical components and windows.
Unlike more fragile polymeric and powder-based options, this coating exhibits superior resistance to abrasion and is thermally stable to 500 degrees Celsius. The coating uses non-toxic naturally abundant and inexpensive precursor materials. The superhydrophobic coating was invented by ORNL's Tolga Aytug and John Simpson, formerly of ORNL. Also named on the award were Eugene Cochran and Marc Filligenzi of ORNL and Yoon Goo Lee and Debasis Bera of Samsung.
Miniature Ion Trap Mass Analyzer, licensed by 908 Devices. This ORNL technology takes mass spectrometry to a new level, making it elegantly simple, easy to use and more affordable. 908 Devices products based on the ORNL technology have demonstrated important public safety benefits in law enforcement and hazardous materials detection. The ORNL technology consists of a highly miniaturized implementation of an ion trap mass analyzer 10 times smaller than conventional scale ion traps. Despite its small size, the device offers the same resolving power as conventional-scale ion trap mass analyzers. The device is also extremely portable. Products that utilize this technology are found in a range of 908 Devices instruments used in more than 18 countries. In addition, 908 Devices is working with the U.S. Army's Next Generation Chemical Detector program to develop devices to detect and identify chemical warfare agents, toxic chemicals and other hazards. The Miniature Ion Trap Mass Analyzer was developed by Michael Ramsey, formerly of ORNL, William Whitten of ORNL, and Kevin Knopp and Christopher Brown, co-founders of 908 Devices. Also named on the award were Jennifer Caldwell, Edna Gergel and Jud Hightower of ORNL.
ORNL is managed by UT-Battelle for DOE's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
Caption 1: Oak Ridge National Laboratory's Open Port Sampling Interfaces for Mass Spectrometry, invented by Gary Van Berkel (left) and Vilmos Kertesz, features simplicity and elegance.
Caption 2: The Superhydrophobic Transparent Glass Thin Film Innovation, invented by a team led by Tolga Aytug, provides glass displays that are extremely water repellant, durable and fingerprint-proof for products such as smart phones and tablets, solar panels, optical components and windows.
Caption 3: Mike Ramsey, a former researcher at Oak Ridge National Laboratory and co-inventor of the Miniature Ion Trap Mass Analyzer, visited the lab in 2012 to participate in the license signing ceremony with 908 Devices.


News Article | February 1, 2016
Site: news.mit.edu

The dark-colored hydrocarbon solid known as kerogen gives rise to the fuels that power many of our daily activities: Petroleum is the source of gasoline and diesel fuels, and natural gas is used for cooking, heating, and increasingly for producing electricity. And yet, kerogen’s basic internal structure has remained poorly understood — until now. A new analysis, by a joint team of researchers at MIT, the French government research organization CNRS, and elsewhere, has revealed kerogen’s internal structure, in detail down to the atomic level. Their results were just published in the journal Nature Materials in a paper by MIT postdoc Colin Bousige, visiting scientist Benoit Coasne, senior research scientist Roland J.-M. Pellenq, professor Franz-Josef Ulm, and colleagues at MIT, CNRS, and other institutions. The findings reveal important details about how gas and oil move through pores in formations deep underground, making it possible to estimate the amount of recoverable reserves more accurately and potentially pointing to better ways of extracting them. Kerogen is a mixture of organic materials, primarily the remains of dead microbes, plants, and animals that have decomposed and been buried deep underground and compressed. This process forms a carbon-rich, rock-hard material riddled with pores of various sizes. When transformed as a result of pressure or geothermal heat, hydrocarbon molecules in the kerogen break down into gas or petroleum. These flow through the pores and can be released through drilling. It turns out that the formula, known as the Darcy equation, that the petroleum and gas industries have traditionally used to describe the way these fluids move underground is not accurate when the hydrocarbon fluids are inside kerogen. The new understanding could change the interpretation of how some gas and oil reservoirs, often found in shale formations, actually behave. As these fluids move through pores in the deep rock, “the flux in these nanopores, what we call the transport properties, are not what is given by the macroscale physics of liquids,” says Pellenq, a senior research scientist in the Department of Civil and Environmental Engineering at MIT and co-director, with Ulm, of the joint MIT-CNRS program called MultiScale Materials Science for Energy and Environment. In many situations where the standard formula predicts that oil or gas will flow, in reality — and as predicted by the new model — the flow stops. The pore sizes in the rock are often smaller and less interconnected than expected, the study shows, so the individual molecules of oil or gas no longer behave collectively as fluids. Instead, they get stuck in place, like a large dog trying to crawl through a cat door. This understanding of the nanoscale structure of pore spaces in kerogen is “a true new idea, it’s a game changer,” says Pellenq. Previously it had been assumed that the pore spaces in these deep underground formations were larger — microscale rather than nanoscale — and thus would allow the petroleum or gas to flow more easily through them. “Those nanopores were not expected by the industry,” he says. What that reveals, he explains, is that “those molecules trapped in those pores are really trapped.” Although researchers had generally assumed that these molecules could be released from the rock simply by applying more pressure or better solvents, “these nanopores are really a big part of the porosity of kerogen,” he says, and essential for understanding the recoverability of reserves. 
Right now, industry practice is to extract fluids from the few big pores in the fracture, Pellenq says. However, this fracking process “is not even touching the real treasure, which is in the walls, in the pores of the wall.” A better approach, the research suggests, might be to replace the conventional water-based hydrofracking solutions. “Those formations, and especially those pores, are hydrophobic, so the hydrofracking is not touching those nanopores,” Pellenq says. But “if you were to change the fluids from water-based to carbon-dioxide based, we know that CO2 will go into those nanopores, because those pores are CO2-philic” (that is, carbon-dioxide attracting). This would force out at least the lighter molecules, such as the methane that is the main component of natural gas, though perhaps not the heavier molecules of petroleum. Moving away from the use of water would be “good news” because it would reduce the need for using and then cleaning or disposing of contaminated water, according to Pellenq. In addition, it might even be a way of sequestering some carbon dioxide, he says, providing another potential bonus. Jean-Noël Rouzaud, the CNRS research director at the geology laboratory of the Ecole Normale Supérieure in Paris, who was not involved in this research, says this work “appears to me very original and of really good scientific quality.” He adds that it brings “an essential contribution to the study of non-conventional sources of hydrocarbons such as oil and gas shales. [This work] should allow people to envisage more effective and environment-friendlier techniques of recovery of hydrocarbons.” The study also included researchers from Oak Ridge National Laboratory; the Institut de Science des Materiaux de Mulhouse in Mulhouse, France; Schlumberger-Doll Research in Cambridge, Massachusetts; the European Synchrotron Radiation Facility in Grenoble, France; and Aix Marseille University in Marseille, France. It was supported by the MIT Energy Initiative, as part of the X-Shale project funded by Shell and Schlumberger, and by the French National Research Agency through the Laboratory of Excellence (Labex) ICoME2.
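For readers unfamiliar with it, the Darcy equation mentioned above is the continuum relation q = -(k/mu) dP/dx, linking volumetric flux to permeability, fluid viscosity, and the pressure gradient. The sketch below evaluates it for illustrative, shale-like numbers; every value in it is an assumption chosen only to show the scale of the calculation, not data from the Nature Materials study. The study's point is that this bulk relation breaks down once the pores shrink to a few nanometers and the confined molecules stop behaving as a collective fluid.

```python
def darcy_flux(permeability_m2, viscosity_pa_s, dp_dx_pa_per_m):
    """Darcy volumetric flux q = -(k / mu) * dP/dx, in m^3 per m^2 per second."""
    return -(permeability_m2 / viscosity_pa_s) * dp_dx_pa_per_m


# Illustrative numbers only: a tight, shale-like permeability of ~100 nanodarcy,
# a methane-like viscosity, and a 10 MPa pressure drop over 100 m.
k = 100e-9 * 9.87e-13      # 100 nanodarcy expressed in m^2 (1 darcy ~ 9.87e-13 m^2)
mu = 1.1e-5                # Pa*s, roughly methane at reservoir conditions
dp_dx = -10e6 / 100.0      # Pa/m, pressure decreasing along the flow direction

print(f"Darcy flux estimate: {darcy_flux(k, mu, dp_dx):.1e} m/s")
```

In kerogen's nanometer-scale pores the molecules interact strongly with the pore walls, so an estimate like this can overstate, or simply misrepresent, how much fluid actually moves, which is the behavior the new model is meant to capture.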


News Article | November 30, 2016
Site: physicsworld.com

Flash Physics is our daily pick of the latest need-to-know developments from the global physics community selected by Physics World's team of editors and reporters Radioactive waste from nuclear reactors could be used to create tiny diamonds that produce small amounts of electricity for thousands of years. That is the claim at the heart of a proposal from researchers at the University of Bristol in the UK, who say they have a practical way of dealing with some of the nearly 95,000 tonnes of radioactive graphite that was used as a moderator in the UK's nuclear reactors. The idea is to make the waste less radioactive by removing radioactive carbon-14 nuclei, which are concentrated on the surface of the graphite. The isotope would then be integrated into artificial diamonds. Carbon-14 has a half-life of about 5700 years and decays to non-radioactive nitrogen-14 by emitting a high-energy electron. It turns out that diamond is very good at turning the energy released in the decay into an electrical current – essentially creating a battery that will last for thousands of years. Embedding carbon-14 in diamond is a safe option, say the researchers, because diamond is hard and non-reactive, so it is unlikely that the radioactive carbon will leak into the environment. And because nearly all of the decay energy is deposited within the diamond, the radiation emitted by such a battery would be about the same as that emitted by a banana. The team reckons that a diamond battery containing about 1 g of carbon-14 would deliver about 15 joules per day. A standard 20 g AA battery could sustain this power for about 2.5 years, whereas the diamond battery would last hundreds of years without a significant drop in output. "We envision these batteries to be used in situations where it is not feasible to charge or replace conventional batteries," says Bristol's Tom Scott. "Obvious applications would be in low-power electrical devices where long life of the energy source is needed, such as pacemakers, satellites, high-altitude drones or even spacecraft." The team has already shown that the device could work by placing a non-radioactive diamond next to nickel-63, which emits high-energy electrons. The Indian particle physicist and cosmic-ray expert M G K Menon has died at the age of 88. Menon was educated at Jaswant College, Jodhpur, and the Royal Institute of Science in Bombay (now Mumbai), before moving to the University of Bristol in 1953, where he did a PhD in particle physics under the supervision of Nobel laureate Cecil Powell. Two years later, he joined the Tata Institute of Fundamental Research in Bombay, researching cosmic rays before becoming the institute's director from 1966 to 1975. Later in his career, Menon was appointed to a number of notable policy positions. He became a member of India's Planning Commission from 1982 to 1989 and was science advisor to Indian prime minister Rajiv Gandhi from 1986 to 1989. In 1989 he became minister of state for science and technology and education, and a year later was elected as a member of parliament. The International Union of Pure and Applied Chemistry (IUPAC) has officially named four new elements: 113, 115, 117 and 118. Element 113 was discovered at the RIKEN Nishina Center for Accelerator-Based Science in Japan and will be called nihonium (Nh). Nihon is a transliteration of "land of the rising sun", which is a Japanese name for Japan. 
Moscovium (Mc) is the new moniker for element 115 and was discovered at the Joint Institute for Nuclear Research (JINR) in Dubna, near Moscow. Element 117 will be called tennessine (Ts) after the US state of Tennessee, which is home to the Oak Ridge National Laboratory, while element 118 will be named oganesson (Og) after the Russian physicist Yuri Oganessian, who led the team at JINR that discovered the element. "The names of the new elements reflect the realities of our present time," says IUPAC president Natalia Tarasova. She adds that the names reflect the "universality of science, honouring places from three continents where the elements have been discovered – Japan, Russia, the US – and the pivotal role of human capital in the development of science, honouring an outstanding scientist – Yuri Oganessian". The names were proposed in June and then underwent a five-month consultation period before they were approved by the IUPAC Bureau on Monday.
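Returning to the diamond-battery item above, its figures can be cross-checked with simple arithmetic. The 15 joules per day and the 5,700-year half-life are taken from the item; the energy content assumed for a 20 g alkaline AA cell (roughly 13.5 kJ) is an outside assumption, not a figure from the article.

```python
output_j_per_day = 15.0     # from the article, for ~1 g of carbon-14
aa_energy_j = 13_500.0      # assumed energy content of a 20 g alkaline AA cell (not from the article)
half_life_years = 5700.0    # carbon-14 half-life, from the article

# How long could an AA cell sustain 15 J/day?
years_for_aa = aa_energy_j / output_j_per_day / 365.0
print(f"AA cell at 15 J/day lasts ~{years_for_aa:.1f} years")        # about 2.5 years

# Fraction of the carbon-14 (and hence of the output) left after a century:
remaining = 0.5 ** (100.0 / half_life_years)
print(f"Output remaining after 100 years: ~{remaining * 100:.1f}%")  # about 98.8%
```

Both results match the claims in the item: a chemical cell is exhausted within a couple of years at that power level, while a carbon-14 source loses only about one percent of its output per century.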


News Article | February 6, 2017
Site: www.theenergycollective.com

A sense of panic is emerging globally as Toshiba, troubled by extensive losses and fake financial reports, heads toward a complete exit from the commercial nuclear energy industry. The two countries that will be hardest hit by the expected actions will be the UK and India. Unlike the situation following the Fukushima crisis, in which the Japanese government in effect nationalized TEPCO, no bailout of Toshiba is expected to come to its rescue. Financial wire service reports indicate that the firm will sell off its stake in Westinghouse, which is worth about $250M. After nine years of writing about the global nuclear industry, I find these developments make for an unusually grim outlook. It’s a very big rock hitting the pond. Toshiba’s self-inflicted wounds will result in long-lasting challenges to the future of the global nuclear energy industry. Worse, it comes on top of the French government having to restructure and recapitalize Areva, its state-owned nuclear power corporation, so that it can complete two 1650 MW EPR reactors that are under construction in Europe and to begin work on the Hinkley project in the UK. Are there enough financial resources, and management capabilities, for the nuclear energy industry to make good on its commitment to have an impact on global climate change? Toshiba will likely end its planned commitment for a 60% equity stake in the NuGen Consortium at the UK Moorside project located in Cumbria. An effort to build three 1150 MW Westinghouse AP1000 nuclear reactors will now need new investors or a new reactor vendor or both. In the UK, backers of the NuGen project are looking to see if the government will directly fund the effort with Westinghouse acting as a technology vendor uncoupled from its parent’s convoluted corporate structure. That might be possible if Toshiba can find a buyer for the firm. The government might wind up being the investor of last resort and also have the role of setting the rates for the electricity generated by the reactors. Another potential investor in the NuGen project could be South Korea, which has had success in bringing in the first of four 1400 MW reactors for the UAE as part of a $20 billion project. The question is whether South Korea would want to take on another major project while it is still completing the other three units in the UAE; it also has domestic reactors with their own capital requirements. The risk of being overextended in terms of money and management capability is one the firm will likely weigh relative to its interest in entering the UK nuclear market. It is also less likely that Chinese state-owned nuclear firms will have an appetite for further investments in the UK’s new nuclear build. They already have a full plate. Two firms have combined to take a 33% stake in the massive Hinkley Point C project. They have also committed to enter the costly and lengthy Generic Design Assessment effort for the Hualong One reactor. Two reference units of the Hualong One, units 3 & 4, are now under construction at the Fangchenggang site in the south of China’s Guangxi Zhuang Region. They are expected to be completed by 2020. The two firms building the units at Fangchenggang hope to export the Hualong One to the UK for the Bradwell site once the Hinkley project is complete sometime in the mid-2020s. The firms have plans for a majority equity stake in the Bradwell project, which could cost $10 billion. That’s a lot to take on, and the prospect of being overextended is very real.
NPCIL Sees Andhra Pradesh as Now Being “Impossible”
Efforts by Westinghouse to close a deal to build six AP1000s for NPCIL at a coastal site in the southern state of Andhra Pradesh will go by the boards. The Indian government has not made any official statement about Toshiba’s problems. However, Reuters reported that it was told it now looks to be “impossible” for the six-unit project to move ahead. The Indian utility had been seeking U.S. Export-Import Bank loan guarantees for the project, which has cost estimates of at least $15 billion. Congress throttled the bank’s loan powers in 2015 and is unlikely to loosen the restraints for a project sponsored by Toshiba, at least in its current financially distressed state. The risks that Westinghouse faces even if the reactor division is able to establish itself as an independent vendor to EPC firms and investors include keeping its work force intact during what could be a lengthy transition. Layoffs and cost cutting could reduce the core competencies of the firm and its ability to meet the service needs of existing customers, much less be a vendor of nuclear technologies for new projects. China has plans to rely on the experience and skills of Westinghouse engineers as it completes the first of four AP1000s now under construction in that country. The same risk is posed for the four units being built in the US. Westinghouse is mired in protracted and complex contractual disputes with its nuclear components supplier for the U.S. reactor projects. Westinghouse and CB&I disagree on cost figures included in the acquisition of that firm. The claims and counterclaims total hundreds of millions of dollars. Wire service reports indicate there is a possibility Westinghouse will sell its nuclear fuel business to cover these costs, but this is not confirmed. In any case, investors will likely wait to bid on the fuel unit until settlement of the dispute between CB&I and Westinghouse. In the US, Duke Energy recently received NRC licenses to build two AP1000 reactors at the Lee site in South Carolina. However, the utility has postponed a decision to build them indefinitely. In May 2016 Florida Power & Light postponed development efforts for two AP1000s at its Turkey Point site near Miami for at least four years. For both utilities, the transition of Westinghouse from being a firm wholly owned by Toshiba, which invests in nuclear power projects, to being just a reactor vendor could be well on its way to completion, or done, by the time they decide to move ahead with these projects. Because rate payers are covering the costs of the four AP1000 reactors as they are being built in the U.S., there is no risk of Toshiba pulling out as an investor. EPC firms manage the day-to-day construction work. However, all four units are over budget and behind schedule, which makes for contentious rate case hearings as the utilities in Georgia and South Carolina seek to recover their costs. If projects in the UK and India do not get new investors, the answer to the question of where their electricity will come from will be fossil fuels. That’s not good for the planet or our place on it.
Turkey’s first nuclear power plant Akkuyu to be operational by 2023 (Daily Sabah)
Energy and Natural Resources Minister Berat Albayrak announced last week that the first of four Rosatom 1200 MW VVER nuclear reactors will be operational at the Akkuyu site located on Turkey’s Mediterranean coast by 2023.
The nuclear plant in the southern province of Mersin is the first of three nuclear power plants Turkey currently plans to build to reduce its dependence on imported energy from exporters. A second plant will be built by a French-Japanese consortium in the northern city of Sinop near the Black Sea. It is expected to include four Mitsubishi/Areva ATMEA 1100 MW reactors. The site has not yet broken ground. Last October the government announced that the country’s third nuclear power plant will be built in the Igneada district in the northwestern province of Kirklareli. There have been reports Chinese state owned nuclear firms will bid the CAP1400, which is based on the AP1000. (NucNet) The US Department of Energy’s Oak Ridge National Laboratory (ORNL) has released a technical memorandum independently verifying the performance of an advanced nuclear power reactor being developed by Massachusetts-based Transatomic Power Corporation. Transatomic said scientists at ORNL have verified the viability of reducing nuclear waste production with the company’s technology. The laboratory’s conclusions represent an important milestone for the nascent advanced nuclear industry and help position advanced nuclear technology as a key player in the country’s future energy mix. Transatomic has designed a nuclear reactor that uses a molten fluoride salt to carry its fuel, meaning it cannot melt down and possesses improved, inherent safety features over current nuclear technology. Additionally, ORNL’s results “conclusively show” that the Transatomic reactor can operate for decades using the commercially available 5% low-enriched uranium supply chain. The company expects to build a demonstration reactor by the mid-2020s.


News Article | March 25, 2016
Site: www.nanotech-now.com

Abstract: -Your car's bumper is probably made of a moldable thermoplastic polymer called ABS, shorthand for its acrylonitrile, butadiene and styrene components. Light, strong and tough, it is also the stuff of ventilation pipes, protective headgear, kitchen appliances, Lego bricks and many other consumer products. Useful as it is, one of its drawbacks is that it is made using chemicals derived from petroleum. Now, researchers at the Department of Energy's Oak Ridge National Laboratory have made a better thermoplastic by replacing styrene with lignin, a brittle, rigid polymer that, with cellulose, forms the woody cell walls of plants. In doing so, they have invented a solvent-free production process that interconnects equal parts of nanoscale lignin dispersed in a synthetic rubber matrix to produce a meltable, moldable, ductile material that's at least ten times tougher than ABS. The resulting thermoplastic--called ABL for acrylonitrile, butadiene, lignin--is recyclable, as it can be melted three times and still perform well. The results, published in the journal Advanced Functional Materials, may bring cleaner, cheaper raw materials to diverse manufacturers. "The new ORNL thermoplastic has better performance than commodity plastics like ABS," said senior author Amit Naskar in ORNL's Materials Science and Technology Division, who along with co-inventor Chau Tran has filed a patent application for the process to make the new material. "We can call it a green product because 50 percent of its content is renewable, and technology to enable its commercial exploitation would reduce the need for petrochemicals." The technology could make use of the lignin-rich biomass byproduct stream from biorefineries and pulp and paper mills. With the prices of natural gas and oil dropping, renewable fuels can't compete with fossil fuels, so biorefineries are exploring options for developing other economically viable products. Among cellulose, hemicellulose and lignin, the major structural constituents of plants, lignin is the most commercially underutilized. The ORNL study aimed to use it to produce, with an eye toward commercialization, a renewable thermoplastic with properties rivaling those of current petroleum-derived alternatives. To produce an energy-efficient method of synthesizing and extruding high-performance thermoplastic elastomers based on lignin, the ORNL team needed to answer several questions: Can variations in lignin feedstocks be overcome to make a product with superior performance? Can lignin integrate into soft polymer matrices? Can the chemistry and physics of lignin-derived polymers be understood to enable better control of their properties? Can the process to produce lignin-derived polymers be engineered? "Lignin is a very brittle natural polymer, so it needs to be toughened," explained Naskar, leader of ORNL's Carbon and Composites group. A major goal of the group is producing industrial polymers that are strong and tough enough to be deformed without fracturing. "We need to chemically combine soft matter with lignin. That soft matrix would be ductile so that it can be malleable or stretchable. Very rigid lignin segments would offer resistance to deformation and thus provide stiffness." All lignins are not equal in terms of heat stability. To determine what type would make the best thermoplastic feedstock, the scientists evaluated lignin from wheat straw, softwoods like pine and hardwoods like oak. 
They found hardwood lignin is the most thermally stable, and some types of softwood lignins are also melt-stable. Next, the researchers needed to couple the lignin with soft matter. Chemists typically accomplish this by synthesizing polymers in the presence of solvents. Because lignin and a synthetic rubber containing acrylonitrile and butadiene, called nitrile rubber, both have chemical groups in which electrons are unequally distributed and therefore likely to interact, Naskar and Chau Tran (who performed melt-mixing and characterization experiments) instead tried to couple the two in a melted phase without solvents. In a heated chamber with two rotors, the researchers "kneaded" a molten mix of equal parts powdered lignin and nitrile rubber. During mixing, lignin agglomerates broke into interpenetrating layers or sheets of 10 to 200 nanometers that dispersed well in and interacted with the rubber. Without the proper selection of a soft matrix and mixing conditions, lignin agglomerates are at least 10 times larger than those obtained with the ORNL process. The product that formed had properties of neither lignin nor rubber, but something in between, with a combination of lignin's stiffness and nitrile rubber's elasticity. By altering the acrylonitrile amounts in the soft matrix, the researchers hoped to improve the material's mechanical properties further. They tried 33, 41 and 51 percent acrylonitrile and found 41 percent gave an optimal balance between toughness and stiffness. Next, the researchers wanted to find out if controlling the processing conditions could improve the performance of their polymer alloy. For example, 33 percent acrylonitrile content produced a material that was stretchy but not strong, behaving more like rubber than plastic. At higher proportions of acrylonitrile, the researchers saw the materials strengthen because of the efficient interaction between the components. They also wanted to know at what temperature the components should be mixed to optimize the material properties. They found heating components between 140 and 160 degrees Celsius formed the desired hybrid phase. Using resources at ORNL including the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility, the scientists analyzed the morphologies of the blends. Scanning electron microscopy, performed by Chau Tran, explored the surfaces of the materials. Jihua Chen and Tran characterized soft matter phases using transmission electron microscopy, placing a thin slice of material in the path of an electron beam to reveal structure through contrast differences in the lignin and rubber phases. Small-angle x-ray scattering by Jong Keum revealed repeated clusters of certain domain or layer sizes. Fourier transform infrared spectroscopy identified chemical functional groups and their interactions. Future studies will explore different feedstocks, particularly those from biorefineries, and correlations among processing conditions, material structure and performance. Investigations are also planned to study the performance of ORNL's new thermoplastic in carbon-fiber-reinforced composites. "More renewable materials will probably be used in the future," Naskar said. "I'm glad that we could continue work in renewable materials, not only for automotive applications but even for commodity usage." ### ORNL's Technology Innovation Program, which reinvests royalties from the lab's patents in innovative, commercially promising projects, sponsored the study. 
The researchers conducted polymer characterization experiments (microscopy and X-ray scattering) at the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility at ORNL.
About Oak Ridge National Laboratory
UT-Battelle manages ORNL for DOE's Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time.
by Dawn Levy


News Article | November 11, 2015
Site: www.nanotech-now.com

Abstract: Electron microscopy researchers at the Department of Energy's Oak Ridge National Laboratory have developed a unique way to build 3-D structures with finely controlled shapes as small as one to two billionths of a meter. The ORNL study published in the journal Small demonstrates how scanning transmission electron microscopes, normally used as imaging tools, are also capable of precision sculpting of nanometer-sized 3-D features in complex oxide materials. By offering single atomic plane precision, the technique could find uses in fabricating structures for functional nanoscale devices such as microchips. The structures grow epitaxially, or in perfect crystalline alignment, which ensures that the same electrical and mechanical properties extend throughout the whole material. "We can make smaller things with more precise shapes," said ORNL's Albina Borisevich, who led the study. "The process is also epitaxial, which gives us much more pronounced control over properties than we could accomplish with other approaches." ORNL scientists happened upon the method as they were imaging an imperfectly prepared strontium titanate thin film. The sample, consisting of a crystalline substrate covered by an amorphous layer of the same material, transformed as the electron beam passed through it. A team from ORNL's Institute for Functional Imaging of Materials, which unites scientists from different disciplines, worked together to understand and exploit the discovery. "When we exposed the amorphous layer to an electron beam, we seemed to nudge it toward adopting its preferred crystalline state," Borisevich said. "It does that exactly where the electron beam is." The use of a scanning transmission electron microscope, which passes an electron beam through a bulk material, sets the approach apart from lithography techniques that only pattern or manipulate a material's surface. "We're using fine control of the beam to build something inside the solid itself," said ORNL's Stephen Jesse. "We're making transformations that are buried deep within the structure. It would be like tunneling inside a mountain to build a house." The technique offers a shortcut to researchers interested in studying how materials' characteristics change with thickness. Instead of imaging multiple samples of varying widths, scientists could use the microscopy method to add layers to the sample and simultaneously observe what happens. "The whole premise of nanoscience is that sometimes when you shrink a material it exhibits properties that are very different than the bulk material," Borisevich said. "Here we can control that. If we know there is a certain dependence on size, we can determine exactly where we want to be on that curve and go there." Theoretical calculations on ORNL's Titan supercomputer helped the researchers understand the process's underlying mechanisms. The simulations showed that the observed behavior, known as a knock-on process, is consistent with the electron beam transferring energy to individual atoms in the material rather than heating an area of the material. "With the electron beam, we are injecting energy into the system and nudging where it would otherwise go by itself, given enough time," Borisevich said. "Thermodynamically it wants to be crystalline, but this process takes a long time at room temperature." The study is published as "Atomic-level sculpting of crystalline oxides: towards bulk nanofabrication with single atomic plane precision." 
### Coauthors are ORNL's Stephen Jesse, Qian He, Andrew Lupini, Donovan Leonard, Raymond Unocic, Alexander Tselev, Miguel Fuentes-Cabrera, Bobby Sumpter, Sergei Kalinin and Albina Borisevich, Vanderbilt University's Mark Oxley and Oleg Ovchinnikov and the National University of Singapore's Stephen Pennycook. The research was conducted as part of ORNL's Institute for Functional Imaging of Materials and was supported by DOE's Office of Science and ORNL's Laboratory Directed Research and Development program. The study used resources at ORNL's Center for Nanophase Materials Sciences and the Oak Ridge Leadership Computing Facility, which are both DOE Office of Science User Facilities.
About Oak Ridge National Laboratory
ORNL is managed by UT-Battelle for the Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | November 21, 2016
Site: www.eurekalert.org

Five of the University of South Florida's leading scientific researchers have been named to the new class of Fellows of the American Association for the Advancement of Science (AAAS), the world's largest and one of its most prestigious scientific societies. Spanning medicine, public health, and technology research, the new group of USF AAAS Fellows are among some of the university's most accomplished faculty members, representing decades of scientific accomplishments and more than 50 patented technologies. The new designations bring the total number of AAAS Fellows among USF's faculty to 61. "The global recognition of the accomplishments and careers of these five stellar scientists says much about the quality and impact of research across the University of South Florida System," said Paul Sanberg, senior vice president for research, innovation and economic development at USF and himself a AAAS Fellow. "These five faculty members are on the leading edge of discovery in areas that have great impact on the daily lives of people everywhere. Their scientific accomplishments have led to better healthcare and more advanced technology that serve humankind in a myriad of ways. We're very proud these individuals are leaders in our community here at USF and among scientists around the world." With this year's new Fellows class, USF again ranks fourth among all organizations worldwide, tied with University of Florida, in the designation of new AAAS Fellows, joining Oak Ridge National Laboratory, Pennsylvania State University, Texas A&M University, University of Nebraska-Lincoln, University of Texas at Austin, and University of Wisconsin-Madison. USF and UF lead Florida universities in new AAAS Fellows selection. Election as a AAAS Fellow is an honor bestowed upon AAAS members by their peers. This year 391 members have been awarded this honor by AAAS because of their scientifically or socially distinguished efforts to advance science or its applications. New Fellows will be presented with an official certificate and a gold and blue rosette pin in February during the 2017 AAAS Annual Meeting in Boston. The tradition of AAAS Fellows began in 1874. The five faculty members from USF are: John H. Adams, Ph.D. Elected AAAS Fellow in the Biological Sciences Section Citation: For pioneering efforts and distinguished contributions in fundamental and translational malaria research, particularly in discoveries to improve antimalarial drugs and vaccines. Adams is a Distinguished University Professor in the Department of Global Health, in the College of Public Health. He also holds joint appointments in the Department of Molecular Medicine and Division of Infectious Disease & International Medicine, and Department of Internal Medicine in the Morsani College of Medicine. He is an internationally recognized scientist who has distinguished himself in the field of malaria research and dedicated his career to finding solutions for one of the leading causes of death and disease throughout the world. Early in his career he identified and characterized the proteins of Plasmodium vivax, one of the five types of malaria parasites that infect humans. He also assisted in the sequencing of the complete genome of Plasmodium falciparum, another human malaria, in 2008, published as a cover story in the journal Nature, which has stimulated new research pathways for potential drug targets and vaccines. 
He received an $8.5 million grant in 2010 (as PI) from the Bill and Melinda Gates Foundation to lead an interdisciplinary multi-national team to develop new technologies to advance research on Plasmodium vivax. The development of these state of the art genomic and functional tools for both Plasmodium falciparum and Plasmodium vivax greatly propelled the field of anti-malarial drug discovery and vaccine development. He has published more than 120 articles, and is an inventor on 6 patents. He earned his M.Sc. and Ph.D. at the University of Illinois at Urbana-Champaign. In addition to AAAS, he is an active member of the American Society for Microbiology (ASM), American Society of Parasitologists, American Society of Tropical Medicine and Hygiene (ASTMH), and the Society of Protozoologists. Dmitry B. Goldgof, Ph.D. Elected AAAS Fellow in the Information, Computing, and Communication Section Citation: For distinguished contributions to the fields of computer vision, pattern recognition and biomedical applications, particularly in biomedical image analysis. Goldgof is a Professor in the USF Department of Computer Science & Engineering in the College of Engineering, and the Department of Oncological Sciences in the Morsani College of Medicine, and a Member of the H. Lee Moffitt Cancer Center and Research Institute. His expertise spans the research areas of computer vision, image analysis, and pattern recognition, with an emphasis in biomedical applications. For example, he developed a system that automatically identifies tumors in human brain MRI scans, and techniques for automated tracking of deformation in cardiac MRIs. These developments have led to faster and more precise evaluations of medical imaging. He has also made significant advances in the area of biometrics and facial analysis for security applications. He holds five patents and published five edited volumes, 20 book chapters, and more than 85 journal articles. In addition to AAAS, he is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), and International Association for Pattern Recognition (IAPR); and member of the International Society for Optics and Photonics (SPIE), Optical Society of America (OSA), Pattern Recognition Society, and Sigma Xi: The Scientific Research Society. He earned his Ph.D. at the University of Illinois at Urbana-Champaign, his M.S. at Rensselaer Polytechnic Institute, and his B.S. from Moscow Forest Engineering Institute, Moscow, Russia. Dennis K. Killinger, Ph.D. Elected AAAS Fellow in the Physics Section Citation: For pioneering contributions in tunable laser spectroscopy and atmospheric remote sensing, especially new techniques for Lidar sensing of global CO2 and environmental trace species. Killinger is a USF Distinguished University Professor Emeritus of Physics and Director of the Lidar Remote Sensing Laboratory in the College of Arts and Sciences. He is also President and CEO of SenOptics, Inc., a developer of patented LIF and Lidar sensors. He was one of the early pioneers in the field of laser remote sensing more than 30 years ago, and he is responsible for some of the major advances of this field, such as the understanding of "noise" in these systems, and for developing parameters to determine signal-to-noise ratio, among many other leading contributions. 
He is a past member of the National Academies/NRC Committee on Optical Science and Engineering (COSE), which assessed future technology trends in optics and lasers in the report Harnessing Light, and a past University Representative to the White House Office of Science and Technology Policy (OSTP) Southeastern Science Policy Colloquium. In addition to AAAS, he is a Fellow of the Optical Society of America (OSA) and SPIE, a founding member of the National Academy of Inventors, and a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE). He holds eight patents and has published more than 100 papers and book chapters. He earned his doctorate from the University of Michigan, his M.S. from DePauw University, and his B.A. from the University of Iowa. Charles J. Lockwood, MD, MHCM Elected AAAS Fellow in the Medical Sciences Section Citation: For distinguished contributions to reproductive science, particularly the discovery of the first biochemical marker of preterm birth, fetal fibronectin, and the molecular mechanisms underlying uterine hemostasis. Lockwood is the Senior Vice President for USF Health and Dean of the Morsani College of Medicine. At USF Health, Lockwood leads the Morsani College of Medicine; the Colleges of Nursing, Public Health and Pharmacy; and the School of Physical Therapy and Rehabilitation Sciences. He also oversees the USF Physicians Group, the faculty group practice of the medical school and the largest multispecialty group practice on the west coast of Florida. He is a Professor of Obstetrics & Gynecology and Public Health at USF. Lockwood is an internationally recognized health care and research leader who earned an Sc.B., magna cum laude, with distinction, from Brown University, his medical degree from the University of Pennsylvania School of Medicine, and his Master of Science in Health Care Management degree from the Harvard School of Public Health. He served his residency in obstetrics and gynecology at Pennsylvania Hospital and his fellowship in maternal-fetal medicine at Yale-New Haven Hospital. Lockwood is the recipient of multiple research grant awards from the National Institutes of Health (NIH), the March of Dimes and other foundations. He has authored 290 peer-reviewed publications and 170 editorials, authored or co-authored three books, and co-edited seven major textbooks. He led a research team that discovered fetal fibronectin, the first biochemical predictor of prematurity. His clinical interests include prevention of recurrent pregnancy loss, preterm delivery and maternal thrombosis, and he maintains an active laboratory at USF Health dedicated to research in these areas. Lockwood is also a member of the March of Dimes Board of Trustees. Shyam S. Mohapatra, Ph.D., MBA, FAAAAI, FNAI, FAIMBE Elected AAAS Fellow in the Pharmaceutical Sciences Section Citation: For outstanding contributions in the field of pharmaceutical and health sciences, particularly for pioneering achievements in advancing biomedical nanotechnology for inflammatory diseases. Mohapatra is a Distinguished USF Health Professor; Associate Dean of Graduate Programs and Professor in the College of Pharmacy; Director of Translational Medicine; Distinguished Professor in the Institute for Advanced Discovery & Innovation; and Vice Chair of Research in the Department of Internal Medicine, Morsani College of Medicine. 
His research on respiratory syncytial virus (RSV), atrial natriuretic peptides (ANPs), and nanoparticle-mediated gene/drug delivery has helped guide the fields of immunology, infectious disease, biotherapeutics and translational medicine. RSV infection primarily afflicts infants, but it also affects adults and the elderly. Even 45 years after the discovery of RSV, there is no vaccine or other effective therapy against it; however, Mohapatra's research has led to the unraveling of the molecular mechanisms underlying RSV infection and the illnesses it causes, and to the development of a potential multi-gene vaccine against RSV. He has also pioneered novel treatment approaches for lung cancer, respiratory viral infections, respiratory allergies, and other chronic lung diseases. He also founded the USF Center for Research & Education in Nanobioengineering in 2010. In addition to AAAS, he is a Fellow of the American Institute for Medical and Biological Engineering (AIMBE) and the American Academy of Allergy, Asthma and Clinical Immunology; a member of the National Academy of Inventors; and among the inaugural inductees of the Florida Inventors Hall of Fame. He holds 28 patents and has published nearly 200 articles and book chapters. He earned his Ph.D. from the Australian National University, his M.S. from the GB Pant University of Agriculture & Technology, India, and his B.S. from Orissa University of Agriculture & Technology, India. This year's AAAS Fellows will be formally announced in the AAAS News & Notes section of the journal Science on Nov. 25. The University of South Florida is a high-impact, global research university dedicated to student success. USF is a Top 25 research university among public institutions nationwide in total research expenditures, according to the National Science Foundation. Serving over 48,000 students, the USF System has an annual budget of $1.6 billion and an annual economic impact of $4.4 billion. USF is a member of the American Athletic Conference. The American Association for the Advancement of Science (AAAS) is the world's largest general scientific society and publisher of the journal Science, as well as Science Translational Medicine, Science Signaling, the digital open-access journal Science Advances, Science Immunology, and Science Robotics. AAAS was founded in 1848 and includes nearly 250 affiliated societies and academies of science, serving 10 million individuals. Science has the largest paid circulation of any peer-reviewed general science journal in the world. The non-profit AAAS is open to all and fulfills its mission to "advance science and serve society" through initiatives in science policy, international programs, science education, public engagement, and more. For the latest research news, log onto EurekAlert!, the premier science-news website and a service of AAAS.


News Article | November 23, 2016
Site: www.businesswire.com

SALT LAKE CITY--(BUSINESS WIRE)--SC16, the 28th annual international conference of high performance computing, networking, storage and analysis, celebrated the contributions of researchers and scientists - from those just starting their careers to those whose contributions have made lasting impacts. The conference drew more than 11,100 registered attendees and featured a technical program spanning six days. The exhibit hall featured 349 exhibitors from industry, academia and research organizations from around the world. “There has never been a more important time for high performance computing, networking and data analysis,” said SC16 General Chair John West from the Texas Advanced Computing Center. “But it is also an acute time for growing our workforce and expanding diversity in the industry. SC16 was the perfect blend of research, technological advancement, career recognition and improving the ways in which we attract and retain that next generation of scientists.” According to Trey Breckenridge, SC16 Exhibits Chair from Mississippi State University, the SC16 Exhibition was the largest in the history of the conference. The overall size of the exhibition was 150,000 net square feet (breaking the 2015 record of 141,430). The 349 industry and research-focused exhibits included 44 first-timers and 120 organizations from 25 countries outside the United States. During the conference, Salt Lake City also became the hub for the world’s fastest computer network: SCinet, SC16’s custom-built network which delivered 3.15 terabits per second in bandwidth. The network featured 56 miles of fiber deployed throughout the convention center and $32 million in loaned equipment. It was all made possible by 200 volunteers representing global organizations spanning academia, government and industry. For the third year, SC featured an opening “HPC Matters” plenary that this year focused on Precision Medicine, which examined what the future holds in this regard and how advances are only possible through the power of high performance computing and big data. Leading voices from the frontlines of clinical care, medical research, HPC system evolution, pharmaceutical R&D and public policy shared diverse perspectives on the future of precision medicine and how it will impact society. The Technical Program again offered the highest quality original HPC research. The SC workshops set a record with more than 2,500 attendees. There were 14 Best Paper Finalists and six Gordon Bell Finalists. These submissions represent the best of the best in a wide variety of research topics in HPC. “These awards are very important for the SC Conference Series. They celebrate the best and the brightest in high performance computing,” said Satoshi Matsuoka, SC16 Awards Chair from Tokyo Institute of Technology. “These awards are not just plaques or certificates. They define excellence. They set the bar for the years to come and are powerful inspiration for both early career and senior researchers.” Following is the list of Technical Program awards presented at SC16: SC16 received 442 paper submissions, of which 81 were accepted (18.3 percent acceptance rate). Of those, 13 were selected as finalists for the Best Paper (six) and Best Student Paper (seven) awards. The Best Paper Award went to “Daino: A High-Level Framework for Parallel and Efficient AMR on GPUs” by Mohamed Wahib Attia and Naoya Maruyama, RIKEN; and Takayuki Aoki, Tokyo Institute of Technology. 
The Best Student Paper Award went to “Flexfly: Enabling a Reconfigurable Dragonfly Through Silicon Photonics” by Ke Wen, Payman Samadi, Sebastien Rumley, Christine P. Chen, Yiwen Shen, Meisam Bahadori, and Karen Bergman of Columbia University, and Jeremiah Wilke of Sandia National Laboratories. The ACM Gordon Bell Prize is awarded for outstanding team achievement in high performance computing and tracks the progress of parallel computing. This year, the prize was awarded to a 12-member Chinese team for their research project, “10M-Core Scalable Fully-Implicit Solver for Nonhydrostatic Atmospheric Dynamics.” The winning team presented a solver (a computational method) for atmospheric dynamics. In the abstract of their presentation, the winning team writes, “On the road to the seamless weather-climate prediction, a major obstacle is the difficulty of dealing with various spatial and temporal scales. The atmosphere contains time-dependent multi-scale dynamics that support a variety of wave motions.” To simulate the vast number of variables inherent in a weather system developing in the atmosphere, the winning group presented a highly scalable, fully implicit solver for three-dimensional nonhydrostatic atmospheric simulations governed by the fully compressible Euler equations. The Euler equations are a set of equations frequently used to describe fluid dynamics (liquids and gases in motion); they are written out in conservation form below. Winning team members are Chao Yang, Chinese Academy of Sciences; Wei Xue, Weimin Zheng, Guangwen Yang, Ping Xu, and Haohuan Fu, Tsinghua University; Hongtao You, National Research Center of Parallel Computer Engineering and Technology; Xinliang Wang, Beijing Normal University; Yulong Ao and Fangfang Liu, Chinese Academy of Sciences; Lin Gan, Tsinghua University; and Lanning Wang, Beijing Normal University. This year, SC received 172 detailed poster submissions that went through a rigorous review process. In the end, 112 posters were accepted and five finalists were selected for the Best Poster Award. As part of its research poster activities, SC16 also hosted the ACM Student Research Competition for both undergraduate and graduate students. In all, 63 submissions were received and 26 Student Research Competition posters were accepted – 14 in the graduate category and 12 in the undergraduate category. The Best Poster Award went to “A Fast Implicit Solver with Low Memory Footprint and High Scalability for Comprehensive Earthquake Simulation System” with Kohei Fujita from RIKEN as the lead author. First Place: “Touring Dataland? Automated Recommendations for the Big Data Traveler” by Willian Agnew and Michael Fischer, Advisors: Kyle Chard and Ian Foster. Second Place: “Analysis of Variable Selection Methods on Scientific Cluster Measurement Data” by Jonathan Wang, Advisors: Wucherl Yoo and Alex Sim. Third Place: “Discovering Energy Usage Patterns on Scientific Clusters” by Matthew Bae, Advisors: Wucherl Yoo, Alex Sim and Kesheng Wu. First Place: “Job Startup at Exascale: Challenges and Solutions” by Sourav Chakraborty, Advisor: Dhabaleswar K. Panda. Second Place: “Performance Modeling and Engineering with Kerncraft” by Julian Hammer, Advisors: Georg Hager and Gerhard Wellein. Third Place: “Design and Evaluation of Topology-Aware Scatter and AllGather Algorithms for Dragonfly Networks” by Nathanael Cheriere, Advisor: Matthieu Dorier. The Scientific Visualization and Data Analytics Award featured six finalists. The award went to “Visualization and Analysis of Threats from Asteroid Ocean Impacts” with John Patchett as the lead author. 
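For reference, the fully compressible Euler equations that govern the winning team's atmospheric simulations can be written in standard conservation form (a generic textbook statement rather than the specific formulation used in the winning paper; here \(\rho\) is density, \(\mathbf{u}\) velocity, \(p\) pressure, \(E\) total energy per unit volume and \(\mathbf{g}\) gravitational acceleration):

\[
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0, \qquad
\frac{\partial (\rho\mathbf{u})}{\partial t} + \nabla\cdot\left(\rho\,\mathbf{u}\otimes\mathbf{u} + p\,\mathbf{I}\right) = \rho\,\mathbf{g}, \qquad
\frac{\partial E}{\partial t} + \nabla\cdot\left[(E+p)\,\mathbf{u}\right] = \rho\,\mathbf{g}\cdot\mathbf{u}.
\]

A fully implicit solver advances these coupled equations with time steps that are not restricted by the fastest (acoustic) wave speeds, which is part of what makes scaling such a solver to 10 million cores notable.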
The Student Cluster Competition returned for its 10th year. The competition, which debuted at SC07 in Reno and has since been replicated in Europe, Asia and Africa, is a real-time, non-stop, 48-hour challenge in which teams of six undergraduates assemble a small cluster at SC16 and race to complete a real-world workload across a series of scientific applications, demonstrate knowledge of system architecture and application performance, and impress HPC industry judges. The students partner with vendors to design and build a cutting-edge cluster from commercially available components, not to exceed a 3,120-watt power limit, and work with application experts to tune and run the competition codes. For the first time ever, the team that won top honors also won the award for achieving the highest performance on the Linpack benchmark application. The team “SwanGeese” is from the University of Science and Technology of China. In traditional Chinese culture, the rare Swan Goose stands for teamwork, perseverance and bravery. This is the university’s third appearance in the competition. Also, an ACM SIGHPC Certificate of Appreciation was presented to the authors of a recent SC paper selected for use in the SC16 Student Cluster Competition Reproducibility Initiative. The selected paper was “A Parallel Connectivity Algorithm for de Bruijn Graphs in Metagenomic Applications” by Patrick Flick, Chirag Jain, Tony Pan and Srinivas Aluru from the Georgia Institute of Technology. The George Michael Memorial HPC Fellowship honors exceptional Ph.D. students. The first recipient is Johann Rudi from the Institute for Computational Engineering and Sciences at the University of Texas at Austin for his project, “Extreme-Scale Implicit Solver for Nonlinear, Multiscale, and Heterogeneous Stokes Flow in the Earth’s Mantle.” The second recipient is Axel Huebl from Helmholtz-Zentrum Dresden-Rossendorf at the Technical University of Dresden for his project, “Scalable, Many-core Particle-in-cell Algorithms to Simulate Next Generation Particle Accelerators and Corresponding Large-scale Data Analytics.” The SC Conference Series also serves as the venue for recognizing leaders in the HPC community for their contributions during their careers. Here are the career awards presented at SC16: The IEEE-CS Seymour Cray Computer Engineering Award recognizes innovative contributions to high performance computing systems that best exemplify the creative spirit demonstrated by Seymour Cray. The 2016 IEEE-CS Seymour Cray Computer Engineering Award was presented to William J. Camp of Los Alamos National Laboratory “for visionary leadership of the Red Storm project, and for decades of leadership of the HPC community.” Camp previously served as Intel’s Chief Supercomputing Architect and directed Intel’s Exascale R&D efforts. Established in memory of Ken Kennedy, the founder of Rice University's nationally ranked computer science program and one of the world's foremost experts on high-performance computing, the ACM/IEEE-CS Ken Kennedy Award recognizes outstanding contributions to programmability or productivity in high-performance computing together with significant community service or mentoring contributions. The 2016 Ken Kennedy Award was presented to William D. 
Gropp “for highly influential contributions to the programmability of high-performance parallel and distributed computers, and extraordinary service to the profession.” Gropp is the Acting Director of the National Center for Supercomputing Applications, Director of the Parallel Computing Institute, and the Thomas M. Siebel Chair in Computer Science at the University of Illinois at Urbana-Champaign. The IEEE-CS Sidney Fernbach Memorial Award is awarded for outstanding contributions in the application of high performance computers using innovative approaches. The 2016 IEEE-CS Sidney Fernbach Memorial Award was presented to Vipin Kumar “for foundational work on understanding scalability, and highly scalable algorithms for graph partitioning, sparse linear systems and data mining.” Kumar is a Regents Professor at the University of Minnesota. The Supercomputing Conference Test of Time Award recognizes an outstanding paper that has appeared at the SC conference and has deeply influenced the HPC discipline. It is a mark of historical impact and recognition that the paper has changed HPC trends. The winning paper is “Automatically Tuned Linear Algebra Software” by Clint Whaley of the University of Tennessee and Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory. IEEE TCHPC Award for Excellence in Scalable Computing for Early Career Researchers: The award recognizes individuals who have made outstanding and potentially long-lasting contributions to the field within five years of receiving their Ph.D. The 2016 awards were presented to Kyle Chard, Computation Institute, University of Chicago and Argonne National Laboratory; Sunita Chandrasekaran, University of Delaware; and Seyong Lee, Oak Ridge National Laboratory. SC17 will be held November 12-17, 2017, in Denver, Colorado. For more details, go to http://sc17.supercomputing.org/. SC16, sponsored by the IEEE Computer Society and ACM (Association for Computing Machinery), offers a complete technical education program and exhibition to showcase the many ways high performance computing, networking, storage and analysis lead to advances in scientific discovery, research, education and commerce. This premier international conference includes a globally attended technical program, workshops, tutorials, a world-class exhibit area, demonstrations and opportunities for hands-on learning. For more information on SC16, visit: http://sc16.supercomputing.org.


News Article | October 14, 2016
Site: news.mit.edu

On Friday, Sept. 30, at 9:25 p.m. EDT, scientists and engineers at MIT’s Plasma Science and Fusion Center made a leap forward in the pursuit of clean energy. The team set a new world record for plasma pressure in the Institute’s Alcator C-Mod tokamak nuclear fusion reactor. Plasma pressure is the key ingredient to producing energy from nuclear fusion, and MIT’s new result achieves over 2 atmospheres of pressure for the first time. Alcator leader and senior research scientist Earl Marmar will present the results at the International Atomic Energy Agency Fusion Energy Conference, in Kyoto, Japan, on Oct. 17. Nuclear fusion has the potential to produce nearly unlimited supplies of clean, safe, carbon-free energy. Fusion is the same process that powers the sun, and it can be realized in reactors that simulate the conditions of ultrahot miniature “stars” of plasma — superheated gas — that are contained within a magnetic field. For over 50 years it has been known that to make fusion viable on the Earth’s surface, the plasma must be very hot (more than 50 million degrees), it must be stable under intense pressure, and it must be contained in a fixed volume. Successful fusion also requires that the product of three factors — a plasma’s particle density, its confinement time, and its temperature — reaches a certain value. Above this value (the so-called “triple product”), the energy released in a reactor exceeds the energy required to keep the reaction going. Pressure, which is the product of density and temperature, accounts for about two-thirds of the challenge. The amount of power produced increases with the square of the pressure — so doubling the pressure leads to a fourfold increase in energy production. During the 23 years Alcator C-Mod has been in operation at MIT, it has repeatedly advanced the record for plasma pressure in a magnetic confinement device. The previous record of 1.77 atmospheres was set in 2005 (also at Alcator C-Mod). While setting the new record of 2.05 atmospheres, a 15 percent improvement, the temperature inside Alcator C-Mod reached over 35 million degrees Celsius, or approximately twice as hot as the center of the sun. The plasma produced 300 trillion fusion reactions per second and had a central magnetic field strength of 5.7 tesla. It carried 1.4 million amps of electrical current and was heated with over 4 million watts of power. The reaction occurred in a volume of approximately 1 cubic meter (not much larger than a coat closet) and the plasma lasted for two full seconds. Other fusion experiments conducted in reactors similar to Alcator have reached these temperatures, but at pressures closer to 1 atmosphere; MIT’s results exceeded the next highest pressure achieved in non-Alcator devices by approximately 70 percent. While Alcator C-Mod’s contributions to the advancement of fusion energy have been significant, it is a science research facility. In 2012 the DOE decided to cease funding to Alcator due to budget pressures from the construction of ITER. Following that decision, the U.S. Congress restored funding to Alcator C-Mod for a three-year period, which ended on Sept. 30. “This is a remarkable achievement that highlights the highly successful Alcator C-Mod program at MIT,” says Dale Meade, former deputy director at the Princeton Plasma Physics Laboratory, who was not directly involved in the experiments. 
“The record plasma pressure validates the high-magnetic-field approach as an attractive path to practical fusion energy.” “This result confirms that the high pressures required for a burning plasma can be best achieved with high-magnetic-field tokamaks such as Alcator C-Mod,” says Riccardo Betti, the Robert L. McCrory Professor of Mechanical Engineering and Physics and Astronomy at the University of Rochester. Alcator C-Mod is the world’s only compact, high-magnetic-field fusion reactor with advanced shaping in a design called a tokamak (a transliteration of a Russian word for “toroidal chamber”), which confines the superheated plasma in a donut-shaped chamber. C-Mod’s high-intensity magnetic field — up to 8 tesla, or 160,000 times the Earth’s magnetic field — allows the device to create the dense, hot plasmas and keep them stable at more than 80 million degrees. Its magnetic field is more than double what is typically used in other designs, which quadruples its ability to contain the plasma pressure. C-Mod is third in the line of high-magnetic-field tokamaks, first advocated by MIT physics professor Bruno Coppi, to be built and operated at MIT. Ron Parker, a professor of electrical engineering and computer science, led its design phase. Professor Ian Hutchinson of the Department of Nuclear Science and Engineering led its construction and the first 10 years of operation through 2003. Unless a new device is announced and constructed, the pressure record just set in C-Mod will likely stand for the next 15 years. ITER, a tokamak currently under construction in France, will be approximately 800 times larger in volume than Alcator C-Mod, but it will operate at a lower magnetic field. ITER is expected to reach 2.6 atmospheres when in full operation by 2032, according to a recent Department of Energy report. Alcator C-Mod is also similar in size and cost to nontokamak magnetic fusion options being pursued by private fusion companies, though it can achieve pressures 50 times higher. “Compact, high-field tokamaks provide another exciting opportunity for accelerating fusion energy development, so that it’s available soon enough to make a difference to problems like climate change and the future of clean energy — goals I think we all share,” says Dennis Whyte, the Hitachi America Professor of Engineering, director of the Plasma Science and Fusion Center, and head of the Department of Nuclear Science and Engineering at MIT. These experiments were planned by the MIT team and collaborators from other laboratories in the U.S. — including the Princeton Plasma Physics Laboratory, the Oak Ridge National Laboratory, and General Atomics — and conducted on the Alcator C-Mod’s last day of operation. The Alcator C-Mod facility, which officially closed after 23 years of operation on Sept. 30, leaves a profound legacy of collaboration. The facility has contributed to more than 150 PhD theses and dozens of interinstitutional research projects. To understand how Alcator C-Mod’s design principles could be applied to power generation, MIT’s fusion group is working on adapting newly available high-field, high-temperature superconductors that will be capable of producing magnetic fields of even greater strength without consuming electricity or generating heat. These superconductors are a central ingredient of a conceptual pilot plant called the Affordable Robust Compact (ARC) reactor, which could generate up to 250 million watts of electricity. 
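Two of the scalings quoted in this article can be made explicit with standard plasma-physics relations (a textbook sketch, not taken from the MIT announcement; \(n\) is particle density, \(T\) temperature, \(\tau_E\) the energy confinement time, \(p\) the plasma pressure, \(B\) the magnetic field strength and \(\beta\) the ratio of plasma pressure to magnetic pressure). The pressure is \(p = n k_B T\), and the "triple product" condition for a self-sustaining deuterium-tritium plasma is roughly

\[
n\,T\,\tau_E \;\gtrsim\; 3\times 10^{21}\ \mathrm{keV\,s\,m^{-3}}.
\]

Fusion power density scales as

\[
P_{\text{fusion}} \;\propto\; n^{2}\,\langle\sigma v\rangle \;\propto\; \frac{p^{2}}{T^{2}}\,\langle\sigma v\rangle \;\approx\; \text{const}\times p^{2},
\]

since the D-T reactivity \(\langle\sigma v\rangle\) grows roughly as \(T^{2}\) near the optimal operating temperature; hence doubling the pressure roughly quadruples the fusion power, as stated above. Likewise, the achievable pressure is limited to a roughly fixed fraction \(\beta_{\max}\) of the magnetic pressure,

\[
p_{\max} \;\approx\; \beta_{\max}\,\frac{B^{2}}{2\mu_{0}},
\]

which is why a field that is twice as strong quadruples the pressure the device can contain.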
Scientists, students, and faculty from the Alcator C-Mod team will discuss fusion, the pressure record, Alcator C-Mod, and the high-field approach at an Ask Me Anything Session on Reddit on Thursday, Oct. 20, at 1 p.m. EDT.




A US Department of Energy (DOE) program designed to spur the use of high performance supercomputers to advance US manufacturing has funded 13 new industry projects for a total of $3.8 million. Among the projects selected is one by GM and EPRI of California to improve welding techniques for automobile manufacturing and power plant construction in partnership with Oak Ridge National Laboratory (ORNL). Another of the 13 projects is led by Sepion Technologies, which will partner with LBNL to make new membranes to increase the lifetime of Li-S batteries for hybrid airplanes. The High Performance Computing for Manufacturing (HPC4Mfg) Program creates an ecosystem that allows experts at DOE’s national laboratories to work directly with manufacturing industry members to teach them how to adopt or advance their use of high performance computing (HPC) to address manufacturing challenges, with the goal of increasing energy efficiency, reducing environmental impacts and advancing clean energy technologies. The partnerships use world-class supercomputers and scientific expertise from Lawrence Livermore National Laboratory (LLNL), which leads the program, and partner laboratories Lawrence Berkeley and Oak Ridge national laboratories (LBNL and ORNL), which in turn team up with US manufacturers. GM and EPRI. GM and EPRI, representing two major US manufacturing industry sectors (automotive and nuclear energy) and both having welding research & development capabilities, will work with Oak Ridge National Laboratory (ORNL) to advance HPC weld modeling tools for broad industrial applications. This potential will be demonstrated with two representative welded structural components. The goal is to reduce the computational time for these two examples from days or months to several hours while providing adequate solution accuracy, so that the HPC weld modeling tools can effectively optimize welding technology to minimize dimensional distortion and proactively mitigate the detrimental impact of weld-induced residual stresses. Sepion. Efforts to commercialize lightweight, energy-dense lithium-sulfur secondary batteries (2,510 Wh/kg) have been stalled by ongoing problems with the battery’s membrane, which limits cycle life. Sepion’s polymer membrane technology provides a counterpoint, yielding long-lasting lithium-sulfur cells. Advancing to 10 Ah battery prototypes, Sepion faces challenges in membrane manufacturing related to polymer processing and the molecular basis for membrane performance and durability. High-performance computing offers critical new insight into these phenomena, which in turn will accelerate the product’s entry into the market. Sepion envisions that successes could catalyze a transformation in aviation, in which fuel-burning aircraft are replaced with hybrid-electric planes featuring 30–50% reductions in fuel costs and emissions. 
Shiloh Industries of Ohio will partner with ORNL to study phase change cooling of tooling to speed up casting processes in a project titled “Development of a Transformational Micro-Cooling Technology for High-Pressure Die Casting using High-Performance Computing.” Rolls-Royce Corporation of Indiana will partner with ORNL to improve silicon carbide composites in a project titled “Level-set Modeling Simulations of Chemical Vapor Infiltration for Ceramic Matrix Composites Manufacturing.” ORNL will partner with Agenda 2020 Technology Alliance, a consortium focused on the paper industry to design better catalysts for lignin breakdown in a project titled “Catalytic Pulping of Wood.” LLNL will partner with GE Global Research Center in New York to study how to mitigate defects caused by direct metal laser melting in a project titled “Minimization of Spatter during Direct Metal Laser Melting (DMLM) Additive Manufacturing Process using ALE3D Coupled with Experiments." PPG of Pennsylvania will partner with LBNL to decrease the time needed to paint automobiles in a project titled “Modeling Paint Behavior During Rotary Bell Atomization.” Actasys, Incorporated of New York, will partner with ORNL to decrease the fuel consumption of trucks by actively modifying the flow around the trucks in a project titled “High Performance Computational Modeling of Synthetic Jet Actuators for Increased Freight Efficiency in the Transportation Industry.” Carbon, Incorporated of California will partner with LBNL to increase the speed of polymer additively manufactured components in a project titled “Multi-physics Modeling of Continuous Liquid Interface Production (CLIP) for Additive Manufacturing.” The American Chemical Society Green Chemistry Institute will partner with LBNL to develop lower energy mechanisms of chemical separation using membranes in a project titled “Accelerating Industrial Application of Energy-Efficient Alternative Separations.” The Alzeta Corporation of California will partner with LBNL to destroy effluents from semiconductor processing that could potentially harm the ozone layer in a project titled “Improving Gas Reactor Design With Complex Non-Standard Reaction Mechanisms in a Reactive Flow Model.” Applied Materials, Incorporated will partner with LLNL to enable the manufacture of higher quality, more efficient LEDs for lighting in a project titled “Modeling High Impulse Magnetron Sputtering (HiPIMS) plasma sources for reactive Physical Vapor Deposition (PVD) processes used in fabrication of high efficiency LEDs.” Harper International Corp. of New York will partner with ORNL to reduce the cost of carbon fibers in a project titled “Development and Validation of Simulation Capability for the High Capacity Production of Carbon Fiber.” The program has previously funded 16 projects ranging from improving turbine blades for aircraft engines to reducing heat loss in electronics to improving fiberglass production. Partners range from small to large companies, industry consortiums and institutes. Although the program is focused on using national lab HPC resources to bolster manufacturing, it is possible that other fields, such as transportation, the modern electrical grid and advanced integrated circuitry also could benefit. As the program broadens, other national laboratory partners are expected to join.


Registered to do Business with the United States Federal Government and Defense Agencies
Sourced and Manufactured in the United States of America
TORONTO, CANADA--(Marketwired - Feb. 8, 2017) - Alabama Graphite Corp. ("AGC" or the "Company") (TSX VENTURE:CSPG)(OTCQX:CSPGF)(FRANKFURT:1AG) is pleased to announce that the Company's 100% wholly owned subsidiary - Alabama Graphite Company, Inc. ("AGC USA"), a corporation registered in the state of Alabama, USA - is now registered to pursue United States federal government funding to advance its research and development and technology commercialization efforts, as well as to conduct business directly with the US federal government and its various agencies, including the US Department of Defense ("DoD") and the US Department of Energy ("DoE"). Because of AGC's and AGC USA's developing relationships with both DoD- and DoE-related entities, including the DoE's Oak Ridge National Laboratory ("ORNL") of Oak Ridge, Tennessee, USA, ORNL requested that AGC USA become registered to do business with the US government and its associated agencies. Accordingly, AGC USA has been assigned a Commercial and Government Entity ("CAGE") code, which is an identification number that is assigned by the DoD's Defense Logistics Agency ("DLA") and used extensively within and by the US federal government. Additionally, AGC USA has been issued a Data Universal Numbering System ("DUNS") number, a prerequisite for obtaining US federal funding, including government grants. AGC USA intends to aggressively pursue all potential US government granting and funding opportunities as both a Principal Investigator ("PI") and a Co-Principal Investigator ("Co-PI"), along with other strategic business partners of AGC USA, in conjunction with ORNL and other DoE- and DoD-related entities. Additionally, the DLA informed AGC that the DoD would prefer to use a US-sourced-and-manufactured vendor like AGC USA as the primary vendor for all US DoD lithium-ion ("Li-ion") battery projects, as a matter of US national security. Although AGC must first complete a positive Feasibility Study, secure the required financing and then construct a mine and downstream Coated Spherical Purified Graphite ("CSPG") processing and production facilities, it should be further noted that no supply agreement exists today with respect to US DoD Li-ion battery projects. AGC USA has also received significant and increasing interest from several US government research laboratories working to develop next-generation energy and materials technologies using domestically sourced natural flake graphite from the contiguous United States (excluding Hawaii and Alaska). AGC and AGC USA President, Chief Executive Officer and Executive Director Donald Baxter commented, "Having our US subsidiary assigned a DUNS number and CAGE code is a critical step in AGC's path to becoming a US government battery-ready graphite supplier of our American-sourced-and-manufactured CSPG, identified by the ULTRACSPG™ trademark - the very first trademarked sourced-in-America, natural battery-ready graphite for use in lithium-ion batteries. "Additionally, we are now able to vigorously pursue all relevant potential US granting opportunities," stated Mr. Baxter. "In connection with our pursuit of such opportunities, I will be working very closely with AGC USA's CSPG manufacturing team (consisting of three Ph.D. 
scientists and seven battery materials engineers, working in our US-based, dedicated, state-of-the-art battery materials research laboratory) and AGC's Director of Technology, graphite expert George C. Hawley, along with Strategic Advisor, DoD battery industry expert Randy A. Moore, and AGC Independent Director Dr. Gareth Hatch, who holds United States citizenship and has served as Principal Investigator on multimillion-dollar research programs funded by the DoD's US Army Research Laboratory ("ARL")." Donald K. D. Baxter, P.Eng., President, Chief Executive Officer and Executive Director of Alabama Graphite Corp., is a Qualified Person as defined by National Instrument 43-101 ("NI 43-101") guidelines and has reviewed and approved the content of this news release on behalf of the Board of Directors. Alabama Graphite Corp. is a Canadian-based flake graphite exploration and development company as well as an aspiring battery materials production and technology company. The Company operates through its wholly owned subsidiary, Alabama Graphite Company Inc. (a company registered in the state of Alabama). With an advancing flake graphite project in the United States of America, Alabama Graphite Corp. intends to become a reliable, long-term US supplier of specialty high-purity graphite products, namely Coated Spherical Purified Graphite (CSPG) engineered for use in lithium-ion batteries. The Company is led by a highly experienced team with more than 100 years of combined experience in graphite mining, graphite processing, specialty graphite products and applications, advanced battery development and graphite sales. Alabama Graphite Corp. is focused on the exploration and development of its flagship Coosa Graphite Project in Coosa County, Alabama, and its Bama Mine Project in Chilton County, Alabama, as well as the research and development of its proprietary manufacturing and processing technology for battery materials. Alabama Graphite Corp. holds a 100% interest in the mineral rights for these two US-based graphite projects, which are both located on private land. The two projects encompass more than 43,000 acres and are located in a geopolitically stable, mining-friendly jurisdiction with significant historical production of crystalline flake graphite in the flake graphite belt of central Alabama, also known as the Alabama Graphite Belt (source: US Bureau of Mines). A significant portion of the Alabama deposits is characterized by graphite-bearing material that is oxidized and has been weathered into extremely soft rock. Both projects have infrastructure in place, are in close proximity to major highways, rail, power and water, and are approximately three hours (by truck or train) from the Port of Mobile, the Alabama Port Authority's deepwater seaport and the ninth-largest port by tonnage in the United States (source: US Army Corps of Engineers/USACE). The state of Alabama's hospitable climate allows for year-round mining operations, and the world's largest marble quarry (which operates 24 hours a day, 365 days a year in Sylacauga, Alabama) is located within a 30-minute drive of the Coosa Graphite Project. On November 30, 2015, Alabama Graphite Corp. announced the results of a preliminary economic assessment ("PEA") for the Coosa Graphite Project, indicating a potentially low-cost project with potentially positive economics. Please refer to the Company's technical report titled "Alabama Graphite Corp. 
Preliminary Economic Assessment (PEA) on the Coosa Graphite Project, Alabama, USA," dated November 27, 2015, prepared by independent engineering firms AGP Mining Consultants Inc. and Metal Mining Consultants Inc., and filed on SEDAR at www.sedar.com. Note: A preliminary economic assessment is preliminary in nature; it includes inferred mineral resources that are considered too speculative geologically to have economic considerations applied to them that would enable them to be categorized as mineral reserves, and there is no certainty that the preliminary economic assessment will be realized. * Inferred Mineral Resources represent material that is considered too speculative to be included in economic evaluations. Additional trenching and/or drilling will be required to convert Inferred Mineral Resources to Measured or Indicated Mineral Resources. Mineral Resources that are not Mineral Reserves do not have demonstrated economic viability. There is no guarantee that all or any part of the Mineral Resource will be converted into a Mineral Reserve. Alabama Graphite Corp. is a proud member of the National Association of Advanced Technology Batteries International ("NAATBatt International"), a US-based, not-for-profit trade association commercializing advanced electrochemical energy-storage technologies for emerging, high-tech applications. For further information and updates on the Company or to sign up for Alabama Graphite Corp. News, please visit www.alabamagraphite.com or follow, like and subscribe to us on Twitter, Facebook, YouTube, and LinkedIn. This press release contains forward-looking information under applicable Canadian securities laws ("forward-looking statements"), which may include, without limitation, statements with respect to any potential relationships between the Company and any DoD- and DoE-related entities, any conduct of business involving the US federal government and its agencies and related laboratories, and any potential US government supply, granting and funding opportunities. The forward-looking statements are based on the beliefs of management and reflect Alabama Graphite Corp.'s current expectations. When used in this press release, the words "estimate", "project", "belief", "anticipate", "intend", "expect", "plan", "predict", "may" or "should" and the negative of these words or such variations thereon or comparable terminology are intended to identify forward-looking statements. Such statements reflect the current view of Alabama Graphite Corp. with respect to risks and uncertainties that may cause actual results to differ materially from those contemplated in those forward-looking statements. By their nature, forward-looking statements involve known and unknown risks, uncertainties and other factors which may cause our actual results, performance or achievements, or other future events, to be materially different from any future results, performance or achievements expressed or implied by such forward-looking statements. 
Such factors include, among other things, the interpretation and actual results of current exploration activities; changes in project parameters as plans continue to be refined; future prices of graphite; possible variations in grade or recovery rates; failure of equipment or processes to operate as anticipated; the failure of contracted parties to perform; labor disputes and other risks of the mining industry; delays in obtaining governmental approvals or financing or in the completion of exploration, as well as those factors disclosed in the Company's publicly filed documents. Forward-looking statements are also based on a number of assumptions, including that contracted parties provide goods and/or services on the agreed timeframes, that equipment necessary for exploration is available as scheduled and does not incur unforeseen breakdowns, that no labor shortages or delays are incurred, that plant and equipment function as specified, that no unusual geological or technical problems occur, and that laboratory and other related services are available and perform as contracted. Forward-looking statements are made based on management's beliefs, estimates and opinions on the date that statements are made and Alabama Graphite Corp. undertakes no obligation to update forward-looking statements (unless required by law) if these beliefs, estimates and opinions or other circumstances should change. Investors are cautioned against attributing undue certainty to forward-looking statements. Alabama Graphite Corp. cautions that the foregoing list of material factors and assumptions are not exhaustive. When relying on Alabama Graphite Corp. forward-looking statements to make decisions, investors and others should carefully consider the foregoing factors and assumptions and other uncertainties and potential events. Alabama Graphite Corp. has also assumed that the material factors and assumptions will not cause any forward-looking statements to differ materially from actual results or events. However, the list of these factors and assumptions is not exhaustive and is subject to change and there can be no assurance that such assumptions will reflect the actual outcome of such items or factors. NEITHER THE TSX VENTURE EXCHANGE NOR ITS REGULATION SERVICE PROVIDER (AS THAT TERM IS DEFINED IN THE POLICIES OF THE TSX VENTURE EXCHANGE) ACCEPTS RESPONSIBILITY FOR THE ADEQUACY OR ACCURACY OF THE CONTENT OF THIS NEWS RELEASE.


2-D material a brittle surprise: Rice University researchers find molybdenum diselenide not as strong as they thought Abstract: Scientists at Rice University have discovered that an atom-thick material being eyed for flexible electronics and next-generation optical devices is more brittle than they expected. The Rice team, led by materials scientist Jun Lou, tested the tensile strength of two-dimensional, semiconducting molybdenum diselenide and discovered that flaws as small as one missing atom can initiate catastrophic cracking under strain. The team's report appears this month in Advanced Materials. The finding may cause industry to look more carefully at the properties of 2-D materials before incorporating them in new technologies, he said. "It turns out not all 2-D crystals are equal," said Lou, a Rice professor of materials science and nanoengineering. "Graphene is a lot more robust compared with some of the others we're dealing with right now, like this molybdenum diselenide. We think it has something to do with defects inherent to these materials." The defects could be as small as a single atom that leaves a vacancy in the crystalline structure, he said. "It's very hard to detect them," he said. "Even if a cluster of vacancies makes a bigger hole, it's difficult to find using any technique. It might be possible to see them with a transmission electron microscope, but that would be so labor-intensive that it wouldn't be useful." Molybdenum diselenide is a dichalcogenide, a two-dimensional semiconducting material that appears as a graphene-like hexagonal array from above but is actually a sandwich of metal atoms between two layers of chalcogen atoms, in this case selenium. Molybdenum diselenide is being considered for use in transistors and in next-generation solar cells, photodetectors and catalysts, as well as in other electronic and optical devices. Lou and colleagues measured the material's elastic modulus, a measure of stiffness given by the ratio of applied stress to the elastic (recoverable) strain it produces, at 177.2 (plus or minus 9.3) gigapascals. Graphene's elastic modulus is more than five times higher. They attributed the large variation to pre-existing flaws of between 3.6 and 77.5 nanometers. Its fracture strength, the stress at which the material breaks, was measured at 4.8 (plus or minus 2.9) gigapascals. Graphene is nearly 25 times stronger. Part of the project led by Rice postdoctoral researcher Yingchao Yang required moving molybdenum diselenide from a growth chamber in a chemical vapor deposition furnace to a microscope without introducing more defects. Yang solved the problem using a dry transfer process in place of a standard acid washing that would have ruined the samples. To test samples, Yang placed rectangles of molybdenum diselenide onto a sensitive electron microscope platform invented by the Lou group. Natural van der Waals forces held the samples in place on springy cantilever arms that measured the applied stress. Lou said the group attempted to measure the material's fracture toughness, an indicator of how likely cracks are to propagate, as they had in an earlier study on graphene. But they found that pre-cutting cracks into molybdenum diselenide resulted in it shattering before stress could be applied. "The important message of this work is the brittle nature of these materials," Lou said. "A lot of people are thinking about using 2-D crystals because they're inherently thin. 
They're thinking about flexible electronics because they are semiconductors and their theoretical elastic strength should be very high. According to our calculations, they can be stretched up to 10 percent. "But in reality, because of the inherent defects, you rarely can achieve that much strength. The samples we have tested so far broke at 2 to 3 percent (of the theoretical maximum) at most," Lou said. "That should still be fine for most flexible applications, but unless they find a way to quench the defects, it will be very hard to achieve the theoretical limits." Co-authors of the paper are Rice graduate students Emily Hacopian, Weibing Chen, Jing Zhang and Bo Li; alumna Yongji Gong; Pulickel Ajayan, chair of Rice’s Department of Materials Science and NanoEngineering, the Benjamin M. and Mary Greenwood Anderson Professor in Engineering and a professor of chemistry; Xing Li of Rice and Peking University, China; Minru Wen of Tsinghua University, China, and the Georgia Institute of Technology; Wu Zhou of Oak Ridge National Laboratory, Oak Ridge, Tenn.; Qing Chen of Peking University; and Ting Zhu of the Georgia Institute of Technology. The research was supported by the Air Force Office of Scientific Research, the Welch Foundation, the Department of Energy Office of Basic Energy Sciences, the National Science Foundation and the National Science Foundation of China. About Rice University Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation’s top 20 universities by U.S. News & World Report. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy. With 3,910 undergraduates and 2,809 graduate students, Rice’s undergraduate student-to-faculty ratio is 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for happiest students and for lots of race/class interaction by the Princeton Review. Rice is also rated as a best value among private universities by Kiplinger’s Personal Finance. To read “What they’re saying about Rice,” go to http://tinyurl.com/RiceUniversityoverview. Follow Rice News and Media Relations via Twitter @RiceUNews.
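As a rough consistency check on the figures quoted above (a back-of-the-envelope estimate using only the numbers reported in this article, and assuming approximately linear elasticity up to fracture), the strain at which the tested samples fail is about the fracture strength divided by the elastic modulus:

\[
\varepsilon_{f} \;\approx\; \frac{\sigma_{f}}{E} \;=\; \frac{4.8\ \text{GPa}}{177.2\ \text{GPa}} \;\approx\; 2.7\%,
\]

which is consistent with Lou's statement that real samples break at 2 to 3 percent strain, well below the roughly 10 percent theoretical limit.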


News Article | November 4, 2016
Site: www.eurekalert.org

From deep within the Earth to the upper atmosphere, the organisms and ecosystems highlighted in the 37 projects selected for the 2017 Community Science Program (CSP) of the U.S. Department of Energy Joint Genome Institute (DOE JGI), a DOE Office of Science User Facility, reflect the breadth and depth of interests researchers are exploring to find solutions to energy and environmental challenges. "These new CSP projects, selected through our external review process, exploit DOE JGI's experimental and analytical 'omics' capabilities and build our portfolio in key focus areas including sustainable bioenergy production, plant microbiomes and terrestrial biogeochemistry," said Susannah Tringe, DOE JGI User Programs Deputy. The CSP 2017 projects were selected from 98 full proposals received, resulting from 123 letters of intent submitted. The full list of projects may be found at http://jgi. . A number of the accepted proposals call for developing reference genomes for plants of relevance to bioenergy, either as potential feedstocks or because they possess biochemistries that could provide insights into better ways to derive biofuels from plant biomass. These will take advantage of powerful DOE JGI sequencing upgrades. Building upon the previously published reference genome for Sorghum bicolor generated by the DOE JGI, Todd Mockler of the Donald Danforth Plant Science Center has targeted the sequencing of a number of highly diverse sorghum lines. The project will explore and begin to assemble the pangenome to accelerate gene discovery and increase understanding of which variants are associated with different functional outcomes. This work is being conducted in close concert with DOE's Advanced Research Projects Agency-Energy's (ARPA-E) Transportation Energy Resources from Renewable Agriculture (TERRA) program. David Des Marais of Harvard University is similarly focused on Brachypodium, which has had two genomes - Brachypodium distachyon and B. sylvaticum - sequenced by the DOE JGI. Among his plans are estimating a pangenome for B. sylvaticum and sequencing four additional Brachypodium species to further functional genomic analysis research in grasses. Karen Aitken of Australia's national science agency CSIRO is focused on producing the first genome assembly of a cultivated sugarcane plant, namely variety R570. Cultivated sugarcane produces 80 percent of the world's sugar; because it is already harvested in large volumes and moved through an efficient transportation system, it is helping to guide optimized production strategies for other renewable and sustainable biofuel crops. Many projects focus on plants with demonstrated tolerance to stressors such as drought. The proposal from John Cushman of the University of Nevada seeks to establish the common or crystalline ice plant (Mesembryanthemum crystallinum L.) as a DOE JGI Flagship Genome species like poplar, soybean, and Brachypodium distachyon. Most plants use what is known as the C3 pathway to photosynthetically fix carbon (from CO2) in the presence of abundant nutrients, while plants in arid conditions rely on a different mechanism, the water-conserving CAM pathway. A rapidly growing annual native to the Namibian desert, the common ice plant is the first reported species that can switch from the C3 pathway to CAM when stressed by factors such as drought and salinity. Elizabeth Kellogg, also of the Danforth Center, requested a genome sequence for the big bluestem (Andropogon gerardii subsp. gerardii), the plant most associated with the Great Plains. 
The big bluestem dominates the tall grass prairie, accounting for up to 70 percent of biomass in some areas, and is highly resistant to climate change. In a complementary project, Karolina Heyduk of the University of Georgia seeks genome sequences for two yuccas: the C3 species Yucca aloifolia and the CAM species Y. filamentosa. The comparison of the genomes of these two related species could illuminate the genes and pathways involved in these two forms of photosynthesis. Similarly, the gene atlas for CAM plant Kalanchoe laxiflora proposed by Xiaohan Yang of Oak Ridge National Laboratory could allow researchers to understand how it responds to changes in environmental conditions, including temperature, water, and nitrogen source. Few conifers have had their genomes sequenced thus far, and Joerg Bohlmann of Canada's University of British Columbia wants to raise the number. He has targeted the genomes and transcriptomes of the common yew and western red cedar, as well as the transcriptome of the Jeffrey pine, all candidate bioenergy feedstocks. Two projects focus on the impact of fire on forests as full recovery can take decades. Daniel Cullen of Forest Products Laboratory is comparing how microbial communities in forests dominated by Lodgepole pine (Pinus contorta) fare in fire-disturbed and undisturbed forests, in part to define processes that underlie carbon cycling in coniferous forests. Similarly, Thomas Bruns of University of California, Berkeley seeks to learn more about the effects of pyrophilous fungi on post-fire soil carbon by studying the genomes and transcriptomes of 13 fungi as well as the metatranscriptomes of burned soils. Several projects focus on plant-microbe interactions, both on micro- and macroscales. Sharon Doty of the University of Washington is investigating poplar endophytes, one that fixes nitrogen and one involved in phytoremediation. Plant root microbial consortia are known to be critically important to plant growth and resilience to changing soil conditions but much remains to be learned about community composition and function. Devin Coleman-Derr of the USDA-ARS, with his resource allocation, is seeking to learn more about the role of root-associated Actinobacteria in promoting host fitness in sorghum and rice under drought conditions. Paul Dijkstra of Northern Arizona University, with his allocation, will illuminate soil bacterial transcriptional regulatory networks in switchgrass fields. William Whitman of the University of Georgia plans to develop pangenomes of 100-200 species of soil or plant-associated prokaryotes. The pangenome concept is vital to understanding the repertoire of genes upon which microbial populations may call as local environments change. Jared LeBoldus of Oregon State University is targeting metabolites of Sphaerulina musiva, the cause of Septoria stem canker and leaf spot disease in poplar. Christine Smart of Cornell University is characterizing the mechanisms of willow rust (Melampsora americana), and conducting a comparative genomic study involving M. americana and other Melampsora genomes sequenced by the DOE JGI that are known plant pathogens of poplar and other candidate bioenergy feedstocks. Gregory Bonito of Michigan State University will identify the mechanisms of attraction, communication, and growth-promoting activity between Mortierella, a close relative of arbuscular mycorrhizal fungi, and the DOE JGI Flagship plants. 
Nhu Nguyen of the University of California, Berkeley will be generating a genus-wide molecular phylogeny of Suillus, asking for 50 fungal genome sequences. Suillus fungi tolerate heavy metals, but the protection varies among hosts. Wayne Nicholson of the University of Florida will be generating a more comprehensive assessment of Carnobacterium, strains of which can be found in all pressure niches from the deep ocean to the upper atmosphere. Sean Crowe of Canada's University of British Columbia will characterize the methanome, comprising genomic information distributed across organisms that either produce or consume methane, which is both a source of fuel and a greenhouse gas. Graeme Attwood of New Zealand's AgResearch Ltd and his team seek to define gene function in rumen microbes, in part to control methane emissions from bovines. Soil emissions are a focus of Jennifer Pett-Ridge of Lawrence Livermore National Laboratory and Jonathan Raff of Indiana University. Pett-Ridge is determining the impact microbes in humid tropical forest soils have on carbon cycling, work that complements her DOE Early Career Research Program award. Raff intends to use samples from temperate hardwood forest sites to learn more about soil emissions of nitric oxide (NO) and more accurately represent NO sinks and sources in climate models. Alison Buchan of University of Tennessee, Knoxville is generating data about lignin-related aromatic compounds that are removed in salt marshes between river mouths and the open ocean, and the biochemical pathways employed in this process. A similar project comes from Christopher Francis of Stanford University and involves the San Francisco Bay Delta, the largest estuary on the west coast of North America. His team is investigating how environmental changes drive estuarine microbial community changes, and if certain pathways and organisms dominate under certain conditions, or if genes co-vary with specific functional gene ecotypes. Several projects are focused on algae for their roles in carbon fixation and for potential bioenergy applications. Chi Zhang of the University of Nebraska - Lincoln is focused on Zygnematales, the closest algal lineage to land plants, to learn more about how plant cell walls formed in the evolutionary transition from aquatic to land plants. The information could shed light on how to deconstruct plant cell walls for biofuel production without impacting plant viability. Matthew Johnson of Woods Hole Oceanographic Institution is interested in the bloom-forming Mesodinium rubrum and M. major complex. While these algae are major contributors to primary production, they acquire their ability to harness light energy for nutrients through predation, so genome sequencing would help distinguish native metabolic pathways from those of the prey. Jeffry Dudycha of the University of South Carolina is pursuing a project based on cryptophytes, eukaryotic microalgae that are important primary producers in aquatic environments and are capable of capturing a broad spectrum of available light. By sequencing representatives from all major clades, the team hopes to maximize ways diverse algal communities could boost lipid yields for biofuels. Biological soil crusts or biocrusts are extremely sensitive to climate changes. 
As surviving extreme drought is a rare feature in plants, Elena Lopez Peredo of Marine Biological Laboratory will be generating annotated genomes of Scenedesmus algae to learn more about the desiccation tolerance of green microalgae in biocrusts. In a complementary project, Steven Harris of University of Nebraska - Lincoln is detailing the interactions of fungal and algal components of biocrusts to learn more about how they can tolerate environmental stress. Several projects build off previous efforts to sequence 1,000 microbial genomes (KMG), which have so far led to the selection of nearly 2,300 type strains. Markus Göker of Germany's DSMZ is spearheading KMG-4 to sequence environmentally relevant cultures. Ramunas Stepanauskas of Bigelow Laboratory for Ocean Sciences is using single-cell genomics to target taxonomic "blind spots" - clades systematically underrepresented or missed due to primer mismatches or intervening sequences in marker genes. Barbara MacGregor of the University of North Carolina is exploring how horizontal gene transfer has shaped the genomes of large sulfur bacteria, which are often found in dense microbial mats and play roles in carbon, nitrogen, sulfur and phosphorus cycles. Mary Ann Moran of the University of Georgia is examining organic sulfur biogeochemistry in a coastal ocean ecosystem. She and colleagues from the Monterey Bay Aquarium Research Institute collected a time series of coastal microbes and microbial communities which will be subjected to sequencing to explore how metabolism shifts as community composition changes. Mark Dopson of Sweden's Linnaeus University has a project that deploys DOE JGI's single cell genomics resources on samples sourced from deep subsurface. Targeted microbiomes will come from deep bedrock waters including Mont Terri Underground Rock Lab in Switzerland and sites constructed to investigate the safety of deep geological storage of spent nuclear fuel. Data would provide knowledge contributing to informed decisions on safety. Ashley Shade of Michigan State University plans to use a synthetic bacterial community to study the interactions among multiple community members, and then link community structure to activity and to exometabolite interactions. In sum, these 37 new projects will furnish a genomic basis for ongoing and new explorations into the genetic bases for plant, fungal, algal, microbial, and microbial community mechanisms focused on important processes of biofuel and bioenergy feedstock composition and metabolisms, as well as related environmental processes of importance for DOE missions. The U.S. Department of Energy Joint Genome Institute, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory, is committed to advancing genomics in support of DOE missions related to clean energy generation and environmental characterization and cleanup. DOE JGI, headquartered in Walnut Creek, Calif., provides integrated high-throughput sequencing and computational analysis that enable systems-based scientific approaches to these challenges. Follow @doe_jgi on Twitter. DOE's Office of Science is the largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


Vogel T.,University of Georgia | Li Y.W.,University of Georgia | Li Y.W.,Oak Ridge National Laboratory | Wust T.,Swiss Federal Institute of forest | Landau D.P.,University of Georgia
Physical Review Letters | Year: 2013

We introduce a parallel Wang-Landau method based on the replica-exchange framework for Monte Carlo simulations. To demonstrate its advantages and general applicability for simulations of complex systems, we apply it to different spin models including spin glasses, the Ising model, and the Potts model, lattice protein adsorption, and the self-assembly process in amphiphilic solutions. Without loss of accuracy, the method gives significant speed-up and potentially scales up to petaflop machines. © 2013 American Physical Society.
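For readers who have not seen the underlying method, the sketch below is a minimal, single-walker Wang-Landau estimate of the density of states for a small 2D Ising model. It is not the parallel, replica-exchange-framework scheme introduced in the paper, which divides the energy range into overlapping windows with multiple walkers per window and exchanges configurations between neighboring windows; the lattice size, sweep length, and flatness criterion below are illustrative choices only.

```python
import numpy as np

def wang_landau_ising(L=8, flatness=0.8, ln_f_final=1e-4, seed=0):
    """Single-walker Wang-Landau estimate of ln g(E) for the periodic 2D Ising model."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    N = L * L

    def total_energy(s):
        # Each nearest-neighbor bond is counted once via the right and down neighbors.
        return int(-np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

    energies = range(-2 * N, 2 * N + 1, 4)           # allowed energy levels
    index = {E: i for i, E in enumerate(energies)}
    ln_g = np.zeros(len(index))                      # running estimate of ln g(E)
    hist = np.zeros(len(index))
    ln_f = 1.0                                       # modification factor, ln f
    E = total_energy(spins)

    while ln_f > ln_f_final:
        for _ in range(20000):
            i, j = rng.integers(L, size=2)
            dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                    + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            E_new = E + dE
            # Accept with probability min(1, g(E)/g(E_new)), using the current estimate of g.
            if np.log(rng.random()) < ln_g[index[E]] - ln_g[index[E_new]]:
                spins[i, j] *= -1
                E = E_new
            ln_g[index[E]] += ln_f
            hist[index[E]] += 1
        # Simplified flatness check over visited bins only; production codes track all
        # reachable energy levels before declaring the histogram flat.
        visited = hist[hist > 0]
        if visited.min() > flatness * visited.mean():
            hist[:] = 0
            ln_f /= 2.0                              # refine: f -> sqrt(f)
    return np.array(list(energies)), ln_g

if __name__ == "__main__":
    E_levels, ln_g = wang_landau_ising()
    print(ln_g[:5])
```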


Pennycook S.J.,Oak Ridge National Laboratory | Colliex C.,University Paris - Sud
MRS Bulletin | Year: 2012

In the scanning transmission electron microscope, multiple signals can be simultaneously collected, including the transmitted and scattered electron signals (bright field and annular dark field or Z-contrast images), along with spectroscopic signals such as inelastically scattered electrons and emitted photons. In the last few years, the successful development of aberration correctors for the electron microscope has transformed the field of electron microscopy, opening up new possibilities for correlating structure to functionality. Aberration correction not only allows for enhanced structural resolution with incident probes into the sub-Ångstrom range, but can also provide greater probe currents to facilitate mapping of intrinsically weak spectroscopic signals at the nanoscale or even the atomic level. In this issue of MRS Bulletin, we illustrate the power of the new generation of electron microscopes with a combination of imaging and spectroscopy. We show the mapping of elemental distributions at atomic resolution and also the mapping of electronic and optical properties at unprecedented spatial resolution, with applications ranging from graphene to plasmonic nanostructures, and oxide interfaces to biology. © 2012 Materials Research Society.


Yueh K.,EPRI | Terrani K.A.,Oak Ridge National Laboratory
Journal of Nuclear Materials | Year: 2014

The feasibility of using SiCf-SiCm composites in light water reactor (LWR) fuel designs was evaluated. The evaluation was motivated by the desire to improve fuel performance under normal and accident conditions. The Fukushima accident once again highlighted the need for improved fuel materials that can maintain fuel integrity to higher temperatures for longer periods of time. The review identified many benefits as well as issues in using the material. Issues perceived as presenting the biggest challenges to the concept were identified to be flux gradient induced differential volumetric swelling, fragmentation and thermal shock resistance. The oxidation of silicon and its release into the coolant as silica has been identified as an issue because existing plant systems have limited ability for its removal. Detailed evaluation using available literature data and testing as part of this evaluation effort have eliminated most of the major concerns. The evaluation identified Boiling Water Reactor (BWR) channel, BWR fuel water tube, and Pressurized Water Reactor (PWR) guide tube as feasible applications for SiC composite. A program has been initiated to resolve some of the remaining issues and to generate physical property data to support the design of commercial fuel components. © 2014 Elsevier B.V. All rights reserved.


Rubin J.,University of Maine, United States | Leiby P.N.,Oak Ridge National Laboratory
Energy Policy | Year: 2013

This research examines the economic implications of different designs for a national low carbon fuel standard (NLCFS) for the road transportation sector. A NLCFS based on the average Carbon Intensity (CI) of all fuels sold generates an incentive for fuel suppliers to reduce the measured CI of their fuels. The economic impacts are determined by the availability of low carbon fuels, estimates of which can vary widely. Also important are the compliance path, reference level CI, and the design of the credit system, particularly the opportunities for trading and banking. To quantitatively examine the implications of a NLCFS, we created the Transportation Regulation and Credit Trading (TRACT) Model. With TRACT, we model a NLCFS credit trading system among profit maximizing fuel suppliers for light- and heavy-duty vehicle fuel use for the United States from 2012 to 2030. We find that credit trading across gasoline and diesel fuel markets can lower the average costs of carbon reductions by anywhere from an insignificant amount to 98%, depending on forecasts of biofuel supplies and carbon intensities. Adding banking of credits on top of trading can further lower the average cost of carbon reductions by 5%-9% and greatly reduce year-to-year fluctuations in credit prices. © 2012 Elsevier Ltd.
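To make the credit mechanics concrete, here is a toy illustration of how an average-CI standard generates credits and deficits (this is not the TRACT model; the carbon intensities, energy volumes, and reference CI below are invented). A fuel earns credits in proportion to how far its carbon intensity falls below the reference value, and incurs deficits when it lies above:

```python
# Toy LCFS-style credit calculation; all numbers are hypothetical.
FUELS = {
    # name: (carbon intensity in gCO2e/MJ, energy sold in billion MJ)
    "gasoline":     (95.0, 1000.0),
    "diesel":       (98.0,  800.0),
    "corn_ethanol": (70.0,   90.0),
    "cell_ethanol": (30.0,   10.0),
}
CI_REFERENCE = 90.0  # reference carbon intensity for the compliance year, gCO2e/MJ

def credit_balance(fuels, ci_ref):
    """Credits (+) or deficits (-) per fuel, in million tonnes of CO2e.
    credit_i = (CI_ref - CI_i) * E_i, with g/MJ times 1e9 MJ converted to million tonnes."""
    balance = {}
    for name, (ci, energy_billion_mj) in fuels.items():
        grams = (ci_ref - ci) * energy_billion_mj * 1e9
        balance[name] = grams / 1e12          # grams -> million tonnes
    return balance

bal = credit_balance(FUELS, CI_REFERENCE)
print(bal, "net (Mt CO2e):", round(sum(bal.values()), 2))
```

In a trading system of this kind, suppliers with negative balances must buy credits from those with positive balances; the cost consequences of that trading, and of banking credits across years, are what the TRACT model quantifies.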


Dai P.,University of Tennessee at Knoxville | Dai P.,CAS Institute of Physics | Hu J.,CAS Institute of Physics | Hu J.,Purdue University | And 2 more authors.
Nature Physics | Year: 2012

High-temperature superconductivity in the iron-based materials emerges from, or sometimes coexists with, their metallic or insulating parent compound states. This is surprising, as these undoped states exhibit dramatically different antiferromagnetic spin arrangements and Néel temperatures. Although there is a general consensus that magnetic interactions are important for superconductivity, much remains unknown concerning the microscopic origin of the magnetic states. In this review, we summarize the progress in this area, focusing on recent experimental and theoretical results, and their microscopic implications. We conclude that the parent compounds are in a state that is more complex than that implied by a simple Fermi surface nesting scenario, and a dual description including both itinerant and localized degrees of freedom is needed to properly describe these fascinating materials. © 2012 Macmillan Publishers Limited.


Xiao D.,Oak Ridge National Laboratory | Liu G.-B.,University of Hong Kong | Feng W.,Oak Ridge National Laboratory | Feng W.,University of Tennessee at Knoxville | And 3 more authors.
Physical Review Letters | Year: 2012

We show that inversion symmetry breaking together with spin-orbit coupling leads to coupled spin and valley physics in monolayers of MoS2 and other group-VI dichalcogenides, making possible controls of spin and valley in these 2D materials. The spin-valley coupling at the valence-band edges suppresses spin and valley relaxation, as flip of each index alone is forbidden by the valley-contrasting spin splitting. Valley Hall and spin Hall effects coexist in both electron-doped and hole-doped systems. Optical interband transitions have frequency-dependent polarization selection rules which allow selective photoexcitation of carriers with various combinations of valley and spin indices. Photoinduced spin Hall and valley Hall effects can generate long lived spin and valley accumulations on sample boundaries. The physics discussed here provides a route towards the integration of valleytronics and spintronics in multivalley materials with strong spin-orbit coupling and inversion symmetry breaking. © 2012 American Physical Society.


Dathar G.K.P.,Oak Ridge National Laboratory | Shelton W.A.,Oak Ridge National Laboratory | Shelton W.A.,Pacific Northwest National Laboratory | Xu Y.,Oak Ridge National Laboratory
Journal of Physical Chemistry Letters | Year: 2012

Periodic density functional theory (DFT) calculations indicate that the intrinsic activity of Au, Ag, Pt, Pd, Ir, and Ru for the oxygen reduction reaction by Li (Li-ORR) forms a volcano-like trend with respect to the adsorption energy of oxygen, with Pt and Pd being the most active. The trend is based on two mechanisms: the reduction of molecular O2 on Au and Ag and of atomic O on the remaining metals. Step edges are found to be more active for catalyzing the Li-ORR than close-packed surfaces. Our findings identify important considerations in the design of catalyst-promoted air cathodes for nonaqueous Li-air batteries. © 2012 American Chemical Society.


Robinson S.A.,University of Wollongong | Erickson D.J.,Oak Ridge National Laboratory
Global Change Biology | Year: 2015

Climate scientists have concluded that stratospheric ozone depletion has been a major driver of Southern Hemisphere climate processes since about 1980. The implications of these observed and modelled changes in climate are likely to be far more pervasive for both terrestrial and marine ecosystems than the increase in ultraviolet-B radiation due to ozone depletion; however, they have been largely overlooked in the biological literature. Here, we synthesize the current understanding of how ozone depletion has impacted Southern Hemisphere climate and highlight the relatively few documented impacts on terrestrial and marine ecosystems. Reviewing the climate literature, we present examples of how ozone depletion changes atmospheric and oceanic circulation, with an emphasis on how these alterations in the physical climate system affect Southern Hemisphere weather, especially over the summer season (December-February). These potentially include increased incidence of extreme events, resulting in costly floods, drought, wildfires and serious environmental damage. The ecosystem impacts documented so far include changes to growth rates of South American and New Zealand trees, decreased growth of Antarctic mosses and changing biodiversity in Antarctic lakes. The objective of this synthesis was to stimulate the ecological community to look beyond ultraviolet-B radiation when considering the impacts of ozone depletion. Such widespread changes in Southern Hemisphere climate are likely to have had as much or more impact on natural ecosystems and food production over the past few decades, than the increased ultraviolet radiation due to ozone depletion. © 2014 John Wiley & Sons Ltd.


Dittmer R.,TU Darmstadt | Jo W.,TU Darmstadt | Rodel J.,TU Darmstadt | Kalinin S.,Oak Ridge National Laboratory | Balke N.,Oak Ridge National Laboratory
Advanced Functional Materials | Year: 2012

Piezoresponse force microscopy (PFM) is used to afford insight into the nanoscale electromechanical behavior of lead-free piezoceramics. Materials based on Bi1/2Na1/2TiO3 exhibit high strains mediated by a field-induced phase transition. Using the band excitation technique, the initial domain morphology, the poling behavior, the switching behavior, and the time-dependent phase stability in the pseudo-ternary system (1-x)(0.94Bi1/2Na1/2TiO3-0.06BaTiO3)-xK0.5Na0.5NbO3 (0 <= x <= 18 mol%) are revealed. In the base material (x = 0 mol%), macroscopic domains and ferroelectric switching can be induced from the initial relaxor state with sufficiently high electric field, yielding large macroscopic remanent strain and polarization. The addition of KNN increases the threshold field required to induce long range order and decreases the stability thereof. For x = 3 mol% the field-induced domains relax completely, which is also reflected in zero macroscopic remanence. Eventually, no long range order can be induced for x >= 3 mol%. This PFM study provides a novel perspective on the interplay between macroscopic and nanoscopic material properties in bulk lead-free piezoceramics. Ferroelectric switching on the nanoscale in Bi1/2Na1/2TiO3 (BNT)-BaTiO3 (BT) lead-free ceramics is characterized using a scanning probe microscopy tip. For a densely spaced grid, ferroelectric switching hysteresis is measured at single points and analyzed with respect to the switchable polarization. In the resulting switching polarization map, individual grains can be distinguished. Ferroelectric switching loops are extracted from several grains and compared. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Hudiburg T.W.,University of Illinois at Urbana - Champaign | Law B.E.,Oregon State University | Thornton P.E.,Oak Ridge National Laboratory
Biogeosciences | Year: 2013

Ecosystem process models are important tools for determining the interactive effects of global change and disturbance on forest carbon dynamics. Here we evaluated and improved terrestrial carbon cycling simulated by the Community Land Model (CLM4), the land model portion of the Community Earth System Model (CESM1.0.4). Our analysis was conducted primarily in Oregon forests using FLUXNET and forest inventory data for the period 2001-2006. We go beyond prior modeling studies in the region by incorporating regional variation in physiological parameters from >100 independent field sites. We also compare spatial patterns of simulated forest carbon stocks and net primary production (NPP) at 15 km resolution using data collected from federal forest inventory plots (FIA) from >3000 plots in the study region. Finally, we evaluate simulated gross primary production (GPP) with FLUXNET eddy covariance tower data at wet and dry sites in the region. We improved model estimates by making modifications to CLM4 to allow physiological parameters (e.g., foliage carbon to nitrogen ratios and specific leaf area), mortality rate, biological nitrogen fixation, and wood allocation to vary spatially by plant functional type (PFT) within an ecoregion based on field plot data in the region. Prior to modifications, default parameters resulted in underestimation of stem biomass in all forested ecoregions except the Blue Mountains and annual NPP was both over- and underestimated. After modifications, model estimates of mean NPP fell within the observed range of uncertainty in all ecoregions (two-sided P value = 0.8), and the underestimation of stem biomass was reduced. This was an improvement from the default configuration by 50% for stem biomass and 30% for NPP. At the tower sites, modeled monthly GPP fell within the observed range of uncertainty at both sites for the majority of the year; however, summer GPP was underestimated at the Metolius semi-arid pine site and spring GPP was overestimated at the Campbell River mesic Douglas-fir site, indicating GPP may be an area for further improvement. The low bias in summer maximum GPP at the semi-arid site could be due to seasonal response of Vcmax to temperature and precipitation while overestimated spring values at the mesic site could be due to response of Vcmax to temperature and day length. © Author(s) 2013.
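The kind of model-observation comparison described here can be sketched in a few lines (the ecoregion-mean NPP values below are invented placeholders; the actual study aggregates FIA plot data and 15 km CLM4 output and reports a two-sided P value of 0.8):

```python
import numpy as np
from scipy import stats

# Hypothetical ecoregion-mean NPP (gC m-2 yr-1); placeholders, not the study's data.
observed  = np.array([520.0, 610.0, 455.0, 700.0, 390.0, 300.0])
simulated = np.array([500.0, 640.0, 430.0, 690.0, 420.0, 280.0])

bias = simulated - observed
relative_bias = bias / observed

# Paired two-sided test: do simulated and observed ecoregion means differ on average?
t_stat, p_value = stats.ttest_rel(simulated, observed)
print(f"mean bias = {bias.mean():.1f} gC m-2 yr-1, "
      f"mean relative bias = {relative_bias.mean():+.1%}, two-sided p = {p_value:.2f}")
```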


Liu T.,Pacific Northwest National Laboratory | Wang X.,Oak Ridge National Laboratory | Hoffmann C.,Oak Ridge National Laboratory | Dubois D.L.,Pacific Northwest National Laboratory | Bullock R.M.,Pacific Northwest National Laboratory
Angewandte Chemie - International Edition | Year: 2014

Hydrogenase enzymes in nature use hydrogen as a fuel, but the heterolytic cleavage of H-H bonds cannot be readily observed in enzymes. Here we show that an iron complex with pendant amines in the diphosphine ligand cleaves hydrogen heterolytically. The product has a strong Fe-H⋯H-N dihydrogen bond. The structure was determined by single-crystal neutron diffraction, and has a remarkably short H⋯H distance of 1.489(10) Å between the protic N-Hδ+ and hydridic Fe-Hδ- moieties. The structural data for [CpC5F4NFeH(PtBu2NtBu2H)]+ provide a glimpse of how the H-H bond is oxidized or generated in hydrogenase enzymes. These results now provide a full picture for the first time, illustrating structures and reactivity of the dihydrogen complex and the product of the heterolytic cleavage of H2 in a functional model of the active site of the [FeFe] hydrogenase enzyme. Caught in the act: The addition of H2 to a synthetic iron complex containing a pendant amine (a model complex for [FeFe] hydrogenase) leads to facile heterolytic cleavage of H2. Neutron diffraction indicates a very short H⋯H bond distance of 1.489(10) Å in the Fe-H⋯H-N complex, thus providing a glimpse of how the H-H bond is oxidized in hydrogenase enzymes. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Feng W.,CAS Institute of Physics | Xiao D.,Oak Ridge National Laboratory | Ding J.,CAS Institute of Physics | Yao Y.,CAS Institute of Physics
Physical Review Letters | Year: 2011

Using first-principles calculations within density functional theory, we investigate the band topology of ternary chalcopyrites of composition I-III-VI2 and II-IV-V2. By exploiting adiabatic continuity of their band structures to the binary 3D-HgTe, combined with direct evaluation of the Z2 topological invariant, we show that a large number of chalcopyrites can realize the topological insulating phase in their native states. The ability to host room-temperature ferromagnetism in the same chalcopyrite family makes them appealing candidates for novel spintronics devices. © 2011 The American Physical Society.


Yang K.,Duke University | Setyawan W.,Pacific Northwest National Laboratory | Wang S.,Duke University | Buongiorno Nardelli M.,University of North Texas | And 3 more authors.
Nature Materials | Year: 2012

Topological insulators (TI) are becoming one of the most studied classes of novel materials because of their great potential for applications ranging from spintronics to quantum computers. To fully integrate TI materials in electronic devices, high-quality epitaxial single-crystalline phases with sufficiently large bulk bandgaps are necessary. Current efforts have relied mostly on costly and time-consuming trial-and-error procedures. Here we show that by defining a reliable and accessible descriptor, which represents the topological robustness or feasibility of the candidate, and by searching the quantum materials repository aflowlib.org, we have automatically discovered 28 TIs (some of them already known) in five different symmetry families. These include peculiar ternary halides, Cs{Sn,Pb,Ge}{Cl,Br,I}3, which could have been hardly anticipated without high-throughput means. Our search model, by relying on the significance of repositories in materials development, opens new avenues for the discovery of more TIs in different and unexplored classes of systems. © 2012 Macmillan Publishers Limited. All rights reserved.


Li M.,Brookhaven National Laboratory | Cullen D.A.,Oak Ridge National Laboratory | Sasaki K.,Brookhaven National Laboratory | Marinkovic N.S.,University of Delaware | And 2 more authors.
Journal of the American Chemical Society | Year: 2013

Splitting the C-C bond is the main obstacle to electrooxidation of ethanol (EOR) to CO2. We recently demonstrated that the ternary PtRhSnO2 electrocatalyst can accomplish that reaction at room temperature with Rh having a unique capability to split the C-C bond. In this article, we report the finding that Ir can be induced to split the C-C bond as a component of the ternary catalyst. We characterized and compared the properties of several carbon-supported nanoparticle (NP) electrocatalysts comprising a SnO2 NP core decorated with multimetallic nanoislands (MM = PtIr, PtRh, IrRh, PtIrRh) prepared using a seeded growth approach. An array of characterization techniques were employed to establish the composition and architecture of the synthesized MM/SnO2 NPs, while electrochemical and in situ infrared reflection absorption spectroscopy studies elucidated trends in activity and the nature of the reaction intermediates and products. Both EOR reactivity and selectivity toward CO2 formation of several of these MM/SnO2/C electrocatalysts are significantly higher compared to conventional Pt/C and Pt/SnO2/C catalysts. We demonstrate that the PtIr/SnO2/C catalyst with high Ir content shows outstanding catalytic properties with the most negative EOR onset potential and reasonably good selectivity toward ethanol complete oxidation to CO2. © 2012 American Chemical Society.


Beste A.,Bethel University | Buchanan III A.C.,Oak Ridge National Laboratory
Chemical Physics Letters | Year: 2012

Employing kinetic Monte Carlo, we simulated the radical chain propagation of the pyrolysis of phenethyl phenyl ether (PPE), which serves as a model compound for the β-O-4 linkage in lignin. The input rate constants were obtained with transition state theory based on density functional calculations. Pre- and postcomplexes for hydrogen abstraction and β-scission reactions were included assuming thermal equilibrium. Individual rate constants compare well with experimental estimates. The calculated overall α/β-product selectivity is qualitatively in agreement with experiment. The simulation revealed that the carbon-carbon phenyl shift reaction for the β-PPE radical is part of the pyrolysis mechanism. © 2012 Elsevier B.V. All rights reserved.
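For readers unfamiliar with the method, a kinetic Monte Carlo simulation of this kind advances a reaction network one stochastic event at a time, choosing each reaction channel in proportion to its rate. The sketch below is a generic Gillespie-type implementation over a made-up, heavily lumped reaction set; the species, stoichiometry, and rate constants are illustrative and are not the PPE mechanism or the transition-state-theory rate constants computed in the paper.

```python
import numpy as np

# Hypothetical lumped network standing in for a radical chain; rate constants are
# illustrative numbers (units and volume factors are glossed over in this sketch).
# Species indices: 0 substrate, 1 alpha radical, 2 beta radical, 3 alpha products, 4 beta products
REACTIONS = [
    # (rate constant, reactant counts, product counts)
    (1.0e-3, {0: 1},       {1: 1}),        # lumped initiation to the alpha channel
    (1.0e-3, {0: 1},       {2: 1}),        # lumped initiation to the beta channel
    (5.0e2,  {0: 1, 1: 1}, {3: 1, 1: 1}),  # alpha path: abstraction + scission, chain carried
    (1.0e3,  {0: 1, 2: 1}, {4: 1, 2: 1}),  # beta path: abstraction + scission, chain carried
]

def gillespie(x0, reactions, t_end, seed=0):
    """Stochastic simulation algorithm: propensity a_j = k_j * product of reactant counts;
    waiting times are exponential with mean 1/sum(a_j)."""
    rng = np.random.default_rng(seed)
    x, t = dict(x0), 0.0
    while t < t_end:
        props = []
        for k, reactants, _ in reactions:
            a = k
            for sp, nu in reactants.items():
                a *= x.get(sp, 0) ** nu
            props.append(a)
        a0 = sum(props)
        if a0 == 0:
            break                                    # nothing left to react
        t += rng.exponential(1.0 / a0)               # time to the next event
        j = rng.choice(len(reactions), p=np.array(props) / a0)
        for sp, nu in reactions[j][1].items():       # consume reactants
            x[sp] -= nu
        for sp, nu in reactions[j][2].items():       # form products
            x[sp] = x.get(sp, 0) + nu
    return x

final = gillespie({0: 10000, 1: 0, 2: 0, 3: 0, 4: 0}, REACTIONS, t_end=1.0)
print("alpha/beta product ratio:", final[3] / max(final[4], 1))
```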


Zhang J.,CAS Institute of Physics | Zhang X.-G.,Oak Ridge National Laboratory | Han X.F.,CAS Institute of Physics
Applied Physics Letters | Year: 2012

The orbital composition of the electrode wave functions and the complex bands within the barrier band gap are two important factors in deciding the spin-filter effect. This is illustrated in a class of spinel oxides, including MgAl2O4, ZnAl2O4, SiMg2O4, and SiZn2O4. Through first-principles calculations of the complex bands and electron transmission along the [001] direction, they are shown to have the same Δ1 spin-filter effect as MgO due to the combination of both factors. Due to better lattice match with typical bcc magnetic electrodes than MgO, these materials provide a broad spectrum of candidate materials for magnetic tunnel junctions. © 2012 American Institute of Physics.


Cao B.,State University of New York at Stony Brook | Cao B.,Brookhaven National Laboratory | Veith G.M.,Oak Ridge National Laboratory | Neuefeind J.C.,Oak Ridge National Laboratory | And 3 more authors.
Journal of the American Chemical Society | Year: 2013

A two-step solid-state reaction for preparing cobalt molybdenum nitride with a nanoscale morphology has been used to produce a highly active and stable electrocatalyst for the hydrogen evolution reaction (HER) under acidic conditions that achieves an iR-corrected current density of 10 mA cm-2 at -0.20 V vs RHE at low catalyst loadings of 0.24 mg/cm2 in rotating disk experiments under a H2 atmosphere. Neutron powder diffraction and pair distribution function (PDF) studies have been used to overcome the insensitivity of X-ray diffraction data to different transition-metal nitride structural polytypes and show that this cobalt molybdenum nitride crystallizes in space group P63/mmc with lattice parameters of a = 2.85176(2) Å and c = 10.9862(3) Å and a formula of Co0.6Mo1.4N2. This space group results from the four-layered stacking sequence of a mixed close-packed structure with alternating layers of transition metals in octahedral and trigonal prismatic coordination and is a structure type for which HER activity has not previously been reported. Based on the accurate bond distances obtained from time-of-flight neutron diffraction data, it is determined that the octahedral sites contain a mixture of divalent Co and trivalent Mo, while the trigonal prismatic sites contain Mo in a higher oxidation state. X-ray photoelectron spectroscopy (XPS) studies confirm that at the sample surface nitrogen is present and N-H moieties are abundant. © 2013 American Chemical Society.


Siefert J.A.,EPRI | David S.A.,Oak Ridge National Laboratory
Science and Technology of Welding and Joining | Year: 2014

Advanced ultrasupercritical steam conditions of up to 760°C and 34.5 MPa have been investigated in various programmes around the world over the last two decades. To date, much progress has been made, and three candidate materials, namely ferritic, austenitic and nickel base superalloys, have been investigated for high temperature strength, corrosion resistance and weldability. In an earlier published paper, welding and weldability of ferritic alloys were discussed. This paper considers the unique weldability characteristics for utilisation of austenitic stainless steels in future advanced ultrasupercritical fossil power plant designs and covers topics such as fundamentals of austenitic stainless steel welds, including weldability, filler metals and dissimilar metal welds, and discusses the prognosis for this class of materials for advanced ultrasupercritical fossil fired power plants. © 2014 Institute of Materials, Minerals and Mining.


News Article | February 24, 2017
Site: www.technologyreview.com

The company, backed by Peter Thiel’s Founders Fund, revised inflated assertions about its advanced reactor design after growing concerns prompted an MIT review. Nuclear energy startup Transatomic Power has backed away from bold claims for its advanced reactor technology after an informal review by MIT professors highlighted serious errors in the company’s calculations, MIT Technology Review has learned. The Cambridge, Massachusetts-based company, founded in 2011 by a pair of MIT students in the Department of Nuclear Science and Engineering, asserted that its molten-salt reactor design could run on spent nuclear fuel from conventional reactors and generate energy far more efficiently than they do. In a white paper published in March 2014, the company proclaimed its reactor “can generate up to 75 times more electricity per ton of mined uranium than a light-water reactor.” Those lofty claims helped it raise millions in venture capital, secure a series of glowing media profiles (including one in this publication), and draw a rock-star lineup of technical advisors. But in a paper on its site dated November 2016, the company downgraded “75 times” to “more than twice.” In addition, it now specifies that the design “does not reduce existing stockpiles of spent nuclear fuel” or use them as its fuel source. The promise of recycling nuclear waste, which poses tricky storage and proliferation challenges, was a key initial attraction of the company and captured considerable attention. “In early 2016, we realized there was a problem with our initial analysis and started working to correct the error,” cofounder Leslie Dewan said in an e-mail response to an inquiry from MIT Technology Review. The dramatic revisions followed an analysis in late 2015 by Kord Smith, a professor of  nuclear science and engineering at MIT and an expert in the physics of nuclear reactors. At that point, there were growing doubts in the field about the company’s claims and at least some worries that any inflated claims could tarnish the reputation of MIT’s nuclear department, which has been closely associated with the company. Transatomic also has a three-year research agreement with the department, according to earlier press releases. In reviewing the company’s white paper, Smith noticed immediate red flags. He relayed his concerns to his department head and the company, and he subsequently conducted an informal review with two other professors. “I said this is obviously incorrect based on basic physics,” Smith says. He asked the company to run a test, which ended up confirming that “their claims were completely untrue,” Smith says. He notes that promising to increase the reactor’s fuel efficiency by 75 times is the rough equivalent of saying that, in a single step, you'd developed a car that could get 2,500 miles per gallon. Ultimately, the company redid its analysis and produced and posted a new white paper. To be sure, it would still be a notable accomplishment if the company could build a reactor that improved fuel energy efficiency over conventional reactors by a factor more than two. In addition, the new white paper notes the reactor could reduce waste by 53 percent compared to light-water reactors. The viability of the latter finding was recently verified by Oak Ridge National Laboratory. But the analysis found that the reactor couldn’t sustain a fission chain reaction using spent fuel long enough for it to be a feasible option, as previously hoped, Dewan said in a subsequent phone interview. 
“I’m very glad that we have this robust technical analysis in place, confirmed by Oak Ridge, and we’re committed to moving ahead with our plans for low-cost, safe nuclear power,” Dewan said in her e-mail response. “We decided to open-publish our analysis and the results, because we wanted people to be able to repeat the calculations and examine it directly themselves. Also, we’ve talked before about how the nuclear industry should increase transparency to build public trust, and wanted to practice what we preached.” The company has previously said it intends to build a demonstration reactor by 2020, but that timeline has slipped to 2021, Dewan said during the phone interview. “We certainly have a long road ahead of us,” she said, noting technical, engineering, supply-chain, and regulatory challenges. “But I think that momentum is on our side.” Nuclear experts say that approval and construction of any advanced-reactor prototype requires a minimum of 10 years, given the extensive regulatory requirements of the Nuclear Regulatory Commission. Smith stresses that the founders weren’t acting in bad faith, but he did note they didn’t subject their claims to the peer-review process early on. “They didn’t do any of this intentionally,” Smith says. “It was just a lack of experience and perhaps an overconfidence in their own ability. And then not listening carefully enough when people were questioning the conclusions they were coming to.” The company has raised at least $4.5 million from Peter Thiel’s Founders Fund, Acadia Woods Partners, and Daniel Aegerter of Armada Investment AG. Venture capital veteran Ray Rothrock serves as chairman of the company.
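Smith's car analogy is easy to reproduce. The baseline fuel economy below is an assumption used for illustration; the article does not say which figure he had in mind:

```python
# Assumed baseline fuel economy for a conventional car (not stated in the article).
baseline_mpg = 33

claimed_gain = 75   # original white-paper claim relative to a light-water reactor
revised_gain = 2    # revised claim ("more than twice")

print(claimed_gain * baseline_mpg)  # ~2,475 mpg, roughly the "2,500 miles per gallon" car
print(revised_gain * baseline_mpg)  # ~66 mpg, a meaningful but far more modest step
```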


News Article | September 12, 2016
Site: www.scientificcomputing.com

The U.S. Department of Energy's (DOE) Exascale Computing Project (ECP) today announced its first round of funding with the selection of 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations. Exascale refers to high-performance computing systems capable of at least a billion billion calculations per second, or a factor of 50 to 100 times faster than the nation's most powerful supercomputers in use today. The 15 awards being announced total $39.8 million, targeting advanced modeling and simulation solutions to specific challenges supporting key DOE missions in science, clean energy and national security, as well as collaborations such as the Precision Medicine Initiative with the National Institutes of Health's National Cancer Institute. Of the proposals announced that are receiving full funding, two are being led by principal investigators at the DOE's Argonne National Laboratory: 1. Computing the Sky at Extreme Scales equips cosmologists with the ability to design foundational simulations to create "virtual universes" on demand at the extreme fidelities demanded by future multi-wavelength sky surveys. The new discoveries that will emerge from the combination of sky surveys and advanced simulation provided by the ECP will shed more light on three key ingredients of our universe: dark energy, dark matter and inflation. All three of these concepts reach beyond the known boundaries of the Standard Model of particle physics. 2. Exascale Deep Learning and Simulation Enabled Precision Medicine for Cancer focuses on building a scalable deep neural network code called the CANcer Distributed Learning Environment (CANDLE) that addresses three top challenges of the National Cancer Institute: understanding the molecular basis of key protein interactions, developing predictive models for drug response and automating the analysis and extraction of information from millions of cancer patient records to determine optimal cancer treatment strategies. Rick Stevens, Principal Investigator, Argonne National Laboratory, with Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Oak Ridge National Laboratory and the National Institutes of Health's National Cancer Institute (NIH/NCI). Additionally, a third project led by Argonne will be receiving seed funding: Multiscale Coupled Urban Systems will create an integrated modeling framework comprising data curation, analytics, modeling and simulation components that will equip city designers, planners and managers to scientifically develop and evaluate solutions to issues that affect cities now and in the future. The framework will focus first on integrating urban atmosphere and infrastructure heat exchange and air flow; building energy demand at district or city-scale, generation and use; urban dynamics and socioeconomic models; population mobility and transportation; and hooks to expand to include energy systems (biofuels, electricity and natural gas) and water resources. The application efforts will help guide DOE's development of a U.S. exascale ecosystem as part of President Obama's National Strategic Computing Initiative. DOE, the U.S. Department of Defense and the National Science Foundation have been designated as lead agencies, and ECP is the primary DOE contribution to the initiative. The ECP's multi-year mission is to maximize the benefits of high-performance computing for U.S. 
economic competitiveness, national security and scientific discovery. In addition to applications, the DOE project addresses hardware, software, platforms and workforce development needs critical to the effective development and deployment of future exascale systems. Leadership of the Exascale Computing Project comes from six DOE national laboratories: the Office of Science's Oak Ridge, Argonne and Lawrence Berkeley national labs and NNSA's Lawrence Livermore, Los Alamos and Sandia national labs.


The US Department of Energy (DOE) selected 33 small businesses to work directly with DOE national labs to accelerate the commercialization of new clean energy technologies. The department’s Office of Energy Efficiency and Renewable Energy is investing nearly $6.7 million under Round 1 of the new Small Business Vouchers (SBV) pilot. For Round 1, the small businesses and laboratories will collaborate on advancing a number of clean energy technologies, including water, wind, bioenergy, solar, buildings, vehicles, fuel cells, geothermal technologies, and advanced manufacturing. The selected small businesses will work with scientists at nine department laboratories: Oak Ridge National Laboratory (ORNL); National Renewable Energy Laboratory (NREL); Lawrence Berkeley National Laboratory (LBNL); Sandia National Laboratories (SNL); Pacific Northwest National Laboratory (PNNL); Idaho National Laboratory (INL); Los Alamos National Laboratory (LANL); Argonne National Laboratory (ANL); and Lawrence Livermore National Laboratory (LLNL). SBV is a collaborative, national effort that provides $20 million for US companies to help improve industry awareness of national laboratory capabilities and provide small and mid-size businesses access to the resources available within the national laboratory system. Vouchers range from $50,000-300,000. The companies were competitively chosen from a pool of hundreds of applications. Almost 40% of the businesses represent new DOE-industry partnerships and relationships with the national labs. Building on the tremendous response of Round 1, the department also has begun accepting applications for Round 2 of the SBV pilot. A total of $13 million worth of funding remains; over the course of the year, up to 100 vouchers will be awarded.


News Article | December 24, 2015
Site: www.techtimes.com

For the first time in three decades, the U.S. has produced plutonium-238, an isotope that powers NASA's rovers on Mars and fuels missions into the outer solar system and deep space beyond. NASA is close to running out of the isotope, so scientists at Tennessee's Oak Ridge National Laboratory have produced a golf-ball-sized lump of the radioactive material, 50 grams in weight. The space agency possesses just enough of the isotope for its planned 2020 Mars mission; after that a proposed mission to Jupiter's moon Europa would use up the rest. Pu-238 generates heat as it decays, reaching temperatures as high as 450 degrees Fahrenheit, and can be used in thermoelectric generators in spacecraft for long-range missions where solar panels would be impractical. The New Horizons spacecraft that flew by Pluto earlier this year is carrying around 24 pounds of plutonium dioxide generating around 245 Watts of electricity, according to NASA. "Radioisotope power systems are a key tool to power the next generation of planetary orbiters, landers and rovers in our quest to unravel the mysteries of the universe," said John Grunsfeld, associate administrator for the agency's Science Mission Directorate. The new chunk of the isotope produced for NASA is the first Pu-238 created in America since the Department of Energy's Savannah River Plant in South Carolina stopped producing it in the late 1980s. The renewed production "signals a new renaissance in the exploration of our solar system," Grunsfeld said. Under an agreement between NASA and the DOE, around 3 pounds of the plutonium isotope will be produced annually at a cost of $15 million, sufficient for one or possibly two deep space exploratory missions every decade. The small lump of the isotope recently produced will be analyzed for chemical purity and Pu-238 richness before scientists scale up production to the levels NASA requires, Oak Ridge officials said. "Once we automate and scale up the process, the nation will have a long-range capability to produce radioisotope power systems such as those used by NASA for deep space exploration," said Bob Wham, who leads the project for the lab's Nuclear Security and Isotope Technology Division.
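The figures quoted above translate into some rough per-kilogram numbers. These are unit conversions only; the 245 watts is the electrical output of New Horizons' complete generator at launch, so the watts-per-kilogram value reflects the generator's conversion efficiency, not a property of the isotope alone:

```python
LB_TO_KG = 0.4536

# New Horizons: roughly 24 lb of plutonium dioxide producing about 245 W of electricity.
fuel_kg = 24 * LB_TO_KG                    # ~10.9 kg
watts_per_kg = 245 / fuel_kg               # ~22 W of electricity per kg of fuel

# Planned production: about 3 lb per year at a cost of about $15 million per year.
annual_kg = 3 * LB_TO_KG                   # ~1.4 kg per year
usd_per_gram = 15e6 / (annual_kg * 1000)   # ~$11,000 per gram

print(f"{watts_per_kg:.0f} W/kg of fuel, ${usd_per_gram:,.0f} per gram")
```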


News Article | November 25, 2015
Site: cleantechnica.com

Just yesterday CleanTechnica outlined a revolution in auto manufacturing, with electric vehicle technology poised to challenge — and overrun — the internal combustion industry. The makings of an internal revolution within the EV field are also boiling to the surface as new EV battery technology emerges to challenge conventional lithium-ion systems. As the saying goes, may you live in interesting times… One significant challenge to lithium-ion EV battery technology is coming from the lithium-sulfur field. CleanTechnica has been following this new energy storage battery technology for the past couple of years, and in 2013 our sister site Gas2.org predicted that lithium-sulfur EV batteries would quickly surpass lithium-ion in terms of low cost and high energy density. In particular, we’ve been following lithium-sulfur battery research at Oak Ridge National Laboratory. In 2013 the lab announced that it had solved an important “catch-22” that had been holding things up, and it has been very busy since then. Last week, Oak Ridge announced that it has signed an exclusive lithium-sulfur battery agreement with a startup called Solid Power, a name that riffs on a fundamental difference between lithium-sulfur and lithium-ion energy storage technology. Here’s Oak Ridge with a rundown: “The ORNL technology will aid Solid Power in the development of solid-state rechargeable batteries that can provide two to three times the energy of conventional lithium ion technologies. Because all-solid batteries lack any volatile or flammable liquid components, they hold potential to save costs by eliminating many of the expensive safety features typically associated with lithium-ion systems.” Aside from leveraging the low cost of sulfur to produce low cost batteries, Solid Power has also focused on reducing costs at the manufacturing end through a “simple battery cell architecture that leverages industry standard manufacturing processes.” If the plans all work out, Solid Power will begin rolling out its first large-scale prototypes in December. Solid Power isn’t shy about throwing down the energy storage gauntlet, partly thanks to its solid cred even before the Oak Ridge agreement. The company spun out of the University of Colorado-Boulder in 2012, leveraging research at that institution, with this result: Our technology is based on combining an exceptionally high capacity cathode with a high capacity lithium metal anode in combination with a high ionic conductivity solid separator. Our battery materials are 100-percent inorganic and possess no flammable or volatile components. Our batteries provide substantially higher energy than conventional lithium ion (2-3X greater) while also enabling lower cost systems due to the potential for eliminating many of the costly safety features typically associated with lithium-ion systems. Oh, snap! No worries for Tesla, though — that giant lithium-ion “gigafactory” could easily be retooled for lithium sulfur EV batteries, right? As for lithium-sulfur batteries, the aforementioned “catch-22” involved the use of liquid electrolytes, just as with any other conventional battery formula. That’s all well and good for lithium-ion, but adding sulfur to the mix results in an extremely short lifespan. The breakthrough development by Oak Ridge involved developing a new class of synthetic sulfur-rich materials to stand in for the liquid electrolyte. 
Here’s Oak Ridge chief researcher Chengdu Liang with the lowdown back in 2013: “This game-changing shift from liquid to solid electrolytes eliminates the problem of sulfur dissolution and enables us to deliver on the promise of lithium-sulfur batteries. Our battery design has real potential to reduce cost, increase energy density and improve safety compared with existing lithium-ion technologies.” Liang’s optimism is certainly bearing out. An October 2015 article in the journal Nature outlines how the advent of a solid state lithium-sulfur EV battery can break the EV market wide open: …They deliver more power because solid electrolytes mean that the carbon-based anodes can be replaced with lithium metal, which has a higher energy density and cycle life, with less weight and cost. And without the need to package the liquid electrolyte safely, solid-state batteries can be made in more versatile shapes (even thin films), reducing manufacturing costs. This could make electric cars a more enticing proposition, with longer ranges and a lower purchase price. Nature makes it clear that we’re a few years away from mass production of a solid state EV battery, so we’ll be sure to keep an eye on Solid Power for updates. Meanwhile, speaking of a revolution from within, the emergence of a solid state EV battery will put more pressure on the nascent fuel cell sector of the EV market (fuel cell vehicles produce electricity on the go rather than storing it in a battery). Earlier this morning CleanTechnica took note of the rather slow pace of development in the fuel cell EV sector, though the advent of sustainably sourced hydrogen fuel may provide some support for the fuel cell EV market while auto manufacturers work on their R&D.


News Article | December 29, 2015
Site: www.nanotech-now.com

Abstract: Understanding where and how phase transitions occur is critical to developing new generations of the materials used in high-performance batteries, sensors, energy-harvesting devices, medical diagnostic equipment and other applications. But until now there was no good way to study and simultaneously map these phenomena at the relevant length scales. Now, researchers at the Georgia Institute of Technology and Oak Ridge National Laboratory (ORNL) have developed a new nondestructive technique for investigating these material changes by examining the acoustic response at the nanoscale. Information obtained from this technique - which uses electrically conductive atomic force microscope (AFM) probes - could guide efforts to design materials with enhanced properties at small size scales. The approach has been used in ferroelectric materials, but could also have applications in ferroelastics, solid protonic acids and materials known as relaxors. Sponsored by the National Science Foundation and the Department of Energy's Office of Science, the research was reported December 15 in the journal Advanced Functional Materials. "We have developed a new characterization technique that allows us to study changes in the crystalline structure and changes in materials behavior at substantially smaller length scales with a relatively simple approach," said Nazanin Bassiri-Gharb, an associate professor in Georgia Tech's Woodruff School of Mechanical Engineering. "Knowing where these phase transitions happen and at which length scales can help us design next-generation materials." In ferroelectric materials such as PZT (lead zirconate titanate), phase transitions can occur at the boundaries between one crystal type and another, under external stimuli. Properties such as the piezoelectric and dielectric effects can be amplified at the boundaries, which are caused by the multi-element "confused chemistry" of the materials. Determining when these transitions occur can be done in bulk materials using various techniques, and at the smallest scales using an electron microscope. The researchers realized they could detect these phase transitions using acoustic techniques in samples at size scales between the bulk and tens of atoms. Using band-excitation piezoresponse force microscopy (BE-PFM) techniques developed at ORNL, they analyzed the resulting changes in resonant frequencies to detect phase changes in sample sizes relevant to the material applications. To do that, they applied an electric field to the samples using an AFM tip that had been coated with platinum to make it conductive, while generating and detecting a band of frequencies. "We've had very good techniques for characterizing these phase changes at the large scale, and we've been able to use electron microscopy to figure out almost atomistically where the phase transition occurs, but until this technique was developed, we had nothing in between," said Bassiri-Gharb. "To influence the structure of these materials through chemical or other means, we really needed to know where the transition breaks down, and at what length scale that occurs. This technique fills a gap in our knowledge." The changes the researchers detect acoustically are due to the elastic properties of the materials, so virtually any material with similar changes in elastic properties could be studied in this way.
Bassiri-Gharb is interested in ferroelectrics such as PZT, but materials used in fuel cells, batteries, transducers and energy-harvesting devices could also be examined this way. "This new method will allow for much greater insight into energy-harvesting and energy transduction materials at the relevant length scales," noted Rama Vasudevan, the first author of the paper and a materials scientist at the Center for Nanophase Materials Sciences, a U.S. Department of Energy user facility at ORNL. The researchers also modeled the relaxor-ferroelectric materials using thermodynamic methods, which supported the existence of a phase transition and the evolution of a complex domain pattern, in agreement with the experimental results. Use of the AFM-based technique offers a number of attractive features. Laboratories already using AFM equipment can easily modify it to analyze these materials by adding electronic components and a conductive probe tip, Bassiri-Gharb noted. The AFM equipment can be operated under a range of temperature, electric field and other environmental conditions that are not easily implemented for electron microscope analysis, allowing scientists to study these materials under realistic operating conditions. "This technique can probe a range of different materials at small scales and under difficult environmental conditions that would be inaccessible otherwise," said Bassiri-Gharb. "Materials used in energy applications experience these kinds of conditions, and our technique can provide the information we need to engineer materials with enhanced responses." Though widely used, relaxor-ferroelectrics and PZT are still not well understood. In relaxor-ferroelectrics, for example, it's believed that there are pockets of material in phases that differ from the bulk, a distortion that may help confer the material's attractive properties. Using their technique, the researchers confirmed that the phase transitions can be extremely localized. They also learned that high responses of the materials occurred at those same locations. Next steps would include varying the chemical composition of the material to see if those transitions - and enhanced properties - can be controlled. The researchers also plan to examine other materials. "It turns out that many energy-related materials have electrical transitions, so we think this is going to be very important for studying functional materials in general," Bassiri-Gharb added. "The potential for gaining new understanding of these materials and their applications are huge." This research was supported by the National Science Foundation (NSF) through grant DMR-1255379. A portion of this research was conducted at the Center for Nanophase Materials Sciences, which is a DOE Office of Science User Facility at ORNL. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NSF or DOE.
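For readers who want a concrete sense of how a resonance shift is pulled out of a band-excitation measurement, here is a minimal, hypothetical Python sketch: it fits a simple damped-harmonic-oscillator line shape to a synthetic band of frequencies and reports the fitted resonance. The model, frequency band, and noise level are illustrative assumptions, not the Georgia Tech/ORNL analysis code.

```python
# Hypothetical sketch only: fit a simple harmonic oscillator (SHO) line shape to a
# synthetic band-excitation response and extract the contact-resonance frequency.
import numpy as np
from scipy.optimize import curve_fit

def sho_amplitude(f, a0, f0, q):
    """Amplitude response of a driven, damped harmonic oscillator (frequencies in kHz)."""
    return a0 * f0**2 / np.sqrt((f0**2 - f**2)**2 + (f * f0 / q)**2)

# Assumed excitation band around a ~350 kHz contact resonance (placeholder values)
freqs = np.linspace(300.0, 400.0, 256)                  # kHz
true_amplitude, true_f0, true_q = 1.0, 352.0, 150.0
rng = np.random.default_rng(0)
response = sho_amplitude(freqs, true_amplitude, true_f0, true_q)
response += rng.normal(0.0, 0.5, freqs.size)            # measurement noise

# Fit the SHO model; a shift in the fitted f0 from pixel to pixel (or with applied
# field) is the kind of local elastic change the acoustic technique tracks.
popt, _ = curve_fit(sho_amplitude, freqs, response, p0=(1.0, 340.0, 100.0))
print(f"fitted resonance: {popt[1]:.1f} kHz, quality factor: {popt[2]:.0f}")
```

Repeating a fit like this at every pixel of a scan, and mapping where the fitted resonance jumps, is the spirit in which changes in local elastic properties are flagged.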


News Article | November 16, 2016
Site: phys.org

Supercomputers provide the computational horsepower to understand single atoms or supernovae. To get the best performance from these machines, the Department of Energy's computing centers offer free access to user support teams. These people are curious, driven scientists, who, in addition to their own research, have taken on the challenge of some of the world's most complex computers. Credit: Nathan Johnson, Pacific Northwest National Laboratory Like the world's best pit crews, groups of highly trained scientists make sure everything works together at the supercomputers available through the Department of Energy's Office of Science. Like their NASCAR counterparts, these crews get the best possible performance from the custom-built supercomputers. Their machines are fast and powerful, but they aren't intuitive. Add to that the fact that researchers using the computers are racing against the clock, too – to get answers before their time on the machine expires. Unwieldy data files, slow programs and hardware crashes all eat into that time. For example, working with data files that are 10 times larger than they've used before can slow the code to a crawl or a crash. The team works with scientists long before their time starts on the supercomputer to optimize codes and improve workflows. "We want them to get the performance they need to get the job done," says Katherine Riley. She's director of science for the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science user facility at Argonne National Laboratory in Illinois. Supporting some of the world's fastest computers requires more than an encyclopedic understanding of supercomputers. A deep knowledge of mathematics is a must. Add to that an outgoing personality and a desire to help. Top it off with a doctoral degree and extensive experience in a scientific field. "We're able to speak the computational language and also the science," says Richard Gerber, now a senior science advisor at National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science user facility at Lawrence Berkeley National Laboratory in California. But he started, 20 years ago, answering questions from users about the center's supercomputer. Last year, the dozen-strong team at NERSC, located in California, answered 6,435 enquiries and earned, once again, top marks for their contributions. "Everybody that calls in has a problem that they can't figure it out on their own," says Gerber. "We work with very smart, sophisticated users. We provide that extra bit of knowledge they need." That extra bit can also take the form of training events. Short, intense programming marathons called hack-a-thons are quite popular. Fernanda Foertter, who leads training at the Oak Ridge Leadership Computing Facility, a DOE Office of Science user facility at Oak Ridge National Laboratory in Tennessee, started these events to increase use of the facility's supercomputer, known as Titan. Foertter saw researchers attend workshops but struggle to find the time and resources to apply what they learned to optimize their codes for Titan. At a hack-a-thon, Foertter pairs each team of five or so software developers with two mentors from the support team and lots of food. The goal is to get the software to run well on the supercomputer. And that's what's happening for scientists at national labs, universities, and elsewhere. 
In fact, at the end of one event, researchers got their program, which analyzes how fluid moves in the brain, to run eight months of data in eight hours. Despite optimizing the code, crashes happen. At ALCF, the support team analyzes every crash. "It's time consuming, but it is important to us to understand why," says Richard Coffey, ALCF director of user experience. "If it's a system failure, we can give a refund. If it's a bug in the user's software, we work hard to help them understand what they could do differently." Just as the world's fastest cars require more than a driver and a pit crew, so supercomputers have talented teams who don't work directly with the scientist driving the research. Each supercomputer has experts handling issues such as file management, maintenance, security, and much more. While the machines often get the attention at the end of a race, it's the talents of people that win races. In the race to answer tough questions, it's the crews supporting DOE's supercomputers that make it possible to cross the finish line.


News Article | February 24, 2017
Site: www.technologyreview.com

The company, backed by Peter Thiel’s Founders Fund, revised inflated assertions about its advanced reactor design after growing concerns prompted an MIT review. Nuclear energy startup Transatomic Power has backed away from bold claims for its advanced reactor technology after an informal review by MIT professors highlighted serious errors in the company’s calculations, MIT Technology Review has learned. The Cambridge, Massachusetts-based company, founded in 2011 by a pair of MIT students in the Department of Nuclear Science and Engineering, asserted that its molten-salt reactor design could run on spent nuclear fuel from conventional reactors and generate energy far more efficiently than they do. In a white paper published in March 2014, the company proclaimed its reactor “can generate up to 75 times more electricity per ton of mined uranium than a light-water reactor.” Those lofty claims helped it raise millions in venture capital, secure a series of glowing media profiles (including one in this publication), and draw a rock-star lineup of technical advisors. But in a paper on its site dated November 2016, the company downgraded “75 times” to “more than twice.” In addition, it now specifies that the design “does not reduce existing stockpiles of spent nuclear fuel” or use them as its fuel source. The promise of recycling nuclear waste, which poses tricky storage and proliferation challenges, was a key initial attraction of the company and captured considerable attention. “In early 2016, we realized there was a problem with our initial analysis and started working to correct the error,” cofounder Leslie Dewan said in an e-mail response to an inquiry from MIT Technology Review. The dramatic revisions followed an analysis in late 2015 by Kord Smith, a professor of  nuclear science and engineering at MIT and an expert in the physics of nuclear reactors. At that point, there were growing doubts in the field about the company’s claims and at least some worries that any inflated claims could tarnish the reputation of MIT’s nuclear department, which has been closely associated with the company. Transatomic also has a three-year research agreement with the department, according to earlier press releases. In reviewing the company’s white paper, Smith noticed immediate red flags. He relayed his concerns to his department head and the company, and he subsequently conducted an informal review with two other professors. “I said this is obviously incorrect based on basic physics,” Smith says. He asked the company to run a test, which ended up confirming that “their claims were completely untrue,” Smith says. He notes that promising to increase the reactor’s fuel efficiency by 75 times is the rough equivalent of saying that, in a single step, you'd developed a car that could get 2,500 miles per gallon. Ultimately, the company redid its analysis and produced and posted a new white paper. To be sure, it would still be a notable accomplishment if the company could build a reactor that improved fuel energy efficiency over conventional reactors by a factor more than two. In addition, the new white paper notes the reactor could reduce waste by 53 percent compared to light-water reactors. The viability of the latter finding was recently verified by Oak Ridge National Laboratory. But the analysis found that the reactor couldn’t sustain a fission chain reaction using spent fuel long enough for it to be a feasible option, as previously hoped, Dewan said in a subsequent phone interview. 
“I’m very glad that we have this robust technical analysis in place, confirmed by Oak Ridge, and we’re committed to moving ahead with our plans for low-cost, safe nuclear power,” Dewan said in her e-mail response. “We decided to open-publish our analysis and the results, because we wanted people to be able to repeat the calculations and examine it directly themselves. Also, we’ve talked before about how the nuclear industry should increase transparency to build public trust, and wanted to practice what we preached.” The company has previously said it intends to build a demonstration reactor by 2020, but that timeline has slipped to 2021, Dewan said during the phone interview. “We certainly have a long road ahead of us,” she said, noting technical, engineering, supply-chain, and regulatory challenges. “But I think that momentum is on our side.” Nuclear experts say that approval and construction of any advanced-reactor prototype requires a minimum of 10 years, given the extensive regulatory requirements of the Nuclear Regulatory Commission. Smith stresses that the founders weren’t acting in bad faith, but he did note they didn’t subject their claims to the peer-review process early on. “They didn’t do any of this intentionally,” Smith says. “It was just a lack of experience and perhaps an overconfidence in their own ability. And then not listening carefully enough when people were questioning the conclusions they were coming to.” The company has raised at least $4.5 million from Peter Thiel’s Founders Fund, Acadia Woods Partners, and Daniel Aegerter of Armada Investment AG. Venture capital veteran Ray Rothrock serves as chairman of the company.


News Article | November 10, 2016
Site: www.sciencedaily.com

Despite steady progress in detection and treatment in recent decades, cancer remains the second leading cause of death in the United States, cutting short the lives of approximately 500,000 people each year. To better understand and combat this disease, medical researchers rely on cancer registry programs -- a national network of organizations that systematically collect demographic and clinical information related to the diagnosis, treatment, and history of cancer incidence in the United States. The surveillance effort, coordinated by the National Cancer Institute (NCI) and the Centers for Disease Control and Prevention, enables researchers and clinicians to monitor cancer cases at the national, state, and local levels. Much of this data is drawn from electronic, text-based clinical reports that must be manually curated -- a time-intensive process -- before it can be used in research. For example, cancer pathology reports, text documents that describe cancerous tissue in detail, must be individually read and annotated by experts before becoming part of a cancer registry. With millions of new reports being produced each year, the information burden continues to grow. "The manual model is not scalable," said Georgia Tourassi, director of the Health Data Sciences Institute at the US Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL). "We need to develop new tools that can automate the information-extraction process and truly modernize cancer surveillance in the United States." Since 2014 Tourassi has led a team focused on creating software that can quickly identify valuable information in cancer reports, an ability that would not only save time and worker hours but also potentially reveal overlooked avenues in cancer research. After experimenting with conventional natural-language-processing software, the team's most recent progress has emerged via deep learning, a machine-learning technique that employs algorithms, big data, and the computing power of GPUs to emulate human learning and intelligence. Using the Titan supercomputer at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility located at ORNL, Tourassi's team applied deep learning to extract useful information from cancer pathology reports, a foundational element of cancer surveillance. Working with modest datasets, the team obtained preliminary findings that demonstrate deep learning's potential for cancer surveillance. The continued development and maturation of automated data tools, among the objectives outlined in the White House's Cancer Moonshot initiative, would give medical researchers and policymakers an unprecedented view of the US cancer population at a level of detail typically obtained only for clinical trial patients, historically less than 5 percent of the overall cancer population. "Today we're making decisions about the effectiveness of treatment based on a very small percentage of cancer patients, who may not be representative of the whole patient population," Tourassi said. "Our work shows deep learning's potential for creating resources that can capture the effectiveness of cancer treatments and diagnostic procedures and give the cancer community a greater understanding of how they perform in real life." Creating software that can understand not only the meaning of words but also the contextual relationships between them is no simple task. Humans develop these skills through years of back-and-forth interaction and training. 
For specific tasks, deep learning compresses this process into a matter of hours. Typically, this context-building is achieved through the training of a neural network, a web of weighted calculations designed to produce informed guesses on how to correctly carry out tasks, such as identifying an image or processing a verbal command. Data fed to a neural network, called inputs, and select feedback give the software a foundation to make decisions based on new data. This algorithmic decision-making process is largely opaque to the programmer, a dynamic akin to a teacher with little direct knowledge of her students' perception of a lesson. "With deep learning you just throw the document in and say, 'Figure it out,'" Tourassi said. "It's more like a black box, but that's the beauty. We do not impose our own constraints." GPUs, such as those in Titan, can accelerate this training process by quickly executing many deep-learning calculations simultaneously. In two recent studies, Tourassi's team used accelerators to tune multiple algorithms, comparing results to more traditional methods. Using a dataset composed of 1,976 pathology reports provided by NCI's Surveillance, Epidemiology, and End Results (SEER) Program, Tourassi's team trained a deep-learning algorithm to carry out two different but closely related information-extraction tasks. In the first task the algorithm scanned each report to identify the primary location of the cancer. In the second task the algorithm identified the cancer site's laterality -- or on which side of the body the cancer was located. By setting up a neural network designed to exploit the related information shared by the two tasks, an arrangement known as multitask learning, the team found the algorithm performed substantially better than competing methods. "Intuitively this makes sense because carrying out the more difficult objective is where learning the context of related tasks becomes beneficial," Tourassi said. "Humans can do this type of learning because we understand the contextual relationships between words. This is what we're trying to implement with deep learning." Another study carried out by Tourassi's team used 946 SEER reports on breast and lung cancer to tackle an even more complex challenge: using deep learning to match the cancer's origin to a corresponding topological code, a classification that's even more specific than a cancer's primary site or laterality, with 12 possible answers. The team tackled this problem by building a convolutional neural network, a deep-learning approach traditionally used for image recognition, and feeding it language from a variety of sources. Text inputs ranged from general (e.g., Google search results) to domain-specific (e.g., medical literature) to highly specialized (e.g., cancer pathology reports). The algorithm then took these inputs and created a mathematical model that drew connections between words, including words shared between unrelated texts. Comparing this approach to more traditional classifiers, such as a vector space model, the team observed incremental improvement in performance as the network absorbed more cancer-specific text. These preliminary results will help guide Tourassi's team as they scale up deep-learning algorithms to tackle larger datasets and move toward less supervision, meaning the algorithms will make informed decisions with less human intervention. 
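To make the multitask arrangement concrete, here is a minimal, hypothetical PyTorch sketch: a single shared text encoder feeds two output heads, one for primary site and one for laterality, and the two losses are summed so the related tasks share learned context. The vocabulary size, dimensions, pooling, and labels are toy placeholders, not the team's actual architecture or data.

```python
# Toy multitask sketch (placeholder model and data, not the ORNL classifier):
# a shared encoder with one head per task, trained on a summed loss.
import torch
import torch.nn as nn

class MultiTaskReportClassifier(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, n_sites=12, n_lateralities=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)              # shared representation
        self.site_head = nn.Linear(embed_dim, n_sites)                # task 1: primary site
        self.laterality_head = nn.Linear(embed_dim, n_lateralities)   # task 2: laterality

    def forward(self, token_ids):
        h = self.embed(token_ids).mean(dim=1)                         # crude bag-of-words pooling
        return self.site_head(h), self.laterality_head(h)

model = MultiTaskReportClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Hypothetical batch: 8 tokenized "reports" of 200 tokens each, labeled for both tasks
tokens = torch.randint(0, 5000, (8, 200))
site_labels = torch.randint(0, 12, (8,))
laterality_labels = torch.randint(0, 3, (8,))

optimizer.zero_grad()
site_logits, laterality_logits = model(tokens)
loss = loss_fn(site_logits, site_labels) + loss_fn(laterality_logits, laterality_labels)
loss.backward()          # gradients from both tasks flow through the shared encoder
optimizer.step()
```

Because both losses backpropagate through the same encoder, context learned for one task can sharpen the representation used for the other, which is the intuition behind the improvement the team reported.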
In 2016 Tourassi's team learned its cancer surveillance project will be developed as part of DOE's Exascale Computing Project, an initiative to develop a computing ecosystem that can support an exascale supercomputer -- a machine that can execute a billion billion calculations per second. Though the team has made considerable progress in leveraging deep learning for cancer research, the biggest gains are still to come. "Focusing on clinical text alone, the value would be tremendous," Tourassi said.


News Article | November 9, 2015
Site: www.rdmag.com

Researchers have devised a technology that can bring true color to infrared imaging systems, like the one used to track Arnold Schwarzenegger through the jungle in the movie “Predator.” Traditional infrared imaging systems may look colorful on screen, with warm objects appearing redder and whiter than their surroundings. But these images are not created from actual colors. They are based on the amount of thermal radiation—or infrared light—that the camera captures. The ability to identify different wavelengths—or colors—of the infrared spectrum would capture much more information about the objects being imaged, such as their chemical composition. In a new study, a team led by Maiken H. Mikkelsen, the Nortel Networks Assistant Professor of Electrical & Computer Engineering and Physics at Duke Univ., demonstrates perfect absorbers for small bands of the electromagnetic spectrum from visible light through the near infrared. The fabrication technique is easily scalable, can be applied to any surface geometry and costs much less than current light absorption technologies. Once adopted, the technique would allow advanced thermal imaging systems to not only be produced faster and cheaper than today’s counterparts, but to have higher sensitivity. It could also be used in a wide variety of other applications, such as masking the heat signatures of objects. The study was published online in Advanced Materials. “By borrowing well-known techniques from chemistry and employing them in new ways, we were able to obtain significantly better resolution than with a million-dollar state-of-the-art electron beam lithography system,” said Mikkelsen. “This allowed us to create a coating that can fine-tune the absorption spectra with a level of control that hasn’t been possible previously, with potential applications from light harvesting and photodetectors to military applications.” “This doesn’t require top-down fabrication such as expensive lithography techniques and we don’t make this in a clean room,” added Gleb Akselrod, a postdoctoral researcher in Mikkelsen’s laboratory. “We build it from the bottom up, so the whole thing is inherently cheap and very scalable to large areas.” The technology relies on a physics phenomenon called plasmonics. The researchers first coat a surface with a thin film of gold through a common process like evaporation. They then put down a few-nanometer-thin layer of polymer, followed by a coating of silver cubes, each one about 100 nm in size. When light strikes the new engineered surface, a specific color gets trapped on the surface of the nanocubes in packets of energy called plasmons, and eventually dissipates into heat. By controlling the thickness of the polymer film and the size and number of silver nanocubes, the coating can be tuned to absorb different wavelengths of light from the visible spectrum to the near infrared. “What is so attractive about the film/nanocube system is its remarkable simplicity and flexibility,” said David R. Smith, the James B. Duke Professor of Electrical and Computer Engineering at Duke. “The unique absorbing properties of the nanocubes can be predicted with straightforward formulas, making it easy to quickly determine recipes for surface coatings that provide desired spectral properties.
The nanocube system eliminates, or at least vastly reduces, cost and manufacturing issues, so that we can focus on impacting exciting application areas such as photovoltaics or thermal coatings.” For an example of the latter, if you can control the colors of light that a material absorbs, then you can also control the wavelengths of light that it emits. By making the nanocubes larger to absorb wavelengths corresponding to thermal radiation, this technology could suppress or mask an object’s natural thermal radiation, otherwise known as “black body radiation.” Coating photodetectors to absorb only specific wavelengths of infrared light would allow novel and cheap cameras to be made that could see different infrared colors. “We haven’t made the device that’s actually going to take that energy and convert it to an electrical signal yet,” said Akselrod. “That’s going to be the next step.”


The shrub willow plantation is part of a broader five-year program called NEWBio, which is aimed at investigating and promoting sustainable production of woody biomass and warm-season grasses for energy in the Northeast. Planted in 2012 on land formerly owned by the State Correctional Institution at Rockview, the biomass crop will regrow and will be harvested every three years from now on. NEWBio, a regional consortium of institutions led by Penn State and funded by the U.S. Department of Agriculture's National Institute of Food and Agriculture, is one of seven regional projects across the United States. Other consortium partners are Cornell University, SUNY College of Environmental Science and Forestry, West Virginia University, Delaware State University, Ohio State University, Rutgers University, USDA's Eastern Regional Research Center, and the U.S. Department of Energy's Oak Ridge National Laboratory and Idaho National Laboratory. Researchers involved in the project include plant scientists, agricultural and biological engineers, agricultural safety and health specialists, agronomists, agricultural and forest economists, rural sociologists, supply-chain and business-development experts, and extension educators. "The shrub willow stand at Rockview can continue producing biomass for more than 20 years, and we hope to use it both as a source of renewable energy and as a platform for sustainability research," said Armen Kemanian, associate professor of production systems and modeling in the Department of Plant Science, one of the lead researchers in the project. "This is an excellent site to investigate impacts on soil and water quality, biodiversity, avoided carbon dioxide emissions, and the potential for growing a regional bio-based economy," he said. "Students from our college visit the site and have a firsthand and close-up view of this new crop for the region." Why shrub willow? Because the woody perennial likes to be cut, explained Kemanian. He noted that visitors to Grand Teton National Park in Wyoming may remember the "willow flats," grazed to a uniform height by moose and elk. "At the Rockview site we don't have moose, but we do take advantage of shrub willow's vigorous regrowth to harvest for multiple cycles," he said. "As perennial plants, they establish a root system that stabilizes the soil and stores substantial amounts of carbon that otherwise would be lost to the atmosphere." Perennial biomass crops such as shrub willow, switchgrass and miscanthus—all of which are being investigated at other experimental sites around the Northeast—also store and recycle nutrients, so they do not require much fertilizer and can improve water quality in streams, rivers and estuaries, such as the Chesapeake Bay. Increasing perennial vegetation is a critical component of Pennsylvania's water quality strategy, and these biomass crops allow vulnerable parts of the landscape to remain economically productive while protecting water quality. Shrub willow can produce the same amount of biomass as a corn crop with only a third of the nitrogen fertilizer, Kemanian pointed out. When the plants grow, they take carbon dioxide from the atmosphere. After harvest, when the biomass is combusted either as wood chips or as a liquid biofuel, the carbon dioxide returns to the atmosphere to complete the cycle.
Felipe Montes, a research associate in the Department of Plant Science, established an array of sensors to measure carbon dioxide and water vapor fluxes, which are giving a vivid picture of the growth potential in the region. Shrub willow is one of the first plants to leaf out in early spring and dies back late in the fall, and this long growing season makes it extremely efficient in converting sunlight and nutrients to a bioenergy feedstock. "We estimate that we can harvest 20 to 30 units of energy per unit of fossil energy invested in producing the crop, leading to fuel with a very low carbon footprint," Montes said. "The fact that this biomass can be converted to liquid fuel is one of the main advantages of shrub willow and other biomass crops. Low carbon liquid fuels are especially important for long distance transportation, shipping and aviation, where electric vehicles are not practical." Biomass energy could provide the social, economic and ecological drivers for a sustainable rural renaissance in the Northeast, according to NEWBio project leader Tom Richard, professor of agricultural and biological engineering and director of the Penn State Institutes of Energy and the Environment. He believes perennial energy crops are particularly well suited for the region, where forests and pasture long have dominated the landscape. Rocky and sloped soils are more compatible with perennial crops, while perennial root systems better tolerate wet springs and occasional summer drought, Richard said. Northeast biomass production has high water-use efficiency (biomass produced per unit of water transpired by plants) owing to the region's moderate temperatures and relatively high humidity. These perennial crops also increase organic matter in the soil, and coupled with efficient refining and manufacturing processes can produce carbon-negative energy and materials. "Concerns about energy, environmental and human health, rural economic development, and the need to diversify agricultural products and markets have made the development of sustainably produced biomass feedstocks for biofuels, bioproducts and bioenergy a critical national priority," said Richard. "Perennial bioenergy systems, such as the shrub willow demonstrated at Penn State, appear to hold an important key to future economic development for our region. But to unlock that future, we need to learn how to economically handle the harvesting, transportation and storage of massive volumes, which constitutes 40 to 60 percent of the cost of biomass. This project is providing the knowledge and experience needed for a regional bioeconomy to achieve commercial success."


News Article | December 8, 2016
Site: www.eurekalert.org

Scientists are only beginning to understand the laws that govern the atomic world. Before the 1950s the electrons, neutrons, and protons comprising atoms were the smallest confirmed units of matter. With advancements in experimental and theoretical techniques in recent decades, though, researchers now try to understand particles a step smaller and more fundamental. In recent years large-scale experimental facilities, such as the Large Hadron Collider in Switzerland, have allowed researchers to begin testing theories about how subatomic particles behave under different conditions. Research institutions funded by the US Department of Energy (DOE) have also made major investments in experimental test facilities. The newest of these facilities lies in Hall D at the Thomas Jefferson National Accelerator Facility (Jefferson Lab). The experiment, known as GlueX, aims to give researchers unprecedented insight into subatomic particle interactions. "We believe there is a theory that describes how elementary particles interact, quarks and gluons that make up the matter around us," said Robert Edwards, senior staff scientist at Jefferson Lab. "If so, the theory of QCD suggests that there are some exotic forms of matter that exist, and that's what we're looking for in our Hall D experiment." Edwards serves as the principal investigator on a project that uses computation to inform the GlueX experiment as well as corroborate experimental findings. To that end the team has been using the Titan supercomputer at DOE's Oak Ridge National Laboratory (ORNL). Titan is the flagship supercomputer of the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL. The team wants to make computer codes for quantum chromodynamics (QCD) applications run more efficiently and effectively, and with access to world-class computing resources, the researchers' computational innovations achieved sevenfold to tenfold speedups for QCD calculations compared with earlier work. The field of QCD is the study of forces between two major categories of subatomic particles--quarks and gluons. Quarks are the basic building blocks of the matter in an atom's nucleus and make up hadrons, a class of subatomic particles that includes protons and neutrons. Gluons, much as their name implies, carry the strong force through which quarks interact and serve as the "glue" that holds hadrons together. Quarks can also bind with their antiparticles, antiquarks, to form mesons. Mesons are among the most mysterious of all subatomic particles because their resonances exist for only fractions of a microsecond. Through experiments researchers hope to use GlueX to confirm the existence of "exotic" mesons that would help advance QCD theory. When simulating a quantum system of quarks, gluons, and mesons, the number of calculations needed to compute the interactions of the subatomic particle fields explodes in a hurry. Researchers represent quarks and gluons by using a lattice or grid. In fact, researchers using this method call it lattice QCD (LQCD). Once the theories are expressed in terms of the lattice, the overall simulation becomes similar to a high-school-level model of a crystal--plastic spheres at the lattice points connected by springs between them. One can think of the spheres at the lattice points as representing the quark field with the springs between them representing the quark-gluon interactions. When given energy by pushing or nudging, the model will vibrate.
At any given instant, a snapshot of the model would show a particular arrangement of stretched and compressed springs. If one looked at the statistical distribution of these snapshots, he or she could deduce information about the crystal. QCD works in a similar way. The team's lattices act as snapshots of the states of the gluon fields. By generating a statistical sampling of these QCD field snapshots and analyzing them, the team can compute the properties of the subatomic particles of interest. Although that process might sound simple, it really isn't. Each snapshot requires a lot of computation. To compute the quark-gluon interactions, the team must repeatedly carry out the complex computation of solving the Dirac equation--a complex wave equation. Solving the equation is complicated enough, but Jefferson Lab researcher Balint Joo noted that the team's simulations must do it many times. "Our algorithm is one that requires solving the Dirac equation hundreds of thousands of times for each of the 300 to 500 snapshots that we take," he said. Such computational demands push even the world's fastest supercomputers to their performance limits, and Joo and NVIDIA high-performance computing researcher Kate Clark have teamed with other researchers from the USQCD collaboration to search for new ways to improve code performance on the Jefferson Lab team's CHROMA code, among other QCD applications. They shared their results in a paper presentation at the SC16 conference, which took place November 13-18. Since 2005 Clark has focused on methods to improve code performance for the LQCD community. Before moving to NVIDIA, she worked in LQCD algorithms at Boston University with professor Richard Brower, where the team developed a multigrid algorithm. Essentially, computer chips have become so much faster than memory systems that memory can't feed chips the data fast enough, meaning the bottleneck for LQCD calculations comes from the speed of the memory system. Clark has been developing the QUDA library, which takes advantage of a GPU system's computational strength, including its very fast built-in memory, to improve calculation speed. When developing its new algorithm, the Edwards team began by adding a multigrid algorithm into its code. Multigrid algorithms take the large, fine-grained lattice grid for LQCD calculations; average the various grid points; and create multiple smaller, coarser grids. Similar to sound waves, which are really composed of many waves, each with a different pitch or frequency, the team's problem is composed of many modes with different energies. High-energy modes need a fine lattice to represent them accurately, but low-energy modes--which usually slow down when seeking a solution--can be represented on coarser lattices with fewer points, ultimately reducing the computational cost. By using multiple grids and separating the modes in the problem onto the various grids most efficiently, the researchers can get through their long line of calculations quicker and easier. "GPUs provide a lot of memory bandwidth," Clark said. "Solving LQCD problems computationally is almost always memory-bound, so if you can describe your problem in such a way that GPUs can get maximum use of their memory bandwidth, QCD calculations will go a lot quicker." In other words memory bandwidth is like a roadway in that having more lanes helps keep vehicles moving and lessens the potential for traffic backups. However, the more GPUs working on a problem, the more they must communicate with one another. 
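As a rough illustration of the multigrid idea (smooth away the high-energy error on the fine grid, then hand the remaining smooth, low-energy error to a cheaper coarse grid), here is a toy two-grid solver for a 1D Poisson problem in Python. It is a sketch of the general technique only; the Dirac operator handled by QUDA and CHROMA is vastly more complicated than this model problem.

```python
# Toy two-grid cycle for -u'' = f on (0, 1) with zero boundary values.
# Illustrates the multigrid idea only; not lattice QCD code.
import numpy as np

def poisson_matrix(n, h):
    """Standard second-difference operator on n interior points with spacing h."""
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def jacobi(u, f, h, sweeps=3, omega=2/3):
    """Weighted Jacobi smoothing: damps the high-frequency (high-energy) error."""
    a = poisson_matrix(len(u), h)
    for _ in range(sweeps):
        u = u + omega * (h**2 / 2) * (f - a @ u)
    return u

def restrict(r):
    """Full-weighting average of fine-grid values onto the coarse grid."""
    return 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(ec, n_fine):
    """Linear interpolation of a coarse-grid correction back to the fine grid."""
    e = np.zeros(n_fine)
    e[1::2] = ec
    padded = np.concatenate(([0.0], ec, [0.0]))
    e[0::2] = 0.5 * (padded[:-1] + padded[1:])
    return e

def two_grid_cycle(u, f, h):
    u = jacobi(u, f, h)                            # pre-smooth on the fine grid
    r = f - poisson_matrix(len(u), h) @ u          # residual holds the smooth modes
    rc = restrict(r)                               # move them to the coarser grid
    ec = np.linalg.solve(poisson_matrix(len(rc), 2 * h), rc)   # cheap coarse solve
    u = u + prolong(ec, len(u))                    # coarse-grid correction
    return jacobi(u, f, h)                         # post-smooth

n, h = 63, 1.0 / 64
x = np.linspace(h, 1 - h, n)
f = np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
print("max deviation from exact solution:", np.abs(u - np.sin(np.pi * x) / np.pi**2).max())
```

A production multigrid solver recurses the coarse-grid solve into still-coarser levels, which is where the multiple smaller, coarser grids described above come from.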
If too many GPUs get involved and the problem size doesn't keep up with the computational resources being used, the calculation becomes very inefficient. "One aspect of GPUs is that they bring a lot of parallelism to the problem, and so to get maximum performance, you may need to restructure your calculation to exploit more parallelism," Clark said. Essentially, as computing technology has evolved, processing speed has improved faster than the ability of interconnects to move increasingly larger amounts of data across supercomputers' nodes. For simulations in which researchers divide their calculations across many computer nodes, this imbalance can lead to performance bottlenecks. "With QCD the computational cost doesn't scale linearly; it scales super-linearly," Clark said. "If you double the problem size, the computational cost goes up by more than a factor of two. I can't keep the same size of computation per node and just put it on a bigger system." Despite performance gains through implementing the multigrid algorithm, Clark, Joo, and their collaborators noted that for maximum performance impacts, they would need to exploit sources of parallelism other than those that had typically been used in existing LQCD calculations. Each one of Titan's 18,688 GPUs has 2,688 processing cores. To return to the roadway analogy, each one of a GPU's individual processors is a "lane" on a road, and if only one lane is open, cars back up quickly. With that in mind, Clark, Joo, and their collaborators worked on opening up as many processing "lanes" as possible for LQCD calculations. The team recognized that in addition to exploiting parallelism by calculating multiple grids rather than a single, large grid, they could also exploit more parallelism out of each grid point. To create multiple grids from one large, fine-grained grid, each GPU calculates a set of grid points (which appear as mathematical vectors), averages the results, and sends the averages to the middle grid point. Rather than just having one processing "lane" doing all of these calculations, researchers can use four processing cores to calculate the points above, below, and to the left and right of the original grid point. Much like going from a one-lane road to a four-lane highway, the data throughput moves much faster. This concept works for a two-dimensional calculation, and a four-dimensional calculation can use this same concept to achieve eight-way parallelism. In addition, the researchers noted that each grid point is not just a number but also a vector of data. By splitting up the vector calculations to run on multiple processors, the team further increased code parallelism. Because of these innovations, the Edwards team saw hundredfold speedups on the coarsest grids and a tenfold speedup for finer grids when comparing simulations with those that took place before the QUDA implementation. Clark and Joo pointed out that this approach affects more than the team's CHROMA code. These methods are already being applied to other QCD applications. Clark noted that as computers continue to get more powerful by using accelerators--such as the OLCF's next-generation machine, Summit, set to begin delivering science in 2018--researchers will have to focus on getting as much parallelism as possible. "Going forward, supercomputers like Summit will have many more processing cores, so to get high efficiency as a whole, researchers in many fields are going to have to work on how to exploit all the levels of parallelism in a problem," Clark said. 
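The neighbor-direction parallelism described above can be pictured with a small, purely illustrative NumPy example: each shifted copy of the field is an independent piece of work that could run in its own processing "lane" before the contributions are summed. This is not QUDA code, just a picture of the decomposition.

```python
# Illustrative only: the four nearest-neighbor contributions in 2D (eight in 4D)
# are independent, so each direction can occupy its own processing "lane."
import numpy as np

rng = np.random.default_rng(1)
field = rng.standard_normal((64, 64))        # stand-in for one component of a lattice field

# np.roll gives periodic boundaries, matching the periodic lattices used in LQCD.
up    = np.roll(field, -1, axis=0)
down  = np.roll(field,  1, axis=0)
left  = np.roll(field, -1, axis=1)
right = np.roll(field,  1, axis=1)

# Each of the four terms above could be computed concurrently and then reduced.
neighbor_average = 0.25 * (up + down + left + right)
print(neighbor_average.shape)
```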
"At some point exploiting all levels of parallelism is something that all researchers will have to do, and I think our work is a good example of that." Related Publication: M.A. Clark, Balint Joo, Alexei Strelchenko, Michael Cheng, Arjun Gambhir, and Richard C. Brower, "Accelerating Lattice QCD Multigrid on GPUs Using Fine-Grained Parallelization." SC16 Proceedings of the International Conference for High Performance Computing, Storage and Analysis, Salt Lake City, UT, November 13-18, 2016, http://dl. . Oak Ridge National Laboratory is supported by the US Department of Energy's Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.




News Article | March 1, 2017
Site: www.eurekalert.org

March 1, 2017 - Under a collaborative partnership between the National Aeronautics and Space Administration and the Department of Energy, a new automated measurement system developed at DOE's Oak Ridge National Laboratory will ensure quality production of plutonium-238 while reducing handling by workers. NASA has funded ORNL and other national laboratories to develop a process that will restore U.S. production capability of Pu-238 for the first time since the late 1980s when the Savannah River Plant ceased production. ORNL has produced and separated about 100 grams of the material and plans to scale up the process over the next several years to meet demand to power NASA deep space missions. "We are bringing together multiple disciplines across ORNL to achieve this automation and ramp up so that we can supply Pu-238 for NASA," said Bob Wham, who leads the project for the lab's Nuclear Security and Isotope Technology Division. The Pu-238 is produced from neptunium-237 feedstock provided by Idaho National Laboratory. Workers at ORNL mix neptunium oxide with aluminum and press the mixture into high-density pellets. The new automated measurement system robotically removes the Np-237 pellets from their holding tray, and measures their weight, diameter, and height. "We're excited to go from making these measurements by hand to just pressing a 'GO' button," said Jim Miller, a scientist in the Fusion & Materials for Nuclear Systems Division who is employing the new system. "About 52 Np-237 pellets can be measured per hour using the new automated measurement system," he added. Pellets meeting specifications, as determined by the new automated measurement system, are placed in a cassette that moves to another location for loading into a hollow aluminum tube that is hydrostatically compressed around the pellets. The Np-237 pellets loaded in the hollow aluminum tube later enter the High Flux Isotope Reactor, a DOE Office of Science User Facility at ORNL, where they are irradiated, creating Np-238, which quickly decays and becomes Pu-238. The irradiated pellets are then dissolved, and ORNL staff use a chemical process to separate the plutonium from any remaining neptunium. Purified plutonium is converted back to an oxide powder, packaged and shipped to Los Alamos for final processing. Plans are for initial production of 400 grams Pu-238 per year on average at ORNL and then to increase that quantity through additional automation and scale-up processes. Several ORNL researchers contributed to the automated measurement system. Alan Barker was the software architect, enhancing early work performed by others on the system and serving as technical lead to finish the project. Richard Wunderlich further developed the software to professional grade with an emphasis on making it more robust, usable and maintainable. Michelle Baldwin also provided programming expertise, including software quality assurance, verification and validation. David West was the hardware architect, overseeing configuration tasks within the glovebox and making sure the system is safe and functional in a radioactive environment. Tim McIntyre was the project manager. Project challenges included fitting the system into a glovebox that was only about 6 feet wide and 3 feet deep, and designing the system to be easy for workers to manipulate, maintain and repair, McIntyre said. In another project funded by NASA, Miller said the lab is working to automate the creation of the target neptunium/aluminum pellets. 
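As a rough sketch of the kind of acceptance logic such an automated measurement station might apply, the Python snippet below checks measured weight, diameter, and height against specification windows; the tolerance values and field names are invented placeholders, not ORNL's actual specifications.

```python
# Hypothetical pellet acceptance check; all specification windows are placeholders.
from dataclasses import dataclass

@dataclass
class PelletMeasurement:
    pellet_id: str
    weight_g: float
    diameter_mm: float
    height_mm: float

# Assumed (illustrative) specification windows for each measured quantity
SPEC_WINDOWS = {
    "weight_g":    (0.95, 1.05),
    "diameter_mm": (5.90, 6.10),
    "height_mm":   (4.90, 5.10),
}

def within_spec(m: PelletMeasurement) -> bool:
    """Return True only if every measured quantity falls inside its assumed window."""
    measured = {"weight_g": m.weight_g, "diameter_mm": m.diameter_mm, "height_mm": m.height_mm}
    return all(low <= measured[name] <= high for name, (low, high) in SPEC_WINDOWS.items())

tray = [PelletMeasurement("NP-001", 1.01, 6.02, 5.03),
        PelletMeasurement("NP-002", 0.91, 6.00, 5.00)]
accepted = [m.pellet_id for m in tray if within_spec(m)]
print(accepted)   # pellets that would be routed to the cassette for irradiation
```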
Miller also pointed to the collaborative nature of the automation development work at the lab. "I have a background in materials science, the Electrical and Electronics Systems Research Division people have the electrical and robotics background, and others like the staff in the NSITD have a chemical engineering background. None of us individually could get this done," he said. The next NASA mission planning to use a radioisotope thermoelectric generator fueled by Pu-238 is the Mars 2020 rover, scheduled for launch in July 2020. The mission will seek signs of life on Mars, test technology for human exploration, and gather samples of rocks and soil that could be returned to Earth. In the future, newly produced Pu-238 from ORNL will fuel these kinds of missions. NASA announced this week that it has accepted a small quantity of the new heat source for use on the rover. UT-Battelle manages ORNL for DOE's Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov.


News Article | November 17, 2016
Site: www.eurekalert.org

Supercomputer simulations at the Department of Energy's Oak Ridge National Laboratory have played a key role in discovering a new class of drug candidates that hold promise to combat antibiotic resistance. In a study led by the University of Oklahoma with ORNL, the University of Tennessee and Saint Louis University, lab experiments were combined with supercomputer modeling to identify molecules that boost antibiotics' effect on disease-causing bacteria. The researchers found four new chemicals that seek out and disrupt bacterial proteins called "efflux pumps", known to be a major cause of antibiotic resistance. Although some antibiotics can permeate the protective barriers surrounding bacterial cells, many bacteria have evolved efflux pumps that expel antibiotics back out of the cell and render the medications ineffective. The team focused on one efflux pump protein, known as AcrA, which connects two other proteins in a tunnel shape through the bacterial cell envelope. Disrupting this centrally positioned protein could "throw a wrench" into the middle of the efflux pump and mechanically break it, unlike drug design strategies that try to inhibit overall biochemical processes. "As a first in this field, we proposed the approach of essentially 'screwing up' the efflux pump's protein assembly, and this led to the discovery of molecules with a new type of antibacterial activity," said co-author Jeremy Smith, who serves as a UT-ORNL Governor's Chair and director of the UT-ORNL Center for Molecular Biophysics. "In contrast to previous approaches, our new mechanism uses mechanics to revive existing antibiotics' ability to fight infection." Details of the study were published in ACS Infectious Diseases. Through laboratory experiments done in tandem with extensive protein simulations run on ORNL's Titan supercomputer, they scanned large numbers of chemicals to predict and select which would be the most effective in preventing AcrA proteins from assembling properly. "The supercomputing power of Titan allowed us to perform large-scale simulations of the drug targets and to screen many potential compounds quickly," said Helen Zgurskaya, head of OU's Antibiotic Discovery and Resistance Group, who led the study. "The information we received was combined with our experiments to select molecules that were found to work well, and this should drastically reduce the time needed to move from the experimental phase to clinical trials." Using computational models, researchers screened various combinations of molecules and proteins to determine which ones "fit" well together, similar to finding the right key for a specific lock. This process was complicated by the protein's dynamic nature; proteins constantly change their shape. In a simulated environment, researchers created virtual representations of the proteins, generated a series of protein "snapshots" in their various configurations and used Titan to "dock" thousands of molecules to each snapshot and estimate how strongly each would interact with the protein. "The first screening took only 20 minutes using 42,000 processors and yielded several promising results," ORNL's Jerry Parks said. "After more extensive analysis, we narrowed down our list to predict which molecules were most likely to disrupt the function of the efflux pump." The research team members at the University of Oklahoma then conducted laboratory experiments to confirm the disruption of the efflux pump and the antibiotic-reviving capability for four of the molecules selected. 
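As an illustration of the ensemble-docking triage described above, the following toy sketch ranks candidate molecules from a matrix of docking scores computed against many protein snapshots. The score matrix, library size, and ranking rule are placeholder assumptions for illustration, not the team's actual pipeline or data.

```python
# Toy post-processing for ensemble docking: given docking scores for each
# candidate molecule against each protein "snapshot" (conformation), rank
# candidates by their best (most negative) and average scores. The score
# matrix is random placeholder data standing in for real docking output.
import numpy as np

rng = np.random.default_rng(0)
n_ligands, n_snapshots = 1000, 50   # hypothetical library and ensemble sizes
scores = rng.normal(-6.0, 1.5, size=(n_ligands, n_snapshots))  # placeholder scores

best = scores.min(axis=1)   # strongest predicted binding across the ensemble
mean = scores.mean(axis=1)  # robustness across protein conformations

# Simple consensus ranking: sort by best score, break ties by mean score.
rank = np.lexsort((mean, best))
for i in rank[:10]:
    print(f"ligand {i:4d}  best={best[i]:6.2f}  mean={mean[i]:6.2f}")
```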
Saint Louis University researchers then synthesized structural analogs of the discovered efflux pump inhibitors and identified properties essential for their activities. The team's study focused on a prototypical type of efflux pump found in Escherichia coli bacteria, but the researchers anticipate that their antibiotic-reviving approach will be applicable to many Gram-negative bacteria. They plan to leverage a recently awarded Innovative and Novel Computational Impact on Theory and Experiment (INCITE) allocation from DOE to perform larger simulations on the Titan supercomputer to gain a deeper understanding of how bacterial efflux pumps function, identify more potent efflux pump inhibitors and optimize the best antibiotic-plus-inhibitor combinations to make them suitable for clinical trials. The study, titled "Reviving Antibiotics: Efflux Pump Inhibitors That Interact with AcrA, a Membrane Fusion Protein of the AcrAB-TolC Multidrug Efflux Pump," was led by OU's Helen Zgurskaya and co-authored by UT-ORNL's Jeremy Smith, Jerry Parks and Jerome Baudry; UT's Adam Green; OU's Narges Abdali, Julie Chaney, David Wolloscheck and Valentin Rybenkov; and SLU's Keith Haynes and John Walker. The research was supported by a National Institutes of Health grant. The Titan supercomputer is part of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science user facility. UT-Battelle manages ORNL for DOE's Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov.


News Article | February 22, 2017
Site: www.scientificcomputing.com

Barely wider than a strand of human DNA, magnetic nanoparticles -- such as those made from iron and platinum atoms -- are promising materials for next-generation recording and storage devices like hard drives. Building these devices from nanoparticles should increase storage capacity and density, but understanding how magnetism works at the level of individual atoms is critical to getting the best performance. However, magnetism at the atomic scale is extremely difficult to observe experimentally, even with the best microscopes and imaging technologies. That's why researchers working with magnetic nanoparticles at the University of California, Los Angeles (UCLA), and the US Department of Energy's (DOE's) Lawrence Berkeley National Laboratory (Berkeley Lab) approached computational scientists at DOE's Oak Ridge National Laboratory (ORNL) to help solve a unique problem: to model magnetism at the atomic level using experimental data from a real nanoparticle. "These types of calculations have been done for ideal particles with ideal crystal structures but not for real particles," said Markus Eisenbach, a computational scientist at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL. Eisenbach develops quantum mechanical electronic structure simulations that predict magnetic properties in materials. Working with Paul Kent, a computational materials scientist at ORNL's Center for Nanophase Materials Sciences, the team collaborated with researchers at UCLA and Berkeley Lab's Molecular Foundry to combine world-class experimental data with world-class computing to do something new--simulate magnetism atom by atom in a real nanoparticle. Using the new data from the research teams on the West Coast, Eisenbach and Kent were able to precisely model the measured atomic structure, including defects, from a unique iron-platinum (FePt) nanoparticle and simulate its magnetic properties on the 27-petaflop Titan supercomputer at the OLCF. Electronic structure codes take atomic and chemical structure and solve for the corresponding magnetic properties. However, these structures are typically derived from many 2-D electron microscopy or x-ray crystallography images averaged together, resulting in a representative, but not true, 3-D structure. "In this case, researchers were able to get the precise 3-D structure for a real particle," Eisenbach said. "The UCLA group has developed a new experimental technique where they can tell where the atoms are--the coordinates--and the chemical resolution, or what they are -- iron or platinum." The results were published on February 2 in Nature. Using a state-of-the-art electron microscope at Berkeley Lab's Molecular Foundry, the Berkeley Lab and UCLA teams measured multiple 2-D images from a single FePt nanoparticle at different orientations. UCLA researchers then used GENFIRE, a reconstruction algorithm they developed, to align 2-D images and reconstruct the 3-D atomic positions with cutting-edge precision. The nanoparticle they imaged was synthesized at the University at Buffalo. "Our technique is called atomic electron tomography (AET) and enables the reconstruction of 3-D atomic structure in materials with 22-picometer precision," said Jianwei (John) Miao of UCLA. A picometer is one-trillionth of a meter. "Like a CT scan, you take multiple images from samples and reconstruct them into a 3-D image."
However, a CT scan is on the order of millimeters for medical diagnoses, whereas the UCLA team's AET technique is measuring atom locations on the order of hundreds of picometers, or the space between atoms. The UCLA team also developed algorithms to trace the positions of about 6,500 iron and 16,500 platinum atoms, revealing 3-D chemical disorder and other defects at the atomic level. "We find that the atomic structure is much more complicated than people thought," Miao said. "There were a lot of defects and imperfections in this iron-platinum nanoparticle." One of the defining characteristics of the FePt nanoparticle is the grouping of iron and platinum atoms into regions or "grains" divided by boundaries. Researchers wanted to understand how magnetism would differ across boundaries given that the ratio and order of iron and platinum atoms changes from grain to grain. Ultimately, magnetism from grain to grain could influence the performance of a magnetic storage device. "The computational challenge was to demonstrate how magnetism is ordered in the real particle and understand how it changes between boundaries of differently ordered grains," Eisenbach said. Magnetism at the atomic level is driven by quantum mechanics -- a fact that has shaken up classical physics calculations and called for increasingly complex, first-principle calculations, or calculations working forward from fundamental physics equations rather than relying on assumptions that reduce computational workload. For magnetic recording and storage devices, researchers are particularly interested in magnetic anisotropy, or what direction magnetism favors in an atom. "If the anisotropy is too weak, a bit written to the nanoparticle might flip at room temperature," Kent said. To solve for magnetic anisotropy, Eisenbach and Kent used two computational codes to compare and validate results. To simulate a supercell of about 1,300 atoms from strongly magnetic regions of the 23,000-atom nanoparticle, they used the Linear Scaling Multiple Scattering (LSMS) code, a first-principles density functional theory code developed at ORNL. "The LSMS code was developed for large magnetic systems and can tackle lots of atoms," Kent said. As principal investigator on 2017, 2016, and previous INCITE program awards, Eisenbach has scaled the LSMS code to Titan for a range of magnetic materials projects, and the in-house code has been optimized for Titan's accelerated architecture, speeding up calculations more than 8 times on the machine's GPUs. Exceptionally capable of crunching large magnetic systems quickly, the LSMS code received an Association for Computing Machinery Gordon Bell Prize in high-performance computing achievement in 1998 and 2009, and developments continue to enhance the code for new architectures. Working with Renat Sabirianov at the University of Nebraska at Omaha, the team also ran VASP, a simulation package that is better suited for smaller atom counts, to simulate regions of about 32 atoms. "With both approaches, we were able to confirm that the local VASP results were consistent with the LSMS results, so we have a high confidence in the simulations," Eisenbach said. Computer simulations revealed that grain boundaries have a strong effect on magnetism. "We found that the magnetic anisotropy energy suddenly transitions at the grain boundaries. These magnetic properties are very important," Miao said. 
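The following is a hypothetical preprocessing sketch, not the authors' workflow: given reconstructed 3-D coordinates and Fe/Pt labels of the kind AET provides, it carves out a local region around a chosen site and reports its composition, the sort of supercell selection that could feed an electronic structure calculation. All coordinates and parameters below are placeholders.

```python
# Illustrative preprocessing sketch (not the authors' pipeline): select atoms
# within a radius of a chosen site from reconstructed 3-D coordinates and
# report the local Fe fraction, a rough proxy for chemical order in a grain.
# Positions and species here are random placeholders.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
n_atoms = 23_000                                           # roughly the imaged particle size
positions = rng.uniform(0.0, 80.0, size=(n_atoms, 3))      # angstroms, placeholder box
species = rng.choice(["Fe", "Pt"], size=n_atoms, p=[0.28, 0.72])  # ~6,500 Fe / ~16,500 Pt

tree = cKDTree(positions)

def local_region(center, radius=12.0):
    """Indices of atoms within `radius` angstroms of `center`."""
    return tree.query_ball_point(center, r=radius)

center = positions.mean(axis=0)            # e.g., the particle's geometric center
idx = local_region(center)
fe_fraction = np.mean(species[idx] == "Fe")
print(f"{len(idx)} atoms selected; local Fe fraction = {fe_fraction:.2f}")
```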
In the future, researchers hope that advances in computing and simulation will make a full-particle simulation possible -- as first-principles calculations are currently too intensive to solve small-scale magnetism for regions larger than a few thousand atoms. Also, future simulations like these could show how different fabrication processes, such as the temperature at which nanoparticles are formed, influence magnetism and performance. "There's a hope going forward that one would be able to use these techniques to look at nanoparticle growth and understand how to optimize growth for performance," Kent said.


News Article | February 17, 2017
Site: www.eurekalert.org

U.S. Department of Energy (DOE) high-performance computer sites have selected a dynamic fusion code, led by physicist C.S. Chang of the DOE's Princeton Plasma Physics Laboratory (PPPL), for optimization on three powerful new supercomputers. The PPPL-led code was one of only three codes out of more than 30 science and engineering programs selected to participate in Early Science programs on all three new supercomputers, which will serve as forerunners for even more powerful exascale machines that are to begin operating in the United States in the early 2020s. The PPPL code, called XGC, simulates behavior of the ions, electrons and neutral atoms in the transport barrier region -- or "pedestal" -- between the ultra-hot core of the plasma that fuels fusion reactions and the cooler and turbulent outer edge of the plasma. The pedestal must be high and wide enough to prevent damage to the divertor plate that exhausts heat in doughnut-shaped tokamaks that house the fusion reactions. "How to create a high edge pedestal without damaging the divertor wall is the key question to be answered," said Chang. "That is a prerequisite for achieving steady state fusion." Among the team of nationwide experts developing this program are PPPL physicists Seung-Ho Ku, Robert Hager and Stephane Ethier. Selection of the PPPL code could help ready it for exascale development. "Computer architecture is evolving rapidly and these new pre-exascale computers have features that are quite different from some of the earlier petascale supercomputers," said Amitava Bhattacharjee, head of the Theory Department at PPPL. Petascale machines operate in petaflops, or one million billion (10^15) floating point operations per second. Bhattacharjee heads a PPPL-led Exascale Computing Project that will integrate the XGC code with GENE, a code developed at the University of California, Los Angeles, to create the first simulation of a complete fusion plasma. Exascale supercomputers will perform exaflops, or a billion billion (10^18) floating point operations per second. The three new pre-exascale supercomputers are Cori, Summit and Aurora. Cori is now fully installed at the National Energy Research Scientific Computing Center (NERSC) at the Lawrence Berkeley National Laboratory. Cori, named for biochemist Gerty Cori, the first American woman to win a Nobel Prize in science, has a theoretical peak speed of 30 petaflops on scientific applications using Intel Xeon "Haswell" and Xeon Phi "Knights Landing" processor nodes. Also selected to participate in Cori's NERSC Exascale Science Applications Program (NESAP) is the PPPL-led M3D-C1, an extended magnetohydrodynamics (MHD) code led by physicist Stephen Jardin that focuses on simulating plasma disruptions, with support from physicists Joshua Breslau, Nate Ferraro and Jin Chen. Two more PPPL-led codes, in addition to the 20 previously selected that included XGC and M3D-C1, will participate in the Cori NERSC program. These are the GTC-P and GTS codes, which model turbulence in the plasma core and are headed by physicists William Tang and Stephane Ethier. Principal developer of the GTS code is PPPL physicist Weixing Wang. The GTC-P code is PPPL's version of the GTC code led by the University of California, Irvine. Summit is scheduled to be operational at the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory in 2018.
Summit features a hybrid architecture consisting of IBM Power 9 processors and multiple NVIDIA Volta graphics processing units and will be capable of performing at least 200 petaflops for a wide range of applications. The facility's Center for Accelerated Application Readiness (CAAR) program has selected 13 projects that will participate in the program to optimize their application codes and demonstrate the effectiveness of their applications on Summit. Aurora, scheduled to be deployed in 2018 at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory, will be composed of third-generation Intel Xeon Phi "Knights Hill" many-core processors. Ten projects have been selected for the ALCF Early Science Program; Aurora is expected to be capable of performing up to 200 petaflops on a wide range of scientific applications. PPPL, on Princeton University's Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas -- ultra-hot, charged gases -- and to developing practical solutions for the creation of fusion energy. The Laboratory is managed by the University for the U.S. Department of Energy's Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | November 9, 2015
Site: www.rdmag.com

A team led by scientists at the U.S. Dept. of Energy (DOE)’s SLAC National Accelerator Laboratory combined powerful magnetic pulses with some of the brightest x-rays on the planet to discover a surprising 3-D arrangement of a material’s electrons that appears closely linked to a mysterious phenomenon known as high-temperature superconductivity. This unexpected twist marks an important milestone in the 30-year journey to better understand how materials known as high-temperature superconductors conduct electricity with no resistance at temperatures hundreds of degrees Fahrenheit above those of conventional metal superconductors but still hundreds of degrees below freezing. The study was published in Science Express. The study also resolves an apparent mismatch in data from previous experiments and charts a new course for fully mapping the behaviors of electrons in these exotic materials under different conditions. The researchers’ ultimate goal is to aid the design and development of new superconductors that work at warmer temperatures.
“Totally unexpected” physics
“This was totally unexpected, and also very exciting. This experiment has identified a new ingredient to consider in this field of study. Nobody had seen this 3-D picture before,” said Jun-Sik Lee, a SLAC staff scientist and one of the leaders of the experiment conducted at SLAC’s Linac Coherent Light Source (LCLS) x-ray laser. “This is an important step in understanding the physics of high-temperature superconductors.” The dream is to push the operating temperature for superconductors to room temperature, he added, which could lead to advances in computing, electronics and power grid technologies. There are already many uses for standard superconducting technology, from MRI machines that diagnose brain tumors to a prototype levitating train, the CERN particle collider that enabled the Nobel Prize-winning discovery of the Higgs boson and ultrasensitive detectors used to hunt for dark matter, the invisible constituent believed to make up most of the mass of the universe. A planned upgrade to the LCLS, known as LCLS-II, will include a superconducting particle accelerator.
The new wave in superconductivity
The 3-D effect that scientists observed in the LCLS experiment, which occurs in a superconducting material known as YBCO (yttrium barium copper oxide), is a newly discovered type of “charge density wave.” This wave does not have the oscillating motion of a light wave or a sound wave; it describes a static, ordered arrangement of clumps of electrons in a superconducting material. Its coexistence with superconductivity is perplexing to researchers because it seems to conflict with the freely moving electron pairs that define superconductivity. The 2-D version of this wave was first seen in 2012 and has been studied extensively. The LCLS experiment revealed a separate 3-D version that appears stronger than the 2-D form and closely tied to both the 2-D behavior and the material’s superconductivity. The experiment was several years in the making and required international expertise to prepare the specialized samples and construct a powerful customized magnet that produced magnetic pulses compressed to thousandths of a second. Each pulse was 10-20 times stronger than those from the magnets in a typical medical MRI machine.
A powerful blend of magnetism and light
Those short but intense magnetic pulses suppressed the superconductivity of the YBCO samples and provided a clearer view of the charge density wave effects.
They were immediately followed at precisely timed intervals by ultrabright LCLS x-ray laser pulses, which allowed scientists to measure the wave effects. “This experiment is a completely new way of using LCLS that opens up the door for a whole new class of future experiments,” said Mike Dunne, LCLS director. Researchers conducted many preparatory experiments at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) and Argonne National Laboratory's Advanced Photon Source (APS), which also produce x-rays for research. LCLS, SSRL and APS are DOE Office of Science User Facilities. Scientists from SIMES, the Stanford Institute for Materials and Energy Sciences at SLAC, and SSRL and LCLS were a part of the study. “I’ve been excited about this experiment for a long time,” said Steven Kivelson, a Stanford Univ. physics professor who contributed to the study and has researched high-temperature superconductors since 1987. Kivelson said the experiment sets very clear boundaries on the temperature and strength of the magnetic field at which the newly observed 3-D effect emerges. “There is nothing vague about this,” he said. “You can now make a definitive statement: In this material a new phase exists.” The experiment also adds weight to the growing evidence that charge density waves and superconductivity “can be thought of as two sides of the same coin,” he added.
In search of common links
But it is also clear that YBCO is incredibly complex, and a more complete map of all of its properties is required to reach any conclusions about what matters most to its superconductivity, said Simon Gerber of SIMES and Hoyoung Jang of SSRL, the lead authors of the study. Follow-up experiments are needed to provide a detailed visualization of the 3-D effect, and to learn whether the effect is universal across all types of high-temperature superconductors, said SLAC staff scientist and SIMES investigator Wei-Sheng Lee, who co-led the study with Jun-Sik Lee of SSRL and Diling Zhu of LCLS. “The properties of this material are much richer than we thought,” Lee said. “We continue to make new and surprising observations as we develop new experimental tools,” Zhu added.


News Article | April 4, 2016
Site: phys.org

Yet certain medically useful isotopes can only be produced by nuclear reactions that rely on high-energy particle accelerators, which are expensive to build and operate. As such, these isotopes are not typically available from commercial vendors. Instead, they are produced at facilities with unique capabilities, like the national labs built and funded by the U.S. Department of Energy (DOE). The DOE Office of Science's Nuclear Physics Isotope Development and Production for Research and Applications program (DOE Isotope Program) seeks to make critical isotopes more readily available for energy, medical, and national security applications and for basic research. Under this program, scientists, engineers, and technicians at DOE's Brookhaven National Laboratory recently completed the installation of a beam raster (or scanning) system designed to increase the yield of critical isotopes produced at the Brookhaven Linac Isotope Producer (BLIP), the Lab's radioisotope production and research facility, in operation since 1972. The new raster system became operational in Jan. 2016 and is contributing to increased production of strontium-82 (Sr-82), an isotope used for cardiac imaging, and to research and development of actinium-225 (Ac-225), an isotope that may be a promising treatment for targeting many forms of cancer, including leukemia and melanoma. BLIP produces isotopes by directing a high-energy (up to 200 million electron volts), high-intensity narrow beam of protons at a water-cooled array of several target materials. When the protons collide with the target materials, a reaction occurs, producing radioactive products in the target. More than one isotope is being produced at any given time, so multiple targets are hit with the beam. As the beam passes through the first to last targets, its energy decays. The targets are lined up in an order that maximizes production of the desired isotopes: the targets needed to produce isotopes requiring higher energy are put in the front of the array. A small fraction of each of the products is the desired isotopes, which can be extracted and purified through chemical processing conducted inside lead-enclosed "hot" cells at Brookhaven Lab's Isotope Research Processing Laboratory. Inside these hot cells, which are designed for safely handling radioactive materials, the irradiated targets are cut open and their contents are dissolved to separate out the desired isotopes. Personnel use manipulators to remotely handle the radioactive material. The isotopes are then shipped in specialized containers to suppliers, who distribute them to hospitals for medical use or to universities for ongoing research. What makes BLIP unique is its high-energy capacity. "A minimum of 70 million electron volts is required to make Sr-82," explained Leonard Mausner, the head of research and development and facility upgrades within Brookhaven Lab's Medical Isotope Research and Production Program. Mausner proposed both the new beam raster system and a complementary effort to increase the beam intensity of the Brookhaven Linear Accelerator, or Linac, which provides the beam used at BLIP. "To date, there are no commercial accelerators in the United States with the power required to produce this critical isotope used to image 300,000 patients per year," said Mausner. However, the targets' ability to handle BLIP's high intensities has, until now, been largely hindered by the way the beam was positioned on the target. 
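As a side note on the target ordering just described, the sketch below walks the beam's energy through a hypothetical stack and checks each target's minimum-energy requirement. Apart from the 200 MeV injection energy and the 70 MeV minimum quoted for Sr-82, every number is an invented placeholder rather than BLIP's actual configuration.

```python
# Schematic bookkeeping for a stacked target array: the proton beam enters the
# first target at full energy and loses energy in each target it crosses, so
# targets that need higher energy sit up front. Losses and thresholds below
# are illustrative placeholders, not BLIP's actual values.
beam_energy_mev = 200.0  # injection energy quoted for BLIP (up to 200 MeV)

# (target, hypothetical energy lost crossing it, hypothetical minimum energy
#  required for the desired reaction in that target)
targets = [
    ("RbCl (for Sr-82)", 60.0, 70.0),
    ("Target B",         40.0, 55.0),
    ("Target C",         30.0, 20.0),
]

energy = beam_energy_mev
for name, loss, required in targets:
    ok = energy >= required
    print(f"{name:18s} entering energy {energy:6.1f} MeV "
          f"(needs >= {required:.1f} MeV) -> {'OK' if ok else 'below threshold'}")
    energy -= loss
```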
The beam, which significantly heats up the target, had always been narrowly focused on a single area of the target. As a result, the targets—such as the rubidium chloride (RbCl) used to produce Sr-82—became very hot only in the region of highest beam intensity. "The beam pulse, which is 450 microseconds long and happens every 150 milliseconds, would hit the targets in the same exact spot 24/7 over a period of weeks, potentially damaging the targets when high beam current was applied," explained Robert Michnoff, project manager for the BLIP beam raster system and an engineer in Brookhaven Lab's Collider-Accelerator Department. "Because the RbCl target was being heated unevenly, a small molten zone would form, then expand and push out into the cooler outer edges of the target area, where the material would solidify," said Mausner. "We want the target fully solid or fully molten, as a uniform density may improve isotope yield." Uneven heating causes the density of a target to vary across its diameter. As a result, the energy of the beam exiting the target will also vary across this diameter, impacting the energy that enters the next target in the array. "The distribution in beam energy increases as it travels through one target to the next, making it difficult to assess how much energy is transferred. We want to optimize production of isotopes in the downstream targets," said Cathy Cutler, director of Brookhaven Lab's Medical Isotope Research and Production Program. The new beam raster system provides a more uniform distribution of the beam on the target by rapidly "painting" the beam on the target in a circular raster fashion using two custom-designed magnets. But, rapidly sweeping the beam in one circular motion is not enough to uniformly distribute the beam across the target. "The beam intensity pattern would resemble a donut or a volcanic crater—a circle with a hollow center," said Mausner. Instead, the beam is moved in a circular pattern at two different radii, essentially creating a larger and smaller circle. The radius values and the number of beam pulses for each radius can be programmed to optimize beam distribution. "The rastering pattern allows us to achieve near-uniform beam current density on the target," said Michnoff, who mentioned plans to test a "middle" radius, which may help to provide even better beam distribution.
Paving the way for higher beam intensities
The new raster system provides an opportunity for researchers to safely apply increasingly higher beam currents on targets—an approach that Brookhaven scientists have been actively exploring to further increase isotope production yield. In a complementary effort to the BLIP raster upgrade, the scientists worked to increase the beam intensity of the Brookhaven Linac by optimizing operating parameters of the hydrogen ion source, neutralizing the electric charge generated by the beam, and increasing the length of the beam pulse by placing the beam earlier in the radio-frequency pulse used to accelerate the ions. "At higher currents, the beam's electric charge causes the beam to spread out, resulting in defocusing of the beam and ultimately a loss of current. We use Xenon gas to neutralize that charge," explained Deepak Raparia, head of the linac pre-injector system, project manager of the linac upgrade, and a scientist in Brookhaven Lab's Collider-Accelerator Department.
"By increasing the length of the beam pulse, we can deliver more beam current to the BLIP target station and thus increase the quantity of isotopes produced." In 2015, the Brookhaven Linac's beam intensity was increased from 115 to 147 microamps, surpassing the initial project goal of 140 microamps. After the raster system was commissioned in 2016, the beam was further optimized, achieving 165 microamps. According to Mausner, yield could rise significantly at this higher intensity: "If the beam current can be maintained at 165 microamps, production levels could potentially increase by 40 percent." In a second proposed phase of the Linac upgrade, the Brookhaven team hopes to double beam current by doubling the beam pulse length from 450 to 900 microseconds. Accelerating and extracting significantly more beam current out of the linac and transporting that current to BLIP will require the linac's radio-frequency system and beam-focusing elements to be upgraded. "These upgrades will increase the duration of accelerating and focusing fields to cover the entire beam pulse," said Raparia. The capability to handle higher and higher beam intensities without targets failing not only increases the yield of routinely produced medical isotopes, such as Sr-82, but also enables scientists to investigate other promising isotopes for medical and other applications. Among these isotopes is Ac-225, a short-lived alpha emitter—it quickly releases a high amount of energy in the form of alpha particles that can only travel short distances. As such, Ac-225 has the ability to destroy targeted cancer cells without destroying healthy surrounding tissue. Traditionally, Ac-225 has been produced through the natural decay of thorium-229 from uranium-233, which is not readily available and has an extremely long half-life, or rate of decay. In collaboration with Los Alamos National Laboratory and Oak Ridge National Laboratory, Brookhaven Lab has been developing an accelerator-based capability to scale up the production of Ac-225, the demand for which is 50 to 100 times greater than the current supply. Scientists from all three laboratories are working to develop a new Ac-225 production approach in which readily available thorium metal targets are bombarded with high-energy protons. Initial experiments demonstrated that an irradiation lasting less than a week's time could produce the annual quantity of Ac-225 currently in supply. "The raster system will ultimately aid us in the research and development of Ac-225, which could eventually be manufactured into an active pharmaceutical ingredient used in cancer therapy," said Mausner. However, before Ac-225 can become a routine medical isotope, several challenges must be overcome. "Hundreds of isotopes are produced when thorium is hit with protons, so it is difficult to separate out Ac-225 during chemical processing of the target," explained Mausner. A small number of clinical cancer treatment trials are ongoing using the currently limited supply of Ac-225. Initial results indicate that Ac-225 has the potential to put terminal cancer patients with acute myeloid leukemia, a cancer of the blood and bone marrow, into remission. "The increased beam intensity, coupled with this new raster system, will enable higher production levels of desirable radioisotopes for the nuclear medicine community and industry, and will increase the number of patients who could benefit across our nation," said Mausner.


News Article | January 18, 2016
Site: www.greencarcongress.com

The potential for using low temperature combustion (LTC) in compression ignition engines (i.e., diesel) to reduce NO and PM while maintaining high efficiency has attracted a great deal of research interest over the past several years. While achieving LTC with diesel fuel over a wide operating range has been shown to be difficult for several reasons, gasoline, with its high volatility and low chemical reactivity, offers a more attractive fuel option for LTC. Accordingly, in the industry’s quest for lower fuel consumption and emissions, a number of schemes for achieving LTC gasoline compression ignition (LTC-GCI) have emerged. Now, a team of researchers from Oak Ridge National Laboratory’s National Transportation Research Center (NTRC) has published a comprehensive, open-access review of a variety of fuel injection strategies being investigated for LTC-GCI. The paper is available for download from the International Journal of Engine Research. Simply speaking, the goal of LTC is to achieve sufficient premixing between the fuel and air in a globally dilute environment, such that combustion takes place at lower temperatures… Providing time for fuel/air mixing avoids high soot formation rates. Dilution, with either air or exhaust gas recirculation (EGR), reduces the peak combustion temperatures to avoid high NO formation rates. However, if the fuel and air are overly mixed, resulting in equivalence ratios that are too low, the local combustion temperature becomes too low to support complete fuel oxidation. Although LTC can yield high thermal efficiency with simultaneously low NO and soot emissions, there are challenges associated with combustion efficiency (i.e., unburned hydrocarbon (UHC) and CO emissions), narrow operable load range, high pressure rise rates, and combustion timing controllability, all of which in-cylinder fuel stratification aims to address while maintaining low NO and soot emissions. To illustrate the impact of fuel stratification on gasoline compression ignition, the authors selected three representative operating strategies: partial (PFS), moderate (MFS), and heavy (HFS) fuel stratification. The team used CFD modeling of the in-cylinder processes during the closed valve portion of the cycle to illustrate the differences between the various levels of fuel stratification, making comparisons with conventional diesel combustion (CDC). The authors stressed that because the different operating strategies will be sensitive to combustion chamber and injector design, their results are not “all-encompassing” and should not be used to create overly wide generalizations. However, the results do highlight some overarching opportunities and challenges, while also emphasizing that there is a continuum between all of the defined stratification bins. Among their overall findings:
- All of the operating strategies yielded high gross indicated efficiency (GIE) (~46%) due to the high compression ratio and globally lean air/fuel ratio.
- There is a trade-off between fuel stratification and combustion efficiency, with the more stratified conditions yielding higher combustion efficiency.
- The lower combustion efficiencies observed for the more premixed concepts stem from overly lean regions which result in overly low combustion temperatures (i.e., less than ~1400 K / 1127 ˚C).
- Despite the lower combustion efficiency, the more premixed strategies, such as MFS and PFS, still yielded high indicated efficiency due to reduced heat transfer losses and more favorable thermodynamic properties of the working fluid, both stemming from the lower combustion temperatures.
- The more stratified cases yielded higher NO emissions than the more premixed operating strategies due to locally high combustion temperatures.
- All of the LTC-GCI strategies investigated appear to form very little soot due to the fact that all injection events have ceased prior to the start of combustion.
- Combustion noise displayed a non-monotonic trend with fuel stratification, a finding that can be exploited to minimize engine noise.


News Article | February 27, 2017
Site: www.sciencemag.org

Thom Mason, the director of the Department of Energy's (DOE's) Oak Ridge National Laboratory in Tennessee, says that before he took his job he received two pieces of advice. One former director of a DOE national lab told him that a director couldn't serve more than 10 years because every year he'd infuriate another 10% of the staff. Another told him that it would take him 10 years to have a substantial impact. So Mason, 52, has decided that he will step down on 1 July, exactly 10 years after taking the directorship, the laboratory announced Friday. He will take a position with Battelle in Columbus, which helps manage several DOE national laboratories. "It sort of feels that I've been here long enough to have an impact but not so long that I've worn out my welcome," Mason says. William Madia, vice president for SLAC National Laboratory at Stanford University in Palo Alto, California, says, "It's been a very, very good run for Thom Mason—a lot of accomplishments." In the past, directors of DOE's 16 national labs often served for a decade or more, but these days a typical tenure is now 4 or 5 years. So Mason is by far the longest serving director, ahead of Terry Michalske, director of DOE's Savannah River National Laboratory near Jackson, South Carolina, who has served since 2009. Oak Ridge is the largest of the 10 national labs supported by DOE's Office of Science, with a staff of 4500 and an annual budget of $1.5 billion. A condensed matter physicist, Mason came to Oak Ridge in 1998 to lead construction of the lab's flagship facility, the $1.4 billion Spallation Neutron Source, which was completed in 2006. Under Mason's directorship, Oak Ridge also developed two supercomputers that ranked as the world's most powerful, took leading roles in multi-institutional centers to develop biofuels and high-precision simulations of nuclear reactors, established a center for advanced manufacturing, and overhauled much of the lab's infrastructure. That rejuvenation extends to the lab staff, Mason says. "More than 70% of our Ph.D.-level scientists and engineers have joined the lab in the past 10 years," he says. Mason was a leader not only at the lab, but among DOE lab directors and in representing DOE's science efforts in Washington, D.C., Madia says. He has a well-deserved reputation for being forthright and dependable, says Madia, who served as Oak Ridge director from 2000 to 2003 and hired Mason to the lab. "If Thom Mason walks into your office and says he's going to do something, you can bank on it," he says. Mason will take the post of senior vice president for laboratory operations at Battelle, which is involved in managing six DOE national labs and one for the Department of Homeland Security. "It's a very good move for both Battelle and for Thom," says Samuel Aronson, an emeritus physicist at Brookhaven National Laboratory in Upton, New York, who directed that lab from 2006 until 2012. "After serving so long in the trenches he has a very deep and immediate grasp of how the labs operate."


News Article | November 18, 2016
Site: www.sciencedaily.com

A new class of drugs that combat antibiotic resistance has been discovered by a University of Oklahoma researcher and team. In the study supported by the National Institutes of Health, laboratory experiments were combined with supercomputing modeling to identify molecules that boost the effect of antibiotics on disease-causing bacteria. Helen Zgurskaya, professor of chemistry and biochemistry in the OU College of Arts and Sciences, and OU team members Narges Abdali, Julie Chaney, David Wolloscheck and Valentin Rybenkov, collaborated with Jeremy Smith, Jerry Parks and Jerome Baudry, the University of Tennessee-Oak Ridge National Laboratory Center for Molecular Biophysics; Adam Green, UT; and Keith Haynes and John Walker, Saint Louis University School of Medicine. They collectively identified four new chemicals that seek out and disrupt bacterial proteins called "efflux pumps," a major cause of antibiotic resistance in bacteria. "The supercomputing power of ORNL's Titan supercomputer allowed us to perform large-scale simulations of the drug targets and to screen many potential compounds quickly," said Zgurskaya, head of the OU Antibiotic Discovery and Resistance Group at the Stephenson Life Sciences Research Center. "The information we received was combined with our experiments to select molecules that were found to work well, and this should drastically reduce the time needed to move from the experimental phase to clinical trials," she added. The team focused on one efflux pump protein, known as AcrA, which connects two other proteins in a tunnel shape through the bacterial cell envelope. Disrupting this protein could essentially break the efflux pump -- an approach unlike other drug design strategies that try to inhibit the biochemical processes. "In contrast to previous approaches, our new mechanism uses mechanics to revive an existing antibiotic's ability to fight infection," said Smith, UT-ORNL Governor's Chair and director of the UT-ORNL Center for Molecular Biophysics. The laboratory experiments were done jointly with extensive protein simulations run on ORNL's Titan supercomputer. Large numbers of chemicals were scanned to predict and select which would be the most effective in preventing AcrA proteins from assembling properly. Using computational models produced by the Titan supercomputer, researchers screened various combinations of molecules and proteins to determine which ones were most disruptive to their formation. "The first screening took only 20 minutes using 42,000 processors and yielded several promising results," said Parks, ORNL. "After more extensive analysis, we narrowed down our list to predict which molecules were most likely to disrupt the function of the efflux pump." OU researchers then conducted laboratory experiments to confirm the disruption of the efflux pump and the antibiotic-reviving capability of four of the molecules selected. The SLU School of Medicine research team synthesized structural analogs of the discovered efflux pump inhibitors and identified properties essential for their activities.


News Article | November 9, 2016
Site: www.chromatographytechniques.com

Despite steady progress in detection and treatment in recent decades, cancer remains the second leading cause of death in the United States, cutting short the lives of approximately 500,000 people each year. To better understand and combat this disease, medical researchers rely on cancer registry programs--a national network of organizations that systematically collect demographic and clinical information related to the diagnosis, treatment, and history of cancer incidence in the United States. The surveillance effort, coordinated by the National Cancer Institute (NCI) and the Centers for Disease Control and Prevention, enables researchers and clinicians to monitor cancer cases at the national, state and local levels. Much of this data is drawn from electronic, text-based clinical reports that must be manually curated--a time-intensive process--before it can be used in research. For example, cancer pathology reports, text documents that describe cancerous tissue in detail, must be individually read and annotated by experts before becoming part of a cancer registry. With millions of new reports being produced each year, the information burden continues to grow. "The manual model is not scalable," said Georgia Tourassi, director of the Health Data Sciences Institute at the US Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL). "We need to develop new tools that can automate the information-extraction process and truly modernize cancer surveillance in the United States." Since 2014 Tourassi has led a team focused on creating software that can quickly identify valuable information in cancer reports, an ability that would not only save time and worker hours but also potentially reveal overlooked avenues in cancer research. After experimenting with conventional natural-language-processing software, the team's most recent progress has emerged via deep learning, a machine-learning technique that employs algorithms, big data, and the computing power of GPUs to emulate human learning and intelligence. Using the Titan supercomputer at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility located at ORNL, Tourassi's team applied deep learning to extract useful information from cancer pathology reports, a foundational element of cancer surveillance. Working with modest datasets, the team obtained preliminary findings that demonstrate deep learning's potential for cancer surveillance. The continued development and maturation of automated data tools, among the objectives outlined in the White House's Cancer Moonshot initiative, would give medical researchers and policymakers an unprecedented view of the US cancer population at a level of detail typically obtained only for clinical trial patients, historically less than 5 percent of the overall cancer population. "Today we're making decisions about the effectiveness of treatment based on a very small percentage of cancer patients, who may not be representative of the whole patient population," Tourassi said. "Our work shows deep learning's potential for creating resources that can capture the effectiveness of cancer treatments and diagnostic procedures and give the cancer community a greater understanding of how they perform in real life." Creating software that can understand not only the meaning of words but also the contextual relationships between them is no simple task. Humans develop these skills through years of back-and-forth interaction and training. 
For specific tasks, deep learning compresses this process into a matter of hours. Typically, this context-building is achieved through the training of a neural network, a web of weighted calculations designed to produce informed guesses on how to correctly carry out tasks, such as identifying an image or processing a verbal command. Data fed to a neural network, called inputs, and select feedback give the software a foundation to make decisions based on new data. This algorithmic decision-making process is largely opaque to the programmer, a dynamic akin to a teacher with little direct knowledge of her students' perception of a lesson. "With deep learning you just throw the document in and say, 'Figure it out,'" Tourassi said. "It's more like a black box, but that's the beauty. We do not impose our own constraints." GPUs, such as those in Titan, can accelerate this training process by quickly executing many deep-learning calculations simultaneously. In two recent studies, Tourassi's team used accelerators to tune multiple algorithms, comparing results to more traditional methods. Using a dataset composed of 1,976 pathology reports provided by NCI's Surveillance, Epidemiology, and End Results (SEER) Program, Tourassi's team trained a deep-learning algorithm to carry out two different but closely related information-extraction tasks. In the first task the algorithm scanned each report to identify the primary location of the cancer. In the second task the algorithm identified the cancer site's laterality--or on which side of the body the cancer was located. By setting up a neural network designed to exploit the related information shared by the two tasks, an arrangement known as multitask learning, the team found the algorithm performed substantially better than competing methods. "Intuitively this makes sense because carrying out the more difficult objective is where learning the context of related tasks becomes beneficial," Tourassi said. "Humans can do this type of learning because we understand the contextual relationships between words. This is what we're trying to implement with deep learning." Another study carried out by Tourassi's team used 946 SEER reports on breast and lung cancer to tackle an even more complex challenge: using deep learning to match the cancer's origin to a corresponding topological code, a classification that's even more specific than a cancer's primary site or laterality, with 12 possible answers. The team tackled this problem by building a convolutional neural network, a deep-learning approach traditionally used for image recognition, and feeding it language from a variety of sources. Text inputs ranged from general (e.g., Google search results) to domain-specific (e.g., medical literature) to highly specialized (e.g., cancer pathology reports). The algorithm then took these inputs and created a mathematical model that drew connections between words, including words shared between unrelated texts. Comparing this approach to more traditional classifiers, such as a vector space model, the team observed incremental improvement in performance as the network absorbed more cancer-specific text. These preliminary results will help guide Tourassi's team as they scale up deep-learning algorithms to tackle larger datasets and move toward less supervision, meaning the algorithms will make informed decisions with less human intervention. 
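A minimal sketch of the multitask idea follows, assuming a toy bag-of-words encoder in place of the team's convolutional architecture: one shared text encoder feeds two heads, one for primary cancer site and one for laterality, and their losses are summed during training. Vocabulary size, class counts, and the random "reports" are placeholders, not the team's data or model.

```python
# Toy multitask text classifier in PyTorch: a shared encoder with two output
# heads (primary site and laterality). Sizes and data are placeholders.
import torch
import torch.nn as nn

VOCAB, N_SITES, N_LATERALITY = 5000, 12, 3   # hypothetical sizes

class MultiTaskClassifier(nn.Module):
    def __init__(self, vocab=VOCAB, embed_dim=64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab, embed_dim)   # shared bag-of-words encoder
        self.shared = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU())
        self.site_head = nn.Linear(128, N_SITES)         # task 1: primary site
        self.lat_head = nn.Linear(128, N_LATERALITY)     # task 2: laterality

    def forward(self, token_ids):
        h = self.shared(self.embed(token_ids))
        return self.site_head(h), self.lat_head(h)

model = MultiTaskClassifier()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on random "reports" (batch of 8, 200 tokens each).
tokens = torch.randint(0, VOCAB, (8, 200))
site_labels = torch.randint(0, N_SITES, (8,))
lat_labels = torch.randint(0, N_LATERALITY, (8,))

site_logits, lat_logits = model(tokens)
loss = loss_fn(site_logits, site_labels) + loss_fn(lat_logits, lat_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"combined multitask loss: {loss.item():.3f}")
```

Because both heads share the same encoder, gradients from the easier task can shape representations that help the harder one, which is the intuition behind the multitask gains described above.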
In 2016 Tourassi's team learned its cancer surveillance project will be developed as part of DOE's Exascale Computing Project, an initiative to develop a computing ecosystem that can support an exascale supercomputer--a machine that can execute a billion billion calculations per second. Though the team has made considerable progress in leveraging deep learning for cancer research, the biggest gains are still to come. "Focusing on clinical text alone, the value would be tremendous," Tourassi said.


News Article | March 9, 2016
Site: www.scientificcomputing.com

Drive away from city lights on a clear night and you can see innumerable stars. Several hundred thousand years after the big bang, though, the universe looked very different — and much darker. Minutes after the big bang, the universe was filled with radiation and a nearly uniform, featureless atomic gas of hydrogen and helium so hot that every atom was ionized — or stripped of its electrons, which roamed freely. By the time 400,000 years elapsed, expansion of the universe had cooled so much that every hydrogen atom was able to capture an electron to neutralize it, and the radiation had shifted its colors to invisible wavelengths. What came next is called the “cosmic dark ages,” before gravity managed to pull any of that gas in to make stars. The total density of matter then, as now, was everywhere dominated by dark matter — a mysterious form of matter invisible to the human eye that scientists only know exists based on its gravitational pull. This dark matter was smoothly distributed, too, but in some places, there was a bit extra. That extra dark matter made gravity stronger there, causing those places to collapse both dark and atomic matter inward to make galaxies and, inside them, stars — lighting up the universe and ending the dark ages. “When the first galaxies and stars formed about 100 million years after the big bang, starlight leaked into the intergalactic medium, where most of the hydrogen and helium created in the big bang was still waiting, not yet formed into galaxies,” said Paul Shapiro, the Frank N. Edmonds, Jr. Regents Professor of Astronomy at the University of Texas at Austin. “As ultraviolet photons of starlight streamed across the universe, they struck these intergalactic atoms and photoionized [meaning light served as the catalyst for ionization] them, releasing fast-moving electrons that heated the gas to 10,000 kelvin.” Shapiro noted that, as this process continued, galaxies and groups of galaxies formed, zones of hot, ionized intergalactic gas grew up around them, filling the universe with a patchwork quilt of ionized and neutral zones, until the ionized zones became large enough to overlap entirely. “By the end of the first billion years, the whole universe was filled with ionized gas — reionized, actually, since atoms were born in an ionized state a billion years before that but had since then recombined,” Shapiro said. “We call this transformation the ‘epoch of reionization.’ It was a kind of ‘second coming’ of ionization — a giant backlash by the first galaxies on the universe that created them.” Shapiro can speak confidently on this topic because he and a multi-institutional research team have been using supercomputing resources at the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) to simulate this phenomenon with unparalleled detail and accuracy. The team, which includes collaborators from the Universities of Texas, Strasbourg (France), Zurich (Switzerland), Sussex (United Kingdom), Madrid (Spain), and Potsdam (Germany), simulated the reionization of the universe with a focus on how it happened in our own neighborhood — the Milky Way and Andromeda galaxies along with a number of smaller ones that comprise the Local Group. The researchers’ goal was to learn about reionization across the universe, while predicting its imprints that can be observed today on these nearby celestial objects. 
Astronomers think that when dwarf galaxies much smaller than the Milky Way were engulfed by reionization, for example, their star formation was suppressed. If the team’s newest simulation confirmed this, it would solve a major puzzle in the standard theory of cosmology. Its first results are soon to be published in the Monthly Notices of the Royal Astronomical Society. To model global reionization accurately, the team needed to simulate a very large volume that would expand with the general expansion of the universe, would be large enough to contain millions of galaxies as possible sources of radiation, and would have fine enough spatial detail to resolve the smallest galaxies. This required an extremely large computational grid of 4,096 by 4,096 by 4,096 spatial cells for the gas and radiation and the same number, 69 billion, of particles of dark matter, in a cubic volume 300 million light-years per side of the grid today. To model the local universe, the team also had to create initial conditions that would guarantee the formation, 13 billion years later, of all the well-known local structures observed today, from the Milky Way and Andromeda galaxies of the Local Group to the giant clusters of galaxies called Virgo and Fornax. Shapiro noted that these simulations were the first to have all three components — large enough volume, fine enough spatial grid, and conditions just right to model the local universe — while calculating all the major physical processes involved in reionizing the universe, in a “fully coupled” way. This meant that, in addition to calculating the gravitational pull of dark matter, atomic gas, and stars on each other, researchers had to simulate the hydrodynamics of hydrogen gas flowing within and between galaxies and the transport of the radiation that helped simultaneously complete the universe’s ionization and heating. “You can’t simulate the radiation as an afterthought,” Shapiro said. “You can’t just simulate the gravitational collapse of galaxies and then ‘paint on’ the radiation and let it ionize. That isn’t good enough, because there is feedback. The radiation influences how galaxies form and release the starlight that causes reionization — so we need to do it simultaneously.” There was a barrier to doing this, however, which even some of the fastest supercomputers could not overcome previously. To solve the equations describing the time history of reionization, researchers needed to compute “snapshots” of the moving gas, stars, dark matter and radiation to track the expanding surfaces of the ionized zones (called “ionization fronts”) without blurring them. The radiation that causes these surfaces to expand travels at the speed of light, much faster than the gas, dark matter and stars. To follow the radiation and ionization fronts accurately, while also tracking the slower-moving gas, stars and dark matter, Shapiro’s team programmed the ionization and radiation snapshots to run on GPUs. The team also used CPUs on the same parallel computer nodes to compute the motions of gas, dark matter and stars separately. By making the GPUs perform hundreds of finer-spaced “frames per second” of the reionization time history while the CPUs performed just one of their “frames per second,” the team greatly reduced the total number of computations required to advance the simulation. Such a multi-faceted simulation needed not only a powerful supercomputer, but also one that had plenty of GPUs so the team could offload its radiation and ionization calculations. 
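The subcycling idea can be sketched schematically (this is an illustration, not the RAMSES-CUDATON code): each coarse gravity/hydrodynamics step is followed by many much smaller radiation/ionization substeps, mirroring the "hundreds of finer-spaced frames per second" handled on the GPUs while the CPUs advance one coarse frame. The step counts and toy update rules below are placeholders.

```python
# Schematic of the subcycling structure: one coarse gravity/hydro step per
# loop iteration, followed by many fine radiation/ionization substeps.
N_HYDRO_STEPS = 5          # coarse steps (placeholder count)
SUBSTEPS_PER_HYDRO = 100   # "hundreds of finer-spaced frames" per coarse frame

def advance_gravity_and_hydro(state, dt):
    state["gas"] += dt * 1.0e-3            # toy stand-in for the CPU-side update
    return state

def advance_radiation_and_ionization(state, dt):
    state["ionized_fraction"] = min(1.0, state["ionized_fraction"] + dt * 2.0e-1)
    return state                           # toy stand-in for the GPU-side update

state = {"gas": 0.0, "ionized_fraction": 0.0}
dt_hydro = 1.0
dt_rad = dt_hydro / SUBSTEPS_PER_HYDRO

for step in range(N_HYDRO_STEPS):
    state = advance_gravity_and_hydro(state, dt_hydro)       # one coarse frame
    for _ in range(SUBSTEPS_PER_HYDRO):                       # many fine frames
        state = advance_radiation_and_ionization(state, dt_rad)
    print(f"step {step + 1}: ionized fraction = {state['ionized_fraction']:.3f}")
```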
Thankfully, the Oak Ridge Leadership Computing Facility’s (OLCF’s) Cray XK7 Titan supercomputer fit the description. The OLCF, a DOE Office of Science User Facility located at ORNL, built Titan with a GPU for every one of its 18,688 compute nodes. The Shapiro team needed 8,192 GPUs to run its simulations. “Titan is special in having such a large number of GPUs, and that is something that made our simulations a Titan-specific milestone,” Shapiro said.

Simulating our cosmic dawn

The resulting fully coupled simulation performed during the Shapiro team’s INCITE 2013 grant is called CoDa, short for “Cosmic Dawn.” It is based upon the group’s flagship code, RAMSES-CUDATON, developed by team members Romain Teyssier (Zurich) and Dominique Aubert (Strasbourg), and run on Titan by Pierre Ocvirk (Strasbourg). Team members from the Constrained Local Universe Simulations project, led by Gustavo Yepes (Madrid), Stefan Gottloeber (Potsdam), and Yehuda Hoffman (Jerusalem), provided the initial framework for the simulation. The RAMSES-CUDATON code actually combines several different coupled numerical methods. The RAMSES component treats gravitational dynamics by an N-body calculation — a term describing matter that moves under the action of gravitational forces computed by discretizing the dark matter and stars as “N” point-mass particles and the atomic gas as a uniform cubic lattice of cells. Because the gas also experiences pressure forces, an additional hydrodynamics calculation is performed on the same cubic lattice, coupled to the gravity calculation. The second part of the code computes the transfer of radiation and the ionization of the gas, also using the same cubic lattice, by the method called ATON. The GPU version of ATON, called CUDATON, benefitted enormously from Titan’s GPUs — the team was able to accelerate its simulations by a factor of roughly 100. To simulate such fine spatial detail over such a large volume, the researchers generated enormous data sets — over 2 petabytes (2 million gigabytes), in fact. The team stored 138 snapshots, one for every 10 million years of cosmic time, starting at a time about 10 million years after the big bang. Each snapshot created about 16 terabytes of raw data. Because saving this much data indefinitely was not possible, the team also created a greatly reduced data set on the order of 100 terabytes for further analysis and a more permanent archival record. To analyze this particularly large, challenging set of raw data, the team subsequently was awarded additional computing hours to continue using the OLCF’s analysis clusters Eos and Rhea. The analysis clusters allowed the researchers to scan each “snapshot” for the positions of the N-body particles so they could identify the locations and masses of dense, quasi-spherical clumps of particles called “halos,” using a “halo finder” program. Each halo is the scaffolding within which a galaxy formed, with gas, dark matter and star particles held together by their mutual gravitational attraction. With every snapshot, the number and location of galactic halos changed. By matching this catalog of galactic halos located by the halo finder to the properties of the gas and stars within them in each snapshot, the team could track the evolving properties of all the galaxies in their simulation for comparison with observations of such early galaxies. These OLCF clusters were necessary to enable a parallel computing analysis of the raw data sets that were otherwise too large to be analyzed by lesser computers.
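The halo-finding step described above can be illustrated with a toy friends-of-friends pass: particles separated by less than a chosen linking length are joined into the same group, and groups above a minimum size are reported as halos. This is a minimal single-process sketch for a small particle set, not the parallel halo finder the team ran on Eos and Rhea; the linking length and minimum group size below are arbitrary.

import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(positions, linking_length, min_members=20):
    """Group particles whose separation is below linking_length (toy halo finder)."""
    n = len(positions)
    parent = list(range(n))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    tree = cKDTree(positions)
    for i, j in tree.query_pairs(r=linking_length):
        union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [members for members in groups.values() if len(members) >= min_members]

# Example: a uniform random field plus one dense clump standing in for a halo.
rng = np.random.default_rng(0)
field = rng.uniform(0.0, 100.0, size=(5000, 3))
clump = rng.normal(50.0, 0.5, size=(200, 3))
halos = friends_of_friends(np.vstack([field, clump]), linking_length=1.0)
print(len(halos), "halo(s) found")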
After analyzing CoDa simulation snapshots, Shapiro’s team found that the simulated reionization of the local universe ended a bit later than observations suggest for the universe-at-large, but the predicted properties of galaxies, intergalactic gas and the radiation backgrounds that filled the universe at such early times agreed with those observed once an adjustment was made for this late reionization. The team also found that star formation in the dwarf galaxies — those of total mass below a few billion times the mass of the Sun — was severely suppressed by reionization, as hypothesized. The bulk of the star formation in the universe during the epoch of reionization was dominated, instead, by galaxies with masses between 10 billion and 100 billion times that of the Sun. Most importantly, CoDa will now serve as a pathfinder for a follow-up simulation, CoDa II, already planned, in which parameters that control the efficiency for converting cold, dense gas into stars will, with the hindsight provided by CoDa I, be adjusted to make reionization end a bit earlier. Despite the extreme scale of the CoDa simulation, Shapiro said there are still other ways to improve. Even with the future of exascale computing — computers capable of a thousandfold speedup over current petascale machines — he noted that completely simulating the universe in all macro- and microscopic detail would still demand more power. However, as next-generation supercomputers add more GPUs and memory, researchers will be able to zoom in on small patches of the universe, and within these, zoom in further on individual galaxies, offering a more detailed look at star and galaxy formation. “We may be able to say, ‘I can’t follow the formation of all the stars, but maybe I can zoom in on one galaxy, then zoom in on one cloud of molecular gas within it, which might wind up forming several thousand stars,’” Shapiro said. Oak Ridge National Laboratory is supported by the US Department of Energy’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time.


News Article
Site: www.asminternational.org

CONEXPO-CON/AGG and IFPE 2017 are teaming up to unveil the world's first fully functional 3D-printed construction excavator and the first large-scale use of steel in 3D printing. The excavator, which will be on display at the joint trade shows in March 2017, will bring to life how technology is transforming the construction industry in line with the show's 2017 theme, "Imagine What's Next." In addition to the pre-printed excavator, show attendees will see a second excavator printing live on the show floor. CONEXPO-CON/AGG and IFPE 2017 will take place March 7-11, 2017 at the Las Vegas Convention Center. The excavator is a joint collaboration between the Association of Equipment Manufacturers (AEM), National Fluid Power Association (NFPA), Center for Compact and Efficient Fluid Power (CCEFP), Oak Ridge National Laboratory (ORNL), and the National Science Foundation (NSF). The group is working with research teams from Georgia Tech and the University of Minnesota to convert the current excavator design to one that is conducive to and takes full advantage of 3D manufacturing. Graduate engineering students at Georgia Tech will be creating a boom and bucket featuring integrated hydraulics with the goal of decreasing the weight, materials cost and maintenance, while students at the University of Minnesota are designing a hydraulic oil reservoir/heat exchanger and cooling system that reduces the size and weight and increases the efficiency of the machine. "Technology and innovation will drive change for the future of the construction industry, and we're excited that students are playing a vital role in bringing the newly designed machine to life," says Eric Lanke, chief executive officer of NFPA. In addition to the partnerships with Georgia Tech and the University of Minnesota, AEM, NFPA, CCEFP, ORNL and NSF are inviting undergraduate engineering students from across the country to participate in a nationwide contest to design and print a futuristic cab and a human-machine interface for the excavator that is both aesthetically pleasing and functionally designed. Students can submit their designs for the cab of the excavator on the Center for Compact and Efficient Fluid Power website, and all entries will be judged by a panel of industry experts. The winning team will receive a $2,000 cash prize and the opportunity to visit ORNL in Tennessee to observe the printing of the selected design.


News Article | November 17, 2016
Site: www.eurekalert.org

A new class of drugs that combat antibiotic resistance has been discovered by a University of Oklahoma researcher and team. In the study supported by the National Institutes of Health, laboratory experiments were combined with supercomputing modeling to identify molecules that boost the effect of antibiotics on disease-causing bacteria. Helen Zgurskaya, professor of chemistry and biochemistry in the OU College of Arts and Sciences, and OU team members Narges Abdali, Julie Chaney, David Wolloscheck and Valentin Rybenkov, collaborated with Jeremy Smith, Jerry Parks and Jerome Baudry, the University of Tennessee-Oak Ridge National Laboratory Center for Molecular Biophysics; Adam Green, UT; and Keith Haynes and John Walker, Saint Louis University School of Medicine. They collectively identified four new chemicals that seek out and disrupt bacterial proteins called "efflux pumps," a major cause of antibiotic resistance in bacteria. "The supercomputing power of ORNL's Titan supercomputer allowed us to perform large-scale simulations of the drug targets and to screen many potential compounds quickly," said Zgurskaya, head of the OU Antibiotic Discovery and Resistance Group at the Stephenson Life Sciences Research Center. "The information we received was combined with our experiments to select molecules that were found to work well, and this should drastically reduce the time needed to move from the experimental phase to clinical trials," she added. The team focused on one efflux pump protein, known as AcrA, which connects two other proteins in a tunnel shape through the bacterial cell envelope. Disrupting this protein could essentially break the efflux pump--an approach unlike other drug design strategies that try to inhibit the biochemical processes. "In contrast to previous approaches, our new mechanism uses mechanics to revive an existing antibiotic's ability to fight infection," said Smith, UT-ORNL Governor's Chair and director of the UT-ORNL Center for Molecular Biophysics. The laboratory experiments were done jointly with extensive protein simulations run on ORNL's Titan supercomputer. Large numbers of chemicals were scanned to predict and select which would be the most effective in preventing AcrA proteins from assembling properly. Using computational models produced by the Titan supercomputer, researchers screened various combinations of molecules and proteins to determine which ones were most disruptive to their formation. "The first screening took only 20 minutes using 42,000 processors and yielded several promising results," said Parks of ORNL. "After more extensive analysis, we narrowed down our list to predict which molecules were most likely to disrupt the function of the efflux pump." OU researchers then conducted laboratory experiments to confirm the disruption of the efflux pump and the antibiotic-reviving capability of four of the molecules selected. The SLU School of Medicine research team synthesized structural analogs of the discovered efflux pump inhibitors and identified properties essential for their activities. The study, "Reviving Antibiotics: Efflux Pump Inhibitors That Interact with AcrA, a Membrane Fusion Protein of the AcrAB-TolC Multidrug Efflux Pump," was published in the American Chemical Society's Infectious Diseases journal. Support for the project was provided by NIH grant number R01AI052293 in the amount of $2 million. For more information about this project, please contact elenaz@ou.edu.
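The workflow described here (score a large library of candidate molecules against a protein target, then keep only the best-ranked candidates for laboratory follow-up) can be sketched in a few lines. The scoring function below is a deterministic placeholder, not a real docking calculation, and the compound names are invented; the actual study used molecular simulations of AcrA on Titan.

import zlib
from concurrent.futures import ProcessPoolExecutor

def docking_score(compound):
    """Placeholder for a docking/simulation score (lower = stronger predicted binding)."""
    # A real pipeline would dock the compound against the target structure and return an energy.
    return zlib.crc32(compound.encode()) % 1000 / 100.0

def screen(library, n_keep=4):
    """Score every compound in parallel and keep the top candidates for lab testing."""
    with ProcessPoolExecutor() as pool:
        scores = dict(zip(library, pool.map(docking_score, library, chunksize=500)))
    return sorted(scores, key=scores.get)[:n_keep]

if __name__ == "__main__":
    library = [f"compound_{i:05d}" for i in range(10000)]  # stand-in for a chemical library
    print(screen(library))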


News Article | October 23, 2015
Site: news.mit.edu

Two key physical phenomena take place at the surfaces of materials: catalysis and wetting. A catalyst enhances the rate of chemical reactions; wetting refers to how liquids spread across a surface. Now researchers at MIT and other institutions have found that these two processes, which had been considered unrelated, are in fact closely linked. The discovery could make it easier to find new catalysts for particular applications, among other potential benefits. “What’s really exciting is that we’ve been able to connect atomic-level interactions of water and oxides on the surface to macroscopic measurements of wetting, whether a surface is hydrophobic or hydrophilic, and connect that directly with catalytic properties,” says Yang Shao-Horn, the W.M. Keck Professor of Energy at MIT and a senior author of a paper describing the findings in the Journal of Physical Chemistry C. The research focused on a class of oxides called perovskites that are of interest for applications such as gas sensing, water purification, batteries, and fuel cells. Since determining a surface’s wettability is “trivially easy,” says senior author Kripa Varanasi, an associate professor of mechanical engineering, that determination can now be used to predict a material’s suitability as a catalyst. Since researchers tend to specialize in either wettability or catalysis, this produces a framework for researchers in both fields to work together to advance understanding, says Varanasi, whose research focuses primarily on wettability; Shao-Horn is an expert on catalytic reactions. “We show how wetting and catalysis, which are both surface phenomena, are related,” Varanasi says, “and how electronic structure forms a link between both.” While both effects are important in a variety of industrial processes and have been the subject of much empirical research, “at the molecular level, we understand very little about what’s happening at the interface,” Shao-Horn says. “This is a step forward, providing a molecular-level understanding.” “It’s primarily an experimental technique” that made the new understanding possible, explains Kelsey Stoerzinger, an MIT graduate student and the paper’s lead author. While most attempts to study such surface science use instruments requiring a vacuum, this team used a device that could study the reactions in humid air, at room temperature, and with varying degrees of water vapor present. Experiments using this system, called ambient pressure X-ray photoelectron spectroscopy, revealed that the reactivity with water is key to the whole process, she says. The water molecules break apart to form hydroxyl groups — an atom of oxygen bound to an atom of hydrogen — bonded to the material’s surface. These reactive compounds, in turn, are responsible for increasing the wetting properties of the surface, while simultaneously inhibiting its ability to catalyze chemical reactions. Therefore, for applications requiring high catalytic activity, the team found, a key requirement is that the surface be hydrophobic, or non-wetting. “Ideally, this understanding helps us design new catalysts,” Stoerzinger says. If a given material “has a lower affinity for water, it has a higher affinity for catalytic activity.” Shao-Horn notes that this is an initial finding, and that “extension of these trends to broader classes of materials and ranges of hydroxyl affinity requires further investigation.” The team has already begun further exploration of these areas. 
This research, she says, “opens up the space of materials and surfaces we might think about” for both catalysis and wetting. The research team also included graduate student Wesley Hong, visiting scientist Livia Giordano, and postdocs Yueh-Lin Lee and Gisele Azimi at MIT; Ethan Crumlin and Hendrik Bluhm at Lawrence Berkeley National Laboratory; and Michael Biegalski at Oak Ridge National Laboratory. The work was supported by the National Science Foundation and the U.S. Department of Energy.


News Article
Site: www.asminternational.org

A tool that provides world-class microscopy and spatially resolved chemical analysis shows considerable promise for advancing a number of areas of study, including chemical science, pharmaceutical development, and disease progression. The hybrid optical microscope/mass spectrometry-based imaging system developed at the Department of Energy's Oak Ridge National Laboratory, Tenn., operates under ambient conditions and requires no pretreatment of samples to analyze chemical compounds with sub-micron resolution. "Knowing the chemical basis of material interactions that take place at interfaces is vital for designing and advancing new functional materials that are important for DOE missions such as organic photovoltaics for solar energy," says postdoctoral associate Gary Van Berkel. "In addition, the new tool can be used to better understand the chemical basis of important biological processes such as drug transport, disease progression, and response for treatment." The hybrid instrument transfers tiny amounts of a material such as human tissue or an organic polymer from a sample by a laser ablation process in which material is captured and transported via a liquid stream to the ionization source of the mass spectrometer. In just seconds, a computer screen displays the results. Researchers noted that the resolution of less than one micron is essential to accurately distinguish between polymers and sub-components of similar-sized cells. "Today's mass spectrometry imaging techniques are not yet up to the task of reliably acquiring molecular information on a wide range of compound types," says Van Berkel. "Examples include synthetic polymers used in various functional materials like light harvesting and emitting devices or biopolymers like cellulose in plants or proteins in animal tissue." This technology, however, provides the long-sought detailed chemical analysis through a simple interface between a hybrid optical microscope and an electrospray ionization system for mass spectrometry. Funding for this research was provided by DOE's Office of Science. The National Institute on Drug Abuse Intramural Research Program, National Institutes of Health, provided the mouse brain tissue. Instruments were provided on loan through a cooperative research and development agreement by Sciex, a supplier of life science analytical platforms and licensee of ORNL technologies. UT-Battelle manages ORNL for the DOE's Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the U.S. Image caption — A hybrid optical microscope/mass spectrometry 3D map showing optical brightness (height) and chemical distribution of poly(2-vinylpyridine) (red) and poly(N-vinylcarbazole) (blue) signals of a 20 μm × 20 μm area of a polymer blend. Courtesy of ORNL/DOE.


News Article | December 20, 2016
Site: phys.org

The material, a formulation of iron, sodium, copper and arsenic created by Rice graduate student Yu Song in the laboratory of physicist Pengcheng Dai, is described this week in the journal Nature Communications. Dai said Song's recipe—which involves mixing ingredients in a pure argon atmosphere, sealing them in niobium canisters and baking them at nearly 1,000 degrees Celsius—produces a layered alloy in which iron and copper separate into alternating stripes. This striping is critical for the material's usefulness in explaining the origins of high-temperature superconductivity, said RCQM Director Qimiao Si. "By forming this regular pattern, Yu Song has physically removed disorder from the system, and that is crucially important for being able to say something meaningful about what's going on electronically," said Si, a theoretical physicist who has worked to explain the origins of high-temperature superconductivity and similar phenomena for nearly two decades. High-temperature superconductivity was discovered in 1986. It occurs when electrons pair up and flow freely in layered alloys like Song's new creation. Dozens of high-temperature superconducting alloys have been created. Most are complex crystals that contain a transition metal—typically iron or copper—and other elements. High-temperature superconductors are typically terrible conductors at room temperature and only become superconductors when they are cooled to a critical temperature. "The central problem of high-temperature superconductivity is to understand the precise relationship between these two fundamental states of matter and the phase transition between them," said Dai, professor of physics and astronomy at Rice. "The macroscopic change is evident, but the microscopic origins of the behavior are open to interpretation, largely because there are many variables in play, and the relationship between them is both synergistic and nonlinear." Dai said two schools of thought "developed from the very beginning of this field. One was the itinerant camp, which argues that both states ultimately arise from itinerant electrons. After all, these materials are metals, even if they may be poor metals." The other camp is the localized camp, which argues that fundamentally new physics arise—due to electron-electron interactions—at the critical point at which the materials transition from one phase to the other. Dai said measurements on Song's new material support the localized theory. In particular, the new material is the first member of a class of iron-based superconductors called pnictides (pronounced NIK-tides) that can be tuned between two competing phases: the superconducting phase in which electrons flow with no resistance, and a "Mott insulating" phase in which electrons become locked in place and do not flow at all. "The discovery that Yu Song made is that this material is more correlated, which is evident because of the Mott insulating phase," Dai said. "This is the first time anyone has reported an iron-based superconductor that can be continuously tuned from the superconducting phase to the Mott insulating phase." Samples were made and some tests were performed at RCQM. 
Additional tests were performed at Chalk River Laboratories' Canadian Neutron Beam Center in Ontario, the National Institute of Standards and Technology's Center for Neutron Research in Maryland, Brookhaven National Laboratory in New York, Oak Ridge National Laboratory's High Flux Isotope Reactor in Tennessee and the Paul Scherrer Institute's Advanced Resonant Spectroscopies beamline in Switzerland. "In the paper, we showed that if the interaction was weak, then even replacing 50 percent of the iron with copper would still not be sufficient to produce the insulating state," Si said. "The fact that our experimentalists have managed to turn the system to be Mott insulating therefore provides direct evidence for strong electron-electron interactions in iron pnictides. That is an important step forward because it suggests that superconductivity should be tied up with these strong electron correlations." More information: Yu Song et al. A Mott insulator continuously connected to iron pnictide superconductors, Nature Communications (2016). DOI: 10.1038/ncomms13879


News Article | February 20, 2017
Site: phys.org

"Concentrations are tiny, on the order of a single grain of salt dissolved in a liter of water," said Yi Cui, a materials scientist and co-author of a paper in Nature Energy. "But the oceans are so vast that if we can extract these trace amounts cost effectively, the supply would be endless." Alternative forms of energy like wind or solar are critical for reducing the world's carbon emissions. While wind and solar costs are plunging, some experts argue that nuclear power remains important because it can be turned on and off to match peaks and valleys in demand without carbon emissions. "We need nuclear power as a bridge toward a post-fossil-fuel future," said Professor Steven Chu, a Nobel Prize-winning physicist and co-author of the Nature Energy article. "Seawater extraction gives countries that don't have land-based uranium the security that comes from knowing they'll have the raw material to meet their energy needs." Chu, a former U.S. secretary of energy who encouraged seawater extraction research before he left the Department of Energy (DOE) to return to Stanford, observed that nuclear power currently generates 20 percent of U.S. electricity and 13 percent worldwide. Even as researchers work to improve reactor safety and solve the waste disposal issues, he believes that a practical way to extract uranium from seawater is needed to reduce the energy insecurity of nations that depend on nuclear power but lack uranium within their own borders. The Stanford findings build on years of research in Japan and China, as well as by DOE scientists at Oak Ridge National Laboratory and the Pacific Northwest National Laboratory. Scientists have long known that uranium dissolved in seawater combines chemically with oxygen to form uranyl ions with a positive charge. Extracting these uranyl ions involves dipping plastic fibers containing a compound called amidoxime into seawater. The uranyl ions essentially stick to the amidoxime. When the strands become saturated, the plastic is chemically treated to free the uranyl, which then has to be refined for use in reactors just like ore from a mine. How practical this approach is depends on three main variables: how much uranyl sticks to the fibers; how quickly ions can be captured; and how many times the fibers can be reused. In the recent work, the Stanford researchers improved on all three variables: capacity, rate and reuse. Their key advance was to create a conductive hybrid fiber incorporating carbon and amidoxime. By sending pulses of electricity down the fiber, they altered the properties of the hybrid fiber so that more uranyl ions could be collected. Postdoctoral scholar Chong Liu oversaw the lab tests that compared Stanford's amidoxime-carbon hybrid fibers with today's amidoxime fibers. First she tested how much uranyl each type of fiber could hold before reaching saturation. In these tests she found that by the time the standard amidoxime fiber had become saturated, Stanford's amidoxime-carbon hybrid fibers had already adsorbed 9 times as much uranyl and were still not saturated. What's more, the electrified fiber captured three times as much uranyl during an 11-hour test using seawater from Half Moon Bay, about an hour from Stanford, and had three times the useful lifespan of the standard amidoxime. "We have a lot of work to do still but these are big steps toward practicality," Cui said. Chu emphasized that research on seawater extraction has to proceed in parallel with reactor safety and waste disposal challenges. 
"For much of this century, some fraction of our electricity will need to come from sources that we can turn on and off. I believe nuclear power should be part of that mix, and assuring access to uranium is part of the solution to carbon-free energy," he said. Explore further: New technique could lead to safer, more efficient uranium extraction


News Article | January 5, 2016
Site: phys.org

FM Global is one of the world's largest commercial and industrial insurance companies. Providing insurance to one in three Fortune 1000 companies, FM Global attributes its success to offering not only comprehensive property insurance products but also world-class loss prevention research and engineering services that help clients better understand steps they can take to prevent fires and minimize loss if a fire does start. However, for FM Global research scientist Yi Wang, fire suppression research affects far more than his business's bottom line. "The goal of our research is to make protection standards and solutions better," Wang said. "We believe that the majority of property loss is preventable. We develop solutions to prevent losses, share these solutions, and promote improvement of protection standards." Some of these solutions Wang discovered came from research performed on the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), a US Department of Energy Office of Science User Facility located at Oak Ridge National Laboratory. Improving protection standards can be a costly process, though—insurers must build large testing facilities and conduct expensive fire tests or use high-performance computing to simulate how fires spread and how suppression systems perform. FM Global researchers are doing both. To better understand how fires spread and the best methods to suppress them, FM Global built the world's largest fire technology laboratory at its property loss prevention research campus in West Glocester, Rhode Island. The entire laboratory is 108,000 sq. ft., with an enormous area for fire tests measuring 33,600 sq. ft., or the size of a football field. These testing rooms have "movable ceilings" that can go from 15 to 60 ft. high, allowing researchers to evaluate fire hazards and test fire suppression techniques at a variety of heights. In addition, FM Global owns the world's largest fire calorimeter—a large hood-like device that is hung above a fire to measure its heat release rate, or the rate at which it is releasing energy. This is one of the best metrics to gauge fire size and is the main driver for determining how many sprinklers should turn on during a fire. In recent years, FM Global has been considering how it might enhance its fire testing capabilities and gain more insight from each test. Its facility is invaluable for gaining insights into fire growth and suppression, but it is also in high demand—researchers often must reserve the space months in advance. In addition, these tests are expensive, with some experiments costing $100,000 or more. And despite the company's football-field-sized fire testing room and world-class calorimeter, FM Global researchers found they might not be able to replicate fires for the world's newest, largest facilities. These industrial "mega" warehouses and distribution centers can exceed 100,000 sq. ft. and rise from 60 to 100 ft. in height. Many companies choose to use this extra height to store their commodities—in corrugated cardboard boxes—on wooden pallets stacked in tiers, each tier about 5 ft. high. As these facilities stack the pallets ever higher, they run the risk of helping a fire spread faster. Wang noted that once storage facilities reach a certain height, sprinkler systems may not prevent catastrophic damage as effectively. "Typically, when sprinklers are on the ceiling during a fire, the smoke reaches the ceiling to set off the system," he said.
"But when warehouses get very tall, the time of the protection system activation can be delayed. In addition, the strong fire plume can prevent the ceiling water spray from penetrating through and reaching the base of the fire in time to achieve effective suppression." The high costs of large-scale fire tests, along with the challenges in generalizing and extrapolating test results, prompted FM Global research scientists in 2008 to develop computational models and simulate fires on an internal cluster computer. Using OpenFOAM, a fluid dynamics code, the team developed FireFOAM, its flagship code for simulating all of the complex physics that occur during an industrial fire. And in keeping with the firm's commitment to sharing important research results, the researchers made their code "open source," available to any researcher studying fires and fire suppression. "Our goal is to develop an efficient computational fluid dynamics (CFD) code for the fire research community which has all the components to catch all the physics that are important for fire suppression, such as heat transfer, material flammability, water spray, chemical transport, and radiation in larger fires," Wang said. Using supercomputers to simulate industrial fires is more complicated than just virtually igniting a fire and letting it burn. Researchers have to account for the roles of soot formation, oxidation, radiation, and sprinkler spray dynamics, among other processes. In addition to calculating so many different physical processes, researchers also require very fine resolution—small time and spatial scales—to fully capture all of the subtle chemical and physics-related processes happening during a fire. Early results for smaller-scale fire simulations using FM Global's internal cluster were encouraging—the simulations showed very good agreement with actual physical tests. However, as the team progressed in its fire modeling experience, it realized the need for substantially more computing power than it had internally to accurately model the much larger fires that could occur in clients' large warehouses. In addition, the team needed access to a very large high-performance computing system to scale FireFOAM up to meet the computational challenge. While attending a scientific conference on combustion, Wang and the FM Global team met Ramanan Sankaran, a computational research scientist and combustion expert at the OLCF. After discussing their research interests, Sankaran told them about the OLCF's Industrial Partnerships Program, which offers researchers in industry the opportunity to access America's most powerful supercomputer for open research. FM Global researchers knew that gaining access to a larger supercomputer was necessary to improve their simulations, but it was only part of the challenge—knowing how to make efficient use of a supercomputer with over 299,000 cores was the other part. They successfully scaled FireFOAM from 100 CPUs to thousands of CPUs. And the team's relationship with Sankaran continued paying dividends. "We got support from Ramanan early on and identified a bottleneck in the pyrolysis submodel in our code, which deals with solid fuel burning," Wang's colleague Ning Ren said. "He worked with us to improve the efficiency of that submodel significantly, and that allowed us to scale our model up so we could simulate 7 tiers (35 ft. high) of storage." 
Through those simulations, the team discovered that stacking storage boxes on wooden pallets impedes the horizontal flame spread, substantially reducing the fire hazard in the early stages of fire growth. As with many large-scale simulations, the team used FireFOAM by dividing its simulations into a very fine mesh, with each cell calculating the processes for a very small area and sharing the data with neighboring grid points. The finer the grid, the more computationally demanding the simulation becomes. Wang credits his team's successful simulations to OLCF computing resources. "Without access to leadership computing resources at the OLCF, the team would have no way to accurately study fire spread dynamics in the larger warehouses," he said. "With Titan, we are doing predictive simulations of 7-tier stacks and gaining important information about the fire hazard that we simply can't gather through our experimental fire tests." After receiving a second award for computing time, the team has been collaborating with OLCF staff to incorporate the Adaptable I/O System (ADIOS) into its FireFOAM code. OLCF staff developed ADIOS to transfer data more efficiently on and off the computer. The team took the improved FireFOAM code and began simulating other commodities stored in warehouses, beginning with simulations for large paper-roll fires in 2015. Wang sees the collaboration between his group and OLCF staff as a relationship that benefits both parties and society at large. "Our project was the first step, and I think the work we've done is very promising," Wang said. "The collaboration with OLCF adds a lot of value to our research; access to Titan and the experts at the OLCF are enhancing our research capabilities so that we can offer better fire protection solutions for our clients." More information: N. Ren, J. de Vries, K. Meredith, M. Chaos, and Y. Wang, "FireFOAM Modeling of Standard Class 2 Commodity Rack Storage Fires," published in the Proceedings of Fire and Materials 2015 (February 2–4, 2015): 340.
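In the fine-mesh structure described above, each cell updates from its own state and its immediate neighbors, and sub-blocks of the mesh trade boundary data; that is the basic shape of such CFD solvers. The single-process NumPy sketch below shows one neighbor-coupled update step (a simple diffusion-like smoothing of a temperature field). It is not FireFOAM; in the real parallel code each MPI rank owns a sub-block of the mesh and exchanges one-cell-thick "halo" layers with its neighbors every step.

import numpy as np

def neighbor_update(field, alpha=0.1):
    """One explicit step in which every interior cell mixes with its six neighbors."""
    new = field.copy()
    new[1:-1, 1:-1, 1:-1] += alpha * (
        field[2:, 1:-1, 1:-1] + field[:-2, 1:-1, 1:-1] +
        field[1:-1, 2:, 1:-1] + field[1:-1, :-2, 1:-1] +
        field[1:-1, 1:-1, 2:] + field[1:-1, 1:-1, :-2] -
        6.0 * field[1:-1, 1:-1, 1:-1]
    )
    return new

# Toy example: a hot spot spreading through a coarse 3D grid.
temperature = np.zeros((32, 32, 32))
temperature[16, 16, 16] = 1000.0
for _ in range(50):
    temperature = neighbor_update(temperature)
print(f"peak value after 50 steps: {temperature.max():.1f}")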


News Article | April 8, 2016
Site: www.scientificcomputing.com

OAK RIDGE, TN — Researchers at the Department of Energy’s Oak Ridge National Laboratory used neutrons to uncover novel behavior in materials that holds promise for quantum computing. The findings, published in Nature Materials, provide evidence for long-sought phenomena in a two-dimensional magnet. In 2006, the physicist Alexei Kitaev developed a theoretical model of microscopic magnets (“spins”) that interact in a fashion that leads to a disordered state called a quantum spin liquid. This “Kitaev quantum spin liquid” supports magnetic excitations equivalent to Majorana fermions — particles that are unusual in that they are their own antiparticles. The presence of Majorana fermions is of great interest because of their potential use as the basis for a qubit, the essential building block of quantum computers. Familiar magnetic materials exhibit magnetic excitations called “spin-waves” that occur in quantized lumps, but in the Kitaev quantum spin liquid, the lumps are split and the Majorana excitations are therefore termed “fractionalized.” Scientists have theorized that Kitaev interactions exist in nature in certain materials containing magnetic ions that exhibit strong coupling between the electron spin and orbital angular momentum. Arnab Banerjee, the study’s lead author and a post-doctoral researcher at ORNL, explained that one way to observe spin liquid physics in such a material is to “splash” or excite the liquid using neutron scattering. Banerjee and colleagues from ORNL and the University of Tennessee, working with collaborators from the Max Planck Institute in Dresden, Germany and Cambridge University in the United Kingdom, used the “splash” technique to investigate a two-dimensional graphene-like material, alpha-ruthenium trichloride. Neutrons shining onto and scattering from the material can deposit small amounts of energy that create magnetic excitations. The form of magnetic excitations created in alpha-ruthenium trichloride was found to be different from spin waves seen in ordinary magnets, but was very well-matched to the spectrum predicted for the Majorana fermions expected in the Kitaev quantum spin liquid. “The concept of Majorana fermion originated in fundamental high energy particle physics, but we saw their signatures in a solid state material at modest temperatures,” Banerjee said. “Neutron scattering not only provided the ‘splash’ we needed to see them, but also directly measured the resulting magnetic excitations. The Spallation Neutron Source’s SEQUOIA instrument is best suited for this research because the range of energy and momentum one can access with the instrument perfectly matches the regime where Majorana fermions show up.” “The observation of these fractionalized excitations is truly remarkable,” said Steve Nagler, director of the Quantum Condensed Matter Division at ORNL and co-corresponding author of the paper. “There has been a huge push recently to see if Kitaev quantum spin liquid physics can be found in materials.  Time will tell whether this represents a first step on the road to a new qubit technology.” The experiment required extremely pure samples that were prepared by Banerjee and Craig Bridges of ORNL. The interpretation of the experiments was helped by theoretical predictions of team members Roderich Moessner of the Max Planck Institute, and Johannes Knolle of Cambridge and their colleagues. 
“This study proved that the proper honeycomb lattice materials can have the exotic excitations long sought by the scientific community, potentially bringing us closer to realizing Kitaev’s vision of topologically protected quantum information,” said Alan Tennant, chief scientist for Neutron Sciences at ORNL and a co-author on the paper. The research team also included Jiaqiang Yan, Adam Aczel, Matthew Stone, Garrett Granroth, and Mark Lumsden from ORNL, David Mandrus, a joint faculty member of the University of Tennessee and ORNL, Ling Li and Yuen Yiu from the University of Tennessee, Dmitry Kovrizhin from Cambridge, and Subhro Bhattacharjee from the Max Planck Institute. The paper is published as “Proximate Kitaev quantum spin liquid behaviour in a honeycomb magnet.” The Spallation Neutron Source and the High Flux Isotope Reactor are DOE Office of Science User Facilities. UT-Battelle manages ORNL for the DOE's Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


Before the 1950s, the electrons, neutrons, and protons comprising atoms were the smallest confirmed units of matter. With advancements in experimental and theoretical techniques in recent decades, though, researchers now try to understand particles a step smaller and more fundamental. In recent years, large-scale experimental facilities, such as the Large Hadron Collider in Switzerland, have allowed researchers to begin testing theories about how subatomic particles behave under different conditions. Research institutions funded by the US Department of Energy (DOE) have also made major investments in experimental test facilities. The newest of these facilities lies in Hall D at the Thomas Jefferson National Accelerator Facility (Jefferson Lab). The experiment, known as GlueX, aims to give researchers unprecedented insight into subatomic particle interactions. "We believe there is a theory that describes how elementary particles interact, quarks and gluons that make up the matter around us," said Robert Edwards, senior staff scientist at Jefferson Lab. "If so, the theory of QCD suggests that there are some exotic forms of matter that exist, and that's what we're looking for in our Hall D experiment." Edwards serves as the principal investigator on a project that uses computation to inform the GlueX experiment as well as corroborate experimental findings. To that end, the team has been using the Titan supercomputer at DOE's Oak Ridge National Laboratory (ORNL). Titan is the flagship supercomputer of the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL. The team wants to make computer codes for quantum chromodynamics (QCD) applications run more efficiently and effectively, and with access to world-class computing resources, the researchers' computational innovations achieved speedups ranging from seven- to tenfold for QCD calculations compared with those achieved in earlier work. The field of QCD is the study of the strong force, which acts between two major categories of subatomic particles—quarks and gluons. Quarks serve as the primary building blocks of an atom's nucleus and make up hadrons, a class of subatomic particles that includes protons and neutrons. Gluons, much like their name implies, carry the force that allows quarks to interact and serve as the "glue" that holds hadrons together. Quarks can also bind with their antiparticles, antiquarks, to form mesons. Mesons are among the most mysterious of all subatomic particles because their resonances exist for only tiny fractions of a second. Through experiments, researchers hope to use GlueX to confirm the existence of "exotic" mesons that would help advance QCD theory. When simulating a quantum system of quarks, gluons, and mesons, the number of calculations needed to compute the interactions of the subatomic particle fields explodes in a hurry. Researchers represent quarks and gluons by using a lattice or grid. In fact, researchers using this method call it lattice QCD (LQCD). Once the theories are expressed in terms of the lattice, the overall simulation becomes similar to a high-school-level model of a crystal—plastic spheres at the lattice points connected by springs between them. One can think of the spheres at the lattice points as representing the quark field with the springs between them representing the quark-gluon interactions. When given energy by pushing or nudging, the model will vibrate.
At any given instant, a snapshot of the model would show a particular arrangement of stretched and compressed springs. If one looked at the statistical distribution of these snapshots, he or she could deduce information about the crystal. QCD works in a similar way. The team's lattices act as snapshots of the states of the gluon fields. By generating a statistical sampling of these QCD field snapshots and analyzing them, the team can compute the properties of the subatomic particles of interest. Although that process might sound simple, it really isn't. Each snapshot requires a lot of computation. To compute the quark-gluon interactions, the team must repeatedly carry out the costly computation of solving the Dirac equation—a relativistic wave equation. Solving the equation is complicated enough, but Jefferson Lab researcher Balint Joo noted that the team's simulations must do it many times. "Our algorithm is one that requires solving the Dirac equation hundreds of thousands of times for each of the 300 to 500 snapshots that we take," he said. Such computational demands push even the world's fastest supercomputers to their performance limits, and Joo and NVIDIA high-performance computing researcher Kate Clark have teamed with other researchers from the USQCD collaboration to search for new ways to improve code performance on the Jefferson Lab team's CHROMA code, among other QCD applications. They shared their results in a paper presentation at the SC16 conference, which took place November 13-18. Since 2005, Clark has focused on methods to improve code performance for the LQCD community. Before moving to NVIDIA, she worked in LQCD algorithms at Boston University with professor Richard Brower, where the team developed a multigrid algorithm. Essentially, computer chips have become so much faster than memory systems that memory can't feed chips the data fast enough, meaning the bottleneck for LQCD calculations comes from the speed of the memory system. Clark has been developing the QUDA library, which takes advantage of a GPU system's computational strength, including its very fast built-in memory, to improve calculation speed. When developing its new algorithm, the Edwards team began by adding a multigrid algorithm into its code. Multigrid algorithms take the large, fine-grained lattice grid for LQCD calculations; average the various grid points; and create multiple smaller, coarser grids. Similar to sound waves, which are really composed of many waves, each with a different pitch or frequency, the team's problem is composed of many modes with different energies. High-energy modes need a fine lattice to represent them accurately, but low-energy modes—which usually slow the solver's convergence—can be represented on coarser lattices with fewer points, ultimately reducing the computational cost. By using multiple grids and separating the modes in the problem onto the various grids most efficiently, the researchers can get through their long line of calculations more quickly and easily. "GPUs provide a lot of memory bandwidth," Clark said. "Solving LQCD problems computationally is almost always memory-bound, so if you can describe your problem in such a way that GPUs can get maximum use of their memory bandwidth, QCD calculations will go a lot quicker." In other words, memory bandwidth is like a roadway in that having more lanes helps keep vehicles moving and lessens the potential for traffic backups. However, the more GPUs working on a problem, the more they must communicate with one another.
If too many GPUs get involved and the problem size doesn't keep up with the computational resources being used, the calculation becomes very inefficient. "One aspect of GPUs is that they bring a lot of parallelism to the problem, and so to get maximum performance, you may need to restructure your calculation to exploit more parallelism," Clark said. Essentially, as computing technology has evolved, processing speed has improved faster than the ability of interconnects to move increasingly larger amounts of data across supercomputers' nodes. For simulations in which researchers divide their calculations across many computer nodes, this imbalance can lead to performance bottlenecks. "With QCD the computational cost doesn't scale linearly; it scales super-linearly," Clark said. "If you double the problem size, the computational cost goes up by more than a factor of two. I can't keep the same size of computation per node and just put it on a bigger system." Despite performance gains through implementing the multigrid algorithm, Clark, Joo, and their collaborators noted that for maximum performance impacts, they would need to exploit sources of parallelism other than those that had typically been used in existing LQCD calculations. Each one of Titan's 18,688 GPUs has 2,688 processing cores. To return to the roadway analogy, each one of a GPU's individual processors is a "lane" on a road, and if only one lane is open, cars back up quickly. With that in mind, Clark, Joo, and their collaborators worked on opening up as many processing "lanes" as possible for LQCD calculations. The team recognized that in addition to exploiting parallelism by calculating multiple grids rather than a single, large grid, they could also exploit more parallelism out of each grid point. To create multiple grids from one large, fine-grained grid, each GPU calculates a set of grid points (which appear as mathematical vectors), averages the results, and sends the averages to the middle grid point. Rather than just having one processing "lane" doing all of these calculations, researchers can use four processing cores to calculate the points above, below, and to the left and right of the original grid point. Much like going from a one-lane road to a four-lane highway, the data throughput moves much faster. This concept works for a two-dimensional calculation, and a four-dimensional calculation can use this same concept to achieve eight-way parallelism. In addition, the researchers noted that each grid point is not just a number but also a vector of data. By splitting up the vector calculations to run on multiple processors, the team further increased code parallelism. Because of these innovations, the Edwards team saw hundredfold speedups on the coarsest grids and a tenfold speedup for finer grids when comparing simulations with those that took place before the QUDA implementation. Clark and Joo pointed out that this approach affects more than the team's CHROMA code. These methods are already being applied to other QCD applications. Clark noted that as computers continue to get more powerful by using accelerators—such as the OLCF's next-generation machine, Summit, set to begin delivering science in 2018—researchers will have to focus on getting as much parallelism as possible. "Going forward, supercomputers like Summit will have many more processing cores, so to get high efficiency as a whole, researchers in many fields are going to have to work on how to exploit all the levels of parallelism in a problem," Clark said. 
"At some point exploiting all levels of parallelism is something that all researchers will have to do, and I think our work is a good example of that." More information: M.A. Clark, Balint Joo, Alexei Strelchenko, Michael Cheng, Arjun Gambhir, and Richard C. Brower, "Accelerating Lattice QCD Multigrid on GPUs Using Fine-Grained Parallelization." SC16 Proceedings of the International Conference for High Performance Computing, Storage and Analysis, Salt Lake City, UT, November 13–18, 2016, dl.acm.org/citation.cfm?id=3014995&CFID=872259881&CFTOKEN=57960029.


News Article | December 5, 2016
Site: www.eurekalert.org

Inside a new exotic crystal, physicist Martin Mourigal has observed strong indications of "spooky" action, and lots of it. The results of his experiments, if corroborated over time, would mean that this type of crystal is a rare new material that can house a quantum spin liquid. Currently, only a small handful of materials are believed to possibly have these properties. This new crystal was synthesized for the first time only a year ago. Corroboration by other physicists of Mourigal's newly produced experimental data could take a decade or longer. A "liquid" found inside a solid object may sound confusing to many people. Welcome to quantum materials, part of the twilight zone called quantum physics, which scientists have been struggling for a century to grasp a nanometer at a time. Though much about it is yet undiscovered, quantum physics describes the underlying reality of matter. The workings of computers, cell phones, superconductors and MRI machines are based on it. But its laws about the atomic realm defy human perception of what is real, and some sound so preposterous that they have become popular science brain teasers. Take quantum entanglement, the core of Mourigal's research on the crystal: If two particles, electrons for example, become entangled, they can be physically separated by many miles, and still be intimately linked to one another. Actions applied to one particle then instantaneously affect the other. At first, this theory was too weird even for the father of relativity, Albert Einstein, who lampooned it as "spooky action at a distance." Entanglement has since been proven in experiments, but now scientists like Mourigal, an experimental physicist at the Georgia Institute of Technology, and his team, have taken it much further. The synthetic crystal he has examined, an ytterbium compound with the formula YbMgGaO4, is likely brimming with observable 'spooky' connections. Mourigal, former postdoctoral fellow Joseph Paddison and graduate student Marcus Daum published their observations in the journal Nature Physics on Monday, December 5, 2016. They collaborated with colleagues at the University of Tennessee and Oak Ridge National Laboratory. Work was funded by the National Science Foundation and the U.S. Department of Energy. This massive 'spooky' entanglement makes a system of electrons a quantum spin "liquid." The term is not meant in the everyday sense, as in water. Here, it describes the collective nature of electrons' spins in the crystal. "In a spin 'liquid,' the directions of the spins are not tidily aligned, but frenzied, although the spins are interconnected, whereas in a spin 'solid' the spin directions have a neat organization," Mourigal said. If the discovery stands, it could open a door to hundreds of yet unknown quantum spin liquid materials that physicists say must exist according to theory and mathematical equations. In the distant future, new quantum materials could become, by today's standards, virtual sorcerer's stones in quantum computing engineers' hands. The ytterbium crystal was first synthesized a year ago by scientists in China, where the government in Beijing has invested heavily in hopes of creating synthetic quantum materials with novel properties. It appears they may have now succeeded, said Mourigal, an assistant professor at Georgia Tech's School of Physics. "Imagine a state of matter where this entanglement doesn't involve two electrons but involves three, five, 10 or 10 billion particles all in the same system," Mourigal said.
"You can create a very, very exotic state of matter based on the fact that all these particles are entangled with each other. There are no individual particles anymore, but one huge electron ensemble acting collectively." One of the only previously observed apparent quantum spin liquids occurs in a natural crystal called herbertsmithite, an emerald green stone found in 1972 in a mine in Chile. It was named after mineralogist Herbert Smith, who died nearly 20 years prior to the discovery. Researchers observed its apparent spin liquid nature in 2012 after Massachusetts Institute of Technology scientists succeeded at reproducing a purified piece of the crystal in their lab. That initial discovery was just the beginning of an Odyssey. Because of its chemical makeup, herbertsmithite produces just one single entanglement scheme. Physics math says there must be myriads more. "Finding herbertsmithite was like saying, 'animals exist.' But there are so many different species of animals, or mammals, or fish, reptiles and birds," Mourigal said. "Now that we have found one, we are looking for different kinds of spin liquids." The more spin liquids experimental physicists confirm, the more theoretical physicists will be able to use them to bend their minds around quantum physics. "It's important to create the encyclopedia of them," Mourigal said. "This new crystal may be only our second or third entry." Physicists from the University of Tennessee succeeded in replicating the original ytterbium crystal, and Mourigal examined it at Oak Ridge National Laboratory (ORNL), where it was cooled down to a temperature of -273.09 degrees Celsius (0.06 degrees Kelvin). The cooling slowed the natural motion of the atoms to a near stop, which allowed the researchers to observe the electron spins' dance around the Ytterbium (Yb) atoms in the YbMgGaO4 crystal. They used a powerful superconducting magnet to line the spins up in an orderly fashion to create a starting point for their observations. "Then we removed the magnetic field, and let them go back to their special kind of wiggling," Mourigal said. His team carried out the observations at the ORNL Spallation Neutron Source, a U.S. Department of Energy Office of Science User Facility. SNS has about the power and size of a particle supercollider, and allowed the scientists to watch the concert of electrons' spins by bombarding them with neutrons. Normally, when one electron flips its spin, researchers would expect it to create a neat chain reaction, resulting in a wave going through the crystal. The wave of electron spins flipping in sequence might look something like fans at a football game standing and sitting back down to make a wave go around the stadium. But something odd happened. "This jumbly kind of spin wave broke down into many other waves, because everything is collective, everything is entangled," Mourigal said. "It was a continuum of excitations, but breaking down across many electrons at once." It was qualitatively similar to what was observed using the same technique on herbertsmithite. To authenticate the observations made by Mourigal's team, theoretical physicists will have to crunch the data with methods that, in part, rely on topology, a focus of the 2016 Nobel Prize in Physics. Mourigal thinks chances are they will pass muster. "At first glance, this material is screaming, 'I'm a quantum spin liquid,'" he said. But it must undergo a years-long battery of stringent mathematical tests. 
The theoretical physicists will wrap the data around a mathematical "donut" to confirm whether or not it is a quantum spin liquid. "That's meant seriously," Mourigal said. "As a mathematical mental exercise, they virtually spread the spin liquid around a donut shape, and the way it responds to being on a donut tells you something about the nature of that spin liquid." Though entangled particles appear to defy space and time, the shape of space they occupy affects the nature of the entanglement pattern. The possibility of a quantum spin liquid was first demonstrated in the 1930s, but only using atoms placed in a straight line. Physicists have been searching in the decades since for materials containing them. Zhiling Dun and Haidong Zhou from the University of Tennessee, and Georg Ehlers, Yaohua Liu and Matthew Stone from Oak Ridge National Laboratory coauthored the paper (10.1038/nphys3971). Reproduction of the ytterbium crystal at UT was funded by a National Science Foundation grant (DMR-1350002). ORNL, which provided the neutron scattering is part of the U.S. Department of Energy. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsoring agency.


"While our first priority is to upgrade Titan to handle the huge amounts of data generated by the particle and nuclear physics experiments in the Large Hadron Collider in Switzerland, we are increasingly focusing on providing support for other scientific disciplines like advanced biology and materials science that increasingly need supercomputing for big data applications," said Kaushik De, UTA physics professor and director of the High-Energy Physics Group. "Our unifying workload management system will also help integrate the grid, supercomputers and cloud computing to move toward a high-capacity global cyber-infrastructure that could transform scientific research around the world." Professor De, who joined UTA in 1993, designed the workload management system known as PanDA, for Production and Distributed Analysis, to handle data analysis jobs for the Large Hadron Collider's ATLAS particle physics experiment. During the LHC's first run of experiments from 2010 to 2013, PanDA made ATLAS data available for analysis by 3,000 scientists around the world using the LHC's global grid of networked computing resources. The latest rendition, known as Big PanDA, schedules jobs opportunistically on Titan in a manner that does not conflict with Titan's ability to schedule its traditional, very large, leadership-class computing jobs. Even when fully scheduled, some 10 percent of Titan's computing capacity may still remain unused for short periods, creating the opportunity for Big PanDA to "harvest" hours of idle time for other projects. "The aim of this new project is to use Big PanDA to improve operational efficiencies of supercomputers like Titan and develop an effective approach that could run on all leading computing facilities," Professor De said. "Using Big PanDA to further integrate supercomputers with the grid and cloud computing would also enable a much more efficient use of computing resources for a wider range of scientific applications." Alex Weiss, chair of UTA's Physics Department, underscored the leadership that Professor De has demonstrated during the last two decades in data-driven discovery, one of the key themes of UTA's Strategic Plan 2020: Bold Solutions | Global Impact. "Professor De has been a pioneer in the area of workload management and distributed computing. His work has provided important new solutions to the problem of how to manage the worldwide production and distributed analysis of the big data generated in particle physics experiments," Weiss said. "This new grant demonstrates yet again that ideas, methods and systems designed at UTA are at the leading edge of the field and provide essential support for emerging science needs." UTA's scientists will be collaborating with Brookhaven National Laboratory, Oak Ridge National Laboratory and Rutgers University on this project. UTA currently hosts the ATLAS SouthWest Tier 2 Center, one of more than 100 centers around the world where massive amounts of data from the particle collisions are fed and analyzed. Professor De leads a team of 30 UTA researchers dedicated to the ATLAS project. His research has generated more than $30 million in research funding for UTA during the past two decades. In 2015 he was elected a Fellow of the American Physical Society for his work in developing cloud computing architectures that enabled global collaboration and big data analysis. Professor De also is known for his other research exploring physics beyond the current "theory of everything" known as the standard model. 
Explore further: Higgs: Bringing the power of data-mining to astrophysics, biology
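The idle-time "harvesting" described above can be pictured with a simple opportunistic backfill loop. The sketch below is illustrative only: it is not PanDA or Big PanDA code, and the scheduler interface (free_nodes, total_nodes, submit_backfill), the 10 percent threshold and the polling period are all assumptions made for the example.

import time

IDLE_FRACTION_THRESHOLD = 0.10   # assumed: act when at least 10% of nodes sit idle
POLL_INTERVAL_S = 60             # assumed polling period in seconds

def harvest_idle_time(scheduler, work_queue, cycles=10):
    """Opportunistically slot short, pre-packaged jobs into idle gaps.

    `scheduler` and `work_queue` stand in for a batch system and a queue of
    grid workloads; jobs are sized to fit the current gap so they do not
    delay the large leadership-class jobs the scheduler has already planned.
    """
    for _ in range(cycles):
        free = scheduler.free_nodes()                    # hypothetical query
        if free >= IDLE_FRACTION_THRESHOLD * scheduler.total_nodes():
            job = work_queue.next_job(max_nodes=free)    # pick a job that fits the gap
            if job is not None:
                scheduler.submit_backfill(job)           # hypothetical backfill submission
        time.sleep(POLL_INTERVAL_S)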


News Article | December 16, 2015
Site: www.materialstoday.com

AlphaSTAR Corporation's Genoa 3D printing simulation software has been selected as a winner of the 2015 R&D 100 Award. The competition, sponsored by R&D Magazine, recognizes the most impactful technological advances worldwide and acknowledges the scientists and engineers who have led these efforts. AlphaSTAR previously received the award in 2000 for the core Genoa Multi-Scale Technologies, developed with NASA Glenn Research Center and Clarkson University. Genoa 3D printing simulation can accurately predict the deflection, residual stress, damage initiation and crack growth formation observed in various 3D printing machines. Its multi-scale progressive failure analysis (MS-PFA) methods can model the entire 3D printing process at the material characterization level without the use of finite element modeling (FEM), while structural MS-PFA simulates the entire 3D printing process using FEM. Because the software reads directly from the printer's G-code file, engineers can simulate the entire printing process, taking into account material uncertainties and production defects. The software can also visualize damage in the printed structure, such as damage in the fiber and matrix, with specific delamination types such as transverse shear, fiber crushing, fiber micro-buckling, relative rotation and out-of-plane shear stress. Damage can be traced directly to 3D printing variables such as deposition speed, bead width, overall path, and bottom plate temperature and convection conditions. AlphaSTAR collaborated with Oak Ridge National Laboratory to develop the software's simulation capabilities. This story is reprinted from material from AlphaSTAR, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.
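As a rough illustration of the kind of information a printer's G-code file carries for a process simulation to read, the short sketch below pulls deposition moves, coordinates and feed rates out of a few generic FDM-style commands. It is a minimal, self-contained example, not AlphaSTAR's parser, and the sample G-code lines are invented.

import re

# A few assumed lines of FDM-style G-code: G1 = linear move,
# X/Y = position (mm), E = extruded filament length, F = feed rate (mm/min).
sample_gcode = """
G1 X10.0 Y5.0 E0.40 F1800
G1 X20.0 Y5.0 E0.85 F1800
G1 X20.0 Y15.0 E1.30 F1200
""".strip().splitlines()

def parse_moves(lines):
    """Return a list of dicts describing each extruding move."""
    moves = []
    for line in lines:
        if not line.startswith("G1"):
            continue
        params = dict(re.findall(r"([XYZEF])([-+]?\d*\.?\d+)", line))
        if "E" in params:  # keep only moves that deposit material
            moves.append({k: float(v) for k, v in params.items()})
    return moves

for move in parse_moves(sample_gcode):
    print(move)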


News Article | November 18, 2015
Site: phys.org

The Freight Analysis Framework captures detailed data on the movements of freight in the United States, including domestic movements of foreign imports and exports. Credit: iStockphoto A new freight database developed with assistance from the Department of Energy's Oak Ridge National Laboratory will help transportation officials improve highways, railroads and other trade routes across the country. With funding from the Department of Transportation's Bureau of Transportation Statistics (BTS) and Federal Highway Administration, ORNL data experts recently provided specialized data development, statistical analysis and modeling support to create the Freight Analysis Framework (FAF), the most comprehensive public source of freight transportation data in the nation. U.S. Transportation Secretary Anthony Foxx and Federal Highway Administrator Gregory Nadeau recently announced the release of the updated database known as FAF4. "A transportation network that can support the freight needs of this country is essential to a healthy economy," Foxx said. The Department of Transportation updates the framework every five years based on numbers from its BTS Commodity Flow Survey, which captures about 70 percent of freight transportation. ORNL fills in the other 30 percent, adding data on the domestic movements of foreign trade along with business segments such as crude petroleum and retail services that are not represented in the BTS survey. Researchers apply sophisticated modeling techniques to integrate disparate data sources into a comprehensive database that provides a national picture of freight flows. With about two million records in the latest FAF database, this resource captures freight movement by origin, destination, commodity, mode, trade type, tonnage and value. Users can determine the amount and type of goods that move by highway, waterway, rail, and other modes of transportation between states, regions and across the nation. An ORNL-developed data extraction tool makes it easy for users to access the information they need online through the FAF website. "FAF data help planners in the public and private sectors to better understand freight movement, analyze market trends, and make strategic decisions," said Diane Davidson, leader of ORNL's Transportation Planning and Decision Science Group. "Transportation planners use it to target resources to improve operations or increase capacity." Trends in FAF4 show that approximately 17 billion tons of goods were moved in 2012 on the transportation network, which breaks down to 47 million tons of goods valued at more than $49 billion a day. Trucks transported the bulk of those goods, hauling 64 percent of the weight and 71 percent of the value. These and other data from FAF4 can help direct infrastructure investments. The FAF4 is one of the most used tools in a suite of transportation analytics developed by ORNL's Center for Transportation Analysis, which supports vehicle and transportation system-wide efficiency gains through the development of predictive information, analytical methods and vehicle systems data. Explore further: Strategies for improved collection of national travel data
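As a rough sketch of how freight records with the fields described above (origin, destination, commodity, mode, tonnage, value) can be summarized, the example below builds a tiny mock table and totals tonnage and value by mode with pandas. The column names and numbers are illustrative only, not the actual FAF4 schema or data.

import pandas as pd

# Illustrative records only; real FAF4 data uses its own coding scheme.
records = pd.DataFrame([
    {"origin": "TN", "destination": "GA", "commodity": "machinery",   "mode": "truck", "ktons": 120.0, "value_musd": 340.0},
    {"origin": "TN", "destination": "IL", "commodity": "coal",        "mode": "rail",  "ktons": 900.0, "value_musd":  55.0},
    {"origin": "LA", "destination": "TN", "commodity": "petroleum",   "mode": "water", "ktons": 450.0, "value_musd": 210.0},
    {"origin": "CA", "destination": "TN", "commodity": "electronics", "mode": "truck", "ktons":  15.0, "value_musd": 480.0},
])

# Total weight and value moved by each mode, mirroring the kind of
# mode-share figures quoted for FAF4 (e.g., trucks carrying most tonnage).
by_mode = records.groupby("mode")[["ktons", "value_musd"]].sum()
print(by_mode)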


3M will supply Plug Power Inc. with membrane electrode assemblies (MEAs) to be used in Plug Power-designed proton exchange membrane (PEM) fuel cell stacks under a new strategic supply agreement. Through this strategic supply agreement, fuel cell stacks will be manufactured in Plug Power's Latham, NY and Spokane, WA facilities to support the $20-billion material handling market. Additionally, this new fuel cell stack technology will be utilized to expand Plug Power's presence into hydrogen-enabled electric vehicle applications outside of the material handling market. Fuel cell systems using the 3M MEAs and Plug Power stack design will begin shipments in the fourth quarter of 2015. 3M offers a broad portfolio of solutions for fuel cell construction—for low- and high-temperature applications—including fluoropolymers, membrane materials and advanced ceramics. 3M has applied its expertise in electrochemistry, fluoropolymer membranes, and high-throughput manufacturing for advanced fuel cell MEAs. 3M Dyneon Fluoropolymers feature excellent chemical and temperature resistance, weatherability, permeation control and low surface energy—making them highly suited for PEM applications. For example, gaskets made from 3M Dyneon Ultra Low Viscosity Fluoroelastomers cure at low temperatures for seal-in-place solutions. Barrier films can be made from thermoplastics such as 3M Dyneon THV, which imparts oxidative stability; 3M thermoplastics also function as binders for bipolar plate applications. 3M's fully integrated membrane electrode assembly is based on its catalyst coated membrane (CCM). The CCMs feature integrated, thrifted subgasketed construction, and combine low precious metal loadings with strong performance and durability. 3M researchers have been working for several years on a DOE-funded project to develop a durable, low-cost, robust, high-performance membrane electrode assembly (MEA) for transportation applications that is able to meet or exceed US Department of Energy 2020 MEA targets. Partners in the three-year, $4.6-million project include Johns Hopkins University, Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, Michigan Technological University, Freudenberg FCCT, Argonne National Laboratory, Los Alamos National Laboratory and GM. The approach 3M took with this advanced MEA research project was to optimize the integration of advanced anode and cathode catalysts, based on 3M's nanostructured thin film (NSTF) catalyst technology platform, with next generation PFSA PEMs (polymer electrolyte membranes based on perfluorosulfonic acid); gas diffusion media; cathode interfacial layers; and flow fields to improve overall MEA performance, durability, robustness, and cost. Plug Power's GenKey provides an all-inclusive package for customers, incorporating GenFuel hydrogen and fueling infrastructure, GenCare aftermarket service and either GenDrive or ReliOn fuel cell systems. GenDrive, a lead-acid battery replacement, is used in electric lift trucks in high-throughput material handling applications. With more than 9,000 GenDrive units deployed with material handling customers, GenDrive has been proven reliable with over 107 million hours of runtime.


For half a century, researchers worldwide have tried to mine uranium from the oceans with limited success. In the 1990s, Japan Atomic Energy Agency (JAEA) scientists pioneered materials that hold uranium as it is stuck or adsorbed onto surfaces of the material submerged in seawater. In 2011, the U.S. Department of Energy (DOE) initiated a program involving a multidisciplinary team from U.S. national laboratories, universities and research institutes to address the fundamental challenges of economically extracting uranium from seawater. Within five years this team has developed new adsorbents that reduce the cost of extracting uranium from seawater by three to four times. To chronicle this and other successes, the special issue focused on "Uranium in Seawater" amasses research presented by international scientists at ACS's spring 2015 meeting in Denver. Major contributions came from researchers supported by the Fuel Resources Program of DOE's Office of Nuclear Energy who coordinate an international effort involving researchers in China and Japan under agreements with the Chinese Academy of Sciences and JAEA. The DOE program is laying the technological foundation to determine the economic feasibility of uranium recovery from seawater. It supports researchers at national laboratories, universities and research institutes focused on developing and testing the next generation of adsorbents that will exhibit higher adsorbent capacity, faster binding and lower degradation over multiple use cycles in seawater. "For nuclear power to remain a sustainable energy source, an economically viable and secure source of nuclear fuel must be available," said Phillip Britt, who provides technical and outreach leadership for the DOE program. "This special journal issue captures the dramatic successes that have been made by researchers across the world to make the oceans live up to their vast promise for a secure energy future." Scientists from two DOE labs, Oak Ridge National Laboratory in Tennessee and Pacific Northwest National Laboratory in Washington, led more than half of the 30 papers in the special issue. ORNL contributions concentrated on synthesizing and characterizing uranium adsorbents, whereas PNNL papers focused on marine testing of adsorbents synthesized at national labs and universities. "Synthesizing a material that's superior at adsorbing uranium from seawater required a multi-disciplinary, multi-institutional team including chemists, computational scientists, chemical engineers, marine scientists and economists," said Sheng Dai, who has technical oversight of the ORNL uranium from seawater program. "Computational studies provided insight into chemical groups that selectively bind uranium. Thermodynamic studies provided insight into the chemistry of uranium and relevant chemical species in seawater. Kinetic studies uncovered factors that control how fast uranium in seawater binds to the adsorbent. Understanding adsorbent properties in the laboratory is key for us to develop more economical adsorbents and prepare them to grab as much uranium as possible." That teamwork culminated in the creation of braids of polyethylene fibers containing a chemical species called amidoxime that attracts uranium. So far, testing has been conducted in the laboratory with real seawater; but the braids are deployable in oceans, where nature would do the mixing, avoiding the expense of pumping large quantities of seawater through the fibers. 
After several weeks, uranium oxide-laden fibers are collected and subjected to an acidic treatment that releases, or desorbs, uranyl ions, regenerating the adsorbent for reuse. Further processing and enriching of the uranium produces a material to fuel nuclear power plants. PNNL researchers tested the adsorbents developed at ORNL and other laboratories, including universities participating in the Nuclear Energy University Program, using natural filtered and unfiltered seawater from Sequim Bay in Washington under controlled temperature and flow-rate conditions. Gary Gill, deputy director of PNNL's Coastal Sciences Division, coordinated three marine testing sites—at PNNL's Marine Sciences Laboratory in Sequim, Wash., Woods Hole Oceanographic Institution in Massachusetts and the University of Miami in Florida. "Understanding how the adsorbents perform under natural seawater conditions is critical to reliably assessing how well the uranium adsorbent materials work," Gill said. "In addition to marine testing, we assessed how well the adsorbent attracted uranium versus other elements, adsorbent durability, whether buildup of marine organisms might impact adsorbent capacity, and we demonstrated that most of the adsorbent materials are not toxic. PNNL also performed experiments to optimize release of uranium from the adsorbents and adsorbent re-use using acid and bicarbonate solutions." Marine testing at PNNL showed an ORNL adsorbent material had the capacity to hold 5.2 grams of uranium per kilogram of adsorbent in 49 days of natural seawater exposure—the crowning result presented in the special issue. The Uranium from Seawater program continues to make significant advancements, producing adsorbents with even higher capacities for grabbing uranium. Recent testing exceeded 6 grams of uranium per kilogram of adsorbent after 56 days in natural seawater - an adsorbent capacity that is 15 percent higher than the results highlighted in the special edition. Uranium from terrestrial sources can last for approximately 100 years, according to Erich Schneider of the University of Texas-Austin. As terrestrial uranium becomes depleted, prices are likely to rise. "If we have technology to capture uranium from seawater, we can ensure that an essentially unlimited supply of the element becomes available if uranium prices go up in the future," Schneider said. In July, experts in uranium extraction from seawater will convene at University of Maryland-College Park for the International Conference on Seawater Uranium Recovery. They will further explore the potential of uranium from seawater to keep the world's lights on. Explore further: How seawater could corrode nuclear fuel
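The capacity comparison quoted above is simple to check; a minimal sketch of the arithmetic, using only the two figures reported in the article, follows.

# Adsorbent capacities reported from marine testing (grams of uranium per kilogram of adsorbent).
capacity_special_issue = 5.2   # after 49 days of natural seawater exposure
capacity_recent = 6.0          # after 56 days of natural seawater exposure

improvement = (capacity_recent - capacity_special_issue) / capacity_special_issue
print(f"Relative improvement: {improvement:.0%}")   # prints roughly 15%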


News Article | December 9, 2016
Site: www.eurekalert.org

OAK RIDGE, Tenn., Dec. 9, 2016--Neutron scattering studies of a rare earth metal oxide have identified fundamental pieces to the quantum spin liquid puzzle, revealing a better understanding of how and why the magnetic moments within these materials exhibit exotic behaviors such as failing to freeze into an ordered arrangement even near absolute zero temperatures. In a paper published in Nature Physics, a team of researchers from the Georgia Institute of Technology, the University of Tennessee and the Department of Energy's Oak Ridge National Laboratory used neutrons to examine the origins of unusual magnetic behavior in a rare earth-based metal oxide, ytterbium-magnesium-gallium-tetraoxide (YbMgGaO4). The material, discovered in 2015, is known to have strange magnetic properties, putting it in a unique category of materials classified as quantum spin liquids. "A quantum spin liquid is an exotic state of matter characterized by the entanglement of particles over long distances across the atomic scale," said lead investigator Martin Mourigal, an assistant physics professor at the Georgia Institute of Technology. Think of Schrödinger's cat, the thought experiment, he said: Many particles participate in a quantum superposition, where multiple quantum states combine to form a new quantum state, and cannot be characterized by the behavior of individual particles. By definition, he said, "it's something we can't explain with classical physics." In a series of experiments at ORNL's Spallation Neutron Source, the researchers revealed three key features underpinning the material's exotic properties: antiferromagnetic interactions, where groups of electron spins have an antiparallel alignment with their respective neighbors; spin space anisotropy, meaning that individual magnetic moments strongly prefer aligning themselves along specific directions in the material; and chemical disorder between the material's magnetic layers that randomizes the interactions between electron spins. Neutrons are well suited for studying magnetism because their lack of electric charge allows them to penetrate through materials, even when the neutrons' energy is low. The neutrons also have magnetic moments, allowing researchers to directly probe the behavior of spins within materials. "Neutron scattering is the only technique that allows us to study the dynamics of quantum spin liquids at the lowest temperatures," Mourigal said. However, quantum spin liquids present a challenge because their magnetic moments are constantly changing. In typical materials, researchers can lock the spins into certain symmetric patterns by lowering the temperature of the sample, but this approach doesn't work on spin liquids. In the team's first neutron scattering measurements of a YbMgGaO4 single-crystal sample at the SNS's Cold Neutron Chopper Spectrometer, CNCS, the researchers observed that, even at a temperature of 0.06 kelvins (approximately negative 460 degrees Fahrenheit), magnetic excitations remained disordered or "fuzzy." This fluctuating magnetic behavior, known to occur in quantum spin liquids, runs counter to the laws of classical physics. "The material screamed spin liquid when we put it in the beam," Mourigal said. To overcome this fuzziness, the team used an 8 Tesla magnet to create a magnetic field that locked the spins into an ordered and partly frozen arrangement, allowing for better measurements. 
"Once we applied the magnetic field, we were able to measure coherent magnetic excitations in the material that propagate sort of like sound waves," said CNCS instrument scientist Georg Ehlers. "When a neutron comes into the material, it flies by a magnetic moment and shakes it. The nearby magnetic moments see this happening, and they all begin to vibrate in unison. The frequency of these vibrations is determined by the energy between neighboring spins." Those magnetic field measurements enabled the team to directly validate theoretical expectations and provided a physical understanding of the spin behavior and the system as a whole. "A quantum spin liquid is an intrinsically collective state of matter," said Mourigal. "But if you want to understand the society, you need to understand the individuals as well." The team then turned to another SNS instrument, the Fine-Resolution Fermi Chopper Spectrometer, SEQUOIA, to understand the individual properties of the magnetic moments. "In rare earth magnets, rich physics, like what was observed at the CNCS instrument, can emerge from the fact that the individual spins can prefer to point along certain directions in a crystal," said SEQUOIA instrument scientist Matthew Stone. "SEQUOIA examined the localized higher energy states to confirm the individual pieces of the model used to describe the CNCS data were correct." Mourigal says the information gleaned from the experiments will enable researchers to develop better theoretical models to further study these quantum phenomena. "While the exact nature of the quantum state hosted by this material has not been fully established yet, we've discovered that chemical disorder and other effects are important here," said Mourigal. "With these experiments, we've really been able to nail down what ingredients need to be taken into the recipe for a quantum spin liquid in this material." The paper's authors are Joseph A. M. Paddison, Marcus Daum, Zhiling Dun, Georg Ehlers, Yaohua Liu, Matthew B. Stone, Haidong Zhou and Martin Mourigal. The YbMgGaO4 sample was synthesized at the University of Tennessee. Supplementary measurements of the YbMgGaO4 crystal structure were made at the SNS CORELLI instrument. The research received support from the Georgia Institute of Technology, the National Science Foundation, and DOE's Office of Science. SNS is a DOE Office of Science User Facility. UT-Battelle manages ORNL for the DOE's Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov.


News Article | January 22, 2016
Site: www.greencarcongress.com

The US Department of Energy (DOE) is establishing a new consortium to support a vision of developing and deploying lightweight materials twice as fast, at a fraction of the cost, while increasing US competitiveness in manufacturing. "It can take as long as 20 years to get a lightweight material from concept to market. If DOE can help cut that by 10 years, the savings and boost to US industry will be substantial. With our national laboratories' expertise in lightweight materials built on a framework of industry engagement and support, LightMAT can put more of DOE's resources to work and have a real impact," said Reuben Sarkar, deputy assistant secretary for sustainable transportation in DOE's Office of Energy Efficiency & Renewable Energy. DOE's Pacific Northwest National Laboratory will manage the new Lightweight Materials National Lab Consortium (LightMAT)—a network of nine national labs with technical capabilities that are highly relevant to lightweight materials development and use. LightMAT will provide a single point of contact which will match industry-led research teams with specific expertise and equipment found only at the national labs. PNNL, in collaboration with the eight other DOE national laboratories, is developing a virtual storefront to showcase the capabilities of participating laboratories. Industry partners will now be able to engage LightMAT assistance through the LightMAT website. Any US company, large or small, is able to seek assistance to locate and use strategic resources to accelerate lightweight materials research & development. Along with access to national laboratory capabilities, all industrial partners will be able to use collaboration tools and data aggregation tools that are being developed as part of LightMAT. This includes simplified work and technology transfer agreements, and a LightMAT non-disclosure agreement that allows access to all participating laboratories through a single document. To date, the LightMAT network includes Ames Laboratory; Argonne National Laboratory; Idaho National Laboratory; Los Alamos National Laboratory; Lawrence Livermore National Laboratory; National Renewable Energy Laboratory; Oak Ridge National Laboratory; Pacific Northwest National Laboratory; and Sandia National Laboratories.


PPPL physicists working on development of the XGC code for simulating the behavior of transport barrier particles. From left: Robert Hager, Seung-Ho Ku, team leader C.S. Chang, Stephane Ethier. Credit: Elle Starkman/PPPL Office of Communications

U.S. Department of Energy (DOE) high-performance computer sites have selected a dynamic fusion code, led by physicist C.S. Chang of the DOE's Princeton Plasma Physics Laboratory (PPPL), for optimization on three powerful new supercomputers. The PPPL-led code was one of only three codes out of more than 30 science and engineering programs selected to participate in Early Science programs on all three new supercomputers, which will serve as forerunners for even more powerful exascale machines that are to begin operating in the United States in the early 2020s. The PPPL code, called XGC, simulates the behavior of the ions, electrons and neutral atoms in the transport barrier region—or "pedestal"—between the ultra-hot core of the plasma that fuels fusion reactions and the cooler and turbulent outer edge of the plasma. The pedestal must be high and wide enough to prevent damage to the divertor plate that exhausts heat in doughnut-shaped tokamaks that house the fusion reactions. "How to create a high edge pedestal without damaging the divertor wall is the key question to be answered," said Chang. "That is a prerequisite for achieving steady state fusion." Among the team of nationwide experts developing this program are PPPL physicists Seung-Ho Ku, Robert Hager and Stephane Ethier. Selection of the PPPL code could help ready it for exascale development. "Computer architecture is evolving rapidly and these new pre-exascale computers have features that are quite different from some of the earlier petascale supercomputers," said Amitava Bhattacharjee, head of the Theory Department at PPPL. Petascale machines operate in petaflops, or one million billion (10^15) floating point operations per second. Bhattacharjee heads a PPPL-led Exascale Computing Project that will integrate the XGC code with GENE, a code developed at the University of California, Los Angeles, to create the first simulation of a complete fusion plasma. Exascale supercomputers will perform exaflops, or a billion billion (10^18) floating point operations per second. The three new pre-exascale supercomputers are Cori, Summit and Aurora. Cori, now fully installed at the National Energy Research Scientific Computing Center (NERSC) at the Lawrence Berkeley National Laboratory and named for biochemist Gerty Cori, the first American woman to win a Nobel Prize in science, has a theoretical peak speed of 30 petaflops on scientific applications using Intel Xeon "Haswell" and Xeon Phi "Knights Landing" processor nodes. Also selected to participate in Cori's NERSC Exascale Science Applications Program (NESAP) is the PPPL-led M3D-C1, an extended magnetohydrodynamics (MHD) code focused on simulation of plasma disruptions led by physicist Stephen Jardin, with support from physicists Joshua Breslau, Nate Ferraro and Jin Chen. Two more PPPL-led codes, in addition to the 20 previously selected that included XGC and M3D-C1, will participate in the Cori NERSC program. These are the GTC-P and GTS codes, which model plasma turbulence in the plasma core and are headed by physicists William Tang and Stephane Ethier. Principal developer of the GTS code is PPPL physicist Weixing Wang. The GTC-P code is PPPL's version of the GTC code led by the University of California, Irvine. 
Summit, to be operational at the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory in 2018, features a hybrid architecture consisting of IBM Power 9 processors and multiple NVIDIA Volta graphics processing units and will be capable of performing at least 200 petaflops for a wide range of applications. The facility's Center for Accelerated Application Readiness (CAAR) program has selected 13 projects that will participate in the program to optimize their applications codes and demonstrate the effectiveness of their applications on Summit. Aurora, scheduled to be deployed in 2018 at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory, will be composed of third-generation Intel Xeon Phi "Knights Hill" many-core processors. Ten projects have been selected for the ALCF Early Science Program on Aurora, which is expected to be capable of performing up to 200 petaflops on a wide range of scientific applications. PPPL, on Princeton University's Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas—ultra-hot, charged gases—and to developing practical solutions for the creation of fusion energy. The Laboratory is managed by the University for the U.S. Department of Energy's Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | February 20, 2017
Site: www.futurity.org

Trace amounts of uranium exist in seawater, but efforts to extract it for nuclear power haven’t produced enough to make it a viable source for countries that don’t have uranium mines. A practical method for extracting that uranium, which produces higher quantities in less time, could help make nuclear power a viable part of the quest for a carbon-free energy future. “Concentrations are tiny, on the order of a single grain of salt dissolved in a liter of water,” says Yi Cui, a professor of materials science and engineering and of photon science at Stanford University and coauthor of a new paper in Nature Energy. “But the oceans are so vast that if we can extract these trace amounts cost effectively, the supply would be endless.” Alternative forms of energy like wind or solar are critical for reducing the world’s carbon emissions. While wind and solar costs are plunging, some experts argue that nuclear power remains important because it can be turned on and off to match peaks and valleys in demand without carbon emissions. “We need nuclear power as a bridge toward a post-fossil-fuel future,” says coauthor Steven Chu, professor of molecular and cellular physiology and a Nobel Prize-winning physicist. “Seawater extraction gives countries that don’t have land-based uranium the security that comes from knowing they’ll have the raw material to meet their energy needs.” Chu, a former US secretary of energy who encouraged seawater extraction research before he left the Department of Energy, says that nuclear power currently generates 20 percent of US electricity and 13 percent worldwide. Even as researchers work to improve reactor safety and solve the waste disposal issues, he believes that a practical way to extract uranium from seawater is needed to reduce the energy insecurity of nations that depend on nuclear power but lack uranium within their own borders. The new findings build on years of research in Japan and China, as well as by DOE scientists at Oak Ridge National Laboratory and the Pacific Northwest National Laboratory. Scientists have long known that uranium dissolved in seawater combines chemically with oxygen to form uranyl ions with a positive charge. Extracting these uranyl ions involves dipping plastic fibers containing a compound called amidoxime into seawater. The uranyl ions essentially stick to the amidoxime. When the strands become saturated, the plastic is chemically treated to free the uranyl, which then has to be refined for use in reactors just like ore from a mine. How practical this approach is depends on three main variables: how much uranyl sticks to the fibers; how quickly ions can be captured; and how many times the fibers can be reused. In the recent work, researchers improved on all three variables: capacity, rate, and reuse. The key advance was to create a conductive hybrid fiber incorporating carbon and amidoxime. By sending pulses of electricity down the fiber, they altered the properties of the hybrid fiber so that more uranyl ions could be collected. Postdoctoral scholar Chong Liu oversaw the lab tests that compared Stanford’s amidoxime-carbon hybrid fibers with today’s amidoxime fibers. First she tested how much uranyl each type of fiber could hold before reaching saturation. The findings show that by the time the standard amidoxime fiber had become saturated, the amidoxime-carbon hybrid fibers had already adsorbed nine times as much uranyl and were still not saturated. 
What’s more, the electrified fiber captured three times as much uranyl during an 11-hour test using seawater from Half Moon Bay, about an hour from Stanford, and had three times the useful lifespan of the standard amidoxime. “We have a lot of work to do still but these are big steps toward practicality,” Cui says. Research on seawater extraction has to proceed in parallel with reactor safety and waste disposal challenges, Chu says. “For much of this century, some fraction of our electricity will need to come from sources that we can turn on and off. I believe nuclear power should be part of that mix, and assuring access to uranium is part of the solution to carbon-free energy.”


News Article | February 24, 2017
Site: www.technologyreview.com

The company, backed by Peter Thiel's Founders Fund, revised inflated assertions about its advanced reactor design after growing concerns prompted an MIT review. Nuclear energy startup Transatomic Power has backed away from bold claims for its advanced reactor technology after an informal review by MIT professors highlighted serious errors in the company's calculations, MIT Technology Review has learned. The Cambridge, Massachusetts-based company, founded in 2011 by a pair of MIT students in the Department of Nuclear Science and Engineering, asserted that its molten-salt reactor design could run on spent nuclear fuel from conventional reactors and generate energy far more efficiently than they do. In a white paper published in March 2014, the company proclaimed its reactor "can generate up to 75 times more electricity per ton of mined uranium than a light-water reactor." Those lofty claims helped it raise millions in venture capital, secure a series of glowing media profiles (including one in this publication), and draw a rock-star lineup of technical advisors. But in a paper on its site dated November 2016, the company downgraded "75 times" to "more than twice." In addition, it now specifies that the design "does not reduce existing stockpiles of spent nuclear fuel" or use them as its fuel source. The promise of recycling nuclear waste, which poses tricky storage and proliferation challenges, was a key initial attraction of the company and captured considerable attention. "In early 2016, we realized there was a problem with our initial analysis and started working to correct the error," cofounder Leslie Dewan said in an e-mail response to an inquiry from MIT Technology Review. The dramatic revisions followed an analysis in late 2015 by Kord Smith, a professor of nuclear science and engineering at MIT and an expert in the physics of nuclear reactors. At that point, there were growing doubts in the field about the company's claims and at least some worries that any inflated claims could tarnish the reputation of MIT's nuclear department, which has been closely associated with the company. Transatomic also has a three-year research agreement with the department, according to earlier press releases. In reviewing the company's white paper, Smith noticed immediate red flags. He relayed his concerns to his department head and the company, and he subsequently conducted an informal review with two other professors. "I said this is obviously incorrect based on basic physics," Smith says. He asked the company to run a test, which ended up confirming that "their claims were completely untrue," Smith says. He notes that promising to increase the reactor's fuel efficiency by 75 times is the rough equivalent of saying that, in a single step, you'd developed a car that could get 2,500 miles per gallon. Ultimately, the company redid its analysis and produced and posted a new white paper. To be sure, it would still be a notable accomplishment if the company could build a reactor that improved fuel energy efficiency over conventional reactors by a factor of more than two. In addition, the new white paper notes the reactor could reduce waste by 53 percent compared to light-water reactors. The viability of the latter finding was recently verified by Oak Ridge National Laboratory. But the analysis found that the reactor couldn't sustain a fission chain reaction using spent fuel long enough for it to be a feasible option, as previously hoped, Dewan said in a subsequent phone interview. 
“I’m very glad that we have this robust technical analysis in place, confirmed by Oak Ridge, and we’re committed to moving ahead with our plans for low-cost, safe nuclear power,” Dewan said in her e-mail response. “We decided to open-publish our analysis and the results, because we wanted people to be able to repeat the calculations and examine it directly themselves. Also, we’ve talked before about how the nuclear industry should increase transparency to build public trust, and wanted to practice what we preached.” The company has previously said it intends to build a demonstration reactor by 2020, but that timeline has slipped to 2021, Dewan said during the phone interview. “We certainly have a long road ahead of us,” she said, noting technical, engineering, supply-chain, and regulatory challenges. “But I think that momentum is on our side.” Nuclear experts say that approval and construction of any advanced-reactor prototype requires a minimum of 10 years, given the extensive regulatory requirements of the Nuclear Regulatory Commission. Smith stresses that the founders weren’t acting in bad faith, but he did note they didn’t subject their claims to the peer-review process early on. “They didn’t do any of this intentionally,” Smith says. “It was just a lack of experience and perhaps an overconfidence in their own ability. And then not listening carefully enough when people were questioning the conclusions they were coming to.” The company has raised at least $4.5 million from Peter Thiel’s Founders Fund, Acadia Woods Partners, and Daniel Aegerter of Armada Investment AG. Venture capital veteran Ray Rothrock serves as chairman of the company.


News Article | December 5, 2016
Site: www.eurekalert.org

OAK RIDGE, Tenn., Dec. 5, 2016 - As part of an effort to develop drought-resistant food and bioenergy crops, scientists at the Department of Energy's Oak Ridge National Laboratory have uncovered the genetic and metabolic mechanisms that allow certain plants to conserve water and thrive in semi-arid climates. Semi-arid plants such as agave have adapted to survive in areas with little rainfall by developing a specialized mode of photosynthesis called crassulacean acid metabolism, or CAM. Unlike plants in wetter environments, CAM plants absorb and store carbon dioxide through open pores in their leaves at night, when water is less likely to evaporate. During the day, the pores, also called stomata, stay closed while the plant uses sunlight to convert carbon dioxide into energy, minimizing water loss. ORNL scientists are studying the unique metabolic mechanisms that allow CAM plants to conserve water, with the goal of introducing water-saving traits into bioenergy and food crops. The results of the team's latest study, which focuses on agave, are published in Nature Plants as the journal's cover story. The CAM photosynthetic process, discovered in the 1950s, has largely remained a scientific curiosity, but researchers are now examining it as a potential solution to maintaining food and bioenergy crop yields during water shortages and drought. "Today's demand on agricultural systems to provide food, feed, forage, fiber and fuel call for more comprehensive research into understanding the complexities of CAM plants," said ORNL coauthor Xiaohan Yang. "As we uncover each layer of the CAM process, our studies aim to speed up the evolution of crops to give them the ability to thrive in more arid environments as the availability of freshwater becomes limited." To gain a comprehensive view of the complex CAM system, the team used ORNL's mass spectrometry to compare the molecular traits of agave with a control plant, Arabidopsis, which uses a more common photosynthetic process. The team evaluated genetic behavior that signals stomatal movement in each plant over the same 24-hour period. Their study revealed that the timing of daytime versus nighttime stomatal activity varied significantly between agave and Arabidopsis. The research also pinpointed which genetic and metabolic mechanisms signal CAM plants to open and close their stomata. Understanding the timing of these signals will be key to transferring CAM processes into crops such as rice, corn, poplar and switchgrass. "Further research is required to understand how this molecular timekeeping regulates CAM, but the results of this study provide new insights into the complexity of CAM biodesign, featuring an integrative understanding of CAM at the molecular level," Gerald Tuskan, ORNL Corporate Fellow and coauthor, said. "The transfer of CAM molecular machinery into energy crops would facilitate their deployment onto marginal lands and would simultaneously reduce competition with food crops." The study titled, "Transcript, protein and metabolite temporal dynamics in the CAM plant Agave," included coauthors Paul Abraham, Hengfu Yin, Henrique Cestari De Paoli, Nancy Engle, Ryan Agh, David Weston, Stan Wullschleger, Timothy Tschaplinski, Daniel Jacobson, Robert Hettich, Gerald Tuskan and Xiaohan Yang of ORNL; Anne Borland of the University of Newcastle and ORNL; Deborah Weighill and Piet Jones of the University of Tennessee and ORNL; and John Cushman and Sung Don Lim of the University of Nevada. 
The research was funded by DOE's Office of Science and used resources of the Compute and Data Environment for Science (CADES) at ORNL, a fully integrated infrastructure offering scalable computing, software support and high-performance cloud storage services for researchers labwide. UT-Battelle manages ORNL for DOE's Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov.


News Article | December 30, 2015
Site: news.yahoo.com

Scientists mixed neptunium oxide with aluminum and pressed the result into pellets. They then irradiated the pellets to create neptunium-238, which decayed quickly into plutonium-238.

Scientists have produced a powder of plutonium-238 for the first time in nearly 30 years in the United States, a milestone that they say sets the country on a path toward powering NASA's deep-space exploration and other missions. Plutonium-238 (Pu-238) is a radioactive isotope, and as it decays, or breaks down into uranium-234, it releases heat. That heat can then be used as a power source; for instance, some 30 space missions, including the Voyager spacecraft, which explored the solar system's outer planets in the 1970s, have relied on the oxide form of the plutonium isotope. (Isotopes are atoms of the same element with different numbers of neutrons.) During the Cold War, the Savannah River Plant in South Carolina was pumping out Pu-238. "Those reactors were shut down in 1988, and the U.S. has not had the capability to make new material since then," said Bob Wham, who leads the project for the Nuclear Security and Isotope Technology Division at the Department of Energy's Oak Ridge National Laboratory (ORNL). After U.S. production of the isotope stopped, Russia supplied the Pu-238 needed for space missions. However, Russia has also stopped producing the material. Two years ago, NASA began funding a new effort to produce plutonium-238, giving about $15 million a year to the DOE Office of Nuclear Energy. Plutonium-238 is an ideal power source for space missions for several reasons, including the isotope's so-called half-life of about 88 years. Half-life is the time it takes for half of the atoms of a radioactive material to decay. That means the isotope's heat output won't be reduced to half for 88 years. Plutonium-239, which has a half-life of 24,110 years, is the isotope most commonly formed from uranium in nuclear reactors, according to the World Nuclear Association. In addition, "it's stable at high temperatures, can generate substantial heat in small amounts and emits relatively low levels of radiation that is easily shielded, so mission-critical instruments and equipment are not affected," Wham said. In the new achievement, Wham and his colleagues created 50 grams (1.8 ounces) of Pu-238 — about one-eighth of a cup (30 milliliters) — or enough to characterize the substance, he said. Because the scientists were using existing infrastructure at the Department of Energy, they needed to adapt the plutonium-making process. "For example, the current DOE operating research reactors are smaller than those used at Savannah River," Wham said. "Therefore, we need to modify the technology to work within the existing operating reactors." Next, the scientists will test the purity of the sample and work on scaling up the manufacturing process. "Once we automate and scale up the process, the nation will have a long-range capability to produce radioisotope power systems such as those used by NASA for deep-space exploration," Wham said. The next NASA mission with a plan to use such radioisotope power is the Mars 2020 rover, set for launch in July 2020, the researchers said. The rover will be designed to look for signs of life on the Red Planet, collect rock and soil samples for testing on Earth, and investigate technology for human exploration. Copyright 2015 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
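The half-life figure quoted above maps directly onto how much of a Pu-238 heat source survives over a mission; a minimal sketch of that exponential-decay arithmetic follows, using the roughly 88-year half-life cited in the article.

HALF_LIFE_YEARS = 88.0  # approximate Pu-238 half-life quoted in the article

def remaining_fraction(years):
    """Fraction of the original Pu-238 (and, roughly, its heat output) left after `years`."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

for t in (10, 44, 88):
    print(f"After {t:2d} years: {remaining_fraction(t):.2f} of the original output remains")
# After 88 years the fraction is 0.50, i.e. half the original heat output.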


News Article | December 7, 2015
Site: phys.org

A side effect of this information explosion, however, is the fragmentation of knowledge. With thousands of new articles being published by medical journals every day, developments that could inform and add context to medicine's global body of knowledge often go unnoticed. Uncovering these overlooked gaps is the primary objective of literature-based discovery, a practice that seeks to connect existing knowledge. The advent of online databases and advanced search techniques has aided this pursuit, but existing methods still lean heavily on researchers' intuition and chance discovery. Better tools could help uncover previously unrecognized relationships, such as the link between a gene and a disease, a drug and a side effect, or an individual's environment and risk of developing cancer. For the past five years, Sreenivas Rangan Sukumar, a data scientist at the Department of Energy's Oak Ridge National Laboratory, has been working with health data and the high-performance computing resources of ORNL's Compute and Data Environment for Science (CADES) to improve health care in the United States. His most recent success, called Oak Ridge Graph Analytics for Medical Innovation (ORiGAMI), supplies researchers with an advanced data tool for literature-based discovery that has the potential to accelerate medical research and discovery. "Humans' limited bandwidth constrains the ability to reason with the vast amounts of available medical information," Sukumar said. "By design, ORiGAMI can reason with the knowledge of every published medical paper every time a clinical researcher uses the tool. This helps researchers find unexplored connections in the medical literature. By allowing computers to do what they do best, doctors can do better at answering health-related questions." The result of collaboration between ORNL and the US National Library of Medicine (NLM), a division of the National Institutes of Health, ORiGAMI unites three emerging technologies that are shaping the future of health care: big data, graph computing, and the Semantic Web, a common framework that allows data to be shared more freely between people and machines.

A Better Way to Search

When medical researchers and clinicians want to know the latest biomedical research, they turn to MEDLINE, NLM's comprehensive database of life sciences and biomedical information. MEDLINE draws from more than 5,600 journals worldwide, adding 2,000 to 4,000 new citations each day to its archive. A conventional search engine query of MEDLINE can yield results in the thousands—more information than a researcher can review. To improve the usefulness of MEDLINE searches, NLM information research specialist Tom Rindflesch developed software called Semantic MEDLINE that is capable of "reading" key words pulled from the titles and abstracts of articles and summarizing the most relevant information in an interactive graph. The graph, a network of words connected by lines, draws attention to key relationships between the texts and serves as a guide to further exploration. Currently, more than 70 million articles in the MEDLINE database can be searched in this way. "Semantic MEDLINE is kind of like having a research assistant who looks at a ton of articles and organizes them for you," Rindflesch said. One of the primary limitations of NLM's Semantic MEDLINE, however, is computing. 
To produce its results, the application must plow through millions of subject-verb-object groupings pulled from each article and identify the strongest relationships or—better still—the strongest potential relationships, a specialized task that requires a specialized computer. Although conventional data analysis computers excel at reducing large datasets to smaller, more significant datasets, they struggle to compute large graphs capable of linking concepts and weak—yet relevant—associations. Fortunately, CADES, an integrated compute and data ecosystem within ORNL's Computing and Computational Sciences Directorate, houses a machine with just the right attributes. Apollo, a Cray Urika graph computer, possesses massive multithreaded processors and 2 terabytes of shared memory, attributes that allow it to host the entire MEDLINE database and compute multiple pathways on multiple graphs simultaneously. Combined with Helios, CADES' Cray Urika extreme analytics platform, Sukumar's team had the cutting-edge hardware needed to process large datasets quickly—about 1,000 times faster than a workstation—and at scale. Once the MEDLINE database was brought into the CADES environment, Sukumar's team applied advanced graph theory models that implement semantic, statistical, and logical reasoning algorithms to create ORiGAMI. The result is a free online application capable of delivering health insights in less than a second based on the combined knowledge of a worldwide medical community. In the hands of medical experts and clinicians, ORiGAMI has the potential to increase the efficiency of medical research by directing researchers toward the right questions, an outcome that could reduce costs and speed up delivery of new treatments. The tool is currently being enhanced beyond literature-based reasoning to data-driven, evidence-supported reasoning using cohort and intervention assessment methods. Georgia Tourassi, director of ORNL's Health Data Sciences Institute, offered an example of how ORiGAMI is impacting research. Tourassi's team is investigating environmental factors and migration patterns that affect people's cancer risk for a study proposed by the National Cancer Institute's Provocative Questions Initiative. As part of the investigation, the team searched for connections between lung cancer and airborne carcinogens recognized by the US Environmental Protection Agency (EPA). "When we threw the EPA's top 10 carcinogens at ORiGAMI, we noticed that there were a few elements that appeared over and over as connecting links. Some of these elements made sense from a reasoning point of view, but there was one that we had never seen before," Tourassi said. The surprising connection was xylene, a common solvent used in the printing, rubber, paint, and leather industries. Past EPA studies focused on xylene as a potential carcinogen have proven inconclusive, but ORiGAMI's results suggested further inquiry. Using publicly available health-related datasets and an advanced web crawler called iCRAWL, Tourassi's team built profiles of xylene exposure for lung cancer patients and non-cancer patients and compared the two. "The people who had lung cancer had much larger and longer exposures to xylene than the people without cancer," Tourassi said. "This is not confirmation that xylene causes cancer—in order to have confirmation, we need a carefully designed longitudinal cohort study—but this is one more red flag that we should be looking at xylene closely." 
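To make the subject-verb-object graph idea described earlier in this article more concrete, the toy sketch below builds a small concept graph with networkx and looks for an indirect chain between two terms. The triples are invented for illustration; this is not ORiGAMI's algorithm or the Semantic MEDLINE data.

import networkx as nx

# Toy subject-verb-object triples of the kind extracted from article titles
# and abstracts; these examples are made up for illustration.
triples = [
    ("xylene", "found_in", "industrial solvents"),
    ("xylene", "associated_with", "oxidative stress"),
    ("oxidative stress", "contributes_to", "DNA damage"),
    ("DNA damage", "linked_to", "lung cancer"),
    ("smoking", "causes", "lung cancer"),
]

graph = nx.DiGraph()
for subj, verb, obj in triples:
    graph.add_edge(subj, obj, relation=verb)

# Literature-based discovery looks for indirect chains between concepts
# that are rarely mentioned together in a single paper.
path = nx.shortest_path(graph, source="xylene", target="lung cancer")
print(" -> ".join(path))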
In addition to population health, Tourassi's team has used ORiGAMI to explore genomic literature. Tourassi refers to the utility of ORiGAMI as "computer-assisted serendipity," meaning the tool enhances rather than replaces the person making the discovery. "All of us have those moments of epiphany when certain thoughts click into our head and we move on to explore hypotheses deeper," Tourassi said. "This tool enables that serendipity. It helps guide you in certain ways."
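The kind of graph reasoning described above can be sketched in a few lines: build a graph from subject-verb-object predications and look for intermediate concepts that indirectly link two terms (the classic "A-B-C" pattern of literature-based discovery). This is only an illustrative sketch, not the ORiGAMI code; networkx is a generic graph library, and the triples below are invented placeholders standing in for predications extracted from article abstracts.

```python
# Minimal sketch of graph-based literature discovery (Swanson-style A-B-C linking).
# NOT the ORiGAMI implementation; the triples are invented placeholders.
import networkx as nx

# (subject, predicate, object) triples, e.g. as produced by a semantic parser
triples = [
    ("xylene",           "ASSOCIATED_WITH", "oxidative stress"),
    ("oxidative stress", "CAUSES",          "DNA damage"),
    ("DNA damage",       "PREDISPOSES",     "lung cancer"),
    ("xylene",           "USED_IN",         "printing industry"),
]

G = nx.Graph()
for subj, pred, obj in triples:
    G.add_edge(subj, obj, predicate=pred)

def intermediate_concepts(graph, a, c, max_len=3):
    """Return the B-terms that indirectly connect concept a to concept c."""
    paths = nx.all_simple_paths(graph, source=a, target=c, cutoff=max_len)
    return {node for path in paths for node in path[1:-1]}

print(intermediate_concepts(G, "xylene", "lung cancer"))
# e.g. {'oxidative stress', 'DNA damage'}
```

At the scale of the real literature, with tens of millions of such predications, this kind of path search is what motivates shared-memory graph machines such as Apollo.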


News Article | November 21, 2016
Site: www.rdmag.com

Researchers have enlisted the help of supercomputers in finding new drug candidates to help combat antibiotic-resistant bacteria. Led by the University of Oklahoma, with the Department of Energy’s Oak Ridge National Laboratory, the University of Tennessee and Saint Louis University, scientists used lab experiments combined with supercomputer modeling to identify molecules that boost antibiotics’ effect on drug-resistant bacteria. In the study, the researchers discovered four new chemicals that seek out and disrupt efflux pumps, which are bacterial proteins known to be a major cause of antibiotic resistance.

Jeremy Smith, who serves as a UT–ORNL Governor’s Chair and director of the UT–ORNL Center for Molecular Biophysics, explained the approach of the study. “As a first in this field, we proposed the approach of essentially ‘screwing up’ the efflux pump’s protein assembly, and this led to the discovery of molecules with a new type of antibacterial activity,” he said in a statement. “In contrast to previous approaches, our new mechanism uses mechanics to revive existing antibiotics’ ability to fight infection.”

While some antibiotics can permeate the protective barriers surrounding bacterial cells, many bacteria have evolved efflux pumps that expel antibiotics back out of the cell and render the medications ineffective. The main focus of the study was AcrA, an efflux pump protein that connects two other proteins in a tunnel shape through the bacterial cell envelope. Disrupting the centrally located protein mechanically breaks the efflux pump, unlike drug design strategies that try to inhibit overall biochemical processes.

Using both experiments and protein simulations run on ORNL’s Titan supercomputer, the team was able to scan large numbers of chemicals to predict and select which would be most effective at preventing AcrA proteins from assembling properly. “The supercomputing power of Titan allowed us to perform large-scale simulations of the drug targets and to screen many potential compounds quickly,” Helen Zgurskaya, head of OU’s Antibiotic Discovery and Resistance Group, who led the study, said in a statement. “The information we received was combined with our experiments to select molecules that were found to work well, and this should drastically reduce the time needed to move from the experimental phase to clinical trials.”

The researchers had to screen various combinations of molecules and proteins to determine which ones fit well together, a process complicated by the protein’s dynamic nature. To do so, they created a virtual representation of the proteins, generated a series of protein ‘snapshots’ in their various configurations, and used Titan to dock thousands of molecules to each snapshot and estimate how strongly each would interact with the protein.

While the study focused on a prototypical type of efflux pump found in Escherichia coli bacteria, the researchers believe that the antibiotic-reviving approach will be applicable to many Gram-negative bacteria.

The study, titled “Reviving Antibiotics: Efflux Pump Inhibitors that Interact with AcrA, a Membrane Fusion Protein of the AcrAB-TolC Multidrug Efflux Pump,” was led by Zgurskaya and co-authored by UT-ORNL’s Smith, Jerry Parks and Jerome Baudry; UT’s Adam Green; OU’s Narges Abdali, Julie Chaney, David Wolloscheck and Valentin Rybenkov; and SLU’s Keith Haynes and John Walker. The research was supported by a National Institutes of Health grant.
The study, which was published in ACS Infectious Diseases, can be viewed here.
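The ensemble-docking idea described in the article (dock many candidate molecules against multiple protein "snapshots" and rank them by predicted interaction strength) can be outlined roughly as follows. This is a toy sketch, not the team's actual AcrA screening pipeline; the docking scores are random placeholders standing in for binding-energy estimates from a real docking engine.

```python
# Toy illustration of ensemble docking against protein "snapshots"; not the
# actual AcrA screening pipeline. Scores are random placeholders standing in
# for binding-energy estimates (more negative = stronger predicted binding).
import random
random.seed(0)

snapshots = [f"AcrA_conformation_{i}" for i in range(5)]   # hypothetical MD snapshots
compounds = [f"compound_{i}" for i in range(1000)]         # hypothetical screening library

def dock(compound, snapshot):
    """Placeholder for a call to a real docking code."""
    return random.uniform(-12.0, -2.0)   # kcal/mol-like score

# Average each compound's score over all snapshots so that the protein's
# flexibility is accounted for, then keep the best-ranked candidates.
avg_scores = {
    c: sum(dock(c, s) for s in snapshots) / len(snapshots)
    for c in compounds
}
top_hits = sorted(avg_scores, key=avg_scores.get)[:10]
print(top_hits)
```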


News Article | January 12, 2016
Site: phys.org

Engineers at the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) have finished designing a novel component for the Wendelstein 7-X (W7-X) stellarator, which recently opened at the Max Planck Institute of Plasma Physics (IPP) in Greifswald, Germany. Known as a "test divertor unit (TDU) scraper element," the component intercepts some of the heat flowing towards the divertor—a part of the machine that collects heat and particles as they escape from the plasma before they hit the stellarator wall or degrade the plasma's performance. Stellarators are fusion facilities that confine plasma in twisty magnetic fields, compared with the symmetrical fields that tokamaks use.

Two of the components are being built and are scheduled to be delivered to IPP in September 2016, with the complete unit to be installed and ready to operate in late 2017. Once installed, the scraper will let physicists explore a wide variety of ways to arrange the stellarator's magnetic fields and vary the currents and pressures of the plasma. This project was supported by the DOE Office of Science.

Engineers realized the need for the scraper element when they noticed that as the plasma exited the vessel, it sometimes struck parts of the divertor that weren't intended to handle high heat loads. "This problem had to be solved to avoid damage to the stellarator equipment," said Hutch Neilson, head of PPPL's advanced projects department and coordinator of PPPL's collaborations with W7-X.

The resulting design, begun by scientists at Oak Ridge National Laboratory (ORNL) and completed by researchers at PPPL, ensures that the heat from fusion reactions will hit the scraper before reaching the divertor. "The scraper element is an additional divertor plate that intercepts the heat earlier," said Sam Lazerson, a stellarator computational physicist at PPPL who will be stationed at W7-X until February 2016.

Engineers plan to pair the scraper with diagnostic devices that measure the heat flow more precisely. And they will use computer models to "check our understanding of how the exhaust power flows from the plasma to the stellarator's first wall," said Neilson.

Designing the scraper element meant overcoming several obstacles. The primary physics-related challenge involved finding the best shape for the scraper's plasma-facing surface. This process called for modeling the "edge plasma heat flows in the complicated 3D magnetic field in both the edge and divertor regions," said Neilson.

Engineers also had to consider that the scraper would not be actively cooled. This was a problem since the temperature of the scraper would rise more quickly than that of its supporting structure during W7-X operations. "The design relied on sophisticated thermal-mechanical analyses by our Engineering Analysis Branch," said Neilson. The analysis ensured that "all materials will stay below their temperature limits and that stresses caused by thermal expansion will be low enough to avoid component damage."

A panel of researchers from PPPL, ORNL, and IPP approved the design on November 11, 2015, after a series of reviews. According to a committee report, "discussions between the PPPL cognizant engineer, Douglas Loesser, and IPP staff, resulted in a simplified design that will significantly reduce design, fabrication, and assembly costs."
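The note that the scraper is not actively cooled, and therefore heats up faster than its supporting structure, can be made concrete with a back-of-the-envelope adiabatic heating estimate. All numbers below are assumptions chosen only for illustration, not W7-X design values, and the real thermal-mechanical analysis is far more detailed.

```python
# Back-of-the-envelope adiabatic heating of an uncooled plasma-facing plate.
# All numbers are illustrative assumptions, not W7-X design values.
heat_flux     = 0.5e6    # W/m^2, assumed surface heat flux on the scraper
pulse_time    = 10.0     # s, assumed plasma pulse length
thickness     = 0.02     # m, assumed plate thickness
density       = 1850.0   # kg/m^3, roughly graphite
specific_heat = 710.0    # J/(kg*K), roughly graphite at room temperature

# With no active cooling, the deposited energy simply raises the bulk temperature:
# dT = q * t / (rho * c_p * d)
delta_T = heat_flux * pulse_time / (density * specific_heat * thickness)
print(f"Estimated bulk temperature rise: {delta_T:.0f} K")   # ~190 K for these inputs
```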


In October 2015, the US Department of Energy (DOE) launched a broad, joint effort to co-optimize the development of efficient engines and low greenhouse-gas fuels for on-road vehicles with the goal of reducing petroleum consumption by 30% by 2030 beyond what is already targeted. (Earlier post.) The intended application is light-, medium-, and heavy-duty markets including hybrid architectures.

The Co-Optima project team, which is leveraging the technical contributions of nine of DOE’s 17 national laboratories, has grown to more than 130 researchers, according to Robert Wagner, Director of the Fuels, Engines, and Emissions Research Center at Oak Ridge National Laboratory (ORNL) and a member of the Co-Optima leadership team, in a briefing at the lab earlier this month. In August 2016, DOE announced funding of up to $7 million to further support the initiative.

Co-Optima’s premise is that current fuels constrain engine design—and thus engine efficiency. The researchers suggest that there are engine architectures that can provide higher thermodynamic efficiencies than available from modern internal combustion engines; however, new fuels are required to maximize efficiency and operability across a wide speed/load range. The Co-Optima researchers are working to discover:

In an informational webinar in September, John Farrell, Laboratory Program Manager—Vehicle Technologies, National Renewable Energy Laboratory, and part of the Co-Optima leadership team, observed that:

The project has two parallel thrusts: a near-term thrust on spark ignition (SI), with a 2025 commercial target, and a long-term thrust on advanced compression ignition (ACI), with a 2030 commercial target. Thrust 1 builds on the significant amount of work carried out over the past several decades, focused on building advanced boosted, downsized, and downspeeded spark ignition engines. The Co-Optima team is working to extend the knowledge base resulting from simulations with petroleum-derived fuels and ethanol to the broader range of fuels and fuel properties available from biomass resources.

Thrust 2’s focus on advanced compression ignition engines includes not only advanced conventional compression ignition engines or diesel engine technologies, but also the broad suite of technologies focused on kinetically controlled and low-temperature combustion approaches. These efforts basically take the high efficiency of the diesel engine and the low emission potential of the gasoline engine and try to find ways to capitalize on both efficiency and emissions potential. While such technologies have demonstrated the potential for high efficiency and low emissions at the research labs, they are much less developed than the advanced spark ignition engine technologies and thus represent the higher-risk and longer-term approach being pursued in the Co-Optima effort, Farrell said. Further, noted Wagner, the two thrusts may require very different fuel properties.

To address the complexity of this broad technology space, Co-Optima is relying on the collaboration of six multi-disciplinary teams:

For Thrust 1, the researchers are focused on fuel properties—not a specific molecule, Wagner said. The one thing we didn’t want to do was to try to pick a molecule as a winner. We’ve been accused of that; people still say we are, but we’re not. We’re not going to say “Ethanol.
Ethanol is the answer.” Maybe ethanol will meet the properties we come up with, maybe it won’t. It’s going to be more open than that. Let’s say we do come up with properties, and there is some fuel we can make in mass quantity that meets those properties. Five years from now, ten years from now, we want there to be an opportunity for other fuels to come on line that also meet those properties.

Put another way, the Thrust 1 approach seeks to understand not just “what could we make” but “what should we make,” said Dan Gaspar from Pacific Northwest National Laboratory (PNNL) in the webinar.

Thrust 1 began by casting a wide net, looking at biomass from all sources and the range of conversion processes, from fermentation and other biological approaches to thermochemical or catalytic approaches. Co-Optima is definitely not seeking to develop new conversion pathways, but is relying on what has already been done. The team then focused on the types of molecules that might make the best fuels and came up with nine different molecular classes that are suitable for spark ignition blendstocks or fuels. These include traditional hydrocarbons such as paraffins or olefins, cycloalkanes, and aromatics. They also considered a set of oxygenates (alcohols, furans, ketones, ethers, and esters) that are suitable as fuels and have the potential to improve engine performance. The result was on the order of 400 potential blendstock candidates.

The researchers then used a tiered approach to prioritize candidates. This process includes consideration of a range of metrics spanning not only engine performance, but also environmental impacts, cost, scalability, infrastructure compatibility, and so on. From this fuel funnel, the researchers have currently identified 20 molecules or mixtures that will be evaluated as blends in the 10–30% range using BOB (blendstock for oxygenate blending) as the basis.

To rank promising candidates, the researchers developed an engine performance fuel merit function, which continues to evolve, Wagner said. The merit function includes a large number of properties that are important for engine performance (such as octane number, heat of vaporization, flame speed, lower volume fraction, Heaviside function and particle mass index) coupled to a series of expected values. Findings from these investigations will feed back to help calibrate the merit function.

Thrust 2, which includes both gasoline-like and diesel-like fuels, examines the compatibility of Thrust 1 fuels with Thrust 2 engine technologies and seeks to accelerate Thrust 2 technology development. Broadly, the effort seeks to discover the important fuel properties or performance metrics and how to measure or compute those metrics. Researchers are working to identify new, or combined, metrics for autoignition that are not captured in the conventional RON, MON, and CN metrics. The researchers are also seeking to clarify which fuel properties dominate specific performance measures, such as particulate formation. From an engineering perspective, this means determining how fuel properties and engine design/operating parameters interact, and how those interactions can be leveraged to optimize efficiency.

Thrust 2, said Wagner, is facing a decision point in March 2017. The field of advanced compression ignition combustion represents a very broad spectrum of technologies. As a general rule, ACI offers high efficiency and reduced particulate and NOx emissions, but is often unstable and is hampered by a limited speed/load range.
High combustion noise and high HC and CO emissions are also a problem, with lower exhaust temperatures representing a challenge for current emissions controls. There are four Thrust 2 combustion mode strategies currently under investigation:

There are multiple possible Thrust 2 paths forward: a Thrust 2 fuel that is the same as the Thrust 1 fuel; a Thrust 2-specific fuel; or an advanced diesel-specific fuel. What the program has not done to this point is add blended power or extended-range architectures (i.e., various forms of hybrid architectures, including engines optimized for range-extension, earlier post), Wagner said.

A second Co-Optima Listening Day (the first was earlier this year) is targeted for 19–20 January at Sandia Labs in California; the goal is to solicit stakeholder input on program goals and progress.
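A fuel merit function of the kind described above can be pictured as a weighted score built from deviations of candidate fuel properties from a baseline. The sketch below is a deliberately simplified stand-in; the property weights, reference values, and candidate numbers are all invented and do not represent the actual Co-Optima merit function.

```python
# Simplified sketch of a fuel "merit function": a weighted sum of property
# deviations from reference values. Properties echo those named above, but the
# weights, references, and candidates are invented; this is not the Co-Optima function.
reference = {"RON": 91.0, "heat_of_vaporization": 350.0, "flame_speed": 46.0}  # assumed baseline
weights   = {"RON": 1.6,  "heat_of_vaporization": 0.01,  "flame_speed": 0.5}   # assumed sensitivities

def merit(candidate):
    """Higher is better: credit properties that exceed the baseline fuel."""
    return sum(weights[p] * (candidate[p] - reference[p]) for p in reference)

candidates = {
    "ethanol_blend_20pct": {"RON": 96.0, "heat_of_vaporization": 420.0, "flame_speed": 48.0},
    "furan_blend_10pct":   {"RON": 94.0, "heat_of_vaporization": 365.0, "flame_speed": 47.0},
}
for name, props in sorted(candidates.items(), key=lambda kv: merit(kv[1]), reverse=True):
    print(f"{name}: merit = {merit(props):.2f}")
```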


News Article | December 20, 2016
Site: www.prweb.com

SAE International announces that Katherine Avery, Research Scientist with Ford Motor Company’s Research and Innovation Center; and Mohammad J. Mahtabi, a third-year PhD student majoring in Mechanical Engineering at Mississippi State University, have been honored with the Henry O. Fuchs Student Award.

Established in 1991, this award recognizes a graduate or recently graduated student (i.e., post-doctorate or new professor) who is working in the field of fatigue research and applications. The purpose of this award is to promote the education of engineering students in the area of fatigue technology. This award honors the memory of Professor Henry O. Fuchs. Professor Fuchs participated in the SAE Fatigue Design & Evaluation Committee's research projects, was a member of the faculty who founded the SAE Fatigue Concepts in Design short course, published extensively in SAE and elsewhere in the technical community, and actively participated in the Surface Enhancement Division of the Committee, which is responsible for many standards relating to surface treatments of metals for withstanding fatigue damage.

As a graduate student, Avery was a visiting scholar at Ford and had internships at Oak Ridge National Laboratory and Tenneco, Inc. She also served as a teaching assistant at the University of Michigan in 2011 and as a guest lecturer at Kettering University in 2013. Avery has authored or co-authored several papers on durability and high-temperature fatigue of metals through SAE International and ASME. Together with her research collaborators, Avery was a recipient of the 2012 SAE Arch T. Colwell Merit Award. At the University of Michigan, she and her faculty adviser, Dr. Jwo Pan, were honored with the 2015 Robert M. Caddell Award for Research for their joint contributions in the areas of materials and manufacturing. For her active role in student leadership, Avery was presented with the Distinguished Leadership Award and the MLK Spirit Award from the Deans of U of M’s College of Engineering in 2014. Avery serves on the Michigan Engineering Alumni Board and is a member of SAE International, ASME, and SWE. She holds a bachelor’s degree in Mechanical Engineering from Kettering University, and a master’s degree and PhD in Mechanical Engineering from the University of Michigan.

Mahtabi holds a bachelor’s degree from the University of Tehran and a master’s degree from Iran University of Science and Technology; he ranked as an honor student at both institutions. He is conducting his PhD research at MSU’s Center for Advanced Vehicular Systems (CAVS) under the supervision of Dr. Shamsaei. Mahtabi’s dissertation is on different aspects of the fatigue behavior of NiTi, an equiatomic alloy of nickel and titanium. This material is categorized as a shape memory alloy (SMA), while also exhibiting superelastic behavior. NiTi has become a preferred choice for many applications in the automotive, aerospace, and biomedical industries. In the automotive industry, the material has been used in engine control systems and transmission control, and it has the potential to be utilized in structural components, bumpers, and crash structures.
He is using several experimental, analytical, and computational techniques to model the fatigue behavior of SMAs and to explain how various aspects of the material and its loading, such as microstructural features, phase transformation, mean strain/stress, and variable amplitude effects, influence its fatigue resistance. His research involves many complex (and coupled) physical phenomena with the need to understand and model cyclic deformation, phase transformation, superelasticity, and damage accumulation. Based on his dissertation research on experimental and computational fatigue and fracture analysis, he has published several journal articles and one book chapter, and has presented at multiple conferences.

SAE International is a global association committed to being the ultimate knowledge source for the engineering profession. By uniting more than 127,000 engineers and technical experts, we drive knowledge and expertise across a broad spectrum of industries. We act on two priorities: encouraging a lifetime of learning for mobility engineering professionals and setting the standards for industry engineering. We strive for a better world through the work of our philanthropic SAE Foundation, including programs like A World in Motion® and the Collegiate Design Series™.
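For readers unfamiliar with the field, a generic strain-life (Basquin plus Coffin-Manson) calculation gives a feel for the kind of fatigue-life estimate involved. This is a textbook relation with assumed, steel-like constants, not a model of NiTi or of the dissertation work described above.

```python
# Generic strain-life (Basquin + Coffin-Manson) calculation: a textbook relation,
# not a model of NiTi. All material constants are assumed, steel-like values.
E       = 200e3   # MPa, elastic modulus (assumed)
sigma_f = 900.0   # MPa, fatigue strength coefficient (assumed)
b       = -0.09   # fatigue strength exponent (assumed)
eps_f   = 0.35    # fatigue ductility coefficient (assumed)
c       = -0.55   # fatigue ductility exponent (assumed)

def strain_amplitude(reversals):
    """Total strain amplitude predicted at 2*Nf = reversals."""
    return (sigma_f / E) * reversals**b + eps_f * reversals**c

def life_for_strain(eps_a, lo=1.0, hi=1e9):
    """Bisection (on a log scale) for reversals to failure at a given strain amplitude."""
    for _ in range(200):
        mid = (lo * hi) ** 0.5          # geometric midpoint
        if strain_amplitude(mid) > eps_a:
            lo = mid                    # predicted life is longer than mid
        else:
            hi = mid
    return mid

print(f"Reversals to failure at 0.4% strain amplitude: {life_for_strain(0.004):.3g}")
```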


News Article | December 17, 2015
Site: www.greencarcongress.com

The US Department of Energy, Office of Energy Efficiency and Renewable Energy’s (EERE) Bioenergy Technologies Office (BETO) and Vehicle Technologies Office (VTO) have released a request for information (RFI) (DE-FOA-0001460) titled “Co-Optimization of Fuels and Engines” (Optima). The Optima program is a key collaborative initiative being pursued by EERE, VTO, and BETO.

The Optima initiative is focused on the development of new fuels and engine architectures that are co-optimized—i.e., designed in tandem to maximize performance and carbon efficiency. (Earlier post.) The initiative intends to accelerate the widespread deployment of significantly improved fuels and vehicles (passenger cars to light trucks to heavy-duty commercial vehicles) by 2030. Specifically, Optima is targeting a 30% reduction in per-vehicle petroleum consumption versus the 2030 business-as-usual case. This goal reflects contributions from both improved engines (7%–14% reduction in fuel consumption) and improved fuels (including 15 billion gallons/year of advanced biofuels). At a fleet level, Optima will result in an additional 9%–14% fleet greenhouse gas (GHG) reduction by 2040.

In an article for ASME’s Mechanical Engineering magazine, Dr. Robert Wagner, director of the Fuels, Engines, and Emissions Research Center at Oak Ridge National Laboratory, who is a member of the national labs’ leadership team for Optima, observed that:

Over the past several years, road transportation has seen some significant advances in what are considered alternative technologies. Energy storage, electric drive systems, and fuel cell technology all seem to be poised to find a significant place in the automotive marketplace. But it would be a mistake to believe that such technologies will completely sweep aside what has come before. Instead, the internal combustion engine will continue to be integral to the transportation of people and goods for the foreseeable future.

That is not to say that things will stay as they are now. The engine is undergoing a significant evolution of its own, as new fuel economy and emissions standards in the light-duty and heavy-duty sectors push the development of new technologies on an unprecedented scale toward the theoretical limits of engine operation. Coupled with continuing research into fundamental engine processes, the introduction of affordable high-performance computing, and the adoption of advanced manufacturing techniques throughout industry, those new technologies are leading to potentially disruptive opportunities for the introduction of engines with extraordinarily high efficiencies. How these new engines perform and how they will be integrated into new vehicle architectures will be the story of personal mobility for this half of the 21st century. …

The “Optima” program bridges the broad expertise and resources of the Vehicle Technologies Office and the Bioenergy Technologies Office at the DOE. The overall plan includes a near-term phase which builds upon current engine technologies with a goal of new fuel and vehicle technologies in the marketplace by 2025, together with a longer-term—and arguably more ambitious—phase focused on kinetically controlled combustion processes and fuel technologies with impact expected in the 2030 timeframe.
The Optima team is working closely with a broad range of stakeholders representing vehicle and engine manufacturers, energy companies, biofuel producers, and fuel distributors and retailers, as well as identifying and addressing potential deployment issues to ensure maximum success.

In fiscal year 2016 (beginning October 1, 2015), BETO and VTO jointly funded a consortium of nine DOE national laboratories to begin an initial three-year effort focused on Optima. Activities funded under this initiative cover six major technology areas that leverage existing, foundational fuels and engines R&D along with initial stakeholder engagement. The Optima project will provide DOE and stakeholders with:

Thrust I – Improvement of near-term conventional SI engine efficiency. High research octane number (RON) fuels are known to enable more efficient, higher-performance SI operation via engine downsizing and boosting. Many biofuel blending components exhibit high RON and can be introduced into the market in the near- to medium-term for engines optimized to operate on those fuels. Fuel properties beyond RON, such as heat of vaporization, burn rate, viscosity, volatility, and energy density, will also be characterized and the complexity of their interactions mapped to evaluate the full value opportunity. This thrust has lower risk relative to Thrust II because SI engines are in use today—but will benefit from additional optimization to take advantage of potential new fuels.

Thrust II – Enable full-operability ACI engines. Thrust II will provide the science and technology underpinnings needed to make new fuels compatible with commercially viable new ACI engine technologies. This engine platform, which includes kinetically controlled and low-temperature combustion approaches, offers the promise of significantly greater thermal efficiencies with lower criteria-pollutant emissions, and presents attractive options for both light- and heavy-duty vehicles. Fuel research will focus on low-GHG advanced biofuel/petroleum blends. In addition, already-efficient conventional compression ignition (CI) engines can realize fuel economy increases enabled by improved, low GHG-intensity fuels. Thrust II, to be conducted in parallel with Thrust I, presents a more complex technical challenge with higher potential risk and reward.

The research cycle for each thrust will include identifying fuel candidates, understanding their characteristics and combustion performance, and determining market transformation requirements such as cost, GHG reduction, feedstock requirements, scalability, and infrastructure compatibility, while providing feedback to stakeholders and future collaborators. The current DOE national laboratory effort in this arena focuses on six major, interrelated areas:

The purpose of this RFI is to solicit feedback from industry, academia, research laboratories, government agencies, and other stakeholders on issues related to the Optima initiative. EERE is specifically interested in information regarding additional areas of research that universities and others are interested in contributing to and collaborating on. The RFI will help BETO and VTO understand the additional areas of research, capabilities, and yet-to-be-addressed barriers and opportunities for stakeholder engagement relative to the co-optimization of engines and fuels. The RFI provides stakeholders with an opportunity to provide information on:
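Because Thrust I centers on high-RON blendstocks, a rough sense of the blending arithmetic can be helpful. Octane blending is in reality nonlinear, so the linear-by-volume estimate below is only a first-order illustration; the numbers are assumed, not measurements from the Optima program.

```python
# Rough linear-by-volume estimate of a blend's research octane number (RON).
# Real octane blending is nonlinear, so this is only a first-order illustration
# with assumed numbers.
def blend_ron(base_ron, blendstock_ron, blendstock_vol_frac):
    return (1 - blendstock_vol_frac) * base_ron + blendstock_vol_frac * blendstock_ron

bob_ron     = 88.0    # assumed RON of the blendstock for oxygenate blending (BOB)
ethanol_ron = 109.0   # commonly cited blending-relevant RON for ethanol (approximate)

for frac in (0.10, 0.20, 0.30):
    print(f"{int(frac*100)}% ethanol blend: RON ~ {blend_ron(bob_ron, ethanol_ron, frac):.1f}")
```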


News Article | April 10, 2016
Site: www.nanotech-now.com

Abstract: Researchers at the Department of Energy's Oak Ridge National Laboratory used neutrons to uncover novel behavior in materials that holds promise for quantum computing. The findings, published in Nature Materials, provide evidence for long-sought phenomena in a two-dimensional magnet.

In 2006, the physicist Alexei Kitaev developed a theoretical model of microscopic magnets ("spins") that interact in a fashion that leads to a disordered state called a quantum spin liquid. This "Kitaev quantum spin liquid" supports magnetic excitations equivalent to Majorana fermions--particles that are unusual in that they are their own antiparticles. The presence of Majorana fermions is of great interest because of their potential use as the basis for a qubit, the essential building block of quantum computers. Familiar magnetic materials exhibit magnetic excitations called "spin-waves" that occur in quantized lumps, but in the Kitaev quantum spin liquid the lumps are split, and the Majorana excitations are therefore termed "fractionalized."

Scientists have theorized that Kitaev interactions exist in nature in certain materials containing magnetic ions that exhibit strong coupling between the electron spin and orbital angular momentum. Arnab Banerjee, the study's lead author and a post-doctoral researcher at ORNL, explained that one way to observe spin liquid physics in such a material is to "splash" or excite the liquid using neutron scattering.

Banerjee and colleagues from ORNL and the University of Tennessee, working with collaborators from the Max Planck Institute in Dresden, Germany, and Cambridge University in the United Kingdom, used the "splash" technique to investigate a two-dimensional graphene-like material, alpha-ruthenium trichloride. Neutrons shining onto and scattering from the material can deposit small amounts of energy that create magnetic excitations. The form of the magnetic excitations created in alpha-ruthenium trichloride was found to be different from the spin waves seen in ordinary magnets, but was very well-matched to the spectrum predicted for the Majorana fermions expected in the Kitaev quantum spin liquid.

"The concept of Majorana fermion originated in fundamental high energy particle physics, but we saw their signatures in a solid state material at modest temperatures," Banerjee said. "Neutron scattering not only provided the 'splash' we needed to see them, but also directly measured the resulting magnetic excitations. The Spallation Neutron Source's SEQUOIA instrument is best suited for this research because the range of energy and momentum one can access with the instrument perfectly matches the regime where Majorana fermions show up."

"The observation of these fractionalized excitations is truly remarkable," said Steve Nagler, director of the Quantum Condensed Matter Division at ORNL and co-corresponding author of the paper. "There has been a huge push recently to see if Kitaev quantum spin liquid physics can be found in materials. Time will tell whether this represents a first step on the road to a new qubit technology."

The experiment required extremely pure samples that were prepared by Banerjee and Craig Bridges of ORNL. The interpretation of t
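For reference, the "Kitaev interactions" discussed above refer to Kitaev's 2006 honeycomb model, in which neighboring spins couple only through the spin component matched to the bond direction. The notation below is the standard textbook form of that model, not an equation taken from the Nature Materials paper.

```latex
% Standard form of the Kitaev honeycomb model: on each of the three bond
% directions (x, y, z) of the honeycomb lattice, neighboring spins couple
% only through the matching spin component.
\[
  H = -K_x \sum_{\langle ij\rangle \in x\text{-bonds}} \sigma_i^x \sigma_j^x
      -K_y \sum_{\langle ij\rangle \in y\text{-bonds}} \sigma_i^y \sigma_j^y
      -K_z \sum_{\langle ij\rangle \in z\text{-bonds}} \sigma_i^z \sigma_j^z .
\]
```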