Lemont, IL, United States

Gao C.,Xi'an University of Science and Technology | Gao C.,University of California at Riverside | Hu Y.,Argonne National Laboratory | Wang M.,University of California at Riverside | And 2 more authors.
Journal of the American Chemical Society | Year: 2014

We report that fully alloyed Ag/Au nanospheres with high compositional homogeneity, ensured by annealing at elevated temperatures, show large extinction cross sections, extremely narrow bandwidths, and remarkable stability in harsh chemical environments. Nanostructures of Ag are known to have much stronger surface plasmon resonance than Au, but their applications in many areas have been severely limited by their poor chemical stability in nonideal chemical environments. Here we address this issue by producing fully alloyed Ag/Au nanospheres through a surface-protected annealing process. A critical temperature has been found at around 930 °C, below which the resulting alloy nanospheres, although significantly more stable than pure silver nanoparticles, can still gradually decay upon extended exposure to a harsh etchant. Nanospheres annealed above the critical temperature show a homogeneous distribution of Ag and Au, minimal crystallographic defects, and an absence of structural and compositional interfaces, which account for the extremely narrow bandwidths of the surface plasmon resonance and may enable many plasmonic applications with high performance and long lifetime, especially those involving corrosive species. © 2014 American Chemical Society.


Guzman-Verri G.G.,Argonne National Laboratory
MRS Bulletin | Year: 2014

This article reviews silicene, a relatively new allotrope of silicon, which can also be viewed as the silicon version of graphene. Graphene is a two-dimensional material with unique electronic properties qualitatively different from those of standard semiconductors such as silicon. While many other two-dimensional materials are now being studied, our focus here is solely on silicene. We first discuss its synthesis and the challenges presented. Next, a survey of some of its physical properties is provided. Silicene shares many of the fascinating properties of graphene, such as the so-called Dirac electronic dispersion. The slightly different structure, however, leads to a few major differences compared to graphene, such as the ability to open a bandgap in the presence of an electric field or on a substrate, a key property for digital electronics applications. We conclude with a brief survey of some of the potential applications of silicene. © 2014 Materials Research Society.


Boughezal R.,Argonne National Laboratory | Focke C.,Northwestern University | Liu X.,University of Maryland University College | Liu X.,Peking University | And 2 more authors.
Physical Review Letters | Year: 2015

We present the complete calculation of W-boson production in association with a jet in hadronic collisions through next-to-next-to-leading order (NNLO) in perturbative QCD. To cancel infrared divergences, we discuss a new subtraction method that exploits the fact that the N-jettiness event-shape variable fully captures the singularity structure of QCD amplitudes with final-state partons. This method holds for processes with an arbitrary number of jets and is easily implemented into existing frameworks for higher-order calculations. We present initial phenomenological results for W+jet production at the LHC. The NNLO corrections are small and lead to a significantly reduced theoretical error, opening the door to precision measurements in the W+jet channel at the LHC. © 2015 American Physical Society.
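For context, the N-jettiness event-shape variable on which this subtraction scheme is built is conventionally defined (following Stewart, Tackmann, and Waalewijn; the notation below is that standard one, not taken from this Letter) as

```latex
\tau_N \;=\; \sum_k \min_i \left\{ \frac{2\, q_i \cdot p_k}{Q_i} \right\},
```

where the sum runs over the final-state parton momenta \(p_k\), the \(q_i\) are lightlike reference vectors for the two beams and the \(N\) jet directions, and the \(Q_i\) are hard scales. Since \(\tau_N \to 0\) in the limit of exactly \(N\) resolved jets, a cut on \(\tau_N\) cleanly separates the singular region, which can be treated analytically, from the region with an additional resolved parton, which is handled by a standard lower-order calculation.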


Latta D.E.,Argonne National Laboratory | Gorski C.A.,Eawag - Swiss Federal Institute of Aquatic Science and Technology | Scherer M.M.,University of Iowa
Biochemical Society Transactions | Year: 2012

Recent work has indicated that iron (oxyhydr-)oxides are capable of structurally incorporating and releasing metals and nutrients as a result of Fe2+-induced iron oxide recrystallization. In the present paper, we briefly review the current literature examining the mechanisms by which iron oxides recrystallize and summarize how recrystallization affects metal incorporation and release. We also provide new experimental evidence for the Fe2+-induced release of structural manganese from manganese-doped goethite. Currently, the exact mechanism(s) for Fe2+-induced recrystallization remain elusive, although they are likely to be both oxide- and metal-dependent. We conclude by discussing some future research directions for Fe2+-catalysed iron oxide recrystallization. © The Authors. Journal compilation © 2012 Biochemical Society.


Young R.D.,Argonne National Laboratory | Thomas A.W.,University of Adelaide
Nuclear Physics A | Year: 2010

It has proven a significant challenge for experiment and phenomenology to extract precise values of the nucleon sigma terms. This difficulty opens a window for lattice QCD simulations to lead the field in resolving this aspect of nucleon structure. Here we report on recent advances in the extraction of nucleon sigma terms in lattice QCD. In particular, the strangeness component is now being resolved to a precision that far surpasses the best phenomenological estimates. © 2010 Elsevier B.V.


Cloet I.C.,Argonne National Laboratory | Bentz W.,Tokai University | Thomas A.W.,University of Adelaide
Physical Review C - Nuclear Physics | Year: 2014

Electromagnetic form factors of the nucleon in the spacelike region are investigated within the framework of a covariant and confining Nambu-Jona-Lasinio model. The bound-state amplitude of the nucleon is obtained as the solution of a relativistic Faddeev equation, where diquark correlations appear naturally as a consequence of the strong coupling in the color 3̄ qq channel. Pion degrees of freedom are included as a perturbation to the "quark-core" contribution obtained using the Poincaré covariant Faddeev amplitude. While no model parameters are fit to form-factor data, excellent agreement is obtained with the empirical nucleon form factors (including the magnetic moments and radii), where pion loop corrections play a critical role for Q^2 ≲ 1 GeV^2. Using charge symmetry, the nucleon form factors can be expressed in terms of proton quark sector form factors. The latter are studied in detail, leading, for example, to the conclusion that the d-quark sector of the Dirac form factor is much softer than the u-quark sector, a consequence of the dominance of scalar diquark correlations in the proton wave function. On the other hand, for the proton quark sector Pauli form factors we find that the effect of the pion cloud and axial-vector diquark correlations overcomes the effect of scalar diquark dominance, leading to a larger d-quark anomalous magnetic moment and a form factor in the u-quark sector that is slightly softer than in the d-quark sector. © 2014 American Physical Society.


Carlson J.,Los Alamos National Laboratory | Gandolfi S.,Los Alamos National Laboratory | Pederiva F.,University of Trento | Pederiva F.,National Institute of Nuclear Physics, Italy | And 5 more authors.
Reviews of Modern Physics | Year: 2015

Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents. © 2015 American Physical Society.


Binosi D.,Fondazione Bruno Kessler | Chang L.,University of Adelaide | Papavassiliou J.,University of Valencia | Roberts C.D.,Argonne National Laboratory
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2015

Within contemporary hadron physics there are two common methods for determining the momentum-dependence of the interaction between quarks: the top-down approach, which works toward an ab initio computation of the interaction via direct analysis of the gauge-sector gap equations; and the bottom-up scheme, which aims to infer the interaction by fitting data within a well-defined truncation of those equations in the matter sector that are relevant to bound-state properties. We unite these two approaches by demonstrating that the renormalisation-group-invariant running-interaction predicted by contemporary analyses of QCD's gauge sector coincides with that required in order to describe ground-state hadron observables using a nonperturbative truncation of QCD's Dyson-Schwinger equations in the matter sector. This bridges a gap that had lain between nonperturbative continuum-QCD and the ab initio prediction of bound-state properties. © 2015.


Finney L.,Argonne National Laboratory
Expert Review of Proteomics | Year: 2014

Metals are essential cofactors utilized in many critical cellular processes. For example, zinc is important in insulin biosynthesis and may play a role in Alzheimer's disease, but much of how zinc mediates these processes remains unknown. Knowing which metal is in which protein at a given point in time would lead to new insights into how metals work in biological systems. New tools are being developed to investigate the biochemistry and cell biology of metals, with potential for biomedical applications. In this report, we consider the promise and limitations of metalloprotein detection techniques. We provide a brief overview of the techniques available and a discussion of the technical challenges to biomedical applications, with particular focus on what must be overcome for the potential of these approaches to be achieved. © 2014 Informa UK, Ltd.


Cloet I.C.,University of Adelaide | Cloet I.C.,Argonne National Laboratory | Roberts C.D.,Argonne National Laboratory | Roberts C.D.,Illinois Institute of Technology | Thomas A.W.,University of Adelaide
Physical Review Letters | Year: 2013

The proton is arguably the most fundamental of nature's readily detectable building blocks. It is at the heart of every nucleus and has never been observed to decay. It is nevertheless a composite object, defined by its valence-quark content: u+u+d, i.e., two up (u) quarks and one down (d) quark, and the manner by which they influence, inter alia, the distribution of charge and magnetization within this bound state. Much that is novel has recently been learned about these distributions, and it now appears possible that the proton's momentum-space charge distribution possesses a zero. Experiments in the coming decade should answer critical questions posed by this and related advances; we explain how such new information may assist in charting the origin and impact of key emergent phenomena within the strong interaction. Specifically, we show that the possible existence and location of a zero in the proton's electric form factor are a measure of nonperturbative features of the quark-quark interaction in the standard model, with particular sensitivity to the running of the dressed-quark mass. © 2013 American Physical Society.


Das S.,Argonne National Laboratory | Das S.,Purdue University | Gulotty R.,Argonne National Laboratory | Gulotty R.,University of California at Riverside | And 2 more authors.
Nano Letters | Year: 2014

In this article, we report transparent thin-film transistors (TFTs) only 10 atomic layers thick, with high mobility and ambipolar device characteristics, fabricated on both a conventional silicon platform and a flexible substrate. Monolayer graphene was used for the metal electrodes, 3-4 atomic layers of h-BN served as the gate dielectric, and bilayer WSe2 was used as the semiconducting channel material. The field-effect carrier mobility was extracted to be 45 cm^2/(V·s), which exceeds the mobility of state-of-the-art amorphous-silicon TFTs by ∼100 times. The active WSe2-hBN-graphene device stack was found to be more than 88% transparent over the entire visible spectrum, and the device characteristics were unaltered under in-plane mechanical strain of up to 2%. The device demonstrated remarkable temperature stability over 77-400 K. A low contact resistance of 1.4 kΩ·μm, a subthreshold slope of 90 mV/decade, a current ON-OFF ratio of 10^7, and the presence of both electron and hole conduction were observed in our all-two-dimensional (2D) TFTs: extremely desirable but rarely reported characteristics for most organic and inorganic TFTs. To the best of our knowledge, this is the first report of an all-2D transparent TFT fabricated on a flexible substrate, along with the highest mobility and current ON-OFF ratio. © 2014 American Chemical Society.


Bokulich N.A.,University of California at Davis | Subramanian S.,University of Washington | Faith J.J.,University of Washington | Gevers D.,The Broad Institute of MIT and Harvard | And 6 more authors.
Nature Methods | Year: 2013

High-throughput sequencing has revolutionized microbial ecology, but read quality remains a considerable barrier to accurate taxonomy assignment and α-diversity assessment for microbial communities. We demonstrate that high-quality read length and abundance are the primary factors differentiating correct from erroneous reads produced by Illumina GAIIx, HiSeq and MiSeq instruments. We present guidelines for user-defined quality-filtering strategies, enabling efficient extraction of high-quality data and facilitating interpretation of Illumina sequencing results. © 2013 Nature America, Inc. All rights reserved.
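As a loose illustration of the kind of user-defined quality filtering these guidelines address, the sketch below truncates a read at the first low-quality base and discards reads that lose too much length. The thresholds and the simple first-low-base truncation rule are hypothetical stand-ins, not the paper's recommended parameters (which involve Phred scores and runs of consecutive low-quality bases):

```python
def quality_filter(seq, quals, q_min=3, min_frac=0.75):
    """Truncate a read at the first base whose quality score falls
    below q_min; keep the read only if the retained length is at
    least min_frac of the original length, otherwise discard it.
    Illustrative sketch only; thresholds are hypothetical."""
    n = len(seq)
    cut = n
    for i, q in enumerate(quals):
        if q < q_min:
            cut = i
            break
    if cut < min_frac * n:
        return None  # discard: too much of the read was truncated
    return seq[:cut]
```

For example, a read whose third base has quality 2 is discarded outright, while a read with a single low-quality base near its end is merely shortened.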


Chang L.,Peking University | Chang L.,Argonne National Laboratory | Roberts C.D.,Peking University | Roberts C.D.,Argonne National Laboratory
Physical Review C - Nuclear Physics | Year: 2012

We describe a symmetry-preserving calculation of the light-quark meson spectrum, which combines a description of pion properties with reasonable estimates of the masses of heavier mesons, including axial-vector states. The kernels used in formulating the problem are essentially nonperturbative, incorporating effects of dynamical chiral symmetry breaking (DCSB) that were not previously possible to express. Our analysis clarifies a causal connection between DCSB and the splitting between vector and axial-vector mesons and exposes a key role played by the anomalous chromomagnetic moment of dressed quarks in forming the spectrum. © 2012 American Physical Society.


Altmannshofer W.,Fermi National Accelerator Laboratory | Gori S.,University of Chicago | Gori S.,Argonne National Laboratory | Kribs G.D.,University of Oregon
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2012

We explore the phenomenology of a two-Higgs-doublet model (2HDM) where both Higgs doublets couple to up-type and down-type fermions with couplings determined by the minimal flavor violation (MFV) ansatz. This 2HDM "type MFV" generalizes 2HDM types I-IV, where the decay rates of h→bb̄ and h→τ+τ- are governed by MFV couplings independent of the Higgs couplings to gauge bosons or the top quark. To determine the implications of the present Higgs data on the model, we have performed global fits to all relevant data. Several surprisingly large effects on the light Higgs phenomenology can arise: (1) The modified couplings of the Higgs to fermions can enhance the h→γγ rate significantly in both vector boson fusion production (up to a factor of 3 or more) and the inclusive rate (up to a factor of 1.5 or more); (2) in the 2HDM type MFV, the constraints on a light charged Higgs are milder than in 2HDM types I-IV. Thus, there can be substantial charged Higgs loop contribution to the diphoton rate, allowing further enhancements of the diphoton rates; (3) the h→τ+τ- rate can be (highly) suppressed, independently of the other decay channels. Furthermore, we studied the correlation between the light Higgs and the heavy Higgs phenomenology. We showed that even small deviations from the decoupling limit would imply good prospects for the detection of the heavy Higgs boson. In some regions of parameter space, a substantial range of MH is already either ruled out or on the edge of detection. Finally, we investigated the possibility that the heavy Higgs is close in mass to the light Higgs, providing an additional h/H→bb̄ rate, as well as confounding the extraction of properties of the Higgs bosons. © 2012 American Physical Society.


Lu Z.-T.,Argonne National Laboratory | Lu Z.-T.,University of Chicago | Mueller P.,Argonne National Laboratory | Drake G.W.F.,University of Windsor | And 4 more authors.
Reviews of Modern Physics | Year: 2013

The neutron-rich He6 and He8 isotopes exhibit an exotic nuclear structure that consists of a tightly bound He4-like core with additional neutrons orbiting at a relatively large distance, forming a halo. Recent experimental efforts have succeeded in laser trapping and cooling these short-lived, rare helium atoms and have measured the atomic isotope shifts along the He4-He6-He8 chain by performing laser spectroscopy on individual trapped atoms. Meanwhile, the few-electron atomic structure theory, including relativistic and QED corrections, has reached a comparable degree of accuracy in the calculation of the isotope shifts. In parallel efforts, also by measuring atomic isotope shifts, the nuclear charge radii of lithium and beryllium isotopes have been studied. The techniques employed were resonance ionization spectroscopy on neutral, thermal lithium atoms and collinear laser spectroscopy on beryllium ions. Combining advances in both atomic theory and laser spectroscopy, the charge radii of these light halo nuclei have now been determined for the first time independent of nuclear structure models. The results are compared with the values predicted by a number of nuclear structure calculations and are used to guide our understanding of the nuclear forces in the extremely neutron-rich environment. © 2013 American Physical Society.
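The extraction described above rests on the standard decomposition of the measured isotope shift into a mass-shift piece supplied by atomic theory and a field-shift piece proportional to the change in the mean-square nuclear charge radius; schematically (the notation is the generic one, not taken from this review),

```latex
\delta\nu^{A,A'} \;=\; \delta\nu_{\mathrm{MS}}^{A,A'} \;+\; C\,\delta\langle r_c^{2}\rangle^{A,A'},
```

so that once the mass shift \(\delta\nu_{\mathrm{MS}}\) and the field-shift constant \(C\) are computed to sufficient accuracy, the measured \(\delta\nu\) yields \(\delta\langle r_c^{2}\rangle\) directly, independent of nuclear structure models.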


Boughezal R.,Argonne National Laboratory | Focke C.,Northwestern University | Giele W.,Fermi National Accelerator Laboratory | Liu X.,University of Maryland University College | And 3 more authors.
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2015

We use the recently proposed jettiness-subtraction scheme to provide the complete calculation of Higgs boson production in association with a jet in hadronic collisions through next-to-next-to-leading order in perturbative QCD. This method exploits the observation that the N-jettiness event-shape variable completely describes the singularity structure of QCD when final-state colored particles are present. Our results are in agreement with a recent computation of the gg and qg partonic initial states based on sector-improved residue subtraction. We present phenomenological results for both fiducial cross sections and distributions at the LHC. © 2015 The Authors.


Gurevich A.,Old Dominion University | Vinokur V.M.,Argonne National Laboratory
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

We discuss the importance of the vortex core energy and of realistic boundary conditions for the Fokker-Planck equation in the calculation of thermally activated hopping of vortices across narrow superconducting films. Disregard of these issues in the papers by Bulaevskii, Graf, and Kogan, Phys. Rev. B 85, 014505 (2012), and by Bulaevskii, Phys. Rev. B 83, 144526 (2011), in which an uncertain London vortex core cutoff was used, can produce large numerical errors and a significant discrepancy between their results and those of the paper by Gurevich and Vinokur, Phys. Rev. Lett. 100, 227007 (2008), in which these issues were taken into account. This can be essential for the interpretation of experimental data on thin-film photon detectors and other superconducting nanostructures. © 2012 American Physical Society.


Yu J.,University of California at Riverside | Liu G.,University of California at Riverside | Sumant A.V.,Argonne National Laboratory | Goyal V.,University of California at Riverside | And 2 more authors.
Nano Letters | Year: 2012

Graphene has demonstrated potential for practical applications owing to its excellent electronic and thermal properties. Typical graphene field-effect transistors and interconnects built on conventional SiO2/Si substrates reveal a breakdown current density on the order of 1 μA/nm^2 (i.e., 10^8 A/cm^2), which is ∼100× larger than the fundamental limit for metals but still smaller than the maximum achieved in carbon nanotubes. We show that by replacing SiO2 with synthetic diamond, one can substantially increase the current-carrying capacity of graphene to as high as ∼18 μA/nm^2 even at ambient conditions. Our results indicate that graphene's current-induced breakdown is thermally activated. We also found that the current-carrying capacity of graphene can be improved not only on single-crystal diamond substrates but also on an inexpensive ultrananocrystalline diamond, which can be produced in a process compatible with conventional Si technology. The latter was attributed to the decreased thermal resistance of the ultrananocrystalline diamond layer at elevated temperatures. The obtained results are important for graphene's applications in high-frequency transistors, interconnects, and transparent electrodes and can lead to a new planar sp^2-on-sp^3 carbon-on-carbon technology. © 2012 American Chemical Society.
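The equivalence between the two current-density units quoted in the abstract is a one-line unit conversion; a quick check, using only values stated in the abstract:

```python
# 1 nm = 1e-7 cm, so 1 nm^2 = 1e-14 cm^2.
# Hence 1 uA/nm^2 = 1e-6 A / 1e-14 cm^2 = 1e8 A/cm^2.
uA_per_nm2_in_A_per_cm2 = 1e-6 / (1e-7) ** 2  # A/cm^2 per (uA/nm^2)

breakdown_sio2 = 1 * uA_per_nm2_in_A_per_cm2      # ~1e8 A/cm^2 on SiO2/Si
breakdown_diamond = 18 * uA_per_nm2_in_A_per_cm2  # ~1.8e9 A/cm^2 on diamond
```

This confirms that the quoted ∼18 μA/nm^2 on diamond corresponds to roughly 1.8 × 10^9 A/cm^2, about an order of magnitude above the SiO2/Si figure.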


Newton M.C.,University College London | Newton M.C.,University of Surrey | Leake S.J.,University College London | Harder R.,Argonne National Laboratory | And 2 more authors.
Nature Materials | Year: 2010

Nanoscale structures can be highly strained because of confinement effects and the strong influence of their external boundaries. This results in dramatically different electronic, magnetic and optical material properties of considerable utility. Third-generation synchrotron-based coherent X-ray diffraction has emerged as a non-destructive tool for three-dimensional (3D) imaging of strain and defects in crystals that are smaller than the coherence volume, typically a few cubic micrometres, of the available beams that have sufficient flux to reveal the material's structure [1]. Until now, measurements have been possible only at a single Bragg point of a given crystal because of the limited ability to maintain alignment [2]; it has therefore been possible to determine only one component of displacement and not the full strain tensor. Here we report key advances in our fabrication and experimental techniques, which have enabled diffraction patterns to be obtained from six Bragg reflections of the same ZnO nanocrystal for the first time. All three Cartesian components of the ion displacement field, and in turn the full nine-component strain tensor, have thereby been imaged in three dimensions. © 2010 Macmillan Publishers Limited. All rights reserved.


Jones M.R.,Northwestern University | MacFarlane R.J.,Northwestern University | Lee B.,Argonne National Laboratory | Zhang J.,Northwestern University | And 3 more authors.
Nature Materials | Year: 2010

Directional bonding interactions in solid-state atomic lattices dictate the unique symmetries of atomic crystals, resulting in a diverse and complex assortment of three-dimensional structures that exhibit a wide variety of material properties. Methods to create analogous nanoparticle superlattices are beginning to be realized, but the concept of anisotropy is still largely underdeveloped in most particle assembly schemes. Some examples provide interesting methods to take advantage of anisotropic effects, but most are able to make only small clusters or lattices that are limited in crystallinity and especially in lattice parameter programmability. Anisotropic nanoparticles can be used to impart directional bonding interactions on the nanoscale, both through face-selective functionalization of the particle with recognition elements to introduce the concept of valency, and through anisotropic interactions resulting from particle shape. In this work, we examine the concept of inherent shape-directed crystallization in the context of DNA-mediated nanoparticle assembly. Importantly, we show how the anisotropy of these particles can be used to synthesize one-, two- and three-dimensional structures that cannot be made through the assembly of spherical particles. © 2010 Macmillan Publishers Limited. All rights reserved.


Young R.D.,Argonne National Laboratory | Thomas A.W.,Jefferson Lab | Thomas A.W.,College of William and Mary | Thomas A.W.,University of Adelaide
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2010

We report an analysis of the impressive new lattice simulation results for octet baryon masses in 2+1-flavor QCD. The analysis is based on a low-order expansion about the chiral SU(3) limit in which the symmetry breaking arises from terms linear in the quark masses plus the variation of the Goldstone boson masses in the leading chiral loops. The baryon masses evaluated at the physical light-quark masses are in remarkable agreement with the experimental values, with a model dependence considerably smaller than the rather small statistical uncertainty. From the mass formulas one can evaluate the sigma commutators for all octet baryons. This yields an accurate value for the pion-nucleon sigma commutator. It also yields the first determination of the strangeness sigma term based on 2+1-flavor lattice QCD and, in general, the sigma commutators provide a resolution to the difficult issue of fine-tuning the strange-quark mass. © 2010 The American Physical Society.
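The step from the baryon mass formulas to the sigma commutators uses the Feynman-Hellmann theorem: each sigma term is a quark-mass derivative of the baryon mass evaluated at the physical point; e.g., for the nucleon (standard notation, not specific to this paper),

```latex
\sigma_{\pi N} \;=\; m_l \,\frac{\partial M_N}{\partial m_l}\,, \qquad
\sigma_s \;=\; m_s \,\frac{\partial M_N}{\partial m_s}\,,
```

so an accurate parametrization of the baryon masses as functions of \(m_l\) and \(m_s\) from the lattice data immediately yields both the pion-nucleon and the strangeness sigma terms, and the sensitivity of the latter to \(m_s\) is what makes these commutators useful for fine-tuning the strange-quark mass.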


Kogut J.B.,University of Maryland University College | Sinclair D.K.,Argonne National Laboratory
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2010

QCD with two flavors of massless color-sextet quarks is considered as a model for conformal/walking technicolor. If this theory possesses an infrared fixed point, as indicated by 2-loop perturbation theory, it is a conformal (unparticle) field theory. If, on the other hand, a chiral condensate forms on the weak-coupling side of this would-be fixed point, the theory remains confining. The only difference between such a theory and regular QCD is that there is a range of momentum scales over which the coupling constant runs very slowly (walks). In this first analysis, we simulate the lattice version of QCD with two flavors of staggered quarks at finite temperatures on lattices of temporal extent Nt=4 and 6. The deconfinement and chiral-symmetry restoration couplings give us a measure of the scales associated with confinement and chiral-symmetry breaking. We find that, in contrast to what is seen with fundamental quarks, these transition couplings are very different. β=6/g2 for each of these transitions increases significantly from Nt=4 to Nt=6, as expected for the finite-temperature transitions of an asymptotically free theory. This suggests a walking rather than a conformal behavior, in contrast to what is observed with Wilson quarks. In contrast to what is found for fundamental quarks, the deconfined phase exhibits states in which the Polyakov loop is oriented in the directions of all three cube roots of unity. At very weak coupling the states with complex Polyakov loops undergo a transition to a state with a real, negative Polyakov loop. © 2010 The American Physical Society.


Holt R.J.,Argonne National Laboratory | Roberts C.D.,Argonne National Laboratory | Roberts C.D.,Peking University
Reviews of Modern Physics | Year: 2010

An experimental and theoretical perspective is provided on the behavior of unpolarized distribution functions for the nucleon and pion on the valence-quark domain, namely, Bjorken x ≳ 0.4. This domain is a key to much of hadron physics; e.g., a hadron is defined by its flavor content and that is a valence-quark property. Furthermore, its accurate parametrization is crucial to the provision of reliable input for large collider experiments. The focus is on experimental extractions of distribution functions via electron and muon inelastic scattering, and from Drell-Yan interactions; and on theoretical treatments that emphasize an explanation of the distribution functions, providing an overview of major contemporary approaches and issues. Valence-quark physics is a compelling subject, which probes at the heart of our understanding of the standard model. There are numerous outstanding and unresolved challenges, which experiment and theory must confront. In connection with experiment, we explain that an upgraded Jefferson Laboratory facility is well suited to provide new data on the nucleon, while a future electron-ion collider could provide essential new data for the mesons. There is also great potential in using Drell-Yan interactions, at FNAL, CERN, J-PARC, and GSI, to push into the large-x domain for both mesons and nucleons. Furthermore, it is argued that explanation, in contrast to modeling and parametrization, requires a widespread acceptance of the need to adapt theory: to the lessons learnt already from the methods of nonperturbative quantum-field theory and a fuller exploitation of those methods. © 2010 The American Physical Society.


Newton M.A.,European Synchrotron Radiation Facility | Chapman K.W.,Argonne National Laboratory | Thompsett D.,Johnson Matthey | Chupas P.J.,Argonne National Laboratory
Journal of the American Chemical Society | Year: 2012

When materials are reduced to the nanoscale, their structure and reactivity can deviate greatly from the bulk or extended surface case. Using the archetypal example of supported Pt nanoparticles (ca. 2 nm diameter, 1 wt % Pt on Al2O3) catalyzing CO oxidation to CO2 during cyclic redox operation, we show that high-energy X-ray total scattering, used with subsecond time resolution, can yield detailed, valuable insights into the dynamic behavior of nanoscale systems. This approach reveals how these nanoparticles respond to their environment and the nature of active sites being formed and consumed within the catalytic process. Specific insight is gained into the structure of the highly active Pt surface oxide that formed on the nanoparticles during catalysis. © 2012 American Chemical Society.


Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Training Grant | Award Amount: 3.99M | Year: 2014

The Scottish Doctoral Training Centre in Condensed Matter Physics, known as the CM-DTC, is an EPSRC-funded Centre for Doctoral Training (CDT) addressing the broad field of Condensed Matter Physics (CMP). CMP is a core discipline that underpins many other areas of science, and is one of the Priority Areas for this CDT call. Renewal funding for the CM-DTC will allow five more annual cohorts of PhD students to be recruited, trained and released onto the market. They will be highly educated professionals with a knowledge of the field, in depth and in breadth, that will equip them for future leadership in a variety of academic and industrial careers. Condensed Matter Physics research impacts on many other fields of science including engineering, biophysics, photonics, chemistry, and materials science. It is a significant engine for innovation and drives new technologies. Recent examples include the use of liquid crystals for displays including flat-screen and 3D television, and the use of solid-state or polymeric LEDs for power-saving high-illumination lighting systems. Future examples may involve harnessing the potential of graphene (the world's thinnest and strongest sheet-like material), or the creation of exotic low-temperature materials whose properties may enable the design of radically new types of (quantum) computer with which to solve some of the hardest problems of mathematics. The UK's continued ability to deliver transformative technologies of this character requires highly trained CMP researchers such as those the Centre will produce. The proposed training approach is built on a strong framework of taught lecture courses, with core components and a wide choice of electives. This spans the first two years so that PhD research begins alongside the coursework from the outset. It is complemented by hands-on training in areas such as computer-intensive physics and instrument building (including workshop skills and 3D printing).
Some lecture courses are delivered in residential schools but most are videoconferenced live, using the well-established infrastructure of SUPA (the Scottish Universities Physics Alliance). Students meet face to face frequently, often for more than one day, at cohort-building events that emphasise teamwork in science, outreach, transferable skills and careers training. National demand for our graduates is demonstrated by the large number of companies and organisations who have chosen to be formally affiliated with our CDT as Industrial Associates. The range of sectors spanned by these Associates is notable. Some, such as e2v and Oxford Instruments, are scientific consultancies and manufacturers of scientific equipment, whom one would expect to be among our core stakeholders. Less obviously, the list also represents scientific publishers, software houses, companies small and large from the energy sector, large multinationals such as Solvay-Rhodia and Siemens, and finance and patent law firms. This demonstrates a key attraction of our graduates: their high levels of core skills, and a hands-on approach to problem solving. These impart a discipline-hopping ability which more focussed training for specific sectors can complement, but not replace. This breadth is prized by employers in a fast-changing environment where years of vocational training can sometimes be undermined very rapidly by unexpected innovation in an apparently unrelated sector. As the UK builds its technological future by funding new CDTs across a range of priority areas, it is vital to include some that focus on core discipline skills, specifically Condensed Matter Physics, rather than the interdisciplinary or semi-vocational training that features in many other CDTs. As well as complementing those important activities today, our highly trained PhD graduates will be equipped to lay the foundations for the research fields (and perhaps some of the industrial sectors) of tomorrow.


News Article | December 1, 2016
Site: phys.org

A technology developed at the University of Chicago, and now being commercialized by a University startup, is addressing the intermittent nature of renewable energy sources such as wind and solar. It uses a selectively evolved, unicellular microorganism that helps convert electricity into methane gas. That gas can be stored, transported and used wherever natural gas is used, including for the generation of power on demand. Laurens Mets, associate professor of molecular genetics and cell biology, began developing the technology in the late 1990s. From it, the startup Electrochaea was born with support from the University's technology transfer office, which is now part of the Polsky Center for Entrepreneurship and Innovation. "Direct scaling at this pace and scale is rare in the energy field," Mets said. "But we found this technology to be eminently scalable, so I'm very confident about its commercialization." Electrochaea was selected for the 2014 Global Cleantech 100—a list of 100 private companies with the greatest potential to solve the clean energy crisis according to the market intelligence firm Cleantech Group. It has experienced a string of successes, including a large-scale demonstration facility that delivers methane to Denmark's pipeline grid and a commercial-scale plant announced in October to be built in Hungary. "The disruptive energy storage technology developed by Dr. Mets in his lab has been validated by Electrochaea and is now being shown to scale in a commercially meaningful way," said Cristianne Frazier, senior project manager of technology commercialization and licensing at the Polsky Center. At the center of the power-to-gas technology is a strain of methanogenic Archaea—a microorganism that Mets adapted for industrial use. The process starts with surplus electricity from a wind farm or solar array that isn't needed immediately. That power is used to convert water into hydrogen and oxygen. 
The hydrogen is combined with waste carbon dioxide from any of a variety of sources, such as biogas production or an industrial process, in a proprietary bioreactor in which the microorganisms efficiently catalyze conversion of the mixture into methane and water. The resulting methane can be transported in existing pipelines or converted into compressed natural gas or liquefied natural gas, making it available to generate electricity. The technology offers a large carrying capacity—more than competing bulk-energy storage systems, such as batteries, pumped hydroelectric and compressed air, according to Mets. The technology enables increased use of variable electricity sources such as wind and solar by storing excess power, thus smoothing out the variability and making renewables more feasible and economically viable. The carbon dioxide produced by burning the methane product of the process was waste from its original source and would have been released into the atmosphere in any case. The power-to-gas technology is thus carbon-neutral in its primary impact on the environment. It has the additional important impact of displacing net carbon emissions from burning of fossil fuels for energy generation with energy derived from renewable wind and solar sources. The potential of the patented power-to-gas technology is significant, according to Seth Snyder, leader of the Water-Energy-Sustainability Initiative at the Argonne National Laboratory. "Methane could be the primary source for much of society's energy needs including electricity, heating, industrial processes and transportation," he said. "Therefore a robust way to create clean methane from renewable sources has the potential to transform our energy systems." Mets continues his research at the University and aims to refine the novel technology. He hopes to adapt it to produce gasoline and jet fuel. 
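The chemistry described above follows two standard balanced reactions: water electrolysis (2 H2O → 2 H2 + O2) and CO2 methanation (CO2 + 4 H2 → CH4 + 2 H2O), the reaction the archaea catalyze biologically. As a rough illustration of the mass balance (the molar masses and resulting figures below are textbook stoichiometry, not Electrochaea process data), the arithmetic can be sketched in a few lines:

```python
# Illustrative stoichiometry for the two-step power-to-gas process:
#   electrolysis:    2 H2O -> 2 H2 + O2         (driven by surplus electricity)
#   biomethanation:  4 H2 + CO2 -> CH4 + 2 H2O  (catalyzed by methanogenic archaea)
# Approximate molar masses in g/mol.
M = {"H2": 2.016, "O2": 32.00, "H2O": 18.015, "CO2": 44.01, "CH4": 16.04}

def co2_consumed_per_kg_ch4():
    """Mass of CO2 (kg) consumed per kg of methane (1 mol CO2 per mol CH4)."""
    mol_ch4 = 1000.0 / M["CH4"]          # moles of CH4 in 1 kg
    return mol_ch4 * M["CO2"] / 1000.0

def h2_required_per_kg_ch4():
    """Mass of H2 (kg) required per kg of methane (4 mol H2 per mol CH4)."""
    mol_ch4 = 1000.0 / M["CH4"]
    return mol_ch4 * 4 * M["H2"] / 1000.0

print(f"CO2 consumed: {co2_consumed_per_kg_ch4():.2f} kg per kg CH4")
print(f"H2 required:  {h2_required_per_kg_ch4():.2f} kg per kg CH4")
```

Each kilogram of methane produced thus consumes roughly 2.7 kg of waste CO2 and requires about 0.5 kg of electrolytic hydrogen, which is the sense in which the process is carbon-neutral in its primary impact.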
"What's so interesting is that Electrochaea is demonstrating that a very fundamental process in nature can be harnessed and adapted to address an immediate societal challenge," Snyder said. "If done correctly, the benefits could be significant in Europe and Asia." The links between the company and the University are numerous. The University recognized the importance of Mets' discoveries early on and filed several families of patent applications that would be licensed to Electrochaea and become central to the startup's intellectual property portfolio. "The technology commercialization and licensing team at the Polsky Center, Electrochaea and Dr. Mets have worked collaboratively on everything from company formation and technology development to Series A financing and patent prosecution, and I believe those relationships have helped foster a successful company," Frazier said. Created in 2006, Electrochaea first validated the process in its laboratory in St. Louis. It began field testing Mets' power-to-gas technology in 2011. Three years later, Electrochaea started constructing a large-scale demonstration facility at a wastewater treatment plant outside Copenhagen, with the treatment plant providing the waste carbon dioxide used in the conversion process. Based on the success of that project, which is called BioCat and went live in June, Electrochaea is building a 10-megawatt plant in Hungary that will be the world's first commercial-scale power-to-gas plant. "Electrochaea ramped up very quickly, with several steps, from the one-liter reactor in my lab at the University, through the one-megawatt BioCat project and now the 10-megawatt commercial plant in Hungary. The microbes have proven to be very robust," Mets said. The Hungarian plant will be built by Electrochaea and Magyar Villamos Muvek, that country's largest energy provider. As with BioCat, the new plant will provide methane directly to the existing system of natural gas pipelines. 
Electrochaea plans to build an additional plant in Switzerland and envisions plants with up to 1,000 megawatts of capacity. Meanwhile, Pacific Gas and Electric Company is building a small demonstration plant at the National Renewable Energy Lab in Colorado.


News Article | April 14, 2016
Site: cleantechnica.com

Cummins says it is starting development of a plug-in hybrid electric Class 6 truck that will boost fuel economy by 50%. A Class 6 truck has a gross vehicle weight of between 19,000 and 26,000 pounds. The category includes trucks with a single rear axle and school buses. Examples of a Class 6 truck include the International Durastar, Chevrolet Kodiak/GMC TopKick C6500, and the Ford F-650, according to Wikipedia. The project will be funded in part by a $4.5 million grant from the U.S. Department of Energy.  Cummins’ primary objective is to develop a plug-in hybrid electric vehicle that can ensure fuel efficiency while offering improved performance. Its researchers will optimize the powertrain by selecting the engine best suited for use as an electric commercial vehicle range extender. That engine will be used to manage the charge level of the all-electric drive battery pack. The range extender will be integrated, using advanced vehicle controls, with the electrified powertrain and other applicable technologies. Partners in the project include PACCAR, Ohio State University, the National Renewable Energy Laboratory, and the Argonne National Laboratory. PACCAR manufactures medium- and heavy-duty trucks under the Kenworth, Peterbilt, and DAF brand names. “The close integration and control of the electrified powertrain with an appropriately selected engine is critically important to developing a plug-in hybrid electric vehicle system,” said Wayne Eckerle, Vice President, Research and Technology, Cummins Inc. “We believe that through the team’s efforts we can soon make these innovations commercially available, which has the potential to translate into substantial savings annually per vehicle, helping our customers and the environment.” The reduction of fuel consumption will be accomplished using a range of drive cycles designed to meet the needs of commercial fleet operators. 
In addition to a plug-in hybrid powertrain, the trucks will feature other technologies, including intelligent transportation systems and electronic braking. Ultimately, the researchers aim to demonstrate improved fuel consumption and state-of-the-art drivability and performance regardless of environmental conditions. Cummins has not announced when it expects its plug-in hybrid electric trucks to be available. This is important news because Class 6 trucks are widely used in commercial applications. Think of these new trucks as a really heavy-duty version of the Chevy Volt. Raising their average fuel economy — which tends to be dreadful — will pay bigger dividends for the environment than raising the national average for private passenger cars by a few percentage points. The Cummins research should have real benefits, as opposed to the story about electric tractor trailers from Oakridge Battery that turned out to be vaporware.


In a development that could revolutionize electronic circuitry, a research team from the University of Wisconsin at Madison (UW) and the U.S. Department of Energy's Argonne National Laboratory has confirmed a new way to control the growth paths of graphene nanoribbons on the surface of a germanium crystal.


News Article | April 1, 2016
Site: phys.org

Last week, Nano Letters published the team's study, which covers how the researchers made a catalyst for the first time out of the metal ruthenium. The breakthrough is not limited to this one metal, however. Xia says the process they developed is most important. "We are fine-tuning the surface, size, shape and crystal structure," he says. "Our goal is to increase their catalytic activity while reducing the usage of this precious material." In an effort to make better catalysts, Xia brought together a team from the Argonne National Laboratory, University of Texas at Dallas, and Temple University. Haihang Ye, Xia's graduate student, is first author on the paper; undergraduate Joseph Vermeylen also contributed to the work. Chemically speaking, catalysts are substances that increase the rate of a reaction, making it start or continue more quickly. Materially speaking, catalyst sources vary, although some of the most important tend to be expensive metals. Ruthenium, a noble metal and fairly rare, has joined an elite group of catalyst metals including gold, platinum and palladium. Since the metals are so rare, there is a strong incentive to reduce how much of a catalyst is used in any given process, Xia says. Usually researchers control shape and size—the key is more surface area. Nanoframes, which are nanoparticles with open centers, have an advantage with their gazebo-like atomic arrangements. "Since we have the interior space available, it's another benefit," he says, adding that their particular ruthenium structures get even more detailed. "Because catalytic reactions occur only on the surface of materials, the surface atom arrangement has a great impact on determining the catalytic activity." The arrangements of atoms—the crystal structure of a material—can be difficult to manipulate. In general, ruthenium nanocrystals adopt the hexagonal close-packed (hcp) structure. 
But Xia and his team came up with an elegant solution for making ruthenium nanocrystals with another structure: face-centered cubic (fcc) structure. The process involves two steps, growth and etching. Basically, ruthenium doesn't naturally grow in a crystal structure that can be made into a nanoframe. Instead, the team grew the nanoframe on a core of palladium, which they later removed. The first step starts with a palladium seed with an fcc structure, and the researchers used the metal as an atomic template. Ruthenium was directed to preferentially grow on the edges and corners of the palladium octahedron. That way, the newly formed ruthenium simply replicated the fcc structure of the palladium seed. The second step is called etching, which removes the palladium seed, leaving behind the hollowed out ruthenium nanoframe. To ensure the material had catalytic potential, the team ran the ruthenium nanoframes through several diagnostic tests: the reduction of p-nitrophenol by NaBH4 and the dehydrogenation of ammonia borane. While more data is needed to quantify how well ruthenium holds up against existing metal catalysts, Xia says the results from their experiments are promising. Improving the material's stability is the next line of research. Being such a new catalyst, it needs to be fully characterized and have its limits tested. For example, Xia says they will want to know how thick the nanoframe material can grow and still keep the fcc structure and be effective as a catalyst. Most importantly, once the material has been vetted, researchers will be able to start applying the catalyst to several big challenges. Namely, Xia says that ruthenium nanoframes and other catalysts with unique crystal structures could improve hydrogen fuel production and carbon storage. A noble calling for a metal catalyst. More information: Haihang Ye et al. 
Ru Nanoframes with an fcc Structure and Enhanced Catalytic Properties, Nano Letters (2016). DOI: 10.1021/acs.nanolett.6b00607


The method of taking these pictures is a collaborative creation that involved Kansas State University researchers Artem Rudenko and Daniel Rolles, both assistant professors of physics. The movies help scientists understand interactions of intense laser light with matter. But even more importantly, these experiments lead the way to filming various processes that involve ultrafast dynamics of microscopic samples, such as the formation of aerosols—which play a major role in climate models—or laser-driven fusion. "We can create a real movie of the microworld," Rudenko said. "The key development is that now we can take sequences of pictures on the nanoscale." Rudenko and Rolles—both affiliated with the university's James R. Macdonald Laboratory—collaborated with researchers at SLAC National Accelerator Laboratory at Stanford University, Argonne National Laboratory and the Max Planck Institutes in Germany. Their publication, "Femtosecond and nanometre visualization of structural dynamics in superheated nanoparticles," appears in Nature Photonics. In this work, the collaboration used intense lasers to heat xenon nanoscale clusters and then took a series of X-ray pictures to show what happened to the particles. The picture series became a movie of how these objects move at the level of femtoseconds, which are one-millionth of a billionth of a second. "What makes nano so interesting is that the behavior for many things changes when you get to the nanoscale," Rolles said. "Nano-objects bridge the gap between bulk matter and individual atoms or molecules. This research helps us as we try to understand the behavior of nano-objects and how they change shape and properties within extremely short times." The pictures of the nanoparticles cannot be taken with normal optical light, but must be taken with X-rays because X-ray light has nanometer wavelengths that enable researchers to view nanoscale objects, Rolles said. The light wavelength must match the size of the object. 
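The wavelength-matching point above can be made quantitative with the Planck relation E = hc/λ: resolving nanometre-scale objects requires photons with nanometre wavelengths, which are roughly a thousand times more energetic than visible-light photons. A quick illustrative calculation (standard physical constants, not values from the study):

```python
# Photon energy for a given wavelength, E = h*c / lambda.
# Shows why imaging ~nanometre objects needs X-ray rather than visible photons.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m: float) -> float:
    """Photon energy in eV for a wavelength given in metres."""
    return H * C / wavelength_m / EV

print(f"500 nm (visible): {photon_energy_ev(500e-9):.1f} eV")  # ~2.5 eV
print(f"1 nm (X-ray):     {photon_energy_ev(1e-9):.0f} eV")    # ~1240 eV
```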
To take the pictures, the researchers needed two ingredients: very short X-ray pulses and very powerful X-ray pulses. The Linac Coherent Light Source at SLAC provided those two ingredients, and Rudenko and Rolles traveled to California to use this machine to take the perfect pictures. The photo-taking method and the pictures it produces have numerous applications in physics and chemistry, Rolles said. The method is also valuable for visualizing laser interactions with nanoparticles and for the rapidly developing field of nanoplasmonics, in which the properties of nanoparticles are manipulated with intense light fields. This may help to build next-generation electronics. "Light-driven electronics can be much faster than conventional electronics because the key processes will be driven by light, which can be extremely fast," Rudenko said. "This research has big potential for optoelectronics, but in order to improve technology, we need to know how a laser drives those nanoparticles. The movie-making technology is an important step in this direction." Rudenko and Rolles are continuing to improve the moviemaking process. In collaboration with the university's soft matter physics group, they have extended the range of samples that can be put into the X-ray machine and can now produce movies of gold and silica nanoparticles. More information: Tais Gorkhover et al. Femtosecond and nanometre visualization of structural dynamics in superheated nanoparticles, Nature Photonics (2016). DOI: 10.1038/nphoton.2015.264


News Article | October 23, 2015
Site: www.nature.com

Two large science experiments head a wish list drawn up by US nuclear physicists for the next decade: a quest to uncover the nature of neutrinos and a particle collider to study the forces that bind quarks. The big-ticket items, each of which would cost hundreds of millions of dollars, are among the top priorities highlighted by the Nuclear Science Advisory Committee (NSAC) on 15 October. Every 5–7 years, this panel of high-level nuclear physicists presents a long-term plan to the US Department of Energy and National Science Foundation, after consulting the US nuclear-physics community. The agenda assumes that US funding for nuclear science will increase by 1.6% per year above inflation — a realistic scenario, says NSAC chair Donald Geesaman, a physicist at Argonne National Laboratory in Illinois. “We have exciting science to do, and we are not asking for large increases,” he says. The neutrino experiment, construction of which could begin by the end of the decade, would search for a theorized rare form of radioactive decay in which two identical neutrinos annihilate one another — an event that would imply that neutrinos are their own anti-particles. It could provide a way to measure the tiny mass of neutrinos and help to explain why the Universe has lots of matter but almost no antimatter. Experiments around the world using materials such as liquid xenon have failed to detect the event, known as neutrinoless double β decay. One of the largest is the Enriched Xenon Observatory-200 (EXO-200) experiment, which uses 200 kilograms of xenon as a detector deep below the desert outside Carlsbad, New Mexico. But the NSAC report says that an experiment using a tonne or more of material — about ten times more than any previous attempt — could either find or rule out the phenomenon. 
Confirming neutrinoless double β decay "would in one stroke add lots of stuff to our knowledge of the natural world," says Giorgio Gratta, a physicist at Stanford University in California, and a former spokesperson for EXO-200. Another priority, on which Nature reported in May, is a particle accelerator that would collide electrons with protons or heavy ions to investigate gluons, which carry the force that binds quarks. But construction would have to wait until the 2020s because NSAC’s top priority is to complete and maintain existing facilities, such as the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory in Upton, New York. RHIC faced closure two years ago, but an improved budgetary position means it can now be sustained into the next decade. Although a more powerful heavy-ion experiment is housed in the Large Hadron Collider at CERN, Europe’s particle-physics laboratory in Geneva, Switzerland, researchers say that RHIC still has important science to do. Both experiments melt atomic nuclei together into a liquid-like soup of quarks and gluons, but each looks at different energy scales. “Collisions at lower energy create a very different kind of matter,” says Jamie Nagle, a physicist at the University of Colorado at Boulder who works on RHIC. NSAC also wants to continue support for two other leading US facilities: the Continuous Electron Beam Accelerator Facility (CEBAF) at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) in Newport News, Virginia, and the planned US$700 million construction of the Facility for Rare Isotope Beams (FRIB) at Michigan State University in East Lansing. Its final priority is to increase funding for small- and medium-scale projects through mechanisms such as the National Science Foundation’s Major Research Instrumentation programme, which supplies scientific equipment to universities. “All recommendations and priorities were agreed on by consensus,” says Geesaman.


News Article | August 22, 2016
Site: www.spie.org

Quantum dot photosensitizers as a new paradigm for photochemical activation
Interfacial triplet-triplet energy transfer is used to significantly extend the exciton lifetime of cadmium selenide nanocrystals in an experimental demonstration of their molecular-like photochemistry. Photosensitizers are an essential component of solar energy conversion processes, in which they are used to generate the highly reactive excited states that enable energy conversion (e.g., photochemical upconversion).1, 2 Typically, molecular triplet photosensitizers are used for such applications, but to improve the solar energy conversion process, the identification and preparation of next-generation triplet photosensitizers is required. However, the design of such photosensitizers—suitable for solar energy conversion and photocatalytic applications—remains a challenge.3 Semiconductor nanocrystals are stable light-emitting materials that can be systematically tuned to produce intense absorptions and photoluminescence. Furthermore, semiconductor nanocrystals offer several advantages over molecular photosensitizers, e.g., simple preparative synthesis, photochemical stability, size-tunable electronic and photophysical properties, high molar extinction coefficients, and trivial post-synthetic functionalization. Moreover, the inherently large, energy-consuming singlet-triplet gaps that are characteristic of molecular sensitizers can be avoided with the use of semiconductor nanocrystals that feature closely spaced excited-state energy levels.4 The characteristic broadband light absorption properties of these materials can be extended into the near-IR and can thus potentially be exploited for numerous triplet excited-state reactions, such as photoredox catalysis, singlet oxygen generation, photochemical upconversion, and excited state electron transfer. In this work,5 we have investigated the possibility of using quantum dots as effective alternatives to molecular triplet photosensitizers. 
With our experiments, we show definitively that triplet energy transfer proceeds rapidly and efficiently from energized semiconductor nanocrystals to surface-anchored molecular acceptors. In particular, we find that cadmium selenide (CdSe) quantum dots can serve as effective surrogates for molecular triplet sensitizers and can easily transfer their triplet excitons to organic acceptors (see Figure 1). These semiconductor nanomaterials are thus highly suited to energy-conversion applications. Figure 1. Artistic illustration of quantum dot-to-molecule triplet energy transfer and example subsequent reactions. The nanoparticle-to-solution triplet exciton transfer methodology we used in our experiments is shown in Figure 2. Quantum dots are typically capped with a ligand shell, and in our experiments we used oleic acid (OA) to ensure solubility of the dots (while also preventing inter-particle aggregation). This ligand periphery also serves as an insulating layer that prevents collisional quenching with the freely diffusing molecules in the solution. As a consequence, bimolecular energy transfer cannot proceed in the solution within the approximately 30ns lifetime of CdSe excitons. To circumvent this limitation, we modified the CdSe surfaces by replacing some of the native OA ligands with molecular triplet acceptors that bear a carboxylic acid moiety, e.g., 9-anthracene carboxylic acid (ACA). We then purified the resultant nanocrystals by successive precipitation/centrifugation washing cycles that provided the desired donor/acceptors (termed CdSe/ACA). Figure 2. Schematic representation of the quantum dot-to-solution triplet energy transfer process.5 The associated energy (E) levels, as well as the various triplet-triplet energy transfer (TTET) and decay pathways investigated in this study are also depicted. 
Cadmium selenide (CdSe) nanocrystals—capped with oleic acid (OA)—are used as the light-absorbing triplet sensitizer, in conjunction with 9-anthracenecarboxylic acid (ACA) as the triplet acceptor. The long-lived ACA triplets enable exothermic triplet energy transfer to freely diffusing 2-chlorobisphenylethynylanthracene (CBPEA) and dioxygen (O2). Cl: Chlorine. Ph: Phenyl. PDT: Photodynamic therapy. Since the energy transfer process in our experiments takes place at the molecule–nanoparticle interface (i.e., resembling an intramolecular process), the dynamics occur on ultrafast timescales. To monitor the photoinduced processes in these materials, we used femtosecond transient absorption spectroscopy to observe the excited-state dynamics following a 500nm laser pulse (100fs full width at half-maximum, 0.2μJ). We find—see Figure 3(a)—that direct triplet-triplet energy transfer (TTET) between the CdSe excited states and the surface-anchored molecular acceptors occurs within hundreds of picoseconds after the laser pulse (with an average rate constant of 2.0×10⁹s⁻¹) and with a nearly quantitative yield. Consequently, the CdSe photoluminescence was completely quenched, and the CdSe exciton ground-state recovery correlated with the molecular ACA triplet excited-state (3ACA*) signal growth (which occurred at a rate of 2.2×10⁹s⁻¹). Figure 3. (a) Ultrafast differential transient absorption (TA) spectra of CdSe-OA quantum dots suspended in toluene. These spectra were obtained upon selective excitation of CdSe, with the use of a 500nm laser pulse. The inset shows the TA kinetics for the growth of the molecular ACA triplet excited state (3ACA*) at 441nm. ΔA: Change in absorbance. 〈k〉: Average TTET rate constant. 
(b) TA difference spectra (from a 1mJ laser pulse with 505nm excitation wavelength, 5–7ns full width at half-maximum) measured at selected delay times after the laser pulse for CdSe/ACA and CBPEA in deaerated toluene (at concentrations of 5 and 6μM, respectively) at room temperature. The inset illustrates the triplet energy transfer reaction between 3ACA* and CBPEA. It shows TA decay kinetics at 430nm (red), as well as the rise and decay at 490nm (blue) along with their biexponential fits (solid and dashed lines, respectively).5 The 3ACA* has an extremely long lifetime because of the strongly spin-forbidden nature of the T1→S0 transition (i.e., between the lowest energy triplet excited state and the singlet ground state). Our results also show that the 3ACA* excited states formed on the CdSe surfaces decay with lifetimes on the order of milliseconds. This result represents a remarkable six-order-of-magnitude increase from the initial CdSe excited-state lifetime. 
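As a consistency check on the kinetics quoted above, the first-order rate constants can be converted to 1/e time constants (τ = 1/k); both come out at a few hundred picoseconds, matching the description of transfer "within hundreds of picoseconds." This short sketch uses only the rate constants reported in the text:

```python
# Convert the reported first-order rate constants into time constants (tau = 1/k).
# Values are taken from the text; this is a consistency check, not new data.

def time_constant_ps(k_per_s: float) -> float:
    """Return the 1/e time constant in picoseconds for a first-order rate k (s^-1)."""
    return 1.0 / k_per_s * 1e12

k_ttet = 2.0e9   # average CdSe -> ACA triplet-triplet energy transfer rate, s^-1
k_rise = 2.2e9   # growth rate of the 3ACA* signal, s^-1

print(f"TTET time constant: {time_constant_ps(k_ttet):.0f} ps")  # 500 ps
print(f"3ACA* rise time:    {time_constant_ps(k_rise):.0f} ps")  # ~455 ps
```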
Such a long excited-state lifetime is promising for numerous applications because it provides the opportunity for additional chemical reactivity within the bulk solution. As a proof of concept, we used a secondary freely diffusing molecular triplet acceptor—2-chlorobisphenylethynylanthracene (CBPEA)—in solution (toluene), to demonstrate the extraction of triplet energy from the CdSe surface. We observed—see Figure 3(b)—near-quantitative TTET between the 3ACA* and the triplet excited CBPEA (3CBPEA*) states, which can thus enable highly efficient triplet energy extraction from the initially prepared CdSe excitons. Our results also show that once the triplet exciton energy is transferred to the freely diffusing acceptors in a deoxygenated solution, they eventually undergo triplet-triplet annihilation. This leads to upconverted emission from the 1CBPEA* (at 490nm), with a lifetime of hundreds of microseconds, as shown in Figure 4(a). Moreover, in an aerated solution of CdSe/ACA we detected the characteristic photoluminescence (centered at 1277nm) of singlet oxygen (1O2), which results from the quenching of 3ACA* by freely diffusing ground-state oxygen. In contrast—see Figure 4(b)—we observed no such signal when we used CdSe nanoparticles that were devoid of ACA ligands. This work therefore represents the first example of 1O2 sensitization by semiconductor nanocrystals via a mechanism other than Förster energy transfer. Figure 4. (a) Delayed fluorescence (DF) spectra of triplet-sensitized upconversion emission that occurred as a result of TTET from CdSe/3ACA* to CBPEA, followed by triplet-triplet annihilation. The inset shows the emission decay kinetics of the integrated delayed emission. (b) Near-IR singlet oxygen phosphorescence emission from CdSe-OA and CdSe-OA/ACA in aerated toluene (both at a concentration of 4μM), under 505nm excitation at room temperature. 
In summary, we have conducted proof-of-concept experiments in which we show that CdSe quantum dots are effective surrogates for more-ubiquitous molecular triplet photosensitizers in energy conversion processes. The high photostability, broad absorption spectra, and tunable optical properties of such quantum dots give rise to their superior properties. We have also demonstrated that the behavior of semiconductor quantum dots mimics the classical behavior of molecular triplets, and that triplet excitons in nanocrystals can be efficiently transferred to a bulk solution through successive (and nearly quantitative) triplet energy transfer steps. This photofunctionality may be exploited for numerous triplet excited-state reactions, including photoredox catalysis, singlet oxygen generation, photochemical upconversion, and excited-state electron transfer. Our current research activities include generalizing this approach across a range of semiconductor nanocrystalline materials, probing the ‘molecular’ nature of these materials, and applying these long-lived excited states in a range of photoactivated chemistry. This work was supported by the US Air Force Office of Scientific Research (FA9550-13-1-0106) and the Ultrafast Initiative of the US Department of Energy, Office of Science, Office of Basic Energy Sciences, through Argonne National Laboratory (under contract DE-AC02-06CH11357).
Department of Chemistry North Carolina State University (NCSU) Cédric Mongin received his PhD in organic chemistry (on photoswitchable molecular cages) from the University of Bordeaux, France, in 2013. Since 2014 he has been a postdoctoral researcher in the Castellano research group at NCSU, where his research focuses on the exploitation of semiconductor quantum dots as promising triplet sensitizers. He will begin his independent career later this year, as a faculty member in chemistry at the École Normale Supérieure de Cachan, France. Sofia Garakyaraghi received her BS in chemistry (with a minor in mathematics) from the College of William and Mary in 2013. She is currently a PhD candidate under the guidance of Felix Castellano. She is studying the excited-state dynamics of various molecular- and nanocrystal-based systems with the use of ultrafast spectroscopy. Felix (Phil) Castellano earned a BA from Clark University in 1991 and a PhD from Johns Hopkins University in 1996 (both in chemistry). Following a National Institutes of Health postdoctoral fellowship at the University of Maryland's School of Medicine, he accepted a position as assistant professor at Bowling Green State University in 1998. He was promoted to associate professor in 2004, to professor in 2006, and was appointed director of the Center for Photochemical Sciences in 2011. In 2013 he moved his research program to NCSU, where he is currently a professor. He was appointed as a fellow of the Royal Society of Chemistry in 2015. His current research focuses on metal-organic chromophore photophysics and energy transfer, photochemical upconversion phenomena, solar fuel photocatalysis, energy transduction at semiconductor/molecular interfaces, and excited-state electron transfer processes. 2. T. F. Schulze, T. W. Schmidt, Photochemical upconversion: present status and prospects for its application to solar energy conversion, Energy Environ. Sci. 8, p. 103-125, 2015. 5. C. Mongin, S. Garakyaraghi, N. 
Razgoniaeva, M. Zamkov, F. N. Castellano, Direct observation of triplet energy transfer from semiconductor nanocrystals, Science 351, p. 369-372, 2016.


News Article | November 17, 2016
Site: phys.org

In a recent pair of publications in Nature Communications, teams led by Caltech's Thomas F. Rosenbaum, professor of physics and holder of the Sonja and William Davidow Presidential Chair, report how they have combined magnetic fields and large pressures to not only induce these states at ultra-low temperatures, but also to nudge them between competing types of quantum order. Rosenbaum is an expert on the quantum mechanical nature of materials—the physics of electronic, magnetic, and optical materials at the atomic level—that are best observed at temperatures near absolute zero. In the first of the two papers, published in June and led by Sara Haravifard, now on the faculty at Duke University, the team squeezed a collection of magnetic quantum particles in a pressure cell at temperatures near absolute zero and at magnetic fields more than 50,000 times stronger than the earth's field, and discovered the formation of new types of crystal patterns. The geometry of these crystal patterns not only reveals the underlying quantum mechanics of the interactions between the magnetic particles, but also bears on the kinds of collective states allowed for atomic systems, such as those that flow without friction. In the work in the second paper, published in October and led by Caltech graduate student Yishu Wang and Argonne scientist Yejun Feng, Rosenbaum and colleagues also investigate how materials balance on the knife edge between different types of quantum order. In this case, however, the researchers focus on the relationship between magnetism and superconductivity—the complete disappearance of electrical resistance—and how those properties relate to one another when the material changes state under the pressures achievable in a diamond anvil cell. 
The researchers used the Advanced Photon Source at Argonne National Laboratory to study the magnetic properties of the transition metal manganese phosphide (MnP) to see how it might be possible to manipulate the ordering of the spins—the intrinsic magnetic moments of the electrons—to either enhance or suppress the onset of superconductivity. Superconductivity is a state in a material in which there is no resistance to electric current and all magnetic fields are expelled. This behavior arises from a so-called "macroscopic quantum state" where all the electrons in a material act in concert to move cooperatively through the material without energy loss. Rosenbaum and his colleagues delineated a spiral pattern of the magnetic moments of the electrons in MnP that could be tuned by increasing pressure to induce superconductivity. Here again the particular geometry of the magnetic pattern held the key to the ultimate state that the material reached. "The experiments reveal manifest opportunities to find new low-energy states via substitutions for manganese and phosphorus with neighboring elements from the periodic table such as chromium and arsenic. The taxonomy of allowable quantum states and the ability to manipulate them unites approaches across quantum physics and technology," Rosenbaum says. The first paper, "Crystallization of spin superlattices with pressure and field in the layered magnet SrCu2(BO3)2," was published on June 20, 2016. Coauthors include Daniel M. Silevitch, research professor of physics at Caltech. Work at Caltech was supported by the National Science Foundation. The research in the second paper, entitled "Spiral magnetic order and pressure-induced superconductivity in transition metal compounds" and published on October 6, was funded at Caltech by a U.S. Department of Energy Basic Energy Sciences award. More information: S. Haravifard et al.
Crystallization of spin superlattices with pressure and field in the layered magnet SrCu2(BO3)2, Nature Communications (2016). DOI: 10.1038/ncomms11956 Yishu Wang et al. Spiral magnetic order and pressure-induced superconductivity in transition metal compounds, Nature Communications (2016). DOI: 10.1038/ncomms13037


News Article | October 10, 2016
Site: news.mit.edu

Energy storage devices called supercapacitors have become a hot area of research, in part because they can be charged rapidly and deliver intense bursts of power. However, all supercapacitors currently use components made of carbon, which require high temperatures and harsh chemicals to produce. Now researchers at MIT and elsewhere have for the first time developed a supercapacitor that uses no conductive carbon at all, and that could potentially produce more power than existing versions of this technology. The team’s findings are being reported in the journal Nature Materials, in a paper by Mircea Dincă, an MIT associate professor of chemistry; Yang Shao-Horn, the W.M. Keck Professor of Energy; and four others. “We’ve found an entirely new class of materials for supercapacitors,” Dincă says. Dincă and his team have been exploring for years a class of materials called metal-organic frameworks, or MOFs, which are extremely porous, sponge-like structures. These materials have an extraordinarily large surface area for their size, much greater than that of the carbon materials. That is an essential characteristic for supercapacitors, whose performance depends on their surface area. But MOFs have a major drawback for such applications: They are not very electrically conductive, which is also an essential property for a material used in a capacitor. “One of our long-term goals was to make these materials electrically conductive,” Dincă says, even though doing so “was thought to be extremely difficult, if not impossible.” But the material did exhibit another needed characteristic for such electrodes, which is that it conducts ions (atoms or molecules that carry a net electric charge) very well. “All double-layer supercapacitors today are made from carbon,” Dincă says. “They use carbon nanotubes, graphene, activated carbon, all shapes and forms, but nothing else besides carbon.
So this is the first noncarbon, electrical double-layer supercapacitor.” One advantage of the material used in these experiments, technically known as Ni3(hexaiminotriphenylene)2, is that it can be made under much less harsh conditions than those needed for the carbon-based materials, which require very high temperatures above 800 degrees Celsius and strong reagent chemicals for pretreatment. The team says supercapacitors, with their ability to store relatively large amounts of power, could play an important role in making renewable energy sources practical for widespread deployment. They could provide grid-scale storage that could help match usage times with generation times, for example, or be used in electric vehicles and other applications. The new devices produced by the team, even without any optimization of their characteristics, already match or exceed the performance of existing carbon-based versions in key parameters, such as their ability to withstand large numbers of charge/discharge cycles. Tests showed they lost less than 10 percent of their performance after 10,000 cycles, which is comparable to existing commercial supercapacitors. But that’s likely just the beginning, Dincă says. MOFs are a large class of materials whose characteristics can be tuned to a great extent by varying their chemical structure. Work on optimizing their molecular configurations to provide the most desirable attributes for this specific application is likely to lead to variations that could outperform any existing materials. “We have a new material to work with, and we haven’t optimized it at all,” he says. “It’s completely tunable, and that’s what’s exciting.” While there has been much research on MOFs, most of it has been directed at uses that take advantage of the materials’ record porosity, such as for storage of gases. “Our lab’s discovery of highly electrically conductive MOFs opened up a whole new category of applications,” Dincă says.
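The cycling figure quoted above (under 10 percent loss after 10,000 cycles) implies a very small average loss per cycle. A back-of-the-envelope sketch (my arithmetic for illustration, not a calculation from the paper):

```python
# If a supercapacitor retains a fraction f of its initial capacitance after
# N charge/discharge cycles, the equivalent average per-cycle retention r
# satisfies r**N = f, so r = f**(1/N).
def per_cycle_retention(final_fraction, cycles):
    return final_fraction ** (1.0 / cycles)

r = per_cycle_retention(0.90, 10_000)   # 90% retained after 10,000 cycles
loss_ppm = (1.0 - r) * 1e6              # average loss per cycle, in ppm
```

Working through the numbers, 90 percent retention over 10,000 cycles corresponds to an average loss of only about 10 parts per million of capacitance per cycle.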
Besides the new supercapacitor uses, the conductive MOFs could be useful for making electrochromic windows, which can be darkened with the flip of a switch, and chemoresistive sensors, which could be useful for detecting trace amounts of chemicals for medical or security applications. While the MOF material has advantages in the simplicity and potentially low cost of manufacturing, the materials used to make it are more expensive than conventional carbon-based materials, Dincă says. “Carbon is dirt cheap. It’s hard to find anything cheaper.” But even if the material ends up being more expensive, if its performance is significantly better than that of carbon-based materials, it could find useful applications, he says. This discovery is “very significant, from both a scientific and applications point of view,” says Alexandru Vlad, a professor of chemistry at the Catholic University of Louvain in Belgium, who was not involved in this research. He adds that “the supercapacitor field was (but will not be anymore) dominated by activated carbons,” because of their very high surface area and conductivity. But now, “here is the breakthrough provided by Dinca et al.: They could design a MOF with high surface area and high electrical conductivity, and thus completely challenge the supercapacitor value chain! There is essentially no more need of carbons for this highly demanded technology.” And a key advantage of that, he explains, is that “this work shows only the tip of the iceberg. With carbons we know pretty much everything, and the developments over the past years were modest and slow. But the MOF used by Dinca is one of the lowest-surface-area MOFs known, and some of these materials can reach up to three times more [surface area] than carbons. 
The capacity would then be astonishingly high, probably close to that of batteries, but with the power performance [the ability to deliver high power output] of supercapacitors.” The research team included former MIT postdoc Dennis Sheberla (now a postdoc at Harvard University), MIT graduate student John Bachman, Joseph Elias PhD ’16, and Cheng-Jun Sun of Argonne National Laboratory. The work was supported by the U.S. Department of Energy through the Center for Excitonics, the Sloan Foundation, the Research Corporation for Science Advancement, 3M, and the National Science Foundation.


News Article | October 23, 2015
Site: phys.org

The first direct measurement of resistance to bending in a nanoscale membrane has been made by scientists from the University of Chicago, Peking University, the Weizmann Institute of Science and the Department of Energy's (DOE) Argonne National Laboratory.


News Article | October 25, 2016
Site: www.greencarcongress.com

Microturbine manufacturer Capstone Turbine Corporation announced that the US Department of Energy (DOE) will provide $335,000 in funding for one year to Argonne National Laboratory to conduct hydrogen and syngas testing on Capstone’s C65 and C200 microturbines. In October 2015, Capstone partnered with Argonne National Laboratory for participation in the DOE’s Technologist in Residence (TIR) program. Since then, Capstone and Argonne have explored new focus areas to advance the adoption of low-emission, high-efficiency power solutions. The additional funding from the DOE will help to drive Capstone’s hydrogen and syngas fuel product offering to market and promote further research and development efforts.


News Article | February 23, 2017
Site: www.theenergycollective.com

South Korea (ROK), Japan and the United States all have large nuclear energy programs that are facing a variety of challenges limiting their growth, namely opposition by the nonproliferation industry to wider deployment of enrichment and recycling technologies. There is interest and opportunity to collaborate in developing solutions in areas where challenges overlap. The Global America Business Institute, a Washington, D.C. think tank, recently hosted a forum that gathered high level representatives from all three countries as speakers to an audience made up of people from government, media, industry, non-profit groups and antinuclear organizations to discuss some of the challenges and solutions. Several speakers, including Dr. Jooho Whang from the Korean Nuclear Society, Mr. Nobuo Tanaka from the Sasakawa Peace Foundation, and Dr. John-hyuk Baek from the Korean Atomic Energy Research Institute focused on pyroprocessing. They described it as a field where all three nations could both contribute to and benefit from developing laboratory-tested technology into a more commercial product offering. Pyroprocessing is a high temperature material recycling technology that was developed in the U.S. as part of the long ago discontinued Integral Fast Reactor (IFR) program. One driver for its development was to create a way to extract and reuse mixed actinides from used reactor fuel without passing through any stages that produce separated fissile isotopes. As envisioned, pyroprocessing would also enable material from used fuel rods to be promptly transformed into new fuel rods, thus addressing the contentious issue of building up large stockpiles of material that could be converted into nuclear weapons. Even though it was developed with attention to limiting its use as a potential source of weapons-useful material, pyroprocessing is still tightly controlled under export control regimes. 
It is often a major discussion topic during the negotiations for bilateral cooperation agreements with the U.S. A contingent of the American nonproliferation community stubbornly resists any development that might lead to a closed fuel cycle, ostensibly due to concerns about either material or useful knowledge development. Dr. Ed Lyman from the Union of Concerned Scientists attended the meeting and expressed that point of view by asking a question, muttering to himself and exuding negative body language. Nuclear cooperation agreements with the U.S., known as 123 agreements after the governing section in the Atomic Energy Act, specify terms and conditions for nuclear technology and information cooperation. Each agreement is individually concluded and some contain specific prohibitions on various components of the nuclear fuel cycle. ROK has been interested in pyroprocessing for a number of years, recognizing that it has potential to make long-term used fuel storage easier. The country’s indigenous development effort, however, has been confined to laboratory-scale studies due to the limitations imposed by the original 123 agreement that it signed in 1974. That first agreement expired in 2014. The new ROK-U.S. 123 agreement was signed in the summer of 2015 after an extended period of negotiation. One reason for the extended discussions was that Korea insisted that it needed more flexible options for dealing with used fuel. At the same time, some U.S. negotiators were interested in adding conditions prohibiting both fuel enrichment and recycling. The final agreement finessed the issue; it does not mention enrichment and recycling. Instead, the preamble to the agreement explicitly affirms that nonproliferation treaty signatories like ROK have an inalienable right to develop peaceful applications of nuclear energy. The current agreement is also more balanced than the 1974 version, recognizing that Seoul has matured into a capable nuclear system supplier in its own right.
It is no longer a subservient customer of a world dominating supplier. As Dr. Whang described, the ROK and the U.S. have long been cooperating on pyroprocessing development and testing. Korea produced a plan in 2008 that envisions a pilot-scale sodium fast reactor using fuel materials extracted from used light water fuel by 2028. In support of that goal, the plan envisions a pyroprocessing facility in operation by 2025. In 2011, the two countries signed a ten-year agreement to continue the cooperation, with a completion goal in 2020 that fits well with the ROK’s existing facility construction dates. A later speaker, Mr. Tanaka from Sasakawa Peace Foundation, suggested that Japan might be interested in joining the U.S. and Korea in developing pyroprocessing technology. He mentioned that his country has recently decided to permanently shut down its Monju facility. That is the only currently operating sodium fast reactor in the country. However, Mr. Tanaka described that decision as a positive step that allows more options for closing the nuclear fuel cycle. He suggested it is time for increased U.S.-Japan-Korea cooperation as Japan reduces its commitment to joint projects with France, especially those related to aqueous reprocessing. Japan’s current 123 agreement with the U.S. explicitly allows both enrichment and recycling, but Tanaka warned that there are “antinuclear people” from Japan that are already working in Washington to prevent the automatic extension of the agreement when it expires in 2018. He described the French aqueous reprocessing technology currently used as being more expensive than pyroprocessing. It is also unpopular due to the growing stockpile of separated plutonium. Tanaka mentioned a visit to the Idaho National Laboratory where he toured the hot cells and other components that were developed for the IFR project. 
He saw that there was real equipment that had been fully tested at a laboratory scale and was ready for further refinement and development. In addition to addressing the used fuel challenge that Japan shares with the ROK (and the U.S.), pyroprocessing has potential uses associated with the cleanup and eventual removal of melted fuel at Fukushima Daiichi. By sectioning solidified corium into appropriately sized blocks, it is technically feasible to reduce the oxide material into metal and then to use electrorefining to separate the actinides from fission products. Yoon Chang, a key member of the IFR team who remains active in international technology development efforts, confirmed the feasibility of the concept and pointed out a study titled “Conceptual Design of a Pilot-Scale Pyroprocessing Facility” performed in 2015 for Argonne National Laboratory. Both Korean and Japanese speakers mentioned their interest in jointly pursuing a project that might lead to commercial demonstration of the GE-developed PRISM sodium fast reactor. That design incorporates lessons learned from the IFR project and once passed through an initial design validation review at the U.S. Nuclear Regulatory Commission. There are political reasons for optimism regarding increased cooperation and trade opportunities for technologies like pyroprocessing and Gen IV sodium fast reactors. Rex Tillerson, the Trump administration’s nominee for secretary of state, a department that would play a key role in related decision processes, recognizes that sanctions and technology trade restrictions can harm U.S. interests without effectively altering the behavior of the target of those restrictions. The post Potential for Korea, Japan, U.S. to Collaborate on Pyroprocessing Under Trump appeared first on Atomic Insights.


Nuclear-renewable hybrid energy systems (N-R HESs) can enable low-carbon, on-demand electricity while providing reduced-emission thermal energy for industrial processes. An N-R HES is managed by a single entity that links a nuclear reactor that generates heat, a thermal power cycle for heat-to-electricity conversion, at least one renewable energy source, and an industrial process that uses thermal and/or electrical energy. However, the economic feasibility of these systems may depend on future natural gas prices, electricity market structures, and clean energy incentives. A series of new reports from the Joint Institute for Strategic Energy Analysis (JISEA) and Idaho National Laboratory (INL) examines various hybrid system configurations to provide a basis to identify opportunities for clean energy use and examine the most economically viable configurations. N-R HESs are physically coupled facilities that include both nuclear and renewable energy sources and produce electricity and another product such as a fuel, thermal energy, hydrogen, and desalinated water. Energy and materials flows among energy production and delivery systems are dynamically integrated so that the production rate of each product can be varied. In one report, Generation and Use of Thermal Energy in the US Industrial Sector and Opportunities to Reduce its Carbon Emissions, researchers from INL and the Energy Department’s National Renewable Energy Laboratory (NREL) identify key greenhouse gas (GHG) emission sources in the industrial sector and propose low-emitting alternatives using targeted, process-level analysis of industrial heat requirements. The report examines emissions generated during process heat generation.
The study focuses on the 14 industries with the largest emissions as reported under the Environmental Protection Agency’s Greenhouse Gas Reporting Program in 2014. Approximately 960 plants from those industries represent less than one half of one percent of all manufacturing in the US, but they emit nearly 25% of all industrial sector emissions—5% of total US GHG emissions in 2014. The report also identifies non-GHG-emitting thermal energy sources that could be used to generate heat without emissions. Those potential sources include small modular nuclear reactors, solar heat for industrial processes, and geothermal heat. The report identifies potential opportunities for each source, identifies implementation challenges, and proposes analyses to identify approaches to overcome the challenges. In a second report, Status on the Component Models Developed in the Modelica Framework: High-Temperature Steam Electrolysis Plant & Gas Turbine Power Plant, INL details a modeling and simulation framework to assess the technical and economic viability of an N-R HES. INL, with support from Oak Ridge National Laboratory and Argonne National Laboratory, developed a dynamic, physics-based modeling capability of N-R HESs using the Modelica programming language. The report presents details on newly developed high-temperature steam electrolysis (for hydrogen production) and gas turbine power plant subsystems. Simulations of several case studies show that the suggested control scheme could maintain satisfactory plant operations even under rapid variations in net load. The study finds that the N-R HESs modeled could provide operational flexibility to participate in energy management at the utility scale by dynamically optimizing the use of excess plant capacity. 
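The flexible-operation idea in the INL study, diverting spare capacity to a secondary product when net load dips, can be illustrated with a toy dispatch rule. This is a deliberately simplified Python sketch, not the INL Modelica model; the 600 MW capacity and the load/renewable samples are hypothetical.

```python
# Toy dispatch rule for a nuclear-renewable hybrid energy system.
NUCLEAR_MW = 600.0   # hypothetical fixed nuclear generating capacity

def dispatch(net_load_mw, renewable_mw):
    """Serve the grid first; divert any spare capacity to a co-product
    (e.g. high-temperature electrolysis for hydrogen)."""
    available = NUCLEAR_MW + renewable_mw
    grid = min(net_load_mw, available)
    to_hydrogen = available - grid   # excess capacity -> secondary product
    return grid, to_hydrogen

# (net load, renewable output) in MW for three example time steps
steps = [(500, 100), (700, 50), (300, 200)]
results = [dispatch(load, ren) for load, ren in steps]
```

The point the reports make is that this co-product pathway lets the nuclear plant run at constant output (its most economical mode) while still providing grid flexibility.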
In a third report, The Economic Potential of Three Nuclear-Renewable Hybrid Energy Systems Providing Thermal Energy to Industry, NREL researchers explore the economics of an N-R HES that sells a thermal product (steam or a high-temperature heat transfer fluid) to one or more industrial customers. Under each scenario examined, the economically optimal system configuration includes a nuclear reactor generating a thermal product such as steam or a heat transfer fluid—a configuration that can economically reduce GHG emissions from industry. In addition, configurations that include a thermal power cycle can support resource adequacy for the electricity grid while maximizing production of the thermal energy product if the markets sufficiently incentivize that option. Together, these three reports indicate nuclear renewable hybrid energy systems can reduce industrial emissions and support the power system. The Joint Institute for Strategic Energy Analysis is operated by the Alliance for Sustainable Energy, LLC, on behalf of the US Department of Energy’s National Renewable Energy Laboratory, the University of Colorado-Boulder, the Colorado School of Mines, Colorado State University, Massachusetts Institute of Technology, and Stanford University.


News Article | March 4, 2016
Site: www.nature.com

SAS method for preparing georgeite and zincian georgeite. Copper(ii) acetate monohydrate (4 mg ml−1) and zinc(ii) acetate dihydrate (2.16 mg ml−1) (Sigma Aldrich ≥ 99% Puriss) were dissolved in ethanol (reagent grade, Fisher Scientific) containing 0 vol%, 5 vol% or 10 vol% deionized water. Smithsonite (ZnCO₃) was prepared with zinc(ii) acetate dihydrate (2.16 mg ml−1) in a 10 vol% water and ethanol solution. SAS experiments were performed using apparatus manufactured by Separex. CO₂ (from BOC) was pumped through the system (held at 110 bar, 40 °C) via the outer part of a coaxial nozzle at a rate of 6 kg h−1. The metal salt solution was concurrently pumped through the inner nozzle, using an Agilent HPLC pump at a rate of 6.4 ml min−1. The resulting precipitate was recovered on a stainless steel frit, while the CO₂–solvent mixture passed downstream, where the pressure was decreased to separate the solvent and CO₂. The precipitation vessel has an internal volume of 1 litre. Precipitation was carried out for 120 minutes, and followed by a purge of the system with ethanol–CO₂ for 15 minutes, then CO₂ for 60 minutes under 110 bar and 40 °C. The system was then depressurized and the dry powder collected. Recovered samples were placed in a vacuum oven at 40 °C for 4 hours to remove any residual solvent. Approximately 1.5 g of georgeite is prepared during the 120-minute duration of solution precipitation. Co-precipitation method for preparing malachite and zincian malachite precursors for the standard methanol-synthesis catalysts. The procedure was performed via a semi-continuous process, using two peristaltic pumps to maintain pH. Copper(ii) nitrate hydrate and zinc(ii) nitrate hydrate solutions in deionized water were prepared with copper/zinc molar ratios of 1/0, 1/1 and 2/1. The premixed metal solution (5 l, ~0.5 g ml−1) was preheated along with a separate 5 l solution of 1.5 M sodium carbonate.
The mixed metals were precipitated by combining the two solutions concurrently at 65 °C, with the pH being held between 6.5 and 6.8. The precipitate would spill over from the small precipitation pot into a stirred ageing vessel, held at 65 °C. The precipitate was aged for 15 minutes after all precursor solutions had been used. The precipitate was then filtered and washed to minimize sodium content. The sample was washed with 6 l of hot deionized water, and the sodium content monitored using a photometer. The washing process was repeated until the sodium content showed no change. The sample was then dried at 110 °C for 16 hours. Samples were calcined for 6 hours at 300 °C or 450 °C in a tube furnace under static air. The ramp rate used to reach the desired temperature was 1 °C min−1. Catalyst testing was performed with 0.5 g of the calcined catalyst, pelleted and ground to a sieve fraction of 0.6–1 mm for both the methanol-synthesis and the LTS test reactions. The catalysts were reduced in situ using a 2% H₂/N₂ gas mixture at 225 °C (ramp rate 1 °C min−1), before the reaction gases were introduced. Data reported for the industrial standards are the mean value from four repeat experiments. Error bars are based on the standard deviation from the mean values. Methanol synthesis. Testing was carried out in a single-stream, six-fixed-bed reactor with an additional bypass line. After reduction, the catalysts were subjected to synthetic syngas (CO/CO₂/H₂/N₂ = 6.9/2/67/17.8) at 3.5 l h−1, 25 bar pressure and 195 °C. In-line gas analysis was performed using an FT-IR spectrometer, which detected CO, CO₂, H₂O and CH₃OH. Downstream of the catalyst beds, knockout pots collected effluent produced from the reaction. The contents were collected after each test run and analysed using gas chromatography to evaluate the selectivity of catalysts. The total system flow was maintained using the bypass line. LTS.
Testing was performed in six parallel fixed-bed reactors with a single-stream feed and an additional bypass line. After reduction, the catalysts were subjected to synthetic syngas (CO/CO2/H2/N2 = 1/4/13.75/6.25) at 27.5 bar pressure and 220 °C. The reactant gas stream was passed through vaporized water to give a water composition of 50 vol%. This gives a total gas flow of H2O/CO/CO2/H2/N2 = 50/2/8/27.5/12.5. The standard mass hourly space velocity (MHSV) used for testing was 75,000 l h−1 kg−1. In-line IR analysis was carried out to measure CO conversion. Selectivity was determined by the methanol content within the knockout pots downstream of the reactors. Space and mass velocity variation tests were performed at 65 hours’ time-on-line by altering the flow for each catalyst bed. Relative activities were calculated by altering the flow for each catalyst bed in order to achieve 75% CO conversion after 75 hours’ time-on-line. The total system flow was maintained using the bypass line. Powder X-ray diffraction (XRD). Measurements were performed using a PANalytical X’Pert Pro diffractometer with a Ni-filtered Cu Kα radiation source operating at 40 kV and 40 mA. Patterns were recorded over the range 10–80° 2θ using a step size of 0.016°. All patterns were matched using the International Centre for Diffraction Data (ICDD) database. An in situ Anton Paar XRK900 cell (with an internal volume of ~0.5 l) was used to monitor the formation of metallic copper during the reduction of the CuO/ZnO materials. A flow of 2% H2/N2 (60 ml min−1) was passed through the sample bed while the cell was heated to 225 °C (ambient temperature to 125 °C, ramp rate 10 °C min−1; 125–225 °C, ramp rate 2 °C min−1). The sample was then cooled to room temperature and a 20–100° 2θ scan performed. Copper crystallite size was estimated from a peak-broadening analysis of the XRD pattern using Topas Academic30, and the volume-weighted column height (LVol) was calculated according to ref. 31. 
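The crystallite-size estimate above was obtained by whole-pattern fitting in Topas. As a rough illustration of the underlying principle, a single-reflection Scherrer estimate can be sketched as follows; the reflection position, peak width and shape factor K are illustrative assumptions, not values from the study:

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_angstrom=1.5406, K=0.9):
    """Estimate crystallite size (in Angstrom) from the broadening of a
    single reflection via the Scherrer equation, size = K*lambda/(beta*cos(theta)).
    fwhm_deg is the instrument-corrected full width at half maximum in degrees;
    the default wavelength is Cu K-alpha, as used for the XRD measurements."""
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle, radians
    beta = math.radians(fwhm_deg)               # peak breadth, radians
    return K * wavelength_angstrom / (beta * math.cos(theta))

# Illustrative numbers: the Cu(111) reflection near 43.3 degrees 2-theta
# with a 1-degree FWHM gives a size of roughly 85 Angstrom (~8.5 nm).
size = scherrer_size(43.3, 1.0)
```

A whole-pattern approach such as the one used with Topas fits all reflections simultaneously and separates size from strain broadening, which a single-peak Scherrer estimate cannot do.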
X-ray absorption fine structure (XAFS) spectroscopy. K-edge XAFS studies of copper and zinc were carried out on the B18 beamline at the Diamond Light Source, Didcot, UK. Measurements were performed using a quick extended (QE) XAFS set-up with a fast-scanning silicon (111) double-crystal monochromator. The time resolution of the spectra reported herein was 1 minute per spectrum (kmax = 14), and on average three scans were acquired to improve the signal-to-noise ratio of the data. All ex situ samples were diluted with cellulose and pressed into pellets to optimize the effective edge step of the XAFS data, and measured in transmission mode using ion-chamber detectors. All transmission XAFS spectra were acquired concurrently with the appropriate reference foil (copper or zinc) placed between It and Iref. XAS data processing and EXAFS analysis were performed using IFEFFIT32 with the Horae package33 (Athena and Artemis). The FT data were phase-corrected for oxygen. The amplitude-reduction factor, S02, was derived from EXAFS data analysis of a known copper reference compound, namely tenorite (CuO). For the fitting of the local coordination geometry of georgeite and malachite, the Jahn–Teller distorted copper–oxygen bond was difficult to observe because of thermal disorder, and so was fixed in the model. X-ray pair distribution function (PDF) analysis. Synchrotron X-ray PDF data were collected on the 11-ID-B beamline at the Advanced Photon Source, Argonne National Laboratory, and on the I15 beamline at the Diamond Light Source. Powder samples were packed into Kapton capillaries with an internal diameter of 1 mm. Room-temperature powder X-ray diffraction data were collected at a wavelength of 0.2114 Å (11-ID-B) and 0.1620 Å (I15) using the Rapid Acquisition PDF method34. The scattering data (0.5 ≤ Q ≤ 22 Å−1) were processed into PDF data using the program GudrunX35. Two notations of PDF data are presented in this manuscript: G(r) and D(r) (ref. 36). 
The total radial distribution function, G(r), is the probability of finding a pair of atoms, weighted by the scattering power of the pair, at a given distance, r; it is the Fourier transform of the experimentally determined total structure factor. D(r) is re-weighted to emphasize features at high r, such that D(r) = 4πrρ0G(r), where ρ0 is the average number density of the material (in atoms Å−3). Fourier-transform infrared spectroscopy (FT-IR). Analysis was carried out using a Jasco FT/IR 660 Plus spectrometer in transmission mode over a range of 400–4,000 cm−1. Catalysts were supported in a pressed KBr disk. Raman spectroscopy. Raman spectra were obtained using a Renishaw inVia spectrometer equipped with an argon ion laser (λ = 514 nm). Thermal gravimetric analysis (TGA). TGA measurements were performed using a SETARAM Labsys analyser with sample masses of about 20 mg, at 1 °C min−1 under air with a flow rate of 20 ml min−1. Evolved gas analysis (EGA). EGA experiments were performed using a Hiden CATLAB under the same conditions used in the TGA experiments. Copper surface area analysis. This was determined by reactive frontal chromatography. Catalysts were crushed and sieved to a particle size of 0.6–1 mm and packed into a reactor tube. The catalyst was purged under helium for 2 minutes at 70 °C before being heated under a 5% H2 reduction gas to 230 °C (8 °C min−1) for 3 hours. The catalyst was then cooled to 68 °C under helium before the dilute 2.5% N2O reaction gas was added with a flow rate of 65 ml min−1. The formation of N2 from the surface oxidation of the copper catalyst by N2O was measured downstream using a thermal conductivity detector (TCD). Once the surface of the copper is fully oxidized, there is a breakthrough of N2O, detected on the TCD. From this, the number of oxygen atoms that are chemisorbed on the copper surface can be determined. The number of exposed surface copper atoms and the copper surface area can then be derived. 
Quoted surface areas are calculated using discharged sample mass. Note that recent work37, 38 has shown that, if the catalyst is exposed to partial pressures of H2 exceeding 0.05 bar, then partial reduction of ZnO at the copper interface can occur. This will affect copper surface area results, owing to N2O oxidizing both copper and ZnO. In these cases, alternative techniques, such as a H2 thermal conductivity detector, will give more accurate data with respect to copper surface area37, 38. Copper surface area analysis after exposure to LTS conditions. Copper surface area analysis of the fresh catalysts was carried out on a Quantachrome ChemBET 3000. Sample (100 mg) was packed into a stainless-steel U-tube and purged with high-purity helium for 5 minutes. Samples were reduced using 10% H2/Ar (30 ml min−1), heated to 140 °C at 10 °C min−1, before heating further to 225 °C at 1 °C min−1. The resulting catalyst was held at this temperature for 20 minutes to ensure that complete reduction took place. Note that, under this partial pressure of H2, it has been reported that ZnO species in contact with copper can partially reduce37, 38. Residual H2 was flushed from the system by switching the gas line back over to helium (80 ml min−1), while holding the sample at 220 °C for another 10 minutes. The temperature was then reduced to 65 °C for N2O pulsing (BOC AA Grade). A programme of 12 pulses of 113 μl N2O with a 5-minute stabilization time between pulses was carried out, followed by three pulses of N2 for calibration. Unreacted N2O was trapped before reaching the detector using a molecular sieve 5A (pelleted, 1.6 mm, Sigma Aldrich) trap. The copper surface area was determined from the amount of N2 emitted and the post-reaction analysis of catalyst mass, using the Avogadro constant NA = 6.022 × 1023 mol−1. 
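The surface-area arithmetic described above can be sketched as follows. The 2:1 Cu:N2O stoichiometry (half-monolayer oxygen coverage) and the surface copper density are taken from the text; the evolved N2 volume, the sample mass and the ideal-gas molar volume convention are illustrative assumptions:

```python
N_A = 6.022e23              # Avogadro constant, mol^-1 (value used in the text)
CU_SURF_DENSITY = 1.47e19   # surface copper atoms per m^2 (value used in the text)
MOLAR_VOLUME_ML = 24465.0   # ideal-gas molar volume at 25 C, ml mol^-1 (assumed convention)

def cu_surface_area(n2_volume_ml, sample_mass_g):
    """Copper surface area (m^2 per g of catalyst) from the N2 evolved
    during N2O chemisorption.  Assumes 2 Cu(s) + N2O -> Cu2O(s) + N2,
    i.e. each N2 molecule titrates two surface copper atoms."""
    n_n2 = n2_volume_ml / MOLAR_VOLUME_ML    # mol of N2 evolved
    n_cu_surface = 2.0 * n_n2                # mol of surface copper atoms
    atoms_cu = n_cu_surface * N_A            # number of surface copper atoms
    return atoms_cu / (CU_SURF_DENSITY * sample_mass_g)

# Illustrative: 0.30 ml of N2 from a 100 mg sample gives ~10 m^2 g^-1.
area = cu_surface_area(0.30, 0.100)
```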
The key assumptions are that the amount of N2 emitted amounts to half a monolayer’s coverage of oxygen, and that the surface density of copper is 1.47 × 1019 atoms m−2. The volume of N2 produced was quantified using a TCD. After copper surface area analysis, the samples were kept under N2 and transferred to an LTS reactor. Ageing was carried out in a single fixed-bed reactor equipped with a bypass line. CO, N2, CO2 and H2 were introduced to the catalyst bed via mass-flow controllers (Bronkhorst). Water of high-performance liquid chromatography (HPLC) grade was passed through a liquid-flow controller (Bronkhorst) and then into a controlled evaporator mixer (Bronkhorst) that was heated to 140 °C. N2 was fed through the vaporized water to give a dilute syngas mixture (H2O/CO/CO2/H2/N2 = 25/1/4/13.75/56.25). This mixture was introduced at 220 °C after re-reduction of the catalyst. The gas flows were controlled to achieve an MHSV of 30,000 l h−1 kg−1. After ageing for 40 hours, the samples were transferred back to the Quantachrome ChemBET chemisorption analyser, where, after re-reduction, the copper surface areas of the aged catalysts were measured as described above. Scanning transmission electron microscopy (STEM). Samples for STEM characterization were dry-dispersed onto holey carbon TEM grids. They were examined in an aberration-corrected JEOL ARM-200CF scanning transmission electron microscope operating at 200 kV in bright-field and dark-field STEM imaging modes. Reliable electron microscopy results could be obtained only from the set of zincian-georgeite- and zincian-malachite-derived materials, as these were found to be stable under the vacuum environment of the microscope, and largely unaffected by electron-beam irradiation. By contrast, the copper-only georgeite precursor materials were highly unstable under vacuum conditions (even without electron-beam irradiation), turning from a blue to dark-green colour, probably owing to the loss of occluded water. 
The corresponding zincian-georgeite materials showed no such colour transformation under vacuum. Environmental transmission electron microscopy (ETEM). Samples for reduction during characterization by ETEM were dry-dispersed on heater chips (DENSsolutions through-hole) and then mounted on a DENSsolutions SH30 heating holder. The holder with sample was inserted into an FEI Titan 80-300 environmental transmission electron microscope operated at 300 kV (ref. 39). The reduction of the samples was performed in situ as follows, for both the georgeite and the malachite precursors: a flow of H2 was let into the microscope, building up a pressure of 2 mbar. The sample was heated from room temperature to 150 °C using a heating ramp rate of 10 °C min−1. The final heating to 225 °C was done at 1 °C min−1. The oxidation state of copper was monitored by electron energy-loss spectroscopy (EELS) during the reduction treatment using a Gatan Tridiem 866 spectrometer attached to the microscope. After an extended in situ treatment of several hours at 225 °C, phase-contrast lattice imaging was performed at elevated temperature in an H2 atmosphere for both the georgeite and malachite samples, and recorded using a Gatan US 1000 charge-coupled-device camera. X-ray photoelectron spectroscopy (XPS). A Kratos Axis Ultra DLD system was used to collect XPS spectra, using a monochromatic Al Kα X-ray source operating at 120 W. Data were collected with pass energies of 160 eV for survey spectra, and 40 eV for the high-resolution scans. The system was operated in hybrid mode, using a combination of magnetic immersion and electrostatic lenses, and spectra were acquired over an area of approximately 300 × 700 μm2. A magnetically confined charge-compensation system was used to minimize charging of the sample surface, and the resulting spectra were calibrated to the C(1s) line at 284.8 eV; all spectra were taken with a 90° take-off angle. 
A base pressure of ~1 × 10−9 Torr was maintained during collection of the spectra. Gas treatments were performed in a Kratos catalysis cell, which mimics the conditions of a normal reactor vessel, allowing the re-creation of reactor conditions and analysis of the chemical changes taking place on the catalyst surface. Briefly, the cell consists of a fused quartz reactor vessel contained within a stainless-steel vacuum chamber (base pressure ~10−8 Torr after baking). Samples were heated at a controlled ramp rate of 2 °C min−1 to a temperature of 225 °C using a Eurotherm controller. The catalysts were exposed to an atmosphere of 2% H2 in nitrogen with a flow rate of 30 ml min−1, controlled using MKS mass-flow controllers, during the heating ramp, during the 20-minute isotherm at 225 °C, and also while the catalyst was cooled to 25 °C. The samples were analysed before and after gas treatment without breaking the vacuum. Inductively coupled plasma mass spectrometry (ICP-MS) and carbon–hydrogen–nitrogen (CHN) analysis. These analyses were provided as a commercial service by Warwick Analytical Services. Helium pycnometry. This analysis was provided as a commercial service by MCA Services.


News Article | December 14, 2016
Site: www.nature.com

The design of the experimental system is based on simple physical considerations. An applied electric field E will impose a force on a net charge q, causing a displacement Δx along the field. For any residue (or other group of atoms), we can associate a transition dipole moment Δμ = Σi qiΔxi with each motion, where i is an index over atoms (in units of elementary charge times distance (eÅ); 1 eÅ ≈ 4.8 D). The energetic effect due to the electric field is −Δμ · E, and its significance depends on how it compares to the thermal energy kBT; for example, a weakly populated excited state increases in occupancy by ~2.7-fold when its energy relative to the ground state is lowered by 1 kBT. As shown in Extended Data Table 1, fields of ~1 MV/cm are in the right range for our purpose. On the basis of this, we designed a ±10 kV power supply (Spellman HV) which charges a pulse generator (IXYS Colorado), from which high-voltage (HV) pulses are triggered by a TTL signal from the synchrotron signal-processing hardware. This establishes precise timing between X-ray and electric field (EF) pulses. Integrity of the conductive path to the tip of the capillary and its associated propagation delay were determined using an HV probe. We designed a number of safety features. Custom RG-11 high-voltage cables (Gater Industries) were high-potential tested by the power group at the Advanced Photon Source and were approved for use up to 8 kV DC. EF pulses were generated in ‘half-bridge’ mode, where residual charge stored by the HV cables after an EF pulse is drained through a large capacitor connected to ground. The counter electrode was designed to avoid any path through air of less than 1 cm to the grounded cable connector exterior. The inhibit feature of the power supply was connected to an interlock system at the beamline facility, ensuring that the system is de-energized upon personnel entry into the beamline hutch. 
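The order-of-magnitude reasoning above can be sketched numerically. The 1 eÅ dipole and 1 MV/cm field are the illustrative values discussed in the text; the temperature is taken as the 289 K used for data collection:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K

def field_energy_in_kT(delta_mu_eA, field_MV_per_cm, T=289.0):
    """Energy -dmu.E (in units of k_B*T) for a transition dipole of
    delta_mu_eA (in e*Angstrom) aligned with a field of field_MV_per_cm."""
    dmu = delta_mu_eA * E_CHARGE * 1e-10      # dipole moment, C*m
    field = field_MV_per_cm * 1e8             # field strength, V/m
    return dmu * field / (K_B * T)

def occupancy_gain(delta_E_kT):
    """Boltzmann fold-change in the occupancy of a weakly populated excited
    state when its energy relative to the ground state is lowered by
    delta_E_kT (in units of k_B*T)."""
    return math.exp(delta_E_kT)

# A 1 eA dipole in a 1 MV/cm field at 289 K corresponds to ~0.4 k_B*T,
# and lowering an excited state by 1 k_B*T raises its occupancy ~2.7-fold.
u = field_energy_in_kT(1.0, 1.0)
gain = occupancy_gain(1.0)
```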
Power supply voltage and the counter electrode backpressure were controlled remotely using a network-connected microcontroller and custom software. The RG-11 cable is terminated on one side with an HV connector (LEMO) (for the pulse generator) and on the other side with an SHV connector (for the housing of the top counter electrode) (Extended Data Fig. 1). The housing was prototyped in-house using a three-dimensional printer (Formlabs, Somerville, MA) and custom fabricated commercially (PolyJet technology, PartSnap, Irving, TX). The housing contains a cylindrical glass insert fitted with a silicone gasket and a thin metal wire (75 μm diameter, Cooner Wire, Chatsworth, CA) with a dielectric coating, except at the tip. The wire was guided to the crystal through a glass capillary (0.5 or 1.0 λ (140 or 200 μm, respectively) orifice, Drummond Scientific, Broomall, PA). The electrode housing was filled with crystallization solution and contains a small port (blue arrow, Extended Data Fig. 1b) that allowed for computer-controlled backpressure for slow infusion of liquid through the top electrode to maintain crystal hydration. Bottom electrodes were prepared from glass capillaries (0.25 λ, Drummond Scientific) with a ~100 μm orifice, cut in half and aminosilanized at the tip surface to improve adhesive capacity. A 75 μm diameter uncoated stainless steel wire (Cooner Wire) was threaded until just below this orifice. The capillary was inserted in a reusable goniometer base (MiTeGen, Ithaca, NY) and soaked, in inverted position, in crystallization solution. For LNX2PDZ2, we obtained an expression strain (BL21(DE3)-R3-pRARE2) and plasmid construct (pNIC28-LNX2PDZ2) from the Structural Genomics Consortium (SGC)40 (http://www.thesgc.org; construct identifier LNX2A-c033). pNIC28-LNX2PDZ2 includes residues 336–424 from Homo sapiens LNX2, with the F338L mutation described by the SGC, an N-terminal cloning artefact (334–335), and a C-terminal ligand motif Glu-Ile-Glu-Leu (425–428). 
LNX2PDZ2 protein was expressed as an N-terminal hexahistidine fusion in BL21(DE3)-R3-pRARE2 and purified by nickel affinity chromatography (Ni-NTA agarose, Qiagen), cleavage of the tag with TEV protease (1 U ProTEV per 50 μg protein) during dialysis into 50 mM HEPES pH 7.5, 500 mM NaCl, 5% glycerol, 0.5 mM TCEP, and size exclusion chromatography, and was concentrated to 20 mg/ml for storage. Two protocols yielded suitable crystals. In the first, 3.5 mg/ml protein was dialysed twice (12 and 6 h) against 3 l of 5% glycerol, and crystallized by the hanging-drop vapour diffusion method in 19% PEG-300, 48 mM citric acid, 35 mM NaH2PO4 and 5% glycerol at 20 °C. Drops were set up by mixing 0.55 μl protein and 1.0 μl buffer. In the second protocol, concentrated protein was diluted to 3.5 mg/ml with 10% glycerol and crystallized by hanging-drop vapour diffusion in 27–31% PEG-300, 43 mM citric acid and 35 mM NaH2PO4 at 20 °C (drops, 1.0 μl protein and 1.0 μl well solution). Crystals were manually mounted under a stereomicroscope across the orifice of the pre-soaked bottom electrode, attached to a magnetic goniometer base. Sylgard 184 (Dow-Corning) was prepared to just before full curing and applied around the crystal using a piece of monofilament fishing line (Cajun Line, 0.012″ diameter, Zebco, Tulsa, OK), taking care not to overcoat the crystal. A MiTeGen polyester sleeve, containing 15 μl of 50/50 crystallization solution and water at one end, was slid over the electrode to maintain suitable vapour pressure for the crystal. The mounted electrode system was placed on the goniometer and the final experimental configuration (Extended Data Fig. 1e, g) was achieved in three steps: (1) coarse relative positioning of the two-electrode system using an XYZ translation stage (Thorlabs), (2) cutting the MiTeGen sleeve to expose the crystal, and (3) rapid, camera-guided approach of the top counter electrode until a liquid junction with the crystal was established (Supplementary Video 2). 
EF-X data were collected at BioCARS (14-ID) at the Advanced Photon Source, Argonne National Laboratory. The cryostream temperature was set to 289 K, and data were collected using a Rayonix MX340-HS detector with undulators U23 at 10.74 mm and U27 at 15.85 mm (wavelength range of 1.02–1.16 Å). The beam size was approximately 90 μm (h) × 60 μm (v) and slit settings were 200 μm (h) × 70 μm (v). Data collection proceeded in four 180° passes with 4°, 4°, 2° and 1° steps, respectively, and with matching offsets to maximize coverage of reciprocal space (Extended Data Fig. 3 and Supplementary Table 2). Laue data were processed using Precognition and Epinorm software, with concurrent processing of OFF and ON frames. The data were integrated to 1.8 Å (Supplementary Table 3) and merged in space group P1 using the C2 unit cell dimensions. The orientation of the imposed electric field relative to the crystal lattice was established directly from indexed diffraction patterns. Data for the high-resolution room-temperature (277 K) structure (Fig. 4g and Extended Data Table 3) were collected at the Stanford Synchrotron Radiation Lightsource (SSRL, 11-1) using the PILATUS 6M PAD detector from a single crystal, and indexed, integrated, scaled and merged in HKL2000 (ref. 41) (HKL Research). The data showed little radiation damage (HKL2000 radiation-damage coefficients of 0.01–0.03; values >0.1–0.15 indicate significant damage42) or non-isomorphism (coefficient 0.001). We refined the structure of LNX2PDZ2 in the absence of electric field (OFF) first, using the high-resolution (1.1 Å) data set collected at SSRL at 277 K, with initial phases obtained by molecular replacement using a cryo structure of LNX2PDZ2 (PDB accession 2VWR). After early simulated annealing, a model was refined by alternating rounds of automated refinement in PHENIX43 and manual adjustments in Coot44. Alternate conformations were placed where supported by averaged-kick45 and Fo − Fc maps. 
The final model had no Ramachandran outliers. Further refinement yielded a model without alternate conformations, also without Ramachandran outliers (Extended Data Table 3). Initial phases for the 289 K OFF data set collected at BioCARS were determined by direct placement of the high-resolution single-conformer model of LNX2PDZ2, with small differences in unit cell dimensions refined by rigid-body refinement in PHENIX. Solvent molecules and alternate conformations were modelled in Coot, with real-space refinement to relieve backbone strain, and limited additional refinement in PHENIX (Extended Data Table 2). Anisotropic displacement parameters were refined only for residues with substantial difference density at atomic positions. Note that for calculation of internal difference maps, it is essential that the model used for phasing be refined in the space group of the OFF crystal lattice, to guarantee exact positions of symmetry elements. We subsequently expanded the refined model to the asymmetric unit of the reduced-symmetry space group using PDBSET (CCP4 6.4.0)46. Difference map Fourier coefficients were calculated directly from merged structure factors using custom MATLAB (Mathworks Inc.) scripts performing the following operations: (1) match ON and OFF structure factors and calculate the differences ΔF = FON − FOFF, after applying correction coefficients for absorption anisotropy derived from the OFF data (below); (2) obtain phases of the corresponding structure factors from the C2 OFF model expanded into C1 using PDBSET (CCP4 6.4.0); (3) calculate weights as previously described29, 47 and modified48 to include a term reducing the contribution of any single structure factor difference. Following Schmidt et al.29, 31, the difference density maps are improved if structure factors corresponding to large lattice spacings are rejected (here d > 4 Å), since EF-X typically produces small-scale electron density differences. 
For anisotropic absorption correction, locally scaled structure factors were computed (local scaling49 as implemented in SOLVE50). Map coefficients were calculated in PHENIX (FFT) with a grid spacing of 0.3 Å. Absolute difference density was integrated in UCSF Chimera27, with calculations based on the C2 OFF model, expanded to C1. Since the EF breaks C2 symmetry, refinement of the up and down models was carried out in the P1 space group, with the C2 OFF model as a starting point. A P1 unit cell was chosen containing one up and one down chain, requiring a rotation around the c* axis by arctan(b/a) (here: 31.1°). To do this, the C2 OFF model was ‘expanded’ in PDBSET (CCP4 6.4.0) using symmetry operations (1) X, Y, Z, and (2) (1 − X), Y, (1 − Z), and the resulting model was rotated in PDBSET by the specified angle. Extrapolated structure factors were calculated as Fext = FOFF + NΔF, where N = 1/(1 − f) is the extrapolation factor30. For traditional pump-probe experiments, f is interpreted as the fraction of molecules excited by an optical pulse30; here, it increases the effective population of excited states, facilitating structure refinement. N was chosen as a trade-off between two criteria: map quality, which deteriorates with increasing N, and the appearance of difference electron density peaks consistent with internal difference maps, which initially increases with increasing N (systematic optimization of N in a site-specific manner will be explored in future work). Refinement was performed mostly manually in Coot, with determination of R factors in PHENIX, combined with bulk solvent scaling and occupancy refinement every 5–10 modifications (Extended Data Fig. 5a). Near completion, a few rounds of overall coordinate refinement (PHENIX, 10–15 microcycles, small geometric weights) were included. Reflections in the Rfree test set were included in final map calculations. Electron density maps and composite omit maps were calculated in PHENIX with 0.3 Å grid spacing and default settings. 
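Assuming the conventional amplitude-level extrapolation form Fext = FOFF + N(FON − FOFF) with N = 1/(1 − f) (a sketch of the step described above, not the authors' actual MATLAB script), the calculation can be illustrated as:

```python
import numpy as np

def extrapolated_sf(f_off, f_on, f):
    """Extrapolated structure-factor amplitudes
    F_ext = F_OFF + N * (F_ON - F_OFF), with extrapolation factor
    N = 1/(1 - f).  Here f plays the role of the effective fraction of
    molecules in the field-perturbed (excited) state, so the extrapolation
    amplifies the ON-OFF differences."""
    N = 1.0 / (1.0 - f)
    return f_off + N * (f_on - f_off)

# Two illustrative reflections: f = 0.5 gives N = 2, doubling the differences.
f_off = np.array([100.0, 50.0])
f_on = np.array([102.0, 49.0])
f_ext = extrapolated_sf(f_off, f_on, 0.5)   # -> [104., 48.]
```

In practice N is tuned, as described above, against the competing criteria of map quality and the emergence of difference-density features.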
The refinement statistics are given in Extended Data Table 2. Eleven pairs of high-resolution (≤2 Å) X-ray structures of PDZ domains with and without ligand were selected: NHERF-1PDZ1: PDB accessions 1G9O, 1GQ4; PALS-1PDZ: 4UU6, 4UU5; Tiam-1PDZ: 3KZD, 4GVC; ZO-1PDZ1: 4OEO, 4OEP; ErbinPDZ: 2H3L, 1MFG; DishevelledPDZ: 2F0A, 1L6O; PDZK-1PDZ3: 3R68, 3R69; ShankPDZ: 1Q3O, 1Q3P; GRIP-1PDZ6: 1N7E, 1N7F; PTP-1EPDZ2: 3LNX, 3LNY; PSD-95PDZ3 (R.R. et al., unpublished observations). Structures were aligned in PyMOL, using ‘super’ for backbone atoms, first to the down state of LNX2PDZ2, and then within each pair (Extended Data Fig. 5d legend). For backbone atoms with matching positions in LNX2PDZ2, displacements (Δr) from unbound (apo) to bound (liganded) were then calculated. Atoms with |Δr| < 0.1 Å were excluded from analysis. Average displacements displayed in Fig. 5c represent the median magnitude and average direction of apo to liganded displacement over homologues. To assess statistical significance of correlations between various experimental measures, the observed quantities were transformed to stabilize variance, reduce kurtosis and approximate a normal distribution. IADDAT values (Fig. 3) were square-root transformed, and B-factors (Fig. 3) and displacements (Fig. 5) were log-transformed. To assess the statistical significance of correlations, sample correlation coefficients were then Fisher Z-transformed, and tested for deviation from a standard normal distribution. For the statistical comparison of the data in Fig. 5b and e, individual residues can be considered independent, yielding P < 0.001. More conservatively, one can also take the shorter of the correlation length scales of B-factors and observed displacements (~2 residues) as a measure of internal data dependence. This yields a reduced number of independent samples and P < 0.01. Thus, the association of Fig. 5b and e is robust to local internal correlations in the data. 
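The Fisher Z-transform test described above can be sketched as follows; the correlation value and number of residues are illustrative, not values from the study:

```python
import math

def fisher_z_pvalue(r, n):
    """Two-sided p-value for the null hypothesis rho = 0.  The Fisher
    Z-transform z = atanh(r) * sqrt(n - 3) is approximately standard
    normal under the null; the survival function is evaluated via erfc."""
    z = math.atanh(r) * math.sqrt(n - 3)
    return math.erfc(abs(z) / math.sqrt(2.0))

# An illustrative sample correlation of 0.6 over 40 independent residues
# is highly significant; with fewer effective samples (internal correlation
# length ~2 residues, as discussed above), significance is reduced.
p_full = fisher_z_pvalue(0.6, 40)
p_reduced = fisher_z_pvalue(0.6, 20)
```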
No statistical methods were used to predetermine sample size. Structure factors and refined models have been deposited in the PDB under accessions 5E11, 5E1Y, 5E21 and 5E22.


News Article | August 22, 2016
Site: www.materialstoday.com

In a new study, researchers from the Cambridge Crystallographic Data Centre (CCDC) in the UK and the US Department of Energy’s (DOE’s) Argonne National Laboratory have teamed up to capture neon within a porous crystalline framework. Neon is the most unreactive element and is a key component in semiconductor manufacturing, but it has never been studied within an organic or metal-organic framework (MOF) until now. These new results, which include critical studies carried out at the Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne, also point the way towards a more economical and greener industrial process for neon production. Although best known for its iconic use in neon signs, industrial applications of neon have recently become dominated by its use in excimer lasers to produce semiconductors. Despite being the fifth most abundant element in the atmosphere, the cost of pure neon gas has risen significantly over the years, increasing the demand for better ways to separate and isolate the gas. In 2015, CCDC scientists presented a talk at the annual American Crystallographic Association (ACA) meeting on the array of elements that have been studied within an organic or metal-organic environment. They challenged the crystallographic community to find the next and possibly last element to be added to the Cambridge Structural Database (CSD). A chance encounter at that meeting with Andrey Yakovenko, a beamline scientist at the APS, resulted in a collaborative project to capture neon – the 95th element to be observed in the CSD. Neon’s low reactivity, along with the weak scattering of X-rays due to its relatively low number of electrons, means that conclusive experimental observation of neon captured within a crystalline framework is very challenging. 
By conducting in situ high-pressure gas flow experiments at X-ray Science Division beamline 17-BM at the APS using the X-ray powder diffraction technique at low temperatures, the researchers have now managed to elucidate the structures of two different metal-organic frameworks (MOFs) with neon gas captured inside them. “This is a really exciting moment representing the latest new element to be added to the CSD, and quite possibly the last, given the experimental and safety challenges associated with the other elements yet to be studied,” said Peter Wood, senior research scientist at the CCDC and lead author of a paper on this work in Chemical Communications. “More importantly, the structures reported here show the first observation of a genuine interaction between neon and a transition metal, suggesting the potential for future design of selective neon capture frameworks.” The structure of neon captured within a MOF known as NiMOF-74, a porous framework built from nickel metal centers and organic linkers, shows clear nickel-to-neon interactions forming at low temperatures. These interactions are significantly shorter than would be expected from a typical weak contact. “These fascinating results show the great capabilities of the scientific program at 17-BM and the Advanced Photon Source,” said Yakovenko. “Previously we have been doing experiments at our beamline using other much heavier, and therefore easily detectable, noble gases such as xenon and krypton. However, after meeting co-authors Pete, Colin, Amy and Suzanna at the ACA meeting, we decided to perform these much more complicated experiments using the very light and inert gas – neon. In fact, only by using a combination of in situ X-ray powder diffraction measurements, low temperature and high pressure have we been able to conclusively identify the neon atom positions beyond reasonable doubt”. 
“This is a really elegant piece of in situ crystallography research and it is particularly pleasing to see the collaboration coming about through discussions at an annual ACA meeting,” said Chris Cahill, past president of the ACA and professor of chemistry at George Washington University. This story is adapted from material from the Cambridge Crystallographic Data Centre, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier. Link to original source.


News Article | December 7, 2015
Site: phys.org

Cast iron is a material of choice for use in the transportation and machinery industries, which rely on its resistance to wear, deformation, and rusting to design high-performance bridges, tools, and engine parts. But the manufacturing process is as much art as science, producing good results yet not capturing cast iron's full potential. Controversy still exists over the correlation between manufacturing casting parameters and desirable properties. Limited by typical industrial 2-D imaging techniques or time-consuming 3-D laboratory studies, researchers have been unable to pinpoint the exact processing parameters needed to elicit the ideal properties for each cast iron application. Finding an easier way to peer deep inside the alloy to get a definitive answer could be a boon for consumers as well as give the U.S. industry a competitive advantage. According to a study published in the journal Scripta Materialia, high-energy synchrotron X-rays can provide that insight. "By understanding the structure, it will be possible to develop alloys with improved mechanical and thermal properties. This implies that for applications, such as vehicle engine and engine components, one could use less material and reduce overall vehicle weight that would translate into fuel savings," said Dileep Singh, Group Leader of Thermal-Mechanical Research at Argonne National Laboratory's Center for Transportation Research and the technical lead of this study at Argonne. For the transportation industry, the ability to modify manufacturing processes to create high-performance materials could aid in the development of more fuel-efficient engines, or engine parts that can withstand heat better and so have longer lifespans. 
"Researchers at Caterpillar are actively seeking to improve our understanding of cast iron alloys in order to provide innovative product solutions to our customers," said Richard Huff, a Technical Team Leader with Caterpillar Inc., which supplied engine alloy castings for use in the proof-of-principle study. The study results showed that high-energy X-ray tomography can reveal previously unknown behaviors of graphite in cast iron, such as the growth of nodules, as the alloy undergoes various treatments. The X-rays can also unambiguously classify the particle types involved, which is critical to identifying the structure-process relationship. These insights hold the key to manipulating the atomic structure of the graphite through manufacturing treatments such as changing the chemistry of the melt and altering the inoculants added to the liquid cast iron. The research team included Huff from Caterpillar and Argonne researchers Singh, Chihpin Chuang and John Hryn from the Energy Systems Division, and Jon Almer and Peter Kenesei from the X-ray Science Division. The research made use of the Advanced Photon Source (APS), a DOE Office of Science User Facility based at the U.S. Department of Energy's Argonne National Laboratory. Synchrotron X-ray analysis has several advantages over the current techniques used to evaluate graphite microstructure. The structure of graphite, its spatial arrangement in the alloy, and its phase connectivity are key factors that determine the properties of cast iron, and these parameters cannot be obtained reliably with the current industry-standard 2-D test. Less frequently used, but more effective, are focused ion beam (FIB) and transmission electron microscopy (TEM) techniques, which can provide high-resolution 3-D images but are labor-intensive, time-consuming, and destroy the sample. High-energy X-rays, by contrast, penetrate inhomogeneous samples up to a centimeter thick under real operating conditions.
This avoids the challenges of FIB and TEM techniques while also providing a better statistical representation of parameters in the bulk material. The research team found that the synchrotron characterization methods enable new insight into why compacted graphite iron, which is used by Caterpillar in heavy-duty engine components, can conduct heat better than ductile iron while maintaining good ductile strength. The answer lay in the shape, size, and distribution of the graphite particles in the cast iron. "The 3-D characterization of the material enables greater insight into the structure formation and structure-property relationships," Huff said. The results were published in the journal Scripta Materialia in the article "3D quantitative analysis of graphite morphology in high strength cast iron by high-energy X-ray tomography". Work was conducted at beamline 1-ID of the APS. More information: Chihpin Chuang et al. 3D quantitative analysis of graphite morphology in high strength cast iron by high-energy X-ray tomography, Scripta Materialia (2015). DOI: 10.1016/j.scriptamat.2015.03.017


Jack Gilbert is director of the Microbiome Center, an interdisciplinary institution led by Argonne National Laboratory, the University of Chicago and the Marine Biological Laboratory. The center seeks to build our understanding of the microbes found in our surroundings and inside our bodies in order to improve human and environmental health. Credit: Argonne National Laboratory. Humans and Komodo dragons, the largest lizards in the world, could not be more different. In the wild, these four-legged carnivores wander in solitude across islands in Indonesia, consuming large prey like deer and water buffalo. But when placed in a closed environment, Komodo dragons interact with their surroundings much in the same way humans do – at least on the microbial level – and researchers are using this knowledge to help them understand the health of animals in captivity. Researchers at the University of California San Diego, the University of Colorado Boulder, the University of Chicago and the U.S. Department of Energy's (DOE's) Argonne National Laboratory are the first to identify similarities in the way Komodo dragons, humans and pets share microbes within closed environments. Their findings, published in the American Society for Microbiology's journal mSystems, indicate that the pattern of microbial exchange is "likely circular in nature," meaning that captive dragons contribute microbes to their environment and reacquire these same microbes from it, repeating this exchange in an ongoing cycle without other external sources of microbial diversity. Researchers noticed this pattern after sampling the microbial communities found in the saliva, skin and feces of 37 Komodo dragons across 12 U.S. zoos, as well as the environments of two of the 12 enclosures. They then compared these two sets of data to each other and to past studies of humans and pets.
Statistical comparison revealed that, as with humans and pets in their homes, captive Komodo dragons are a significant source of the bacteria and other microbes found in their enclosures. These findings are helping researchers better understand the relationship between captivity, microbial diversity and health, knowledge that could be key to improving the health of animals in the care of zookeepers, veterinarians and other caretakers. "Regardless of whether you're in a closed or open environment, there's always a constant exchange of microbes between a host and their environment, and that constant exposure has impacts on health; for example, it can lead to changes in a host's immune system that help the host stave off pathogens," said Argonne's Jack Gilbert, an author of the study and the director of the Microbiome Center, a joint Argonne, University of Chicago and Marine Biological Laboratory program. "The problem is that the degree of exposure becomes limited when you put a host in captivity, and this change has unknown consequences on health," he said, "which is exactly why we're trying to learn more about it." Evidence from past studies suggests that animals in captivity experience diseases that are associated with or worsened by captivity. Collectively these studies, which have included green sea turtles, polar bears and monkeys, lend weight to the "hygiene hypothesis," which theorizes that reduced exposure to microbes is contributing to the increase in autoimmune and allergic disease in Westernized nations. "For some animals there are diseases that affect them but don't affect their wild counterparts, or don't affect their counterparts as severely, so it makes sense that people, as they spend less time outdoors, would be affected in the same ways," said lead author Embriette Hyde, an assistant project scientist and project manager of the American Gut Project.
This study is the largest of its kind on captive Komodo dragons, and the first to investigate the microbial communities in the feces and skin of Komodo dragons. To date, no comparable studies of wild Komodo dragons have been conducted. Researchers are, however, continuing to explore microbiome interactions using other animal models. Gilbert, along with a separate group of researchers, is studying these interactions in captive dolphins and their environment. "Since it's quite difficult to track microbial exposure on a daily basis, especially with humans - who move around a lot - we're exploring new animal models to see if we can ask the same questions we would about humans and get answers in more rigorous, controlled ways," Gilbert said. Through such research, investigators are hoping to identify useful systems to simulate the interactions between an animal and its environment, and find out whether the interactions within a given system are comparable to how humans interact with their environment. Such answers can expand our knowledge of the microbial health of humans and animals alike, and lead to improvements in animal husbandry practices. More information: Embriette R. Hyde et al. The Oral and Skin Microbiomes of Captive Komodo Dragons Are Significantly Shared with Their Habitat, mSystems (2016). DOI: 10.1128/mSystems.00046-16


News Article | October 23, 2015
Site: news.mit.edu

MIT research teams have received funding through five separate Department of Energy (DOE) Nuclear Energy University Programs (NEUP) and Nuclear Energy Enabling Technologies (NEET) grants. A team led by Department of Nuclear Science and Engineering (NSE) researchers Mujid Kazimi, principal investigator, and Kord Smith, co-principal investigator, received $3 million over three years for their project, “Development of Accident Tolerant Fuel Options For Near Term Applications." Researchers will develop computational tools to evaluate accident tolerant fuel (ATF) options for near-term applications. The computational tools will be developed predominantly under the Nuclear Energy Advanced Modeling and Simulation framework and will include Rattlesnake, MARMOT, BISON and RELAP-7. These codes will be further enhanced in order to model ATF options. The integrated research project involves two industry collaborators (AREVA and Anatech), the University of Wisconsin, Texas A&M University, Pennsylvania State University, and the Idaho National Laboratory. Collaborators from NSE are: research scientists Koroush Shirvan and Thomas McKrell, assistant professor Michael Short, and professor Jacopo Buongiorno. Short also won an NEUP research and development grant of $800,000 for his work, titled “Multilayer Composite Fuel Cladding for LWR Performance Enhancement and Severe Accident Tolerance." Short and his team will develop, manufacture, and weld a new multimetallic layered composite (MMLC) for enhanced severe accident tolerance and manufacturability in light water reactors (LWRs). Layers of stainless steel (facing the coolant) and Zircaloy (facing the fuel) will be separated by barrier layers of chromium and niobium. Severe accident testing in 1,200 °C steam, LWR autoclave testing, and ion irradiation studies will demonstrate the MMLC's microstructural stability.
In addition, Short will receive $200,000 through a collaboration with Texas A&M on corrosion testing of titanium nitride-coated Zircaloy in PWR water (using the Short Lab’s CRUD loop) and in molten sodium. MIT Nuclear Reactor Lab’s Lin Wen Hu and David Carpenter are part of a team led by Oregon State University, which received a grant for a project titled “Computational and Experimental Benchmarking for Transient Fuel Testing." Researchers will perform a benchmark of the Transient Reactor Test Facility located at the Idaho National Laboratory. This benchmark will include two steady-state neutronic benchmark problems and two transient benchmark problems. It will include the design, construction, and use of a full-scale representation of an in-pile flow loop prototype for Argonne National Laboratory's Transient Reactor Test Facility, and numerical benchmarking against the experimental data gained from the experiment. Finally, the Department of Materials Science and Engineering’s Michael Demkowicz, an associate professor, will collaborate with researchers at the University of Nebraska-Lincoln on a project titled “Radiation tolerance and mechanical properties of nanostructured amorphous-ceramic/metal composites." The NEET award of just under $1 million supports the team, which will use a non-traditional approach to design amorphous-ceramic/metal composites for service in extreme irradiation environments. Rather than try to prevent microstructure changes in polycrystalline aggregates, researchers will develop composite systems in which one of the constituents is intentionally synthesized in a non-crystalline, or “amorphous,” state. These materials may serve as the basis for a new class of structural materials with unprecedented resistance to radiation.
In 2015, the DOE is awarding approximately $31 million to 43 university-led nuclear energy projects, approximately $13 million for four Integrated Research Projects, and nine infrastructure support awards through the NEUP. The DOE is also awarding over $8.5 million to support 10 nuclear energy projects through the NEET program.


News Article | December 14, 2016
Site: www.eurekalert.org

Findings by Berkeley Lab researchers mean the Arctic may be even less of a carbon sink than previously thought. When the frozen Arctic tundra starts to thaw around June of each year, the snow melting and the ground softening, the soil may release a large pulse of greenhouse gases, namely carbon dioxide and methane. Little has been known about such releases. Now scientists at the U.S. Department of Energy's (DOE) Lawrence Berkeley National Laboratory, in collaboration with a team of other scientists taking measurements both in the field and in the lab, have quantified the scale of such releases and explained the underlying mechanisms of the phenomenon. Their study was based on a spring pulse in northern Alaska, documented in 2014, that included CO2 emissions equivalent to 46 percent of the net CO2 absorbed in the summer months, and methane emissions that added 6 percent to summer fluxes. What's more, recent climate trends may make such emissions more frequent, the scientists conclude. "We can see the effects of climate change happening more rapidly in the Arctic than in any other part of the world," said Berkeley Lab scientist Naama Raz-Yaseef. "So we need to understand the processes that are occurring and what to expect in the future. The amount of CO2 and methane (CH4) in the atmosphere determines the greenhouse effect, so we need to know more about these sources." Their study was recently published in the journal Geophysical Research Letters in a paper titled "Large CO2 and CH4 Emissions from Polygonal Tundra During Spring Thaw in Northern Alaska." Raz-Yaseef, an ecohydrologist, was the lead author. Co-authors were Berkeley Lab researchers Margaret Torn, Yuxin Wu, and Tim Kneafsey; Dave Billesbach of the University of Nebraska; Anna Liljedahl and Vladimir Romanovsky of the University of Alaska; David Cook of Argonne National Laboratory; and Stan Wullschleger of Oak Ridge National Laboratory.
The study was a project of DOE's Next-Generation Ecosystem Experiment (NGEE-Arctic), which seeks to gain a predictive understanding of the Arctic terrestrial ecosystem's feedback to climate. The team used two towers 4 meters high to collect gas flux readings. The towers are located about 5 km apart near Barrow, Alaska, the northernmost town in the U.S.; one tower is operated by NGEE and the other by DOE's Atmospheric Radiation Measurement (ARM) Climate Research Facility. "Typically we just measure what happens during summer," Raz-Yaseef said. "We assume nothing happens during winter because it's frozen, so there's no biological activity. But in 2014 we measured a big flux of carbon and methane emitted from the ground, at the very beginning of thaw and snowmelt. At first we didn't know if it was real or a measurement error. Then we confirmed it with another tower 5 km away (the ARM tower), which measured a very similar flux of the same size during the same period." That spring they measured a total of three pulses, the longest lasting five days. These findings are important in understanding the carbon balance of ecosystems in the Arctic. So far, the Arctic is considered a carbon sink, meaning it absorbs more CO2 than it emits on an annual basis, thanks mainly to the vegetation that grows in the summer. (Oceans and forests are far larger carbon sinks.) "Given our findings, the Arctic is an even smaller carbon sink than we thought, since during some years nearly half of the summer uptake of CO2 is offset by these spring emissions," Raz-Yaseef said. Why is the soil releasing gases? Soils everywhere are full of microbes, and microbial activity emits gases. What makes the Arctic different is the way the ground freezes. As temperatures start to drop in the late fall, the soil freezes from the permafrost up and from the surface down. The active layer in between is trapped in the middle.
"For about a month after the surface freezes, the middle layer is still active," Raz-Yaseef explained. "The temperature is still around 0 (Celsius), so microbial activity continues to take place, producing carbon and methane, but it's trapped beneath the surface ice. It can't be emitted, so it's stored throughout the winter." When the ice cap thaws in spring, the gases escape to the atmosphere. The scientists confirmed these observations in several ways. First, they measured soil temperatures every 5 minutes, year-round, at 10-cm intervals down to the permafrost 1.5 meters below ground. Second, they extracted large, frozen soil cores that were transported to the laboratory for experiments. "(Berkeley Lab scientist) Yuxin Wu thawed a core under controlled conditions, but just like in nature," Raz-Yaseef said. "Just when the surface ice melted in the lab, he measured a large flux of carbon and methane. This is formed from gases trapped in the soil over winter." Separately, Berkeley Lab scientist Timothy Kneafsey took CT scans of hundreds of frozen cores and found gas-rich channels and pockets near the surface, which the scientists posited could serve as accumulation zones and pathways for gas flow.

Spring pulses not annual but may become more frequent

In fact, it's these channels and pockets that enable the emission pulses, the scientists conclude in their paper. The spring pulses were not detected in 2013 or 2016, and 2015 could not be measured due to equipment malfunction. Since it's not an annual event, the scientists analyzed the conditions under which the pulses might occur. "The process is more likely to occur when there are events of rain on ice," Raz-Yaseef said. "When it thaws and freezes repeatedly, that produces cracks in the frozen soil, and through these cracks the gas can be emitted." During warmer years, the scientists expect the spring pulses to be more frequent.
"We expect there will be more gas built up due to longer and warmer fall seasons, and more frequent pulse events due to more rain on ice in the spring," Raz-Yaseef said. One thing that's for sure is that the scientists now know not to ignore gas emissions in early spring. "Now we'll put the towers up much, much earlier, just in case there's another event, we'll be sure to catch it," Raz-Yaseef said. "It's one of those things, once you're aware of it, you open your eyes and look for it." NGEE-Arctic is supported by the DOE Office of Science. ARM is a DOE Office of Science User Facility. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
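As a rough illustration of the budget arithmetic described in this article, the sketch below normalizes fluxes to the net summer CO2 uptake and applies the 46 percent spring-pulse figure reported for the 2014 season. The function name and the normalization are our own illustration, not the researchers' code.

```python
# Back-of-the-envelope carbon-budget arithmetic for the spring-pulse finding.
# Fluxes are normalized so that net summer CO2 uptake = 1.0; the default
# spring_pulse_fraction of 0.46 is the value reported for 2014.

def spring_adjusted_sink(summer_uptake: float, spring_pulse_fraction: float = 0.46) -> float:
    """Net CO2 sink once the spring thaw pulse is subtracted from summer uptake."""
    return summer_uptake * (1.0 - spring_pulse_fraction)

net_sink = spring_adjusted_sink(1.0)
print(f"Net sink relative to summer uptake: {net_sink:.2f}")  # 0.54
```

In other words, in a pulse year the effective sink shrinks to a little over half of the summer uptake, which is what the researchers mean by "nearly half of the summer uptake of CO2 is offset."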


News Article | April 7, 2016
Site: www.greencarcongress.com

Cummins Inc. has been awarded a $4.5-million grant from the US Department of Energy to develop a Class 6 commercial plug-in hybrid electric vehicle that can reduce fuel consumption by at least 50% over conventional Class 6 vehicles. (Earlier post.) When fully loaded, Class 6 vehicles weigh between approximately 19,000 and 26,000 pounds; typical examples include school buses or single-axle work trucks. Drawing on their expertise in internal combustion engines and related products, Cummins researchers will optimize the powertrain by selecting the engine architecture best suited to serve as an electric commercial vehicle range extender, using the engine to manage the charge level of the all-electric drive battery pack. The range extender will be integrated, using advanced vehicle controls, with the electrified powertrain and other applicable technologies. Ultimately, the researchers aim to demonstrate improved fuel consumption and state-of-the-art drivability and performance regardless of environmental conditions. Cummins is partnering with PACCAR on the project, and the full team includes representatives from The Ohio State University, the National Renewable Energy Laboratory and Argonne National Laboratory. The close integration and control of the electrified powertrain with an appropriately selected engine is critically important to developing a plug-in hybrid electric vehicle system. We believe that through the team’s efforts we can soon make these innovations commercially available. The reduction of fuel consumption will be met or exceeded during a wide range of drive cycles designed to meet the needs of a wide variety of commercial fleet operators.
The fuel reduction goals will be achieved through the use of an electrified vehicle powertrain, optimization of the internal combustion engine operation, and other technologies including intelligent transportation systems and electronic braking.


News Article
Site: www.asminternational.org

Most materials are capable of being only one thing at a time, but a team of engineers and physicists at the University of Wisconsin–Madison has now created an entirely new material in which completely contradictory properties can coexist. The compound is a polar metal. "Polar metals should not be possible," says Chang-Beom Eom, professor of materials science and engineering. Undeterred by the laws of the universe, Eom and his team created a compound that is a scientific oxymoron. Through a new synthesis approach supported by computational modeling, the group made a crystal with multiple personalities: part polar, part metallic. Metals conduct electricity because electrons flow freely throughout them. Polar materials, by contrast, impede the free flow of electrons and work as electrical insulators. The team needed to find something that could exhibit both insulating and conducting properties. First, they separated the polar and metallic parts of the crystal. Some electrons gave rise to the metallic nature, moving within the material to conduct electricity. Other electrons contributed to the polar properties. However, because the natural molecular structure of the material is symmetrical, even after separating the two components, the material as a whole would not act polar: equal and opposite arrangements of electrons canceled each other out. To overcome this obstacle, the researchers synthesized the substance with slightly off-kilter atoms, which threw off the internal symmetry enough to make the material polar. "The initial calculations that the theory suggested did not show the polar nature, so we experimentally tested the materials, then went back and improved the models," says Eom. "We looped between theory and experiments, but most importantly, we actually created the material, demonstrated its polar and metallic properties, and developed an understanding of how this is happening."
Ultimately, the polar metal was made by painstakingly growing thin films of material one atom at a time. Crucially, the substance was grown on top of a supporting lattice with a slightly offset molecular organization. Tightly clamping the growing film to this support skewed the internal arrangement of the material, stabilizing its internal geometry in the asymmetrical orientation necessary to maintain polarity. Synthesizing and characterizing this first-of-its-kind material required patience and precision. Researchers counted every atom deposited on the surface as the substance slowly grew one layer at a time. They then used multiple complex optical, electronic and structural measurements to determine its properties. Their approach is an attempt to accelerate the discovery of multifunctional materials with unusual coexisting properties, paving the way to devices able to perform simultaneous electrical, magnetic and optical functions. "This has been a complex effort, and theoretical and experimental contributions from all collaboration members have been essential. The project would not succeed otherwise," says Eom. This work was supported by the National Science Foundation under the Designed Materials to Revolutionize and Engineer our Future program, the U.S. Department of Energy Office of Basic Energy Sciences, and the Army Research Office. Eom's team blends theorists and experimentalists, including UW-Madison physics professor Mark Rzchowski and James Rondinelli of Northwestern University, Venkatraman Gopalan of Penn State, Xiaoqing Pan of the University of California, Irvine, Craig Fennie of Cornell University, and Hua Zhou, Phillip Ryan, Yongseong Choi and Jong-Woo Kim of Argonne National Laboratory.


Researchers at MIT have shown that a MOF (metal-organic framework) with high electrical conductivity, Ni3(2,3,6,7,10,11-hexaiminotriphenylene)2 (Ni3(HITP)2), can serve as the sole electrode material in a supercapacitor. This is the first example of a supercapacitor made entirely from neat MOFs as active materials, without conductive additives or other binders. The MOF-based device shows an areal capacitance that exceeds those of most carbon-based materials and capacity retention greater than 90% over 10,000 cycles, in line with commercial devices. Given the established structural and compositional tunability of MOFs, these results herald the advent of a new generation of supercapacitors whose active electrode materials can be tuned rationally, at the molecular level, the researchers suggested. A paper on their work is published in the journal Nature Materials. Because electric double-layer capacitors (EDLCs) draw their superior power density from a combination of high electrical conductivity and high surface area, materials that possess both of these properties are required for robust, efficient devices. Such materials are rare. In fact, the active materials used thus far in EDLCs have been exclusively carbons, such as activated, templated and carbide-derived carbons, carbon nanotubes and, more recently, graphene. Transformative advances in EDLCs may thus be expected not just from developing better processing or synthetic techniques for carbon materials, but also from developing entirely new classes of active electrode materials that maximize both surface area and electrical conductivity. One class of materials that far surpasses even the porosity of activated carbons is metal-organic frameworks (MOFs).
Conventional MOFs exhibit surface areas exceeding 7,000 m2 g−1, yet their use in EDLCs has been limited to serving as precursors/templates for carbons (via pyrolysis) or as hybrid composites with conductive additives such as graphene, because their electrical conductivity is typically much lower than that of graphitic carbons. The authors show that the previously reported MOF Ni3(HITP)2, which exhibits a bulk electrical conductivity greater than 5,000 S m−1, exceeding those of activated carbons and holey graphite (∼1,000 S m−1) and on par with that of graphite itself, functions as an active electrode material for EDLCs. Although MOFs have an extraordinarily large surface area for their size, much greater than carbon materials do, their lack of electrical conductivity has been a major impediment to their application in supercapacitors. One of the long-term goals was to make these materials electrically conductive, said Mircea Dincă, an MIT associate professor of chemistry who led the work. However, the material also exhibited another characteristic needed for such electrodes: it conducts ions (atoms or molecules that carry a net electric charge) very well. Another advantage of the material used in these experiments is that it can be made under much less harsh conditions than those needed for carbon-based materials, which require very high temperatures, above 800 degrees Celsius, and strong reagent chemicals for pretreatment. The team says supercapacitors, with their ability to store relatively large amounts of power, could play an important role in making renewable energy sources practical for widespread deployment. They could provide grid-scale storage that could help match usage times with generation times, for example, or be used in electric vehicles and other applications.
The new devices produced by the team, even without any optimization of their characteristics, already match or exceed the performance of existing carbon-based versions in key parameters, such as their ability to withstand large numbers of charge/discharge cycles. Tests showed they lost less than 10% of their performance after 10,000 cycles, which is comparable to existing commercial supercapacitors. But that’s likely just the beginning, Dincă says. MOFs are a large class of materials whose characteristics can be tuned to a great extent by varying their chemical structure. Work on optimizing their molecular configurations to provide the most desirable attributes for this specific application is likely to lead to variations that could outperform any existing materials. While the MOF material has advantages in the simplicity and potentially low cost of manufacturing, the materials used to make it are more expensive than conventional carbon-based materials, Dincă said. Even if the material ends up being more expensive, however, if its performance is significantly better than that of carbon-based materials, it could find useful applications, he added. This discovery is very significant, from both a scientific and applications point of view, said Alexandru Vlad, a professor of chemistry at the Catholic University of Louvain in Belgium, who was not involved in this research. This work shows only the tip of the iceberg. With carbons we know pretty much everything, and the developments over the past years were modest and slow. But the MOF used by Dincă is one of the lowest-surface-area MOFs known, and some of these materials can reach up to three times more [surface area] than carbons. The capacity would then be astonishingly high, probably close to that of batteries, but with the power performance [the ability to deliver high power output] of supercapacitors. 
The research team included former MIT postdoc Dennis Sheberla (now a postdoc at Harvard University), MIT graduate student John Bachman, Joseph Elias PhD ’16, and Cheng-Jun Sun of Argonne National Laboratory. The work was supported by the U.S. Department of Energy through the Center for Excitonics, the Sloan Foundation, the Research Corporation for Science Advancement, 3M, and the National Science Foundation.
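To put the cycling numbers in perspective, a retention of 90 percent over 10,000 cycles corresponds to a tiny average loss per cycle if the fade is assumed to compound geometrically. The short sketch below is our own illustration of that arithmetic, not code from the paper.

```python
import math

def per_cycle_retention(total_retention: float, cycles: int) -> float:
    """Average per-cycle retention r such that r**cycles equals the total retention."""
    return math.exp(math.log(total_retention) / cycles)

r = per_cycle_retention(0.90, 10_000)
print(f"Per-cycle retention: {r:.8f}")  # about 0.99998946, i.e. roughly 0.001% loss per cycle
```

The geometric-fade assumption is a simplification; real capacity fade need not be uniform across cycles, but the figure conveys how gradual the degradation is.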


News Article | November 4, 2016
Site: www.greencarcongress.com

The US Department of Energy (DOE) will provide $335,000 for one year to Argonne National Laboratory to conduct hydrogen and synthetic fuel (“syngas”) testing on Capstone's C65 and C200 microturbines. In October 2015, Capstone partnered with Argonne National Laboratory, a science and engineering research facility managed by UChicago Argonne, LLC for the DOE's Office of Science, for participation in the DOE's Technologist in Residence (TIR) program. Since then, Capstone and Argonne have explored new focus areas to advance the adoption of low-emission, high-efficiency power solutions. The additional funding from the DOE will help bring Capstone's hydrogen and syngas fuel product offering to market and promote further research and development efforts. Capstone Turbine Corporation is the world's leading producer of low-emission microturbine systems and was the first to market commercially viable microturbine energy products. Capstone has shipped approximately 8,800 microturbine systems to customers worldwide. These systems have logged millions of documented runtime operating hours.


News Article | October 11, 2016
Site: www.theenergycollective.com

To demonstrate the supercapacitor’s ability to store power, the researchers modified an off-the-shelf hand-crank flashlight (the red parts at each side) by cutting it in half and installing a small supercapacitor in the center, in a conventional button battery case, seen at top. When the crank is turned to provide power to the flashlight, the light continues to glow long after the cranking stops, thanks to the stored energy. Photo: Melanie Gonick. Energy storage devices called supercapacitors have become a hot area of research, in part because they can be charged rapidly and deliver intense bursts of power. However, all supercapacitors currently use components made of carbon, which require high temperatures and harsh chemicals to produce. Now researchers at MIT and elsewhere have for the first time developed a supercapacitor that uses no conductive carbon at all, and that could potentially produce more power than existing versions of this technology. By David L. Chandler | MIT News Office. The team’s findings are being reported in the journal Nature Materials, in a paper by Mircea Dincă, an MIT associate professor of chemistry; Yang Shao-Horn, the W.M. Keck Professor of Energy; and four others. “We’ve found an entirely new class of materials for supercapacitors,” Dincă says. Dincă and his team have been exploring for years a class of materials called metal-organic frameworks, or MOFs, which are extremely porous, sponge-like structures. These materials have an extraordinarily large surface area for their size, much greater than that of carbon materials. That is an essential characteristic for supercapacitors, whose performance depends on their surface area. But MOFs have a major drawback for such applications: They are not very electrically conductive, which is also an essential property for a material used in a capacitor. 
“One of our long-term goals was to make these materials electrically conductive,” Dincă says, even though doing so “was thought to be extremely difficult, if not impossible.” But the material did exhibit another needed characteristic for such electrodes, which is that it conducts ions (atoms or molecules that carry a net electric charge) very well. “All double-layer supercapacitors today are made from carbon,” Dincă says. “They use carbon nanotubes, graphene, activated carbon, all shapes and forms, but nothing else besides carbon. So this is the first noncarbon, electrical double-layer supercapacitor.” One advantage of the material used in these experiments, technically known as Ni3(hexaiminotriphenylene)2, is that it can be made under much less harsh conditions than those needed for the carbon-based materials, which require very high temperatures above 800 degrees Celsius and strong reagent chemicals for pretreatment. The team says supercapacitors, with their ability to store relatively large amounts of power, could play an important role in making renewable energy sources practical for widespread deployment. They could provide grid-scale storage that could help match usage times with generation times, for example, or be used in electric vehicles and other applications. The new devices produced by the team, even without any optimization of their characteristics, already match or exceed the performance of existing carbon-based versions in key parameters, such as their ability to withstand large numbers of charge/discharge cycles. Tests showed they lost less than 10 percent of their performance after 10,000 cycles, which is comparable to existing commercial supercapacitors. But that’s likely just the beginning, Dincă says. MOFs are a large class of materials whose characteristics can be tuned to a great extent by varying their chemical structure. 
Work on optimizing their molecular configurations to provide the most desirable attributes for this specific application is likely to lead to variations that could outperform any existing materials. “We have a new material to work with, and we haven’t optimized it at all,” he says. “It’s completely tunable, and that’s what’s exciting.” While there has been much research on MOFs, most of it has been directed at uses that take advantage of the materials’ record porosity, such as for storage of gases. “Our lab’s discovery of highly electrically conductive MOFs opened up a whole new category of applications,” Dincă says. Besides the new supercapacitor uses, the conductive MOFs could be useful for making electrochromic windows, which can be darkened with the flip of a switch, and chemoresistive sensors, which could be useful for detecting trace amounts of chemicals for medical or security applications. While the MOF material has advantages in the simplicity and potentially low cost of manufacturing, the materials used to make it are more expensive than conventional carbon-based materials, Dincă says. “Carbon is dirt cheap. It’s hard to find anything cheaper.” But even if the material ends up being more expensive, if its performance is significantly better than that of carbon-based materials, it could find useful applications, he says. This discovery is “very significant, from both a scientific and applications point of view,” says Alexandru Vlad, a professor of chemistry at the Catholic University of Louvain in Belgium, who was not involved in this research. He adds that “the supercapacitor field was (but will not be anymore) dominated by activated carbons,” because of their very high surface area and conductivity. But now, “here is the breakthrough provided by Dinca et al.: They could design a MOF with high surface area and high electrical conductivity, and thus completely challenge the supercapacitor value chain! 
There is essentially no more need of carbons for this highly demanded technology.” And a key advantage of that, he explains, is that “this work shows only the tip of the iceberg. With carbons we know pretty much everything, and the developments over the past years were modest and slow. But the MOF used by Dinca is one of the lowest-surface-area MOFs known, and some of these materials can reach up to three times more surface area than carbons. The capacity would then be astonishingly high, probably close to that of batteries, but with the power performance [the ability to deliver high power output] of supercapacitors.” The research team included former MIT postdoc Dennis Sheberla (now a postdoc at Harvard University), MIT graduate student John Bachman, Joseph Elias PhD ’16, and Cheng-Jun Sun of Argonne National Laboratory. The work was supported by the U.S. Department of Energy through the Center for Excitonics, the Sloan Foundation, the Research Corporation for Science Advancement, 3M, and the National Science Foundation. Reprinted with permission of MIT News.
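The surface-area argument running through this article can be made concrete with a back-of-envelope estimate: in an electrical double-layer capacitor, gravimetric capacitance scales roughly linearly with the electrode's accessible surface area. The ~10 µF/cm² areal capacitance and the surface-area figures below are illustrative textbook-style assumptions, not values from the Nature Materials paper:

```python
# Back-of-envelope: double-layer capacitance scales with surface area.
# The areal capacitance (~10 uF/cm^2) and surface areas used here are
# illustrative assumptions, not measurements from the study.

def gravimetric_capacitance(area_m2_per_g, c_dl_uF_per_cm2=10.0):
    """Estimate capacitance in F/g from specific surface area (m^2/g)
    and an assumed double-layer capacitance per unit area (uF/cm^2)."""
    area_cm2_per_g = area_m2_per_g * 1e4            # 1 m^2 = 1e4 cm^2
    return area_cm2_per_g * c_dl_uF_per_cm2 * 1e-6  # uF -> F

for label, area in [("activated carbon (~1500 m^2/g)", 1500),
                    ("hypothetical high-area MOF (~4500 m^2/g)", 4500)]:
    print(f"{label}: ~{gravimetric_capacitance(area):.0f} F/g")
```

Under this simple scaling, tripling the accessible surface area roughly triples the capacitance, which is the substance of Vlad's remark about higher-surface-area MOFs.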


News Article | December 8, 2015
Site: www.greencarcongress.com

Researchers at Argonne National Laboratory, in partnership with Caterpillar, have used high-energy synchrotron X-ray tomography to perform quantitative 3D-characterization of the distribution of graphite particles in high-strength compacted graphite iron (CGI). The size and morphology of graphite particles play a crucial role in determining various mechanical and thermal properties of cast iron. The study results, published in Scripta Materialia showed that high-energy X-ray tomography can reveal previously unknown behaviors of graphite in cast iron, such as the growth of nodules, as it undergoes various treatments. The X-rays can also unambiguously classify the particle type involved in the behavior, which is critical to identifying the structure-process relationship. These insights hold the key to manipulating the atomic structure of the graphite through manufacturing treatments such as changing the chemistry of the melt and altering the inoculants added to the liquid cast iron. Cast iron can be modified through the manufacturing process to optimize its mechanical and physical properties, such as strength and durability. This property makes it a material of choice for use in the transportation and machinery industries, which rely on cast iron’s resistance to wear, deformation, and rusting to design high-performance bridges, tools, and engine parts. However, controversy still exists over the correlation between manufacturing casting parameters and desirable properties. Limited by typical industrial 2-D imaging techniques or time-consuming 3-D laboratory studies, researchers have been unable to pinpoint the exact processing parameters needed to elicit the ideal properties for each cast iron application. The tomographic technique can provide a way to get those answers. 
By understanding the structure, it will be possible to develop alloys with improved mechanical and thermal properties. This implies that for applications such as vehicle engines and engine components, one could use less material and reduce overall vehicle weight, which would translate into fuel savings. —Dileep Singh, group leader of thermal-mechanical research at Argonne National Laboratory’s Center for Transportation Research and technical lead of the study For the transportation industry, the ability to modify manufacturing processes to create high-performance materials could aid in the development of more fuel-efficient engines or engine parts that can withstand heat better to have longer lifespans. The research team included Richard Huff from Caterpillar and Argonne researchers Singh, Chihpin Chuang, and John Hryn from the Energy Systems Division and Jon Almer and Peter Kenesei from the X-Ray Science Division. Caterpillar supplied engine alloy castings for use in the proof-of-principle study. The Advanced Photon Source (APS), a US Department of Energy (DOE) Office of Science User Facility based at DOE’s Argonne National Laboratory, was used as part of this research. Synchrotron X-ray analysis has several advantages over the current techniques used to evaluate graphite microstructure. Three-dimensional imaging of the structure of graphite, its spatial arrangement in the alloy, and its phase connectivity are key factors that determine the properties of cast iron. These parameters cannot be attained reliably by the current industry standard 2-D test. Less frequently used, but more effective, is the use of focused ion beams (FIB) and transmission electron microscopy (TEM), which can provide high-resolution 3-D images, but is labor-intensive and time-consuming and destroys the sample. High-energy X-rays penetrate inhomogeneous samples up to a centimeter thick under real operating conditions. 
This avoids the challenges of FIB and TEM techniques while also providing a better statistical representation of parameters in bulk material. The research team found that the synchrotron characterization methods enable new insight into why compacted graphite iron, used by Caterpillar in heavy-duty engine components, can conduct heat better than ductile iron while maintaining good ductile strength. The answer lay in the shape, size, and distribution of the graphite particles in the cast iron. The analysis revealed that the compacted graphite can grow with a coral-tree-like morphology and span several hundred microns in the iron matrix. The Vehicle Technologies Office, Office of Energy Efficiency and Renewable Energy, US Department of Energy, supported the work, and the Office of Science supported the use of the APS.
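The kind of quantitative analysis described above — counting graphite particles in a segmented tomography volume and measuring their sizes and the overall volume fraction — can be sketched in a few lines. The binary array, voxel size, and 6-connectivity choice below are illustrative assumptions, not details from the Argonne/Caterpillar study:

```python
# Sketch: particle statistics from a segmented 3D tomography volume.
# Assumes a boolean array where True marks graphite voxels; all names
# and parameters here are illustrative, not from the study.
from collections import deque
import numpy as np

NEIGHBORS = ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1))

def label_particles(binary):
    """Label 6-connected components of a boolean 3D array via BFS;
    return (label array, list of component sizes in voxels)."""
    labels = np.zeros(binary.shape, dtype=int)
    sizes, current = [], 0
    for idx in zip(*np.nonzero(binary)):
        if labels[idx]:
            continue  # already assigned to a particle
        current += 1
        labels[idx] = current
        queue, count = deque([idx]), 0
        while queue:
            z, y, x = queue.popleft()
            count += 1
            for dz, dy, dx in NEIGHBORS:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < binary.shape[i] for i in range(3)) \
                        and binary[n] and not labels[n]:
                    labels[n] = current
                    queue.append(n)
        sizes.append(count)
    return labels, sizes

def particle_stats(binary, voxel_size_um=1.0):
    _, sizes = label_particles(binary)
    vox = voxel_size_um ** 3
    return {"n_particles": len(sizes),
            "volume_fraction": float(binary.mean()),
            "mean_particle_volume_um3": vox * sum(sizes) / len(sizes) if sizes else 0.0}

# Toy volume: two separated "particles" of 27 and 64 voxels
vol = np.zeros((20, 20, 20), dtype=bool)
vol[2:5, 2:5, 2:5] = True
vol[10:14, 10:14, 10:14] = True
print(particle_stats(vol))
```

In practice such statistics would be computed on volumes reconstructed from the synchrotron scans after segmentation, which is exactly the step that 2-D metallography cannot replace.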


News Article | November 14, 2016
Site: www.prweb.com

The National Bankruptcy Conference (NBC) announced today the election of Jane L. Vris as Chair and Prof. Randal C. Picker as Vice Chair of the Conference. Richard Levin, the outgoing Chair, said, “The leadership, energy and passion of these two esteemed bankruptcy experts will enable the Conference to continue its 80-year tradition of being a valuable resource to Congress on bankruptcy issues.” We are pleased to introduce the following new Conference officers: Jane Lee Vris, General Counsel, Managing Director at Millstein & Co., L.P. in New York, NY Ms. Vris serves as General Counsel and Managing Director at Millstein & Co., L.P. She brings nearly thirty-five years of experience as a restructuring lawyer to the firm, first as a Partner at Wachtell, Lipton, Rosen & Katz and more recently at Vinson & Elkins. At Millstein, she has participated in engagements for a number of clients, including the fiscal agent for Puerto Rico, counsel to an ad hoc group of senior secured creditors in the consolidated cases of Energy Future Holdings and its subsidiaries, the UAW both in and out of court, companies in private exchange offers, and the largest creditor in CORE Media’s chapter 11 case. Before joining Millstein, Ms. Vris advised boards, special committees, creditors, potential purchasers of assets from distressed companies, and equity investors in companies emerging from Chapter 11. Ms. 
Vris’s prior clients have included Deutsche Bundesbank, the central bank of Germany; Huntsman as the stalking horse bidder in Tronox; and Cerberus as the equity owner of Chrysler in its historic chapter 11 case. She has testified before Congress on bankruptcy matters, including resolution bills for systemically important financial institutions; previously chaired the Steering Committee of the NYC Bankruptcy Assistance Project; is a fellow of the American College of Bankruptcy, where she currently serves as a member of the Board of Directors; was a contributing editor to Collier, the leading treatise on US bankruptcy law, for nearly twenty years; has written on numerous bankruptcy issues; and is a frequent speaker at professional conferences and programs. She has been recognized in numerous publications as a leading bankruptcy professional, including Chambers and the International Who’s Who of Insolvency & Restructuring Lawyers. Ms. Vris has a J.D. from the New York University School of Law, where she was managing editor of the NYU Law Review, and a B.A., magna cum laude, from the University of Pennsylvania. She is the first female officer elected to serve as Chair of the NBC in its over 80-year history. Prof. Randal C. Picker, Professor of Law at The University of Chicago in Chicago, IL Randal Picker is the James Parker Hall Distinguished Service Professor of Law and the Ludwig & Hilde Wolf Teaching Scholar at The University of Chicago Law School. He is also Senior Fellow at The Computation Institute of the University of Chicago and Argonne National Laboratory. He graduated from the College of the University in 1980, cum laude, with a Bachelor of Arts in economics and was elected to Phi Beta Kappa. He then spent two years in the Department of Economics, where he was a Friedman Fellow, completing his doctoral course work and exams. He received a master’s degree in 1982. Thereafter, he attended the Law School and graduated in 1985, cum laude. 
He is a member of the Order of the Coif. While at the Law School, Prof. Picker was an associate editor of the Law Review. After graduation, Prof. Picker clerked for Judge Richard A. Posner of the United States Court of Appeals for the Seventh Circuit. He then spent three years with Sidley & Austin in Chicago, where he worked in the areas of debt restructuring and corporate reorganizations in bankruptcy. He served as a commissioner to the National Conference of Commissioners on Uniform State Laws and served as a member of the drafting committee to revise Article 9 of the Uniform Commercial Code. He is the author of a casebook, Security Interests in Personal Property, on the subject. Prof. Picker does research in a wide variety of areas including antitrust, copyright, bankruptcy and tech policy and teaches in related areas. His online course, Internet Giants: The Law and Economics of Media Platforms, is available on the Coursera platform and has been seen by thousands of students around the world. He has been extensively involved in administrative matters at The University of Chicago and currently serves as Spokesperson of the Council. The NBC is a non-profit, non-partisan, self-supporting organization of approximately sixty lawyers, law professors and bankruptcy judges whose primary purpose is to advise Congress on the operation of bankruptcy and related laws and any proposed changes to those laws. The NBC was formed in the 1930’s at the request of Congress to assist in the drafting of major Depression-era bankruptcy law amendments, ultimately resulting in the Chandler Act of 1938. The NBC has been a resource to Congress on every significant piece of bankruptcy legislation since that time. Please visit http://www.nationalbankruptcyconference.org for additional information on our work.


News Article | December 1, 2016
Site: www.eurekalert.org

One of the biggest challenges to wider adoption of wind and solar power is how to store the excess energy they often produce. A technology developed at the University of Chicago, and now being commercialized by a University startup, is addressing the intermittent nature of these renewable sources. It uses a selectively evolved, unicellular microorganism that helps convert electricity into methane gas. That gas can be stored, transported and used wherever natural gas is used, including for the generation of power on demand. Laurens Mets, associate professor of molecular genetics and cell biology, began developing the technology in the late 1990s. From it, the startup Electrochaea was born with support from the University's technology transfer office, which is now part of the Polsky Center for Entrepreneurship and Innovation. "Direct scaling at this pace and scale is rare in the energy field," Mets said. "But we found this technology to be eminently scalable, so I'm very confident about its commercialization." Electrochaea was selected for the 2014 Global Cleantech 100--a list of 100 private companies with the greatest potential to solve the clean energy crisis according to the market intelligence firm Cleantech Group. It has experienced a string of successes, including a large-scale demonstration facility that delivers methane to Denmark's pipeline grid and a commercial-scale plant announced in October to be built in Hungary. "The disruptive energy storage technology developed by Dr. Mets in his lab has been validated by Electrochaea and is now being shown to scale in a commercially meaningful way," said Cristianne Frazier, senior project manager of technology commercialization and licensing at the Polsky Center. At the center of the power-to-gas technology is a strain of methanogenic Archaea--a microorganism that Mets adapted for industrial use. 
The above equations show the process of converting electricity and carbon dioxide to methane gas. The process starts with surplus electricity that is coming from a wind farm or solar array, but isn't needed immediately. That power is used to convert water into hydrogen and oxygen. The hydrogen is combined with waste carbon dioxide from any of a variety of sources, such as a biogas or an industrial process, in a proprietary bioreactor in which the microorganisms efficiently catalyze conversion of the mixture into methane and water. The resulting methane can be transported in existing pipelines or converted into compressed natural gas or liquid natural gas, making it available to generate electricity. The technology offers a large carrying capacity--more than competing bulk-energy storage systems, such as batteries, pumped hydroelectric and compressed air, according to Mets. The technology enables increased use of variable electricity sources such as wind and solar by storing excess power, thus smoothing out the variability and making renewables more feasible and economically viable. The carbon dioxide produced by burning the methane product of the process was waste from its original source and would have been released into the atmosphere in any case. The power-to-gas technology is thus carbon-neutral in its primary impact on the environment. It has the additional important impact of displacing net carbon emissions from burning of fossil fuels for energy generation with energy derived from renewable wind and solar sources. The potential of the patented power-to-gas technology is significant, according to Seth Snyder, leader of the Water-Energy-Sustainability Initiative at the Argonne National Laboratory. "Methane could be the primary source for much of society's energy needs including electricity, heating, industrial processes and transportation," he said. 
"Therefore a robust way to create clean methane from renewable sources has the potential to transform our energy systems." Mets continues his research at the University and aims to refine the novel technology. He hopes to adapt it to produce gasoline and jet fuel. "What's so interesting is that Electrochaea is demonstrating that a very fundamental process in nature can be harnessed and adapted to address an immediate societal challenge," Snyder said. "If done correctly, the benefits could be significant in Europe and Asia." The links between the company and the University are numerous. The University recognized the importance of Mets' discoveries early on and filed several families of patent applications that would be licensed to Electrochaea and become central to the startup's intellectual property portfolio. "The technology commercialization and licensing team at the Polsky Center, Electrochaea and Dr. Mets have worked collaboratively on everything from company formation and technology development to Series A financing and patent prosecution, and I believe those relationships have helped foster a successful company," Frazier said. Created in 2006, Electrochaea first validated the process in its laboratory in St. Louis. It began field testing Mets' power-to-gas technology in 2011. Three years later, Electrochaea started constructing a large-scale demonstration facility at a wastewater treatment plant outside Copenhagen, with the treatment plant providing the waste carbon dioxide used in the conversion process. Based on the success of that project, which is called BioCat and went live in June, Electrochaea is building a 10-megawatt plant in Hungary that will be the world's first commercial-scale power-to-gas plant. "Electrochaea ramped up very quickly, with several steps, from the one-liter reactor in my lab at the University, through the one-megawatt BioCat project and now the 10-megawatt commercial plant in Hungary. 
The microbes have proven to be very robust," Mets said. The Hungarian plant will be built by Electrochaea and Magyar Villamos Muvek, that country's largest energy provider. As with BioCat, the new plant will provide methane directly to the existing system of natural gas pipelines. Electrochaea plans to build an additional plant in Switzerland and envisions plants with up to 1,000 megawatts of capacity. Meanwhile, Pacific Gas and Electric Company is building a small demonstration plant at the National Renewable Energy Lab in Colorado.
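The conversion chain described in this article corresponds to two well-known reactions: water electrolysis upstream (2 H2O -> 2 H2 + O2), and CO2 methanation (CO2 + 4 H2 -> CH4 + 2 H2O), the biological counterpart of the Sabatier reaction. A minimal stoichiometry sketch, using standard molar masses and ignoring conversion losses (the specific figures are illustrative, not from Electrochaea):

```python
# Stoichiometry sketch for the power-to-gas chain described above:
#   electrolysis:  2 H2O -> 2 H2 + O2
#   methanation:   CO2 + 4 H2 -> CH4 + 2 H2O  (Sabatier-type, here biocatalyzed)
# Standard molar masses; losses are ignored, so this is an idealized bound.

M = {"H2": 2.016, "CO2": 44.01, "CH4": 16.04}  # g/mol

def feed_per_kg_ch4():
    """Return (kg H2, kg CO2) consumed per kg of CH4 produced."""
    mol_ch4 = 1000.0 / M["CH4"]             # mol CH4 in 1 kg
    h2_kg = 4 * mol_ch4 * M["H2"] / 1000.0  # 4 mol H2 per mol CH4
    co2_kg = mol_ch4 * M["CO2"] / 1000.0    # 1 mol CO2 per mol CH4
    return h2_kg, co2_kg

h2, co2 = feed_per_kg_ch4()
print(f"Per kg CH4: ~{h2:.2f} kg H2 and ~{co2:.2f} kg CO2")
```

The mass balance makes the carbon-neutrality argument in the article concrete: every kilogram of methane produced locks up roughly 2.7 kg of waste CO2 until the methane is burned, at which point the same CO2 that would have been emitted anyway is released.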


ISELIN, N.J. and ARGONNE, Ill., Dec. 19, 2016 (GLOBE NEWSWIRE) -- The full International Trade Commission (ITC) has made a final determination that Umicore has infringed on BASF and Argonne National Laboratory’s patents. The order bans the import of Umicore’s infringing nickel-manganese-cobalt (NMC) cathode materials, confirms that lithium-ion batteries containing these materials are infringing, and finds that Umicore is contributing to and inducing its customers’ infringement. “We are very pleased that the ITC has confirmed the February 29, 2016 ruling of ITC Administrative Law Judge Thomas Pender,” said Kenneth Lane, President of BASF’s global Catalysts division. “This decisive finding highlights that Umicore has been competing unfairly in the market for NMC cathode materials. At BASF, we value innovation and we will continue to enforce and protect our intellectual property in this important area.” The Commission’s decision confirms Judge Pender’s earlier decision, which thoroughly rejected all of Umicore’s arguments. Judge Pender’s decision concluded that Umicore’s arguments and witnesses’ testimony were “mischaracterizations,” “misleading,” “not persuasive,” and “simply not credible.” Umicore’s primary scientific argument was that its NMC materials are single-phase, rather than two-phase as the patents require. After hearing testimony from six scientific experts and considering transmission electron microscopy and x-ray diffraction analyses on Umicore’s NMC materials, Judge Pender concluded that, “I do not find Umicore’s argument that its NMC materials are single-phase solid solutions credible.” BASF and Argonne, according to Judge Pender, “established persuasively” that Umicore’s NMC infringes. Umicore has continued infringing despite Judge Pender’s decision, and it has even gone so far as telling the media and marketplace that Umicore customers should continue to import products containing Umicore’s infringing NMC cathode materials. 
However, the full Commission decided that Umicore’s efforts and sales of NMC materials to battery manufacturers and other customers were acts of induced and contributory infringement and confirmed that those who import batteries containing Umicore’s infringing NMC are committing infringement as well. In its exclusion order, the ITC is “prohibiting the unlicensed entry of [infringing] lithium metal oxide cathode materials…that are manufactured by, or on behalf of, or imported by or on behalf of Umicore.” “Anyone importing products containing Umicore’s infringing NMC without a license is infringing,” said Matthew Lepore, General Counsel for BASF Corporation. “Umicore customers who continue importing infringing products without a license are exposing themselves to enhanced damages for willful patent infringement. The ITC found infringement and the U.S. Patent & Trademark Office and the ITC have confirmed the validity of these patents multiple times. There would be a high likelihood of success in an expedited district court proceeding to block unlicensed imports with a preliminary injunction.” “BASF is committed to the battery industry and has made significant investments in the research, development and manufacture of NMC cathode materials,” said Jeffrey Lou, Senior Vice President of BASF’s global Battery Materials business. “BASF has NMC manufacturing operations in the United States and in Asia with BASF Toda Battery Materials LLC, a joint venture for producing NMC and Nickel Cobalt Aluminum (NCA) cathode materials.” “Argonne National Laboratory is proud of its pioneering work in energy storage research. Our NMC cathode technology is just one of many advanced battery material innovations we have worked to create,” said Paul Kearns, Deputy Laboratory Director for Operations and Chief Operations Officer at Argonne National Laboratory. About BASF’s Catalysts Division BASF’s Catalysts division is the world’s leading supplier of environmental and process catalysts. 
The group offers exceptional expertise in the development of technologies that protect the air we breathe, produce the fuels that power our world and ensure efficient production of a wide variety of chemicals, plastics and other products, including advanced battery materials. By leveraging our industry-leading R&D platforms, passion for innovation and deep knowledge of precious and base metals, BASF’s Catalysts division develops unique, proprietary solutions that drive customer success. Further information on BASF’s Catalysts division is available on the Internet at www.catalysts.basf.com BASF Corporation, headquartered in Florham Park, New Jersey, is the North American affiliate of BASF SE, Ludwigshafen, Germany. BASF has nearly 17,500 employees in North America, and had sales of $17.4 billion in 2015. For more information about BASF’s North American operations, visit www.basf.us.   At BASF, we create chemistry for a sustainable future. We combine economic success with environmental protection and social responsibility. The approximately 112,000 employees in the BASF Group work on contributing to the success of our customers in nearly all sectors and almost every country in the world. Our portfolio is organized into five segments: Chemicals, Performance Products, Functional Materials & Solutions, Agricultural Solutions and Oil & Gas. BASF generated sales of more than €70 billion in 2015. BASF shares are traded on the stock exchanges in Frankfurt (BAS), London (BFA) and Zurich (BAS). Further information at www.basf.com.


News Article | February 15, 2017
Site: cerncourier.com

Tom Dombeck, an innovative and versatile physicist and project manager, passed away in Kāneʻohe, Hawaii, on 4 November 2016. His legacy includes many measurements in particle physics, the development of new techniques for the production of ultra-cold neutrons and substantial contributions to the management of several major scientific projects. Tom received a BA in physics from Columbia University in 1967 and a PhD in particle physics from Northwestern University in 1972, and his career saw him hold prominent roles at numerous institutes. He was a research associate at Imperial College London from 1972 to 1974 and visiting scientist at Dubna in the former USSR in 1975. Following six years at the University of Maryland, from 1981 to 1988 Tom held various roles at Los Alamos National Laboratory (LANL) after which he spent a year working in the US Department of Energy in the office of the Superconducting Supercollider (SSC). Afterwards he became a staff physicist and ultimately deputy project manager for operations at the SSC laboratory in Texas, where he led the successful “string test”. In 1994 he moved to a role as project manager for the Sloan Digital Sky Survey at the University of Chicago. Tom was deputy head for the technical division at Fermilab from 1997 to 1999, and project manager for the Next Linear Collider project at Fermilab between 2000 and 2002. From 2003 to 2006 he was project manager for the Pan-STARRS telescope at the University of Hawaii and an affiliated graduate faculty member there until 2016. Tom began his scientific research with bubble chambers and was a key participant in the experiment that observed the first neutrino interaction in a hydrogen filled bubble chamber in 1970 at the ZGS at Argonne National Laboratory. For many years he pursued measurements of the electric dipole moment (EDM) of the neutron and was also involved in the development of ultra-cold neutrons by Doppler shifting at pulsed sources. 
He proposed a new method for a neutron EDM measurement that involved Bragg scattering polarised neutrons from a silicon crystal and led an initial effort at the Missouri University Research Reactor, after which he initiated an experiment using the reactor at the NIST Center for Neutron Research. While at LANL, Tom led a neutrino-oscillation search that involved constructing a new beamline and neutrino source at LAMPF and provided improved limits on muon-neutrino to electron-neutrino oscillations. He carried these fundamental physics interests and abilities to his later work as a highly effective scientific programme manager. Tom was able to see the connections between disparate scientific areas and bring together new ideas and approaches that moved the field forwards. He could inspire people around him with his enthusiasm and kindness, and his wry sense of humour and wicked smile were trademarks that will long be remembered by his friends and colleagues. Tom was a devoted family man and is missed greatly by his wife Bonnie, his two children, Daniel and Heidi, and his four grandchildren. Sidney David Drell, professor emeritus of theoretical physics at SLAC National Accelerator Laboratory, senior fellow at Stanford’s Hoover Institution and a giant in the worlds of both academia and policy, died on 21 December 2016 at his home in Palo Alto, California. He was 90 years old. Drell made immense contributions to his field, including uncovering a process that bears his name and working on national and international security. His legacy as a humanitarian includes his friendship and support of Soviet physicist and dissident Andrei Sakharov, who won the Nobel Peace Prize in 1975 for his opposition of the abuse of power in the Soviet Union. Drell was also known for his welcoming nature and genuine, albeit perhaps unwarranted, humility. Drell’s commitment to arms control spanned more than 50 years. 
He served on numerous panels advising the US Congress, the intelligence community and the military. He was an original member of JASON, a group of academic scientists created to advise the government on national security and defence issues, and from 1992 to 2001 he was a member of the President’s Foreign Intelligence Advisory Board. He was also the co-founder of the Center for International Security and Cooperation at Stanford, and in 2006 he and former Secretary of State George Shultz began a programme at the Hoover Institution dedicated to developing practical steps towards ridding the world of nuclear weapons. In 1974, Drell met Sakharov at a conference hosted by the Soviet Academy of Sciences and they quickly became friends. When Sakharov was internally exiled to Gorky from 1980 to 1986 following his criticism of the Soviet invasion of Afghanistan, Drell exchanged letters with him and called on Soviet leader Mikhail Gorbachev for his release. He also organised a petition to allow another Soviet physicist and dissident, Nohim Meiman, to emigrate to Israel, and obtained the signatures of 118 members of the US National Academy of Sciences. Having graduated with a bachelor’s degree from Princeton University in 1946, Drell earned a master’s degree in 1947 and a PhD in physics in 1949 from the University of Illinois, Urbana-Champaign. He began at Stanford in 1950 as an instructor in physics, leaving to work as a researcher and assistant professor at the Massachusetts Institute of Technology and then returning to Stanford in 1956 as a professor of physics. He served as deputy director of SLAC from 1969 until his retirement from the lab in 1998. Drell’s research was in the fields of quantum electrodynamics and quantum chromodynamics. While at SLAC, he and research associate Tung-Mow Yan formulated the famous Drell–Yan process, which has become an invaluable tool in particle physics. His theoretical work was critical in setting SLAC on the course that it took. 
As head of the SLAC theory group, Drell brought in a host of younger theoretical physicists who began creating the current picture of the structure of matter. He played an important role in developing the justification for experiments and turning the results into what became the foundation of the Standard Model of particle physics. For his research and lifetime of service to his country, Drell received many prestigious awards, including: the National Medal of Science; the Enrico Fermi Award; a fellowship from the MacArthur Foundation; the Heinz Award for contributions in public policy; the Rumford Medal from the American Academy of Arts and Sciences; and the National Intelligence Distinguished Service Medal. Drell was one of 10 scientists honoured as the founders of satellite reconnaissance as a space discipline by the US National Reconnaissance Office. He was elected to the National Academy of Sciences, the American Academy of Arts and Sciences and the American Philosophical Society, and was president of the American Physical Society in 1986. Drell was also an accomplished violinist who played chamber music throughout his life. He is survived by his wife, Harriet, and his children, Daniel, Virginia, Persis and Joanna. Persis Drell, a former director of SLAC who is also a physicist at Stanford and dean of the School of Engineering, will be the university’s next provost. • Based, with permission, on the obituary published on the Stanford University website on 22 December 2016. Mambillikalathil Govind Kumar Menon, a pioneer in particle physics and a distinguished statesman of science, passed away peacefully on 22 November at his home in New Delhi, India. He graduated with a bachelor of science from Jaswant College, Jodhpur, in 1946, and inspired by Chandrasekhara Venkata Raman, studied under the tutelage of spectroscopist Nanasaheb R Tawde before joining Cecil Powell’s group at the University of Bristol, UK, in 1949. 
Menon’s first important contribution was to establish the bosonic character of the pion through a study of fragments emerging from π-capture by light nuclei. He then focused his attention on the emerging field of K-meson physics. Along with his colleagues at Bristol, notably Peter Fowler, Cecil Powell and Cormac O’Ceallaigh, Menon discovered K+ → π+ π0 and K+ → π+ π– π+ events in nuclear emulsion, indicating parity non-conservation (the τ–θ puzzle). He also identified a sizeable collection of events showing the associated production of kaons and hyperons. In 1955 Menon joined the Tata Institute of Fundamental Research (TIFR), where he worked on cosmic-ray research programmes initiated by Homi Bhabha. Following Bhabha’s death in an air crash over Mont Blanc in 1966, the responsibility of the directorship of TIFR fell squarely on his shoulders, along with the wide-ranging initiatives for national development that Bhabha had started. Notwithstanding these additional demands on his time, his focus on particle physics never wavered. He continued with his research, establishing a collaboration with Arnold W Wolfendale at the University of Durham, UK, and Saburo Miyake of Osaka City University, Japan, for the study of particle physics with detectors deployed deep underground; he detected events induced by cosmic-ray neutrino interactions; and he also launched a dedicated effort to test the early predictions of violation of baryon-number conservation leading to proton decay. During his Bristol years, Menon established a close friendship with William O Lock, who had moved to CERN in 1959. This facilitated collaboration between TIFR and CERN, leading to the development of bubble-chamber techniques to study mesons produced in proton–antiproton collisions. These initial studies eventually led to highly successful collaborations between Indian researchers and the L3 experiment at LEP, and the CMS, ALICE and ATLAS experiments at the LHC. 
Menon won several awards including the Cecil F Powell and C V Raman medals, and was elected to the three scientific academies in India. He was elected as a fellow of the Royal Society in 1970, and subsequently to the Pontifical Academy of Sciences, American Academy of Arts and Sciences, the Russian Academy of Sciences and as an honorary fellow of the Institute of Physics and the Institution of Electrical & Electronics Engineers. He also served two terms as president of the International Council of Scientific Unions, and stimulated its participation in policy issues, including climate change. Menon held a firm conviction that science can bring about technological development and societal progress, which motivated him to work with Abdus Salam in founding the Third World Academy of Sciences. He held several high positions in the Indian government, and thus contributed to the growth of science and technology in India. Alongside his scientific achievements, M G K Menon was also very close to his wife Indumati and their two children Preeti and Anant Kumar. Our warmest thoughts go out to them and to innumerable others whose lives he touched in so many important ways. Helmut Oeschler, an active member of the ALICE collaboration, passed away from heart failure on 3 January while working at his desk. Born in Southern Germany, he received his PhD from the University of Heidelberg in 1972 and held postdoc positions at the Niels Bohr Institute in Copenhagen, and in Strasbourg, Saclay and Orsay in France. From 1981 he was at the Institute for Nuclear Physics of TU Darmstadt. He held a Doctorate Honoris Causa from Dubna University, Russia, and in 2006 he received the Gay-Lussac-Humboldt prize. Oeschler’s physics interests concerned the dynamics of nuclear reactions over a broad energy range, from the Coulomb barrier to ultra-relativistic collisions. 
He was a driving force for building the kaon spectrometer at the GSI in Darmstadt, which made it possible to measure strange particles in collisions of heavy nuclei. From the late 1990s he was actively involved in addressing new aspects of equilibration in relativistic nuclear reactions. Oeschler became a member of the ALICE collaboration at CERN in 2000 and made important contributions to the construction of the experiment. Together with his students, he was involved in developing track reconstruction software for measuring the production of charged particles in lead–lead collisions at the LHC. He also led the analysis efforts for the measurements of identified charged hadrons in the LHC’s first proton–proton collisions. From 2010 to 2014 he led the ALICE editorial board, overseeing the publication of key results relating to quark-gluon matter at the highest energy densities. His deep involvement in the data analysis and interpretation continued unabated and he made important contributions to several research topics. Advising and working in close collaboration with students was a much-loved component of Helmut’s activity and was highly appreciated among the ALICE collaboration. Helmut Oeschler was a frequent visitor to South Africa and served there on numerous international advisory committees. He was instrumental in helping the South African community develop the physics of heavy-ion collisions and collaboration with CERN. With Helmut Oeschler we have lost an internationally renowned scientist and a dear friend and colleague. His scientific contributions, especially on the production of strange particles in high-energy collisions, are important achievements.


News Article | January 7, 2016
Site: www.materialstoday.com

A team of scientists from the US Department of Energy's (DOE) Argonne National Laboratory, Northwestern University and Stony Brook University has, for the first time, created a two-dimensional sheet of boron – a material known as borophene. Scientists are interested in two-dimensional materials because of their unique characteristics, particularly their electronic properties. Borophene is an unusual material because it shows many metallic properties at the nanoscale even though three-dimensional, or bulk, boron is nonmetallic and semiconducting. Because borophene is both metallic and atomically thin, it holds promise for applications ranging from electronics to photovoltaics, said Argonne nanoscientist Nathan Guisinger, who led the experiment. "No bulk form of elemental boron has this metal-like behavior," he said. A paper describing this study is published in Science. Like its neighbor in the periodic table, carbon, which appears in nature in forms ranging from humble graphite to precious diamond, boron wears a number of different faces, called allotropes. But that's where the similarities end. While graphite is composed of stacks of two-dimensional sheets that can be peeled off one at a time to produce one-atom-thick graphene, there is no such analogous process for making two-dimensional boron. "Borophenes are extremely intriguing because they are quite different from previously studied two-dimensional materials," Guisinger said. "And because they don't appear in nature, the challenge involved designing an experiment to produce them synthetically in our lab." Although at least 16 bulk allotropes of boron are known, scientists had never before been able to make a whole sheet, or monolayer, of borophene. "It's only in the recent past that researchers have been able to make tiny bits of boron at the nanoscale," said Andrew Mannix, a Northwestern graduate student and first author of the study. 
"This is a brand new material with exciting properties that we are just beginning to investigate." "Boron has a rich and storied history and a very complicated chemistry," added Mark Hersam, professor of materials science and engineering at Northwestern's McCormick School of Engineering and Applied Science, who helped advise Mannix. "This is something that could have easily not worked, but Andy had the courage and persistence to make it happen." One of boron's most unusual features is its atomic configuration at the nanoscale. While other two-dimensional materials look more or less perfectly smooth at the nanoscale, borophene looks like corrugated cardboard, buckling up and down depending on how the boron atoms bind to one another. The ‘ridges’ of this cardboard-like structure produce a material phenomenon known as anisotropy, in which a material's mechanical or electronic properties – like its electrical conductivity – become directionally dependent. "This extreme anisotropy is rare in two-dimensional materials and has not been seen before in a two-dimensional metal," Mannix said. Based on theoretical predictions of borophene's characteristics, the researchers also noted that it likely has a higher tensile strength, meaning the ability to resist breaking when pulled apart, than any other known material. "Other two-dimensional materials have been known to have high tensile strength, but this could be the strongest material we've found yet," Guisinger said. The discovery and synthesis of borophene was aided by computer simulation work led by Stony Brook researchers Xiang-Feng Zhou and Artem Oganov, who is currently affiliated with the Moscow Institute of Physics and Technology and the Skolkovo Institute of Science and Technology. Oganov and Zhou used advanced simulation methods that showed the formation of the crinkles on the corrugated surface. 
"Sometimes experimentalists find a material and they ask us to solve the structure, and sometimes we do predictions first and the experiment validates what we find," Oganov said. "The two go hand-in-hand, and in this international collaboration we had a bit of both." "The connection we have between the institutions allows us to achieve things that we couldn't do alone," Hersam added. "We needed to combine scanning tunneling microscopy with X-ray photoelectron spectroscopy and transmission electron microscopy to both obtain a view of the surface of the material and verify its atomic-scale thickness and chemical properties." As they grew the borophene monolayer, the researchers discovered another advantage with their experimental technique. Unlike previous experiments that used highly toxic gases in the production of nanoscale boron-based materials, this experiment involved a non-toxic technique called electron-beam evaporation. This involves vaporizing a source material and then condensing a thin film on a substrate – in this case, boron on silver. "When we did our theoretical work, I had doubts as to the feasibility of obtaining two-dimensional boron because boron likes to form clusters, and ironing it out into two dimensions I thought would be challenging," Oganov said. "It turned out that growing on the substrate was key, because the boron and silver turn out not to react with each other." This story is adapted from material from Northwestern University, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.


News Article | October 28, 2015
Site: www.nature.com

During the first eight months of 2015, a row of five electric vehicles sat parked in Indianapolis, Indiana, available for test drives. The compact vehicles are part of the first US wave of the ambitious BlueIndy electric car-sharing programme from the Bolloré Group, headquartered in Paris, which is already running similar large-scale schemes in London and the French capital. These Bluecars are a technological milestone: with unique 30-kilowatt-hour battery packs designed by Bolloré, they are the first plug-in vehicles on the road to have solid-state batteries, rather than the liquid-electrolyte batteries that many other electric vehicles use. BlueIndy president Hervé Muller says that the plug-in vehicles' batteries have been performing well. And the safety record for Bolloré's batteries is good. “We haven't had an issue in Indianapolis, nor in Paris with the 3,000 cars in service there that have driven more than 10 million miles,” Muller says. Bolloré's BlueIndy batteries, although groundbreaking, are an early iteration of solid-state technology and not yet ready for the mass market. Scientists, however, are working on more advanced cells. The lithium-ion batteries in most electric cars consist of a negative — usually graphite — anode, a positive cathode and a liquid electrolyte. When these batteries release power, lithium ions move from the anode, through the electrolyte, to the cathode. The more conductive the electrolyte, the better the battery performs. The promise of lithium-ion solid-state batteries is that they will replace the heavy and sometimes dangerous liquid electrolyte — which in car batteries can be volatile at high temperatures when the battery is charged or discharged quickly or when packs are damaged in accidents — with a lighter, more versatile solid alternative. Although finding a solid electrolyte with conductivity that is comparable with liquids has been a challenge, the advantages are many. 
The batteries are safer because flammable components have been removed. They deliver more power because solid electrolytes mean that the carbon-based anodes can be replaced with lithium metal, which has a higher energy density and cycle life, with less weight and cost. And without the need to package the liquid electrolyte safely, solid-state batteries can be made in more versatile shapes (even thin films), reducing manufacturing costs. This could make electric cars a more enticing proposition, with longer ranges and a lower purchase price. “Imagine batteries that do not catch fire and do not lose storage capacity. That is the promise of solid-state batteries,” says Gerbrand Ceder, a materials scientist at the Massachusetts Institute of Technology (MIT) in Cambridge. Solid-state batteries are not quite ready for mass-market vehicles yet — the Bluecars' batteries need to be warmed up and can power a very small car for 240 kilometres. Scientists are struggling to find solid materials with conductivity similar to that of liquids, as well as working to increase cycle life and longevity, and to improve the solid electrolyte's ability to operate at ambient temperatures. Yan Eric Wang, a materials scientist who worked with Ceder at MIT, and his team may have found the ideal solid electrolyte (Wang et al. Nature Mater. 14, 1026–1031; 2015). “We found a framework that could lead to identifying electrolyte materials with high ionic conductivity,” says Wang. The research, in partnership with electronics firm Samsung, pointed to a class of compounds of lithium, phosphorus, germanium and sulfur called super-ionic conductors. A solid electrolyte allows batteries to switch to lithium metal anodes, says Jeff Chamberlain, a battery researcher at the Joint Center for Energy Storage Research at Argonne National Laboratory in Illinois. “With a solid-state battery you open up the field of metallic anodes, and make possible a big jump in energy density. 
That would be a game changer.” To be commercially viable, solid-state batteries need to work reliably for years in a tough automotive environment. And durability is an issue for researchers. Lithium metal, although excellent at storing large amounts of energy at low volume, is very reactive. And it is prone to forming 'dendrites' — tiny lithium spurs that degrade battery performance and can cause a short circuit if they reach the cathode. With conventional lithium-ion batteries, a chemical separation layer guards against dendrites. Sam Jaffe, managing director of Cairn Energy Research Advisors in Boulder, Colorado, says that solid-state researchers are working with additives in the electrolyte carrier, as well as ceramic shields, in an attempt to block dendrite formation. In the short term, as solid-state science is evolving, the durability issue may not matter as much in applications such as mobile phones, because consumers tend to switch to newer technology in a year or two — whereas cars are kept for a decade or more. “People turn over portable electronics very quickly,” Chamberlain says, “so the ability to inject new technology is higher.” SolidEnergy Systems, headquartered in Waltham, Massachusetts, uses technology developed at MIT and is targeting smartphones. The firm developed an improved electrode material for solid-state batteries that replaces the usual graphite with a very thin film of lithium-metal foil. The electrode's ability to store lithium seems promising (go.nature.com/b3xkhh), and the higher the storage capacity, the more energy the battery can deliver. Something of a hybrid, SolidEnergy's cell maximizes efficiency by using both a solid and an ionic liquid electrolyte — and works at room temperature. A high-performance car battery is promised within the next four years. Commercial solid-state cells for mass-market electric cars are at least a decade away, but they are coming, researchers say. 
Solving the significant technical hurdles has become a central focus for a group of start-up companies and research labs. Most companies, even those that have generated interest from vehicle manufacturers, are still in the developmental stage. Cosmin Laslau at Lux Research, headquartered in Boston, Massachusetts, says that today's best lithium-ion batteries are approaching 250 watt-hours per kilogram (the early Nissan Leaf had only 140 watt-hours per kilogram), an impressive measure of energy density, but they will struggle to reach 350 — a performance goal set by the US Battery Consortium. If solid-state batteries can reach 350, “that's a pretty good sustainable advantage over traditional lithium-ion”, he says. This could be good news for the electric vehicle industry. Donald Sadoway, a materials chemist at MIT, says that achieving such high energy densities is key to widespread adoption of electric vehicles. “If we had batteries with 350 watt-hours per kilogram we'd have EVs with 350 miles of range, and that's the end of petroleum,” he says. But scaling up from a cell that generates results in a bench test to a durable, roadworthy battery pack will be a long road. Menahem Anderman, a physical chemist and chief executive of Total Battery Consulting in Oregon House, California, authored the California Air Resources Board's 2000 report on battery technology. “I have not seen any indications of a significant breakthrough or signs that they have a technology likely to find its way into the mass-produced vehicle inside the next ten years,” he says. “A new development often shows improvement in one or two areas but experiences difficulties in others.” Chamberlain agrees, and he stretches the timeline a bit further. “To be in showroom cars ten years from now, the solid-state cells would essentially have to be leaving the laboratory now, and I don't think that's happening,” he says. 
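The range arithmetic behind these watt-hour figures is easy to sketch. A minimal example, in which the 300 kg pack mass and 300 Wh-per-mile consumption are illustrative assumptions rather than figures from the article:

```python
# Back-of-envelope range estimate from a battery's specific energy.
# All inputs below are assumed values for illustration only.

def range_miles(specific_energy_wh_per_kg, pack_mass_kg, consumption_wh_per_mile):
    """Driving range implied by the pack's total energy and the car's consumption."""
    pack_energy_wh = specific_energy_wh_per_kg * pack_mass_kg
    return pack_energy_wh / consumption_wh_per_mile

# Hypothetical 300 kg pack, 300 Wh/mile consumption:
print(range_miles(250, 300, 300))  # today's best cells -> 250.0 miles
print(range_miles(350, 300, 300))  # the 350 Wh/kg goal -> 350.0 miles
```

Under those assumed inputs, the 350 Wh/kg target works out to roughly the 350-mile range Sadoway describes; real-world results depend heavily on vehicle efficiency and the overhead of pack structure and cooling.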
Still, vehicle manufacturers are interested, whatever the timeframe, as evidenced by a rash of investment and acquisitions over the past few years. In 2014, Volkswagen bought a 5% stake in QuantumScape, an electronics firm in San Jose, California, that is hoping to benefit from developing solid-state technology that could triple the range of its forthcoming electric cars. Car manufacturer Toyota says that cells that it has developed with double the energy density of today's alternatives could provide electric vehicles with a range of 480 kilometres on one charge. It has built prototype cells and even a small scooter powered by the batteries. Positive results have been reported for solid super-ionic electrolytes (Nature Mater. 10, 682–686; 2011). But the company says that cell-energy density is still far below the potential, and production of the batteries is not expected to start until the early 2020s. Battery-development firm Seeo, based in Hayward, California, which was acquired in August by German auto supplier Bosch, is also making solid-state cells. Ulrik Grape, a vice-president at Seeo, says that the company's cells represent “a very durable technology with good cycle life. They're as good or better than current lithium-ion.” He adds that Seeo packs could be half the weight of those in the current Nissan Leaf — the leading battery electric car worldwide in terms of sales. The company also says its cells achieve 350 watt-hours per kilogram in the lab, but the real-world performance could be less. One more player, Solid Power, which is based in the Colorado Technology Center in Louisville and is a spin-off from the University of Colorado at Boulder, says it has made lab-scale cells that have reached 400 to 500 watt-hours per kilogram, and up to 500 charging–discharging cycles of durability. 
Company founder Doug Campbell says that Solid Power is targeting the automotive sector, although its first market may be the armed forces, for use in communications equipment. “Troops in the field can carry 60 pounds of batteries, and if we can cut that in half, it's a strong value proposition for the US military,” he says. For competitive reasons, many of the lab-stage battery companies are keen to emphasize the positive aspects of their research, but not all are forthcoming with sample cells or technical details. Sakti3, based in Ann Arbor, Michigan, and headed by former University of Michigan engineer Ann Marie Sastry, is promising greatly improved energy density and an improved manufacturing process, with weight and cost savings, from its thin-film cells that operate at ambient temperatures. Sastry says that the company, with investments from General Motors and UK-based Dyson, has demonstrated that it can double the density of conventional lithium-ion at the lab scale. But Sakti3 has released little information about its chemistry or its results, so its claims are difficult to verify. Laslau estimates that Sakti3 has reached a fairly impressive 300 watt-hours per kilogram, but Sastry says that the company is not disclosing energy-density results just yet. Cost to manufacture is another important factor. “We do think that solid-state technology will enable much better performance, at lower cost, than the incumbents,” Sastry says. Her company is aiming for cells that cost just $100 per kilowatt-hour, which means around $2,400 for an average-sized electric-vehicle pack, plus costs for packaging, transportation and other factors. Nissan currently sells its lithium-ion packs for $5,499 in the United States, although it may be losing money on every sale. Asked if her cells could enable the goal of a 450-km, $25,000 electric car, Sastry says: “At least.” Scaling up to that level, Sastry admits, will be challenging because of the need to invent manufacturing processes. 
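The pack-cost arithmetic behind Sastry's target can be checked directly. A minimal sketch, assuming a hypothetical 24 kWh pack (roughly the capacity the article's ~$2,400 figure implies; the function name is illustrative):

```python
# Cell cost scales linearly with pack capacity; this ignores the
# packaging, transportation and other costs the article mentions.
def pack_cost_usd(cost_per_kwh_usd, pack_capacity_kwh):
    return cost_per_kwh_usd * pack_capacity_kwh

print(pack_cost_usd(100, 24))  # 2400
```

At $100 per kilowatt-hour, an assumed 24 kWh pack comes to $2,400 in cells, consistent with the figure quoted, and well under the $5,499 Nissan currently charges for a replacement pack.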
But “everything we have done to date can in principle be done at scale”. The 30-kilowatt-hour solid-state lithium metal polymer battery packs in the Bluecars are here now, but the 240-km range is not a huge improvement on current technology. BlueIndy's packs require heating above ambient temperature, and that uses some of the battery's energy and reduces range. Solid-state-battery researchers will surely hit some as-yet unforeseen roadblocks on the way to a commercially durable cell. For cars, conventional lithium-ion probably has at least a decade of dominance remaining, although Sadoway believes that “the rush to deployment would be very, very fast” if stable, high-density solid-state cells were developed. Solid-state batteries, Sadoway adds, “would be preferable, because they're a lot safer”. The lithium-ion fires that have occurred on some aircraft show what can happen when you rush to scale up battery technology without doing your homework first.


News Article | April 2, 2016
Site: www.greencarcongress.com

A team of chemists, led by Xiaohu Xia from Michigan Technological University, has developed an effective method based on seeded growth and chemical etching for the facile synthesis of ruthenium (Ru) nanoframes (NFs) with high purity for use as effective catalysts. A paper on their work is published in the ACS journal Nano Letters. Although this marks the first synthesis of ruthenium nanoframes, the breakthrough is not limited to this one metal. Xia says the process the team developed is more important than the material itself. … Over the past two decades, a variety of noble-metal NFs have been actively developed, including those made of Au, Ag, Pt, Pd, Rh, and a combination of them. … However, to the best of our knowledge Ru NFs have never been reported before. It should be noted that Ru nanostructures have found widespread use in many industrially important catalytic reactions such as hydrogenation, ammonia synthesis, and CO oxidation, which motivated us to explore NFs of Ru and investigate their catalytic properties. In an effort to make better catalysts, Xia brought together a team from the Argonne National Laboratory, University of Texas at Dallas, and Temple University. Haihang Ye, Xia's graduate student, is first author on the paper; undergraduate Joseph Vermeylen also contributed to the work. Ruthenium, a fairly rare noble metal, has joined an elite group of catalyst metals including gold, platinum and palladium. Since the metals are so rare, there is a strong incentive to reduce how much of a catalyst is used in any given process. Usually researchers control shape and size—the key is more surface area. Nanoframes, which are nanoparticles with open centers, have an advantage with their gazebo-like atomic arrangements. 
The crystal structure of a material—i.e. the arrangements of atoms—can be difficult to manipulate. In general, ruthenium nanocrystals adopt the hexagonal close-packed (hcp) structure. But Xia and his team came up with an elegant solution for making ruthenium nanocrystals with another structure: the face-centered cubic (fcc) structure. The process involves two steps, growth and etching. Basically, ruthenium doesn’t naturally grow in a crystal structure that can be made into a nanoframe. Instead, the team grew the nanoframe on a core of palladium, which they later removed. The first step starts with a palladium seed with an fcc structure, and the researchers used the metal as an atomic template. Ruthenium was directed to preferentially grow on the edges and corners of the palladium octahedron. That way, the newly formed ruthenium simply replicated the fcc structure of the palladium seed. The second step is called etching, which removes the palladium seed, leaving behind the hollowed-out ruthenium nanoframe. To ensure the material had catalytic potential, the team ran the ruthenium nanoframes through several diagnostic tests: the reduction of p-nitrophenol by NaBH4 and the dehydrogenation of ammonia borane. While more data is needed to quantify how well ruthenium holds up against existing metal catalysts, Xia says the results from their experiments are promising. Improving the material’s stability is the next line of research. Being such a new catalyst, it needs to be fully characterized and have its limits tested. For example, Xia says they will want to know how thick the nanoframe material can grow and still keep the fcc structure and be effective as a catalyst. Most importantly, once the material has been vetted, researchers will be able to start applying the catalyst to several big challenges. Namely, Xia says that ruthenium nanoframes and other catalysts with unique crystal structures could improve hydrogen fuel production and carbon storage.


News Article | November 16, 2015
Site: motherboard.vice.com

On Nov. 16, 1942, engineers began work on the world's first nuclear reactor, which was then known as an "atomic pile." Chicago Pile-1 was a relatively simple affair devised at the University of Chicago under the supervision of the great physicist Enrico Fermi and brought to life in a makeshift lab underneath the stands of Stagg Field. Crews worked night and day to construct what Fermi described as "a crude pile of black bricks and wooden timbers." It led to a successful nuclear reaction only weeks later. Chicago Pile-1 was of course part of the Manhattan Project. Civilian nuclear power in 1942 would have been a non-sequitur. Fermi was selling CP-1 a bit short. The reactor was a neat arrangement of uranium pellets and graphite bricks. The uranium served as the fuel by emitting neutrons that might go on to collide with the nuclei of other uranium atoms within the pile, which would cause a fission reaction and lead to the release of more neutrons, and these would in turn go on to induce more, secondary reactions. The graphite, meanwhile, served to slow the neutrons down, thus making collisions more likely. The pile had no cooling system, nor did it offer bystanders any protection from radiation. The reaction could be controlled using rods made of cadmium and indium that, when inserted into the reactor, would absorb free neutrons, thus making fission reactions less likely. They would still be up against many bricks of black graphite all working toward the opposite effect. [A painting of the reactor going critical. Image: Department of Energy] "Fermi had convinced Arthur Compton that his calculations were reliable enough to rule out a runaway chain reaction or an explosion," explains a Department of Energy history of the event. "But, as the official historians of the Atomic Energy Commission later noted, the 'gamble' remained in conducting 'a possibly catastrophic experiment in one of the most densely populated areas of the nation!'" Between Nov. 16 and Dec. 
1 the pile grew brick by brick. Workers took 12 hour shifts and, due to the graphite dust smeared everywhere, came to resemble coal miners. On the first, tests revealed that the pile was almost ready for action. The next day, after a steady six hours of reactions, CP-1 went critical. The reaction was self-sustaining. In January, the reactor was moved to a more reasonable location in Illinois' Red Gate Woods, at the site of what would become Argonne National Laboratory. It received an upgrade in the form of a protective radiation shield and was renamed Chicago Pile-2. Not much later it was buried where it stood, obsolete.
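The balance Fermi was controlling can be illustrated with a toy multiplication-factor model: each generation, every free neutron yields k new neutrons on average, so the pile is self-sustaining ("critical") at k = 1, and inserting absorbing rods pushes k below 1. This is a minimal sketch with illustrative numbers, not CP-1 engineering values.

```python
# Toy neutron-balance model: k is the average number of new neutrons
# produced per neutron per generation (the effective multiplication
# factor). k > 1 grows, k = 1 is self-sustaining, k < 1 dies out.

def neutron_population(k, n0=1000, generations=10):
    """Return the neutron count after a number of generations."""
    n = n0
    for _ in range(generations):
        n = int(n * k)
    return n

print(neutron_population(1.006))  # slightly supercritical: slow growth
print(neutron_population(0.99))   # control rods in: population decays
```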


News Article | November 19, 2015
Site: news.mit.edu

Usually, it’s a tradeoff: If you want maximum physical protection, whether from biting predators or exploding artillery shells, that generally compromises your ability to see. But sea-dwelling creatures called chitons have figured out a way around that problem: Tiny eyes are embedded within their tough protective shells, with their transparent lenses made of the same ceramic material as the rest of their shells — and just as tough. These armor-plated eyes could provide a model for protective armor for soldiers or workers in hazardous surroundings, say researchers at MIT, Harvard University, and elsewhere who analyzed the structure and properties of these uniquely hardy optical systems.

Their work is described this week in the journal Science by MIT professor Christine Ortiz; recent MIT graduate and Harvard postdoc Ling Li; recent MIT graduate Matthew Connors; MIT assistant professor Mathias Kolle; Joanna Aizenberg from Harvard University; and Daniel Speiser from the University of South Carolina.

These chitons, a species called Acanthopleura granulata, have hundreds of tiny eyes dotting the surface of their tough shells. The researchers demonstrated that these are true eyes, capable of forming focused images. They also showed that unlike the eyes of almost all other living creatures, which are made primarily of protein, these eyes are made of the mineral aragonite, the same ceramic material as the rest of the creatures’ shells. These mineral-based eyes, Li says, “allow the animal to monitor its environment with the protective armor shell. The majority of the shell is opaque, and only the eyes are transparent.”

Unlike most mollusks, chitons’ shells are made of eight overlapping plates, which allow them some flexibility. The little creatures — about the size of a potato chip, and resembling prehistoric trilobites — are found in many parts of the world, but are little noticed, as they resemble the rocks they adhere to.
Since these chitons live in the intertidal zone — meaning they are sometimes underwater and sometimes exposed to air — their eyes need to be able to work in both environments. Using experimental measurements and theoretical modeling, the team was able to show that the eyes can focus light and form images within the photoreceptive chamber underneath the lens in both air and water.

The team used specialized high-resolution X-ray tomography equipment at Argonne National Laboratory to probe the 3-D architecture of the tiny eyes, which are each less than a tenth of a millimeter across. Using other material characterization techniques, they were able to determine the size, shape, and crystal orientation of the crystalline grains that make up these lenses — critical to understanding their optical performance, Li says.

While others had long ago noted the chitons’ tiny eyes, it had not been demonstrated that they were capable of forming focused images, as opposed to being simple photoreceptive areas. “A lot of people thought the eyes were so small, there was no way this small lens would be capable of forming an image,” Connors says. But the team isolated some of these lenses and “we were able to produce images,” he says.

Ultimately, the research could lead to the design of bio-inspired materials that provide both physical protection and optical visibility at the same time. “Can we design some kind of structural material,” Li asks, “with additional capabilities for monitoring the environment?” That remains to be seen, but the new understanding of how Acanthopleura granulata accomplishes that trick should provide some helpful clues, he says.
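A rough thin-lens calculation shows why a mineral lens can form images in both media: the focal length scales with the index contrast between the lens and its surroundings, so the same lens still converges light underwater, just to a more distant focus. The refractive indices and radius below are assumed round numbers (aragonite is birefringent, with indices roughly in the 1.5–1.7 range), not measurements from the Science paper.

```python
# Thin plano-convex lens: f = R / (n_lens / n_medium - 1).
# All numbers are illustrative assumptions, not data from the paper.

def focal_length_um(radius_um, n_lens, n_medium):
    """Focal length (micrometers) of a thin plano-convex lens in a medium."""
    return radius_um / (n_lens / n_medium - 1.0)

R = 25.0  # assumed radius of curvature, micrometers
print(f"in air:   {focal_length_um(R, 1.53, 1.00):.0f} um")
print(f"in water: {focal_length_um(R, 1.53, 1.33):.0f} um")
```

The point of the sketch: because aragonite's index exceeds that of water, the lens still focuses when submerged, but the focal length grows severalfold, consistent with the image forming at different depths in the photoreceptive chamber.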
“High-resolution structure and property studies of the chiton system provide fascinating discoveries into materials-level tradeoffs imposed by the disparate functional requirements, in this case protection and vision, and are key to extracting design principles for multifunctional bio-inspired armor,” says Ortiz, the Morris Cohen Professor of Materials Science and Engineering and MIT’s dean for graduate education. Peter Fratzl, a professor of biomaterials at the Max Planck Institute of Colloids and Interfaces in Potsdam, Germany, who was not involved in this research, says, “This is a truly impressive example of a multifunctional material.” “In many instances, materials represent a compromise between conflicting properties, and materials development is often targeted at finding the best possible compromise,” Fratzl adds. “In this paper, the MIT-Harvard collaboration shows that chitons, a variety of mollusks that lives on rocks, has a visual system fully integrated in its armor. It is really astonishing to see how minerals can be used at the same time to focus light and to provide mechanical protection.” The work was funded by the Department of Defense, the Army Research Office through the MIT Institute for Soldier Nanotechnologies, the National Science Foundation, and the Department of Energy.


News Article | August 22, 2016
Site: news.mit.edu

Lithium-air batteries are considered highly promising technologies for electric cars and portable electronic devices because of their potential for delivering a high energy output in proportion to their weight. But such batteries have some pretty serious drawbacks: They waste much of the injected energy as heat and degrade relatively quickly. They also require expensive extra components to pump oxygen gas in and out, in an open-cell configuration that is very different from conventional sealed batteries.

But a new variation of the battery chemistry, which could be used in a conventional, fully sealed battery, promises theoretical performance similar to that of lithium-air batteries, while overcoming all of these drawbacks. The new battery concept, called a nanolithia cathode battery, is described in the journal Nature Energy in a paper by Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering at MIT; postdoc Zhi Zhu; and five others at MIT, Argonne National Laboratory, and Peking University in China.

One of the shortcomings of lithium-air batteries, Li explains, is the mismatch between the voltages involved in charging and discharging the batteries. The batteries’ output voltage is more than 1.2 volts lower than the voltage used to charge them, which represents a significant energy loss incurred in each charging cycle. “You waste 30 percent of the electrical energy as heat in charging. … It can actually burn if you charge it too fast,” he says.

Conventional lithium-air batteries draw in oxygen from the outside air to drive a chemical reaction with the battery’s lithium during the discharging cycle, and this oxygen is then released again to the atmosphere during the reverse reaction in the charging cycle. In the new variant, the same kind of electrochemical reactions take place between lithium and oxygen during charging and discharging, but they take place without ever letting the oxygen revert to a gaseous form.
Instead, the oxygen stays inside the solid and transforms directly between its three redox states, while bound in the form of three different solid chemical compounds, Li2O, Li2O2, and LiO2, which are mixed together in the form of a glass. This reduces the voltage loss by a factor of five, from 1.2 volts to 0.24 volts, so only 8 percent of the electrical energy is turned to heat. “This means faster charging for cars, as heat removal from the battery pack is less of a safety concern, as well as energy efficiency benefits,” Li says.

This approach helps overcome another issue with lithium-air batteries: As the chemical reaction involved in charging and discharging converts oxygen between gaseous and solid forms, the material goes through huge volume changes that can disrupt electrical conduction paths in the structure, severely limiting its lifetime.

The secret to the new formulation is creating minuscule particles, at the nanometer scale (billionths of a meter), which contain both the lithium and the oxygen in the form of a glass, confined tightly within a matrix of cobalt oxide. The researchers refer to these particles as nanolithia. In this form, the transitions between LiO2, Li2O2, and Li2O can take place entirely inside the solid material, he says. The nanolithia particles would normally be very unstable, so the researchers embedded them within the cobalt oxide matrix, a sponge-like material with pores just a few nanometers across. The matrix stabilizes the particles and also acts as a catalyst for their transformations.

Conventional lithium-air batteries, Li explains, are “really lithium-dry oxygen batteries, because they really can’t handle moisture or carbon dioxide,” so these have to be carefully scrubbed from the incoming air that feeds the batteries. “You need large auxiliary systems to remove the carbon dioxide and water, and it’s very hard to do this.” But the new battery, which never needs to draw in any outside air, circumvents this issue.
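The efficiency figures quoted above follow directly from the charge/discharge voltage gap: the fraction of charging energy lost as heat is roughly the gap divided by the charging voltage. The nominal charging voltages below are assumptions chosen to reproduce the article's roughly 30 percent and 8 percent numbers, not values taken from the paper.

```python
# Round-trip heat loss from the voltage gap between charge and discharge.
# Charging voltages (4.0 V, 3.0 V) are assumed nominal values.

def heat_fraction(charge_v, gap_v):
    """Approximate fraction of charging energy dissipated as heat."""
    return gap_v / charge_v

print(f"Li-air:     {heat_fraction(4.0, 1.2):.0%} of charging energy lost as heat")
print(f"nanolithia: {heat_fraction(3.0, 0.24):.0%}")
```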
The new battery is also inherently protected from overcharging, the team says, because the chemical reaction in this case is naturally self-limiting — when overcharged, the reaction shifts to a different form that prevents further activity. “With a typical battery, if you overcharge it, it can cause irreversible structural damage or even explode,” Li says. But with the nanolithia battery, “we have overcharged the battery for 15 days, to a hundred times its capacity, but there was no damage at all.” In cycling tests, a lab version of the new battery was put through 120 charging-discharging cycles, and showed less than a 2 percent loss of capacity, indicating that such batteries could have a long useful lifetime. And because such batteries could be installed and operated just like conventional solid lithium-ion batteries, without any of the auxiliary components needed for a lithium-air battery, they could be easily adapted to existing installations or conventional battery pack designs for cars, electronics, or even grid-scale power storage. Because these “solid oxygen” cathodes are much lighter than conventional lithium-ion battery cathodes, the new design could store as much as double the amount of energy for a given cathode weight, the team says. And with further refinement of the design, Li says, the new batteries could ultimately double that capacity again. All of this is accomplished without adding any expensive components or materials, according to Li. The carbonate they use as the liquid electrolyte in this battery “is the cheapest kind” of electrolyte, he says. And the cobalt oxide component weighs less than 50 percent of the nanolithia component. Overall, the new battery system is “very scalable, cheap, and much safer” than lithium-air batteries, Li says. The team expects to move from this lab-scale proof of concept to a practical prototype within about a year. 
“This is a foundational breakthrough, which may shift the paradigm of oxygen-based batteries,” says Xiulei Ji, an assistant professor of chemistry at Oregon State University, who was not involved in this work. “In this system, commercial carbonate-based electrolyte works very well with solvated superoxide shuttles, which is quite impressive and may have to do with the lack of any gaseous O2 in this sealed system. All active masses of the cathode throughout cycling are solid, which presents not only large energy density but compatibility with the current battery manufacturing infrastructure.”

The research team included MIT research scientists Akihiro Kushima and Zongyou Yin; Lu Qi of Peking University; and Khalil Amine and Jun Lu of Argonne National Laboratory in Illinois. The work was supported by the National Science Foundation and the U.S. Department of Energy.


News Article | December 21, 2015
Site: www.cemag.us

A team of scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory, Northwestern University and Stony Brook University has, for the first time, created a two-dimensional sheet of boron -- a material known as borophene. Scientists have been interested in two-dimensional materials for their unique characteristics, particularly involving their electronic properties. Borophene is an unusual material because it shows many metallic properties at the nanoscale even though three-dimensional, or bulk, boron is nonmetallic and semiconducting. Because borophene is both metallic and atomically thin, it holds promise for possible applications ranging from electronics to photovoltaics, said Argonne nanoscientist Nathan Guisinger, who led the experiment. “No bulk form of elemental boron has this metal-like behavior,” he said. The study will be published in the Dec. 18 issue of the journal Science. Like its periodic table neighbor carbon, which appears in nature in forms ranging from humble graphite to precious diamond, boron wears a number of different faces, called allotropes. But that’s where the similarities end. While graphite is composed of stacks of two-dimensional sheets that can be peeled off one at a time, there is no such analogous process for making two-dimensional boron. “Borophenes are extremely intriguing because they are quite different from previously studied two-dimensional materials,” Guisinger said. “And because they don’t appear in nature, the challenge involved designing an experiment to produce them synthetically in our lab.” Although at least 16 bulk allotropes of boron are known, scientists had never before been able to make a whole sheet, or monolayer, of borophene. “It’s only in the recent past that researchers have been able to make tiny bits of boron at the nanoscale,” said Andrew Mannix, a Northwestern graduate student and first author of the study. 
“This is a brand new material with exciting properties that we are just beginning to investigate.” “Boron has a rich and storied history and a very complicated chemistry,” added Mark Hersam, the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering and Applied Science, who helped advise Mannix. “This is something that could have easily not worked, but Andy had the courage and persistence to make it happen.”

One of boron’s most unusual features is its atomic configuration at the nanoscale. While other two-dimensional materials look more or less like perfectly smooth and even planes at the nanoscale, borophene looks like corrugated cardboard, buckling up and down depending on how the boron atoms bind to one another, according to Mannix. The “ridges” of this cardboard-like structure result in a material phenomenon known as anisotropy, in which a material’s mechanical or electronic properties -- like its electrical conductivity -- become directionally dependent. “This extreme anisotropy is rare in two-dimensional materials and has not been seen before in a two-dimensional metal,” Mannix said.

Based on theoretical predictions of borophene’s characteristics, the researchers also noticed that it likely has a higher tensile strength than any other known material. Tensile strength refers to the ability of a material to resist breaking when it is pulled apart. “Other two-dimensional materials have been known to have high tensile strength, but this could be the strongest material we’ve found yet,” Guisinger said.

The discovery and synthesis of borophene was aided by computer simulation work led by Stony Brook researchers Xiang-Feng Zhou and Artem Oganov, who is currently affiliated with the Moscow Institute of Physics and Technology and the Skolkovo Institute of Science and Technology. Oganov and Zhou used advanced simulation methods that showed the formation of the crinkles of the corrugated surface.
“Sometimes experimentalists find a material and they ask us to solve the structure, and sometimes we do predictions first and the experiment validates what we find,” Oganov said. “The two go hand-in-hand, and in this international collaboration we had a bit of both.” “The connection we have between the institutions allows us to achieve things that we couldn’t do alone,” Hersam added. “We needed to combine scanning tunneling microscopy with X-ray photoelectron spectroscopy and transmission electron microscopy to both obtain a view of the surface of the material and verify its atomic-scale thickness and chemical properties.” As they grew the borophene monolayer, the researchers discovered another advantage within their experimental technique. Unlike previous experiments that used highly toxic gases in the production of nanoscale boron-based materials, this experiment involved a non-toxic technique called electron-beam evaporation, which essentially vaporizes a source material and then condenses a thin film on a substrate -- in this case, boron on silver. “When we did our theoretical work, I had doubts as to the feasibility of obtaining two-dimensional boron because boron likes to form clusters, and ironing it out into two-dimensions I thought would be challenging,” Oganov said. “It turned out that growing on the substrate was key, because the boron and silver turn out not to react with each other.” The experimental work was funded by the DOE’s Office of Science and was performed at Argonne’s Center for Nanoscale Materials, a DOE Office of Science user facility, and at the Northwestern University Materials Research Center.


News Article
Site: www.asminternational.org

A team of chemists, led by Xiaohu Xia from Michigan Technological University, Houghton, found a better way to make metal nanoframe catalysts. The study shows how researchers made a catalyst out of the noble metal ruthenium for the first time. The effort brought together a team from Michigan Tech, Argonne National Laboratory, University of Texas at Dallas, and Temple University.

The team's breakthrough is not limited to ruthenium, however; Xia says the process they developed is what matters most. "We are fine-tuning the surface, size, shape and crystal structure," he says. "Our goal is to increase their catalytic activity while reducing the usage of this precious material."

The key is more surface area. Nanoframes, which are nanoparticles with open centers, have an advantage with their gazebo-like atomic arrangements. Xia adds that crystal structure matters as much as shape: "because catalytic reactions occur only on the surface of materials, the surface atom arrangement has a great impact on determining the catalytic activity."

In general, ruthenium nanocrystals adopt the hexagonal close-packed (hcp) structure. But Xia and his team came up with an elegant solution for making ruthenium nanocrystals with another structure: the face-centered cubic (fcc) structure. The process involves two steps, growth and etching. Basically, hcp ruthenium doesn't naturally grow in a crystal structure that can be made into a nanoframe. So the team directed ruthenium growth on a palladium seed with an fcc structure, which the ruthenium replicated. Then they removed the palladium core, leaving behind an fcc ruthenium nanoframe.

To ensure the material had catalytic potential, the ruthenium nanoframes were run through several diagnostic tests: p-nitrophenol reduction by NaBH4 and dehydrogenation of ammonia borane. While more data is needed to quantify how well ruthenium holds up against existing metal catalysts, Xia says the results from their experiments are promising.
Once the material has been vetted, researchers will be able to start applying the catalyst to several big challenges. Namely, Xia says that ruthenium nanoframes and other catalysts with unique crystal structures could improve hydrogen fuel production and carbon storage. Image caption — Using a two-part process, ruthenium was turned into a nanoframe—a promising metal catalyst.


No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment. Human HOIP and HOIL-1L cDNA were purchased from Open Biosystems (clone IDs 4653017 and 3877587, respectively). HOIP RBR (residues 696–1,072), HOIP RING2L (residues 853–1,072) and full-length HOIL-1L were cloned into the pET-NKI-6xHis-3C-LIC vector30 coding for an N-terminal 6×His tag with a 3C protease cleavage site. HOIP UBA–RBR (residues 475–1,072) was cloned into a pET-NKI-6×His-eGFP-3C-LIC vector that codes for a 3C-cleavable His-tagged enhanced green fluorescent protein (eGFP) followed by the HOIP sequence. Human UbcH5B and Cdc34 DNA were a gift from M. Petroski. Coding sequences for UbcH13 and Uev1a were extracted from a human cDNA library (Agilent Megaman). For crystallization, UbcH5B (residues 2–147) with the mutations S22R (to prevent backside ubiquitin binding31) and C85K (to enable covalent ubiquitin linkage21) was cloned into the pET-NKI-6×His-3C-LIC vector. UbcH5B without S22R and C85K mutations (used for enzymatic assays), Cdc34, UbcH13 and Uev1a were cloned into the same vector. Untagged mono-ubiquitin with native N and C termini, used for crystallization and linear ubiquitination assays, was cloned into the pET29 vector (Novagen) using NdeI/XhoI restriction sites. N-terminally blocked mono-ubiquitin used for thioester assays was cloned in the pET-NKI-6×His-3C-LIC vector. Untagged linear di-ubiquitin was cloned with overlap extension PCR and ligated into the pET29 vector (Novagen) using NdeI/XhoI restriction sites. N- and C-terminally blocked di-ubiquitin with an N-terminal His tag and a C-terminal Ala–Ser sequence was cloned into the pET-NKI-6×His-3C-LIC vector. Human ubiquitin-activating enzyme E1 (Ube1) was cloned into a pET28 vector resulting in an N-terminal His tag.
For NF-κB assays, full-length HOIP with an N-terminal Flag tag and HOIL-1L with an N-terminal myc tag were cloned into pcDNA3.1(+) (Invitrogen) using EcoRI/NotI restriction sites. Mutations in UbcH5B, ubiquitin and HOIP were introduced using standard site-directed mutagenesis techniques. All proteins were expressed in BL21(DE3) E. coli after induction with 0.5 mM IPTG overnight at 20 °C. For expression of HOIP and HOIL-1L constructs, 0.5 mM ZnCl2 was added to the cultures before induction. Bacteria were harvested by centrifugation, lysed by addition of lysozyme and sonication in the presence of protease inhibitors (PMSF and leupeptin) and DNase. Lysates were cleared by centrifugation and His-tagged proteins were initially purified using Ni-NTA agarose (Qiagen). For HOIP RBR used for crystallization, as well as UbcH5B, Cdc34, UbcH13, Uev1a, wild-type ubiquitin (used to generate K48-linked di-ubiquitin) and HOIL-1L, His tags were removed by addition of 3C protease overnight at 4 °C. HOIP RBR and HOIL-1L were further purified using Superdex 200 10/300 GL or HiLoad 16/600 Superdex 200 pg size-exclusion chromatography columns (GE Healthcare) equilibrated in protein buffer (10 mM HEPES pH 7.9, 100 mM NaCl). UbcH5B used for biochemical assays was further purified on a Superdex 75 10/300 GL size-exclusion chromatography column (GE Healthcare) equilibrated in protein buffer. HOIP mutants for activity assays, and Cdc34, UbcH13 and Uev1a were desalted into protein buffer directly after Ni-NTA purification using PD MidiTrap G-25 desalting columns (GE Healthcare). Ube1 for biochemical assays was further purified using ion-exchange chromatography (Source Q) in 10 mM HEPES pH 7.9, 10 mM NaCl and eluted with a gradient from 10–500 mM NaCl.
N-terminally His-tagged (di-)ubiquitin was purified using Ni-NTA as described above followed by size-exclusion chromatography using a Superdex 75 10/300 GL column (GE Healthcare) equilibrated in protein buffer or buffer exchange into protein buffer using PD MidiTrap G-25 desalting columns. To purify untagged mono- or di-ubiquitin, 0.5 mM EDTA and 100 mM sodium acetate pH 4.5 were added to the bacterial lysates and lysates were cleared by centrifugation, diluted sevenfold with 50 mM sodium acetate pH 4.5 and applied to a Source S 10/100 ion exchange column (GE Healthcare) equilibrated in 50 mM sodium acetate pH 4.5. Ubiquitin was eluted with a 0–500 mM NaCl gradient and further purified by size-exclusion chromatography on a Superdex 75 10/300 GL column (GE Healthcare) equilibrated in protein buffer. His-eGFP-HOIP was purified using size-exclusion chromatography as described for HOIP RBR, followed by 3C cleavage and removal of His-eGFP via a second round of size-exclusion chromatography. All proteins were generally flash frozen in liquid nitrogen in small aliquots and stored at −80 °C. UbcH5B~ubiquitin linkage was performed based on published methods21. Briefly, Ube1, UbcH5B(S22R/C85K) and ubiquitin were mixed and buffer exchanged into 50 mM Tris pH 10, 150 mM NaCl using PD-10 desalting columns (GE Healthcare). 10 mM MgCl2, 5 mM ATP and 1 mM TCEP were added and the protein solution was incubated at 37 °C for 16 h. The completeness of the reaction was monitored using SDS–PAGE and covalently linked UbcH5B~ubiquitin was purified from unreacted proteins and Ube1 using a Superdex 75 10/300 GL size-exclusion chromatography column (GE Healthcare) equilibrated in protein buffer. HOIP RBR was mixed with a 1.3-fold molar excess of UbcH5B~ubiquitin and applied to a Superdex 200 10/300 GL size-exclusion chromatography column equilibrated in protein buffer.
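As a hypothetical illustration of setting up the 1.3-fold molar excess used above for complex formation, the required protein masses can be computed from amounts in nanomoles and approximate molecular weights. The molecular weights and amounts here are rough estimates for illustration, not values reported in the paper.

```python
# Convert nanomoles to milligrams: mg = nmol * (kDa) / 1000,
# since 1 nmol of a 1 kDa protein weighs 1 ug.

def mass_mg(nmol, mw_kda):
    """Mass in milligrams for a given amount (nmol) and MW (kDa)."""
    return nmol * mw_kda / 1000.0

hoip_nmol = 20.0          # assumed target amount of HOIP RBR
excess = 1.3              # molar excess of UbcH5B~ubiquitin from the protocol
mw_hoip_rbr_kda = 43.0    # assumed: rough MW of HOIP residues 696-1,072
mw_e2_ub_kda = 25.5       # assumed: UbcH5B (~17 kDa) + ubiquitin (~8.5 kDa)

e2_ub_nmol = hoip_nmol * excess
print(f"HOIP RBR:         {mass_mg(hoip_nmol, mw_hoip_rbr_kda):.2f} mg")
print(f"UbcH5B~ubiquitin: {mass_mg(e2_ub_nmol, mw_e2_ub_kda):.2f} mg")
```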
Complex formation and purity were confirmed using SDS–PAGE, and complex-containing fractions were pooled and concentrated to ~12 mg ml−1 for crystallization. Crystallization was performed using the vapour diffusion technique in sitting drop MRC 96-well plates (Molecular Dimensions). Initial crystals were obtained by mixing HOIP/UbcH5B~ubiquitin complex solution with an equimolar amount of free ubiquitin in the Morpheus Screen (Molecular Dimensions). Subsequently, 2 μl of the protein complex were mixed with 0.6 μl reservoir solution (0.1 M Morpheus Buffer 3 pH 8.5 (Tris/Bicine), 0.12 M Morpheus Alcohols Mix (0.02 M each of 1,6-hexanediol; 1-butanol; 1,2-propanediol (racemic); 2-propanol; 1,4-butanediol; 1,3-propanediol), 30% Morpheus P550MME_P20K mix (20% PEG550MME, 10% PEG20K) and 8% glycerol) in MRC 48-well plates (Molecular Dimensions). Crystals appeared after about one week at 12 °C and were cryo-cooled and evaluated on a rotating anode X-ray generator (Rigaku FR-E superbright). Seeding and dehydration of the crystals were performed to improve crystal diffraction. For successful dehydration, reservoir solution was slowly added to the protein drop (3 × 0.5 μl within ~2 h) and the drop was subsequently equilibrated overnight at 12 °C against a reservoir solution with increased P550MME_P20K concentration, prepared by adding 11 μl 60% Morpheus P550MME_P20K stock solution to 50 μl reservoir solution. The new reservoir solution was then slowly added to the protein drop (3 × 0.5 μl, followed by 2 × 1 μl with removal of 1 μl each in the last steps). After further overnight equilibration, crystals were harvested from the drop and directly cryo-cooled in a cryogenic nitrogen stream at 100 K. Crystals diffracted in-house to 4–6 Å. Complete diffraction data were measured at 100 K at beamline 23ID-D of the General Medical Sciences and Cancer Institutes Structural Biology Facility at the Advanced Photon Source (GM/CA @ APS), Argonne National Laboratory.
Despite their size (common dimensions of ~200 × 140 × 100 μm3) crystals exhibited substantial inhomogeneity resulting in split and smeared diffraction spots. Using raster scans32, a suitable region for data collection could be identified at the edge of the crystal. Using a small (20 μm diameter) beam, split spots could be separated to allow reliable indexing and integration. Utilization of a small beam necessitated higher flux to retain reliable diffraction. To mitigate the radiation damage, the total dose was distributed over a 100-μm stretch of the crystal by using the ‘helical’ mode of ‘vector’ data collection as implemented in JBluIce33. Data were measured at 1.282 Å wavelength with a Pilatus3 6M pixel array detector with a 1-mm-thick sensor (Dectris). Data were collected from a single crystal and indexed, integrated and scaled in XDS/XSCALE34. Data were further processed using AIMLESS35 from the CCP4 suite36 with a resolution cut-off of 3.48 Å, resulting in a CC1/2 of 0.648 in the highest resolution shell. Phasing was carried out in Phaser37 using an MR-SAD protocol as implemented in PHENIX38. For this, independent molecular replacement searches were initially performed for the RING2L domain of HOIP (from PDB: 4LJP (ref. 14)), UbcH5B (from PDB: 3A33 (ref. 39)), and ubiquitin (from PDB: 4LJP (ref. 14)) with the four C-terminal residues deleted. Various ambiguous solutions were identified that could not be separated, and Zn2+ sites could not be identified using MR-SAD due to incompleteness of the resultant models. However, manual inspection revealed that some MR solutions contained ubiquitin oriented near identically to the symmetry-related donor ubiquitin observed in the HOIP RING2L/ubiquitin-ubiquitin transfer complex (PDB: 4LJP (ref. 14)). Based on this observation, a trimmed search model was created that contained a complex of the core of HOIP RING2L (with residues 906–924 and 949–999 removed) and C-terminally truncated ubiquitin.
An MR search using this model found a single solution for two copies of the complex. After successful iterative searches for two UbcH5B molecules and two further ubiquitin molecules, MR-SAD using Phaser identified 15 distinct Zn2+ sites including the known Zn2+ sites in the RING2L domain of HOIP. Further molecular replacement in Phaser using a single unit of the initial HOIP RING2L/UbcH5B~ubiquitin complex (without the additional second ubiquitin), and the NMR structure of HOIP IBR (zinc atoms removed, deposited in Protein Data Bank40 under PDB accession number 2CT7, unpublished) correctly placed a single HOIP IBR domain, which was then manually copied to the other NCS-related HOIP in the asymmetric unit. For molecular replacement of the HOIP RING1, Sculptor41 was used to generate a search model based on the structure of the RING1 domain of HHARI (PDB: 4KBL (ref. 11)). However, Phaser was not able to correctly place this domain, probably owing to the low sequence conservation of only 27% identity. However, since mutational analysis of HOIP suggested that the RING/E2 interaction is preserved between RING-type E3 ligases and RBR-type E3 ligases5, we overlaid the E2 of the published RNF4–RING/UbcH5A~ubiquitin structure (PDB: 4AP4 (ref. 21)) with the E2 in our structure and then used this overlay to add the RING1 model generated by Sculptor. This overlay placed the HOIP RING1 Zn2+-coordinating residues near the last remaining free Zn2+ ions found earlier by Phaser MR-SAD, indicating correct placement of the RING1 domain. In the final round of molecular replacement, the two additional ubiquitin (Ub ) molecules were reinstated at the RING1–IBR interface. At this stage, Refmac42 was used for refinement using settings optimized for low-resolution refinement43 including ‘jelly body refinement’ and Babinet scaling. ProSMART44 was used to generate external restraints against high-resolution structures (PDB: 4LJO (ref. 14) for HOIP RING2L and ubiquitin, and PDB: 2ESK (ref. 
45) for UbcH5B). After this, clear extra electron density became visible for the unmodelled helical linker regions of the RING1–IBR and IBR–RING2L transitions and for other regions omitted in the initial search models. Further model building and refinement was manually performed in Coot46 and Refmac. During refinement additional clear positive difference map electron density became visible and Phaser was used to place one additional UbcH5B molecule (UbcH5B ) into this density. TLS restraints were generated using the TLSMD server47 and NCS restraints were used throughout refinement. One overall B-factor was refined in Refmac. In later rounds of refinement the PDB_REDO server48 was used for refinement optimization and MolProbity49 was used for structure validation. Data processing and refinement statistics are summarized in Extended Data Fig. 2b. Ramachandran statistics were calculated using MolProbity and 94.8% of all residues are in favoured regions, 4.9% in allowed regions and 0.3% are outliers. The final structure has a MolProbity score of 1.75 (100th percentile). In the final structure the two HOIP RBR molecules (see also Extended Data Fig. 3) are defined by electron density from residues 699 to 707, 711 to 948, 969 to 991, and 996 to 1,011 (chain A) and 699 to 754, 760 to 957, 967 to 1,015, 1,019 to 1,035 and 1,054 to 1,066 (chain B). The catalytic UbcH5B~ubiquitin conjugates are defined from UbcH5B residues 3 to 147 and ubiquitin residues 1 to 76 (chains C and E), and UbcH5B residues 2 to 147 and ubiquitin residues 1 to 76 (chains D and F). The allosteric ubiquitin chains (chains G and H) are defined from residues 1 to 76 and the additional UbcH5B (chain I) is defined from residues 2 to 146. PHENIX was used to calculate simulated annealing (SA) composite omit maps and feature enhanced maps (FEM). All molecular figures were prepared in PyMOL (Schrödinger, LLC). 
K48-linked and K63-linked ubiquitin chains were formed through a linkage-specific enzymatic reaction using Cdc34 and UbcH13/Uev1a E2 ubiquitin-conjugating enzymes, respectively, as described in the literature50. Ubiquitin chains were separated using ion-exchange chromatography as described above for purification of mono-ubiquitin. Purified K48-linked di-ubiquitin was directly desalted into protein buffer using PD-10 desalting columns, whereas K63-linked di-ubiquitin was further purified on a Superdex 75 10/300 GL size-exclusion chromatography column equilibrated in protein buffer. Native ubiquitin without additional residues was used to generate di-ubiquitin chains for ITC experiments, whereas N-terminally blocked ubiquitin was used to form K48-linked di-ubiquitin for testing allosteric activation of HOIP RBR. Linear ubiquitin formation assays were performed in 50 mM HEPES pH 7.9, 100 mM NaCl, 10 mM MgCl2 and 0.6 mM DTT using 200 nM E1, 1 μM UbcH5B, 1 μM HOIP RBR or HOIP RING2L and 40 μM untagged ubiquitin. Reactions were started by addition of 10 mM ATP and were incubated at 30 °C for 2 h. Samples were taken at the indicated time points, treated with 50 mM sodium acetate pH 4.5 as described previously6, mixed with SDS sample buffer and analysed by SDS–PAGE using 12% Bolt Bis-Tris gels (Life Technologies). Proteins were visualized with Coomassie Brilliant Blue dye. To test the activating effect of linear di-ubiquitin on auto-inhibited HOIP UBA–RBR, 5 μM HOIP UBA–RBR was pre-incubated with N- and C-terminally blocked linear di-ubiquitin or HOIL-1L at the indicated concentrations before addition of the remaining assay components. Samples were taken after 60 min and subsequently treated as described above.
To monitor ubiquitin transfer from UbcH5B to HOIP (HOIP~ubiquitin thioester formation), Ube1 (100 nM), UbcH5B (4 μM) and N-terminally blocked ubiquitin (32 μM) were mixed in 50 mM HEPES pH 7.9, 100 mM NaCl, 10 mM MgCl2 and 5 mM ATP and incubated at 25 °C for 5 min, at which point 2 μM HOIP RBR was added. Samples were taken 10 s after HOIP addition, quenched by addition of pre-heated SDS protein-loading buffer without DTT, and run on a 12% SDS–PAGE gel (Life Technologies). The 10-s time point was determined empirically with a time-course experiment (Extended Data Fig. 9g). Gels were stained with Coomassie Brilliant Blue dye and scanned on a LI-COR Odyssey scanner using the 700 nm (red) channel. For the thioester transfer assay shown in Fig. 3d, 200 nM Ube1, 2 μM UbcH5B, 8 μM HOIP RBR, 8 μM N-terminally blocked ubiquitin and 10 mM ATP were used and samples were taken after 30 s. In addition, proteins were transferred to a PVDF membrane and ubiquitin was visualized on a LI-COR Odyssey scanner at 800 nm using an anti-ubiquitin antibody (P4D1, Santa Cruz, 1:200 dilution in TBST (50 mM Tris pH 7.4, 150 mM NaCl, 0.05% Tween-20)) followed by an IRDye 800CW secondary antibody (LI-COR, 1:10,000 dilution in TBST). All quantitative experiments shown in graphs were performed in triplicate, and band intensities were quantified using the ImageStudio software (LI-COR). HOIP thioester transfer activity was calculated as the fraction of HOIP~ubiquitin relative to total HOIP for each mutant and normalized against the thioester transfer activity of wild-type HOIP. Data were analysed in GraphPad Prism using a two-tailed unpaired Student’s t-test or one-way ANOVA followed by Tukey’s post hoc test. To test the allosteric activation of HOIP RBR by linear di-ubiquitin, a modified ubiquitin transfer assay was performed. HOIP RBR was pre-incubated with N- and C-terminally blocked linear di-ubiquitin at the indicated final concentrations for 5 min at 25 °C.
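The quantification described above (thioester-loaded HOIP as a fraction of total HOIP, normalized to wild type) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis script; the function names and example band intensities are hypothetical.

```python
# Illustrative only: function names and example band intensities are
# hypothetical; real intensities come from gel quantification (ImageStudio).

def thioester_activity(hoip_ub_band, hoip_band):
    """Fraction of HOIP present as the HOIP~ubiquitin thioester."""
    return hoip_ub_band / (hoip_ub_band + hoip_band)

def normalized_activity(mutant_bands, wt_bands):
    """Mutant thioester activity relative to wild-type HOIP (wild type = 1.0)."""
    return thioester_activity(*mutant_bands) / thioester_activity(*wt_bands)

# Hypothetical band intensities (HOIP~ubiquitin band, unmodified HOIP band):
wt = (600.0, 400.0)      # wild type: 60% of HOIP is thioester-loaded
mutant = (150.0, 850.0)  # mutant: 15% loaded

print(round(normalized_activity(mutant, wt), 2))  # 0.25
```

In practice this ratio would be computed per replicate, and the triplicate values would then feed into the t-test or ANOVA mentioned above.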
At the same time, Ube1, UbcH5B, ubiquitin and ATP were premixed and incubated for 5 min at 25 °C, resulting in fully loaded UbcH5B~ubiquitin. Both mixtures were subsequently mixed together, resulting in final concentrations of 100 nM Ube1, 2 μM UbcH5B, 8 μM N-terminally blocked ubiquitin and 2 μM HOIP RBR in the final buffer described for the standard ubiquitin transfer assay. Samples were taken after 30 s and further treated as described for the standard transfer assay. A 30-s time point was determined to give the best results in this assay, in which lower E2 and mono-ubiquitin concentrations were used, resulting in an overall slower reaction rate. The experiments comparing the effects of linear versus K48-linked di-ubiquitin (Extended Data Fig. 9e) were performed similarly, with the difference that all samples were incubated with apyrase (Sigma) for 5 min to deplete ATP before addition of HOIP/di-ubiquitin and prevent E2-loading of K48-linked di-ubiquitin, which features a free C terminus on one of the ubiquitin units. Sedimentation equilibrium experiments were performed in a ProteomeLab XL-I (Beckman Coulter) analytical ultracentrifuge. HOIP RBR/UbcH5B~ubiquitin as used for crystallization was loaded into a 6-channel equilibrium cell at 5.0, 2.5 and 1.25 μM concentration and centrifuged at 10,000 r.p.m., 20 °C in an An-50 Ti 8-place rotor until equilibrium was achieved. Data were analysed using HeteroAnalysis software (J. L. Cole and J. W. Lary, University of Connecticut; http://www.biotech.uconn.edu/auf/). ITC experiments were performed on an ITC200 calorimeter (Microcal). Aliquots (2 μl each) of 500–650 μM UbcH5B~ubiquitin or di-ubiquitin solution were injected into the cell containing 40–50 μM HOIP RBR or HOIP RBR/di-ubiquitin complexes. The experiments were performed at 23 °C in buffer containing 10 mM HEPES pH 7.9, 100 mM NaCl. 
For titrations of UbcH5B~ubiquitin into HOIP RBR/di-ubiquitin complexes, HOIP RBR was pre-incubated with an equimolar amount of di-ubiquitin before the ITC experiments. Data were analysed using the Origin software (Microcal). Human embryonic kidney (HEK) 293T cells (ATCC) were co-transfected with the NF-κB-luc reporter plasmid, which contains an NF-κB response element upstream of the promoter driving the luciferase reporter gene, the pGL4.74[hRluc/TK] control vector (Promega) and epitope-tagged Flag-HOIP or myc-HOIL-1L pcDNA3.1(+) plasmids in 6-well plates in triplicate using Lipofectamine 2000 transfection reagent. Although this assay could be carried out in a variety of cellular contexts, HEK293T cells were used because they are easy to transfect and suitable for the assay. The cells tested negative for mycoplasma contamination. Empty pcDNA3.1(+) vector was used as a control. After 36 h, cells were lysed and 20 μl of cell lysate was used to measure firefly luciferase and Renilla luciferase (transfection control) signals using the dual luciferase reporter assay system according to the manufacturer’s protocol (Promega). Data were analysed in GraphPad Prism, and one-way ANOVA followed by Tukey’s post hoc tests was used for statistical analysis. Immunoblotting was performed with anti-Flag (clone M2, Sigma-Aldrich) and anti-myc (clone 9E10, Sigma-Aldrich) antibodies to confirm equivalent wild-type and mutant protein expression levels.
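The dual-luciferase readout described above reduces to simple arithmetic: each firefly signal is divided by its Renilla transfection control, and conditions are compared as fold-change over the empty-vector control. A minimal sketch, with hypothetical luminescence counts rather than data from the study:

```python
# Hypothetical triplicate luminescence counts; the assay reports a firefly
# and a Renilla signal per well.

def normalized_luc(firefly, renilla):
    """Per-well firefly/Renilla ratios (transfection-normalized signal)."""
    return [f / r for f, r in zip(firefly, renilla)]

def fold_activation(sample_ratios, control_ratios):
    """Mean normalized signal relative to the empty-vector control."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(sample_ratios) / mean(control_ratios)

empty_vector = normalized_luc([1000, 1100, 900], [500, 550, 450])
flag_hoip    = normalized_luc([9000, 8000, 10000], [500, 400, 500])

print(round(fold_activation(flag_hoip, empty_vector), 1))  # 9.7
```

The per-well ratios (not the raw firefly counts) are what would be carried into the ANOVA mentioned above.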


News Article | April 1, 2016
Site: www.cemag.us

The world is run by catalysts. They clean up after cars, help make fertilizers, and could be the key to better hydrogen fuel. Now, a team of chemists, led by Xiaohu Xia from Michigan Technological University, has found a better way to make metal catalysts. Last week, Nano Letters published the team's study, which covers how the researchers made a catalyst for the first time out of the metal ruthenium. The breakthrough is not limited to this one metal, however; Xia says the process the team developed is what matters most. "We are fine-tuning the surface, size, shape, and crystal structure," he says. "Our goal is to increase their catalytic activity while reducing the usage of this precious material." In an effort to make better catalysts, Xia brought together a team from Argonne National Laboratory, the University of Texas at Dallas, and Temple University. Haihang Ye, Xia's graduate student, is first author on the paper; undergraduate Joseph Vermeylen also contributed to the work. Chemically speaking, catalysts are substances that increase the rate of a reaction, making it start or continue more quickly. Materially speaking, catalyst sources vary, although some of the most important tend to be expensive metals. Ruthenium, a noble metal and fairly rare, has joined an elite group of catalyst metals including gold, platinum and palladium. Since the metals are so rare, there is a strong incentive to reduce how much of a catalyst is used in any given process, Xia says. Usually researchers control shape and size — the key is more surface area. Nanoframes, which are nanoparticles with open centers, have an advantage with their gazebo-like atomic arrangements. "Since we have the interior space available, it's another benefit," he says, adding that their particular ruthenium structures get even more detailed. "Because catalytic reactions occur only on the surface of materials, the surface atom arrangement has a great impact on determining the catalytic activity."
The arrangements of atoms — the crystal structure of a material — can be difficult to manipulate. In general, ruthenium nanocrystals adopt the hexagonal close-packed (hcp) structure. But Xia and his team came up with an elegant solution for making ruthenium nanocrystals with another structure: face-centered cubic (fcc) structure. The process involves two steps, growth and etching. Basically, ruthenium doesn't naturally grow in a crystal structure that can be made into a nanoframe. Instead, the team grew the nanoframe on a core of palladium, which they later removed. The first step starts with a palladium seed with an fcc structure, and the researchers used the metal as an atomic template. Ruthenium was directed to preferentially grow on the edges and corners of the palladium octahedron. That way, the newly formed ruthenium simply replicated the fcc structure of the palladium seed. The second step is called etching, which removes the palladium seed, leaving behind the hollowed out ruthenium nanoframe. To ensure the material had catalytic potential, the team ran the ruthenium nanoframes through several diagnostic tests: the reduction of p-nitrophenol by NaBH4 and the dehydrogenation of ammonia borane. While more data is needed to quantify how well ruthenium holds up against existing metal catalysts, Xia says the results from their experiments are promising. Improving the material's stability is the next line of research. Being such a new catalyst, it needs to be fully characterized and have its limits tested. For example, Xia says they will want to know how thick the nanoframe material can grow and still keep the fcc structure and be effective as a catalyst. Most importantly, once the material has been vetted, researchers will be able to start applying the catalyst to several big challenges. Namely, Xia says that ruthenium nanoframes and other catalysts with unique crystal structures could improve hydrogen fuel production and carbon storage. 
A noble calling for a metal catalyst.


News Article | October 30, 2016
Site: www.sciencedaily.com

Researchers at the U.S. Department of Energy's (DOE's) Argonne National Laboratory have helped develop a plan for the operation of Glen Canyon Dam in Glen Canyon National Recreation Area, upstream of Grand Canyon National Park. The plan, known as the Glen Canyon Dam Long-Term Experimental and Management Plan and documented in a final environmental impact statement, recommends a strategy that would balance hydropower with the protection of environmental, cultural and recreational resources in the area. The plan, which was released earlier this month, is designed to be flexible and responsive to changing conditions and new information, so that operations can be adjusted to improve conditions. The report is the first step in a 20-year experimental, management and environmental monitoring effort conducted by the U.S. Department of the Interior's Bureau of Reclamation, the National Park Service and the U.S. Geological Survey. Located fifteen miles north of the Grand Canyon, the Glen Canyon Dam delivers water from the Upper to the Lower Colorado River Basin. The dam was built in 1963 for water storage and flood control and is crucial to water distribution to about 40 million people in Arizona, California, Colorado, Nevada, New Mexico, Utah and Wyoming. The hydropower facility also provides 1,300 megawatts of electrical generating capacity to more than five million customers. The dam prevents most sediment from entering the river downstream, releases cool water and changes river flow patterns; thus, it also affects downstream resources. For example, it has contributed to a decline in the population of native humpback chub, as well as the erosion of beaches once built and replenished from sediment transported downstream during annual floods. The team identified seven alternative ways of operating the dam.
Each one would implement different monthly and daily water release patterns and experimentally test special releases, such as short-duration high flows to simulate floods and build beaches, low summer flows to allow the water to warm to benefit native fish, or short-term fluctuations to manage the trout population. After modeling the effects of these alternatives, the team recommended one that would improve the conditions of downstream resources and limit impact on hydropower generation. "The team, with input from cooperating agencies, Tribes, and stakeholders, has identified a preferred alternative that will benefit the endangered humpback chub, vegetation, wildlife, cultural resources, Tribal resources and recreational activities, including trout fishing, boating and camping, while limiting its effect on clean renewable hydropower," said ecologist Kirk LaGory, manager of the Rivers and Hydropower Program in Argonne's Environmental Sciences Division and team lead for the environmental impact statement. In addition to analyzing environmental impacts, the team assessed how each alternative would affect power production during periods of peak demand when the value of energy is higher (which occur in the winter and summer and in the mornings and evenings) versus when demand is low and the value of energy is lower (like during the middle of the night). The team also analyzed the overall capacity of the hydropower plant over a 20-year period. In doing so, they measured the effects of variable water flow (e.g., drought years, wet years and climate change) that can alter the amount of energy produced from year to year.
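As a rough illustration of the hydropower accounting involved, annual generation can be related to the dam's 1,300 MW capacity through an average capacity factor. This is a back-of-the-envelope sketch, not the study's power-system model; the capacity factor below is a hypothetical placeholder, since actual output depends on water releases under each alternative.

```python
# 1,300 MW is the capacity quoted in the article; the capacity factor is
# an assumed placeholder for illustration.

CAPACITY_MW = 1300.0
HOURS_PER_YEAR = 8760

def annual_energy_gwh(capacity_factor):
    """Annual generation in GWh at a given average capacity factor."""
    return CAPACITY_MW * HOURS_PER_YEAR * capacity_factor / 1000.0

print(round(annual_energy_gwh(0.35)))  # 3986 GWh at an assumed 35% capacity factor
```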
"We used these data to estimate the energy and operational capacity of the Glen Canyon power plant that would be produced under each alternative and projected how other power plants in the Western power grid would respond to operations under each alternative," said Thomas Veselka, a computational engineer in Argonne's Energy Systems Division who was involved in the power system analysis. In the next few months, the U.S. Department of the Interior is expected to produce a Record of Decision that identifies how the department plans to operate Glen Canyon Dam for the next 20 years. The plan would be supported by the Glen Canyon Dam Adaptive Management Program, a successful management strategy for long-term research and monitoring of dam operations and downstream resources that has been in place for the last 20 years. The U.S. Geological Survey's Grand Canyon Monitoring and Research Center, in coordination with the Bureau of Reclamation, the National Park Service, other agencies and the Tribes, will then implement a science program to monitor the impact of dam operations on fish populations, river sediment, vegetation, cultural resources and other environmental resources. See more at: http://ltempeis.anl.gov/documents/final-eis/


News Article | November 29, 2016
Site: www.eurekalert.org

Humans and Komodo dragons, the largest lizards in the world, could not be more different. In the wild, these four-legged carnivores wander in solitude across islands in Indonesia, consuming large prey like deer and water buffalo. But when placed in a closed environment, Komodo dragons interact with their surroundings much in the same way humans do - at least on the microbial level - and researchers are using this knowledge to help them understand the health of animals in captivity. Researchers at the University of California San Diego, the University of Colorado-Boulder, the University of Chicago and the U.S. Department of Energy's (DOE's) Argonne National Laboratory are the first to identify similarities in the way in which Komodo dragons and humans and their pets share microbes within closed environments. Their findings, published in the American Society for Microbiology journal mSystems, say that the pattern of microbial exchange is "likely circular in nature," meaning that captive dragons contribute microbes to their environment, and reacquire these same microbes from their environment, repeating this exchange in an ongoing cycle without other external sources of microbial diversity. Researchers noticed this pattern after sampling the microbial communities found in the saliva, skin and feces of 37 Komodo dragons across 12 U.S. zoos as well as the environment of two of the 12 enclosures. They then compared these two sets of data to each other and to past studies of humans and pets. Statistical comparison revealed that, as with humans and pets in their homes, captive Komodo dragons transfer a significant amount of bacteria and other microbes to their enclosures. These findings are helping researchers better understand the relationship between captivity and microbial diversity and health, knowledge that could be key to improving the health of animals in the care of zookeepers, veterinarians and other caretakers.
"Regardless of whether you're in a closed or open environment, there's always a constant exchange of microbes between a host and their environment, and that constant exposure has impacts on health; for example it can lead to changes in a host's immune system that help the host stave off pathogens," said Argonne's Jack Gilbert, an author of the study and the director of The Microbiome Center, a joint Argonne, University of Chicago and Marine Biological Laboratory program. "The problem is that the degree of exposure becomes limited when you put a host in captivity, and this change has unknown consequences on health," he said, "which is exactly why we're trying to learn more about it." Evidence from past studies suggests that animals in captivity experience diseases that are associated with or worsened by captivity. Collectively these studies, which have included green sea turtles, polar bears and monkeys, bring weight to the "hygiene hypothesis," which theorizes that reduced exposure to microbes is contributing to the increase in autoimmune and allergic disease occurrence in Westernized nations. "For some animals there are diseases that affect them but don't affect their wild counterparts, or don't affect their counterparts as severely, so it makes sense that people, as they spend less time outdoors, would be affected in the same ways," said lead author Embriette Hyde, an assistant project scientist and project manager of the American Gut Project. This study is the largest of its kind on captive Komodo dragons, and the first to investigate the microbial communities in the feces and skin of Komodo dragons. To date, no comparable studies of wild Komodo dragons have been conducted. Researchers are, however, continuing to explore microbiome interactions using other animal models. Gilbert, along with a separate group of researchers, is studying these interactions in captive dolphins and their environment. 
"Since it's quite difficult to track microbial exposure on a daily basis, especially with humans - who move around a lot - we're exploring new animal models to see if we can ask the same questions we would about humans and get answers in more rigorous, controlled ways," Gilbert said. Through such research, investigators are hoping to identify useful systems to simulate the interactions between an animal and its environment, and find out whether the interactions within a given system are comparable to how humans interact with their environment. Such answers can expand our knowledge of the microbial health of humans and animals alike, and lead to improvements in animal husbandry practices. This work was funded by the John S. Templeton Foundation, the Alfred P. Sloan Foundation, and Argonne's Laboratory Directed Research and Development program. Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science. The U.S. Department of Energy's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.


News Article | November 21, 2016
Site: www.eurekalert.org

Researchers at the U.S. Department of Energy's (DOE's) Argonne National Laboratory have conducted a detailed study of the reflectivity effects of converting land to grow biofuel crops. Their study is part of an overall analysis of greenhouse gas emissions from land use change: instances where land that was previously forests, grasslands or pastureland is converted to producing biofuel crops. Historically, these types of analyses considered only changes in the amount of carbon stored in the soil and vegetation of these lands. The new analysis, published in the Royal Society of Chemistry's Energy & Environmental Science, incorporates the additional effect of changes in reflectivity, or "albedo." Albedo measures the fraction of incoming solar energy that is reflected back into space; changes in albedo, along with numerous other factors, in turn influence the net climate effect of land use change. While there is much data variability, the findings reveal that when a piece of land is changed to produce a biofuel crop, albedo effects also change. When only albedo change effects are considered, researchers found that land converted to producing corn ethanol had a net cooling effect on climate. By comparison, land that was converted to producing miscanthus and switchgrass, two other plant sources for next-generation biofuels, had a net warming effect. But when carbon stock changes, another key effect of land use change, are also taken into account, corn and switchgrass ethanol exhibit net warming effects associated with land use change whereas miscanthus grass ethanol exhibits a net cooling effect. Led by researchers in Argonne's Energy Systems and Environmental Sciences Divisions, the work outlines the importance of considering changes in reflectivity when assessing land use change-induced effects of biofuel production on climate.
"Our analysis is helping build a fuller picture of the climate effects of biofuel feedstock production," said Hao Cai, an Argonne environmental analyst and lead author of the study. Cai and his team began their research by first collecting data on land cover and albedo gathered from the U.S. Department of Agriculture and NASA satellites, respectively. They mapped the albedo data associated with specific land types--say, corn cropland or prairie. Then they ran simulations where they converted different land types to produce corn, miscanthus and switchgrass, and looked to see what would happen to albedo and to the net cooling or warming of the atmosphere as a result. Unlike past studies of albedo effects of land use change, which analyzed no more than 20 sites of specific land types, the team investigated millions of sites in more than a thousand counties in the U.S., covering 70 percent of the nation's corn production fields. "Another factor that sets this study apart is the specific modeling approach used to simulate albedo effect on a finer scale and provide more robust data analysis," said Yan Feng, an atmospheric and climate scientist involved in the study. The model allowed Feng and fellow researchers to examine the albedo dynamics of millions of parcels of land individually in squares just 500 meters at a side (5382 feet). By analyzing a large number of sites in detail, the researchers were able to take into account site-specific variations to better represent albedo effects in their analysis. Albedo effects varied among corn, miscanthus and switchgrass crops in part due to differences in their size and shape, as well as growth and environmental factors -- all of which can vary dramatically from one site to the next. Albedo also varied considerably based on the type of land that was converted. For example, forest had the lowest albedo compared to other land covers, like shrubland and grassland, found within the same agro-ecological zone. 
The magnitude of the area of land converted, as determined by economic modeling, also varied. For example, when the researchers compared miscanthus and switchgrass production necessary to produce a given biofuel volume, they found that converting land to miscanthus had a lower warming albedo effect compared to switchgrass. One cause for this is the fact that miscanthus requires less land than switchgrass to yield an equal volume of ethanol because miscanthus has higher yields. The researchers used computing resources at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility, to process and visualize the albedo and land cover data. "ALCF's high-performance computing resources were critical to our analysis based on the amount of data we were working with. The facility provides powerful and helpful tools for data-intensive analyses like this," said Jiali Wang, an Argonne atmospheric scientist involved in processing the data. The study, "Consideration of land use change-induced surface albedo effects in life-cycle analysis of biofuels," was supported by the DOE's Office of Energy Efficiency and Renewable Energy.


News Article | November 14, 2016
Site: www.sciencedaily.com

Isotopes -- atoms of an element that are chemically identical but vary in neutron number and mass -- are essential to nuclear medicine. Unstable isotopes, known as radioisotopes, emit radiation as they decay toward a stable configuration; this radiation can damage diseased tissue and can be traced in certain environments, making radioisotopes useful for medical imaging and cancer therapy, as well as for tracking environmental change in oceans and soil, studying the fundamental science of nuclei and safeguarding national security. Of the thousands of radioisotopes identified so far, a few occur in nature, but most must be produced in nuclear reactors or particle accelerators. The most widely used medical radioisotope is technetium-99m (Tc-99m), the daughter isotope (meaning the product of radioactive decay) of molybdenum-99 (Mo-99). The 66-hour half-life of Mo-99, which is currently produced by the fission of uranium targets in research reactors, makes it a challenge to produce and transport around the world before it spontaneously decays into Tc-99m. Built 30 to 50 years ago at the height of the Atoms for Peace era, several of the world's key nuclear reactors producing Mo-99 have experienced unplanned shutdowns that have disrupted the Mo-99 radioisotope supply. With the possibility of an unstable supply, there is a strong incentive to develop alternative production methods for this essential isotope. To address this issue, the National Nuclear Security Administration (NNSA) is supporting commercial partners to accelerate the establishment of a domestic, commercial Mo-99 production capability that does not use highly enriched uranium. At the same time, the NNSA is working to significantly reduce the risk of nuclear proliferation by supporting international Mo-99 producers in converting their Mo-99 production processes from highly enriched uranium (HEU) to low-enriched uranium (LEU). The U.S. Department of Energy's (DOE's) Argonne National Laboratory actively supports both of these efforts.
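The supply challenge described above comes down to exponential decay: with a 66-hour half-life, the fraction of Mo-99 remaining after t hours is 2^(-t/66). A quick sketch of the arithmetic:

```python
HALF_LIFE_H = 66.0  # Mo-99 half-life in hours (from the article)

def fraction_remaining(hours):
    """Fraction of the initial Mo-99 activity left after `hours` hours."""
    return 2.0 ** (-hours / HALF_LIFE_H)

print(round(fraction_remaining(66), 2))   # 0.5  (one half-life of transport time)
print(round(fraction_remaining(168), 2))  # 0.17 (after a one-week delay)
```

This is why reactor shutdowns are so disruptive: Mo-99 cannot be stockpiled, so roughly a week of delay leaves less than a fifth of the original activity.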
"Beginning in the 1980s, Argonne focused on converting targets for the production of molybdenum-99 from highly enriched uranium to low-enriched uranium and provided the technology to do so," said George Vandegrift, a Distinguished Fellow in the Nuclear Engineering Division at Argonne. In the past five years, Argonne has partnered with industry and other national laboratories under NNSA stewardship to help establish U.S. production of Mo-99. Now, in addition to continuing this technology development for Mo-99 production methods for the DOE and NNSA, Argonne is also advancing new production and processing approaches for other promising radioisotopes with the encouragement and support of the DOE Isotope Program. Whereas Tc-99m, derived from its parent isotope Mo-99, is used primarily for diagnostic procedures, there is great interest in the development of efficient production methods for isotopes that can be used for cancer therapy. Argonne is using its recently upgraded electron linear accelerator (LINAC), along with its historical expertise in developing targets and recovering and purifying radioisotopes, to explore a variety of such therapeutic isotopes. Argonne's electron LINAC operates in a "sweet spot" for making particular radioisotopes. At 30-50 mega-electron volts and relatively high currents, the LINAC has the capability to produce potentially important radioisotopes. For example, radioisotopes like copper-67 (Cu-67), and scandium-47 (Sc-47) -- whose use has been impeded in the development of pharmaceutical applications by lack of reliable supply or low specific activity. More than 80 percent of nuclear diagnostic procedures (40 million per year) worldwide use Tc-99m. The United States alone uses half of the world's supply, yet imports 100 percent of the parent isotope Mo-99 from foreign countries. Vandegrift said technetium is widely used because of its radiological properties and rich chemistry. 
Tc-99m has a short half-life, emits gamma rays at energies that are easy to detect and binds to many kinds of proteins that allow the radioisotope to image specific tissues and functions of the body. "When Mo-99 faced a crisis in supply, NNSA expanded the HEU to LEU conversion program to include development of domestic supplies of Mo-99," Vandegrift said. Funded by the NNSA, Argonne is collaborating with two U.S. companies, SHINE Medical Technologies and NorthStar Medical Technologies, LLC, to establish a scalable, safe and reliable way to produce, recover and purify Mo-99 that meets the required pharmaceutical standards for use of Tc-99m as a radiopharmaceutical. Argonne demonstrated each company's production pathway with its LINAC and provided expertise in target chemistry and post processing. Both companies are in the process of establishing production facilities that will allow them to make and distribute enough Mo-99 to each meet 50 percent of U.S. demand without the use of HEU. The proposed SHINE production method uses a low-energy accelerator-based neutron generator to propagate fission of a low-enriched uranyl sulfate solution, producing hundreds of isotopes, including Mo-99. Argonne researchers have demonstrated the feasibility of this method using the electron LINAC to produce neutrons via a converter to irradiate a uranyl sulfate solution. They also developed technologies for the recovery and purification of Mo-99, which makes up six percent of the fission products present in the irradiated solution. Argonne recently produced and processed a significant quantity of Mo-99 using this technology that was developed for SHINE. The Mo-99 was sent to GE Healthcare for testing and passed all purity and utilization requirements. The U.S. Nuclear Regulatory Commission recently authorized issuance of a construction permit for SHINE. "SHINE has benefited tremendously from the work Argonne has done to help design and test our process. 
Argonne's unique capabilities meant they were able to demonstrate the SHINE process years before our manufacturing facility is built, and that has greatly reduced our startup risks," said Katrina Pitas, SHINE Vice President of Business Development. "We're grateful to George Vandegrift, a pioneer in the LEU Mo-99 world, and his entire team for the shrewd insights and hard work they've contributed to the SHINE project." The NorthStar process, on the other hand, uses a high-energy electron beam to produce photons in a converter target, which then irradiate a set of stable Mo-100 sintered disks. In this process, photons strike Mo-100 nuclei, knocking out a neutron and leaving a significant yield of Mo-99 for processing. The Argonne team optimized disk chemistry and structure and developed a chemical recycling pathway that will enable NorthStar to reuse costly enriched-molybdenum target material. The test facility built at Argonne, in collaboration with Los Alamos National Laboratory, is a pilot-scale operation with all of the components of NorthStar's proposed production unit. In the last production test run at Argonne, several curies of Mo-99 were produced and sent to NorthStar for evaluation. "NorthStar has enjoyed significant support and contribution from Argonne National Laboratory over the past five or more years as we developed and optimized our production process for Mo-99 via an electron accelerator," said Jim Harvey, NorthStar Senior Vice President and Chief Science Officer. "The development of chemical processes to dissolve and recover enriched target material combined with the ability to now run production of meaningful irradiations using Argonne's electron accelerator are directly contributing to our successful future deployment of this Mo-99 production process in the United States." With successful demonstrations of both processes at Argonne's LINAC, SHINE and NorthStar have continued plans to use accelerators for production. 
SHINE will use an accelerator-based neutron generator at its Janesville, Wis., facility for its neutron-induced-fission process, and NorthStar has long-term plans to install its own electron LINAC at its Beloit, Wis., facility as the basis of its gamma-induced-reaction process. Back at Argonne, researchers, with the support and encouragement of the DOE Isotope Program, are expanding their radioisotope program with the goal of performing groundbreaking research and carrying out the development and demonstration needed to supply a range of key radioisotopes. Nontoxic, with a 2.6-day half-life and emissions well suited to both radiotherapy and imaging, Cu-67 has been recognized for decades as a promising "theranostic" -- a combined therapeutic and diagnostic radioisotope. "Cu-67 could allow doctors to monitor therapy in real-time," said David Rotsch, Argonne chemist. "The beta particles it emits have been shown to penetrate a few millimeters into tissues, killing cancer cells, but it also emits gamma radiation that can be used to track how well it is eradicating those cancer cells." Using the LINAC, researchers produced high-specific-activity Cu-67 and demonstrated small-scale production of the radioisotope. Now, with funding from the DOE Isotope Program, they are preparing to produce enough Cu-67 to meet estimated demand for external R&D and future clinical trials. Argonne will distribute Cu-67 through the DOE's National Isotope Development Center as early as November of this year. "We're starting at 100-millicurie batches and can increase to two-curie batches depending on demand," Rotsch said. The team also received Argonne Laboratory Directed Research and Development (LDRD) funding in fall 2015 to investigate the production of various other radioisotopes with the electron LINAC. One focus of the LDRD is creating a production mode for Sc-47 that will enable quick and simple processing for distribution. 
Like Cu-67, Sc-47 has characteristics that make it a good theranostic candidate. Argonne researchers are developing a process for the production and purification of Sc-47 to minimize processing times and maximize yields. This isotope would also be made available through the DOE Isotope Program. "We're in a good position because, right now, our LINAC is unique in having a higher power and energy beam," said Sergey Chemerisov, Accelerator Facility Manager in the Nuclear Engineering Division. "We're the only DOE LINAC that can do this job right now."
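The decay arithmetic behind those Cu-67 batch sizes is straightforward. Below is a minimal sketch of the standard half-life law, using the 2.6-day half-life quoted above; the function name and the 2-day transit time are illustrative assumptions, not details of Argonne's process.

```python
def remaining_activity(a0_mci: float, half_life_days: float, elapsed_days: float) -> float:
    """Standard radioactive decay law: A(t) = A0 * 2**(-t / t_half)."""
    return a0_mci * 2 ** (-elapsed_days / half_life_days)

# A 100 mCi Cu-67 batch (half-life ~2.6 days) after a hypothetical 2-day shipment:
print(round(remaining_activity(100, 2.6, 2.0), 1))  # ~58.7 mCi still usable
```

The short half-life is what makes a reliable domestic supply chain essential: roughly half of any batch is gone within three days of production.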


Researchers peer into atom-sized tunnels in hunt for better battery: May improve lithium ion for larger devices, like cars

Abstract: Battery researchers seeking improved electrode materials have focused on "tunneled" structures that make it easier for charge-carrying ions to move in and out of the electrode. Now a team led by a researcher at the University of Illinois at Chicago has used a special electron microscope with atomic-level resolution to show that certain large ions can hold the tunnels open so that the charge-carrying ions can enter and exit the electrode easily and quickly. The finding is reported in Nature Communications. "Significant research has been done to increase the energy density and power density of lithium ion battery systems," says Reza Shahbazian-Yassar, associate professor of mechanical and industrial engineering at UIC. The current generation, he said, is useful enough for portable devices, but the maximum energy and power that can be extracted is limited. "So for an electric car, we need to increase the energy and power of the battery -- and decrease the cost as well," he said. His team, which includes coworkers at Argonne National Laboratory, Michigan Technological University and the University of Bath in the U.K., has focused on developing a cathode based on manganese dioxide, a very low-cost and environmentally friendly material with high storage capacity. Manganese dioxide has a lattice structure with regularly spaced tunnels that allow charge carriers -- like lithium ions -- to move in and out freely. "But for the tunnels to survive for long-lasting function, they need support structures at the atomic scale," Shahbazian-Yassar said. "We call them tunnel stabilizers, and they are generally big, positive ions, like potassium or barium." But the tunnel stabilizers, being positively charged like the lithium ions, should repel each other. "If lithium goes in, will the tunnel stabilizer come out?" 
Shahbazian-Yassar shrugged. "The research community was in disagreement about the role of tunnel stabilizers during the transfer of lithium into tunnels. Does it help, or hurt?" The new study represents the first use of electron microscopy to visualize the atomic structure of tunnels in a one-dimensional electrode material -- which the researchers say had not previously been possible due to the difficulty of preparing samples. It took them two years to establish the procedure to look for tunnels in potassium-doped nanowires of manganese dioxide down to the single-atom level. Yifei Yuan, a postdoctoral researcher working jointly at Argonne National Laboratory and UIC and the lead author on the study, was then able to use a powerful technique called aberration-corrected scanning transmission electron microscopy to image the tunnels at sub-ångstrom resolution so he could clearly see inside them -- and he saw they do change in the presence of a stabilizer ion. "It's a direct way to see the tunnels," Yuan said. "And we saw that when you add a tunnel stabilizer, the tunnels expand, their electronic structures also change, and such changes allow the lithium ions to move in and out, around the stabilizer." The finding shows that tunnel stabilizers can help in the transfer of ions into tunnels and the rate of charge and discharge, Shahbazian-Yassar said. The presence of potassium ions in the tunnels improves the electronic conductivity of manganese dioxide and the ability of lithium ions to diffuse quickly in and out of the nanowires. "With potassium ions staying in the center of the tunnels, the capacity retention improves by half under high cycling current, which means the battery can hold on to its capacity for a longer time," he said. ### Co-authors on the Nature Communications paper are Kun He, Soroosh Sharifi-Asl, Boao Song and Anmin Nie of UIC; Chun Zhan, Zhenzhen Yang, Xiangyi Luo, Hao Wang, Khalil Amine and Jun Lu of Argonne; Hungru Chen, Stephen M. Wood and M. 
Saiful Islam of the University of Bath; and Wentao Yao of Michigan Tech. Funding was provided by the National Science Foundation and the U.S. Department of Energy. Issuers of news releases, not 7th Wave, Inc. or Nanotechnology Now, are solely responsible for the accuracy of the content.


News Article | April 4, 2016
Site: www.nanotech-now.com

Ruthenium nanoframes open the doors to better catalysts

Abstract: The world is run by catalysts. They clean up after cars, help make fertilizers, and could be the key to better hydrogen fuel. Now, a team of chemists, led by Xiaohu Xia from Michigan Technological University, has found a better way to make metal nanoframe catalysts. Last week, Nano Letters published the team's study, which covers how the researchers made a catalyst for the first time out of the noble metal ruthenium. The effort brought together a team from Michigan Tech, Argonne National Laboratory, the University of Texas at Dallas, and Temple University. The team's breakthrough is not limited to ruthenium, however. Xia says the process they developed is most important. "We are fine-tuning the surface, size, shape and crystal structure," he says. "Our goal is to increase their catalytic activity while reducing the usage of this precious material." The key is more surface area. Nanoframes, which are nanoparticles with open centers, have an advantage with their gazebo-like atomic arrangements. Xia adds that not only shape but also crystal structure is important: "Because catalytic reactions occur only on the surface of materials, the surface atom arrangement has a great impact on determining the catalytic activity." In general, ruthenium nanocrystals adopt the hexagonal close-packed (hcp) structure. But Xia and his team came up with an elegant solution for making ruthenium nanocrystals with another structure: the face-centered cubic (fcc) structure. The process involves two steps, growth and etching. Basically, hcp ruthenium doesn't naturally grow in a crystal structure that can be made into a nanoframe. So the team directed ruthenium growth on a palladium seed with an fcc structure, which the ruthenium replicated. Then they removed the palladium core, leaving behind an fcc ruthenium nanoframe. 
To ensure the material had catalytic potential, the team ran the ruthenium nanoframes through several diagnostic tests: the reduction of p-nitrophenol by NaBH4 and the dehydrogenation of ammonia borane. While more data is needed to quantify how well ruthenium holds up against existing metal catalysts, Xia says the results from their experiments are promising. Once the material has been vetted, researchers will be able to start applying the catalyst to several big challenges. Namely, Xia says that ruthenium nanoframes and other catalysts with unique crystal structures could improve hydrogen fuel production and carbon storage.


News Article | April 6, 2016
Site: www.nanotech-now.com

Abstract: Think of it as a microscopic movie: A sequence of X-ray images shows the explosion of superheated nanoparticles. The picture series reveals how the atoms in these particles move, how they form plasma and how the particles change shape. The method of taking these pictures is a collaborative creation that involved Kansas State University researchers Artem Rudenko and Daniel Rolles, both assistant professors of physics. The movies help scientists understand interactions of intense laser light with matter. But even more importantly, these experiments lead the way to filming various processes that involve ultrafast dynamics of microscopic samples, such as the formation of aerosols -- which play a major role in climate models -- or laser-driven fusion. "We can create a real movie of the microworld," Rudenko said. "The key development is that now we can take sequences of pictures on the nanoscale." Rudenko and Rolles -- both affiliated with the university's James R. Macdonald Laboratory -- collaborated with researchers at SLAC National Accelerator Laboratory at Stanford University, Argonne National Laboratory and the Max Planck Institutes in Germany. Their publication, "Femtosecond and nanometre visualization of structural dynamics in superheated nanoparticles," appears in Nature Photonics. In this work, the collaboration used intense lasers to heat xenon nanoscale clusters and then took a series of X-ray pictures to show what happened to the particles. The picture series became a movie of how these objects move at the level of femtoseconds, which are one-millionth of a billionth of a second. "What makes nano so interesting is that the behavior for many things changes when you get to the nanoscale," Rolles said. "Nano-objects bridge the gap between bulk matter and individual atoms or molecules. This research helps us as we try to understand the behavior of nano-objects and how they change shape and properties within extremely short times." 
The pictures of the nanoparticles cannot be taken with normal optical light, but must be taken with X-rays, because X-ray light has nanometer wavelengths that enable researchers to view nanoscale objects, Rolles said. The light wavelength must match the size of the object. To take the pictures, the researchers needed two ingredients: very short X-ray pulses and very powerful X-ray pulses. The Linac Coherent Light Source at SLAC provided those two ingredients, and Rudenko and Rolles traveled to California to use this machine to take the perfect pictures. The photo-taking method and the pictures it produces have numerous applications in physics and chemistry, Rolles said. The method is also valuable for visualizing laser interactions with nanoparticles and for the rapidly developing field of nanoplasmonics, in which the properties of nanoparticles are manipulated with intense light fields. This may help to build next-generation electronics. "Light-driven electronics can be much faster than conventional electronics because the key processes will be driven by light, which can be extremely fast," Rudenko said. "This research has big potential for optoelectronics, but in order to improve technology, we need to know how a laser drives those nanoparticles. The movie-making technology is an important step in this direction." Rudenko and Rolles are continuing to improve the moviemaking process. In collaboration with the university's soft matter physics group, they have extended the range of samples that can be put into the X-ray machine and can now produce movies of gold and silica nanoparticles.
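The wavelength-matching rule of thumb above can be made quantitative with the photon-energy relation E = hc/λ. Here is a back-of-the-envelope sketch; the 1 nm and 500 nm wavelengths are illustrative choices, not values from the experiment.

```python
H_C_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy E = h*c / lambda, for a wavelength given in nanometers."""
    return H_C_EV_NM / wavelength_nm

# A 1 nm wavelength, matched to nanoscale objects, is a ~1.2 keV soft X-ray photon;
# 500 nm visible light carries only ~2.5 eV.
print(round(photon_energy_ev(1.0)))       # 1240
print(round(photon_energy_ev(500.0), 1))  # 2.5
```

The roughly 500-fold jump in photon energy between visible light and nanometer-wavelength X-rays is why a facility like the Linac Coherent Light Source is needed for this kind of imaging.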


Scientists create atomically thin boron: Synthesis of borophene expands family of 2-dimensional materials beyond graphene

Abstract: A team of scientists from the U.S. Department of Energy's (DOE) Argonne National Laboratory, Northwestern University and Stony Brook University has, for the first time, created a two-dimensional sheet of boron -- a material known as borophene. Scientists have been interested in two-dimensional materials for their unique characteristics, particularly involving their electronic properties. Borophene is an unusual material because it shows many metallic properties at the nanoscale even though three-dimensional, or bulk, boron is nonmetallic and semiconducting. Because borophene is both metallic and atomically thin, it holds promise for possible applications ranging from electronics to photovoltaics, said Argonne nanoscientist Nathan Guisinger, who led the experiment. "No bulk form of elemental boron has this metal-like behavior," he said. The study will be published Dec. 18 by the journal Science. Like its periodic table neighbor carbon, which appears in nature in forms ranging from humble graphite to precious diamond, boron wears a number of different faces, called allotropes. But that's where the similarities end. While graphite is composed of stacks of two-dimensional sheets that can be peeled off one at a time, there is no such analogous process for making two-dimensional boron. "Borophenes are extremely intriguing because they are quite different from previously studied two-dimensional materials," Guisinger said. "And because they don't appear in nature, the challenge involved designing an experiment to produce them synthetically in our lab." Although at least 16 bulk allotropes of boron are known, scientists had never before been able to make a whole sheet, or monolayer, of borophene. 
"It's only in the recent past that researchers have been able to make tiny bits of boron at the nanoscale," said Andrew Mannix, a Northwestern graduate student and first author of the study. "This is a brand new material with exciting properties that we are just beginning to investigate." "Boron has a rich and storied history and a very complicated chemistry," added Mark Hersam, professor of materials science and engineering at Northwestern's McCormick School of Engineering and Applied Science, who helped advise Mannix. "This is something that could have easily not worked, but Andy had the courage and persistence to make it happen." One of boron's most unusual features is its atomic configuration at the nanoscale. While other two-dimensional materials look more or less like perfectly smooth and even planes at the nanoscale, borophene looks like corrugated cardboard, buckling up and down depending on how the boron atoms bind to one another, according to Mannix. The "ridges" of this cardboard-like structure result in a material phenomenon known as anisotropy, in which a material's mechanical or electronic properties -- like its electrical conductivity -- become directionally dependent. "This extreme anisotropy is rare in two-dimensional materials and has not been seen before in a two-dimensional metal," Mannix said. Based on theoretical predictions of borophene's characteristics, the researchers also noted that it likely has a higher tensile strength than any other known material. Tensile strength refers to the ability of a material to resist breaking when it is pulled apart. "Other two-dimensional materials have been known to have high tensile strength, but this could be the strongest material we've found yet," Guisinger said. 
The discovery and synthesis of borophene was aided by computer simulation work led by Stony Brook researchers Xiang-Feng Zhou and Artem Oganov, who is currently affiliated with the Moscow Institute of Physics and Technology and the Skolkovo Institute of Science and Technology. Oganov and Zhou used advanced simulation methods that showed the formation of the crinkles of the corrugated surface. "Sometimes experimentalists find a material and they ask us to solve the structure, and sometimes we do predictions first and the experiment validates what we find," Oganov said. "The two go hand-in-hand, and in this international collaboration we had a bit of both." "The connection we have between the institutions allows us to achieve things that we couldn't do alone," Hersam added. "We needed to combine scanning tunneling microscopy with X-ray photoelectron spectroscopy and transmission electron microscopy to both obtain a view of the surface of the material and verify its atomic-scale thickness and chemical properties." As they grew the borophene monolayer, the researchers discovered another advantage within their experimental technique. Unlike previous experiments that used highly toxic gases in the production of nanoscale boron-based materials, this experiment involved a non-toxic technique called electron-beam evaporation, which essentially vaporizes a source material and then condenses a thin film on a substrate -- in this case, boron on silver. "When we did our theoretical work, I had doubts as to the feasibility of obtaining two-dimensional boron because boron likes to form clusters, and ironing it out into two-dimensions I thought would be challenging," Oganov said. "It turned out that growing on the substrate was key, because the boron and silver turn out not to react with each other." 
### The experimental work was performed at Argonne's Center for Nanoscale Materials, a DOE Office of Science user facility, and at the Northwestern University Materials Research Center. - Jared Sagoff, public information officer at Argonne National Laboratory, is the author of this story.




News Article | August 22, 2016
Site: cleantechnica.com

CleanTechnica spills a lot of ink on solar cell efficiency, but there is another important angle to the solar revolution that doesn’t get quite as much attention. That’s the “artificial leaf” concept, in which solar energy touches off a chemical reaction that produces a usable fuel. If that starts ringing some bells, you’re probably thinking about that time you learned about the natural process of photosynthesis in high school. In the latest development, researchers at the University of Illinois at Chicago have tweaked Mother Nature to capture and recycle atmospheric carbon dioxide, with an assist from researchers at Argonne National Laboratory, the University of New Mexico and the University of Illinois at Urbana-Champaign. The last couple of times CleanTechnica covered the artificial leaf concept, the end goal was to use solar energy for splitting water, arriving at hydrogen as the end product. Essentially, it’s a hydrogen-based energy storage system. It’s worth pausing to note that one of the original pioneers of the artificial leaf concept, Daniel Nocera, has also been making waves in energy storage. The flow battery work of his startup Sun Catalytix caught the attention of Lockheed Martin back in 2012. Lockheed eventually acquired the company in 2014 and renamed the operation “Lockheed Martin Advanced Energy Storage, LLC.” Meanwhile, artificial leaf research has been progressing on other fronts. Last year, for example, a team at Harvard came up with a “bionic leaf” system that produces rubbing alcohol. The new UIC/Argonne iteration takes it up to the next level. The new system produces a mix of carbon and hydrogen that could be used directly as synthetic gas, which doesn’t seem like that big of a deal except when you consider the atmospheric carbon capture angle. Once you have syngas, you can also get to diesel, gasoline and other liquid fuels. 
Here’s how the UIC team articulates it: The ability to turn CO2 into fuel at a cost comparable to a gallon of gasoline would render fossil fuels obsolete. The Argonne team is particularly excited about tackling the carbon monoxide conversion part of the process, because carbon in its CO2 form is difficult to convert into anything else. That’s the sticky wicket in a nutshell — finding an efficient, commercially viable way to convert CO2 to CO. Plants and other organisms can hack CO2 into CO without batting an eyelash, but replicating that process in the lab during one human lifetime is quite a tall order. The basic idea behind the new artificial leaf is an economical one-pot system that encompasses the entire three-step process. The first two steps sound a lot like conventional artificial leaf business. Step one is the energy harvesting part, in which photons are converted into pairs of electrons (negative charges) and “holes” (positive charges). Step two is the reaction between holes and water molecules. That results in the creation of protons and oxygen molecules. Step three is where the magic happens: the protons, electrons and carbon dioxide all react together to create carbon monoxide and water. To flesh out that third step, you can read all about the new system in the journal Science under the title “Nanostructured transition metal dichalcogenide electrocatalysts for CO2 reduction in ionic liquid.” For those of you on the go, the short version is that the research team was challenged to come up with a durable, economical catalyst to accomplish the reaction in Step 3. They finally settled on using nanoscale flakes of the transition metal WSe2 (tungsten diselenide). In addition, the team found that adding an ionic (“salty”) liquid to water further enhanced the efficiency. 
Here’s the rundown from UIC: The UIC artificial leaf consists of two silicon triple-junction photovoltaic cells of 18 square centimeters to harvest light; the tungsten diselenide and ionic liquid co-catalyst system on the cathode side; and cobalt oxide in potassium phosphate electrolyte on the anode side. When light of 100 watts per square meter – about the average intensity reaching the Earth’s surface – energizes the cell, hydrogen and carbon monoxide gas bubble up from the cathode, while free oxygen and hydrogen ions are produced at the anode. As for commercialization, if that doesn’t work out on Earth then there’s always Mars, which has atmospheric CO2 in abundance. Photo (cropped): “Simulated sunlight powers a solar cell that converts atmospheric carbon dioxide directly into syngas” via UIC.
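Taking the quoted figures at face value, the light power actually driving the device is modest. A quick sanity check, assuming the 18 square centimeters is per cell (the description leaves this ambiguous):

```python
# Estimate of incident light power on the UIC artificial leaf, from the figures quoted above.
irradiance_w_per_m2 = 100.0  # "about the average intensity reaching the Earth's surface"
cell_area_cm2 = 18.0         # per triple-junction photovoltaic cell (assumed per cell)
n_cells = 2

area_m2 = n_cells * cell_area_cm2 * 1e-4  # 1 cm^2 = 1e-4 m^2
power_w = irradiance_w_per_m2 * area_m2
print(round(power_w, 2))  # 0.36 W of light drives the syngas-producing reactions
```

Under a third of a watt of input light is enough to produce visible gas bubbles at both electrodes, which is part of what makes the lab demonstration striking.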


News Article | November 23, 2016
Site: www.businesswire.com

SALT LAKE CITY--(BUSINESS WIRE)--SC16, the 28th annual international conference of high performance computing, networking, storage and analysis, celebrated the contributions of researchers and scientists - from those just starting their careers to those whose contributions have made lasting impacts. The conference drew more than 11,100 registered attendees and featured a technical program spanning six days. The exhibit hall featured 349 exhibitors from industry, academia and research organizations from around the world. “There has never been a more important time for high performance computing, networking and data analysis,” said SC16 General Chair John West from the Texas Advanced Computing Center. “But it is also an acute time for growing our workforce and expanding diversity in the industry. SC16 was the perfect blend of research, technological advancement, career recognition and improving the ways in which we attract and retain that next generation of scientists.” According to Trey Breckenridge, SC16 Exhibits Chair from Mississippi State University, the SC16 Exhibition was the largest in the history of the conference. The overall size of the exhibition was 150,000 net square feet (breaking the 2015 record of 141,430). The 349 industry and research-focused exhibits included 44 first-timers and 120 organizations from 25 countries outside the United States. During the conference, Salt Lake City also became the hub for the world’s fastest computer network: SCinet, SC16’s custom-built network which delivered 3.15 terabits per second in bandwidth. The network featured 56 miles of fiber deployed throughout the convention center and $32 million in loaned equipment. It was all made possible by 200 volunteers representing global organizations spanning academia, government and industry. 
For the third year, SC featured an opening “HPC Matters” plenary that this year focused on Precision Medicine, examining what the future holds in this regard and how advances are only possible through the power of high performance computing and big data. Leading voices from the frontlines of clinical care, medical research, HPC system evolution, pharmaceutical R&D and public policy shared diverse perspectives on the future of precision medicine and how it will impact society. The Technical Program again offered the highest quality original HPC research. The SC workshops set a record with more than 2,500 attendees. There were 14 Best Paper Finalists and six Gordon Bell Finalists. These submissions represent the best of the best in a wide variety of research topics in HPC. “These awards are very important for the SC Conference Series. They celebrate the best and the brightest in high performance computing,” said Satoshi Matsuoka, SC16 Awards Chair from Tokyo Institute of Technology. “These awards are not just plaques or certificates. They define excellence. They set the bar for the years to come and are powerful inspiration for both early career and senior researchers.” Following is the list of Technical Program awards presented at SC16: SC16 received 442 paper submissions, of which 81 were accepted (18.3 percent acceptance rate). Of those, 13 were selected as finalists for the Best Paper (six) and Best Student Paper (seven) awards. The Best Paper Award went to “Daino: A High-Level Framework for Parallel and Efficient AMR on GPUs” by Mohamed Wahib Attia and Naoya Maruyama, RIKEN; and Takayuki Aoki, Tokyo Institute of Technology. The Best Student Paper Award went to “Flexfly: Enabling a Reconfigurable Dragonfly Through Silicon Photonics” by Ke Wen, Payman Samadi, Sebastien Rumley, Christine P. Chen, Yiwen Shen, Meisam Bahadori, and Karen Bergman, Columbia University, and Jeremiah Wilke, Sandia National Laboratories.
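The acceptance figures quoted above check out arithmetically:

```python
# Quick check of the paper-acceptance figures quoted above.
submissions, accepted = 442, 81
rate = accepted / submissions * 100
print(f"Acceptance rate: {rate:.1f}%")   # 18.3%
```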
The ACM Gordon Bell Prize is awarded for outstanding team achievement in high performance computing and tracks the progress of parallel computing. This year, the prize was awarded to a 12-member Chinese team for their research project, “10M-Core Scalable Fully-Implicit Solver for Nonhydrostatic Atmospheric Dynamics.” The winning team presented a solver (a calculation method) for atmospheric dynamics. In the abstract of their presentation, the winning team writes, “On the road to the seamless weather-climate prediction, a major obstacle is the difficulty of dealing with various spatial and temporal scales. The atmosphere contains time-dependent multi-scale dynamics that support a variety of wave motions.” To simulate the vast number of variables inherent in a weather system developing in the atmosphere, the winning group presents a highly scalable fully implicit solver for three-dimensional nonhydrostatic atmospheric simulations governed by fully compressible Euler equations. Euler equations are a set of equations frequently used to understand fluid dynamics (liquids and gases in motion). Winning team members are Chao Yang, Chinese Academy of Sciences; Wei Xue, Weimin Zheng, Guangwen Yang, Ping Xu, and Haohuan Fu, Tsinghua University; Hongtao You, National Research Center of Parallel Computer Engineering and Technology; Xinliang Wang, Beijing Normal University; Yulong Ao and Fangfang Liu, Chinese Academy of Sciences; Lin Gan, Tsinghua University; Lanning Wang, Beijing Normal University. This year, SC received 172 detailed poster submissions that went through a rigorous review process. In the end, 112 posters were accepted and five finalists were selected for the Best Poster Award. As part of its research poster activities, SC16 also hosted the ACM Student Research Competition for both undergraduate and graduate students.
In all, 63 submissions were received and 26 Student Research Competition posters were accepted – 14 in the graduate category and 12 in the undergraduate category. The Best Poster Award went to “A Fast Implicit Solver with Low Memory Footprint and High Scalability for Comprehensive Earthquake Simulation System” with Kohei Fujita from RIKEN as the lead author. In the undergraduate category, First Place went to “Touring Dataland? Automated Recommendations for the Big Data Traveler” by William Agnew and Michael Fischer (advisors: Kyle Chard and Ian Foster); Second Place to “Analysis of Variable Selection Methods on Scientific Cluster Measurement Data” by Jonathan Wang (advisors: Wucherl Yoo and Alex Sim); and Third Place to “Discovering Energy Usage Patterns on Scientific Clusters” by Matthew Bae (advisors: Wucherl Yoo, Alex Sim and Kesheng Wu). In the graduate category, First Place went to “Job Startup at Exascale: Challenges and Solutions” by Sourav Chakroborty (advisor: Dhabaleswar K. Panda); Second Place to “Performance Modeling and Engineering with Kerncraft” by Julian Hammer (advisors: Georg Hager and Gerhard Wellein); and Third Place to “Design and Evaluation of Topology-Aware Scatter and AllGather Algorithms for Dragonfly Networks” by Nathanael Cheriere (advisor: Matthieu Dorier). The Scientific Visualization and Data Analytics Award featured six finalists; the award went to “Visualization and Analysis of Threats from Asteroid Ocean Impacts” with John Patchett as the lead author. The Student Cluster Competition returned for its 10th year. The competition, which debuted at SC07 in Reno and has since been replicated in Europe, Asia and Africa, is a real-time, non-stop, 48-hour challenge in which teams of six undergraduates assemble a small cluster at SC16 and race to complete a real-world workload across a series of scientific applications, demonstrate knowledge of system architecture and application performance, and impress HPC industry judges.
The students partner with vendors to design and build a cutting-edge cluster from commercially available components, not to exceed a 3120-watt power limit, and work with application experts to tune and run the competition codes. For the first time ever, the team that won top honors also won the award for achieving highest performance for the Linpack benchmark application. The team “SwanGeese” is from the University of Science and Technology of China. In traditional Chinese culture, the rare Swan Goose stands for teamwork, perseverance and bravery. This is the university’s third appearance in the competition. Also, an ACM SIGHPC Certificate of Appreciation was presented to the authors of the recent SC paper selected for the SC16 Student Cluster Competition Reproducibility Initiative. The selected paper was “A Parallel Connectivity Algorithm for de Bruijn Graphs in Metagenomic Applications” by Patrick Flick, Chirag Jain, Tony Pan and Srinivas Aluru from Georgia Institute of Technology. The George Michael Memorial HPC Fellowship honors exceptional Ph.D. students. The first recipient is Johann Rudi from the Institute for Computational Engineering and Sciences at the University of Texas at Austin for his project, “Extreme-Scale Implicit Solver for Nonlinear, Multiscale, and Heterogeneous Stokes Flow in the Earth’s Mantle.” The second recipient is Axel Huebl from Helmholtz-Zentrum Dresden-Rossendorf at the Technical University of Dresden for his project, “Scalable, Many-core Particle-in-cell Algorithms to Simulate Next Generation Particle Accelerators and Corresponding Large-scale Data Analytics.” The SC Conference Series also serves as the venue for recognizing leaders in the HPC community for their contributions during their careers.
Here are the career awards presented at SC16: The IEEE-CS Seymour Cray Computer Engineering Award recognizes innovative contributions to high performance computing systems that best exemplify the creative spirit demonstrated by Seymour Cray. The 2016 IEEE-CS Seymour Cray Computer Engineering Award was presented to William J. Camp of Los Alamos National Laboratory “for visionary leadership of the Red Storm project, and for decades of leadership of the HPC community.” Camp previously served as Intel’s Chief Supercomputing Architect and directed Intel’s Exascale R&D efforts. Established in memory of Ken Kennedy, the founder of Rice University's nationally ranked computer science program and one of the world's foremost experts on high-performance computing, the ACM/IEEE-CS Ken Kennedy Award recognizes outstanding contributions to programmability or productivity in high-performance computing together with significant community service or mentoring contributions. The 2016 Ken Kennedy Award was presented to William D. Gropp “for highly influential contributions to the programmability of high-performance parallel and distributed computers, and extraordinary service to the profession.” Gropp is the Acting Director of the National Center for Supercomputing Applications, Director of the Parallel Computing Institute, and Thomas M. Siebel Chair in Computer Science at the University of Illinois Urbana-Champaign. The IEEE-CS Sidney Fernbach Memorial Award is awarded for outstanding contributions in the application of high performance computers using innovative approaches. The 2016 IEEE-CS Sidney Fernbach Memorial Award was presented to Vipin Kumar “for foundational work on understanding scalability, and highly scalable algorithms for graph partitioning, sparse linear systems and data mining.” Kumar is a Regents Professor at the University of Minnesota.
The Supercomputing Conference Test of Time Award recognizes an outstanding paper that has appeared at the SC conference and has deeply influenced the HPC discipline. It is a mark of historical impact and recognition that the paper has changed HPC trends. The winning paper is “Automatically Tuned Linear Algebra Software” by Clint Whaley from the University of Tennessee and Jack Dongarra from the University of Tennessee and Oak Ridge National Laboratory. The IEEE TCHPC Award for Excellence in Scalable Computing for Early Career Researchers recognizes individuals who have made outstanding and potentially long-lasting contributions to the field within five years of receiving their Ph.D. The 2016 awards were presented to Kyle Chard, Computation Institute, University of Chicago and Argonne National Laboratory; Sunita Chandrasekaran, University of Delaware; and Seyong Lee, Oak Ridge National Laboratory. SC17 will be held November 12-17, 2017, in Denver, Colorado. For more details, go to http://sc17.supercomputing.org/. SC16, sponsored by the IEEE Computer Society and ACM (Association for Computing Machinery), offers a complete technical education program and exhibition to showcase the many ways high performance computing, networking, storage and analysis lead to advances in scientific discovery, research, education and commerce. This premier international conference includes a globally attended technical program, workshops, tutorials, a world-class exhibit area, demonstrations and opportunities for hands-on learning. For more information on SC16, visit: http://sc16.supercomputing.org.


News Article | December 9, 2016
Site: www.cemag.us

Battery researchers seeking improved electrode materials have focused on “tunneled” structures that make it easier for charge-carrying ions to move in and out of the electrode. Now a team led by a researcher at the University of Illinois at Chicago has used a special electron microscope with atomic-level resolution to show that certain large ions can hold the tunnels open so that the charge-carrying ions can enter and exit the electrode easily and quickly. The finding is reported in Nature Communications. “Significant research has been done to increase the energy density and power density of lithium ion battery systems,” says Reza Shahbazian-Yassar, associate professor of mechanical and industrial engineering at UIC. The current generation, he said, is useful enough for portable devices, but the maximum energy and power that can be extracted is limiting. “So for an electric car, we need to increase the energy and power of the battery — and decrease the cost as well,” he said. His team, which includes coworkers at Argonne National Laboratory, Michigan Technological University, and the University of Bath in the U.K., has focused on developing a cathode based on manganese dioxide, a very low cost and environmentally-friendly material with high storage capacity. Manganese dioxide has a lattice structure with regularly spaced tunnels that allow charge carriers — like lithium ions — to move in and out freely. “But for the tunnels to survive for long-lasting function, they need support structures at the atomic scale,” Shahbazian-Yassar says. “We call them tunnel stabilizers, and they are generally big, positive ions, like potassium or barium.” But the tunnel stabilizers, being positively charged like the lithium ions, should repel each other. “If lithium goes in, will the tunnel stabilizer come out?” Shahbazian-Yassar shrugged. “The research community was in disagreement about the role of tunnel stabilizers during the transfer of lithium into tunnels. 
Does it help, or hurt?” The new study represents the first use of electron microscopy to visualize the atomic structure of tunnels in a one-dimensional electrode material — which the researchers say had not previously been possible due to the difficulty of preparing samples. It took them two years to establish the procedure to look for tunnels in potassium-doped nanowires of manganese dioxide down to the single-atom level. Yifei Yuan, a postdoctoral researcher working jointly at Argonne National Laboratory and UIC and the lead author on the study, was then able to use a powerful technique called aberration-corrected scanning transmission electron microscopy to image the tunnels at sub-ångstrom resolution so he could clearly see inside them — and he saw they do change in the presence of a stabilizer ion. “It’s a direct way to see the tunnels,” Yuan says. “And we saw that when you add a tunnel stabilizer, the tunnels expand, their electronic structures also change, and such changes allow the lithium ions to move in and out, around the stabilizer.” The finding shows that tunnel stabilizers can help in the transfer of ions into tunnels and the rate of charge and discharge, Shahbazian-Yassar says. The presence of potassium ions in the tunnels improves the electronic conductivity of manganese dioxide and the ability of lithium ions to diffuse quickly in and out of the nanowires. “With potassium ions staying in the center of the tunnels, the capacity retention improves by half under high cycling current, which means the battery can hold on to its capacity for a longer time,” he says. Co-authors on the Nature Communications paper are Kun He, Soroosh Sharifi-Asl, Boao Song, and Anmin Nie of UIC; Chun Zhan, Zhenzhen Yang, Xiangyi Luo, Hao Wang, Khalil Amine, and Jun Lu of Argonne; Hungru Chen, Stephen M. Wood, and M. Saiful Islam of the University of Bath; and Wentao Yao of Michigan Tech. Funding was provided by the National Science Foundation and the U.S. 
Department of Energy. The paper is: Yifei Yuan et al., “The influence of large cations on the electrochemical properties of tunnel-structured metal oxides,” Nature Communications (2016), DOI: 10.1038/ncomms13374.
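The “capacity retention” figure quoted by Shahbazian-Yassar has a simple definition: remaining capacity divided by initial capacity. A minimal sketch of the calculation (the numbers below are illustrative placeholders, not measurements from the study):

```python
# Definition sketch: capacity retention after cycling.
# The numbers are illustrative placeholders, NOT data from the UIC/Argonne study.

initial_capacity_mah_g = 200.0    # hypothetical initial specific capacity (mAh/g)
capacity_after_cycling = 150.0    # hypothetical capacity after N cycles (mAh/g)

retention = capacity_after_cycling / initial_capacity_mah_g * 100
print(f"Capacity retention: {retention:.0f}%")   # 75%
```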




News Article | December 20, 2016
Site: www.eurekalert.org

Scientists at Washington University School of Medicine in St. Louis have detailed the structure of a molecule that has been implicated in Alzheimer's disease. Knowing the shape of the molecule -- and how that shape may be disrupted by certain genetic mutations -- can help in understanding how Alzheimer's and other neurodegenerative diseases develop and how to prevent and treat them. The study is published Dec. 20 in the journal eLife. The idea that the molecule TREM2 is involved in cognitive decline -- the hallmark of neurodegenerative diseases, including Alzheimer's -- has gained considerable support in recent years. Past studies have demonstrated that certain mutations that alter the structure of TREM2 are associated with an increased risk of developing late-onset Alzheimer's, frontotemporal dementia, Parkinson's disease and sporadic amyotrophic lateral sclerosis (ALS). Other TREM2 mutations are linked to Nasu-Hakola disease, a rare inherited condition that causes progressive dementia and death in most patients by age 50. "We don't know exactly what dysfunctional TREM2 does to contribute to neurodegeneration, but we know inflammation is the common thread in all these conditions," said senior author Thomas J. Brett, PhD, an assistant professor of medicine. "Our study looked at these mutations in TREM2 and asked what they do to the structure of the protein itself, and how that might impact its function. If we can understand that, we can begin to look for ways to correct it." The analysis of TREM2 structure, completed by first author Daniel L. Kober, a doctoral student in Brett's lab, revealed that the mutations associated with Alzheimer's alter the surface of the protein, while those linked to Nasu-Hakola influence the "guts" of the protein. The difference in location could explain the severity of Nasu-Hakola disease, in which signs of dementia begin in young adulthood. The internal mutations totally disrupt the structure of TREM2, resulting in fewer TREM2 molecules.
The surface mutations, in contrast, leave TREM2 intact but likely make it harder for the molecule to connect to proteins or send signals as normal TREM2 molecules would. TREM2 lies on the surface of immune cells called microglia, which are thought to be important "housekeeping" cells. Via a process called phagocytosis, such cells are responsible for engulfing and cleaning up cellular waste, including the amyloid beta that is known to accumulate in Alzheimer's disease. If the microglia lack TREM2, or the TREM2 that is present doesn't function properly, the cellular housekeepers can't perform their cleanup tasks. "Exactly what TREM2 does is still an open question," Brett said. "We know mice without TREM2 have defects in microglia, which are important in maintaining healthy brain biology. Now that we have these structures, we can study how TREM2 works, or doesn't work, in these neurodegenerative diseases." TREM2 also has been implicated in other inflammatory conditions, including chronic obstructive pulmonary disease and stroke, making the structure of TREM2 important for understanding chronic and degenerative diseases throughout the body, he added. This work was supported by the National Institutes of Health (NIH), grant numbers R01-HL119813, R01-AG044546, R01-AG051485, R01-HL120153, R01-HL121791, K01-AG046374, T32-GM007067, K08-HL121168, and P50-AG005681-30.1; the Burroughs Wellcome Fund; the Alzheimer's Association, grant number AARG-16-441560; and the American Heart Association, grant number PRE22110004. Results were derived from work performed at the Argonne National Laboratory (ANL) Structural Biology Center. ANL is operated by UChicago Argonne, LLC, for the U.S. DOE, Office of Biological and Environmental Research, under contract number DE-AC02-06CH11357. Kober DL, Alexander-Brett JM, Karch CM, Cruchaga C, Colonna M, Holtzman MJ, Brett TJ. Neurodegenerative disease mutations in TREM2 reveal a functional surface and distinct loss-of-function mechanisms. eLife.
Dec. 20, 2016. Washington University School of Medicine's 2,100 employed and volunteer faculty physicians also are the medical staff of Barnes-Jewish and St. Louis Children's hospitals. The School of Medicine is one of the leading medical research, teaching and patient-care institutions in the nation, currently ranked sixth in the nation by U.S. News & World Report. Through its affiliations with Barnes-Jewish and St. Louis Children's hospitals, the School of Medicine is linked to BJC HealthCare.


News Article | February 15, 2017
Site: www.nature.com

We synthesized nanocrystals (NCs) according to reported methods using conventional air-free techniques. Details are provided in the Supplementary Methods. To facilitate the dispersion of NCs in molten inorganic salts and liquid metals, the native organic surface ligands of the NCs were removed with ligand-stripping agents. For instance, 2 ml of an FePt (or upconverting nanoparticles (UCNPs), Fe₃O₄, Pt, Pd) NC dispersion in hexane (5 mg ml⁻¹) was mixed with 2 ml of 0.01 M nitrosonium tetrafluoroborate (NOBF₄, 97%, Aldrich) solution in dichloromethane (ref. 31). The mixture was vigorously stirred until the NCs precipitated, typically within 30 min. The UCNPs needed to be stirred for a longer period of time (up to 10 h). The bare NCs were separated from the supernatant by centrifugation and dispersed well in N,N-dimethylformamide (DMF). To purify these NCs, a mixture of toluene/hexane (1:1 by volume) was added as a non-solvent to flocculate the NCs, which were then redispersed in fresh DMF. The purification process was repeated 2 or 3 times. Removing native ligands from InP and CdSe NCs (or CdSe/CdZnS core/shell quantum dots (QDs)) was achieved by using tetrafluoroboric acid (HBF₄, 50–55% w/w with diethyl ether, Alfa Aesar; ref. 32) and trimethyloxonium tetrafluoroborate (Me₃OBF₄, 95%, Aldrich; ref. 7) as ligand-stripping agents, respectively. All procedures were performed in a glove box filled with N₂ unless otherwise stated. Caution: to disperse NCs, inorganic salts were typically heated to 100–300 °C to a molten state. Vials containing molten inorganic salts need to be handled with care. NCs can disperse in molten inorganic salts in several ways. First, Pt NCs can disperse in a ternary mixture with a composition of AlCl₃:NaCl:KCl = 63.5:20:16.5 mol% (further referred to as AlCl₃/AlCl₄⁻) through a phase transfer process. In this procedure, a solution of Pt NCs in decane (about 0.3 ml, 5 mg ml⁻¹) was combined with molten AlCl₃/AlCl₄⁻ (about 0.3 ml).
The resulting mixture was stirred with a Pyrex-glass-coated stir bar until the NCs completely transferred from the upper, organic phase to the molten salt solvent, typically within 15 min. Pt NC colloids in AlCl3/AlCl4− can also be achieved using a solvent-free method. In this approach, dried NC solids were mixed with solidified AlCl3/AlCl4−, then heated to about 120 °C and stirred. A homogeneous Pt NC colloid was typically obtained within 30 min. A third procedure involved adding a small volume of an NC dispersion in hexane into molten AlCl3/AlCl4−. For instance, a small drop (0.025 ml) of hexane containing 1–5 mg of Pt NCs was quickly added to 0.6 g (about 0.4 ml) of molten AlCl3/AlCl4−. The hexane vaporized upon contact because of the high temperature. The Pt NCs formed a stable, black colloid in AlCl3/AlCl4− while being stirred. Using the above approaches, various NCs (for example, Pd, FePt, UCNPs) with coordinated organic ligands, naked surfaces, or inorganic ligands (the last two typically with the solvent-free method) can form stable colloids in AlCl3/AlCl4−. An Fe3O4 NC colloid in AlCl3/AlCl4− was obtained from dried bare NCs using the solvent-free method. Depending on a number of factors, including the type and concentration of NCs and the dispersion method, stable NC colloids can take anywhere from less than 5 min to several hours to form. Stable NC colloids in halide-based (for example, LiCl/LiI/KI) or pseudohalide-based (for example, NaSCN/KSCN) molten salts were obtained from bare NCs mainly using the solvent-free method. For instance, about 5 mg of dried, bare UCNPs was mixed with 0.5 g of NaSCN/KSCN (about 0.3 ml in the molten state at about 130 °C) at room temperature. The resulting mixture was stirred at 130 °C for several hours until the UCNPs formed a homogeneous, stable colloid in the molten salt. A similar procedure can be applied to other bare NCs. 
It typically takes several hours to a day to disperse the NC solids in salts, depending on the combination of NCs and salts, the amount of NCs, and the stirring conditions. For further morphological, structural, surface, and compositional characterization, NCs were recovered from molten salts after the salt matrices were dissolved with a single solvent or a series of solvents, depending on the composition of the salt. To recover NCs from AlCl3/AlCl4−, a sequential washing procedure with a large excess of anhydrous diethyl ether and tetrahydrofuran (THF) was performed in a glove box filled with N2. These aprotic solvents can dissolve the Lewis acidic components ([Al2Cl7]− with minor Al2Cl6 and AlCl3) by forming Lewis acid–base complexes. (We note that any residual moisture or protic solvents may react with [Al2Cl7]−, forming acidic species that may damage the NCs.) The residual salts were dissolved in methanol and water, in air. NCs were collected by centrifugation, washed with water, and dried before characterization. Similarly, NCs can be recovered from NaSCN/KSCN (acetone or water), LiCl/LiI/KI (anhydrous methanol, followed by water), and other salt matrices. In some cases, the recovered NC solids can be functionalized with additional organic ligands and redispersed in non-polar solvents (toluene or hexane) through sonication. For instance, 5 mg of dried Pt NCs (or Pd or Fe3O4 NCs, or UCNPs) recovered from AlCl3/AlCl4− can form a stable solution in 1 ml of toluene with 0.1–0.2 ml of oleic acid (OA) and oleylamine (OLA) after being sonicated for about 30 min. A conventional purification process using toluene/ethanol as a solvent/non-solvent pair was applied to remove excess organic ligands. Finally, NCs functionalized with organic ligands were dispersed in toluene or hexane. NCs (such as CdSe, Pt, Fe3O4, FePt, and QDs) functionalized by organic ligands in toluene were dried under vacuum to remove organic solvents. 
The dried NCs were mixed with hydrophobic ionic liquids (ILs), such as trihexyl(tetradecyl)phosphonium bis(2,4,4-trimethylpentyl)phosphinate (P+P−), in a glove box filled with N2. After stirring for less than 30 min, we obtained a stable NC solution in P+P− with a high concentration (>50 mg ml−1). NC dispersions in P+P− can be diluted with common non-polar solvents such as toluene without affecting colloidal stability. Adding acetonitrile or ethanol to NC dispersions in P+P−/toluene resulted in a flocculated solution. After centrifugation, the NCs were recovered and redispersed in toluene, leaving P+P− in the supernatant. A phase-transfer process was employed to disperse NCs in hydrophilic, imidazolium-based ILs, including [BMIM]+Cl− and [BMIM]+I−, in a glove box filled with N2. In brief, 1.0 ml of CdSe NCs capped with oleic acid/trioctylphosphine/trioctylphosphine oxide (OA/TOP/TOPO) in octane (15 mg ml−1) was mixed with 1.0 g of [BMIM]+Cl− (about 0.95 ml at 80 °C). Once the solution was vigorously stirred at 80 °C (the melting point of [BMIM]+Cl− is 73 °C), the NCs gradually transferred to the bottom IL phase within several hours, resulting in a colourless top phase. The red-coloured [BMIM]+Cl− phase was rinsed with fresh octane several times to remove residual organic ligands. Afterwards, the slightly turbid IL phase was rinsed with anhydrous ethyl acetate multiple times to break the emulsions, presumably arising from residual species related to the organic ligands, resulting in a stable CdSe NC colloid in [BMIM]+Cl−. These NC colloids in [BMIM]+Cl− are stable under an inert atmosphere for several months. CdSe NCs in [BMIM]+Cl− can be recovered by adding acetonitrile and separated from the supernatant, which contains acetonitrile and [BMIM]+Cl−. The recovered CdSe NCs can be redispersed in fresh [BMIM]+Cl− to form a stable colloid once gently heated and stirred. 
In addition, CdSe NCs/[BMIM]+Cl− remained colloidally stable after being diluted with polar organic solvents such as N-methylformamide (NMF). NCs can also be recovered by adding H2O to the solution in NMF, followed by centrifugation. Phase transfer of CdSe NCs from octane to ILs was also observed when [BMIM]+I− was used as the bottom phase. Similar purification processes were applied to achieve stable CdSe NC colloids in [BMIM]+I− or to recover the NCs. In the cases of [BMIM]+BF4−, 1-ethyl-3-methylimidazolium ethylsulfate ([EMIM]+[EtSO4]−), and 1-butyl-3-methylimidazolium octylsulfate ([BMIM]+[OctSO4]−), no phase transfer was observed. However, CdSe NCs can be stabilized in [BMIM]+BF4− or [EMIM]+[EtSO4]− in the presence of a small amount of [BMIM]+Cl− or [BMIM]+I−. For instance, CdSe NCs in 0.05–1 g of [BMIM]+Cl− maintained colloidal stability after being diluted with 1.0 ml of [EMIM]+[EtSO4]−. For dispersion in liquid metals, 2 ml of a bare Pt NC dispersion (1–5 mg ml−1) in DMF was stirred in the presence of a droplet of liquid metal (about 0.5 g of Ga–In or Wood's alloy) at ~40 and 80 °C, respectively, for 1–3 days. Typically, the bare Pt NCs in DMF were destabilized while the mixture (Pt NCs in DMF with liquid metal) was stirred. Afterwards, the mixture was allowed to cool to room temperature. The metal drop/slug was carefully separated from the supernatant and the insoluble NC precipitates, and was then washed with fresh DMF to remove NC residue on the metal surface. For Wood's alloy, the solidified metal slugs were gently polished with fine sandpaper to completely remove the insoluble NC residue on the surface. Pt NCs can be recovered with acid to digest the metal matrices. Ga–In was digested with diluted HCl (≥37%, TraceSELECT, for trace analysis, fuming, Aldrich); Wood's alloy or Sn–Pb alloy was digested with a mixture of HNO3 (≥69%, TraceSELECT, for trace analysis, Aldrich) and HF (48 wt% in water, ≥99.99%, metal traces, Aldrich) diluted with deionized (DI) water. 
The samples in acids were sonicated to accelerate the digestion of the metal matrices. Pt NCs were separated from the acidic solution by centrifugation and then rinsed with DI water. To study the origin of colloidal stability in molten inorganic salts, we performed classical molecular dynamics (MD) and umbrella sampling simulations [33] of passivated CdSe NCs (in the zinc-blende structure) solvated in molten KCl. MD visualizations were created using the VMD software [34], and the error bars in the MD simulations were related to state-state covariance as discussed elsewhere [35]. Detailed information on the simulation methods and theoretical analysis [36] is available in the Supplementary Materials. Transmission electron microscopy (TEM) images of NCs recovered from molten inorganic salts or liquid metals were collected using a 300 kV FEI Tecnai F30 microscope. The recovered NC solids were suspended in hexane and drop-cast on a carbon-coated copper grid. TEM samples of refunctionalized Pt or Fe3O4 NCs were prepared by drop-casting the NC dispersion in toluene onto TEM grids. The structural information of recovered NCs was assessed from wide-angle powder X-ray diffraction (XRD) patterns using a Bruker D8 diffractometer with a Cu Kα X-ray source operating at 40 kV and 40 mA. To study the colloidal stability of NCs in molten inorganic salts and ILs, small-angle X-ray scattering (SAXS) measurements were carried out at the 12-ID-B beamline (Advanced Photon Source, Argonne National Laboratory). A 14 keV X-ray beam was focused on the samples with a 300 × 20 μm2 (H × V) beam size. A Pilatus 2M detector for SAXS imaging was located about 2 m downstream of the samples. NCs in various media were prepared at a concentration of about 5 mg ml−1 in glass capillaries (0.3–0.8 mm), which were sealed with a flame or epoxy under an inert atmosphere. The capillaries were then loaded into a Linkam THMS 600 hot stage for SAXS measurements at elevated temperatures. 
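The umbrella-sampling approach mentioned above can be illustrated with a self-contained toy model: Monte Carlo sampling in harmonically biased windows along a reaction coordinate, followed by unbiasing and stitching of the window histograms into a potential of mean force (PMF). This is a minimal sketch of the general technique only; the double-well potential, spring constant, and all other parameters below are invented for illustration and have nothing to do with the authors' actual LAMMPS workflow.

```python
import numpy as np

kT = 1.0        # toy units
K_BIAS = 50.0   # umbrella spring constant (illustrative)

def potential(x):
    # Double-well toy free-energy surface standing in for a NC-NC interaction
    return (x**2 - 1.0)**2

def sample_window(center, nsteps=20000, step=0.15, seed=0):
    # Metropolis Monte Carlo in the biased potential U(x) + (K/2)(x - center)^2
    rng = np.random.default_rng(seed)
    x, out = center, np.empty(nsteps)
    for i in range(nsteps):
        xn = x + rng.normal(0.0, step)
        dE = (potential(xn) + 0.5 * K_BIAS * (xn - center)**2
              - potential(x) - 0.5 * K_BIAS * (x - center)**2)
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            x = xn
        out[i] = x
    return out

def stitched_pmf(centers, edges):
    # Unbias each window histogram, then align neighbours in their overlap
    mids = 0.5 * (edges[:-1] + edges[1:])
    pmf = np.full(mids.size, np.nan)
    for i, c in enumerate(centers):
        counts, _ = np.histogram(sample_window(c, seed=i), bins=edges)
        f = np.full(mids.size, np.nan)
        good = counts >= 10                     # ignore poorly sampled bins
        f[good] = -kT * np.log(counts[good]) - 0.5 * K_BIAS * (mids[good] - c)**2
        both = ~np.isnan(pmf) & ~np.isnan(f)
        if both.any():
            f -= np.mean(f[both] - pmf[both])   # shift to match the overlap
        pmf = np.where(np.isnan(pmf), f,
                       np.where(np.isnan(f), pmf, 0.5 * (pmf + f)))
    return mids, pmf - np.nanmin(pmf)

mids, pmf = stitched_pmf(np.linspace(-1.4, 1.4, 15), np.linspace(-1.7, 1.7, 69))
```

The recovered PMF reproduces the two minima near x = ±1 and the barrier at x = 0 of the underlying double well; more sophisticated estimators (WHAM, MBAR) replace the crude pairwise stitching in production work.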
Samples in inorganic salt or IL matrices were heated to a molten state for the SAXS measurements. For comparison, NC solutions/suspensions in toluene, heptadecane, and a toluene/ethanol mixture at similar concentrations were also measured. The NC solution in heptadecane was measured at the same temperature as the NC colloids in AlCl3/AlCl4−. The low signal-to-noise ratio of the SAXS data at high q values in molten salts is related to the high X-ray absorption of the molten salt compared with traditionally used hydrocarbon solvents: at higher q values, the solvent background becomes comparable to the scattering signal from the nanoparticles. UV–visible spectra of CdSe NCs in toluene, P+P−, [BMIM]+I−, and mixed solvents were collected using a Cary 5000 ultraviolet–visible–near-infrared (UV-Vis-NIR) spectrophotometer. Fourier transform infrared (FTIR) spectra of Pt NCs recovered from AlCl3/AlCl4−, QDs recovered from NaSCN/KSCN, and CdSe NCs recovered from P+P− and [BMIM]+Cl− or [BMIM]+I− were acquired in transmission mode using a Nicolet Nexus 670 FTIR spectrometer. For quantitative comparison, the same amounts of dried, organically capped, or recovered NC solids were mixed with KBr powder and pressed into pellets with the same dimensions. For qualitative analysis, photoluminescence (PL) spectra of QDs and UCNPs in organic solvents and in NaSCN/KSCN (in both the molten and solidified states) were recorded using an optical fibre (Ocean Optics QE Pro, QP600-1-UV-VIS) connected to an optical microscope (Olympus BX51). QDs and UCNPs were excited by blue light (white light with a short-pass filter, cut-off at 492 nm) and a 980 nm IR laser, respectively. To monitor the PL spectra of NCs in molten NaSCN/KSCN, the samples were preheated at 150 °C. 
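Returning to the SAXS analysis above: for a well-dispersed spherical NC the scattered intensity follows the single-particle sphere form factor, and its low-q Guinier slope yields the radius of gyration (Rg = √(3/5)·R for a homogeneous sphere), whereas aggregation appears as an extra low-q upturn from the structure factor. The sketch below, with an invented 4 nm radius, illustrates these standard relations rather than the beamline's actual data-reduction pipeline.

```python
import numpy as np

def sphere_form_factor(q, R):
    # Normalized sphere form factor P(q) = [3(sin qR - qR cos qR)/(qR)^3]^2
    x = q * R
    return (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

R = 4.0                                # nm, illustrative NC radius
q = np.linspace(0.005, 0.5, 400)       # nm^-1, typical SAXS q-range
I = sphere_form_factor(q, R)

# Guinier analysis: ln I(q) ~ -Rg^2 q^2 / 3, valid at low q (q*Rg < ~0.7)
Rg_true = np.sqrt(3.0 / 5.0) * R
mask = q * Rg_true < 0.7
slope = np.polyfit(q[mask] ** 2, np.log(I[mask]), 1)[0]
Rg_fit = np.sqrt(-3.0 * slope)         # recovers ~0.775 * R
```

In a measurement the observed intensity is I(q) ∝ N·P(q)·S(q); a stable colloid has S(q) ≈ 1, so an unchanged Guinier region at elevated temperature is the signature of colloidal stability discussed in the text.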
To quantitatively compare the absorption features and PL intensities of QDs with native organic ligands, those recovered from NaSCN/KSCN, and those redispersed in toluene with the addition of organic ligands, the absorption and emission spectra of the QD solutions were collected using a Cary 5000 UV-Vis-NIR spectrophotometer and a Nanolog fluorometer (Horiba JY), respectively. 1H NMR spectra of CdSe NC colloids in toluene and P+P−/toluene were recorded at 500 MHz on a Bruker Ultrashield 500 Plus spectrometer. OA/TOP/TOPO-capped CdSe NCs in hexane were thoroughly dried and redispersed in 1.0 ml of d8-toluene at a concentration of 25 mg ml−1. To probe the OA ligands on the surface of CdSe NCs in P+P−, 0.1 ml of CdSe/P+P− (25 mg ml−1) was diluted with 1.0 ml of d8-toluene. To test for the presence of OA ligands on the surface of CdSe NCs in [BMIM]+Cl−, we recovered the original organic ligands by digesting the CdSe NCs. In brief, DI water was added to CdSe NCs/[BMIM]+Cl− (1.0 ml, 25 mg ml−1) to precipitate the CdSe NCs while dissolving the IL matrix. The NC precipitates were rinsed several times with DI water to completely remove the [BMIM]+Cl−. Afterwards, the NCs were digested in half-concentrated aqua regia following the standard procedure [6]. The organic ligands (OA/TOP/TOPO), if any, were then extracted with diethyl ether, dried under vacuum, and dissolved in d8-toluene (1.0 ml) for NMR study. All samples were measured at 298 K. Inductively coupled plasma optical emission spectroscopy (ICP–OES) analysis of samples digested in acidic aqueous solutions was carried out on an Agilent 700 Series spectrometer. To probe the TOP and TOPO ligands on the surface of CdSe NCs in toluene, P+P−, and [BMIM]+Cl−, the same amounts of NCs recovered from the various solvents were digested by piranha solution and diluted with DI water. 
To confirm the homogeneity of the NCs in liquid metal, several aliquots of Pt NCs in Ga–In eutectic were digested by aqua regia and diluted with DI water for quantitative ICP–OES analysis of the Pt:Ga and Pt:In ratios. Similarly, a piece of Wood's alloy (about 0.45 g) containing bare Pt NCs was digested by aqua regia and diluted with DI water to determine the concentration of Pt NCs. No statistical methods were used to predetermine sample size. The simulations described in the paper were performed with publicly available simulation packages (LAMMPS). The scripts used for data analysis are available from S.V. (svaikunt@uchicago.edu) on request. The data supporting the findings of this study are available from D.V.T. on request.


News Article | October 5, 2016
Site: www.nature.com

The ExbB construct, with and without a C-terminal 6×His tag, was subcloned into pET26b (Novagen). ExbD was subcloned into the pACYCDuet-1 vector (Novagen) with an N-terminal Strep-tag and a C-terminal 10×His tag. ExbD was also subcloned into a pCDF-1b vector (Novagen) containing a C-terminal TEV protease site followed by a 10×His tag. An ExbD construct containing a C-terminal TEV protease site (preceded by a Gly-Gly-Gly linker for efficient digestion by TEV protease) followed by a 10×His tag was constructed by deletion of the sequence encoding the periplasmic domain of ExbD (residues 50–141). TonB was cloned into a pACYCDuet-1 vector with an N-terminal 10×His tag followed by a TEV protease site. Mutants of TonB (C18A), ExbD (D25A, N78C and E113C), and ExbB (C25S) were prepared by site-directed mutagenesis (primer sequences for all cloning and mutagenesis experiments are available upon request). The sequences of all plasmid constructs and mutations were verified by sequence analysis (Macrogen USA and Eurofins Genomics GmbH). Expression of ExbB with a C-terminal 6×His tag was performed by transforming E. coli BL21(DE3) cells (NEB) with the pET26b/ExbB vector. Co-expression was performed by co-transforming E. coli BL21(DE3) cells with the respective ExbB, ExbD, and/or TonB plasmids. For all transformations, cells were plated onto LB agar plates supplemented with the appropriate antibiotics. Colonies were then used for a starter culture to inoculate 12 flasks containing either 1 l of 2×YT medium (Ton subcomplex) or SelenoMet medium supplemented with l-methionine at 40 mg/l (Molecular Dimensions) (Ton complex), with the appropriate antibiotics. Cultures were grown at 37 °C with shaking at 220 r.p.m. until they reached an OD of 0.5–1.0, induced with isopropyl β-d-1-thiogalactopyranoside (IPTG) to a final concentration of 0.1 mM, and then allowed to continue growing overnight at 28 °C. 
For selenomethionine-substituted samples for experimental phasing, B834(DE3) cells (NEB) were co-transformed with the pET26b/ExbB and pCDF-1b/ExbD plasmids. Single colonies were used to inoculate 12 flasks containing 1 l of SelenoMet medium (Molecular Dimensions) supplemented with 40 mg/l l-selenomethionine and the appropriate antibiotics. Cultures were grown at 37 °C with shaking at 220 r.p.m. until they reached an OD of 0.5–1.0, induced with IPTG to a final concentration of 0.1 mM, and then allowed to continue growing overnight at 28 °C. Cells were harvested and used immediately or stored at −80 °C. For purification, cells were resuspended in either 1×PBS (Ton subcomplex) or TBS (Ton complex) supplemented with 100 μM 4-(2-aminoethyl)benzenesulfonyl fluoride (AEBSF), 100 μM DNase, and 50 μg/ml lysozyme, and disrupted with two passages through an EmulsiFlex-C3 (Avestin) operating at ~15,000 p.s.i. Membranes were pelleted by ultracentrifugation in a Beckman Type 45 Ti rotor at 200,000g for 1 h at 4 °C. Membranes were then resuspended in 1×PBS or TBS using a Dounce homogenizer and solubilized by the addition of Triton X-100 (Ton subcomplex) or DDM (Anatrace) (Ton complex) to a final concentration of 1%, with stirring at medium speed for 1 h to overnight at 4 °C. Insoluble material was pelleted by ultracentrifugation in a Beckman Type 45 Ti rotor at 200,000g for 1 h at 4 °C and the supernatant was used immediately. Immobilized metal affinity chromatography (IMAC) was performed on an ÄKTA Purifier (GE Healthcare) using a 15-ml Ni-NTA agarose column (Qiagen) equilibrated with 1×PBS or TBS supplemented with 0.1% Triton X-100 or 0.1% DDM. The supernatant was supplemented with 10 mM imidazole and loaded onto the column. The column was washed in three steps with 1×PBS or TBS supplemented with 20, 40 and 60 mM imidazole, respectively, and eluted in 2-ml fractions with 1×PBS or TBS supplemented with 250 mM imidazole. 
Fractions were analysed by SDS–PAGE and those containing the complex were pooled. To remove the 10×His tag, TEV protease was added to the sample at a final concentration of 0.1 mg/ml and the mixture was rocked overnight at 4 °C. For the Ton complex, the sample was then diluted 2–3 times with 25 mM HEPES, pH 7.3, 0.1% DDM and loaded onto a 6-ml ResourceQ anion exchange column (GE Healthcare). Elution was performed with a 0–1 M NaCl gradient over 5 column volumes. For the Ton subcomplex, the sample was concentrated using an Amicon Ultra-15 centrifugal filter unit with a 50-kDa MW cut-off (Millipore), filtered, and purified by size-exclusion chromatography using a Superdex 200 HL 16/600 column (GE Healthcare) at a flow rate of 0.5–1.0 ml/min. The buffer consisted of 20 mM HEPES-NaOH, pH 7.0, 150 mM NaCl, 0.01% NaN3, and 0.08% C E . For the Ton complex, eluted fractions were concentrated using an Amicon Ultra-15 centrifugal filter unit with a 100-kDa MW cut-off (Millipore) and passed over a Superose 6HR 10/30 column (GE Healthcare) at a flow rate of 0.5 ml/min using 20 mM HEPES-NaOH, pH 7.0, 150 mM NaCl, and 0.05% DDM. Far-UV circular dichroism (CD) spectra (185–260 nm) were measured in 0.1 M sodium phosphate, pH 7.0, 0.03% DDM using quartz cuvettes with a 0.02–0.2 mm optical path length. The results were analysed using the DichroWeb package of programs [42] and different sets of reference proteins, including the SMP180 set of membrane proteins. The thermal stability of the complexes reconstituted into liposomes was assessed from the temperature dependence of the CD signal amplitude at 222 nm. Thermal melting was performed in a magnetically stirred 1-cm quartz cuvette containing 10 mM HEPES, pH 7.0, 100 mM NaCl, with a temperature ramp of 0.5 °C/min. Melting curves were normalized to the measured value of the molar ellipticity change at 10 °C. 
For crystallization, samples were concentrated to ~10 mg/ml and sparse matrix screening was performed with a TTP Labtech Mosquito crystallization robot using hanging-drop vapour diffusion, with plates incubated at 15–21 °C. Initially, many lead conditions produced crystals with hexagonal morphology; however, none diffracted better than ~7 Å and most suffered from anisotropy. To avoid this packing, we performed reductive methylation of our samples before crystallization using the Reductive Alkylation Kit (Hampton Research), followed by an additional size-exclusion chromatography step. This led to a condition that produced diffraction spots to ~4 Å resolution. Further optimization and screening allowed us to grow crystals in 100 mM Na-acetate, pH 4.5, 100 mM MgCl2, and 25% PEG 400 that routinely diffracted to ~3.5 Å resolution or better. For heavy-atom soaking, crystals were transferred to a drop containing 1 mM HgCl2, incubated overnight at room temperature, and then harvested directly from the soaking condition. The best native crystals of the ExbB–ExbD complex, however, were grown from 100 mM HEPES-NaOH, pH 7.0, 100 mM CaCl2, and 22% PEG MME 550 and diffracted to 2.6 Å resolution; these crystals were also used for heavy-atom soaking experiments. Unfortunately, none of the heavy-atom-soaked crystals (nor the selenomethionine-substituted crystals) were useful for phasing owing to crystal pathologies, which we suspected were twinning related. However, selenomethionine-substituted crystals of the ExbB–ExbD complex were obtained using 100 mM MES/imidazole, pH 6.5, 30 mM MgCl2, 30 mM CaCl2, 50% ethylene glycol, and 8% PEG 8000 and diffracted to 5.2 Å resolution with no twinning-related issues. Both native and selenomethionine-substituted crystals were harvested directly from the crystallization drops. 
Screening for diffraction quality was performed at the GM/CA-CAT and SER-CAT beamlines at the Advanced Photon Source, Argonne National Laboratory, and at beamlines 5.0.1 and 8.2.1 at the Advanced Light Source, Lawrence Berkeley National Laboratory. Final datasets were collected at the SER-CAT beamline and all data were processed using either HKL2000 [43] or Xia2 [44]. A summary of the data collection statistics can be found in Supplementary Table 1. The presence of both components of the Ton subcomplex within the crystals was confirmed by SDS–PAGE and mass spectrometry analyses of harvested crystals. For phasing the ExbB–ExbD complex structure, three datasets were collected on selenomethionine-substituted crystals of the ExbB–ExbD complex at a wavelength of 0.979 Å. The data were processed with Xia2 [44] and, based on non-isomorphism, one dataset was removed. The final two datasets were processed together in space group P4 2 2 to a final resolution of 5.2 Å. Selenium sites (35 in total) were located using HKL2MAP [45] after 5,000 tries within SHELXD at a resolution range of 20–6 Å. The sites were then fed into AutoSol (PHENIX) [46], which removed one site, producing a phase-extended, density-modified electron density map into which we could build an initial poly-alanine model. Five-fold symmetry was clearly observed, with each monomer consisting of highly elongated α-helices, and directionality was determined on the basis of the predicted topology of ExbB, which contains a single large cytoplasmic domain. This model was then used as a search model to solve the native and Hg-soaked structures by molecular replacement using PHASER/PHENIX [46, 47], and the sequence was docked on the basis of anomalous peaks from the SeSAD dataset. The ExbB–ExbD complex was solved in space group P2 at 2.6 Å resolution with R/Rfree values of 0.21/0.26, and the Hg-soaked structure in space group P2 2 2 at 3.5 Å resolution with R/Rfree values of 0.25/0.30. 
All model building was performed using COOT and subsequent refinement was done in PHENIX [46]. r.m.s.d. analysis was performed within PyMOL (Schrödinger). Electrostatic surface properties (calculated using the linearized Poisson–Boltzmann equation mode with a solvent radius of 1.4 Å), including generation of the electric field lines, were analysed and visualized using the APBS plugin within PyMOL (Schrödinger). Buried surface area was calculated using the PDBePISA server [48]. Structure-related figures were made with PyMOL (Schrödinger) and Chimera [49] and annotated and finalized with Adobe Photoshop and Illustrator. Coordinates and structure factors for the ExbB/ExbD complexes have been deposited in the Protein Data Bank (PDB accession codes 5SV0 and 5SV1). For 2D crystallization experiments, the Ton subcomplex (ExbB–ExbD) was extracted and purified by IMAC as described above. The sample was passed over a Superose 12 HR 10/30 column using 20 mM Tris-HCl, pH 7, 150 mM NaCl, 0.01% NaN3, and 0.035% Triton X-100. The purified complex was then mixed with a stock solution of E. coli polar lipid (Avanti Polar Lipids, Inc.) at 10 mg/ml in 2% Triton X-100 to reach final concentrations of 0.5–1.0 mg/ml protein and 0.1–0.4 mg/ml lipid. The lipid–protein–detergent sample solutions were placed into Mini Slide-A-Lyzer dialysis devices (Pierce) with a 20-kDa MW cut-off and dialysed against 1 l of 25 mM Tris-HCl, pH 7.0, 150 mM NaCl, and 0.01% NaN3 at 4 °C. Aliquots of the dialysed samples were examined periodically by electron microscopy to monitor the formation of 2D crystals. Samples for electron microscopy were prepared by applying a 5-μl drop of protein–lipid material to a glow-discharged, carbon-coated electron microscopy grid. Staining was performed by adding 1% (w/v) uranyl acetate and incubating for 1 min. Grids were then imaged on a Tecnai G2 200 LaB6 electron microscope operating at 200 kV at the Institut de Microbiologie de la Méditerranée. 
Images were recorded with a 2K Eagle CCD camera. The best 2D crystals were selected through observation of the power spectra of the images using ImageJ software [41]. Selected images were processed using the IPLT Correlation Averaging suite [50]. A filtered image was generated by optical filtering of the low-resolution spots and padded to contain only 4–6 unit cells. The padded image was cross-correlated with the original large image. The positions of the cross-correlation peaks were determined and used to extract sub-images, which were summed to generate an average image of the 2D unit cell. Site-directed spin labelling was used to covalently attach the spin label (1-oxyl-2,2,5,5-tetramethyl-∆3-pyrroline-3-methyl) methanethiosulfonate (MTSL) (Toronto Research Chemicals) to Cys25 of ExbB and to cysteines engineered at positions 78 and 113 of ExbD (N78C, E113C; the ExbD constructs were in the pACYC vector containing an N-terminal Strep-tag and a C-terminal 10×His tag for the Ton subcomplex, and in the pCDF-1b vector for the Ton complex). For labelling with MTSL, samples were first incubated with 2–10 mM dithiothreitol (DTT) for 1–2 h and the DTT was then removed by passage over a HiTrap desalting column (GE Healthcare) or during anion exchange (Ton complex). Samples were then incubated with a 10× molar excess of MTSL overnight at 4 °C and passed over a Superose 6HR 10/30 gel filtration column (GE Healthcare) using 20 mM HEPES-NaOH, pH 7.5, 200 mM NaCl, 0.08% C E or 0.03% DDM (Ton subcomplex); or 20 mM HEPES-NaOH, pH 7.0, 150 mM NaCl, and 0.05% DDM (Ton complex). For DEER measurements, the samples were diluted with D2O to a final concentration of 30% and cryoprotected with 10% v/v d8-glycerol before being flash-frozen in liquid nitrogen. 
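The correlation-averaging procedure described above (cross-correlate the image with a reference containing the motif, locate the lattice of correlation peaks, extract and average sub-images) can be sketched numerically on synthetic data. The Gaussian motif, lattice period, noise level, and peak-picking scheme below are all invented for illustration; they do not reproduce IPLT's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2D "crystal": a Gaussian motif repeated on a square lattice + noise
P = 32                                   # lattice period in pixels
yy, xx = np.mgrid[0:P, 0:P]
motif = np.exp(-((xx - P / 2) ** 2 + (yy - P / 2) ** 2) / (2 * 3.0 ** 2))
image = np.tile(motif, (8, 8)) + rng.normal(0.0, 0.5, (8 * P, 8 * P))

# FFT-based cross-correlation of the image with the (mean-subtracted) motif
ref = np.zeros_like(image)
ref[:P, :P] = motif - motif.mean()
cc = np.real(np.fft.ifft2(np.fft.fft2(image - image.mean()) *
                          np.conj(np.fft.fft2(ref))))

# Pick the strongest peaks with simple non-maximum suppression
work = cc.copy()
peaks = []
for _ in range(64):
    iy, ix = np.unravel_index(np.argmax(work), work.shape)
    peaks.append((iy, ix))
    work[max(0, iy - P // 2):iy + P // 2, max(0, ix - P // 2):ix + P // 2] = -np.inf

# Extract the sub-image at each peak and average them to beat down the noise
wins = [image[iy:iy + P, ix:ix + P] for iy, ix in peaks
        if iy + P <= image.shape[0] and ix + P <= image.shape[1]]
avg = np.mean(wins, axis=0)
```

Averaging N aligned unit cells suppresses the uncorrelated noise by roughly √N, which is why the averaged unit-cell image is far cleaner than any single extracted window.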
Continuous-wave (CW) electron paramagnetic resonance (EPR) experiments were carried out at room temperature on a bench-top X-band MiniScope MS 400 spectrometer (Magnettech by Freiberg Instruments) operating at 9.5 GHz with 2.5 mW microwave power, 15 mT sweep width, and 0.15 mT modulation amplitude. Spin-labelling efficiency was calculated from the second integral of the derivative spectra compared with a standard of known spin concentration (100 μM Tempol in water). The native ExbB cysteine C25 was labelled with 50% efficiency, while the ExbD mutants were labelled with efficiencies >80%. DEER measurements were initially performed at ETH Zurich on a commercial Bruker ELEXSYS-II E580 Q-band spectrometer (34–35 GHz) and later on a Bruker ELEXSYS E580Q-AWG dedicated pulse Q-band spectrometer operating at 34–35 GHz. Both spectrometers were equipped with a TWT amplifier (150 W) and a home-made rectangular resonator (from ETH Zurich) enabling the insertion of 30–40 μl of sample in quartz tubes with a 3 mm outer diameter [51]. Dipolar time evolution data were acquired using the four-pulse DEER experiment at 50 K. All pulses were set to be rectangular with 12 ns length, with the pump frequency at the maximum of the echo-detected field-swept spectrum, 100 MHz higher than the observer frequency. Deuterium nuclear modulations were averaged by increasing the first interpulse delay by 16 ns over 8 steps, as previously described [51]. The background of the normalized DEER primary data (V(t)/V(0)) was fitted with the background dimension optimized between 2.5 and 3.2, and the resulting normalized secondary data (F(t)/F(0)) were converted by model-free Tikhonov regularization into distance distributions with the software DeerAnalysis2015 [52, 53]. Simulation of the possible spin-label rotamers populated at selected positions in the protein was performed using the Matlab program package MMM2015.1 with the MTSL ambient-temperature library [54]. 
The ExbB–ExbD complex (ExbD was in the pACYC vector containing an N-terminal Strep-tag and a C-terminal 6×His tag) was expressed and purified as described above. To prepare the sample for crosslinking, it was incubated at 4 °C with 5 mM DTT for at least 1 h. The DTT was then removed using a desalting column in 20 mM HEPES, pH 7.0, 150 mM NaCl, and 0.1% DDM. The crosslinker 1,8-bismaleimido-diethyleneglycol (BM(PEG)2) (Pierce) was added at a final concentration of 0.2 mM and the reaction was incubated at 4 °C overnight. The sample was concentrated and passed over a Superose 6HR 10/30 gel filtration column using 20 mM HEPES-NaOH, pH 7.0, 150 mM NaCl, and 0.035% DDM on an ÄKTA Purifier system (GE Healthcare). The results were visualized by SDS–PAGE analysis. Protein complexes were reconstituted into liposomes by dialysis of the protein–lipid–detergent mixture. Lipids (DOPG, DOPC and DOPE) dissolved in chloroform were mixed in a molar ratio of 2:3:5. The chloroform was removed by vortexing under a stream of nitrogen gas in a glass tube, followed by drying under vacuum for 2–3 h. The lipid film was hydrated in 1 ml of TN buffer (10 mM Tris-HCl, pH 7.5, 50 mM NaCl), followed by five freeze–thaw cycles and sonication in a water bath sonicator until the lipid suspension became clear (10–15 min). For proteoliposome preparation, small unilamellar vesicles (SUVs) were mixed with octylglucoside (final concentration 2%) and proteins were then added to achieve a total lipid-to-protein molar ratio of ∼500–2,000 mol/mol. After 1 h of incubation on ice, the lipid–protein–detergent mixture was dialysed against 10 mM Tris-HCl, pH 7.5, 0.3 M sucrose, and 50 mM KCl for 30–40 h using a dialysis membrane with a 10-kDa MW cut-off. 
Mueller–Rudin-type planar bilayer membranes were formed on a 0.2-mm-diameter aperture in a partition separating two 1-ml compartments, using a mixture of the lipids DOPG, DOPC and DOPE at a molar ratio of 2:3:5 (10 mg/ml) in n-decane, applied by a brush technique [55]. The aqueous solution in both compartments consisted of 2 mM potassium phosphate, pH 7.0, with 0.1 M and 0.4 M KCl in the cis- and trans-compartments, respectively. To study the pH dependence of channel activity, the bathing solutions were buffered with 2 mM Na-acetate (pKa 4.8), Na-cacodylate (pKa 6.2), or Tris (pKa 8.3). The pH of the bathing solution was changed by adding 10–20 μl of 0.1 M HCl or KOH. The cis-side of the planar bilayer is defined as the side to which the electrical potential is applied. Proteoliposomes (0.1–2 μl) were added to the trans-compartment and the solutions were stirred until a transmembrane current appeared. The large concentration of osmolyte inside the liposomes and the transmembrane KCl concentration gradient drove proteoliposome fusion with the pre-formed planar lipid bilayer. The transmembrane current was measured in voltage-clamp mode with Ag/AgCl electrodes and agar bridges, using a BC-525C amplifier (Warner Instruments). The single-channel conductance of the ExbB–ExbD complexes was measured under symmetrical salt conditions (0.1 M KCl, pH 7.5) at a holding potential of +50 or −50 mV. For ion-selectivity experiments, the zero-current (reversal) potential was determined from current–voltage characteristics measured under asymmetric salt conditions. The relative cation/anion permeability was calculated using the Goldman–Hodgkin–Katz equation [56].
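The Goldman–Hodgkin–Katz (GHK) step can be made concrete. For a single salt (KCl) with concentrations c1 and c2 on the two sides of the bilayer, the GHK voltage equation gives the zero-current potential as V = (RT/F)·ln[(P_K·c2 + P_Cl·c1)/(P_K·c1 + P_Cl·c2)], which can be inverted algebraically for P_K/P_Cl. The sketch below is hedged: the sign convention (potential of side 1 relative to side 2) and the temperature are assumptions, and the example numbers are illustrative rather than the paper's measured values.

```python
import numpy as np

R, F = 8.314, 96485.0                 # J mol^-1 K^-1, C mol^-1

def permeability_ratio(v_rev, c1, c2, T=295.0):
    """Invert the GHK voltage equation for P_K/P_Cl in a single-salt (KCl)
    gradient: exp(VF/RT) = (P_K*c2 + P_Cl*c1) / (P_K*c1 + P_Cl*c2),
    with V the zero-current potential (volts) of side 1 relative to side 2
    and c1, c2 the KCl concentrations (M) on sides 1 and 2.
    """
    x = np.exp(v_rev * F / (R * T))
    # Rearranged: P_K/P_Cl = (c1 - x*c2) / (x*c1 - c2)
    return (c1 - x * c2) / (x * c1 - c2)

# Sanity checks with the 0.1 M / 0.4 M KCl gradient used in the text:
r_neutral = permeability_ratio(0.0, 0.1, 0.4)    # V = 0 -> no selectivity, 1.0
r_cation = permeability_ratio(0.0215, 0.1, 0.4)  # +21.5 mV -> ≈ 5-fold cation preference
```

In the limiting cases the formula behaves as expected: a perfectly cation-selective channel gives V = (RT/F)·ln(c2/c1) (about +35 mV here) and the ratio diverges, while a perfectly anion-selective one gives the opposite potential and a ratio of zero.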


Englert C.,University of Glasgow | Low I.,Argonne National Laboratory | Low I.,Northwestern University | Spannowsky M.,Durham University
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

Top-quark loops in Higgs production via gluon fusion at large invariant final-state masses can induce important interference effects in searches for additional Higgs bosons, as predicted in, e.g., Higgs portal scenarios and the minimal supersymmetric Standard Model, when the heavy scalar is broad or the final-state resolution is poor. Currently, the limit setting performed by both ATLAS and CMS is based on injecting a heavy Higgs-like signal while neglecting interference effects. In this paper, we perform a study of such "on-shell" interference effects in pp→ZZ and find that they lead to an O(30%), width-scheme-dependent modification of the signal strength. Including the continuum contributions to obtain, e.g., the full pp→ZZ→4ℓ final state, this modification is reduced to the 10% level in the considered intermediate mass range. © 2015 American Physical Society.


Segovia J.,University of Salamanca | Roberts C.D.,Argonne National Laboratory | Schmidt S.M.,Jülich Research Center
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2015

Analyses of the three valence-quark bound-state problem in relativistic quantum field theory predict that the nucleon may be understood primarily as a Borromean bound-state, in which binding arises mainly from two separate effects. One originates in non-Abelian facets of QCD that are expressed in the strong running coupling and generate confined but strongly-correlated colour-antitriplet diquark clusters in both the scalar-isoscalar and pseudovector-isotriplet channels. That attraction is magnified by quark exchange associated with diquark breakup and reformation. Diquark clustering is driven by the same mechanism which dynamically breaks chiral symmetry in the Standard Model. It has numerous observable consequences, the complete elucidation of which requires a framework that also simultaneously expresses the running of the coupling and masses in the strong interaction. Planned experiments are capable of validating this picture. © 2015 Argonne National Laboratory and The Authors.


Vaughey J.T.,Argonne National Laboratory | Liu G.,Lawrence Berkeley National Laboratory | Zhang J.-G.,Pacific Northwest National Laboratory
MRS Bulletin | Year: 2014

The success of high capacity energy storage systems based on lithium (Li) batteries relies on the realization of the promise of Li-metal anodes. Li metal has many advantageous properties, including an extremely high theoretical specific capacity (3860 mAh g-1), the lowest electrochemical potential (-3.040 V versus standard hydrogen electrode), and low density (0.59 g cm-3), which, all together, make it a very desirable electrode for energy storage devices. However, while primary Li batteries are used for numerous commercial applications, rechargeable Li-metal batteries that utilize Li-metal anodes have not been as successful. This article discusses the properties of Li metal in the absence of surface stabilization, as well as three different approaches currently under investigation for stabilizing the surface of Li metal to control its reactivity within the electrochemical environment of a Li-based battery. © 2014 Materials Research Society.
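The 3860 mAh g⁻¹ figure quoted above follows directly from Faraday's law, Q = nF/(3.6·M); a small sketch reproducing it (the graphite host value is added only for contrast):

```python
def specific_capacity_mah_g(molar_mass_g_mol, n_electrons=1):
    """Theoretical specific capacity in mAh/g: Q = n * F / (3.6 * M)."""
    faraday = 96485.33  # Faraday constant, C/mol
    return n_electrons * faraday / (3.6 * molar_mass_g_mol)

li_metal = specific_capacity_mah_g(6.94)    # Li metal: ~3862 mAh/g, as quoted
graphite = specific_capacity_mah_g(72.06)   # per C6 host in LiC6: ~372 mAh/g
```

The factor 3.6 converts coulombs per gram to milliamp-hours per gram (1 mAh = 3.6 C).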


Croy J.R.,Argonne National Laboratory | Abouimrane A.,Argonne National Laboratory | Zhang Z.,Pacific Northwest National Laboratory
MRS Bulletin | Year: 2014

The commercialization of lithium-ion batteries has profoundly changed our lives, enabling the portable electronic devices that have revolutionized communications, entertainment, medicine, and more. After three decades of commercial development, researchers around the world are now pursuing major advances that would allow this technology to power the next generation of light-duty electric and hybrid-electric vehicles. If this goal is to be met, concerted advances in safety and cost, as well as cycle-life and energy density, must be realized through improvements in the properties of the highly correlated, but separate, components of lithium-ion energy-storage systems. © 2014 Materials Research Society.


Ament L.J.P.,Leiden University | Van Veenendaal M.,Argonne National Laboratory | Van Veenendaal M.,Northern Illinois University | Devereaux T.P.,SLAC | And 2 more authors.
Reviews of Modern Physics | Year: 2011

In the past decade, resonant inelastic x-ray scattering (RIXS) has made remarkable progress as a spectroscopic technique. This is a direct result of the availability of high-brilliance synchrotron x-ray radiation sources and of advanced photon detection instrumentation. The technique's unique capability to probe elementary excitations in complex materials by measuring their energy, momentum, and polarization dependence has brought RIXS to the forefront of experimental photon science. Both the experimental and theoretical RIXS investigations of the past decade are reviewed, focusing on those determining the low-energy charge, spin, orbital, and lattice excitations of solids. The fundamentals of RIXS as an experimental method are presented and then the theoretical state of affairs, its recent developments, and the different (approximate) methods to compute the dynamical RIXS response are reviewed. The last decade's body of experimental RIXS data and its interpretation is surveyed, with an emphasis on RIXS studies of correlated electron systems, especially transition-metal compounds. Finally, the promise that RIXS holds for the near future is discussed, particularly in view of the advent of x-ray laser photon sources. © 2011 American Physical Society.


Gumbart J.C.,Argonne National Laboratory | Roux B.,Argonne National Laboratory | Roux B.,University of Chicago | Chipot C.,University of Illinois at Urbana - Champaign | Chipot C.,CNRS Structure and Reactivity of Complex Molecular Systems
Journal of Chemical Theory and Computation | Year: 2013

Accurate prediction of standard binding free energies describing protein-ligand association remains a daunting computational endeavor. This challenge is rooted to a large extent in the considerable changes in conformational, translational, and rotational entropies underlying the binding process that atomistic simulations cannot easily sample. In spite of significant methodological advances, reflected in a continuously improving agreement with experiment, a characterization of alternate strategies aimed at measuring binding affinities, notably their respective advantages and drawbacks, is somewhat lacking. Here, two distinct avenues to determine the standard binding free energy are compared in the case of a short, proline-rich peptide associating to the Src homology domain 3 of tyrosine kinase Abl. These avenues, one relying upon alchemical transformations and the other on potentials of mean force (PMFs), invoke a series of geometrical restraints acting on collective variables designed to alleviate sampling limitations inherent to classical molecular dynamics simulations. The experimental binding free energy of ΔGbind = -7.99 kcal/mol is well reproduced by the two strategies developed herein, with ΔGbind = -7.7 kcal/mol for the alchemical route and ΔGbind = -7.8 kcal/mol for the alternate PMF-based route. In detailing the underpinnings of these numerical strategies devised for the accurate determination of standard binding free energies, many practical elements of the proposed rigorous, conceptual framework are clarified, thereby paving the way to tackle virtually any recognition and association phenomenon. © 2012 American Chemical Society.
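The quoted free energies map directly onto dissociation constants through ΔG° = RT ln(Kd/c°) with the 1 M standard state; a short sketch of both directions of the conversion (T = 298.15 K is an assumption, since the abstract does not state the temperature):

```python
import math

R_KCAL = 1.987204e-3  # gas constant in kcal/(mol*K)

def kd_from_dg(dg_kcal_mol, temp=298.15):
    """Dissociation constant (M) implied by a standard binding free energy:
    Kd = c0 * exp(dG/RT), with standard-state concentration c0 = 1 M."""
    return math.exp(dg_kcal_mol / (R_KCAL * temp))

def dg_from_kd(kd_molar, temp=298.15):
    """Inverse mapping: dG = RT * ln(Kd / c0)."""
    return R_KCAL * temp * math.log(kd_molar)
```

On these assumptions, ΔGbind = -7.99 kcal/mol corresponds to Kd ≈ 1.4 µM, and the ~0.2 kcal/mol spread between the alchemical and PMF estimates amounts to only a ~1.4× spread in Kd.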


Muduli P.K.,Gothenburg University | Heinonen O.G.,Argonne National Laboratory | Heinonen O.G.,Northwestern University | Akerman J.,Gothenburg University | Akerman J.,KTH Royal Institute of Technology
Physical Review Letters | Year: 2012

We discuss the coherence of magnetic oscillations in a magnetic tunnel junction based spin torque oscillator as a function of the external field angle. Time-frequency analysis shows mode hopping between distinct oscillator modes, which arises from linear and nonlinear couplings in the Landau-Lifshitz-Gilbert equation, analogous to mode hopping observed in semiconductor ring lasers. These couplings and, therefore, mode hopping are minimized near the current threshold for the antiparallel alignment of free-layer with reference layer magnetization. Away from the antiparallel alignment, mode hopping limits oscillator coherence. © 2012 American Physical Society.


Wurtz G.A.,University of North Florida | Pollard R.,Queen's University of Belfast | Hendren W.,Queen's University of Belfast | Wiederrecht G.P.,Argonne National Laboratory | And 3 more authors.
Nature Nanotechnology | Year: 2011

All-optical signal processing enables modulation and transmission speeds not achievable using electronics alone [1,2]. However, its practical applications are limited by the inherently weak nonlinear effects that govern photon-photon interactions in conventional materials, particularly at high switching rates [3]. Here, we show that the recently discovered nonlocal optical behaviour of plasmonic nanorod metamaterials [4] enables an enhanced, ultrafast, nonlinear optical response. We observe a large (80%) change of transmission through a subwavelength-thick slab of metamaterial subjected to a low control light fluence of 7 mJ cm−2, with switching frequencies in the terahertz range. We show that both the response time and the nonlinearity can be engineered by appropriate design of the metamaterial nanostructure. The use of nonlocality to enhance the nonlinear optical response of metamaterials, demonstrated here in plasmonic nanorod composites, could lead to ultrafast, low-power all-optical information processing in subwavelength-scale devices. © 2011 Macmillan Publishers Limited. All rights reserved.


Peng S.,Argonne National Laboratory | Lei C.,University of Illinois at Urbana - Champaign | Ren Y.,Argonne National Laboratory | Cook R.E.,Argonne National Laboratory | Sun Y.,Argonne National Laboratory
Angewandte Chemie - International Edition | Year: 2011

Superparafragilistic: An amorphous seed-mediated strategy has been developed for the synthesis of hybrid nanoparticles (see images) that are composed of silver (yellow) and iron oxide (blue) nanodomains and exhibit unique optical properties. These properties originate from both the strong surface plasmon resonance of the silver and the strong superparamagnetic responses of the iron oxide nanodomains. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Morin A.,Harvard University | Urban J.,University of California at Berkeley | Adams P.D.,Lawrence Berkeley National Laboratory | Foster I.,Argonne National Laboratory | And 3 more authors.
Science | Year: 2012

Funders, publishers, and research institutions must act to ensure that research computer code is made widely available.


Szalay P.G.,Eötvös Loránd University | Muller T.,Jülich Research Center | Gidofalvi G.,Gonzaga University | Lischka H.,Texas Tech University | And 2 more authors.
Chemical Reviews | Year: 2012

Recent developments in gradient theory, the calculation of molecular properties, nonadiabatic coupling between electronic states, and relativistic and spin-orbit effects are discussed. Shepard et al. proposed the subspace-projected approximate matrix (SPAM) method, which employs a sequence of one or more approximations to the H matrix. Duch and Diercksen compared the formulas and concluded that the Duch–Diercksen and Pople corrections clearly outperform the original Davidson correction and its renormalized variants. Tanaka and co-workers developed the multireference coupled pair approximation (MRCPA) method, which is size-consistent for noninteracting electron pairs and can be applied to excited states, although analytic gradient calculations are not available. Chan and Head-Gordon used a reverse Cuthill–McKee reordering of the orbitals to make the one-electron integral matrix close to band-diagonal.


Chang L.,Jülich Research Center | Cloet I.C.,Argonne National Laboratory | Roberts C.D.,Argonne National Laboratory | Schmidt S.M.,Jülich Research Center | Tandy P.C.,Kent State University
Physical Review Letters | Year: 2013

A novel method is employed to compute the pion electromagnetic form factor, Fπ(Q²), on the entire domain of spacelike momentum transfer using the Dyson-Schwinger equation (DSE) framework in QCD. The DSE architecture unifies this prediction with that of the pion's valence-quark parton distribution amplitude (PDA). Using this PDA, the leading-order, leading-twist perturbative QCD result for Q²Fπ(Q²) underestimates the full computation by just 15% on Q² ≳ 8 GeV², in stark contrast to the result obtained using the asymptotic PDA. The analysis shows that hard contributions to the pion form factor dominate for Q² ≳ 8 GeV², but, even so, the magnitude of Q²Fπ(Q²) reflects the scale of dynamical chiral symmetry breaking, a pivotal emergent phenomenon in the Standard Model. © 2013 American Physical Society.


Chang L.,Jülich Research Center | Roberts C.D.,Argonne National Laboratory | Schmidt S.M.,Jülich Research Center
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2013

The pseudoscalar projection of the pion's Poincaré-covariant Bethe-Salpeter amplitude onto the light-front may be understood to provide the probability distribution of the chiral condensate within the pion. Unlike the parton distribution amplitudes usually considered and as befitting a collective effect, this condensate distribution receives contributions from all Fock space components of the pion's light-front wave function. We compute this condensate distribution using the Dyson-Schwinger equation (DSE) framework and show the result to be a model-independent feature of quantum chromodynamics (QCD). Our analysis establishes that this condensate is concentrated in the neighbourhood of the boundaries of the distribution's domain of support. It thereby confirms the dominant role played by many-particle Fock states within the pion's light-front wave function in generating the chiral condensate and verifies that light-front longitudinal zero modes do not play a material role in that process. © 2013 Elsevier B.V.


Roberts C.D.,Argonne National Laboratory | Holt R.J.,Argonne National Laboratory | Schmidt S.M.,Jülich Research Center
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2013

Dyson-Schwinger equation treatments of the strong interaction show that the presence and importance of nonpointlike diquark correlations within the nucleon are a natural consequence of dynamical chiral symmetry breaking. Using this foundation, we deduce a collection of simple formulae, expressed in terms of diquark appearance and mixing probabilities, from which one may compute ratios of longitudinal-spin-dependent u- and d-quark parton distribution functions on the domain x ≃ 1. A comparison with predictions from other approaches, plus a consideration of extant and planned experiments, shows that the measurement of nucleon longitudinal spin asymmetries at x ≃ 1 can add considerably to our capacity for discriminating between contemporary pictures of nucleon structure. © 2013 Elsevier B.V.


Cloet I.C.,Argonne National Laboratory | Chang L.,Jülich Research Center | Roberts C.D.,Argonne National Laboratory | Schmidt S.M.,Jülich Research Center | Tandy P.C.,Kent State University
Physical Review Letters | Year: 2013

A method is explained through which a pointwise accurate approximation to the pion's valence-quark distribution amplitude (PDA) may be obtained from a limited number of moments. In connection with the single nontrivial moment accessible in contemporary simulations of lattice-regularized QCD, the method yields a PDA that is a broad concave function whose pointwise form agrees with that predicted by Dyson-Schwinger equation analyses of the pion. Under leading-order evolution, the PDA remains broad to energy scales in excess of 100 GeV, a feature which signals persistence of the influence of dynamical chiral symmetry breaking. Consequently, the asymptotic distribution φπ^asy(x) is a poor approximation to the pion's PDA at all such scales that are either currently accessible or foreseeable in experiments on pion elastic and transition form factors. Thus, related expectations based on φπ^asy(x) should be revised. © 2013 American Physical Society.


Yan M.,Jülich Research Center | Kakay A.,Jülich Research Center | Gliga S.,Jülich Research Center | Gliga S.,Argonne National Laboratory | Hertel R.,Jülich Research Center
Physical Review Letters | Year: 2010

We present a micromagnetic study on the current-induced domain-wall motion in cylindrical Permalloy nanowires with diameters below 50 nm. The transverse domain walls forming in such thin, round wires are found to differ significantly from those known from flat nanostrips. In particular, we show that these domain walls are zero-mass micromagnetic objects. As a consequence, they display outstanding dynamic properties, most importantly the absence of a breakdown velocity generally known as the Walker limit. Our simulation data are confirmed by an analytic model which provides a detailed physical understanding. We further predict that a particular effect of the current-induced dynamics of these domain walls could be exploited to measure the nonadiabatic spin-transfer torque coefficient. © 2010 The American Physical Society.


An C.,University of Illinois at Urbana - Champaign | An C.,China University of Petroleum - East China | Peng S.,Argonne National Laboratory | Sun Y.,Argonne National Laboratory
Advanced Materials | Year: 2010

Highly efficient plasmonic photocatalysts of AgCl:Ag hybrid nanoparticles are successfully synthesized via a one-pot synthetic approach involving a precipitation reaction followed by polyol reduction. The as-synthesized nanoparticles exhibit high catalytic performance under visible light and sunlight for decomposing organics, such as methylene blue. © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Gelis A.V.,Argonne National Laboratory | Lumetta G.J.,Pacific Northwest National Laboratory
Industrial and Engineering Chemistry Research | Year: 2014

Separation of the minor actinides (Am, Cm) from the lanthanides at an industrial scale remains a significant technical challenge for closing the nuclear fuel cycle. To increase the safety of used nuclear fuel (UNF) reprocessing, as well as reduce associated costs, a novel solvent extraction process has been developed. The process allows for partitioning minor actinides, lanthanides, and fission products following uranium/plutonium/neptunium removal, minimizing the number of separation steps, flowsheets, chemical consumption, and waste. This new process, actinide lanthanide separation (ALSEP), uses an organic solvent consisting of a neutral diglycolamide extractant, either N,N,N′,N′-tetra(2-ethylhexyl)diglycolamide (T2EHDGA) or N,N,N′,N′-tetraoctyldiglycolamide (TODGA), and an acidic extractant, 2-ethylhexylphosphonic acid mono-2-ethylhexyl ester (HEH[EHP]), dissolved in an aliphatic diluent (e.g., n-dodecane). The An/Ln coextraction is conducted from moderate-to-strong nitric acid, while the selective stripping of the minor actinides from the lanthanides is carried out using a polyaminocarboxylic acid/citrate buffered solution at a pH between 3 and 4.5. The extraction and separation of the actinides from the fission products is very effective over a wide range of HNO3 concentrations, and the minimum separation factors for lanthanide/Am exceed 30 for Nd/Am, reaching >60 for Eu/Am under some conditions. The experimental results presented here demonstrate the great potential for a combined system, consisting of a neutral extractant such as T2EHDGA or TODGA and an acidic extractant such as HEH[EHP], for separating the minor actinides from the lanthanides. © 2014 American Chemical Society.
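The figures of merit in this abstract reduce to distribution ratios and their quotients; a minimal sketch of those two definitions (the D values in the usage note below are hypothetical, chosen only to illustrate an SF(Eu/Am) > 60 case, not measured ALSEP data):

```python
def distribution_ratio(conc_org, conc_aq):
    """D = [M]_org / [M]_aq for one metal after one extraction contact."""
    return conc_org / conc_aq

def separation_factor(d_a, d_b):
    """SF(A/B) = D_A / D_B; values > 1 mean A reports to the organic phase
    preferentially over B."""
    return d_a / d_b
```

For instance, hypothetical distribution ratios D_Eu = 12.0 and D_Am = 0.18 give SF(Eu/Am) ≈ 67, in the >60 regime the abstract reports for some conditions.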


Fulton J.L.,Pacific Northwest National Laboratory | Balasubramanian M.,Argonne National Laboratory
Journal of the American Chemical Society | Year: 2010

A comprehensive analysis of the H3O+ and H2O structure in the first solvation shell about Cl- in aqueous HCl solutions is reported from X-ray absorption fine structure (XAFS) measurements. Results show an increasing degree of contact ion pairing between Cl- and H3O+ as the HCl concentration increases from 6.0 m to 10.0 m and finally 16.1 m HCl (acid concentrations are expressed as molality, or mole HCl/1000 g water). At the highest acid concentration there are, on average, approximately 1.6 H3O+ ions and 4.2 H2O molecules in the first shell about Cl-. The structure of the Cl-/H3O+ contact ion pair is distinctly different from that of the H2O structure about Cl-. The Cl-O bond length (2.98 Å) for Cl-/H3O+ is approximately 0.16 Å shorter than the Cl-/H2O bond. The bridging proton resides at an intermediate position between Cl and O, 1.60 Å from the Cl- and approximately 1.37 Å from the O of the H3O+. The bridging-proton structure of this contact ion pair, (Cl-H-OH2), is similar to the structure of the water Zundel ion, (H2O-H-OH2+). In both cases there is a shortened Cl-O or O-O bond, and the intervening proton bond distances are substantially longer than the covalent bonds of either HCl or H2O. A detailed structural analysis of the aqueous chloride species, Cl-/(H2O)n, was also completed as part of this study in order to understand the relative importance of various XAFS photoelectron scattering paths. For aqueous Cl- the measured Cl-O and Cl-H distances of 3.14 Å and 2.23 Å, respectively, are in excellent agreement with earlier neutron and X-ray diffraction results. Overall, these results significantly improve our understanding of the interaction of H3O+ with Cl-. The results are of interest to fundamental physical chemistry and have important consequences for biochemical, geochemical, and atmospheric processes. © 2010 American Chemical Society.


Qin S.-X.,Goethe University Frankfurt | Roberts C.D.,Argonne National Laboratory | Schmidt S.M.,Jülich Research Center
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2014

The colour-singlet axial-vector vertex plays a pivotal role in understanding dynamical chiral symmetry breaking and numerous hadronic weak interactions, yet scant model-independent information is available. We therefore use longitudinal and transverse Ward-Green-Takahashi (WGT) identities, together with kinematic constraints, in order to ameliorate this situation and expose novel features of the axial vertex: amongst them, Ward-like identities for elements in the transverse piece of the vertex, which complement and shed new light on identities determined previously for components in its longitudinal part. Such algebraic results are verified via solutions of the Bethe-Salpeter equation for the axial vertex obtained using two materially different kernels for the relevant Dyson-Schwinger equations. The solutions also provide insights that suggest a practical Ansatz for the axial-vector vertex. © 2014.


Kumar K.S.,University of Massachusetts Amherst | Mantry S.,Argonne National Laboratory | Marciano W.J.,Brookhaven National Laboratory | Souder P.A.,Syracuse University
Annual Review of Nuclear and Particle Science | Year: 2013

We review the status of precision measurements of weak neutral-current interactions, mediated by the Z boson, at Q ≪ MZ. They can be used to extract values for the weak mixing angle, sin²θW, a fundamental parameter of the electroweak sector of the Standard Model. Apart from providing a comprehensive test of the electroweak theory at the quantum-loop level, such measurements allow indirect access to New Physics effects at and beyond the TeV scale. After providing a theoretical introduction and a brief overview of the three most precise low-Q weak mixing angle determinations, we describe the ongoing experimental program and prospects for future, more sensitive studies. We also compare the sensitivities of planned and proposed measurements to physics beyond the Standard Model. © 2013 by Annual Reviews. All rights reserved.


Muduli P.K.,Gothenburg University | Heinonen O.G.,Argonne National Laboratory | Akerman J.,KTH Royal Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2011

We have measured the bias-voltage and field dependence of eigenmode frequencies in a magnetic tunnel junction with an MgO barrier. We show that both free-layer (FL) and reference-layer (RL) modes are excited, and that a crossover between these modes is observed by varying the external field and bias voltage. The bias-voltage dependences of the FL and RL modes are shown to be dramatically different: that of the FL modes is linear in bias voltage, whereas that of the RL mode is strongly quadratic. Using modeling and micromagnetic simulations, we show that the linear bias dependence of the FL frequencies is primarily due to a linear dependence of the perpendicular spin torque on bias voltage, whereas the quadratic dependence of the RL mode is dominated by the reduction of exchange bias due to Joule heating, and is not attributable to a quadratic dependence of the perpendicular spin torque on bias voltage. © 2011 American Physical Society.


Grant
Agency: Department of Defense | Branch: Missile Defense Agency | Program: STTR | Phase: Phase I | Award Amount: 99.48K | Year: 2014

Peregrine will rely on advanced composites with unique damping factors and our experience with kinematic mounts to develop and fabricate a state-of-the-art, ultra-lightweight optical bench. This will be a high-stiffness, lightweight optical support structure capable of flying on unmanned aerial vehicles (UAVs) at high altitudes. It will have mounts capable of yielding less than a few hundred nanoradians of jitter during operation. The design will be based upon materials and technology that have been proven in a laboratory environment. Approved for Public Release 14-MDA-7663 (8 January 14)


Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Training Grant | Award Amount: 4.34M | Year: 2014

This world-leading Centre for Doctoral Training in Bioenergy will focus on delivering the people to realise the potential of biomass to provide secure, affordable and sustainable low carbon energy in the UK and internationally. Sustainably-sourced bioenergy has the potential to make a major contribution to low carbon pathways in the UK and globally, contributing to the UK's goal of reducing its greenhouse gas emissions by 80% by 2050 and the international mitigation target of a maximum 2 degrees Celsius temperature rise. Bioenergy can make a significant contribution to all three energy sectors: electricity, heat and transport, but it faces challenges concerning technical performance and cost effectiveness, and in ensuring that it is sustainably produced and does not adversely impact food security or biodiversity. Bioenergy can also contribute to social and economic development in developing countries, by providing access to modern energy services and creating job opportunities both directly and in the broader economy. Many of the challenges associated with realising the potential of bioenergy have engineering and physical sciences at their core, but transcend traditional discipline boundaries within and beyond engineering. This requires an effective whole-systems research training response, and given the depth and breadth of the bioenergy challenge, only a CDT will deliver the necessary level of integration. Thus, the graduates from the CDT in Bioenergy will be equipped with the tools and skills to make intelligent, informed, and responsible choices about the implementation of bioenergy amid a growing range of social and economic concerns. There is projected to be a large absorptive capacity for trained individuals in bioenergy, far exceeding current supply. A recent report concerning UK job creation in bioenergy sectors concluded that there may be somewhere in the region of 35,000-50,000 UK jobs in bioenergy by 2020 (NNFCC report for DECC, 2012).
This concerned job creation in electricity production, heat, and anaerobic digestion (AD) applications of biomass. The majority of jobs are expected to be technical, primarily in the engineering and construction sectors during the building and operation of new bioenergy facilities. To help develop and realise the potential of this sector, the CDT will build strategically on our research foundation to deliver world-class doctoral training, based around four key areas: [1] Feedstocks, pre-processing and safety; [2] Conversion; [3] Utilisation, emissions and impact; [4] Sustainability and whole systems. Theme 1 will link feedstocks to conversion options, and Themes 2 and 3 include the core underpinning science and engineering research, together with innovation and application. Theme 4 will underpin this with a thorough understanding of the whole energy system, including sustainability, social, economic, public and political issues, drawing on world-leading research centres at Leeds. The unique training provision proposed, together with the multidisciplinary supervisory team, will ensure that students are equipped to become future leaders and responsible innovators in the bioenergy sector.


Shvydko Y.V.,Argonne National Laboratory | Stoupin S.,Argonne National Laboratory | Cunsolo A.,Argonne National Laboratory | Cunsolo A.,Brookhaven National Laboratory | And 2 more authors.
Nature Physics | Year: 2010

Owing to the depth to which hard X-rays penetrate into most materials, it is commonly accepted that the only way to realize hard-X-ray mirrors with near 100% reflectance is under conditions of total external reflection at grazing incidence to a surface. At angles away from grazing incidence, substantial reflectance of hard X-rays occurs only as a result of constructive interference of the waves scattered from periodically ordered atomic planes in crystals (Bragg diffraction). Theory predicts that even at normal incidence the reflection of X-rays from diamond under the Bragg condition should approach 100%substantially higher than from any other crystal. Here we demonstrate that commercially produced synthetic diamond crystals do indeed show an unprecedented reflecting power at normal incidence and millielectronvolt-narrow reflection bandwidths for hard X-rays. Bragg diffraction measurements of reflectivity and the energy bandwidth show remarkable agreement with theory. Such properties are valuable to the development of hard-X-ray optics, and could greatly assist the realization of fully coherent X-ray sources, such as X-ray free-electron laser oscillators. © 2010 Macmillan Publishers Limited. All rights reserved.
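The backscattering condition described above is just Bragg's law, nλ = 2d sinθ, evaluated at θ = 90°; a small sketch for cubic crystals (the diamond lattice constant a ≈ 3.567 Å is a standard value, but the specific reflections used in the study are not reproduced here, so the (111) example is purely illustrative):

```python
import math

HC_KEV_ANGSTROM = 12.39842  # h*c in keV*angstrom

def cubic_d_spacing(a, h, k, l):
    """Interplanar spacing d_hkl for a cubic lattice: a / sqrt(h^2+k^2+l^2)."""
    return a / math.sqrt(h * h + k * k + l * l)

def bragg_energy_kev(d_spacing, theta_deg=90.0, order=1):
    """Photon energy satisfying n*lambda = 2*d*sin(theta);
    theta_deg = 90 corresponds to exact backscattering (normal incidence)."""
    return order * HC_KEV_ANGSTROM / (
        2.0 * d_spacing * math.sin(math.radians(theta_deg)))

d111 = cubic_d_spacing(3.567, 1, 1, 1)   # diamond (111), ~2.059 angstrom
e111 = bragg_energy_kev(d111)            # ~3.01 keV at exact backscattering
```

Higher-index reflections (smaller d) push the backscattering energy into the hard-X-ray regime relevant to the experiments described.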


Researchers from Argonne National Laboratory, with colleagues in the US and Korea, have demonstrated a lithium-oxygen battery based on lithium superoxide (LiO2). The work, reported in the journal Nature, could open the way to very high-energy-density batteries based on LiO2, as well as to other possible uses of the compound, such as oxygen storage. Lithium-air batteries form lithium peroxide (Li2O2)—a solid precipitate that clogs the pores of the electrode and degrades cell performance—as part of the charge–discharge reaction process. This remains a core challenge that needs to be overcome for the viable commercialization of Li-air technology. However, a number of studies of Li–air batteries have found evidence of LiO2 being formed as one component of the discharge product along with lithium peroxide (Li2O2). Unlike lithium peroxide, lithium superoxide can easily dissociate into lithium and oxygen, leading to high efficiency and good cycle life. In addition, theoretical calculations have indicated that some forms of LiO2 may have a long lifetime. Here we show that crystalline LiO2 can be stabilized in a Li–O2 battery by using a suitable graphene-based cathode. Various characterization techniques reveal no evidence for the presence of Li2O2. A novel templating growth mechanism involving the use of iridium nanoparticles on the cathode surface may be responsible for the growth of crystalline LiO2. Our results demonstrate that the LiO2 formed in the Li–O2 battery is stable enough for the battery to be repeatedly charged and discharged with a very low charge potential (about 3.2 volts). The major advantage of a battery based on lithium superoxide, Argonne battery scientists Larry Curtiss and Khalil Amine explained, is that it allows, at least in theory, for the creation of a lithium-air battery that consists of a closed system.
Open systems require the consistent intake of extra oxygen from the environment, while closed systems do not—making them safer and more efficient. The researchers attributed the growth of the lithium superoxide to the spacing of iridium atoms in the electrode used in the experiment. The researchers confirmed the lack of lithium peroxide by using X-ray diffraction provided by the Advanced Photon Source, a DOE Office of Science User Facility located at Argonne. They also received allocations of time on the Mira supercomputer at the Argonne Leadership Computing Facility, which is also a DOE Office of Science User Facility. The researchers also performed some of the work at Argonne’s Center for Nanoscale Materials, which is also a DOE Office of Science User Facility. The work was funded by the DOE’s Office of Energy Efficiency and Renewable Energy and Office of Science.


News Article | February 21, 2017
Site: www.eurekalert.org

New research from The University of Texas at Austin reveals that the Earth's unique iron composition isn't linked to the formation of the planet's core, calling into question a prevailing theory about the events that shaped our planet during its earliest years. The research, published in Nature Communications on Feb. 20, opens the door for other competing theories about why the Earth, relative to other planets, has higher levels of heavy iron isotopes. Among them: light iron isotopes may have been vaporized into space by a large impact with another planet that formed the moon; the slow churning of the mantle as it makes and recycles the Earth's crust may preferentially incorporate heavy iron into rock; or, the composition of the raw material that formed the planet in its earliest days may have been enriched with heavy iron. An isotope is a variety of atom that has a different weight from other atoms of the same element because it has a different numbers of neutrons. "The Earth's core formation was probably the biggest event affecting Earth's history. Materials that make up the whole Earth were melted and differentiated," said Jung-Fu Lin, a professor at the UT Jackson School of Geosciences and one of the study's authors. "But in this study, we say that there must be other origins for Earth's iron isotope anomaly." Jin Liu, now a postdoctoral researcher at Stanford University, led the research while earning his Ph.D. at the Jackson School. Collaborators include scientists from The University of Chicago, Sorbonne Universities in France, Argonne National Laboratory, the Center for High Pressure Science and Advanced Technology Research in China, and the University of Illinois at Urbana-Champaign. Rock samples from other planetary bodies and objects--ranging from the moon, to Mars, to ancient meteorites called chondrites--all share about the same ratio of heavy to light iron isotopes. 
In comparison to these samples from space, rocks from Earth have about 0.01 percent more heavy iron isotopes than light isotopes. That might not sound like much, but Lin said it's significant enough to make the Earth's iron composition unique among known worlds. "This 0.01 percent anomaly is very significant compared with, say, chondrites," Lin said. "This significant difference thus represents a different source or origin of our planet." Lin said that one of the most popular theories to explain the Earth's iron signature is that the relatively large size of the planet (compared with other rocky bodies in the solar system) created high pressure and high temperature conditions during core formation that made different proportions of heavy and light iron isotopes accumulate in the core and mantle. This resulted in a larger share of heavy iron isotopes bonding with elements that make up the rocky mantle, while lighter iron isotopes bonded together and with other trace metals to form the Earth's core. But when the research team used a diamond anvil to subject small samples of metal alloys and silicate rocks to core formation pressures, they not only found that the iron isotopes stayed put, but that the bonds between iron and other elements got stronger. Instead of breaking and rebonding with common mantle or core elements, the initial bond configuration got sturdier. "Our high pressure studies find that iron isotopic fractionation between silicate mantle and metal core is minimal," said Liu, the lead author. Co-author Nicolas Dauphas, a professor at the University of Chicago, emphasized that analyzing the atomic scale measurements was a feat unto itself. "One has to use sophisticated mathematical techniques to make sense of the measurements," he said. "It took a dream team to pull this off." 
Helen Williams, a geology lecturer at the University of Cambridge, said it's difficult to know the physical conditions of Earth's core formation, but that the high pressures in the experiment make for a more realistic simulation. "This is a really elegant study using a highly novel approach that confirms older experimental results and extends them to much higher pressures appropriate to the likely conditions of core-mantle equilibrium on Earth," Williams said. Lin said it will take more research to uncover the reason for the Earth's unique iron signature, and that experiments that approximate early conditions on Earth will play a key role because rocks from the core are impossible to attain. The research was funded by the National Science Foundation, the Center for High Pressure Science and Technology Advanced Research, NASA, the French National Research Agency, and the Consortium for Materials Properties Research in Earth Sciences.


De Amerikaanse International Trade Commission (ITC) heeft vandaag een kennisgeving gedaan over haar uitspraak in de procedure voor inbreuk op octrooien die ingediend werd door BASF en de University of Chicago Argonne National Laboratory tegen Umicore. Umicore betwist nadrukkelijk de vaststelling dat twee Amerikaanse octrooien die betrekking hebben op bepaalde kathodematerialen voor batterijen, geschonden werden. Het uitspreken van een verwijderingsbesluit is onderhevig aan een automatische herziening door de Amerikaanse Trade Representative. Indien de uitspraak wordt gehandhaafd, zal Umicore beroep aantekenen voor het Amerikaanse Federal Circuit Court of Appeals. Kurt Vandeputte, Vice President van Umicore's Rechargeable Battery Materials business unit, verklaarde: "We zijn er van overtuigd dat de beslissing van de ITC zowel vanuit een wetenschappelijk als vanuit juridisch oogpunt onjuist is. In het belang van Umicore, onze klanten en het onderzoek en ontwikkeling inzake batterijen in de Verenigde Staten zullen we beroep aantekenen tegen deze beslissing."


News Article | January 19, 2016
Site: www.greencarcongress.com

« Zhejiang VIE Science and Technology leads US$10M Series B for wireless charging company Evatran | Main | European Parliament to vote on proposal to veto increased diesel emissions limits » One potential path for delivering higher energy density storage that current Li-ion batteries is to use a multivalent ion such as Ca2+ or Mg2+. (Earlier post.) Now, researchers at Argonne National Laboratory and the Joint Center for Energy Storage Research (JCESR) have demonstrated the first rechargeable calcium-ion battery. Utilizing manganese hexacyanoferrate (MFCN) as the cathode, the battery intercalates Ca reversibly in a dry nonaqueous electrolyte with tin as the anode with a capacity of about 80 mAh/g. This insertion at approximately 3.4 V vs Ca/Ca2+, reported in their paper in the ACS journal Chemical Materials,causes only the Mn oxidation state to change. This implies a theoretical capacity of 100 mAh/g (excluding the mass of Ca). Through modification of the Prussian blue type structure and improvements in the cell design, capacity could theoretically be doubled to about 200 mAh/g if both transition metals undergo redox, the authors said. The authors noted that the preferred anode would be Ca metal, but currently no electrolyte has been shown to reversibly plate and strip Ca metal with high Coulombic efficiency. There is an approximately 50% reduction in capacity after 35 cycles… This capacity fade could be due to a number of factors including the electrolyte being absorbed by the graphite foil, delamination of the tin anode due to volume expansion, lack of passivation on the tin causing it to dissolve into the electrolyte, and an increase in cell resistance due to surface film formation. 
It is clear that for a more practical battery new electrolytes are needed to improve passivation and surface layer formation, along with enabling a high Coulombic efficiency Ca metal anode; however, these results are the first proof of concept for a functioning reversible nonaqueous calcium battery system.


News Article | November 14, 2016
Site: globenewswire.com

In a release issued earlier today under the headline "Cray XC40 “Theta” Supercomputer Accepted and in Production at Argonne National Laboratory" by Cray Inc. (Nasdaq:CRAY), please note that the headline included the phrase "and in Production". The system has been accepted but is not yet in production. The corrected release follows: SEATTLE and SALT LAKE CITY, Nov. 14, 2016 (GLOBE NEWSWIRE) -- At the 2016 Supercomputing Conference in Salt Lake City, Utah, global supercomputer leader Cray Inc. (Nasdaq:CRAY) today announced it has achieved acceptance for “Theta,” the Cray® XC40™ supercomputer, as well as a Cray Sonexion® 3000 storage system, located at the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy Office of Science User Facility at Argonne National Laboratory. The Theta system marks Cray and Intel’s first acceptance for a large-scale supercomputer featuring the latest generation of Intel® Xeon Phi™ processors formerly code named “Knights Landing.” Theta has a peak performance of more than eight petaflops, and is currently running a broad range of scientific applications through the ALCF’s Theta Early Science Program. The Cray XC40 system was delivered to Argonne National Laboratory through a partnership with Intel as part of the DOE’s initiative to build state-of-the art supercomputers through the Collaboration of Oak Ridge, Argonne, and Lawrence Livermore National Laboratories (CORAL) program. “The Argonne Leadership Computing Facility is committed to providing advanced supercomputing resources, like the Theta system, that help our researchers solve some of the world’s largest and most complex problems,” said Rick Stevens, Associate Laboratory Director for Argonne National Laboratory. “We are pleased that Theta is now accelerating our scientific research.” Cray, in partnership with Intel, will also deliver a flagship system called “Aurora” as part of the CORAL program. 
Aurora will be based on the next-generation Cray supercomputer code named “Shasta,” the successor to the current, industry-leading Cray XC™ line of supercomputers. Aurora is expected to have a peak performance of 180 petaflops and is scheduled for delivery in 2018. “The acceptance of Theta is a significant milestone for Cray and Intel as we continue to work towards the installation and acceptance of several Cray supercomputers with the new Intel Xeon Phi processors,” said Peter Ungaro, president and CEO of Cray. “Our collaboration with Intel and Argonne is focused on producing powerful supercomputers for the DOE user community. The Theta system is exciting step forward in our partnership, and one that I know many other supercomputer centers around the world are watching as they evaluate this new processor technology for their next mission-critical system.” Cray is a leader in deploying supercomputers with the Intel Xeon Phi processors, and several top supercomputing centers have signed large contracts for Cray XC™ supercomputers with Intel’s new processor. These systems include the “Cori” system at the U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing Center (NERSC); the “Trinity” system at the National Nuclear Security Administration; and the Cray XC40 supercomputer at the Academic Center for Computing and Media Studies at Kyoto University in Kyoto, Japan. “The Theta supercomputer at Argonne is an important step in the CORAL project and a milestone on the path towards the Aurora system,” said Dr. Rajeeb Hazra, Vice President of the Data Center group and General Manager of the Enterprise and Government group at Intel. 
“The combination of Intel’s leading technology, latest Intel Xeon Phi processor technology, codenamed Knights Landing, Intel Scalable System Framework, and Cray’s system capabilities is helping to drive scientific discoveries and engineering breakthroughs.” For more information on the Cray XC supercomputers and Cray Sonexion storage systems, please visit the Cray website at www.cray.com. About Cray Inc. Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information. Safe Harbor Statement This press release contains forward-looking statements within the meaning of Section 21E of the Securities Exchange Act of 1934 and Section 27A of the Securities Act of 1933, including, but not limited to, statements related to Cray's ability to deliver the systems required by the CORAL program when required and that meet the CORAL program's needs and the planned availability, performance and features of the Cray Shasta system. These statements involve current expectations, forecasts of future events and other statements that are not historical facts. 
Inaccurate assumptions and known and unknown risks and uncertainties can affect the accuracy of forward-looking statements and cause actual results to differ materially from those anticipated by these forward-looking statements. Factors that could affect actual future events or results include, but are not limited to, the risk that orders for Cray Shasta-based systems through the CORAL program are smaller than expected or not placed at all, the risk that the systems required by the CORAL program are not delivered in a timely fashion or do not perform as expected, the risk that Cray is not able to successfully complete its planned product development efforts or to ship Shasta systems within the planned timeframe or at all, the risk that the Shasta system will not have the features, performance or components currently planned, the risk that processors and interconnect technology planned for the Cray Shasta system are not available when expected or with the performance expected and such other risks as identified in the Company’s quarterly report on Form 10-Q for the quarter ended September 30, 2016, and from time to time in other reports filed by Cray with the U.S. Securities and Exchange Commission. You should not rely unduly on these forward-looking statements, which apply only as of the date of this release. Cray undertakes no duty to publicly announce or report revisions to these statements as new information becomes available that may change the Company’s expectations. Cray, and the stylized CRAY mark and SONEXION are registered trademarks of Cray Inc. in the United States and other countries, and XC40 and XC are trademarks of Cray Inc. Other product and service names mentioned herein are the trademarks of their respective owners. Intel and Xeon Phi are trademarks or registered trademarks of Intel Corporation in the United States and other countries.


News Article | November 14, 2016
Site: globenewswire.com

SEATTLE and SALT LAKE CITY, Nov. 14, 2016 (GLOBE NEWSWIRE) -- At the 2016 Supercomputing Conference in Salt Lake City, Utah, global supercomputer leader Cray Inc. (Nasdaq:CRAY) today announced it has achieved acceptance for “Theta,” the Cray® XC40™ supercomputer, as well as a Cray Sonexion® 3000 storage system, located at the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy Office of Science User Facility at Argonne National Laboratory. The Theta system marks Cray and Intel’s first acceptance for a large-scale supercomputer featuring the latest generation of Intel® Xeon Phi™ processors formerly code named “Knights Landing.” Theta has a peak performance of more than eight petaflops, and is currently running a broad range of scientific applications through the ALCF’s Theta Early Science Program. The Cray XC40 system was delivered to Argonne National Laboratory through a partnership with Intel as part of the DOE’s initiative to build state-of-the art supercomputers through the Collaboration of Oak Ridge, Argonne, and Lawrence Livermore National Laboratories (CORAL) program. “The Argonne Leadership Computing Facility is committed to providing advanced supercomputing resources, like the Theta system, that help our researchers solve some of the world’s largest and most complex problems,” said Rick Stevens, Associate Laboratory Director for Argonne National Laboratory. “We are pleased that Theta is now accelerating our scientific research.” Cray, in partnership with Intel, will also deliver a flagship system called “Aurora” as part of the CORAL program. Aurora will be based on the next-generation Cray supercomputer code named “Shasta,” the successor to the current, industry-leading Cray XC™ line of supercomputers. Aurora is expected to have a peak performance of 180 petaflops and is scheduled for delivery in 2018. 
“The acceptance of Theta is a significant milestone for Cray and Intel as we continue to work towards the installation and acceptance of several Cray supercomputers with the new Intel Xeon Phi processors,” said Peter Ungaro, president and CEO of Cray. “Our collaboration with Intel and Argonne is focused on producing powerful supercomputers for the DOE user community. The Theta system is exciting step forward in our partnership, and one that I know many other supercomputer centers around the world are watching as they evaluate this new processor technology for their next mission-critical system.” Cray is a leader in deploying supercomputers with the Intel Xeon Phi processors, and several top supercomputing centers have signed large contracts for Cray XC™ supercomputers with Intel’s new processor. These systems include the “Cori” system at the U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing Center (NERSC); the “Trinity” system at the National Nuclear Security Administration; and the Cray XC40 supercomputer at the Academic Center for Computing and Media Studies at Kyoto University in Kyoto, Japan. “The Theta supercomputer at Argonne is an important step in the CORAL project and a milestone on the path towards the Aurora system,” said Dr. Rajeeb Hazra, Vice President of the Data Center group and General Manager of the Enterprise and Government group at Intel. “The combination of Intel’s leading technology, latest Intel Xeon Phi processor technology, codenamed Knights Landing, Intel Scalable System Framework, and Cray’s system capabilities is helping to drive scientific discoveries and engineering breakthroughs.” For more information on the Cray XC supercomputers and Cray Sonexion storage systems, please visit the Cray website at www.cray.com. About Cray Inc. Global supercomputing leader Cray Inc. 
(Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information. Safe Harbor Statement This press release contains forward-looking statements within the meaning of Section 21E of the Securities Exchange Act of 1934 and Section 27A of the Securities Act of 1933, including, but not limited to, statements related to Cray's ability to deliver the systems required by the CORAL program when required and that meet the CORAL program's needs and the planned availability, performance and features of the Cray Shasta system. These statements involve current expectations, forecasts of future events and other statements that are not historical facts. Inaccurate assumptions and known and unknown risks and uncertainties can affect the accuracy of forward-looking statements and cause actual results to differ materially from those anticipated by these forward-looking statements. 
Factors that could affect actual future events or results include, but are not limited to, the risk that orders for Cray Shasta-based systems through the CORAL program are smaller than expected or not placed at all, the risk that the systems required by the CORAL program are not delivered in a timely fashion or do not perform as expected, the risk that Cray is not able to successfully complete its planned product development efforts or to ship Shasta systems within the planned timeframe or at all, the risk that the Shasta system will not have the features, performance or components currently planned, the risk that processors and interconnect technology planned for the Cray Shasta system are not available when expected or with the performance expected and such other risks as identified in the Company’s quarterly report on Form 10-Q for the quarter ended September 30, 2016, and from time to time in other reports filed by Cray with the U.S. Securities and Exchange Commission. You should not rely unduly on these forward-looking statements, which apply only as of the date of this release. Cray undertakes no duty to publicly announce or report revisions to these statements as new information becomes available that may change the Company’s expectations. Cray, and the stylized CRAY mark and SONEXION are registered trademarks of Cray Inc. in the United States and other countries, and XC40 and XC are trademarks of Cray Inc. Other product and service names mentioned herein are the trademarks of their respective owners. Intel and Xeon Phi are trademarks or registered trademarks of Intel Corporation in the United States and other countries.


News Article | November 22, 2016
Site: www.eurekalert.org

Three years ago the White House launched the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative to accelerate the development and application of novel technologies that will give us a better understanding about how brains work. Since then, dozens of technology firms, academic institutions, scientists and other have been developing new tools to give researchers unprecedented opportunities to explore how the brain processes, utilizes, stores and retrieves information. But without a coherent strategy to analyze, manage and understand the data generated by these new technologies, advancements in the field will be limited. This is precisely why Lawrence Berkeley National Laboratory (Berkeley Lab) Computational Neuroscientist Kristofer Bouchard assembled an international team of interdisciplinary researchers--including mathematicians, computer scientists, physicists and experimental and computational neuroscientists--to develop a plan for managing, analyzing and sharing neuroscience data. Their recommendations were published in a recent issue of Neuron. "The U.S. BRAIN Initiative is just one of many national and private neuroscience initiatives globally that are working toward accelerating our understanding of brains," says Bouchard. "Many of these efforts have given a lot of attention to the technological challenges of measuring and manipulating neural activity, while significantly less attention has been paid to the computing challenges associated with the vast amounts of data that these technologies are generating." To maximize the return on investments in global neuroscience initiatives, Bouchard and his colleagues argue that the international neuroscience community should have an integrated strategy for data management and analysis. This coordination would facilitate the reproducibility of workflows, which then allows researchers to build on each other's work. 
For a first step, the authors recommend that researchers from all facets of neuroscience agree on standard descriptions and file formats for products derived from data analysis and simulations. After that, the researchers should work with computer scientists to develop hardware and software ecosystems for archiving and sharing data. The authors suggest an ecosystem similar to the one used by the physics community to share data collected by experiments like the Large Hadron Collider (LHC). In this case, each research group has their own local repository of physiological or simulation data that they've collected or generated. But eventually, all of this information should also be included in "meta-repositories" that are accessible to the greater neuroscience community. Files in the "meta-repositories" should be in a common format, and the repositories would ideally be hosted by an open-science supercomputing facility like the Department of Energy's (DOE's) National Energy Research Scientific Computing Center (NERSC), located at Berkeley Lab. Because novel technologies are producing unprecedented amounts of data, Bouchard and his colleagues also propose that neuroscientists collaborate with mathematicians to develop new approaches for data analysis and modify existing analysis tools to run on supercomputers. To maximize these collaborations, the analysis tools should be open-source and should integrate with brain-scale simulations, they emphasize. "These are the early days for neuroscience and big data, but we can see the challenges coming. This is not the first research community to face big data challenges; climate and high energy physics have been there and overcome many of the same issues," says Prabhat, who leads NERSC's Data & Analytics Services Group. Berkeley Lab is well positioned to help neuroscientists address these challenges because of its long tradition of interdisciplinary science, Prabhat adds. 
DOE facilities like NERSC and the Energy Sciences Network (ESnet) have worked closely with Lab computer scientists to help a range of science communities--from astronomy to battery research--collaborate and manage and archive their data. Berkeley Lab mathematicians have also helped researchers in various scientific disciplines develop new tools and methods for data analysis on supercomputers. "Harnessing the power of HPC resources will require neuroscientists to work closely with computer scientists and will take time, so we recommend rapid and sustained investment in this endeavor now," says Bouchard. "The insights generated from this effort will have high-payoff outcomes. They will support neuroscience efforts to reveal both the universal design features of a species' brain and help us understand what makes each individual unique." In addition to Bouchard and Prabhat, other authors on the paper include: James Aimone (Sandia National Laboratories); Miyoung Chun (Kavli Insitute for Fundamental Neuroscience); Thomas Dean (Google Research); Michael Denker (Jülich Research Center); Markus Diesmann (Jülich Research Center/ Aachen University); Loren Frank (Kavli Insitute for Fundamental Neuroscience/ Howard Hughs Medical Institute/ UC San Francisco); Narayanan Kasthuri (University of Chicago/Argonne National Laboratory); Christof Koch (Allen Institute for Brain Science); Friedrich Sommer (UC Berkeley); Horst Simon, David Donofrio and Oliver Rübel (Berkeley Lab). Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit http://www. . 
DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | December 7, 2016
Site: www.eurekalert.org

Machine learning, a field focused on training computers to recognize patterns in data and make new predictions, is helping doctors more accurately diagnose diseases and stock analysts forecast the rise and fall of financial markets. And now materials scientists have pioneered another important application for machine learning -- helping to accelerate the discovery and development of new materials. Researchers at the Center for Nanoscale Materials and the Advanced Photon Source, both U.S. Department of Energy (DOE) Office of Science User Facilities at DOE's Argonne National Laboratory, announced the use of machine learning tools to accurately predict the physical, chemical and mechanical properties of nanomaterials. In a study published in The Journal of Physical Chemistry Letters, a team of researchers led by Argonne computational scientist Subramanian Sankaranarayanan described their use of machine learning tools to create the first atomic-level model that accurately predicts the thermal properties of stanene, a two-dimensional (2-D) material made up of a one-atom-thick sheet of tin. The study reveals for the first time an approach to materials modeling that applies machine learning and is more accurate at predicting material properties compared to past models. "Predictive modeling is particularly important for newly discovered materials, to learn what they're good for, how they respond to different stimuli and also how to effectively grow the material for commercial applications -- all before you invest in costly manufacturing," said Argonne postdoctoral researcher Mathew Cherukara, one of the lead authors of the study. Traditionally, atomic-scale materials models have taken years to develop, and researchers have had to rely largely on their own intuition to identify the parameters on which a model would be built. 
But by using a machine learning approach, Cherukara and fellow researchers were able to reduce the need for human input while shortening the time to craft an accurate model down to a few months. "We input data obtained from experimental or expensive theory-based calculations, and then ask the machine, 'Can you give me a model that describes all of these properties?'" said Badri Narayanan, an Argonne postdoctoral researcher and another lead author of the study. "We can also ask questions like, 'Can we optimize the structure, induce defects or tailor the material to get specific desired properties?'" Unlike most past models, the machine learning model can capture bond formation and breaking events accurately; this not only yields more reliable predictions of material properties (e.g. thermal conductivity), but also enables researchers to capture chemical reactions accurately and better understand how specific materials can be synthesized. Another advantage of building models using machine learning is the process is not material-dependent, meaning researchers can look at many different classes of materials and apply machine learning to various other elements and their combinations. The computational model Cherukara, Narayanan and their colleagues have developed describes stanene, a structure made of tin that has caught the eye of researchers in recent years. Interest in stanene mirrors a growing interest in 2-D materials evolving from the 2004 discovery of graphene, a single-layer arrangement of carbon with attractive electronic, thermal and mechanical properties. While stanene remains far from commercialization, researchers find it promising for applications in thermal management (the regulation of heat) across some nanoscale devices. The study, "Ab Initio-Based Bond Order Potential to Investigate Low Thermal Conductivity of Stanene Nanostructures," appeared in the The Journal of Physics Chemistry Letters. 
This work was supported by Argonne's Laboratory Directed Research and Development (LDRD) program and tapped the high-performance computing clusters of Argonne's Laboratory Computing Resource Center. It also used the Extreme Science and Engineering Discovery Environment, a project supported by the National Science Foundation, as well as resources at the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility. Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science. The U.S. Department of Energy's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.


News Article | September 12, 2016
Site: www.scientificcomputing.com

The U.S. Department of Energy's (DOE) Exascale Computing Project (ECP) today announced its first round of funding with the selection of 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations. Exascale refers to high-performance computing systems capable of at least a billion billion calculations per second, or a factor of 50 to 100 times faster than the nation's most powerful supercomputers in use today. The 15 awards being announced total $39.8 million, targeting advanced modeling and simulation solutions to specific challenges supporting key DOE missions in science, clean energy and national security, as well as collaborations such as the Precision Medicine Initiative with the National Institutes of Health's National Cancer Institute. Of the proposals announced that are receiving full funding, two are being led by principal investigators at the DOE's Argonne National Laboratory: 1. Computing the Sky at Extreme Scales equips cosmologists with the ability to design foundational simulations to create "virtual universes" on demand at the extreme fidelities demanded by future multi-wavelength sky surveys. The new discoveries that will emerge from the combination of sky surveys and advanced simulation provided by the ECP will shed more light on three key ingredients of our universe: dark energy, dark matter and inflation. All three of these concepts reach beyond the known boundaries of the Standard Model of particle physics. 2. 
Exascale Deep Learning and Simulation Enabled Precision Medicine for Cancer focuses on building a scalable deep neural network code called the CANcer Distributed Learning Environment (CANDLE) that addresses three top challenges of the National Cancer Institute: understanding the molecular basis of key protein interactions, developing predictive models for drug response and automating the analysis and extraction of information from millions of cancer patient records to determine optimal cancer treatment strategies. Rick Stevens, Principal Investigator, Argonne National Laboratory, with Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Oak Ridge National Laboratory and the National Institutes of Health's National Cancer Institute (NIH/NCI). Additionally, a third project led by Argonne will be receiving seed funding: Multiscale Coupled Urban Systems will create an integrated modeling framework comprising data curation, analytics, modeling and simulation components that will equip city designers, planners and managers to scientifically develop and evaluate solutions to issues that affect cities now and in the future. The framework will focus first on integrating urban atmosphere and infrastructure heat exchange and air flow; building energy demand at district or city-scale, generation and use; urban dynamics and socioeconomic models; population mobility and transportation; and hooks to expand to include energy systems (biofuels, electricity and natural gas) and water resources. The application efforts will help guide DOE's development of a U.S. exascale ecosystem as part of President Obama's National Strategic Computing Initiative. DOE, the U.S. Department of Defense and the National Science Foundation have been designated as lead agencies, and ECP is the primary DOE contribution to the initiative. The ECP's multi-year mission is to maximize the benefits of high-performance computing for U.S. 
economic competitiveness, national security and scientific discovery. In addition to applications, the DOE project addresses hardware, software, platforms and workforce development needs critical to the effective development and deployment of future exascale systems. Leadership of the Exascale Computing Project comes from six DOE national laboratories: the Office of Science's Oak Ridge, Argonne and Lawrence Berkeley national labs and NNSA's Lawrence Livermore, Los Alamos and Sandia national labs.


News Article | January 20, 2016
Site: www.scientificcomputing.com

Using Mira, physicists from Princeton Plasma Physics Laboratory uncovered a new understanding about electron behavior in edge plasma. Based on this discovery, improvements were made to a well-known analytical formula that could enhance predictions of, and ultimately increase, fusion power efficiency. Today’s nuclear power plants run on nuclear fission reactions — or the breaking down of heavy radioactive elements like uranium, which are strictly regulated for safety and ultra-long-term waste storage. But there may be a safer, greener and virtually inexhaustible nuclear power on the horizon. Fueled by isotopes of hydrogen collected from seawater, fusion power could be a global solution to future energy demand. Progress in fusion research over the last few decades has led to the design of the first plant-scale, prototype fusion research reactor, ITER, currently under construction in France. With ITER slated to begin experimental tests around 2025, it is a critical time for the international teams of scientists and engineers who are planning how the reactor will perform at maximum efficiency. To develop the best predictive tools for ITER (and, by extension, other experimental fusion reactors), research teams are using high-performance computing (HPC) to resolve the behaviors of fusion plasma across the many spatial scales that impact reactor efficiency and plasma stability. “Right now, it is only through HPC that researchers can simulate plasma kinetics for large experiments like ITER with enough simulated electrons to resolve important physics,” said Tim Williams, Argonne Leadership Computing Facility (ALCF) computational scientist. The ALCF is a U.S. Department of Energy (DOE) Office of Science User Facility. Using Mira, the ALCF’s 10-petaflops IBM Blue Gene/Q supercomputer, principal investigator C.S. Chang, head of the U.S.
SciDAC-3 Partnership for Edge Physics Simulation headquartered at Princeton Plasma Physics Laboratory (PPPL), and co-investigator Robert Hager of PPPL recently gained new insight into the properties of a self-generating electrical current that boosts power in a tokamak fusion reactor. Their research at the ALCF led to a new understanding and a more accurate formula for the self-generating electrical current. Most magnetic fusion reactors are donut-shaped vessels, known as tokamaks, which are capable of confining plasma that reaches tens of millions of degrees. In ITER’s case, the plasma will heat up to 150 million degrees Celsius (hotter than the Sun’s core) under the force of an extremely strong magnetic field produced by powerful superconducting magnets coiled around the tokamak. The magnetic field works together with a strong electrical current that runs through the plasma to boost power. Some of this current, called “bootstrap current,” is self-generated by the plasma. “We would like to rely on this self-generating current as much as possible, because it doesn’t cost us anything,” Chang said. The bootstrap current enables a more efficient reactor because it helps create a barrier known as the “steep edge plasma pedestal” between the highly energized core plasma and the cold edge plasma. Maintaining a stable barrier at the edge pedestal is a critical challenge in experimental fusion. But despite the importance of the edge plasma to efficiency and stability, many previous studies of bootstrap current in tokamak reactors have focused on the core plasma, and the analytical formulas for making predictions of reactor efficiency have largely been derived from these core plasma studies. Specifically, more accurate predictions of bootstrap current density in the steep edge plasma can reduce potential instabilities. 
A poorly controlled edge bootstrap current can generate too much power and lead to damaging collapse, lashing powerful plasma energy at the interior walls of the tokamak. “The pedestal cannot be too steep,” Chang said. “If, for instance, you try to make a sand pile at the beach but you make it too steep, it will easily collapse. That is what happens with plasma also.” The team used the latest iteration, XGCa, of the XGC gyrokinetic code, which has been a leadership-class computing code for the past decade and was recently made more portable to adapt to future HPC architectures. XGC is the only code capable of simulating electron particle behavior from first principles in realistic tokamak edge geometry, making it a computationally demanding code that requires petaflop systems like Mira. When researchers initially ran into problems running their large-scale simulations, Williams and ALCF systems specialist Adam Scovel identified where the code was hanging in a BlueGene-specific library function. “They found a way to work around the problem that allowed us to perform the simulation. Otherwise, it would have been difficult,” Chang said. “We could not have done this study without a system of Mira’s size.” Running on more than 260,000 Mira processing cores with excellent scalability, the latest XGCa plasma edge simulations revealed electron behaviors related to edge bootstrap current that are not accurately predicted for present-day tokamak geometry by the well-known Sauter formula, which is used to calculate values for the bootstrap current. “Mira allows running simulations of larger tokamaks at ITER’s scale, and modeling at much higher particle counts more accurately represents the electron populations in the plasma,” Williams said. Two types of electrons characterize bootstrap current: passing and trapped.
Based on tokamak geometries with large aspect ratios, in which the tokamak torus (the donut-shaped vessel) is skinny with a wide center hole, physicists have understood for decades that passing electrons are primarily responsible for carrying the bootstrap current while the trapped particles generate it. However, with more detailed simulations of trapped and passing electrons of the edge plasma, Chang and Hager observed that trapped electrons can take on the responsibility of carrying bootstrap current at the edge where there are fewer passing electrons. “We found that, at the steep pedestal edge where there aren’t too many passing electrons, the trapped electrons carry most of the current,” Chang said. The ratio of passing to trapped electrons inverted from 80 percent passing and 20 percent trapped in the torus core to 20 percent passing and 80 percent trapped at the edge. Simulations showed that this property of bootstrap current applies to both conventional and low (or tight) aspect ratios that make up most of today’s tokamak geometries, including ITER, which has a conventional aspect ratio with a fatter torus and smaller center hole than the large-aspect-ratio limit. Compared against the XGCa simulation results, the team found the Sauter formula to have a standard deviation of about 19 percent; the modified formula significantly reduces that uncertainty, to a standard deviation of 4 percent. With a more precise formula like this, fusion researchers can better project the fusion power efficiency of ITER and, in turn, help refine parameters for the magnetic field, temperature and other controls during experimental operations. “ITER’s goal is to see 10 times energy output versus energy input or more, and the faster we can get there, the sooner we will see commercial fusion power,” Chang said. This research is supported by DOE’s Office of Science.
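The accuracy comparison described here (roughly 19 percent scatter for the Sauter formula versus 4 percent for the modified one) amounts to computing the standard deviation of the relative error between a formula's predictions and the simulation values. A minimal sketch with synthetic numbers, not the actual XGCa data:

```python
import numpy as np

# Synthetic stand-ins: "simulated" bootstrap current values, plus two
# formulas whose predictions scatter around them by ~19% and ~4%.
rng = np.random.default_rng(1)
j_sim = rng.uniform(0.5, 1.5, 200)               # reference (simulation)
j_old = j_sim * (1 + rng.normal(0, 0.19, 200))   # original formula
j_new = j_sim * (1 + rng.normal(0, 0.04, 200))   # improved formula

def rel_std(pred, ref):
    """Standard deviation of the relative error, in percent."""
    return 100 * np.std((pred - ref) / ref)

print(round(rel_std(j_old, j_sim)), round(rel_std(j_new, j_sim)))
```

The smaller the relative standard deviation, the more tightly a formula tracks the simulation, which is why the modified formula gives sharper projections of reactor efficiency.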
Computing time at the ALCF was allocated through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.


News Article | February 3, 2016
Site: www.scientificcomputing.com

This week, the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, turns one decade old. ALCF is home to Mira, the world's fifth-fastest supercomputer, along with teams of experts that help researchers from all over the world perform complex simulations and calculations in almost every branch of science. To celebrate its 10th anniversary, we're highlighting 10 accomplishments since the facility opened its doors.


The U.S. International Trade Commission (ITC) today issued notice of its decision relating to the patent infringement complaint brought against Umicore by BASF and University of Chicago Argonne National Laboratory. Umicore strongly disputes the findings of infringement of two U.S. patents relating to certain battery cathode materials. The decision to issue a limited exclusion order is subject to automatic review by the United States Trade Representative. If the decision is upheld, Umicore intends to appeal to the United States Court of Appeals for the Federal Circuit. Kurt Vandeputte, Vice President of Umicore's Rechargeable Battery Materials business unit, said: "We believe that the ITC's decision is wrong on the science and the law. We intend to appeal the decision in the interest of Umicore, its customers and U.S. battery innovation and research."


News Article | September 19, 2016
Site: www.cemag.us

Graphene is the stuff of the future. For years, researchers and technologists have been predicting the utility of the one-atom-thick sheets of pure carbon in everything from advanced touch screens and semiconductors to long-lasting batteries and next-generation solar cells. But graphene’s unique intrinsic properties — supreme electrical and thermal conductivities and remarkable electron mobility, to name just a few — can only be fully realized if it is grown free from defects that disrupt the honeycomb pattern of the bound carbon atoms. A team led by Materials Scientist Anirudha Sumant with the U.S. Department of Energy’s (DOE) Argonne National Laboratory’s Center for Nanoscale Materials (CNM) and Materials Science Division, along with collaborators at the University of California-Riverside, has developed a method to grow graphene that contains relatively few impurities and costs less to make, in a shorter time and at lower temperatures compared to the processes widely used to make graphene today. Theoretical work led by Argonne nanoscientist Subramanian Sankaranarayanan at the CNM helped researchers understand the molecular-level processes underlying the graphene growth. The new technology taps ultrananocrystalline diamond (UNCD), a synthetic type of diamond that Argonne researchers have pioneered through years of research. UNCD serves as a physical substrate, or surface on which the graphene grows, and the source for the carbon atoms that make up a rapidly produced graphene sheet. “When I first looked at the [scanning electron micrograph] and saw this nice uniform, very complete layer, it was amazing,” says Diana Berman, the first author of the study and former postdoctoral research associate who worked with Sumant and is now an Assistant Professor at the University of North Texas. 
“I’d been dealing with all these different techniques of growing graphene, and you never see such a uniform, smooth surface.” Current graphene fabrication protocols introduce impurities during the etching process itself, which involves adding acid and extra polymers, and when they are transferred to a different substrate for use in electronics. “The impurities introduced during this etching and the transferring step negatively affect the electronic properties of the graphene,” Sumant says. “So you do not get the intrinsic properties of the graphene when you actually do this transfer.” The team found that the single-layer, single-domain graphene can be grown over micron-size holes laterally, making them completely free-standing (that is, detached from the underlying substrate). This makes it possible to exploit the intrinsic properties of graphene by fabricating devices directly over free-standing graphene. The new process is also much more cost-effective than conventional methods based on using silicon carbide as a substrate. Sumant says that the 3- to 4-inch silicon carbide wafers used in these types of growth methods cost about $1,200, while UNCD films on silicon wafers cost less than $500 to make. The diamond method also takes less than a minute to grow a sheet of graphene, where the conventional method takes on the order of hours. The high quality of graphene was confirmed by the UC Riverside co-authors Zhong Yan and Alexander Balandin by fabricating top-gate field-effect transistors from this material and measuring its electron mobility and charge carrier concentration. “It is well known that certain metals, such as nickel and iron, dissolve diamond at elevated temperatures, and the same process has been used for many years to polish diamond,” says Sumant. 
He and his team used this property to employ nickel in converting the top layer of diamond into amorphous carbon, but it was not clear how these freed carbon atoms converted instantly into high-quality graphene. After Sumant’s and Berman’s initial breakthrough of growing graphene directly on UNCD, Sankaranarayanan and his postdocs Badri Narayanan and Sanket Deshmukh, computational materials scientists at the CNM, used resources at the Argonne Leadership Computing Facility (ALCF) to help the team better understand the mechanism underlying this growth process through reactive molecular dynamics simulations. Computer simulations developed by Narayanan, Deshmukh and Sankaranarayanan showed that certain crystallographic orientations of nickel, notably nickel(111), highly favor nucleation and subsequent rapid growth of graphene; this was then confirmed experimentally. These large-scale simulations also showed how graphene forms. The nickel atoms diffuse into the diamond and destroy its crystalline order, while carbon atoms from this amorphous solid move to the nickel surface and rapidly form honeycomb-like structures, resulting in mostly defect-free graphene. The nickel then percolated through the fine crystalline grains of the UNCD, sinking out of the way and removing the need for acid to dissolve away excess metal atoms from the top surface. “It is like meeting a good Samaritan at an unknown place who helps you, does his job and leaves quietly without a trace,” says Sumant. “The proven predictive power of our simulations places us in a position of advantage to enable rapid discovery of new catalytic alloys that mediate growth of high-quality graphene on dielectrics and move away on their own when the growth is completed,” adds Narayanan.
In addition to the utility in making minimally defective, application-ready graphene for things like low-frequency vibration sensors, radio frequency transistors and better electrodes for water purification, Berman and Sumant say that the Argonne team has already secured three patents arising from their new graphene growth method. The researchers have already struck a collaboration with Swedish Institute of Space Physics involving the European Space Agency for their Jupiter Icy Moons Explorer (JUICE) program to develop graphene-coated probes that may help exploratory vehicles sense the properties of plasma surrounding the moons of Jupiter. Closer to home, the team has also crafted diamond and graphene needles for researchers at North Carolina University to use in biosensing applications. The Argonne researchers are now fine-tuning the process — tweaking the temperature used to catalyze the reaction and adjusting the thickness of the diamond substrate and the composition of the metal film that facilitates the graphene growth — to both optimize the reaction and to better study the physics at the graphene-diamond interface. “We’re trying to tune this more carefully to have a better understanding of which conditions lead to what quality of graphene we’re seeing,” Berman says. Other Argonne authors involved in the study were Alexander Zinovev and Daniel Rosenmann. The paper, “Metal-induced rapid transformation of diamond into single and multilayer graphene on wafer scale,” is published in Nature Communications. The study used resources of the CNM and the ALCF as well as the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory, all DOE Office of Science User Facilities. Additional support was provided by the U.S. Department of Energy’s Office of Science.


News Article | February 5, 2016
Site: www.greencarcongress.com

An international group of physicists led by the University of Arkansas has created an artificial graphene-like structure with transition metal atoms in place of carbon atoms. The U of A-led group published its findings in Physical Review Letters. Graphene, discovered in 2004, is a one-atom-thick sheet of graphite. Graphene transistors are predicted to be substantially faster and more heat tolerant than today’s silicon transistors and may result in more efficient computers and the next generation of flexible electronics. Its discoverers were awarded the Nobel Prize in physics in 2010.

This discovery gives us the ability to create graphene-like structures for many other elements. —Srimanta Middey, a postdoctoral research associate at the U of A who led the study

The research group also included U of A postdoctoral research associates Michael Kareev and Yanwei Cao, doctoral student Xiaoran Liu and recent doctoral graduate Derek Meyers, now at Brookhaven National Laboratory. Additional members of the group were David Doennig of the University of Munich, Rossitza Pentcheva of the University of Duisburg-Essen in Germany, Zhenzhong Yang, Jinan Shi and Lin Gu of the Chinese Academy of Sciences; and John W. Freeland and Phillip Ryan of the Advanced Photon Source at Argonne National Laboratory near Chicago. The research was also partially funded by the Chinese Academy of Sciences.


The US Department of Energy (DOE) selected 33 small businesses to work directly with DOE national labs to accelerate the commercialization of new clean energy technologies. The department’s Office of Energy Efficiency and Renewable Energy is investing nearly $6.7 million under Round 1 of the new Small Business Vouchers (SBV) pilot. For Round 1, the small businesses and laboratories will collaborate on advancing a number of clean energy technologies, including water, wind, bioenergy, solar, buildings, vehicles, fuel cells, geothermal technologies, and advanced manufacturing. The selected small businesses will work with scientists at nine department laboratories: Oak Ridge National Laboratory (ORNL); National Renewable Energy Laboratory (NREL); Lawrence Berkeley National Laboratory (LBNL); Sandia National Laboratories (SNL); Pacific Northwest National Laboratory (PNNL); Idaho National Laboratory (INL); Los Alamos National Laboratory (LANL); Argonne National Laboratory (ANL); and Lawrence Livermore National Laboratory (LLNL). SBV is a collaborative, national effort that provides $20 million for US companies to help improve industry awareness of national laboratory capabilities and provide small and mid-size businesses access to the resources available within the national laboratory system. Vouchers range from $50,000-300,000. The companies were competitively chosen from a pool of hundreds of applications. Almost 40% of the businesses represent new DOE-industry partnerships and relationships with the national labs. Building on the tremendous response of Round 1, the department also has begun accepting applications for Round 2 of the SBV pilot. A total of $13 million worth of funding remains; over the course of the year, up to 100 vouchers will be awarded.


News Article | January 14, 2016
Site: www.nanotech-now.com

Abstract: Target dates are critical when the semiconductor industry adds small, enhanced features to our favorite devices by integrating advanced materials onto the surfaces of computer chips. Missing a target means postponing a device's release, which could cost a company millions of dollars or, worse, the loss of competitiveness and an entire industry. But meeting target dates can be challenging because the final integrated devices, which include billions of transistors, must be flawless - less than one defect per 100 square centimeters. Researchers at the University of Chicago and the U.S. Department of Energy's Argonne National Laboratory, led by Juan de Pablo and Paul Nealey, may have found a way for the semiconductor industry to hit miniaturization targets on time and without defects. To make microchips, de Pablo and Nealey's technique includes creating patterns on semiconductor surfaces that allow block copolymer molecules to self-assemble into specific shapes, but thinner and at much higher densities than those of the original pattern. The researchers can then use a lithography technique to create nano-trenches where conducting wire materials can be deposited. This is a stark contrast to the industry practice of using homo-polymers in complex "photoresist" formulations, where researchers have "hit a wall," unable to make the material smaller. Before they could develop their new fabrication method, however, de Pablo and Nealey needed to understand exactly how block copolymers self-assemble when coated onto a patterned surface - their concern being that certain constraints cause copolymer nanostructures to assemble into undesired metastable states. To reach the level of perfection demanded to fabricate high-precision nanocircuitry, the team had to eliminate some of these metastable states. To imagine how block copolymers assemble, it may be helpful to picture an energy landscape consisting of mountains and valleys where some valleys are deeper than others. 
The system prefers defect-free stability, which can be characterized by the deepest (low energy) valleys, if they can be found. However, systems can get trapped inside higher (medium energy) valleys, called metastable states, which have more defects. To move from a metastable to a stable state, block copolymer molecules must find ways to climb over the mountains and find lower energy valleys. "Molecules in these metastable states are comfortable, and they can remain in that state for extraordinarily long periods of time," said de Pablo of the University of Chicago's and Argonne's Institute for Molecular Engineering. "In order to escape such states and attain a perfect arrangement, they need to start rearranging themselves in a manner that allows the system to climb over local energy barriers, before reaching a lower energy minimum. What we have done in this work is predict the path these molecules must follow to find defect-free states and designed a process that delivers industry-standard nanocircuitry that can be scaled down to smaller densities without defects." Through an INCITE grant, de Pablo and his team used the Mira and Fusion supercomputers at the Argonne Leadership Computing Facility, a DOE Office of Science User Facility. There, the team generated molecular simulations of self-assembling block copolymers along with sophisticated sampling algorithms to calculate where barriers to structural rearrangement would arise in the material. After all the calculations were done, the researchers could precisely predict the pathways of molecular rearrangement that block copolymers must take to move from a metastable to a stable state. They could also experiment with temperatures, solvents and applied fields to further manipulate and decrease the barriers between these states. To test these calculations, de Pablo and Nealey partnered with IMEC, an international consortium located in Belgium.
Their commercial-grade fabrication and characterization instruments helped the researchers perform experiments under conditions that are not available in academic laboratories. An individual defect measures only a handful of nanometers; "finding a defect in a 100 square centimeter area is like finding a needle in a haystack, and there are only a few places in the world where one has access to the necessary equipment to do so," says de Pablo. "Manufacturers have long been exploring the feasibility of using block copolymer assembly to reach the small critical dimensions that are demanded by modern computing and higher data storage densities," de Pablo said. "Their biggest challenge involved evaluating defects; by following the strategies we have outlined, that challenge is greatly diminished." John Neuffer, president and CEO of the Semiconductor Industry Association (SIA), says industry is relentlessly focused on designing and building chips that are smaller, more powerful and more energy-efficient. "The key to unlocking the next generation of semiconductor innovation is research," he said. "SIA commends the work done by Argonne National Laboratory and the University of Chicago, as well as other critical scientific research being done across the United States." De Pablo, Nealey and their team will continue their investigations with a wider class of materials, increasing the complexity of patterns and characterizing materials in greater detail while also developing methods based on self-assembly for fabrication of three-dimensional structures. Their long-term goal, with support from the DOE's Office of Science, is to arrive at an understanding of directed self-assembly of polymeric molecules that will enable creation of wide classes of materials with exquisite control over their nanostructure and functionality for applications in energy harvesting, storage and transport.
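The energy-landscape picture in this article (metastable valleys, barriers, temperature-assisted escape) can be sketched with a toy Metropolis Monte Carlo walk. This is an illustrative model, not the authors' simulation code: a particle starts in the shallow (metastable) well of an asymmetric double-well potential and must climb the barrier to reach the deeper (stable) well.

```python
import math
import random

# Asymmetric double-well: the minimum near x = +1 is metastable (higher
# energy), while the minimum near x = -1 is the stable, "defect-free" analog.
def energy(x):
    return (x**2 - 1) ** 2 + 0.3 * x

def metropolis(x, temperature, steps, rng):
    """Standard Metropolis walk: downhill moves are always accepted,
    uphill moves with Boltzmann probability exp(-dE/T)."""
    for _ in range(steps):
        trial = x + rng.uniform(-0.2, 0.2)
        delta = energy(trial) - energy(x)
        if delta <= 0 or rng.random() < math.exp(-delta / temperature):
            x = trial
    return x

def fraction_escaped(n_walkers=50, temperature=0.3, steps=8000):
    """Fraction of walkers that crossed from the metastable well (x > 0)
    into the stable well (x < 0)."""
    rng = random.Random(7)
    return sum(metropolis(1.0, temperature, steps, rng) < 0
               for _ in range(n_walkers)) / n_walkers

print(fraction_escaped())
```

At higher temperature most walkers eventually hop the barrier and settle in the stable well; at very low temperature they stay trapped, mirroring why the team tuned temperatures, solvents and fields to shrink the barriers between metastable and stable states.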
About Argonne National Laboratory Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science. The U.S. Department of Energy's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit the Office of Science website. Issuers of news releases, not 7th Wave, Inc. or Nanotechnology Now, are solely responsible for the accuracy of the content.


News Article | January 15, 2016
Site: www.nanotech-now.com

Abstract: While lithium-ion batteries have transformed our everyday lives, researchers are currently trying to find new chemistries that could offer even better energy possibilities. One of these chemistries, lithium-air, promises greater energy density but has certain drawbacks as well. Now, thanks to research at the U.S. Department of Energy's (DOE) Argonne National Laboratory, one of those drawbacks may have been overcome. All previous work on lithium-air batteries showed the same phenomenon: the formation of lithium peroxide (Li2O2), a solid precipitate that clogged the pores of the electrode. In a recent experiment, however, Argonne battery scientists Jun Lu, Larry Curtiss and Khalil Amine, along with American and Korean collaborators, were able to produce stable crystallized lithium superoxide (LiO2) instead of lithium peroxide during battery discharging. Unlike lithium peroxide, lithium superoxide can easily dissociate into lithium and oxygen, leading to high efficiency and good cycle life. "This discovery really opens a pathway for the potential development of a new kind of battery," Curtiss said. "Although a lot more research is needed, the cycle life of the battery is what we were looking for." The major advantage of a battery based on lithium superoxide, Curtiss and Amine explained, is that it allows, at least in theory, for the creation of a lithium-air battery that consists of what chemists call a "closed system." Open systems require the consistent intake of extra oxygen from the environment, while closed systems do not - making them safer and more efficient. "The stabilization of the superoxide phase could lead to developing a new closed battery system based on lithium superoxide, which has the potential of offering truly five times the energy density of lithium ion," Amine said. Curtiss and Lu attributed the growth of the lithium superoxide to the spacing of iridium atoms in the electrode used in the experiment.
"It looks like iridium will serve as a good template for the growth of superoxide," Curtiss said. "However, this is just an intermediate step," Lu added. "We have to learn how to design catalysts to understand exactly what's involved in lithium-air batteries." The researchers confirmed the absence of lithium peroxide using X-ray diffraction at the Advanced Photon Source and received allocations of time on the Mira supercomputer at the Argonne Leadership Computing Facility; some of the work was also performed at Argonne's Center for Nanoscale Materials. All three are DOE Office of Science User Facilities located at Argonne. A study based on the research appeared in the January 11 issue of Nature. The work was funded by the DOE's Office of Energy Efficiency and Renewable Energy and Office of Science.
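Faraday's law gives the theoretical specific capacities of the two discharge products discussed above. This is generic textbook electrochemistry arithmetic for context, not a calculation from the study; note that the superoxide's advantage described in the article is reversibility and a closed system, not per-gram capacity.

```python
# Theoretical specific capacity of a discharge product, Q = n*F / (3.6*M),
# in mAh per gram. Illustrative textbook arithmetic, not from the study.
FARADAY = 96485.0            # Faraday constant, C/mol
M_LI, M_O = 6.94, 16.00      # molar masses, g/mol

def specific_capacity(n_electrons, molar_mass_g_per_mol):
    """mAh per gram of discharge product."""
    return n_electrons * FARADAY / (3.6 * molar_mass_g_per_mol)

# Lithium peroxide forms via 2 Li+ + O2 + 2 e-; lithium superoxide via Li+ + O2 + e-.
q_li2o2 = specific_capacity(2, 2 * M_LI + 2 * M_O)  # Li2O2, roughly 1168 mAh/g
q_lio2 = specific_capacity(1, M_LI + 2 * M_O)       # LiO2, roughly 688 mAh/g
```

The peroxide actually stores more charge per gram of product; the case for the superoxide rests on its easy dissociation back into lithium and oxygen, which is what enables the efficiency and cycle life the researchers report.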


News Article | November 16, 2016
Site: www.eurekalert.org

DALLAS, Nov. 16, 2016 - Researchers at UT Southwestern Medical Center report the most detailed 3-D structure to date of the brain receptor that binds and responds to the chemical at the root of marijuana's high. Their high-resolution structure of the human cannabinoid receptor 1 (CB1) and its binding site for the chemical tetrahydrocannabinol (THC) should lead to a better understanding of how marijuana affects the brain. The research also could aid discovery of new treatments for conditions that target the receptor, said Dr. Daniel Rosenbaum, Assistant Professor of Biophysics and Biochemistry at UT Southwestern. "What is most exciting from a therapeutic standpoint is that the same receptor pocket that binds THC also binds cannabinoid inhibitors that have been studied as possible treatments for conditions such as obesity," said Dr. Rosenbaum, senior author of the study published online today by Nature. "The structure is an important step toward explaining how cannabinoids initiate signals in the brain that affect the release of neurotransmitters that relay messages between the brain's neurons," Dr. Rosenbaum said. "This 3-D structure provides high-resolution details of the binding pocket in the CB1 receptor, where plant cannabinoids like THC, cannabinoids made in the body, and synthetic cannabinoid inhibitors all work to modulate receptor function and physiology." He said the CB1 receptor is the target for cannabinoid inhibitor drugs now under study as possible treatments for epilepsy, pain control, obesity, and other conditions. In a competing study released last month by the journal Cell, a U.S.-Chinese team of researchers reported a 3-D structure of the CB1 receptor at a resolution of 2.8 angstroms. The UT Southwestern study reports a higher resolution of 2.6 angstroms. (One angstrom is equivalent to one hundred-millionth of a centimeter.) The higher the resolution, the finer the details of the relationship between atoms of the protein. 
"The resolution is very important. Our structure shows a different and better resolved structure at the important binding pocket that is of interest to scientists involved in drug development," Dr. Rosenbaum said. "Overall, these two structures are complementary, but we believe our structure may provide a better framework for understanding how cannabinoids and inhibitors bind to the receptor." The Cell study examined the CB1 receptor bound to a synthetic chemical created to stabilize the receptor. In contrast, the UT Southwestern research team successfully imaged the receptor bound to the drug taranabant, which was tested as a possible anti-obesity treatment in clinical trials. Those trials ended due to side effects such as anxiety and depression, Dr. Rosenbaum said. CB1 and the related CB2, which still lacks a high-resolution structural solution, are both members of the human G protein-coupled receptor family. Members of that receptor family control signaling pathways involving hormones, neurotransmitters, and sensory stimuli such as light and odors. The team's success depended on overcoming the receptor protein's resistance to crystallization, which is required for the diffraction measurements used in X-ray crystallography. The researchers also conducted computer simulations of how THC might bind to the CB1 receptor, he said. The next step is to obtain structures of CB1 actually bound to THC, he said. Dr. Rosenbaum, a Eugene McDermott Scholar in Medical Research at UT Southwestern, was listed as one of the world's most highly cited researchers in 2014, according to data compiled by Thomson Reuters. This is his second study published in Nature in the past six months. His earlier study pertained to the structural biology of an important membrane protein involved in cholesterol metabolism. UT Southwestern co-authors in Biophysics include lead author Dr. Zhenhua Shao and Dr. 
Jie Yin, postdoctoral researchers; Karen Chapman, research scientist; Magdalena Grzemska, a visiting junior researcher; and Lindsay Clark, graduate student. Dr. Junmei Wang, Associate Professor in the Cecil H. and Ida Green Comprehensive Center for Molecular, Computational, and Systems Biology and of Biophysics, also assisted. The research was funded by the Welch Foundation and the Packard Foundation. The study also received support from the Advanced Photon Source (APS), a U.S. Department of Energy Office of Science User Facility operated by Argonne National Laboratory. UT Southwestern, one of the premier academic medical centers in the nation, integrates pioneering biomedical research with exceptional clinical care and education. The institution's faculty includes many distinguished members, including six who have been awarded Nobel Prizes since 1985. The faculty of almost 2,800 is responsible for groundbreaking medical advances and is committed to translating science-driven research quickly to new clinical treatments. UT Southwestern physicians provide medical care in about 80 specialties to more than 100,000 hospitalized patients and oversee approximately 2.2 million outpatient visits a year.
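The resolution figures quoted above lend themselves to a quick sanity check. This is a minimal, illustrative unit-and-ratio sketch; the percentage is derived here from the quoted 2.6 Å and 2.8 Å values and is not stated in the article.

```python
# Sanity-checking the resolution figures quoted in the article.
# 1 angstrom = one hundred-millionth of a centimeter = 0.1 nanometer.
ANGSTROM_IN_CM = 1e-8
ANGSTROM_IN_NM = 0.1

ut_southwestern_resolution = 2.6  # angstroms (this study)
cell_study_resolution = 2.8       # angstroms (the competing Cell study)

# Smaller is finer: the fractional improvement in resolution is about 7 percent.
improvement = (cell_study_resolution - ut_southwestern_resolution) / cell_study_resolution
```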


News Article | November 21, 2015
Site: www.nanotech-now.com

Abstract: Entanglement is one of the strangest phenomena predicted by quantum mechanics, the theory that underlies most of modern physics. It says that two particles can be so inextricably connected that the state of one particle can instantly influence the state of the other, no matter how far apart they are. Just one century ago, entanglement was at the center of intense theoretical debate, leaving scientists like Albert Einstein baffled. Today, however, entanglement is accepted as a fact of nature and is actively being explored as a resource for future technologies including quantum computers, quantum communication networks, and high-precision quantum sensors. Entanglement is also one of nature's most elusive phenomena. Producing entanglement between particles requires that they start out in a highly ordered state, which is disfavored by thermodynamics, the branch of physics that governs the interactions between heat and other forms of energy. This poses a particularly formidable challenge when trying to realize entanglement at the macroscopic scale, among huge numbers of particles. "The macroscopic world that we are used to seems very tidy, but it is completely disordered at the atomic scale. The laws of thermodynamics generally prevent us from observing quantum phenomena in macroscopic objects," said Paul Klimov, a graduate student in the University of Chicago's Institute for Molecular Engineering and lead author of new research on quantum entanglement. The institute is a partnership between UChicago and Argonne National Laboratory. Previously, scientists have overcome the thermodynamic barrier and achieved macroscopic entanglement in solids and liquids by going to ultra-low temperatures (-270 degrees Celsius) and applying huge magnetic fields (1,000 times larger than that of a typical refrigerator magnet) or using chemical reactions. In the Nov.
20 issue of Science Advances, Klimov and other researchers in David Awschalom's group at the Institute for Molecular Engineering have demonstrated that macroscopic entanglement can be generated at room temperature and in a small magnetic field. The researchers used infrared laser light to order (preferentially align) the magnetic states of thousands of electrons and nuclei, and then electromagnetic pulses, similar to those used for conventional magnetic resonance imaging (MRI), to entangle them. This procedure caused pairs of electrons and nuclei in a macroscopic 40-micrometer-cubed volume (the volume of a red blood cell) of the semiconductor silicon carbide (SiC) to become entangled. "We know that the spin states of atomic nuclei associated with semiconductor defects have excellent quantum properties at room temperature," said Awschalom, Liew Family Professor in Molecular Engineering and a senior scientist at Argonne National Laboratory. "They are coherent, long-lived and controllable with photonics and electronics. Given these quantum 'pieces,' creating entangled quantum states seemed like an attainable goal." In addition to being of fundamental physical interest, "the ability to produce robust entangled states in an electronic-grade semiconductor at ambient conditions has important implications on future quantum devices," Awschalom said. In the short term, the techniques used here, in combination with sophisticated devices enabled by advanced SiC device-fabrication protocols, could enable quantum sensors that use entanglement as a resource for beating the sensitivity limit of traditional (non-quantum) sensors. Given that the entanglement works at ambient conditions and the fact that SiC is bio-friendly, one particularly exciting application is biological sensing inside a living organism. "We are excited about entanglement-enhanced magnetic resonance imaging probes, which could have important biomedical applications," said Abram Falk of IBM's Thomas J.
Watson Research Center and a co-author of the research findings. In the long term, it might even be possible to go from entangled states on the same SiC chip to entangled states across distant SiC chips. Such efforts could be facilitated by physical phenomena that allow macroscopic quantum states, as opposed to single quantum states (in single atoms), to interact very strongly with one another, which is important for producing entanglement with a high success rate. Such long-distance entangled states have been proposed for synchronizing global positioning satellites and for communicating information in a manner that is fundamentally secured from eavesdroppers by the laws of physics.


Making the switch, this time with an insulator: Colorado State University physicists, joining the fundamental pursuit of using electron spins to store and manipulate information, have demonstrated a new approach to doing so, which could prove useful in the application of low-power computer memory. Abstract: The growing field of spin electronics - spintronics - tells us that electrons spin like a top, carry angular momentum, and can be controlled as units of power, free of conventional electric current. Nonvolatile magnetic memory based on the "spin torques" of these spinning electrons has recently been commercialized as STT-MRAM (spin transfer torque-magnetic random access memory). Colorado State University physicists, joining the fundamental pursuit of using electron spins to store and manipulate information, have demonstrated a new approach to doing so, which could prove useful in the application of low-power computer memory. Publishing Sept. 1 in Nature Communications, they've demonstrated a new way to switch magnetic moments - or direction of magnetization - of electrons in a thin film of barium ferrite, which is a magnetic insulator. Until this point, scientists have only demonstrated this switching behavior - the key to writing information as memory - in metal thin films. The work was led by Mingzhong Wu, professor in the Department of Physics, with first author Peng Li, a former postdoctoral researcher now at Seagate, and second author Tao Liu, a current postdoc at CSU. The work was performed in collaboration with researchers at the University of Alabama, Argonne National Laboratory, the University of Notre Dame, and the University of Wyoming. Other CSU authors include faculty members Stuart Field and Mario Marconi, and graduate students Houchen Chang and Daniel Richardson.
Switching magnetic moments of electrons in an insulator instead of a metal could prove to be a major breakthrough in spintronics, by allowing a spin current-based memory storage device to be simpler, and also to maintain more efficiency per electron. A property known as perpendicular magnetic anisotropy (PMA), key for information storage, in this case originates from the intrinsic magneto-crystalline anisotropy of the insulator, rather than from interfacial anisotropy as in other cases, Wu said. "Higher efficiency and lower power than the standard are always the goal in memory applications," Wu said. Beyond the application for computer memory, which captivates most spintronics researchers today, the CSU researchers' device does something bigger: It demonstrates the possibility of a new class of materials for spintronics. "What's exciting about this is that it's an enabling technology for exploring an entirely different class of configurations, some of which are theorized to be useful," said Jake Roberts, professor and chair of the Department of Physics. In the CSU researchers' device, the spin current does the job of assisting magnetic switching. Next, they will attempt to further refine their device for more efficient switching, including using a topological insulator or the photo-spin-voltaic effect to produce spin currents. The photo-spin-voltaic effect was discovered by Wu and colleagues, and reported in Nature Physics. The work was primarily supported by C-SPIN (Center for Spintronic Materials, Interfaces, and Novel Architectures). It was also supported by SHINES (Spins and Heat in Nanoscale Electronic Systems), the National Science Foundation, the U.S. Army Research Office and the U.S. Department of Energy.


Cleaning up hybrid battery electrodes improves capacity and lifespan: New way of building supercapacitor-battery electrodes eliminates interference from inactive components. Abstract: Hybrid batteries that charge faster than conventional ones could have significantly better electrical capacity and long-term stability when prepared with a gentle-sounding way of making electrodes. Called ion soft-landing, the high-precision technique resulted in electrodes that could store a third more energy and had twice the lifespan compared to those prepared by a conventional method, the researchers report today in Nature Communications. Straightforward to set up, the method could eventually lead to cheaper, more powerful, longer-lasting rechargeable batteries. "This is the first time anyone has been able to put together a functioning battery using ion soft-landing," said chemist and Laboratory Fellow Julia Laskin of the Department of Energy's Pacific Northwest National Laboratory. The advantages come from soft-landing's ability to build an electrode surface very specifically, with only the most desirable molecules out of a complex mixture of raw components. "It will help us unravel important scientific questions about this energy storage technology, a hybrid between common lithium rechargeable batteries and supercapacitors that have very high energy density," said lead author, PNNL chemist Venkateshkumar Prabhakaran. A different kind of hybrid: Although lithium-ion rechargeable batteries are the go-to technology for small electronic devices, they release their energy slowly (which is why hybrid electric vehicles use gasoline for accelerating) and take a long time to recharge (which makes electric vehicles slower to "fill" than their gas-powered cousins). One possible solution is a hybrid battery that crosses a lithium battery's ability to hold a lot of charge for its size with a fast-charging supercapacitor.
PNNL chemists wanted to know if they could make superior hybrid battery materials with a technology -- called ion soft-landing -- that intricately controls the raw components during preparation. To find out, Laskin and colleagues created hybrid electrodes by spraying a chemical known as POM, or polyoxometalate, onto supercapacitor electrodes made of microscopically small carbon tubes. Off-the-shelf POM has both positively and negatively charged parts called ions, but only the negative ions are needed in hybrid electrodes. Limited by its design, the conventional preparation technique sprays both the positive and negative ions onto the carbon nanotubes. Ion soft-landing, however, separates the charged parts and only sets down the negative ions on the electrode surface. The question that Laskin and team had was, do positive ions interfere with the performance of hybrid electrodes? To find out, the team made centimeter-sized square hybrid batteries out of POM-carbon nanotube electrodes that sandwiched a specially developed ionic liquid membrane between them. "We had to design a membrane that separated the electrodes and also served as the battery's electrolyte, which allows conduction of ions," said Prabhakaran. "Most people know electrolytes as the liquid sloshing around within a car battery. Ours was a solid gel." They tested this mini-hybrid battery for how much energy it could hold and how many cycles of charging and discharging it could handle before petering out. They compared soft-landing with conventionally made hybrid batteries, which were made with a technique called electrospray deposition. They used an off-the-shelf POM containing positively charged sodium ions. Cheers for the POMs The team found that the POM hybrid electrodes made with soft-landing had superior energy storage capacity. They could hold a third more energy than the carbon nanotube supercapacitors by themselves, which were included as a minimum performance benchmark. 
And soft-landing hybrids held about 27 percent more energy than conventionally made electrospray deposited electrodes. To make sure the team was using the optimal amount of POM, they made hybrid electrodes using different amounts and tested which one resulted in the highest capacity. Soft-landing produced the highest capacity overall using the lowest amount of POM. This indicated the electrodes used the active material extremely efficiently. In comparison, conventional, sodium-based POM electrodes required twice as much POM material to reach their highest capacity. The conventionally-made devices used more POM, but the team couldn't count them out yet. They might in fact have a longer lifespan than the soft-landing produced electrodes. To test that, the team charged and discharged the hybrids 1,000 times and measured how long they lasted. As they did in the previous tests, the soft-landing-based devices performed the best, losing only a few percent capacity after 1000 cycles. The naked supercapacitors came in second, and the sodium-based, conventionally made devices lost about double the capacity of the soft-landed devices. This suggests that the soft-landing method has the potential to double the lifespan of these types of hybrid batteries. Looking good The team was surprised that it took so little of the POM material to make such a big difference to the carbon nanotube supercapacitors. By weight, the amount of POM was just one-fifth of a percent of the amount of carbon nanotube material. "The fact that the capacitance reaches a maximum with so little POM, and then drops off with more, is remarkable," said Laskin. "We didn't expect such a small amount of POM to be making such a large contribution to the capacitance." They decided to examine the structure of the electrodes using powerful microscopes in EMSL, the Environmental Molecular Sciences Laboratory, a DOE Office of Science User Facility at PNNL. 
They compared soft-landing with the conventionally made, sodium-POM electrodes. Soft-landing created small discrete clusters of POM dotting the carbon nanotubes, but the conventional method resulted in larger clumps of POM clusters swamping out the nanotubes, aggregates up to ten times the size of those made by soft-landing. This result suggested to the researchers that removing the positive ions from the POM starting material allowed the negative ions to disperse evenly over the surface. As long as the positive ions such as sodium remained, the POM and sodium appear to reform the crystalline material and aggregate on the surface. This prevented much of the POM from doing its job in the battery, thereby reducing capacity. When the team zoomed out a little and viewed the nanotubes from above, the conventionally made electrodes were covered in large aggregates of POM. The soft-landed electrodes, however, were remarkably indistinguishable from the naked carbon nanotube supercapacitors. In future research, the team wants to explore how to get the carbon materials to accept more POM, which might increase capacity and lifespan even further. This work was supported by the Department of Energy Office of Science and the Joint Center for Energy Storage Research, a Department of Energy Innovation Hub. Reference: Venkateshkumar Prabhakaran, B. Layla Mehdi, Jeffrey J. Ditto, Mark H. Engelhard, Bingbing Wang, K. Don D. Gunaratne, David C. Johnson, Nigel D. Browning, Grant E. Johnson and Julia Laskin, "Rational Design of Efficient Electrode-Electrolyte Interfaces for Solid-State Energy Storage Using Ion Soft-Landing," Nature Communications, April 21, 2016. DOI: 10.1038/ncomms11399.
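Treating the quoted percentages as exact, the capacity comparisons reported above can be laid out in a few lines of arithmetic. This is illustrative only; the bare carbon-nanotube supercapacitor is the benchmark the article itself uses, and the final figure is derived here rather than stated in the article.

```python
# Relative energy-storage capacity, normalized to the bare carbon-nanotube
# supercapacitor benchmark. Percentages are taken from the article and
# treated as exact for illustration.
benchmark = 1.0
soft_landing = benchmark * (1 + 1 / 3)  # held "a third more energy" than the benchmark
electrospray = soft_landing / 1.27      # soft-landing held ~27% more than electrospray

# Derived figure: the conventionally made electrodes beat the bare
# supercapacitor by only about 5 percent.
advantage_over_benchmark = electrospray - benchmark
```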
About Pacific Northwest National Laboratory The Joint Center for Energy Storage Research (JCESR), a DOE Energy Innovation Hub, is a major partnership that integrates researchers from many disciplines to overcome critical scientific and technical barriers and create new breakthrough energy storage technology. Led by the U.S. Department of Energy's Argonne National Laboratory, partners include national leaders in science and engineering from academia, the private sector, and national laboratories. Their combined expertise spans the full range of the technology-development pipeline from basic research to prototype development to product engineering to market delivery. EMSL, the Environmental Molecular Sciences Laboratory, is a national scientific user facility sponsored by the Department of Energy's Office of Science. Located at Pacific Northwest National Laboratory in Richland, Wash., EMSL offers an open, collaborative environment for scientific discovery to researchers around the world. Its integrated computational and experimental resources enable researchers to realize important scientific insights and create new technologies. Follow EMSL on Facebook, LinkedIn and Twitter. Interdisciplinary teams at Pacific Northwest National Laboratory address many of America's most pressing issues in energy, the environment and national security through advances in basic and applied science. Founded in 1965, PNNL employs 4,400 staff and has an annual budget of nearly $1 billion. It is managed by Battelle for the U.S. Department of Energy's Office of Science. As the single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information on PNNL, visit the PNNL News Center, or follow PNNL on Facebook, Google+, LinkedIn and Twitter.


News Article | March 2, 2017
Site: phys.org

This single finding led to a worldwide investigation that has spanned a century. While it resolved one scientific debate, it created many more. The Department of Energy's Office of Science and its predecessors have spent decades supporting scientists investigating the mystery of why superconductivity occurs under a variety of circumstances. The answer to this question holds major opportunities for scientific and technological development. About six percent of all electricity distributed in the U.S. is lost in transmission and distribution. Because superconductors don't lose current as they conduct electricity, they could enable ultra-efficient power grids and incredibly fast computer chips. Winding them into coils produces magnetic fields that could be used for highly efficient generators and high-speed magnetic levitation trains. Unfortunately, technical challenges with both traditional and "high temperature" superconductors restrict their use. "To the extent that Tesla and Edison introducing the use of electricity revolutionized our society, ambient superconductivity would revolutionize it once again," said J.C. Séamus Davis, a physicist who works with the Center for Emergent Superconductivity, a DOE Energy Frontier Research Center. The How and Why of Superconductivity: Kamerlingh Onnes' discovery set off a flurry of activity. Despite his grand visions, most of what scientists found only reinforced superconductors' limitations. One of the first big breakthroughs came nearly half a century after Kamerlingh Onnes' initial finding. While most researchers thought superconductivity and magnetism couldn't co-exist, in 1952 Alexei A. Abrikosov proposed "Type II" superconductors that can tolerate magnetic fields. Abrikosov continued his research at DOE's Argonne National Laboratory (ANL) and later won the Nobel Prize in Physics for his contributions.
The next big leap came in 1957, when John Bardeen, Leon Cooper, and John Robert Schrieffer proposed the first theory of why superconductivity occurs. Their theory, made possible by the support of DOE's predecessor, the Atomic Energy Commission, also won them the Nobel Prize in Physics. Their theory contrasts how some metals work under normal conditions with how they act at extremely low temperatures. Normally, atoms are packed together in metals, forming regular lattices. Similar to the spokes and rods of Tinkertoys, the metals' positively charged ions are bonded together. In contrast, negatively charged free electrons (electrons not tied to an ion) move independently through the lattice. But at extremely low temperatures, the relationship between the electrons and the surrounding lattice changes. A common view is that the electrons' negative charges weakly attract positive ions. Like someone tugging the middle of a rubber band, this weak attraction slightly pulls positive ions out of place in the lattice. Even though the original electron has already passed by, the now-displaced positive ions then slightly attract other electrons. At near absolute zero, attraction from the positive ions causes electrons to follow the path of the ones in front of them. Instead of travelling independently, they couple into pairs. These pairs flow easily through metal without resistance, causing superconductivity. Unfortunately, all of the superconductors that scientists had found only functioned near absolute zero, the coldest theoretically possible temperature. But in 1986, Georg Bednorz and K. Alex Müller at IBM discovered copper-based materials that become superconducting at 35 K (-396 F). Other scientists boosted these materials' superconducting temperature to close to 150 K (-190 F), enabling researchers to use fairly common liquid nitrogen to cool them. In the last decade, researchers in Japan and Germany discovered two more categories of high-temperature superconductors.
Iron-based superconductors exist under conditions similar to copper-based ones, while hydrogen-based ones exist only at pressures more than a million times that of Earth's atmosphere. But the interactions between the electron pairs and ions in the metal lattice that Bardeen, Cooper, and Schrieffer described couldn't explain what was happening in copper- and iron-based high-temperature superconductors. "We were thrown into a quandary," said Peter Johnson, a physicist at Brookhaven National Laboratory (BNL) and director of its Center for Emergent Superconductivity. "These new materials challenged all of our existing ideas on where to look for new superconductors." In addition to being scientifically intriguing, this conundrum opened up a new realm of potential applications. Unfortunately, industry can only use "high-temperature" superconductors for highly specialized applications. They are still too complex and expensive to use in everyday situations. However, figuring out what makes them different from traditional ones may be essential to developing superconductors that work at room temperature. Because they wouldn't require cooling equipment and could be easier to work with, room-temperature superconductors could be cheaper and more practical than those available today. Several sets of experiments supported by the Office of Science are getting us closer to finding out what, if anything, high-temperature superconductors have in common. Evidence suggests that magnetic interactions between electrons may be essential to why high-temperature superconductivity occurs. All electrons have a spin, creating two magnetic poles. As a result, electrons can act like tiny refrigerator magnets. Under normal conditions, these poles aren't oriented in a particular way and don't interact. However, copper- and iron-based superconductors are different. 
In these materials, the spins on adjacent iron sites have north and south poles that alternate directions – oriented north, south, north, south and so on. One project supported by the Center for Emergent Superconductivity examined how the ordering of these magnetic poles affected their interactions. Scientists theorized that because magnetic poles were already pointing in opposite directions, it would be easier than usual for electrons to pair up. To test this theory, they correlated the strength of the bonds between electrons (the strength of the electron pairs) with the direction of their magnetism. With this technique, they provided significant experimental evidence of the relationship between superconductivity and magnetic interactions. Other experiments at a number of DOE's national laboratories have further reinforced this theory. These observations met scientists' expectations of what should occur if superconductivity and magnetism are connected. Researchers at ANL observed an iron-based superconductor go through multiple phases before reaching a superconducting state. As scientists cooled the material, iron atoms went from a square structure to a rectangular one and then back to a square one. Along the way, there was a major change in the electrons' magnetic poles. While they were originally random, they assumed a specific order right before reaching superconductivity. At DOE's Ames Laboratory, researchers found that adding or removing electrons from an iron-based superconducting material changed the direction in which electricity flowed more easily. Researchers at BNL observed that superconductivity and magnetism not only co-exist, but actually fluctuate together in a regular pattern. Unfortunately, the complex nature of electron interactions makes it difficult to pinpoint exactly what role they play in superconductivity. 
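The alternating north–south arrangement described above is antiferromagnetic order, and a toy nearest-neighbour Ising model (a standard textbook illustration, not the researchers' actual analysis) shows why the checkerboard pattern is energetically preferred when the coupling J > 0 penalises aligned neighbours:

```python
import itertools

def ising_energy(spins, J=1.0):
    """H = J * sum of s_i * s_j over nearest-neighbour pairs on a periodic
    square lattice; J > 0 penalises aligned neighbours (antiferromagnetic)."""
    n = len(spins)
    e = 0.0
    for i, j in itertools.product(range(n), repeat=2):
        e += J * spins[i][j] * spins[i][(j + 1) % n]  # right neighbour
        e += J * spins[i][j] * spins[(i + 1) % n][j]  # down neighbour
    return e

n = 4
checker = [[(-1) ** (i + j) for j in range(n)] for i in range(n)]  # alternating poles
ferro = [[1] * n for _ in range(n)]                                # all poles aligned

print(ising_energy(checker))  # -32.0: every one of the 32 bonds is anti-aligned
print(ising_energy(ferro))    # 32.0: aligned spins cost energy when J > 0
```

The deep, regular energy minimum of the alternating pattern is the intuition behind the pairing hypothesis above: the spins are already pointing in opposite directions, lowering the cost of forming pairs.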
Research at BNL found that as scientists cooled an iron-based material, the electron spins' directions and their relationship with each other changed rapidly. The electrons swapped partners right before the material became superconducting. Similarly, research at ANL has shown that electrons in iron-based superconductors produce "waves" of magnetism. Because some of the magnetic waves cancel each other out, only half of the atoms demonstrate magnetism at any one time. These findings are providing new insight into why superconductors behave the way they do. Research has answered many questions about them, only to bring up new ones. While laboratories have come a long way from Kamerlingh Onnes' hand-blown equipment, scientists continue to debate many aspects of these unique materials.


News Article | November 16, 2016
Site: phys.org

Supercomputers provide the computational horsepower to understand single atoms or supernovae. To get the best performance from these machines, the Department of Energy's computing centers offer free access to user support teams. These people are curious, driven scientists who, in addition to their own research, have taken on the challenge of some of the world's most complex computers. Like the world's best pit crews, groups of highly trained scientists make sure everything works together at the supercomputers available through the Department of Energy's Office of Science. Like their NASCAR counterparts, these crews get the best possible performance from the custom-built supercomputers. Their machines are fast and powerful, but they aren't intuitive. Add to that the fact that researchers using the computers are racing against the clock, too – to get answers before their time on the machine expires. Unwieldy data files, slow programs and hardware crashes all eat into that time. For example, working with data files that are 10 times larger than they've used before can slow the code to a crawl or crash it outright. The team works with scientists long before their time starts on the supercomputer to optimize codes and improve workflows. "We want them to get the performance they need to get the job done," says Katherine Riley. She's director of science for the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science user facility at Argonne National Laboratory in Illinois. Supporting some of the world's fastest computers requires more than an encyclopedic understanding of supercomputers. A deep knowledge of mathematics is a must. Add to that an outgoing personality and a desire to help. Top it off with a doctoral degree and extensive experience in a scientific field. 
"We're able to speak the computational language and also the science," says Richard Gerber, now a senior science advisor at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science user facility at Lawrence Berkeley National Laboratory in California. But he started out 20 years ago answering questions from users about the center's supercomputer. Last year, the dozen-strong team at NERSC answered 6,435 inquiries and earned, once again, top marks for their contributions. "Everybody that calls in has a problem that they can't figure out on their own," says Gerber. "We work with very smart, sophisticated users. We provide that extra bit of knowledge they need." That extra bit can also take the form of training events. Short, intense programming marathons called hack-a-thons are quite popular. Fernanda Foertter, who leads training at the Oak Ridge Leadership Computing Facility, a DOE Office of Science user facility at Oak Ridge National Laboratory in Tennessee, started these events to increase use of the facility's supercomputer, known as Titan. Foertter saw researchers attend workshops but struggle to find the time and resources to apply what they learned to optimize their codes for Titan. At a hack-a-thon, Foertter pairs each team of five or so software developers with two mentors from the support team and lots of food. The goal is to get the software to run well on the supercomputer. And that's what's happening for scientists at national labs, universities, and elsewhere. In fact, at the end of one event, researchers got their program, which analyzes how fluid moves in the brain, to run eight months of data in eight hours. Despite optimizing the code, crashes happen. At ALCF, the support team analyzes every crash. "It's time consuming, but it is important to us to understand why," says Richard Coffey, ALCF director of user experience. "If it's a system failure, we can give a refund. 
If it's a bug in the user's software, we work hard to help them understand what they could do differently." Just as the world's fastest cars require more than a driver and a pit crew, supercomputers depend on talented teams who don't work directly with the scientists driving the research. Each supercomputer has experts handling issues such as file management, maintenance, security, and much more. While the machines often get the attention at the end of a race, it's the talent of the people behind them that wins races. In the race to answer tough questions, it's the crews supporting DOE's supercomputers that make it possible to cross the finish line.


News Article | October 26, 2016
Site: www.eurekalert.org

A research team led by scientists at the Icahn School of Medicine at Mount Sinai (ISMMS) has solved the three-dimensional structure of a key protein that helps damaged cellular DNA repair itself. Investigators say that knowing the chemical structure of the protein will likely help drug designers build novel anti-cancer agents. The study, published in the October 21 issue of the journal Science Advances, involved a team of investigators from multiple institutions, who worked for more than two years to decipher the unusual configuration of the protein PrimPol, whose function was discovered in 2013. PrimPol is used in cells when normal repair proteins encounter damaged sections of DNA, often caused by anticancer chemotherapy drugs. The protein can skip over the damage to rescue DNA replication, says the study's senior investigator, Aneel K. Aggarwal, PhD, Professor of Pharmacological and Oncological Sciences at ISMMS. "PrimPol can counter the anti-cancer action of common chemotherapeutic agents such as cisplatin. By inhibiting PrimPol, we believe that we can increase the efficacy of chemotherapeutic agents in the treatment of many cancers," he says. DNA damage happens constantly -- more than 100,000 events occur in every human cell each day. PrimPol is necessary for the cell to repair DNA damage, but sometimes this may not be to the individual's benefit, as in the case of resistance to chemotherapeutic agents, says the study's co-lead author, Olga Rechkoblit, PhD, Assistant Professor of Pharmacological Sciences at ISMMS. The basic steps involved in DNA replication are known. The first step involves unzipping the intertwined double helix DNA structure, creating a "Y" shape known as a replication fork. These two strands act as templates for making the new DNA strands. A short piece of RNA known as a primer (produced by a primase enzyme) acts as the starting point for the synthesis of new DNA. 
"It had been believed that DNA polymerase and primase activities in human cells were the province of separate enzymes. Then PrimPol was discovered, and the understanding of DNA replication changed dramatically. PrimPol was found to be capable of both restarting and performing DNA synthesis after DNA replication stalls," says Dr. Rechkoblit. While scientists were excited by the discovery of the enzyme, they didn't understand how it worked. Drs. Aggarwal and Rechkoblit organized a team of investigators from Cornell University in New York, Argonne National Laboratory in Illinois, and the University of Texas Medical Branch in Galveston to find out. "Because the three-dimensional structure of the enzyme is so different from that of other DNA polymerases, it required a group effort to elucidate its structure," says Dr. Aggarwal. The clinical implications are clear, Dr. Rechkoblit says. "Many chemotherapy agents kill cancer cells by damaging their DNA and preventing the completion of DNA replication. PrimPol, on the other hand, promotes the replication progression and, thus, cell survival," she says. "Knowing the structure of PrimPol described in the current study is invaluable for designing an inhibitor for this enzyme as a future cancer therapy." "The idea would be that a patient could be given a PrimPol inhibitor -- an agent that shuts down this clever machine -- at the same time as chemotherapy, so that cancer cells cannot repair the killing damage chemotherapy offers," she says. Study contributors include Yogesh K. Gupta, PhD, and Radhika Malik, PhD, both co-lead authors from the Mount Sinai team; Kanagalaghatta R. Rajashanka, PhD, from Cornell University and Argonne National Laboratory; and Robert E. Johnson, PhD, Louise Prakash, PhD, and Satya Prakash, PhD, from the University of Texas. 
This work is funded by the National Institute of General Medical Sciences (AGM-12006), the National Institutes of Health (P41 GM103403), the National Cancer Institute (ACB-12002) and the National Institutes of Health Office of Research Infrastructure Programs High-End Instrumentation grant (S10 RR029205). This research used resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory. The Mount Sinai Health System is an integrated health system committed to providing distinguished care, conducting transformative research, and advancing biomedical education. Structured around seven member hospital campuses and a single medical school, the Health System has an extensive ambulatory network and a range of inpatient and outpatient services--from community-based facilities to tertiary and quaternary care. The System includes approximately 6,600 primary and specialty care physicians, 12 minority-owned free-standing ambulatory surgery centers, over 45 ambulatory practices throughout the five boroughs of New York City, Westchester, and Long Island, as well as 31 affiliated community health centers. Physicians are affiliated with the Icahn School of Medicine at Mount Sinai, which is ranked among the top 20 medical schools both in National Institutes of Health funding and by U.S. News & World Report. For more information, visit http://www. , or find Mount Sinai on Facebook, Twitter and YouTube.


News Article | January 14, 2016
Site: www.cemag.us

Target dates are critical when the semiconductor industry adds small, enhanced features to our favorite devices by integrating advanced materials onto the surfaces of computer chips. Missing a target means postponing a device's release, which could cost a company millions of dollars or, worse, the loss of competitiveness and an entire industry. But meeting target dates can be challenging because the final integrated devices, which include billions of transistors, must be flawless — less than one defect per 100 square centimeters. Researchers at the University of Chicago and the U.S. Department of Energy's (DOE's) Argonne National Laboratory, led by Juan de Pablo and Paul Nealey, may have found a way for the semiconductor industry to hit miniaturization targets on time and without defects. To make microchips, de Pablo and Nealey's technique includes creating patterns on semiconductor surfaces that allow block copolymer molecules to self-assemble into specific shapes, but thinner and at much higher densities than those of the original pattern. The researchers can then use a lithography technique to create nano-trenches where conducting wire materials can be deposited. This is a stark contrast to the industry practice of using homo-polymers in complex "photoresist" formulations, where researchers have "hit a wall," unable to make the material smaller. Before they could develop their new fabrication method, however, de Pablo and Nealey needed to understand exactly how block copolymers self-assemble when coated onto a patterned surface — their  concern being that certain constraints cause copolymer nanostructures to assemble into undesired metastable states. To reach the level of perfection demanded to fabricate high-precision nanocircuitry, the team had to eliminate some of these metastable states. To imagine how block copolymers assemble, it may be helpful to picture an energy landscape consisting of mountains and valleys where some valleys are deeper than others. 
The system prefers defect-free stability, which can be characterized by the deepest (low-energy) valleys, if they can be found. However, systems can get trapped inside higher (medium-energy) valleys, called metastable states, which have more defects. To move from a metastable to a stable state, block copolymer molecules must find ways to climb over the mountains and find lower energy valleys. "Molecules in these metastable states are comfortable, and they can remain in that state for extraordinarily long periods of time," says de Pablo, of the Institute for Molecular Engineering at the University of Chicago and Argonne. "In order to escape such states and attain a perfect arrangement, they need to start rearranging themselves in a manner that allows the system to climb over local energy barriers, before reaching a lower energy minimum. What we have done in this work is predict the path these molecules must follow to find defect-free states and designed a process that delivers industry-standard nanocircuitry that can be scaled down to smaller densities without defects." Through an INCITE grant, de Pablo and his team used the Mira and Fusion supercomputers at the Argonne Leadership Computing Facility, a DOE Office of Science User Facility. There, the team generated molecular simulations of self-assembling block polymers along with sophisticated sampling algorithms to calculate where barriers to structural rearrangement would arise in the material. After all the calculations were done, the researchers could precisely predict the pathways of molecular rearrangement that block copolymers must take to move from a metastable to a stable state. They could also experiment with temperatures, solvents and applied fields to further manipulate and decrease the barriers between these states. To test these calculations, de Pablo and Nealey partnered with IMEC, an international consortium located in Belgium. 
Their commercial-grade fabrication and characterization instruments helped the researchers perform experiments under conditions that are not available in academic laboratories. An individual defect measures only a handful of nanometers; "finding a defect in a 100 square centimeter area is like finding a needle in a haystack, and there are only a few places in the world where one has access to the necessary equipment to do so," says de Pablo. "Manufacturers have long been exploring the feasibility of using block copolymer assembly to reach the small critical dimensions that are demanded by modern computing and higher data storage densities," de Pablo says. "Their biggest challenge involved evaluating defects; by following the strategies we have outlined, that challenge is greatly diminished." John Neuffer, president and CEO of the Semiconductor Industry Association (SIA), says industry is relentlessly focused on designing and building chips that are smaller, more powerful and more energy-efficient. "The key to unlocking the next generation of semiconductor innovation is research," he says. "SIA commends the work done by Argonne National Laboratory and the University of Chicago, as well as other critical scientific research being done across the United States." De Pablo, Nealey, and their team will continue their investigations with a wider class of materials, increasing the complexity of patterns and characterizing materials in greater detail while also developing methods based on self-assembly for fabrication of three-dimensional structures. Their long-term goal, with support from the DOE's Office of Science, is to arrive at an understanding of directed self-assembly of polymeric molecules that will enable creation of wide classes of materials with exquisite control over their nanostructure and functionality for applications in energy harvesting, storage, and transport.
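The mountains-and-valleys picture above is exactly the setting of simulated annealing, a generic way to let a system climb over local energy barriers (the sketch below is a toy illustration of the idea, not the team's actual sampling algorithm). A walker starts in the shallow "metastable" valley of a tilted double-well landscape and, as the thermal kicks are gradually reduced, settles into the deeper "defect-free" valley:

```python
import math
import random

def energy(x):
    """Tilted double well: a metastable minimum near x = +1 and a deeper,
    'defect-free' minimum near x = -1, separated by a barrier near x = 0."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def anneal(seed, steps=20000):
    """Metropolis simulated annealing, started in the metastable valley."""
    rng = random.Random(seed)
    x, temp = 1.0, 1.0
    for _ in range(steps):
        trial = x + rng.uniform(-0.2, 0.2)
        d_e = energy(trial) - energy(x)
        # Always accept downhill moves; accept uphill moves with probability
        # exp(-dE/T), which is what lets the walker climb over local barriers.
        if d_e < 0 or rng.random() < math.exp(-d_e / temp):
            x = trial
        temp = max(1e-3, temp * 0.9995)  # slow cooling schedule
    return x

# A few independent runs; keep the lowest-energy result found.
best_x = min((anneal(seed) for seed in range(5)), key=energy)
print(f"{best_x:.2f}")  # lands near -1, the deeper of the two valleys
```

Cooling too quickly freezes the walker in the metastable valley, which is the computational analogue of the trapped, defect-laden copolymer states the team had to engineer away.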


News Article | February 2, 2016
Site: www.cemag.us

An international group of physicists led by the University of Arkansas has created an artificial material with a structure comparable to graphene. “We’ve basically created the first artificial graphene-like structure with transition metal atoms in place of carbon atoms,” says Jak Chakhalian, professor of physics and director of the Artificial Quantum Materials Laboratory. In 2014, Chakhalian was selected as a quantum materials investigator for the Gordon and Betty Moore Foundation. His selection came with a $1.8 million grant, a portion of which funded the study. Graphene, discovered in 2004, is a one-atom-thick sheet of graphite. Graphene transistors are predicted to be substantially faster and more heat tolerant than today’s silicon transistors and may result in more efficient computers and the next generation of flexible electronics. Its discoverers were awarded the Nobel Prize in Physics in 2010. The University of Arkansas-led group published its findings this week in Physical Review Letters, the journal of the American Physical Society, in a paper titled “Mott Electrons in an Artificial Graphene-like Crystal of Rare Earth Nickelate.” “This discovery gives us the ability to create graphene-like structures for many other elements,” says Srimanta Middey, a postdoctoral research associate at the University of Arkansas who led the study. The research group also included postdoctoral research associates Michael Kareev and Yanwei Cao, doctoral student Xiaoran Liu, and recent doctoral graduate Derek Meyers, now at Brookhaven National Laboratory. Additional members of the group were David Doennig of the University of Munich; Rossitza Pentcheva of the University of Duisburg-Essen in Germany; Zhenzhong Yang, Jinan Shi, and Lin Gu of the Chinese Academy of Sciences; and John W. Freeland and Phillip Ryan of the Advanced Photon Source at Argonne National Laboratory near Chicago. The research was also partially funded by the Chinese Academy of Sciences. 
The abstract of their report reads: “Deterministic control over the periodic geometrical arrangement of the constituent atoms is the backbone of the material properties, which, along with the interactions, define the electronic and magnetic ground state. Following this notion, a bilayer of a prototypical rare-earth nickelate, NdNiO3, combined with a dielectric spacer, LaAlO3, has been layered along the pseudocubic [111] direction. The resulting artificial graphene-like Mott crystal with magnetic 3d electrons has antiferromagnetic correlations. In addition, a combination of resonant X-ray linear dichroism measurements and ab initio calculations reveal the presence of an ordered orbital pattern, which is unattainable in either bulk nickelates or nickelate-based heterostructures grown along the [001] direction. These findings highlight another promising venue towards designing new quantum many-body states by virtue of geometrical engineering.”


News Article | October 3, 2016
Site: news.mit.edu

When one type of an oxide structure called perovskite is exposed to both water vapor and streams of electrons, it exhibits behavior that researchers had never anticipated: The material gives off oxygen and begins oscillating, almost resembling a living, breathing organism. The phenomenon was “totally unexpected” and may turn out to have some practical applications, says Yang Shao-Horn, the W.M. Keck Professor of Energy at MIT. She is the senior author of a paper describing the research that is being published today in the journal Nature Materials. The paper’s lead author is Binghong Han PhD ’16, now a postdoc at Argonne National Laboratory. Perovskite oxides are promising candidates for a variety of applications, including solar cells, electrodes in rechargeable batteries, water-splitting devices to generate hydrogen and oxygen, fuel cells, and sensors. In many of these uses, the materials would be exposed to water vapor, so a better understanding of their behavior in such an environment is considered important for facilitating the development of many of their potential applications. A video taken from a transmission electron microscope shows a perovskite material oscillating as it is exposed to water vapor and a beam of electrons. Video has been sped up. (Courtesy of the researchers) When a particular kind of perovskite known as BSCF — after the chemical symbols for its constituents barium, strontium, cobalt, and iron — is placed in a vacuum in a transmission electron microscope (TEM) to observe its behavior, Shao-Horn says, “nothing happens, it’s very stable.” But then, “when you pump in low pressure water vapor, you begin to see the oxide oscillate.” The cause of that oscillation, clearly visible in the TEM images, is that “bubbles form and shrink in the oxide. 
It’s like cooking a polenta, where bubbles form and then shrink.” The behavior was so unexpected in part because the oxide is solid and was not expected to have the flexibility to form growing and shrinking bubbles. “This is incredible,” Shao-Horn says. “We think of oxides as brittle,” but in this case the bubbles expand and contract without any fracturing of the material. And in the process of bubble formation, “we are actually generating oxygen gas,” she says. What’s more, the exact frequency of the oscillations that are generated by the forming and bursting bubbles can be precisely tuned, which could be a useful feature for some potential applications. “The magnitude and frequency of the oscillations depend on the pressure” of the vapor in the system, Shao-Horn says. And since the phenomenon also depends on the presence of electron beams, the reaction can be switched on and off at will by controlling those beams. The effect is not just a surface reaction, she says. The water molecules, which become ionized (electrically charged) by the electron beam, actually penetrate deep into the perovskite. “These ions go inside the bulk material, so we see oscillations coming from very deep,” she says. This experiment used the unique capabilities of an “environmental” transmission electron microscope at Brookhaven National Laboratory, part of a U.S. Department of Energy-supported facility there. With this instrument, the researchers directly observed the interaction between the perovskite material, water vapor, and streams of electrons, all at the atomic scale. Despite all the pulsating motion and the penetration of ions in and out of the solid crystalline material, when the reaction stops, the material “still has its original perovskite structure,” Han says. Because this is such a new and intriguing finding, Shao-Horn says, “we still don’t understand in full detail” exactly how the reactions take place, so the research is continuing in order to clarify the mechanisms. 
“It’s an unexpected result that opens a lot of questions to address scientifically.” While the initial experiments used electron beams, Shao-Horn questions if such behavior could also be induced by shining a bright light, which could be a useful approach for water splitting and purification — for example, using sunlight to generate hydrogen fuel from water or remove toxins from water. While most catalysts promote reactions only at their surfaces, the fact that this reaction penetrates into the bulk of the material suggests that it could offer a new mechanism for catalyst designs, she says. In addition to mechanical engineering, Shao-Horn holds joint appointments with the Department of Materials Science and Engineering and the MIT Energy Initiative’s Center for Energy Storage. The research team also included Kelsey Stoerzinger PhD ’16; Vasili Tileli of the Ecole Polytechnique Federale de Lausanne, in Switzerland; and Andrew Gamalski and Eric Stach of Brookhaven National Laboratory, in Upton, New York. The work was supported by the National Science Foundation, the Skoltech-MIT Center for Electrochemical Energy Storage, and the U.S. Department of Energy Office of Science.


News Article | February 15, 2017
Site: cerncourier.com

Ovsat Abdinov, member of the Azerbaijan National Academy of Sciences (ANAS), died on 29 October at the age of 72, after a long illness. He was born in Belokan city, Azerbaijan, graduated from Baku State University in 1966, and defended his PhD thesis in 1972. It is impossible to overstate the impact that Abdinov had on the creation and development of high-energy physics in Azerbaijan. His wide knowledge, inexhaustible energy, organisational talent and eye for young specialists led to the creation of his own school in this field that serves as an example for future generations. Scientifically, Abdinov’s main interest was the theoretical description of hadron-nuclear interaction processes. He was the first to propose a hypothesis of cluster formation in light nuclei, which was later experimentally proven. The laboratory he headed at the ANAS Institute of Physics collaborated initially with the Joint Institute for Nuclear Research (JINR) in Dubna and the Institute of High Energy Physics (IHEP) in Serpukhov, both in Russia, followed by CERN. The creation and expansion of relations between Azerbaijan and CERN paved the way for the participation of Azerbaijan scientists in the LHC, but this did not interrupt connections with Dubna: Abdinov was a staff member of JINR, deputy authorised representative of the government of the Azerbaijan Republic in JINR, and a member of the JINR Scientific Council. Azerbaijan’s first Worldwide LHC Computing Grid segment also owes its creation to Abdinov. Abdinov was a famous scientific representative of the Azerbaijan intelligentsia. He was an organiser and invited speaker at international conferences, a presenter of high-level reports and the winner of numerous research grants both in the former Soviet Union and in Azerbaijan. He dedicated almost 20 years of his scientific activity to investigations carried out within the ATLAS collaboration. 
We hope that his work will be continued by his scientific heirs and further benefit Azerbaijan high-energy physics. Malcolm Derrick, a long-time leader in the Argonne high-energy physics (HEP) division, passed away on 31 October after a long illness. Born in Hull, UK, in 1933, Malcolm received his BSc and PhD degrees in physics from the University of Birmingham. After working on the cyclotron at Carnegie Tech, he moved to Oxford University in 1962 to help establish a bubble-chamber group working at CERN. In 1963 he moved to Argonne National Laboratory to work on the 12 GeV ZGS synchrotron, then under construction. While he worked on several bubble-chamber experiments with the 30 inch chamber, Malcolm’s main interest was in establishing a programme of neutrino physics using the 12 foot bubble chamber then being built. He was spokesman for the first experiment using the deuterium-filled chamber, which produced several important results, including the first measurement of the axial-vector form factor in muon neutrino–neutron quasi-elastic scattering. This result was verified by later BNL and FNAL experiments. Malcolm served on two occasions as HEP division director and was always a source of good career advice. He enthusiastically supported the division's collaboration with the University of Minnesota to build an underground detector to search for proton decay in the Soudan Mine in Minnesota. This resulted in a rich programme of neutrino physics with a series of multi-kiloton detectors and in new underground laboratories at Soudan, using both atmospheric neutrinos and Fermilab neutrino beams. The important physics produced by the MINOS programme is the direct result of these early experiments. 
After the closure of the ZGS programme, Malcolm initiated Argonne participation in two important experiments: HRS at the PEP collider at SLAC, where he proposed using the superconducting magnet of the 12 foot bubble chamber as the solenoid for the HRS spectrometer, and the ZEUS experiment at the HERA collider at DESY. Malcolm took sabbatical leave at University College London and later at DESY, where he served as physics chairman and oversaw such activities as physics publications. A gifted speaker, he served on several review committees and was a HEPAP member and an active participant in the Snowmass Conferences. He retired in 2006. Besides being a brilliant physicist, Malcolm had a knack for entertaining his guests with stories about his life and endless anecdotes about history and philosophy. His spare time was spent reading good books, fine dining and listening to classical music. Malcolm leaves behind his wife Eva and his many children and grandchildren. He will be missed by all who knew and loved him. Russian physicist Valery Dmitrievich Khovanskiy passed away in Moscow on 7 September. A veteran of Russian experimental high-energy physics and long-time leader of the ITEP team in ATLAS, he will be remembered not only as an energetic contributor to the CERN neutrino and LHC programmes, but also as an honest and principled person who loved science and life. Valery was born in Sverdlovsk in the former USSR, and received his PhD (for the study of cumulative effects in πN-interactions) at the Institute of Theoretical and Experimental Physics (ITEP, Moscow) in 1969. From then on, his main scientific interests lay in neutrino physics, novel particle-detection methods and hadron-collider physics. In the first Russian accelerator neutrino experiment at the Serpukhov 70 GeV proton synchrotron (IHEP-ITEP, 1970–1978), Valery led the detector construction and studied neutrino and antineutrino interactions to validate the then very young quark-parton model.
In the late 1970s, Valery joined the CERN experimental neutrino programme at the SPS and PS, and became one of the senior scientists of the CHARM, PS-181, CHARM-2 and CHORUS experiments, devoted to the systematic study of neutral currents and the search for new particles and neutrino oscillations. From 1990 onwards, he participated in the ATLAS experiment. His group was active in the preparation of the Letter of Intent, working on the concept of radiation-resistant forward calorimeters, and, from 1995 to 2009, worked on the construction and commissioning of the ATLAS liquid-argon forward calorimeters, providing the major part of the tungsten electrodes. From 1995 to 2012, Valery was the leader of the neutrino-physics laboratory at ITEP. He served on the LHCC from 1992 to 1994 and for a long period on the Russian government’s commission on fundamental research. He was also one of the founders and lecturers of the famous ITEP Winter School of Physics. Valery had a vivid individuality and was invariably good-humoured. His many pupils, colleagues and friends admired him and he will be very much missed. Edmund (Ted) Wilson, a well-known figure in the world of particle accelerators and former director of the CERN Accelerator School (CAS), died after a short illness on 3 November. The son of a schoolteacher in Liverpool, UK, he graduated in physics at the University of Oxford in 1959 and immediately joined the nearby Rutherford Appleton Laboratory. His first stay at CERN was in 1962–1963 and he returned in 1967 as a fellow, working in Werner Hardt’s group on the design of the booster for the new large synchrotron: the “300 GeV” machine, later to become the Super Proton Synchrotron (SPS). He became the right-hand man of John Adams in 1969, helping him to prepare the project for approval by CERN Council, which was granted in 1971. He became one of the first staff members of the new “300 GeV laboratory” set up for the construction of the SPS.
In 1973–1974, at the request of Adams, Ted spent a sabbatical year at Fermilab to work on the commissioning of the “main ring”, a machine very similar to the SPS. The lessons he learnt there would prove essential for the smooth commissioning of the SPS, for which he was responsible a few years later. Following the approval in 1978 of the bold proposal of Carlo Rubbia to turn the SPS into a part-time proton–antiproton collider, Ted started working on how to convert the machine from a synchrotron to a storage ring. He later worked on the design and construction of CERN’s antiproton complex: first the antiproton accumulator, to which a second ring, the antiproton collector, was later added. Ted was a natural and gifted teacher. During the days of SPS construction he ran a series of courses on accelerator theory for members of the 300 GeV laboratory, which evolved into the book An Introduction to Particle Accelerators. Following his appointment as CAS director in 1992, he was responsible for organising 25 schools, in addition to special schools in India, China and Japan. He also coauthored a fascinating book on the history of particle accelerators and their applications: Engines of Discovery: A Century of Particle Accelerators. On his retirement, Ted renewed his association with Oxford University by becoming a guest professor at the John Adams Institute of Accelerator Physics, where he taught and supervised students. He helped to bring on a new generation of machine builders. Ted Wilson will be sorely missed by the world’s accelerator community. He will always be remembered for his impish smile and his dry sense of humour. He is survived by his wife Monika, his three children and five grandchildren.


News Article | October 28, 2016
Site: www.prweb.com

Kitware will deliver a presentation providing exclusive insight into the largest-known in situ simulation run at its booth (3437) at The International Conference for High Performance Computing, Networking, Storage and Analysis (SC16). The presentation will be given twice, beginning on Monday, November 14, 2016, at 7:30 p.m. MDT and on Tuesday, November 15, 2016, at 11:00 a.m. MDT. Kitware was part of a team that, working with collaborators, completed the run earlier this year. As part of the project, the team aims to create an infrastructure that can execute user-written algorithms across different in situ software implementations. The particular implementation that the team used for the largest-known in situ simulation run combined ParaView Catalyst and SENSEI. “ParaView Catalyst provides a lightweight ParaView server library, and SENSEI offers an interface that transfers analysis code among in situ infrastructures,” said Andrew Bauer, Ph.D., the lead developer of ParaView Catalyst at Kitware. “ParaView Catalyst and SENSEI streamline the simulation pipeline. Together, they enabled us to complete a larger run than ever before.” For the run, ParaView Catalyst and SENSEI performed in situ analysis of the Parallel Hierarchic Adaptive Stabilized Transient Analysis (PHASTA) simulation code. The run employed a 6.3 billion-cell unstructured grid and used 32,768 of the 49,152 nodes on Mira, an IBM Blue Gene/Q supercomputer at the Argonne Leadership Computing Facility at Argonne National Laboratory. With 1,048,576 Message Passing Interface (MPI) processes on over 500,000 cores, the run quadrupled the size of the previous largest-known in situ simulation run. While the previous run also analyzed PHASTA output with ParaView Catalyst, it did not use SENSEI. “With ParaView Catalyst, users can specify which visualization and analysis capabilities of ParaView they seek to implement,” Bauer said.
“Now, with SENSEI, users can better leverage in situ infrastructures to reduce runtime.” The total runtime for the simulation clocked in at 653 seconds, of which in situ computation accounted for 13 percent. During SC16, Kitware will address how information gleaned from the simulation run guided the development community to update ParaView. With these updates, the team projects that the total runtime for the same simulation would now come to 601 seconds, with in situ computation falling to 5.6 percent of the total. The SC16 presentation will complement a paper that team members wrote with collaborators for the conference. The paper, “Performance Analysis, Design Considerations, and Applications of Extreme-scale In Situ Infrastructures,” reviews in situ computation and shares additional results of the largest-known in situ simulation run. In addition to Bauer, the authors of the paper include Utkarsh Ayachit, Earl P. N. Duque, Greg Eisenhauer, Nicola Ferrier, Junmin Gu, Kenneth E. Jansen, Burlen Loring, Zarija Lukić, Suresh Menon, Dmitriy Morozov, Patrick O’Leary, Reetesh Ranjan, Michel Rasquin, Christopher P. Stone, Venkat Vishwanath, Gunther H. Weber, Brad Whitlock, Matthew Wolf, K. John Wu and E. Wes Bethel. Bethel will discuss the paper at the State-of-the-Practice: System Characterization and Design session at the conference. The session will take place on Thursday, November 17, 2016, from 4:00 p.m. to 4:30 p.m. MDT in room 355-D of the Salt Palace Convention Center. To learn more about the largest-known in situ simulation run, SC16 attendees can visit the Kitware booth during the conference exhibition from November 14 to November 17, 2016. Along with the exhibition, Kitware will participate in a number of conference activities that highlight its technology and expertise in high-performance computing and visualization. For details on scheduled activities, please refer to the event listing on the Kitware blog.
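The run sizes and timings quoted above are internally consistent, as a short back-of-the-envelope check shows. Note that the 16-cores-per-node figure for Mira's IBM Blue Gene/Q nodes is an assumption used for illustration; the article itself only states the node, rank, and core totals:

```python
# Sanity check of the figures quoted in the article above.
nodes_used = 32_768
cores_per_node = 16                       # Blue Gene/Q node layout (assumed)
mpi_ranks = 1_048_576

cores_used = nodes_used * cores_per_node  # 524,288 -> "over 500,000 cores"
ranks_per_core = mpi_ranks // cores_used  # 2 MPI ranks per core

# In situ overhead implied by the quoted percentages.
total_s, in_situ_frac = 653, 0.13
in_situ_s = round(total_s * in_situ_frac, 1)        # ~85 s of the 653 s run

proj_total_s, proj_frac = 601, 0.056
proj_in_situ_s = round(proj_total_s * proj_frac, 1)  # ~34 s after the updates

print(cores_used, ranks_per_core, in_situ_s, proj_in_situ_s)
```

In other words, the projected ParaView updates would cut the absolute in situ cost from roughly 85 seconds to roughly 34 seconds for the same simulation.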
To set up a time to meet with Kitware team members at SC16, please email kitware(at)kitware.com. This material is based upon work supported by the U.S. Department of Energy, Office of Science, under Award Number DE-SC0012387. This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. About Kitware Kitware is an advanced technology, research and open-source solutions provider for research facilities, government institutions and corporations worldwide. Founded in 1998, Kitware specializes in research and development in the areas of HPC and visualization, medical imaging, computer vision, data and analytics and quality software process. Among its services, Kitware offers consulting and support for high-quality software solutions. Kitware is headquartered in Clifton Park, NY, with offices in Carrboro, NC; Santa Fe, NM and Lyon, France. More information can be found on http://www.kitware.com.


News Article | November 10, 2016
Site: www.eurekalert.org

Researchers at Johns Hopkins report they have laid the foundation to develop novel antibiotics that work against incurable, antibiotic-resistant bacteria like those that cause tuberculosis by targeting an enzyme essential to the production and integrity of bacterial cell walls. The findings, they say, suggest that antibiotic drugs specifically targeting the recently discovered LD-transpeptidase enzyme, which is needed to build bacterial cell walls in some bacteria, could potentially cure many antibiotic-resistant infections. An additional implication of the research, the Johns Hopkins team says, is that drugs targeting the enzyme could offer quicker, cheaper and more easily accessible treatment against tuberculosis, a disease that still kills more people worldwide than any other infection, according to the Centers for Disease Control and Prevention. A summary of the findings was published on Nov. 7 in Nature Chemical Biology. "The complexity of TB treatment and growing prevalence of antibiotic resistance is a serious threat to public health," says Gyanu Lamichhane, Ph.D., associate professor of medicine at the Johns Hopkins University School of Medicine. His team joined with the research team of Craig Townsend, Ph.D., the Alsoph H. Corwin Professor of Chemistry at The Johns Hopkins University's Krieger School of Arts and Sciences, to tackle this complex issue. "Our research offers steps toward the design of new antibiotics that attack a previously untargeted bacterial enzyme," says Townsend. At the root of their investigation, Lamichhane says, is the fact that more than half of antibiotics prescribed today are of a class called beta-lactams, which work by interrupting the function of the DD-transpeptidase enzyme that creates bacterial cell walls. Without it, bacteria quickly die. However, in 2005, a team of researchers found a second wall-building enzyme, LD-transpeptidase, that allows bacteria like the ones that cause TB to survive antibiotic treatments.
"We looked at the structure of LD-transpeptidase, thought about how it works and started making new compounds that could be used against it," says Townsend. Pankaj Kumar, Ph.D., postdoctoral fellow in infectious diseases at the Johns Hopkins University School of Medicine, began the research in the new study by extracting LD-transpeptidase from many species of bacteria and examining its detailed molecular structure with a sophisticated imaging system known as protein X-ray crystallography using the Advanced Photon Source at the Argonne National Laboratory in Chicago. By analyzing the enzyme's structure, Johns Hopkins researchers were able to design new compounds in the carbapenem group, a subclass of the beta-lactam antibiotics that bind to the LD-transpeptidase wall-building enzyme and stop its function. In live bacterial cultures, the carbapenems were shown by Lamichhane's and Townsend's groups to stop the enzyme's wall-building activity. The new compounds were even effective against the ESKAPE pathogens, a group of six bacterial species that the Centers for Disease Control and Prevention has identified as a threat because of their propensity for developing antibiotic resistance. Following these successes, Amit Kaushik, Ph.D., a postdoctoral fellow in infectious diseases at the Johns Hopkins University School of Medicine, tested two carbapenems in vivo against TB in mice infected with TB. Researchers infected mice with tuberculosis bacteria and separated them into different treatment groups. The rodents' lungs were sampled periodically over a period of three weeks, and the results showed that even without use of classic TB antibiotic treatments, the new carbapenems, specifically biapenem, cured TB infection in mice. "Our data show that the carbapenems successfully treated TB infections alone by attacking the right enzyme," says Lamichhane. 
Townsend and Lamichhane say the focus of their research is now on creating variations of their original compound that are designed to target specific bacteria. Many commonly prescribed antibiotics today work on a broad spectrum of bacterial species, meaning that in addition to killing off bad bacteria, they also destroy the friendly bacteria our bodies need to function normally and whose destruction can cause dangerous side effects. Lamichhane believes that the future of antibiotic treatments relies on our ability to be good antimicrobial stewards and treat specific bacteria without affecting our bodies' natural microbiome. Not only will this cut down on antibiotic side effects, but it will also slow the development of antibiotic resistance in the bacterial species not being targeted. The researchers are now in the process of initiating clinical trials to test the safety and efficacy of some of these new compounds. Antibiotic resistance has been an ever-present threat since the discovery of penicillin in 1928. Scientists and physicians have historically kept up with resistant bacteria through the frequent introduction of new antibiotic treatments. However, the research and development of antibiotics plummeted in the 1980s as other drugs became more profitable to produce. Because of the decreased incentive to invest in new antibiotics and the liberal use of antibiotics we already have, many bacterial species have quickly outpaced our ability to treat them. The Centers for Disease Control and Prevention estimates that annually, 2 million illnesses and 23,000 deaths are caused by antibiotic-resistant bacteria in the United States alone, and that 70 percent of hospital-acquired infections are now antibiotic-resistant. Other authors on this study include Evan P. Lloyd, Trevor A. Zandi, Rohini Mattoo, Nicole C. Ammerman and Drew T. Bell of The Johns Hopkins University; Shao-Gang Li, Joel S. Freundlich and Alexander L. 
Perryman of Rutgers University Medical School; Sean Ekins of Collaborations in Chemistry; and Stephan L. Ginell of the Argonne National Laboratory. This study was funded by the National Institutes of Health (R21AI111739 and DP2OD008459) and the U.S. Department of Energy Office of Biological and Environmental Research (contract number DE-AC02-06CH11357).


News Article | February 17, 2017
Site: www.eurekalert.org

U.S. Department of Energy (DOE) high-performance computer sites have selected a dynamic fusion code, led by physicist C.S. Chang of the DOE's Princeton Plasma Physics Laboratory (PPPL), for optimization on three powerful new supercomputers. The PPPL-led code was one of only three codes, out of more than 30 science and engineering programs, selected to participate in Early Science programs on all three new supercomputers, which will serve as forerunners for even more powerful exascale machines that are to begin operating in the United States in the early 2020s. The PPPL code, called XGC, simulates the behavior of the ions, electrons and neutral atoms in the transport barrier region -- or "pedestal" -- between the ultra-hot core of the plasma that fuels fusion reactions and the cooler and turbulent outer edge of the plasma. The pedestal must be high and wide enough to prevent damage to the divertor plate that exhausts heat in the doughnut-shaped tokamaks that house the fusion reactions. "How to create a high edge pedestal without damaging the divertor wall is the key question to be answered," said Chang. "That is a prerequisite for achieving steady state fusion." Among the team of nationwide experts developing this program are PPPL physicists Seung-Ho Ku, Robert Hager and Stephane Ethier. Selection of the PPPL code could help ready it for exascale development. "Computer architecture is evolving rapidly and these new pre-exascale computers have features that are quite different from some of the earlier petascale supercomputers," said Amitava Bhattacharjee, head of the Theory Department at PPPL. Petascale machines operate in petaflops, or one million billion (10^15) floating point operations per second. Bhattacharjee heads a PPPL-led Exascale Computing Project that will integrate the XGC code with GENE, a code developed at the University of California, Los Angeles, to create the first simulation of a complete fusion plasma.
Exascale supercomputers will perform exaflops, or a billion billion (10^18) floating point operations per second. The three new pre-exascale supercomputers are: Cori, now fully installed at the National Energy Research Scientific Computing Center (NERSC) at the Lawrence Berkeley National Laboratory. Cori, named for biochemist Gerty Cori, the first American woman to win a Nobel Prize in science, has a theoretical peak speed of 30 petaflops on scientific applications using Intel Xeon "Haswell" and Xeon Phi "Knights Landing" processor nodes. Also selected to participate in Cori's NERSC Exascale Science Applications Program (NESAP) is the PPPL-led M3D-C1, an extended magnetohydrodynamics (MHD) code focused on the simulation of plasma disruptions, led by physicist Stephen Jardin with support from physicists Joshua Breslau, Nate Ferraro and Jin Chen. Two more PPPL-led codes will participate in the Cori NERSC program, in addition to the 20 previously selected that included XGC and M3D-C1. These are the GTC-P and GTS codes, which model plasma turbulence in the plasma core and are headed by physicists William Tang and Stephane Ethier. The principal developer of the GTS code is PPPL physicist Weixing Wang. The GTC-P code is PPPL's version of the GTC code led by the University of California, Irvine. Summit, to be operational at the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory in 2018. Summit features a hybrid architecture consisting of IBM Power 9 processors and multiple NVIDIA Volta graphics processing units and will be capable of performing at least 200 petaflops for a wide range of applications. The facility's Center for Accelerated Application Readiness (CAAR) program has selected 13 projects to optimize their application codes and demonstrate their effectiveness on Summit.
Aurora, scheduled to be deployed in 2018 at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory, will be composed of third-generation Intel Xeon Phi "Knights Hill" many-core processors. Ten projects have been selected for the ALCF Early Science Program; Aurora is expected to be capable of performing up to 200 petaflops on a wide range of scientific applications. PPPL, on Princeton University's Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas -- ultra-hot, charged gases -- and to developing practical solutions for the creation of fusion energy. The Laboratory is managed by the University for the U.S. Department of Energy's Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
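The peta and exa prefixes used throughout the article above differ by a factor of a thousand. A short sketch of what that factor means for time-to-solution; the workload size here is an arbitrary illustration, not a figure from the article:

```python
PETA = 10**15  # petaflops: one million billion floating point ops per second
EXA = 10**18   # exaflops: a billion billion floating point ops per second

ops = 10**21   # hypothetical workload: a sextillion floating-point operations

t_petascale = ops / PETA         # 1,000,000 s (about 11.6 days) at 1 petaflop
t_cori_peak = ops / (30 * PETA)  # about 9.3 hours at Cori's 30-petaflop peak
t_exascale = ops / EXA           # 1,000 s (under 17 minutes) at 1 exaflop

print(t_petascale, round(t_cori_peak), t_exascale)
```

The same three-orders-of-magnitude jump is why codes such as XGC must be restructured for pre-exascale machines now, ahead of the exascale systems due in the early 2020s.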


News Article | November 8, 2016
Site: www.eurekalert.org

UPTON, NY--Three technologies developed at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory have received 2016 R&D 100 Awards, which annually recognize the 100 most innovative technologies and services of the past year. The Brookhaven Lab winners are the Hard X-ray Scanning Microscope with Multilayer Laue Lens Nanofocusing Optics (a joint award with DOE's Argonne National Laboratory), MoSoy Catalyst, and Nanostructured Anti-reflecting and Water-repellent Surface Coatings. "It is an honor for Brookhaven Lab to be recognized as among the institutions where innovative research and development is taking place," said Brookhaven Director Doon Gibbs. "Our three winners this year illustrate the wide range of scientific discovery happening at Brookhaven. My congratulations go out to all the innovative researchers being recognized by these awards." Brookhaven's custom-built x-ray microscope has advanced x-ray optics called multilayer Laue lenses for imaging a broad range of materials, with a spatial resolution better than 15 nanometers--about 50,000 times smaller than a grain of sand. The microscope is installed at the National Synchrotron Light Source II (NSLS-II), a DOE Office of Science User Facility, where the user community can perform comprehensive structural and chemical studies on everything from soil samples to biological proteins and battery materials. The only one of its kind, this microscope is also the recipient of a 2016 Microscopy Today Innovation Award. Brookhaven physicists Evgeny Nazaretski and Yong Chu led the project. MoSoy Catalyst is a novel catalyst derived from a low-cost, earth-abundant metal--molybdenum (Mo)--and renewable soybeans (Soy). This catalyst produces hydrogen in an environmentally friendly, cost-effective way, eliminating the need for expensive metal catalysts such as platinum to speed up the rate at which water is split into hydrogen and oxygen.
Through this water-splitting reaction, hydrogen can be released as a gas and used directly as a fuel or to produce energy-rich hydrocarbons. The project was led by Brookhaven chemists James Muckerman, Etsuko Fujita, and Kotaro Sasaki, with the participation of Wei-Fu Chen (former postdoctoral research associate) and twin sisters Shweta and Shilpa Iyer (former high-school summer students). The nanotextured "coatings" developed at Brookhaven's Center for Functional Nanomaterials (CFN), another DOE Office of Science User Facility, are actually nanoscale features etched into the surfaces of silicon, glass, and some plastics. These textured surfaces absorb all wavelengths of light from any angle and also repel water extremely efficiently, with the water droplets carrying away particles of dirt. The method for achieving these textured surfaces leverages the ability of materials called block copolymers to self-assemble into ordered patterns that scientists can then use as templates for etching surfaces. The anti-reflective and water-repellent materials could enable the production of highly efficient self-cleaning solar cells, glare-free cell phone screens, and perfectly transparent windows. Brookhaven physicist and current CFN Director Charles Black led this project. The award winners were announced on November 3, 2016 at a ceremony held during the 54th Annual R&D 100 Conference at the Gaylord National Resort & Convention Center in Oxon Hill, MD. Winners are selected annually by R&D Magazine editors--who are guided by an independent panel of more than 50 judges--and represent industries, federally funded research institutions, national labs, and university research centers worldwide. Awards are given across five categories: analytical/test, IT/electrical, mechanical devices/materials, process/prototyping, and software/services. Researchers from DOE laboratories won 32 of the 100 awards this year. 
"The Department of Energy's National Laboratories are a global technology powerhouse that harness innovation to create jobs, support new industries, and help ensure America's competitive edge in areas of national security and science," said Secretary of Energy Ernest Moniz. "These awards demonstrate the incredible value that our National Laboratories continue to provide by using basic science to address tough challenges, from cryptography to clean energy, X-ray imaging to environmental remediation." Since 1987, Brookhaven Lab has won more than 30 R&D 100 Awards for technologies including innovative new materials, chemistry breakthroughs, biomedical products, consumer items, and testing equipment, and for advances in high-energy physics. Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | November 9, 2015
Site: www.rdmag.com

A team led by scientists at the U.S. Dept. of Energy (DOE)’s SLAC National Accelerator Laboratory combined powerful magnetic pulses with some of the brightest x-rays on the planet to discover a surprising 3-D arrangement of a material’s electrons that appears closely linked to a mysterious phenomenon known as high-temperature superconductivity. This unexpected twist marks an important milestone in the 30-year journey to better understand how materials known as high-temperature superconductors conduct electricity with no resistance at temperatures hundreds of degrees Fahrenheit above those of conventional metal superconductors but still hundreds of degrees below freezing. The study was published in Science Express. The study also resolves an apparent mismatch in data from previous experiments and charts a new course for fully mapping the behaviors of electrons in these exotic materials under different conditions. The researchers’ ultimate goal is to aid the design and development of new superconductors that work at warmer temperatures.

“Totally unexpected” physics

“This was totally unexpected, and also very exciting. This experiment has identified a new ingredient to consider in this field of study. Nobody had seen this 3-D picture before,” said Jun-Sik Lee, a SLAC staff scientist and one of the leaders of the experiment conducted at SLAC’s Linac Coherent Light Source (LCLS) x-ray laser. “This is an important step in understanding the physics of high-temperature superconductors.” The dream is to push the operating temperature for superconductors to room temperature, he added, which could lead to advances in computing, electronics and power grid technologies.
There are already many uses for standard superconducting technology, from MRI machines that diagnose brain tumors to a prototype levitating train, the CERN particle collider that enabled the Nobel Prize-winning discovery of the Higgs boson and ultrasensitive detectors used to hunt for dark matter, the invisible constituent believed to make up most of the mass of the universe. A planned upgrade to the LCLS, known as LCLS-II, will include a superconducting particle accelerator.

The new wave in superconductivity

The 3-D effect that scientists observed in the LCLS experiment, which occurs in a superconducting material known as YBCO (yttrium barium copper oxide), is a newly discovered type of “charge density wave.” This wave does not have the oscillating motion of a light wave or a sound wave; it describes a static, ordered arrangement of clumps of electrons in a superconducting material. Its coexistence with superconductivity is perplexing to researchers because it seems to conflict with the freely moving electron pairs that define superconductivity. The 2-D version of this wave was first seen in 2012 and has been studied extensively. The LCLS experiment revealed a separate 3-D version that appears stronger than the 2-D form and closely tied to both the 2-D behavior and the material’s superconductivity. The experiment was several years in the making and required international expertise to prepare the specialized samples and construct a powerful customized magnet that produced magnetic pulses compressed to thousandths of a second. Each pulse was 10-20 times stronger than those from the magnets in a typical medical MRI machine.

A powerful blend of magnetism and light

Those short but intense magnetic pulses suppressed the superconductivity of the YBCO samples and provided a clearer view of the charge density wave effects.
They were immediately followed at precisely timed intervals by ultrabright LCLS x-ray laser pulses, which allowed scientists to measure the wave effects. “This experiment is a completely new way of using LCLS that opens up the door for a whole new class of future experiments,” said Mike Dunne, LCLS director. Researchers conducted many preparatory experiments at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) and Argonne National Laboratory's Advanced Photon Source (APS), which also produce x-rays for research. LCLS, SSRL and APS are DOE Office of Science User Facilities. Scientists from SIMES, the Stanford Institute for Materials and Energy Sciences at SLAC, and from SSRL and LCLS took part in the study. “I’ve been excited about this experiment for a long time,” said Steven Kivelson, a Stanford Univ. physics professor who contributed to the study and has researched high-temperature superconductors since 1987. Kivelson said the experiment sets very clear boundaries on the temperature and strength of the magnetic field at which the newly observed 3-D effect emerges. “There is nothing vague about this,” he said. “You can now make a definitive statement: In this material a new phase exists.” The experiment also adds weight to the growing evidence that charge density waves and superconductivity “can be thought of as two sides of the same coin,” he added.

In search of common links

But it is also clear that YBCO is incredibly complex, and a more complete map of all of its properties is required to reach any conclusions about what matters most to its superconductivity, said Simon Gerber of SIMES and Hoyoung Jang of SSRL, the lead authors of the study. Follow-up experiments are needed to provide a detailed visualization of the 3-D effect, and to learn whether the effect is universal across all types of high-temperature superconductors, said SLAC staff scientist and SIMES investigator Wei-Sheng Lee, who co-led the study with Jun-Sik Lee of SSRL and Diling Zhu of LCLS.
“The properties of this material are much richer than we thought,” Lee said. “We continue to make new and surprising observations as we develop new experimental tools,” Zhu added.


News Article | October 26, 2016
Site: www.nature.com

For more than 20 years, Brian Kobilka worked to create a portrait of a key cell receptor. Sometimes, the slow, steady approach wins. Brian Kobilka was exhausted when he stepped off the 12-hour red-eye flight from China to San Francisco, California, last May. But after a quick nap at home, he headed straight back to the airport and crammed his long frame into another plane, this time bound for Chicago, Illinois. Once there, he drove to the Advanced Photon Source at Argonne National Laboratory, a source of powerful X-ray beams used for analysing protein structures. Kobilka, a biochemist at Stanford University in California, was desperate to see the latest data from his lab's effort to make the first atomic-scale, three-dimensional image of a key cell-surface receptor locked with its protein partner. The image marked the last leg of an intellectual journey that he had started some 20 years before. Nearly every function of the human body, from sight and smell to heart rate and neuronal communication, depends on G-protein-coupled receptors (GPCRs). Lodged in the fatty membranes that surround cells, they detect hormones, odours, chemical neurotransmitters and other signals outside the cell, and then convey their messages to the interior by activating one of several types of G protein. The G protein, in turn, triggers a plethora of other events. The receptors make up one of the largest families of human proteins and are the targets of one-third to one-half of drugs. Working out their atomic structure will help researchers to understand how this central cellular-communication system works, and could help drug-makers to design more effective treatments. The structure and workings of GPCRs have been overriding obsessions of Kobilka for most of his professional life. For much of this, he had little company, as the proteins were widely considered too complex and unwieldy to be stabilized as crystals, a prerequisite for structural analysis by X-ray crystallography. 
But his determination has finally started to pay off. In 2007, his team solved the first high-resolution structure of a GPCR that binds to a hormone1–3; in January this year he did the same for the receptor poised to activate its G-protein4; and last month, he published the results from Chicago, the first structure of any GPCR in the act of turning on its G protein5. The latest accomplishment has many in the field buzzing about a Nobel prize. "But if they do give it to him, it'll be the devil to get him to Stockholm," says Henry Bourne, a professor emeritus at the University of California, San Francisco, who worked on G proteins and has known Kobilka since the 1980s. Kobilka loathes the limelight, and is renowned as much for his shy modesty as his ability to crack seemingly impossible protein structures. When I met Kobilka at his Stanford office, he accepted a quick handshake, averting his eyes, and reluctantly offered me a seat across from him at his desk. He stared anxiously at the glowing red light on my voice recorder. He is "desperately fearful" of talking to the press, he says, his voice breaking, and only agreed to talk because he "wanted to make sure that the contributions of my lab and collaborators are recognized". He is so fearful, in fact, that it is almost impossible to draw much out from him. When asked why he is so captivated by GPCRs, Kobilka struggles for an answer. "I'm just inherently fascinated by these proteins. I don't know. I just want to know how they work," he says. The only time he becomes animated is when describing the conformation of the receptor gripping its G protein. "It's a fantastic structure," he says, a grin across his face. "It's just amazing. I really enjoy talking about it." Bourne says that "Brian is a fascinating character. He's so driven, and enormously intense. But you don't get any feeling of — nor is there any — self-aggrandizing, pushiness, show-offiness. There is absolutely none of that. 
It's quite refreshing and rare." Perhaps Kobilka's rare reticence and determination are exactly what make him so well-suited for the task of protein crystallization — a pursuit in which a complete atomic structure is the finish line, and there are few intermediate rewards. "What he did was, step by step, sneak up on it to make it work," says Bourne. Kobilka is from a small, rural community in central Minnesota. His grandfather and father were bakers; his mother decorated the cakes. Kobilka studied biology and chemistry at the University of Minnesota in Duluth, where he met his future wife, Tong Sun Thian, in biology class. "He always topped the curve," she says. "But he was also so modest and quiet, you'd never know." Kobilka got his first taste of research when he and Tong Sun worked in a developmental biology lab at the university, and built a tissue-culture hood for the lab using scrap heavy plastic from his father's bakery. He enjoyed research, but went to medical school anyway — in rural Minnesota, he says, people who were interested in biology became doctors. Studying at Yale University in New Haven, Connecticut, Kobilka developed an interest in intensive-care medicine and the drugs used in life-or-death situations that act on GPCRs — particularly the receptors for adrenaline and noradrenaline, which open the airways and boost heart rate. In 1984, after his residency, Kobilka applied for a fellowship with Robert Lefkowitz at Duke University in Durham, North Carolina. It was the premier lab studying receptors for adrenaline, which had become a model system for all hormone receptors. When Kobilka joined, the lab was just starting to think about how to clone the gene for the β-adrenergic receptor (βAR) and determine its genetic sequence. But the receptor was produced in such small amounts that the team was only able to collect enough protein to work out a few scraps of its likely genetic sequence.
Kobilka then displayed "the first of many flashes of technological innovation", says Lefkowitz: he decided to construct a library of mammalian genomic sequences and screen it with the scraps of sequences they had. This would pull out longer clones that could be pieced together to reveal the full sequence. The plan worked. And when the team stitched together the receptor sequence6, it held a surprise: several strings of amino acids that are typically found in cell membranes showed that the receptor snaked through the membrane seven times. It was just like rhodopsin, the light-detecting receptor in the retina that was also known to activate a G protein. At the time, there was no reason to think that these receptors were going to look the same — especially when one was turned on by light, and the other by a hormone. "It was a real eureka moment," recalls Lefkowitz. At the time, about 30 proteins were known to turn on G proteins. "We realized, oh my god, they're all going to look like this. It's a whole family!" That family became known as seven-transmembrane receptors, or GPCRs — and is now known to have nearly 800 members in humans. Kobilka describes the watershed cloning project with humility. "It was exciting just to be involved," he says. "It was the result of a lot of heroic work by the Lefkowitz lab." Sometimes that self-effacement has held Kobilka back. After he left Duke, he was interviewed for a faculty position in the pharmacology department at the University of California, San Francisco. "We had him and another guy," says Bourne, who was then head of the department. "It was very clear to everyone that Brian was smart, no question. But he didn't sparkle in the slightest. He was this shy, pale, Scandinavian-looking guy. I thought, who is this guy? He's very strange, so modest, so quiet." The other guy got the job, and Stanford snapped up Kobilka, says Bourne. "We should have hired both of them." After his success with βAR, Kobilka was hooked.
He wanted to see what the receptor looked like in three dimensions using X-ray crystallography, in which a beam of X-rays is fired at a protein crystal and the resultant diffraction pattern is used to reveal the arrangement of its atoms. It was an audacious goal. To produce an intelligible X-ray diffraction pattern, Kobilka first needed to crystallize the receptor — a formidable process of packing millions of identical copies of protein so tightly that they form a solid that looks like a microscopic shard of glass. Working out the conditions that will allow a protein to crystallize can take years, and membrane proteins such as GPCRs are the hardest of all: they must be coaxed out of the membrane intact, but it is the membrane that holds them in shape. GPCRs are also constantly shifting into various states, and most are expressed in very low quantities. To collect enough protein, Kobilka would need to express the βAR at about 100–1,000 times the levels at which it is normally produced in a cell. At that time, no one was close to crystallizing a GPCR and few were trying. Crystallizing the receptor became Kobilka's pet project because he thought it was too high risk for a postdoc or graduate student. "We used to joke that he'd come running into the lab from teaching or a meeting with a column in his hand, trying to do a binding assay to see if his latest purification had worked before he went home," says Mark von Zastrow, one of Kobilka's first postdocs, who now studies GPCR trafficking at the University of California, San Francisco. Kobilka had reason to rush through the workday — he had two young children at home, Tong Sun was at medical school and they had a "tremendous" mortgage. To make ends meet, he moonlighted as a doctor in the emergency department at weekends. Von Zastrow recalls poking fun at Kobilka for his seemingly futile crystallography project. "He told us, 'You'll see.
The crystals are going to be so big, I'm going to make a ring for Tong Sun out of them.'" As the years rolled by, Kobilka's lab was carrying out various biochemistry and biophysics experiments aimed at getting to know the β AR more intimately, and he was inching forwards in expressing and purifying the protein. But he wasn't noticeably closer to getting the structure. "People viewed what he was doing as dotting i's and crossing t's," says Bourne. "And for a while it was sort of that. He wasn't getting published in fancy journals." The team showed that GPCRs have big, floppy loops inside and outside the cell, and that the receptor writhes and squirms, adopting a variety of levels of activation7. The work only made crystallization look more impossible. In the meantime, another GPCR crystallography effort raced ahead. In 2000, Krzysztof Palczewski and his postdoc Tetsuji Okada, then at the University of Washington in Seattle, published the crystal structure of rhodopsin, the light-sensing GPCR in the retina8. It was a significant accomplishment, but of little help to Kobilka. Rhodopsin is plentiful — a bucketful of cow eyes from the slaughterhouse gives enough pure protein for crystallography. It is also simpler and more stable than other GPCRs, and its structure was thought to be different. In 2001, Kobilka got disheartening news. His main funding, from the Howard Hughes Medical Institute in Chevy Chase, Maryland, would not be renewed after it ran out in 2003. His lab began to struggle financially, and went "deep into the red" to support his expensive crystallization crusade. "I don't think I ever considered giving up," says Kobilka. "I admit that it was frustrating at times, but I enjoyed the challenge and I wanted to know the answer." Kobilka says that one of his friends "best described my persistence as 'irrational optimism'". Finally, in late 2004, his group managed to grow tiny crystals — too small to be analysed at Stanford's synchrotron facility. 
Gebhard Schertler, a crystallographer then at the Medical Research Council Laboratory of Molecular Biology in Cambridge, UK, suggested that Kobilka take his samples to the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, which had the tightly focused beamline needed to analyse such small crystals. "We were running out of money all the time," says Schertler, now at the Paul Scherrer Institute in Villigen, Switzerland. "The measurements were all on my grants." The crystals diffracted to a resolution of around 20 ångströms — so low that there was no discernible image. A resolution of about 4 Å is needed to see the organization of individual atoms. Still, says Kobilka, "it was very exciting. In part it was because I was very naive. I thought we'd be able to get to 3 Å in no time." He finally had the confidence to hire postdocs for the crystallography project. In 2005, he also received a financial lifeline, winning two funding streams from the US National Institutes of Health in Bethesda, Maryland. But frustration followed — the team couldn't get the crystals to grow any bigger or diffract any better. The receptor's changeable activation states and floppy segments — particularly one restless loop on the intracellular side of the receptor — made it very difficult to trap all the proteins in an identical conformation. The team realized that they'd have to do something radical: chop off the loose ends, and either anchor the loop in place with an antibody or replace it altogether with a protein known to crystallize well. The antibody project, led by postdoc Søren Rasmussen, came together first. As before, they first took the crystals to Schertler, at the ESRF. "It was the greatest thing," says Schertler. "Brian and I and our team were at the synchrotron. We were sitting in front of the machine when the measurements came in. When we first saw the pattern for 3.5 Å, everyone in the room jumped up. It was a very happy moment. That's what makes scientists go. 
That moment, when we know we've found a new continent." The structure, published in Nature1, was the second crystal structure of a GPCR, after rhodopsin. Scientists, however, reserve their accolades for the fusion-protein project, which wasn't far behind. Postdoc Daniel Rosenbaum found that one protein — T4 lysozyme (T4L) — looked promising for fusing to the receptor in place of the loop. And Kobilka had got in touch with Ray Stevens at the Scripps Research Institute in La Jolla, California, and his new postdoc Vadim Cherezov, who had been optimizing a fatty scaffold to lock the membrane proteins in place for crystallization. T4L and the fatty scaffold turned out to be the winning combination. Just days after the Nature paper, Kobilka and Stevens published back-to-back papers in Science2,3 that solved the structure of the engineered βAR to 2.8 Å. The trio of papers marked a milestone in structural biology, and sharply intensified the competition in what was by now a fast and aggressive field. Stevens became one such competitor, and his lab is now powering through further GPCR structures. But Kobilka's heart was set on a different goal. The GPCR structures had been snapshots of receptors in an inactive state. To really understand the receptor's workings, researchers needed to see it as it was being activated by a ligand and turning on the G protein. This project was even more technically daunting than the last. The protein complex was too big to hold in the fatty scaffold; the G protein kept falling off; and this time, the extracellular part of the receptor wouldn't sit still for crystallization. "This was difficult enough that I wasn't sure we were ever going to get it," says Kobilka. "I thought it might be my retirement project." He also knew that several other labs, particularly those studying rhodopsin, were breathing down his neck.
Kobilka reached out to all manner of experts for help, including Roger Sunahara, an expert on G proteins at the University of Michigan in Ann Arbor. The various groups developed a detergent for stabilizing the receptor with its G protein; a lipid scaffold that could support the complex; and an antibody that could hold it together. And Rasmussen was relentless, testing thousands upon thousands of crystallization conditions and ways to engineer the protein. One morning last May, Kobilka took a quick peek down the microscope at Rasmussen's latest effort to produce crystals. "They were already bigger than any other crystals we'd grown of the complex," says Kobilka. "It was extremely exciting. I didn't know if they would diffract well or not, but I had a good feeling." But a planned trip to China meant that he couldn't be there for the first X-ray images. "As soon as we got to the hotel in Beijing, he got online," says Tong Sun. When he found out it was solved, "he was on cloud nine. You could tell he just wanted to go back." The new picture, solved to a resolution of 3.2 Å, reveals a tangled molecular threesome: βAR with a ligand clasped at one end and the G protein nestled up against the other. "There's definitely been a race, and in my opinion, Kobilka has triumphed," says Chuck Sanders, a structural biologist at Vanderbilt University in Nashville, Tennessee. "Hopefully the field will spread out a little. This complex was the prize, and that's been done now." Kobilka, of course, diverts credit to his collaborators and the "unsung heroes" of his lab, anxious that everyone should see their names in this story. His allegiance to the receptors is as resolute as ever. "The more we learn" about these proteins, he says, "the more complicated and fascinating they are".
He is already working to understand more of the complications: what the various active states of the receptor look like, why different receptors couple to different G proteins and what happens when different ligands bind to the same receptor. He is also using other techniques, such as electron microscopy and nuclear magnetic resonance, to understand how GPCRs flex and move. "The job isn't done yet," he says. Maybe not — but few would doubt now that Kobilka will finish it. "Brian ultimately reaches his goals," says Lefkowitz. "Sometimes it takes 15 years, but he gets there."


News Article | February 3, 2016
Site: www.rdmag.com

FAYETTEVILLE, AR — An international group of physicists led by the University of Arkansas has created an artificial material with a structure comparable to graphene. “We’ve basically created the first artificial graphene-like structure with transition metal atoms in place of carbon atoms,” said Jak Chakhalian, professor of physics and director of the Artificial Quantum Materials Laboratory at the U of A. In 2014, Chakhalian was selected as a quantum materials investigator for the Gordon and Betty Moore Foundation. His selection came with a $1.8 million grant, a portion of which funded the study. Graphene, discovered in 2004, is a one-atom-thick sheet of graphite. Graphene transistors are predicted to be substantially faster and more heat-tolerant than today’s silicon transistors and may result in more efficient computers and the next generation of flexible electronics. Its discoverers were awarded the Nobel Prize in physics in 2010. The U of A-led group published its findings this week in Physical Review Letters, the journal of the American Physical Society, in a paper titled “Mott Electrons in an Artificial Graphene-like Crystal of Rare Earth Nickelate.” “This discovery gives us the ability to create graphene-like structures for many other elements,” said Srimanta Middey, a postdoctoral research associate at the U of A who led the study. The research group also included U of A postdoctoral research associates Michael Kareev and Yanwei Cao, doctoral student Xiaoran Liu and recent doctoral graduate Derek Meyers, now at Brookhaven National Laboratory. Additional members of the group were David Doennig of the University of Munich; Rossitza Pentcheva of the University of Duisburg-Essen in Germany; Zhenzhong Yang, Jinan Shi and Lin Gu of the Chinese Academy of Sciences; and John W. Freeland and Phillip Ryan of the Advanced Photon Source at Argonne National Laboratory near Chicago. The research was also partially funded by the Chinese Academy of Sciences.


The Trump administration has prioritized repealing the Clean Power Plan (CPP), a set of rules by the U.S. EPA aimed at limiting pollution from power plants. New analysis shows that repealing the rule would cost the U.S. economy hundreds of billions of dollars, add more than a billion tons of greenhouse gases to the atmosphere and cause more than 100,000 premature deaths due to inhaled particulate pollution. Energy Innovation utilized the Energy Policy Simulator (EPS) to analyze the effects of repealing the CPP. The EPS is an open-source computer model developed to estimate the economic and emissions effects of various combinations of energy and environmental policies using non-partisan, published data from the U.S. Energy Information Administration (EIA), U.S. EPA, Argonne National Laboratory, U.S. Forest Service, and U.S. Bureau of Transportation Statistics, among others. The EPS has been peer reviewed by experts at MIT, Stanford University, Argonne National Laboratory, Berkeley National Laboratory and the National Renewable Energy Laboratory. It is freely available for public use through a user-friendly web interface or by downloading the full model and input dataset. Our analysis compared a business-as-usual (BAU) scenario (based on existing policies as of mid-to-late 2016, not including the Clean Power Plan) to a scenario that includes a set of policies that narrowly achieve the Clean Power Plan’s mass-based emissions targets. We find that repealing the CPP would result in an increase of carbon dioxide equivalent (CO2e) emissions of more than 500 million metric tons (MMT) in 2030 and 1,200 MMT in 2050, contributing to global warming and severe weather events, such as hurricanes, floods and droughts. Cumulative net costs to the U.S. economy (in increased capital, fuel, and operations and maintenance (O&M) expenditures) would exceed $100 billion by 2030 and would reach nearly $600 billion by 2050.
It may seem ironic that removing regulations can result in increased costs to the economy, but regulations can help to overcome market barriers and similar problems that prevent certain economically ideal outcomes from being achieved in a free market (for instance, under-investment in energy efficiency technologies). Although the CPP’s focus is on reducing carbon emissions, the same policies also reduce particulate pollution, which is responsible for thousands of heart attacks and respiratory diseases each year. Repealing the CPP would increase particulate emissions, causing more than 40,000 premature deaths in 2030 and more than 120,000 premature deaths in 2050.
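The core of the analysis described above is a scenario comparison: difference the annual emissions (or cost) trajectories of a BAU baseline against a CPP-compliant pathway. The sketch below is a toy illustration of that bookkeeping, not the EPS model itself; all yearly figures are hypothetical placeholders chosen only to echo the article's headline differences (500 MMT in 2030, 1,200 MMT in 2050).

```python
# Toy scenario comparison (NOT the Energy Policy Simulator): subtract a
# policy scenario's annual emissions from a business-as-usual baseline
# to get the increase that repeal would cause in each year.

def scenario_difference(bau, policy):
    """Return per-year difference (bau - policy) for years present in both."""
    return {year: bau[year] - policy[year] for year in bau if year in policy}

# Hypothetical annual CO2e emissions in million metric tons (MMT)
bau_emissions = {2030: 5500, 2040: 5600, 2050: 5800}  # no CPP
cpp_emissions = {2030: 5000, 2040: 4900, 2050: 4600}  # CPP targets met

increase_if_repealed = scenario_difference(bau_emissions, cpp_emissions)
print(increase_if_repealed)  # {2030: 500, 2040: 700, 2050: 1200}
```

The same differencing applies to the cost side (capital, fuel and O&M expenditures per year), with cumulative totals obtained by summing the annual differences.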


News Article | November 18, 2016
Site: www.eurekalert.org

They say diamonds are forever, but diamonds in fact are a metastable form of carbon that will slowly but eventually transform into graphite, another form of carbon. Being able to design and synthesize other long-lived, thermodynamically metastable materials could be a potential gold mine for materials designers, but until now, scientists lacked a rational understanding of them. Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) have published a new study that, for the first time, explicitly quantifies the thermodynamic scale of metastability for almost 30,000 known materials. This paves the way for designing and making promising next-generation materials for use in everything from semiconductors to pharmaceuticals to steels. "There's a great amount of possibility in the space of metastable materials, but when experimentalists go to the lab to make them, the process is very heuristic--it's trial and error," said Berkeley Lab researcher Wenhao Sun. "What we've done in this research is to understand the metastable phases that have been made, so that we can better understand which metastable phases can be made." The research was published last week in the journal Science Advances in a paper titled, "The Thermodynamic Scale of Inorganic Crystalline Metastability." Sun, a postdoctoral fellow working with Gerbrand Ceder in Berkeley Lab's Materials Sciences Division, was the lead author, and Ceder was the corresponding author. The study involved large-scale data mining of the Materials Project, which is a Google-like database of materials that uses supercomputers to calculate properties based on first-principles quantum-mechanical frameworks. The Materials Project, directed by Berkeley Lab researcher Kristin Persson, who was also a co-author of the new paper, has calculated properties of more than 67,000 known and predicted materials with the goal of accelerating materials discovery and innovation. 
"Materials design and development is truly a slow process but is now being greatly accelerated by the fact that we can compute properties of compounds before they are made," Ceder said. "Although we still don't fully understand which materials can be made and how, mapping the underlying thermodynamics is an important first step." Metastable materials, or materials that transform to another state over a long period of time, are ubiquitous in both nature and technology and often have superior properties. Chocolate, for example, is metastable, with a lower melting point and better texture than stable chocolate. There are also metastable steels that have both toughness and strength, properties not normally found simultaneously in most stable steels. Scientists would love to develop new materials with certain properties for various applications--an ultra-strong yet lightweight metal for vehicles, for example--but to make any new material with desired properties, materials scientists must understand how synthesizing the material influences its structure, and then how the structure in turn affects its properties and performance. This, Sun explains, is the fundamental paradigm of materials science. "The Materials Project has helped us link a material's structure to its properties," Ceder said. "What we've done here is the first quantitative step in understanding synthesis-structure relationships." Sun offers an analogy to food: "If the Materials Project were a cookbook, it'd be like a database of ingredients and delicious dishes but no recipes. Designing recipes is difficult because scientists have a poor understanding of why metastable phases appear during 'cooking.' There are some applications where a metastable material is better, and others where the stable phases are better. This study sets a foundation to investigate how to use computers to predict recipes." Previously, scientists had thermodynamic numbers for less than 1,000 metastable compounds. 
"It's very hard to survey metastability over known materials because there's not much data out there in terms of calorimetry, which is measuring thermodynamic numbers," Sun said. What's more, metastable materials come in many forms, spanning metal alloys and minerals to ceramics, salts, and more, making a comprehensive survey difficult. "What we've done is large-scale data mining on nearly 30,000 observed materials to explicitly measure the thermodynamic scale of metastability, as a function of a wide variety of parameters, like chemistry and composition, which inorganic chemists and materials scientists can use to build intuition," Sun said. Based on their observations, the researchers went a step further, to propose a new principle they term "remnant metastability" to explain which metastable materials can be synthesized and which cannot. "We're essentially proposing search criteria: we're identifying which crystalline materials can be made, and possibly under what conditions they can be made," Sun said. "We hope this can be a more refined way to think about which crystal structure nature chooses when a material forms." The other co-authors of the paper are: Anubhav Jain of Berkeley Lab, Stephen Dacek and William Richards of MIT, Shyue Ping Ong and Anthony Gamst of UC San Diego, and Geoffroy Hautier of the Université Catholique de Louvain in Belgium. The research was supported by the Materials Project and was a collaboration with the DOE's Office of Science as part of its Center for Next Generation of Materials by Design, a DOE Energy Frontier Research Center. The researchers also used computing resources at the Center for Nanoscale Materials (CNM) at Argonne National Laboratory as well as resources at the Center for Functional Nanomaterials (CFN) at Brookhaven National Laboratory. CNM and CFN are DOE Office of Science User Facilities.
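The thermodynamic scale the study quantifies is, at its simplest, a compound's energy above the most stable phase of the same composition (reported as "energy above hull" in the Materials Project). The sketch below illustrates the idea for polymorphs of a single composition; the energy values are hypothetical placeholders, not data from the paper or the database.

```python
# Toy sketch of the metastability measure: each polymorph's energy relative
# to the lowest-energy (thermodynamically stable) polymorph of the same
# composition. The DFT-style energies below are hypothetical placeholders.

def energy_above_ground_state(polymorphs):
    """Return each polymorph's energy above the most stable one (eV/atom)."""
    e_min = min(polymorphs.values())
    return {name: round(e - e_min, 6) for name, e in polymorphs.items()}

# Hypothetical total energies per atom for three carbon polymorphs (eV/atom)
carbon = {"graphite": -9.23, "diamond": -9.21, "lonsdaleite": -9.19}

scale = energy_above_ground_state(carbon)
print(scale)  # graphite is the ground state (0.0 eV/atom); the rest are metastable
```

In the actual study this quantity was mined from the Materials Project's first-principles calculations across tens of thousands of compounds, rather than supplied by hand as here.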
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | September 19, 2016
Site: www.rdmag.com

Graphene is the stuff of the future. For years, researchers and technologists have been predicting the utility of the one-atom-thick sheets of pure carbon in everything from advanced touch screens and semiconductors to long-lasting batteries and next-generation solar cells. But graphene's unique intrinsic properties – supreme electrical and thermal conductivities and remarkable electron mobility, to name just a few – can only be fully realized if it is grown free from defects that disrupt the honeycomb pattern of the bound carbon atoms. A team led by Materials Scientist Anirudha Sumant with the U.S. Department of Energy's (DOE) Argonne National Laboratory's Center for Nanoscale Materials (CNM) and Materials Science Division, along with collaborators at the University of California-Riverside, has developed a method to grow graphene that contains relatively few impurities and costs less to make, in a shorter time and at lower temperatures compared to the processes widely used to make graphene today. Theoretical work led by Argonne nanoscientist Subramanian Sankaranarayanan at the CNM helped researchers understand the molecular-level processes underlying the graphene growth. The new technology taps ultrananocrystalline diamond (UNCD), a synthetic type of diamond that Argonne researchers have pioneered through years of research. UNCD serves as a physical substrate, or surface on which the graphene grows, and the source for the carbon atoms that make up a rapidly produced graphene sheet. "When I first looked at the [scanning electron micrograph] and saw this nice uniform, very complete layer, it was amazing," said Diana Berman, the first author of the study and former postdoctoral research associate who worked with Sumant and is now an Assistant Professor at the University of North Texas.
"I'd been dealing with all these different techniques of growing graphene, and you never see such a uniform, smooth surface." Current graphene fabrication protocols introduce impurities during the etching process itself, which involves adding acid and extra polymers, and when they are transferred to a different substrate for use in electronics. "The impurities introduced during this etching and the transferring step negatively affect the electronic properties of the graphene," Sumant said. "So you do not get the intrinsic properties of the graphene when you actually do this transfer." The team found that the single-layer, single-domain graphene can be grown over micron-size holes laterally, making them completely free-standing (that is, detached from the underlying substrate). This makes it possible to exploit the intrinsic properties of graphene by fabricating devices directly over free-standing graphene. The new process is also much more cost-effective than conventional methods based on using silicon carbide as a substrate. Sumant says that the 3- to 4-inch silicon carbide wafers used in these types of growth methods cost about $1,200, while UNCD films on silicon wafers cost less than $500 to make. The diamond method also takes less than a minute to grow a sheet of graphene, where the conventional method takes on the order of hours. The high quality of graphene was confirmed by the UC Riverside co-authors Zhong Yan and Alexander Balandin by fabricating top-gate field-effect transistors from this material and measuring its electron mobility and charge carrier concentration. "It is well known that certain metals, such as nickel and iron, dissolve diamond at elevated temperatures, and the same process has been used for many years to polish diamond," said Sumant. 
He and his team used this property to employ nickel in converting the top layer of diamond into amorphous carbon, but it was not clear how these freed carbon atoms converted instantly into high-quality graphene. After Sumant's and Berman's initial breakthrough of growing graphene directly on UNCD, Sankaranarayanan and his postdocs Badri Narayanan and Sanket Deshmukh, computational materials scientists at the CNM, used resources at the Argonne Leadership Computing Facility (ALCF) and reactive molecular dynamics simulations to help the team better understand the growth mechanism underlying this phenomenon. The simulations, developed by Narayanan, Deshmukh and Sankaranarayanan, showed that a particular crystallographic orientation of nickel, Ni(111), highly favors nucleation and subsequent rapid growth of graphene; this was then confirmed experimentally. These large-scale simulations also showed how graphene forms. The nickel atoms diffuse into the diamond and destroy its crystalline order, while carbon atoms from this amorphous solid move to the nickel surface and rapidly form honeycomb-like structures, resulting in mostly defect-free graphene. The nickel then percolates through the fine crystalline grains of the UNCD, sinking out of the way and removing the need for acid to dissolve away excess metal atoms from the top surface. "It is like meeting a good Samaritan at an unknown place who helps you, does his job and leaves quietly without a trace," said Sumant. "The proven predictive power of our simulations places us in a position of advantage to enable rapid discovery of new catalytic alloys that mediate growth of high-quality graphene on dielectrics and move away on their own when the growth is completed," added Narayanan.
In addition to yielding minimally defective, application-ready graphene for uses such as low-frequency vibration sensors, radio-frequency transistors and better electrodes for water purification, the new growth method has already produced three patents for the Argonne team, Berman and Sumant say. The researchers have also struck a collaboration with the Swedish Institute of Space Physics, involving the European Space Agency's Jupiter Icy Moons Explorer (JUICE) program, to develop graphene-coated probes that may help exploratory vehicles sense the properties of plasma surrounding the moons of Jupiter. Closer to home, the team has crafted diamond and graphene needles for researchers at North Carolina University to use in biosensing applications. The Argonne researchers are now fine-tuning the process – tweaking the temperature used to catalyze the reaction and adjusting the thickness of the diamond substrate and the composition of the metal film that facilitates the graphene growth – to both optimize the reaction and better study the physics at the graphene-diamond interface. "We're trying to tune this more carefully to have a better understanding of which conditions lead to what quality of graphene we're seeing," Berman said. Other Argonne authors involved in the study were Alexander Zinovev and Daniel Rosenmann. The paper, "Metal-induced rapid transformation of diamond into single and multilayer graphene on wafer scale," is published in Nature Communications.


News Article | February 15, 2017
Site: www.eurekalert.org

Researchers are taking rough, defective diamonds and using high temperatures to perfect them for quantum sensing.

WASHINGTON, D.C., February 14, 2017 -- Quantum mechanics, the physics that governs nature at the atomic and subatomic scale, offers a host of new physical phenomena that can be explored at the nanoscale. Though tricky, there are ways to exploit these inherently fragile and sensitive systems for quantum sensing. One nascent technology in particular makes use of point defects, or single-atom misplacements, in nanoscale materials, such as diamond nanoparticles, to measure electromagnetic fields, temperature, pressure, frequency and other variables with unprecedented precision and accuracy. Quantum sensing could revolutionize medical diagnostics, enable new drug development, improve the design of electronic devices and more. For use in quantum sensing, the bulk nanodiamond crystal surrounding the point defect must be highly perfect. Any deviation from perfection, such as additional missing atoms, strain in the crystalline lattice of the diamond, or the presence of other impurities, will adversely affect the quantum behavior of the material. Highly perfect nanodiamonds are also quite expensive and difficult to make. A cheaper alternative, say researchers at Argonne National Laboratory and the University of Chicago, is to take defect-ridden, low-quality, commercially manufactured diamonds, and then "heal" them. In a paper published this week in APL Materials, from AIP Publishing, the researchers describe a method to heal diamond nanocrystals under high-temperature conditions, while visualizing the crystals in three dimensions using an X-ray imaging technique. "Quantum sensing is based on the unique properties of certain optically active point defects in semiconductor nanostructures," said F. Joseph Heremans, an Argonne National Laboratory staff scientist and co-author on the paper.
These defects, such as the nitrogen-vacancy (NV) centers in diamond, are created when a nitrogen atom replaces a carbon atom adjacent to a vacancy in the diamond lattice structure. They are extremely sensitive to their environment, making them useful probes of local temperatures, as well as electric and magnetic fields, with a spatial resolution more than 100 times smaller than the thickness of a human hair. Because diamonds are biologically inert, quantum sensors based on diamond nanoparticles, which can operate at room temperature and detect several factors simultaneously, could even be placed within living cells, where they could, according to Heremans, "image systems from the inside out." Heremans and his colleagues, including Argonne's Wonsuk Cha and Paul Fuoss, as well as David Awschalom of the University of Chicago, set out to map the distribution of the crystal strain in nanodiamonds and to track the healing of these imperfections by subjecting them to high temperatures, up to 800 degrees Celsius in an inert helium environment. "Our idea of the 'healing' process is that gaps in the lattice are filled as the atoms move around when the crystal is heated to high temperatures, thereby improving the homogeneity of the crystal lattice," said Stephan Hruszkewycz, also a staff scientist at Argonne and lead author on the paper. This nanodiamond healing was monitored with a 3-D microscopy method called Bragg coherent diffraction imaging, performed by subjecting the crystals to a coherent X-ray beam at the Advanced Photon Source at Argonne. The X-ray beam that scatters off the nanodiamonds was detected and used to reconstruct the 3-D shape of the nanocrystal, "and, more importantly, the strain state of the crystal," Hruszkewycz said. The researchers found that nanodiamonds "shrink" during the high-temperature annealing process, and surmise that this occurs because of a phenomenon called graphitization. 
This phenomenon occurs when the surface of the material is converted from the normal diamond lattice arrangement into graphite, in which the carbon atoms are arranged in chicken-wire-like sheets. The study marks the first time that Bragg coherent diffraction imaging has been shown to be useful at such high temperatures, a capability that, Hruszkewycz said, "enables the exploration of structural changes in important nanocrystalline materials at high temperatures that are difficult to access with other microscopy techniques." Hruszkewycz added that the research represents "a significant step towards developing scalable methods of processing inexpensive, commercial nanodiamonds for quantum sensing and information processing." The article, "In-situ study of annealing-induced strain relaxation in diamond nanoparticles using Bragg coherent diffraction imaging," is authored by Stephan O. Hruszkewycz, Wonsuk Cha, Paolo Andrich, Christopher P. Anderson, Andrew Ulvestad, Ross Harder, Paul Fuoss, David D. Awschalom and F. Joseph Heremans. The article appeared in the journal APL Materials on Feb. 14, 2017 (DOI: 10.1063/1.4974865) and can be accessed at http://aip. . APL Materials is a new open access journal featuring original research on significant topical issues within the field of functional materials science. See http://aplmaterials. .
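The reconstruction step Hruszkewycz describes — recovering a crystal's shape from measured diffraction magnitudes plus a known finite extent ("support") — can be illustrated with a toy one-dimensional error-reduction loop. This is our sketch, not the authors' code: real BCDI is three-dimensional, works on Bragg peaks, and extracts strain from the recovered complex phase; the array sizes and iteration count here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown "crystal": non-negative density confined to a known support region.
n = 256
support = np.zeros(n, dtype=bool)
support[100:140] = True
truth = np.zeros(n)
truth[support] = rng.random(support.sum())

# The detector records only the magnitude of the scattered field (phases lost).
measured_mag = np.abs(np.fft.fft(truth))

def err(v):
    """Mismatch between a guess's diffraction magnitudes and the measurement."""
    return np.linalg.norm(np.abs(np.fft.fft(v)) - measured_mag)

x = rng.random(n)   # random starting guess
e0 = err(x)

# Error-reduction iteration: alternately impose the measured Fourier
# magnitudes and the real-space constraints (support, non-negativity).
for _ in range(200):
    F = np.fft.fft(x)
    F = measured_mag * np.exp(1j * np.angle(F))  # keep phases, fix magnitudes
    x = np.fft.ifft(F).real
    x[~support] = 0.0                            # zero outside the support
    x = np.clip(x, 0.0, None)                    # density cannot be negative

print(f"diffraction-magnitude error reduced to {err(x) / e0:.3f} of initial")
```

Error reduction is guaranteed not to increase the magnitude mismatch from one iteration to the next, which is why even this bare-bones version converges toward a consistent shape; practical reconstructions use more robust variants (e.g. hybrid input-output) to escape stagnation.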


Flash Physics is our daily pick of the latest need-to-know developments from the global physics community selected by Physics World's team of editors and reporters Supernovae are not a common occurrence in our galaxy. While this is most likely a good thing for our continued survival, the Super-Kamiokande neutrino detector in Japan aims at detecting neutrinos that are produced during these super-energetic celestial explosions. Only three or four supernovae happen in our galaxy every century, meaning that researchers want to be prepared for when one of these takes place. Now, a new computer system has been installed at the underground Japanese lab that will monitor local supernovae (relatively speaking) in real time and inform the scientific community of the arrival of neutrinos. "It is a computer system that analyses the events recorded in the depths of the observatory in real time and, if it detects abnormally large flows of neutrinos, it quickly alerts the physicists watching from the control room," says Luis Labarga Echeverría, a physicist at the Autonomous University of Madrid in Spain and a member of the collaboration. With the new monitoring system, collaboration members will be able to assess the significance of an incoming signal within minutes and see whether it is actually from a supernova within the Milky Way. If it is, they can issue an early warning to all interested research centres around the world, as well as the celestial co-ordinates of the source of the neutrinos. This gives observatories around the globe a chance to point all of their instruments towards the source. Neutrinos are of fundamental interest to physics, while also providing us with invaluable information about the death of stars and the formation of black holes. The research is published in the journal Astroparticle Physics and a preprint is available on arXiv. 
The US Department of Energy's (DOE) Brookhaven National Laboratory has bagged three awards at the 2016 R&D 100 Awards, which annually recognize the 100 most innovative technologies and services of the past year. The three award-winning technologies are the hard X-ray scanning microscope with multilayer Laue lens nanofocusing optics (a joint award with DOE's Argonne National Laboratory), the MoSoy catalyst, and nanostructured anti-reflecting and water-repellent surface coatings. The custom-built X-ray microscope – developed by physicists Evgeny Nazaretski and Yong Chu, and team – has a spatial resolution of better than 15 nm, roughly 50,000 times smaller than a grain of sand. The only one of its kind, this microscope is also the recipient of a 2016 Microscopy Today Innovation Award. The MoSoy catalyst is a novel catalyst derived from molybdenum (Mo) and renewable soybeans (Soy). This catalyst produces hydrogen in an environmentally friendly, cost-effective way, eliminating the need for expensive metal catalysts such as platinum to speed up the rate at which water is split into hydrogen and oxygen. The nanotextured "coatings" or etchings were fabricated at Brookhaven's Center for Functional Nanomaterials (CFN) – the textured surfaces absorb all wavelengths of light from any angle and also repel water extremely efficiently. "It is an honour for Brookhaven Lab to be recognized as among the institutions where innovative research and development is taking place," says Lab Director Doon Gibbs. "Our three winners this year illustrate the wide range of scientific discovery happening at Brookhaven." Droplets of a dilute "liquid" have emerged from a gas of ultracold atoms created by Tilman Pfau and colleagues at the University of Stuttgart in Germany. The team cooled dysprosium atoms to a few billionths of a degree above absolute zero.
The atoms were trapped using light and magnetic fields, which were fine-tuned to create a repulsive short-range interaction between the atoms. There is also a long-range attractive magnetic interaction between dysprosium atoms, and together these interactions are similar to the forces between molecules in a conventional liquid. Pfau and colleagues found that tiny droplets condensed out of the gas when it was chilled. The droplets contained a few thousand atoms and the researchers found that a minimum number of atoms is required to stabilize a droplet. Describing its work in Nature, the team was also able to use quantum mechanics to calculate the critical number of atoms required to form a droplet – which was 600–800, depending upon the applied magnetic field. Although the droplets have densities about 10−8 times that of liquid-helium droplets, the team says that its droplets could provide insight into much denser quantum liquids and even atomic nuclei.
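For the X-ray microscope item above, the "50,000 times smaller than a grain of sand" comparison implies a grain about 0.75 mm across — a typical medium sand grain. The grain size here is our assumption, not stated in the article; the arithmetic is just a sanity check:

```python
resolution_m = 15e-9    # microscope spatial resolution: 15 nm
grain_m = 0.75e-3       # assumed medium sand-grain diameter: 0.75 mm

ratio = grain_m / resolution_m
print(round(ratio))     # → 50000, consistent with the quoted comparison
```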


News Article | March 1, 2017
Site: www.greencarcongress.com

Researchers at Pacific Northwest National Laboratory (PNNL) have found that adding a small, optimal amount (0.05 M) of LiPF6 (lithium hexafluorophosphate) as an additive in LiTFSI–LiBOB dual-salt/carbonate-solvent-based electrolytes significantly enhances the charging capability and cycling stability of Li metal batteries. A paper on their work is published in the journal Nature Energy. In the paper, they report that using the additive in a Li metal battery with a 4-V Li-ion cathode at a moderately high loading of 1.75 mAh cm−2 resulted in 97.1% capacity retention after 500 cycles along with a very limited increase in electrode overpotential at a charge/discharge current density up to 1.75 mA cm−2. The researchers attributed the fast charging and stable cycling performances to the generation of a robust and conductive solid electrolyte interphase at the Li metal surface and stabilization of the Al cathode current collector. Lithium (Li) metal is regarded as the ultimate anode for energy storage systems because of its ultrahigh specific capacity of 3,860 mAh g−1, a very low redox potential (−3.040 V versus the standard hydrogen electrode) and a small gravimetric density of 0.534 g cm−3. Secondary Li metal batteries (LMBs) have been extensively studied in the past four decades, and have received increasing attention recently because of the growing need for high-energy-density batteries. However, technical challenges such as unsatisfactory Coulombic efficiency and dendritic Li growth impede the successful deployment of secondary LMBs.
Recently, extensive work has been dedicated to Li metal protection, including the application of polymer or solid-state electrolytes, ionic liquids, concentrated electrolytes or additives, protective layers, interlayers between Li and separator, nanoscale design, selective deposition, Li/reduced graphene oxide composites, and others. … Here we report that significantly improved charging capability and cycling stability of LMBs using a LiNi0.4 Mn Co O (NMC, 1.75 mAh cm−2) cathode can be achieved via manipulating the lithium salt chemistry in the electrolyte to generate a highly conductive SEI on Li metal. An optimal additive level (0.05 M) of LiPF6 can greatly alter the interfacial reactions between Li metal and the dual-salt electrolyte containing LiTFSI and LiBOB in carbonate solvents. The capacity retention of moderately high areal-capacity Li||NMC batteries could be significantly improved to >97% after 500 cycles at 1.75 mA cm−2. Corresponding author Wu Xu and colleagues were part of earlier PNNL research seeking a better-performing electrolyte. The electrolytes they tried produced either a battery that didn’t have problematic dendrites and was super-efficient but charged very slowly and couldn’t work in higher-voltage batteries, or a faster-charging battery that was unstable and had low voltages. Next, they tried adding small amounts of a salt that’s already used in lithium-ion batteries—LiPF6, lithium hexafluorophosphate—to their fast-charging electrolyte. They paired the new electrolyte with a lithium anode and a lithium nickel manganese cobalt oxide cathode. The result was a fast, efficient, high-voltage battery. Because the additive is already an established component of lithium-ion batteries, it's readily available and relatively inexpensive. The small amounts needed—just 0.6 wt%—should also further lower the electrolyte’s cost.
Xu and his team continue to evaluate several ways to make rechargeable lithium-metal batteries viable, including improving electrodes, separators and electrolytes. Specific next steps include making and testing larger quantities of their electrolyte, further improving the efficiency and capacity retention of a lithium-metal battery using their electrolyte, increasing material loading on the cathode and trying a thinner anode. This research was supported by the Department of Energy’s Office of Energy Efficiency and Renewable Energy. Researchers performed microscopy and spectroscopy characterizations of battery materials at EMSL, the Environmental Molecular Sciences Laboratory, a DOE Office of Science national User Facility at PNNL. The battery electrodes were made at DOE’s Cell Analysis, Modeling, and Prototyping Facility at Argonne National Laboratory.
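The 3,860 mAh g−1 theoretical specific capacity of lithium quoted in the excerpt above follows directly from Faraday's law, since each Li atom donates one electron. A quick check:

```python
# Theoretical specific capacity from Faraday's law:
# Q [mAh/g] = n * F / (3.6 * M), because 1 mAh = 3.6 C.
F = 96485.332   # Faraday constant, C/mol
M_LI = 6.941    # molar mass of lithium, g/mol
n = 1           # electrons transferred per Li atom (Li -> Li+ + e-)

capacity = n * F / (3.6 * M_LI)
print(round(capacity))  # → 3861, matching the ~3,860 mAh/g figure
```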


News Article | December 15, 2016
Site: www.eurekalert.org

DALLAS, Dec. 15, 2016 - UT Southwestern Medical Center researchers developed a new imaging technique that makes X-ray images of proteins as they move in response to electric field pulses. The method, created in conjunction with scientists at the University of Chicago and published this month in Nature, could lead to new insights into how proteins work, said senior author Dr. Rama Ranganathan, Director of the Cecil H. and Ida Green Comprehensive Center for Molecular, Computational, and Systems Biology at UT Southwestern. "Proteins carry out the basic reactions in cells that are necessary for life: They bind to other molecules, catalyze chemical reactions, and transmit signals within the cell," said Dr. Ranganathan, also Professor of Biophysics and Pharmacology. "These actions come from their internal mechanics; that is, from the coordinated motions of the network of amino acids that make up the protein." Often, Dr. Ranganathan said, the motions that underlie protein function are subtle and happen on time scales ranging from trillionths of a second to many seconds. "So far, we have had no direct way of 'seeing' the motions of amino acids over this range and with atomic precision, which has limited our ability to understand, engineer, and control proteins," he said. The new method, which the researchers call EF-X (electric field-stimulated X-ray crystallography), is aimed at stimulating motions within proteins and visualizing those motions in real time at atomic resolution, he said. This approach makes it possible to create video-like images of proteins in action - a goal of future research, he explained. The method involves subjecting proteins to large electric fields of about 1 million volts per centimeter and simultaneously reading out the effects with X-ray crystallography, he said.
The researchers' experiments showed proteins can sustain these intense electric fields, and further that the imaging method can expose the pattern of shape changes associated with a protein's function. "This is not the first report of seeing atomic motions in proteins, but previous reports were specialized for particular proteins and particular kinds of motions," said Dr. Ranganathan, who holds the Cecil H. and Ida M. Green Chair in Biomedical Science. "Our work is the first to open up the investigation to potentially all possible motions, and for any protein that can be crystallized. It changes what we can learn." Ultimately, this work could explain how proteins work in both normal and disease states, with implications in protein engineering and drug discovery. An immediate goal is to make the method simple enough for other researchers to use, he added. The idea for this imaging method began over 10 years ago, Dr. Ranganathan said, but assembling the perfect lab team made it a reality. Former UTSW postdoctoral researcher and lead author Dr. Doeke Hekstra and former UT Southwestern graduate student Dr. K. Ian White made the critical advances. Dr. Hekstra is now an Assistant Professor at Harvard University and Dr. White is a postdoc at Stanford University. "I think this work has opened a new door to understanding protein function. It is already capable of being used broadly for many very important problems in biology and medicine. But like any new method, there is room for many improvements that will come from both us and others. The first step will be to create a way for other scientists to use this method for themselves," Dr. Ranganathan said. Dr. Michael Socolich, a research scientist in the Green Center, contributed to the study as did scientists at the University of Chicago. Some of the research was done at the Argonne National Laboratory, a U.S. 
government science resource run by the University of Chicago, and at the Stanford Synchrotron Radiation Lightsource (SSRL), as well as at the UT Southwestern Structural Biology Laboratory. The study received support from the National Institutes of Health, the Robert A. Welch Foundation, the Lyda Hill Endowment for Systems Biology, and the Green Center for Systems Biology. UT Southwestern, one of the premier academic medical centers in the nation, integrates pioneering biomedical research with exceptional clinical care and education. The institution's faculty includes many distinguished members, including six who have been awarded Nobel Prizes since 1985. The faculty of almost 2,800 is responsible for groundbreaking medical advances and is committed to translating science-driven research quickly to new clinical treatments. UT Southwestern physicians provide medical care in about 80 specialties to more than 100,000 hospitalized patients and oversee approximately 2.2 million outpatient visits a year. This news release is available on our website at http://www.


News Article | March 4, 2016
Site: phys.org

The interfaces between the two oxides (represented in this idealized, atomically abrupt model by the yellow and purple bands) create an electric field. The field separates electrons (silver) excited by sunlight (gold), which could be used to catalyze hydrogen fuel production. Nobody wants a laptop computer that stops working when a cloud passes by. Storing sunlight as fuel that can be later used to drive fuel cells requires new materials. Scientists demonstrated such a material. They combined two oxides on the atomic scale. The interface between the oxide materials, one containing strontium and titanium (SrTiO3) and one containing lanthanum and chromium (LaCrO3), absorbs visible light, producing electrons and holes that might be useful for catalyzing reactions, such as producing hydrogen fuel. However, if there is nothing to pull those electrons and holes apart, they will quickly annihilate one another without doing anything useful. By carefully synthesizing this material as a series of alternating layers, the international team created a built-in electric field that could help separate the excited electrons and holes and improve the material's performance as a catalyst. This material opens up new scientific frontiers to solve a persistent energy challenge: storing solar energy for later use. Fuel cells capable of running on hydrogen fuel created by solar energy could allow people to heat their homes and run their computers on solar energy even in the dark of night. By depositing thin layers of SrTiO3 and LaCrO3 on a crystalline substrate, the investigators at the Pacific Northwest National Laboratory, Argonne National Laboratory, SuperSTEM, and the University of Oxford controlled how the ions at the interfaces bond to each other. This allowed them to engineer an electric field in the material that can be used to separate electrons and holes.
Using X-ray spectroscopy techniques, they confirmed that the electric field was present and that the ions in the material shift positions as a result. Ultra-high-resolution measurements using an electron microscope helped confirm this ionic shift. All of these experimental results were backed up by computational modeling of the materials, which predicted behavior in excellent agreement with what was observed. The electric field in these materials is present because the researchers were able to precisely control the growth process so that each interface has either a positive or negative charge. Alternating positively and negatively charged interfaces in the material produce nanoscale electric fields that can interact with the electrons and holes that are excited by solar energy. Electrons may then be driven to the surface, where they could interact with water molecules to break their bonds and produce hydrogen fuel. Researchers are continuing to explore the properties of these superlattices using cutting-edge X-ray measurements at synchrotrons around the world and using other advanced microscopy techniques to look at the chemical makeup of the interfaces. More information: Ryan B. Comes et al., "Interface-Induced Polarization in SrTiO3–LaCrO3 Superlattices," Advanced Materials Interfaces (2016). DOI: 10.1002/admi.201500779


News Article | September 1, 2016
Site: phys.org

Colorado State University physicists, joining the fundamental pursuit of using electron spins to store and manipulate information, have demonstrated a new approach to doing so that could prove useful in low-power computer memory. Publishing Sept. 1 in Nature Communications, they've demonstrated a new way to switch the magnetic moments - or direction of magnetization - of electrons in a thin film of barium ferrite, a magnetic insulator. Until this point, scientists had only demonstrated this switching behavior - the key to writing information as memory - in metal thin films. The work was led by Mingzhong Wu, professor in the Department of Physics, with first author Peng Li, a former postdoctoral researcher now at Seagate, and second author Tao Liu, a current postdoc at CSU. The work was performed in collaboration with researchers at the University of Alabama, Argonne National Laboratory, the University of Notre Dame, and the University of Wyoming. Other CSU authors include faculty members Stuart Field and Mario Marconi, and graduate students Houchen Chang and Daniel Richardson. Switching the magnetic moments of electrons in an insulator instead of a metal could prove to be a major breakthrough in spintronics, by allowing a spin-current-based memory storage device to be simpler while maintaining more efficiency per electron. A property known as perpendicular magnetic anisotropy (PMA), key for information storage, in this case originates from the intrinsic magneto-crystalline anisotropy of the insulator, rather than from interfacial anisotropy as in other cases, Wu said. "Higher efficiency and lower power than the standard are always the goal in memory applications," Wu said. Beyond the application for computer memory, which captivates most spintronics researchers today, the CSU researchers' device does something bigger: It demonstrates the possibility of a new class of materials for spintronics.
"What's exciting about this is that it's an enabling technology for exploring an entirely different class of configurations, some of which are theorized to be useful," said Jake Roberts, professor and chair of the Department of Physics. In the CSU researchers' device, the spin current does the job of assisting magnetic switching. Next, they will attempt to further refine their device for more efficient switching, including using a topological insulator or the photo-spin-voltaic effect to produce spin currents. The photo-spin-voltaic effect was discovered by Wu and colleagues, and reported in Nature Physics. More information: Peng Li et al, Spin–orbit torque-assisted switching in magnetic insulator thin films with perpendicular magnetic anisotropy, Nature Communications (2016). DOI: 10.1038/ncomms12688


News Article | November 20, 2015
Site: www.scientificcomputing.com

Austin, TX — Sardina Systems has been announced as a winner of the International Data Corporation (IDC) HPC Innovation Excellence Award at the SC15 supercomputer industry conference in Austin, TX. The HPC Innovation Excellence Award recognizes noteworthy achievements by users of high performance computing (HPC) technologies. "Sardina Systems has achieved clear success in applying HPC to greatly improve business ROI, scientific advancement and engineering successes. They are at the forefront of helping drive new capabilities in cloud-based HPC computing, an increasingly important sector of the overall HPC space," said Bob Sorensen, Research Vice President, High Performance Computing at IDC. "Sardina Systems is proud to be a recipient of this prestigious award, recognizing the unique value of FishDirector technology, the cornerstone of FishOS," said Anna Panchenko, VP of Marketing, Sardina Systems. "FishOS, an efficiency-driven, massively scalable, cloud management platform, enables hyper-efficient cloud infrastructure. FishDirector is the world's first utilization-aware solution for OpenStack automation. It can easily double, and even triple, customers' server utilization rates." Sardina Systems stands alongside the US-based Argonne National Laboratory and the Korean Institute of Science and Technology as part of the ninth round of recipients of the award. Sardina Systems provides smart, efficient, super-scalable cloud automation technology, enabling organizations to rapidly experience the value of OpenStack cloud and maximizing the utility of their resources. Sardina Systems helps customers manage OpenStack environments easily, flexibly and efficiently, halving their carbon footprint along the way. Sardina Systems is headquartered in Estonia, with a local presence in the UK, Germany, Ukraine and Russia.
International Data Corporation (IDC) is a global provider of market intelligence, advisory services and events for the information technology, telecommunications and consumer technology markets. IDC helps IT professionals, business executives, and the investment community to make fact-based decisions on technology purchases and business strategy. More than 1,000 IDC analysts provide global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries. IDC is a subsidiary of IDG.


News Article | November 11, 2016
Site: www.biosciencetechnology.com

Researchers at Johns Hopkins report they have laid the foundation to develop novel antibiotics that work against incurable, antibiotic-resistant bacteria like those that cause tuberculosis by targeting an enzyme essential to the production and integrity of bacterial cell walls. The findings, they say, suggest that antibiotic drugs specifically targeting the recently discovered LD-transpeptidase enzyme, which is needed to build bacterial cell walls in some bacteria, could potentially cure many antibiotic-resistant infections. An additional implication of the research, the Johns Hopkins team says, is that drugs targeting the enzyme could offer quicker, cheaper and more easily accessible treatment against tuberculosis, a disease that still kills more people worldwide than any other infection, according to the Centers for Disease Control and Prevention. A summary of the findings is published on Nov. 7 in Nature Chemical Biology. “The complexity of TB treatment and growing prevalence of antibiotic resistance is a serious threat to public health,” says Gyanu Lamichhane, Ph.D., associate professor of medicine at the Johns Hopkins University School of Medicine. His team joined with the research team of Craig Townsend, Ph.D., the Alsoph H. Corwin Professor of Chemistry at The Johns Hopkins University’s Krieger School of Arts and Sciences, to tackle this complex issue. “Our research offers steps toward the design of new antibiotics that attack a previously untargeted bacterial enzyme,” says Townsend. At the root of their investigation, Lamichhane says, is the fact that more than half of antibiotics prescribed today are of a class called beta-lactams, which work by interrupting the function of the DD-transpeptidase enzyme that creates bacterial cell walls. Without it, bacteria quickly die. However, in 2005, a team of researchers found a second wall-building enzyme, LD-transpeptidase, that allows bacteria like the ones that cause TB to survive antibiotic treatments.
“We looked at the structure of LD-transpeptidase, thought about how it works and started making new compounds that could be used against it,” says Townsend. Pankaj Kumar, Ph.D., postdoctoral fellow in infectious diseases at the Johns Hopkins University School of Medicine, began the research in the new study by extracting LD-transpeptidase from many species of bacteria and examining its detailed molecular structure with a sophisticated imaging technique known as protein X-ray crystallography, using the Advanced Photon Source at Argonne National Laboratory near Chicago. By analyzing the enzyme’s structure, the Johns Hopkins researchers were able to design new compounds in the carbapenem group, a subclass of the beta-lactam antibiotics, that bind to the LD-transpeptidase wall-building enzyme and stop its function. In live bacterial cultures, Lamichhane’s and Townsend’s groups showed that the carbapenems stopped the enzyme’s wall-building activity. The new compounds were even effective against the ESKAPE pathogens, a group of six bacterial species that the Centers for Disease Control and Prevention has identified as a threat because of their propensity for developing antibiotic resistance. Following these successes, Amit Kaushik, Ph.D., a postdoctoral fellow in infectious diseases at the Johns Hopkins University School of Medicine, tested two carbapenems in vivo in mice infected with tuberculosis, separating the animals into different treatment groups. The rodents’ lungs were sampled periodically over three weeks, and the results showed that even without classic TB antibiotic treatments, the new carbapenems, specifically biapenem, cured TB infection in mice. “Our data show that the carbapenems successfully treated TB infections alone by attacking the right enzyme,” says Lamichhane. 
Townsend and Lamichhane say the focus of their research is now on creating variations of their original compound that are designed to target specific bacteria. Many commonly prescribed antibiotics today work on a broad spectrum of bacterial species, meaning that in addition to killing off bad bacteria, they also destroy the friendly bacteria our bodies need to function normally and whose destruction can cause dangerous side effects. Lamichhane believes that the future of antibiotic treatments relies on our ability to be good antimicrobial stewards and treat specific bacteria without affecting our bodies’ natural microbiome. Not only will this cut down on antibiotic side effects, but it will also slow the development of antibiotic resistance in the bacterial species not being targeted. The researchers are now in the process of initiating clinical trials to test the safety and efficacy of some of these new compounds. Antibiotic resistance has been an ever-present threat since the discovery of penicillin in 1928. Scientists and physicians have historically kept up with resistant bacteria through the frequent introduction of new antibiotic treatments. However, the research and development of antibiotics plummeted in the 1980s as other drugs became more profitable to produce. Because of the decreased incentive to invest in new antibiotics and the liberal use of antibiotics we already have, many bacterial species have quickly outpaced our ability to treat them. The Centers for Disease Control and Prevention estimates that annually, 2 million illnesses and 23,000 deaths are caused by antibiotic-resistant bacteria in the United States alone, and that 70 percent of hospital-acquired infections are now antibiotic-resistant.


Researchers at Argonne National Laboratory have analyzed the water consumption for transportation fuels in the United States using an extended lifecycle system boundary that includes the water embedded in intermediate processing steps. In a paper published in the RSC journal Energy & Environmental Science, they compared the water consumed per unit energy and per km traveled in light-duty vehicles. They found that many alternative fuels consume larger quantities of water on a per km basis than traditional petroleum and natural gas pathways. The authors concluded that it will be important to consider the implications of transportation and energy policy changes on water resources in the future. … Transportation fuels are produced using interconnected pathways composed of numerous individual production processes. Many of these production processes generate intermediates that are later consumed for the ultimate purpose of generating transportation. Each process in a pathway may consume water resources, so it is necessary to analyze the water consumed throughout the pathway to understand the net impact that a fuel will have on water resources. For example, biodiesel and ethanol production consume water in both the agricultural operations used to produce biomass and in the conversion of the biomass to fuel; thermoelectric power generation consumes water in fuel cycle operations and cooling in thermoelectric power plants; hydroelectric power plants require water-consuming reservoirs to generate electricity; hydrogen fuel cells consume water as a feedstock for the fuel and for cooling excess energy generated by the process; petroleum and natural gas require water for recovery and subsequently for processing. 
… LCAs [life cycle analyses] of water consumption are challenging to perform due to differences in definitions and terminology associated with water data. … The goals of this study were to develop a comprehensive baseline LCA of water resource consumption associated with transportation fuel production, highlight the uncertainties and implications in the results, and identify important outstanding gaps in the data. There have been other LCAs performed on water consumption associated with transportation; however, these analyses ignored the water embedded in many intermediate resources such as transportation fuel and intermediate chemicals or utilized economic input–output modeling to fill process data gaps. The Argonne team developed an extensive inventory of process-level water consumption factors for the major transportation fuel pathways in the US consistent with the structure in the lifecycle analysis tool GREET. They used the inventory with the GREET framework to estimate the anthropogenically induced water consumption associated with the various transportation fuels. System boundaries for the life cycle water consumed in the production of the transportation fuels in the study included mining and recovery operations; agricultural production of biomass; agricultural chemicals manufacturing; biofuel conversion; crude oil and natural gas refining and processing; and transportation and distribution. Life cycle water consumption was computed for each fuel pathway using the 2014 version of GREET.net. Fossil fuel pathways generally have the least impact on water resources. Petroleum-based gasoline and diesel consume slightly more water than CNG because of the water consumption associated with enhanced recovery technologies. Corn and soy biofuel production show the largest water consumption impact among all the fuels with the exception of electricity from hydropower. 
Water consumption associated with irrigation for a particular crop may be much less than or much greater than the US average, they noted. Electricity generation pathways have both the smallest (wind) and largest (hydropower) impact on water resources; more than 70% of the water consumed by the electricity pathways is associated with evaporation from hydropower reservoirs. Thermoelectric power generation pathways consume more than twice the water of fossil fuel pathways, primarily for waste heat dissipation. Using dry cooling technologies would bring electricity water consumption estimates into a similar range as the baseline fossil fuels, although these technologies carry both a capital and energy penalty. Because electricity is a fundamental input for essentially all the other pathways, the water consumption associated with hydropower needs further characterization, the researchers said. Pathways for centralized production of hydrogen fuel showed water consumption footprints similar to the thermoelectric pathways on an energy basis. A large portion of the total is associated with water embedded in electric compression that can be traced to hydropower generation. Processing and cooling constitute the other major contributors to the H2 pathways. In terms of liters of water per 100 km, compressed natural gas vehicles show the lowest burden on water consumption—the majority of the water is associated with electric compression of the fuel and not with the recovery process. Reforming natural gas to H2 for use in an FCEV more than doubles the water consumption intensity of transportation. Coal- or biomass-based gasification in central facilities is expected to consume a similar amount of water as the SMR-based H2 pathway. BEVs were estimated to consume almost twice the water of the E10 baseline on average in the US. However, electricity-based transportation exhibits high variability depending on the regional electricity mix. 
Moves toward alternative fuels appear to carry a greater water-resource impact than continued reliance on fossil fuels. Energy and environmental policy should consider the implications of alternative vehicles on water resources when planning changes to the transportation and energy infrastructure. The values outlined in this study should not be interpreted as absolute inputs for specific projects since many processes (e.g., agriculture, mining) exhibit high degrees of spatial and temporal variability. Actual projects must therefore be evaluated on their specifics and not on national or regional averages. The research was supported by the Bioenergy Technologies Office, Fuel Cell Technologies Office, and the Vehicle Technologies Office of the Department of Energy’s Office of Energy Efficiency and Renewable Energy.
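The per-km comparisons above follow from combining a pathway's lifecycle water intensity (liters per MJ of fuel delivered) with a vehicle's energy use per km. The sketch below shows that arithmetic; all numbers are invented placeholders for illustration, not values from the Argonne study or GREET.

```python
# Sketch: converting per-MJ water intensity into the liters-per-100-km
# metric used in the study. Placeholder numbers only.

def water_per_100km(water_l_per_mj, vehicle_mj_per_km):
    """Lifecycle water consumption in liters per 100 km driven."""
    return water_l_per_mj * vehicle_mj_per_km * 100.0

# Hypothetical pathway intensities (L of water per MJ of fuel delivered)
pathways = {
    "CNG": 0.05,
    "gasoline (E10)": 0.15,
    "corn ethanol": 2.0,
}
vehicle_mj_per_km = 2.5  # assumed light-duty vehicle energy use

for name, intensity in pathways.items():
    print(f"{name}: {water_per_100km(intensity, vehicle_mj_per_km):.1f} L/100 km")
```

The structure mirrors the study's point that a fuel cheap in water per MJ can still rank poorly per km if the vehicle using it is inefficient, and vice versa.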


News Article | March 28, 2016
Site: www.cemag.us

Scientists at the U.S. Department of Energy's Argonne National Laboratory have discovered a way to use a microscopic swirling flow to rapidly clear a circle of tiny bacteria or swimming robots. "This discovery offers a new approach for control and manipulation of microscopic swimmers," says Argonne physicist and co-author Igor Aronson, and it could be useful in tiny microfluidic ("lab-on-a-chip") devices that can quickly run chemical or biological analyses or perform tasks. In the study, published in Nature Communications, the researchers placed a magnetic particle in the center of a liquid film filled with swimming bacteria. Normally the bacteria swim randomly; but when scientists spun the particle by applying a rotating magnetic field, the swimmers shot away from the center, like a school of fish that suddenly realized there's a shark in their midst. What's actually happening is that the particle is rotating, creating a small vortex around itself. The bacteria swim parallel to the streamlines and are quickly pushed outward — except for a few that get sucked in right next to the particle. They're not pushed out by centrifugal force, says Argonne scientist Andrey Sokolov, who co-authored the paper; dead bacteria, which aren't swimming, are not pushed out with their living companions. "Because of the curvature of the flow, some swim in and are trapped on the rotating particle, and others are forced to swim out of the curved flow," Sokolov says. This technique could separate live from dead bacteria, or different species, bacterial strains or mutants from one another. "The shape and swimming rates of different species would mean they separate," Aronson says. "At certain frequencies of rotation, the bacteria self-organize into a spiral-shaped halo, creating a microscopic galaxy — similar to our galaxy, the Milky Way, but trillions of trillions (10^24) of times smaller," Sokolov says. 
In addition to new understanding of the forces governing microswimmers and their environments, the vortex technique could help prevent biofilms from forming and disrupting microfluidic devices, the authors suggest. They are particularly interested in creating systems in which microswimmers could assemble gears to build a tiny machine and then power it, Aronson says. Aronson and Sokolov also modeled the results theoretically and saw good alignment between computer models and observed results, they say. The results were published in Nature Communications in a study titled "Rapid expulsion of microswimmers by a vortical flow." The research was supported by the U.S. Department of Energy's Office of Science (Office of Basic Energy Sciences) and by the National Institutes of Health.


News Article | March 9, 2016
Site: www.cemag.us

Inexpensive materials called MOFs pull gases out of air or other mixed gas streams, but fail to do so with oxygen. Now, a team has overcome this limitation by creating a composite of a MOF and a helper molecule in which the two work in concert to separate oxygen from other gases simply and cheaply. The results, reported in Advanced Materials, might help with a wide variety of applications, including making pure oxygen for fuel cells, using that oxygen in a fuel cell, removing oxygen in food packaging, making oxygen sensors, or other industrial processes. The technique might also be used with gases other than oxygen by switching out the helper molecule. Currently, industry uses a common process called cryogenic distillation to separate oxygen from other gases. It is costly and uses a lot of energy to chill gases. Also, it can't be used for specialty applications like sensors or getting the last bit of oxygen out of food packaging. A great oxygen separator would be easy to prepare and use, be inexpensive and be reusable. MOFs, or metal-organic frameworks, are materials containing lots of pores that can suck up gases like sponges suck up water. They have potential in nuclear fuel separation and in lightweight dehumidifiers. But of the thousands of MOFs out there, fewer than a handful absorb molecular oxygen. And those MOFs chemically react with oxygen, forming oxides — think rust — that render the material unusable. "When we first worked with MOFs for oxygen separation, we could only use the MOFs a few times. We thought maybe there's a better way to do it," says materials scientist Praveen Thallapally of the Department of Energy's Pacific Northwest National Laboratory. The new tack for Thallapally and colleagues at PNNL involved using a second molecule to mediate the oxygen separation: a helper molecule that would be attracted to, but chemically uninterested in, the MOF. Instead, the helper would react with oxygen to separate it from other gases. 
They chose a MOF called MIL-101 that is known for its high surface area — making it a powerful sponge — and lack of reactivity. One teaspoon of MIL-101 has the same surface area as a football field. The high surface area comes from a MOF's pores, where reactive MOFs work their magic. MOFs that react with oxygen need to be handled carefully in the laboratory, but MIL-101 is stable at ambient temperatures and in the open atmosphere of a lab. For their helper molecule, they tried ferrocene, an inexpensive iron-containing molecule. The scientists made a composite of MIL-101 and ferrocene by mixing them and heating them up. Initial tests showed that MIL-101 took up more than its weight in ferrocene and at the same time lost surface area. This indicated that ferrocene was taking up space within the MOF's pores, where the ferrocene molecules need to be to snag the oxygen. Then the team sent gases through the black composite material. The material bound up a large percentage of oxygen, but almost none of the added nitrogen, argon or carbon dioxide. The material behaved this way whether the gases went through individually or as a mix, showing that the composite could in fact separate oxygen from the others. Additional analysis showed that heating caused ferrocene to decompose in the pores to nanometer-sized clusters, which made iron available to react with oxygen. This reaction formed a stable mineral known as maghemite, all within the MOF pores. The maghemite could be removed, allowing the MOF to be used again. Together, the results on the composite showed that a MOF might be able to do unexpected things — like purify oxygen — with a little help. Future research will explore other combinations of MOF and helper molecules. 
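The teaspoon-to-football-field comparison is easy to sanity-check with rough numbers. The surface area per gram and the teaspoon mass below are assumptions chosen for illustration, not figures from the article:

```python
# Back-of-envelope check of the "one teaspoon = one football field" claim.
# Assumed values, not from the article:
surface_area_m2_per_g = 3500.0  # assumed BET-type surface area for MIL-101
teaspoon_mass_g = 1.5           # assumed mass of loosely packed powder
football_field_m2 = 5350.0      # roughly a 110 m x 49 m American football field

total_area_m2 = surface_area_m2_per_g * teaspoon_mass_g
print(total_area_m2)                        # 5250.0 m^2
print(total_area_m2 / football_field_m2)    # ~0.98 football fields
```

With these placeholder inputs the claim checks out to within a few percent; the point is that gram-scale quantities of a high-surface-area MOF really do expose thousands of square meters of pore surface.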
In addition to PNNL, participating researchers hailed from and used analytical instruments at two Office of Science User Facilities, the Environmental Molecular Sciences Laboratory at PNNL and the Advanced Photon Source at Argonne National Laboratory, as well as the University of Amsterdam. This work was supported by the Department of Energy Office of Science.


News Article | March 17, 2016
Site: www.materialstoday.com

Inexpensive materials called metal-organic frameworks (MOFs) can pull gases out of air or other mixed gas streams, but fail to do so with oxygen. Now, a team of researchers has overcome this limitation by creating a composite of a MOF and a helper molecule in which the two work together to separate oxygen from other gases simply and cheaply. This work, reported in Advanced Materials, might prove of use in a wide variety of applications, including making pure oxygen for fuel cells and then using that oxygen in fuel cells, removing oxygen from food packaging, making oxygen sensors, and for many other industrial processes. The technique could also be used with gases other than oxygen by simply switching the helper molecule. Currently, industry uses a common process called cryogenic distillation to separate oxygen from other gases, but it is costly and uses a lot of energy to chill the gases. Also, it can't be used for specialty applications like sensors or getting the last bit of oxygen out of food packaging. A great oxygen separator would be easy to prepare and use, inexpensive and reusable. MOFs are materials containing lots of pores that can suck up gases like sponges suck up water. They have potential for use in nuclear fuel separation and in lightweight dehumidifiers. But of the thousands of MOFs produced to date, less than a handful can absorb molecular oxygen. And those MOFs chemically react with the oxygen, forming oxides that render the material unusable. "When we first worked with MOFs for oxygen separation, we could only use the MOFs a few times. We thought maybe there's a better way to do it," said materials scientist Praveen Thallapally at the US Department of Energy's Pacific Northwest National Laboratory (PNNL). The new approach that Thallapally and his colleagues at PNNL came up with involves using a second molecule to mediate the oxygen separation. This helper molecule should be attracted to, but chemically uninterested in, the MOF. 
Instead, the helper should react with oxygen to separate it from the other gases. The researchers chose a MOF called MIL-101 that is known for its high surface area – making it a powerful sponge – and its lack of reactivity: one teaspoon of MIL-101 has the same surface area as a football field. The high surface area comes from a MOF's pores, where reactive MOFs work their magic. MOFs that react with oxygen usually need to be handled carefully in the laboratory, but MIL-101 is stable at ambient temperatures and in the open atmosphere of a lab. For their helper molecule, Thallapally and his colleagues tried ferrocene, an inexpensive iron-containing molecule. The researchers made a composite of MIL-101 and ferrocene by simply mixing them and heating them up. Initial tests showed that MIL-101 took up more than its weight in ferrocene and at the same time lost surface area. This indicated that ferrocene was taking up space within the MOF's pores, where they need to be to snag oxygen molecules. Then the team tried sending gases through the black composite material. They found that the composite bound up a large percentage of oxygen, but almost none of the added nitrogen, argon or carbon dioxide. The material behaved this way whether the gases went through individually or as a mix, showing that the composite could separate oxygen from the other gases. Additional analysis showed that heating caused the ferrocene to decompose in the MOF pores to nanometer-sized clusters, making the iron available to react with oxygen. This reaction formed a stable mineral known as maghemite, all within the MOF pores. Handily, maghemite could be removed from the MOF, allowing the composite to be used again. Together, the results on the composite showed that a MOF might be able to do unexpected things – like purify oxygen – with a little help. Future research will explore other combinations of MOF and helper molecules. 
In addition to PNNL, other researchers taking part in this study hailed from Argonne National Laboratory and the University of Amsterdam in the Netherlands. This story is adapted from material from PNNL, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.


News Article | February 15, 2017
Site: www.eurekalert.org

About 4.6 billion years ago, an enormous cloud of hydrogen gas and dust collapsed under its own weight, eventually flattening into a disk called the solar nebula. Most of this interstellar material contracted at the disk's center to form the sun, and part of the solar nebula's remaining gas and dust condensed to form the planets and the rest of our solar system. Now scientists from MIT and their colleagues have estimated the lifetime of the solar nebula -- a key stage during which much of the solar system evolution took shape. This new estimate suggests that the gas giants Jupiter and Saturn must have formed within the first 4 million years of the solar system's formation. Furthermore, they must have completed gas-driven migration of their orbital positions by this time. "So much happens right at the beginning of the solar system's history," says Benjamin Weiss, professor of earth, atmospheric, and planetary sciences at MIT. "Of course the planets evolve after that, but the large-scale structure of the solar system was essentially established in the first 4 million years." Weiss and MIT postdoc Huapei Wang, the first author of this study, report their results today in the journal Science. Their co-authors are Brynna Downey, Clement Suavet, and Roger Fu from MIT; Xue-Ning Bai of the Harvard-Smithsonian Center for Astrophysics; Jun Wang and Jiajun Wang of Brookhaven National Laboratory; and Maria Zucolotto of the National Museum in Rio de Janeiro. By studying the magnetic orientations in pristine samples of ancient meteorites that formed 4.563 billion years ago, the team determined that the solar nebula lasted around 3 to 4 million years. This is a more precise figure than previous estimates, which placed the solar nebula's lifetime at somewhere between 1 and 10 million years. The team came to its conclusion after carefully analyzing angrites, which are some of the oldest and most pristine of planetary rocks. 
Angrites are igneous rocks, many of which are thought to have erupted onto the surface of asteroids very early in the solar system's history and then quickly cooled, freezing their original properties -- including their composition and paleomagnetic signals -- in place. Scientists view angrites as exceptional recorders of the early solar system, particularly as the rocks also contain high amounts of uranium, which they can use to precisely determine their age. "Angrites are really spectacular," Weiss says. "Many of them look like what might be erupting on Hawaii, but they cooled on a very early planetesimal." Weiss and his colleagues analyzed four angrites that fell to Earth at different places and times. "One fell in Argentina, and was discovered when a farm worker was tilling his field," Weiss says. "It looked like an Indian artifact or bowl, and the landowner kept it by his house for about 20 years, until he finally decided to have it analyzed, and it turned out to be a really rare meteorite." The other three meteorites were discovered in Brazil, Antarctica, and the Sahara Desert. All four meteorites were remarkably well-preserved, having undergone no additional heating or major compositional changes since they originally formed. The team obtained samples from all four meteorites. By measuring the ratio of uranium to lead in each sample, previous studies had determined that the three oldest formed around 4.563 billion years ago. The researchers then measured the rocks' remnant magnetization using a precision magnetometer in the MIT Paleomagnetism Laboratory. "Electrons are little compass needles, and if you align a bunch of them in a rock, the rock becomes magnetized," Weiss explains. "Once they're aligned, which can happen when a rock cools in the presence of a magnetic field, then they stay that way. That's what we use as records of ancient magnetic fields." 
When they placed the angrites in the magnetometer, the researchers observed very little remnant magnetization, indicating there was very little magnetic field present when the angrites formed. The team went a step further and tried to reconstruct the magnetic field that would have produced the rocks' alignments, or lack thereof. To do so, they heated the samples up, then cooled them down again in a laboratory-controlled magnetic field. "We can keep lowering the lab field and can reproduce what's in the sample," Weiss says. "We find only very weak lab fields are allowed, given how little remnant magnetization is in these three angrites." Specifically, the team found that the angrites' remnant magnetization could have been produced by an extremely weak magnetic field of no more than 0.6 microteslas, 4.563 billion years ago, or about 4 million years after the start of the solar system. In 2014, Weiss' group analyzed other ancient meteorites that formed within the solar system's first 2 to 3 million years, and found evidence of a magnetic field that was about 10-100 times stronger -- about 5-50 microteslas. "It's predicted that once the magnetic field drops by a factor of 10-100 in the inner solar system, which we've now shown, the solar nebula goes away really quickly, within 100,000 years," Weiss says. "So even if the solar nebula hadn't disappeared by 4 million years, it was basically on its way out." The researchers' new estimate is much more precise than previous estimates, which were based on observations of faraway stars. "What's more, the angrites' paleomagnetism constrains the lifetime of our own solar nebula, while astronomical observations obviously measure other faraway solar systems," Wang adds. "Since the solar nebula lifetime critically affects the final positions of Jupiter and Saturn, it also affects the later formation of the Earth, our home, as well as the formation of other terrestrial planets." 
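The laboratory procedure described above, cooling samples in a known field and lowering that field until the re-acquired magnetization matches what is in the rock, amounts to scaling the lab field by the ratio of natural to lab-induced remanence. A minimal sketch of that comparison step, with invented measurement values rather than the team's data:

```python
# Sketch of a Thellier-style paleointensity estimate: the ancient field is
# approximated by the lab field scaled by the ratio of natural remnant
# magnetization (NRM) to the remanence acquired on cooling in the lab field.
# All measurement values below are invented for illustration.

def paleointensity_ut(nrm, lab_trm, lab_field_ut):
    """Estimated ancient field (microteslas) from an NRM/TRM ratio."""
    return lab_field_ut * (nrm / lab_trm)

nrm = 2.0e-9        # A m^2, assumed very weak natural remanence
lab_trm = 1.0e-7    # A m^2, assumed remanence acquired in the lab field
lab_field_ut = 30.0  # microteslas, assumed applied lab field

print(paleointensity_ut(nrm, lab_trm, lab_field_ut))  # ~0.6 microtesla
```

The linearity assumed here (remanence proportional to applied field) holds only for weak fields, which is the regime relevant to the angrite result.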
Now that the scientists have a better idea of how long the solar nebula persisted, they can also home in on how giant planets such as Jupiter and Saturn formed. Giant planets are mostly made of gas and ice, and there are two prevailing hypotheses for how all this material came together as a planet. One suggests that giant planets formed from the gravitational collapse of condensing gas, like the sun did. The other suggests they arose in a two-stage process called core accretion, in which bits of material smashed and fused together to form bigger rocky, icy bodies. Once these bodies were massive enough, they could have created a gravitational force that attracted huge amounts of gas to ultimately form a giant planet. According to previous predictions, giant planets that form through gravitational collapse of gas should complete their general formation within 100,000 years. Core accretion, in contrast, is typically thought to take much longer, on the order of 1 to several million years. Weiss says that if the solar nebula was around in the first 4 million years of solar system formation, this would give support to the core accretion scenario, which is generally favored among scientists. "The gas giants must have formed by 4 million years after the formation of the solar system," Weiss says. "Planets were moving all over the place, in and out over large distances, and all this motion is thought to have been driven by gravitational forces from the gas. We're saying all this happened in the first 4 million years." This research was supported, in part, by NASA and a generous gift from Thomas J. Peterson, Jr. Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov. 
NOTE: This news release was issued on February 9, 2017 by MIT News. Physicist Jun Wang and former postdoc (now scientific associate) Jiajun Wang of the U.S. Department of Energy's (DOE) Brookhaven National Laboratory contributed to the research. Using transmission x-ray microscopy (TXM) nanoimaging, they identified and characterized the chemical composition and the grain size of magnetic particles within ancient igneous meteorites. The analyses provided accurate estimates of the intensities of the early solar system's magnetic fields (as preserved in the meteorites); this information is key to setting the time scale of the evolution of the solar nebula--the cloud of dust and gas particles thought to be behind the development of our solar system--and the formation of planets. The TXM data were collected at Brookhaven's National Synchrotron Light Source (NSLS) and the Advanced Photon Source (APS) at DOE's Argonne National Laboratory through the NSLS-II TXM transition program. The TXM instrument was temporarily relocated to APS in 2015 when NSLS was replaced by NSLS-II. The instrument will remain at APS until its new beamline at NSLS-II is constructed. NSLS-II (and the former NSLS) and APS are DOE Office of Science User Facilities.


News Article | November 21, 2016
Site: www.cemag.us

They say diamonds are forever, but diamonds in fact are a metastable form of carbon that will slowly but eventually transform into graphite, another form of carbon. Being able to design and synthesize other long-lived, thermodynamically metastable materials could be a potential gold mine for materials designers, but until now, scientists lacked a rational understanding of them. Now researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have published a new study that, for the first time, explicitly quantifies the thermodynamic scale of metastability for almost 30,000 known materials. This paves the way for designing and making promising next-generation materials for use in everything from semiconductors to pharmaceuticals to steels. “There’s a great amount of possibility in the space of metastable materials, but when experimentalists go to the lab to make them, the process is very heuristic — it’s trial and error,” says Berkeley Lab researcher Wenhao Sun. “What we’ve done in this research is to understand the metastable phases that have been made, so that we can better understand which metastable phases can be made.” The research was published recently in the journal Science Advances in a paper titled, “The Thermodynamic Scale of Inorganic Crystalline Metastability.” Sun, a postdoctoral fellow working with Gerbrand Ceder in Berkeley Lab’s Materials Sciences Division, was the lead author, and Ceder was the corresponding author. The study involved large-scale data mining of the Materials Project, which is a Google-like database of materials that uses supercomputers to calculate properties based on first-principles quantum-mechanical frameworks. The Materials Project, directed by Berkeley Lab researcher Kristin Persson, who was also a co-author of the new paper, has calculated properties of more than 67,000 known and predicted materials with the goal of accelerating materials discovery and innovation. 
“Materials design and development is truly a slow process but is now being greatly accelerated by the fact that we can compute properties of compounds before they are made,” Ceder says. “Although we still don’t fully understand which materials can be made and how, mapping the underlying thermodynamics is an important first step.”

Metastable materials, or materials that transform to another state over a long period of time, are ubiquitous in both nature and technology and often have superior properties. Chocolate, for example, is metastable, with a lower melting point and better texture than stable chocolate. There are also metastable steels that have both toughness and strength, properties not normally found simultaneously in most stable steels.

Scientists would love to develop new materials with certain properties for various applications — an ultra-strong yet lightweight metal for vehicles, for example — but to make any new material with desired properties, materials scientists must understand how synthesizing the material influences its structure, and then how the structure in turn affects its properties and performance. This, Sun explains, is the fundamental paradigm of materials science.

“The Materials Project has helped us link a material’s structure to its properties,” Ceder says. “What we’ve done here is the first quantitative step in understanding synthesis-structure relationships.”

Sun offers an analogy to food: “If the Materials Project were a cookbook, it’d be like a database of ingredients and delicious dishes but no recipes. Designing recipes is difficult because scientists have a poor understanding of why metastable phases appear during ‘cooking.’ There are some applications where a metastable material is better, and others where the stable phases are better. This study sets a foundation to investigate how to use computers to predict recipes.”

Previously, scientists had thermodynamic numbers for fewer than 1,000 metastable compounds.
“It’s very hard to survey metastability over known materials because there’s not much data out there in terms of calorimetry, which is measuring thermodynamic numbers,” Sun says. What’s more, metastable materials come in many forms, spanning metal alloys, minerals, ceramics, salts, and more, making a comprehensive survey difficult.

“What we’ve done is large-scale data mining on nearly 30,000 observed materials to explicitly measure the thermodynamic scale of metastability, as a function of a wide variety of parameters, like chemistry and composition, which inorganic chemists and materials scientists can use to build intuition,” Sun says.

Based on their observations, the researchers went a step further, proposing a new principle they term “remnant metastability” to explain which metastable materials can be synthesized and which cannot. “We’re essentially proposing search criteria — we’re identifying which crystalline materials can be made, and possibly under what conditions they can be made,” Sun says. “We hope this can be a more refined way to think about which crystal structure nature chooses when a material forms.”

The other co-authors of the paper are Anubhav Jain of Berkeley Lab, Stephen Dacek and William Richards of MIT, Shyue Ping Ong and Anthony Gamst of UC San Diego, and Geoffroy Hautier of the Université Catholique de Louvain in Belgium.

The research was supported by the Materials Project and was a collaboration with the DOE’s Office of Science as part of its Center for Next Generation of Materials by Design, a DOE Energy Frontier Research Center. The researchers also used computing resources at the Center for Nanoscale Materials (CNM) at Argonne National Laboratory, as well as resources at the Center for Functional Nanomaterials (CFN) at Brookhaven National Laboratory. CNM and CFN are DOE Office of Science User Facilities.
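The quantity usually used to put metastability on a thermodynamic scale is a compound’s energy above the convex hull of its chemical system: zero for stable phases, positive for metastable ones. The Python sketch below is only a toy illustration of that bookkeeping; all compositions and formation energies are invented, and the actual study mined first-principles data from the Materials Project rather than anything like these numbers.

```python
# Toy example: "energy above the convex hull" as a measure of metastability
# in a hypothetical binary A-B system. All numbers below are made up.

def cross(o, a, b):
    """2D cross product of vectors o->a and o->b (positive = left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_hull(points):
    """Lower convex hull (monotone chain) of (composition, energy) points."""
    hull = []
    for p in sorted(points):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

def energy_above_hull(x, e, hull):
    """Vertical distance from phase (x, e) to the hull segment spanning x."""
    for (x1, e1), (x2, e2) in zip(hull, hull[1:]):
        if x1 <= x <= x2:
            e_hull = e1 + (e2 - e1) * (x - x1) / (x2 - x1)
            return e - e_hull
    raise ValueError("composition outside hull range")

# Hypothetical formation energies (eV/atom) vs. fraction of element B:
entries = {
    "A":       (0.0, 0.0),     # elemental reference
    "A3B":     (0.25, -0.40),  # stable
    "AB":      (0.50, -0.55),  # stable ground state at x = 0.5
    "AB_poly": (0.50, -0.48),  # metastable polymorph of AB
    "AB3":     (0.75, -0.30),  # stable
    "B":       (1.0, 0.0),     # elemental reference
}

hull = lower_hull(entries.values())
for name, (x, e) in entries.items():
    dE = energy_above_hull(x, e, hull)
    tag = "stable" if dE < 1e-9 else "metastable"
    print(f"{name:8s} E above hull = {dE * 1000:6.1f} meV/atom ({tag})")
```

Here the hypothetical polymorph `AB_poly` sits 70 meV/atom above the hull, the kind of number the study tabulated, from calculated rather than invented energies, for nearly 30,000 observed materials.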


A team of scientists and engineers with the Virtual Engine Research Institute and Fuels Initiative (VERIFI) (earlier post) at the US Department of Energy’s Argonne National Laboratory recently completed development of engineering simulation code and workflows that will allow as many as 10,000 engine simulations to be conducted simultaneously on Argonne’s supercomputer, Mira. These simulations are typical “engineering-type” smaller-scale simulations, which are used routinely for engine design within industry. This massive simulation capacity has opened up a new capability for industrial partners seeking new advanced engine designs.

The work was enabled by a recent award of 60 million core hours on Mira, the fifth-fastest supercomputer in the world, located at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. (Earlier post.)

Presently, engineers exploring new engine designs can run only a small number of simulations, perhaps 100, on cluster computers, which can take weeks to complete. Each change of a variable in an engine design, such as the piston bowl or fuel injector configuration, requires a new simulation, and changing multiple variables increases the number of required simulations exponentially, quickly using up the computing power available to industry. At the end of this lengthy simulation process, the engineers have data to select promising designs to take to a prototype hardware build, but based only on a limited number of simulations with a limited number of variables.

This new computing capability is a quantum leap from what anyone was doing before, and it holds the potential to unlock major breakthroughs in engine efficiency, as well as very substantial cost savings. In the past, doing 10,000 simulations was unthinkable.
Now we can do that in a matter of days.

—Sibendu Som, principal investigator and principal mechanical engineer at Argonne’s Center for Transportation Research

Som is presenting a paper on the team’s work on Monday at the 2016 SAE High Efficiency IC Engine Symposium. The presentation, “Enabling High Throughput Calculations on Supercomputers for Efficient Engine Design,” will highlight two specific applications of high-throughput computing. Other applications include engine design and strategy optimization.

Mira is often used to run a few huge simulations, but the VERIFI team wants it to run large numbers of smaller, engineering-type simulations simultaneously. To get there, the team had to optimize the source code of CONVERGE, engine simulation software from Convergent Science, Inc., and use an Argonne-developed, high-level programming language known as Swift to manage the massive workflows involved.

VERIFI is already working with a major auto manufacturer, a leading company in energy and transportation, and a global fuel supplier to put this unique capability to work. The power of supercomputing will not only increase the quality and quantity of simulations while reducing development costs, it will also broaden the number of vehicle systems that can be simulated at once. The smaller computer systems VERIFI has been working with have forced engineers to focus on isolated aspects of engine design, such as fuel injectors or the fluid dynamics of combustion. With this new approach, they can broaden their inquiry to the entire powertrain.

Funding for the project is provided by DOE’s Office of Science and the Office of Energy Efficiency and Renewable Energy’s Vehicle Technologies Office. Also assisting in the research was Marta García, assistant computational scientist at ALCF.
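The article names the actual tools, CONVERGE for the solver and the Swift workflow language to manage runs on Mira. The short Python sketch below is only a schematic stand-in for that high-throughput pattern: fan out many independent design-point evaluations, then rank the results. The `simulate` function and its smooth “efficiency” surrogate are hypothetical placeholders for a real CFD run.

```python
# Schematic high-throughput parameter sweep (illustration only; the real
# VERIFI workflow uses CONVERGE + Swift across supercomputer nodes).

import itertools
from concurrent.futures import ThreadPoolExecutor

def simulate(design):
    """Hypothetical stand-in for one engine simulation at a design point.

    A real run would launch a CFD solver; here a made-up smooth surrogate
    'efficiency' is peaked at bowl_depth=0.6, injection_angle=45 degrees.
    """
    bowl_depth, injection_angle = design
    return -(bowl_depth - 0.6) ** 2 - ((injection_angle - 45.0) / 30.0) ** 2

def sweep(bowl_depths, injection_angles, max_workers=8):
    """Evaluate every combination of design variables concurrently."""
    designs = list(itertools.product(bowl_depths, injection_angles))
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        scores = list(pool.map(simulate, designs))
    # highest-efficiency design points first
    return sorted(zip(scores, designs), reverse=True)

ranked = sweep([0.4, 0.5, 0.6, 0.7], [30.0, 45.0, 60.0])
print("best design point:", ranked[0][1])  # (bowl_depth, injection_angle)
```

A production workflow would distribute solver processes across thousands of nodes (which is what Swift orchestrates on Mira); threads are used here only to keep the sketch self-contained and runnable.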


News Article | November 23, 2015
Site: www.cemag.us

Entanglement is one of the strangest phenomena predicted by quantum mechanics, the theory that underlies most of modern physics: it says that two particles can be so inextricably connected that the state of one particle can instantly influence the state of the other, no matter how far apart they are.

A century ago, entanglement was at the center of intense theoretical debate, leaving scientists like Albert Einstein baffled. Today, entanglement is accepted as a fact of nature and is actively being explored as a resource for future technologies, including quantum computers, quantum communication networks, and high-precision quantum sensors.

Entanglement is also one of nature’s most elusive phenomena. Producing entanglement between particles requires that they start out in a highly ordered state, which is disfavored by thermodynamics, the branch of physics that governs the interplay between heat and other forms of energy. This poses a particularly formidable challenge when trying to realize entanglement at the macroscopic scale, among huge numbers of particles.

“The macroscopic world that we are used to seems very tidy, but it is completely disordered at the atomic scale. The laws of thermodynamics generally prevent us from observing quantum phenomena in macroscopic objects,” says Paul Klimov, a graduate student in the Institute for Molecular Engineering and lead author of new research on quantum entanglement. The institute is a partnership between UChicago and Argonne National Laboratory.

Previously, scientists have overcome the thermodynamic barrier and achieved macroscopic entanglement in solids and liquids by going to ultra-low temperatures (about −270 °C) and applying huge magnetic fields (1,000 times larger than that of a typical refrigerator magnet), or by using chemical reactions.
In a recent issue of Science Advances, Klimov and other researchers in Professor David Awschalom’s group at the Institute for Molecular Engineering demonstrated that macroscopic entanglement can be generated at room temperature and in a small magnetic field.

The researchers used infrared laser light to order (preferentially align) the magnetic states of thousands of electrons and nuclei, and then electromagnetic pulses, similar to those used for conventional magnetic resonance imaging (MRI), to entangle them. This procedure caused pairs of electrons and nuclei in a macroscopic 40-cubic-micrometer volume (about the volume of a red blood cell) of the semiconductor silicon carbide (SiC) to become entangled.

“We know that the spin states of atomic nuclei associated with semiconductor defects have excellent quantum properties at room temperature,” says Awschalom, the Liew Family Professor in Molecular Engineering and a senior scientist at Argonne. “They are coherent, long-lived and controllable with photonics and electronics. Given these quantum ‘pieces,’ creating entangled quantum states seemed like an attainable goal.”

In addition to being of fundamental physical interest, “the ability to produce robust entangled states in an electronic-grade semiconductor at ambient conditions has important implications on future quantum devices,” Awschalom says. In the short term, the techniques used here, in combination with sophisticated devices enabled by advanced SiC device-fabrication protocols, could enable quantum sensors that use entanglement as a resource for beating the sensitivity limit of traditional (non-quantum) sensors. Given that the entanglement works at ambient conditions and that SiC is bio-friendly, biological sensing inside a living organism is one particularly exciting application.

“We are excited about entanglement-enhanced magnetic resonance imaging probes, which could have important biomedical applications,” says Abram Falk of IBM’s Thomas J.
Watson Research Center and a co-author of the research findings. In the long term, it might even be possible to go from entangled states on the same SiC chip to entangled states across distant SiC chips. Such efforts could be facilitated by physical phenomena that allow macroscopic quantum states, as opposed to single quantum states (in single atoms), to interact very strongly with one another, which is important for producing entanglement with a high success rate. Such long-distance entangled states have been proposed for synchronizing global positioning satellites and for communicating information in a manner that is fundamentally secured from eavesdroppers by the laws of physics.
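The “inextricable connection” described above can be made concrete with a few lines of numerical linear algebra. The toy sketch below is a generic textbook illustration, not a model of the SiC experiment: it constructs the two-qubit Bell state and shows that each particle, taken alone, carries no definite state of its own. Its reduced density matrix has purity 1/2, the minimum possible for a qubit, whereas an unentangled product state keeps purity 1.

```python
# Numerical illustration of entanglement via the purity of a reduced state.

import numpy as np

def reduced_density_matrix(state):
    """Partial trace over the second qubit of a two-qubit pure state."""
    psi = state.reshape(2, 2)      # row index: qubit A, column index: qubit B
    return psi @ psi.conj().T      # rho_A = Tr_B |psi><psi|

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
product = np.array([1.0, 0.0, 0.0, 0.0])            # |00>, unentangled

for name, psi in [("Bell state", bell), ("product state", product)]:
    rho = reduced_density_matrix(psi)
    purity = np.trace(rho @ rho).real  # 1 = definite state, 1/2 = maximally mixed
    print(f"{name}: purity of one particle alone = {purity:.2f}")
```

That each half of the Bell pair is maximally mixed on its own, while the pair as a whole is in a definite pure state, is exactly the correlation resource that entanglement-enhanced sensors exploit.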


Albert Einstein would be rolling in his grave today if he were to find that there is indeed an unusually spook