Berkeley, CA, United States

The Lawrence Berkeley National Laboratory, also known as "Berkeley Lab", is a United States national laboratory located in the Berkeley Hills near Berkeley, California, that conducts unclassified scientific research on behalf of the United States Department of Energy. It is managed and operated by the University of California and overlooks the university's oldest campus, the University of California, Berkeley. Plans announced by the university in 2012 called for a second Berkeley Lab campus to be built on land it owns nearby at the Richmond Field Station.


Martin L.W.,University of Illinois at Urbana-Champaign | Chu Y.-H.,National Chiao Tung University | Ramesh R.,University of California at Berkeley | Ramesh R.,Lawrence Berkeley National Laboratory
Materials Science and Engineering R: Reports | Year: 2010

The growth and characterization of functional oxide thin films that are ferroelectric, magnetic, or both at the same time are reviewed. The evolution of synthesis techniques and how advances in in situ characterization have enabled significant acceleration in improvements to these materials are described. Methods for enhancing the properties of functional materials or creating entirely new functionality at interfaces are covered, including strain engineering and layering control at the atomic-layer level. Emerging applications of these functional oxides, such as electrical control of ferromagnetism, and the future of these complex functional oxides are discussed. © 2010 Elsevier B.V. All rights reserved.


Mony L.,University of California at Berkeley | Berger T.K.,University of California at Berkeley | Isacoff E.Y.,University of California at Berkeley | Isacoff E.Y.,Lawrence Berkeley National Laboratory
Nature Structural and Molecular Biology | Year: 2015

The Hv1 proton channel is unique among voltage-gated channels for containing the pore and gate within its voltage-sensing domain. Pore opening has been proposed to include assembly of the selectivity filter between an arginine (R3) of segment S4 and an aspartate (D1) of segment S1. We determined whether gating involves motion of S1, using Ciona intestinalis Hv1. We found that channel opening is concomitant with solution access to the pore-lining face of S1, from the cytoplasm to deep inside the pore. Voltage- and patch-clamp fluorometry showed that this involves a motion of S1 relative to its surroundings. S1 motion and the S4 motion that precedes it are each influenced by residues on the other helix, thus suggesting a dynamic interaction between S1 and S4. Our findings suggest that the S1 of Hv1 has specialized to function as part of the channel's gate. © 2015 Nature America, Inc. All rights reserved.


Cappa C.D.,University of California at Davis | Wilson K.R.,Lawrence Berkeley National Laboratory
Atmospheric Chemistry and Physics | Year: 2011

Vacuum Ultraviolet (VUV) photoionization mass spectrometry has been used to measure the evolution of chemical composition for two distinct organic aerosol types as they are passed through a thermodenuder at different temperatures. The two organic aerosol types considered are primary lubricating oil (LO) aerosol and secondary aerosol from the α-pinene + O3 reaction (αP). The evolution of the VUV mass spectra for the two aerosol types with temperature is observed to differ dramatically. For LO particles, the spectra exhibit distinct changes with temperature in which the lower-mass peaks, corresponding to compounds with higher vapor pressures, disappear more rapidly than the higher-mass peaks. In contrast, the αP aerosol spectrum is essentially unchanged by temperature even though the particles experience significant mass loss due to evaporation. The variations in the LO spectra are found to be quantitatively in agreement with expectations from absorptive partitioning theory, whereas the αP spectra suggest that the evaporation of αP-derived aerosol appears not to be governed by partitioning theory. We postulate that this difference arises from diffusivity within the αP particles being sufficiently slow that they do not exhibit the expected liquid-like behavior and perhaps exist in a glassy state. To reconcile these observations with decades of aerosol growth measurements, which indicate that OA formation is described by equilibrium partitioning, we present a conceptual model wherein the secondary OA is formed and then rapidly converted from an absorbing form to a non-absorbing form. The results suggest that, although OA growth may be describable by equilibrium partitioning theory, the properties of organic aerosol once formed may differ significantly from the properties determined in the equilibrium framework. © 2011 Author(s).
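For readers unfamiliar with the absorptive partitioning framework invoked above, a minimal numerical sketch may help. It uses the standard equilibrium partitioning expression, in which the particle-phase fraction of a species is 1/(1 + C*/C_OA), with the saturation concentration C* shifted with temperature through the Clausius-Clapeyron relation; the component properties and organic aerosol loading below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of absorptive (equilibrium) partitioning, the framework the
# lubricating-oil (LO) behavior is compared against above. All numbers are
# illustrative assumptions, not values from the study.
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def c_star(c_star_298, dh_vap_kj, temp_k):
    """Saturation concentration (ug m^-3) at temp_k via the Clausius-Clapeyron relation."""
    dh = dh_vap_kj * 1e3
    return c_star_298 * (298.0 / temp_k) * math.exp((dh / R) * (1.0 / 298.0 - 1.0 / temp_k))

def particle_fraction(c_star_t, c_oa):
    """Equilibrium particle-phase fraction of one species at OA loading c_oa (ug m^-3)."""
    return 1.0 / (1.0 + c_star_t / c_oa)

# Two hypothetical components: (C* at 298 K in ug m^-3, vaporization enthalpy in kJ mol^-1).
components = {"more volatile": (10.0, 80.0), "less volatile": (0.1, 100.0)}
c_oa = 20.0  # assumed total organic aerosol mass loading, ug m^-3

for temp_k in (298.0, 330.0, 360.0):  # thermodenuder-like temperatures
    fracs = {name: particle_fraction(c_star(cs, dh, temp_k), c_oa)
             for name, (cs, dh) in components.items()}
    print(temp_k, {name: round(f, 3) for name, f in fracs.items()})
```

Under these assumptions the more volatile component leaves the particle phase first as temperature rises, which is the liquid-like behavior the LO spectra follow and the αP spectra do not.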


Luo Y.,Beijing Normal University | Wang L.-W.,Lawrence Berkeley National Laboratory
ACS Nano | Year: 2010

The electronic structures of the CdSe/CdS core-shell nanorods are systematically investigated by large-scale first-principles-quality calculations. The effects of band alignment, quantum confinement, piezoelectric field, and dipole moments are analyzed and delineated by comparing the results of systems with or without some of these attributes. We found complicated interplays between these effects in determining the nanorod band gap and the electron and hole wave-function localizations. The hole wave function is found to be localized inside the CdSe core, while the electron wave function is localized in the CdS shell, with its distance to the CdSe core depending on the surface passivation. The permanent dipole moment induced by different surface passivations can change the electron-hole separation, while the piezoelectric effect plays a relatively minor role. Finally, we demonstrate that it is straightforward to manipulate the nanorod electronic structure by changing its CdSe core position. © 2010 American Chemical Society.


Nord A.S.,University of California at Davis | Pattabiraman K.,University of California at San Francisco | Visel A.,Lawrence Berkeley National Laboratory | Visel A.,U.S. Department of Energy | And 2 more authors.
Neuron | Year: 2015

The forebrain is the seat of higher-order brain functions, and many human neuropsychiatric disorders are due to genetic defects affecting forebrain development, making it imperative to understand the underlying genetic circuitry. Recent progress now makes it possible to begin fully elucidating the genomic regulatory mechanisms that control forebrain gene expression. Herein, we discuss the current knowledge of how transcription factors drive gene expression programs through their interactions with cis-acting genomic elements, such as enhancers; how analyses of chromatin and DNA modifications provide insights into gene expression states; and how these approaches yield insights into the evolution of the human brain. © 2015 Elsevier Inc.


Cappa C.D.,University of California at Davis | Wilson K.R.,Lawrence Berkeley National Laboratory
Atmospheric Chemistry and Physics | Year: 2012

A new model of secondary organic aerosol (SOA) formation is developed that explicitly takes into account multi-generational oxidation as well as fragmentation of gas-phase compounds, and assumes equilibrium gas-particle partitioning. The model framework requires specification of a limited number of tunable parameters to describe the kinetic evolution of SOA mass, the average oxygen-to-carbon atomic ratio and the mean particle volatility as oxidation proceeds. These parameters describe (1) the relationship between oxygen content and volatility, (2) the probability of fragmentation and (3) the amount of oxygen added per reaction. The time-evolution and absolute value of the simulated SOA mass depends sensitively on all tunable parameters. Of the tunable parameters, the mean O : C is most sensitive to the oxygen/volatility relationship, exhibiting only a weak dependence on the other relationships. The model mean particle O : C produced from a given compound is primarily controlled by the number of carbon atoms comprising the SOA precursor, with some sensitivity to the specified oxygen/volatility relationship. The model is tested against laboratory measurements of time-dependent SOA formation from the photooxidation of α-pinene and n-pentadecane and performs well (after tuning). The model can also accurately simulate the carbon-number dependence of aerosol yields previously observed for oxidation of straight-chain alkanes. This model may provide a generalized framework for the interpretation of laboratory SOA formation experiments in which explicit consideration of multiple-generations of products is required, which is true for all photo-oxidation experiments. © 2012 Author(s).
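To make the bookkeeping in such a framework concrete, the toy sketch below steps a single precursor molecule through successive oxidation generations: each generation either fragments the carbon skeleton with some probability or adds oxygen, and added oxygen lowers the volatility by an assumed number of decades. The rules and parameter values are placeholders chosen for illustration, not the parameterization published in the paper.

```python
# Toy sketch of multigenerational gas-phase oxidation with fragmentation, in the
# spirit of the model described above. Rules and parameters are placeholder
# assumptions, not the published parameterization.
import random

P_FRAG_PER_O = 0.1      # assumed: fragmentation probability grows with oxygen content
O_ADDED_PER_RXN = 1.5   # assumed mean number of oxygens added per functionalization
DECADES_PER_O = 1.7     # assumed decrease in log10(C*) per added oxygen

def oxidize(n_carbon, n_oxygen, log10_cstar):
    """Apply one oxidation generation to a single molecule."""
    if random.random() < min(1.0, P_FRAG_PER_O * n_oxygen):
        # Fragmentation: split the carbon skeleton; volatility increases.
        n_carbon = max(1, n_carbon // 2)
        log10_cstar += 2.0  # assumed volatility increase on fragmentation
    else:
        # Functionalization: add oxygen; volatility decreases.
        n_oxygen += O_ADDED_PER_RXN
        log10_cstar -= O_ADDED_PER_RXN * DECADES_PER_O
    return n_carbon, n_oxygen, log10_cstar

random.seed(0)
nc, no, logc = 15, 0.0, 7.0  # hypothetical C15 (n-pentadecane-like) precursor
for generation in range(1, 6):
    nc, no, logc = oxidize(nc, no, logc)
    print(f"gen {generation}: C={nc}, O={no:.1f}, O:C={no / nc:.2f}, log10(C*)={logc:.1f}")
```

A full model of this kind would track a population of products and couple the resulting volatility distribution to equilibrium gas-particle partitioning to predict SOA mass, mean O:C, and mean particle volatility, as the abstract describes.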


Wang F.,Peking University | Kivelson S.A.,Stanford University | Lee D.-H.,University of California at Berkeley | Lee D.-H.,Lawrence Berkeley National Laboratory
Nature Physics | Year: 2015

In common with other iron-based high-temperature superconductors, FeSe exhibits a transition to a nematic phase below 90 K in which the crystal rotation symmetry is spontaneously broken. However, the absence of strong low-frequency magnetic fluctuations near or above the transition has been interpreted as implying the primacy of orbital ordering. In contrast, we establish that quantum fluctuations of spin-1 local moments with strongly frustrated exchange interactions can lead to a nematic quantum paramagnetic phase consistent with the observations in FeSe. We show that this phase is a fundamental expression of the existence of a Berry's phase associated with the topological defects of a Néel antiferromagnet, in a manner analogous to that which gives rise to valence bond crystal order for spin-1/2 systems. We present an exactly solvable model realizing the nematic quantum paramagnetic phase, discuss its relation with the spin-1 J1-J2 model, and construct a field theory of the Landau-forbidden transition between the Néel state and this nematic quantum paramagnet. © 2015 Macmillan Publishers Limited. All rights reserved.
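For reference, the spin-1 J1-J2 model mentioned above is the frustrated Heisenberg model on the square lattice; its textbook definition (not taken from the paper) is

\[ H \;=\; J_1 \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j \;+\; J_2 \sum_{\langle\langle i,j \rangle\rangle} \mathbf{S}_i \cdot \mathbf{S}_j , \]

where the first sum runs over nearest-neighbor bonds, the second over next-nearest-neighbor (diagonal) bonds, each S_i is a spin-1 operator, and the competition between Néel and stripe order is strongest near J2 ≈ J1/2 in the classical limit.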


Doudna J.A.,Howard Hughes Medical Institute | Doudna J.A.,University of California at Berkeley | Doudna J.A.,Lawrence Berkeley National Laboratory
Annual Review of Biophysics | Year: 2013

Small RNA molecules regulate eukaryotic gene expression during development and in response to stresses including viral infection. Specialized ribonucleases and RNA-binding proteins govern the production and action of small regulatory RNAs. After initial processing in the nucleus by Drosha, precursor microRNAs (pre-miRNAs) are transported to the cytoplasm, where Dicer cleavage generates mature microRNAs (miRNAs) and short interfering RNAs (siRNAs). These double-stranded products assemble with Argonaute proteins such that one strand is preferentially selected and used to guide sequence-specific silencing of complementary target mRNAs by endonucleolytic cleavage or translational repression. Molecular structures of Dicer and Argonaute proteins, and of RNA-bound complexes, have offered exciting insights into the mechanisms operating at the heart of RNA-silencing pathways. Copyright © 2013 by Annual Reviews.


Catalan G.,Catalan Institution for Research and Advanced Studies | Catalan G.,Catalan Institute of Nanoscience and Nanotechnology | Seidel J.,Lawrence Berkeley National Laboratory | Seidel J.,University of California at Berkeley | And 4 more authors.
Reviews of Modern Physics | Year: 2012

Domains in ferroelectrics were considered to be well understood by the middle of the last century: They were generally rectilinear, and their walls were Ising-like. Their simplicity stood in stark contrast to the more complex Bloch walls or Néel walls in magnets. Only within the past decade and with the introduction of atomic-resolution studies via transmission electron microscopy, electron holography, and atomic force microscopy with polarization sensitivity has their real complexity been revealed. Additional phenomena appear in recent studies, especially of magnetoelectric materials, where functional properties inside domain walls are being directly measured. In this paper these studies are reviewed, focusing attention on ferroelectrics and multiferroics but making comparisons where possible with magnetic domains and domain walls. An important part of this review will concern device applications, with the spotlight on a new paradigm of ferroic devices where the domain walls, rather than the domains, are the active element. Here magnetic wall microelectronics is already in full swing, owing largely to the work of Cowburn and of Parkin and their colleagues. These devices exploit the high domain wall mobilities in magnets and their resulting high velocities, which can be supersonic, as shown by Kreines and co-workers 30 years ago. By comparison, nanoelectronic devices employing ferroelectric domain walls often have slower domain wall speeds, but may exploit their smaller size as well as their different functional properties. These include domain wall conductivity (metallic or even superconducting in bulk insulating or semiconducting oxides) and the fact that domain walls can be ferromagnetic while the surrounding domains are not. © 2012 American Physical Society.


Haxton W.C.,University of California at Berkeley | Haxton W.C.,Lawrence Berkeley National Laboratory | Hamish Robertson R.G.,University of Washington | Serenelli A.M.,Instituto de Ciencias del Espacio (CSIC-IEEC) | Serenelli A.M.,Max Planck Institute for Astrophysics
Annual Review of Astronomy and Astrophysics | Year: 2013

We describe the current status of solar neutrino measurements and of the theory - both neutrino physics and solar astrophysics - employed in interpreting measurements. Important recent developments include Super-Kamiokande's determination of the ν-e elastic scattering rate for 8B neutrinos to 3%; the latest Sudbury Neutrino Observatory (SNO) global analysis in which the inclusion of low-energy data from SNO I and II significantly narrowed the range of allowed values for the neutrino mixing angle θ12; Borexino results for both the 7Be and proton-electron-proton (pep) neutrino fluxes, the first direct measurements constraining the rate of proton-proton (pp) I and pp II burning in the Sun; global reanalyses of solar neutrino data that take into account new reactor results on θ13; a new decadal evaluation of the nuclear physics of the pp chain and CNO cycle defining best values and uncertainties in the nuclear microphysics input to solar models; recognition of an emerging discrepancy between two tests of solar metallicity, helioseismological mappings of the sound speed in the solar interior, and analyses of the metal photoabsorption lines based on our best current description of the Sun's photosphere; a new round of standard solar model calculations optimized to agree either with helioseismology or with the new photospheric analysis; and, motivated by the solar abundance problem, the development of nonstandard, accreting solar models, in order to investigate possible consequences of the metal segregation that occurred in the proto-solar disk. We review this progress and describe how new experiments such as SNO+ could help us further exploit neutrinos as a unique probe of stellar interiors. Copyright ©2013 by Annual Reviews. All rights reserved.


News Article | August 31, 2016
Site: www.greencarcongress.com

Among the 43 small businesses participating in the newly announced second round of the US Department of Energy (DOE) Small Business Vouchers (SBV) pilot are 10 projects in the area of hydrogen and fuel cells. These selected projects are aimed at reducing the costs and improving the performance of hydrogen and fuel cell technologies, such as finding ways to lower the cost of membrane electrode assemblies (MEAs), a critical component of fuel cell stacks and electrolyzers. Some of these collaborations will explore using lower-cost materials, reducing the quantity of pricey raw material required, and developing better and more precise methods of manufacturing MEAs.

The ten companies receiving Round 2 SBVs for hydrogen and fuel cell projects include:

- Altergy. Altergy will work with the National Renewable Energy Laboratory (NREL) to leverage lab expertise to reduce cost and improve performance in its proprietary fuel cell products. The objective of the work is to observe the fabrication of MEAs at NREL using non-proprietary methods, and to evaluate MEAs made at NREL with ETFECS-based catalyst to reduce costs and/or enhance durability.

- American Fuel Cell. AFC will work with Oak Ridge National Laboratory (ORNL) to optimize deposition techniques for roll-to-roll direct coating of electrodes on anode and cathode gas diffusion media, leading to improved quality and lower-cost manufacturing for various fuel cell applications.

- Garmor Inc. Garmor, working with NREL, will utilize existing bipolar plate (BPP) technology to develop a BPP incorporating composites used to form microstructured surface features that improve BPP surface tension for water droplet control.

- Ion Power. Ion Power will work with Los Alamos National Laboratory (LANL) to analyze various performance-improvement mechanisms and their impact on the durability of its fuel cell catalyst layer.

- NanoSonic. NanoSonic, working with LANL, will receive assistance in the preparation of membrane electrode assemblies and in fuel cell testing and validation of new anion exchange fuel cell membranes.

- Nzyme2HC LLC. Nzyme2HC has a novel acellular biological approach to scalable, clean, and potentially low-cost hydrogen generation, combining minimal electric current, bacterially extracted hydrogenase, and industrial waste as the feedstocks and production materials. RFA award monies would go toward building an end-to-end bench prototype at NREL for testing and metrics.

- Oorja Fuel Cells. Oorja Fuel Cells will work with NREL to determine the efficacy and performance of a membrane electrode assembly (MEA) under the operating conditions of Oorja’s direct methanol fuel cell (DMFC), and to compare the results with the commercially available MEA used in the Oorja DMFC power system.

- Opus 12. Opus 12, working with Lawrence Berkeley National Laboratory, will receive assistance to develop a computational model of ion, product, reactant, and water transport within its PEM electrolyzer for the co-electrolysis of water and carbon dioxide to make syngas. The model will aid in increasing system performance to make Opus 12’s syngas cost-competitive with traditional syngas sources.

- Pajarito Powder. Pajarito will work with LANL to improve electrode structures and to implement electrode manufacturing techniques that allow full catalyst utilization. Ultimately, this work will increase customer confidence in the technology and accelerate commercial adoption.

- Proton Energy Systems. Proton Energy Systems will work with NREL to develop, design, and validate a small-scale advanced power converter to provide a pathway for reduced capital cost and improved efficiency of Proton’s M-series electrolyzer, which offers a carbon-free source of hydrogen fuel or process gas.


Researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a new, environmentally stable solid-state hydrogen storage material constructed of Mg nanocrystals encapsulated by atomically thin and gas-selective reduced graphene oxide (rGO) sheets. This material, protected from oxygen and moisture by the rGO layers, exhibits dense hydrogen storage (6.5 wt% and 0.105 kg H per liter in the total composite). As rGO is atomically thin, this approach minimizes inactive mass in the composite, while also providing a kinetic enhancement to hydrogen sorption performance. These multilaminates of rGO-Mg are able to deliver exceptionally dense hydrogen storage and provide a material platform for harnessing the attributes of sensitive nanomaterials in demanding environments. An open-access paper on the work is published in the journal Nature Communications.

Metal hydrides for solid-state hydrogen storage are one of the few materials capable of providing sufficient storage density required to meet these long-term targets. However, simultaneously meeting gravimetric, volumetric, thermodynamic and kinetic requirements has proven challenging, owing to the strong binding enthalpies for the metal hydride bonds, long diffusion path lengths and oxidative instability of zero-valent metals. Although nanostructuring has been shown to optimize binding enthalpies, synthesis and oxidative stabilization of metallic nanocrystals remains a challenge. Protection strategies against oxidization and sintering of nanocrystals often involve embedding these crystals in dense matrices, which add considerable ‘dead’ mass to the composite, in turn decreasing gravimetric and volumetric density. Thus, although metal hydrides show the most promise for non-cryogenic applications, it remains true that no single material has met all of these essential criteria.

These graphene-encapsulated magnesium crystals act as “sponges” for hydrogen, offering a very compact and safe way to take in and store hydrogen. The nanocrystals also permit faster fueling, and reduce the overall “tank” size. The graphene shields the nanocrystals from oxygen and moisture and contaminants, while tiny, natural holes allow the smaller hydrogen molecules to pass through. This filtering process overcomes common problems degrading the performance of metal hydrides for hydrogen storage.

The research, conducted at Berkeley Lab’s Molecular Foundry and Advanced Light Source, is part of a National Lab Consortium, dubbed HyMARC (Hydrogen Materials—Advanced Research Consortium) (earlier post), that seeks safer and more cost-effective hydrogen storage. The work was supported by the Department of Energy Office of Basic Energy Sciences and Office of Energy Efficiency and Renewable Energy, the Bay Area Photovoltaic Consortium (BAPVC), and the US-India Partnership to Advance Clean Energy-Research (PACE-R) for the Solar Energy Research Institute for India and the US (SERIIUS).
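To put the storage figures quoted above in perspective, here is a back-of-the-envelope sketch using only the article's two numbers (6.5 wt% and 0.105 kg of hydrogen per liter of composite); the 50-liter volume is an arbitrary assumption for illustration.

```python
# Back-of-the-envelope check on the rGO-Mg storage figures quoted above.
# The tank volume is an arbitrary illustrative assumption.

GRAVIMETRIC = 0.065   # 6.5 wt% hydrogen by mass of total composite
VOLUMETRIC = 0.105    # kg of hydrogen per liter of total composite

tank_volume_l = 50.0  # hypothetical storage volume in liters

hydrogen_kg = VOLUMETRIC * tank_volume_l      # ~5.3 kg of hydrogen
composite_kg = hydrogen_kg / GRAVIMETRIC      # ~81 kg of rGO-Mg composite
composite_density = VOLUMETRIC / GRAVIMETRIC  # ~1.6 kg of composite per liter

print(f"hydrogen stored:  {hydrogen_kg:.1f} kg")
print(f"composite mass:   {composite_kg:.1f} kg")
print(f"implied density:  {composite_density:.2f} kg/L")
```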


News Article | November 3, 2016
Site: www.eurekalert.org

Scientists at Lehigh University, in collaboration with Lawrence Berkeley National Laboratory, have demonstrated the fabrication of what they call a new class of crystalline solid by using a laser heating technique that induces atoms to organize into a rotating lattice without affecting the macroscopic shape of the solid. By controlling the rotation of the crystalline lattice, the researchers say they will be able to make a new type of synthetic single crystals and "bio-inspired" materials that mimic the structure of special biominerals and their superior electronic and optical properties as well.

The group reported its findings today (Nov. 3) in Scientific Reports, a Nature journal, in an article titled "Rotating lattice single crystal architecture on the surface of glass." The paper's lead author is Dmytro Savytskii, a research scientist in the department of materials science and engineering at Lehigh. The other authors are Volkmar Dierolf, distinguished professor and chair of the department of physics at Lehigh; Himanshu Jain, the T.L. Diamond Distinguished Chair in Engineering and Applied Science and professor of materials science and engineering at Lehigh; and Nobumichi Tamura of the Lawrence Berkeley National Lab in Berkeley, California.

The development of the rotating lattice single (RLS) crystals follows a discovery reported in March in Scientific Reports in which the Lehigh group demonstrated for the first time that a single crystal could be grown from glass without melting the glass.

In a typical crystalline solid, atoms are arranged in a lattice, a regularly repeating, or periodic, three-dimensional structure. When viewed from any angle--left to right, up and down, front to back--a crystal-specific periodicity becomes evident. Glass, by contrast, is an amorphous material with a disordered atomic structure. Because they have no grain boundaries between interconnecting crystals, single-crystal materials often possess exceptional mechanical, optical and electrical properties. Single crystals give diamonds their brilliance and jet turbine blades their resistance to mechanical forces. And the single crystal of silicon of which a silicon chip is made gives it superior conducting properties that form the basis for microelectronics.

The periodicity, or repeating pattern, in a rotating lattice single crystal, said Jain and Dierolf, differs from the periodicity in a typical single crystal. "We have found that when we grow a crystal out of glass," said Jain, "the periodicity does not result the same way. In one direction, it looks perfect, but if you turn the lattice and look at it from a different angle, you see that the whole structure is rotating."

"In a typical single-crystal material," said Dierolf, "once I figure out how the pattern repeats, then, if I know the precise location of one atom, I can predict the precise location of every atom. This is possible only because single crystals possess a long-range order. "When we grow an RLS crystal out of glass, however, we have found that the periodicity does not result the same way. To predict the location of every atom, I have to know not just the precise location of a particular atom but the rotation angle of the lattice as well. "Thus, we have to slightly modify the textbook definition of single crystals."

The rotation, said Jain, occurs at the atomic scale and does not affect the shape of the glass material. "Only the string of atoms bends, not the entire material. We can see the bending of the crystal lattice with x-ray diffraction."
To achieve this rotation, the researchers heat a very small portion of the surface of a solid glass material with a laser, which causes the atoms to become more flexible. "The atoms want to arrange in a straight line but the surrounding glass does not allow this," said Jain. "Instead, the glass, being completely solid, forces the configuration of the atoms to bend. The atoms move and try to organize in a crystalline lattice, ideally in a perfect single crystal, but they cannot because the glass prevents the perfect crystal from forming and forces the atoms to arrange in a rotational lattice. The beauty is that the rotation occurs smoothly on the micrometer scale. "Our laser imposes a degree of asymmetry on the growth of the crystal. We control the asymmetry of the heating source to impose this rotational pattern on the atoms." The group's ability to control the amount of heating is critical to the formation of the rotating lattice, said Jain. "The key to the creation of the rotating atomic lattice is that it occurs without melting the glass. Melting allows too much freedom of atomic movement, which makes it impossible to control the organization of the lattice. "Our subtle way of heating the glass overcomes this. We heat only the surface of the glass, not inside. This is very precise, very localized heating. It causes only a limited movement of the atoms, and it allows us to control how the atomic lattice will bend." Rotating lattices have been observed in certain biominerals in the ocean, said Jain and Dierolf, and it may also occur on a very small scale in some natural minerals as spherulites. "But no one had previously made this on a larger scale in a controlled way, which we have accomplished with the asymmetrical imposition of a laser to cause the rotating lattice," said Jain. "Scientists were not able to understand this phenomenon before because they could not observe it on a large enough scale. We are the first group to induce this to happen on an effectively unlimited dimension with a laser." Jain and Dierolf and their group are planning further studies to improve their ability to manipulate the ordering of the atoms. The researchers performed the laser heating of the glass at Lehigh and characterized the glass with micro x-ray diffraction on a synchrotron at the Lawrence Berkeley National Lab. They plan to perform further characterization at Berkeley and with electron microscopy at Lehigh. The project has been funded for six years by the U.S. Department of Energy. "This is a novel way of making single crystals," said Dierolf. "It opens a new field by creating a material with unique, novel properties."


News Article | November 29, 2016
Site: globenewswire.com

Copenhagen, 2016-11-29 12:30 CET (GLOBE NEWSWIRE) -- The Carlsberg Forum series reflects the strong link between science and business. Each year, the Kaj Linderstrøm-Lang awards are given to prominent scientists for their achievements within biochemistry or physiology, the fields of science in which Kaj Linderstrøm-Lang, a professor at Carlsberg Research Laboratory in the period 1939-1959, distinguished himself as a pioneer. This year, Professor Henrik V. Scheller, Joint BioEnergy Institute, Lawrence Berkeley National Laboratory, USA and Professor Geoff Fincher, School of Agriculture, Food & Wine, The University of Adelaide, Australia received this year’s Kaj Linderstrøm-Lang Prize as an acknowledgement of their outstanding achievements on identifying and characterizing enzymes involved in synthesis and modification of the plant cell wall. The third Kaj Linderstrøm-Lang Prize was awarded to Professor Richard Henderson, MRC Laboratory of Molecular Biology, Cambridge, UK for his pioneering work towards obtaining high resolution atomic structures of membrane proteins and membrane protein complexes by electron cryomicroscopy. The prize is a plated gold medal and a financial personal award of DKK 40.000. Finally, the Emil Chr. Hansen Golden Medal was awarded to Jef Boeke, Director, Institute for Systems Genetics at New York University, for his seminal contributions to yeast genome scrambling and the design of synthetic yeast. The Emil Chr. Hansen Foundation sponsors the Emil Chr. Hansen Golden Medal. Professor Emil Chr. Hansen worked at the Carlsberg Laboratory from 1877-1909, and was the first to isolate a pure yeast that revolutionized the brewing industry. Birgitte Skadhauge, Head of Carlsberg Research Laboratory, says: “The distinguished work of Carlsberg Research Laboratory scientists Kaj Linderstrøm-Lang and Emil Chr. Hansen made it possible to have the high quality beers we have today. This year marks the 140th anniversary of the Carlsberg Research Laboratory, and we found it particularly important to celebrate the connection between beer and science this year by honouring not only Professors Richard Henderson, Geoff Fincher and Henrik Scheller but also Jef Boeke with the Emil Christian Hansen Golden Medal for their notable scientific contributions to business and society.” The program for this year’s Carlsberg Forum consisted of eight lectures by outstanding international scientists and businesspersons spanning trend-setting research in e.g. yeast, cereal-crops, food for health & well-being and new technologies. The business part of the program included lectures on business and health challenges in a globalized and rapidly changing world. In a special endnote address, Birgitte Skadhauge reviewed the Laboratory’s anniversary and the famed re-brew project. Scientists at the Laboratory made an extraordinary discovery in the old cellars of Carlsberg in Copenhagen, Denmark, some years ago. They discovered a very old Carlsberg bottle that surprisingly still contained living yeast cells. They grew and analyzed the cells, and, as it turns out, the bottle was one of the very first beers brewed with the original pure yeast from 1883 when Carlsberg democratized modern lager beers. Earlier this year, specially invited guests celebrated the Laboratory’s 140th anniversary by tasting, for the first time in more than a hundred years, the original Carlsberg quality lager, re-brewed by the Laboratory. 
Birgitte Skadhauge says: “What better way to take science to business than by re-brewing the father of quality lager. The Laboratory has pioneered beer quality since it was founded 140 years ago. On the eve of our anniversary, we are proud to have demonstrated scientifically why Carlsberg is probably the best beer in the world.” Warner Bros’ documentary about the re-brew project premiered in Copenhagen and Fredericia, Denmark, on 24 and 25 November. A movie trailer is available on rebrewproject.com.

The Carlsberg Group is one of the leading brewery groups in the world, with a large portfolio of beer and other beverage brands. Our flagship brand – Carlsberg – is one of the best-known beer brands in the world and the Baltika, Carlsberg and Tuborg brands are among the eight biggest brands in Europe. More than 47,000 people work for the Carlsberg Group, and our products are sold in more than 150 markets. In 2015, the Carlsberg Group sold 120 million hectolitres of beer, which is more than 35 billion bottles of beer. Find out more at www.carlsberggroup.com.


News Article | December 19, 2016
Site: www.cemag.us

Scientists have enlisted the exotic properties of graphene, a one-atom-thick layer of carbon, to function like the film of an incredibly sensitive camera system in visually mapping tiny electric fields in a liquid. Researchers hope the new method will allow more extensive and precise imaging of the electrical signaling networks in our hearts and brains. The ability to visually depict the strength and motion of very faint electrical fields could also aid in the development of so-called lab-on-a-chip devices that use very small quantities of fluids on a microchip-like platform to diagnose disease or aid in drug development, for example, or that automate a range of other biological and chemical analyses. The setup could potentially be adapted for sensing or trapping specific chemicals, too, and for studies of light-based electronics (a field known as optoelectronics).

“This was a completely new, innovative idea that graphene could be used as a material to sense electrical fields in a liquid,” says Jason Horng, a co-lead author of a study published in Nature Communications that details the first demonstration of this graphene-based imaging system. Horng is affiliated with the Kavli Energy NanoSciences Institute, a joint institute at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley, and is a postdoctoral researcher at UC Berkeley.

The idea sprang from a conversation between Feng Wang, a faculty scientist in Berkeley Lab’s Materials Sciences Division whose research focuses on the control of light-matter interactions at the nanoscale, and Bianxiao Cui, who leads a research team at Stanford University that specializes in the study of nerve-cell signaling. Wang is also a UC Berkeley associate professor of physics, and Cui is an associate professor of chemistry at Stanford University.

“The basic concept was how graphene could be used as a very general and scalable method for resolving very small changes in the magnitude, position, and timing pattern of a local electric field, such as the electrical impulses produced by a single nerve cell,” says Halleh B. Balch, a co-lead author in the work. Balch is also affiliated with the Kavli Energy NanoSciences Institute and is a physics PhD student at UC Berkeley.

“One of the outstanding problems in studying a large network of cells is understanding how information propagates between them,” Balch says. Other techniques have been developed to measure electrical signals from small arrays of cells, though these methods can be difficult to scale up to larger arrays and in some cases cannot trace individual electrical impulses to a specific cell. Also, Cui says, “This new method does not perturb cells in any way, which is fundamentally different from existing methods that use either genetic or chemical modifications of the cell membrane.” The new platform should more easily permit single-cell measurements of electrical impulses traveling across networks containing 100 or more living cells, researchers say.

Graphene, which is composed of a honeycomb arrangement of carbon atoms, is the focus of intense R&D because of its incredible strength, ability to very efficiently conduct electricity, high degree of chemical stability, the speed at which electrons can move across its surface, and other exotic properties. Some of this research is focused on the use of graphene as a component in computer circuits and display screens, in drug delivery systems, and in solar cells and batteries.
In the latest study, researchers first used infrared light produced at Berkeley Lab’s Advanced Light Source to understand the effects of an electric field on graphene’s absorption of infrared light. In the experiment, they aimed an infrared laser through a prism to a thin layer called a waveguide. The waveguide was designed to precisely match graphene’s light-absorbing properties so that all of the light was absorbed along the graphene layer in the absence of an electric field. Researchers then fired tiny electrical pulses in a liquid solution above the graphene layer that very slightly disrupted the graphene layer’s light absorption, allowing some light to escape in a way that carried a precise signature of the electrical field. Researchers captured a sequence of images of this escaping light in thousandths-of-a-second intervals, and these images provided a direct visualization of the electrical field’s strength and location along the surface of the graphene. The new imaging platform — dubbed CAGE for “Critically coupled waveguide-Amplified Graphene Electric field imaging device” — proved sensitive to voltages of a few microvolts (millionths of a volt). This will make it ultrasensitive to the electric fields between cells in networks of heart cells and nerve cells, which can range from tens of microvolts to a few millivolts (thousandths of a volt). Researchers found that they could pinpoint an electric field’s location along the graphene sheet’s surface down to tens of microns (millionths of a meter), and capture its fading strength in a sequence of time steps separated by as few as five milliseconds, or thousandths of a second. In one sequence, researchers detailed the position and dissipation, or fade, of a local electric field generated by a 10-thousandths-of-a-volt pulse over a period of about 240 milliseconds, with sensitivity down to about 100 millionths-of-a-volt. Balch says that there are already plans to test the platforms with living cells. “We are working with collaborators to test this with real heart cells,” she says. “There are several potential applications for this research in heart health and drug screening.” There is also potential to use other atomically thin materials besides graphene in the imaging setup, she said. “The kind of elegance behind this system comes from its generality,” Balch says. “It can be sensitive to anything that carries charge.” The research team included participants from Berkeley Lab, UC Berkeley, and Stanford University. The work was supported by the U.S. Department of Energy Office of Science, the National Science Foundation, the David and Lucile Packard Foundation, and the Stanford University Bio-X Graduate Fellowship Program. The Advanced Light Source is a DOE Office of Science User Facility.


News Article | August 18, 2016
Site: www.aweablog.org

A number of recent reports paint a bright future for the U.S. wind industry thanks to continuing cost declines, with reductions of more than 60 percent since 2009. This cost trajectory is attributable to, among other things, advanced manufacturing techniques, improved construction methods, larger turbine blades, economies of scale, and other advances in turbine technology. While manufacturers and construction companies continue to bring down the cost of building and installing wind turbines, innovative technology enables turbines to access better wind speeds at higher elevations, capturing more of the wind’s energy and turning it into electricity with greater efficiency.

Wind energy has reached a pivotal cost position in the power sector. Many recent reports from a variety of consultants, research groups and banks highlight the positive outlook for the U.S. wind industry thanks to the compelling cost position of new wind projects compared to other forms of new power generation. This builds on several earlier reports that we’ve highlighted previously. Thanks to the low cost of wind combined with the current availability of the wind production tax credit (PTC), a number of utilities have recently announced plans for very large wind investments. Here’s a review of some of those reports, along with highlights of some of those utility announcements.

In an equity research note dated June 30, Goldman Sachs concluded that “when compared to other incremental sources of power, wind provides the lowest cost source of new capacity.” The investment bank calculated current wind levelized cost of electricity at $29/Megawatt Hour (MWh), compared to $38/MWh for natural gas combined-cycle plants and $57/MWh for utility-scale solar PV projects. For reference, a MWh provides enough electricity to power a typical home for a month. Goldman expects this comparative cost advantage to drive “robust” wind development into 2020. Into the next decade, Goldman expects wind capacity factors to increase due to technology advancements while costs continue to fall. In combination, the bank expects wind costs to fall in the $30-$35/MWh range as the PTC expires. In sum, Goldman finds “the economics of wind development remain compelling.”

On July 11, Macquarie Research published new analysis showing wind to be cost competitive on a levelized cost basis in 2023, following the expiration of the PTC. Macquarie projects wind costs to lie around $42/MWh compared to $66/MWh for natural gas combined-cycle plants. As the 2020s progress, wind continues to be cheaper than gas plants through the end of the decade. Macquarie Research also looked at levelized cost of electricity comparisons in 2016 including the PTC. With the tax credit in place, Macquarie projects electricity from new wind projects to cost around $23/MWh compared to $50/MWh for new gas plants. Given U.S. wind energy’s cost position, Macquarie finds the U.S. to be the most attractive wind market in the world, surpassing both China and Europe.

More recently, Morgan Stanley published a report on July 27 concluding that wind and solar developments will perform strongly through the next decade regardless of who is elected president, primarily due to their economics. In the interior of the country, Morgan Stanley calculates the levelized cost of wind electricity in the $15-$25/MWh range – far lower than the cost of a new natural gas combined-cycle facility, which costs roughly $55-$65/MWh in the same region.
One of the first reports this summer highlighting wind’s cost competitiveness came from Bloomberg New Energy Finance (BNEF). On June 13, BNEF published an article noting that “in many regions, the lifetime cost of wind and solar is less than the cost of building new fossil fuel plants.” BNEF sees technological innovations driving much of wind’s cost declines. Larger turbine blades that are able to capture more wind – combined with smart technology that allows project developers to optimize turbine placement and operators to optimize production and predict component issues before they happen – enable wind projects to operate at higher capacity factors. These higher capacity factors mean wind turbines are generating more energy over a given time period. Given zero fuel costs and fixed capital costs, this lowers the levelized cost of wind-generated electricity. Looking forward, BNEF expects wind costs to fall even further. Globally, for every doubling of installed wind power capacity, prices fall 19 percent. At that rate “wind and solar will be the cheapest forms of producing electricity in most of the world by the 2030s.”

While this collection of new reports is a refreshing confirmation of wind’s low cost, the results are nothing new. Yesterday, Lawrence Berkeley National Laboratory (LBNL) released the latest version of its Wind Technologies Market Report. Besides a rich discussion of industry, technology, and performance trends, the report provides an insightful section tracking historical wind cost and power price trends. Documenting levelized power purchase agreement (PPA) prices going back to 1996, LBNL’s sample of wind PPAs shows national average wind prices to be “around the $20/MWh level” compared to $70/MWh during the peak in 2009. The price trend has been consistently downward since 2009, with some variability in prices across different regions and projects, as one would expect. LBNL attributes wind’s low price to two primary factors: 1) low wind turbine pricing pushing down installed project costs and 2) larger, more advanced turbine designs delivering increased output.

Installed wind project costs in the U.S. in 2015 averaged roughly $1,690/kW, down 27 percent from peak installed costs back in 2009 and 2010. More interestingly, installed costs are down even as wind turbines grow larger and increase their rotor diameters, which tends to increase costs per kilowatt (kW). These technological advances are helping new wind turbines to capture more wind and perform more efficiently. LBNL notes that wind turbines installed in 2014 had a capacity factor of 41 percent during 2015, compared to just 31 percent for projects built in the period from 2004 to 2011, one of the main reasons that PPA prices and levelized costs per MWh continue to fall. The increase in capacity factor has been very rapid over the last several years of project installations. The report finds that this increase in output has been driven both by the move to larger turbines and by siting turbines in areas with higher wind resources, the latter being driven in part by recent grid upgrades.

Earlier this month, the Energy Information Administration (EIA) also released its widely-used estimate of wind energy costs. EIA reports an average installed cost for wind of $1,536/kW, much lower than its cost assumption in recent years. EIA’s estimate for the capacity factor of new wind projects has also significantly increased from prior years.
As a result, EIA shows wind as having a levelized cost of $58.50/MWh without incentives, which is down significantly from $73.60/MWh last year. This levelized wind cost is about the same as its estimate of $56.40/MWh for conventional combined cycle gas generation or $55.80/MWh for “advanced combined cycle.” Notably, EIA’s analysis understates the real-world cost of gas combined cycle generation by assuming that the plant will operate at a very high 87 percent capacity factor, thereby diluting the fixed costs across more MWh, even though the average natural gas combined cycle plant had a capacity factor of 56.3 percent in 2015. Correcting to the actual average capacity factor yields a gas combined cycle cost that is $8-9/MWh higher than indicated by EIA’s levelized cost figures, and higher than wind’s levelized cost (see the sketch at the end of this article). EIA also reports an average weighted wind cost of $34/MWh after accounting for the value of the PTC.

While EIA fixed the flaws in its assumptions that have previously caused it to overestimate wind costs and underestimate wind plant capacity factors, other flaws in its analysis cause it to still underestimate wind deployment going forward. Despite 60 percent cost reductions over the last seven years, and NREL projections of 20 percent or more cost declines for wind energy through 2030, EIA inexplicably assumes that wind energy is a mature technology that only sees five percent cost reductions through 2035, while other technologies like solar, nuclear, and even many types of gas generation are assumed to experience many times larger cost reductions. As a result, EIA’s report still greatly underestimates wind energy deployment, particularly in the 2020s and beyond.

Lazard, a Wall Street investment firm, also found new wind generation to be the lowest cost energy source in its latest annual levelized cost of energy analysis. At the low end, Lazard calculates the levelized cost of wind energy without the PTC at $32/MWh compared to utility-scale solar PV at $50/MWh and gas combined-cycle at $52/MWh, without accounting for incentives. With incentives, Lazard finds wind pricing available as low as $14/MWh. Lazard also reports regional ranges for wind costs, which show wind to be competitive in nearly all regions.

A number of utilities are capitalizing on wind’s low cost. As NREL’s February report shows, time is of the essence for utilities seeking to lock in maximum savings for their customers. Prices for wind purchases from projects that begin construction this year are likely to be at an all-time low, thanks to the combined impact of the full value PTC and recent wind cost reductions. Forward-looking utilities buying wind this year are locking in prices at the lowest point on that V-shaped price curve before the PTC begins to decline in value.
Utilities across the country are aggressively pursuing wind investments and purchases to capitalize on recent reductions in wind costs and the full value PTC:

- Kansas City Power and Light recently announced a 500 MW wind PPA, noting “This tax credit allows KCP&L to pass savings along to customers, keeping rates lower than would otherwise be possible.”

- Appalachian Power announced a 120 MW wind PPA in June, explaining “The extension … of the wind PTC provides cost advantages to wind energy resources constructed over the next five years and makes this agreement cost-effective for Appalachian Power customers.”

- MidAmerican Energy recently announced a $3.6 billion, 2,000 MW investment in wind, explaining that “Investments of this scale are viable because federal production tax credits are at their highest level.”

- Earlier this year, PacifiCorp and Portland General Electric both announced RFPs for renewable resources, with Portland General noting it was doing so now “in order to procure renewable resources to maximize the economic value of available tax credits on behalf of our customers.”

- WPPI Energy also announced a 100 MW RFP in June, stating that “With the recent extension of federal renewable energy tax credits, we anticipate that bidders might be able to offer renewables to cost-effectively fill our need for future power supply resources.”

- Many other utilities, including Tennessee Valley Authority, are moving aggressively to pursue large quantities of wind energy while the tax credit is in place at full value.

In sum, thanks to improved turbine technology, wind energy now consistently ranks as the lowest-cost source of new electricity generation across the country, providing a bright future for the U.S. wind industry.
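As an aside on the EIA capacity-factor correction discussed above, the arithmetic is simple: the fixed-cost portion of a levelized cost scales inversely with the hours a plant actually runs. The sketch below uses an illustrative annualized fixed cost chosen so the difference lands near the $8-9/MWh quoted above; it is not EIA's published cost breakdown.

```python
# Sketch of the capacity-factor correction to a gas combined-cycle plant's
# levelized fixed cost, as discussed above. The annualized fixed cost is an
# illustrative assumption, not EIA's actual cost breakdown.

HOURS_PER_YEAR = 8760

def fixed_component(annual_fixed_cost_per_kw, capacity_factor):
    """Levelized fixed cost in $/MWh: annual $/kW spread over the MWh generated per kW."""
    mwh_per_kw_year = capacity_factor * HOURS_PER_YEAR / 1000.0
    return annual_fixed_cost_per_kw / mwh_per_kw_year

ANNUAL_FIXED = 110.0  # assumed $/kW-yr (capital recovery plus fixed O&M)

at_assumed_cf = fixed_component(ANNUAL_FIXED, 0.87)   # EIA's assumed 87% capacity factor
at_actual_cf = fixed_component(ANNUAL_FIXED, 0.563)   # 2015 fleet-average 56.3% capacity factor

print(f"fixed cost at 87% CF:   {at_assumed_cf:5.1f} $/MWh")
print(f"fixed cost at 56.3% CF: {at_actual_cf:5.1f} $/MWh")
print(f"difference:             {at_actual_cf - at_assumed_cf:5.1f} $/MWh")  # roughly $8/MWh
```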


News Article | December 16, 2016
Site: www.eurekalert.org

MADISON, Wis. - Mother-of-pearl or nacre (pronounced nay-ker), the lustrous, tough-as-nails biomineral that lines some seashells, has been shown to be a faithful record of ancient ocean temperature. Writing online Thursday, Dec. 15, in the journal Earth and Planetary Science Letters, a team led by University of Wisconsin-Madison physics Professor Pupa Gilbert describes studies of the physical attributes of nacre in modern and fossil shells showing that the biomineral provides an accurate record of temperature as the material is formed, layer upon layer, in a mollusk. "We can very accurately correlate nacre tablet thickness with temperature," says Gilbert, explaining that mother-of-pearl is formed as mollusks lay down microscopic polygonal tablets of the mineral aragonite like brickwork to build layers of the shiny biomineral. The work is important because it provides scientists with a new and potentially more accurate method of measuring ancient ocean temperatures, improving on methods now used with other biominerals to tease out the record of the environmental conditions at which the materials formed in the distant past. "Everyone else measures temperatures in the ancient world using chemical proxies," says Gilbert, referencing methods that, for example, use ratios of isotopic oxygen locked into tiny fossil shells made by marine microorganisms known as Foraminifera to get a snapshot of ocean temperatures in the distant past. The method devised by Gilbert and her collaborators is extraordinarily simple: using just a scanning electron microscope and a cross section of shell, it is possible to measure the thickness of the layered microscopic tablets that compose nacre in a shell. The thickness of the tablets, explains Gilbert, correlates with ocean temperature as measured in modern shells when ocean temperatures were known at the time the shells were formed. The new work by the researchers from Wisconsin, Harvard, and the Lawrence Berkeley National Laboratory provides a novel physical approach to measuring past climate, says Gilbert, an expert in biomineral formation. "If what you are measuring is a physical structure, you see it directly," says Gilbert. "You just measure nacre tablet thickness, the spacing of the lines, and it corresponds to temperature. When the temperature is warmer, the layers get thicker." The new study looked at fossil samples of nacre as old as 200 million years from a mollusk in the family Pinnidae, large, fast-growing saltwater clams that live in shallow ocean environments. Today, as in the distant past, the bivalves are widespread in tropical and temperate coastal and shallow continental shelf environments. The new method is potentially more accurate, Gilbert notes, because the chemistry of fossil shells can be altered by diagenesis. Diagenesis occurs over geologic time, during or after sediments rain down on ocean beds to form sedimentary rock. Fossil shells may partially dissolve and re-precipitate as calcite, which fills cracks in aragonite nacre, thus skewing the chemical analysis of a sample, if analyzed as a bulk sample. "If the chemistry changes after the death of a fossil, the formation chemistry isn't necessarily preserved," says Gilbert. On the other hand, "if the physical structure is altered by diagenesis, you will notice immediately that nacre is no longer layered, and so you will know that it's not worth analyzing that area. 
If just a few nacre tablets are preserved, their thickness can easily be measured," meaning the new technique can augment current geochemical methods used to assess past temperatures, and thus help reconstruct ancient climates, especially the shallow marine environments that preserve most of the world's invertebrate fossil record. The family of mollusks in the new study has lived in the world's oceans for more than 400 million years, potentially leaving a clear record of ocean temperatures into the distant past. For purposes of assessing climate, the record is valuable because not only does it say something about past climate, but the data can also help modelers forecast future climate and environmental change. "The only thing you can do to understand climate in the future is to look at climate in the past," Gilbert notes. The new study was supported by a grant from the U.S. Department of Energy, the National Science Foundation, and the Radcliffe Institute for Advanced Study Fellowship Program.
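As an illustration of how such a physical calibration could work in practice, the sketch below fits mean tablet thickness against known modern growth temperatures and inverts the fit for a fossil measurement. The thickness and temperature values are invented for illustration; the actual calibration is in the paper.

```python
# Illustrative sketch (with invented numbers) of calibrating nacre tablet
# thickness against known modern temperatures and applying it to a fossil.
import numpy as np

# Hypothetical calibration set: mean tablet thickness (nm) measured by SEM in
# modern Pinnidae shells grown at known water temperatures (degrees C).
thickness_nm = np.array([300.0, 340.0, 390.0, 430.0, 480.0])
temperature_c = np.array([14.0, 17.0, 21.0, 24.0, 28.0])

# Linear calibration: temperature as a function of tablet thickness.
slope, intercept = np.polyfit(thickness_nm, temperature_c, 1)

def thickness_to_temperature(mean_thickness_nm):
    """Apply the calibration to a fossil's mean tablet thickness."""
    return slope * mean_thickness_nm + intercept

fossil_thickness_nm = 410.0  # hypothetical mean over well-preserved fossil tablets
print(f"estimated formation temperature: {thickness_to_temperature(fossil_thickness_nm):.1f} C")
```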


News Article | September 19, 2016
Site: motherboard.vice.com

No one said detecting dark matter would be easy. We didn't even know about the stuff until a couple of decades ago, after all, despite the fact that it represents some 85 percent of all of the mass in the universe and is what's responsible for giving structure to the cosmos. We see its effects across the universe, but we have yet to see it. We're not even sure what exactly we're looking for—there are many theories as to the exact properties of a dark matter particle. Some aren't even all that dark.

The leading candidate for dark matter is a particle known as a WIMP, or weakly-interacting massive particle. These are really heavy, classically "dark" particles. They interact with other matter only via gravity and the weak nuclear force, crucially evading electromagnetic interactions, which are what most of the interactions we see out in the world are based on: from a baseball whapping into a catcher's mitt to the nanoscale electrical circuits enabling the machine you are now staring at. WIMP detection is premised on WIMPs having sufficient mass to smack into an atomic nucleus with enough force to create a bit of light or heat, which can then be registered by the detector. A problem then arises when we start trying to imagine dark matter particles that maybe aren't so heavy and, as such, may result in interactions below the sensitivity of current detectors.

This is where the work of Kathryn Zurek and colleagues at the Lawrence Berkeley National Laboratory comes in—bagging superlight dark matter may require supermaterials. Zurek and co.'s work is described in two papers published last week in Physical Review Letters. One looks at the possibility of using helium superfluids to detect dark matter while the other considers superconducting metals. Both possibilities offer detector sensitivities below current offerings of around 10 million electronvolts. The lower limit for light, "warm" dark matter is around 1,000 electronvolts.

The idea behind the helium superfluid detector is that the incoming superlight dark matter particle doesn't just couple to one collision event. Instead, the one incoming particle is able to yield several interactions—enough to be detected. This is part of the quirkiness of superfluidic materials, which are able to flow without losing kinetic energy thanks to their complete absence of viscosity. Somewhat famously, superfluidic helium has the bizarre capability of creeping up the walls of its container and leaking away. So, the incoming superlight DM particles can couple to the superfluid without burning off kinetic energy—that is, without stopping.

A similar sort of thing happens in a metallic superconductor, which is the basis of the detector described in Zurek's second paper. Here, current is able to flow without resistance in much the same way that the superfluid is able to flow without viscosity. The difference detector-wise is that the superlight dark matter particles interact with pairs of electrons rather than atomic nuclei. The pairs are split, and the result is particle-like sound waves called phonons. Because the electrons in a superconductor are already moving at velocities faster than those expected of a superlight dark matter particle, the momentum of the incoming DM particles isn't "lost" in the collision event and they can be observed as particles in the superconductor recoil from the collision.
The superconductor method will only work with dark matter particles that are able to couple to electrons, which means that they interact to some extent electromagnetically and are thus not so dark. As odd as it sounds, light-based dark matter hasn't been excluded from the hunt. The superfluid-based detector would presumably be bagging more conventional dark-dark matter particles, albeit very light ones. In any case, as dark matter searches across the globe continue to come up dry, it's clear that we need more tools.
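
To see why lighter candidates slip below today's thresholds, here is a rough back-of-envelope sketch in Python. It is not drawn from the papers; it simply assumes the usual galactic dark matter velocity of about one-thousandth the speed of light and compares the kinetic energy available at different candidate masses, which is the budget any detector has to work with.

# Back-of-envelope estimate (not from the papers): kinetic energy carried by
# a dark matter particle of a given mass at a typical galactic velocity.
def kinetic_energy_ev(mass_ev, v_over_c=1e-3):
    """Non-relativistic E = 1/2 * m * v^2, with mass expressed as a rest
    energy in electronvolts and velocity as a fraction of the speed of light."""
    return 0.5 * mass_ev * v_over_c ** 2

for mass_ev, label in [(1e9, "1 GeV 'classic' WIMP"),
                       (1e7, "10 MeV light candidate"),
                       (1e3, "1 keV 'warm' lower limit")]:
    print(f"{label}: ~{kinetic_energy_ev(mass_ev):.0e} eV available")
# Conventional nuclear-recoil detectors register deposits of roughly a
# thousand electronvolts or more, which is why the superfluid and
# superconductor proposals target far smaller excitations.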


News Article | February 23, 2017
Site: www.rdmag.com

A national lab and NASA are working together with X-ray and 3D visualization technologies to create materials for use in space. The goal of the collaboration, between NASA and a science group at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory, is to establish a suite of tools that includes X-ray imaging and small laboratory experiments, computer-based analysis and simulation tools, as well as large-scale high-heat and wind-tunnel tests, to allow for rapid development of new materials with established performance and reliability in space. This system can heat sample materials to thousands of degrees and subject them to a mixture of different gases found in other planets’ atmospheres, with pistons stretching the materials to their breaking points, all while imaging in real time their 3D behavior at the microstructure level. For human exploration of Mars and other large-payload missions, a new type of heat shield that is flexible and can remain folded up until needed is likely required. An initial X-ray study has been deemed successful and led to a renewed interest in exploring the use of X-ray experiments to guide a better understanding of meteorite breakup. Scientists are using data from these experiments in risk analysis and to aid in assessing threats posed by large asteroids. Michael Barnhardt, a senior research scientist at NASA ARC and principal investigator of the Entry Systems Modeling Project, explained that the research led to a better understanding of what was going on at the microscale. “Before this collaboration, we didn’t understand what was happening at the microscale,” Barnhardt said in a statement. “We didn’t have a way to test it.” “X-rays gave us a way to peek inside the material and get a view we didn’t have before,” he added. “With this understanding, we will be able to design new materials with properties tailored to a certain mission.” According to Barnhardt, this foundation will help build more predictive models, reduce risk and provide more assurance about a new material’s performance even at initial stages. Researchers at Berkeley are testing several candidates for a flexible heat shield, in addition to fabrics for Mars-mission parachutes that can be deployed at supersonic speeds, using Berkeley Lab’s Advanced Light Source (ALS) and other techniques. “We are developing a system at the ALS that can simulate all material loads and stresses over the course of the atmospheric entry process,” Harold Barnard, a scientist at Berkeley Lab’s ALS who is spearheading the Lab’s X-ray work with NASA, said in a statement. A group of researchers at NASA Ames Research Center (NASA ARC) can blast materials with a giant superhot blowtorch that accelerates hot air to velocities topping 11,000 miles per hour, with temperatures exceeding that of the surface of the Sun. The scientists also test parachutes and spacecraft at their wind-tunnel facilities, which can produce supersonic wind speeds faster than 1,900 miles per hour. While it takes rocket science to launch and fly spacecraft, an understanding of how materials perform under extreme conditions is also needed to enter and land on planets with different atmospheres. X-ray science is also necessary to ensure spacecraft survive in extreme environments as they descend through otherworldly atmospheres and touch down safely on foreign surfaces. Francesco Panerai, a materials scientist with NASA contractor AMA Inc. and the X-ray experiments test lead for NASA ARC, said the aim is to modernize how scientists produce and study materials. “We need to use modern measurement techniques to improve our understanding of material response,” Panerai said in a statement. The experiments are being conducted at an ALS experimental station that captures a sequence of images as a sample is rotated in front of an X-ray beam. These images, which provide views inside the samples and can resolve details smaller than one micron (one millionth of a meter), can be compiled to form detailed 3D images and animations of samples.
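
The compilation step described above follows the general logic of computed micro-tomography: many projections recorded as the sample rotates are combined into a cross-sectional image. The sketch below is a generic filtered-back-projection example built on scikit-image's standard test phantom; it illustrates the principle only and is not the ALS or NASA processing pipeline.

# Generic tomography sketch: simulate projections of a test object at many
# rotation angles, then reconstruct a slice by filtered back projection.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

sample_slice = shepp_logan_phantom()                   # stand-in for one slice of a sample
angles = np.linspace(0.0, 180.0, 360, endpoint=False)  # rotation angles in degrees
sinogram = radon(sample_slice, theta=angles)           # projection data, one column per angle
reconstruction = iradon(sinogram, theta=angles)        # filtered back projection

print("mean reconstruction error:",
      float(np.abs(reconstruction - sample_slice).mean()))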


News Article | September 22, 2016
Site: www.washingtonpost.com

On Thursday the Rocky Mountain Institute — an energy-focused think tank known for its boldness in predicting technological change — released a truly sweeping report on the future of vehicles. And if it’s right — something that, to be sure, can be questioned — then we would be on the verge of an energy and technology transformation to rival the sudden decline of the coal industry. The institute certainly isn’t the first to suggest self-driving cars are something special. Uber and Lyft are already exploring self-driving robotaxis. Apple, Google, and Tesla are also in the game. But what’s striking is the optimism about a real technological transformation that could arrive from the merger of two vehicle trends — automation and electrification — and the idea that it could happen quite fast, with major consequences for reducing our greenhouse gas emissions. “It is possible that a new mobility system will emerge in the next few years that is superior to our existing system in almost every way,” write the study’s authors, Charlie Johnson and Jonathan Walker. Autonomous cars won’t just be safer and more energy efficient, the study argues. Hiring one to taxi you around will also become very cheap, they argue, once these vehicles can be electrified, rather than running on gas. The upshot is that at some point, perhaps quite soon, it will make more economic sense to have a robotaxi ferry you around on all your trips than to personally own a car. At the same time, as the trend becomes widespread — with fleets of autonomous vehicles, owned by companies or cities, on the road and not running on gas — our greenhouse gas emissions will plunge, the research suggests. By the year 2018, the study argues, “solely using autonomous taxis for transportation could cost the same as owning and operating a car.” More specifically, the researchers think that electric robotaxis might quickly rival the per-mile cost of personal car use, which they say is 85 cents per mile. “We think as soon as Uber is ready to pull the driver out of their autonomous vehicle — right now they have a safety driver in the vehicle, but they’re testing the autonomous software,” says Walker. “As soon as they can remove that safety driver, the cost of that service is going to be about the same as owning and operating a sedan in Pittsburgh.” And of course these vehicles will be electric because that, too, will save money — in this case, for the companies that own and operate them. The huge distances that autonomous vehicles might drive in their lifetimes will mean that even if an electric vehicle costs more than a gas vehicle to purchase, it will make up the difference over the course of its use. Most striking is what this would do to greenhouse gas emissions. The study suggests a major decrease in gasoline demand and, as a result, a major increase in electricity demand, and a plummeting of transportation-sector greenhouse-gas emissions. Walker said the reduction could be as large as 800 million tons per year in the U.S. by 2035, a huge contribution to fighting climate change. “Personal mobility hasn’t changed in 100 years, and it actually plausibly could in the next five to 10,” Walker said. The study also forecasts that this means that “peak car ownership in the United States will occur around 2020 and will drop quickly after that.” The Rocky Mountain Institute isn’t the only such bullish voice. John Zimmer, the co-founder of Lyft, recently wrote at Medium that “by 2025, private car ownership will all-but end in major U.S. 
cities.” Meanwhile, a study last year of the adoption of electrified robotic cabs found they could slash our greenhouse gas emissions, by 2030, by tremendous percentages. But is this all too gung-ho? To find out, I queried several outside experts about the scenario. Jeffrey Greenblatt, a researcher at the Lawrence Berkeley National Laboratory who co-authored the just-mentioned study on greenhouse gas reductions from electric, shared, and autonomous vehicles, said the work seemed pretty solid. “It’s clearly optimistic, but I think it’s based on reasonable assumptions of the future cost of automated technology, as well as the total cost of ownership of vehicles,” he said. Margo Oge, a former EPA transportation and air quality official and author of the book “Driving the Future: Combating Climate Change with Cleaner, Smarter Cars,” agreed about the Rocky Mountain Institute’s “overall findings that autonomous, shared mobility will utilize largely electric power trains as they will be cheaper to operate. It’s the most economical strategy for autonomous shared mobility and it will help significantly reduce carbon pollution – especially as the utilization of those EVs will be much higher than the conventional powertrains they displace.” However, the speed of the transition is another matter, Oge said, and here, the group might be off. “In the end the overall success of autonomous mobility will be based on public trust,” she said. “It’s not just an issue of technology. Trust takes time to develop.” It’s still not clear how people — or, for that matter, legal systems — will deal with the possibility of occasional autonomous vehicle crashes, even if they are significantly more rare than today’s car crashes. Moreover, the programming of these vehicles to deal with situations like an automotive version of the ethical “trolley dilemma” — in which a vehicle would have to “choose” whether or not to kill its driver in order to save multiple other individuals — is bound to spur controversy. Another transportation researcher, Susan Shaheen of the University of California-Berkeley, praised the Rocky Mountain Institute work for trying to push us in thinking about what the future could be like. “I think the purpose of papers like this is to set a visionary plan,” she said. But Shaheen also noted some potential drawbacks — for instance, regulatory. “By 2018, a question to ask is, are cities going to be ready to convert to a shared, automated, electrified fleet?” she said. “Chances are, we will not be at that point.” The Rocky Mountain Institute’s Walker admits that there could also be regulatory and cultural hurdles to such a vast transition. “Certain cities and states may make it legal, and some cities and states may not,” he said. And then, there’s just the possibility that some Americans love their cars and won’t want to give them up, he said. And there’s yet another problem, noted Don MacKenzie, a transportation researcher at the University of Washington in Seattle — what happens outside of urban areas? “This will not work so well when you need a ride to your cottage or the state park for the weekend, where there is no one else for the vehicle to pick up after dropping you off,” he wrote by email (though MacKenzie agreed with other aspects of the report). “In such conditions, an important question becomes whether people will be willing to give up owning a personal vehicle.
If they still have that personal vehicle, there’s a good chance they will continue to drive it (and drive it more).” Nonetheless, Walker said that the bullish scenario is something the group is standing by. “There’s people who say the technology’s not going to be ready, but they’re quoting things like 5 or 10 years, when a year ago, they were quoting 30 years,” Walker said.
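
For readers who want to see how such a per-mile comparison might be framed, the toy Python calculation below works in the spirit of the report's 85-cents-per-mile benchmark for personal car ownership. Every robotaxi input is a hypothetical placeholder chosen only for illustration; none of these figures comes from the Rocky Mountain Institute.

# Illustrative cost-per-mile comparison; all robotaxi inputs are hypothetical.
PERSONAL_CAR_COST_PER_MILE = 0.85  # $/mile, the benchmark cited in the article

vehicle_cost = 40_000        # $, hypothetical purchase price of an electric robotaxi
lifetime_miles = 300_000     # high utilization spreads the fixed cost thinly
energy_cost_per_mile = 0.04  # $/mile of electricity (assumed)
maintenance_per_mile = 0.06  # $/mile (assumed)
overhead_per_mile = 0.15     # $/mile for cleaning, insurance, dispatch (assumed)

robotaxi_cost_per_mile = (vehicle_cost / lifetime_miles
                          + energy_cost_per_mile
                          + maintenance_per_mile
                          + overhead_per_mile)
print(f"hypothetical robotaxi: ${robotaxi_cost_per_mile:.2f}/mile "
      f"vs. personal car: ${PERSONAL_CAR_COST_PER_MILE:.2f}/mile")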


News Article | March 3, 2016
Site: www.aweablog.org

Oregon just decided to add a lot of renewable energy to its electricity mix. Yesterday, the Oregon state senate approved a bill that would increase Oregon’s renewable energy portfolio standard (RPS) to 50 percent, while phasing coal out of the state’s electricity mix. Having already passed the house, the bill will now be sent to Gov. Kate Brown, who is expected to sign it into law. The measure firmly establishes Oregon as a renewable energy leader. Vermont, California and Hawaii are the only other states with renewable energy requirements of 50 percent or higher, and Oregon is the first state to officially ban coal via legislative action. The new law, called the Clean Electricity and Coal Transition Plan, is also particularly noteworthy because of the diversity of the coalition that supported it. Oregon’s two largest utilities were in favor of the bill, along with business groups, community organizations and environmental advocates. Supporters as varied as Pacific Power and the Sierra Club all advocated for the bill, recognizing the initiative was in the best interests of all Oregonians. The Clean Electricity and Coal Transition Plan will have a big effect on carbon emissions. It’s expected that it will reduce carbon pollution by 30 million metric tons, the equivalent of 6.4 million cars’ worth of emissions. Oregon’s decision to deploy a stronger RPS is a smart choice from a policy perspective. The Lawrence Berkeley National Laboratory and the National Renewable Energy Laboratory recently examined the effects of state-level RPS policies, and their findings were overwhelmingly positive. State RPS policies resulted in $7.5 billion in annual environmental benefits from reduced air emissions, 27 billion gallons in reduced water consumption annually, $1.3 billion to $4.9 billion in reduced consumer energy prices, and 200,000 American jobs and $20 billion in annual gross domestic product from the renewable energy developed to meet state RPSs through the year 2013. Hopefully other states will follow Oregon’s lead in the coming months and years. The New York state legislature is also currently mulling a 50 percent renewable energy standard, and similar initiatives may end up on the dockets of state legislatures throughout the country.
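
As a quick plausibility check on the quoted numbers, dividing the projected reduction by the stated number of cars gives a per-vehicle figure in line with common estimates of annual passenger-car emissions (the EPA's often-cited figure is about 4.6 metric tons per year). A short Python check:

# Both inputs are taken directly from the article above.
avoided_tons = 30e6       # metric tons of CO2 the plan is expected to avoid
cars_equivalent = 6.4e6   # the "cars' worth" comparison used in the article

print(f"~{avoided_tons / cars_equivalent:.1f} metric tons of CO2 per car")
# ~4.7 t per car, close to typical annual passenger-vehicle emissions.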


News Article | April 26, 2016
Site: www.topix.com

The renewable portfolio standards that many states have enacted are responsible for 60 percent of the growth in non-hydro renewable energy generation, according to a new Lawrence Berkeley National Laboratory study. Most of the additional capacity to meet RPS requirements has come from wind, but in recent years solar energy has been gaining traction.


News Article | November 10, 2016
Site: www.washingtonpost.com

On election day, the U.S. was busy voting for a president who openly scoffs at the reality of human-caused climate change and has expressed the goal of withdrawing the U.S. from the international process for dealing with it. So if climate change is something you worry about, you’re probably extra worried right now. But on the same day, a paper published in Nature Communications Tuesday actually contained some of the better news about climate change that we’ve heard in a while. Granted, even that doesn’t make it very sunny — just a modest bit of evidence suggesting a slight, temporary reprieve in the rate at which we’re altering the planet. Trevor Keenan of Lawrence Berkeley National Laboratory and his colleagues examined what is called the “terrestrial carbon sink”: How much trees, plants, and other aspects of global land surfaces are managing to pull carbon dioxide out of the air. Every year, you see, humans pump many billion tons of the stuff into the atmosphere, but not all of it stays there to warm the planet. A very significant fraction ends up getting absorbed by the ocean. Another large fraction gets pulled in by land-based plants which use it for photosynthesis. And so on. What the new study shows is that from 2002 to 2014, plants appear to have gone into overdrive, and started pulling more carbon dioxide out of the air than they had before. The result was that the rate at which carbon dioxide accumulates in the atmosphere did not increase during this time period, although previously, it had grown considerably in concert with growing greenhouse gas emissions. Granted, this does not mean that plants revved up nearly enough to prevent steady accumulation of the long-lived gas in the atmosphere each year. There is still more added every year. But they did increase the fraction of it that they were gobbling up, in a way that scientists could measure. “That portion [of carbon dioxide] that stays in the atmosphere, that’s called the airborne fraction, and that has reduced by about 20 percent over the last 15 years,” said Keenan in an interview. “So around the year 2000 it was about 0.5, and currently it’s about 0.4.” In other words, in recent years the land and ocean started absorbing 60 percent of all emitted carbon dioxide, whereas before they had been absorbing more like 50 percent. Carbon dioxide in the atmosphere still went up steadily each year, but maybe not at quite as fast a rate as before. Keenan completed the work with scientists from U.S., U.K., Australian and Chinese institutions — including two researchers with the Global Carbon Project, which every year tracks how much CO2 the world emits, as well as how much of it ends up in the atmosphere as opposed to being trapped by the ocean and land “sinks.” One reason for the change is the so-called global warming “pause” that persisted during much of the 2000s (and now appears to have ended), explained Keenan. You see, plants engage in photosynthesis which pulls in carbon dioxide, but they also engage in respiration, which releases it again. The same is true of microorganisms living on land – in its soil, for instance. And respiration rates increase at higher temperatures. But during the pause, Keenan continued, carbon dioxide levels kept rising but temperature didn’t rise as much over land surfaces. That seems to have triggered a disproportionate response from living things on land. “We have more and more CO2 going into photosynthesis, but we don’t have more and more coming out from respiration,” he explained. 
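
Keenan's airborne-fraction figures amount to simple bookkeeping: the share of emitted carbon that stays in the air versus the share taken up by land and ocean sinks. The short Python sketch below uses round illustrative numbers, not the study's data, to show how an airborne fraction of 0.4 corresponds to the sinks absorbing 60 percent.

# Illustrative numbers only: 10 units of carbon emitted, 4 units of growth
# measured in the atmosphere in the same year.
def airborne_fraction(atmospheric_growth, total_emissions):
    """Fraction of emitted CO2 that remains in the atmosphere."""
    return atmospheric_growth / total_emissions

af = airborne_fraction(atmospheric_growth=4.0, total_emissions=10.0)
print(f"airborne fraction: {af:.1f}")      # 0.4, the 'current' value quoted above
print(f"sink uptake share: {1 - af:.1f}")  # 0.6, land and ocean absorbing 60 percent
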
One related trend may be what has been termed “Arctic greening.” Scientists have been finding, of late, that as northern portions of the planet warm up and total atmospheric carbon dioxide increases, there’s been an increase in plant growth in these regions. If all of this were to continue — if more carbon dioxide in the air continually meant an increased response from photosynthesis, and if respiration stayed in check too — then you might think the growing land carbon “sink” would be able to really help save us. And indeed, climate change “skeptics” have often pointed to plant growth as a possible factor that would mitigate climate change. However, that is most emphatically not the conclusion of the new paper. Rather, what it has detected seems to have been a temporary effect. It’s even possible the situation could fully reverse itself in the future. “What we do know is that the effect of CO2 fertilization diminishes as CO2 goes up, so it’s a diminishing return,” said Keenan. “Whereas the effect of temperature on respiration is thought to increase.” Especially in a context in which Trump’s election could significantly hobble global climate progress, the new research underscores how effective it could be to curtail another major climate driver – deforestation, especially in tropical countries. Research like this serves as a reminder that if we not only stopped cutting down tropical forests but also reclaimed lost forested areas and allowed them to grow back on a mass scale, there is a way to further fire up the land sink and reduce the rate of global warming still more. And that could buy the planet some much needed time. In the meantime, it remains to be seen whether the 2015-2016 El Nino event has already shifted the trends reported in the new paper, making the land “sink” weaker once again. There are, after all, good reasons to think that El Nino events, by causing droughts and large scale wildfires (such as what we saw in Indonesia a year ago), can actually hurt the land’s ability to pull carbon dioxide out of the air. Which just goes to show you how fragile any shift in the land “sink” could be. “It’s good news for now,” said Keenan of the study. “We can’t expect it to continue.” Correction: This article had previously stated that the land and ocean carbon sinks had gone from absorbing 40 percent of airborne carbon dioxide to 50 percent. It should have been from 50 percent to 60 percent.


Some laws are made to be broken, or at least challenged. A research team led by faculty scientist Ali Javey at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) has done just that by creating a transistor with a working 1-nanometer gate. For comparison, a strand of human hair is about 50,000 nanometers thick. "We made the smallest transistor reported to date," said Javey, a lead principal investigator of the Electronic Materials program in Berkeley Lab's Materials Science Division. "The gate length is considered a defining dimension of the transistor. We demonstrated a 1-nanometer-gate transistor, showing that with the choice of proper materials, there is a lot more room to shrink our electronics." The key was to use carbon nanotubes and molybdenum disulfide (MoS2), an engine lubricant commonly sold in auto parts shops. MoS2 is part of a family of materials with immense potential for applications in LEDs, lasers, nanoscale transistors, solar cells, and more. The findings will appear in the Oct. 7 issue of the journal Science. Other investigators on this paper include Jeff Bokor, a faculty senior scientist at Berkeley Lab and a professor at UC Berkeley; Chenming Hu, a professor at UC Berkeley; Moon Kim, a professor at the University of Texas at Dallas; and H.S. Philip Wong, a professor at Stanford University. The development could be key to keeping alive Intel co-founder Gordon Moore's prediction that the density of transistors on integrated circuits would double every two years, enabling the increased performance of our laptops, mobile phones, televisions, and other electronics. "The semiconductor industry has long assumed that any gate below 5 nanometers wouldn't work, so anything below that was not even considered," said study lead author Sujay Desai, a graduate student in Javey's lab. "This research shows that sub-5-nanometer gates should not be discounted. Industry has been squeezing every last bit of capability out of silicon. By changing the material from silicon to MoS2, we can make a transistor with a gate that is just 1 nanometer in length, and operate it like a switch." When 'electrons are out of control' Transistors consist of three terminals: a source, a drain, and a gate. Current flows from the source to the drain, and that flow is controlled by the gate, which switches on and off in response to the voltage applied. Both silicon and MoS2 have a crystalline lattice structure, but electrons flowing through silicon are lighter and encounter less resistance compared with MoS2. That is a boon when the gate is 5 nanometers or longer. But below that length, a quantum mechanical phenomenon called tunneling kicks in, and the gate barrier is no longer able to keep the electrons from barging through from the source to the drain terminals. "This means we can't turn off the transistors," said Desai. "The electrons are out of control." Because electrons flowing through MoS2 are heavier, their flow can be controlled with smaller gate lengths. MoS2 can also be scaled down to atomically thin sheets, about 0.65 nanometers thick, with a lower dielectric constant, a measure reflecting the ability of a material to store energy in an electric field. Both of these properties, in addition to the mass of the electron, help improve the control of the flow of current inside the transistor when the gate length is reduced to 1 nanometer. Once they settled on MoS2 as the semiconductor material, it was time to construct the gate.
Making a 1-nanometer structure, it turns out, is no small feat. Conventional lithography techniques don't work well at that scale, so the researchers turned to carbon nanotubes, hollow cylindrical tubes with diameters as small as 1 nanometer. They then measured the electrical properties of the devices to show that the MoS2 transistor with the carbon nanotube gate effectively controlled the flow of electrons. "This work demonstrated the shortest transistor ever," said Javey, who is also a UC Berkeley professor of electrical engineering and computer sciences. "However, it's a proof of concept. We have not yet packed these transistors onto a chip, and we haven't done this billions of times over. We also have not developed self-aligned fabrication schemes for reducing parasitic resistances in the device. But this work is important to show that we are no longer limited to a 5-nanometer gate for our transistors. Moore's Law can continue a while longer by proper engineering of the semiconductor material and device architecture."
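
The scaling Desai describes can be illustrated with a standard textbook estimate: in the WKB approximation, the probability of tunneling through a rectangular barrier falls off exponentially with both the barrier length and the square root of the electron's effective mass. The Python sketch below assumes a 0.5-electronvolt barrier and representative effective masses; these are illustrative values, not the device parameters reported in the paper.

# WKB tunneling through a rectangular barrier: T ~ exp(-2 * kappa * L),
# with kappa = sqrt(2 * m_eff * barrier_height) / hbar.
import math

HBAR = 1.0546e-34  # J*s
M0 = 9.109e-31     # electron rest mass, kg
EV = 1.602e-19     # joules per electronvolt

def transmission(eff_mass_ratio, barrier_ev, length_m):
    kappa = math.sqrt(2 * eff_mass_ratio * M0 * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * length_m)

for label, m_ratio in [("lighter, silicon-like electron", 0.2),
                       ("heavier, MoS2-like electron", 0.5)]:
    for gate_nm in (5, 1):
        t = transmission(m_ratio, barrier_ev=0.5, length_m=gate_nm * 1e-9)
        print(f"{label}, {gate_nm} nm gate: leakage probability ~ {t:.1e}")
# Shrinking the gate from 5 nm to 1 nm raises the leakage by many orders of
# magnitude, and a heavier effective mass pushes it back down.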


State renewables portfolio standards, known as RPS policies, have contributed to more than half of all renewable electricity growth in the United States since 2000. Most state RPS requirements will continue to rise through at least 2020, if not beyond, and collectively these policies will require substantial further growth in U.S. renewable electricity supplies. These findings are part of a new annual status report on state RPS policies, from Lawrence Berkeley National Laboratory (Berkeley Lab).


News Article | February 4, 2016
Site: www.theenergycollective.com

Federal energy policy developments received a significant amount of attention in 2015 – most notably, the finalized Clean Power Plan and renewal of renewable energy tax credits dominated headlines. However, this focus on national issues tends to obscure critical actions at the local and state levels that are just as important for energy systems. Among the most important state-level policies, renewable portfolio standards play essential but not fully appreciated roles in complementing and enabling federal policy action. Renewable portfolio standards (RPSs) are the most influential state-level policies for renewable energy development in the United States. Combined with federal tax incentives, state RPSs have driven wind generation from less than 0.4% of U.S. generation in 2004 to more than 4.4% in 2014 while boosting solar above 1% of total electricity in the U.S. in 2015 for the first time ever. Despite broad recognition of their importance, the central role that state RPSs have played in getting renewable energy to the point it is today is often not fully appreciated. By understanding what state RPS policies have accomplished to date, we can better understand what they are likely to do in the next several years. While the first RPS was passed in Iowa in 1983, it was not until the late 1990s and mid-2000s that many states began to follow suit. These early policies often came in response to increasing concerns about energy security and environmental considerations specifically at the state level. While the exact requirements vary by state, RPSs generally require that a certain percentage of state electricity retail sales come from renewable energy by a specified year. As of October 2015, 29 states have mandatory standards while an additional 8 states have voluntary goals. Remarkably, renewable energy mandates are one of the most politically attractive energy policies at the state level. They have been enacted in states ranging from liberal California to moderate Ohio to conservative Texas. Critically, state RPS mandates are generally non-partisan. Despite recent legislative attempts in some states to remove RPS goals, they have proven surprisingly resilient. With only a handful of exceptions, the vast majority of revisions to renewable mandates have led to more stringent renewable requirements. Public opinion polls show consistent support for efforts to increase wind and solar generation, making RPS policies popular for state governors and legislatures. State Renewable Policies Key to Renewable Growth of the Last Decade These widespread policies have had a tremendous impact on renewable energy growth. Data from the Lawrence Berkeley National Laboratory (LBNL) indicates that, in 2013, at least 171,950 GWh of renewable energy was used to comply with state renewable mandates. This nearly equals 5% of total U.S. electricity generation in 2013. LBNL’s estimates are based on RPS compliance filings at the state level, meaning they have a high level of reliability. By tracking the retirement of renewable energy credits (great primer here), we can accurately gauge how much renewable energy has been used to comply with state mandates. Critically, RPS compliance to date has been heavily concentrated geographically. 
In 2013, almost 27% of RPS-compliant generation came from California alone (where it made up almost 18% of total retail sales). Texas, New Jersey, New York, and Pennsylvania combined made up another 26%. Hence, the remaining 24 states with mandatory RPSs contributed less than half of RPS-compliant renewable energy in 2013. This is partially due to California and the other states’ being large electricity consumers. However, it is also a timing issue – California, Texas, New Jersey, and Pennsylvania were some of the first states to enact RPS mandates. RPS requirements in other states are still ramping up, meaning that renewable generation is likely to become somewhat less geographically concentrated in the next decade. Most of the renewable energy used for RPS compliance so far has been from wind energy. A separate estimate from LBNL indicated that as much as 88% of new capacity motivated by state RPSs between 1998 and 2012 was wind. Indeed, total U.S. wind generation very closely tracked the total amount of RPS-compliant renewable energy between 2001 and 2013. This high level of wind generation is not necessarily surprising, even though wind capacity was limited as recently as ten years ago. Even then, there was significant ongoing research and large deployments in Europe helping the industry reach large-scale commercial viability. Perhaps more critically, the federal production tax credit made wind generation even more cost competitive in the mid-2000s. Nationwide average prices for wind capacity installed between 2002 and 2011 ranged between the mid-$30s/MWh and the upper-$60s/MWh. Compared to prevailing wholesale electricity prices at the time, these were cost competitive levels. Without the tax credit, wind prices would have been around $20-23/MWh more expensive in all years. In effect, state RPSs and federal tax incentives should be seen as complementary policies. Federal tax incentives made it very inexpensive for states to meet RPS mandates with wind energy, helping encourage the adoption of new and more stringent renewable policies. These state policies then drove significant state economic benefits, strengthening the argument for continued tax credit support. As a quick side note, the price of wind power purchase agreements has been falling since 2009, reaching as low as an average $23.50/MWh in 2014. This is important as it indicates that wind may remain cost competitive with low-cost natural gas resulting from the shale revolution, particularly with the most recent renewal of the wind production tax credit. Although wind has been the primary RPS-compliant generation source historically, existing state RPS policies have also benefitted solar energy. Many state renewable mandates have specific provisions (or carve-outs) intended to directly benefit solar and/or distributed generation. The form of these provisions varies significantly. For example, in Colorado, solar located in cooperatives or municipal utility service areas receives a 3X multiplier for the state RPS. Meanwhile, Massachusetts’ RPS contains a specific target of 1,600 MW of solar PV. These solar provisions have played notable roles in driving initial solar market growth in many states. The economic benefits of renewable standards are significant. 
A recent joint study by two federal energy laboratories (LBNL and NREL) quantified the benefits delivered in 2013 by renewables used for RPS compliance. In particular, the direct economic impacts were a principal cause behind the recent decisions to extend both the federal wind and solar tax credits. RPS and federal tax credits continue to be mutually reinforcing, complementary policies. Perhaps more importantly, state RPSs and federal tax credits have driven significant cost innovation in both the wind and solar industries. Long term, renewable energy will continue to grow, bringing in greater economic and environmental benefits at lower (and potentially negative) costs. Further, renewable portfolio standards have come at relatively little cost. State renewable standards have rather limited cost impacts, as demonstrated by yet another joint NREL and LBNL study. In most states, incremental compliance costs for existing standards through 2012 were generally at or below 2% of average retail rates. Even if existing standards are implemented fully, rates are expected to increase by less than 5% in most states, while delivering even greater economic and environmental benefits.
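
Two of the figures above can be double-checked with arithmetic on numbers the article itself provides. The short Python check below reproduces the geographic-concentration claim and the implied wind price without the production tax credit; the price ranges are the article's, treated as approximate.

# Geographic concentration of RPS-compliant generation in 2013
ca_share = 0.27         # California's share, from the article
next_four_share = 0.26  # Texas, New Jersey, New York, and Pennsylvania combined
remaining = 1 - ca_share - next_four_share
print(f"remaining 24 states: ~{remaining:.0%}")  # ~47%, i.e. less than half

# Wind prices with and without the federal production tax credit ($/MWh)
with_ptc = (35, 69)   # roughly mid-$30s to upper-$60s for 2002-2011 projects
ptc_value = (20, 23)  # the premium the article attributes to the credit
without_ptc = (with_ptc[0] + ptc_value[0], with_ptc[1] + ptc_value[1])
print(f"without the PTC: roughly ${without_ptc[0]}-${without_ptc[1]}/MWh")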


News Article | December 19, 2016
Site: www.cemag.us

Scientists have enlisted the exotic properties of graphene, a one-atom-thick layer of carbon, to function like the film of an incredibly sensitive camera system in visually mapping tiny electric fields in a liquid. Researchers hope the new method will allow more extensive and precise imaging of the electrical signaling networks in our hearts and brains. The ability to visually depict the strength and motion of very faint electrical fields could also aid in the development of so-called lab-on-a-chip devices that use very small quantities of fluids on a microchip-like platform to diagnose disease or aid in drug development, for example, or that automate a range of other biological and chemical analyses. The setup could potentially be adapted for sensing or trapping specific chemicals, too, and for studies of light-based electronics (a field known as optoelectronics). “This was a completely new, innovative idea that graphene could be used as a material to sense electrical fields in a liquid,” says Jason Horng, a co-lead author of a study published in Nature Communications that details the first demonstration of this graphene-based imaging system. Horng is affiliated with the Kavli Energy NanoSciences Institute, a joint institute at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley, and is a postdoctoral researcher at UC Berkeley. The idea sprang from a conversation between Feng Wang, a faculty scientist in Berkeley Lab’s Materials Sciences Division whose research focuses on the control of light-matter interactions at the nanoscale, and Bianxiao Cui, who leads a research team at Stanford University that specializes in the study of nerve-cell signaling. Wang is also a UC Berkeley associate professor of physics, and Cui is an associate professor of chemistry at Stanford University. “The basic concept was how graphene could be used as a very general and scalable method for resolving very small changes in the magnitude, position, and timing pattern of a local electric field, such as the electrical impulses produced by a single nerve cell,” says Halleh B. Balch, a co-lead author of the work. Balch is also affiliated with the Kavli Energy NanoSciences Institute and is a physics PhD student at UC Berkeley. “One of the outstanding problems in studying a large network of cells is understanding how information propagates between them,” Balch says. Other techniques have been developed to measure electrical signals from small arrays of cells, though these methods can be difficult to scale up to larger arrays and in some cases cannot trace individual electrical impulses to a specific cell. Also, Cui says, “This new method does not perturb cells in any way, which is fundamentally different from existing methods that use either genetic or chemical modifications of the cell membrane.” The new platform should more easily permit single-cell measurements of electrical impulses traveling across networks containing 100 or more living cells, researchers say. Graphene, which is composed of a honeycomb arrangement of carbon atoms, is the focus of intense R&D because of its incredible strength, ability to very efficiently conduct electricity, high degree of chemical stability, the speed at which electrons can move across its surface, and other exotic properties. Some of this research is focused on the use of graphene as a component in computer circuits and display screens, in drug delivery systems, and in solar cells and batteries. 
In the latest study, researchers first used infrared light produced at Berkeley Lab’s Advanced Light Source to understand the effects of an electric field on graphene’s absorption of infrared light. In the experiment, they aimed an infrared laser through a prism to a thin layer called a waveguide. The waveguide was designed to precisely match graphene’s light-absorbing properties so that all of the light was absorbed along the graphene layer in the absence of an electric field. Researchers then fired tiny electrical pulses in a liquid solution above the graphene layer that very slightly disrupted the graphene layer’s light absorption, allowing some light to escape in a way that carried a precise signature of the electrical field. Researchers captured a sequence of images of this escaping light in thousandths-of-a-second intervals, and these images provided a direct visualization of the electrical field’s strength and location along the surface of the graphene. The new imaging platform — dubbed CAGE for “Critically coupled waveguide-Amplified Graphene Electric field imaging device” — proved sensitive to voltages of a few microvolts (millionths of a volt). This will make it ultrasensitive to the electric fields between cells in networks of heart cells and nerve cells, which can range from tens of microvolts to a few millivolts (thousandths of a volt). Researchers found that they could pinpoint an electric field’s location along the graphene sheet’s surface down to tens of microns (millionths of a meter), and capture its fading strength in a sequence of time steps separated by as few as five milliseconds, or thousandths of a second. In one sequence, researchers detailed the position and dissipation, or fade, of a local electric field generated by a 10-thousandths-of-a-volt pulse over a period of about 240 milliseconds, with sensitivity down to about 100 millionths-of-a-volt. Balch says that there are already plans to test the platforms with living cells. “We are working with collaborators to test this with real heart cells,” she says. “There are several potential applications for this research in heart health and drug screening.” There is also potential to use other atomically thin materials besides graphene in the imaging setup, she said. “The kind of elegance behind this system comes from its generality,” Balch says. “It can be sensitive to anything that carries charge.” The research team included participants from Berkeley Lab, UC Berkeley, and Stanford University. The work was supported by the U.S. Department of Energy Office of Science, the National Science Foundation, the David and Lucile Packard Foundation, and the Stanford University Bio-X Graduate Fellowship Program. The Advanced Light Source is a DOE Office of Science User Facility.


News Article | August 29, 2016
Site: cleantechnica.com

If you’ve never heard of covalent organic frameworks before, that could be about to change. To the eye, these delicate, super lightweight crystalline materials look like nothing more than a pile of fluffy powder, but they are beginning to emerge as a force to be reckoned with in the energy storage field. In the latest development, a research team based at Northwestern University has figured out how to get one such material to behave like a battery and a supercapacitor, all in one. That could be an important development for the EV market. An improved battery means greater range without adding more weight, and a more efficient supercapacitor translates into faster charging and more powerful discharging. For a quick primer on COF’s you can check out Lawrence Berkeley National Laboratory, which provides this rundown: “COFs and their cousin materials, metal organic frameworks (MOFs), are porous three-dimensional crystals with extraordinarily large internal surface areas that can absorb and store enormous quantities of targeted molecules.” If that sounds a little vague, CleanTechnica offered up the origin story last year: “COFs were first developed at the University of Michigan back in 2005, when researchers ‘coaxed’ rigid plastics to organize into predictable crystal structures — something that had never been done before. The research team did it by slowing down the reaction time involved in synthesizing rigid plastics.” Like their MOF cousins, COFs benefit from a hub-and-strut structure (think Tinker Toys and you’re on the right track) that in effect loads the material with nanoscale pores, like a sponge. The pores can be altered by adjusting the angles at which the struts connect to the hubs, to provide a COF with unique properties. As for why COFs instead of MOFs, for one thing, they contain no metal, so they are much lighter than MOFs (COFs are typically made with hydrogen, boron, carbon, nitrogen, and oxygen). As an organic (aka plastic) material, COFs also have the potential to come in at a far lower cost than other materials. MOFs have previously been investigated for energy storage, but COFs are at a huge disadvantage in that field because they lack sufficient conductivity. One key problem is the disordered nature of COFs as applied to electrodes. So far, the body of research has arrived at one solution, which is to apply them to an electrode as a two-dimensional material: “Two-dimensional covalent organic frameworks (2D COFs) address these limitations by predictably and deliberately organizing redox-active groups into insoluble, high-surface area polymer networks with uniform micropores.” That’s all well and good, but the problem is that the charge/discharge rate is very slow, so you can kiss any high-performance applications goodbye. That’s where the new Northwestern University research comes in. Instead of just fiddling with the angles of their COF, they modded it out with the conducting polymer PEDOT, which is short for poly(3,4-ethylenedioxythiophene). Northwestern provides a rundown on the process: “… Two organic molecules self-assembled and condensed into a honeycomb-like grid, one 2-D layer stacked on top of the other. 
Into the grid’s holes, or pores, the researchers deposited the conducting polymer.” The pores in the new material are 2.3 nanometers wide, and each pore translates into increased surface area: “A small amount of the fluffy COF powder, just enough to fill a shot glass and weighing the same as a dollar bill, has the surface area of an Olympic swimming pool.” I know right? Here’s the result in plain language: “The modified COF showed a dramatic improvement in its ability to both store energy and to rapidly charge and discharge the device. The material can store roughly 10 times more electrical energy than the unmodified COF, and it can get the electrical charge in and out of the device 10 to 15 times faster.” The new device is also quite durable. According to the study, it can withstand 10,000 charge/discharge cycles. You can get all the details from the American Chemical Society journal Central Science under the title, “Superior Charge Storage and Power Density of a Conducting Polymer-Modified Covalent Organic Framework.” The EV market is slowly gathering steam, and if the Northwestern research pans out, it looks like you ain’t seen nothing yet. The researchers are confident that this is a breakthrough for EV battery technology: “This work represents the first time that electroactive COFs or crystalline framework materials have shown volumetric energy and power densities comparable with other porous carbon-based electrodes, thereby demonstrating the promise of redox-active COFs for EES [electrical energy storage] devices.” In other words, if the EV of tomorrow is equipped with a COF battery (or some similar future technology), it will go significantly farther, cost less, and charge faster than the current crop. Make that the day after tomorrow. So far, the Northwestern research has been tested out on a coin-cell-sized prototype that can light up an LED for 30 seconds. So, don’t hold your breath. But, when that affordable, sporty, 500-mile range EV of the future does come rolling into your garage, the US Army (yes, the Army) might take partial credit. Among other funders, the Northwestern team is supported by a US Army Research Office Multidisciplinary University Research Initiatives grant.
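
The shot-glass comparison can be sanity-checked with two outside assumptions that are not from the study: a U.S. dollar bill weighs roughly one gram, and an Olympic pool's surface is roughly 50 by 25 meters. The implied specific surface area lands squarely in the range typically reported for porous framework materials.

# Rough plausibility check; the bill mass and pool area are assumptions.
powder_mass_g = 1.0      # a U.S. dollar bill weighs about 1 gram
pool_area_m2 = 50 * 25   # Olympic pool surface, 50 m x 25 m

print(f"implied surface area: ~{pool_area_m2 / powder_mass_g:.0f} m^2 per gram")
# ~1,250 m^2/g, consistent with the very high internal surface areas
# reported for COFs and MOFs.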


News Article | November 2, 2016
Site: www.nature.com

At the threshold of the Sahara Desert near Ouarzazate, Morocco, some 500,000 parabolic mirrors run in neat rows across a valley, moving slowly in unison as the Sun sweeps overhead. This US$660-million solar-energy facility opened in February and will soon have company. Morocco has committed to generating 42% of its electricity from renewable sources by 2020. Across Africa, several nations are moving aggressively to develop their solar and wind capacity. The momentum has some experts wondering whether large parts of the continent can vault into a clean future, bypassing some of the environmentally destructive practices that have plagued the United States, Europe and China, among other places. “African nations do not have to lock into developing high-carbon old technologies,” wrote Kofi Annan, former secretary-general of the United Nations, in a report last year1. “We can expand our power generation and achieve universal access to energy by leapfrogging into new technologies that are transforming energy systems across the world.” That's an intoxicating message, not just for Africans but for the entire world, because electricity demand on the continent is exploding. Africa's population is booming faster than anywhere in the world: it is expected to almost quadruple by 2100. More than half of the 1.2 billion people living there today lack electricity, but may get it soon. If much of that power were to come from coal, oil and natural gas, it could kill international efforts to slow the pace of global warming. But a greener path is possible because many African nations are just starting to build up much of their energy infrastructure and have not yet committed to dirtier technology. Several factors are fuelling the push for renewables in Africa. More than one-third of the continent's nations get the bulk of their power from hydroelectric plants, and droughts in the past few years have made that supply unreliable. Countries that rely primarily on fossil fuels have been troubled by price volatility and increasing regulations. At the same time, the cost of renewable technology has been dropping dramatically. And researchers are finding that there is more potential solar and wind power on the continent than previously thought — as much as 3,700 times the current total consumption of electricity. This has all led to a surging interest in green power. Researchers are mapping the best places for renewable-energy projects. Forward-looking companies are investing in solar and wind farms. And governments are teaming up with international-development agencies to make the arena more attractive to private firms. Yet this may not be enough to propel Africa to a clean, electrified future. Planners need more data to find the best sites for renewable-energy projects. Developers are wary about pouring money into many countries, especially those with a history of corruption and governmental problems. And nations will need tens of billions of dollars to strengthen the energy infrastructure. Still, green ambitions in Africa are higher now than ever before. Eddie O'Connor, chief executive of developer Mainstream Renewable Power in Dublin, sees great potential for renewable energy in Africa. His company is building solar- and wind-energy facilities there and he calls it “an unparalleled business opportunity for entrepreneurs”. Power outages are a common problem in many African nations, but Zambia has suffered more than most in the past year. 
It endured a string of frequent and long-lasting blackouts that crippled the economy. Pumps could not supply clean water to the capital, Lusaka, and industries had to slash production, leading to massive job lay-offs. The source of Zambia's energy woes is the worst drought in southern Africa in 35 years. The nation gets nearly 100% of its electricity from hydropower, mostly from three large dams, where water levels have plummeted. Nearby Zimbabwe, South Africa and Botswana have also had to curtail electricity production. And water shortages might get worse. Projections suggest that the warming climate could reduce rainfall in southern Africa even further in the second half of the twenty-first century. Renewable energy could help to fill the gap, because wind and solar projects can be built much more quickly than hydropower, nuclear or fossil-fuel plants. And green-power installations can be expanded piecemeal as demand increases. Egypt, Ethiopia, Kenya, Morocco and South Africa are leading the charge to build up renewable power, but one of the biggest barriers is insufficient data. Most existing maps of wind and solar resources in Africa do not contain enough detailed information to allow companies to select sites for projects, says Grace Wu, an energy researcher at the University of California, Berkeley. She co-authored a report2 on planning renewable-energy zones in 21 African countries, a joint project by the Lawrence Berkeley National Laboratory (LBNL) in California and the International Renewable Energy Agency (IRENA) in Abu Dhabi. The study is the most comprehensive mapping effort so far for most of those countries, says Wu. It weighs the amount of solar and wind energy in the nations, along with factors such as whether power projects would be close to transmission infrastructure and customers, and whether they would cause social or environmental harm. “The IRENA–LBNL study is the only one that has applied a consistent methodology across a large region of Africa,” says Wu. High-resolution measurements of wind and solar resources have typically been done by government researchers or companies, which kept tight control of their data. The Berkeley team used a combination of satellite and ground measurements purchased from Vaisala, an environmental monitoring company based in Finland that has since made those data publicly available through IRENA's Global Atlas for Renewable Energy. The team also incorporated geospatial data — the locations of roads, towns, existing power lines and other factors — that could influence decisions about where to put energy projects. “If there's a forest, you don't want to cut it down and put a solar plant there,” says co-author Ranjit Deshmukh, also an energy researcher at Berkeley. The amount of green energy that could be harvested in Africa is absolutely massive, according to another IRENA report3, which synthesized 6 regional studies and found potential for 300 million megawatts of solar photovoltaic power and more than 250 million megawatts of wind (see 'Power aplenty'). By contrast, the total installed generating capacity — the amount of electricity the entire continent could produce if all power plants were running at full tilt — was just 150,000 megawatts at the end of 2015. Solar and wind power accounted for only 3.6% of that. The estimate of wind resources came as a surprise, says Oliver Knight, a senior energy specialist for the World Bank's Energy Sector Management Assistance Program in Washington DC. 
Although people have long been aware of Africa's solar potential, he says, as of about a decade ago, few local decision-makers recognized the strength of the wind. “People would have told you there isn't any wind in regions such as East Africa.” The World Bank is doing its own studies, which will assess wind speeds and solar radiation at least every 10 minutes at selected sites across target countries. It will ask governments to add their own geospatial data, and will combine all the information into a user-friendly format that is freely available and doesn't require advanced technical knowledge, says Knight.“It should be possible for a mid-level civil servant in a developing country to get online and actually start playing with this.” In the semi-arid Karoo region of South Africa, a constellation of bright white wind turbines rises 150 metres above the rolling grassland. Mainstream Renewable Power brought this project online in July, 17 months after starting construction. The 35 turbines add 80 megawatts to South Africa's supply, enough to power about 70,000 homes there. The Noupoort Wind Farm is just one of about 100 wind and solar projects that South Africa has developed in the past 4 years, as prices fell below that of coal and construction lagged on two new massive coal plants. South Africa is primed to move quickly to expand renewable energy, in part thanks to its investment in data. Environmental scientist Lydia Cape works for the Council for Scientific and Industrial Research, a national lab in Stellenbosch. She and her team have created planning maps for large-scale wind and solar development and grid expansion. Starting with data on the energy resources, they assessed possible development sites for many types of socio-economic and environmental impact, including proximity to electricity demand, economic benefits and effects on biodiversity. The South African government accepted the team's recommendations and designated eight Renewable Energy Development Zones that are close to consumers and to transmission infrastructure — and where power projects will cause the least harm to people and ecosystems. They total “about 80,000 square kilometres, the size of Ireland or Scotland, roughly”, says Cape. The areas have been given streamlined environmental authorization for renewable projects and transmission corridors, she says. But for African nations to go green in a big way, they will need a huge influx of cash. Meeting sub-Saharan Africa's power needs will cost US$40.8 billion a year, equivalent to 6.35% of Africa's gross domestic product, according to the World Bank. Existing public funding falls far short, so attracting private investors is crucial. Yet many investors perceive African countries as risky, in part because agreements there require long and complex negotiations and capital costs are high. “It's a real challenge,” says Daniel Kammen, a special envoy for energy for the US Department of State and an energy researcher at the University of California, Berkeley. “Many of these countries have not had the best credit ratings.” Elham Ibrahim, the African Union's commissioner for infrastructure and energy, advises countries to take steps to reassure private investors. Clear legislation supporting renewable energy is key, she says, along with a track record of enforcing commercial laws. South Africa is setting a good example. In 2011, it established a transparent process for project bidding called the Renewable Energy Independent Power Producer Procurement Programme (REIPPPP). 
The programme has generated private investments of more than $14 billion to develop 6,327 megawatts of wind and solar. Mainstream Renewable Power has won contracts for six wind farms and two solar photovoltaic plants through REIPPPP. “This programme is purer than the driven snow,” says O'Connor. “They publish their results. They give state guarantees. They don't delay you too much.” Although the country's main electricity supplier has wavered in its support for renewables, the central government remains committed to the programme, he says. “I would describe the risks in South Africa as far less than the risks in England in investing in renewables.” For countries less immediately attractive to investors, the World Bank Group launched the Scaling Solar project in January 2015. This reduces risk to investors with a suite of guarantees, says Yasser Charafi, principal investment officer for African infrastructure with the International Finance Corporation (IFC) in Dakar, which is part of the World Bank Group. Through the Scaling Solar programme, the IFC offers low-priced loans; the World Bank guarantees that governments will buy the power generated by the projects; and the group's Multilateral Investment Guarantee Agency offers political insurance in case of a war or civil unrest. Zambia, the first country to have access to Scaling Solar, has won two solar projects that will together provide 73 megawatts. Senegal and Madagascar were next, with agreements to produce 200 and 40 megawatts, respectively. Ethiopia has just joined, and the IFC will give two further countries access to the programme soon; its target is to develop 1,000 megawatts in the first 5 years. That power won't be useful if it can't get to users. One of the big barriers to a clean-energy future in Africa is that the continent lacks robust electricity grids and transmission lines to move large amounts of power within countries and across regions. But that gap also provides some opportunities. Without a lot of existing infrastructure and entrenched interests, countries there might be able to scale up renewable projects and manage electricity more nimbly than developed nations. That's what happened with the telephone industry: in the absence of much existing land-line infrastructure, African nations rapidly embraced mobile phones. The future could look very different from today's electricity industry. Experts say that Africa is likely to have a blend of power-delivery options. Some consumers will get electricity from a grid, whereas people in rural areas and urban slums — where it is too remote or too expensive to connect to the grid — might end up with small-scale solar and wind installations and minigrids. Still, grid-connected power is crucial for many city dwellers and for industrial development, says Ibrahim. And for renewables to become an important component of the energy landscape, the grid will need to be upgraded to handle fluctuations in solar and wind production. African nations can look to countries such as Germany and Denmark, which have pioneered ways to deal with the intermittent nature of renewable energy. One option is generating power with existing dams when solar and wind lag, and cutting hydropower when they are plentiful. Another technique shuttles electricity around the grid: for example, if solar drops off in one place, power generated by wind elsewhere can pick up the slack. 
A third strategy, called demand response, reduces electricity delivery to multiple customers by imperceptible amounts when demand is peaking. These cutting-edge approaches require a smart grid and infrastructure that connects smaller grids in different regions so that they can share electricity. Africa has some of these 'regional interconnections', but they are incomplete. Four planned major transmission corridors will need at least 16,500 kilometres of new transmission lines, costing more than $18 billion, says Ibrahim. Likewise, many countries' internal power grids are struggling to keep up. That's part of what makes working in energy in Africa challenging. Prosper Amuquandoh is an inspector for the Ghana Energy Commission and the chief executive of Smart and Green Energy Group, an energy-management firm in Accra. In Ghana, he says, “there's a lot of generation coming online”. The country plans to trade electricity with its neighbours in a West African Power Pool, Amuquandoh says, but the current grid cannot handle large amounts of intermittent power. Despite the challenges, he brims with enthusiasm when he talks about the future: “The prospects are huge.” With prices of renewables falling, that kind of optimism is spreading across Africa. Electrifying the continent is a moral imperative for everyone, says Charafi. “We cannot just accept in the twenty-first century that hundreds of millions of people are left out.”


News Article | October 23, 2015
Site: www.materialstoday.com

When Ramesh Jasti began making tiny organic circular structures using carbon atoms, the idea was to create an improved version of carbon nanotubes for use in electrical and optical devices. He quickly realized, however, that his technique might have wider applications. In a new paper, Jasti and five colleagues from the University of Oregon show that his nanohoops – known chemically as cycloparaphenylenes – can be made using a variety of atoms, not just carbon. They envision these circular structures, which can efficiently absorb and distribute energy, finding a place in solar cells, organic light-emitting diodes and as new sensors or probes for medicine. The research, led by Jasti's doctoral student Evan Darzi, is described in a paper in ACS Central Science. The paper is a proof-of-principle for the process, which will have to wait for additional research to be completed before the full impact of these new nanohoops can be realized, Jasti said. Barely 1nm in size, the nanohoops offer a new class of structures for use in electrical and optical devices, said Jasti. He was the first scientist to synthesize these types of molecules back in 2008, as a postdoctoral fellow at the Molecular Foundry at the Lawrence Berkeley National Laboratory. "These structures add to the toolbox and provide a new way to make organic electronic materials," Jasti said. "Cyclic compounds can behave like they are hundreds of units long, like polymers, but be only six to eight units around. We show that by adding non-carbon atoms, we are able to move the optical and electronic properties around." Nanohoops can help solve challenges related to materials with controllable band gaps – the energies lying between valance and conduction bands that are vital for designing organic semiconductors. Currently long materials such as those based on polymers make the best organic semiconductors. "If you can control the band gap, then you can control the color of light that is emitted, for example," Jasti said. "In an electronic device, you also need to match the energy levels to the electrodes. In photovoltaics, the sunlight you want to capture has to match that gap to increase efficiency and enhance the ability to line up various components in optimal ways. These things all rely on the energy levels of the molecules. We found that the smaller we make nanohoops, the smaller the gap." To prove that their approach could work, Darzi synthesized a variety of nanohoops using both carbon and nitrogen atoms to explore their behavior. "What we show is that the charged nitrogen makes a nanohoop an acceptor of electrons, and the other part becomes a donator of electrons," Jasti said. "The addition of other elements like nitrogen gives us another way to manipulate the energy levels, in addition to the nanohoop size. We've now shown that the nanohoop properties can be easily manipulated and, therefore, these molecules represent a new class of organic semiconductors -- similar to conductive polymers that won the Nobel Prize in 2000," he said. "With nanohoops, you can bind other things in the middle of the hoop, essentially doping them to change properties or perhaps sense an analyte that allows on-off switching." His initial work making nanohoop compounds was based entirely on carbon, with the idea of making nanohoops with different diameters and then combining them. But his group kept finding unique and unexpected electronic and optical properties. 
Jasti brought his research from Boston University to the University of Oregon's Department of Chemistry and Biochemistry in 2014. He said the solar cell research being done by his colleagues in the Materials Science Institute, of which he is a member, was an important factor in his decision to move to the University of Oregon. "We haven't gotten very far into the application of this," he said. "We're looking at that now. What we were able to see is that we can easily manipulate the energy levels of the structure, and now we know how to exchange any atom at any position along the loop. That is the key discovery, and it could be useful for all kinds of semiconductor applications." This story is adapted from material from the University of Oregon, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier. Link to original source.


News Article | November 21, 2016
Site: phys.org

New X-ray methods at the Department of Energy's SLAC National Accelerator Laboratory have captured the highest resolution room-temperature images of this protein complex, which allows scientists to closely watch how water is split during photosynthesis at the temperature at which it occurs naturally. The research team took diffraction images using the bright, fast pulses of X-rays at SLAC's X-ray free-electron laser - the Linac Coherent Light Source (LCLS), a DOE Office of Science User Facility. Nature published the study on Nov. 21. Previously, the resting state of photosystem II had been seen in detail using samples that were frozen. In this latest study, the researchers were able to see two key steps in photosynthetic water splitting under conditions as it occurs in nature, a big step to decoding how the process works in detail. A damage-free, room temperature study means there are fewer artifacts in the images, and this gives a clearer picture of how photosystem II works in nature. For many years now, scientists have been trying to understand this process in meticulous detail. Besides its fundamental importance to science, understanding this process might help develop ways to create artificial photosynthesis devices that can serve as potential clean energy sources. "The eventual goal is to emulate what photosynthesis has been doing for about three billion years. This has been a research challenge for decades," said Junko Yano, principal investigator and senior scientist at Lawrence Berkeley National Laboratory. "We now have the right tool, the femtosecond X-ray laser pulses created at LCLS, that allows us to observe the water-splitting reaction as it happens, in real time, and as it happens in nature." "We want to have enough snapshots of this process to see how exactly the oxygen is formed," added Uwe Bergmann, a distinguished scientist at SLAC, and co-author of the Nature paper. "This method - to be able to get high resolution images at room temperature and under illumination in real time - is really the strategy we need to catch the molecule in the act." The international research team is a long-standing collaboration between SLAC and Berkeley Lab, and includes Humboldt University in Germany, Umeå University and Uppsala University in Sweden, Stanford University, Brookhaven National Laboratory and University of Oxford in the United Kingdom. New Techniques and New Research Directions In previous high-resolution studies of the system at synchrotron light sources, samples experienced radiation damage from longer exposure to X-rays. At LCLS, the researchers were able to take advantage of the ultrafast laser pulses to collect X-ray crystallography and spectroscopy data before damage occurred. "The beauty of the LCLS is that the laser pulses are so short - only 40 femtoseconds in duration, but very intense - that you can collect the data before the sample is destroyed," said co-author Jan Kern, scientist at Berkeley Lab and SLAC, in a related press release. "It's very new, and there are only two places in the world, LCLS and the SPring-8 Angstrom Compact free electron LAser (SACLA), where this can be done at present." In a recent experiment at SACLA, the only other operating hard X-ray free-electron laser, another team of scientists was able to look at the first step in the cycle in a frozen sample, and under dark conditions - which means the water-splitting reaction had not yet been initiated in the protein complex. 
This latest study at SLAC catches this first step, as well as another step that is two stages further along in the four-step cycle, and at the same temperature the process normally occurs in nature. To do this, the scientists placed droplets of the sample in a solution with small, crystallized forms of the photosystem II on a moving conveyor belt and illuminated the samples with pulses of green light from a laser to initiate the water-splitting reaction. After two light pulses, they captured images of the crystals using X-rays, with a resolution finer than 2.5 angstroms - significantly better than previously achieved at room temperature. The water-splitting reaction takes place at a metal catalyst within the photosystem II protein, known as the oxygen-evolving complex, that is made up of four manganese atoms and one calcium atom. The complex uses the energy from light to form pure oxygen from two water molecules. The four manganese atoms are critical in shuffling electrons through the cycle, but it is unknown where exactly in the complex the involved water is located or where the oxygen formation occurs. To sort this out, the researchers used ammonia, a water substitute, to narrow down where oxygen atoms from two water molecules combine to form an oxygen molecule. If the ammonia was bound to a site, and the reaction still proceeded, that site is unlikely to be part of the oxygen molecule formation. The results from this study offered a surprise - the data do not seem to support two leading theories for how the reaction proceeds within the oxygen-evolving complex. In future studies using this same technique, the researchers hope to capture more images at different steps of the process, which will allow them to further refine the details of the water-splitting reaction. "The chemistry is so unusual," co-principal investigator and senior scientist, Vittal Yachandra at Berkeley Lab, said "Learning how exactly this water-splitting process works will be a breakthrough in our understanding, and it can help in the development of solar fuels and renewable energy." Explore further: Oxygen levels in the air do not limit plant productivity More information: Iris D. Young et al, Structure of photosystem II and substrate binding at room temperature, Nature (2016). DOI: 10.1038/nature20161


Thin films of (LuFeO3)m/(LuFe2O4)n were grown by reactive-oxide molecular-beam epitaxy in a Veeco GEN10 system on (111) (ZrO2)0.905(Y2O3)0.095 (or 9.5 mol% yttria-stabilized zirconia) substrates, denoted YSZ, held at a temperature of about 700 °C. This substrate temperature was determined by an optical pyrometer focused on an opaque, 200-nm-thick platinum layer on the backside of the YSZ substrates used to absorb the heat from the SiC radiative substrate heater. Lutetium and iron were evaporated from elemental sources, each at a flux of approximately 1 × 10^13 atoms per square centimetre per second. The fluxes were first calibrated (approximately) with a quartz crystal microbalance. The lutetium flux was then more accurately refined using the reflection high-energy electron diffraction (RHEED) intensity oscillations that occurred when (111) Lu2O3 was deposited on (111) YSZ. With the known lutetium flux, the iron flux was adjusted to produce a stoichiometric LuFeO3 film as judged by the out-of-plane lattice parameter measured by XRD (ref. 33). The background oxygen partial pressure was meticulously varied during the superlattice growth to provide an environment sufficiently oxidizing for the Fe3+ in the LuFeO3 without over-oxidizing the Fe2.5+ in the LuFe2O4 layers. The background partial pressure of a mixture of about 2% O3 in O2 was varied between 3 × 10^−7 Torr and 9 × 10^−7 Torr using a capacitance manometer to control the inlet pressure of a piezoelectric leak valve. Each growth sequence commenced with at least five monolayers of the LuFeO3 structure before the first LuFe2O4 layer to avoid over-oxidation of the LuFe2O4 by oxygen supplied by the YSZ substrate. (LuFeO3)m/(LuFe2O4)n superlattices with thicknesses ranging from 28 nm to 35 nm were studied. The structure was characterized by XRD using a four-circle Rigaku SmartLab diffractometer equipped with a Ge(220) × 2 monochromator on the incident side and a Ge(220) × 2 analyser on the diffracted side, with Cu Kα radiation. XRD θ–2θ scans of all samples presented, as well as a representative rocking curve, are shown in Extended Data Fig. 1. Cross-sectional TEM specimens were prepared using an FEI Strata 400 Focused Ion Beam (FIB) with a final milling step of 2 keV to reduce surface damage. High-resolution HAADF-STEM images were acquired on a 100-keV Nion UltraSTEM, a fifth-order aberration-corrected microscope. The images in Fig. 1b were acquired from a single sample containing the representative layering patterns in this study. The lutetium distortions were quantified from HAADF-STEM images. Several images were averaged to reduce scan noise, and the lutetium atomic positions were determined through an iterative two-dimensional Gaussian fitting procedure and segmentation using a watershed algorithm. The position of each atom was then compared to its neighbouring atom on each side in the same atomic plane. The magnitude and direction of the distortion were calculated by comparing the height difference of the three atoms, as shown in Extended Data Fig. 6a. The schematics show different patterns of three-atom sets corresponding to the marked polarization. This distortion directly correlates with the ferroelectric polarization, as shown in Extended Data Fig. 2. The resulting polarization at different locations in the structure is plotted in Extended Data Fig. 5 and in the aggregate in Extended Data Fig. 6b, c. In sum, the distortions from over 90,000 lutetium atoms were analysed to generate the results shown in Fig. 2e.
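The distortion analysis just described lends itself to a compact numerical sketch. The snippet below assumes images already loaded as NumPy arrays and uses invented function names (refine_position, lu_distortions); it is only an illustration of the two steps named above (local 2D Gaussian refinement of the atomic-column positions and the three-atom height comparison), not the authors' actual analysis code.

```python
# Illustrative sketch of the Lu-distortion quantification: refine atomic-column
# positions with a local 2D Gaussian fit, then compare each Lu column's height
# with its two in-plane neighbours. Names and parameters are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset

def refine_position(image, x_guess, y_guess, window=5):
    """Refine one atomic-column position (pixel indices) by fitting a 2D Gaussian
    to a small patch of the averaged HAADF-STEM image."""
    ys, xs = np.mgrid[y_guess - window:y_guess + window + 1,
                      x_guess - window:x_guess + window + 1]
    patch = image[ys, xs].astype(float)
    p0 = (patch.max() - patch.min(), x_guess, y_guess, 2.0, patch.min())
    popt, _ = curve_fit(gaussian_2d, (xs.ravel(), ys.ravel()), patch.ravel(), p0=p0)
    return popt[1], popt[2]  # refined (x, y) column position in pixels

def lu_distortions(column_positions):
    """column_positions: (N, 2) array of (in-plane coordinate, height) for the Lu
    columns of one atomic plane, ordered along the in-plane direction. Returns a
    signed buckling of each interior atom relative to its two neighbours, a proxy
    for the local trimer distortion."""
    z = column_positions[:, 1]
    return z[1:-1] - 0.5 * (z[:-2] + z[2:])  # positive: atom displaced 'up'
```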
The temperature dependence of the lutetium distortions was measured using a Protochips Aduro double-tilt heating holder in a 200-keV FEI Tecnai F20-ST microscope, as shown in Extended Data Fig. 6d. A cross-sectional TEM specimen was prepared by FIB as above and then mounted onto a ceramic heating chip. The temperature was controlled on the chip by passing a current, which was calibrated to be within 50 K of the intended temperature. HAADF-STEM images were recorded at intervals as the sample was heated from 290 K to 675 K. The ferroelectric distortions were estimated from the strength of the superlattice reflections observed in the fast Fourier transform of the image, corresponding to the tripling of the unit cell from the lutetium distortions. We label regions where ferroelectric distortions were observed, where weak distortions were observed and where the distortions were not observed (although they may be present, they are below our resolution limit). Magnetic measurements were performed with a Quantum Design MPMS3 SQUID magnetometer. M–T curves with the sample field-cooled in 1 kOe and zero-field-cooled were measured for each sample, along with a bare YSZ substrate that had undergone identical deposition conditions to a superlattice (the same oxygen partial pressure and time at growth temperature, but not exposed to the lutetium or iron molecular beams). The M–T curves of the YSZ substrate were used to subtract the diamagnetic and paramagnetic backgrounds from the (LuFeO3)m/(LuFe2O4)n M–T curves. The curves in Fig. 2a were normalized to facilitate comparison of the Curie temperatures (TC) at which the superlattices order ferromagnetically (or ferrimagnetically). The ferromagnetic (or ferrimagnetic) Curie temperature was taken to be the intersection between a line extending from the point of steepest descent of the field-cooled curve and a line extending from the high-temperature background (see the sketch below). M–H loops (where H is the magnetic field) out to 70 kOe were measured for each sample at 50 K, 100 K, 200 K and 300 K. The linear diamagnetic background for each of these loops was subtracted with a linear fit to the high-field data. At higher temperatures the loops saturate, but below 130 K the loops did not saturate, and an additional M–H loop was measured with the sample field-cooled in 70 kOe to measure the saturated magnetic moment at these temperatures. The known bulk magnetic moment of LuFeO3 (ref. 11) was subtracted from the total measured magnetic moment to obtain the LuFe2O4 magnetic moments shown in Fig. 2d. Extended Data Fig. 3c displays ‘excess magnetization’ as a function of composition at 50 K, 100 K and 200 K. This value was found by subtracting the magnetization due to the end-member LuFeO3 and LuFe2O4 signals from the total moment observed. Neutron diffraction was performed on the BT-4 triple-axis spectrometer at the NIST Center for Neutron Research using pyrolytic graphite monochromator and analyser crystals with neutrons of initial and final energies of 14.7 meV (λ = 2.359 Å). Relaxed horizontal collimation settings of the neutron optics, in stream order open–monochromator–40′–sample–40′–analyser–open–detector, were used to maximize the intensity of the very weak scattered signal. The sample was mounted onto a silicon wafer using a small amount of aluminium foil and sealed in an aluminium canister with 4He gas to ensure thermal equilibration with the closed-cycle refrigerator. Neutron diffraction was used to determine the onset of long-range magnetic order of a (LuFeO3)m/(LuFe2O4)n superlattice.
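As a concrete illustration of that Curie-temperature construction, the short function below takes the intersection of the tangent at the steepest descent of the field-cooled M(T) curve with a straight line fitted to the high-temperature background. The array names and the background window are assumptions made for the sketch, not values from the study.

```python
# Minimal sketch: TC as the intersection of the steepest-descent tangent of the
# field-cooled M(T) curve with a line fitted to the high-temperature background.
import numpy as np

def curie_temperature(T, M, background_window=(320.0, 380.0)):
    dMdT = np.gradient(M, T)
    i = np.argmin(dMdT)                                   # steepest descent of M(T)
    slope_t, icpt_t = dMdT[i], M[i] - dMdT[i] * T[i]      # tangent line at that point
    hi = (T >= background_window[0]) & (T <= background_window[1])
    slope_b, icpt_b = np.polyfit(T[hi], M[hi], 1)         # high-temperature background
    return (icpt_b - icpt_t) / (slope_t - slope_b)        # intersection temperature
```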
Neutron scattering is not sensitive to small impurity phases that may influence other bulk characterization techniques. Magnetic reflections of the (LuFeO3)m/(LuFe2O4)n superlattice were observed in neutron diffraction by scanning along the [10L] reciprocal lattice direction, where the Miller index L is along the c axis or growth direction, at several temperatures between 5 K and 325 K. We found a single 101 magnetic reflection (indexed to the magnetic unit cell of 10.4 nm, or two superlattice repeat distances) at 5 K that was not present at room temperature. This peak shows a considerable change in intensity between 5 K and room temperature, as shown in Extended Data Fig. 4a. The centre of the peak is offset slightly from the 101 reflection owing to a slight misalignment of the sample. A Gaussian function was fitted to the signal to determine the integrated intensity as a function of temperature. Disregarding finite-size effects of the films along the growth direction, the full width of the resolution function is shown as the horizontal (black) line, demonstrating that the peak is not resolution-limited. The magnetic correlation length extracted from the peak width is ξ ≈ 30 nm, or roughly six superlattice units along the c direction. The onset of magnetic order was determined by fitting the temperature-dependent integrated intensity of the peak with a mean-field order parameter (see the sketch below). As shown in Extended Data Fig. 4b, we find TC = 238 ± 28 K. This is consistent with the onset of ferromagnetism (or ferrimagnetism) determined from magnetometry of a (LuFeO3)m/(LuFe2O4)n superlattice (246 K). To understand the origin of this 101 reflection, we calculated the magnetic structure factors for the magnetic moment arrangements obtained from DFT for the COII and Fe3+-doped configurations of the (LuFeO3)m/(LuFe2O4)n superlattice. These calculations show that 10L-type reflections are particularly sensitive to the onset of ferrimagnetic order along the c axis in the LuFe2O4 layers in the charge-ordered state. X-ray absorption spectroscopy at the Fe L edge was performed at Beamline 4.0.2 at the Advanced Light Source at Lawrence Berkeley National Laboratory. Spectra were acquired with 100% linearly polarized light oriented nearly parallel to and perpendicular to the c axis of the sample, at an angle of X-ray incidence of 20° to the sample surface. The normalized difference between these spectra was collected at temperatures ranging from 300 K to 700 K for four (LuFeO3)m/(LuFe2O4)n superlattice compositions. All temperatures measured were above the magnetic transition temperatures of the corresponding films, so the dichroic signal records an asymmetry in the electronic structure, as previously recorded for ferroelectric thin films (ref. 31). As shown in Extended Data Fig. 7b, one of these samples displays a dichroism of about 20% at 300 K due to the structural anisotropy between the in-plane and out-of-plane electronic configurations in the hexagonal structure. A smaller anisotropy was also previously recorded for LuFe2O4 (ref. 34). Previous calibration of the endstation has identified a systematic error of approximately ±5% in the measured dichroism due to the variation of the degree of X-ray polarization, the angle of X-ray incidence and the photon energy with time.
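Returning to the neutron-diffraction analysis above, a minimal version of the mean-field order-parameter fit is sketched here: the integrated 101 intensity is modelled as I0(1 − T/TC)^(2β) plus a constant background with the mean-field exponent β = 1/2, and TC and its uncertainty are read from the fit covariance. Variable names and starting values are illustrative assumptions.

```python
# Mean-field order-parameter fit to the integrated magnetic peak intensity.
import numpy as np
from scipy.optimize import curve_fit

def mean_field_intensity(T, I0, TC, bg):
    t = np.clip(1.0 - T / TC, 0.0, None)   # order parameter squared ~ (1 - T/TC), zero above TC
    return I0 * t + bg                     # 2*beta = 1 for the mean-field exponent beta = 1/2

# T_K: measurement temperatures (K); I_obs, I_err: integrated intensities and errors
# popt, pcov = curve_fit(mean_field_intensity, T_K, I_obs, sigma=I_err,
#                        p0=(I_obs.max(), 240.0, 0.0))
# TC, dTC = popt[1], np.sqrt(pcov[1, 1])
```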
To further determine the uncertainty associated with these measurements, we measured the dichroic signal from one sample multiple times before sample heating and again after the sample heating/irradiation. From comparison of these dichroic signals, we find a standard error of the mean of approximately 0.016 (4% in un-normalized units), within the uncertainty estimated from the previous calibration. The three (LuFeO3)m/(LuFe2O4)n samples that were identified as displaying ferroelectric lutetium distortions by HAADF-STEM demonstrated a dichroism of about 40% at 300 K. The temperature-dependent fits of the recorded difference spectra as a linear combination of the 300 K spectra of the structural-only component and of the ferroelectric component are shown in Fig. 2f. There is a drop in the recorded signal for two of these samples, indicative of a ferroelectric transition. The XLD signal does not decrease fully to the level observed for the sample showing only the structural (about 20%) dichroism, suggesting that there may be an additional structural component in the ferroelectric films that is not captured by that reference component. The data shown in Fig. 2f were fitted to an order parameter plus a constant background using a Bayesian fitting algorithm. From this, we identify transitions at about 550 K and 500 K in these two samples. Assuming the same constant background for the third sample, we estimate a transition near 900 K, beyond the measurement range of this experiment. The variable-temperature HAADF-STEM images (Extended Data Fig. 6d) recorded from these layering patterns are also consistent with transitions at comparable temperatures, although slightly different transitions between the m = 3 and m = 5 superlattices are found. The high-temperature transition in this superlattice is consistent with the observation of superlattice reflections in RHEED corresponding to the trimer distortion (ref. 24) during thin-film deposition at the growth temperature for this film. Vertical (or out-of-plane) PFM was used to measure the ferroelectric polarizability of the (LuFeO3)m/(LuFe2O4)n superlattice shown in Figs 2g and 4a (refs 35–37). A bottom electrode was added to the sample after thin-film deposition. The substrate was first thinned using tripod mechanical polishing at a 1° angle; the measured region had a substrate thickness of approximately 150 nm. A copper adhesion layer and a subsequent platinum layer were deposited onto the thinned substrate by direct-current magnetron sputtering in an argon background of 3 × 10^−3 Torr at room temperature. This provided a conducting back electrode for the PFM experiments without disturbing the epitaxy of the (LuFeO3)m/(LuFe2O4)n film on the insulating YSZ substrate. In Fig. 2g, a 20 μm × 20 μm region was poled with a d.c. bias of −15 V to switch the sample to the ‘upward’-oriented out-of-plane polarization. A second switch with a d.c. bias of +15 V was then performed to write the pattern shown with the corresponding ‘downward’-oriented polarization. The resulting structure was imaged directly after poling and again 100 h later using a 2-V a.c. voltage. The image acquired after 100 h was rotated after acquisition to match the orientation of the original image. The persistence of the written structure demonstrates the ability to reversibly switch the ferroelectric polarization with an electric field.
In Fig. 4a, a 15 μm × 15 μm region was poled using a single switch of a d.c. bias at +15 V and −15 V (applied to the cantilever tip), generating a ‘downward’- and ‘upward’-oriented out-of-plane polarization, plotted in red and turquoise, respectively. The magnetic structure presented in Fig. 4b–d was measured nearly 200 h after the initial poling shown in Fig. 4a, further corroborating the persistence of the switched state. Variable-temperature and magnetic-field transport measurements were made in the van der Pauw geometry using a combination of a Quantum Design PPMS and external Keithley electronics. Measurements were made of the in-plane resistivity, that is, parallel to the (0001) planes. The resistivity increases substantially as the temperature is lowered, until it is beyond our measurement capabilities below about 200 K. Resistivity values for the entire stack of the two (LuFeO3)m/(LuFe2O4)n samples we measured are 4.3–86.5 Ω cm over the 330–198 K temperature range. Plotting the natural logarithm of the conductivity against various powers of inverse temperature suggests that conduction is primarily through thermally activated carriers. No anomaly in the resistivity at TC, nor any change in resistivity with the application of a magnetic field (0 T, 5 T and 9 T), was observed. We attempted, but were unable (the sample was too resistive), to measure a bare YSZ substrate that had undergone identical deposition conditions to a superlattice (the same oxygen partial pressure and time at growth temperature, but not exposed to the lutetium or iron molecular beams) to determine whether the primary conduction path was mediated by oxygen vacancies generated in the substrate. The enhancement of TC in region I could be caused by strain or oxygen stoichiometry effects. Lutetium atom distortions were not observed in these superlattices. A slight mismatch between the LuFeO3 in-plane lattice (a = 5.989 ± 0.005 Å in thick relaxed epitaxial films (ref. 25), or 3.458 Å for the primitive cell) and that of bulk LuFe2O4 (a = 3.4406 Å; ref. 38) produces a tensile strain of up to 0.25% on the LuFe2O4 layers; epitaxial strain has previously been shown to affect ferromagnetic transitions (refs 7, 39, 40). In region I, TC reaches 250 K, approximately the transition observed in LuFe2O4 single crystals (ref. 17). Previous experiments have shown that the magnetic properties of LuFe2O4 are extremely sensitive to deviations in oxygen stoichiometry (ref. 41), and thus a small amount of oxygen diffusion between the LuFe2O4 and LuFeO3 layers could also have caused the increase in TC to 250 K. We use first-principles DFT calculations to elucidate the origin of the increase in the magnetic transition temperature TC with the number of LuFeO3 layers (Fig. 2c) and, correspondingly, with the magnitude of the electrical polarization (Fig. 2e). In particular, we perform DFT calculations on what we call ‘model LuFe2O4’ systems: bulk-like LuFe2O4 structures with the addition of the trimer distortion Q. This trimer distortion is characteristic of LuFeO3 and, as shown in Fig. 1b, is experimentally observed in the (LuFeO3)m/(LuFe2O4)n superlattices, yet is not observed in bulk LuFe2O4. This allows us to derive a simple model that elucidates the origin of the effect in the superlattices, without performing DFT calculations directly on these larger structures. Previous experimental work (ref. 17) has suggested that bulk LuFe2O4 has a net polarization due to a charge disproportionation between the composing iron bilayers.
In the first layer (A), an Fe2+ ion is at the centre of a hexagon composed of six Fe3+ ions, leading to a 1:2 ratio of the charges. The second layer (B) has a similar arrangement with a majority of Fe2+. It was proposed that the bilayers order to form an ABAB stacking along the c direction. We denote this structure as ‘COII’, which is displayed in Fig. 3b and reproduced in Extended Data Fig. 10b. This stacking sequence breaks inversion symmetry and so is compatible with a ferroelectric state (space group Cm). We have also considered multiple additional charge-order arrangements, including those compatible with a non-ferroelectric state, consistent with later experimental results (refs 42, 43). We introduce only the low-energy arrangements here. One such arrangement has the same bilayer structure as COII, but an opposite stacking of the bilayers, for example, ABBA ordering. In this case, a mirror plane is located at the Lu–O plane, preserving inversion symmetry (space group C2/m). We refer to this structure, displayed in Fig. 3a and reproduced in Extended Data Fig. 10a, as ‘COI’. In addition, the layers can be stacked in an AABB arrangement (proposed in ref. 42, but studied theoretically for the first time here), which we denote ‘COIII’ (space group C2/m) and display in Extended Data Fig. 10c. In this configuration, the AA and BB bilayers have excess charge, and a mirror plane is located at the centre between two A (B) layers. COI and COIII are non-polar (space group C2/m; refs 42, 43) and therefore not ferroelectric. We find that COI is the lowest-energy configuration, followed by COII, which is higher than COI by only about 4 meV per formula unit. COIII is higher in energy than COI by more than about 50 meV per formula unit and therefore will not be considered further. The small difference in energy between COI and COII is expected because the two structures differ only in long-range inter-bilayer Coulomb interactions. Structurally, however, COI and COII have a quite distinct (and relatively easy to experimentally discern) difference: the COII configuration induces the lutetium trimer distortions, which are forbidden by symmetry in COI. One difficulty in definitively determining the magnetic ground state of LuFe2O4 from first principles is the presence of multiple magnetic configurations of COI and COII with similar energies. The energetics of these magnetic configurations in COI and COII are shown in Extended Data Fig. 9b. The ground-state magnetization identified for COI is M = 0.5μB/Fe, whereas the lowest-energy state of COII has M = 1.2μB/Fe. Both COI and COII have low-energy states with M ranging from 0μB/Fe to 1.2μB/Fe. According to experimental measurements (ref. 44), LuFe2O4 undergoes a ferrimagnetic transition at TC = 240–250 K, resulting in an Ising-like net magnetic moment along the c direction. The reported moment ranges from 0.8μB/Fe to 1.4μB/Fe and is particularly sensitive to oxygen stoichiometry (ref. 41). We thus consider an Ising Hamiltonian, of the standard nearest-neighbour form sketched below, to describe the magnetic properties of the LuFe2O4 system. Here J is the symmetric super-exchange interaction and the spin S = ±2 for Fe2+ (S = 5/2 for Fe3+) in trigonal bipyramidal coordination with high spin. The six super-exchange interactions considered correspond to nearest-neighbour interactions for iron (2+/3+) ions located either in the same plane, in neighbouring planes within a bilayer or in neighbouring planes between different bilayers; see Extended Data Fig. 8a. These parameters are estimated from first principles.
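For reference, a generic nearest-neighbour Ising super-exchange Hamiltonian of the kind described here can be written as follows; the summation and symbol conventions (J_ij for the six couplings, S_i for the Ising variables) are illustrative assumptions rather than the authors' exact notation.

```latex
% Generic nearest-neighbour Ising super-exchange Hamiltonian (illustrative form only).
\begin{equation*}
  H \;=\; \sum_{\langle i,j\rangle} J_{ij}\, S_i S_j ,
  \qquad
  S_i = \pm 2 \ (\mathrm{Fe}^{2+}), \qquad
  S_i = \pm \tfrac{5}{2} \ (\mathrm{Fe}^{3+}),
\end{equation*}
```

with J_ij taking one of the six values according to whether sites i and j lie in the same plane, within the same bilayer or in adjacent bilayers.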
The magnetic ground states and Curie transition temperatures are then determined from Monte Carlo simulations (a simplified illustration is given below). The simulated TC for COI (TC ≈ 500 K) is found to be about a factor of two higher than the experimentally reported value (TC ≈ 240 K). The possible origins of this discrepancy are: (1) correlations among electrons not captured with DFT + U that strongly influence the magnetic exchange interactions or (2) relativistic interactions between spins that are not included in the present model, yet that often strongly compete with symmetric exchange interactions. We therefore introduce a normalization factor of approximately 0.48 to match the calculated TC of COI with the corresponding experimentally reported value and use this same factor throughout our study. Although, in essence, the TC of bulk (COI) LuFe2O4 becomes a parameter, the changes in TC (for example, from COI to COII) are calculated from a parameter-free, first-principles theory. The corresponding results are displayed in Fig. 3a and b for COI and COII (with Q ≈ 1), respectively, where it is seen that the ferrimagnetic transition temperature of COII is substantially larger (TC ≈ 300 K) than that of COI. Noting that COII induces lutetium trimer distortions, we repeat our Monte Carlo simulations of the magnetic transition for different values of Q. As shown in Fig. 3b, we observe that TC monotonically increases with the magnitude of this distortion. Calculated super-exchange values as a function of distortion are shown in Extended Data Fig. 8b. The largest super-exchange interactions correspond to in-plane interactions and follow a linear trend as a function of Q. Although the Fe3+–Fe3+ interactions increase and become more antiferromagnetic, the Fe2+–Fe2+ interactions follow the opposite trend. The Fe2+–Fe3+ interactions do not exhibit substantial variation as a function of Q. This result is consistent with the Goodenough–Kanamori–Anderson (GKA) rules considering the change in angle for the Fe–O–Fe super-exchange in-plane paths as illustrated in Extended Data Fig. 8a. As the distortions are increased, on average the Fe–O–Fe angle becomes larger than the undistorted value of 118° for Fe3+–Fe3+ interactions, smaller for Fe2+–Fe2+ and unchanged for Fe2+–Fe3+. Because the Fe2+ and Fe3+ ions both have partially filled d orbitals, the GKA rules predict antiferromagnetic and ferromagnetic super-exchange interactions for 180° and 90° angles, respectively, consistent with the trend observed in Extended Data Fig. 8b. The observed control of TC by the trimer distortion is universal, parameter-free and is independent of any specific detail of the first-principles methods. DFT calculations were also performed on the periodic (LuFeO3)m/(LuFe2O4)1 supercells for superlattices with m = 1, 3 and 5. The size of these supercells varies depending on whether m is odd or even. For odd values of m, the number of (LuFeO3)m/(LuFe2O4)1 blocks has to be doubled along the out-of-plane direction to obtain a periodic supercell. For even values, the number of blocks has to be tripled. Additionally, taking into account that trimerization of the LuFeO3 structure and charge order of the LuFe2O4 structure demand a supercell containing three formula units per plane, the total number of atoms for the different periodic (LuFeO3)m/(LuFe2O4)1 supercells is N = 6(5m + 7) and N = 9(5m + 7), for odd and even values of m, respectively.
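The Monte Carlo step can be illustrated with a deliberately simplified sketch: a single square lattice of Ising spins with one coupling J, updated with the Metropolis rule, from which a magnetization-versus-temperature curve (and hence a rough ordering temperature) can be read off. The lattice geometry, coupling value and spin magnitude are placeholders, not the six-coupling model actually used in the study.

```python
# Toy Metropolis Monte Carlo for an Ising-type spin model (illustration only).
import numpy as np

def metropolis_sweep(spins, J, T, rng):
    """One Metropolis sweep for H = sum_<ij> J * S_i * S_j (J < 0 favours alignment)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = -2.0 * J * spins[i, j] * nn          # energy change if S_ij is flipped
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

def magnetization_vs_temperature(L=16, J=-1.0, temps=None, sweeps=500):
    """Average |magnetization| per site as a function of temperature (k_B = 1)."""
    temps = np.linspace(1.0, 14.0, 14) if temps is None else temps
    rng = np.random.default_rng(0)
    result = []
    for T in temps:
        spins = 2.0 * rng.choice([-1.0, 1.0], size=(L, L))   # Ising spins S = +/-2
        for _ in range(sweeps):
            metropolis_sweep(spins, J, T, rng)
        result.append(abs(spins.mean()))
    return temps, np.array(result)
```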
Owing to computational constraints, most of our calculations were performed on (LuFeO3)1/(LuFe2O4)1 (72 atoms) and (LuFeO3)3/(LuFe2O4)1 (132 atoms) supercells. Calculations were also performed on (LuFeO3)5/(LuFe2O4)1 supercells (192 atoms)—without considering full structural optimizations—to confirm the key result obtained for m = 3, that is, the stabilization of hole-doped LuFe2O4 blocks with head-to-head domain walls occurring in the middle of the LuFeO3 blocks, as discussed below. In a single ferroelectric domain of a (LuFeO3)m/(LuFe2O4)1 superlattice, the most stable charge-order configuration is COII, as shown in Extended Data Fig. 10d for m = 3. The lowest-energy magnetic configuration corresponds to the same ferrimagnetic arrangement found in bulk LuFe2O4 with the COII pattern, characterized by M = 1.2μB/Fe. Once LuFeO3 is added to LuFe2O4—that is, in (LuFeO3)m/(LuFe2O4)1 superlattices with m ≥ 1—the COI pattern becomes unstable. The single-domain configuration with the COII pattern of the (LuFeO3)3/(LuFe2O4)1 superlattice is metastable against multidomain configurations, even under E = 0 electrical boundary conditions. We refer to the multidomain configurations as (1) ‘undoped-type’ (Extended Data Fig. 10e) and (2) ‘doped-type’ (Extended Data Fig. 10f). For the m = 1 composition, the difference in energy between the domain types is negligible. By contrast, for the m = 3 superlattice, the doped-type domain configuration is about 8 meV per formula unit more stable than the undoped-type, which itself is about 8 meV per formula unit more stable than the single-domain configuration, demonstrating the added stability of this configuration with increasing m. We also verified that the doped-type domain configuration remains the most stable for the m = 5 superlattice. The undoped-type structure corresponds to the COII ordering pattern in which the AB bilayers are stacked in an ABAB sequence along the c direction. Here the superlattice exhibits charged tail-to-tail and head-to-head ferroelectric domain walls. For the m = 3 superlattice, the tail-to-tail domain wall occurs at the interface between a bilayer and LuFeO3, whereas the head-to-head domain wall always occurs in the LuFeO3 block next to the second bilayer, as labelled in Extended Data Fig. 10e. The magnetism of this structure is similar to that of the single-ferroelectric-domain case shown in Extended Data Fig. 10d. The starting point for understanding the doped-type domain structure is again the COII ordering pattern of bulk LuFe2O4. In this case, however, nearly an electron of charge from an Fe2+ cation located in the A layer of LuFe2O4 is transferred to an Fe3+ cation located in the LuFeO3 layer, as illustrated in Extended Data Fig. 10f. This results in Fe3+-rich iron bilayers and a corresponding stacking sequence of AAAA (see Extended Data Fig. 10f). Therefore, we can consider the head-to-head domains in this configuration as being stabilized as a result of the LuFe2O4 layers electron-doping the LuFeO3 layers. Indeed, in the doped-type domains, the tail-to-tail domain wall occurs at the interface between a bilayer and LuFeO3, whereas the head-to-head domain wall occurs right at the doping layer in the LuFeO3 block, as shown in Extended Data Fig. 10f. Alternatively, we can consider the LuFeO3 layers as hole-doping the LuFe2O4. In this doped type of ferroelectric domain, the only contribution to the total magnetization in the hole-doped LuFe2O4 layers is from ferromagnetically aligned Fe2+ ions located at the centre of the Fe3+ hexagons.
Because the in-plane Fe3+–Fe3+ super-exchange interactions are always found to be antiferromagnetic, there is no net contribution due to the Fe3+ ions. This leads to a magnetization of M = 1.33μB/Fe for each Fe3+-doped bilayer, as shown in Extended Data Fig. 9c. Although this charged domain-wall configuration might appear energetically unfavourable, mobile carriers redistribute to screen the excess of bound charges. The calculated density of states for each iron cation in the heterostructure is shown in Extended Data Fig. 10e. Our calculations demonstrate that electrons migrate from the LuFe2O4 block to occupy iron states in the contiguous LuFeO3 layers, which nevertheless remain insulating. Owing to the shift in the electrostatic potential caused by the LuFeO3 dipoles, conducting Fe d states appear at the head-to-head domain walls (Extended Data Fig. 10f). Because the potential drop across the structure should increase with the superlattice periodicity, we expect the electrical conductivity of the walls to grow monotonically with m. The same qualitative trends are observed in the m = 5 superlattices. Structural relaxations of bulk LuFe2O4 were performed using the DFT + U method (ref. 45) with the Perdew–Burke–Ernzerhof (PBE) form of the exchange-correlation functional (ref. 46), as implemented in the VASP code (ref. 47). We used the projector-augmented plane-wave method (ref. 48). We considered the Lu 4f states in the core; for the Fe 3d states we chose U = 4.5 eV (where U is the screened Coulomb interaction) and J = 0.95 eV (where J is the Hund's coupling on the iron site). The choice of U and J is based on our previous study of the LuFeO3 system (ref. 16), but all results remain qualitatively similar for choices of U > 4.5 eV. We used a 6 × 6 × 2 k-point mesh and a kinetic-energy cut-off of 500 eV. The Hellmann–Feynman forces were converged to 0.001 eV Å^−1. To account for the different charge orders (COI, COII and COIII), a supercell of the high-symmetry cell was considered. Using the structures optimized with VASP, the spin–spin exchange interactions were estimated by fitting the Ising model to the energies of different magnetic configurations calculated with the linearized augmented plane-wave (LAPW) method as implemented in the Wien2k code (ref. 49). We considered 25 magnetic configurations to construct the model Hamiltonian for the high-symmetry phase and for each charge-ordered state. To be consistent, we used the same U and J values and the same k-point mesh as for VASP. We used U = 8.8 eV and J = 0.95 eV for the Lu 4f states. The plane-wave cut-off, defined as the ‘muffin-tin’ radius multiplied by kmax, is 7.0. For the (LuFeO3)m/(LuFe2O4)1 superlattices, all internal parameters as well as the out-of-plane lattice constants were fully relaxed (except for the in-plane lattice constant, which was fixed to the average of the corresponding LuFeO3 and LuFe2O4 lattice constants, a = 3.46 Å). All calculations involving superlattices were performed in VASP. Magnetic imaging was performed using cryogenic PEEM at Beamline 11.0.1 of the Advanced Light Source (ref. 50), taking advantage of XMCD at the Fe L edge (ref. 51). Magnetic images are obtained by dividing images recorded with left and right circular polarization. The resulting dark and bright contrast is a measure of the projection of the magnetization direction onto the X-ray polarization vector, which has a 30° angle of incidence relative to the surface of the sample.
To probe the coupling between the ferroic orders, XMCD-PEEM measurements were made on a region of the (LuFeO3)m/(LuFe2O4)n sample that was poled with a PFM tip to form distinct regions of ‘up’ and ‘down’ c-oriented ferroelectric polarization. The PFM poling was performed at 300 K, eight days before PEEM imaging, to ensure that the polarization configuration being imaged was robust. In the PEEM, the sample was first cooled to 200 K to increase the magnetic contrast from the buried LuFe2O4 layer. As shown in Fig. 4b, the magnetic structure displays the same distinct pattern as the ferroelectric polarization. Finally, to confirm that the magnetic image shown in Fig. 4b could not be due to extrinsic effects, the sample was heated to 320 K. As shown in Fig. 4d, the resulting dichroism dropped by about 70%, consistent with the drop in the saturation magnetization between these temperatures identified by SQUID magnetometry. Because the overall XLD contrast was constant between those two temperatures (not displayed in Fig. 2f, but acquired using the same experimental configuration) and any extrinsic chemical contrast would similarly be constant, this indicates that the strong dichroism observed at 200 K must arise from the magnetic order.


News Article | October 5, 2016
Site: www.cemag.us

When the local water management agency closes your favorite beach due to unhealthy water quality, how reliable are the tests they base their decisions on? As it turns out, those tests, as well as the standards behind them, have not been updated in decades. Now scientists from Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a highly accurate, DNA-based method to detect and distinguish sources of microbial contamination in water. Using the award-winning PhyloChip, a credit card-sized device that can detect the presence of more than 60,000 species of bacteria and archaea, the new method was found to be more sensitive than conventional methods at assessing health risks. In tests at the Russian River watershed in Northern California, the Berkeley Lab researchers found instances where their method identified potential human health risks that conventional fecal indicator tests had failed to detect. Conversely, they also found instances where the conventional tests flagged bacteria that weren’t likely risks to human health. The research was led by Eric Dubinsky and Gary Andersen, microbial ecologists at Berkeley Lab, and was published recently in the journal Water Research in an article titled, “Microbial source tracking in impaired watersheds using PhyloChip and machine-learning classification.” Steven Butkus of the North Coast Regional Water Quality Control Board, which supported part of the research, was also a co-author. “With the PhyloChip, in an overnight test we can get a full picture of the microorganisms in any given sample,” Dubinsky says. “Instead of targeting one organism, we’re essentially getting a fingerprint of the microbial community of potential sources in that sample. So it gives us a more comprehensive picture of what’s going on. It’s a novel way of going about source tracking.” What local water agencies currently do is collect water samples, culture the bacteria overnight, and then check the growth level of two types of bacteria, E. coli, and Enterococcus, which are presumed to be indicators of fecal contamination. However, this method doesn’t distinguish between sources. The bacteria could have come from humans, cows, ducks, sewage, or even decaying vegetation. “These tests have been used for decades and are relatively primitive,” Dubinsky says. “Back in the 1970s when the Clean Water Act was developed and we had sewage basically flowing into our waters, these tests worked really well. Epidemiological studies showed an association of these bacteria with levels of illness of people who used the water. These bacteria don’t necessarily get you sick, but they’re found in sewage and fecal matter. That’s why they’re measured.” As pollution from point sources — single identifiable sources such as sewage — has been cleaned up over time, the emerging concern has become what are known as nonpoint sources, or diffuse sources, throughout the watershed, such as agricultural lands. “The picture is much more complicated now than it was back then, when the concern was really point sources,” Dubinsky adds. The PhyloChip, which was developed by Andersen and several other Berkeley Lab scientists, has been used for a number of medical, agricultural, and environmental purposes, including understanding air pollution, the ecology of coral reefs, and environmental conditions of the Gulf of Mexico after the BP oil spill. With 1 million probes, it identifies microbes based on variations of a specific gene, with no culturing needed. 
“About seven years ago we started doing water quality work, and we realized the PhyloChip could provide a fundamentally new and improved method for doing source tracking,” Andersen says. Determining the source of any particular pathogen is not a straightforward task. In most cases, a single microbe is not a definitive marker of an animal or other source. “A microbial community is complex,” Dubinsky says. “A cow may have 1,000 different organisms.” So Andersen and Dubinsky had an idea. “We had Laleh Coté, an intern at the time and now a Lab employee, run around and basically collect poop from all sorts of animals,” says Andersen. “What we’ve done since then is develop a reference library of the microbial communities that occur in different types of poop — we have cows, horses, raccoons, humans, different types of birds, pigs, sea lions, and other animals, as well as sewage and septage. We used that library to develop a model.” The new method takes the unknown sample and compares it against this microbial reference library. “We’ve used the PhyloChip in a way that it hasn’t been used before by using machine learning models to analyze the data in order to detect and classify sources,” Andersen says. “It’s essentially giving you a statistical probability that a microbial community came from a particular source.” They validated their method by comparing it to about 40 other methods of microbial source tracking in a California study. “We were the only method that could detect all sources and get them right,” Dubinsky says. If the source is an animal that is not in the reference library, their method can still point you in the right direction. “For example, in that study, one sample was a chicken,” says Dubinsky. “We hadn’t analyzed chickens, but we had geese, gulls, and pigeons. We were still able to determine that the sample was a bird.” In extensive testing throughout the Russian River watershed, which is out of compliance with the Clean Water Act, the Berkeley Lab researchers found widespread contamination by human sources close to areas where communities rely on aging septic tanks. They also found significant human contamination immediately after a weekend jazz festival, whereas testing by conventional methods yielded a much weaker signal after a time lag of a couple days. “Our method is more sensitive to human contamination than those fecal indicator tests are,” Dubinsky says. The team is now working on characterizing the microbial community of naturally occurring E. coli and Enterococci, using Hawaii with its warm waters as a testing ground. “They can occur naturally in sediments and decaying kelp and vegetation,” Dubinsky says. “It is known that they do, but nobody has developed a test to definitively show that.” The researchers will also be able to study whether climate affects microbial communities. “Does a Hawaiian cow look like a California cow in terms of fecal bacteria composition? That’s a good question and something we’ll be able to find out,” he says. They are working closely with the U.S. Environmental Protection Agency (EPA), which is looking at new technologies for what it calls “next generation compliance.” Ultimately the goal is to develop their method — possibly with a downsized version of the PhyloChip — to the point where it can be universally used in any location and by non-experts. 
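The classification idea described above can be sketched in a few lines: train a supervised model on the reference library of fecal-community fingerprints and report, for each unknown water sample, the probability of each source. The file names, column layout and the choice of a random-forest classifier here are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch (not the authors' pipeline) of source classification against
# a reference "fecal library" of PhyloChip community fingerprints.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# reference_library.csv (hypothetical): one row per reference sample; the 'source'
# column holds the host (cow, human, gull, sewage, ...), the remaining columns hold
# probe intensities from the array.
ref = pd.read_csv("reference_library.csv")
X_ref, y_ref = ref.drop(columns="source"), ref["source"]

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_ref, y_ref)

# unknown_samples.csv (hypothetical): fingerprints from water samples, with the same
# probe columns as the reference library.
unknown = pd.read_csv("unknown_samples.csv")
probs = pd.DataFrame(clf.predict_proba(unknown), columns=clf.classes_,
                     index=unknown.index)
print(probs.round(2))   # per-sample probability of each fecal source
```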
Dubinsky says the method should also be useful with the burgeoning issue of algal blooms, to understand, for example, the processes by which they form, the microbial dynamics before and after a bloom, and specifically, whether runoff from livestock production in the Midwest is related to algal blooms in the Great Lakes, a question they’re investigating with the EPA.
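For readers curious what the machine-learning step described above looks like in practice, here is a minimal, purely illustrative sketch of the general approach: build a reference library of microbial community profiles from known fecal sources, train a classifier on it, and ask for the probability that an unknown water sample came from each source. The random forest, the simulated abundance profiles, and all numbers below are stand-ins for the example; they are not the model or data used in the Water Research study.

    # Illustrative sketch only: classifying a water sample's microbial community
    # against a reference library of known fecal sources, in the spirit of the
    # PhyloChip approach described above. The random forest, the simulated
    # profiles, and every number are assumptions, not the authors' pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    sources = ["human", "cow", "horse", "gull", "sewage"]
    n_taxa = 500                 # stand-in for thousands of PhyloChip probes
    n_refs_per_source = 20       # reference fecal samples per source

    # Build a toy reference library: each source gets a characteristic
    # relative-abundance profile plus sample-to-sample noise.
    profiles = rng.dirichlet(np.ones(n_taxa), size=len(sources))
    X, y = [], []
    for i, source in enumerate(sources):
        for _ in range(n_refs_per_source):
            X.append(rng.dirichlet(profiles[i] * 50 + 0.01))
            y.append(source)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(np.array(X), y)

    # An "unknown" water sample resembling the human reference community:
    unknown = rng.dirichlet(profiles[0] * 50 + 0.01).reshape(1, -1)
    for source, prob in zip(model.classes_, model.predict_proba(unknown)[0]):
        print(f"{source:>7s}: {prob:.2f}")

The output is a probability for each candidate source, which is the kind of statistical statement Andersen describes: not a single marker organism, but a community-level fingerprint matched against the library.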


News Article | November 29, 2016
Site: globenewswire.com

Copenhagen, 2016-11-29 12:30 CET (GLOBE NEWSWIRE) -- The Carlsberg Forum series reflects the strong link between science and business. Each year, the Kaj Linderstrøm-Lang awards are given to prominent scientists for their achievements within biochemistry or physiology, the fields of science in which Kaj Linderstrøm-Lang, a professor at Carlsberg Research Laboratory in the period 1939-1959, distinguished himself as a pioneer. This year, Professor Henrik V. Scheller, Joint BioEnergy Institute, Lawrence Berkeley National Laboratory, USA, and Professor Geoff Fincher, School of Agriculture, Food & Wine, The University of Adelaide, Australia, received the Kaj Linderstrøm-Lang Prize as an acknowledgement of their outstanding achievements in identifying and characterizing enzymes involved in synthesis and modification of the plant cell wall. The third Kaj Linderstrøm-Lang Prize was awarded to Professor Richard Henderson, MRC Laboratory of Molecular Biology, Cambridge, UK, for his pioneering work towards obtaining high-resolution atomic structures of membrane proteins and membrane protein complexes by electron cryomicroscopy. The prize is a gold-plated medal and a personal financial award of DKK 40,000. Finally, the Emil Chr. Hansen Golden Medal was awarded to Jef Boeke, Director, Institute for Systems Genetics at New York University, for his seminal contributions to yeast genome scrambling and the design of synthetic yeast. The Emil Chr. Hansen Foundation sponsors the Emil Chr. Hansen Golden Medal. Professor Emil Chr. Hansen worked at the Carlsberg Laboratory from 1877 to 1909, and was the first to isolate a pure yeast that revolutionized the brewing industry. Birgitte Skadhauge, Head of Carlsberg Research Laboratory, says: “The distinguished work of Carlsberg Research Laboratory scientists Kaj Linderstrøm-Lang and Emil Chr. Hansen made it possible to have the high quality beers we have today. This year marks the 140th anniversary of the Carlsberg Research Laboratory, and we found it particularly important to celebrate the connection between beer and science this year by honouring not only Professors Richard Henderson, Geoff Fincher and Henrik Scheller but also Jef Boeke with the Emil Christian Hansen Golden Medal for their notable scientific contributions to business and society.” The program for this year’s Carlsberg Forum consisted of eight lectures by outstanding international scientists and businesspersons spanning trend-setting research in yeast, cereal crops, food for health and well-being, and new technologies. The business part of the program included lectures on business and health challenges in a globalized and rapidly changing world. In a special endnote address, Birgitte Skadhauge reviewed the Laboratory’s anniversary and the famed re-brew project. Scientists at the Laboratory made an extraordinary discovery in the old cellars of Carlsberg in Copenhagen, Denmark, some years ago. They discovered a very old Carlsberg bottle that surprisingly still contained living yeast cells. They grew and analyzed the cells, and, as it turns out, the bottle was one of the very first beers brewed with the original pure yeast from 1883, when Carlsberg democratized modern lager beers. Earlier this year, specially invited guests celebrated the Laboratory’s 140th anniversary by tasting, for the first time in more than a hundred years, the original Carlsberg quality lager, re-brewed by the Laboratory. 
Birgitte Skadhauge says: “What better way to take science to business than by re-brewing the father of quality lager. The Laboratory has pioneered beer quality since it was founded 140 years ago. On the eve of our anniversary, we are proud to have demonstrated scientifically why Carlsberg is probably the best beer in the world.” Warner Bros’ documentary about the re-brew project premiered in Copenhagen and Fredericia, Denmark, on 24 and 25 November. A movie trailer is available on rebrewproject.com. The Carlsberg Group is one of the leading brewery groups in the world, with a large portfolio of beer and other beverage brands. Our flagship brand – Carlsberg – is one of the best-known beer brands in the world, and the Baltika, Carlsberg and Tuborg brands are among the eight biggest brands in Europe. More than 47,000 people work for the Carlsberg Group, and our products are sold in more than 150 markets. In 2015, the Carlsberg Group sold 120 million hectolitres of beer, which is more than 35 billion bottles of beer. Find out more at www.carlsberggroup.com.


News Article | November 17, 2016
Site: phys.org

The Top500 list was announced Monday, Nov. 14, at The International Conference for High Performance Computing, Networking, Storage and Analysis, SC16, in Salt Lake City. It is announced semi-annually and comprises systems ranked by their computation speeds as determined by the LINPACK benchmark, which solves a linear system represented by a large, dense, double-precision matrix. SciPhi-XVI was delivered to Jefferson Lab and installed in mid-August and was deployed on Sept. 30 with 200 nodes. An additional 64 nodes were added to the machine in late October, and the High Performance Computing group then benchmarked it at 425.8 teraflops using 256 of its 264 nodes. The cluster's 264 nodes are built around second-generation Intel Xeon Phi chips, which were released in 2016 and are codenamed Knights Landing. Each node has 64 computing cores, and individual nodes within the machine are connected via the Intel Omni-Path Architecture, which supports a 100 Gigabit/second data transfer rate between the nodes. "A special Intel distributed LINPACK benchmark software, HPL (High Performance Linpack), was used to rank the Jefferson Lab cluster, and it is based on an open source distribution with performance enhancement for Intel Xeon Phi processors," said Jie Chen, a member of Jefferson Lab's High Performance Computing group. "To achieve the performance number for the SciPhi-XVI cluster, a set of HPL running parameters has to be determined through many trials. In particular, the size of the matrix representing the linear equations solved by the HPL was chosen to be 2,457,600." Jefferson Lab conducts discovery-caliber nuclear physics research in exploring the atomic nucleus and its fundamental constituents, such as protons and neutrons and these particles' building blocks: quarks and gluons. The theory of how these particles interact, Quantum Chromodynamics (QCD), is calculated in the form of lattice QCD, or LQCD. Scientists will use the SciPhi-XVI machine for LQCD calculations and as a test system for using the Xeon Phi processors to analyze data generated in Jefferson Lab's experimental physics program. The Chroma code, which is used to solve LQCD on the Xeon Phi processor platform, will be used with an open source library of equation solvers for LQCD. The High Performance Computing group has been modernizing the freely available Chroma code so that it will run faster and provide better results more quickly on new generations of hardware. "There are always challenges to bringing online state-of-the-art hardware systems. We are very proud of our staff for their hard work and dedication that made this achievement possible," said Jefferson Lab Chief Information Officer Amber Boehnlein. "This hardware achievement, together with our efforts to continually optimize the performance of our software for the newest systems, positions the laboratory to continue performing great science in high-performance computation." Jefferson Lab work on Chroma is funded through the U.S. Department of Energy Office of Science. The research is carried out in collaboration with partner institutions, such as DOE's National Energy Research Scientific Computing Center (NERSC) Exascale Science Application Program (NESAP) and the Scientific Discovery through Advanced Computing (SciDAC) program. NERSC is a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory. 
Supercomputer computation time on SciPhi-XVI is allocated based on the recommendations of the United States Lattice Gauge Theory Computational Program (USQCD), a consortium of top LQCD theorists in the United States that spans both high-energy physics and nuclear physics. The USQCD consortium developed the QUDA code library that is used with the Chroma code.
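A quick back-of-the-envelope check, using only the figures quoted above plus HPL's standard conventions (8 bytes per double-precision matrix element and roughly (2/3)N^3 floating-point operations), shows why a matrix of order 2,457,600 is a sensible choice for a 256-node run. The per-node memory figure is an inference from those conventions, not a published specification of SciPhi-XVI.

    # Back-of-the-envelope check of the published HPL run: N = 2,457,600
    # unknowns on 256 Knights Landing nodes at 425.8 teraflops. The
    # 8-bytes-per-element and (2/3)N^3 flop count are standard HPL conventions;
    # the per-node memory figure is an inference, not a SciPhi-XVI spec.
    N = 2_457_600                 # order of the dense double-precision matrix
    nodes = 256
    rate = 425.8e12               # sustained flop/s reported for the run

    matrix_bytes = N**2 * 8       # ~48 TB for the full matrix
    flops = (2 / 3) * N**3 + 2 * N**2

    print(f"matrix size : {matrix_bytes / 1e12:.1f} TB "
          f"({matrix_bytes / nodes / 1e9:.0f} GB per node)")
    print(f"work        : {flops:.2e} floating-point operations")
    print(f"runtime     : {flops / rate / 3600:.1f} hours at 425.8 TFlop/s")

The matrix works out to roughly 48 TB in total, on the order of 190 GB per node, and about six and a half hours of runtime at the reported rate, which is consistent with HPL runs that aim to fill most of each node's memory.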


News Article | September 30, 2016
Site: cleantechnica.com

Wind power is flying high. Learn why this clean energy source is so popular with the public and growing rapidly – and how much we can do with it. The news has been nothing but great lately for the wind energy industry. From plunging costs to exploding growth to a fast-expanding job market, recent developments in wind have proven clean energy can both create real economic opportunity and help us stop climate change. To put it another way, the fact that wind is supplying more and more of our power while putting more and more people to work and reducing our dependence on carbon-emitting, climate-harming fossil fuels has turned us into very big fans. Still, there is an awful lot most people don’t know about this game-changing energy source. So, let’s jump right in. Can wind really power the world? It sure can – and then some! And we mean it, too: by some estimates, wind could supply worldwide electricity consumption more than 40 times over. When we talk about that level of potential, we don’t mean it in some big, theoretical “If all the wind in the entire world were able to be captured and turned into power…” kind of way, either. In a far-reaching study published in the prestigious Proceedings of the National Academy of Sciences (PNAS), researchers from Harvard University and VTT Technical Research Centre of Finland found that a network of land-based 2.5-megawatt (MW) wind turbines – restricted to non-forested, ice-free, nonurban areas (i.e., places we really could build them) – operating at as little as 20 percent of their full capacity could easily supply that much electricity. That would be more than five times the total energy the world uses in all forms (including oil, coal, and natural gas). That same PNAS study found that in the United States, the Central Plains states – an area identified by researchers as “extending northward from Texas to the Dakotas, westward to Montana and Wyoming, and eastward to Minnesota and Iowa” – could alone accommodate wind facilities producing as much as 16 times the total current demand for electricity in the United States. The potential is very much there – we just need to harness it. So how does a wind turbine actually make electricity? It’s pretty simple, really. Wind turns the turbine’s blades, which spin a shaft that is connected to a generator. The generator converts the kinetic energy in the wind into electricity. There are two basic types of turbines: horizontal-axis and vertical-axis turbines. A vertical-axis turbine kind of looks like an eggbeater, but you are probably much more familiar with horizontal-axis wind turbines. These are the turbines that have two or three large propeller-like blades that face into the wind. Wind turbines can be built on both land and – increasingly and to great effect – offshore in large bodies of water like oceans and lakes. And how much of our electricity currently comes from wind? Not nearly as much as it should (but we’re working on it). Wind energy currently powers about 5 percent of the nation’s total electricity demand – though, how much electricity a given place is getting from wind power varies greatly from place to place and frequently day to day. Wind provided 66.4 percent of the power in Colorado at one point in November 2015, and at one point that same month, it reliably met more than 43 percent of Texas’ electricity demand, according to the state’s primary grid operator. Wind power represented the largest source of new US electricity in 2015, making the country second in the world in annual wind additions that year. 
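To put the turbine description above on a quantitative footing, the standard wind-power relation is P = 1/2 * rho * A * v^3 * Cp, where rho is the air density, A the rotor's swept area, v the wind speed, and Cp the fraction of the wind's kinetic energy the machine actually captures. The sketch below plugs in illustrative numbers (a 100-meter rotor, an 8 m/s wind, Cp of 0.40); none of these values come from the article, and real turbines vary.

    # Standard wind-power formula P = 0.5 * rho * A * v^3 * Cp, with
    # illustrative numbers that are assumptions for this example only.
    import math

    rho = 1.225            # air density, kg/m^3 (sea level)
    diameter = 100.0       # rotor diameter, m (assumed)
    v = 8.0                # wind speed, m/s (assumed)
    cp = 0.40              # power coefficient (the Betz limit is ~0.593)

    area = math.pi * (diameter / 2) ** 2
    power_w = 0.5 * rho * area * v**3 * cp
    print(f"Rotor swept area: {area:,.0f} m^2")
    print(f"Electrical power: {power_w / 1e6:.2f} MW")

Because output scales with the cube of wind speed, siting matters enormously, which is one reason the capacity and penetration figures that follow vary so much from state to state.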
But when it comes to actual wind energy penetration (or how much of the total market wind generation accounts for), we trailed well behind market leaders like Denmark, which generated 42 percent of its electricity from wind power in 2015. All of which is to say that, even as we build the infrastructure to get more and more of our electricity from clean wind energy, we continue to be far more reliant on the status quo – generating energy from environmentally harmful fossil fuels – than we really need to be. If the industry flourishes as expected, the US Department of Energy estimates that wind could provide more than one-third of the nation’s power by 2050. Where there’s a will there’s a way, and the conditions are just right for wind power to blow up here in the US. Texas is by far the biggest wind energy-producing state in the US, with nearly 18,000 MW installed and accounting for about 10 percent of overall in-state electricity production in 2015. That may seem like a surprise for a state well-known for its robust oil industry, but it makes a great deal of sense when you consider its vast, generally flat topography and just how good an investment wind power has become (more on that below). The distant runners-up in 2015 were Iowa (with about 6,200 MW of wind capacity) and California (with close to 5,700 MW of wind capacity). However, even though these states have less wind capacity than Texas, wind power nevertheless supplied a larger share of their overall electricity generation. In Iowa, for example, wind power met more than 31 percent of overall electricity demand in 2015, the highest percentage in the nation. Kansas and South Dakota were not far behind, as both of them also generated more than 20 percent of their electricity from wind last year. States like these are proving that wind is far from a niche technology and can reliably meet a substantial portion of electricity demand. Is wind power expensive? Not at all – and it’s getting even less expensive very quickly. This wasn’t always the case, something the fossil fuel industry trumpets to mislead consumers today, but since peaking in 2008–09, the cost of wind energy has fallen dramatically. Wind turbine prices have fallen 20 to 40 percent from their 2008 highs, per a recent report from the US Department of Energy and the Lawrence Berkeley National Laboratory (LBNL), and the levelized long-term price from wind power sales agreements has dropped to an average of around 2 cents per kilowatt-hour (kWh), after reaching a nearly 7 cent/kWh high in 2009. It’s important to note that 2 cents per kWh is a nationwide average dominated by projects hailing largely from the very windy (and consequently lowest-priced) Central Plains states. But even when looked at a little more broadly, rates for electricity generated by today’s better-than-ever wind farms are currently comparable to wholesale electric power prices of 2.5-3.5 cents/kWh. Moreover, current wind energy contract prices compare very favorably to the projected future costs of electricity generated from fossil fuels. And wind energy costs are expected to get even better in the very near future. Wind turbines are getting bigger, and as you might expect, taller installations with bigger blades can capture more wind and generate more power, driving down prices even further. Because of this and rising installations, the cost of electricity generated by wind is expected to plunge. 
Top experts in the field estimate that by 2030 the cost of wind-generated electricity could fall by as much as 30 percent – and even further by mid-century. The growth of the wind power industry is going to create many, many new jobs in fields such as development, construction, transportation, manufacturing, operations, and supporting services. By the end of 2015, the wind power industry already employed 88,000 Americans – 24,000 of those in Texas alone. The forecast is even better. According to the US Bureau of Labor Statistics, wind turbine service technician is projected to be the fastest-growing occupation in the US through 2024. Just as exciting, the median annual wage for the gig is $48,800 – that’s $13,260 more than the median annual wage among all 15 projected fastest-growing occupations. Now, with those numbers in mind, consider that the Department of Energy (DOE) estimates wind power could support up to 426,000 jobs by 2030 and 670,000 by 2050. By that same year, assuming the DOE’s estimates bear out and wind energy grows unabated (and without obstruction), the agency estimates that the US could avoid the emission of more than 12.3 gigatonnes of greenhouse gases; save 260 billion gallons of water (because generating electricity from wind power uses virtually no water) and $108 billion in public health costs by cutting air pollution; and prevent 22,000 premature deaths. Just a quick reminder that we could do all of this while creating hundreds of thousands of new, well-paying jobs across the country. Maybe it’s the image of a turbine at work in the breeze without any pollution in sight, but wind energy is remarkably popular. Seventy-three percent of Americans “prefer emphasizing alternative energy, rather than gas and oil production.” That’s pretty firmly in line with the 70 percent of registered voters who have a favorable impression of wind energy, specifically. Perhaps even more telling is wind power’s popularity in the very states where it is flourishing. In Iowa, a poll conducted by WPA Opinion Research found that 91 percent of respondents supported wind energy, and in Texas, the Texas Clean Energy Coalition similarly found that 85 percent of registered voters in the Lone Star State support growth in clean energy like wind power. We could go on and on, but you see where we’re going – both in this piece and as a nation with clean energy. The good news is that today Americans actually have the chance to help support this growing sector and cut greenhouse gas emissions by speaking up for the EPA’s Clean Energy Incentive Program (CEIP). As part of the agency’s Clean Power Plan, the CEIP encourages states to invest in clean energy sources like wind, helping build a sector that already employs thousands and thousands of Americans and is helping the US reduce emissions in a big, big way.


News Article | January 6, 2016
Site: www.aweablog.org

New report highlights many benefits of wind energy. Billions of dollars in benefits from wind power American wind energy drastically reduces pollution and energy prices while creating jobs, the Lawrence Berkeley National Laboratory (LBNL) and National Renewable Energy Laboratory (NREL) concluded today in “A retrospective analysis of the benefits and impacts of U.S. renewable portfolio standards.” These results confirm other recent analyses that found wind energy to be the most cost-effective way to cut carbon pollution. Today’s report found around $7.5 billion in annual environmental benefits from reduced air emissions, 27 billion gallons in reduced water consumption annually, $1.3 billion to $4.9 billion in reduced consumer energy prices, and 200,000 American jobs and $20 billion in annual GDP from the renewable energy developed to meet state Renewable Portfolio Standards through the year 2013. The following graphic from today’s report summarizes these benefits and impacts. The study’s results show wind energy is highly effective at reducing carbon pollution while also protecting ratepayers against fuel price increases, a critical message for states and utilities as they begin to craft plans to reduce emissions to comply with EPA’s Clean Power Plan. Wind is particularly effective at displacing carbon pollution as most of the best wind resources are located in regions with the most carbon-intensive electricity mixes. Analysis by DOE and others has shown that wind provides the most bang-for-the-buck in emissions reductions and also protects consumers against fuel price spikes from an over-reliance on any one fuel source. The above graphic from the report also translates the benefits of wind energy into per-kWh increments, illustrating that wind energy provides a large bang-for-the-buck. Specifically, the study found each kWh of wind provided around 2.2 cents/kWh in economic value from reducing carbon pollution and 5.3 cents/kWh from reducing other forms of air pollution, for total air pollution benefits of around 7.5 cents/kWh. Consumers also benefit from lower energy prices, to the tune of 1.3 cents to 4.9 cents/kWh. Water savings and job creation impacts are additional to those amounts. For reference, in 2014 utilities paid an average of 2.3 cents per kWh for wind energy. In most cases that is well below the cost of other energy sources, so wind energy provides net savings to consumers even without factoring in the environmental benefits and energy price reduction impacts. Once the approximately 7.5 cents/kWh of environmental benefits and several cents/kWh of energy price reduction impacts are factored in, wind energy is a clear winner for consumers and the environment. Policies to promote wind energy, most notably the 2.3 cent/kWh renewable Production Tax Credit and low-cost state RPS policies, also clearly provide net benefits to society because the environmental benefits of wind energy are much higher than any net cost of wind energy and those policies. It is widely appreciated that wind energy protects the environment by directly displacing the burning of fossil fuels at power plants. Today’s study confirms that wind energy also benefits consumers by keeping electricity and natural gas prices low and protecting against energy price spikes, as AWEA has noted in previous reports. Today’s study presents a conservative view of the environmental and consumer benefits of total U.S. 
wind energy, as it only examined the impacts of renewable energy developed to meet state Renewable Portfolio Standards (RPSs). Wind energy accounted for around 75 percent of RPS generation in 2013, and there has been a large amount of additional wind deployment in states that have exceeded their RPSs and in states without RPSs. Specifically, today’s report examined the benefits of 98 million MWh of total renewable energy in 2013, while total U.S. wind generation in 2014 was nearly 181.7 million MWh. Had the study examined the total nationwide benefits of wind energy, as AWEA has done in its own calculations, the environmental benefits and likely the consumer savings would have been around twice as large. Environmental, health benefits from wind spread to all regions Importantly, these environmental benefits accrue to all regions of the country, as AWEA has also noted. The following maps from today’s study show that the renewable energy deployed due to state RPS policies reduced water consumption and provided millions of dollars in health and economic savings from reduced air pollution in all regions of the country. Notably, the air pollution map uses a conservative estimate of the cost of air pollution and only focuses on savings from renewable energy’s reduction of one air pollutant: particulate matter. Wind energy and some other forms of renewable energy drastically reduce the large amount of water that is withdrawn and consumed at most conventional power plants. AWEA has previously noted the important role of wind energy in reducing water consumption in drought-parched California, calculations that are confirmed by today’s report. The following map from today’s report shows that renewable energy is saving the most water in Western and Interior regions of the country that have been most severely affected by drought in recent years. Not only does wind energy reduce water consumption, it also improves electric reliability by making our power system more resilient against climate change-induced drought and temperature increases that are expected to reduce the output of conventional power plants.
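As a rough consistency check on the numbers quoted above, multiplying the 98 million MWh of RPS renewable generation in 2013 by the roughly 7.5 cents/kWh of air-quality benefits reproduces the report's approximately $7.5 billion annual environmental benefit, and the consumer-price figures scale the same way. The arithmetic below uses only figures quoted in this article.

    # Rough consistency check using only figures quoted above.
    rps_generation_mwh = 98e6        # RPS renewable generation in 2013
    benefit_per_kwh = 0.075          # $0.075/kWh = 2.2 cents (carbon) + 5.3 cents (other air pollution)

    total_benefit = rps_generation_mwh * 1_000 * benefit_per_kwh
    print(f"Implied annual air-quality benefit: ${total_benefit / 1e9:.1f} billion")

    # The 1.3-4.9 cents/kWh consumer-price benefit scales the same way:
    for cents in (1.3, 4.9):
        dollars = rps_generation_mwh * 1_000 * cents / 100
        print(f"  at {cents} cents/kWh: ${dollars / 1e9:.1f} billion")

The first line lands near $7.4 billion and the second pair near the $1.3 billion to $4.9 billion consumer-savings range, so the per-kWh and aggregate figures in the report hang together.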


News Article | January 28, 2016
Site: cleantechnica.com

How low can solar go? We keep asking that question, and solar researchers keep coming up with new answers. In the latest cost-cutting development, researchers at the Lawrence Berkeley National Laboratory have come up with a relatively inexpensive, energy-efficient process for manufacturing standard silicon solar cells without the use of an atomic “dopant.” Dopants are atomic-scale impurities added to the contacts of an electronic device, such as a silicon solar cell, to juice up their conductivity. An archived article from Berkeley Lab explains: “Doping materials is a fundamental component of the entire modern electronics industry,” Crommie says, referring to the process of adding impurities like phosphorus or boron to semiconductors like silicon to give the doped material an excess of negatively charged electrons or positively charged holes. Fundamental or not, doping also adds complexity to electronic devices, which leads to increased manufacturing costs and can limit the lifespan of optimal performance. Considering the razor-thin margins of the global solar industry, every tiny increment of cost savings can result in a significant competitive edge, which accounts for Berkeley Lab’s interest in dopant-free solar cells. The Berkeley Lab DASH (dopant free asymmetric heterocontact) solar cell is the result of a collaboration with scientists from Australian National University and the Swiss Federal Institute of Technology of Lausanne. The issue that the solar industry faces is one of balance. By not doping, you reduce costs but you also reduce efficiency. Doped silicon solar cells have achieved efficiencies of 20 percent and slightly higher, but until now dopant-free versions have only gotten up to 14 percent. The new DASH solar cell cranked that up to 19 percent. It consists of a crystalline silicon wafer covered in thin layers of amorphous silicon (amorphous meaning non-crystalline; these thin layers are also left undoped). The next step is the application of nano-thin coatings of “moly” or molybdenum oxide on top (molybdenum is a brittle transition metal and is getting to be one of our favorite clean tech materials). The moly oxide coating serves as a dopant-free contact that collects positive charge (holes) on the sunward side. On the back side is a thin layer of lithium fluoride, also dopant-free, that acts as a contact for electrons. The two materials can be layered on at room temperature using a standard thermal evaporation process, which helps to keep manufacturing costs down. From end to end, the entire process only takes seven steps (following is from the study, see link below): …the simplified architectures inherent to this approach allow cell fabrication in only seven low-temperature (≤200 °C), lithography-free steps. This is a marked improvement on conventional doped-silicon high-efficiency processes, and highlights potential improvements on both sides of the cost-to-performance ratio for c-Si photovoltaics. The two materials were selected because they have a successful track record when used in other types of devices, though not specifically solar cells. They are both transparent, and their electronic structures complement each other. In addition, the naturally occurring defects in thin films of moly oxide are actually a positive feature that enhances the material’s electronic properties. According to the lab, that’s just one combination, so the next phase of the research will involve trying out other materials in different combinations. 
If you want more details, you can find them in the journal Nature Energy under the title “Efficient silicon solar cells with dopant-free asymmetric heterocontacts.” The cost of solar cells accounts for only the “hard” part of a solar installation. Other costs include labor, permitting and other “soft” factors. While the soft costs of an installation are not dropping as fast as the hard costs, both are falling, and that brings us to the political background. In tandem with energy storage — which is also becoming more accessible and affordable — low-cost solar enables more individual property owners to drift off the grid either partly or fully. That trend is already roiling the established utility marketplace, with a consequent pushback from the utility industry and its political supporters. One hotspot that we’re watching in the US (along with our sister site PlanetSave) is Florida, where a battle royal has erupted over a new solar initiative that would enable property owners to buy solar arrays from independent companies. Florida is currently one of only four states in the US that prohibit such deals. Because Florida is a high-heat state, its electricity consumption is tied to air conditioning, a ripe condition for distributed solar even without storage. We’re also watching the state of play in Nevada, where a recent net metering decision by the state’s Public Utility Commission has resulted in a “black hole” for independent solar customers. The independent solar industry has already pulled hundreds of jobs out of Nevada as a result, with SolarCity leading the pack. We’re thinking that pushback from the solar industry won’t change any minds on the Nevada Public Service Commission, but it could serve as a warning to other states that are considering changing the rules in midstream. Image caption: the top images show a cross-section of a solar cell design that uses a combination of moly oxide and lithium fluoride. These materials allow the device to achieve high efficiency in converting sunlight to energy without the need for a process known as doping. The bottom images show the dimensions of the DASH solar cell components.


News Article | August 15, 2016
Site: www.scientificcomputing.com

Imagine a future in which hyper-efficient solar panels provide renewable sources of energy, improved water filters quickly remove toxins from drinking water, and the air is scrubbed clean of pollution and greenhouse gases. That could become a reality with the right molecules and materials. Scientists from Harvard and Google have taken a major step toward making the search for those molecules easier, demonstrating for the first time that a quantum computer could be used to model the electron interactions in a complex molecule. The work is described in a new paper published in the journal Physical Review X by Professor Alán Aspuru-Guzik from the Department of Chemistry and Chemical Biology and several co-authors. “There are a number of applications that a quantum computer would be useful for: cryptography, machine learning, and certain number-theory problems,” Aspuru-Guzik said. “But one that has always been mentioned, even from the first conceptions of a quantum computer, was to use it to simulate matter. In this case, we use it to simulate chemistry.” “There are millions or even trillions of possible molecules,” said Jonathan Romero Fontalvo, a Ph.D. student in Aspuru-Guzik’s lab and one of the lead authors of the study. “If you are an experimentalist hoping to find a new molecule for a drug, you need to consider a huge number of possibilities, and you need to synthesize and test them all. That is extremely costly and requires a great deal of time and effort.” Classical computers can model simple molecules, but they lack the horsepower needed to model all the possible interactions that take place in more complex molecules. A molecule like cholesterol, Aspuru-Guzik said, is all but impossible to model exactly in traditional systems because it would require decades to describe how its electrons interact. Though Aspuru-Guzik and colleagues had described an algorithm to model molecules using quantum computers more than a decade ago, quantum computing resources were limited at the time, meaning the team was only able to test certain parts of the algorithm. The new study not only marks the first time the entire algorithm has been tested in a scalable manner, but also implements it with a new algorithm, dubbed the variational quantum eigensolver. Even more importantly, Aspuru-Guzik said, both algorithms were implemented in a scalable approach, meaning that while they were tested on a small molecule, they would work equally well on a larger, more complex compound. “We were actually able to compare our old algorithm against the new one,” he said. “The machine is so powerful we can do that. And because it’s scalable, the same algorithm we would run against any molecule in this case was run against a small molecule.” Using the algorithm, Aspuru-Guzik and colleagues are able to model the electronic structure of a given molecule, and then to “read” that information, giving them precise data about behavior and interactions. Armed with that information, he said, researchers can understand whether a molecule will possess the properties desired — whether it will bind to an enzyme or protein, whether it will catalyze certain reactions, and whether a material will possess specific traits. “This is arguably the most valuable application for a quantum computer,” he continued. “Commercially, the market for fine chemicals is estimated to be worth $3 trillion. 
A number of other teams, including researchers at Microsoft and national labs, have made this area a priority.” But without quantum computers, Aspuru-Guzik said, that search would amount to little more than a guessing game. That is why “I like to call this a disruptive innovation,” he said. “All the methods we have now are approximations, which means if we want to discover a new battery or a better LED, our calculations will sometimes fail. But the change a quantum computer brings is that the answer is exact. Period. That trust will allow us to discover molecules much quicker, make molecules that are much more interesting, and explore chemical spaces significantly faster. When I started at Harvard in 2006, I never imagined that 10 years later we would be at this point.” The future is likely to bring hardware advances. “We are currently fabricating a new generation of quantum chips that will enable much larger calculations,” said co-author Ryan Babbush, a quantum engineer at Google. “We are optimistic that these chips will be large enough to model small transition metal complexes, which are notoriously difficult for classical computers.” The study also has implications for other areas, such as machine learning. “I very much like to think of variational quantum algorithms as a quantum generalization of neural networks,” Google co-author Hartmut Neven said. “Naturally we expect quantum neural networks to be more capable than their classical counterparts in describing quantum systems. An interesting open question is whether they will also prove superior in learning classical data.” Harvard’s Office of Technology Development has filed several patent applications relating to the software that drives Aspuru-Guzik’s quantum computing platforms. The study was funded by the Luis W. Alvarez Postdoctoral Fellowship in Computing Sciences at Lawrence Berkeley National Laboratory, the Air Force Office of Scientific Research, the Army Research Office, the Office of Naval Research, and the National Science Foundation.
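The variational quantum eigensolver mentioned above rests on a simple principle: prepare a trial state that depends on a few parameters, measure the energy expectation value, and adjust the parameters until that energy is as low as possible. The toy sketch below illustrates that variational principle entirely classically on an invented 2-by-2 Hamiltonian; it is not a quantum-hardware implementation and not the algorithm as run in the study.

    # Classical toy illustration of the variational principle behind the
    # variational quantum eigensolver: minimize <psi(theta)|H|psi(theta)> over
    # a parameterized trial state. The 2x2 Hamiltonian is invented for the
    # example; a real VQE evaluates the expectation value on quantum hardware.
    import numpy as np
    from scipy.optimize import minimize_scalar

    H = np.array([[-1.05,  0.39],
                  [ 0.39, -0.25]])      # toy Hermitian "molecular" Hamiltonian

    def energy(theta):
        psi = np.array([np.cos(theta), np.sin(theta)])   # normalized ansatz
        return psi @ H @ psi

    result = minimize_scalar(energy, bounds=(0.0, np.pi), method="bounded")
    exact = np.linalg.eigvalsh(H)[0]

    print(f"variational ground-state energy: {result.fun: .6f}")
    print(f"exact ground-state energy      : {exact: .6f}")

Because this two-level ansatz spans every normalized real state, the variational minimum matches the exact ground-state energy; on real hardware the interest lies in ansatzes that a classical computer cannot simulate efficiently.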


News Article | February 15, 2017
Site: cen.acs.org

Chaos reigned as visitors from the seven countries named in President Trump’s executive order on immigration—Iraq, Iran, Libya, Somalia, Sudan, Syria, and Yemen—were detained at airports or prevented from leaving on trips that had already been approved. Permanent residents who have lived in the U.S. for years were also affected. Universities, federal researchers, and companies all wrestled with what the ban would mean for them, their employees, or their students. The ban was designed to help prevent terrorist attacks, Trump said. Administration officials say it is temporary in most cases; entry to the U.S. was suspended for 90 days for most of those affected. Syrian refugee programs were halted altogether. But enforcement has been erratic, and it’s unclear when people with visas will be allowed to enter the U.S. C&EN did not uncover reports of chemists who were caught up in the immediate detentions at airports. But the normally cautious American Chemical Society, which publishes C&EN, reacted strongly. “The order itself is overly broad in its reach, unfairly targets individuals from a handful of nations, ignores established mechanisms designed to achieve the ends sought by the order, and sets potential precedent for future executive orders,” the society said in a statement. ACS has 109 members in Iraq, 41 in Iran, two in Libya, and one in Sudan. The largest number of affected scientists likely comes from Iran. More than 12,000 students came from Iran to the U.S. last year, according to data from the Institute of International Education. That’s far more than the other countries covered by the ban. Universities scrambled to help international students and scholars who were held in airports or turned back. “The order is stranding students who have been approved to study here and are trying to get back to campus and threatens to disrupt the education and research of many others,” says Mary Sue Coleman, president of the Association of American Universities, a coalition of the top research universities. She urged the Administration to make it clear that the U.S. “continues to welcome the most talented individuals from all countries.” Alison Hatt, the user program director at Lawrence Berkeley National Laboratory’s Molecular Foundry, says an Iranian postdoc from an Italian research institution was supposed to work on a project there but would be unlikely to get a visa. “Our researchers are definitely being impacted by the anti-immigration order.” Collaborators from other countries will be reluctant to work with U.S. scientists if this is not resolved, says Zafra Lerman, president of the Malta Conferences Foundation, which promotes international scientific and technical collaborations. “We are losing the opportunity to lead by example that was ours for so many years, and we are impacting the progress of science,” Lerman says. “By putting obstacles on science diplomacy, we will just torpedo its purpose of security and safety.” Although many companies spoke out, the life sciences industry was largely silent on the ban. Neither of the main industry groups—the Pharmaceutical Research & Manufacturers of America and the Biotechnology Industry Organization—commented on the executive order. Neither did major U.S. chemical companies, none of which responded to C&EN’s request for comment. Other than Iran, most of the countries with large chemical industries, such as Saudi Arabia, Kuwait, and the United Arab Emirates, were excluded from the order.


News Article | March 4, 2016
Site: www.cemag.us

Electrons can extend our view of microscopic objects well beyond what’s possible with visible light — all the way to the atomic scale. A popular method in electron microscopy for looking at tough, resilient materials in atomic detail is called STEM, or scanning transmission electron microscopy, but the highly focused beam of electrons used in STEM can also easily destroy delicate samples. This is why using electrons to image biological or other organic compounds, such as chemical mixes that include lithium — a light metal that is a popular element in next-generation battery research — requires a very low electron dose. Scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a new imaging technique, tested on samples of nanoscale gold and carbon, that greatly improves images of light elements using fewer electrons. The newly demonstrated technique, dubbed MIDI-STEM, for matched illumination and detector interferometry STEM, combines STEM with an optical device called a phase plate that modifies the alternating peak-to-trough, wave-like properties (called the phase) of the electron beam. This phase plate modifies the electron beam in a way that allows subtle changes in a material to be measured, even revealing materials that would be invisible in traditional STEM imaging. Another electron-based method, which researchers use to determine the detailed structure of delicate, frozen biological samples, is called cryo-electron microscopy, or cryo-EM. While single-particle cryo-EM is a powerful tool — it was named as science journal Nature’s 2015 Method of the Year — it typically requires taking an average over many identical samples to be effective. Cryo-EM is generally not useful for studying samples with a mixture of heavy elements (for example, most types of metals) and light elements like oxygen and carbon. “The MIDI-STEM method provides hope for seeing structures with a mixture of heavy and light elements, even when they are bunched closely together,” says Colin Ophus, a project scientist at Berkeley Lab’s Molecular Foundry and lead author of a study, published last week in Nature Communications, that details this method. If you take a heavy-element nanoparticle and add molecules to give it a specific function, conventional techniques don’t provide an easy, clear way to see the areas where the nanoparticle and added molecules meet. “How are they aligned? How are they oriented?” Ophus asks. “There are so many questions about these systems, and because there wasn’t a way to see them, we couldn’t directly answer them.” While traditional STEM is effective for “hard” samples that can stand up to intense electron beams, and cryo-EM can image biological samples, “We can do both at once” with the MIDI-STEM technique, says Peter Ercius, a Berkeley Lab staff scientist at the Molecular Foundry and co-author of the study. The phase plate in the MIDI-STEM technique allows a direct measure of the phase of electrons that are weakly scattered as they interact with light elements in the sample. These measurements are then used to construct so-called phase-contrast images of the elements. Without this phase information, the high-resolution images of these elements would not be possible. In this study, the researchers combined phase plate technology with one of the world’s highest resolution STEMs, at Berkeley Lab’s Molecular Foundry, and a high-speed electron detector. 
They produced images of samples of crystalline gold nanoparticles, which measured several nanometers across, and the superthin film of amorphous carbon that the particles sat on. They also performed computer simulations that validated what they saw in the experiment. The phase plate technology was developed as part of a Berkeley Lab Laboratory Directed Research and Development grant in collaboration with Ben McMorran at University of Oregon. The MIDI-STEM technique could prove particularly useful for directly viewing nanoscale objects with a mixture of heavy and light materials, such as some battery and energy-harvesting materials, that are otherwise difficult to view together at atomic resolution. It also might be useful in revealing new details about important two-dimensional proteins, called S-layer proteins, that could serve as foundations for engineered nanostructures but are challenging to study in atomic detail using other techniques. In the future, a faster, more sensitive electron detector could allow researchers to study even more delicate samples at improved resolution by exposing them to fewer electrons per image. “If you can lower the electron dose you can tilt beam-sensitive samples into many orientations and reconstruct the sample in 3D, like a medical CT scan. There are also data issues that need to be addressed,” Ercius says, as faster detectors will generate huge amounts of data. Another goal is to make the technique more “plug-and-play,” so it is broadly accessible to other scientists. Berkeley Lab’s Molecular Foundry is a DOE Office of Science User Facility. Researchers from the University of Oregon, Gatan Inc., and Ulm University in Germany also participated in the study. Source: Berkeley Lab
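The role of the phase plate can be illustrated with a textbook phase-contrast calculation: a weak phase object leaves the image intensity essentially unchanged, but shifting the unscattered part of the wave by 90 degrees converts the object's small phase shift into measurable contrast. The sketch below is that generic Zernike-style illustration with an arbitrary 0.05-radian phase shift; it is not a model of the actual MIDI-STEM optics or detector.

    # Generic illustration of phase contrast (the textbook Zernike idea), not a
    # model of the MIDI-STEM instrument: a weak phase object is nearly
    # invisible in intensity until the unscattered wave is phase-shifted by 90
    # degrees, which turns the object's phase into measurable contrast.
    import numpy as np

    phi = 0.05                          # weak phase shift from a light element (radians, assumed)

    # Exit wave after a weak phase object: exp(i*phi) ~ 1 + i*phi
    wave = np.exp(1j * phi)

    intensity_plain = np.abs(wave) ** 2     # ordinary image: |exp(i*phi)|^2 = 1, no contrast

    # Replace the unit (unscattered) component with a 90-degree-shifted copy,
    # keeping the weak scattered part (wave - 1) unchanged:
    wave_shifted = 1j * 1.0 + (wave - 1.0)
    intensity_shifted = np.abs(wave_shifted) ** 2    # ~ 1 + 2*phi for small phi

    print(f"contrast without phase plate: {intensity_plain - 1.0:+.4f}")
    print(f"contrast with phase plate   : {intensity_shifted - 1.0:+.4f}  (~2*phi = {2 * phi:+.4f})")

The point is the same one the researchers make about light elements: the information is in the phase, and some form of phase manipulation or interferometry is needed to read it out at low electron dose.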


News Article
Site: www.asminternational.org

A research effort led by the University of Colorado Boulder to develop an inexpensive, "do-it-yourself" coating to retrofit energy-inefficient windows in residential and commercial buildings has been given a $4 million boost over three years by the U.S. Department of Energy (DOE). The funding will be used to develop a paintable, infrared-reflective coating to drastically reduce cooling costs for structures and help reduce greenhouse gas emissions by minimizing solar heat gains, according to CU-Boulder assistant professor Garret Miyake. Miyake is the principal investigator on the award, which also involves the California Institute of Technology, Pasadena, the Lawrence Berkeley National Laboratory (LBNL), Calif., and Materia Inc., Pasadena. Widespread national use of the new window coating promises to reduce the consumption of electricity used for cooling by 35 billion kilowatt-hours (kWh), saving a projected $4 billion annually, says Miyake. The improved energy efficiency would decrease U.S. carbon dioxide emissions by roughly 26 million tons per year, the equivalent of removing 120 million cars from the road. "Our goal is to provide an inexpensive product across a broad marketplace to improve energy inefficient windows and create significant savings for consumers," Miyake said. "We see this as an exciting opportunity to make a positive impact in the field of energy efficiency." The $4 million award is part of $125 million in grants awarded by the DOE's Advanced Research Projects Agency-Energy (ARPA-E) for 41 cutting-edge energy technologies. Collaborators on the CU-Boulder-led effort include Caltech team leader Professor Robert Grubbs, a 2005 Nobel laureate, as well as researchers Raymond Weitekamp, Stephen Selkowitz and Arman Shehabi, who lead the team at LBNL. The coating consists of a unique class of polymers that self-assemble into a photonic crystal structure that simultaneously reflects heat and is transparent to visible light, Miyake said. An economic analysis by the team indicates that windows could be retrofitted for about $1.50 per square foot. While infrared-reflective coatings are commercially available and have demonstrated significant energy savings, there are high up-front costs that can deter many potential customers, said Miyake. Such coatings generally need to be incorporated during manufacturing or installed by professionals. "We will be able to provide an inexpensive paint that can be directly applied to windows by consumers to minimize solar heat transport and improve the energy efficiency of virtually all window types," he says. ARPA-E grants are designed to fund projects that display innovative technologies and show promise for commercial use but are too early for private sector investment, according to the DOE. The 41 funded projects—viewed as promising avenues in the hunt for viable ways to combat climate change, enhance security and solve pressing global energy challenges—come from 21 states and encompass 10 technical categories spanning transportation, electricity generation and delivery, and energy efficiency.
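A quick sanity check of the quoted savings, assuming the consumption figure is in kilowatt-hours and taking a typical U.S. retail electricity price of about 11.5 cents/kWh (an assumed value, not one from the article): 35 billion kWh at that price is on the order of the projected $4 billion per year.

    # Sanity check of the quoted savings. The ~11.5 cents/kWh retail price is
    # an assumed typical U.S. figure, not a number from the article; the
    # 35 billion kWh reduction and the $4 billion projection are from the article.
    cooling_savings_kwh = 35e9
    retail_price_per_kwh = 0.115        # assumed average U.S. retail price, $/kWh

    annual_savings = cooling_savings_kwh * retail_price_per_kwh
    print(f"Implied annual savings: ${annual_savings / 1e9:.1f} billion")   # ~ $4.0 billion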


News Article | December 3, 2015
Site: cleantechnica.com

The Intertubes have been buzzing with news of the Breakthrough Energy Coalition, the latest venture by US billionaire and Microsoft founder Bill Gates. Introduced earlier this week at the COP21 Paris climate talks as a companion to the equally newsworthy Mission Innovation initiative, the new coalition harnesses the dollar power of the Earth’s billionaires to accelerate the clean energy revolution. If your definition of clean energy includes nuclear energy, then you have a lot to cheer about because that seems to be a main focus of the Breakthrough Energy Coalition’s interest. To be clear, the Breakthrough Energy Coalition defines its mission quite broadly. It identifies the problem like this: The existing system of basic research, clean energy investment, regulatory frameworks, and subsidies fails to sufficiently mobilize investment in truly transformative energy solutions for the future. We can’t wait for the system to change through normal cycles. …and it describes one part of the solution: The foundation of this program must be large funding commitments for basic and applied research, and here governments play the key role. That’s where Mission Innovation comes in, by the way. As described yesterday by CleanTechnica, Mission Innovation launched at COP21 with the aim of ramping up government investment in clean energy. The Breakthrough Energy Coalition aims squarely at the missing piece, which would be the task of attracting private dollars to propel high-risk, high return research across the notorious “Valley of Death” that lies between the laboratory and the marketplace: This [Valley of Death] collective failure can be addressed, in part, by a dramatically scaled-up public research pipeline, linked to a different kind of private investor with a long term commitment to new technologies who is willing to put truly patient flexible risk capital to work. These investors will certainly be motivated partly by the possibility of making big returns over the long-term, but also by the criticality of an energy transition. In contrast to the generalities in the Breakthrough mission statement, Gates dropped a hint about his expectations for the organization in a blog post of November 29, timed to COP21: The renewable technologies we have today, like wind and solar, have made a lot of progress and could be one path to a zero-carbon energy future. But given the scale of the challenge, we need to be exploring many different paths—and that means we also need to invent new approaches. You can find another hint in the membership list of the Breakthrough Energy Coalition. So far the only university to join is the University of California, which runs our Lawrence Berkeley National Laboratory. Among many other clean tech endeavors, Berkeley Lab is known for its nuclear energy research facilities: The Nuclear Science Division conducts basic research aimed at understanding the structure and interactions of nuclei and the forces of nature as manifested in nuclear matter – topics that align the Division with the national program as elucidated in the 2007 U.S. Nuclear Science Long Range Plan. The Division has major programs in low energy nuclear science, including nuclear structure physics, studies of the heaviest elements, exotic nuclei and light radioactive beams, weak interactions, and nuclear reactions; relativistic heavy ion physics; nuclear theory; nuclear astrophysics and neutrino properties; data evaluation; and advanced instrumentation. The Division also operates the 88-Inch Cyclotron. 
The 88-Inch Cyclotron is the home of the Berkeley Accelerator Space Effects Facility (BASEF) and supports a local research program in nuclear science. Not for nothing, but did you know that Bill Gates is a co-founder and current Chairman of the innovative nuclear energy company TerraPower? The Washington State-based company launched in 2006 and although the US is unlikely to prove fertile ground for nuclear energy investment in the near future, TerraPower is already well on its way to putting down stakes in China. Both Gates and the Breakthrough Energy Coalition are honest about their primary intention, which is to make a profit. In that regard it’s worth noting that members of the coalition stand to profit both directly through a return on their new clean energy investments, and indirectly by enabling them to continue growing the market for their primary products in a carbon-constrained world. Industries represented by the Breakthrough membership include computer software and hardware (Microsoft, SAP, Hewlett-Packard), telecommunications (Tata Industries), and e-commerce including shipping and logistics (Amazon and Alibaba). If you can spot more affiliations drop us a note in the comment thread. Before we leave, let’s note that our Lawrence Livermore National Laboratory is a hotbed of research into nuclear fusion — basically, generating energy by squeezing particles together rather than blowing them apart — so we’re interested to see if Livermore will hop aboard the Breakthrough coalition, too. Livermore is the home of the National Ignition Facility and if you want to see this crazy place in action, catch the “Energy on the Edge” episode of National Geographic’s ongoing Breakthrough (no relation to the Breakthrough coalition) series this Sunday at 9:00 p.m. Eastern Standard time on the National Geographic Channel. The episode includes a heartstopping sequence of an actual test firing.


News Article | November 3, 2016
Site: www.rdmag.com

Scientists at Lehigh University, in collaboration with Lawrence Berkeley National Laboratory, have demonstrated the fabrication of what they call a new class of crystalline solid by using a laser heating technique that induces atoms to organize into a rotating lattice without affecting the macroscopic shape of the solid. By controlling the rotation of the crystalline lattice, the researchers say they will be able to make a new type of synthetic single crystal and "bio-inspired" materials that mimic the structure of special biominerals and their superior electronic and optical properties as well. The group reported its findings today (Nov. 3) in Scientific Reports, a Nature journal, in an article titled "Rotating lattice single crystal architecture on the surface of glass." The paper's lead author is Dmytro Savytskii, a research scientist in the department of materials science and engineering at Lehigh. The other authors are Volkmar Dierolf, distinguished professor and chair of the department of physics at Lehigh; Himanshu Jain, the T.L. Diamond Distinguished Chair in Engineering and Applied Science and professor of materials science and engineering at Lehigh; and Nobumichi Tamura of the Lawrence Berkeley National Lab in Berkeley, California. The development of the rotating lattice single (RLS) crystals follows a discovery reported in March in Scientific Reports in which the Lehigh group demonstrated for the first time that a single crystal could be grown from glass without melting the glass. In a typical crystalline solid, atoms are arranged in a lattice, a regularly repeating, or periodic, three-dimensional structure. When viewed from any angle (left to right, up and down, front to back), a crystal-specific periodicity becomes evident. Glass, by contrast, is an amorphous material with a disordered atomic structure. Because they have no grain boundaries between interconnecting crystals, single-crystal materials often possess exceptional mechanical, optical and electrical properties. Single crystals give diamonds their brilliance and jet turbine blades their resistance to mechanical forces. And the single crystal of silicon of which a silicon chip is made gives it the superior conducting properties that form the basis for microelectronics. The periodicity, or repeating pattern, in a rotating lattice single crystal, said Jain and Dierolf, differs from the periodicity in a typical single crystal. "We have found that when we grow a crystal out of glass," said Jain, "the periodicity does not result the same way. In one direction, it looks perfect, but if you turn the lattice and look at it from a different angle, you see that the whole structure is rotating." "In a typical single-crystal material," said Dierolf, "once I figure out how the pattern repeats, then, if I know the precise location of one atom, I can predict the precise location of every atom. This is possible only because single crystals possess long-range order. "When we grow an RLS crystal out of glass, however, we have found that the periodicity does not result the same way. To predict the location of every atom, I have to know not just the precise location of a particular atom but the rotation angle of the lattice as well. "Thus, we have to slightly modify the textbook definition of single crystals." The rotation, said Jain, occurs at the atomic scale and does not affect the shape of the glass material. "Only the string of atoms bends, not the entire material. We can see the bending of the crystal lattice with x-ray diffraction." 
To achieve this rotation, the researchers heat a very small portion of the surface of a solid glass material with a laser, which causes the atoms to become more flexible. "The atoms want to arrange in a straight line but the surrounding glass does not allow this," said Jain. "Instead, the glass, being completely solid, forces the configuration of the atoms to bend. The atoms move and try to organize in a crystalline lattice, ideally in a perfect single crystal, but they cannot because the glass prevents the perfect crystal from forming and forces the atoms to arrange in a rotational lattice. The beauty is that the rotation occurs smoothly on the micrometer scale. "Our laser imposes a degree of asymmetry on the growth of the crystal. We control the asymmetry of the heating source to impose this rotational pattern on the atoms." The group's ability to control the amount of heating is critical to the formation of the rotating lattice, said Jain. "The key to the creation of the rotating atomic lattice is that it occurs without melting the glass. Melting allows too much freedom of atomic movement, which makes it impossible to control the organization of the lattice. "Our subtle way of heating the glass overcomes this. We heat only the surface of the glass, not inside. This is very precise, very localized heating. It causes only a limited movement of the atoms, and it allows us to control how the atomic lattice will bend." Rotating lattices have been observed in certain biominerals in the ocean, said Jain and Dierolf, and it may also occur on a very small scale in some natural minerals as spherulites. "But no one had previously made this on a larger scale in a controlled way, which we have accomplished with the asymmetrical imposition of a laser to cause the rotating lattice," said Jain. "Scientists were not able to understand this phenomenon before because they could not observe it on a large enough scale. We are the first group to induce this to happen on an effectively unlimited dimension with a laser." Jain and Dierolf and their group are planning further studies to improve their ability to manipulate the ordering of the atoms. The researchers performed the laser heating of the glass at Lehigh and characterized the glass with micro x-ray diffraction on a synchrotron at the Lawrence Berkeley National Lab. They plan to perform further characterization at Berkeley and with electron microscopy at Lehigh. The project has been funded for six years by the U.S. Department of Energy. "This is a novel way of making single crystals," said Dierolf. "It opens a new field by creating a material with unique, novel properties."
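One way to make Dierolf's distinction concrete (offered here only as a schematic sketch, not as notation taken from the paper): in a conventional single crystal, knowing one atomic position and the fixed lattice vectors is enough to predict every other position, whereas in a rotating lattice single crystal the prediction also requires the local rotation angle, which varies smoothly with position on the micrometer scale:

    % Conventional single crystal: fixed lattice vectors a_i predict every site
    \mathbf{r}(n_1,n_2,n_3) = n_1\,\mathbf{a}_1 + n_2\,\mathbf{a}_2 + n_3\,\mathbf{a}_3
    % RLS crystal (schematic only): the local lattice frame is carried through a
    % position-dependent rotation R(\theta), so \theta(\mathbf{r}) is also needed
    \mathbf{r}(n_1,n_2,n_3) \approx \mathbf{r}_0 + \sum_i n_i\, R\!\big(\theta(\mathbf{r})\big)\,\mathbf{a}_i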


News Article | November 29, 2016
Site: globenewswire.com

Copenhagen, 2016-11-29 12:30 CET (GLOBE NEWSWIRE) -- The Carlsberg Forum series reflects the strong link between science and business. Each year, the Kaj Linderstrøm-Lang awards are given to prominent scientists for their achievements within biochemistry or physiology, the fields of science in which Kaj Linderstrøm-Lang, a professor at Carlsberg Research Laboratory in the period 1939-1959, distinguished himself as a pioneer. This year, Professor Henrik V. Scheller, Joint BioEnergy Institute, Lawrence Berkeley National Laboratory, USA and Professor Geoff Fincher, School of Agriculture, Food & Wine, The University of Adelaide, Australia received this year’s Kaj Linderstrøm-Lang Prize as an acknowledgement of their outstanding achievements on identifying and characterizing enzymes involved in synthesis and modification of the plant cell wall. The third Kaj Linderstrøm-Lang Prize was awarded to Professor Richard Henderson, MRC Laboratory of Molecular Biology, Cambridge, UK for his pioneering work towards obtaining high resolution atomic structures of membrane proteins and membrane protein complexes by electron cryomicroscopy. The prize is a plated gold medal and a financial personal award of DKK 40.000. Finally, the Emil Chr. Hansen Golden Medal was awarded to Jef Boeke, Director, Institute for Systems Genetics at New York University, for his seminal contributions to yeast genome scrambling and the design of synthetic yeast. The Emil Chr. Hansen Foundation sponsors the Emil Chr. Hansen Golden Medal. Professor Emil Chr. Hansen worked at the Carlsberg Laboratory from 1877-1909, and was the first to isolate a pure yeast that revolutionized the brewing industry. Birgitte Skadhauge, Head of Carlsberg Research Laboratory, says: “The distinguished work of Carlsberg Research Laboratory scientists Kaj Linderstrøm-Lang and Emil Chr. Hansen made it possible to have the high quality beers we have today. This year marks the 140th anniversary of the Carlsberg Research Laboratory, and we found it particularly important to celebrate the connection between beer and science this year by honouring not only Professors Richard Henderson, Geoff Fincher and Henrik Scheller but also Jef Boeke with the Emil Christian Hansen Golden Medal for their notable scientific contributions to business and society.” The program for this year’s Carlsberg Forum consisted of eight lectures by outstanding international scientists and businesspersons spanning trend-setting research in e.g. yeast, cereal-crops, food for health & well-being and new technologies. The business part of the program included lectures on business and health challenges in a globalized and rapidly changing world. In a special endnote address, Birgitte Skadhauge reviewed the Laboratory’s anniversary and the famed re-brew project. Scientists at the Laboratory made an extraordinary discovery in the old cellars of Carlsberg in Copenhagen, Denmark, some years ago. They discovered a very old Carlsberg bottle that surprisingly still contained living yeast cells. They grew and analyzed the cells, and, as it turns out, the bottle was one of the very first beers brewed with the original pure yeast from 1883 when Carlsberg democratized modern lager beers. Earlier this year, specially invited guests celebrated the Laboratory’s 140th anniversary by tasting, for the first time in more than a hundred years, the original Carlsberg quality lager, re-brewed by the Laboratory. 
Birgitte Skadhauge says: “What better way to take science to business than by re-brewing the father of quality lager. The Laboratory has pioneered beer quality since it was founded 140 years ago. On the eve of our anniversary, we are proud to have demonstrated scientifically why Carlsberg is probably the best beer in the world.” Warner Bros’ documentary about the re-brew project premiered in Copenhagen and Fredericia, Denmark, on 24 and 25 November. A movie trailer is available on rebrewproject.com. The Carlsberg Group is one of the leading brewery groups in the world, with a large portfolio of beer and other beverage brands. Our flagship brand – Carlsberg – is one of the best-known beer brands in the world, and the Baltika, Carlsberg and Tuborg brands are among the eight biggest brands in Europe. More than 47,000 people work for the Carlsberg Group, and our products are sold in more than 150 markets. In 2015, the Carlsberg Group sold 120 million hectolitres of beer, which is more than 35 billion bottles of beer. Find out more at www.carlsberggroup.com.


News Article | February 15, 2017
Site: cen.acs.org

Chaos reigned as visitors from the seven countries—Iraq, Iran, Libya, Somalia, Sudan, Syria, and Yemen—were detained at airports or prevented from leaving on trips that had already been approved. Permanent residents who have lived in the U.S. for years were also affected. Universities, federal researchers, and companies all wrestled with what the ban would mean for them, their employees, or their students. The ban was designed to help prevent terrorist attacks, Trump said. Administrative officials say it is temporary in most cases; entry to the U.S. was suspended for 90 days for most of those affected. Syrian refugee programs were halted altogether. But enforcement has been erratic, and it’s unclear when people with visas will be allowed to enter the U.S. C&EN did not uncover reports of chemists who were caught up in the immediate detentions at airports. But the normally cautious American Chemical Society, which publishes C&EN, reacted strongly. “The order itself is overly broad in its reach, unfairly targets individuals from a handful of nations, ignores established mechanisms designed to achieve the ends sought by the order, and sets potential precedent for future executive orders,” the society said in a statement. ACS has 109 members in Iraq, 41 in Iran, two in Libya, and one in Sudan. The largest number of affected scientists likely comes from Iran. More than 12,000 students came from Iran to the U.S. last year, according to data from the Institute of International Education. That’s far more than the other countries covered by the ban. Universities scrambled to help international students and scholars who were held in airports or turned back. “The order is stranding students who have been approved to study here and are trying to get back to campus and threatens to disrupt the education and research of many others,” says Mary Sue Coleman, president of the Association of American Universities, a coalition of the top research universities. She urged the Administration to make it clear that the U.S. “continues to welcome the most talented individuals from all countries.” Alison Hatt, the user program director at Lawrence Berkeley National Laboratory’s Molecular Foundry, says an Iranian postdoc from an Italian research institution was supposed to work on a project there but would be unlikely to get a visa. “Our researchers are definitely being impacted by the anti-immigration order.” Collaborators from other countries will be reluctant to work with U.S. scientists if this is not resolved, says Zafra Lerman, president of the Malta Conferences Foundation, which promotes international scientific and technical collaborations. “We are losing the opportunity to lead by example that was ours for so many years, and we are impacting the progress of science,” Lerman says. “By putting obstacles on science diplomacy, we will just torpedo its purpose of security and safety.” Although many companies spoke out, the life sciences industry was largely silent on the ban. Neither of the main industry groups—the Pharmaceutical Research & Manufacturers of America and the Biotechnology Industry Organization—commented on the executive order. Neither did major U.S. chemical companies, none of which responded to C&EN’s request for comment. Other than Iran, most of the countries with large chemical industries, such as Saudi Arabia, Kuwait, and the United Arab Emirates, were excluded from the order.


News Article | January 11, 2016
Site: cleantechnica.com

An estimated $2.2 billion in benefits accompanying reduced greenhouse gas emissions, and $5.2 billion in benefits accompanying the reduction of other air pollutant emissions, resulted from US state renewable portfolio standard (RPS) policies operating in 2013, according to a new study from the US Department of Energy’s National Renewable Energy Laboratory (NREL) and Lawrence Berkeley National Laboratory (Berkeley Lab). The numbers quoted above are mid-range estimates, it should be noted, so actual benefits could be notably higher (or perhaps lower). If you are not familiar with the policy in question, RPS policies require utility companies (or other electricity providers) to source a minimum share of the electricity they supply from renewable generation. Also worth noting, the report — “A Retrospective Analysis of the Benefits and Impacts of US Renewable Portfolio Standards” — demonstrated that “national water withdrawals and water consumption by fossil-fuel plants were reduced by 830 billion gallons and 27 billion gallons in 2013, respectively.” There were around 200,000 renewable-energy-related jobs supported by RPS policies in 2013, reportedly — most of these jobs were in California and were related primarily to utility-scale solar photovoltaic (PV) projects under development at the time. “This work is intended to inform ongoing policy discussions by helping states evaluate RPS programs,” stated Berkeley Lab’s Ryan Wiser, one of the report authors. While the overall benefits reported are large, the study carefully documents its methods and highlights where uncertainties exist. For example, benefits from greenhouse gas reductions were found to range from $0.7 to $6.3 billion, reflecting differences in underlying estimates of potential damages caused by climate change. Similarly, air pollution reduction benefits, which arise primarily from avoided premature mortality, were estimated to range from $2.6 to $9.9 billion in 2013, reflecting differences in underlying epidemiological literature, among other factors. Although the study takes a national view — evaluating all state RPS programs as a whole — many of the associated benefits and impacts were highly regional. For example, the economic benefits from air pollution reductions are associated mostly with reduced sulfur dioxide (SO2) emissions from coal-fired power plants, and are concentrated primarily in the Mid-Atlantic, Great Lakes, Northeast, and Texas. Reductions in water withdrawal and consumption were largest in California and Texas, respectively, both states that experience intermittent drought conditions. Having examined both the costs and benefits of state RPS programs historically, the researchers are planning a follow-up effort to evaluate the costs and benefits of RPS programs prospectively, considering scheduled increases to each state’s requirements as well as potential policy revisions. “Our goal was to estimate the magnitude of RPS benefits and impacts at a national level, using established, consistent methodologies, while recognizing that states could perform their own more-detailed assessments,” NREL’s Jenny Heeter, one of the report authors, said.


News Article | April 15, 2016
Site: www.greentechmedia.com

The renewable portfolio standards that many states have enacted are responsible for 60 percent of the growth in non-hydro renewable energy generation, according to a new study from Lawrence Berkeley National Laboratory. Most of the additional capacity to meet RPS requirements has come from wind, but in recent years solar energy has been gaining traction. Renewable portfolio standards call for a certain percentage of power generation to come from renewable resources, although what qualifies as renewable and which power generators have to meet the obligations varies by state. The majority of states are on target to meet their differing requirements in the near and long term. In 2014, RPS achievement averaged 95 percent, with only Illinois and New York lagging significantly. Each state has very different rules around RPS requirements, but the outcome is that they are achieving what they set out to do. The Berkeley Lab study found that build rates are on schedule to deliver the required total of more than 400 terawatt-hours of clean energy by 2030, roughly double what is available today. Many states have strengthened their RPS criteria in the past few years, and only two have weakened them. For most states, the adjustments have either been higher overall targets, or specific carve-outs for solar or distributed generation. Compliance costs are also all over the map, but in all cases they are below, and often well below, cost caps. The average compliance cost for RPS requirements equated to approximately 1.3 percent of retail electricity bills in 2014, up from 1 percent in 2013. In states with solar carve-outs, the price for solar renewable energy credits varies wildly. As with other aspects of an RPS, the rules are highly state-specific. Some states, such as Massachusetts and New Jersey, have an oversupply, while the District of Columbia and New Hampshire have an undersupply. The states that have an RPS are also driving investment in renewables in other states that don’t have the mandates. More than 10 percent of projects built to support renewable portfolio standards have been located in states that do not have an RPS. Only a handful of states in the U.S. have not seen any renewable generation build-out because of an RPS. Moving forward, RPS mandates will not be as big a driver for renewable adoption as they have been in the past, even though they are expected to add another 60 gigawatts of clean power capacity to the U.S. grid by 2030. Instead, market drivers and other regulations are increasingly bringing wind and solar online across the U.S. as prices for renewable energy continue to drop. The shift away from a reliance on RPS mandates to deploy renewables is already underway. In 2013, 71 percent of renewable energy installations were due to RPS requirements, a figure that dropped to 46 percent in 2015.


News Article | January 6, 2016
Site: www.greentechmedia.com

State renewable portfolio standards, the key policies driving renewable power growth across the United States, are under attack from opponents who say they increase electricity costs for consumers. But a new report finds that any costs are far outweighed by the economic benefits derived from the environmental and health improvements that come with greening the energy mix in the 29 states that have adopted RPS targets. That’s not counting the potential for billions of dollars in reduced natural-gas costs and wholesale electricity costs -- although these additional cost reductions come at the expense of the companies that make money on generating electricity from fossil fuels. Wednesday’s report, A Retrospective Analysis of the Benefits and Impacts of U.S. Renewable Portfolio Standards, found an average of $2.2 billion in economic benefits from reduced greenhouse gas emissions, and another $5.2 billion in benefits from reductions in sulfur dioxide and other air pollutants, for RPS programs in 2013. These averages are taken from wider-range estimates of the value of mitigating the negative effects of climate change for GHG reductions, as well as reducing mortality from air pollution. These figures are much higher than the roughly $1 billion in annual compliance costs (that is, higher electricity prices) that have come about as a result of state RPS mandates between the years 2010 and 2013. Those figures come from a previous study from the Department of Energy’s Lawrence Berkeley National Laboratory (LBNL) and the National Renewable Energy Laboratory, which also co-authored Wednesday’s report. The two reports use different methodologies, which makes a direct comparison difficult, said Ryan Wiser, LBNL analyst and report co-author, in an interview. Specifically, the previous study relied on state-by-state measures of RPS compliance costs, which vary widely in their sophistication and methods, while the new report used a common methodology across all 29 states (plus the District of Columbia) that have RPS mandates in place. Still, the new report does help provide an important counterweight to arguments that state RPS targets are driving up electricity costs to unacceptable levels, he said. “Most people and most organizations that have come out in opposition to RPS programs have focused on the cost angle,” said Wiser. “We did this work previously that compiled all these RPS state cost analyses, while recognizing that the states developed their RPS programs not to deliver short-term savings, but to deliver long-term benefits.” “This more recent work helps us understand what we are getting for these costs, and [whether] those benefits seem like they’re in excess of those costs,” he said. Groups such as American Legislative Exchange Council, Americans for Prosperity, Americans for Tax Reform and the Chamber of Commerce have been fighting to block or overturn RPS mandates in individual states, citing costs to consumers. These groups are also funded by fossil fuel companies that stand to lose money in the transition to green energy -- and which have been caught many times making misleading claims about the costs of wind and solar power. The new report also calculates a separate set of economic measures that are termed “impacts,” rather than benefits, Wiser said. 
These include estimates of nearly 200,000 jobs created in the renewable energy sector, driving over $20 billion in gross domestic product in 2013, along with some potentially large (but still unknown) effects on lowering wholesale electricity prices and natural-gas prices. The difference here, he said, is that “benefits are undoubtedly societal benefits." Few would argue that reducing early deaths from air pollution and mitigating the harm of climate change aren’t good for everyone. But new green energy jobs could be seen as coming at the expense of jobs lost in the fossil fuel sector, and reduced wholesale electricity and natural-gas prices, while good for people who pay for them, aren’t good for the power-plant owners and natural-gas producers who make money on them. One very interesting figure is the calculation of how state RPS goals could lead to reducing wholesale electricity prices. According to Wednesday’s report, the growth of wind, solar and other renewable energy driven by these mandates could lead to up to $1.2 billion in reduced electricity costs and consumer bills -- but it could also be calculated as delivering an impact of zero, or no reduction at all. “In simplest form, the question here becomes one of the adjustment time period,” Wiser explained. “The year you inject a huge amount of solar and wind into the system, you depress wholesale electricity prices,” which one might assume leads to lower costs. But if that leads to the retirement of existing coal, natural gas or nuclear power plants, or the decision not to build new power plants, that could lead to a shortage of electricity, which would cause prices to rise yet again, potentially erasing the short-term benefits. One can see these concepts playing out in Germany, where massive injections of solar and wind power under lucrative feed-in tariff regimes have led to significant disruptions in its energy markets and pushed traditional utility business models to the breaking point. Of course, the United States isn’t nearly as far along in the green energy transformation as is Germany, Wiser noted. Even so, it’s a factor that is being considered by states that are targeting 50 percent or higher shares of renewables within traditional energy market structures that weren’t designed to manage them.
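For a rough sense of scale (a back-of-the-envelope sketch, not a calculation from either report), the central benefit estimates quoted above can be set against the roughly $1 billion per year in compliance costs from the earlier study, keeping in mind Wiser's caveat that the two analyses used different methodologies:

    # Rough comparison of 2013 RPS benefits (central estimates) to compliance costs.
    # Dollar figures are those quoted in the articles above; this is illustrative only.
    ghg_benefits_bn = 2.2            # avoided climate damages, central estimate ($bn)
    air_quality_benefits_bn = 5.2    # avoided health damages, central estimate ($bn)
    compliance_cost_bn = 1.0         # approximate annual compliance cost, 2010-2013 ($bn)

    total_benefits_bn = ghg_benefits_bn + air_quality_benefits_bn
    print(f"Central-estimate benefits: ${total_benefits_bn:.1f} bn")
    print(f"Benefit-to-cost ratio: ~{total_benefits_bn / compliance_cost_bn:.0f}:1")

    # The uncertainty ranges quoted in the companion article above
    # ($0.7-6.3 bn for GHG, $2.6-9.9 bn for air quality) bracket that ratio:
    low, high = 0.7 + 2.6, 6.3 + 9.9
    print(f"Range of ratios: ~{low / compliance_cost_bn:.1f}:1 to ~{high / compliance_cost_bn:.0f}:1")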


An international team led by researchers at UCLA and Caltech has demonstrated that altering the form of platinum nanoscale wires from a smooth surface to a jagged one can significantly reduce the amount of precious metal required as a catalyst for the oxygen reduction reaction (ORR) in fuel cells and thus lower the cost. According to the findings, the newly developed catalyst is so active that the amount of platinum required for a fuel cell could be 1/50 of what is needed today. In a paper published in Science, the team reports that the jagged Pt nanowires exhibit an ECSA (electrochemically active surface area) of 118 m2 per gram Pt and a specific activity of 11.5 mA per square centimeter for the ORR, corresponding to a mass activity of 13.6 amperes per milligram Pt, nearly double the previously reported best values. Reactive molecular dynamics simulations suggested that the highly stressed, under-coordinated rhombohedral-rich surface configurations of the jagged nanowire enhanced ORR activity versus more relaxed surfaces. Platinum is used to catalyze the key oxygen reduction reaction in the fuel cell’s cathode, directly converting the chemical energy in hydrogen fuel into electricity, with water vapor as the byproduct. Oxygen reduction represents the key reaction step that limits the rate of power generation, and platinum is currently still the most viable catalyst to speed up the reaction and power generation, despite a great deal of research exploring alternatives. However, current state-of-the-art platinum catalysts are not active enough, which necessitates the use of a relatively large amount of platinum, contributing to the high price of fuel cells. Manufacturing nanowires with jagged surfaces, rather than smooth ones, creates new types of highly active sites that can significantly reduce the reaction barrier and speed up the oxygen reduction reaction. Further, the thin body of the nanowire ensures most of the platinum atoms are exposed on the surface to actively participate in the reaction instead of being embedded inside the body and making little contribution to the reaction. All of that results in a reduction in the amount of platinum used, and therefore the cost, while at the same time ramping up the reaction efficiency and power generation rate. This work is a perfect example of what one can achieve by atomic-scale control of nanoscale materials, and how structural modifications at such small dimensions can lead to big gains in function for real applications. This is a fascinating world to explore for a materials scientist. —Yu Huang, professor of materials science and engineering at the UCLA Henry Samueli School of Engineering and Applied Science Huang was a co-principal investigator on the research along with Xiangfeng Duan, UCLA professor of chemistry and biochemistry. Both are members of the California NanoSystems Institute at UCLA. The other principal investigator was William Goddard, the Charles and Mary Ferkel Professor of Chemistry, Materials Science, and Applied Physics at Caltech. The lead author of the research was Mufan Li, a UCLA doctoral student advised by Duan and Huang. The researchers created the wires in a two-step process. First, they used a heating process to create nanowires of a platinum-nickel alloy. Then they used an electrochemical process to selectively remove the nickel atoms from the alloy nanowires. 
Following testing, the researchers found that the jagged nanowires made the reaction much more efficient, delivering 50 times more current (power) than the same amount of commercial catalysts can. Other authors hailed from the National Research Council of Italy; Tsinghua University in China; The Chinese Academy of Sciences; California State University, Long Beach; Northeastern University; and Lawrence Berkeley National Laboratory. The research was supported by the Department of Energy and the National Science Foundation.
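The reported figures are internally consistent: multiplying the specific activity by the electrochemically active surface area, with the appropriate unit conversions, reproduces the quoted mass activity. A minimal check (illustrative only, using the numbers quoted above):

    # Consistency check on the reported ORR metrics for the jagged Pt nanowires:
    # mass activity (A/mg Pt) = specific activity (mA/cm^2) x ECSA (m^2/g Pt).
    ecsa_m2_per_g = 118.0                    # electrochemically active surface area
    specific_activity_mA_per_cm2 = 11.5      # ORR specific activity

    ecsa_cm2_per_mg = ecsa_m2_per_g * 1e4 / 1e3   # m^2/g -> cm^2/mg
    mass_activity_mA_per_mg = specific_activity_mA_per_cm2 * ecsa_cm2_per_mg
    print(f"Mass activity ~ {mass_activity_mA_per_mg / 1e3:.1f} A per mg Pt")  # ~13.6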


News Article | November 10, 2016
Site: www.gizmag.com

A new study has found that in response to increasing rates of human-induced CO2 in the atmosphere, the Earth's vegetation has increased the rate at which it absorbs carbon(Credit: ChiPhoto/Depositphoto ) We might have to treat our humble house plants to an extra helping of sunlight this week. As human-induced CO2 emissions continue to increase, a new study suggests that the Earth's vegetation has upped its game to help offset that growth. After a 40-year upward trend, the rate at which atmospheric CO2 levels increased hit a plateau between 2002 and 2014, thanks largely to plants plucking more CO2 out of the air than they have previously. In 1959, the rate of growth of carbon dioxide levels in the atmosphere sat at about 0.75 parts per million (ppm) per year, and over the second half of the 20th century, that figure increased rapidly, up to 1.86 ppm per year in 2002. But in the years since then, the rate of growth has flatlined, despite the fact that human activity continues to pump out more and more CO2. That's not to say the amount of CO2 in the atmosphere isn't still going up – it is – but the rate of increase has slowed, remaining steady on about 1.9 ppm each year since 2002. Not only that, but the amount of anthropogenic CO2 that stays in the air has dropped by around 20 percent. So where is it all going? New research, headed up by the Department of Energy's Lawrence Berkeley National Laboratory, analyzed data from the Global Carbon Project and ran computer models to determine what changes may be occurring in Earth's carbon cycle. "We believed one of the planet's main carbon sinks had unexpectedly strengthened," says Trevor Keenan, a corresponding author of the paper. "The question was: which one?" Oceans are considered a carbon sink, but computer models indicate their rate of absorption remains fairly stable. Instead, the team looked to vegetation, which can fluctuate year-to-year in terms of carbon uptake, and tested 10 "global dynamic vegetation models," which simulate different combinations of atmospheric CO2 levels, temperature, soil moisture, and other factors, to see how that carbon cycle might change over time. From those models, it appears that over the decades these ecosystems have effectively doubled the rate of CO2 they absorb. In the 1950s, the Earth's vegetation absorbed between 1 and 2 petagrams (1 trillion to 2 trillion kg, or 2.2 to 4.4 trillion lb) of CO2 each year, but in the 21st century that number had risen to between 2 and 4 petagrams annually. Interestingly, the models suggest that the increase in CO2 levels is itself responsible for the improved absorption performance of the planet's plants. With more CO2 in the air, the plants are better able to photosynthesize, meaning they grow better and further increase their rate of photosynthesis. That boom was helped along by the fact that plant respiration, a process which sees them take in oxygen and "exhale" CO2, didn't increase at the same rate. "These changes decreased the amount of anthropogenic CO2 that stays in the atmosphere, and thus slowed the accumulation of atmospheric CO2," says Keenan. "This highlights the need to identify and protect ecosystems where the carbon sink is growing rapidly." But as encouraging as these figures may appear, the news isn't quite as rosy as it may seem at a glance. "Unfortunately, this increase is nowhere near enough to stop climate change," says Keenan. "We've shown the increase in terrestrial carbon uptake is happening, and with a plausible explanation why. 
But we don't know exactly where the carbon sink is increasing the most, how long this increase will last, or what it means for the future of Earth's climate."
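To translate the growth-rate figures into mass terms, one can use the commonly cited approximation that 1 ppm of atmospheric CO2 corresponds to about 2.13 petagrams of carbon (roughly 7.8 Pg of CO2); that conversion factor is a standard rule of thumb, not a number from the study itself. A minimal sketch:

    # Rough conversion of the atmospheric CO2 growth rate into mass terms.
    # Assumes ~2.13 Pg of carbon per ppm of CO2 (a standard approximation,
    # not a figure from the study), multiplied by 44/12 to convert C to CO2.
    PGC_PER_PPM = 2.13
    CO2_PER_C = 44.0 / 12.0

    growth_rate_ppm_per_yr = 1.9   # approximate plateau since 2002, per the article
    net_accumulation_pg_co2 = growth_rate_ppm_per_yr * PGC_PER_PPM * CO2_PER_C
    print(f"~{net_accumulation_pg_co2:.0f} Pg of CO2 accumulating in the atmosphere per year")  # ~15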


News Article | April 19, 2016
Site: phys.org

That same energy also makes actinides highly radioactive. Scientists doing early research with these substances in the 1940s and 50s did not fully understand the risks of long-term handling and storage for the resulting waste products. Locations such as the Department of Energy's (DOE's) Hanford and Savannah River Sites still have radioactive materials from the Manhattan Project in storage awaiting processing and final disposition. Often stored in large holding containers, these volatile substances mix with one another over time. One of the major challenges for decontaminating these sites lies in separating radioactive compounds from their more inert counterparts for safe removal and storage. To this end, a multi-institution team has been using computing resources at DOE's Oak Ridge National Laboratory (ORNL) to understand actinide chemistry at the molecular level in hopes of designing methods to clean up contamination and safely store spent nuclear fuel. "We put this group together because we were already collaborating on scientific papers," said professor David Dixon of The University of Alabama. "Rather than just saying that we were going to do one reaction and needed computing time, we're using a team approach to try and get a broad, fundamental understanding of actinide science." Dixon serves as the principal investigator on the project, which includes collaborators from Lawrence Berkeley National Laboratory, Washington State University, The State University of New York at Buffalo, the University of Minnesota, and Rice University. The team is using the Titan supercomputer at the Oak Ridge Leadership Computing Facility—a DOE Office of Science User Facility located at ORNL—to work on a variety of research goals related to understanding the chemical reactions taking place with actinides. Supercomputing plays an essential role in understanding these elements. Because of their high radioactivity, actinides are difficult to handle, and those experiments that can be performed are expensive. "These elements are really radioactive, and in many cases, there is not much of it around," Dixon said. "The radioactivity part makes it very difficult to handle in a lab. We want to try and get as much information as we can about their properties before people do specific lab experiments to test out ideas and design new systems to help remediate contamination." The Dixon team gained access to Titan through DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. Within the larger allocation of computing time, the team is divided into several smaller groups looking at specific problems related to actinide chemistry. After spent nuclear fuel begins to dissolve within a storage tank, actinides still exist as ions in an aqueous mixture. To clean up this mixture, researchers must separate these ions into groups based on how they affect the remediation process and by the radioactive hazards they present. Currently, state-of-the-art methods revolve around a multistage process where researchers form a metal-ligand complex with an actinide ion, which then helps the ion migrate across the boundary between oil and water. Washington State University professor Aurora Clark leads a group that studies how solutions of actinides can be separated, primarily focusing on liquid-liquid interfaces. 
"One of the most critical components of actinide separation is the interface between water or an aqueous solvent system and an organic solvent (such as oil), and often these mixtures are incredibly complicated," Clark said. "We're doing simulations to understand the essential physics of what's going on at these liquid-liquid interfaces. We're trying to understand how the structure and dynamics at the oil-water interface can influence permeability of different compounds across these interfaces and can influence the ability of an actinide-containing molecule to migrate across the interface. Essentially, much of the storage for radioactive waste needs to be cleaned of actinides. Clark's group runs large-scale molecular dynamics (MD) simulations to understand this process with a level of detail unattainable through experiment. The team uses Titan to design a solution containing thousands to tens of thousands of individual atoms. During the simulation, the team charts the particles' interactions as the solution moves to a state of equilibrium—or where a reaction and any counter-reaction proceed at an equal rate—and then analyzes the data from these interactions. Clark's group uses some novel approaches for data analysis. One of the team's achievements was creating intermolecular network theory, which allows the team to focus less on the positions of each individual particle in the simulation (like many MD simulations) and more on how the interactions of various particles dictate the physical properties of these radioactive mixtures. Intermolecular network theory gets the team to take its data and analyze a "network" of the various chemical interactions. Researchers then analyze that network much like a network analyst would observe internet connectivity or monitor a network of computers—in fact, the team uses a modified version of Google's PageRank algorithm, which is instrumental in ranking the importance and relevance of web content, to help understand the immediate organization of solvents around ions. Through its suite of computational tools and access to Titan in the last year, the Clark group was able to observe how the molecule tributyl phosphate changes its conformation when at the interface of water and hexane. As the molecule changes, a massive reorganization at the interface makes the transfer of an actinide across the interface more permeable. In addition to working on liquid-liquid interfaces, Clark's group is also starting to simulate liquid-solid interactions, with a specific focus on metal organic frameworks—compounds that create nanoscale pores that attract and bind to actinides. As supercomputing power continues to increase, Clark hopes that metal organic frameworks may become the gold standard for scrubbing actinides from an environment. "The strategy [of employing metal organic frameworks] compliments our other research and provides a way to both separate and potentially store radioactive material in a single step," Clark said. Clark's group is generating large amounts of data in these simulations and has been working with OLCF support staff to develop methods for in situ data analysis techniques that would allow the team to drastically reduce the amount of data it would need to transport out of the OLCF. Other researchers on the Dixon team, such as Lawrence Berkeley National Laboratory's (LBNL's) Bert de Jong, are using their time on Titan and the OLCF's developmental and analysis cluster Eos to collaborate closely with experimentalists. 
De Jong's group works with teams at both LBNL and the University of Utah to understand how to change and control an element's oxidation state—or how many electrons an atom can shed or add to change its charge. By altering the behavior of elements like uranium—or its actinide relatives, thorium, neptunium, and plutonium—incorporating them in molecules and forming chemical bonds, researchers hope to extract radioactive elements from the environment or waste storage tanks and keep them in a more innocuous form, or to separate them for reuse as nuclear fuel. De Jong's group benefitted from Titan's hybrid architecture. "For our thorium work, we had to do very high-accuracy simulations, and for those, we have to do coupled cluster calculations," de Jong said. "The NWChem code that we use is GPU enabled, so that allows us to do these kinds of calculations with the fastest time-to-solution possible." Throughout the group's work on Titan and Eos, simulations have served as both the basis for driving experimental research and verification of certain experimental hypotheses. "I believe that computing is both predictive and interpretive," de Jong said. "Experiments often can only tackle aspects of the problem, but computing can help broaden insights and trends that can confirm or change interpretations and provide more additional information very quickly and safely." De Jong pointed to research on chemical reactions involving ligands and proteins—molecules that bind to radioactive elements—where Titan simulations suggested that certain reactions would shed water molecules while reducing uranium. This phenomenon was subsequently observed experimentally in the Netherlands. Simulations on Titan also predicted the formation of an unusually bonded uranium-nitrosyl molecule, work that was published in the Journal of the American Chemical Society. "Simulations on elements on the bottom of the periodic table, such as uranium and plutonium, are a challenge, and we are using accurate experimental data to verify that our models and methods are accurate," de Jong said. The verification of experiments performed at the University of Utah, studying thorium reactions in a gas phase, lays the foundation for complex simulations of uranium or heavier, more volatile elements that are dangerous to work with in a laboratory. As the OLCF moves toward its next-generation supercomputer, Summit, the Dixon team plans to focus on optimizing its various codes for taking full advantage of GPUs and other computing methods. The team wants to continue expanding its work on metal organic frameworks and incorporate quantum Monte Carlo methods—a computational technique that uses statistical data and random numbers to accurately plot electrons in a simulation. Explore further: What, oh, what are those actinides doing? More information: Richard M Cox et al. Reactions of Th + H , D , and HD Studied by Guided Ion Beam Tandem Mass Spectrometry and Quantum Chemical Calculations , The Journal of Physical Chemistry B (2016). DOI: 10.1021/acs.jpcb.5b08008 Yasaman Ghadar et al. Solutes at the liquid:liquid phase boundary—Solubility and solvent conformational response alter interfacial microsolvation, The Journal of Chemical Physics (2015). DOI: 10.1063/1.4914142 Yu Gong et al. Gas Phase Uranyl Activation: Formation of a Uranium Nitrosyl Complex from Uranyl Azide, Journal of the American Chemical Society (2015). DOI: 10.1021/jacs.5b02420
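To illustrate the "intermolecular network" idea described in the article above (this is a toy sketch only, not the researchers' code; the molecule names, coordinates and distance cutoff are invented for illustration): molecules in a simulation snapshot become graph nodes, interacting pairs become edges, and a PageRank-style score highlights which solvent molecules are most central to an ion's local network.

    # Illustrative sketch of ranking solvent molecules around an ion with PageRank.
    # All values below are hypothetical; a real analysis would build the network
    # from full MD trajectories and chemically meaningful interaction criteria.
    import itertools
    import networkx as nx

    coords = {                      # molecule id -> 3D coordinates (nm), invented
        "ion": (0.0, 0.0, 0.0),
        "water1": (0.25, 0.0, 0.0),
        "water2": (0.0, 0.27, 0.05),
        "water3": (0.9, 0.9, 0.9),
        "tbp1": (0.3, 0.3, 0.0),
    }
    CUTOFF = 0.35                   # nm, stand-in for a real interaction criterion

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    G = nx.Graph()
    G.add_nodes_from(coords)
    for m1, m2 in itertools.combinations(coords, 2):
        if dist(coords[m1], coords[m2]) <= CUTOFF:
            G.add_edge(m1, m2)      # edge = "these two molecules interact"

    # PageRank scores highlight molecules most central to the ion's local network.
    scores = nx.pagerank(G)
    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score:.3f}")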


News Article | October 5, 2016
Site: www.nature.com

The ExbB construct with and without a C-terminal 6×His tag was subcloned into pET26b (Novagen). ExbD was subcloned into pACYCDuet-1 vector (Novagen) with an N-terminal Strep-tag and a C-terminal 10×His tag. ExbD was also subcloned into a pCDF-1b vector (Novagen) containing a C-terminal TEV protease site followed by a 10×His tag. An ExbD construct containing a C-terminal TEV protease site (preceded by a Gly-Gly-Gly linker for efficient digestion by TEV protease) followed by a 10×His tag was constructed by deletion of the sequence encoding the periplasmic domain of ExbD (residues 50–141). TonB was cloned into a pACYCDUET-1 vector with an N-terminal 10×His tag followed by a TEV protease site. Mutants of TonB (C18A), ExbD (D25A, N78C and E113C), and ExbB (C25S) were prepared by site-directed mutagenesis (primer sequences for all cloning and mutagenesis experiments are available upon request). The sequences of all plasmid constructs and mutations were verified by sequence analysis (Macrogen USA and Eurofins Genomics GmbH). Expression of ExbB with a C-terminal 6×His tag was performed by transforming E. coli BL21(DE3) cells (NEB) with the pET26b/ExbB vector. Co-expression was performed by co-transforming E. coli BL21(DE3) cells with the respective ExbB, ExbD, and/or TonB plasmids. For all transformations, cells were plated onto LB agar plates supplemented with appropriate antibiotics. Colonies were then used for a starter culture to inoculate 12 flasks containing either 1 l 2×YT medium (Ton subcomplex) or SelenoMet medium supplemented with l-methionine at 40 mg/l (Molecular Dimensions) (Ton complex), with appropriate antibiotics. Cultures were grown at 37 °C with shaking at 220 r.p.m. until they reached an OD of 0.5–1.0, induced with isopropyl β-d-1-thiogalactopyranoside (IPTG) to 0.1 mM final concentration, and then allowed to continue to grow overnight at 28 °C. For selenomethionine-substituted samples for experimental phasing, B834(DE3) cells (NEB) were co-transformed with pET26b/ExbB and pCDF-1b/ExbD plasmids. Single colonies were used to inoculate 12 flasks containing 1 l SelenoMet medium (Molecular Dimensions) supplemented with 40 mg/ml l-selenomethionine and appropriate antibiotics. Cultures were grown at 37 °C with shaking at 220 r.p.m. until they reached an OD of 0.5–1.0, induced with IPTG to 0.1 mM final concentration, and then allowed to continue to grow overnight at 28 °C. Cells were harvested and used immediately or stored at −80 °C. For purification, cells were resuspended in either 1×PBS (Ton subcomplex) or TBS (Ton complex) supplemented with 100 μM 4-(2-aminoethyl)benzenesulfonyl fluoride (AEBSF), 100 μM DNase, and 50 μg/ml lysozyme, and disrupted with two passages through an EmulsiFlex-C3 (Avestin) operating at ~15,000 p.s.i. Membranes were pelleted by ultracentrifugation in a Type 45 Ti Beckman rotor at 200,000g for 1 h at 4 °C. Membranes were then resuspended in 1×PBS or TBS using a dounce homogenizer and solubilized by the addition of Triton X-100 (Ton subcomplex) or DDM (Anatrace) (Ton complex) to a final concentration of 1% by stirring at medium speed for 1 h to overnight at 4 °C. Insoluble material was pelleted by ultracentrifugation in a Type 45 Ti Beckman rotor at 200,000g for 1 h at 4 °C and the supernatant was used immediately. Immobilized metal affinity chromatography (IMAC) was performed on an AkTA Purifier (GE Healthcare) using a 15-ml Ni-NTA agarose column (Qiagen) equilibrated with 1×PBS or TBS supplemented with 0.1% Triton X-100 or 0.1% DDM. 
The supernatant was supplemented with 10 mM imidazole and loaded onto the column. The column was washed in three steps with 1×PBS or TBS supplemented with 20, 40 and 60 mM imidazole, respectively, and eluted with 1×PBS or TBS supplemented with 250 mM imidazole in 2-ml fractions. Fractions were analysed by SDS–PAGE and those fractions containing the complex were pooled. To remove the 10×His tag, TEV protease was added to the sample at 0.1 mg/ml final concentration and rocked overnight at 4 °C. For the Ton complex, the sample was then diluted 2–3 times with 25 mM HEPES, pH 7.3, and 0.1% DDM and loaded onto an anion exchange 6-ml ResourceQ column (GE Healthcare). Elution was performed with a 0–1 M NaCl gradient over 5 column volumes. For the Ton subcomplex, the sample was concentrated using an Amicon Ultra-15 Centrifugal Filter Unit with a 50-kDa MW cut-off (Millipore), filtered, and purified by size-exclusion chromatography using a Superdex 200 HL 16/600 column (GE Healthcare) at a flow rate of 0.5–1.0 ml/min. The buffer consisted of 20 mM HEPES-NaOH, pH 7.0, 150 mM NaCl, 0.01% NaN3, and 0.08% C E . For the Ton complex, eluted fractions were concentrated using an Amicon Ultra-15 Centrifugal Filter Unit with a 100-kDa MW cut-off (Millipore), and passed over a Superose 6HR 10/30 column (GE Healthcare) at a flow rate of 0.5 ml/min using 20 mM HEPES-NaOH, pH 7.0, 150 mM NaCl, and 0.05% DDM. Far-UV circular dichroism (CD) spectra (185–260 nm) were measured in 0.1 M NaPi, pH 7.0, and 0.03% DDM using quartz cuvettes with a 0.02–0.2 mm optical path length. The results were analysed using the DichroWeb package of programs [42] and different sets of reference proteins, including the SMP180 set of membrane proteins. The thermal stability of the complexes reconstituted into liposomes was assessed from the temperature dependence of the CD signal amplitude at 222 nm. Thermal melting was performed in a magnetically stirred 1-cm quartz cuvette containing 10 mM HEPES, pH 7.0, and 100 mM NaCl with a rate of temperature increase of 0.5 °C/min. Melting curves were normalized to the measured value of the molar ellipticity change at 10 °C. For crystallization, samples were concentrated to ~10 mg/ml and sparse matrix screening was performed using a TTP Labtech Mosquito crystallization robot with hanging drop vapour diffusion, and plates were incubated at 15–21 °C. Initially, many lead conditions were observed to produce crystals with hexagonal morphology; however, none diffracted to better than ~7 Å and most suffered from anisotropy. To avoid this packing, we performed reductive methylation of our samples before crystallization using the Reductive Alkylation Kit (Hampton Research), followed by an additional size-exclusion chromatography step. This led to a condition which produced diffraction spots to ~4 Å resolution. Further optimization and screening allowed us to grow crystals in 100 mM Na-acetate, pH 4.5, 100 mM MgCl2, and 25% PEG 400 that routinely diffracted to ~3.5 Å resolution or better. For heavy atom soaking, crystals were transferred to a drop containing 1 mM HgCl2 and incubated overnight at room temperature and then harvested directly from the soaking condition. The best native crystals for the ExbB–ExbD complex, however, were grown from 100 mM HEPES-NaOH, pH 7.0, 100 mM CaCl2, and 22% PEG MME 550 and diffracted to 2.6 Å resolution; these crystals were also used for heavy atom soaking experiments. 
Unfortunately, none of the heavy-atom-soaked crystals (nor the selenomethionine-substituted crystals) were useful for phasing owing to crystal pathologies, which we suspected were twinning-related. However, selenomethionine-substituted crystals of the ExbB –ExbD complex were obtained using 100 mM MES/imidazole, pH 6.5, 30 mM MgCl2, 30 mM CaCl2, 50% ethylene glycol, and 8% PEG 8000 and diffracted to 5.2 Å resolution with no twinning-related issues. Both native and selenomethionine-substituted crystals were harvested directly from the crystallization drops. Screening for diffraction quality was performed at the GM/CA-CAT and SER-CAT beamlines at the Advanced Photon Source at Argonne National Laboratory and at beamlines 5.0.1 and 8.2.1 at the Advanced Light Source at Lawrence Berkeley National Laboratory. Final datasets were collected at the SER-CAT beamline and all data were processed using either HKL2000 [43] or Xia2 [44]. A summary of the data collection statistics can be found in Supplementary Table 1. The presence of both components of the Ton subcomplex within the crystals was confirmed by SDS–PAGE and mass spectrometry analyses of harvested crystals. For phasing the ExbB–ExbD complex structure, three datasets were collected on selenomethionine-substituted crystals of the ExbB –ExbD complex at a wavelength of 0.979 Å. The data were processed with Xia2 [44] and, based on non-isomorphism, one dataset was removed. The final two datasets were processed together in space group P4 2 2 to a final resolution of 5.2 Å. Selenium sites (35 total) were located using HKL2MAP [45] after 5,000 tries within SHELXD at a resolution range of 20–6 Å. The sites were then fed into AutoSol (PHENIX) [46], which removed one site, producing a phase-extended, density-modified electron density map into which we could build an initial poly-alanine model. Five-fold symmetry was clearly observed, with each monomer consisting of very elongated α-helices, and directionality was determined on the basis of the predicted topology of ExbB, which contains a single large cytoplasmic domain. This model was then used as a search model to solve the native and Hg-soaked structures by molecular replacement using PHASER/PHENIX [46, 47] and the sequence docked on the basis of anomalous peaks from the SeSAD dataset. The ExbB–ExbD complex was solved in space group P2 to 2.6 Å resolution with Rwork/Rfree values of 0.21/0.26 and the Hg-soaked structure in space group P2 2 2 to 3.5 Å resolution with Rwork/Rfree values of 0.25/0.30. All model building was performed using COOT and subsequent refinement done in PHENIX [46]. r.m.s.d. analysis was performed within PyMOL (Schrödinger). Electrostatic surface properties (calculated using the Linearized Poisson-Boltzmann Equation mode with a solvent radius of 1.4 Å), including generation of the electric field lines, were analysed and visualized using the APBS plugin within PyMOL (Schrödinger). Buried surface area was calculated using the PDBePISA server [48]. Structure-related figures were made with PyMOL (Schrödinger) and Chimera [49] and annotated and finalized with Adobe Photoshop and Illustrator. Coordinates and structure factors for the ExbB/ExbD complexes have been deposited into the Protein Data Bank (PDB accession codes 5SV0 and 5SV1). For 2D crystallization experiments, the Ton subcomplex (ExbB–ExbD) was extracted and purified by IMAC as previously described. The sample was passed over a Superose 12 HR 10/30 column using 20 mM Tris-HCl, pH 7, 150 mM NaCl, 0.01% NaN3, and 0.035% Triton X-100. 
The purified complex was then mixed with a solution stock of E. coli polar lipid (Avanti Polar Lipids, Inc.) at 10 mg/ml in 2% Triton X-100, to reach final concentrations of 0.5–1.0 mg/ml protein and 0.1–0.4 mg/ml lipid. The lipid-protein-detergent sample solutions were placed into Mini Slide-A-Lyzer dialysis devices (Pierce) with a 20-kDa MW cutoff, and dialysed in 1 l of 25 mM Tris-HCl, pH 7.0, 150 mM NaCl, and 0.01% NaN3 at 4 °C. Aliquots of dialysed samples were observed periodically by electron microscopy to monitor the formation of 2D crystals. Sample preparation for electron microscopy was carried out by applying a 5-μl drop of protein-lipid material on a glow-discharged carbon-coated electron microscopy grid. Staining was performed by addition of 1% (w/v) uranyl acetate and incubation for 1 min. Grids were then imaged on a Tecnai G2 200 LaB6 electron microscope operating at 200 kV at the Institut de Microbiologie de la Méditerranée. Images were recorded with a 2K Eagle CCD camera. The best 2D crystals were selected through observation of the power spectrum of the images using ImageJ software [41]. Selected images were processed using the IPLT Correlation Averaging suite program [50]. A filtered image was generated by optical filtering of the low-resolution spots, and padded to contain only 4–6 unit cells. The padded image was cross-correlated with the original large image. The positions of the cross-correlation peaks were determined and used to extract sub-images that were summed to generate an average image of the 2D unit cell. Site-directed spin labelling was used to covalently attach the spin label (1-oxyl-2,2,5,5-tetramethyl-∆3-pyrroline-3-methyl) methanethiosulfonate (MTSL) (Toronto Research Chemicals) to Cys25 on ExbB and to cysteines engineered at positions 78 and 113 on ExbD (N78C, E113C; ExbD constructs were in the pACYC vector containing an N-terminal Strep-tag and a C-terminal 10×His tag for the Ton subcomplex, and in the pCDF-1b vector for the Ton complex). For labelling with MTSL, samples were first incubated with 2–10 mM dithiothreitol (DTT) for 1–2 h and the DTT then removed by passage over a HiTrap desalting column (GE Healthcare) or during anion exchange (Ton complex). Samples were then incubated with a 10× molar excess of MTSL overnight at 4 °C and then passed over a Superose 6HR 10/30 gel filtration column (GE Healthcare) using 20 mM HEPES-NaOH, pH 7.5, 200 mM NaCl, 0.08% C E or 0.03% DDM (Ton subcomplex); or 20 mM HEPES-NaOH, pH 7.0, 150 mM NaCl, and 0.05% DDM (Ton complex). For DEER measurements, the samples were diluted with D2O to a final concentration of 30% and cryoprotected with 10% v/v d8-glycerol before being flash frozen in liquid nitrogen. Continuous wave (CW) electron paramagnetic resonance (EPR) experiments were carried out at room temperature on a bench-top X-band MiniScope MS 400 (Magnettech by Freiberg Instrument) at 9.5 GHz with 2.5 mW microwave power, 15 mT sweep width and 0.15 mT modulation amplitude. Spin labelling efficiency was calculated from the second integral of the derivative spectra compared to a standard spin concentration of 100 μM (Tempol in water). The ExbB native cysteine C25 was labelled with a 50% efficiency, while the ExbD mutants were labelled with efficiencies >80%. DEER measurements were initially performed at ETH Zurich on a commercial Bruker ELEXSYS-II E580 Q-band spectrometer (34–35 GHz) and later on a Bruker ELEXSYS E580Q-AWG dedicated pulse Q-band spectrometer operating at 34–35 GHz. 
Both spectrometers were equipped with a TWT amplifier (150 W) and a home-made rectangular resonator (from ETH Zurich) enabling the insertion of 30–40 μl sample volume in quartz tubes with 3 mm outer diameter (ref. 51). Dipolar time evolution data were acquired using the four-pulse DEER experiment at 50 K. All pulses were set to be rectangular with 12 ns length, with the pump frequency at the maximum of the echo-detected field-swept spectrum, 100 MHz higher than the observer frequency. Deuterium nuclear modulations were averaged by increasing the first interpulse delay by 16 ns for 8 steps as previously described (ref. 51). The background of the normalized DEER primary data (V(t)/V(0)) was fitted with optimized dimensions from 2.5 to 3.2 and the resulting normalized secondary data (F(t)/F(0)) were converted by model-free Tikhonov regularization to distance distributions with the software DeerAnalysis2015 (refs 52, 53). The simulation of the possible spin label rotamers populated at selected positions in the protein was performed using the Matlab program package MMM2015.1 with the MTSL ambient temperature library (ref. 54).

The ExbB–ExbD complex (ExbD was in the pACYC vector containing an N-terminal strep-tag and a C-terminal 6×His tag) was expressed and purified as described earlier. To prepare the sample for crosslinking, the sample was incubated at 4 °C with 5 mM DTT for at least 1 h. The DTT was then removed using a desalting column in 20 mM HEPES, pH 7.0, 150 mM NaCl, and 0.1% DDM. The crosslinker 1,8-bismaleimido-diethyleneglycol (BM(PEG)2) (Pierce) was added at a final concentration of 0.2 mM and the reaction was incubated at 4 °C overnight. The sample was concentrated and passed over a Superose 6HR 10/30 gel filtration column using 20 mM HEPES-NaOH, pH 7.0, 150 mM NaCl, and 0.035% DDM on an ÄKTA Purifier system (GE Healthcare). The results were visualized by SDS–PAGE analysis.

Protein complexes were reconstituted into liposomes by dialysis of the protein–lipid–detergent mixture. Lipids (DOPG, DOPC and DOPE) dissolved in chloroform were mixed in a molar ratio of 2:3:5. Chloroform was removed by vortexing in a stream of nitrogen gas in a glass tube followed by drying in vacuum for 2–3 h. The lipid film was hydrated in 1 ml TN buffer (10 mM Tris-HCl, pH 7.5, 50 mM NaCl), followed by five cycles of freeze–thaw and sonication using a water bath sonicator until the suspension of lipids became clear (10–15 min). For proteoliposome preparation, small unilamellar vesicles (SUVs) were mixed with octylglucoside (final concentration, 2%) and then proteins added to achieve a molar ratio of total lipid to protein of ∼500–2,000 mol/mol. After 1 h incubation on ice, the lipid–protein–detergent mixture was dialysed into 10 mM Tris-HCl, pH 7.5, 0.3 M sucrose, and 50 mM KCl for 30–40 h using a dialysis membrane with a MW cutoff pore size of 10 kDa.

Mueller-Rudin type planar bilayer membranes were formed on a 0.2-mm diameter aperture in a partition that separates two 1-ml compartments, using a mixture of lipids, DOPG, DOPC and DOPE, at a molar ratio of 2:3:5 (10 mg/ml) in n-decane, applied by a brush technique (ref. 55). The aqueous solution in both compartments consisted of 2 mM KPi, pH 7.0, and 0.1 M and 0.4 M KCl in the cis- and trans-compartments, respectively. To study the pH dependence of channel activity, bathing solutions were buffered with 2 mM Na-acetate (pKa 4.8), Na-cacodylate (pKa 6.2), and Tris (pKa 8.3). The pH of the bathing solution was changed by adding 10–20 μl 0.1 M HCl or KOH.
The cis-side of the planar bilayer is defined as that to which the electrical potential is applied. Proteoliposomes, 0.1–2 μl, were added to the trans-compartment, and the solutions were stirred until the transmembrane current appeared. A large concentration of an osmolyte inside the liposomes and the transmembrane KCl concentration gradient caused proteoliposome fusion with the pre-formed planar lipid bilayer membrane. The transmembrane current was measured in voltage-clamp mode with Ag/AgCl electrodes and agar bridges, using a BC-525C amplifier (Warner Instruments). The single-channel conductance of the ExbB–ExbD complexes was measured in symmetrical salt conditions: 0.1 M KCl solution, pH 7.5, at a holding potential of +50 or −50 mV. For ion selectivity experiments, the zero-current (reversal) potential was determined from volt-ampere characteristics measured in asymmetric salt conditions. Relative cation/anion permeability was calculated using the Goldman-Hodgkin-Katz equation (ref. 56).
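For context, the Goldman-Hodgkin-Katz voltage equation referred to above relates the zero-current potential to the permeability-weighted ion concentrations on the two sides of the bilayer. The form below is the standard textbook statement for a K+/Cl- system, given only as an illustration; the side labels 1 and 2 and the symbol V_rev are ours, not notation from the paper, and how sides 1 and 2 map onto the cis- and trans-compartments depends on the recording convention.

    V_{\mathrm{rev}} = \frac{RT}{F}\,\ln\!\left(\frac{P_{K}\,[\mathrm{K}^{+}]_{2} + P_{Cl}\,[\mathrm{Cl}^{-}]_{1}}{P_{K}\,[\mathrm{K}^{+}]_{1} + P_{Cl}\,[\mathrm{Cl}^{-}]_{2}}\right)

With 0.1 M KCl in one compartment and 0.4 M in the other, a measured zero-current potential can be rearranged to give the permeability ratio P_K/P_Cl, which is the relative cation/anion permeability reported for the channel.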


News Article | December 6, 2016
Site: www.theguardian.com

Tech giants are jockeying to be the first to hit a 100% renewable energy goal. Google, which has invested in solar and wind energy for a decade, intends to get there by 2017. Google is the largest corporate buyer of renewable energy, and plans to buy enough wind and solar energy to offset all the electricity used by its 13 data centers and offices in 150 cities worldwide, the company said Tuesday. Apple seems close to reaching its own 100% goal as well. The company said it achieved 93% in 2015. An Apple spokeswoman said the company has yet to set a year for when it would likely cross the finish line. For Google, hitting the 100% target means that for every unit of electricity it consumes – typically from coal or natural gas power plants – it would buy a unit of wind or solar electricity. The company wouldn’t say how much electricity it will need to have purchased by the end of next year to reach its 100% goal, but did say that the amount would exceed the 5.7 terawatt-hours of solar and wind energy that it bought in 2015. “We want to run our business in an environmentally responsible way, and energy consumption is the largest portion,” said Neha Palmer, head of energy strategy and development at Google’s Global Infrastructure Group. Google is taking a big leap to that 100% goal, having achieved just 37% in 2014. The company has invested in renewable energy ever since it kicked off the construction of a 1.6-megawatt solar energy system in 2006. Since 2010, it’s signed 2.6 gigawatts worth of solar and wind contracts. The tech giant isn’t alone in setting the 100% target. A global campaign to promote 100% renewable energy use in the business world includes Ikea, Facebook, Starbucks and Johnson & Johnson. Businesses, like homeowners, have historically relied on their local utilities for power. In 2015, about 67% of the electricity generated in the US came from fossil fuels. Businesses would have to build and run their own solar or wind farms if they wanted to hit their 100% goals more quickly, but that would require hefty investments and expertise they don’t have. As a result, when companies set strong renewable energy goals, they often reach them by buying enough solar and wind electricity or renewable energy credits to offset their use of electricity from coal or natural gas power plants. Some companies, such as Google and Microsoft, have invested in solar and wind power plants to help increase the amount of renewable energy on local electric grids. Or, using their clout as large energy customers, they work on convincing their local utilities to invest in renewable energy. Once extremely expensive, solar and wind have seen their prices fall significantly in the past 10 years. Government tax breaks have also helped to make solar and wind more affordable to both businesses and homeowners. The price of building large solar farms that have the scale to supply utilities dropped 12% in 2015, according to Lawrence Berkeley National Laboratory, part of the US Department of Energy. Market analysts expect the prices to continue to fall even with the abundance of cheap coal and natural gas. The pressure on countries around the world to meet their targets from the Paris climate agreement, which went into effect last month, will make renewable energy attractive, said Bloomberg New Energy Finance. The research firm has projected that solar and wind will become the cheapest sources of electricity for “most of the world” after 2030.
“Renewable energy has become incredibly cost effective,” said Dan Kammen, a professor of energy at the University of California-Berkeley. “There is no company on the planet that can’t make a 100% energy target a viable, cost-effective strategy.” Yet most businesses have yet to set a renewable energy target. That’s because cost isn’t the only issue. Big companies tend to have an advantage over smaller firms because they have the resources to understand the energy markets and to negotiate contracts to buy renewable energy, said Colin Smith, solar analyst for GTM Research. “You’re talking about very complex deals and arrangements with unique risk profiles that most companies aren’t fully well equipped to understand,” he said. Google currently pays for wind and solar power from 20 renewable energy projects in the US and abroad, in places such as Sweden and Chile. However, it doesn’t limit itself to buying solar and wind only in regions where it operates, and sees itself as a champion of reducing the emissions produced by the electric industry, which is the largest source of greenhouse gas emissions in the US. “Wind and solar developers couldn’t get financing without us guaranteeing to pay for the power in the long term,” said Palmer. Setting the 100% renewable energy goal is not the only way to reduce a company’s carbon footprint, said Dan Reicher, executive director of the Steyer-Taylor Center for Energy Policy and Finance at Stanford University. Businesses should first look at cutting down on their energy use and becoming more energy efficient, said Reicher. That will then reduce the amount of renewable energy they need to procure. “Energy efficiency tends not to sound as sexy, just as putting solar panels on your roof is more interesting than putting an efficient furnace in your basement,” he said. “But from an economic and environmental perspective, you want to start with energy efficiency.” Reicher also noted that electricity isn’t always the biggest source of energy for a business. FedEx, for example, uses far more energy in transportation, he said. “Often a big chunk of a company’s business goes beyond electricity,” said Reicher. “What are they doing on the industrial side, on the transportation side, for heating and cooling?” The election of Donald Trump as president has renewable energy supporters worried that the new administration will stop supporting the policies that have helped make solar and wind competitive against coal and natural gas. Trump has vowed to revive the coal industry, which has seen half a dozen bankruptcies in recent years as it struggles to compete with cheaper natural gas. But whether the new administration will create anti-renewable energy policies remains to be seen, Reicher said. Google doesn’t anticipate changes to its renewable energy initiatives in the near future. “The results of any one election won’t change our plans,” said Palmer.


News Article | October 4, 2016
Site: www.chromatographytechniques.com

When the local water management agency closes your favorite beach due to unhealthy water quality, how reliable are the tests they base their decisions on? As it turns out, those tests, as well as the standards behind them, have not been updated in decades. Now scientists from Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a highly accurate, DNA-based method to detect and distinguish sources of microbial contamination in water. The new method, which uses the award-winning PhyloChip, a credit card-sized device that can detect the presence of more than 60,000 species of bacteria and archaea, was found to be more sensitive than conventional methods at assessing health risks. In tests at the Russian River watershed in Northern California, the Berkeley Lab researchers found instances where their method identified potential human health risks that conventional fecal indicator tests had failed to detect. Conversely, they also found instances where the conventional tests flagged bacteria that weren’t likely risks to human health. The research was led by Eric Dubinsky and Gary Andersen, microbial ecologists at Berkeley Lab, and was published recently in the journal Water Research in an article titled, “Microbial source tracking in impaired watersheds using PhyloChip and machine-learning classification.” Steven Butkus of the North Coast Regional Water Quality Control Board, which supported part of the research, was also a co-author. “With the PhyloChip, in an overnight test we can get a full picture of the microorganisms in any given sample,” Dubinsky said. “Instead of targeting one organism, we’re essentially getting a fingerprint of the microbial community of potential sources in that sample. So it gives us a more comprehensive picture of what’s going on. It’s a novel way of going about source tracking.” What local water agencies currently do is collect water samples, culture the bacteria overnight, and then check the growth level of two types of bacteria, E. coli and Enterococcus, which are presumed to be indicators of fecal contamination. However, this method doesn’t distinguish between sources. The bacteria could have come from humans, cows, ducks, sewage, or even decaying vegetation. “These tests have been used for decades and are relatively primitive,” Dubinsky said. “Back in the 1970s when the Clean Water Act was developed and we had sewage basically flowing into our waters, these tests worked really well. Epidemiological studies showed an association of these bacteria with levels of illness of people who used the water. These bacteria don’t necessarily get you sick, but they’re found in sewage and fecal matter. That’s why they’re measured.” As pollution from point sources—single identifiable sources such as sewage—has been cleaned up over time, the emerging concern has become what are known as nonpoint sources, or diffuse sources, throughout the watershed, such as agricultural lands. “The picture is much more complicated now than it was back then, when the concern was really point sources,” Dubinsky added. The PhyloChip, which was developed by Andersen and several other Berkeley Lab scientists, has been used for a number of medical, agricultural, and environmental purposes, including understanding air pollution, the ecology of coral reefs, and environmental conditions of the Gulf of Mexico after the BP oil spill. With 1 million probes, it identifies microbes based on variations of a specific gene, with no culturing needed.
“About seven years ago we started doing water quality work, and we realized the PhyloChip could provide a fundamentally new and improved method for doing source tracking,” Andersen said. Determining the source of any particular pathogen is not a straightforward task. In most cases, a single microbe is not a definitive marker of an animal or other source. “A microbial community is complex,” Dubinsky said. “A cow may have 1,000 different organisms.” So Andersen and Dubinsky had an idea. “We had Laleh Coté, an intern at the time and now a Lab employee, run around and basically collect poop from all sorts of animals,” said Andersen. “What we’ve done since then is develop a reference library of the microbial communities that occur in different types of poop—we have cows, horses, raccoons, humans, different types of birds, pigs, sea lions, and other animals, as well as sewage and septage. We used that library to develop a model.” The new method takes the unknown sample and compares it against this microbial reference library. “We’ve used the PhyloChip in a way that it hasn’t been used before by using machine learning models to analyze the data in order to detect and classify sources,” Andersen said. “It’s essentially giving you a statistical probability that a microbial community came from a particular source.” They validated their method by comparing it to about 40 other methods of microbial source tracking in a California study. “We were the only method that could detect all sources and get them right,” Dubinsky said. If the source is an animal that is not in the reference library, their method can still point you in the right direction. “For example, in that study, one sample was a chicken,” said Dubinsky. “We hadn’t analyzed chickens, but we had geese, gulls, and pigeons. We were still able to determine that the sample was a bird.” In extensive testing throughout the Russian River watershed, which is out of compliance with the Clean Water Act, the Berkeley Lab researchers found widespread contamination by human sources close to areas where communities rely on aging septic tanks. They also found significant human contamination immediately after a weekend jazz festival, whereas testing by conventional methods yielded a much weaker signal after a time lag of a couple of days. “Our method is more sensitive to human contamination than those fecal indicator tests are,” Dubinsky said. The team is now working on characterizing the microbial community of naturally occurring E. coli and Enterococci, using Hawaii with its warm waters as a testing ground. “They can occur naturally in sediments and decaying kelp and vegetation,” Dubinsky said. “It is known that they do, but nobody has developed a test to definitively show that.” The researchers will also be able to study whether climate affects microbial communities. “Does a Hawaiian cow look like a California cow in terms of fecal bacteria composition? That’s a good question and something we’ll be able to find out,” he said. They are working closely with the U.S. Environmental Protection Agency (EPA), which is looking at new technologies for what it calls “next generation compliance.” Ultimately the goal is to develop their method—possibly with a downsized version of the PhyloChip—to the point where it can be universally used in any location and by non-experts.
Dubinsky says the method should also be useful with the burgeoning issue of algal blooms, to understand, for example, the processes by which they form, the microbial dynamics before and after a bloom, and specifically, whether runoff from livestock production in the Midwest is related to algal blooms in the Great Lakes, a question they’re investigating with the EPA.
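As an illustration of the machine-learning classification Andersen describes, the sketch below scores an unknown sample's community fingerprint against a labelled reference library. This is a hypothetical workflow, not the authors' actual pipeline: the array sizes, the source labels, and the choice of a random-forest classifier are assumptions made for the example.

    # Minimal sketch of reference-library source classification (assumed workflow).
    # X_ref: rows = fecal/sewage reference samples, columns = PhyloChip probe signals.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X_ref = rng.random((60, 1000))                      # placeholder reference fingerprints
    y_ref = np.repeat(["cow", "human", "gull", "sewage"], 15)

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X_ref, y_ref)

    x_unknown = rng.random((1, 1000))                   # fingerprint of a water sample
    proba = clf.predict_proba(x_unknown)[0]
    for source, p in sorted(zip(clf.classes_, proba), key=lambda t: -t[1]):
        print(f"{source}: {p:.2f}")                     # probability the community matches each source

The output is exactly the kind of "statistical probability that a microbial community came from a particular source" quoted above, and a sample from a source not in the library (such as the chicken example) would simply score highest against the most similar reference group.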


News Article | October 26, 2016
Site: www.nature.com

List of discoveries shows US contributions have declined, but Japanese, Russian and European work is on the rise. When it comes to discovering nuclear isotopes, retired physicist Gottfried Münzenberg is top, with 222. His colleague Hans Geissel, from the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany, is next with 211. Then comes Francis Aston, the British chemist who won the 1922 Nobel Prize in Chemistry, with 207. These provisional rankings have been assembled by Michael Thoennessen, a physicist at the National Superconducting Cyclotron Laboratory at Michigan State University in East Lansing. With the help of several undergraduate students, he has assembled a database of the almost 3,100 known nuclear isotopes — atoms that share the same number of protons but have varying numbers of neutrons. He lists the discovery of each by researcher, the lab at which they worked and the method by which they produced the isotope. The vast majority of these isotopes only exist naturally for fleeting moments inside stars, formed during the production of heavy elements. They otherwise never exist on Earth, except for a few nanoseconds in elite nuclear-physics labs. The International Union of Pure and Applied Chemistry, headquartered in Zurich, Switzerland, assigns the official credit for discovering elements, but Thoennessen is now doing the same for all the isotopes of those elements. "I don't think it matters so much who gets the credit, but historically it's important to identify which countries and labs are driving the field," he says. The database lets Thoennessen rank labs as well as people, and track how different countries' rising and falling investments in nuclear technology have affected where and how isotopes are discovered; an animated chart of known isotopes shows how many were found in quick succession with each new wave of technology. Top of the lab list is Lawrence Berkeley National Laboratory in Berkeley, California, with 634 isotopes. GSI Darmstadt is second with 368; the University of Cambridge, UK, is third with 222, its discoveries made mostly in the days of Aston; and the Joint Institute for Nuclear Research in Dubna, Russia, is fourth with 215. Berkeley may reign supreme in terms of sheer numbers, but the trend over time tells a different story. In the 1980s and 1990s the lab failed to upgrade its equipment and so fell behind other labs. It also lost some credibility in 2002, when allegations surfaced that physicist Victor Ninov had faked the discovery of elements 116 and 118 there. These days, most new finds come from GSI Darmstadt, Dubna and other locations, such as the Radioactive Isotope Beam Factory (RIBF) in Wako, part of Japan's RIKEN national network of labs. Although the decline of Berkeley means that the United States is losing ground, in the longer term Thoennessen expects the country to maintain its supremacy with the Facility for Rare Isotope Beams (FRIB), construction of which is expected to begin in 2012 at Michigan State University. Not everyone agrees that this ranking is the best way to capture the science of the field. Krzysztof Rykaczewski, a nuclear physicist at Oak Ridge National Laboratory in Tennessee, who according to Thoennessen's list has discovered 60 isotopes, would like to see a ranking include not only discoveries, but also the first person to study the properties and nuclear structure of the atoms. "Identifying is only the first step," he says.
But Patrick Regan, a physicist at the University of Surrey in Guildford, UK, who has 20 isotope discoveries to his name, thinks that the discovery itself does reflect something important. "I'm proud of each one of those 20," he says, "each of them is like a child to science." Thoennessen set the bar high when deciding what counted as a discovery, in the hope of improving standards in the field. He considers an isotope "discovered" only if its mass and charge have been identified in a peer-reviewed paper. As he dug into the literature, he found that some generally accepted isotopes in fact appeared only in less rigorously vetted sources, such as PhD theses, conference proceedings or unpublished 'private communications'. He discounted such discoveries. "There are numbers people take for granted where one really shouldn't," says Thoennessen. One person who is pleasantly surprised by the ranking is Münzenberg. He was aware that he had made a large contribution, including discovering elements 107–112 and some of their isotopes, but he hadn't realized that he would be top in the world. "I didn't know how many we'd made," he says. GSI Darmstadt is expected to announce a further 60 isotopes in the next year, and as equipment Münzenberg designed is still in use he may well remain ahead for the foreseeable future.
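For readers curious how such rankings fall out of the database, the sketch below tallies discoveries per researcher and per lab from a handful of records. The record format is assumed, and the entries are just a tiny illustrative stand-in for the roughly 3,100-entry database described above.

    # Illustrative sketch: ranking discoverers and labs from isotope records.
    from collections import Counter

    isotopes = [
        {"isotope": "265Hs", "discoverer": "G. Muenzenberg", "lab": "GSI Darmstadt"},
        {"isotope": "20Ne",  "discoverer": "F. Aston",       "lab": "University of Cambridge"},
        {"isotope": "257Rf", "discoverer": "A. Ghiorso",     "lab": "Lawrence Berkeley National Laboratory"},
    ]

    by_discoverer = Counter(rec["discoverer"] for rec in isotopes)
    by_lab = Counter(rec["lab"] for rec in isotopes)

    print(by_discoverer.most_common(3))   # top researchers by number of discoveries
    print(by_lab.most_common(3))          # top labs by number of discoveries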


News Article | December 14, 2016
Site: www.eurekalert.org

Findings by Berkeley Lab researchers mean the Arctic may be even less of a carbon sink than previously thought.

When the frozen Arctic tundra starts to thaw around June of each year, the snow melting and the ground softening, the soil may release a large pulse of greenhouse gases, namely, carbon dioxide and methane. Little has been known about such releases. Now scientists at the U.S. Department of Energy's (DOE) Lawrence Berkeley National Laboratory, in collaboration with a team of other scientists taking measurements both in the field and in the lab, have quantified the scale of such releases and explained the underlying mechanisms for the phenomenon. Their study was based on a spring pulse in northern Alaska, documented in 2014, that included CO2 emissions equivalent to 46 percent of the net CO2 that is absorbed in the summer months and methane emissions that added 6 percent to summer fluxes. What's more, recent climate trends may make such emissions more frequent, the scientists conclude. "We can see the effects of climate change happening more rapidly in the Arctic than in any other part of the world," said Berkeley Lab scientist Naama Raz-Yaseef. "So we need to understand the processes that are occurring and what to expect in the future. The amount of CO2 and methane (CH4) in the atmosphere determines the greenhouse effect--so we need to know more about these sources." Their study was recently published in the journal Geophysical Research Letters in a paper titled, "Large CO2 and CH4 Emissions from Polygonal Tundra During Spring Thaw in Northern Alaska." Raz-Yaseef, an ecohydrologist, was the lead author. Co-authors were Berkeley Lab researchers Margaret Torn, Yuxin Wu, and Tim Kneafsey; Dave Billesbach of the University of Nebraska; Anna Liljedahl and Vladimir Romanovsky of the University of Alaska; David Cook of Argonne National Laboratory; and Stan Wullschleger of Oak Ridge National Laboratory. The study was a project of DOE's Next-Generation Ecosystem Experiment (NGEE-Arctic), which seeks to gain a predictive understanding of the Arctic terrestrial ecosystem's feedback to climate. The team used two towers 4 meters high to collect gas flux readings. The towers are located about 5 km apart near Barrow, Alaska, the northernmost town in the U.S., with one tower operated by NGEE and the other by DOE's Atmospheric Radiation Measurement (ARM) Climate Research Facility. "Typically we just measure what happens during summer," Raz-Yaseef said. "We assume nothing happens during winter because it's frozen, so there's no biological activity. But in 2014 we measured a big flux of carbon and methane emitted from the ground, at the very beginning of thaw and snowmelt. At first we didn't know if it was real or a measurement error. Then we confirmed it with another tower 5 km away (the ARM tower), which measured a very similar flux of the same size during the same period." That spring they measured a total of three pulses, the longest lasting five days. These findings are important in understanding the carbon balance of ecosystems in the Arctic. So far, the Arctic is considered a carbon sink, meaning it absorbs more CO2 than it emits on an annual basis, thanks mainly to the vegetation that grows in the summer. (Oceans and forests are far larger carbon sinks.) "Given our findings, the Arctic is an even smaller carbon sink than we thought since during some years nearly half of the summer uptake of CO2 is offset with these spring emissions," Raz-Yaseef said.

Why is the soil releasing gases?
Soils everywhere are full of microbes, and microbial activity emits gases. What makes the Arctic different is the way the ground freezes. As temperatures start to drop in the late fall, the soil freezes from the permafrost up and from the surface down. The active layer in between is trapped in the middle. "For about a month after the surface freezes, the middle layer is still active," Raz-Yaseef explained. "The temperature is still around 0 °C, so microbial activity continues to take place, producing carbon and methane, but it's trapped beneath the surface ice. It can't be emitted, so it's stored throughout the winter." When the ice cap thaws in spring, the gases escape to the atmosphere. The scientists confirmed these observations in several ways. First, they measured soil temperatures every 5 minutes year-round, at 10-cm intervals down to the permafrost 1.5 meters below ground. Second, they extracted large, frozen soil cores that were transported to the laboratory for experiments. "(Berkeley Lab scientist) Yuxin Wu thawed a core under controlled conditions, but just like in nature," Raz-Yaseef said. "Just when the surface ice melted in the lab, he measured a large flux of carbon and methane. This is formed from gases trapped in the soil over winter." Separately, Berkeley Lab scientist Timothy Kneafsey took CT scans of hundreds of frozen cores and found gas-rich channels and pockets near the surface, which the scientists posited could serve as accumulation zones and pathways for gas flow.

Spring pulses not annual but may become more frequent

In fact, it's these channels and pockets that enable the emission pulses, the scientists conclude in their paper. The spring pulses were not detected in 2013 or 2016, and 2015 could not be measured due to equipment malfunction. Since it's not an annual event, they analyzed the conditions under which the pulses might occur. "The process is more likely to occur when there are events of rain on ice," Raz-Yaseef said. "When it thaws and freezes repeatedly, that produces cracks in the frozen soil, and through these cracks the gas can be emitted." During warmer years, the scientists expect the spring pulses to be more frequent. "We expect there will be more gas built up due to longer and warmer fall seasons and more frequent pulse events due to more rain on ice in the spring," Raz-Yaseef said. One thing that's for sure is that the scientists now know not to ignore gas emissions in early spring. "Now we'll put the towers up much, much earlier, just in case there's another event, we'll be sure to catch it," Raz-Yaseef said. "It's one of those things, once you're aware of it, you open your eyes and look for it." NGEE-Arctic is supported by the DOE Office of Science. ARM is a DOE Office of Science User Facility. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
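To see what the 46 percent figure reported above implies for the carbon balance, here is a back-of-the-envelope calculation in arbitrary units; these are illustrative numbers, not the study's measured fluxes.

    # Back-of-the-envelope illustration of the spring-pulse offset (arbitrary units).
    summer_net_uptake = 100.0
    spring_pulse = 0.46 * summer_net_uptake        # spring CO2 pulse = 46% of summer uptake
    effective_sink = summer_net_uptake - spring_pulse
    print(f"Effective sink: {effective_sink:.0f} units, "
          f"i.e. {effective_sink / summer_net_uptake:.0%} of the summer uptake")

In a pulse year, the net annual sink shrinks to roughly half of the summer uptake, which is the sense in which the Arctic is "an even smaller carbon sink than we thought."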


News Article | September 26, 2016
Site: www.scientificcomputing.com

The roar can be deafening. Cooling fans and power supplies whoosh and whine from rows and rows of supercomputers at the main data center of the Texas Advanced Computing Center in Austin. The power bill at TACC can reach over a million dollars a year to keep the machines humming. But there's a stranger in town that might change how data centers power their systems. A new kind of advanced computing system called Hikari (Japanese for the word "light") came online at TACC in late August 2016. What's new is that Hikari runs on solar power and high voltage direct current, or HVDC. Hikari is a microgrid that supports a supercomputer, a first for the U.S. By day, solar panels that shade a TACC parking lot provide nearly all of Hikari's power, up to 208 kilowatts. At night it switches back to conventional AC power from the utility grid. "The Hikari project is a Japan and Texas collaboration project, and it aims to demonstrate the potential of the HVDC system," said Toshihiro Hayashi, assistant manager in the engineering divisions of NTT FACILITIES, INC., Japan. Engineers of the Hikari HVDC power feeding system predict it will save 15 percent compared to conventional systems. "The 380 volt design reduces the number of power conversions when compared to AC voltage systems," said James Stark, director of Engineering and Construction at the Electronic Environments Corporation (EEC), a division of NTT FACILITIES, INC. "What's interesting about that," Stark added, "is the computers themselves - the supercomputer, the blade servers, cooling units, and lighting - are really all designed to run on DC voltage. By supplying 380 volts DC to Hikari instead of having an AC supply with conversion steps, it just makes a lot more sense. That's really the largest technical innovation." Data centers in the U.S. consumed an estimated 70 billion kilowatt hours in 2014, which represents about 1.8 percent of total U.S. electricity consumption. That's according to a June 2016 Department of Energy report by the Lawrence Berkeley National Laboratory. High voltage direct current also allows for easy connection to renewable energy: solar in Hikari's case, but potentially other sources such as wind and hydrogen fuel cells. "That's really one of our main focuses, trying to make data centers more sustainable so that we're reducing overall power consumption within an industry that has traditionally consumed a lot of power," James Stark said. With sustainability comes reliability. This held true for another NTT FACILITIES, INC. project called the Sendai Microgrid on the fateful day of the Tohoku earthquake, March 11, 2011. The Sendai Microgrid was equipped with gas engines, a fuel cell, and a photovoltaic array. It continued to supply power and heat to facilities at Tohoku Fukushi University for hospital patients and elderly nursing home residents despite days of blackouts from the catastrophic damage to the district's energy supply system caused by the earthquake. "This microgrid power supply system activated very well after the earthquake," Hayashi said. "It showed that it's a very highly reliable energy system." Another NTT FACILITIES project leading to Hikari is the Tsukuba Research and Development Center, an HVDC microgrid that generates 70 kilowatts for a modular data center. "We have various experiences in Japan of having this HVDC system," Hayashi said. These projects, including Hikari, are supported by NEDO, the New Energy and Industrial Technology Development Organization, a public agency of Japan.
The Hikari project partners are NEDO, NTT FACILITIES, INC., and the University of Texas at Austin through the Texas Advanced Computing Center. The collaboration started with a visit to TACC by engineers at NTT FACILITIES, INC. in early 2014, on the heels of basic studies they'd done for installing HVDC at U.S. sites. TACC also shares a strong interest in developing new technologies, including energy savings. "We're very interested in UT Austin's motto," said Hayashi, "which is 'what starts here changes the world.' We very much agree with this motto." Hayashi's team worked with TACC to develop feasibility studies of the Hikari HVDC project from December 2014 to May 2015. This effort led to a Memorandum of Understanding between the State of Texas and NEDO in August of 2015. NTT FACILITIES, INC. worked with EEC to build out Hikari, which completed the system installation in late August 2016. "If there wasn't such partnership, we wouldn't have launched this project. I would like to express my gratitude to NEDO for establishing the partnership," Hayashi said. The Hikari supercomputer cluster consists of 432 Hewlett Packard Enterprise (HPE) Apollo 8000 XL730f servers coupled with HPE DL380 and DL360 nodes that are interconnected with a first-of-its-kind Mellanox end-to-end EDR InfiniBand at 100 gigabits per second. Over 10,000 cores from "Haswell" Xeon processors will deliver more than 400 teraflops. The Hikari project also aims to demonstrate energy efficiency through more than just HVDC. The HPE Apollo 8000 systems use a warm water-based liquid cooling system that eliminates the need for fans within the nodes and reduces the energy that would normally be required for water refrigeration and excess heat removal. The solar energy that would have been used for powering fans and chillers can be used for computational work. When it reaches production later in 2017, Hikari will be used by University of Texas medical researchers to make progress on diseases like cancer and disorders like autism. "We really hope this project will demonstrate the efficiency advantages of using 380 volt DC, not only in data centers, but in any commercial building," James Stark said. "The hope is that the research that comes out of this demonstration project will help to open the door to more widespread use of 380 volt systems throughout data centers and commercial buildings worldwide."
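As a rough cross-check of the quoted performance figure, the arithmetic below estimates theoretical peak throughput from the core count. The clock speed and the 16 double-precision FLOPs per cycle are assumed, typical Haswell-generation values, not specifications taken from the article.

    # Rough sanity check of the ">400 teraflops" figure quoted above.
    cores = 10_000            # "over 10,000 cores"
    clock_ghz = 2.6           # hypothetical base clock
    flops_per_cycle = 16      # two AVX2 FMA units x 4 doubles x 2 ops (Haswell, assumed)
    peak_tflops = cores * clock_ghz * flops_per_cycle / 1_000
    print(f"Theoretical peak: ~{peak_tflops:.0f} TFLOPS")   # ~416 TFLOPS

A cluster with somewhat more than 10,000 such cores therefore lands comfortably above 400 teraflops of theoretical peak, consistent with the figure in the article.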


News Article | January 22, 2016
Site: cleantechnica.com

When a solar company analyzes a residence for rooftop solar, they look at several items, including available unobstructed roof space that is free from shade, electricity usage from past bills, and the size of the budget. What is meant by overbuilding a rooftop solar array is to build out more panels than would be needed to address the home's current electricity demand. First, there are a couple of points worth discussing before diving into the enumerated list. The first is an obvious potential limitation regarding this plan, and that is whether or not there is enough usable space on the rooftop for additional panels. If there isn't un-shaded available rooftop real estate, then overbuilding rooftop solar becomes a moot point. The second consideration is whether or not the homeowner(s) can afford to overbuild their solar project. It may not be that difficult to overbuild, though, since the added solar panels will have a reduced cost per panel relative to the panels in the primary solar array footprint, due to reduced relative soft costs (more on that later). This leads us to consider the reasons to overbuild, and whether or not they are compelling enough for such an investment. Here are some points for consideration. Everyone is likely to rate the value of each point differently depending upon one's point of view, and it's possible that not all points will apply to everyone. Core home functioning is moving away from fossil fuels and toward electricity use. We are seeing more "smart home" connected controls for the thermostat, security, light control, keypad locks, refrigerators, and more. Many of us believe that all home HVAC is moving in the direction of electrical systems. For example, when solar air heaters are installed on the sun-facing side of the home, electricity-using fans are used to move air, and when solar hot water systems are installed, electricity is needed for water pumps. Overbuilding solar arrays leaves capacity available to accommodate these converted home services. Deploying expanded electricity capacity makes sense in this increasingly electrified world. You may need, now or in the future, to add one or more EV charging stations. The odds are increasing that, in the coming years, any overnight house guest who drives to your home may be driving an EV that needs a charge. Overbuilding solar leaves room for EV expansion and accommodation. The theory here is that a larger solar array will benefit the environment more than a smaller rooftop solar array. One expanded home solar array will obviously not change the world, but if more and more people do this, then in aggregate it will to some extent contribute in the right direction by displacing fossil fuels. One clear benefit in the migration from burning fossil fuels to increased renewable electricity production is the slowing of climate disruption due to fewer GHGs (greenhouse gases) being emitted. As more solar electricity is produced, gas and/or coal plants are used less and ultimately, in time, taken offline faster as we migrate to a clean energy world. The environmental benefits of solar are a well-discussed topic on CleanTechnica. If you have interest in learning more, please visit "Going Solar Simplest & Biggest Action Anyone Can Make, Says SolarCity" by Joshua S. Hill. Many utility-scale energy plants require long-distance transmission to get to the final consumption point.
The first and most obvious benefit of decentralized energy is that it doesn't have to be transmitted if it's used at the point of production; this increases efficiency. Decentralized energy production also makes any country safer from a malicious attack on the grid. As Amory Lovins' Rocky Mountain Institute states it: "Centralized electricity systems with giant power plants are becoming obsolete. In their place are emerging "distributed resources" — smaller, decentralized electricity supply sources (including efficiency) that are cheaper, cleaner, less risky, more flexible, and quicker to deploy." (A brief summary of distributed energy can be found on Rocky Mountain Institute's site.) For additional information about distributed energy, please consider reading CleanTechnica articles such as "Moving Beyond Utility 2.0, To Energy Democracy," by John Farrell.

Whenever energy-related purchases can be diverted from the fossil fuels industry and directed to the solar power (or any renewable energy) industry, this helps with new R&D funds, more clean jobs, and further reduced costs for renewables due to increased production volume. It's a well-known fact that solar increases a home's value when it comes time to sell the home. According to the Lawrence Berkeley National Laboratory (Berkeley Lab), as reported in an article on Fool.com (run by The Motley Fool guys), it is claimed that rooftop solar can increase a home's resale value by about $15,000. By comparing the sale prices of around 4,000 solar rooftop homes across 8 states and 12 years to nearly 19,000 comparable non-solar homes, researchers found that, for an average 3.6 kW solar system house, buyers were generally willing to pay around $15,000 more than for a comparable non-solar home. How much some added overbuilt solar would increase a home's value has not been studied, but it has been estimated. In the same Berkeley Lab document, the researchers state that their models "… indicate decreasing marginal returns to increasing PV system size." This means that, to a large extent, overbuilding solar would have to be a labor of love and for motives that consider the big picture, and that aren't strictly related to financial return. Especially if you have a Powerwall or other storage system, overbuilding your rooftop solar could effectively make your electricity production and use self-sufficient. The combination of overproduction and storage can provide you with the flexibility to go effectively off-grid while still being connected to the grid. The combination of solar and storage would also keep you from losing energy you produce that might otherwise be sent back to the grid for no compensation (depending upon your utility's net metering policy). Overbuilding prepares you for the possibility that your utility could have a favorable-to-consumer net metering policy in place at the time of solar installation that may be withdrawn later by the utility. Overbuilt rooftop solar, if done by enough people, would likely contribute in some modest way to reducing the need for new dirty coal or nat gas plants to be built as your utility's electricity demand increases over time. This increase in demand could potentially be satisfied by this extra solar electricity production if enough people did it. All renewable energy production contributes favorably to the global ecosystem to some degree by reducing electricity production from dirty fossil fuel energy or nuclear energy.
If you are already going to install rooftop solar, the soft costs are already calculated into that project, so adding more panels is an efficient, lower-cost-per-watt increase (see the rough cost sketch at the end of this article). According to Barry Cinnamon of Spice Solar of Campbell, California, most of the soft costs are already covered if you were to increase the size of your system. Barry states: "Soft costs are generally fixed for a residential system. The fixed soft costs such as system design, permitting, interconnection, customer acquisition, labor and monitoring changes only slightly if the system size increases." New, fun gadgets are being developed every year that either use electricity directly or run on batteries, which require electricity to recharge. For example, how can we live without Samsung's smart refrigerator announced this year here in Las Vegas at CES, which notifies the user when some food item in the refrigerator is about to pass its expiration date? (To be fair to other manufacturers, this idea isn't brand new.) Indeed, we are seeing more and more devices with no end in sight. A variant concept is to get more efficient panels (if the project isn't already using the best panels), which would generate relatively more output; ideally, one would get both more efficient panels and more solar panels. Overall, if your rooftop solar project size can be increased, there will be a number of sure benefits, and other possible benefits. The increased array size has long-term benefits, but of course must be affordable for the buyers. Go big. If you work in solar sales and wish to reprint this article for customers, you have permission to do so. If you increase your project size due to this article, or if you work in solar sales and use it, please feel free to write a comment below the article even if the comment comes long after this article is published. I'll see your comment and reply.
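Here is the rough cost sketch mentioned above. All dollar figures are made up for illustration; the point is only that fixed soft costs get spread across more watts as the array grows, which is why the marginal panels come in cheaper per watt.

    # Hedged illustration with made-up numbers: fixed soft costs are spread over
    # more watts as the array grows, so the overbuilt system costs less per watt.
    def cost_per_watt(system_w, soft_costs=6_000.0, hardware_per_w=1.60, labor_per_w=0.40):
        total = soft_costs + (hardware_per_w + labor_per_w) * system_w
        return total / system_w

    print(f"5 kW system: ${cost_per_watt(5_000):.2f}/W")   # $3.20/W
    print(f"7 kW system: ${cost_per_watt(7_000):.2f}/W")   # about $2.86/W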


News Article | November 29, 2016
Site: www.eurekalert.org

Tiny, glowing crystals designed to detect and capture heavy-metal toxins such as lead and mercury could prove to be a powerful new tool in locating and cleaning up contaminated water sources. Motivated by publicized cases in which high levels of heavy metals were found in drinking water in Flint, Mich., and Newark, N.J., a science team led by researchers at Rutgers University used intense X-rays at Lawrence Berkeley National Laboratory (Berkeley Lab) to probe the structure of the crystals they developed and learn how they bind to heavy metals. The crystals function like miniature, reusable sensors and traps, and are known as luminescent metal-organic frameworks, or LMOFs. One type of LMOF that the team tested was found to selectively take up more than 99 percent of mercury from a test mixture of heavy and light metals within 30 minutes, according to recent results published in Applied Materials and Interfaces. No other MOFs have performed as well in this dual role of detecting and capturing, or "adsorbing," toxic heavy metals, the team reported. Simon Teat, a Berkeley Lab staff scientist, studied individual LMOF crystals, each measuring about 100 microns (millionths of a meter), with X-rays at the lab's Advanced Light Source (ALS). Using diffraction patterns produced as the X-ray light struck the LMOF samples, Teat applied software tools to map their three-dimensional structure with atomic resolution. The ALS is one of just a few synchrotron X-ray light sources in the world that have dedicated experimental stations for chemical crystallography studies of crystallized chemical compounds such as MOFs.

It all starts with structure

What he found was a patterned, grid-like 3-D structure containing carbon, hydrogen, oxygen, nitrogen, and zinc atoms that framed large, open channels. These atomic-scale structural details are key to understanding how the LMOFs bind heavy metals, and can also aid in designing more highly specialized structures. "With MOFs, you're typically interested in using the holes for something," Teat explained. In this case, the structure allows heavy metals to enter these open channels and chemically bind to the MOFs. Their very open framework gives the MOFs an abundant surface area relative to their size, which allows them to take in a large amount of contaminants. The LMOF structure was engineered to glow by incorporating a fluorescent chemical component, or ligand. "When the metal binds to the fluorescent ligand, the resulting framework fluoresces," Teat said. The fluorescence of the LMOFs switches off when they interact with the heavy metals. According to Jing Li, a chemistry professor at Rutgers University who led the research, the technology could be a money-saving solution. "Others had developed MOFs for either the detection of heavy metals or for their removal, but nobody before had really investigated one that does both," Li added. Intense X-rays produced at synchrotrons are the best way to map the 3-D structure of the MOFs, Li said, adding, "Knowing the crystal structures is one of the most important aspects of our research. You need those in order to perform subsequent characterizations and to understand the properties of these materials." In their tests, researchers found that the LMOFs bind strongly to mercury and lead, but bind weakly to lighter metals such as magnesium and calcium that are also found in water supplies but do not pose the same hazards. This selective trait, based on the molecular makeup of the LMOFs, is important, Li said.
"We need to have a MOF that is selective and will only take the harmful species." The LMOFs can also be recycled. Researchers found that they could collect, clean, and then reuse the LMOFs for three cycles of toxic cleansing before their performance began to degrade. The study notes that heavily industrialized areas, cities with antiquated water regulations, and agricultural communities can be particularly susceptible to groundwater contamination, which can lead to soil contamination if not addressed. This can cause the contaminants to be taken up by plants and animals in the surrounding environment, broadening the pathways of exposure. Li said that further R&D could explore lower-cost and more durable LMOFs that could last for more cycles, and researchers could also pursue the development of water filters by blending the LMOFs with polymers to create a solid film. "These filters could be used for capture on a larger scale," she said. "We would like to continue with this research," Li said, adding that her team would like to test the system's performance on actual contaminated water sources if funding becomes available. "These are promising results, but we have a long way to go." Her team also has used Berkeley Lab's ALS to determine the crystal structures of MOFs for a wide variety of other applications, including high-explosives detection; toxin detection in foods; and new types of light-emitting components for LEDs, known as phosphors, that incorporate cheaper, more abundant materials. The Advanced Light Source is a DOE Office of Science User Facility. Researchers from the University of Texas at Dallas and Rider University also participated in this research. The work was supported by the DOE Office of Science. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel Prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | February 21, 2017
Site: www.eurekalert.org

UC Riverside's Ming Lee Tang is focused on developing nanoparticles that can increase the efficiency of solar panels.

RIVERSIDE, Calif. -- Ming Lee Tang, an assistant professor of chemistry at the University of California, Riverside, has been awarded a Sloan Research Fellowship for her research with nanoparticles that could have a strong impact on the solar power industry and biomedical fields. Awarded annually since 1955, Sloan Research Fellowships honor early-career scientists and scholars whose achievements and potential identify them as rising stars, the next generation of scientific leaders. The two-year, $60,000 fellowships are awarded yearly to 126 researchers. "I am happy to get it," Tang said, referring to the fellowship. Tang, who has been at UC Riverside since 2012, is focused on designing, synthesizing and characterizing hybrid organic-inorganic nanocrystal-based materials. Her lab uses the toolbox of synthetic chemistry to create novel nanostructures that have synergy between soft matter and traditional semiconductor materials. This ability to confine light at nanoscale dimensions provides unique properties. Applications include biomedical imaging to treat diseases such as cancer, as well as uses in the solar industry. For example, she recently co-authored a paper that outlines a new way to make solar energy conversion more efficient. The researchers created a hybrid material that captures two infrared photons that would normally pass right through a solar cell without being converted to electricity. The ability to utilize the infrared portion of the solar spectrum could boost the efficiency in generating solar power by 30 percent or more. Tang received her Ph.D. from Stanford University and then conducted postdoctoral research at the University of California, Berkeley and Lawrence Berkeley National Laboratory. The fellowships -- given by the Alfred P. Sloan Foundation, a philanthropic, not-for-profit grant-making institution based in New York City -- are awarded in eight scientific and technical fields: chemistry, computer science, economics, mathematics, computational and evolutionary molecular biology, neuroscience, ocean sciences, and physics. Candidates must be nominated by their fellow scientists and winning fellows are selected by an independent panel of senior scholars on the basis of a candidate's independent research accomplishments, creativity, and potential to become a leader in his or her field. UC Riverside has had 12 fellows in its history, according to the Sloan Foundation website. Seven of those have been in chemistry. The most recent faculty member named a fellow was Naveen Reddy, an assistant professor in the Department of Physics and Astronomy, in 2013.


News Article | March 28, 2016
Site: www.greentechmedia.com

The growth of distributed energy resources is causing a transition as fundamental as the shift to competitive power markets in the 1990s. Distributed energy resources (DERs) are diverse -- from energy-efficiency and peak-shaving initiatives to distributed generation, storage, and microgrids -- but they all enable customers to reduce their use of traditional grid services, and even to provide valuable services to the grid. These resources fit like a square peg in the round hole of the traditional regulatory paradigm. As a result, regulators are taking a more serious look at a concept called performance-based regulation (PBR). Consultants Mark Newton Lowry and Tim Woolf explore PBR as part of a new series of reports called Future Electric Utility Regulation from Lawrence Berkeley National Laboratory, funded by the U.S. Department of Energy. In their paper, Lowry and Woolf ask: Do today’s utilities need a PBR? Under traditional “cost of service” regulation, revenues are based on the utility’s total costs of providing service. All the investments made by the utility (net of depreciation) are known as the “rate base,” and utilities earn a percentage return on it. The bigger the rate base, the greater the profit. And if sales of electricity increase more than expected between rate cases, the utility keeps the additional margin. As the Alberta Utility Commission put it in a 2010 proceeding that led to a PBR mandate, traditional regulation “offers few incentives to improve efficiency, and produces incentives for regulated companies to maximize costs and inefficiently allocate resources.” Customer-side DERs erode utility earnings by reducing utility sales and opportunities for capital spending. This gives utilities “a material disincentive to accommodate DERs, even when DERs meet customer needs at lower cost than traditional grid service,” write Lowry and Woolf. This is where the round hole comes in. Through proposed rate designs, engineering requirements, demand-side management budgets, and other means, utilities play a key role in determining whether DERs thrive. Because of the incentives inherent in traditional regulation, utilities are unlikely to support new programs and technologies for DERs -- resources that offer considerable benefits to customers. Regulatory approaches that align the interests of consumers and utilities can help achieve a reliable, affordable and flexible power system. While New York’s Reforming the Energy Vision (REV) proceeding is the most high-profile attempt in the United States to solve this problem, states across the country are dealing with it in different ways. Regulators are witnessing increasingly complex dockets brought on by the electric sector’s transformation. Some are asking: Why not tie utility earnings to outcomes rather than to investments and throughput? PBR is an alternative to traditional utility regulation that emphasizes incentives for good performance. It can reduce utilities’ incentives to grow their rate base and use of their system and strengthen incentives to use DERs to reduce costs. PBR allows utilities to innovate and test new approaches, becoming more efficient in the process. If the utility succeeds, it gets to keep a share of the savings, creating a win for both customers and the utility. The flexibility inherent in PBR is especially important in a time of rapid technology change. This is not to say that PBR is the right solution in every case. 
According to Lowry and Woolf, “It has its pros and cons, and it looks different from the perspective of consumers and utilities. Nor is it always the same; different states and countries have developed many variations.” They document a variety of PBR tools that are available and in active use.

Revenue regulation: Also known as “decoupling,” revenue regulation sets utility revenues based on factors other than the megawatt-hours of throughput. This reduces the impact of lower throughput on utility profits, and thus makes utilities more open to the growth of DERs.

Multiyear rate plans: MRPs are the most common approach to PBR worldwide. Instead of coming in for a rate case every year or two, the utility operates under a rate plan that generally lasts four to five years. Formulas trigger automatic adjustments to the utility’s allowed revenues between rate cases without linking these adjustments to a utility’s actual cost. These formulas are known as attrition relief mechanisms, or ARMs (a simple numerical sketch of such an indexing formula appears below). Stronger incentives for cost containment can help utilities to better appreciate the cost-saving potential of DERs.

Performance incentive mechanisms: PIMs strengthen incentives for good performance in targeted areas such as reliability, safety and demand-side management. PIMs for DERs could include things like the speed of the interconnection process and the success of peak load management programs. PIMs can be used with multi-year rate plans or added to traditional cost-of-service regulation. For example, nearly half of all U.S. retail jurisdictions already use PIMs to encourage demand-side management.

Regulators have tried PBR in many forms and places over the past 25 years. The “RIIO” approach in Britain has sparked particular interest in some regions. RIIO, short for Revenue = Incentives + Innovation + Outputs, brings all of these elements together into an eight-year plan, with an extensive set of performance metrics and PIMs. While Lowry and Woolf consider the RIIO incentives to be strong and balanced, they note that RIIO plans are complicated, with each plan taking two and a half years to develop. They cite other advantages and disadvantages that PBR offers to both customers and utilities. As demand for power flattens or falls -- as it has for the past decade -- investment opportunities dry up. With PIMs, they note, “Improved performance can become a new profit center for a utility at a time when traditional opportunities for earnings growth are diminishing.” MRPs can even “change the culture of utility management by fostering innovation, giving greater flexibility, and creating an increased focus on opportunities to reduce cost and improve long-term performance.” Another benefit is increased awareness of how a utility’s performance compares to that of peer utilities.

Customers can also benefit from PBR. If done right, customers could get better performance from the utility and possibly lower costs. They’ll find it easier to use DERs and may even get proactive help from their utility. Of course, setting incentives and formulas is not easy, and mistakes can lead to problems. If the incentives are too low, utilities won’t respond. If they are too rich, profits will rise at the expense of consumers. PBR mechanisms can be complex and controversial. And having many years between rate cases -- as is the case in Britain -- can provide ample opportunity to go off track, especially in a period of rapid change.
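To make the ARM concept concrete, here is a minimal sketch of the common “inflation minus X” indexing formula described above. The revenue figure, inflation rate, and productivity offset are illustrative assumptions, not values from the Lowry and Woolf report.

# Minimal sketch of an attrition relief mechanism (ARM) of the common
# "inflation minus X" form: allowed revenue is indexed each year of a
# multiyear rate plan instead of being reset in annual rate cases.
# All numbers are illustrative assumptions, not values from the report.

def arm_revenue_path(base_revenue, years, inflation, x_factor):
    """Return the allowed-revenue path under a simple I - X index."""
    path = [base_revenue]
    for _ in range(years):
        path.append(path[-1] * (1 + inflation - x_factor))
    return path

if __name__ == "__main__":
    # A hypothetical utility with $500M in allowed revenue entering a
    # five-year plan, 2% inflation, and a 0.5% productivity offset (X).
    for year, revenue in enumerate(arm_revenue_path(500e6, 5, 0.02, 0.005)):
        print(f"Year {year}: allowed revenue ${revenue/1e6:.1f}M")

Real plans layer additional adjustments on top of this basic index, but the sketch captures the core idea: revenues move by formula, not by tracking the utility’s actual costs.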
While regulators in the United States may not go as far as RIIO, several states have approved MRPs, many states have approved PIMs without MRPs, and many states use decoupling. These PBR approaches, along with continuing adaptations to traditional cost of service regulation, can help support cost-effective use of DERs. Since goals and situations vary widely from state to state, Lowry and Woolf stop short of recommending a single approach to regulators and stakeholders. “If their main concern is to improve performance in specific areas, standalone PIMs might be sufficient. […] If they instead seek wide-ranging performance improvements, including better capital-cost management, MRPs may be better suited to these goals than PIMs alone. Regulators and stakeholders who wish to improve performance comprehensively and also focus on specific areas of performance in need of improvement should consider MRPs with an appropriately tailored package of PIMs.”


State renewable portfolio standards are responsible for over half of all renewable electricity growth in the US since 2000. This is the primary conclusion from a new report published earlier this month by Lawrence Berkeley National Laboratory, which also concluded that total renewable portfolio standards (RPS) demand over the next 15 years is only going to increase, requiring significant growth from the US power sector. The report, US Renewables Portfolio Standards: 2016 Annual Status Report, presented in slide format here (PDF), revealed that RPS policies currently apply to 55% of all US retail electricity supplies. Additionally, 60% of all growth in renewable electricity generation, and 57% of the capacity increase since 2000, can be associated with state RPS requirements.

Looking forward, however, meeting RPS demand is going to require a lot of work. Total RPS demand will double from 215 TWh in 2015 to 431 TWh in 2030, requiring non-hydro renewable electricity generation to reach 12.1% of retail sales to keep pace. Meeting this goal could require an additional 60 GW of renewable electricity capacity by 2030 -- approximately a 50% increase over the current 114 GW of non-hydro renewable energy capacity. “This document is intended as a progress report to help policy-makers and other electricity industry participants understand the past and future role of state RPS programs—recognizing that they are but one of a number of key drivers affecting renewable energy development in the United States,” said Berkeley Lab’s Galen Barbose, the report’s author.

RPS requirements have mostly been met, with states collectively meeting 95% of their interim RPS targets over the last few years. Specifically, the report highlighted “significant recent policy revisions” in locations such as California, Hawaii, Oregon, Vermont, and New York. In total, nearly 150 RPS-related bills have been introduced throughout the US since the beginning of 2015.

Publicity surrounding renewable portfolio standards has risen dramatically over the past 14 months, thanks in part to their obvious success. A new report published earlier this year found that the economic benefits of state renewable portfolio standards significantly outweigh the costs. That report concluded that the economic benefits of these standards in terms of their impact on greenhouse gas emissions averaged $2.2 billion, with the economic benefits accompanying the reduction of other air pollution averaging $5.2 billion in 2013. In March, a new report from GTM Research found that more than half of all new US utility-scale solar installations in 2016 will be built above obligations set in state renewable portfolio standards. Specifically, GTM predicts more than 6 GW of non-RPS utility-scale solar will come online in 2016 — well above the 4.1 GW installed for the entire sector during 2015, which itself was a record-breaking year.
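As a rough check of the capacity arithmetic above, converting the incremental RPS demand from TWh into GW requires assuming an average fleet capacity factor. The 40% figure in the sketch below is an illustrative assumption, not a number from the LBNL report.

# Rough check of the report's capacity arithmetic: converting growth in
# annual RPS demand (TWh) into new renewable capacity (GW) requires an
# assumed average capacity factor. The 0.40 value is an illustrative
# assumption, not a figure from the LBNL report.

HOURS_PER_YEAR = 8760

rps_demand_2015_twh = 215
rps_demand_2030_twh = 431
assumed_capacity_factor = 0.40
current_nonhydro_capacity_gw = 114

incremental_twh = rps_demand_2030_twh - rps_demand_2015_twh
# GW = (TWh * 1000 GWh/TWh) / (hours per year * capacity factor)
incremental_gw = incremental_twh * 1000 / (HOURS_PER_YEAR * assumed_capacity_factor)

print(f"Incremental RPS demand: {incremental_twh} TWh/yr")
print(f"Implied new capacity:   ~{incremental_gw:.0f} GW")
print(f"Relative to {current_nonhydro_capacity_gw} GW of existing non-hydro capacity: "
      f"{incremental_gw / current_nonhydro_capacity_gw:.0%} increase")

With that assumed capacity factor, the arithmetic lands close to the report’s figures of roughly 60 GW of new capacity and an increase of about 50%.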


News Article | February 5, 2016
Site: www.theenergycollective.com

State renewable portfolio standards are widely recognized for their critical role in driving renewable energy growth. However, their impacts on future renewable energy development have not been fully recognized, even by key government agencies. In a recent study, Benjamin Sovacool and I found that the Energy Information Administration has consistently under-projected renewable energy growth, due in part to improper modelling of state RPS mandates. Last year, the Union of Concerned Scientists similarly found that EPA’s modelling for the proposed Clean Power Plan did not fully account for state RPS policies in several states. These forecasts are critical to shaping perspectives about renewable energy and are major inputs in developing policy, particularly at the federal level. To develop a better idea of the future impact of state RPS policies, this analysis develops quantitative estimates of how much renewable energy is mandated by existing state policies. Although state RPSs are only part of the picture, this bottom-up approach is a first step in assessing the future of renewable energy in the U.S.

State RPS Policies Likely to Drive Continuing Gains in Renewable Energy

To date, state renewable policies have been critical in driving renewable energy growth. By 2013, RPS-compliant renewable energy comprised almost 5% of total U.S. electricity generation, bringing in billions in economic and environmental benefits at limited cost. As states ramp up their RPS requirements in the next fifteen years, state RPS policies are likely to become even more important. Data from Lawrence Berkeley National Laboratory provides an initial indication of why. In addition to tracking compliance, LBNL provides estimates of how much renewable energy will be needed to meet current renewable mandates during the next twenty years. These projections indicate that at least 359,474 GWh of eligible renewables will be needed to comply with state standards by 2030. This nears 9% of total U.S. electricity generation and is more than double the amount of renewable energy required by RPS mandates in 2013.

Notably, RPS-compliant renewable generation is not expected to grow uniformly between 2015 and 2030. Instead, we see a rapid ramp-up in RPS requirements between 2015 and 2021, with annualized growth of more than 9% per year. This is a result of many state RPSs having final goals around 2020. Geographically, RPS-compliant energy to date has been very concentrated. Over the next fifteen years, it will become somewhat less concentrated. California is likely to remain a major RPS state – LBNL’s projections for 2030 indicate California would be responsible for around 26% of total RPS-compliant renewable energy in the U.S., near 2013 levels. However, the share of RPS-compliant energy from Texas, New Jersey, New York, and Pennsylvania would shrink from 26% in 2013 to less than 11% in 2030. The remaining share of RPS-compliant renewable energy (63%) will become more geographically diverse, coming from the remaining 24 states with mandatory RPS policies.

Critically, the above estimates from LBNL were last updated in July 2015 and hence do not include several major changes in state policies since then. Most dramatically, they do not include California increasing its RPS goals from 33% by 2020 to 50% by 2030. Similarly, New York Governor Andrew Cuomo has directed the New York Public Service Commission to increase the state’s RPS to 50% by 2030, up from its current 30%-by-2015 target. Hawaii has also increased its RPS to 100% by 2045.
One of the key findings of my recent paper with Benjamin Sovacool was that a dynamic policy environment is a challenge to estimating future levels of renewable energy. These new policies are prime examples. My ballpark estimates indicate that the revisions in California and New York alone could increase the amount of renewable energy required for state RPS policies in 2030 by 70,000-80,000 GWh, depending on load growth. Thus, current state RPS policies would lead to around 11% of U.S. generation coming from RPS-compliant renewable electricity in 2030, up from less than 3% in 2010. This is more than a 20% increase from what RPS mandates would have required six months ago. Clearly, our expectations regarding the future of energy must be continuously updated as policies and technology change.

Wind generation has been the primary compliance option for utilities to meet RPS goals in the last decade, primarily due to the relatively low cost of wind generation and federal tax credit support. Long term contracts and continued wind installations mean that wind will continue to play a major role in meeting RPS requirements moving forward. However, solar is likely to become a major compliance method for renewable standards over the next fifteen years. The primary reason is the massive and rapid decrease in solar PV costs. In the past year, the rapid decreases in solar costs and corresponding increase in installations have been pretty thoroughly covered, so I’ll keep the discussion brief. Again, LBNL provides great in-depth analysis for utility-scale solar and distributed solar. In LBNL’s sample, the average price for power purchase agreements for utility solar has dropped from more than $100/MWh in 2011 to less than $50/MWh in 2015. These major cost decreases are driving solar installations to new levels. Installation levels in 2015 set new records, with as much as 3 GW installed in Q4 2015 alone. More dramatically, EIA data indicates that more than 8.3 GW of large-scale solar PV are planned for 2016. That’s excluding distributed generation less than 1.0 MW in capacity, which is likely to drive several more gigawatts of new solar.

In many ways, solar is more attractive than wind. While wind generation is primarily concentrated in the Great Plains, solar PV resources are pretty good across most of the country. Further, most solar generation occurs during peak, daylight hours when wholesale electricity prices are higher. Even if solar costs somewhat more than wind on a per-MWh basis, it usually offsets higher-cost electricity, making it more financially attractive. As solar reaches full maturity and commercialization nationwide, we are likely to see it adopted even more quickly than wind, with RPS policies providing major demand support. Already, solar is beginning to dominate new builds for RPS compliance. Between 2013 and 2014, LBNL and NREL estimate that solar was responsible for 76% of new capacity installations to meet RPS requirements.

RPS policies have been critical policy mechanisms to incentivize the development of wind and, increasingly, solar markets across the United States. This analysis indicates that existing RPS policies are likely to drive non-hydro renewables to more than 11% of total U.S. electricity generation by 2030, more than double current levels. Voluntary renewable markets, boosted by cost innovation from renewable deployment in RPS states, could lead to at least an additional 2-4% of U.S. generation coming from renewables by 2030.
This provides a solid base from which renewables can become a central electricity source in the U.S. However, the recent extension of federal renewable energy tax credits only makes the future more bullish for both wind and solar. The 30% investment tax credit for solar has been extended in full for another three years before ramping down to a permanent 10% in 2022. Meanwhile, the wind production tax credit will be ramped down through 2020. This additional financial support, coming at a time when many state RPSs begin to ramp up, will drive a major boom in both solar and wind installations. Critically, the new tax credits provide support through the beginning of Clean Power Plan implementation. Existing state RPS mandates, renewed federal tax credits, and the Clean Power Plan mean further cost reductions in both industries are likely in the next few years. Federal and state policies promise to be even more complementary in the future, with significant economic and environmental ramifications. As it becomes cheaper and cheaper to implement an RPS, more states may create mandatory standards or significantly strengthen existing ones. The recent moves by California, New York, and Hawaii could signal the beginnings of a new wave of RPS policies.
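The ballpark estimate given earlier for the California and New York revisions can be roughly reproduced with a back-of-the-envelope calculation. The retail-sales figures below are approximate assumptions used only for illustration; they are not from LBNL’s data.

# Rough reproduction of the ballpark estimate above: incremental RPS-driven
# generation implied by the new California and New York targets. The
# retail-sales figures are approximate assumptions for illustration only.

state_assumptions = {
    # state: (approx. annual retail sales in GWh, old target, new 2030 target)
    "California": (260_000, 0.33, 0.50),
    "New York":   (150_000, 0.30, 0.50),
}

total_increment_gwh = 0
for state, (sales_gwh, old_target, new_target) in state_assumptions.items():
    increment = sales_gwh * (new_target - old_target)
    total_increment_gwh += increment
    print(f"{state}: ~{increment:,.0f} GWh/yr of additional RPS demand")

print(f"Total: ~{total_increment_gwh:,.0f} GWh/yr, in the 70,000-80,000 GWh "
      "range cited above (before accounting for load growth)")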


News Article | October 28, 2016
Site: www.prweb.com

Enbala Power Networks has been selected by General Electric Global Research as a key partner in its DOE-funded ARPA-E Network Optimized Distributed Energy Systems (NODES) project. The $3.9 million project’s objective is to create transformational distributed flexibility resource (DFR) technology that aggregates responsive flexible loads and distributed energy resources (DERs) to provide synthetic reserve services to the grid while maintaining high customer quality-of-service. Specifically, a fast reserve similar to a regulating/spinning reserve and a multi-hour ramping reserve will be developed to provide the same kind of grid-balancing flexibility now provided by power plants and large-scale demand response. Other project participants include GE Energy Consulting, the Lawrence Berkeley National Laboratory, Consolidated Edison, Inc., Southern California Edison, Sacramento Municipal Utility District and California Independent System Operator.

The 12 NODES projects, including this one, aim to develop innovative and disruptive technologies for real-time management of T&D networks through system-wide control and coordination of flexible load and DERs. In a DOE press release, ARPA-E Director Dr. Ellen D. Williams commented, “The research and development of these grid control technologies will make the concept of virtual energy storage a practical reality. The result will enhance the resiliency, security and flexibility of our nation’s electric grid and allow the U.S. to make the best use of its abundant renewable energy resources.”

One novel aspect of the GE ARPA-E project is development of a tool that will use short-term and real-time weather forecasts along with other data to estimate the reserve potential of aggregate loads and DERs on a day-ahead basis. An optimization framework that will enable aggregation of large numbers of flexible loads and DERs and determine the optimal day-ahead schedule to bid into the market will also be developed. This will provide the flexible resources required to meet the transformational requirements of today’s evolving grid, while also opening up new opportunities for customers to monetize their assets.

Using its Symphony by Enbala distributed energy resource management platform, Enbala is responsible for the project’s control infrastructure and for working collaboratively with Consolidated Edison to recruit a diverse set of customers and distributed energy assets. The advanced control functionality developed within the NODES project will be implemented as micro-services leveraging the GE Predix cloud platform. “This is an innovative project that will effectively demonstrate how grid edge assets can be effectively networked into virtual storage systems that manage the intermittency of renewable energy and help us meet the growing operational challenges of grid infrastructure management,” commented Enbala President and CEO Arthur “Bud” Vos. The DFR technology being created must be able to aggregate and control thousands of customer DERs in real time and match them with production projections. GE electrical engineer Naresh Acharya explained that this project will enable a grid that can reliably manage a power mix where nearly half or more is supplied by renewables.

About Enbala Power Networks®

Enbala Power Networks is focused on making the world’s power grids greener and more reliable, efficient and predictable by harnessing the power of distributed energy.
Enbala’s real-time energy-balancing platform - Symphony by Enbala - is transforming energy system operations through its revolutionary, highly flexible approach for creating controllable and dispatchable energy resources. It unobtrusively captures and aggregates available customer loads, energy storage and renewable energy sources to form a network of continuously controlled energy resources. The platform dynamically optimizes and dispatches these resources to respond to the real-time needs of the power system – all without impacting customer operations. For more information, visit http://www.enbala.com or follow @enbala on Twitter.


News Article | January 16, 2016
Site: cleantechnica.com

The economic benefits of state renewable portfolio standards greatly outweigh the costs, according to a new report. There are now 29 US states that have adopted renewable portfolio standard targets, and they are benefiting as a result. As it stands currently, state renewable portfolio standards are an important driver of renewable energy generation growth in the US — so the new findings are well worth taking note of, especially as the argument used by most critics of the standards is that they raise electricity costs. The new report — A Retrospective Analysis of the Benefits and Impacts of US Renewable Portfolio Standards — stated that the economic benefits of renewable portfolio standards with regard to reduced greenhouse gas emissions averaged $2.2 billion, and the economic benefits accompanying the reduction of other air pollution averaged $5.2 billion, in 2013.

Greentech Media provides some more info:

These averages are taken from wider-range estimates of the value of mitigating the negative effects of climate change for GHG reductions, as well as reducing mortality from air pollution. These figures are much higher than the roughly $1 billion in annual compliance costs (that is, higher electricity prices) that have come about as a result of state RPS mandates between the years 2010 and 2013. Those figures come from a previous study from the Department of Energy’s Lawrence Berkeley National Laboratory (LBNL) and the National Renewable Energy Laboratory, which also co-authored Wednesday’s report. The two reports use different methodologies, which makes a direct comparison difficult, said Ryan Wiser, LBNL analyst and report co-author, in an interview. Specifically, the previous study relied on state-by-state measures of RPS compliance costs, which vary widely in their sophistication and methods, while the new report used a common methodology across all 29 states (plus the District of Columbia) that have RPS mandates in place. Still, the new report does help provide an important counterweight to arguments that state RPS targets are driving up electricity costs to unacceptable levels, he said.

“Most people and most organizations that have come out in opposition to RPS programs have focused on the cost angle,” continued Wiser. “We did this work previously that compiled all these RPS state cost analyses, while recognizing that the states developed their RPS programs not to deliver short-term savings, but to deliver long-term benefits. This more recent work helps us understand what we are getting for these costs, and (whether) those benefits seem like they’re in excess of those costs.”


News Article | November 14, 2016
Site: cleantechnica.com

Although snowbound in winter, Colorado is also famous for its sunny skies. In fact, Pueblo, Colorado, ties for 7th place among the sunniest US cities, according to NOAA’s annual average percent-possible-sunshine ranking, with an average of 76%. Only Arizona, California, Nevada, and Texas have sunnier cities. It’s no wonder, then, that Colorado ranks high in the US for installed solar energy capacity. According to the US Solar Energy Industries Association (SEIA), Colorado ranks #9 among the top 10 states for total installed solar capacity. Installations grew 117% over 2014, with 146 megawatts (MW) of solar installed in 2015 alone, bringing Colorado’s total to 599 MW -- enough solar capacity to power 114,000 homes. SEIA projects that, over the next 5 years, Colorado will install another 1,878 MW of solar capacity, more than 5 times the amount of solar installed in Colorado in the last 5 years. Some 382 companies contribute to the full value chain in Colorado’s solar market, employing nearly 5,000 people. On the investment side, $305 million went into solar installations during 2015, a 44% increase over 2014, and significant growth in solar investment is also expected for 2016.

Currently, Xcel Energy has around 370 MW of Colorado’s 599 MW of solar power on its system, and approximately 230 MW of that comes from rooftop PV systems. This large share gives Xcel Energy considerable leverage in shaping Colorado’s solar rates and policies. In the past few years, Xcel has been beating the war drums, attempting to persuade Colorado’s Public Utilities Commission (CPUC) to reduce benefits being realized by rooftop solar customers. In August 2015, solar advocates in Colorado believed their latest troubles were finally over when a year-long debate over net metering ended with no rate reductions. But it was clear the fight was not yet over when Xcel introduced a proposed grid-use fee last spring. Arranged on a sliding scale, the proposed fee ranged from $2.62 per month for customers using fewer than 200 kWh per month, to $44.79 for consumption of 1,401 kWh or more per month. It also proposed lowering the customers’ volumetric charge to help offset the fixed cost increase. Solar proponents rallied once again and argued that “the rate proposal would undermine the economic benefits of installing residential solar by introducing an unavoidable fee and lowering the per-kilowatt-hour charge, which decreases the net metering credit solar customers receive for excess electricity sent back to the grid.”

The SEIA reports that installed solar PV system prices in the US have dropped by 12% from last year and 66% from 2010. In 2016, the latest data indicates that the average installed cost of solar in the US is just below $3.50/watt for residential and $2.00/watt for commercial installations. Of course, everyone’s needs are unique, so it is always best to get a personalized solar quote, but at $3.50/watt, it currently costs about $17,500 to buy an average-size (5 kW) system in Colorado. The federal ITC of 30% will reduce this out-of-pocket cost by about $5,250, and you can expect additional estimated energy savings of about $1,224/yr ($102/mo). These reductions total about $6,474, so the first-year cost of installing a 5 kW solar PV system works out to around $11,026.
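Here is that first-year arithmetic as a quick worked example. The $3.50/watt price, 30% ITC, and $1,224/yr savings estimate are the figures cited in this article; actual costs and savings will vary by household.

# Worked version of the first-year cost arithmetic above, using the
# article's figures ($3.50/W installed price, 30% federal ITC, $1,224/yr
# estimated energy savings). Actual numbers will vary by household.

system_size_kw = 5
price_per_watt = 3.50
itc_rate = 0.30
annual_energy_savings = 1224  # dollars, the article's estimate

gross_cost = system_size_kw * 1000 * price_per_watt       # $17,500
itc_credit = gross_cost * itc_rate                        # $5,250
first_year_cost = gross_cost - itc_credit - annual_energy_savings

print(f"Gross system cost:    ${gross_cost:,.0f}")
print(f"Federal ITC (30%):   -${itc_credit:,.0f}")
print(f"Year-one savings:    -${annual_energy_savings:,.0f}")
print(f"Net first-year cost:  ${first_year_cost:,.0f}")   # ~$11,026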
Granted, the monthly energy savings estimate is based on 2011 data, but over the course of 20 years this still works out to be about $24,480, or a net gain of about $12,230 for a 5 kW system installed in 2016. That’s a great return on your investment, and we didn’t even calculate the additional gain you might make selling your excess electricity back to the utility at Colorado’s current net metering rate of $0.1243/kWh, according to EIA! And what’s more, even if you don’t stay in your home for the full 20 years, the sales value of your home increases significantly from the first moment your solar system is installed. According to a study by the Lawrence Berkeley National Laboratory (LBNL), every kilowatt of solar installed on your roof will increase your home’s sales value by an average of $5,911. A 5 kW system will add $29,555 to your home’s value, so even if you sell after one year, you could potentially pay off your $11,026 solar system and still come out $18,529 ahead on your investment!

In a nation where solar customers face increasing policy challenges, Colorado is blazing a bright path toward a clean, renewable future. It’s been blazing this trail since 2004, as the first state to adopt a Renewable Energy Standard. Tirelessly carving a trail through political skirmishes and scuffles, Colorado has served as a solar energy guide to new business models — most recently becoming a pioneer in the shared solar energy market. In January 2016, the Colorado Energy Office (CEO) and GRID Alternatives launched a pilot optimizing the community solar model to reduce energy costs for low-income customers. The pilot proposed five demonstration projects, building a total of 579 kW of shared solar, or community solar gardens (CSGs). Colorado Energy Office Director Jeff Ackermann stated, “Colorado has always been a leader in renewable energy, and now we take another innovative step forward as we create community solar models that are more affordable and available to Colorado rural electric cooperatives and the low-income communities they serve.”

CSGs provide access to solar energy generation for renters, people living in multi-family housing, and those who are unable to take advantage of solar for any number of other reasons -- including roof conditions, shady locations, and especially low income. And, if approved by Colorado’s PUC, VoteSolar’s Rick Gilliam noted, the recent settlement agreement reached this past August will significantly expand solar access across the state, especially among low-income consumers. The new settlement will add to Colorado’s already successful CSG shared solar program in several key ways, including a 5% low-income participation requirement for Xcel and an incremental 4 MW/year CSG program offering 100% low-income participation. Gilliam explained, “Affordable solar offers an exciting opportunity to address some of the biggest financial and health challenges facing low-income families.” Today, according to Shared Renewables HQ, 14 states and Washington, DC have policies in place for shared renewables and CSGs, and many more states are working to expand access to shared renewable energy resources.

With a settlement agreement finally reached in August 2016, Xcel’s proposed grid fee was killed and net metering was once again preserved.
Instead, according to the settlement, the utility “will institute time-of-use rates on a trial basis in anticipation of moving to time-of-use rates as the default rate design for residential customers.” The time-of-use (TOU) rate pilot proposes to help Xcel assign an appropriate value for electricity that is produced when it’s most valuable. New voluntary programs will be offered to up to 20,000 customers in 2017, extending to 34,000 customers in 2018 and 48,000 customers in 2019. The results of the pilot program will be studied for broad implementation of TOU rates in 2020.

More than 24 parties signed on to the agreement, including Colorado Public Utilities Commission staff; the Office of Consumer Counsel; several clean energy, consumer, and environmental groups; Walmart; VoteSolar; and the Colorado Solar Energy Industries Association (COSEIA). COSEIA Board President John Bringenberg stated, “This settlement signals cooperation rather than confrontation, which COSEIA believes is the most productive way to advance solar, allow citizens and companies to own clean energy rather than rent dirty fossil fuel, and impact catastrophic climate change.” He noted, “We must work with Xcel and other stakeholders to support the competitive solar industry.” The CPUC is expected to make a final decision on the settlement filing by the end of 2016. It is not unanimously supported, and some groups that signed on have requested changes. For example, some have asked that Xcel’s proposed Solar*Connect program be renamed Renewable*Connect. This program would allow Xcel to build a solar plant up to 50 MW in size and sell shares of the facility to the public.

The settlement agreement also changed Xcel’s 2017 Renewable Energy Plan, calling for expanding clean energy resources in a “measured and economically reasonable way.” Adding 225 MW of solar capacity to Xcel’s Solar*Rewards program from 2017–2019, the settlement specifies that 123 MW should come from small-scale solar projects. Over the same three-year period, the settlement also requests that up to 105 MW of community solar gardens (CSGs) be developed, carving out an additional 12 MW for low-income customers. This adds up to a total of 342 MW of new solar capacity slated for installation in 2017–2019. Limited numbers of low-income solar program participants will be offered solar rooftops, coupled with the Colorado Energy Office’s weatherization program that leverages DOE funding, and Xcel will also provide a $2/W rebate. With a total of 300 participants, the low-income solar program will be offered to 75 participants in 2017, 100 in 2018, and 125 in 2019.

Alice Jackson, Xcel Energy’s regional vice president for rates and regulatory affairs, hailed the recent settlement agreement. “It will allow us to meet our customers’ expectations,” she stated, “by giving them more control over their energy choices. It will bring more renewable and carbon-free energy to Colorado through the use of new technologies, and it will provide affordable and reliable energy to further power the state’s economy.” Likewise encouraged, Rick Gilliam, Vote Solar’s Program Director of DG Regulatory Policy, stated, “Colorado is already a leader in many ways when it comes to our nation’s growing solar economy.” He continued, “This settlement means more good news for Colorado.
It will ensure that more of the state’s energy consumers will have the opportunity to benefit from the local jobs, energy bill savings, and cleaner air that solar delivers.” Gilliam continued, “The agreement also recognizes the importance of good regulatory process through a series of subsequent stakeholder workshops in several important technical and policy areas when deciding these critical solar issues. We look forward to participating in those future conversations to help keep Colorado shining.”

The following is an in-depth exploration of the state of solar energy in Colorado. Please feel free to offer further contributions in the comments below.

Available in every US state, the federal solar investment tax credit (ITC) offers a credit from the federal government of up to 30% of a solar installation’s total cost. However, you can only take full advantage of this credit if you have that much tax liability in the tax year. Current law extends the federal ITC through 2019, after which it will be phased down. Corporate depreciation incentives are also available for businesses, as well as a Business Energy Investment Tax Credit (ITC-1603) and a Renewable Energy Production Tax Credit (PTC).

The following is a limited selection of state solar energy incentives listed for Colorado in the DSIRE Incentives Database. The Database of State Incentives for Renewables & Efficiency (DSIRE) website is operated by the N.C. Clean Energy Technology Center at N.C. State University and offers up-to-date, comprehensive public information on specific solar energy incentives across the US, including state and federal policies, programs, and incentives.

• Eligible Renewable/Other Technologies: Geothermal Electric, Solar Thermal Electric, Solar Photovoltaics, Wind (All), Biomass, Hydroelectric, Wind (Small), Hydroelectric (Small), Fuel Cells using Renewable Fuels
• Applicable Sectors: Commercial, Industrial, Local Government, Nonprofit, Residential, Schools, State Government, Low Income Residential, Institutional
• Applicable Utilities: All utilities (except municipal utilities with fewer than 5,000 customers)
• System Capacity Limit: IOU customers: 120% of the customer’s average annual consumption. Municipality and co-op customers: 25 kW for non-residential; 10 kW for residential.
• Aggregate Capacity Limit: No limit specified. Community solar gardens: 6 MW/yr for 2011-2013; set by PUC thereafter
• Net Excess Generation: Credited to customer’s next bill at retail rate. After a 12-month cycle, IOU customers may opt to roll over the credit indefinitely or to receive payment at the IOU’s average hourly incremental cost. Municipalities and co-ops provide annual reconciliation at a rate they deem appropriate.
• Ownership of Renewable Energy Credits: Customer owns RECs
• Meter Aggregation: Allowed for IOU customers. Community solar gardens are allowed

Note: Colorado’s Public Utilities Commission opened a proceeding (14M-0235E) in March 2014 to consider changes to net metering and other issues related to retail renewable distributed generation. However, this docket was closed in August 2015 after the Commission voted to keep the current net metering rules.

Eligibility and Availability: Net metering was adopted by the Colorado PUC in December 2005. Utilities must provide net metering service at non-discriminatory rates to customer-generators. Utilities must provide a single bi-directional meter free of charge to customer-generators who do not already own one.
A second meter is required for systems over 10 kW to measure output for the counting of renewable energy credits (RECs).

Net Excess Generation: Monthly net excess generation (NEG) is applied on a 1-for-1 basis as a kilowatt-hour (kWh) credit to the customer’s next bill. At the end of the year, or upon termination, the NEG must be reimbursed to the customer by the utility at the most recent year’s average hourly incremental cost. Net metering customers may choose to have their NEG carried forward from month to month indefinitely, but this one-time-only election must be made in writing on or before the end of the calendar year. It’s usually a bad idea, though, because all kWh credits carried forward will be lost upon terminating service with the utility.

Renewable Energy Credits: Customer-generators retain ownership of RECs associated with energy generated by the customer-generator’s system. Utilities may acquire RECs by purchasing them from customer-generators via a standard offer with a minimum term of 20 years if the system is less than 100 kW.

Meter Aggregation: A customer with more than one meter located on contiguous property can choose to have their generator offset the load measured at more than one meter. Aggregating meters under net metering requires that all affected meters be on the same rate schedule, and the customer must give the utility 30 days’ notice specifying the order in which kWh credits will be applied to the multiple meters.

Municipal Utilities and Electric Cooperatives: Colorado law enacted in March 2008 (H.B. 1160) requires all electric co-ops, as well as all municipal utilities with more than 5,000 customers, to offer net metering for residential systems up to 10 kW and commercial and industrial systems up to 25 kW. NEG follows the same rules as above: it receives 1-for-1 credit in following months and is reimbursed at the end of the annual period.

Eligibility and Availability: Colorado enacted the Community Solar Gardens (CSGs) Act in 2010, allowing for the creation of CSGs of up to 2 megawatts in an IOU’s service territory (H.B. 1342). H.B. 15-1377 was enacted in 2015, specifying that CSGs can be located in an electric co-op’s service territory and used for complying with the retail distributed generation requirements of Colorado’s Renewable Energy Standard. CSGs may be owned by a nonprofit, a for-profit organization, or even the utility itself, but must have at least 10 subscribers, unless the CSG is in the territory of an electric co-op and the system size is under 50 kW. Subscribers can purchase up to 40% of the electricity produced by the array, receiving kWh credits on their utility bills relative to the subscription amount. Unless owned by a low-income subscriber, CSG subscriptions must be for at least 1 kW. No more than 120% of the subscriber’s annual energy consumption may be supplied by the CSG. H.B. 15-1284, enacted May 2015, added the requirement that CSG subscribers must be “located in the service territory of the same qualifying retail utility and also in the same county as, or a county adjacent to, that of the community solar garden.”

Net Excess Generation: If a CSG subscriber has NEG in a monthly billing period, the credit will offset future consumption, rolling over from month to month indefinitely until the customer terminates service with the IOU, at which time any remaining billing credits expire.

Aggregate Cap: Since 2015, IOUs have begun acquiring renewable energy from CSGs as part of their renewable portfolio standard compliance plans.
Also, a minimum of 5% of an IOU’s purchases from CSGs must be reserved for subscribers in low-income brackets.

• Sales and Use Tax Exemption for Renewable Energy Equipment (Last updated: 07/21/2015)
• Local Option — Sales and Use Tax Exemption for Renewable Energy Systems (Last updated: 10/08/2015)
• City of Boulder — Solar Sales and Use Tax Rebate (Last updated: 01/25/2016)
• Renewable Energy and Energy Efficiency for Schools Loan (Last updated: 07/21/2015)

With Xcel’s settlement agreement in August 2016 to withdraw its proposed grid-use fee, the utility stated that, instead, it “will institute time-of-use rates on a trial basis in anticipation of moving to time-of-use rates as the default rate design for residential customers.” New voluntary programs will be launched by Xcel Energy for up to 20,000 customers in 2017, 34,000 in 2018, and 48,000 in 2019. These programs will be carefully followed and studied for a broader TOU program to be launched in 2020. According to VoteSolar, customers who opt into the test TOU program will have a 2-month window to cancel if they don’t like it. Lauren Randall, Sunrun’s senior manager of public policy, stated that she sees time-of-use rates as a “beneficial compromise” as opposed to changing net metering rates and adding new fees. Randall said, “We think moving forward this could be a model for utilities and solar companies collaborating on rates that are good for consumers who want more energy choice.” She also noted that TOU rates benefit all ratepayers “by producing energy when the grid needs it.” More hearings are ongoing, but the Colorado PUC anticipates a final decision on this latest settlement by the end of 2016.

The adoption and integration of distributed energy storage are making steady progress in Colorado. A panel discussion held by the Colorado Public Utilities Commission (CPUC) in April 2015 investigated a broad range of energy storage topics. Participants delved into such issues as costs, economics, anticipated growth rates, and regulatory changes relevant to energy storage strategies for residential, commercial, and industrial customers. Electric Power Research Institute (EPRI) Energy Storage Program Senior Project Manager Ben Kaun outlined the beneficial functions offered by energy storage. Kaun emphasized that energy storage functions as a “capacity resource, supporting the grid through flexible ramping, voltage control, and renewable integration, and providing grid reliability and resiliency services.” SolarCity’s Ryan Hanley outlined the declining trend in battery prices and the increase in customer interest in pairing storage systems with solar installations. Hanley described SolarCity as “bullish on energy storage,” and reported that the company anticipates all solar PV installations will include a storage component by 2020. Hanley emphasized to the Commission that policies are needed to recognize the full advantages of distributed storage and allow participants to profit from those advantages. He also pointed out the need for widespread deployment of smart inverters to ensure the fullest range of benefits from distributed energy storage.

Following up on this panel discussion, the CPUC approved Xcel Energy’s request last spring to launch two energy storage demonstration projects. The settlement agreement, signed in March 2016, authorized Xcel to invest $9.1 million in the two projects that will test energy storage applications, one of which will also include a microgrid strategy.
Known as the Panasonic Project, the first test will feature a utility-scale solar generation plant with one large battery installed by Xcel near Denver International Airport. The Panasonic Project will be connected to the regional grid but will have the capability to operate as a microgrid as well. Known as the Stapleton Project, the second application will feature 6 batteries installed on the customer side of the meter at homes with rooftop solar already present. Xcel will also install another 6 batteries on that area’s feeder line, which receives significant electricity from distributed generation. These batteries will be used to store excess energy for discharge during peak load hours.

All of the resulting information and data from these two projects will be made publicly available. The PUC noted that milestone reports will provide greater public visibility, “ensuring that costs for the projects would be appropriately vetted in a future rate proceeding, and assuring that all resulting data and information that would benefit the public will be made available to the PUC and the public as a whole.” Xcel Energy stated that it will be “evaluating the capabilities of batteries installed on distribution feeders to regulate voltage, reduce peak demand, and reduce energy costs for the benefit of all the company’s customers.”

• Alamosa Solar Generating Project — completed in 2012 by developer Cogentrix. This concentrating photovoltaics project has the capacity to generate 30 MW of electricity — enough to power over 5,400 Colorado homes.
• Hooper Solar Project — At 50 MW, the Hooper Solar Project is among the larger solar installations in Colorado. Completed in 2016 by SunPower, this PV project produces enough electric capacity to power over 9,500 homes.
• Kohl’s, REI, Safeway, Walmart, and Intel — These are among the larger retailers in Colorado that have gone solar. Intel has installed one of the largest corporate photovoltaic systems in the state, with 1,000 kW of solar capacity at its location in Fort Collins.

According to SEIA, there are currently some 382 solar companies at work throughout the value chain in Colorado, ranging from solar system manufacturers to system installers. The following is an incomplete listing of solar installers in Colorado, chosen and listed based on high customer reviews and recommendations:

Sunlight Solar Energy, Inc. (Colorado Branch) 384 Garden of the Gods Road, #150, Colorado Springs, CO 80907

Colorado State Energy Office – Find a wide variety of information on state government energy programs, policy, projects, energy-saving strategies and energy-related statistics
Colorado State Legislature – Track pending legislation affecting solar energy, locate and contact individual legislators, and stay up to date on current legislative issues in Colorado
Colorado Public Utilities Commission – Learn about the governing body that regulates the electricity rates and services of Colorado public utilities
DSIRE Incentives Database – Colorado – Search a public clearinghouse for specific solar energy incentives in Colorado and across the United States
U.S. Energy Information Administration – Colorado State Profile – Explore official energy statistics, including data on electricity supply and demand, from the U.S. government


News Article | November 9, 2016
Site: www.sciencedaily.com

New findings suggest the rate at which CO2 is accumulating in the atmosphere has plateaued in recent years because Earth's vegetation is grabbing more carbon from the air than in previous decades. That's the conclusion of a new multi-institutional study led by a scientist from the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab). It's based on extensive ground and atmospheric observations of CO2, satellite measurements of vegetation, and computer modeling. The research is published online Nov. 8 in the journal Nature Communications.

To be clear, human activity continues to emit increasing amounts of carbon, and the atmospheric concentration of CO2, now at 400 parts per million (ppm), continues to rise. But the scientists found that between 2002 and 2014, the rate at which CO2 increased in the atmosphere held steady at about 1.9 ppm/year. In addition, the proportion of the CO2 emitted annually by human activity that remains in the atmosphere declined by about 20 percent. This slowdown can't keep pace with emissions, so the overall amount of human-caused CO2 in the atmosphere increased, just not as quickly. And for that, new research suggests, we can thank plants. "This highlights the need to identify and protect ecosystems where the carbon sink is growing rapidly," says Trevor Keenan, a research scientist in Berkeley Lab's Climate & Ecosystem Sciences Division and the corresponding author of the paper.

The scientists attribute the stalled CO2 growth rate to an uptick in land-based photosynthetic activity, fueled by rising CO2 levels from fossil fuel emissions. It's a snowball effect: as CO2 levels rise in the atmosphere, photosynthetic activity flourishes and plants take in more carbon, sparking more plant growth, more photosynthesis, and more carbon uptake. They also identified another player. Plant respiration, a process in which plants use oxygen and produce CO2, did not increase as quickly as photosynthesis in recent years. This is because plant respiration is sensitive to temperature, and it was affected by the recent slowdown in global warming that was observed most significantly over vegetated land. So, between 2002 and 2014, plants took in more CO2 through photosynthesis, but did not "exhale" more CO2 into the atmosphere through respiration. "These changes decreased the amount of anthropogenic CO2 that stays in the atmosphere, and thus slowed the accumulation of atmospheric CO2," says Keenan.

Their findings provide a possible answer to a climatic mystery. The growth rate of atmospheric CO2 climbed steadily during the latter half of the 20th century, from 0.75 ppm/year in 1959 to 1.86 ppm/year in 2002. But Keenan and colleagues discovered an inflection point last year when they analyzed the latest data from the Global Carbon Project, which quantifies carbon emissions and their sources annually. Since 2002, the growth rate has remained flat. This pause is especially surprising because it has occurred as human activity pumps more and more carbon into the atmosphere. All that CO2 must be going somewhere, so the scientists suspected something about the carbon cycle has recently changed in a big way. "We believed one of the planet's main carbon sinks had unexpectedly strengthened. The question was: which one?" says Keenan. The scientists ruled out oceans as a dominant cause because most computer models agree the amount of carbon taken in by oceans has increased steadily in recent years.
That left terrestrial ecosystems, which undergo a large year-to-year variability in carbon uptake, and the two biggest influences on this variability are photosynthesis and plant respiration. To study these influences, the scientists used ten "global dynamic vegetation models" that predict how the terrestrial carbon cycle changes over time. They also used a model that incorporates satellite measurements of vegetation cover and plant activity to predict global photosynthesis and respiration rates. They validated the model by comparing its results with data from AmeriFlux and FLUXNET, which are networks of eddy-covariance research towers that measure ecosystem carbon, water, and energy fluxes in North and South America. Berkeley Lab manages AmeriFlux for the Department of Energy. Model projections were generated using different scenarios of atmospheric CO2 level, temperature, soil moisture, and other processes. This enabled the researchers to evaluate the impacts of these processes on the planet's terrestrial carbon cycle.

Taken together, the models zeroed in on rising CO2 levels as having the biggest impact on photosynthesis and plant respiration. The result is a boost in terrestrial carbon uptake, particularly in tropical and high-latitude ecosystems. Specifically, the models suggest rising CO2 levels caused terrestrial ecosystems to double the rate at which they take in carbon, from between one and two petagrams of carbon per year in the 1950s, to between two and four petagrams of carbon per year in the 2000s. For comparison, human activity emits between nine and ten petagrams of carbon per year (one petagram is one trillion kilograms). The scientists conclude this increase in carbon uptake put the brakes on the growth rate of atmospheric CO2 between 2002 and 2014.

"Unfortunately, this increase is nowhere near enough to stop climate change," says Keenan, adding that their results answer questions and pose new ones. "We've shown the increase in terrestrial carbon uptake is happening, and with a plausible explanation why. But we don't know exactly where the carbon sink is increasing the most, how long this increase will last, or what it means for the future of Earth's climate."
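The carbon arithmetic above can be sketched with a quick back-of-the-envelope calculation. The conversion of 1 ppm of atmospheric CO2 to roughly 2.13 petagrams of carbon and the ~10 PgC/yr emissions figure are standard approximations used here for illustration; they are not taken directly from the paper.

# Back-of-the-envelope version of the carbon arithmetic above. The conversion
# of 1 ppm of atmospheric CO2 to roughly 2.13 petagrams of carbon and the
# ~10 PgC/yr emissions figure are standard approximations, used here only
# for illustration.

PGC_PER_PPM = 2.13          # petagrams of carbon per ppm of atmospheric CO2

growth_rate_ppm = 1.9       # observed plateau in the CO2 growth rate, ppm/yr
emissions_pgc = 10.0        # approximate annual human-caused emissions, PgC/yr

atmospheric_increase_pgc = growth_rate_ppm * PGC_PER_PPM
airborne_fraction = atmospheric_increase_pgc / emissions_pgc

print(f"Carbon staying in the atmosphere: ~{atmospheric_increase_pgc:.1f} PgC/yr")
print(f"Airborne fraction: ~{airborne_fraction:.0%} of emissions")
print(f"Implied land + ocean uptake: ~{emissions_pgc - atmospheric_increase_pgc:.1f} PgC/yr")

Under these rough numbers, roughly 4 PgC of each year's emissions stays in the air while about 6 PgC is absorbed by land and ocean sinks, consistent in magnitude with the uptake figures quoted in the study.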


News Article | December 8, 2016
Site: www.eurekalert.org

Rapid advances in computing constantly translate into new technologies in our everyday lives. The same is true for high-energy physics. The field has always been an early adopter of new technologies, applying them in ever more complex experiments that study fine details of nature's most fundamental processes. However, these sophisticated experiments produce floods of complex data that become increasingly challenging to handle and analyze. Researchers estimate a decade from now, computing resources may have a hard time keeping up with the slew of data produced by state-of-the-art discovery machines. CERN's Large Hadron Collider, for example, already generates tens of petabytes (millions of gigabytes) of data per year today, and it will produce ten times more after a future high-luminosity upgrade. Big data challenges like these are not limited to high-energy physics. When the Large Synoptic Survey Telescope begins observing the entire southern sky in never-before-seen detail, it will create a stream of 10 million time-dependent events every night and a catalog of 37 billion astronomical objects over 10 years. Another example is the future LCLS-II X-ray laser at the Department of Energy's SLAC National Accelerator Laboratory, which will fire up to a million X-ray pulses per second at materials to provide unprecedented views of atoms in motion. It will also generate tons of scientific data. To make things more challenging, all big data applications will have to compete for available computing resources, for example when shuttling information around the globe via shared networks. What are the tools researchers will need to handle future data piles, sift through them and identify interesting science? How will they be able to do it as fast as possible? How will they move and store tremendous data volumes efficiently and reliably? And how can they possibly accomplish all of this while facing budgets that are expected to stay flat? "Clearly, we're at a point where we need to discuss in what direction scientific computing should be going in order to address increasing computational demands and expected shortfalls," says Richard Mount, head of computing for SLAC's Elementary Particle Physics Division. The researcher co-chaired the 22nd International Conference on Computing in High-Energy and Nuclear Physics (CHEP 2016), held Oct. 10-14 in San Francisco, where more than 500 physicists and computing experts brainstormed possible solutions. Here are some of their ideas. Scientific computing has greatly benefited from what is known as Moore's law--the observation that the performance of computer chips has doubled every 18 months or so for the past decades. This trend has allowed scientists to handle data from increasingly sophisticated machines and perform ever more complex calculations in reasonable amounts of time. Moore's law, based on the fact that hardware engineers were able to squeeze more and more transistors into computer chips, has recently reached its limits because transistor densities have begun to cause problems with heat. Instead, modern hardware architectures involve multiple processor cores that run in parallel to speed up performance. Today's fastest supercomputers, which are used for demanding calculations such as climate modeling and cosmological simulations, have millions of cores and can perform tens of millions of billions of computing operations per second. 
"In the US, we have a presidential mandate to further push the limits of this technology," says Debbie Bard, a big-data architect at the National Energy Research Scientific Computing Center. "The goal is to develop computing systems within the next 10 years that will allow calculations on the exascale, corresponding to at least a billion billion operations per second." Running more data analyses on supercomputers could help address some of the foreseeable computing shortfalls in high-energy physics, but the approach comes with its very own challenges. "Existing analysis codes have to be reengineered," Bard says. "This is a monumental task, considering that many have been developed over several decades." Maria Girone, chief technology officer at CERN openlab, a collaboration of public and private partners developing IT solutions for the global LHC community and other scientific research, says, "Computer chip manufacturers keep telling us that our software only uses a small percentage of today's processor capabilities. To catch up with the technology, we need to rewrite software in a way that it can be adapted to future hardware developments." Part of this effort will be educating members of the high-energy physics community to write more efficient software. "This was much easier in the past when the hardware was less complicated," says Makoto Asai, who leads SLAC's team for the development of Geant4, a widely used simulation toolkit for high-energy physics and many other applications. "We must learn the new architectures and make them more understandable for physicists, who will have to write software for our experiments." Today, LHC computing is accomplished with the Worldwide LHC Computing Grid, or WLCG, a network of more than 170 linked computer centers in 42 countries that provides the necessary resources to store, distribute and analyze the tens of petabytes of data produced by LHC experiments annually. "The WLCG is working very successfully, but it doesn't always operate in the most cost-efficient way," says Ian Fisk, deputy director for computing at the Simons Foundation and former computing coordinator of the CMS experiment at the LHC. "We need to move large amounts of data and store many copies so that they can be analyzed in various locations. In fact, two-thirds of the computing-related costs are due to storage, and we need to ask ourselves if computing can evolve so that we don't have to distribute LHC data so widely." More use of cloud services that offer internet-based, on-demand computing could be a viable solution for remote data processing and analysis without reproducing data. Commercial clouds have the capacity and capability to take on big data: Google, receives billions of photos per day and hundreds of hours of video every minute, posing technical challenges that have led to the development of powerful computing, storage and networking solutions. While conventional computer algorithms perform only operations that they are explicitly programmed to perform, machine learning uses algorithms that learn from the data and successively become better at analyzing them. In the case of deep learning, data are processed in several computational layers that form a network of algorithms inspired by neural networks. Deep learning methods are particularly good at finding patterns in data. Search engines, text and speech recognition, and computer vision are all examples. 
"There are many areas where we can learn from technology developments outside the high-energy physics realm," says Craig Tull, who co-chaired CHEP 2016 and is head of the Science Software Systems Group at Lawrence Berkeley National Laboratory. "Machine learning is a very good example. It could help us find interesting patterns in our data and detect anomalies that could potentially hint at new science." At present, machine learning in high-energy physics is in its infancy, but researchers have begun implementing it in the analysis of data from a number of experiments, including ATLAS at the LHC, the Daya Bay neutrino experiment in China and multiple experiments at Fermi National Accelerator Laboratory near Chicago. The most futuristic approach to scientific computing is quantum computing, an idea that goes back to the 1980s when it was first brought up by Richard Feynman and other researchers. Unlike conventional computers, which encode information as a series of bits that can have only one of two values, quantum computers use a series of quantum bits, or qubits, that can exist in several states at once. This multitude of states at any given time exponentially increases the computing power. A simple one-qubit system could be an atom that can be in its ground state, excited state or a superposition of both, all at the same time. "A quantum computer with 300 qubits will have more states than there are atoms in the universe," said Professor John Martinis from the University of California, Santa Barbara, during his presentation at CHEP 2016. "We're at a point where these qubit systems work quite well and can perform simple calculations." Martinis has teamed up with Google to build a quantum computer. In a year or so, he says, they will have built the first 50-qubit system. Then, it will take days or weeks for the largest supercomputers to validate the calculations done within a second on the quantum computer. We might soon find out in what directions scientific computing in high-energy physics will develop: The community will give the next update at CHEP 2018 in Bulgaria.


News Article | November 8, 2016
Site: www.eurekalert.org

New findings suggest the rate at which CO2 is accumulating in the atmosphere has plateaued in recent years because Earth's vegetation is grabbing more carbon from the air than in previous decades. That's the conclusion of a new multi-institutional study led by a scientist from the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab). It's based on extensive ground and atmospheric observations of CO2, satellite measurements of vegetation, and computer modeling. The research is published online Nov. 8 in the journal Nature Communications. To be clear, human activity continues to emit increasing amounts of carbon, and the atmospheric concentration of CO2, now at 400 parts per million (ppm), continues to rise. But the scientists found that between 2002 and 2014, the rate at which CO2 increased in the atmosphere held steady at about 1.9 ppm/year. In addition, the proportion of the CO2 emitted annually by human activity that remains in the atmosphere declined by about 20 percent. This slowdown isn't enough to offset emissions, so the overall amount of human-caused CO2 in the atmosphere still increased, just not as quickly. And for that, new research suggests, we can thank plants. "This highlights the need to identify and protect ecosystems where the carbon sink is growing rapidly," says Trevor Keenan, a research scientist in Berkeley Lab's Climate & Ecosystem Sciences Division and the corresponding author of the paper. The scientists attribute the stalled CO2 growth rate to an uptick in land-based photosynthetic activity, fueled by rising CO2 levels from fossil fuel emissions. It's a snowball effect: as CO2 levels rise in the atmosphere, photosynthetic activity flourishes and plants take in more carbon, sparking more plant growth, more photosynthesis, and more carbon uptake. They also identified another player. Plant respiration, a process in which plants use oxygen and produce CO2, did not increase as quickly as photosynthesis in recent years. This is because plant respiration is sensitive to temperature, and it was affected by the recent slowdown in global warming that was observed most significantly over vegetated land. So, between 2002 and 2014, plants took in more CO2 through photosynthesis, but did not "exhale" more CO2 into the atmosphere through respiration. "These changes decreased the amount of anthropogenic CO2 that stays in the atmosphere, and thus slowed the accumulation of atmospheric CO2," says Keenan. Their findings provide a possible answer to a climatic mystery. The growth rate of atmospheric CO2 climbed steadily during the latter half of the 20th century, from 0.75 ppm/year in 1959 to 1.86 ppm/year in 2002. But Keenan and colleagues discovered an inflection point last year when they analyzed the latest data from the Global Carbon Project, which quantifies carbon emissions and their sources annually. Since 2002, the growth rate has remained flat. This pause is especially surprising because it has occurred as human activity pumps more and more carbon into the atmosphere. All that CO2 must be going somewhere, so the scientists suspected something about the carbon cycle has recently changed in a big way. "We believed one of the planet's main carbon sinks had unexpectedly strengthened. The question was: which one?" says Keenan. The scientists ruled out oceans as a dominant cause because most computer models agree the amount of carbon taken in by oceans has increased steadily in recent years.
That left terrestrial ecosystems, which undergo a large year-to-year variability in carbon uptake, and the two biggest influences on this variability are photosynthesis and plant respiration. To study these influences, the scientists used ten "global dynamic vegetation models" that predict how the terrestrial carbon cycle changes over time. They also used a model that incorporates satellite measurements of vegetation cover and plant activity to predict global photosynthesis and respiration rates. They validated the model by comparing its results with data from AmeriFlux and FLUXNET, which are networks of eddy-covariance research towers that measure ecosystem carbon, water, and energy fluxes in North and South America. Berkeley Lab manages AmeriFlux for the Department of Energy. Model projections were generated using different scenarios of atmospheric CO2 level, temperature, soil moisture, and other processes. This enabled the researchers to evaluate the impacts of these processes on the planet's terrestrial carbon cycle. Taken together, the models zeroed in on rising CO2 levels as having the biggest impact on photosynthesis and plant respiration. The result is a boost in terrestrial carbon uptake, particularly in tropical and high-latitude ecosystems. Specifically, the models suggest rising CO2 levels caused terrestrial ecosystems to double the rate at which they take in carbon, from between one and two petagrams of carbon per year in the 1950s, to between two and four petagrams of carbon per year in the 2000s. For comparison, human activity emits between nine and ten petagrams of carbon per year (one petagram is one trillion kilograms). The scientists conclude this increase in carbon uptake put the brakes on the growth rate of atmospheric CO2 between 2002 and 2014. "Unfortunately, this increase is nowhere near enough to stop climate change," says Keenan, adding that their results answer questions and pose new ones. "We've shown the increase in terrestrial carbon uptake is happening, and with a plausible explanation why. But we don't know exactly where the carbon sink is increasing the most, how long this increase will last, or what it means for the future of Earth's climate." The research is partly funded by the Laboratory Directed Research and Development Program of Berkeley Lab and the Department of Energy's Office of Science. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit http://www. . DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
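The magnitude of the change can be put in rough perspective with a few lines of arithmetic; treating the quoted ranges by their midpoints, and using the standard approximation that 1 ppm of atmospheric CO2 corresponds to about 2.12 petagrams of carbon, are assumptions made here for illustration, not figures from the study.

# Midpoints of the ranges quoted in the article (petagrams of carbon per year).
uptake_1950s = (1 + 2) / 2          # terrestrial uptake, 1950s
uptake_2000s = (2 + 4) / 2          # terrestrial uptake, 2000s
human_emissions = (9 + 10) / 2      # present-day emissions from human activity

extra_uptake = uptake_2000s - uptake_1950s
print(f"extra terrestrial uptake: ~{extra_uptake:.1f} PgC/yr, "
      f"i.e. ~{extra_uptake / human_emissions:.0%} of human emissions")

# Standard approximation (assumption): 1 ppm of atmospheric CO2 ~ 2.12 PgC.
pgc_per_ppm = 2.12
print(f"that extra uptake is equivalent to ~{extra_uptake / pgc_per_ppm:.2f} ppm/yr "
      f"withheld from the ~1.9 ppm/yr observed growth rate")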


News Article | February 4, 2016
Site: www.theenergycollective.com

Federal energy policy developments received a significant amount of attention in 2015 – most notably, the finalized Clean Power Plan and renewal of renewable energy tax credits dominated headlines. However, this focus on national issues tends to obscure critical actions at the local and state levels that are just as important for energy systems. Among the most important state-level policies, renewable portfolio standards play an essential but not fully appreciated role in complementing and enabling federal policy action. Renewable portfolio standards (RPSs) are the most influential state-level policies for renewable energy development in the United States. Combined with federal tax incentives, state RPSs have driven wind generation from less than 0.4% of U.S. generation in 2004 to more than 4.4% in 2014 while boosting solar above 1% of total electricity in the U.S. in 2015 for the first time ever. Despite broad recognition of their importance, the central role that state RPSs have played in getting renewable energy to the point it is today is often not fully appreciated. By understanding what state RPS policies have accomplished to date, we can better understand what they are likely to do in the next several years. While the first RPS was passed in Iowa in 1983, it was not until the late 1990s and mid-2000s that many states began to follow suit. These early policies often came in response to increasing concerns about energy security and environmental considerations specifically at the state level. While the exact requirements vary by state, RPSs generally require that a certain percentage of state electricity retail sales come from renewable energy by a specified year. As of October 2015, 29 states have mandatory standards while an additional 8 states have voluntary goals. Remarkably, renewable energy mandates are one of the most politically attractive energy policies at the state level. They have been enacted in states ranging from liberal California to moderate Ohio to conservative Texas.
Note: This map does not reflect several recent policy changes increasing RPS requirements in CA, NY, HI, and VT.
Critically, state RPS mandates are generally non-partisan. Despite recent legislative attempts in some states to remove RPS goals, they have proven surprisingly resilient. With only a handful of exceptions, the vast majority of revisions to renewable mandates have led to more stringent renewable requirements. Public opinion polls show consistent support for efforts to increase wind and solar generation, making RPS policies popular for state governors and legislatures.
State Renewable Policies Key to Renewable Growth of the Last Decade
These widespread policies have had a tremendous impact on renewable energy growth. Data from the Lawrence Berkeley National Laboratory (LBNL) indicates that, in 2013, at least 171,950 GWh of renewable energy was used to comply with state renewable mandates. This nearly equals 5% of total U.S. electricity generation in 2013. LBNL's estimates are based on RPS compliance filings at the state level, meaning they have a high level of reliability. By tracking the retirement of renewable energy credits, we can accurately gauge how much renewable energy has been used to comply with state mandates.
Source: Spark Library, based on data from LBNL and U.S. EIA
Critically, RPS compliance to date has been heavily concentrated geographically.
In 2013, almost 27% of RPS-compliant generation came from California alone (where it made up almost 18% of total retail sales). Texas, New Jersey, New York, and Pennsylvania combined made up another 26%. Hence, the remaining 24 states with mandatory RPSs contributed less than half of RPS-compliant renewable energy in 2013. This is partially due to California and the other states' being large electricity consumers. However, it is also a timing issue – California, Texas, New Jersey, and Pennsylvania were some of the first states to enact RPS mandates. RPS requirements in other states are still ramping up, meaning that renewable generation is likely to become somewhat less geographically concentrated in the next decade. Most of the renewable energy used for RPS compliance so far has been from wind energy. A separate estimate from LBNL indicated that as much as 88% of new capacity motivated by state RPSs between 1998 and 2012 was wind. As the figure above clearly illustrates, total U.S. wind generation very closely tracks the total amount of RPS-compliant renewable energy between 2001 and 2013. This high level of wind generation is not necessarily surprising, even though wind capacity was limited as recently as ten years ago. Even then, there was significant ongoing research and large deployments in Europe helping the industry reach large-scale commercial viability. Perhaps more critically, the federal production tax credit made wind generation even more cost competitive in the mid-2000s. Nationwide average prices for wind capacity installed between 2002 and 2011 ranged between the mid-$30s/MWh and the upper-$60s/MWh. Compared to prevailing wholesale electricity prices at the time, these were cost competitive levels. Without the tax credit, wind prices would have been around $20-23/MWh more expensive in all years. In effect, state RPSs and federal tax incentives should be seen as complementary policies. Federal tax incentives made it very inexpensive for states to meet RPS mandates with wind energy, helping encourage the adoption of new and more stringent renewable policies. These state policies then drove significant state economic benefits, strengthening the argument for continued tax credit support. As a quick side note, the price of wind power purchase agreements has been falling since 2009, reaching as low as an average $23.50/MWh in 2014. This is important as it indicates that wind may remain cost competitive with low-cost natural gas resulting from the shale revolution, particularly with the most recent renewal of the wind production tax credit. Although wind has been the primary RPS-compliant generation source historically, existing state RPS policies have also benefitted solar energy. Many state renewable mandates have specific provisions (or carve-outs) intended to directly benefit solar and/or distributed generation. The form of these provisions varies significantly. For example, in Colorado, solar located in cooperatives or municipal utility service areas receives a 3X multiplier for the state RPS. Meanwhile, Massachusetts' RPS contains a specific target of 1,600 MW of solar PV. These solar provisions have played notable roles in driving initial solar market growth in many states. The economic benefits of renewable standards are significant.
A recent joint study by two federal energy laboratories (LBNL and NREL) found that, in 2013, renewables used for RPS compliance provided an estimated $2.2 billion in benefits from reduced greenhouse gas emissions and $5.2 billion from reduced air pollution, supported roughly 200,000 renewable energy-related jobs, and lowered consumer costs through reduced wholesale electricity and natural gas prices. In particular, the direct economic impacts were a principal cause behind the recent decisions to extend both the federal wind and solar tax credits. RPS and federal tax credits continue to be mutually reinforcing, complementary policies. Perhaps more importantly, state RPSs and federal tax credits have driven significant cost innovation in both the wind and solar industries. Long term, renewable energy will continue to grow, bringing in greater economic and environmental benefits at lower (and potentially negative) costs. Further, renewable portfolio standards have come at relatively little cost. State renewable standards have rather limited cost impacts, as demonstrated by yet another joint NREL and LBNL study. In most states, incremental compliance costs for existing standards through 2012 were generally at or below 2% of average retail rates. Even if existing standards are implemented fully, rates are expected to increase by less than 5% in most states, while delivering even greater economic and environmental benefits.
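A quick sketch of the arithmetic behind the geographic shares and wind prices quoted earlier in this article; the specific endpoints chosen for "mid-$30s" and "upper-$60s" are rough assumptions for illustration only.

# Shares of 2013 RPS-compliant generation quoted above.
california = 0.27
tx_nj_ny_pa = 0.26
remaining_24_states = 1.0 - california - tx_nj_ny_pa
print(f"remaining 24 states: roughly {remaining_24_states:.0%} of RPS-compliant generation")

# Wind prices for 2002-2011 installations, with and without the production tax credit.
price_with_ptc = (35, 68)    # rough reading of "mid-$30s to upper-$60s" per MWh (assumption)
ptc_value = (20, 23)         # quoted value of the credit, $/MWh
price_without_ptc = (price_with_ptc[0] + ptc_value[0], price_with_ptc[1] + ptc_value[1])
print(f"implied unsubsidized wind price: roughly ${price_without_ptc[0]}-{price_without_ptc[1]}/MWh")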


News Article | September 26, 2016
Site: www.scientificcomputing.com

The roar can be deafening. Cooling fans and power supplies whoosh and whine from rows and rows of supercomputers at the main data center of the Texas Advanced Computing Center in Austin. The power bill at TACC can reach over a million dollars a year to keep the machines humming. But there's a stranger in town that might change how data centers power their systems. A new kind of advanced computing system called Hikari (Japanese for the word "light") came online at TACC in late August 2016. What's new is that Hikari runs on solar power and high voltage direct current, or HVDC. Hikari is a microgrid that supports a supercomputer, a first for the U.S. By day, solar panels that shade a TACC parking lot provide nearly all of Hikari's power, up to 208 kilowatts. At night it switches back to conventional AC power from the utility grid. "The Hikari project is a Japan and Texas collaboration project, and it aims to demonstrate the potential of the HVDC system," said Toshihiro Hayashi, assistant manager in the engineering divisions of NTT FACILITIES, INC., Japan. Engineers of the Hikari HVDC power feeding system predict it will save 15 percent compared to conventional systems. "The 380 volt design reduces the number of power conversions when compared to AC voltage systems," said James Stark, director of Engineering and Construction at the Electronic Environments Corporation (EEC), a Division of NTT FACILITIES, INC. "What's interesting about that," Stark added, "is the computers themselves - the supercomputer, the blade servers, cooling units, and lighting - are really all designed to run on DC voltage. By supplying 380 volts DC to Hikari instead of having an AC supply with conversion steps, it just makes a lot more sense. That's really the largest technical innovation." Data centers in the U.S. consumed an estimated 70 billion kilowatt hours in 2014, which represents about 1.8 percent of total U.S. electricity consumption. That's according to a June 2016 Department of Energy report by the Lawrence Berkeley National Laboratory. HVDC also allows for easy connection to renewable energy: solar in Hikari's case, but potentially other sources like wind and hydrogen fuel cells. "That's really one of our main focuses, trying to make data centers more sustainable so that we're reducing overall power consumption within an industry that has traditionally consumed a lot of power," James Stark said. With sustainability comes reliability. This held true for another NTT FACILITIES, INC. project called the Sendai Microgrid on the fateful day of the Tohoku earthquake, March 11, 2011. The Sendai Microgrid was equipped with gas engines, a fuel cell, and a photovoltaic array. It continued to supply power and heat to facilities at Tohoku Fukushi University for hospital patients and elderly nursing home residents despite days of blackouts from the catastrophic damage to the district's energy supply system caused by the earthquake. "This microgrid power supply system activated very well after the earthquake," Hayashi said. "It showed that it's a very highly reliable energy system." Another NTT FACILITIES project leading to Hikari is the Tsukuba Research and Development Center, an HVDC microgrid that generates 70 kilowatts for a modular data center. "We have various experiences in Japan of having this HVDC system," Hayashi said. These projects, including Hikari, are supported by NEDO, the New Energy and Industrial Technology Development Organization, a public agency of Japan.
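Two of the figures just quoted invite a quick back-of-envelope check, sketched below; applying the predicted 15 percent HVDC saving to a $1 million annual power bill is a loose illustration only, since the prediction concerns conversion losses rather than a whole facility's bill.

# Figures quoted above (LBNL 2016 report and TACC).
us_datacenter_kwh_2014 = 70e9        # kWh consumed by U.S. data centers in 2014
share_of_us_total = 0.018            # 1.8 percent of U.S. electricity consumption
implied_us_total = us_datacenter_kwh_2014 / share_of_us_total
print(f"implied total U.S. consumption: ~{implied_us_total/1e12:.1f} trillion kWh")

# Hikari's predicted 15 percent saving, applied (as a rough illustration only)
# to a data center with a $1 million annual power bill such as TACC's.
annual_power_bill = 1_000_000        # dollars, based on the figure quoted above
hvdc_saving = 0.15
print(f"hypothetical annual saving: ~${annual_power_bill * hvdc_saving:,.0f}")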
The Hikari project partners are NEDO, NTT FACILITIES, INC., and the University of Texas at Austin through the Texas Advanced Computing Center. The collaboration started with a visit to TACC by engineers at NTT FACILITIES, INC. in early 2014, fresh off the heels of basic studies they'd done for installing HVDC at U.S. sites. TACC also shares a strong interest in developing new technologies, including energy savings. "We're very interested in UT Austin's motto," said Hayashi, "which is 'what starts here changes the world.' We very much agree with this motto." Hayashi's team worked with TACC to develop feasibility studies of the Hikari HVDC project from December 2014 to May 2015. This effort led to a Memorandum of Understanding between the State of Texas and NEDO in August of 2015. NTT FACILITIES, INC. worked with EEC to build out Hikari, which completed the system installation late August 2016. "If there wasn't such partnership, we wouldn't have launched this project. I would like to express my gratitude to NEDO for establishing the partnership," Hayashi said. The Hikari supercomputer cluster consists of 432 Hewlett Packard Enterprise (HPE) Apollo 8000 XL730f servers coupled with HPE DL380 and DL360 nodes that are interconnected with a first-of-its-kind Mellanox End-to-End EDR InfiniBand at 100 gigabits per second. Over 10,000 cores from "Haswell" Xeon processors will deliver more than 400 teraflops. The Hikari project also aims to demonstrate energy efficiency through more than just HVDC. The HPE Apollo 8000 systems use a warm water-based liquid cooling system that eliminates the need for fans within the nodes and reduces the energy that would normally be required for water refrigeration and excess heat removal. The solar energy that would have been used for powering fans and chillers can be used for computational work. When it reaches production later in 2017, Hikari will be used by University of Texas medical researchers to make progress on diseases like cancer and disorders like autism. "We really hope this project will demonstrate the efficiency advantages of using 380 volt DC, not only in data centers, but in any commercial building," James Stark said. "The hope is that the research that comes out of this demonstration project will help to open the door to more widespread use of 380 volt systems throughout data centers and commercial buildings worldwide."


News Article | February 21, 2017
Site: www.chromatographytechniques.com

Note: This is the first installment in a four-part series that focuses on a partnership between NASA and Berkeley Lab to explore spacecraft materials and meteorites with X-rays in microscale detail. It takes rocket science to launch and fly spacecraft to faraway planets and moons, but a deep understanding of how materials perform under extreme conditions is also needed to enter and land on planets with atmospheres. X-ray science is playing a key role, too, in ensuring future spacecraft survive in extreme environments as they descend through otherworldly atmospheres and touch down safely on the surface. Scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and NASA are using X-rays to explore, via 3-D visualizations, how the microscopic structures of spacecraft heat shield and parachute materials survive extreme temperatures and pressures, including simulated atmospheric entry conditions on Mars. Human exploration of Mars and other large-payload missions may require a new type of heat shield that is flexible and can remain folded up until needed. Candidate materials for this type of flexible heat shield, in addition to fabrics for Mars-mission parachutes deployed at supersonic speeds, are being tested with X-rays at Berkeley Lab’s Advanced Light Source (ALS) and with other techniques. “We are developing a system at the ALS that can simulate all material loads and stresses over the course of the atmospheric entry process,” said Harold Barnard, a scientist at Berkeley Lab’s ALS who is spearheading the Lab’s X-ray work with NASA. The success of the initial X-ray studies has also excited interest from the planetary defense scientific community looking to explore the use of X-ray experiments to guide our understanding of meteorite breakup. Data from these experiments will be used in risk analysis and aid in assessing threats posed by large asteroids. The ultimate objective of the collaboration is to establish a suite of tools that includes X-ray imaging and small laboratory experiments, computer-based analysis and simulation tools, as well as large-scale high-heat and wind-tunnel tests. These allow for the rapid development of new materials with established performance and reliability. This system can heat sample materials to thousands of degrees, subject them to a mixture of different gases found in other planets’ atmospheres, and with pistons stretch the material to its breaking point, all while imaging in real time their 3-D behavior at the microstructure level. NASA Ames Research Center (NASA ARC) in California’s Silicon Valley has traditionally used extreme heat tests at its Arc Jet Complex to simulate atmospheric entry conditions. Researchers at ARC can blast materials with a giant superhot blowtorch that accelerates hot air to velocities topping 11,000 miles per hour, with temperatures exceeding that at the surface of the sun. Scientists there also test parachutes and spacecraft at its wind-tunnel facilities, which can produce supersonic wind speeds faster than 1,900 miles per hour. Michael Barnhardt, a senior research scientist at NASA ARC and principal investigator of the Entry Systems Modeling Project, said the X-ray work opens a new window into the structure and strength properties of materials at the microscopic scale, and expands the tools and processes NASA uses to “test drive” spacecraft materials before launch. “Before this collaboration, we didn’t understand what was happening at the microscale. 
We didn’t have a way to test it,” Barnhardt said. “X-rays gave us a way to peak inside the material and get a view we didn’t have before. With this understanding, we will be able to design new materials with properties tailored to a certain mission.” He added, “What we’re trying to do is to build the basis for more predictive models. Rather than build and test and see if it works,” the X-ray work could reduce risk and provide more assurance about a new material’s performance even at the drawing-board stage. Francesco Panerai, a materials scientist with NASA contractor AMA Inc. and the X-ray experiments test lead for NASA ARC, said that the X-ray experiments at Berkeley Lab were on samples about the size of a postage stamp. The experimental data is used to improve realistic computer simulations of heat shield and parachute systems. “We need to use modern measurement techniques to improve our understanding of material response,” Panerai said. The 3-D X-ray imaging technique and simulated planetary conditions that NASA is enlisting at the ALS provide the best pictures yet of the behavior of the internal 3-D microstructure of spacecraft materials. The experiments are being conducted at an ALS experimental station that captures a sequence of images as a sample is rotated in front of an X-ray beam. These images, which provide views inside the samples and can resolve details less than 1 micron, or 1 millionth of a meter, can be compiled to form detailed 3-D images and animations of samples. This study technique is known as X-ray microtomography. “We have started developing computational tools based on these 3-D images, and we want to try to apply this methodology to other research areas, too,” he said.


News Article | November 30, 2016
Site: www.cemag.us

Tiny, glowing crystals designed to detect and capture heavy-metal toxins such as lead and mercury could prove to be a powerful new tool in locating and cleaning up contaminated water sources. Motivated by publicized cases in which high levels of heavy metals were found in drinking water in Flint, Mich., and Newark, N.J., a science team led by researchers at Rutgers University used intense X-rays at Lawrence Berkeley National Laboratory (Berkeley Lab) to probe the structure of the crystals they developed and learn how they bind to heavy metals. The crystals function like miniature, reusable sensors and traps, and are known as luminescent metal-organic frameworks, or LMOFs. One type of LMOF that the team tested was found to selectively take up more than 99 percent of mercury from a test mixture of heavy and light metals within 30 minutes, according to recent results published in Applied Materials and Interfaces. No other MOFs have performed as well in this dual role of detecting and capturing, or "adsorbing," toxic heavy metals, the team reports. Simon Teat, a Berkeley Lab staff scientist, studied individual LMOF crystals, each measuring about 100 microns (millionths of a meter), with X-rays at the lab's Advanced Light Source (ALS). Using diffraction patterns produced as the X-ray light struck the LMOF samples, Teat applied software tools to map their three-dimensional structure with atomic resolution. The ALS is one of just a few synchrotron X-ray light sources in the world that have dedicated experimental stations for chemical crystallography studies of crystallized chemical compounds such as MOFs. What he found was a patterned, grid-like 3D structure containing carbon, hydrogen, oxygen, nitrogen, and zinc atoms that framed large, open channels. These atomic-scale structural details are key to understanding how the LMOFs bind heavy metals, and can also aid in designing more highly specialized structures. "With MOFs, you're typically interested in using the holes for something," Teat explains. In this case, the structure allows heavy metals to enter these open channels and chemically bind to the MOFs. Their very open framework gives the MOFs an abundant surface area relative to their size, which allows them to take in a large amount of contaminants. The LMOF structure was engineered to glow by incorporating a fluorescent chemical component, or ligand. "When the metal binds to the fluorescent ligand, the resulting framework fluoresces," Teat says. The fluorescence of the LMOFs switches off when they interact with the heavy metals. According to Jing Li, a chemistry professor at Rutgers University who led the research, the technology could be a money-saving solution. "Others had developed MOFs for either the detection of heavy metals or for their removal, but nobody before had really investigated one that does both," Li adds. Intense X-rays produced at synchrotrons are the best way to map the 3D structure of the MOFs, Li says, adding, "Knowing the crystal structures is one of the most important aspects of our research. You need those in order to perform subsequent characterizations and to understand the properties of these materials." In their tests, researchers found that the LMOFs bind strongly to mercury and lead, but bind weakly to lighter metals such as magnesium and calcium that are also found in water supplies but do not pose the same hazards. This selective trait, based on the molecular makeup of the LMOFs, is important, Li says.
“We need to have a MOF that is selective and will only take the harmful species.” The LMOFs can also be recycled. Researchers found that they could collect, clean, and then reuse the LMOFs for three cycles of toxic cleansing before their performance began to degrade. The study notes that heavily industrialized areas, cities with antiquated water regulations, and agricultural communities can be particularly susceptible to groundwater contamination, which can lead to soil contamination if not addressed. This can cause the contaminants to be taken up by plants and animals in the surrounding environment, broadening the pathways of exposure. Li said that further R&D could explore lower-cost and more durable LMOFs that could last for more cycles, and researchers could also pursue the development of water filters by blending the LMOFs with polymers to create a solid film. “These filters could be used for capture on a larger scale,” she says. “We would like to continue with this research,” Li says, adding that her team would like to test the system’s performance on actual contaminated water sources if funding becomes available. “These are promising results, but we have a long way to go.” Her team also has used Berkeley Lab’s ALS to determine the crystal structures of MOFs for a wide variety of other applications, including high-explosives detection; toxin detection in foods; and new types of light-emitting components for LEDs, known as phosphors, that incorporate cheaper, more abundant materials. The Advanced Light Source is a DOE Office of Science User Facility. Researchers from the University of Texas at Dallas and Rider University also participated in this research. The work was supported by the DOE Office of Science.


News Article | November 2, 2016
Site: www.sciencenews.org

Scientists have lost their latest round of hide-and-seek with dark matter, but they’re not out of the game. Despite overwhelming evidence that an exotic form of matter lurks unseen in the cosmos, decades of searches have failed to definitively detect a single particle of dark matter. While some scientists continue down the road of increasingly larger detectors designed to catch the particles, others are beginning to consider a broader landscape of possibilities for what dark matter might be. “We’ve been looking where our best guess told us to look for all these years, and we’re starting to wonder if we maybe guessed wrong,” says theoretical astrophysicist Dan Hooper of Fermilab in Batavia, Ill. “People are just opening their minds to a wider range of options.” Dark matter permeates the cosmos: The material keeps galaxies from flying apart and has left its imprints in the oldest light in the universe, the cosmic microwave background, which dates back to just 380,000 years after the Big Bang. Indirect evidence from dark matter’s gravitational influences shows that it makes up the bulk of the mass in the universe. But scientists can’t pin down what dark matter is without detecting it directly. In new results published in August and September, three teams of scientists have come up empty-handed, finding no hints of dark matter. The trio of experiments searched for one particular variety of dark matter — hypothetical particles known as WIMPs, or weakly interacting massive particles, with a range of possible masses that starts at several times that of a proton. WIMPs, despite their name, are dark matter bigwigs — they have long been the favorite explanation for the universe’s missing mass. WIMPs are thought to interact with normal matter only via the weak nuclear force and gravity. Part of WIMPs’ appeal comes from a prominent but unverified theory, supersymmetry, which independently predicts such particles. Supersymmetry posits that each known elementary particle has a heavier partner; the lightest partner particle could be a dark matter WIMP. But evidence for supersymmetry hasn’t materialized in particle collisions at the Large Hadron Collider in Geneva, so supersymmetry’s favored status is eroding (SN: 10/1/16, p. 12). Supersymmetry arguments for WIMPs are thus becoming shakier — especially since WIMPs aren’t showing up in detectors. Scientists typically search for WIMPs by looking for interactions with normal matter inside a detector. Several current experiments use tanks of liquefied xenon, an element found in trace amounts in Earth’s atmosphere, in hopes of detecting the tiny amounts of light and electric charge that would be released when a WIMP strikes a xenon nucleus and causes it to recoil. The three xenon experiments are the Large Underground Xenon, or LUX, experiment, located in the Sanford Underground Research Facility in Lead, S.D.; the PandaX-II experiment, located in China’s JinPing underground laboratory in Sichuan; and the XENON100 experiment, located in the Gran Sasso National Laboratory in Italy. Teams of scientists at the three locations each reported no signs of dark matter particles. The experiments are most sensitive to particles with masses around 40 or 50 times that of a proton. Scientists can’t completely rule out WIMPs of these masses, but the interactions would have to be exceedingly rare. In initial searches, proponents of WIMPs expected that the particles would be easy to find. 
“It was thought to be like, ‘OK, we’ll run the detector for five minutes, discover dark matter, and we’re all done,’” says physicist Matthew Szydagis of the University at Albany in New York, a member of LUX. That has turned into decades of hard work. As WIMPs keep failing to turn up, some scientists are beginning to become less enamored with the particles and are considering other possibilities more closely. One alternative dark matter contender now attracting more attention is the axion. This particle was originally proposed decades ago as part of the solution to a particle physics quandary known as the strong CP problem — the question of why the strong nuclear force, which holds particles together inside the nucleus, treats matter and antimatter  equally. If dark matter consists of axions, the particle could therefore solve two problems at once. Axions are small fry as dark matter goes — they can be as tiny as a millionth of a billionth the mass of a WIMP. The particles interact so feebly that they are extremely difficult to detect. If axions are dark matter, “you’re sitting in an enormous, dense sea of axions and you don’t even notice them,” says physicist Leslie Rosenberg of the University of Washington in Seattle, the leader of the Axion Dark Matter eXperiment. After a recent upgrade to the experiment, ADMX scientists are searching for dark matter axions using a magnetic field and special equipment to coax the particles to convert into photons, which can then be detected. Although WIMPs and axions remain the front-runners, scientists are beginning to move beyond these two possibilities. In between the featherweight axions and hulking WIMPs lies a broad range of masses that hasn’t been well explored. Scientists’ favorite theories don’t predict dark matter particles with such intermediate masses, says theoretical physicist Kathryn Zurek of Lawrence Berkeley National Laboratory in California, but that doesn’t mean that dark matter couldn’t be found there. Zurek advocates a diverse search over a broad range of masses, instead of focusing on one particular theory. “Dark matter direct detection is not one-size-fits-all,” she says. In two papers published in Physical Review Letters on January 7 and September 14,  Zurek and colleagues proposed using superconductors — materials that allow electricity to flow without resistance — and superfluids, which allow fluids to flow without friction, to detect light dark matter particles. “We are trying to broaden as much as possible the tools to search for dark matter,” says Zurek. Likewise, scientists with the upcoming Super Cryogenic Dark Matter Search SNOLAB experiment, to be located in an underground lab in Sudbury, Canada, will use detectors made of germanium and silicon to search for dark matter with smaller masses than the xenon experiments can. Scientists have not given up on xenon WIMP experiments. Soon some of those experiments will be scaling up — going from hundreds of kilograms of liquid xenon to tons — to improve their chances of catching a dark matter particle on the fly. The next version of XENON100, the XENON1T experiment (pronounced “XENON one ton”) is nearly ready to begin taking data. LUX’s next generation experiment, known as LUX-ZEPLIN or LZ, is scheduled to begin in 2020. PandaX-II scientists are also planning a sequel. Physicists are still optimistic that these detectors will finally find the elusive particles. 
“Maybe we will have some opportunity to see something nobody has seen,” says Xiangdong Ji of Shanghai Jiao Tong University, the leader of PandaX-II. “That’s what’s so exciting.” In the sea of nondetections of dark matter, there is one glaring exception. For years, scientists with the DAMA/LIBRA experiment at Gran Sasso have claimed to see signs of dark matter, using crystals of sodium iodide. But other experiments have found no signs of DAMA’s dark matter. Many scientists believe that DAMA has been debunked. “I don't know what generates the weird signal that DAMA sees,” says Hooper. “That being said, I don't think it's likely that it’s dark matter.” But other experiments have not used the same technology as DAMA, says theoretical astrophysicist Katherine Freese of the University of Michigan in Ann Arbor. “There is no alternative explanation that anybody can think of, so that is why it is actually still very interesting.” Three upcoming experiments should soon close the door on the mystery, by searching for dark matter using sodium iodide, as DAMA does: the ANAIS experiment in the Canfranc Underground Laboratory in Spain, the COSINE-100 experiment at YangYang Underground Laboratory in South Korea, and the SABRE experiment, planned for the Stawell Underground Physics Laboratory in Australia. Scientists’ efforts could still end up being for naught; dark matter may not be directly detectable at all. “It’s possible that gravity is the only lens with which we can view dark matter,” says Szydagis. Dark matter could interact only via gravity, not via the weak force or any other force. Or it could live in its own “hidden sector” of particles that interact among themselves, but mostly shun normal matter. Even if no particles are detected anytime soon, most scientists remain convinced that an unseen form of matter exists. No alternative theory can explain all of scientists’ cosmological observations. “The human being is not going to give up for a long, long time to try to search for dark matter, because it’s such a big problem for us,” says Ji.


Cover for 'A Retrospective Analysis of the Benefits and Impacts of US Renewable Portfolio Standards'. Credit: Berkeley Lab A new study estimates that $2.2 billion in benefits came from reduced greenhouse gas emissions and $5.2 billion from reductions in other air pollution for state renewable portfolio standard (RPS) policies operating in 2013. The report also shows national water withdrawals and consumption were reduced by 830 billion gallons and 27 billion gallons in 2013, respectively. The report, entitled A Retrospective Analysis of the Benefits and Impacts of U.S. Renewable Portfolio Standards, was conducted by researchers from the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and National Renewable Energy Laboratory (NREL) and evaluates the benefits and other impacts of RPS policies. RPS policies require utilities or other electricity providers to meet a minimum portion of their load with eligible forms of renewable electricity. They currently exist in 29 U.S. states plus Washington, D.C., and have been a driver for renewable electricity generation in the United States over the past decade. Many states are currently considering whether to extend, eliminate, or otherwise revise existing RPS policies. "This work is intended to inform these ongoing discussions by helping states evaluate RPS programs," said Berkeley Lab's Ryan Wiser, one of the report authors. The study takes care to describe its methods and highlights uncertainties in its findings. For example, benefits from greenhouse gas reductions were estimated to range from $0.7 to $6.3 billion, reflecting differences in underlying estimates of damages caused by climate change. Similarly, air pollution reduction benefits—which arise primarily from avoided premature mortality—were estimated to range from $2.6 to $9.9 billion in 2013, reflecting differences in underlying epidemiological literature, among other factors. "Our goal was to estimate the magnitude of RPS benefits and impacts at a national-level, using established methodologies, while recognizing that individual states can perform their own, more-detailed assessments," adds NREL's Jenny Heeter, another of the report's authors. In addition to evaluating environmental benefits, the study also assessed other impacts. Specifically, the research estimates that RPS policies supported 200,000 renewable energy-related jobs in 2013, saved consumers up to $1.2 billion from reduced wholesale electricity prices and another $1.3 to $3.7 billion from reduced natural gas prices. Consumer savings from reduced electricity and natural gas prices occur because renewable electricity displaces other electricity generation with higher operating costs, much of which is fueled by natural gas. The study is careful to describe these as impacts rather than benefits as they represent resource transfers from some stakeholders to others, rather than net societal benefits on state, national, or global scales. This work was a follow-up and complement to an earlier study by the two labs that focused on the costs of state RPS programs to-date and that noted the need for a full understanding of the potential benefits, impacts, and costs of RPS programs. To that end, this most recent study provides a point of comparison for estimates of RPS program costs. 
Based on the results of this national study, benefits from reduced greenhouse gas emissions equate to 0.7 to 6.4 cents per kilowatt-hour (kWh) of renewable energy, while benefits from reduced emissions of criteria air pollutants amount to 2.6 to 10.1 cents per kWh. Consumer savings from wholesale electricity market and natural gas price reductions represent another 0 to 1.2 cents per kWh and 1.3 to 3.7 cents per kWh, respectively. Although the study takes a national view—evaluating all state RPS programs as a whole—many of the associated benefits and impacts were highly regional. For example, the economic benefits from air pollution reductions are associated mostly with reduced sulfur dioxide (SO2) emissions from coal-fired power plants and are concentrated primarily in the Mid-Atlantic, Great Lakes, Northeast, and Texas. Reductions in water withdrawal and consumption were largest in California and Texas respectively—both states that regularly experience droughts—while renewable energy jobs from RPS projects were concentrated mostly in California, where large amounts of utility-scale photovoltaic generation was being built in 2013. Having now examined both the costs and benefits of state RPS programs historically, the researchers are planning a follow-up effort for the coming year to evaluate the costs and benefits of RPS programs prospectively, considering scheduled increases to each state's requirements as well as potential policy revisions.
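Adding up the headline numbers from this report is straightforward; the sketch below simply totals the central benefit estimates and the upper end of the consumer-savings impacts quoted above, keeping the two categories separate as the report itself does.

# Central benefit estimates for 2013 (dollars), as quoted above.
ghg_benefits = 2.2e9
air_quality_benefits = 5.2e9
total_benefits = ghg_benefits + air_quality_benefits
print(f"central estimate of quantified societal benefits: ${total_benefits/1e9:.1f} billion")

# Consumer-savings impacts, reported separately from benefits (upper ends quoted above).
wholesale_savings_max = 1.2e9
gas_price_savings_max = 3.7e9
total_impacts_max = wholesale_savings_max + gas_price_savings_max
print(f"upper end of consumer-savings impacts: ${total_impacts_max/1e9:.1f} billion")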


News Article | March 7, 2016
Site: www.rdmag.com

Cutting-edge simulations run at Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) over a two-year period are helping physicists better understand what influences the behavior of the plasma turbulence that is driven by the intense heating necessary to create fusion energy. This research has yielded exciting answers to long-standing questions about plasma heat loss that have previously stymied efforts to predict the performance of fusion reactors and could help pave the way for this alternative energy source. The key to making fusion work is to maintain a sufficiently high temperature and density to enable the atoms in the reactor to overcome their mutual repulsion and bind to form helium. But one side effect of this process is turbulence, which can increase the rate of plasma heat loss, significantly limiting the resulting energy output. So researchers have been working to pinpoint both what causes the turbulence and how to control or possibly eliminate it. Because fusion reactors are extremely complex and expensive to design and build, supercomputers have been used for more than 40 years to simulate the conditions to create better reactor designs. NERSC is a Department of Energy Office of Science User Facility that has supported fusion research since 1974. One roadblock in the quest for fusion is that, to date, computer models have often been unable to predict exactly how turbulence will behave inside the reactor. In fact, there have long been differences between predictions and experimental results in fusion experiments when studying how turbulence contributes to heat loss in the confined plasma. Now researchers at MIT's Plasma Science and Fusion Center, in collaboration with colleagues at the University of California at San Diego (UCSD) and General Atomics, have discovered a solution to this disparity. By performing high-resolution multi-scale simulations, the team was able to simultaneously resolve multiple turbulence instabilities that have previously been treated in separate simulations. A series of these multi-scale simulations run on NERSC's Edison system found that interactions between turbulence at the tiniest scale (that of electrons) and turbulence at a scale 60 times larger (that of ions) can account for the mysterious mismatch between theoretical predictions and experimental observations of the heat loss. The findings, published December 17, 2015 in Nuclear Fusion, could greatly improve our knowledge of what's really going on inside the current tokamak research experiments that exist around the world and in future experimental reactors under construction or planning, the researchers noted. "For a very long time, the predictions from leading theories have been unable to explain how much heat loss is coming from electrons in fusion plasma," said Nathan Howard, a research scientist at MIT's Plasma Science and Fusion Center and lead author on the Nuclear Fusion paper. "You apply your best theories, but they have underpredicted the amount of heat loss coming from the electrons. In this particular work, we have shown that using the coupled model--where you capture both the large-scale and small-scale turbulence simultaneously--you can actually reproduce the experimental electron heat losses, in part because there appear to be strong interactions between the large-scale and small-scale turbulence that weren't well understood previously."
Doing so requires prodigious amounts of computer time to run simulations that encompass such widely disparate scales, he emphasized. The entire study took between 100 million and 120 million CPU hours on Edison, Howard noted; each simulation required approximately 15 million hours of computation, carried out by using 17,000-30,000 processors at NERSC, making Howard the biggest user of NERSC computing time in 2014. Using an ordinary MacBook Pro to run the full set of six simulations that the team carried out, he estimates, would have taken about 3,000 years. "I ran one entire simulation in 36-hour jobs and had to restart it about 24 times to get it to run long enough," he explained. "And that's not accounting for the amount of time it took to get jobs through the queue on the supercomputer." For the simulations, the researchers used data from experiments conducted in 2012 at MIT's Alcator C-Mod tokamak. The multi-scale simulations used the gyrokinetic model implemented by the GYRO code developed by Jeff Candy at General Atomics. The key inputs into the simulation codes were calculated from the experimental data; the simulation results were then compared to the experimentally derived measurements. "We are actually able to reproduce multiple aspects of the experiment with our simulations, which is important because it gives you some confidence that what you are seeing in the simulations is actually representative of reality," Howard said. For more than a decade, the general expectation by physicists had been that, because the turbulent "swirls" associated with the ions are so much larger than those associated with electrons, electron-scale swirls would simply be smeared out by the much larger turbulent ion motion. And even if the smaller swirls survived the larger ion-scale turbulence, their tiny size suggested that their effects on heat loss would be negligible. But the new findings show that this thinking is not always correct. The two scales of turbulence can indeed coexist, the researchers found, and when they do they can interact with each other so strongly that it's impossible to predict the total heat loss accurately unless using a simulation that simultaneously resolves both scales. In fact, far from being eliminated by the larger-scale turbulence, the tiny swirls produced by electrons continue to be clearly visible in the results, stretched out into long ribbons that wind around the donut-shaped vacuum chamber that characterizes a tokamak reactor. Despite the temperature of 100 million degrees Celsius inside the plasma, these ribbon-like swirls, or eddies, persist long enough to influence how heat gets dissipated from the turbulent mass--a determining factor in how much fusion can actually take place inside the reactor. "This is the first time a lot of these very big hypotheses have been confirmed and shown to be operative in relevant plasma conditions," said Chris Holland, associate research scientist in the Center for Energy Research at UCSD and a co-author on the Nuclear Fusion paper. Looking ahead, work is already underway to modify the numerical and parallelization algorithms in GYRO so it will be able to make optimal use of many-integrated-core architectures (like Cori Phase II) as well as sophisticated accelerator hardware on emerging and future exascale platforms. "We are redesigning the GYRO code from the ground up to take advantage of these powerful but challenging new architectures and to increase the physics fidelity of the simulations," Candy said. 
"The challenge we are working on right now is figuring out how to get the complicated simulations, which require significant memory size and bandwidth, to work efficiently and scale well on these new platforms so we can continue to study even more complex scenarios," Holland added. "I think this is a problem that can be solved, and if we are successful it opens up exciting possibilities for new ways of doing the simulations, looking at more scales and doing more comprehensive predictions for ITER and other kinds of reactors. If we can find a way to use the new generation of platforms and make these simulations more routine, then it becomes a really exciting tool."


News Article | November 3, 2016
Site: www.rdmag.com

Scientists at Lehigh University, in collaboration with Lawrence Berkeley National Laboratory, have demonstrated the fabrication of what they call a new class of crystalline solid by using a laser heating technique that induces atoms to organize into a rotating lattice without affecting the macroscopic shape of the solid. By controlling the rotation of the crystalline lattice, the researchers say they will be able to make a new type of synthetic single crystal and "bio-inspired" materials that mimic the structure of special biominerals and their superior electronic and optical properties as well. The group reported its findings today (Nov. 3) in Scientific Reports, a Nature journal, in an article titled "Rotating lattice single crystal architecture on the surface of glass." The paper's lead author is Dmytro Savytskii, a research scientist in the department of materials science and engineering at Lehigh. The other authors are Volkmar Dierolf, distinguished professor and chair of the department of physics at Lehigh; Himanshu Jain, the T.L. Diamond Distinguished Chair in Engineering and Applied Science and professor of materials science and engineering at Lehigh; and Nobumichi Tamura of the Lawrence Berkeley National Lab in Berkeley, California. The development of the rotating lattice single (RLS) crystals follows a discovery reported in March in Scientific Reports in which the Lehigh group demonstrated for the first time that a single crystal could be grown from glass without melting the glass. In a typical crystalline solid, atoms are arranged in a lattice, a regularly repeating, or periodic, three-dimensional structure. When viewed from any angle--left to right, up and down, front to back--a crystal-specific periodicity becomes evident. Glass, by contrast, is an amorphous material with a disordered atomic structure. Because they have no grain boundaries between interconnecting crystals, single-crystal materials often possess exceptional mechanical, optical and electrical properties. Single crystals give diamonds their brilliance and jet turbine blades their resistance to mechanical forces. And the single crystal of silicon of which a silicon chip is made gives it superior conducting properties that form the basis for microelectronics. The periodicity, or repeating pattern, in a rotating lattice single crystal, said Jain and Dierolf, differs from the periodicity in a typical single crystal. "We have found that when we grow a crystal out of glass," said Jain, "the periodicity does not develop the same way. In one direction, it looks perfect, but if you turn the lattice and look at it from a different angle, you see that the whole structure is rotating." "In a typical single-crystal material," said Dierolf, "once I figure out how the pattern repeats, then, if I know the precise location of one atom, I can predict the precise location of every atom. This is possible only because single crystals possess a long-range order. "When we grow an RLS crystal out of glass, however, we have found that the periodicity does not develop the same way. To predict the location of every atom, I have to know not just the precise location of a particular atom but the rotation angle of the lattice as well. "Thus, we have to slightly modify the textbook definition of single crystals." The rotation, said Jain, occurs at the atomic scale and does not affect the shape of the glass material. "Only the string of atoms bends, not the entire material. We can see the bending of the crystal lattice with x-ray diffraction." 
To achieve this rotation, the researchers heat a very small portion of the surface of a solid glass material with a laser, which causes the atoms to become more flexible. "The atoms want to arrange in a straight line but the surrounding glass does not allow this," said Jain. "Instead, the glass, being completely solid, forces the configuration of the atoms to bend. The atoms move and try to organize in a crystalline lattice, ideally in a perfect single crystal, but they cannot because the glass prevents the perfect crystal from forming and forces the atoms to arrange in a rotational lattice. The beauty is that the rotation occurs smoothly on the micrometer scale. "Our laser imposes a degree of asymmetry on the growth of the crystal. We control the asymmetry of the heating source to impose this rotational pattern on the atoms." The group's ability to control the amount of heating is critical to the formation of the rotating lattice, said Jain. "The key to the creation of the rotating atomic lattice is that it occurs without melting the glass. Melting allows too much freedom of atomic movement, which makes it impossible to control the organization of the lattice. "Our subtle way of heating the glass overcomes this. We heat only the surface of the glass, not inside. This is very precise, very localized heating. It causes only a limited movement of the atoms, and it allows us to control how the atomic lattice will bend." Rotating lattices have been observed in certain biominerals in the ocean, said Jain and Dierolf, and they may also occur on a very small scale in some natural minerals, as spherulites. "But no one had previously made this on a larger scale in a controlled way, which we have accomplished with the asymmetrical imposition of a laser to cause the rotating lattice," said Jain. "Scientists were not able to understand this phenomenon before because they could not observe it on a large enough scale. We are the first group to induce this to happen on an effectively unlimited dimension with a laser." Jain and Dierolf and their group are planning further studies to improve their ability to manipulate the ordering of the atoms. The researchers performed the laser heating of the glass at Lehigh and characterized the glass with micro x-ray diffraction on a synchrotron at the Lawrence Berkeley National Lab. They plan to perform further characterization at Berkeley and with electron microscopy at Lehigh. The project has been funded for six years by the U.S. Department of Energy. "This is a novel way of making single crystals," said Dierolf. "It opens a new field by creating a material with unique, novel properties."
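To make the rotating-lattice idea concrete, here is a toy two-dimensional sketch, entirely illustrative and not tied to the actual material studied: a string of atoms keeps a fixed spacing while its local orientation rotates slowly with position, so locating any atom requires both the spacing and the local rotation angle.

```python
# Toy model of a rotating-lattice "string" of atoms: perfectly periodic spacing,
# but the lattice direction rotates smoothly from site to site.
import math

a = 1.0                                   # lattice spacing (arbitrary units)
rotation_per_site = math.radians(0.5)     # assumed smooth rotation of 0.5 degrees per site

def atom_chain(n_atoms):
    x, y, theta = 0.0, 0.0, 0.0
    positions = [(x, y)]
    for _ in range(n_atoms - 1):
        x += a * math.cos(theta)
        y += a * math.sin(theta)
        theta += rotation_per_site        # the string of atoms bends; the spacing never changes
        positions.append((x, y))
    return positions

chain = atom_chain(200)
print(chain[0], chain[-1])   # the chain curves away from a straight line over ~200 sites
```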


News Article | November 10, 2016
Site: www.eurekalert.org

At its start, the universe was a superhot melting pot that very briefly served up a particle soup resembling a "perfect," frictionless fluid. Scientists have recreated this "soup," known as quark-gluon plasma, in high-energy nuclear collisions to better understand our universe's origins and the nature of matter itself. The physics can also be relevant to neutron stars, which are the extraordinarily dense cores of collapsed stars. Now, powerful supercomputer simulations of colliding atomic nuclei, conducted by an international team of researchers including a Berkeley Lab physicist, provide new insights into the twisting, whirlpool-like structure of this soup and what's at work inside it, and also light a path to how experiments could confirm these characteristics. The work is published in the Nov. 1 edition of Physical Review Letters. This soup contains the deconstructed ingredients of matter, namely fundamental particles known as quarks and other particles called gluons that typically bind quarks to form other particles, such as the protons and neutrons found at the cores of atoms. In this exotic plasma state -- which can reach trillions of degrees Fahrenheit, hundreds of thousands of times hotter than the sun's core -- protons and neutrons melt, freeing quarks and gluons from their usual confines at the center of atoms. These record-high temperatures have been achieved by colliding gold nuclei at Brookhaven National Laboratory's RHIC (Relativistic Heavy Ion Collider), for example, and lead nuclei at CERN's LHC (Large Hadron Collider). Experiments at RHIC discovered in 2005 that quark-gluon plasma behaves like a fluid. In addition to gold nuclei, RHIC has also been used to collide protons, copper and uranium. The LHC began conducting heavy-ion experiments in 2010, and has confirmed that the quark-gluon plasma behaves like a fluid. There remain many mysteries about the inner workings of this short-lived plasma state, which may only have existed for millionths of a second in the newborn universe, and nuclear physicists are using a blend of theory, simulations and experiments to glean new details about this subatomic soup. "In our sophisticated simulations, we found that there is much more structure to this plasma than we realized," said Xin-Nian Wang, a theorist in the Nuclear Science Division at Berkeley Lab who has worked for years on the physics of high-energy nuclear collisions. When plotted out in two dimensions, the simulations found that slightly off-center collisions of heavy nuclei produce a wobbling and expanding fluid, Wang said, with local rotation that is twisted in a corkscrew-like fashion. This corkscrew character relates to the properties of the colliding nuclei that created the plasma, which the simulation showed expanding along -- and perpendicular to -- the beam direction. Much as flicking a coin with your finger sets it spinning, the simulations showed that the angular momentum of the colliding nuclei can transfer spin to the quark-gluon plasma in the form of swirling, ring-like structures known as vortices. The simulations showed two of these doughnut-shaped vortices -- each with a right-handed orientation around each direction of the separate beams of the colliding nuclei -- and also many pairs of oppositely oriented vortices along the longest dimension of the plasma. These doughnut-shaped features are analogous to swirling smoke rings and are a common feature in classical studies of fluids, a field known as hydrodynamics. 
The simulations also revealed a patterned outward flow from hot spots in the plasma that resembles the spokes of a wheel. The time scale covered in the simulation was infinitesimally small, Wang said, roughly the amount of time it takes light to travel a distance of 10-20 proton diameters. During this time the wobbling fluid explodes like a fireball, spurting the particle soup outward from its middle more rapidly than from its top. Any new understanding of quark-gluon plasma properties should be helpful in interpreting data from nuclei-colliding experiments, Wang said, noting that the emergence of several localized doughnut-like structures in the simulations was "completely unexpected." "We can think about this as opening a completely new window of looking at quark-gluon plasmas, and how to study them," he said. "Hopefully this will provide another gateway into understanding why this quark-gluon fluid is such a perfect fluid -- the nature of why this is so is still a puzzle. This work will benefit not only theory, but also experiments." The simulations provide more evidence that the quark-gluon plasma behaves like a fluid, and not a gas as had once been theorized. "The only way you can describe this is to have a very small viscosity," or barely any friction -- a characteristic of a so-called "perfect fluid" or "fundamental fluid," Wang said. But unlike a familiar fluid such as water, the fluid state in the simulation exists at a scale hundreds of times smaller than a water molecule. Michael Lisa, a physics professor at Ohio State University who is part of the collaboration supporting the Solenoidal Tracker at RHIC (STAR), said the so-called vorticity or "swirl structure" of this plasma has never been measured experimentally, though this latest theoretical work may help to home in on it. STAR is designed to study the formation and characteristics of the quark-gluon plasma. "Wang and his collaborators have developed a sophisticated, state-of-the-art hydrodynamic model of the quark-gluon plasma and have identified swirling structures that vary within the fluid itself," he said. "Even more useful is the fact that they propose a method to measure these structures in the laboratory." Lisa also said there is ongoing analysis work to confirm the simulation's findings in data from experiments at RHIC and the LHC. "It is precisely innovations like this, where theory and experiment collaborate to explore new phenomena, that hold the greatest hope for greater insight into the quark-gluon plasma," he said. "Many tools have been used to probe the inner working mechanics and symmetry properties of this unique matter," said Zhangbu Xu, a spokesperson for the STAR collaboration and a staff scientist at Brookhaven National Laboratory. He also said that preliminary results from STAR suggest some spinning motion in the fluid, and the simulation work "adds a new dimension" to this possibility. To carry out the simulations, the team performed calculations using the Green Cube at the GSI Helmholtz Center for Heavy Ion Research in Germany and also at Central China Normal University. Researchers from Germany's Frankfurt Institute for Advanced Studies, Goethe University, and GSI Helmholtz Center for Heavy Ion Research, as well as from the University of Science and Technology of China and Central China Normal University, also participated in this work. 
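For scale, the sketch below converts "the time it takes light to travel a distance of 10-20 proton diameters" into seconds; the roughly 1.7-femtometer proton diameter is a standard figure assumed here, not a number from the paper.

```python
# Back-of-envelope conversion of the quoted time scale into seconds.
proton_diameter_m = 1.7e-15    # ~1.7 femtometers, an assumed standard value
c = 3.0e8                      # speed of light, m/s

for n in (10, 20):
    print(f"{n} proton diameters: ~{n * proton_diameter_m / c:.1e} s")   # ~6e-23 to ~1e-22 seconds
```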
This work was supported by the Department of Energy's Office of Science, Germany's Helmholtz Association, GSI Helmholtz Center for Heavy Ion Research, and the State Offensive for the Development of Scientific and Economic Excellence (LOEWE) program. The Relativistic Heavy Ion Collider is a DOE Office of Science User Facility. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel Prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit http://www.lbl.gov. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | February 5, 2016
Site: www.theenergycollective.com

State renewable portfolio standards are widely recognized for their critical role in driving renewable energy growth. However, their impacts on future renewable energy development have not been fully recognized, even by key government agencies. In a recent study, Benjamin Sovacool and I found that the Energy Information Administration has consistently under-projected renewable energy growth, due in part to improper modelling of state RPS mandates. Last year, the Union of Concerned Scientists similarly found that EPA's modelling for the proposed Clean Power Plan did not fully account for state RPS policies in several states. These forecasts are critical to shaping perspectives about renewable energy and are major inputs in developing policy, particularly at the federal level. To develop a better idea of the future impact of state RPS policies, this analysis develops quantitative estimates of how much renewable energy is mandated by existing state policies. Although state RPSs are only part of the picture, this bottom-up approach is a first step in assessing the future of renewable energy in the U.S. To date, state renewable policies have been critical in driving renewable energy growth. By 2013, RPS-compliant renewable energy comprised almost 5% of total U.S. electricity generation, bringing in billions in economic and environmental benefits at limited cost. As states ramp up their RPS requirements in the next fifteen years, state RPS policies are likely to become even more important. Data from Lawrence Berkeley National Laboratory provides an initial indication of why. In addition to tracking compliance, LBNL provides estimates of how much renewable energy will be needed to meet current renewable mandates during the next twenty years. These projections indicate that at least 359,474 GWh of eligible renewables will be needed to comply with state standards by 2030. This nears 9% of total U.S. electricity generation and is more than double the amount of renewable energy required by RPS mandates in 2013. Notably, RPS-compliant renewable energy is not expected to grow uniformly between 2015 and 2030. Instead we see a rapid ramp up in RPS requirements between 2015 and 2021, with annualized growth above 9%. This is a result of many state RPSs having final goals around 2020. Geographically, RPS-compliant energy to date has been very concentrated. Over the next fifteen years, RPS-compliant energy will become somewhat less concentrated. California is likely to remain a major RPS state – LBNL's projections for 2030 indicate California would be responsible for around 26% of total RPS-compliant renewable energy in the U.S., near 2013 levels. However, the share of RPS-compliant energy from Texas, New Jersey, New York, and Pennsylvania would shrink from 26% in 2013 to less than 11% in 2030. The remaining share of RPS-compliant renewable energy (63%) will become more geographically diverse, coming from the remaining 24 states with mandatory RPS policies. Critically, the above estimates from LBNL were last updated in July 2015 and hence do not include several major changes in state policies since then. Most dramatically, they do not include California increasing its RPS goals from 33% by 2020 to 50% by 2030. Similarly, New York Governor Andrew Cuomo has directed the New York Public Service Commission to increase the state's RPS to 50% by 2030 from its current level of 30% by 2015. Hawaii has also increased its RPS to 100% by 2045. 
One of the key findings in my recent paper with Benjamin Sovacool was that a dynamic policy environment is a challenge to estimating future levels of renewable energy. These new policies are prime examples. My ballpark estimates indicate that the revisions in California and New York alone could increase the amount of renewable energy required for state RPS policies in 2030 by 70,000-80,000 GWh, depending on load growth. Thus, current state RPS policies would lead to around 11% of U.S. generation coming from RPS-compliant renewable electricity in 2030, up from less than 3% in 2010. This is more than a 20% increase from what RPS mandates would have required six months ago. Clearly, our expectations regarding the future of energy must be continuously updated as policies and technology change. Wind generation has been the primary compliance option for utilities to meet RPS goals in the last decade, primarily due to the relatively low cost of wind generation and federal tax credit support. Long-term contracts and continued wind installations mean that wind will continue to play a major role in meeting RPS requirements moving forward. However, solar is likely to become a major compliance method for renewable standards over the next fifteen years. The primary reason is the massive and rapid decrease in solar PV costs. In the past year, the rapid decreases in solar costs and corresponding increase in installations have been pretty thoroughly covered, so I'll keep the discussion brief. Again, LBNL provides great in-depth analysis for utility-scale solar and distributed solar. In LBNL's sample, the average price for power purchase agreements for utility solar has dropped from more than $100/MWh in 2011 to less than $50/MWh in 2015. These major cost decreases are driving solar installations to new levels. Installation levels in 2015 set new records, with as much as 3 GW installed in Q4 2015 alone. More dramatically, EIA data indicates that more than 8.3 GW of large-scale solar PV are planned for 2016. That's excluding distributed generation less than 1.0 MW in capacity, which is likely to drive several more gigawatts of new solar. In many ways, solar is more attractive than wind. While wind generation is primarily concentrated in the Great Plains, solar PV resources are pretty good across most of the country. Further, most solar generation occurs during peak, daylight hours when wholesale electricity prices are higher. Even if solar costs somewhat more than wind on a per MWh basis, it usually offsets higher-cost electricity, making it more financially attractive. As solar reaches full maturity and commercialization nationwide, we are likely to see it adopted even more quickly than wind, with RPS policies providing major demand support. Already, solar is beginning to dominate new builds for RPS-compliance. Between 2013 and 2014, LBNL and NREL estimate that solar was responsible for 76% of new capacity installations to meet RPS requirements. RPS policies have been critical policy mechanisms to incentivize the development of wind and, increasingly, solar markets across the United States. This analysis indicates that existing RPS policies are likely to drive non-hydro renewables to more than 11% of total U.S. electricity generation by 2030, more than double current levels. Voluntary renewable markets, boosted by cost innovation from renewable deployment in RPS states, could lead to at least an additional 2-4% of U.S. generation coming from renewables by 2030. 
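A minimal sketch of the back-of-the-envelope math behind the roughly 9% and 11% figures above; the total-generation denominator (about 4,100 TWh per year, roughly recent U.S. levels) is my own assumption, and in practice it would shift with load growth.

```python
# Ballpark reproduction of the RPS share estimates quoted above.
us_generation_gwh = 4_100_000      # assumed total U.S. generation, GWh/yr (not from the LBNL data)
rps_2030_gwh = 359_474             # LBNL projection of RPS-compliant renewables needed in 2030

print(f"2030 RPS share (pre-revision): {rps_2030_gwh / us_generation_gwh:.1%}")   # ~9%

# Add the rough 70,000-80,000 GWh from the newer California and New York targets.
for extra in (70_000, 80_000):
    share = (rps_2030_gwh + extra) / us_generation_gwh
    print(f"with +{extra:,} GWh: {share:.1%}")      # ~10.5-10.7%, approaching the ~11% figure
```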
This provides a solid base from which renewables can become a central electricity source in the U.S. However, the recent extension of federal renewable energy tax credits only makes the future more bullish for both wind and solar. The 30% investment tax credit for solar has been extended in full for another three years before ramping down to a permanent 10% in 2022. Meanwhile, the wind production tax credit will be ramped down through 2020. This additional financial support, coming at a time when many state RPSs are beginning to ramp up, will drive a major boom in both solar and wind installations. Critically, the new tax credits provide support through the beginning of Clean Power Plan implementation. Existing state RPS mandates, renewed federal tax credits, and the Clean Power Plan mean further cost reductions in both industries are likely in the next few years. Federal and state policies promise to be even more complementary in the future, with significant economic and environmental ramifications. As it becomes cheaper and cheaper to implement an RPS, more states may create mandatory standards or significantly strengthen existing ones. The recent moves by California, New York, and Hawaii could signal the beginnings of a new wave of RPS policies.


News Article | November 17, 2016
Site: www.eurekalert.org

For the third time in its history, Thomas Jefferson National Accelerator Facility is home to one of the world's 500 fastest supercomputers. The SciPhi-XVI supercomputer was just listed as a TOP500 Supercomputer Site on November 14, placing 397th on the 48th edition of the list of the world's top supercomputers. The Top500 list was announced Monday, Nov. 14, at The International Conference for High Performance Computing, Networking, Storage and Analysis, SC16, in Salt Lake City. It's announced semi-annually and comprises systems that are ranked according to their computation speeds as determined by the LINPACK Benchmark, which solves a linear system represented by a large double-precision dense matrix. SciPhi-XVI was delivered to Jefferson Lab and installed in mid-August and was deployed on Sept. 30 with 200 nodes. An additional 64 nodes were added to the machine in late October, and the High Performance Computing group then benchmarked the machine. The SciPhi-XVI machine benchmarked at 425.8 Teraflops using 256 of its 264 nodes. The cluster's 264 nodes are built on the second-generation Intel® Xeon Phi™ chips, which were released in 2016 and are codenamed Knights Landing. Each node has 64 computing cores, and individual nodes within the machine are connected via the Intel® Omni-Path Architecture, which supports a 100 Gigabit/second data transfer rate between the nodes. "A special Intel distributed LINPACK benchmark software, HPL (High Performance Linpack), was used to rank the Jefferson Lab cluster, and it is based on an open source distribution with performance enhancement for Intel® Xeon Phi™ processors," said Jie Chen, a member of Jefferson Lab's High Performance Computing group. "To achieve the performance number for the SciPhi-XVI cluster, a set of HPL running parameters has to be determined through many trials. In particular, the size of the matrix representing the linear equations solved by the HPL was chosen to be 2,457,600." Jefferson Lab conducts discovery-caliber nuclear physics research in exploring the atomic nucleus and its fundamental constituents, such as protons and neutrons, and these particles' building blocks: quarks and gluons. The theory of how these particles interact -- Quantum Chromodynamics, or QCD -- is calculated in the form of lattice QCD, or LQCD. Scientists will use the SciPhi-XVI machine for LQCD calculations, as well as a test system for using the Xeon Phi™ processors to analyze data that is generated in Jefferson Lab's experimental physics program. The Chroma code, which is used to solve LQCD on the Xeon Phi™ processor platform, will be used with an open source library of equation solvers for LQCD. The High Performance Computing group has been modernizing the freely available Chroma code so that it will run faster and provide better results more quickly on new generations of hardware. "There are always challenges to bringing online state-of-the-art hardware systems. We are very proud of our staff for their hard work and dedication that made this achievement possible," said Jefferson Lab Chief Information Officer Amber Boehnlein. "This hardware achievement, together with our efforts to continually optimize the performance of our software for the newest systems, positions the laboratory to continue performing great science in high-performance computation." Jefferson Lab work on Chroma is funded through the U.S. Department of Energy Office of Science. 
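As a rough reading of the benchmark numbers quoted above (this arithmetic is illustrative, not the lab's), a dense double-precision matrix of dimension 2,457,600 implies the following memory footprint and run time at the measured 425.8 teraflops:

```python
# Rough arithmetic on the quoted HPL problem size and benchmark rate.
N = 2_457_600                 # HPL matrix dimension quoted above
rate_flops = 425.8e12         # benchmarked 425.8 teraflops
nodes = 256                   # nodes used for the benchmark run

matrix_bytes = N * N * 8      # double precision: 8 bytes per element
print(f"matrix footprint: ~{matrix_bytes / 1e12:.1f} TB total, "
      f"~{matrix_bytes / nodes / 1e9:.0f} GB per node")

hpl_flops = (2 / 3) * N**3    # leading-order operation count for the LU factorization
print(f"estimated run time: ~{hpl_flops / rate_flops / 3600:.1f} hours")   # roughly 6-7 hours
```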
The research is carried out in collaboration with partner institutions, such as DOE's National Energy Research Scientific Computing Center (NERSC) Exascale Science Application Program (NESAP) and the Scientific Discovery through Advanced Computing (SciDAC) program. NERSC is a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory. Supercomputer computation time on SciPhi-XVI is allocated based on the recommendations of the United States Lattice Gauge Theory Computational Program (USQCD), a consortium of top LQCD theorists in the United States that spans both high-energy physics and nuclear physics. The USQCD consortium developed the QUDA code library that is used with the Chroma code. Jefferson Science Associates, LLC, a joint venture of the Southeastern Universities Research Association, Inc. and PAE, manages and operates the Thomas Jefferson National Accelerator Facility for the U.S. Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.
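The production LQCD workload described above ultimately reduces to very large, sparse iterative linear solves. Purely as an illustration of that class of algorithm, and not of the actual Chroma or QUDA implementations, here is a textbook conjugate-gradient solver applied to a small synthetic symmetric positive-definite system:

```python
# Generic conjugate-gradient solver: the kind of Krylov iteration at the heart
# of lattice QCD codes, shown here on a small random SPD system.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)   # well-conditioned SPD stand-in for a lattice operator
b = rng.standard_normal(200)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))  # residual norm should be around the 1e-10 tolerance
```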


News Article | September 22, 2016
Site: www.washingtonpost.com

On Thursday the Rocky Mountain Institute — an energy-focused think tank known for its boldness in predicting technological change — released a truly sweeping report on the future of vehicles. And if it’s right — something that, to be sure, can be questioned — then we would be on the verge of an energy and technology transformation to rival the sudden decline of the coal industry. The institute certainly isn’t the first to suggest self-driving cars are something special. Uber and Lyft are already exploring self-driving robotaxis. Apple, Google, and Tesla are also in the game. But what’s striking is the optimism about a real technological transformation that could arrive from the merger of two vehicle trends — automation and electrification — and the idea that it could happen quite fast, with major consequences for reducing our greenhouse gas emissions. “It is possible that a new mobility system will emerge in the next few years that is superior to our existing system in almost every way,” write the study’s authors, Charlie Johnson and Jonathan Walker. Autonomous cars won’t just be safer and more energy efficient, the study argues. Hiring one to taxi you around will also become very cheap, they argue, once these vehicles can be electrified, rather than running on gas. The upshot is that at some point, perhaps quite soon, it will make more economic sense to have a robotaxi ferry you around on all your trips than to personally own a car. At the same time, as the trend becomes widespread — with fleets of autonomous vehicles, owned by companies or cities, on the road and not running on gas — our greenhouse gas emissions will plunge, the research suggests. By the year 2018, the study argues, “solely using autonomous taxis for transportation could cost the same as owning and operating a car.” More specifically, the researchers think that electric robotaxis might quickly rival the per-mile cost of personal car use, which they say is 85 cents per mile. “We think as soon as Uber is ready to pull the driver out of their autonomous vehicle — right now they have a safety driver in the vehicle, but they’re testing the autonomous software,” says Walker. “As soon as they can remove that safety driver, the cost of that service is going to be about the same as owning and operating a sedan in Pittsburgh.” And of course these vehicles will be electric because that, too, will save money — in this case, for the companies that own and operate them. The huge distances that autonomous vehicles might drive in their lifetimes will mean that even if an electric vehicle costs more than a gas vehicle to purchase, it will make up the difference over the course of its use. Most striking is what this would do to greenhouse gas emissions. The study suggests a major decrease in gasoline demand and, as a result, a major increase in electricity demand, and a plummeting of transportation-sector greenhouse-gas emissions. Walker said the reduction could be as large as 800 million tons per year in the U.S. by 2035, a huge contribution to fighting climate change. “Personal mobility hasn’t changed in 100 years, and it actually plausibly could in the next five to 10,” Walker said. The study also forecasts that this means that “peak car ownership in the United States will occur around 2020 and will drop quickly after that.” The Rocky Mountain Institute isn’t the only such bullish voice. John Zimmer, the co-founder of Lyft, recently wrote at Medium that “by 2025, private car ownership will all-but end in major U.S. 
cities." Meanwhile, a study last year of the adoption of electrified robotic cabs found they could slash our greenhouse gas emissions, by 2030, by tremendous percentages. But is this all too gung-ho? To find out, I queried several outside experts about the scenario. Jeffrey Greenblatt, a researcher at the Lawrence Berkeley National Laboratory who co-authored the just-mentioned study on greenhouse gas reductions from electric, shared, and autonomous vehicles, said the work seemed pretty solid. "It's clearly optimistic, but I think it's based on reasonable assumptions of the future cost of automated technology, as well as the total cost of ownership of vehicles," he said. Margo Oge, a former EPA transportation and air quality official and author of the book "Driving the Future: Combating Climate Change with Cleaner, Smarter Cars," agreed with the Rocky Mountain Institute's "overall findings that autonomous, shared mobility will utilize largely electric power trains as they will be cheaper to operate. It's the most economical strategy for autonomous shared mobility and it will help significantly reduce carbon pollution – especially as the utilization of those EVs will be much higher than the conventional powertrains they displace." However, the speed of the transition is another matter, Oge said, and here, the group might be off. "In the end the overall success of autonomous mobility will be based on public trust," she said. "It's not just an issue of technology. Trust takes time to develop." It's still not clear how people — or, for that matter, legal systems — will deal with the possibility of occasional autonomous vehicle crashes, even if they are significantly more rare than today's car crashes. Moreover, the programming of these vehicles to deal with situations like an automotive version of the ethical "trolley dilemma" — in which a vehicle would have to "choose" whether or not to kill its driver in order to save multiple other individuals — is bound to spur controversy. Another transportation researcher, Susan Shaheen of the University of California-Berkeley, praised the Rocky Mountain Institute work for trying to push us in thinking about what the future could be like. "I think the purpose of papers like this is to set a visionary plan," she said. But Shaheen also noted some potential drawbacks — for instance, regulatory. "By 2018, a question to ask is, are cities going to be ready to convert to a shared, automated, electrified fleet?" she said. "Chances are, we will not be at that point." The Rocky Mountain Institute's Walker admits that there could also be regulatory and cultural hurdles to such a vast transition. "Certain cities and states may make it legal, and some cities and states may not," he said. And then, there's just the possibility that some Americans love their cars and won't want to give them up, he said. And there's yet another problem, noted Don MacKenzie, a transportation researcher at the University of Washington in Seattle — what happens outside of urban areas? "This will not work so well when you need a ride to your cottage or the state park for the weekend, where there is no one else for the vehicle to pick up after dropping you off," he wrote by email (though MacKenzie agreed with other aspects of the report). "In such conditions, an important question becomes whether people will be willing to give up owning a personal vehicle. 
If they still have that personal vehicle, there's a good chance they will continue to drive it (and drive it more)." Nonetheless, Walker said that the bullish scenario is something the group is standing by. "There's people who say the technology's not going to be ready, but they're quoting things like 5 or 10 years, when a year ago, they were quoting 30 years," Walker said.


News Article | December 16, 2016
Site: www.eurekalert.org

Scientists have enlisted the exotic properties of graphene, a one-atom-thick layer of carbon, to function like the film of an incredibly sensitive camera system in visually mapping tiny electric fields in a liquid. Researchers hope the new method will allow more extensive and precise imaging of the electrical signaling networks in our hearts and brains. The ability to visually depict the strength and motion of very faint electrical fields could also aid in the development of so-called lab-on-a-chip devices that use very small quantities of fluids on a microchip-like platform to diagnose disease or aid in drug development, for example, or that automate a range of other biological and chemical analyses. The setup could potentially be adapted for sensing or trapping specific chemicals, too, and for studies of light-based electronics (a field known as optoelectronics). "This was a completely new, innovative idea that graphene could be used as a material to sense electrical fields in a liquid," said Jason Horng, a co-lead author of a study published Dec. 16 in Nature Communications that details the first demonstration of this graphene-based imaging system. Horng is affiliated with the Kavli Energy NanoSciences Institute, a joint institute at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley, and is a postdoctoral researcher at UC Berkeley. The idea sprang from a conversation between Feng Wang, a faculty scientist in Berkeley Lab's Materials Sciences Division whose research focuses on the control of light-matter interactions at the nanoscale, and Bianxiao Cui, who leads a research team at Stanford University that specializes in the study of nerve-cell signaling. Wang is also a UC Berkeley associate professor of physics, and Cui is an associate professor of chemistry at Stanford University. "The basic concept was how graphene could be used as a very general and scalable method for resolving very small changes in the magnitude, position, and timing pattern of a local electric field, such as the electrical impulses produced by a single nerve cell," said Halleh B. Balch, a co-lead author of the work. Balch is also affiliated with the Kavli Energy NanoSciences Institute and is a physics PhD student at UC Berkeley. "One of the outstanding problems in studying a large network of cells is understanding how information propagates between them," Balch said. Other techniques have been developed to measure electrical signals from small arrays of cells, though these methods can be difficult to scale up to larger arrays and in some cases cannot trace individual electrical impulses to a specific cell. Also, Cui said, "This new method does not perturb cells in any way, which is fundamentally different from existing methods that use either genetic or chemical modifications of the cell membrane." The new platform should more easily permit single-cell measurements of electrical impulses traveling across networks containing 100 or more living cells, researchers said. Graphene, which is composed of a honeycomb arrangement of carbon atoms, is the focus of intense R&D because of its incredible strength, ability to very efficiently conduct electricity, high degree of chemical stability, the speed at which electrons can move across its surface, and other exotic properties. Some of this research is focused on the use of graphene as a component in computer circuits and display screens, in drug delivery systems, and in solar cells and batteries. 
In the latest study, researchers first used infrared light produced at Berkeley Lab's Advanced Light Source to understand the effects of an electric field on graphene's absorption of infrared light. In the experiment, they aimed an infrared laser through a prism to a thin layer called a waveguide. The waveguide was designed to precisely match graphene's light-absorbing properties so that all of the light was absorbed along the graphene layer in the absence of an electric field. Researchers then fired tiny electrical pulses in a liquid solution above the graphene layer that very slightly disrupted the graphene layer's light absorption, allowing some light to escape in a way that carried a precise signature of the electrical field. Researchers captured a sequence of images of this escaping light in thousandths-of-a-second intervals, and these images provided a direct visualization of the electrical field's strength and location along the surface of the graphene. The new imaging platform -- dubbed CAGE for "Critically coupled waveguide-Amplified Graphene Electric field imaging device" -- proved sensitive to voltages of a few microvolts (millionths of a volt). This will make it ultrasensitive to the electric fields between cells in networks of heart cells and nerve cells, which can range from tens of microvolts to a few millivolts (thousandths of a volt). Researchers found that they could pinpoint an electric field's location along the graphene sheet's surface down to tens of microns (millionths of a meter), and capture its fading strength in a sequence of time steps separated by as few as five milliseconds, or thousandths of a second. In one sequence, researchers detailed the position and dissipation, or fade, of a local electric field generated by a 10-millivolt (10-thousandths-of-a-volt) pulse over a period of about 240 milliseconds, with sensitivity down to about 100 millionths of a volt. Balch said that there are already plans to test the platform with living cells. "We are working with collaborators to test this with real heart cells," she said. "There are several potential applications for this research in heart health and drug screening." There is also potential to use other atomically thin materials besides graphene in the imaging setup, she said. "The kind of elegance behind this system comes from its generality," Balch said. "It can be sensitive to anything that carries charge." The research team included participants from Berkeley Lab, UC Berkeley, and Stanford University. The work was supported by the U.S. Department of Energy Office of Science, the National Science Foundation, the David and Lucile Packard Foundation, and the Stanford University Bio-X Graduate Fellowship Program. The Advanced Light Source is a DOE Office of Science User Facility. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel Prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit http://www.lbl.gov. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
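Taking the figures reported above at face value, a quick tally of what one imaging sequence involves (simple arithmetic on the article's numbers, not data from the paper itself):

```python
# Quick arithmetic on the CAGE imaging specs quoted above.
frame_interval_ms = 5       # frames as close as 5 ms apart
record_ms = 240             # dissipation of the test pulse tracked over ~240 ms
pulse_v = 10e-3             # 10-millivolt test pulse
resolved_v = 100e-6         # ~100 microvolts resolved in that sequence

print(f"snapshots per sequence: ~{record_ms // frame_interval_ms}")   # ~48 frames
print(f"dynamic range on the pulse: ~{pulse_v / resolved_v:.0f}:1")   # ~100:1
```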


News Article | December 14, 2015
Site: www.rdmag.com

The Large Underground Xenon (LUX) dark matter experiment, which operates nearly a mile underground at the Sanford Underground Research Facility (SURF) in the Black Hills of South Dakota, has already proven itself to be the most sensitive detector in the hunt for dark matter, the unseen stuff believed to account for most of the matter in the universe. Now, a new set of calibration techniques employed by LUX scientists has again dramatically improved the detector's sensitivity. Researchers with LUX are looking for WIMPs, or weakly interacting massive particles, which are among the leading candidates for dark matter. "We have improved the sensitivity of LUX by more than a factor of 20 for low-mass dark matter particles, significantly enhancing our ability to look for WIMPs," said Rick Gaitskell, professor of physics at Brown University and co-spokesperson for the LUX experiment. "It is vital that we continue to push the capabilities of our detector in the search for the elusive dark matter particles," Gaitskell said. LUX improvements, coupled to advanced computer simulations at the U.S. Department of Energy's Lawrence Berkeley National Laboratory's (Berkeley Lab) National Energy Research Scientific Computing Center (NERSC) and Brown University's Center for Computation and Visualization (CCV), have allowed scientists to test additional particle models of dark matter that now can be excluded from the search. NERSC also stores large volumes of LUX data--measured in trillions of bytes, or terabytes--and Berkeley Lab has a growing role in the LUX collaboration. Scientists are confident that dark matter exists because the effects of its gravity can be seen in the rotation of galaxies and in the way light bends as it travels through the universe. Because WIMPs are thought to interact with other matter only on very rare occasions, they have yet to be detected directly. "We have looked for dark matter particles during the experiment's first three-month run, but are exploiting new calibration techniques better pinning down how they would appear to our detector," said Alastair Currie of Imperial College London, a LUX researcher. "These calibrations have deepened our understanding of the response of xenon to dark matter, and to backgrounds. This allows us to search, with improved confidence, for particles that we hadn't previously known would be visible to LUX." The new research is described in a paper submitted to Physical Review Letters. The work reexamines data collected during LUX's first three-month run in 2013 and helps to rule out the possibility of dark matter detections at low-mass ranges where other experiments had previously reported potential detections. LUX consists of one-third ton of liquid xenon surrounded with sensitive light detectors. It is designed to identify the very rare occasions when a dark matter particle collides with a xenon atom inside the detector. When a collision happens, a xenon atom will recoil and emit a tiny flash of light, which is detected by LUX's light sensors. The detector's location at Sanford Lab beneath a mile of rock helps to shield it from cosmic rays and other radiation that would interfere with a dark matter signal. So far LUX hasn't detected a dark matter signal, but its exquisite sensitivity has allowed scientists to all but rule out vast mass ranges where dark matter particles might exist. These new calibrations increase that sensitivity even further. One calibration technique used neutrons as stand-ins for dark matter particles. 
Bouncing neutrons off the xenon atoms allows scientists to quantify how the LUX detector responds to the recoiling process. "It is like a giant game of pool with a neutron as the cue ball and the xenon atoms as the stripes and solids," Gaitskell said. "We can track the neutron to deduce the details of the xenon recoil, and calibrate the response of LUX better than anything previously possible." The nature of the interaction between neutrons and xenon atoms is thought to be very similar to the interaction between dark matter and xenon. "It's just that dark matter particles interact very much more weakly--about a million-million-million-million times more weakly," Gaitskell said. The neutron experiments help to calibrate the detector for interactions with the xenon nucleus. But LUX scientists have also calibrated the detector's response to the deposition of small amounts of energy by struck atomic electrons. That's done by injecting tritiated methane--a radioactive gas--into the detector. "In a typical science run, most of what LUX sees are background electron recoil events," said Carter Hall, a University of Maryland professor. "Tritiated methane is a convenient source of similar events, and we've now studied hundreds of thousands of its decays in LUX. This gives us confidence that we won't mistake these garden-variety events for dark matter." Another radioactive gas, krypton, was injected to help scientists distinguish between signals produced by ambient radioactivity and a potential dark matter signal. "The krypton mixes uniformly in the liquid xenon and emits radiation with a known, specific energy, but then quickly decays away to a stable, non-radioactive form," said Dan McKinsey, a UC Berkeley physics professor and co-spokesperson for LUX who is also affiliated with Berkeley Lab. By precisely measuring the light and charge produced by this interaction, researchers can effectively filter out background events from their search. "And so the search continues," McKinsey said. "LUX is once again in dark matter detection mode at Sanford Lab. The latest run began in late 2014 and is expected to continue until June 2016. This run will represent an increase in exposure of more than four times compared to our previous 2013 run. We will be very excited to see if any dark matter particles have shown themselves in the new data." McKinsey, formerly at Yale University, joined UC Berkeley and Berkeley Lab in July, accompanied by members of his research team. The Sanford Lab is a South Dakota-owned facility. Homestake Mining Co. donated its gold mine in Lead to the South Dakota Science and Technology Authority (SDSTA), which reopened the facility in 2007 with $40 million in funding from the South Dakota State Legislature and a $70 million donation from philanthropist T. Denny Sanford. The U.S. Department of Energy (DOE) supports Sanford Lab's operations. Kevin Lesko, who oversees SURF operations and leads the Dark Matter Research Group at Berkeley Lab, said, "It's good to see that the experiments installed in SURF continue to produce world-leading results." The LUX scientific collaboration, which is supported by the DOE and National Science Foundation (NSF), includes 19 research universities and national laboratories in the United States, the United Kingdom and Portugal. "The global search for dark matter aims to answer one of the biggest questions about the makeup of our universe. 
We're proud to support the LUX collaboration and congratulate them on achieving an even greater level of sensitivity," said Mike Headley, Executive Director of the SDSTA. Planning for the next-generation dark matter experiment at Sanford Lab is already under way. In late 2016 LUX will be decommissioned to make way for a new, much larger xenon detector, known as the LUX-ZEPLIN (LZ) experiment. LZ would have a 10-ton liquid xenon target, which will fit inside the same 72,000-gallon tank of pure water used by LUX. Berkeley Lab scientists will have major leadership roles in the LZ collaboration. "The innovations of the LUX experiment form the foundation for the LZ experiment, which is planned to achieve over 100 times the sensitivity of LUX. The LZ experiment is so sensitive that it should begin to detect a type of neutrino originating in the Sun that even Ray Davis' Nobel Prize-winning experiment at the Homestake mine was unable to detect," according to Harry Nelson of UC Santa Barbara, spokesperson for LZ.
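As a rough illustration of the neutron calibration described above (simple two-body elastic-scattering kinematics, not numbers from the LUX analysis), the largest energy a neutron can hand to a xenon nucleus in a single elastic scatter is only a few percent of the neutron's energy:

```python
# Illustrative elastic-scattering kinematics for the neutron calibration idea.
# Mass numbers stand in for the actual masses; the neutron energy is an assumed
# example value, not a figure from the LUX papers.
A_n, A_Xe = 1.0, 131.0
E_n_keV = 2450.0     # assumed incident neutron energy (e.g. a D-D generator neutron)

max_fraction = 4 * A_n * A_Xe / (A_n + A_Xe) ** 2
print(f"maximum energy transfer: {max_fraction:.1%} of the neutron energy")   # ~3%
print(f"maximum Xe recoil: ~{max_fraction * E_n_keV:.0f} keV")                # ~74 keV for this example
```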


News Article | November 4, 2016
Site: www.eurekalert.org

From deep within the Earth to the upper atmosphere, the organisms and ecosystems highlighted in the 37 projects selected for the 2017 Community Science Program (CSP) of the U.S. Department of Energy Joint Genome Institute (DOE JGI), a DOE Office of Science User Facility, reflect the breadth and depth of interests researchers are exploring to find solutions to energy and environmental challenges. "These new CSP projects, selected through our external review process, exploit DOE JGI's experimental and analytical 'omics' capabilities and build our portfolio in key focus areas including sustainable bioenergy production, plant microbiomes and terrestrial biogeochemistry," said Susannah Tringe, DOE JGI User Programs Deputy. The CSP 2017 projects were selected from 98 full proposals received, resulting from 123 letters of intent submitted. The full list of projects may be found on the DOE JGI website. A number of the accepted proposals call for developing reference genomes for plants of relevance to bioenergy, either as potential feedstocks or because they possess biochemistries that could provide insights into better ways to derive biofuels from plant biomass. These will take advantage of powerful DOE JGI sequencing upgrades. Building upon the previously published reference genome for Sorghum bicolor generated by the DOE JGI, Todd Mockler of the Donald Danforth Plant Science Center has targeted the sequencing of a number of highly diverse sorghum lines. The project will explore and begin to assemble the pangenome to accelerate gene discovery and increase understanding of which variants are associated with different functional outcomes. This work is being conducted in close concert with the Transportation Energy Resources from Renewable Agriculture (TERRA) program of DOE's Advanced Research Projects Agency-Energy (ARPA-E). David Des Marais of Harvard University is similarly focused on Brachypodium, which has had two genomes - Brachypodium distachyon and B. sylvaticum - sequenced by the DOE JGI. Among his plans are estimating a pangenome for B. sylvaticum and sequencing four additional Brachypodium species to further functional genomic analysis research in grasses. Karen Aitken of Australia's national science agency CSIRO is focused on producing the first genome assembly of a cultivated sugarcane plant, namely variety R570. Cultivated sugarcane produces 80 percent of the world's sugar and is already harvested in large volumes; along with its transportation system efficiencies, it is helping to guide optimized production strategies for other renewable and sustainable biofuel crops. Many projects focus on plants with demonstrated tolerance to stressors such as drought. The proposal from John Cushman of the University of Nevada seeks to establish the common or crystalline ice plant (Mesembryanthemum crystallinum L.) as a DOE JGI Flagship Genome species like poplar, soybean, and Brachypodium distachyon. Most plants use what is known as the C3 pathway to photosynthetically fix carbon (from CO2) in the presence of abundant nutrients, while plants in arid conditions rely on a different mechanism, the water-conserving CAM pathway. A rapidly growing annual native to the Namibian desert, the common ice plant is the first reported species that can switch from the C3 pathway to CAM when stressed by factors such as drought and salinity. Elizabeth Kellogg, also of the Danforth Center, requested a genome sequence for the big bluestem (Andropogon gerardii subsp. gerardii), the plant most associated with the Great Plains. 
The big bluestem dominates the tall grass prairie, accounting for up to 70 percent of biomass in some areas, and is highly resistant to climate change. In a complementary project, Karolina Heyduk of the University of Georgia seeks genome sequences for two yucca species: the C3 species Yucca aloifolia and the CAM species Y. filamentosa. The comparison of the genomes for these two related species could illuminate the genes and pathways involved in these two forms of photosynthesis. Similarly, the gene atlas for CAM plant Kalanchoe laxiflora proposed by Xiaohan Yang of Oak Ridge National Laboratory could allow researchers to understand how it responds to changes in environmental conditions, including temperature, water, and nitrogen source. Few conifers have had their genomes sequenced thus far, and Joerg Bohlmann of Canada's University of British Columbia wants to raise the number. He has targeted the genomes and transcriptomes of the common yew and western red cedar, as well as the transcriptome of the Jeffrey pine, all candidate bioenergy feedstocks. Two projects focus on the impact of fire on forests, as full recovery can take decades. Daniel Cullen of Forest Products Laboratory is comparing how microbial communities in forests dominated by lodgepole pine (Pinus contorta) fare in fire-disturbed and undisturbed forests, in part to define processes that underlie carbon cycling in coniferous forests. Similarly, Thomas Bruns of the University of California, Berkeley seeks to learn more about the effects of pyrophilous fungi on post-fire soil carbon by studying the genomes and transcriptomes of 13 fungi as well as the metatranscriptomes of burned soils. Several projects focus on plant-microbe interactions, both on micro- and macroscales. Sharon Doty of the University of Washington is investigating poplar endophytes, one that fixes nitrogen and one involved in phytoremediation. Plant root microbial consortia are known to be critically important to plant growth and resilience to changing soil conditions but much remains to be learned about community composition and function. Devin Coleman-Derr of the USDA-ARS, with his resource allocation, is seeking to learn more about the role of root-associated Actinobacteria in promoting host fitness in sorghum and rice under drought conditions. Paul Dijkstra of Northern Arizona University, with his allocation, will illuminate soil bacterial transcriptional regulatory networks in switchgrass fields. William Whitman of the University of Georgia plans to develop pangenomes of 100-200 species of soil or plant-associated prokaryotes. The pangenome concept is vital to understanding the repertoire of genes upon which microbial populations may call as local environments change. Jared LeBoldus of Oregon State University is targeting metabolites of Sphaerulina musiva, the cause of Septoria stem canker and leaf spot disease in poplar. Christine Smart of Cornell University is characterizing the mechanisms of willow rust (Melampsora americana), and conducting a comparative genomic study involving M. americana and other Melampsora genomes sequenced by the DOE JGI that are known plant pathogens of poplar and other candidate bioenergy feedstocks. Gregory Bonito of Michigan State University will identify the mechanisms of attraction, communication, and growth-promoting activity between Mortierella, a close relative of arbuscular mycorrhizal fungi, and the DOE JGI Flagship plants. 
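As a minimal, purely illustrative sketch of the pangenome concept mentioned above (the strain and gene names below are hypothetical): the pangenome is the union of gene families found across a set of strains, while the core genome is the subset shared by all of them.

```python
# Toy pangenome calculation with hypothetical strains and gene names.
strain_genes = {
    "strain_A": {"geneA", "geneB", "geneC", "geneD"},
    "strain_B": {"geneA", "geneB", "geneE"},
    "strain_C": {"geneA", "geneB", "geneC", "geneF"},
}

pangenome = set().union(*strain_genes.values())          # every gene seen in any strain
core_genome = set.intersection(*strain_genes.values())   # genes shared by all strains
accessory = pangenome - core_genome                      # genes present in only some strains

print(f"pangenome size: {len(pangenome)}")        # 6
print(f"core genome: {sorted(core_genome)}")      # ['geneA', 'geneB']
print(f"accessory genes: {len(accessory)}")       # 4
```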
Nhu Nguyen of the University of California, Berkeley will be generating a genus-wide molecular phylogeny of Suillus, asking for 50 fungal genome sequences. Suillus fungi tolerate heavy metals, but the protection they confer varies among hosts. Wayne Nicholson of the University of Florida will be generating a more comprehensive assessment of Carnobacterium, strains of which can be found in all pressure niches from the deep ocean to the upper atmosphere. Sean Crowe of Canada's University of British Columbia will characterize the methanome, the genomic information distributed across organisms that either produce or consume methane, which is both a source of fuel and a greenhouse gas. Graeme Attwood of New Zealand's AgResearch Ltd and his team seek to define gene function in rumen microbes, in part to control the microbes' production of methane and so reduce emissions from cattle. Soil emissions are a focus of Jennifer Pett-Ridge of Lawrence Livermore National Laboratory and Jonathan Raff of Indiana University. Pett-Ridge is determining the impact microbes in humid tropical forest soils have on carbon cycling, work that complements her DOE Early Career Research Program award. Raff intends to use samples from temperate hardwood forest sites to learn more about soil emissions of nitric oxide (NO) and more accurately represent NO sinks and sources in climate models. Alison Buchan of the University of Tennessee, Knoxville is generating data on how lignin-related aromatic compounds are removed in salt marshes between river mouths and the open ocean, and on the biochemical pathways employed in this process. A similar project comes from Christopher Francis of Stanford University and involves the San Francisco Bay Delta, the largest estuary on the west coast of North America. His team is investigating how environmental changes drive shifts in estuarine microbial communities, whether certain pathways and organisms dominate under certain conditions, and whether genes co-vary with specific functional gene ecotypes. Several projects are focused on algae for their roles in carbon fixation and for potential bioenergy applications. Chi Zhang of the University of Nebraska - Lincoln is focused on Zygnematales, the closest algal lineage to land plants, to learn more about how plant cell walls formed in the evolutionary transition from aquatic to land plants. The information could shed light on how to deconstruct plant cell walls for biofuel production without impacting plant viability. Matthew Johnson of the Woods Hole Oceanographic Institution is interested in the bloom-forming Mesodinium rubrum and M. major complex. While these organisms are major contributors to primary production, they acquire their ability to harness light energy for nutrients through predation, so genome sequencing would help distinguish native metabolic pathways from those of their prey. Jeffry Dudycha of the University of South Carolina is pursuing a project based on cryptophytes, eukaryotic microalgae that are important primary producers in aquatic environments and are capable of capturing a broad spectrum of available light. By sequencing representatives from all major clades, the team hopes to explore how diverse algal communities could boost lipid yields for biofuels. Biological soil crusts, or biocrusts, are extremely sensitive to climate change. 
Because surviving extreme drought is a rare feature in plants, Elena Lopez Peredo of the Marine Biological Laboratory will be generating annotated genomes of Scenedesmus algae to learn more about the desiccation tolerance of green microalgae in biocrusts. In a complementary project, Steven Harris of the University of Nebraska - Lincoln is detailing the interactions of fungal and algal components of biocrusts to learn more about how they can tolerate environmental stress. Several projects build on previous efforts to sequence 1,000 microbial genomes (KMG), which have so far led to the selection of nearly 2,300 type strains. Markus Göker of Germany's DSMZ is spearheading KMG-4 to sequence environmentally relevant cultures. Ramunas Stepanauskas of Bigelow Laboratory for Ocean Sciences is using single-cell genomics to target taxonomic "blind spots" - clades systematically underrepresented or missed due to primer mismatches or intervening sequences in marker genes. Barbara MacGregor of the University of North Carolina is exploring how horizontal gene transfer has shaped the genomes of large sulfur bacteria, which are often found in dense microbial mats and play roles in carbon, nitrogen, sulfur and phosphorus cycles. Mary Ann Moran of the University of Georgia is examining organic sulfur biogeochemistry in a coastal ocean ecosystem. She and colleagues from the Monterey Bay Aquarium Research Institute collected a time series of coastal microbes and microbial communities, which will be sequenced to explore how metabolism shifts as community composition changes. Mark Dopson of Sweden's Linnaeus University has a project that deploys DOE JGI's single-cell genomics resources on samples sourced from the deep subsurface. Targeted microbiomes will come from deep bedrock waters, including the Mont Terri Underground Rock Lab in Switzerland and sites constructed to investigate the safety of deep geological storage of spent nuclear fuel; the resulting data would help inform decisions about that safety. Ashley Shade of Michigan State University plans to use a synthetic bacterial community to study the interactions among multiple community members, and then link community structure to activity and to exometabolite interactions. In sum, these 37 new projects will furnish a genomic basis for ongoing and new explorations of plant, fungal, algal, microbial, and microbial community mechanisms that underlie biofuel and bioenergy feedstock composition and metabolism, as well as related environmental processes of importance to DOE missions. The U.S. Department of Energy Joint Genome Institute, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory, is committed to advancing genomics in support of DOE missions related to clean energy generation and environmental characterization and cleanup. DOE JGI, headquartered in Walnut Creek, Calif., provides integrated high-throughput sequencing and computational analysis that enable systems-based scientific approaches to these challenges. Follow @doe_jgi on Twitter. DOE's Office of Science is the largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
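A recurring idea in several of these projects is the pangenome: the full set of genes found across all sequenced members of a species or genus, split into a core shared by every member and an accessory set that varies between them. As a rough illustration of that bookkeeping only (the genomes and gene names below are invented, not CSP data), a few lines of Python compute the core and accessory sets from per-genome gene lists:

```python
# Minimal sketch of pangenome bookkeeping: given per-genome sets of gene
# clusters (toy data, not DOE JGI output), partition them into "core" genes
# shared by every genome and "accessory" genes found in only some of them.

from functools import reduce

# Hypothetical gene-cluster calls for four genomes of one species.
genomes = {
    "line_A": {"dhurrinase", "ccoaomt", "nac_tf", "waxy"},
    "line_B": {"dhurrinase", "ccoaomt", "nac_tf", "drought_qtl1"},
    "line_C": {"dhurrinase", "ccoaomt", "waxy", "drought_qtl1"},
    "line_D": {"dhurrinase", "ccoaomt", "nac_tf"},
}

core = reduce(set.intersection, genomes.values())   # present in every genome
pan = reduce(set.union, genomes.values())            # present in at least one
accessory = pan - core                               # variable gene content

print(f"pangenome size: {len(pan)}")
print(f"core genes:      {sorted(core)}")
print(f"accessory genes: {sorted(accessory)}")
```

In a real pangenome project the input would be orthologous gene clusters called from assemblies, and the biologically interesting part is which accessory genes track traits such as drought response or biomass composition.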


News Article | March 14, 2016
Site: www.cemag.us

Hydrogen is the lightest element and the most plentiful one in our universe. So it shouldn't be a big surprise that scientists are pursuing hydrogen as a clean, carbon-free, virtually limitless energy source for cars and for a range of other uses, from portable generators to telecommunications towers, with water as the only byproduct of combustion. While there remain scientific challenges to making hydrogen-based energy sources more competitive with current automotive propulsion systems and other energy technologies, researchers at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a new materials recipe for a battery-like hydrogen fuel cell, one that surrounds hydrogen-absorbing magnesium nanocrystals with atomically thin graphene sheets, to push its performance forward in key areas. The graphene shields the nanocrystals from oxygen, moisture, and contaminants, while tiny, natural holes allow the smaller hydrogen molecules to pass through. This filtering process overcomes common problems degrading the performance of metal hydrides for hydrogen storage. These graphene-encapsulated magnesium crystals act as "sponges" for hydrogen, offering a very compact and safe way to take in and store hydrogen. The nanocrystals also permit faster fueling, and reduce the overall "tank" size. "Among metal hydride-based materials for hydrogen storage for fuel-cell vehicle applications, our materials have good performance in terms of capacity, reversibility, kinetics and stability," says Eun Seon Cho, a postdoctoral researcher at Berkeley Lab and lead author of a study related to the new fuel cell formula, published recently in Nature Communications. In a hydrogen fuel cell-powered vehicle using these materials, known as a "metal hydride" (hydrogen bound with a metal) fuel cell, hydrogen gas pumped into a vehicle would be chemically absorbed by the magnesium nanocrystalline powder and rendered safe at low pressures. Jeff Urban, a Berkeley Lab staff scientist and co-author, says, "This work suggests the possibility of practical hydrogen storage and use in the future. I believe that these materials represent a generally applicable approach to stabilizing reactive materials while still harnessing their unique activity — concepts that could have wide-ranging applications for batteries, catalysis, and energetic materials." The research, conducted at Berkeley Lab's Molecular Foundry and Advanced Light Source, is part of a national lab consortium, dubbed HyMARC (Hydrogen Materials — Advanced Research Consortium), that seeks safer and more cost-effective hydrogen storage; Urban is Berkeley Lab's lead scientist for that effort. The U.S. market share for all electric-drive vehicles in 2015, including full-electric, hybrids and plug-in hybrid vehicles, was 2.87 percent, which amounts to about 500,000 electric-drive vehicles out of total vehicle sales of about 17.4 million, according to statistics reported by the Electric Drive Transportation Association, a trade association promoting electric-drive vehicles. Hydrogen-fuel-cell vehicles haven't yet made major inroads in vehicle sales, though several major auto manufacturers, including Toyota, Honda, and General Motors, have invested in developing hydrogen fuel-cell vehicles. Indeed, Toyota released a small-production model called the Mirai, which uses compressed-hydrogen tanks, last year in the U.S. 
A potential advantage for hydrogen-fuel-cell vehicles, in addition to their reduced environmental impact over standard-fuel vehicles, is the high specific energy of hydrogen, which means that hydrogen fuel cells can potentially take up less weight than other battery systems and fuel sources while yielding more electrical energy. A measure of the energy storage capacity per weight of hydrogen fuel cells, known as the “gravimetric energy density,” is roughly three times that of gasoline. Urban noted that this important property, if effectively used, could extend the total vehicle range of hydrogen-based transportation, and extend the time between refueling for many other applications, too. More R&D is needed to realize higher-capacity hydrogen storage for long-range vehicle applications that exceed the performance of existing electric-vehicle batteries, Cho said, and other applications may be better suited for hydrogen fuel cells in the short term, such as stationary power sources, forklifts and airport vehicles, portable power sources like laptop battery chargers, portable lighting, water and sewage pumps and emergency services equipment. Cho says that a roadblock to metal hydride storage has been a relatively slow rate in taking in (absorption) and giving out (desorption) hydrogen during the cycling of the units. In fuel cells, separate chemical reactions involving hydrogen and oxygen produce a flow of electrons that are channeled as electric current, creating water as a byproduct. The tiny size of the graphene-encapsulated nanocrystals created at Berkeley Lab, which measure only about 3-4 nanometers, or billionths of a meter across, is a key in the new fuel cell materials’ fast capture and release of hydrogen, Cho said, as they have more surface area available for reactions than the same material would at larger sizes. Another key is protecting the magnesium from exposure to air, which would render it unusable for the fuel cell, she added. Working at The Molecular Foundry, researchers found a simple, scalable and cost-effective “one pan” technique to mix up the graphene sheets and magnesium oxide nanocrystals in the same batch. They later studied the coated nanocrystals’ structure using X-rays at Berkeley Lab’s Advanced Light Source. The X-ray studies showed how hydrogen gas pumped into the fuel cell mixture reacted with the magnesium nanocrystals to form a more stable molecule called magnesium hydride while locking out oxygen from reaching the magnesium. “It is stable in air, which is important,” Cho says. Next steps in the research will focus on using different types of catalysts — which can improve the speed and efficiency of chemical reactions — to further improve the fuel cell’s conversion of electrical current, and in studying whether different types of material can also improve the fuel cell’s overall capacity, Cho says. The Molecular Foundry and Advanced Light Source are both DOE Office of Science User Facilities. The work was supported by the Department of Energy Office of Basic Energy Sciences and Office of Energy Efficiency and Renewable Energy, the Bay Area Photovoltaic Consortium (BAPVC), and the US-India Partnership to Advance Clean Energy-Research (PACE-R) for the Solar Energy Research Institute for India and the U.S. (SERIIUS).
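Two of the numbers behind this article can be sanity-checked with approximate textbook values rather than anything from the Berkeley Lab study itself: the roughly three-to-one gravimetric energy advantage of hydrogen over gasoline, and the reason magnesium hydride is attractive as a storage medium, namely its theoretical hydrogen content by weight. A minimal sketch:

```python
# Back-of-the-envelope check on two figures mentioned above, using approximate
# textbook values (not data from the Berkeley Lab study).

# 1) "Gravimetric energy density ... roughly three times that of gasoline":
#    compare lower heating values of the fuels per kilogram.
H2_LHV_MJ_PER_KG = 120.0        # lower heating value of hydrogen, approx.
GASOLINE_MJ_PER_KG = 44.0       # lower heating value of gasoline, approx.
print(f"H2 vs gasoline, per kg of fuel: {H2_LHV_MJ_PER_KG / GASOLINE_MJ_PER_KG:.1f}x")

# 2) Why magnesium hydride (MgH2) is an attractive storage medium:
#    its theoretical hydrogen content by weight.
M_MG = 24.305                   # g/mol, magnesium
M_H = 1.008                     # g/mol, hydrogen
wt_fraction_H = (2 * M_H) / (M_MG + 2 * M_H)
print(f"MgH2 theoretical hydrogen capacity: {wt_fraction_H * 100:.1f} wt%")
# Roughly 2.7x and 7.7 wt%; real systems deliver less once the graphene
# wrapper, catalysts, and tank hardware are counted.
```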


A new way to display the 3-D structure of molecules: Metal-organic frameworks provide a new platform for solving the structure of hard-to-study samples Abstract: Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have created a sort of nanoscale display case that enables new atomic-scale views of hard-to-study chemical and biological samples. Their work, published online Aug. 18 in the journal Science, could help to reveal new structural details for a range of challenging molecules--including complex chemical compounds and potentially new drugs--by stabilizing them inside sturdy structures known as metal-organic frameworks (MOFs). The researchers introduced a series of different molecules that were chemically bound inside these porous MOFs, each measuring about 100 millionths of a meter across, and then used X-ray techniques to determine the precise molecular structure of the samples inside the MOFs. The samples ranged from a simple alcohol to a complex plant hormone, and the new method, dubbed "CAL" for covalent alignment (the molecules form a type of chemical bond known as a covalent bond in the MOFs), enables researchers to determine the complete structure of a molecule from a single MOF crystal that contains the sample molecules in its pores. The MOFs in the study, which are identical and are easy to manufacture in large numbers, provided a sort of backbone for the sample molecules that held them still for the X-ray studies--the molecules otherwise can be wobbly and difficult to stabilize. The researchers prepared the samples by dipping the MOFs into solutions containing different molecular mixes and then heating them until they crystallized. "We wanted to demonstrate that any of these molecules, no matter how complex, can be incorporated and their structure determined inside the MOFs," said Omar Yaghi, a materials scientist at Berkeley Lab and chemistry professor at UC Berkeley who led the research. The MOFs also possess a particular handedness known as "chirality"--like a left-handed person vs. a right-handed person--that selectively binds with molecular samples that also possess this handedness. The difference in a molecule's handedness is particularly important for pharmaceuticals, as it can mean the difference between a medicine and a poison. "This is one of the holy grails: how to crystallize complex molecules, and to determine their chirality," Yaghi said. Seungkyu Lee and Eugene A. Kapustin, Berkeley Lab researchers and UC Berkeley graduate students who participated in the latest work, said hard-to-study proteins, such as those important for drug development, are high-priority targets for the new technique. "We are aiming for those molecules that have never been crystallized before," Kapustin said. "That's our next step. So we can not only show the arrangement of atoms, but also the handedness of molecules, in which pharmaceutical companies are interested." One of the best methods for studying any molecule's 3-D structure in atomic detail is to form it into a crystal. Then, researchers point intense X-ray light at the crystal, which produces a pattern of spots--like light off of a disco ball. Such patterns serve as a fingerprint for fully mapping the molecule's 3-D structure. Some molecules are difficult to form into crystals, though, and the process of crystallizing a single molecule can in some cases involve years of effort and expense. 
"To crystallize a molecule typically involves a trial-and-error method," Yaghi said. "Every chemist and biologist has to submit to this process. But in this MOF material you don't need all that--it traps the molecule and orders it. It's a way to bypass that trial-and-error approach to crystallography." Different types of MOFs, with different pore sizes, could be tested to find out which ones work best with different types of samples, Lee said. Importantly, the MOFs in the latest study did not appear to distort the natural, intact structure of the molecules. Researchers say it's possible to determine the complete 3-D structure of a molecule even if the samples only fill about 30 percent of a MOF's pores. Researchers determined the atomic structure of the MOFs and the bound molecules with X-rays at Berkeley Lab's Advanced Light Source (ALS), and they also studied the MOFs using a technique called nuclear magnetic resonance (NMR) at Berkeley Lab's Molecular Foundry. In all, the researchers studied 16 different molecules bound inside the MOF pores, including a plant hormone called jasmonic acid whose chiral structure had never been directly determined before, other plant hormone known as gibberellins, methanol, and other acids and alcohols. The metals in the MOF framework itself can actually serve to enhance the quality of the X-ray images, Kapustin said, adding that in one case the technique allowed researchers to distinguish between two nearly identical plant hormones based on the difference in a single atomic bond. Researchers could see structural details down to hundredths of a nanometer--less than the diameter of some atoms. "You can see with such precision whether it is a double bond or a single bond, or if this is a carbon atom or some other atom," Lee said. "Once you bind a molecule in the MOF, you can learn the absolute structure very precisely since the chirality of the MOF serves as a reference during the structure refinement." ### This work was supported by BASF SE in Germany and the King Abdulaziz City for Science and Technology Center of Excellence for Nanomaterials and Clean Energy Applications. The Advanced Light Source and Molecular Foundry are both DOE Office of Science User Facilities. About Lawrence Berkeley National Laboratory Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit www.lbl.gov. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov. For more information, please click If you have a comment, please us. Issuers of news releases, not 7th Wave, Inc. or Nanotechnology Now, are solely responsible for the accuracy of the content.


New form of electron-beam imaging can see elements that are 'invisible' to common methods: Berkeley Lab-pioneered 'MIDI-STEM' produces high-resolution views of lightweight atoms Abstract: Electrons can extend our view of microscopic objects well beyond what's possible with visible light--all the way to the atomic scale. A popular method in electron microscopy for looking at tough, resilient materials in atomic detail is called STEM, or scanning transmission electron microscopy, but the highly focused beam of electrons used in STEM can also easily destroy delicate samples. This is why using electrons to image biological or other organic compounds, such as chemical mixes that include lithium--a light metal that is a popular element in next-generation battery research--requires a very low electron dose. Scientists at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a new imaging technique, tested on samples of nanoscale gold and carbon, that greatly improves images of light elements using fewer electrons. The newly demonstrated technique, dubbed MIDI-STEM, for matched illumination and detector interferometry STEM, combines STEM with an optical device called a phase plate that modifies the alternating peak-to-trough, wave-like properties (called the phase) of the electron beam. This phase plate modifies the electron beam in a way that allows subtle changes in a material to be measured, even revealing materials that would be invisible in traditional STEM imaging. Another electron-based method, which researchers use to determine the detailed structure of delicate, frozen biological samples, is called cryo-electron microscopy, or cryo-EM. While single-particle cryo-EM is a powerful tool--it was named the science journal Nature's 2015 Method of the Year--it typically requires taking an average over many identical samples to be effective. Cryo-EM is generally not useful for studying samples with a mixture of heavy elements (for example, most types of metals) and light elements like oxygen and carbon. "The MIDI-STEM method provides hope for seeing structures with a mixture of heavy and light elements, even when they are bunched closely together," said Colin Ophus, a project scientist at Berkeley Lab's Molecular Foundry and lead author of a study, published Feb. 29 in Nature Communications, that details this method. If you take a heavy-element nanoparticle and add molecules to give it a specific function, conventional techniques don't provide an easy, clear way to see the areas where the nanoparticle and added molecules meet. "How are they aligned? How are they oriented?" Ophus asked. "There are so many questions about these systems, and because there wasn't a way to see them, we couldn't directly answer them." While traditional STEM is effective for "hard" samples that can stand up to intense electron beams, and cryo-EM can image biological samples, "We can do both at once" with the MIDI-STEM technique, said Peter Ercius, a Berkeley Lab staff scientist at the Molecular Foundry and co-author of the study. The phase plate in the MIDI-STEM technique allows a direct measure of the phase of electrons that are weakly scattered as they interact with light elements in the sample. These measurements are then used to construct so-called phase-contrast images of the elements. Without this phase information, the high-resolution images of these elements would not be possible. 
In this study, the researchers combined phase plate technology with one of the world's highest resolution STEMs, at Berkeley Lab's Molecular Foundry, and a high-speed electron detector. They produced images of samples of crystalline gold nanoparticles, which measured several nanometers across, and the super-thin film of amorphous carbon that the particles sat on. They also performed computer simulations that validated what they saw in the experiment. The phase plate technology was developed as part of a Berkeley Lab Laboratory Directed Research and Development grant in collaboration with Ben McMorran at the University of Oregon. The MIDI-STEM technique could prove particularly useful for directly viewing nanoscale objects with a mixture of heavy and light materials, such as some battery and energy-harvesting materials, that are otherwise difficult to view together at atomic resolution. It also might be useful in revealing new details about important two-dimensional proteins, called S-layer proteins, that could serve as foundations for engineered nanostructures but are challenging to study in atomic detail using other techniques. In the future, a faster, more sensitive electron detector could allow researchers to study even more delicate samples at improved resolution by exposing them to fewer electrons per image. "If you can lower the electron dose, you can tilt beam-sensitive samples into many orientations and reconstruct the sample in 3-D, like a medical CT scan. There are also data issues that need to be addressed," Ercius said, as faster detectors will generate huge amounts of data. Another goal is to make the technique more "plug-and-play," so it is broadly accessible to other scientists. ### Berkeley Lab's Molecular Foundry is a DOE Office of Science User Facility. Researchers from the University of Oregon, Gatan Inc. and Ulm University in Germany also participated in the study.
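The reason a phase plate helps can be seen in a toy calculation: a weak "phase object" such as a light atom changes the phase of the transmitted electron wave but barely changes its intensity, so it is nearly invisible to a conventional detector, whereas mixing the wave with a phase-shifted reference converts that phase shift into measurable contrast. The one-dimensional cartoon below (NumPy, made-up numbers) illustrates the principle only and is not a simulation of MIDI-STEM itself:

```python
# Toy illustration of phase contrast: a weak phase object leaves the intensity
# of the transmitted wave essentially flat, but interfering it with a
# 90-degree-shifted reference wave (the role a phase plate plays) turns the
# phase shift into intensity variation. Not a MIDI-STEM simulation.

import numpy as np

x = np.linspace(-1.0, 1.0, 9)
phi = 0.05 * np.exp(-x**2 / 0.1)          # small phase shift from a light atom

exit_wave = np.exp(1j * phi)              # weak-phase object: amplitude ~ unchanged
plain_image = np.abs(exit_wave) ** 2      # conventional intensity: ~1 everywhere

reference = np.exp(1j * np.pi / 2)        # phase-shifted reference wave
phase_contrast = np.abs(exit_wave + reference) ** 2 / 4.0

print("conventional:        ", np.round(plain_image, 4))
print("with phase reference:", np.round(phase_contrast, 4))
# The first row is flat (the light element is "invisible"); the second row
# varies with phi, i.e., the phase information becomes measurable contrast.
```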


News Article | November 11, 2015
Site: phys.org

Basic biology research at Lawrence Berkeley National Laboratory (Berkeley Lab) has led to the formation of Cinder Biological, or CinderBio, a startup company producing a new class of enzymes made from microbes that thrive in hot volcanic waters. Co-founded by Berkeley Lab scientists Steve Yannone and Jill Fuss, CinderBio will first target the food processing industry, where its enzymes can significantly reduce the vast amounts of chemicals and water used to clean equipment. Eventually it expects to formulate enzymes for the biofuels, paper, and textile industries, and possibly others. "Our enzymes are unique—they can operate in conditions that nobody else's can, so this opens up a lot of previously unexplored applications," said Fuss, a molecular biologist and the company's chief technology officer. "They're made from microbes that come from Yellowstone and live in hot acidic pools, so they thrive in nearly boiling acid. We've been able to take gene sequences and make them into enzymes that are extraordinarily stable." CinderBio won a cash prize in the 2014 FLoW DOE National Clean Energy Business Plan Competition at Caltech. It was also one of eight technologies selected to be a project for the Fall 2013 UC Berkeley Haas School of Business Cleantech to Market program that seeks to translate cleantech research into market opportunities. This year, with an SBIR (Small Business Innovation Research) Phase 1 grant from the National Science Foundation, CinderBio tested its technology at a local creamery. The cleaning of dairy and other food processing equipment is normally a multi-step process. "They use a lot of base, then they wash, then acid, then wash, then sanitizer," said Fuss. "It takes a lot of time and uses a lot of chemicals and water. The microbes in the equipment can be hard to get rid of, so in some areas they've started to use harsher chemicals to combat those biofilms." In the field trials, using CinderBio's enzymes to do the cleaning in place of chemicals, Yannone, a named inventor and the company's CEO, said he was amazed at how successful they were. "I came back after our first trials and recalculated everything because I thought we had moved a decimal point," he said. "We reduced water usage by almost 30 percent and our enzymes removed contaminants much faster and much more effectively than we had expected. The managers and owner at the creamery were all very excited by our results." Enzymes are naturally occurring molecules that act as catalysts to activate or speed up biochemical reactions; their use in industry is commonplace and usually beneficial to the environment since they can replace or reduce the use of chemicals or allow processes to take place at lower temperatures, lowering energy use. CinderBio's technology comes from extremophilic microbes that grow naturally in thermal pools and have been studied for many years. The Berkeley Lab team was able to exploit aspects of the biology of these extreme microbes to turn them into enzyme-making machines. They developed a set of genetic tools that allow them to introduce naturally occurring genes into these microbes to produce enzymes that are more heat- and acid-stable than what has been available. "Historically, scientists have thought that all the information needed to make an active enzyme is entirely encoded in the DNA sequence," said Yannone. 
"Some of our revelations center on the idea that that's not true, that there are other components in a microbe making an enzyme that's stable and works in hot acid other than just the DNA sequence." Yannone and Fuss first started studying the microbe about 10 years ago, looking for the fundamental biological processes that allowed it to thrive in such extreme conditions. Although scientists are still investigating what exactly confers such extreme stability on the microbe, Fuss was able to characterize the structural biochemistry of one of the enzymes. And Yannone, working with his undergraduate student then technician, Adam Barnebey, developed the molecular biology system that led to the formation of CinderBio. Their invention was patented by Berkeley Lab, and Barnebey was the company's first employee. They are under consideration for a Phase 2 SBIR grant and next year may seek additional outside funding. They plan to further refine the platform and scale up production while seeking practical feedback from the food processing industry, a vast market. "We think a lot about what we eat and how our food is grown, but we don't think about what happens in the middle," Fuss said. "Our technology helps make food processing more sustainable and could be used for any food product that goes in a container and ends up on a shelf." Besides saving water and time, use of CinderBio's enzymes would also cut down on chemical waste. Chemical wastewater cannot be reclaimed and instead often has to be trucked off-site. Replacing these chemicals with biodegradable CinderBio enzymes makes that wastewater reclaimable, of particular importance given the ongoing drought in California. "It's great when Berkeley Lab scientists are able to transform their fundamental research into an innovative technology that not only solves industry challenges but also contributes to our country's energy and environmental challenges," said Elsie Quaite-Randall, Berkeley Lab's Chief Technology Transfer Officer. Explore further: Going to extremes to find greener chemicals


News Article | October 10, 2016
Site: www.rdmag.com

Did you know that your brain processes information in a hierarchy? As you are reading this page, the signal coming in through your eyes enters your brain through the thalamus, which organizes it. That information then goes on to the primary visual cortex at the back of the brain, where populations of neurons respond to very specific basic properties. For instance, one set of neurons might fire up because the text on your screen is black and another set might activate because there are vertical lines. This population will then trigger a secondary set of neurons that respond to more complex shapes like circles, and so on until you have a complete picture. For the first time, a new tool developed at the Department of Energy's (DOE's) Lawrence Berkeley National Laboratory (Berkeley Lab) allows researchers to interactively explore the hierarchical processes that happen in the brain when it is resting or performing tasks. Scientists also hope that the tool can shed some light on how neurological diseases like Alzheimer's spread throughout the brain. Created in conjunction with computer scientists at University of California, Davis (UC Davis) and with input from neuroscientists at UC San Francisco (UCSF), the software, called Brain Modulyzer, combines multiple coordinated views of functional magnetic resonance imaging (fMRI) data--like heat maps, node link diagrams and anatomical views--to provide context for brain connectivity data. "The tool provides a novel framework of visualization and new interaction techniques that explore the brain connectivity at various hierarchical levels. This method allows researchers to explore multipart observations that have not been looked at before," says Sugeerth Murugesan, who co-led the development of Brain Modulyzer. He is currently a graduate student researcher at Berkeley Lab and a PhD candidate at UC Davis. "Other tools tend to look at abstract or statistical network connections but don't do quite a good job at connecting back to the anatomy of the brain. We made sure that Brain Modulyzer connects to brain anatomy so that we can simultaneously appreciate the abstract information in anatomical context," says Jesse Brown, a postdoctoral researcher at UCSF who advised the Berkeley Lab development team on the tool's functionality. A paper describing Brain Modulyzer was recently published online in the IEEE/ACM Transactions on Computational Biology and Bioinformatics. Brain Modulyzer is now available on github. Murugesan and Berkeley Lab Computer Scientist Gunther Weber developed the tool together. Weber is also an adjunct professor in the Department of Computer Science at UC Davis. UCSF Associate Professor of Neurology William Seeley also advised the tool's development. As a neuroscientist at UCSF's Memory and Aging Center, Brown and his colleagues use neuroimaging to diagnose diseases, like Alzheimer's and dementia, as well as monitor how the diseases progress over time. Ultimately, their goal is to build a predictive model of how a disease will spread in the brain based on where it starts. "We know that the brain is built like a network, with axons at the tip of neurons that project to other neurons. That's the main way the neurons connect with each other, so one way to think about disease spreading in the brain is that it starts in one place and kind of jumps over along the network connections," says Brown. To see how a brain region is connected to other brain regions, Brown and his colleagues examine the fMRIs of healthy subjects. 
The set of connections observed in the fMRIs are visualized as a network. "For us the connection pattern of the network in healthy subjects is valuable information, because if we then study a patient with dementia and see that the disease is starting at point a in that network, we can expect that it will soon spread through the network connections to points b and c," Brown adds. Before Brain Modulyzer, researchers could only explore these neural networks by creating static images of the brain regions they were studying and superimposing those pictures on an anatomical diagram of the entire brain. On the same screen, they'd also look at fMRI data that had been reduced to a static network diagram. "The problem with this analysis process is that it's all static. If I wanted to explore another region of the brain, which would be a different pattern, I'd have to input a whole different set of data and create another set of static images," says Brown. But with Brain Modulyzer, all he has to do is input a matrix that describes the connection strengths between all of the brain regions that he is interested in studying and the tool will automatically detect the networks. Each network is colored differently in the anatomical view and the information visualized abstractly in a number of graph and matrix views. "Modulyzer is such a helpful tool for discovery because it bubbles up really important information about functional brain properties, including information that we knew was there before, but it also connects to brain regions that we didn't realize existed before in the dataset. Every time I use it, I find something surprising in the data," says Brown. "It is also incredibly valuable for researchers who don't know these methods as well. It will allow them to be a lot more efficient in detecting connections between brain regions that are important for cognition." The idea for Brain Modulyzer initiated when Berkeley Lab's Weber and Seeley met at the "Computational Challenges for Precision Medicine" in November 2012. This workshop brought together investigators from Berkeley and UCSF to focus on computational challenges posed by precision medicine. Their initial discussions led to a collaboration with Oblong Industries--a company that builds computer interfaces--to translate laboratory data collected at UCSF into 3D visualizations of brain structures and activity. The results of this collaboration were presented at the Precision Medicine Summit in May 2013. "At the Aging and Memory Center at UCSF, our expertise is in neuroscience, neurological diseases and dementia. We are really fortunate to be in touch with Berkeley Lab scientists whose expertise in visualization, maps and working with big data exploration helped us build such amazing tools," says Brown. "The precision medicine collaboration was such a fruitful collaboration for everyone that we decided to stay in touch." After the Precision Medicine Summit, the team discussed possibilities for further collaboration, which led to a Laboratory Directed Research and Development (LDRD) project at Berkeley Lab called "Graph-based Analysis and Visualization of Multimodal Multi-resolution Large-scale Neuropathology Data." Part of the funding for Brain Modulyzer development came from this LDRD, as well as grants to Seeley from the Tau Consortium and National Institutes of Health. Soon, the team hopes to present their Brain Modulyzer paper to the neuroscience community for feedback. 
"We want to make sure that this tool is useful to the community, so we will keep iterating on it," says Brown. "We have plenty of ideas to improve on what we have, and we think that Modulyzer will keep getting better over time."


News Article | October 12, 2016
Site: www.biosciencetechnology.com

Did you know that your brain processes information in a hierarchy? As you are reading this page, the signal coming in through your eyes enters your brain through the thalamus, which organizes it. That information then goes on to the primary visual cortex at the back of the brain, where populations of neurons respond to very specific basic properties. For instance, one set of neurons might fire up because the text on your screen is black and another set might activate because there are vertical lines. This population will then trigger a secondary set of neurons that respond to more complex shapes like circles, and so on until you have a complete picture. For the first time, a new tool developed at the Department of Energy's (DOE's) Lawrence Berkeley National Laboratory (Berkeley Lab) allows researchers to interactively explore the hierarchical processes that happen in the brain when it is resting or performing tasks. Scientists also hope that the tool can shed some light on how neurological diseases like Alzheimer's spread throughout the brain. Created in conjunction with computer scientists at University of California, Davis (UC Davis) and with input from neuroscientists at UC San Francisco (UCSF), the software, called Brain Modulyzer, combines multiple coordinated views of functional magnetic resonance imaging (fMRI) data--like heat maps, node link diagrams and anatomical views--to provide context for brain connectivity data. "The tool provides a novel framework of visualization and new interaction techniques that explore the brain connectivity at various hierarchical levels. This method allows researchers to explore multipart observations that have not been looked at before," says Sugeerth Murugesan, who co-led the development of Brain Modulyzer. He is currently a graduate student researcher at Berkeley Lab and a PhD candidate at UC Davis. "Other tools tend to look at abstract or statistical network connections but don't do quite a good job at connecting back to the anatomy of the brain. We made sure that Brain Modulyzer connects to brain anatomy so that we can simultaneously appreciate the abstract information in anatomical context," says Jesse Brown, a postdoctoral researcher at UCSF who advised the Berkeley Lab development team on the tool's functionality. A paper describing Brain Modulyzer was recently published online in the IEEE/ACM Transactions on Computational Biology and Bioinformatics. Brain Modulyzer is now available on github. Murugesan and Berkeley Lab Computer Scientist Gunther Weber developed the tool together. Weber is also an adjunct professor in the Department of Computer Science at UC Davis. UCSF Associate Professor of Neurology William Seeley also advised the tool's development. As a neuroscientist at UCSF's Memory and Aging Center, Brown and his colleagues use neuroimaging to diagnose diseases, like Alzheimer's and dementia, as well as monitor how the diseases progress over time. Ultimately, their goal is to build a predictive model of how a disease will spread in the brain based on where it starts. "We know that the brain is built like a network, with axons at the tip of neurons that project to other neurons. That's the main way the neurons connect with each other, so one way to think about disease spreading in the brain is that it starts in one place and kind of jumps over along the network connections," says Brown. To see how a brain region is connected to other brain regions, Brown and his colleagues examine the fMRIs of healthy subjects. 
The set of connections observed in the fMRIs are visualized as a network. "For us the connection pattern of the network in healthy subjects is valuable information, because if we then study a patient with dementia and see that the disease is starting at point a in that network, we can expect that it will soon spread through the network connections to points b and c," Brown adds. Before Brain Modulyzer, researchers could only explore these neural networks by creating static images of the brain regions they were studying and superimposing those pictures on an anatomical diagram of the entire brain. On the same screen, they'd also look at fMRI data that had been reduced to a static network diagram. "The problem with this analysis process is that it's all static. If I wanted to explore another region of the brain, which would be a different pattern, I'd have to input a whole different set of data and create another set of static images," says Brown. But with Brain Modulyzer, all he has to do is input a matrix that describes the connection strengths between all of the brain regions that he is interested in studying and the tool will automatically detect the networks. Each network is colored differently in the anatomical view and the information visualized abstractly in a number of graph and matrix views. "Modulyzer is such a helpful tool for discovery because it bubbles up really important information about functional brain properties, including information that we knew was there before, but it also connects to brain regions that we didn't realize existed before in the dataset. Every time I use it, I find something surprising in the data," says Brown. "It is also incredibly valuable for researchers who don't know these methods as well. It will allow them to be a lot more efficient in detecting connections between brain regions that are important for cognition." The idea for Brain Modulyzer initiated when Berkeley Lab's Weber and Seeley met at the "Computational Challenges for Precision Medicine" in November 2012. This workshop brought together investigators from Berkeley and UCSF to focus on computational challenges posed by precision medicine. Their initial discussions led to a collaboration with Oblong Industries--a company that builds computer interfaces--to translate laboratory data collected at UCSF into 3D visualizations of brain structures and activity. The results of this collaboration were presented at the Precision Medicine Summit in May 2013. "At the Aging and Memory Center at UCSF, our expertise is in neuroscience, neurological diseases and dementia. We are really fortunate to be in touch with Berkeley Lab scientists whose expertise in visualization, maps and working with big data exploration helped us build such amazing tools," says Brown. "The precision medicine collaboration was such a fruitful collaboration for everyone that we decided to stay in touch." After the Precision Medicine Summit, the team discussed possibilities for further collaboration, which led to a Laboratory Directed Research and Development (LDRD) project at Berkeley Lab called "Graph-based Analysis and Visualization of Multimodal Multi-resolution Large-scale Neuropathology Data." Part of the funding for Brain Modulyzer development came from this LDRD, as well as grants to Seeley from the Tau Consortium and National Institutes of Health. Soon, the team hopes to present their Brain Modulyzer paper to the neuroscience community for feedback. 
"We want to make sure that this tool is useful to the community, so we will keep iterating on it," says Brown. "We have plenty of ideas to improve on what we have, and we think that Modulyzer will keep getting better over time."


News Article | October 10, 2016
Site: www.chromatographytechniques.com

Did you know that your brain processes information in a hierarchy? As you are reading this page, the signal coming in through your eyes enters your brain through the thalamus, which organizes it. That information then goes on to the primary visual cortex at the back of the brain, where populations of neurons respond to very specific basic properties. For instance, one set of neurons might fire up because the text on your screen is black and another set might activate because there are vertical lines. This population will then trigger a secondary set of neurons that respond to more complex shapes like circles, and so on until you have a complete picture. For the first time, a new tool developed at the Department of Energy’s (DOE’s) Lawrence Berkeley National Laboratory (Berkeley Lab) allows researchers to interactively explore the hierarchical processes that happen in the brain when it is resting or performing tasks. Scientists also hope that the tool can shed some light on how neurological diseases like Alzheimer’s spread throughout the brain. Created in conjunction with computer scientists at University of California, Davis (UC Davis) and with input from neuroscientists at UC San Francisco (UCSF), the software, called Brain Modulyzer, combines multiple coordinated views of functional magnetic resonance imaging (fMRI) data—like heat maps, node link diagrams and anatomical views—to provide context for brain connectivity data. “The tool provides a novel framework of visualization and new interaction techniques that explore the brain connectivity at various hierarchical levels. This method allows researchers to explore multipart observations that have not been looked at before,” says Sugeerth Murugesan, who co-led the development of Brain Modulyzer. He is currently a graduate student researcher at Berkeley Lab and a PhD candidate at UC Davis. “Other tools tend to look at abstract or statistical network connections but don’t do quite a good job at connecting back to the anatomy of the brain. We made sure that Brain Modulyzer connects to brain anatomy so that we can simultaneously appreciate the abstract information in anatomical context,” says Jesse Brown, a postdoctoral researcher at UCSF who advised the Berkeley Lab development team on the tool’s functionality. A paper describing Brain Modulyzer was recently published online in the IEEE/ACM Transactions on Computational Biology and Bioinformatics. Brain Modulyzer is now available on github. Murugesan and Berkeley Lab Computer Scientist Gunther Weber developed the tool together. Weber is also an adjunct professor in the Department of Computer Science at UC Davis. UCSF Associate Professor of Neurology William Seeley also advised the tool’s development. As a neuroscientist at UCSF’s Memory and Aging Center, Brown and his colleagues use neuroimaging to diagnose diseases, like Alzheimer’s and dementia, as well as monitor how the diseases progress over time. Ultimately, their goal is to build a predictive model of how a disease will spread in the brain based on where it starts. “We know that the brain is built like a network, with axons at the tip of neurons that project to other neurons. That’s the main way the neurons connect with each other, so one way to think about disease spreading in the brain is that it starts in one place and kind of jumps over along the network connections,” says Brown. To see how a brain region is connected to other brain regions, Brown and his colleagues examine the fMRIs of healthy subjects. 
The set of connections observed in the fMRIs is visualized as a network. “For us the connection pattern of the network in healthy subjects is valuable information, because if we then study a patient with dementia and see that the disease is starting at point a in that network, we can expect that it will soon spread through the network connections to points b and c,” Brown adds. Before Brain Modulyzer, researchers could only explore these neural networks by creating static images of the brain regions they were studying and superimposing those pictures on an anatomical diagram of the entire brain. On the same screen, they’d also look at fMRI data that had been reduced to a static network diagram. “The problem with this analysis process is that it’s all static. If I wanted to explore another region of the brain, which would be a different pattern, I’d have to input a whole different set of data and create another set of static images,” says Brown. But with Brain Modulyzer, all he has to do is input a matrix that describes the connection strengths between all of the brain regions he is interested in studying, and the tool will automatically detect the networks. Each network is colored differently in the anatomical view, and the information is visualized abstractly in a number of graph and matrix views. “Modulyzer is such a helpful tool for discovery because it bubbles up really important information about functional brain properties, including information that we knew was there before, but it also connects to brain regions that we didn’t realize existed before in the dataset. Every time I use it, I find something surprising in the data,” says Brown. “It is also incredibly valuable for researchers who don’t know these methods as well. It will allow them to be a lot more efficient in detecting connections between brain regions that are important for cognition.” The idea for Brain Modulyzer originated when Berkeley Lab’s Weber and Seeley met at the “Computational Challenges for Precision Medicine” workshop in November 2012. This workshop brought together investigators from Berkeley and UCSF to focus on computational challenges posed by precision medicine. Their initial discussions led to a collaboration with Oblong Industries, a company that builds computer interfaces, to translate laboratory data collected at UCSF into 3D visualizations of brain structures and activity. The results of this collaboration were presented at the Precision Medicine Summit in May 2013. “At the Aging and Memory Center at UCSF, our expertise is in neuroscience, neurological diseases and dementia. We are really fortunate to be in touch with Berkeley Lab scientists whose expertise in visualization, maps and working with big data exploration helped us build such amazing tools,” says Brown. “The precision medicine collaboration was such a fruitful collaboration for everyone that we decided to stay in touch.” After the Precision Medicine Summit, the team discussed possibilities for further collaboration, which led to a Laboratory Directed Research and Development (LDRD) project at Berkeley Lab called “Graph-based Analysis and Visualization of Multimodal Multi-resolution Large-scale Neuropathology Data.” Part of the funding for Brain Modulyzer development came from this LDRD, as well as grants to Seeley from the Tau Consortium and the National Institutes of Health. Soon, the team hopes to present their Brain Modulyzer paper to the neuroscience community for feedback.
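The article does not spell out Brain Modulyzer's own hierarchical algorithms, but the workflow Brown describes, feeding in a region-by-region connection-strength matrix and letting software group regions into networks, can be sketched with standard Python tools. The sketch below is only illustrative: the random time series, the 0.2 threshold, and the use of greedy modularity maximization are assumptions, not the tool's actual method.

```python
# Illustrative sketch: group brain regions into "modules" from a connection-strength matrix.
# This is NOT Brain Modulyzer's algorithm -- just a standard community-detection pass
# over a made-up region-by-region connectivity matrix.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)

# Stand-in for fMRI-derived connection strengths between 20 regions
# (in practice, e.g., correlations between regional BOLD time series).
n_regions = 20
timeseries = rng.standard_normal((n_regions, 200))
conn = np.corrcoef(timeseries)        # symmetric connection-strength matrix
np.fill_diagonal(conn, 0.0)

# Keep only reasonably strong positive connections (threshold is an assumption).
threshold = 0.2
G = nx.from_numpy_array(np.where(conn > threshold, conn, 0.0))

# Group regions into modules by maximizing weighted modularity.
modules = greedy_modularity_communities(G, weight="weight")
for i, module in enumerate(modules):
    print(f"Module {i}: regions {sorted(module)}")
```

With real data, each detected module could then be colored in an anatomical view, which is the kind of abstract-to-anatomy linking the researchers describe.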
“We want to make sure that this tool is useful to the community, so we will keep iterating on it,” says Brown. “We have plenty of ideas to improve on what we have, and we think that Modulyzer will keep getting better over time.” In addition to Brown, Weber, Murugesan and Seeley, other authors on the paper are Bernd Hamann (UC Davis), Andrew Trujillo (UCSF) and Kristopher Bouchard (Berkeley Lab).


News Article | November 15, 2016
Site: cleantechnica.com

As one of the sunniest states in the US, Arizona has enjoyed early and great success as a leader in solar energy production. With 2,453 megawatts (MW) of solar installed, the US Solar Energy Industries Association (SEIA) ranks Arizona as #2 for total installed solar capacity. Expanding 4% over 2014, in 2015, 258 MW of solar capacity was installed, ranking Arizona 5th in the nation for 2015 installed capacity. Over $582 million was invested last year in Arizona’s solar installations. Currently, the total amount of solar power capacity installed in Arizona is enough to power 348,000 homes, but over the next 5 years, SEIA expects that number to more than double. With over 375 solar companies serving the value chain in Arizona, there are more than 6,900 people employed in the state’s solar market. With all this tremendous growth in Arizona’s solar industry over the past 4 years, it is sad to see the current negative effects of solar policy manipulation and politics at play in Arizona’s energy arena. Changes in net metering incentives, the imposition of new rate plans, proposed demand rates and grid-use charges, and on top of all of this, politically nuanced media representation have given rise to huge instability in the state’s solar market. Engaging with local stakeholders and policy makers, SEIA states that it is working to promote “stability and transparency into policies so that the market can recover from this market disturbance and continue to grow.” A huge back-and-forth exchange between solar proponents and state utilities has developed into a full-blown battle over the value of rooftop solar in Arizona. At the root of the tumult is the view of Arizona Public Service Electric Company (APS) and state regulators that grid-connected solar customers must pay their “fair share” for grid services in the form of an additional fee tacked on to their utility bill. Solar proponents respond that grid-connected solar energy supplies a bonus to the grid that other customers don’t offer, so paying an additional charge is an unfair penalty. Especially in the face of lowered net metering rates for excess solar energy entering the grid, proposing an additional charge for grid-connected solar customers has really slapped rooftop solar customers in the face and heated up the public debate. The volley reached the level of drafting competing solar ballot initiatives to amend the state’s constitution, until solar advocates and state lawmakers entered mediation, brokered a compromise, and dropped the ballot initiatives. The ceasefire, brokered between SolarCity (representing “Yes on AZ Solar,” a political action committee funded by SolarCity) and Arizona Public Service Company finally occurred when the two sides agreed to mediate over how solar customers will be compensated for their excess electricity. The ceasefire was brokered less than 1 hour after the Arizona State Senate initiated steps to send the electorate measures mandating separate rates for rooftop solar customers, and for regulating solar leasing companies as utilities. Head utility regulator, Arizona Corporation Commission (ACC) Chairman Doug Little, expressed relief at the ceasefire negotiation. He said, “It would have been a pretty ugly dispute between the ballot measures over the summer.” In August, after 2 full days of testimony before the commission, the ACC rejected requests to add fees for solar customers and to end net metering. 
Earthjustice attorney Michael Hiatt reported, “This decision is great news for Arizona families and small businesses that plan on going solar, and for everyone who breathes cleaner air as a result.” He added, “The decision sends a powerful message to Arizona utilities that the Commission will not simply rubberstamp their anti-solar agenda.” However, the squabbles over Arizona’s solar energy future are not over yet. A year-long Value of Solar Docket was opened at this same time last year, hoping to resolve the rancorous debate over solar customer compensation once and for all. The VOS docket could conclude within the next month and promises to offer another round of spectacular fireworks, rhetoric, and politically nuanced media representation. Stay Tuned! The following is an (untumultuous) in-depth exploration of the state of solar energy in Arizona. Please feel free to offer further contributions in the comments, below. Because everyone’s situation is unique, there’s really no substitute to simply getting a solar quote. However, research and data are always being updated and the SEIA reports that installed solar PV system prices in the US have dropped by 12% from last year and 66% from 2010. The latest data indicates that in 2016 the average installed cost of solar in the US is just below $3.50/watt for residential and $2.00/watt for commercial installations. This means that today it will cost about $17,500 to buy an average-size (5 kW) system at roughly $3.50/watt. In Arizona, 6 kW systems are fairly common, so this price works out to about $21,000. Even using 2011’s data for monthly savings, at $137 saved per month for Arizona over the course of 20 years, this works out to be $32,880, netting a gain of (at least) $11,880 on a 6 kW system installed in 2016. What if you don’t stay in your home for the full 20 years? In Arizona, property taxes don’t increase with the addition of a rooftop solar installation, but your home’s sales value increases an average of $5,911 per kilowatt of solar installed, according to a 2013 Lawrence Berkeley National Laboratory (LBNL) study. So, even if you sell your home in the year following your 6 kW solar rooftop installation, it is reasonable to expect a sales price increase of something like $35,466. This leaves you with a net gain of about $14,466 on your ($21,000) solar rooftop, assuming you sell your house at the asking price. Sell or stay, either way, rooftop solar is a win-win scenario! The federal solar investment tax credit (ITC) is available in all states of the US. Up to 30% of a solar installation’s costs can be credited via the ITC. However, you can only take advantage of the credit once, and only if you have that much tax liability in that tax year. Currently, the 30% ITC extends through 2019 but is scheduled to be phased out after that. Businesses may also qualify for incentives, including the Renewable Energy Production Tax Credit (PTC), the Business Energy Investment Tax Credit (ITC-1603), and a corporate depreciation incentive, as well. The following are a few of Arizona’s state solar energy incentives listed on the DSIRE Incentives Database. Operated by the N.C. Clean Energy Technology Center at N.C. State University, the Database of State Incentives for Renewables & Efficiency (DSIRE) offers an up-to-date and comprehensive website for solar energy incentives across the US, including federal and state incentives, programs, and policies. 
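Those back-of-the-envelope numbers are easy to check. The short sketch below simply reruns the article's own figures (roughly $3.50/watt installed, a 6 kW system, and the 2011-era estimate of $137 per month in bill savings over 20 years); the figures, not the code, are the assumptions here.

```python
# Reproduce the article's rough Arizona payback estimate for a purchased system.
cost_per_watt = 3.50       # 2016 average residential installed cost ($/W)
system_kw = 6              # common Arizona system size
monthly_savings = 137      # article's 2011-era estimate of monthly bill savings ($)
years = 20

install_cost = cost_per_watt * system_kw * 1000     # $21,000
lifetime_savings = monthly_savings * 12 * years     # $32,880
net_gain = lifetime_savings - install_cost          # $11,880

print(f"Installed cost:  ${install_cost:,.0f}")
print(f"20-year savings: ${lifetime_savings:,.0f}")
print(f"Net gain:        ${net_gain:,.0f}")
```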
• Start Date: 05/23/2009
• Eligible Renewable/Other Technologies: Geothermal Electric, Solar Thermal Electric, Solar Photovoltaics, Wind (All), Biomass, Hydroelectric, Hydrogen, Municipal Solid Waste, Combined Heat & Power, Landfill Gas, Wind (Small), Hydroelectric (Small), Anaerobic Digestion, Fuel Cells using Renewable Fuels
• Applicable Sectors: Commercial, Industrial, Local Government, Nonprofit, Residential, Schools, State Government, Federal Government, Agricultural, Institutional, Senior citizens
• Applicable Utilities: Investor-owned utilities, electric cooperatives
• System Capacity Limit: No capacity limit specified, but system must be sized to meet part or all of customer’s electric load and may not exceed 125% of customer’s total connected load
• Aggregate Capacity Limit: No limit specified
• Net Excess Generation: Credited to customer’s next bill at retail rate; excess reconciled annually at avoided-cost rate
• Ownership of Renewable Energy Credits: Not addressed in net metering rules; customer owns RECs unless participating in a utility incentive
• Meter Aggregation: Not addressed
Net Excess Generation: Accomplished using a single bi-directional meter, customer net excess generation (NEG) “is carried over to the customer’s next bill at the utility’s retail rate, as a kilowatt-hour (kWh) credit. Any NEG remaining at the customer’s last monthly bill in the annual true-up period will be paid to the customer, via check or billing credit, at the utility’s avoided cost payment.” The NEG rule continues, “For customers taking service under a time-of-use rate, off-peak generation will be credited against off-peak consumption, and on-peak generation will be credited against on-peak consumption. The customer’s monthly bill is based on the net on-peak kWh and net off-peak kWh amounts. Any monthly customer NEG will be carried over to the customer’s next bill as an off-peak or on-peak kWh credit.”
Additional Charges: The ACC requires that net metering charges be assessed in a non-discriminatory manner. Any new or additional charges that would increase an eligible customer-generator’s costs beyond those of other customers in the rate class to which the customer-generator would otherwise be assigned must first be proposed to the ACC for consideration and approval. The ACC also notes that the utility has the burden of proof in any such cost increase proposal. In December 2013, responding to an application from APS addressing cost shifting, the ACC ordered a $0.70 per kW charge (a $3.50 monthly charge for a 5 kW system) for all residential distributed generation systems installed on or after January 1, 2014.** However, this charge doesn’t apply to customers with systems installed on or before December 31, 2013. The ACC also requires APS to file quarterly reports on the number of new distributed generation installations per month and the kW size of those installations, and to report the amount of revenue collected from customers through the lost fixed cost recovery charge. Other utilities have also proposed additional charges for customer-generators.
*Salt River Project (SRP) and municipal utilities do not fall under the jurisdiction of the Arizona Corporation Commission, and are therefore not subject to the state net metering rules – see SRP Net Metering Program Overview, below.
**The charge applies specifically to “distributed generation” systems, not “net-metered systems.” However, as net metering only applies to systems located on the customer’s premises, this charge will affect net metering customers.
The charge only applies to APS customers.
• Website: http://www.srpnet.com/environment/earthwise/solar/solarelectric.aspx
• Utilities: Salt River Project
• Eligible Renewable/Other Technologies: Geothermal Electric, Solar Photovoltaics, Wind (All), Wind (Small)
• Applicable Sectors: Commercial, Residential, Low-Income Residential
• Applicable Utilities: SRP
• System Capacity Limit: Existing Self-Generation Customers: 300 kW; New Self-Generation Customers: Not specified
• Aggregate Capacity Limit: No limit specified
• Net Excess Generation: Credited to customer’s next bill at retail rate; held for same time-of-use period. For existing DG customers, excess is reconciled annually in April at the average annual market price minus a price adjustment of $0.00017/kWh
• Ownership of Renewable Energy Credits: Not addressed
• Meter Aggregation: Not allowed
Note: Salt River Project (SRP) requires an interconnection agreement. SRP changed its existing net metering program for residential customers in February 2015. These changes went into effect with the April 2015 billing cycle. However, customers who purchased their distributed energy system or signed a lease agreement before December 8, 2014, are allowed to keep their original net metering rate plan for 20 years.
Net Excess Generation: According to the SRP NEG rules, “The kWh delivered to SRP are subtracted from the kWh delivered from SRP for each billing cycle. If the kWh calculation is net positive for the billing cycle, SRP will bill the net kWh to the customer under the applicable price plan. If the kWh calculation is net negative for the billing cycle, SRP will carry forward and credit the kWh against customer kWh usage on the next monthly bill. However, if the kWh is net negative at the end of the April billing cycle, SRP will credit the net kWh from the customer at an average annual market price. No credits will be carried forward to the May billing cycle.”
Additional Charges: Under SRP’s self-generation plan, customers must pay a fixed monthly service fee based on the size of their electricity service. They also must pay a grid, or demand, charge based on the customer’s maximum power demand during peak electricity times. According to SRP, the energy charges per kilowatt-hour (kWh) are lower with the customer generation plan than with the standard residential rate plans.
Aimed at reducing Arizona’s staggering peak load, a $4 million residential storage program was mandated in July 2016 by the Arizona Corporation Commission (ACC). Arizona Public Service (APS) received a 120-day deadline to firm up the specifics and get approval from the ACC, making this the first time a state besides California has mandated investment in home energy storage. Remarkably, the ACC has initiated this hallmark policy at no additional cost to ratepayers by tapping into an existing fund for energy-efficiency projects. Commissioner Andy Tobin proposed the idea last June. Refined with the help of Commissioner Bob Stump, the proposal passed at ACC’s July meeting. Commissioner Tobin said, “My view is, why should we be waiting around when we can help stimulate the storage market with surplus funds that are already there, that are supposed to be used for these energy efficiencies?” He added, “We’re missing the boat across the country, quite frankly, in pushing for storage the way we pushed for solar 10 years ago.” The ACC has mandated that energy storage should help residents lower demand during peak system load times.
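The four-hour summer peak the commission is worried about (described below) is exactly the kind of window a home battery can shave. The toy dispatch below is only a sketch of that idea; the hourly load profile, 10 kWh battery, and 5 kW discharge limit are invented illustrative numbers, not program specifications.

```python
# Illustrative peak shaving: discharge a home battery during the evening peak.
# Load profile and battery specs are invented for illustration only.
hourly_load_kw = [1.2] * 15 + [4.5, 5.0, 5.0, 4.5] + [1.5] * 5   # 24 hours, peak at hours 15-18
peak_hours = range(15, 19)

battery_kwh = 10.0        # usable storage
max_discharge_kw = 5.0    # inverter limit

shaved = list(hourly_load_kw)
remaining = battery_kwh
for h in peak_hours:
    discharge = min(shaved[h], max_discharge_kw, remaining)
    shaved[h] -= discharge
    remaining -= discharge

print("Original peak:", max(hourly_load_kw), "kW")
print("Shaved peak:  ", max(shaved), "kW")
```

In this toy case a 10 kWh battery wipes out roughly two of the four peak hours, which is the scale of benefit Commissioner Tobin alludes to below.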
The proposal states, “Distributed generation technology may require sending more complex price signals not otherwise appropriate for traditional customers,” explaining that the utility “may need to offer participating customers advanced, time-differentiated rate plans.” “Arizona’s only energy problem in the short [term] and now,” said Tobin, “is peak demand four hours a day, five days a week for three months.” He explained, “When you’re talking 110, 115 degrees on a hot day and you’ve got kids in the house or seniors at home, it’s not like you have a lot of options.” The ACC’s proposed residential energy storage program offers a new option. Incentivizing residents to use stored energy at peak load times may be the best solution for Arizona. The pilot plan may even lead to a robust market for energy storage in the state, as well as offering significant savings for residential ratepayers. Commissioner Tobin is optimistic about the future of energy storage for Arizona. “If you can take two hours in storage out of a four-hour peak in an Arizona summer,” he said, “that’s huge.”
The Solar Energy Industries Association (SEIA) lists the following solar installations as especially noteworthy in Arizona:
• Agua Caliente in Yuma County was completed in 2013 by developer First Solar. This photovoltaic project has the capacity to generate 290 MW of electricity – enough to power over 41,200 Arizona homes.
• At 125 MW, Arlington Valley Solar Project II is among the largest solar installations in Arizona. Completed in 2015 by LS Power, this photovoltaic project has enough electric capacity to power more than 17,700 homes.
• Several large retailers in Arizona have gone solar, including Wal-Mart, REI, Intel, and IKEA. Macy’s has installed one of the largest corporate photovoltaic systems in the state, with 3,456 kW of solar capacity at its location in Goodyear.
According to SEIA, there are currently over 375 solar companies employed throughout the value chain in Arizona, ranging from solar system installers to manufacturers. The following is not a complete listing of all solar installers in Arizona; these companies were chosen and listed based on high customer recommendations and reviews:
It is worse than ironic that Arizona should be debating the value of rooftop solar energy generation. Arizona Public Service (APS) has around $2 billion invested in solar energy capacity, with a portfolio divided between 551 MW of rooftop solar installations and 499 MW of utility-scale solar plants. This past summer, amidst all the tumult and rancor over rooftop solar customer compensation, APS became the first utility outside of California to surpass 1 GW of solar energy capacity. Reaching this milestone hasn’t exactly promoted solar benevolence at APS, however. As noted in the introduction, APS has instead waged battle with Arizona’s solar advocates, narrowly avoiding a ballot initiative to amend the state constitution in favor of rooftop solar customers. A rate increase proposal before the Arizona Corporation Commission (ACC) is currently under review, pushing for a mandatory demand charge that would increase residential customers’ bills by an average of 7.96%. “In its upcoming rate case,” states Solar Strong America Chairman Kris Mayes, “APS suggests ratepayers shoulder an additional $3.6 billion in new costs.
That’s real money out of household budgets, and if the ACC can help mitigate those costs by encouraging more rooftop solar energy, then the benefits of rooftop solar must be fully acknowledged.” Mayes, a former ACC Commissioner, adds dryly, “Arizona ratepayers could have cost savings falling from the sky, if our regulators see the light.” In November, the ACC is expected to vote on a methodology for quantifying the benefits of rooftop solar for utility ratepayers. The “Value of Solar” docket, as the ACC vote is referred to, is a critical pivot point upon which Arizona’s rooftop solar industry is poised, breathless but hopeful for its future. Retired Congressman Barry Goldwater Jr. and Retired Mohave County Sheriff Tom Sheahan are encouraging all solar advocates to weigh in on the value that rooftop solar and energy choice has in Arizona. In their recent opinion piece, published in the AZ Capitol Times, Goldwater and Sheahan note: “The Arizona Corporation Commission is holding a proceeding to determine the value of rooftop solar, the value that utilities should pay customers who deliver electricity back to the grid. It is clear that rooftop solar benefits all Arizonans. It drives down costs for utilities by lessening the need to build new generating stations and transmission lines, for starters – costs that are passed onto us. “Solar gives Arizonans options to generate their own electricity and lower their bills. It reduces pollution. Solar saves water. Solar creates jobs and provides almost 9,000 Arizona families with incomes higher than the national average according to the Solar Foundation. The benefits of our sunshine and rooftop solar are numerous. “Rooftop solar encourages free market competition. That’s why Arizona monopoly utilities have been trying to undermine rooftop solar. They don’t want you to supply any of your own energy. “The outcome of the ACC Value of Solar docket will impact all Arizonans. Recent independent studies conducted in other states show rooftop solar provides a long-term benefit to all, whether you have solar on your roof or not. “If our Arizona Commissioners conclude that solar is not a net positive for everyone, utilities across our state will use this as an excuse to strengthen their monopolies. This will hurt Arizona energy choice and send jobs and business running from the state. All benefits need to be thoroughly considered in the Commission’s deliberations. “Currently six utilities regulated by the ACC have filed proposals that threaten to eliminate thousands of solar jobs and the growing solar industry. Now, they are hoping that the value of solar proceeding will give them an excuse to do so. “A fair methodology for assessing all the benefits of rooftop solar would do the opposite and stop the utility assault on rooftop solar in its tracks. “That’s why it’s important for Arizonans to weigh in. Let regulators know the value that rooftop solar and energy choice has for Arizona. Let regulators know that our values lie in individual choice and freedom, homegrown jobs, innovation and competition.” Goldwater and Sheahan urge interested people to call the Arizona Corporation Commission and reference the Value of Solar Docket. ACC Phoenix Office: (602) 542-4251 or 1-(800) 222-7000 (Toll Free In-State Only). ACC Tucson Office: (520) 628-6550 or 1-(800) 535-0148 (Toll Free In-State Only). 
In addition, you can email Arizona’s Commissioners and offer your thoughts on the value of rooftop solar at the following addresses:
Arizona Corporation Commission – Learn about the governing body that regulates the electricity rates and services of Arizona public utilities
Arizona State Legislature – Track pending legislation affecting solar energy, locate and contact individual legislators, and stay up to date on current legislative issues in Arizona
Arizona Governor’s Office of Energy Policy – Find a wide variety of information on state government energy programs, policy, projects, energy-saving strategies and energy-related statistics
Arizona State University – Solar Power Lab – Read about the innovative solar technology research taking place at Arizona State University
Arizona Goes Solar – Learn about residential and commercial solar energy options and related regulations through a collaborative effort led by the Arizona Corporation Commission and implemented by the regulated electric utilities in Arizona
DSIRE Incentives Database – Arizona – Search a public clearinghouse for specific solar energy incentives in Arizona and across the United States
U.S. Energy Information Administration – Arizona State Profile – Explore official energy statistics, including data on electricity supply and demand, from the U.S. government
BBB Accredited Business Directory – Solar Energy System Design & Installation – Better Business Bureau’s comprehensive directory of Arizona’s solar energy contractors, dealers, suppliers, manufacturers, and distributors
Check out this YouTube clip on Arizona’s Solar Story:


News Article | December 4, 2016
Site: cleantechnica.com

Connecticut enjoys one of the fastest-growing solar markets among the smaller US states. One reason is because the Nutmeg State actually receives more sunshine than most of the solar farms in Germany, with an average of close to 4.5 hours of direct sunlight for every 10 square feet (approx. 1 square meter). It may be a small state, but Connecticut is big on renewables and makes it easy for residents and businesses to go solar. The state has consistently ranked in the top 10 nationally for energy efficiency programs & policies since the early 2000s. Over 10,000 homes in Connecticut added solar panels since 2012, with nearly 5,000 installations completed in 2014 alone. And, with the state’s recent launch of C-PACE, municipalities and commercial property owners are increasingly installing solar systems, as well. The Solar Energy Industries Association (SEIA) reports that over 147 solar companies employ nearly 2000 people in Connecticut. The total installed capacity of solar energy in Connecticut is 265 MW, ranking it 17th in the US. The state’s 2015 installation of solar capacity ranks 14th nationally, growing 64% in 2015, with 91 MW installed for the year. Investments in solar installations for 2015 represented a whopping 124% increase over 2014, pulling in $270 million, and 2016 is expected to see even greater growth. Unlike many states where going solar is treated like an act of treason, Connecticut has been consistently and refreshingly open-minded and generous in its adoption of clean energy policies, programs, and incentives. With the electric restructuring legislation of 1998, Connecticut’s Public Act 98-28 established separate funds supporting renewable energy and energy efficiency. A surcharge on ratepayers’ utility bills started up in 2000 for the renewable energy fund, charging $0.0005 per kilowatt-hour (0.5 mill per kWh), rising to 1 mill in July 2004. The Connecticut Green Bank, known officially as the Clean Energy Finance and Investment Authority (CEFIA), administers the CT Clean Energy Fund (CCEF) fed by the renewable energy fund surcharge. Designated by the CT General Assembly, the CT Green Bank is a quasi-governmental investment organization that develops programs and funds projects that meet the CCEF mission. The total funding for the CCEF from 2000–2010 was $151 million, with annual funding of approximately $20 million. Eligible projects include solar photovoltaics (PV), biomass, hydroelectric, hydrogen, combined heat & power (CHP), fuel cells using non-renewable or renewable fuels, tidal, wave, ocean thermal, and other distributed generation technologies. Applicable sectors include residential, commercial, industrial, institutional, and utilities. Also, since 1977, Connecticut residents have enjoyed a 100% exemption from additional property taxes when installing solar panels on their homes. A solar installation represents an instant property value boost of many thousands of dollars for a homeowner. However, as long as the proper claim is filed in time, Connecticut never charges the homeowner any extra taxes for installing solar panels. Add to this a 100% sales tax exemption for the solar installation and its value immediately increases an additional 6.35%. Thanks to this early and visionary foresight, Connecticut offers a wide range of generously funded incentives, rebates, and financing programs to help its citizens transition to solar energy production for their homes and businesses. The following is an in-depth exploration of the state of solar energy in Connecticut. 
Please feel free to offer further contributions in the comments, below. Determining what a solar PV system will cost up front and save you over the long run depends on a number of factors, including system size, the length of time the system is in operation, and the price of electricity over the course of this time. In Connecticut, the average residential solar system is 7 kW and generates around 8,400 kWh per year, but your needs and capabilities may be different, so it is always best to get a personalized solar quote. According to the SEIA, installed solar PV system prices in the US have dropped 12% from 2015 and 66% since 2010. In 2016, the latest data indicate that the average installed cost of solar in the US is $2.89/watt for residential and around $2.00/watt for commercial installations. At $3.50/watt, this means that in Connecticut it will cost roughly $24,500 to buy an average-size (7 kW) system. The federal ITC of 30% reduces this out-of-pocket expense by about $7,350, and you can expect an additional savings of about $1,488 ($124/mo) in estimated energy savings. The total reduction works out to be around $8,838, so the first year cost of installing a 7 kW solar PV system would be around $15,662. Keep in mind that the monthly energy savings estimate is based on 2011 data, but over 20 years this still works out to be around $29,760, or a net gain of around $14,098 for a 7 kW system installed in 2016. No matter how you look at it, this is a great return on your investment and we didn’t even calculate the additional gain you will make by selling your excess electricity back to the utility under Connecticut’s generous net metering rate of $0.18 kWh! And finally, even if you don’t remain in your home for a full 20 years, your home’s sales value significantly increases as soon as your solar system is installed. According to a Lawrence Berkeley National Laboratory (LBNL) study, every kilowatt of solar installed on your roof increases the sales value of your home by an average of $5,911. This means that a 7 kW system will add $41,377 to your home’s value, so even if you sell it after the first year, you could potentially pay off your ($15,662) solar PV system and still make $25,715 on your investment! The federal solar investment tax credit (ITC) is available in every state and offers credit from the federal government of up to 30% of the total costs of a solar installation. To get this tax rebate, fill out the one-page form with your federal tax return. If you are not expecting to owe taxes in the same year as your installation, roll over your credit to the following year. The federal ITC is currently scheduled to remain through 2019 and is to be phased out thereafter. Businesses are also eligible for corporate depreciation incentives, a Business Energy Investment Tax Credit (ITC-1603) and a Renewable Energy Production Tax Credit (PTC). The following is a limited listing of Connecticut solar energy incentives from the DSIRE Incentives Database. Operated by the NC Clean Energy Technology Center at NC State University, the Database of State Incentives for Renewables & Efficiency (DSIRE) website offers current, comprehensive public information on US solar energy incentives, including federal and state policies, programs, and incentives. 
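As with the Arizona example earlier in this series, the Connecticut estimates are straightforward to reproduce. The sketch below simply reruns the article's own figures (a 7 kW system at the assumed $3.50/watt Connecticut price, the 30% federal ITC, the 2011-era $124/month savings estimate, and the LBNL resale-value average); the figures are the article's, and any rounding is mine.

```python
# Reproduce the article's Connecticut estimates for a purchased 7 kW system.
system_kw = 7
cost_per_watt = 3.50          # article's assumed CT installed price ($/W)
itc_rate = 0.30               # federal investment tax credit
monthly_savings = 124         # article's estimated bill savings ($/month, 2011 data)
home_value_per_kw = 5911      # LBNL-study average resale premium ($/kW)

gross_cost = system_kw * 1000 * cost_per_watt                      # $24,500
itc_credit = gross_cost * itc_rate                                 # $7,350
first_year_cost = gross_cost - itc_credit - monthly_savings * 12   # $15,662

resale_premium = home_value_per_kw * system_kw                     # $41,377
gain_if_sold = resale_premium - first_year_cost                    # $25,715

print(f"First-year net cost:      ${first_year_cost:,.0f}")
print(f"Estimated resale premium: ${resale_premium:,.0f}")
print(f"Gain if home sold:        ${gain_if_sold:,.0f}")
```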
• Eligible Renewable/Other Technologies: Solar Thermal Electric, Solar Photovoltaics, Wind (All), Biomass, Hydroelectric, Municipal Solid Waste, Combined Heat & Power, Fuel Cells using Non-Renewable Fuels, Landfill Gas, Tidal, Wave, Ocean Thermal, Wind (Small), Hydroelectric (Small), Fuel Cells using Renewable Fuels
• Eligible Efficiency Technologies: Heat recovery, Yes; specific technologies not identified
• Applicable Sectors: Commercial, Industrial, Local Government, Nonprofit, Residential, Schools, State Government, Federal Government, Agricultural, Multifamily Residential, Institutional
• Applicable Utilities: Investor-owned utilities
• System Capacity Limit: Standard net metering: 2 MW; Virtual net metering: 3 MW
• Aggregate Capacity Limit: No limit specified
• Net Excess Generation: Carried over as a kWh credit for one year; reimbursed to customer at the avoided cost of wholesale power at the end of the year (March 31)
• Ownership of Renewable Energy Credits: Customer owns RECs
• Meter Aggregation: Yes (virtual net metering allowed for municipal, state, or agricultural customers)
Connecticut Light and Power Company (CL&P) and United Illuminating Company (UI), CT’s two investor-owned utilities, are required to provide net metering to customers that generate electricity using “Class I” renewable energy resources, which include solar and other renewables, up to two megawatts (MW) in capacity, with no stated limit on the aggregate capacity of net-metered systems in the utility’s service territory. Any net excess generation (NEG) during a monthly billing cycle is carried over to the following month as a kWh credit for one year. On March 31 (the year’s end), the utility must pay the customer for all remaining NEG at the “avoided cost of wholesale power.”
Virtual Net Metering — Virtual net metering is allowed for state, municipal, and agricultural customers generating electricity from either Class I or Class III* resources from facilities of up to 3 MW. The system may be owned by the customer, leased by the customers, or owned by a third-party on a customer’s property. It may supply electricity for the municipal host customer and additional beneficial accounts so long as beneficial accounts and the host account are within the same electric distribution company’s service territory. Municipal or state customers may host up to 5 additional municipal or state accounts, and 5 additional non-state or -municipal buildings if those accounts are “critical facilities”** and are connected to a microgrid. Agricultural customers may host up to 10 beneficial accounts so long as those accounts either use electricity for agricultural purposes, or are municipal, or non-commercial critical facilities. Additionally, all virtual net metering hosts may aggregate all meters owned by the customer host. When host customers produce more electricity than is consumed, excess electricity is credited to the beneficial accounts for the next billing cycle at the retail rate “against the generation service component and a declining percentage of the transmission and distribution charges that are billed to the beneficial accounts.” The declining percentages are as follows:
• First year of commercial operation: 80% of transmission and distribution charges
• Second year of commercial operation: 60% of transmission and distribution charges
• Third year of commercial operation and after: 40% of transmission and distribution charges
Excess credits are rolled over every month for one year.
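To make the declining-percentage structure concrete, here is a minimal sketch of how a month's excess generation might translate into a virtual-net-metering credit. The $/kWh rates and the 500 kWh figure are placeholders for illustration, not actual CL&P or UI tariff values.

```python
# Sketch of the virtual-net-metering credit structure described above.
# Rates and excess-kWh figures are placeholders, not actual CL&P/UI tariff values.
def vnm_credit(excess_kwh, generation_rate, td_rate, year_of_operation):
    """Credit = full generation component + declining share of T&D charges."""
    td_share = {1: 0.80, 2: 0.60}.get(year_of_operation, 0.40)   # 80% / 60% / 40%+
    return excess_kwh * (generation_rate + td_share * td_rate)

# Example: 500 kWh of excess in a month, placeholder $/kWh rates.
for year in (1, 2, 3):
    credit = vnm_credit(500, generation_rate=0.10, td_rate=0.08, year_of_operation=year)
    print(f"Year {year}: ${credit:.2f} credit")
```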
The electric distribution company must compensate municipal or state host customers for excess virtual net metering credits remaining, if any, at the end of the calendar year “at the retail generation rate and the above declining percentage of transmission and distribution charges.” Note: HB 5496, enacted in June 2016, requires that virtual net metering facilities be operational within 18 months from the date the CT Department of Energy and Environmental Protection (DEEP) issues the final permit.
Shared Clean Energy Pilot Program — Enacted in June 2015, SB 928 requires DEEP to create a two-year pilot “shared energy facility program,” also known as community net metering. This program allows individuals to subscribe and receive credits for electricity generated from an off-site solar or other renewable energy facility. Subscribers must be located within the same electric distribution company service territory as the facility. To be eligible, a “Shared Clean Energy Facility” must:
• Have a nameplate capacity of 4 MW or less
• Have at least two subscribers
DEEP was required to issue a Request for Proposal (RFP) to develop shared clean energy facilities by 1 July 2016. The success of the pilot program will be studied for two years, with full analysis and recommendations to be issued by 1 July 2018. Shared clean energy facilities may be owned by for-profit or not-for-profit organizations, and third-party entities may be contracted with to build, own, or operate these facilities. The pilot program total capacity is capped at 6 MW, with 2 MW allocated for the United Illuminating service area and 4 MW for the Eversource service area.
*Class III resources are defined as “the electricity output from combined heat and power systems with an operating efficiency level of no less than fifty per cent that are part of customer-side distributed resources developed at commercial and industrial facilities in this state on or after January 1, 2006, a waste heat recovery system installed on or after April 1, 2007, that produces electrical or thermal energy by capturing preexisting waste heat or pressure from industrial or commercial processes, or the electricity savings created in this state from conservation and load management programs begun on or after January 1, 2006.”
**Critical Facilities are defined as “a hospital, police station, fire station, water treatment plant, sewage treatment plant, public shelter, correctional facility, production and transmission facilities of a television or radio station, commercial area of a municipality, municipal center, or any other area identified by the Department of Energy and Environmental Protection as critical.”
• Sales and Use Tax Exemption for Energy-Efficient Products – Last updated: 12/12/2014
• Sales and Use Tax Exemption for Solar and Geothermal Systems – Last updated: 12/12/2014
• The United Illuminating Company – ZREC and LREC Long Term Contracts – Last updated: 02/23/2015
In September 2015, the CT DEEP opened a proceeding to advance energy storage systems and other electric grid improvements.
This proceeding supports Section 103 of Public Act 15-5, passed in June 2015, requiring Connecticut electric distribution companies to submit proposals to DEEP for “demonstration projects to build, own, or operate grid-side system enhancements, such as energy storage systems.” As an initial step, the proceeding offered opportunities for public comment and a stakeholder workshop, as well as establishing priority goals and objectives for the Distributed Energy Resource (DER) Integration Demonstration Projects. The DER Integration Demonstration Projects program is “designed to find best practices on how different grid-side system enhancements can be reliably and efficiently integrated into the grid in a manner that is cost-effective for all ratepayers.” All projects are subject to approval by DEEP first, and then by the Connecticut Public Utilities Regulatory Authority (PURA). Proposals submitted to DEEP are required to:
• Demonstrate and investigate how distributed energy resources (DER) can be reliably and efficiently integrated into the electric distribution system
• Maximize the value provided to the electric grid, electric ratepayers, and the public from distributed energy resources
• Complement and enhance the programs, products, and incentives available through the Connecticut Green Bank, the Connecticut Energy Efficiency Fund, and other similar programs
In January 2017, Connecticut’s state legislative committee with jurisdiction over energy will receive DEEP’s evaluation of all approved proposals. Energy storage systems are expected to be the most likely beneficiaries of the program. Distribution system automation and controls are also expected to receive attention, as well as advanced distribution system metering, communication, and system enhancements enabling two-way power flow. One example of an energy storage system already on the ground in Connecticut comes from a ground-breaking project teaming SolarCity with Connecticut Municipal Electric Energy Cooperative (CMEEC) and Brightfields Development, LLC. SolarCity’s largest such installation in the Northeast, this project provides 13 MW of solar and 6 MWh of energy storage in southern Connecticut. CMEEC will use the solar and stored energy to serve its municipal members and wholesale electric customers. Seven solar project sites in total will comprise almost 57,000 solar panels. All of the lands used for the project sites were considered under-utilized or were no longer being used for farming. One of the sites was a former landfill. The energy storage system is SolarCity’s first capacity services project. SolarCity is providing the engineering, installation, and maintenance of the battery storage system, which will be controlled by CMEEC via SolarCity’s proprietary GridLogic control platform. GridLogic allows CMEEC to remotely and instantaneously dispatch stored solar energy to optimize performance and manage load spikes. CMEEC CEO Drew Rankin explains, “CMEEC’s focus is on energizing customers and community prosperity, so we are excited to help create clean, reliable and locally based power resources.” Rankin continues, “These systems utilize leading edge technology that is both powerful and fiscally responsible. With these solar and storage projects, we are pleased to join SolarCity at the forefront of energy innovation.”
• Somers Solar Center — completed in 2013 by developer HelioSage Energy. This photovoltaic project has a power output capacity of 5 MW — enough to power over 700 Connecticut homes.
• East Lyme Solar Park — At 5 MW, East Lyme is among the largest solar installations in Connecticut. Completed in 2014 by Greenskies Renewable Energy, this photovoltaic project has enough electric capacity to power more than 700 homes.
• Walmart, Kohl’s, Staples, Target, and Walgreens — These are among the larger retailers in Connecticut that have gone solar.
• IKEA New Haven — With 940 kW, IKEA’s New Haven installation is one of the largest corporate photovoltaic systems in the state.
With over 147 solar companies supporting Connecticut’s solar industry, a wide variety of products and services are locally available. A recent survey by the Solar Foundation ranked Connecticut 18th in the nation for solar industry jobs per capita. Over 50% of these jobs are held by installers and 29% are sales and distribution positions. Since Connecticut is a fast-growing market, it is not surprising to find many of the top national solar companies working in CT. A pioneering program called Solarize Connecticut was “sparked” by the Connecticut Green Bank and is administered by SmartPower. Town by town, community by community, Connecticut is going solar in a well-organized fashion. Currently, 58 communities in the state have successfully completed Solarize campaigns. Over 2,000 homeowners have already installed over 17 MW of solar on their rooftops, and Solarize Connecticut campaigns are enjoying ongoing success throughout the state. In addition to federal and state incentives to lower the cost of going solar, Solarize Connecticut offers significant discounts due to economies of scale. The process starts in a community through the help of town officials and local program management volunteers from Energy Task Forces. SmartPower issues a Request for Proposals to installers interested in serving the town participating in the Solarize campaign. SmartPower and a town committee interview the installers and rank their proposals based on such factors as prices, equipment quality, quality of service offered, and overall installer track record. To take advantage of Solarize Connecticut’s special pricing, residents must agree to work with the installers competitively selected by the local committee. Residents are free to work with any other installer to go solar on their own, of course, but many folks feel comfortable working under the umbrella of the Solarize Connecticut program. Homeowners have access to $0 down financing through Dividend Solar, and the Smart-E Loan offers 10- and 12-year terms as well, making financing a solar installation very convenient in Connecticut. Check out this quick YouTube clip showcasing the launch of the Solarize Stamford Program:
Connecticut Public Utilities Regulatory Authority – Learn about the governing body that regulates the electricity rates and services of Connecticut public utilities
Connecticut General Assembly – Track pending legislation affecting solar energy, locate and contact individual legislators, and stay up to date on current legislative issues in Connecticut
Connecticut Clean Energy Fund: Clean Energy Finance and Investment Authority (CEFIA) – As the nation’s first full-scale clean energy finance authority, CEFIA leverages public and private funds to drive investment and scale up clean energy deployment in Connecticut
Energize Connecticut – A cost-saving clean energy initiative of the Energy Efficiency Fund, the Connecticut Green Bank, the State, and local electric and gas utilities, with funding from a charge on customer energy bills
Solar Energy Association of Connecticut, Inc. – A nonprofit organization promoting the development of renewable energy and environmentally benign technologies in Connecticut
DSIRE Incentives Database – Connecticut – Search a public clearinghouse for specific solar energy incentives in Connecticut and across the United States


A new way to look at MOFs: International study challenges prevailing view on how metal organic frameworks store gases
Abstract: An international collaboration of scientists led by Omar Yaghi, a renowned chemist with the Lawrence Berkeley National Laboratory (Berkeley Lab), has developed a technique they dubbed "gas adsorption crystallography" that provides a new way to study the process by which metal-organic frameworks (MOFs) - 3D crystals with extraordinarily large internal surface areas - are able to store immense volumes of gases such as carbon dioxide, hydrogen and methane. This new look at MOFs led to a discovery that holds promise for the improved design of MOFs tailored specifically for carbon capture, or for the use of hydrogen and natural gas (methane) fuels. "Up to this point we have been shooting in the dark in our designing of MOFs without really understanding the fundamental reasons for why one MOF is better than another," says Yaghi. "Our new study expands our view and thinking about MOFs by introducing gas-gas interactions and their organization into superlattices that are a major factor in achieving high storage capacity for gases." Yaghi, who invented MOFs in the early 1990s while at Arizona State University, is now a faculty scientist with Berkeley Lab's Materials Sciences Division and the University of California (UC) Berkeley Chemistry Department, where he also serves as co-director of the Kavli Energy NanoScience Institute (Kavli-ENSI). For this latest study, Yaghi and Osamu Terasaki, a chemist with Stockholm University, along with collaborators from institutes in the United States, China, South Korea and Saudi Arabia, interfaced a gas adsorption apparatus with a form of X-ray crystallography called in-situ small angle X-ray scattering (SAXS). The result was a gas adsorption crystallography technique that uncovered evidence of cooperative interactions between gas molecules within a MOF. "These cooperative gas-gas interactions lead to highly organized behavior, which results in the formation of gas aggregates about 40 nanometers in size," Yaghi says. "The aggregates are arranged in orderly superlattice structures, which is in stark contrast to the prevailing view that the adsorption of gas molecules by MOFs occurs stochastically." Yaghi and Terasaki are the corresponding authors of a paper describing this study that has been published in Nature. The paper is titled "Extra adsorption and adsorbate superlattice formation in metal-organic frameworks." The lead authors are Hae Sung Cho, Hexiang Deng and Keiichi Miyasaka. Other co-authors are Zhiyue Dong, Minhyung Cho, Alexander Neimark and Jeung Ku Kang. Since Yaghi's original invention, thousands of different types of MOFs have been created. A typical MOF consists of a metal oxide center surrounded by organic molecules that form a highly porous three-dimensional crystal framework. The variations on this basic structure are virtually limitless and can be customized so that a MOF's pores adsorb specific gas molecules, making MOFs potentially ideal gas storage vessels. "One gram of MOF has a surface area of up to 10,000 square meters onto which it is possible to compact gas molecules into MOF pores like so many bees on a honeycomb without the high pressures and low temperatures usually required for compressed gas storage," Yaghi says. The selectivity and uptake capacity of a MOF are determined by the nature of the gas molecule being adsorbed and its interactions with the MOF's constituents.
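For readers unfamiliar with how uptake capacity is usually summarized: it is conventionally reported as an adsorption isotherm, uptake versus pressure at fixed temperature. The sketch below fits a textbook Langmuir model to made-up data purely to illustrate that idea; it is not the gas adsorption crystallography technique described in this article, and the numbers are invented.

```python
# Illustrative only: fit a textbook Langmuir isotherm, n(P) = n_max * K*P / (1 + K*P),
# to made-up uptake data. A generic characterization of gas uptake, not the
# gas adsorption crystallography method described in the article.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(P, n_max, K):
    return n_max * K * P / (1.0 + K * P)

# Fabricated pressure (bar) and uptake (mmol/g) points for demonstration.
P = np.array([0.1, 0.3, 0.6, 1.0, 2.0, 5.0, 10.0])
uptake = np.array([1.8, 4.3, 6.9, 9.0, 12.1, 15.0, 16.4])

(n_max, K), _ = curve_fit(langmuir, P, uptake, p0=[20.0, 0.5])
print(f"Fitted saturation uptake: {n_max:.1f} mmol/g, affinity K: {K:.2f} 1/bar")
```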
While the interactions of gas molecules with the internal surface of a MOF and among themselves within individual pores have been extensively studied, the gas-gas interactions across a MOF's pore walls have not been explored until now. With their SAXS-based gas adsorption crystallography technique, Yaghi, Terasaki and their collaborators discovered that local strain in the MOF induced by pore-filling can give rise to collective and long-range gas-gas interactions, resulting in the formation of superlattices that extend over several pores. "We were able to track and map the distribution and ordering of adsorbate molecules in five members of the mesoporous MOF-74 series along entire adsorption-desorption isotherms," Yaghi says. "In all cases, we found that the capillary condensation that fills the pores gives rise to the formation of extra-adsorption domains that span several neighboring pores and have a higher adsorbate density than non-domain pores." The next step, Yaghi says, will be to apply this new gas adsorption crystallography technique to other porous molecular systems that can serve as gas storage vessels, such as covalent organic frameworks (COFs) and zeolitic imidazolate frameworks (ZIFs). "We want to generate a comprehensive view of how various gases interact collectively within porous materials interior," says Yaghi. "We will then feed this data into computer models to improve the theory of gas adsorption."
About Berkeley Lab
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel Prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit www.lbl.gov. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit the Office of Science website at science.energy.gov.


New graphene-based system could help us see electrical signaling in heart and nerve cells: Berkeley-Stanford team creates a system to visualize faint electric fields
Abstract: Scientists have enlisted the exotic properties of graphene, a one-atom-thick layer of carbon, to function like the film of an incredibly sensitive camera system in visually mapping tiny electric fields in a liquid. Researchers hope the new method will allow more extensive and precise imaging of the electrical signaling networks in our hearts and brains. The ability to visually depict the strength and motion of very faint electrical fields could also aid in the development of so-called lab-on-a-chip devices that use very small quantities of fluids on a microchip-like platform to diagnose disease or aid in drug development, for example, or that automate a range of other biological and chemical analyses. The setup could potentially be adapted for sensing or trapping specific chemicals, too, and for studies of light-based electronics (a field known as optoelectronics).
A new way to visualize electric fields
"This was a completely new, innovative idea that graphene could be used as a material to sense electrical fields in a liquid," said Jason Horng, a co-lead author of a study published Dec. 16 in Nature Communications that details the first demonstration of this graphene-based imaging system. Horng is affiliated with the Kavli Energy NanoSciences Institute, a joint institute at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley, and is a postdoctoral researcher at UC Berkeley. The idea sprang from a conversation between Feng Wang, a faculty scientist in Berkeley Lab's Materials Sciences Division whose research focuses on the control of light-matter interactions at the nanoscale, and Bianxiao Cui, who leads a research team at Stanford University that specializes in the study of nerve-cell signaling. Wang is also a UC Berkeley associate professor of physics, and Cui is an associate professor of chemistry at Stanford University. "The basic concept was how graphene could be used as a very general and scalable method for resolving very small changes in the magnitude, position, and timing pattern of a local electric field, such as the electrical impulses produced by a single nerve cell," said Halleh B. Balch, a co-lead author of the work. Balch is also affiliated with the Kavli Energy NanoSciences Institute and is a physics PhD student at UC Berkeley. "One of the outstanding problems in studying a large network of cells is understanding how information propagates between them," Balch said. Other techniques have been developed to measure electrical signals from small arrays of cells, though these methods can be difficult to scale up to larger arrays and in some cases cannot trace individual electrical impulses to a specific cell. Also, Cui said, "This new method does not perturb cells in any way, which is fundamentally different from existing methods that use either genetic or chemical modifications of the cell membrane." The new platform should more easily permit single-cell measurements of electrical impulses traveling across networks containing 100 or more living cells, researchers said.
Tapping graphene's light-absorbing properties Graphene, which is composed of a honeycomb arrangement of carbon atoms, is the focus of intense R&D because of its incredible strength, ability to very efficiently conduct electricity, high degree of chemical stability, the speed at which electrons can move across its surface, and other exotic properties. Some of this research is focused on the use of graphene as a component in computer circuits and display screens, in drug delivery systems, and in solar cells and batteries. In the latest study, researchers first used infrared light produced at Berkeley Lab's Advanced Light Source to understand the effects of an electric field on graphene's absorption of infrared light. In the experiment, they aimed an infrared laser through a prism to a thin layer called a waveguide. The waveguide was designed to precisely match graphene's light-absorbing properties so that all of the light was absorbed along the graphene layer in the absence of an electric field. Researchers then fired tiny electrical pulses in a liquid solution above the graphene layer that very slightly disrupted the graphene layer's light absorption, allowing some light to escape in a way that carried a precise signature of the electrical field. Researchers captured a sequence of images of this escaping light in thousandths-of-a-second intervals, and these images provided a direct visualization of the electrical field's strength and location along the surface of the graphene. Millionths-of-a-volt sensitivity The new imaging platform -- dubbed CAGE for "Critically coupled waveguide-Amplified Graphene Electric field imaging device" -- proved sensitive to voltages of a few microvolts (millionths of a volt). This will make it ultrasensitive to the electric fields between cells in networks of heart cells and nerve cells, which can range from tens of microvolts to a few millivolts (thousandths of a volt). Researchers found that they could pinpoint an electric field's location along the graphene sheet's surface down to tens of microns (millionths of a meter), and capture its fading strength in a sequence of time steps separated by as few as five milliseconds, or thousandths of a second. In one sequence, researchers detailed the position and dissipation, or fade, of a local electric field generated by a 10-thousandths-of-a-volt pulse over a period of about 240 milliseconds, with sensitivity down to about 100 millionths-of-a-volt. Next up: living heart cells Balch said that there are already plans to test the platforms with living cells. "We are working with collaborators to test this with real heart cells," she said. "There are several potential applications for this research in heart health and drug screening." There is also potential to use other atomically thin materials besides graphene in the imaging setup, she said. "The kind of elegance behind this system comes from its generality," Balch said. "It can be sensitive to anything that carries charge." ### The research team included participants from Berkeley Lab, UC Berkeley, and Stanford University. The work was supported by the U.S. Department of Energy Office of Science, the National Science Foundation, the David and Lucile Packard Foundation, and the Stanford University Bio-X Graduate Fellowship Program. The Advanced Light Source is a DOE Office of Science User Facility. 
About Lawrence Berkeley National Laboratory Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel Prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit http://www.lbl.gov. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


Diz-Munoz A.,University of California at Berkeley | Diz-Munoz A.,University of California at San Francisco | Fletcher D.A.,University of California at Berkeley | Fletcher D.A.,Lawrence Berkeley National Laboratory | Weiner O.D.,University of California at San Francisco
Trends in Cell Biology | Year: 2013

Many cell phenomena that involve shape changes are affected by the intrinsic deformability of the plasma membrane (PM). Far from being a passive participant, the PM is now known to physically, as well as biochemically, influence cell processes ranging from vesicle trafficking to actin assembly. Here we review current understanding of how changes in PM tension regulate cell shape and movement, as well as how cells sense PM tension. © 2012 Elsevier Ltd.


Larabell C.A.,University of California at San Francisco | Larabell C.A.,Lawrence Berkeley National Laboratory | Nugent K.A.,University of Melbourne
Current Opinion in Structural Biology | Year: 2010

X-ray imaging of biological samples is progressing rapidly. In this paper we review the progress to date in high-resolution imaging of cellular architecture. In particular we survey the progress in soft X-ray tomography and argue that the field is coming of age and that important biological insights are starting to emerge. We then review the new ideas based on coherent diffraction. These methods are at a much earlier stage of development but, as they eliminate the need for X-ray optics, have the capacity to provide substantially better spatial resolution than zone plate-based methods. © 2010 Elsevier Ltd.


Martin L.W.,University of Illinois at Urbana - Champaign | Ramesh R.,Lawrence Berkeley National Laboratory
Acta Materialia | Year: 2012

We review recent developments and advances in multiferroic and magnetoelectric heterostructures. Driven by the promise of new materials functionality (i.e. electric field control of ferromagnetism), extensive on-going research is focused on the search for and characterization of new multiferroic materials. In this review we develop a comprehensive overview of multiferroic materials, including details on the nature of order parameters and coupling in these materials, the scarcity of such materials in single phase form, routes to create and control the properties of these materials, and we finish by investigating such effects in a number of model materials and heterostructures. This includes an in-depth discussion of BiFeO3, an investigation of recent advances in magnetoelectric materials, and an introduction to a variety of approaches by which one can achieve novel materials functionality. © 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.


Frauendorf S.,University of Notre Dame | Macchiavelli A.O.,Lawrence Berkeley National Laboratory
Progress in Particle and Nuclear Physics | Year: 2014

The role of neutron-proton pairing correlations on the structure of nuclei along the N=Z line is reviewed. Particular emphasis is placed on the competition between isovector (T=1) and isoscalar (T=0) pair fields. The expected properties of these systems, in terms of pairing collective motion, are assessed by different theoretical frameworks including schematic models, realistic Shell Model and mean field approaches. The results are contrasted with experimental data with the goal of establishing clear signals for the existence of neutron-proton (np) condensates. We will show that there is clear evidence for an isovector np condensate as expected from isospin invariance. However, and contrary to early expectations, a condensate of deuteron-like pairs appears quite elusive and pairing collectivity in the T=0 channel may only show in the form of a phonon. Arguments are presented for the use of direct reactions, adding or removing an np pair, as the most promising tool to provide a definite answer to this intriguing question. © 2014 Elsevier B.V. All rights reserved.


Bouchard K.E.,University of California at San Francisco | Bouchard K.E.,Lawrence Berkeley National Laboratory | Chang E.F.,University of California at San Francisco
Journal of Neuroscience | Year: 2014

Speech production requires the precise control of vocal tract movements to generate individual speech sounds (phonemes) which, in turn, are rapidly organized into complex sequences. Multiple productions of the same phoneme can exhibit substantial variability, some of which is inherent to control of the vocal tract and its biomechanics, and some of which reflects the contextual effects of surrounding phonemes (“coarticulation”). The role of the CNS in these aspects of speech motor control is not well understood. To address these issues, we recorded multielectrode cortical activity directly from human ventral sensory-motor cortex (vSMC) during the production of consonant-vowel syllables. We analyzed the relationship between the acoustic parameters of vowels (pitch and formants) and cortical activity on a single-trial level. We found that vSMC activity robustly predicted acoustic parameters across vowel categories (up to 80% of variance), as well as different renditions of the same vowel (up to 25% of variance). Furthermore, we observed significant contextual effects on vSMC representations of produced phonemes that suggest active control of coarticulation: vSMC representations for vowels were biased toward the representations of the preceding consonant, and conversely, representations for consonants were biased toward upcoming vowels. These results reveal that vSMC activity for phonemes is not invariant and provide insight into the cortical mechanisms of coarticulation. © 2014 the authors.


Bzdak A.,Brookhaven National Laboratory | Koch V.,Lawrence Berkeley National Laboratory
Physical Review C - Nuclear Physics | Year: 2012

We show that the effect of finite acceptance drastically influences the net-baryon and net-charge cumulants, which are believed to be sensitive probes of the QCD phase diagram. We derive general formulas that relate the true cumulants K_n which reflect the full dynamics of the system with the actually measured cumulants c_n for a given acceptance, modeled by a binomial probability parameter p. We find that this relation involves additional moments which cannot be expressed by cumulants and should be measured in order to extract any potential information about the QCD critical point. We demonstrate that for a wide range of the true cumulant ratios K_n/K_m the measured ratios c_n/c_m quickly converge if p<1/2, which makes the interpretation of the data very challenging, especially in case of the net-proton cumulants. Our study further suggests that the measurement of net-charge cumulants may be more advantageous for the investigation of the QCD phase diagram. © 2012 American Physical Society.
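To make the acceptance effect concrete, here is a minimal Monte Carlo sketch (an illustration added for this compilation, not the authors' code or their analytic formulas): two toy multiplicity distributions with very different true K_2/K_1 are passed through a binomial acceptance with probability p, and the measured ratio c_2/c_1 is compared. The distributions and parameters are arbitrary assumptions, chosen only to show how the measured ratios drift toward the Poisson value and lose sensitivity to the underlying dynamics as p decreases.

```python
import numpy as np

rng = np.random.default_rng(0)

def cumulants_12(sample):
    """First two cumulants of a sample: K1 = mean, K2 = variance."""
    return np.mean(sample), np.var(sample)

def binomial_acceptance(n, p):
    """Each of the n produced particles in an event is detected with probability p."""
    return rng.binomial(n, p)

n_events = 200_000
broad = rng.negative_binomial(5, 0.2, n_events)   # true K2/K1 = 5 (broader than Poisson)
poisson = rng.poisson(20.0, n_events)             # true K2/K1 = 1

for p in (1.0, 0.8, 0.5, 0.2):
    k1_b, k2_b = cumulants_12(binomial_acceptance(broad, p))
    k1_p, k2_p = cumulants_12(binomial_acceptance(poisson, p))
    print(f"p = {p:.1f}:  c2/c1 (broad) = {k2_b / k1_b:4.2f},  c2/c1 (Poisson) = {k2_p / k1_p:4.2f}")
```

For the broad distribution the measured ratio falls from 5 toward the Poisson value of 1 as p shrinks, which is the qualitative loss of information the abstract describes for small acceptance.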


Schneidman-Duhovny D.,University of California at San Francisco | Hammel M.,Lawrence Berkeley National Laboratory | Sali A.,University of California at San Francisco
Nucleic Acids Research | Year: 2010

Small angle X-ray scattering (SAXS) is an increasingly common technique for low-resolution structural characterization of molecules in solution. A SAXS experiment determines the scattering intensity of a molecule as a function of spatial frequency, termed the SAXS profile. SAXS profiles can contribute to many applications, such as comparing a conformation in solution with the corresponding X-ray structure, modeling a flexible or multi-modular protein, and assembling a macromolecular complex from its subunits. These applications require rapid computation of a SAXS profile from a molecular structure. FoXS (Fast X-Ray Scattering) is a rapid method for computing a SAXS profile of a given structure and for matching of the computed and experimental profiles. Here, we describe the interface and capabilities of the FoXS web server (http://salilab.org/foxs). © The Author(s) 2010. Published by Oxford University Press.
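For readers unfamiliar with how a scattering profile is obtained from coordinates, the sketch below evaluates the classical Debye formula for a toy set of identical point scatterers. This is a schematic illustration added for this compilation, not the FoXS algorithm or its web interface; FoXS additionally handles atomic form factors, the hydration layer, and fitting against experimental profiles.

```python
import numpy as np

def debye_profile(coords, q_values, form_factor=1.0):
    """Debye formula for identical point scatterers:
    I(q) = sum_ij f^2 sin(q r_ij) / (q r_ij), with the i == j terms contributing f^2."""
    coords = np.asarray(coords, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diff ** 2).sum(axis=-1))          # pairwise distances r_ij
    profile = []
    for q in q_values:
        # np.sinc(x) = sin(pi x)/(pi x), so divide the argument by pi to get sin(x)/x
        profile.append((form_factor ** 2 * np.sinc(q * r / np.pi)).sum())
    return np.array(profile)

# Toy "structure": 40 scatterers on a short helix; q in a typical SAXS range (1/Angstrom)
t = np.linspace(0.0, 4.0 * np.pi, 40)
toy_coords = np.c_[10.0 * np.cos(t), 10.0 * np.sin(t), 2.0 * t]
q = np.linspace(0.01, 0.5, 50)
print(debye_profile(toy_coords, q)[:5])
```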


Dobrowolska M.,University of Notre Dame | Tivakornsasithorn K.,University of Notre Dame | Liu X.,University of Notre Dame | Furdyna J.K.,University of Notre Dame | And 3 more authors.
Nature Materials | Year: 2012

The ferromagnetic semiconductor (Ga,Mn)As has emerged as the most studied material for prototype applications in semiconductor spintronics. Because ferromagnetism in (Ga,Mn)As is hole-mediated, the nature of the hole states has direct and crucial bearing on its Curie temperature T_C. It is vigorously debated, however, whether holes in (Ga,Mn)As reside in the valence band or in an impurity band. Here we combine results of channelling experiments, which measure the concentrations both of Mn ions and of holes relevant to the ferromagnetic order, with magnetization, transport, and magneto-optical data to address this issue. Taken together, these measurements provide strong evidence that it is the location of the Fermi level within the impurity band that determines T_C through determining the degree of hole localization. This finding differs drastically from the often accepted view that T_C is controlled by valence band holes, thus opening new avenues for achieving higher values of T_C. © 2012 Macmillan Publishers Limited. All rights reserved.


Minor D.L.,University of California at San Francisco | Minor D.L.,Lawrence Berkeley National Laboratory
Cell | Year: 2016

Ligand binding usually moves the target protein from an ensemble of inactive states to a well-defined active conformation. Matthies et al. flip this scheme around, finding that, for the magnesium channel CorA, loss of ligand binding induces an ensemble of conformations that turn the channel on. © 2016 Elsevier Inc.


Bagriantsev S.N.,University of California at San Francisco | Clark K.A.,University of California at San Francisco | Minor D.L.,University of California at San Francisco | Minor D.L.,Lawrence Berkeley National Laboratory
EMBO Journal | Year: 2012

K2P2.1 (TREK-1) is a polymodal two-pore domain leak potassium channel that responds to external pH, GPCR-mediated phosphorylation signals, and temperature through the action of distinct sensors within the channel. How the various intracellular and extracellular sensory elements control channel function remains unresolved. Here, we show that the K2P2.1 (TREK-1) intracellular C-terminal tail (Ct), a major sensory element of the channel, perceives metabolic and thermal commands and relays them to the extracellular C-type gate through transmembrane helix M4 and pore helix 1. By decoupling Ct from the pore-forming core, we further demonstrate that Ct is the primary heat-sensing element of the channel, whereas, in contrast, the pore domain lacks robust temperature sensitivity. Together, our findings outline a mechanism for signal transduction within K2P2.1 (TREK-1) in which there is a clear crosstalk between the C-type gate and intracellular Ct domain. In addition, our findings support the general notion of the existence of modular temperature-sensing domains in temperature-sensitive ion channels. This marked distinction between gating and sensory elements suggests a general design principle that may underlie the function of a variety of temperature-sensitive channels. © 2012 European Molecular Biology Organization.


Browne E.,Lawrence Berkeley National Laboratory | Tuli J.K.,Brookhaven National Laboratory
Nuclear Data Sheets | Year: 2012

The evaluators present in this publication spectroscopic data and level schemes from radioactive decay and nuclear reactions for all isobars with mass number A=230. This evaluation includes the first experimental evidence of 230Am, produced through the 197Au(40Ar,3n)234Bk (α decay to 230Am) reaction, E(40Ar)=188.4 MeV (2003MoZX). © 2012 Elsevier Inc.
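As an aside added for readers unfamiliar with the reaction notation, the mass- and charge-number bookkeeping of the cited production route can be checked in a few lines; this is purely illustrative and not part of the evaluation.

```python
# 197Au(40Ar,3n)234Bk followed by alpha decay to 230Am
A, Z = 197 + 40, 79 + 18   # fuse 197Au (Z=79) with 40Ar (Z=18): compound nucleus A=237, Z=97 (Bk)
A, Z = A - 3, Z            # evaporate three neutrons          -> 234Bk
A, Z = A - 4, Z - 2        # alpha decay removes A=4, Z=2      -> 230, 95 (americium)
print(A, Z)                # prints: 230 95
```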


Grant
Agency: Department of Energy | Branch: | Program: STTR | Phase: Phase I | Award Amount: 224.78K | Year: 2014

Generating and screening multiple combinations of genes, enzymes, and other biological parts has become vital to next generation biofuel development and production. This currently requires large personnel costs and sizeable capital investments in robotics equipment. Commercial adoption of the synthetic biology technology will require high throughput capabilities, low cost, product reliability, and full automation with minimal human intervention. HJ Science & Technology, Inc. proposes to demonstrate the feasibility of an automated, programmable, low-cost, and compact platform capable of running rapid and complex bioengineering processes and optimization of next generation biofuel development and production. Our approach leverages a Technology Transfer Opportunity developed by the Joint BioEnergy Institute for microfluidic automation of combinatorial DNA assembly, cell transformation, and screening of cells. The microfluidic automation platform offers significant labor savings by simplifying and accelerating the process and forgoes the need for robotics as well as reduces costs by using much smaller volumes of reagent materials. In Phase I, we will establish the feasibility of the proposed microfluidic automation technology by engineering and screening E. coli cells for the production of fatty acid ethyl esters (FAEE). By performing all three key operations, 1) construction of a plasmid containing genes for FAEE production in E. coli, 2) subsequent transformation into E. coli cells/chromosomal integration, and 3) screening of expression products, on the same microfluidic chip in a fully automated format, we can assess the FAEE production yield as a function of the gene variant in almost real time, thereby greatly enhancing the overall efficiency of the FAEE production process. Commercial Applications and Other Benefits: In addition to biofuel, the proposed microfluidic automation technology has applications in combinatorial DNA construction and functional screening in these industries: 1) pharmaceuticals, especially protein-based drugs, 2) agriculture, 3) bio-based chemicals, 4) biotechnology, 5) enzymes, and 6) DNA synthesis. The proposed technology will also greatly benefit the life-science instrumentation and microfluidic device industries.


Grosse Perdekamp M.,University of Illinois at Urbana - Champaign | Yuan F.,Lawrence Berkeley National Laboratory
Annual Review of Nuclear and Particle Science | Year: 2015

We review the current status and future perspectives of theory and experiments of transverse spin phenomena in high-energy scattering processes off nucleon targets and related issues in nucleon structure and QCD. Systematic exploration of transverse spin effects requires measurements in polarized deep-inelastic scattering, polarized pp collisions, and e+e- annihilations. Sophisticated QCD-based techniques are also needed to analyze the experimental data sets. Copyright ©2015 by Annual Reviews. All rights reserved.


Fletcher D.A.,University of California at Berkeley | Fletcher D.A.,Lawrence Berkeley National Laboratory | Mullins R.D.,University of California at San Francisco
Nature | Year: 2010

The ability of a eukaryotic cell to resist deformation, to transport intracellular cargo and to change shape during movement depends on the cytoskeleton, an interconnected network of filamentous polymers and regulatory proteins. Recent work has demonstrated that both internal and external physical forces can act through the cytoskeleton to affect local mechanical properties and cellular behaviour. Attention is now focused on how cytoskeletal networks generate, transmit and respond to mechanical signals over both short and long timescales. An important insight emerging from this work is that long-lived cytoskeletal structures may act as epigenetic determinants of cell shape, function and fate. © 2010 Macmillan Publishers Limited. All rights reserved.


Patent
Clemson University, Pacific Northwest National Laboratory and Lawrence Berkeley National Laboratory | Date: 2014-02-04

A liquid sampling, atmospheric pressure, glow discharge (LS-APGD) device as well as systems that incorporate the device and methods for using the device and systems are described. The LS-APGD includes a hollow capillary for delivering an electrolyte solution to a glow discharge space. The device also includes a counter electrode in the form of a second hollow capillary that can deliver the analyte into the glow discharge space. A voltage across the electrolyte solution and the counter electrode creates the microplasma within the glow discharge space that interacts with the analyte to move it to a higher energy state (vaporization, excitation, and/or ionization of the analyte).


News Article | September 1, 2016
Site: www.greencarcongress.com

Researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have directly probed the solid/liquid interface of the electrochemical double layer (EDL) using a novel X-ray toolkit. The X-ray tools and techniques could be extended, the researchers say, to provide new insight about battery performance and corrosion, a wide range of chemical reactions, and even biological and environmental processes that rely on similar chemistry. Originally conceived by Hermann von Helmholtz in the 19th century, the EDL is a key concept in the modern electrochemistry of electrified interfaces. The properties of the interface formed by a charged electrode surface immersed in an electrolyte govern the charge transfer processes through the interface itself, thus influencing the electrochemical responses of the electrode/electrolyte system. These concepts and models together serve as the foundation of modern electrochemistry, the researchers noted in an open-access paper describing the work published in Nature Communications. The researchers used ambient-pressure X-ray photoelectron spectroscopy (AP-XPS) to study the EDL. AP-XPS is a technique available from two of the beamlines at the Advanced Light Source at the Lab. X-ray photoelectron spectroscopy (XPS) is a powerful and versatile surface characterization technique that can provide quantitative information about elemental composition and chemical specificity. However, conventional XPS measurements require ultra-high vacuum (UHV) conditions to avoid electron scattering with gas molecules as well as surface contamination. To apply XPS to liquid and gas phases, researchers developed the ambient pressure XPS (AP-XPS) technique, enabling the use of a laboratory-based X-ray source in near ambient pressure conditions. In 2015, Berkeley Lab researchers used AP-XPS to explore the electrochemical oxidation of the Pt electrode at an oxygen evolution reaction (OER) potential (Axnanda et al.). A key breakthrough enabling the latest experiment was in tailoring “tender” X-rays (also used in the Axnanda study)—which have an energy range tuned in a middle ground between the typical high-energy (or “hard”) and low-energy (or “soft”) X-rays used in research—to focus on chemistry within the double layer of a sample electrochemical system. In a battery, this electrochemical double layer describes the layer of charged atoms or molecules in the battery’s fluid that are drawn in and cling to the surface of the electrode because of their opposite electrical charge—an essential step in battery operation—and a second and closely related zone of chemical activity that is affected by the chemistry at the electrode’s surface. The complex molecular-scale dance of charge flow and transfer within a battery’s double layer is central to its function. The latest work shows changes in the electric potential in this double layer. This potential is a location-based measure of the effect of an electric field on an object—an electric charge moving toward a lightbulb sits at a higher potential, and drops to a lower potential after it powers the lightbulb. To be able to directly probe any attribute of the double layer is a significant advancement.
Essentially, we now have a direct map, showing how potential within the double layer changes based on adjustments to the electrode charge and electrolyte concentration. Independent of a model, we can directly see this—it’s literally a picture of the system at that time. This will help us with guidance of theoretical models as well as materials design and development of improved electrochemical, environmental, biological, and chemical systems. —Ethan Crumlin, a research scientist at Berkeley Lab’s ALS who led the experiment Zahid Hussain, division deputy for scientific support at the ALS, who participated in the experiment, added, “The problem of understanding solid/liquid interfaces has been known for 50-plus years—everybody has been using simulations and modeling to try to conceive of what’s at work. Solid/liquid interfaces are key for all kinds of research, from batteries to fuel cells to artificial photosynthesis.” The latest work has narrowed the list of candidate models that explain what’s at work in the double layer. In the experiment, researchers from Berkeley Lab and Shanghai studied the active chemistry of a gold electrode and a water-containing electrolyte that also contained a neutrally charged molecule called pyrazine. They used AP-XPS to measure the potential distribution for water and pyrazine molecules across the solid/liquid interface in response to changes in the electrode potential and the electrolyte concentration. The experiment demonstrated a new, direct way to precisely measure a potential drop in the stored electrical energy within the double layer’s electrolyte solution. These measurements also allowed researchers to determine associated charge properties across the interface (known as the “potential of zero charge” or “pzc”). The technique is well-suited to active chemistry, and there are plans to add new capabilities to make this technique more robust for studying finer details during the course of chemical reactions, and to bring in other complementary X-ray study techniques to add new details, Hussain said. An upgrade to the X-ray beamline where the experiment was conducted is now in progress and is expected to conclude early next year. Also, a brand new beamline that will marry this and several other X-ray capabilities for energy-related research, dubbed AMBER (Advanced Materials Beamline for Energy Research), is under construction at the ALS and is scheduled to begin operating in 2018. Researchers from the Joint Center for Artificial Photosynthesis, the Joint Center for Energy Storage Research, the Gwangju Institute of Science and Technology in the Republic of Korea, the Shanghai Institute of Microsystem and Information Technology in China, and the School of Physical Science and Technology in China participated in this research. The work was supported by the US Department of Energy Office of Science, the National Natural Science Foundation of China, and the Chinese Academy of Sciences-Shanghai Science Research Center. The Advanced Light Source is a DOE Office of Science User Facility.
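For a rough feel for the length and voltage scales at such an interface, the sketch below evaluates two textbook expressions: the Debye screening length of a dilute 1:1 aqueous electrolyte and a simple parallel-plate (Helmholtz) estimate of the potential drop across a thin charged layer. These are standard formulas with illustrative input values, added for this compilation; they are not parameters or results from the Berkeley Lab experiment.

```python
import math

# Physical constants (SI)
k_B = 1.380649e-23        # J/K
e = 1.602176634e-19       # C
eps0 = 8.8541878128e-12   # F/m
N_A = 6.02214076e23       # 1/mol

def debye_length(c_molar, eps_r=78.5, T=298.15):
    """Debye screening length (m) of a symmetric 1:1 electrolyte at concentration c (mol/L)."""
    n = c_molar * 1000.0 * N_A                      # ion pairs per m^3
    return math.sqrt(eps_r * eps0 * k_B * T / (2.0 * n * e ** 2))

def helmholtz_drop(sigma, d, eps_r=78.5):
    """Potential drop (V) across a rigid layer of thickness d (m) carrying surface charge
    density sigma (C/m^2), treated as a parallel-plate capacitor (using the bulk-water
    permittivity for such a thin layer is a crude simplification)."""
    return sigma * d / (eps_r * eps0)

for c in (0.001, 0.01, 0.1):
    print(f"{c:>6} M  ->  Debye length ~ {debye_length(c) * 1e9:.2f} nm")

print(f"0.1 C/m^2 across a 0.3 nm layer  ->  ~{helmholtz_drop(0.1, 0.3e-9):.3f} V")
```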


News Article | December 2, 2016
Site: www.eurekalert.org

David Bailey, Jonathan Borwein, Andrew Mattingly, and Glenn Wightwick will receive the 2017 AMS Levi L. Conant Prize. The four co-authors are honored for their article "The Computation of Previously Inaccessible Digits of π² and Catalan's Constant," Notices of the AMS, August 2013. "This fascinating article will delight any mathematician who has ever been intrigued by the mysteries of the digits of π," the prize citation says. The number π, known to the ancients, is familiar from many contexts, including its role in the formulas for the circumference and area of a circle. Catalan's constant, named after the mathematician Eugène Charles Catalan (1814-1894), is of a more recent vintage. This mysterious number crops up in various problems in mathematics, but it is not known whether it is transcendental, as is π---or even whether it is irrational. The article opens with a historical journey, from Archimedes to the computer age, with many interesting anecdotes along the way. It then goes on to discuss the remarkable "BBP" formula, discovered by Bailey together with Peter Borwein and Simon Plouffe. The formula allows one to calculate binary or hexadecimal digits of π beginning with the nth digit without first calculating any of the preceding n - 1 digits. The article leads readers through not only an elementary proof of the BBP formula but also the unconventional search that originally led to this formula as well as similar formulas for Catalan's constant and π². The article also provides intriguing insights into the age-old question of whether the digits of π are truly randomly distributed. Certainly the task of computing digits of π has a practical side: running paired computations of π provides a strenuous integrity test of computer hardware and software. But, as this outstanding article makes clear, it is the fundamental properties and deep mystery of π that have made it so powerfully fascinating for so long. David H. Bailey is a retired senior scientist at the Lawrence Berkeley National Laboratory, and a research associate at the University of California, Davis. Andrew Mattingly is senior information technology architect at IBM Australia. Glenn Wightwick is deputy vice-chancellor and vice-president (Research) at the University of Technology Sydney. The prize will be awarded posthumously to Jonathan Borwein, who died in August 2016, before learning that he and his co-authors would receive the prize. Borwein was professor of mathematics at the Centre for Computer Assisted Research Mathematics and its Applications at the University of Newcastle. "Borwein's creative work and beautiful expositions will be sorely missed," the prize citation says. Presented annually, the AMS Conant Prize recognizes the best expository paper published in either the Notices of the AMS or the Bulletin of the AMS in the preceding five years. The prize will be awarded Thursday, January 5, 2017, at the Joint Mathematics Meetings in Atlanta. Find out more about AMS prizes and awards at http://www.ams.org. Founded in 1888 to further mathematical research and scholarship, today the American Mathematical Society fulfills its mission through programs and services that promote mathematical research and its uses, strengthen mathematical education, and foster awareness and appreciation of mathematics and its connections to other disciplines and to everyday life.
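The digit-extraction property described above is simple to demonstrate. The sketch below implements the standard BBP hexadecimal digit-extraction scheme for π, using modular exponentiation so the series can be started at an arbitrary position n; it is an independent illustration of the published formula, not code from the prize-winning article, and ordinary double-precision arithmetic limits it to a handful of correct digits per call.

```python
def bbp_series(j, n, tail_terms=25):
    """Fractional part of sum_k 16^(n - k) / (8k + j), split at k = n."""
    s = 0.0
    for k in range(n + 1):
        denom = 8 * k + j
        s = (s + pow(16, n - k, denom) / denom) % 1.0   # modular exponentiation keeps terms small
    for k in range(n + 1, n + 1 + tail_terms):
        s += 16.0 ** (n - k) / (8 * k + j)              # rapidly vanishing tail
    return s % 1.0

def pi_hex_digits(n, count=8):
    """Hexadecimal digits of pi starting at position n + 1 after the point."""
    x = (4 * bbp_series(1, n) - 2 * bbp_series(4, n)
         - bbp_series(5, n) - bbp_series(6, n)) % 1.0
    digits = ""
    for _ in range(count):
        x *= 16
        d = int(x)
        digits += "0123456789ABCDEF"[d]
        x -= d
    return digits

print(pi_hex_digits(0))      # 243F6A88, the first hex digits of the fractional part of pi
print(pi_hex_digits(1000))   # eight hex digits starting at position 1001, computed directly
```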


News Article | November 17, 2016
Site: www.eurekalert.org

X-ray technique at Berkeley Lab provides high-res views of the structure and movement of genetic material in cell nuclei Scientists have mapped the reorganization of genetic material that takes place when a stem cell matures into a nerve cell. Detailed 3-D visualizations show an unexpected connectivity in the genetic material in a cell's nucleus, and provide a new understanding of a cell's evolving architecture. These unique 3-D reconstructions of mouse olfactory cells, which govern the sense of smell, were obtained using X-ray imaging tools at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab). The results could help us understand how patterning and reorganization of DNA-containing material called chromatin in a cell's nucleus relate to a cell's specialized function as specific genes are activated or silenced. Chromatin is compacted to form chromosomes, which pass along an organism's genetic fingerprint to newly formed cells during cell division. The results were published this week in a special edition of Cell Reports that highlights epigenetics, a field of study focused on a layer of biochemistry that affects gene expression and that is closely coupled to DNA but does not alter the genetic code. Researchers used a powerful X-ray microscope at Berkeley Lab's Advanced Light Source (ALS) to capture images of nerve cell samples at different stages of maturity as they became more specialized in their function -- this process is known as "differentiation." Cells at each stage were imaged from dozens of different angles using X-rays. Each set of 2-D images was used to calculate a 3-D reconstruction of a cell detailing the changing chromatin formations in the nuclei. They also were able to measure the dense packing in a form of chromatin called heterochromatin, and they learned about the importance of a specific protein in controlling the compaction of heterochromatin and its confinement to the nucleus. "It's a new way of looking at the nucleus where we don't have to chemically treat the cell," said Carolyn Larabell, director of the National Center for X-ray Tomography (NCXT), a joint program of Berkeley Lab and UC San Francisco (UCSF). "Being able to directly image and quantify changes in the nucleus is enormously important and has been on cell biologists' wish list for many years." Chromatin is "notoriously sensitive," she said, to chemical stains and other chemical additives that are often used in biological imaging to highlight regions of interest in a given sample. "Until now, it has only been possible to image the nucleus indirectly by staining it, in which case the researcher has to take a leap of faith that the stain was evenly distributed." Larabell, a faculty scientist at Berkeley Lab and a UCSF professor, said it was previously thought that chromatin existed as a series of disconnected islands, though the latest study showed how the chromatin is compartmentalized into two distinct regions of "crowding" that form a continuous network throughout the nucleus. "We were really surprised: There are no islands, it's all connected," she said, adding, "We could see how chromatins pack through the nucleus and how molecules move through the nucleus, and we found that heterochromatin is 30 percent more crowded than the region where active genes are. That cannot be done with any other imaging techniques." Two-dimensional images would have shown the nucleus as a "flat, confusing mess," she said. 
One aim of the latest study was to gain new insight into gene expression in mice specific to olfactory genes. Mice have about 1,500 genes related to smell. Each olfactory nerve cell expresses just one of these olfactory genes to produce a receptor that recognizes a related set of odors. The many receptors in a mouse's nasal cavity allow it to detect a wide range of smells. "We're trying to understand how the reorganization of chromatin affects gene expression," Larabell said. "No one's been able to study this at the human level yet." This research will hopefully lead to new insights about diseases and disorders that relate to gene expression. Already, the study's results are being incorporated into models of cell development. One of the precursors to Alzheimer's disease, which attacks the brain's nerve cells, is a loss of smell, so understanding this connection to olfactory nerve cells could perhaps serve as a diagnostic tool and perhaps unlock a deeper understanding of the degenerative disorder. The latest study used a microscopy technique known as soft X-ray tomography to record a series of images from small groups of dozens of frozen olfactory nerve cells in three separate stages of development. The technique, which is unique to Berkeley Lab's ALS, captured details as small as tens of nanometers, or tens of billionths of a meter. Researchers visually distinguished regions of highly compacted heterochromatin from other chromatin types. With the proven success of the imaging technique, Larabell said it's possible to perform statistical analyses based on large collections of cell nuclei images sorted by different stages of development. Coupled with other types of imaging techniques, researchers hope to isolate individual gene-selection processes in upcoming work. "This work highlights the power of multidisciplinary research," said Mark Le Gros, associate director of the NCXT and a physicist who was responsible for the design and construction of the X-ray microscope. Le Gros, the lead author in this research, added, "This is an example of work that required a combination of molecular biologists and cell biologists with physicists and computer scientists." The Advanced Light Source is a DOE Office of Science User Facility. The latest work also featured participation from researchers at the Foundation for Research and Technology-Hellas in Greece, Massachusetts Institute of Technology, and University of Jyväskylä in Finland. The work was supported by the National Institutes of Health.
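The tomographic step described above (many 2-D projections combined into a 3-D volume) rests on the same principle as standard filtered back-projection. The short sketch below reconstructs a single 2-D slice from simulated projections using scikit-image; it is a generic illustration added for this compilation, not the NCXT soft X-ray pipeline or its reconstruction software.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# A standard test image stands in for one slice through a specimen.
slice_2d = rescale(shepp_logan_phantom(), 0.5)

# Simulate projections ("sinogram") from many viewing angles, as in a tilt series.
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(slice_2d, theta=angles)

# Filtered back-projection recovers the slice from its projections.
reconstruction = iradon(sinogram, theta=angles)

rms_error = np.sqrt(np.mean((reconstruction - slice_2d) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```

Stacking many such reconstructed slices, or performing the equivalent fully 3-D reconstruction, yields the kind of volume from which chromatin regions can then be segmented and quantified.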


News Article | February 23, 2017
Site: www.eurekalert.org

PULLMAN, Wash. - A Washington State University study of the chemistry of technetium-99 has improved understanding of the challenging nuclear waste and could lead to better cleanup methods. The work is reported in the journal Inorganic Chemistry. It was led by John McCloy, associate professor in the School of Mechanical and Materials Engineering, and chemistry graduate student Jamie Weaver. Researchers from Pacific Northwest National Laboratory (PNNL), the Office of River Protection and Lawrence Berkeley National Laboratory collaborated. Technetium-99 is a byproduct of plutonium weapons production and is considered a major U.S. challenge for environmental cleanup. At the Hanford Site nuclear complex in Washington state, there are about 2,000 pounds of the element dispersed within approximately 56 million gallons of nuclear waste in 177 storage tanks. The U.S. Department of Energy is in the process of building a waste treatment plant at Hanford to immobilize hazardous nuclear waste in glass. But researchers have been stymied because not all the technetium-99 is incorporated into the glass and volatilized gas must be recycled back into the melter system. The element can be very soluble in water and moves easily through the environment when in certain forms, so it is considered a significant environmental hazard. Because technetium compounds are challenging to work with, earlier research has used less volatile substitutes to try to understand the material's behavior. Some of the compounds themselves have not been studied for 50 years, said McCloy. "The logistics are very challenging," he said. The WSU work was done in PNNL's highly specialized Radiochemical Processing Laboratory and the radiological annex of its Environmental Molecular Sciences Laboratory. The researchers conducted fundamental chemistry tests to better understand technetium-99 and its unique challenges for storage. They determined that the sodium forms of the element behave much differently than other alkalis, which possibly is related to its volatility and to why it may be so reactive with water. "The structure and spectral signatures of these compounds will aid in refining the understanding of technetium incorporation into nuclear waste glasses," said McCloy. The researchers also hope the work will contribute to the study of other poorly understood chemical compounds.


News Article | November 22, 2016
Site: www.eurekalert.org

Three years ago the White House launched the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative to accelerate the development and application of novel technologies that will give us a better understanding about how brains work. Since then, dozens of technology firms, academic institutions, scientists and others have been developing new tools to give researchers unprecedented opportunities to explore how the brain processes, utilizes, stores and retrieves information. But without a coherent strategy to analyze, manage and understand the data generated by these new technologies, advancements in the field will be limited. This is precisely why Lawrence Berkeley National Laboratory (Berkeley Lab) Computational Neuroscientist Kristofer Bouchard assembled an international team of interdisciplinary researchers--including mathematicians, computer scientists, physicists and experimental and computational neuroscientists--to develop a plan for managing, analyzing and sharing neuroscience data. Their recommendations were published in a recent issue of Neuron. "The U.S. BRAIN Initiative is just one of many national and private neuroscience initiatives globally that are working toward accelerating our understanding of brains," says Bouchard. "Many of these efforts have given a lot of attention to the technological challenges of measuring and manipulating neural activity, while significantly less attention has been paid to the computing challenges associated with the vast amounts of data that these technologies are generating." To maximize the return on investments in global neuroscience initiatives, Bouchard and his colleagues argue that the international neuroscience community should have an integrated strategy for data management and analysis. This coordination would facilitate the reproducibility of workflows, which then allows researchers to build on each other's work. As a first step, the authors recommend that researchers from all facets of neuroscience agree on standard descriptions and file formats for products derived from data analysis and simulations. After that, the researchers should work with computer scientists to develop hardware and software ecosystems for archiving and sharing data. The authors suggest an ecosystem similar to the one used by the physics community to share data collected by experiments like the Large Hadron Collider (LHC). In this case, each research group has their own local repository of physiological or simulation data that they've collected or generated. But eventually, all of this information should also be included in "meta-repositories" that are accessible to the greater neuroscience community. Files in the "meta-repositories" should be in a common format, and the repositories would ideally be hosted by an open-science supercomputing facility like the Department of Energy's (DOE's) National Energy Research Scientific Computing Center (NERSC), located at Berkeley Lab. Because novel technologies are producing unprecedented amounts of data, Bouchard and his colleagues also propose that neuroscientists collaborate with mathematicians to develop new approaches for data analysis and modify existing analysis tools to run on supercomputers. To maximize these collaborations, the analysis tools should be open-source and should integrate with brain-scale simulations, they emphasize. "These are the early days for neuroscience and big data, but we can see the challenges coming. 
This is not the first research community to face big data challenges; climate and high energy physics have been there and overcome many of the same issues," says Prabhat, who leads NERSC's Data & Analytics Services Group. Berkeley Lab is well positioned to help neuroscientists address these challenges because of its long tradition of interdisciplinary science, Prabhat adds. DOE facilities like NERSC and the Energy Sciences Network (ESnet) have worked closely with Lab computer scientists to help a range of science communities--from astronomy to battery research--collaborate and manage and archive their data. Berkeley Lab mathematicians have also helped researchers in various scientific disciplines develop new tools and methods for data analysis on supercomputers. "Harnessing the power of HPC resources will require neuroscientists to work closely with computer scientists and will take time, so we recommend rapid and sustained investment in this endeavor now," says Bouchard. "The insights generated from this effort will have high-payoff outcomes. They will support neuroscience efforts to reveal both the universal design features of a species' brain and help us understand what makes each individual unique." In addition to Bouchard and Prabhat, other authors on the paper include: James Aimone (Sandia National Laboratories); Miyoung Chun (Kavli Institute for Fundamental Neuroscience); Thomas Dean (Google Research); Michael Denker (Jülich Research Center); Markus Diesmann (Jülich Research Center/ Aachen University); Loren Frank (Kavli Institute for Fundamental Neuroscience/ Howard Hughes Medical Institute/ UC San Francisco); Narayanan Kasthuri (University of Chicago/Argonne National Laboratory); Christof Koch (Allen Institute for Brain Science); Friedrich Sommer (UC Berkeley); Horst Simon, David Donofrio and Oliver Rübel (Berkeley Lab).
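As a concrete illustration of what a self-describing, shareable data file can look like, the sketch below stores a toy electrophysiology recording together with its metadata in HDF5 using h5py. This is a generic example added for this compilation of the kind of standardized, metadata-rich format the authors advocate; it is not the specific schema or file format proposed in the Neuron paper.

```python
import numpy as np
import h5py

# Toy "recording": 16 channels, 10 seconds sampled at 1 kHz.
rate_hz = 1000.0
data = np.random.default_rng(0).standard_normal((16, 10_000)).astype("float32")

with h5py.File("example_session.h5", "w") as f:
    grp = f.create_group("ephys")
    dset = grp.create_dataset("voltage", data=data, compression="gzip")
    # Metadata travels with the data, so another lab can interpret the file on its own.
    dset.attrs["units"] = "microvolts"
    dset.attrs["sampling_rate_hz"] = rate_hz
    grp.attrs["species"] = "mouse"
    grp.attrs["session_id"] = "demo-001"

# Any collaborator or meta-repository service can read it back without extra documentation.
with h5py.File("example_session.h5", "r") as f:
    v = f["ephys/voltage"]
    print(v.shape, v.attrs["units"], v.attrs["sampling_rate_hz"])
```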


News Article | February 23, 2017
Site: phys.org

The work is reported in the journal Inorganic Chemistry. It was led by John McCloy, associate professor in the School of Mechanical and Materials Engineering, and chemistry graduate student Jamie Weaver. Researchers from Pacific Northwest National Laboratory (PNNL), the Office of River Protection and Lawrence Berkeley National Laboratory collaborated. Technetium-99 is a byproduct of plutonium weapons production and is considered a major U.S. challenge for environmental cleanup. At the Hanford Site nuclear complex in Washington state, there are about 2,000 pounds of the element dispersed within approximately 56 million gallons of nuclear waste in 177 storage tanks. The U.S. Department of Energy is in the process of building a waste treatment plant at Hanford to immobilize hazardous nuclear waste in glass. But researchers have been stymied because not all the technetium-99 is incorporated into the glass and volatilized gas must be recycled back into the melter system. The element can be very soluble in water and moves easily through the environment when in certain forms, so it is considered a significant environmental hazard. Because technetium compounds are challenging to work with, earlier research has used less volatile substitutes to try to understand the material's behavior. Some of the compounds themselves have not been studied for 50 years, said McCloy. "The logistics are very challenging," he said. The WSU work was done in PNNL's highly specialized Radiochemical Processing Laboratory and the radiological annex of its Environmental Molecular Sciences Laboratory. The researchers conducted fundamental chemistry tests to better understand technetium-99 and its unique challenges for storage. They determined that the sodium forms of the element behave much differently than other alkalis, which possibly is related to its volatility and to why it may be so reactive with water. "The structure and spectral signatures of these compounds will aid in refining the understanding of technetium incorporation into nuclear waste glasses," said McCloy. The researchers also hope the work will contribute to the study of other poorly understood chemical compounds. More information: Jamie Weaver et al, Chemical Trends in Solid Alkali Pertechnetates, Inorganic Chemistry (2017). DOI: 10.1021/acs.inorgchem.6b02694


News Article | September 19, 2016
Site: www.cemag.us

Graphene is the stuff of the future. For years, researchers and technologists have been predicting the utility of the one-atom-thick sheets of pure carbon in everything from advanced touch screens and semiconductors to long-lasting batteries and next-generation solar cells. But graphene’s unique intrinsic properties — supreme electrical and thermal conductivities and remarkable electron mobility, to name just a few — can only be fully realized if it is grown free from defects that disrupt the honeycomb pattern of the bound carbon atoms. A team led by Materials Scientist Anirudha Sumant with the U.S. Department of Energy’s (DOE) Argonne National Laboratory’s Center for Nanoscale Materials (CNM) and Materials Science Division, along with collaborators at the University of California-Riverside, has developed a method to grow graphene that contains relatively few impurities and costs less to make, in a shorter time and at lower temperatures compared to the processes widely used to make graphene today. Theoretical work led by Argonne nanoscientist Subramanian Sankaranarayanan at the CNM helped researchers understand the molecular-level processes underlying the graphene growth. The new technology taps ultrananocrystalline diamond (UNCD), a synthetic type of diamond that Argonne researchers have pioneered through years of research. UNCD serves as a physical substrate, or surface on which the graphene grows, and the source for the carbon atoms that make up a rapidly produced graphene sheet. “When I first looked at the [scanning electron micrograph] and saw this nice uniform, very complete layer, it was amazing,” says Diana Berman, the first author of the study and former postdoctoral research associate who worked with Sumant and is now an Assistant Professor at the University of North Texas. “I’d been dealing with all these different techniques of growing graphene, and you never see such a uniform, smooth surface.” Current graphene fabrication protocols introduce impurities during the etching process itself, which involves adding acid and extra polymers, and when they are transferred to a different substrate for use in electronics. “The impurities introduced during this etching and the transferring step negatively affect the electronic properties of the graphene,” Sumant says. “So you do not get the intrinsic properties of the graphene when you actually do this transfer.” The team found that the single-layer, single-domain graphene can be grown over micron-size holes laterally, making them completely free-standing (that is, detached from the underlying substrate). This makes it possible to exploit the intrinsic properties of graphene by fabricating devices directly over free-standing graphene. The new process is also much more cost-effective than conventional methods based on using silicon carbide as a substrate. Sumant says that the 3- to 4-inch silicon carbide wafers used in these types of growth methods cost about $1,200, while UNCD films on silicon wafers cost less than $500 to make. The diamond method also takes less than a minute to grow a sheet of graphene, where the conventional method takes on the order of hours. The high quality of graphene was confirmed by the UC Riverside co-authors Zhong Yan and Alexander Balandin by fabricating top-gate field-effect transistors from this material and measuring its electron mobility and charge carrier concentration. 
“It is well known that certain metals, such as nickel and iron, dissolve diamond at elevated temperatures, and the same process has been used for many years to polish diamond,” says Sumant. He and his team used this property to employ nickel in converting the top layer of diamond into amorphous carbon, but it was not clear how these freed carbon atoms converted instantly into high-quality graphene. After Sumant’s and Berman’s initial breakthrough of growing graphene directly on UNCD, Sankaranarayanan and his postdocs Badri Narayanan and Sanket Deshmukh, computational materials scientists at the CNM, used resources at the Argonne Leadership Computing Facility (ALCF) and reactive molecular dynamics simulations to help the team better understand the mechanism of the growth process underlying this interesting phenomenon. Computer simulations developed by Narayanan, Deshmukh and Sankaranarayanan showed that a particular crystallographic orientation of nickel, the (111) face, highly favors nucleation and subsequent rapid growth of graphene; this was then confirmed experimentally. These large-scale simulations also showed how graphene forms. The nickel atoms diffuse into the diamond and destroy its crystalline order, while carbon atoms from this amorphous solid move to the nickel surface and rapidly form honeycomb-like structures, resulting in mostly defect-free graphene. The nickel then percolates through the fine crystalline grains of the UNCD, sinking out of the way and removing the need for acid to dissolve away excess metal atoms from the top surface. “It is like meeting a good Samaritan at an unknown place who helps you, does his job and leaves quietly without a trace,” says Sumant. “The proven predictive power of our simulations places us in a position of advantage to enable rapid discovery of new catalytic alloys that mediate growth of high-quality graphene on dielectrics and move away on their own when the growth is completed,” adds Narayanan. In addition to its utility in making minimally defective, application-ready graphene for things like low-frequency vibration sensors, radio frequency transistors and better electrodes for water purification, Berman and Sumant say that the Argonne team has already secured three patents arising from the new graphene growth method. The researchers have already struck a collaboration with the Swedish Institute of Space Physics, involving the European Space Agency’s Jupiter Icy Moons Explorer (JUICE) program, to develop graphene-coated probes that may help exploratory vehicles sense the properties of plasma surrounding the moons of Jupiter. Closer to home, the team has also crafted diamond and graphene needles for researchers at North Carolina University to use in biosensing applications. The Argonne researchers are now fine-tuning the process — tweaking the temperature used to catalyze the reaction and adjusting the thickness of the diamond substrate and the composition of the metal film that facilitates the graphene growth — to both optimize the reaction and to better study the physics at the graphene-diamond interface. “We’re trying to tune this more carefully to have a better understanding of which conditions lead to what quality of graphene we’re seeing,” Berman says. Other Argonne authors involved in the study were Alexander Zinovev and Daniel Rosenmann. The paper, “Metal-induced rapid transformation of diamond into single and multilayer graphene on wafer scale,” is published in Nature Communications.
The study used resources of the CNM and the ALCF as well as the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory, all DOE Office of Science User Facilities. Additional support was provided by the U.S. Department of Energy’s Office of Science.
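The reactive molecular dynamics simulations described above are far beyond what can be reproduced here, but the basic machinery they rely on, advancing atoms step by step under interatomic forces, can be illustrated with a minimal sketch. The toy example below uses a simple Lennard-Jones potential and velocity-Verlet integration for a handful of atoms; the potential, parameters, and starting geometry are illustrative assumptions only, not the reactive force fields or the nickel-carbon system used in the actual study.

```python
# Minimal, illustrative molecular-dynamics sketch (NOT the study's reactive
# simulations): a few atoms interacting via a Lennard-Jones potential,
# advanced in time with velocity-Verlet integration. Reduced units assumed.
import numpy as np

# Place 8 atoms on a small cubic lattice so initial separations are reasonable.
pos = 1.2 * np.array([[i, j, k] for i in range(2)
                      for j in range(2) for k in range(2)], dtype=float)
vel = np.zeros_like(pos)
mass, dt, n_steps = 1.0, 1e-3, 1000

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces; O(N^2) loop is fine for a toy system."""
    forces = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            # F(r) = 24*eps/r * (2*(sigma/r)^12 - (sigma/r)^6), along rij
            f_mag = 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
            fij = f_mag * rij / r
            forces[i] += fij
            forces[j] -= fij
    return forces

forces = lj_forces(pos)
for step in range(n_steps):
    # velocity-Verlet: half-kick, drift, recompute forces, half-kick
    vel += 0.5 * dt * forces / mass
    pos += dt * vel
    forces = lj_forces(pos)
    vel += 0.5 * dt * forces / mass

print("final mean pair distance:",
      np.mean([np.linalg.norm(pos[i] - pos[j])
               for i in range(len(pos)) for j in range(i + 1, len(pos))]))
```

Production simulations of the kind run at the ALCF differ in essentially every particular (reactive bond-order force fields, millions of atoms, thermostats, parallel domain decomposition), but the time-stepping loop above is the conceptual core they share.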


The US Department of Energy (DOE) selected 33 small businesses to work directly with DOE national labs to accelerate the commercialization of new clean energy technologies. The department’s Office of Energy Efficiency and Renewable Energy is investing nearly $6.7 million under Round 1 of the new Small Business Vouchers (SBV) pilot. For Round 1, the small businesses and laboratories will collaborate on advancing a number of clean energy technologies, including water, wind, bioenergy, solar, buildings, vehicles, fuel cells, geothermal technologies, and advanced manufacturing. The selected small businesses will work with scientists at nine department laboratories: Oak Ridge National Laboratory (ORNL); National Renewable Energy Laboratory (NREL); Lawrence Berkeley National Laboratory (LBNL); Sandia National Laboratories (SNL); Pacific Northwest National Laboratory (PNNL); Idaho National Laboratory (INL); Los Alamos National Laboratory (LANL); Argonne National Laboratory (ANL); and Lawrence Livermore National Laboratory (LLNL). SBV is a collaborative, national effort that provides $20 million for US companies to help improve industry awareness of national laboratory capabilities and provide small and mid-size businesses access to the resources available within the national laboratory system. Vouchers range from $50,000 to $300,000. The companies were competitively chosen from a pool of hundreds of applications. Almost 40% of the businesses represent new DOE-industry partnerships and relationships with the national labs. Building on the tremendous response to Round 1, the department has also begun accepting applications for Round 2 of the SBV pilot. A total of $13 million in funding remains; over the course of the year, up to 100 vouchers will be awarded.


News Article | November 21, 2016
Site: www.eurekalert.org

The living machinery responsible for photosynthesis - while commonplace and essential to life on Earth - is still not fully understood. One of its molecular mysteries involves how a protein complex, photosystem II, harvests energy from sunlight and uses it to split water into hydrogen and oxygen. This process generates the oxygen in the air that we all breathe. New X-ray methods at the Department of Energy's SLAC National Accelerator Laboratory have captured the highest resolution room-temperature images of this protein complex, which allows scientists to closely watch how water is split during photosynthesis at the temperature at which it occurs naturally. The research team took diffraction images using the bright, fast pulses of X-rays at SLAC's X-ray free-electron laser - the Linac Coherent Light Source (LCLS), a DOE Office of Science User Facility. Nature published the study on Nov. 21. Previously, the resting state of photosystem II had been seen in detail using samples that were frozen. In this latest study, the researchers were able to see two key steps in photosynthetic water splitting under conditions as it occurs in nature, a big step to decoding how the process works in detail. A damage-free, room temperature study means there are fewer artifacts in the images, and this gives a clearer picture of how photosystem II works in nature. For many years now, scientists have been trying to understand this process in meticulous detail. Besides its fundamental importance to science, understanding this process might help develop ways to create artificial photosynthesis devices that can serve as potential clean energy sources. "The eventual goal is to emulate what photosynthesis has been doing for about three billion years. This has been a research challenge for decades," said Junko Yano, principal investigator and senior scientist at Lawrence Berkeley National Laboratory. "We now have the right tool, the femtosecond X-ray laser pulses created at LCLS, that allows us to observe the water-splitting reaction as it happens, in real time, and as it happens in nature." "We want to have enough snapshots of this process to see how exactly the oxygen is formed," added Uwe Bergmann, a distinguished scientist at SLAC, and co-author of the Nature paper. "This method - to be able to get high resolution images at room temperature and under illumination in real time - is really the strategy we need to catch the molecule in the act." The international research team is a long-standing collaboration between SLAC and Berkeley Lab, and includes Humboldt University in Germany, Umeå University and Uppsala University in Sweden, Stanford University, Brookhaven National Laboratory and University of Oxford in the United Kingdom. In previous high-resolution studies of the system at synchrotron light sources, samples experienced radiation damage from longer exposure to X-rays. At LCLS, the researchers were able to take advantage of the ultrafast laser pulses to collect X-ray crystallography and spectroscopy data before damage occurred. "The beauty of the LCLS is that the laser pulses are so short - only 40 femtoseconds in duration, but very intense - that you can collect the data before the sample is destroyed," said co-author Jan Kern, scientist at Berkeley Lab and SLAC, in a related press release. "It's very new, and there are only two places in the world, LCLS and the SPring-8 Angstrom Compact free electron LAser (SACLA), where this can be done at present." 
In a recent experiment at SACLA, the only other operating hard X-ray free-electron laser, another team of scientists was able to look at the first step in the cycle in a frozen sample, and under dark conditions - which means the water-splitting reaction had not yet been initiated in the protein complex. This latest study at SLAC catches this first step, as well as another step that is two stages further along in the four-step cycle, and at the temperature at which the process normally occurs in nature. To do this, the scientists placed droplets of a solution containing small, crystallized forms of photosystem II on a moving conveyor belt and illuminated the samples with pulses of green light from a laser to initiate the water-splitting reaction. After two light pulses, they captured images of the crystals using X-rays, with a resolution finer than 2.5 angstroms - significantly better than previously achieved at room temperature. The water-splitting reaction takes place at a metal catalyst within the photosystem II protein, known as the oxygen-evolving complex, that is made up of four manganese atoms and one calcium atom. The complex uses the energy from light to form pure oxygen from two water molecules. The four manganese atoms are critical in shuffling electrons through the cycle, but it is unknown where exactly in the complex the involved water is located or where the oxygen formation occurs. To sort this out, the researchers used ammonia, a water substitute, to narrow down where oxygen atoms from two water molecules combine to form an oxygen molecule. If the ammonia was bound to a site and the reaction still proceeded, that site is unlikely to be part of the oxygen molecule formation. The results from this study offered a surprise - the data do not seem to support two leading theories for how the reaction proceeds within the oxygen-evolving complex. In future studies using this same technique, the researchers hope to capture more images at different steps of the process, which will allow them to further refine the details of the water-splitting reaction. "The chemistry is so unusual," said co-principal investigator Vittal Yachandra, a senior scientist at Berkeley Lab. "Learning how exactly this water-splitting process works will be a breakthrough in our understanding, and it can help in the development of solar fuels and renewable energy." This research was supported by the DOE Office of Science and the National Institutes of Health, among other funding agencies. Some of the work for this study was conducted at the Advanced Light Source and the National Energy Research Scientific Computing Center, both DOE Office of Science User Facilities at Berkeley Lab. SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the U.S. Department of Energy Office of Science. SLAC National Accelerator Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


Home > Press > How seashells get their strength: Study shows how calcium carbonate forms composites to make strong materials such as in shells and pearls Abstract: Seashells and lobster claws are hard to break, but chalk is soft enough to draw on sidewalks. Though all three are made of calcium carbonate crystals, the hard materials include clumps of soft biological matter that make them much stronger. A study today in Nature Communications reveals how soft clumps get into crystals and endow them with remarkable strength. The results show that such clumps become incorporated via chemical interactions with atoms in the crystals, an unexpected mechanism based on previous understanding. By providing insight into the formation of natural minerals that are a composite of both soft and hard components, the work will help scientists develop new materials for a sustainable energy future, based on this principle. "This work helps us to sort out how rather weak crystals can form composite materials with remarkable mechanical properties," said materials scientist Jim De Yoreo of the Department of Energy's Pacific Northwest National Laboratory. "It also provides us with ideas for trapping carbon dioxide in useful materials to deal with the excess greenhouse gases we're putting in the atmosphere, or for incorporating light-responsive nanoparticles into highly ordered crystalline matrices for solar energy applications." Beautiful and functional Calcium carbonate is one of the most important materials on earth, crystallizing into chalk, shells, and rocks. Animals from mollusks to people use calcium carbonate to make biominerals such as pearls, seashells, exoskeletons, or the tiny organs in ears that maintain balance. These biominerals include proteins or other organic matter in the crystalline matrix to convert the weak calcium carbonate to hard, durable materials. Scientists have been exploring how organisms produce these biominerals in the hopes of determining the basic geochemical principles of how they form, and also how to build synthetic materials with unique properties in any desired shape or size. The strength of a material depends on how easy it is to disrupt its underlying crystal matrix. If a material is compressed, then it becomes harder to break the matrix apart. Proteins trapped in calcium carbonate crystals create a compressive force -- or strain -- within the crystal structure. Unlike the strain that makes muscles sore, this compressive strain is helpful in materials, because it makes it harder to disrupt the underlying crystal structure, thereby adding strength. Scientists understand how forces, stress and strain combine to make strong materials, but they understand less about how to create the materials in the first place. Pearls of wisdom The leading explanation for how growing crystals incorporate proteins and other particles is by simple mechanics. Particles land on the flat surface of calcium carbonate as it is crystallizing, and units of calcium carbonate attach over and around the particles, trapping them. "The standard view is that the crystal front moves too fast for the inclusions to move out of the way, like a wave washing over a rock," said De Yoreo. That idea's drawback is that it lacks the details needed to explain where the strain within the material comes from. The new results from De Yoreo and colleagues do, however. "We've found a completely different mechanism," he said. 
To find out how calcium carbonate incorporates proteins or other strength-building components, the team turned to atomic force microscopy, also known as AFM, at the Molecular Foundry, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory. In AFM, the microscope tip delicately runs over the surface of a sample like a needle running over the grooves in a vinyl record. This creates a three-dimensional image of a specimen under the scope. The team used a high concentration of calcium carbonate that naturally forms a crystalline mineral known as calcite. The calcite builds up in layers, creating uneven surfaces during growth, like steps and terraces on a mountainside. Or, imagine a staircase. A terrace is the flat landing at the bottom; the stair steps have vertical edges from which calcite grows out, eventually turning into terraces too. For their inclusions, the team created spheres out of organic molecules and added them to the mix. These spheres called micelles are molecules that roll up like roly-poly bugs based on the chemistry along their bodies -- pointing outwards are the parts of their molecules that play well chemically with both the surrounding water and the calcite, while tucked inside are the parts that don't get along with the watery environment. Better composites through chemistry The first thing the team noticed under the microscope is that the micelles do not randomly land on the flat terraces. Instead they only stick to the edges of the steps. "The step edge has chemistry that the terrace doesn't," said De Yoreo. "There are these extra dangling bonds that the micelles can interact with." The edges hold onto the micelles as the calcium carbonate steps close around them, one after another. The team watched as the growing steps squeezed the micelles. As the step closed around the top of the micelle, first a cavity formed and then it disappeared altogether under the surface of the growing crystal. To verify that the micelles were in fact buried within the crystals, the team dissolved the crystal and looked again. Like running a movie backwards, the team saw micelles appear as the layers of crystal disappeared. Finally, the team recreated the process in a mathematical simulation. This showed them that the micelles -- or any spherical inclusions -- are compressed like springs as the steps close around them. These compressed springs then create strain in the crystal lattice between the micelles, leading to enhanced mechanical strength. This strain likely accounts for the added strength seen in seashells, pearls and similar biominerals. "The steps capture the micelles for a chemical reason, not a mechanical one, and the resulting compression of the micelles by the steps then leads to forces that explain where the strength comes from," said De Yoreo. ### This work was supported by the Department of Energy Office of Science, National Institutes of Health. About Pacific Northwest National Laboratory Interdisciplinary teams at Pacific Northwest National Laboratory address many of America's most pressing issues in energy, the environment and national security through advances in basic and applied science. Founded in 1965, PNNL employs 4,400 staff and has an annual budget of nearly $1 billion. It is managed by Battelle for the U.S. Department of Energy's Office of Science. As the single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. 
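The mathematical picture described above, in which a trapped micelle behaves like a compressed spring that strains the surrounding calcite, can be made concrete with a small back-of-the-envelope calculation. The sketch below treats an inclusion as a linear (Hookean) spring squeezed by the closing crystal step and estimates the restoring force, stored elastic energy, and nominal strain; the spring constant, sizes, and compression are invented placeholder numbers, not values from the study.

```python
# Toy Hookean-spring picture of a squeezed inclusion in a growing crystal.
# All numbers below are illustrative assumptions, not measurements from the paper.

k = 0.05          # effective spring constant of the micelle, N/m (assumed)
d0 = 20e-9        # undeformed micelle diameter, m (assumed ~20 nm)
d = 17e-9         # diameter after the calcite step closes around it (assumed)

compression = d0 - d                          # how far the "spring" is squeezed, m
restoring_force = k * compression             # F = k * x, newtons
elastic_energy = 0.5 * k * compression ** 2   # U = 1/2 * k * x^2, joules
nominal_strain = compression / d0             # dimensionless measure of the squeeze

print(f"compression:     {compression * 1e9:.1f} nm")
print(f"restoring force: {restoring_force * 1e12:.1f} pN")
print(f"stored energy:   {elastic_energy * 1e21:.2f} zJ")
print(f"nominal strain:  {nominal_strain:.1%}")
```

The point of the exercise is qualitative: the energy stored by many such compressed inclusions is what stiffens the composite, which is the behavior the team's simulation captured in far greater detail.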


Home > Press > Unique 2-level cathode structure improves battery performance: Controlling surface chemistry could lead to higher-capacity, faster-charging batteries for electronics, vehicles, and energy-storage applications Abstract: Building a better battery is a delicate balancing act. Increasing the amounts of chemicals whose reactions power the battery can lead to instability. Similarly, smaller particles can improve reactivity but expose more material to degradation. Now a team of scientists from the U.S. Department of Energy's (DOE) Brookhaven National Laboratory, Lawrence Berkeley National Laboratory, and SLAC National Accelerator Laboratory say they've found a way to strike a balance--by making a battery cathode with a hierarchical structure where the reactive material is abundant yet protected. This 3-D rendering shows a hollow sphere about 10 millionths of a meter across made of a cathode material called NMC. Color-coding shows the uneven distribution of three key ingredients throughout the particle: manganese (blue), cobalt (red) and nickel (green). This uneven distribution allows the particle to store more energy during lithium-ion battery operation while protecting itself from degradation. SLAC National Accelerator Laboratory Test batteries incorporating this cathode material exhibited improved high-voltage cycling behavior--the kind you'd want for fast-charging electric vehicles and other applications that require high-capacity storage. The scientists describe the micro-to-nanoscale details of the cathode material in a paper published in the journal Nature Energy January 11, 2016. "Our colleagues at Berkeley Lab were able to make a particle structure that has two levels of complexity where the material is assembled in a way that it protects itself from degradation," explained Brookhaven Lab physicist and Stony Brook University adjunct assistant professor Huolin Xin, who helped characterize the nanoscale details of the cathode material at Brookhaven Lab's Center for Functional Nanomaterials (CFN, https://www.bnl.gov/cfn/). X-ray imaging performed by scientists at the Stanford Synchrotron Radiation Lightsource (SSRL) at SLAC along with Xin's electron microscopy at CFN revealed spherical particles of the cathode material measuring millionths of meter, or microns, in diameter made up of lots of smaller, faceted nanoscale particles stacked together like bricks in a wall. The characterization techniques revealed important structural and chemical details that explain why these particles perform so well. The lithium ion shuttle Chemistry is at the heart of all lithium-ion rechargeable batteries, which power portable electronics and electric cars by shuttling lithium ions between positive and negative electrodes bathed in an electrolyte solution. As lithium moves into the cathode, chemical reactions generate electrons that can be routed to an external circuit for use. Recharging requires an external current to run the reactions in reverse, pulling the lithium ions out of the cathode and sending them to the anode. Reactive metals like nickel have the potential to make great cathode materials--except that they are unstable and tend to undergo destructive side reactions with the electrolyte. So the Brookhaven, Berkeley, and SLAC battery team experimented with ways to incorporate nickel but protect it from these destructive side reactions. 
They sprayed a solution of lithium, nickel, manganese, and cobalt mixed at a certain ratio through an atomizer nozzle to form tiny droplets, which then decomposed to form a powder. Repeatedly heating and cooling the powder triggered the formation of tiny nanosized particles and the self-assembly of these particles into the larger spherical, sometimes hollow, structures. Using x-rays at SLAC's SSRL, the scientists made chemical "fingerprints" of the micron-scale structures. The synchrotron technique, called x-ray spectroscopy, revealed that the outer surface of the spheres was relatively low in nickel and high in unreactive manganese, while the interior was rich in nickel. "The manganese layer forms an effective barrier, like paint on a wall, protecting the inner structure of the nickel-rich 'bricks' from the electrolyte," Xin said. But how were the lithium ions still able to enter the material to react with the nickel? To find out, Xin's group at the CFN ground up the larger particles to form a powder composed of much smaller clumps of the nanoscale primary particles with some of the interfaces between them still intact. "These samples show a small subset of the bricks that form the wall. We wanted to see how the bricks are put together. What kind of cement or mortar binds them? Are they layered together regularly or are they randomly oriented with spaces in between?" Xin said. Nanoscale details explain improved performance Using an aberration-corrected scanning transmission electron microscope--a scanning transmission electron microscope outfitted with a pair of "glasses" to improve its vision--the scientists saw that the particles had facets, flat faces or sides like the cut edges of a crystal, which allowed them to pack tightly together to form coherent interfaces with no mortar or cement between the bricks. But there was a slight misfit between the two surfaces, with the atoms on one side of the interface being ever so slightly offset relative to the atoms on the adjoining particle. "The packing of atoms at the interfaces between the tiny particles is slightly less dense than the perfect lattice within each individual particle, so these interfaces basically make a highway for lithium ions to go in and out," Xin said. Like tiny smart cars, the lithium ions can move along these highways to reach the interior structure of the wall and react with the nickel, but much larger semi-truck-size electrolyte molecules can't get in to degrade the reactive material. Using a spectroscopy tool within their microscope, the CFN scientists produced nanoscale chemical fingerprints that revealed there was some segregation of nickel and manganese even at the nanoscale, just as there was in the micron-scale structures. "We don't know yet if this is functionally significant, but we think it could be beneficial and we want to study this further," Xin said. For example, he said, perhaps the material could be made at the nanoscale to have a manganese skeleton to stabilize the more reactive, less-stable nickel-rich pockets. "That combination might give you a longer lifetime for the battery along with the higher charging capacity of the nickel," he said. ### This work was supported by the Assistant Secretary for Energy Efficiency and Renewable Energy, Vehicle Technologies Office, of the U.S. Department of Energy. The Center for Functional Nanomaterials at Brookhaven Lab and the Stanford Synchrotron Radiation Lightsource at SLAC are both DOE Office of Science User Facilities supported by the DOE Office of Science. 
About Brookhaven National Laboratory Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov. One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. Brookhaven is operated and managed for DOE's Office of Science by Brookhaven Science Associates, a limited-liability company founded by the Research Foundation for the State University of New York on behalf of Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit applied science and technology organization.


Home > Press > Technique matters: A different way to make cathodes may mean better batteries: Berkeley Lab researchers find that controlling surface chemistry is important to developing high-energy lithium-ion batteries Abstract: Lithium nickel manganese cobalt oxide, or NMC, is one of the most promising chemistries for better lithium batteries, especially for electric vehicle applications, but scientists have been struggling to get higher capacity out of them. Now researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) have found that using a different method to make the material can offer substantial improvements. Working with scientists at two other Department of Energy (DOE) labs--Brookhaven National Laboratory and SLAC National Accelerator Laboratory--a team led by Berkeley Lab battery scientist Marca Doeff was surprised to find that using a simple technique called spray pyrolysis can help to overcome one of the biggest problems associated with NMC cathodes--surface reactivity, which leads to material degradation. "We made some regular material using this technique, and lo and behold, it performed better than expected," said Doeff, who has been studying NMC cathodes for about seven years. "We were at a loss to explain this, and none of our conventional material characterization techniques told us what was going on, so we went to SLAC and Brookhaven to use more advanced imaging techniques and found that there was less nickel on the particle surfaces, which is what led to the improvement. High nickel content is associated with greater surface reactivity." Their results were published online in the premier issue of the journal Nature Energy in an article titled, "Metal segregation in hierarchically structured cathode materials for high-energy lithium batteries." The facilities used were the Stanford Synchrotron Radiation Lightsource (SSRL) at SLAC and the Center for Functional Nanomaterials (CFN) at Brookhaven, both DOE Office of Science User Facilities. These results are potentially significant because they pave the way for making lithium-ion batteries that are cheaper and have higher energy density. "We still want to increase the nickel content even further, and this gives us a possible avenue for doing that," Doeff said. "The nickel is the main electro-active component, plus it's less expensive than cobalt. The more nickel you have, the more practical capacity you may have at voltages that are practical to use. We want more nickel, but at the same time, there's the problem with surface reactivity." The cathode is the positive electrode in a battery, and development of an improved cathode material is considered essential to achieving a stable high-voltage cell, the subject of intense research. Spray pyrolysis is a commercially available technique used for making thin films and powders but has not been widely used to make materials for battery production. The surface reactivity is a particular problem for high-voltage cycling, which is necessary to achieve higher capacities needed for high-energy devices. The phenomenon has been studied and various strategies have been tried to ameliorate the issue over the years, including using partial titanium substitution for cobalt, which counteracts the reactivity of the surfaces to some extent. At SSRL researchers Dennis Nordlund and Yijin Liu used x-ray transmission microscopy and spectroscopy to examine the material in the tens of nanometers to 10-30 micron range. 
At CFN, researcher Huolin Xin used a technique called electron energy loss spectroscopy (EELS) with a scanning transmission electron microscope (STEM), which was able to zoom in on details down to the nanoscale. At these two scales, Doeff and her Berkeley Lab colleagues--Feng Lin, Yuyi Li, Matthew Quan, and Lei Cheng--working with the scientists at SSRL and CFN, made some important findings about the material. Lin, a former Berkeley Lab postdoctoral researcher working with Doeff and first author on the paper, said: "Our previous studies revealed that engineering the surface of cathode particles could be the key to stabilizing battery performance. After some deep effort to understand the stability challenges of NMC cathodes, we are now getting one step closer to improving NMC cathodes by tuning surface metal distribution." The research results point the way to further refinements. "This research suggests a path forward to getting these materials to cycle with higher capacities--that is to design materials that are graded, with less nickel on the surface," Doeff said. "I think our next step will be to try to make these materials with a larger compositional gradient and combine some other things to make them work together, such as titanium substitution, so we can utilize more capacity and thereby increase the energy density in a lithium ion battery." Spray pyrolysis is an inexpensive, common technique for making materials. "The reason we like it is that it offers a lot of control over the morphology. You get beautiful spherical morphology which is very good for battery materials," Doeff said. "We're not the first ones who have come up with the idea of decreasing nickel on the surface. But we were able to do it in one step using a very simple procedure." ### This research was supported by DOE's Vehicle Technologies Office. About Berkeley Lab Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit www.lbl.gov. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


Home > Press > Nature's masonry: The first steps in how thin protein sheets form polyhedral shells: First-of-its kind study shows beginning stages in assembly of bacterial microcompartments Abstract: Scientists have for the first time viewed how bacterial proteins self-assemble into thin sheets and begin to form the walls of the outer shell for nano-sized polyhedral compartments that function as specialized factories. The research, led by researchers at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and Michigan State University in collaboration with the University of Liverpool, provides new clues for scientists seeking to use these 3-D structures as "nanoreactors" to selectively suck in toxins or churn out desired products. The new insight may aid scientists who seek to tap this natural origami by designing novel compartments or using them as scaffolding for new types of nanoscale architectures, such as drug-delivery systems. "We have a new clue in understanding nature's inner-cell architecture," said Cheryl Kerfeld, a Berkeley Lab structural biologist who is co-corresponding author on the study. Her research group at Berkeley Lab specializes in the structure and inner workings of these tiny compartments, known as bacterial microcompartments or BMCs. Kerfeld holds joint appointments with Berkeley Lab's Molecular Biophysics and Integrated Bioimaging (MBIB) Division and Michigan State University. "We usually only get to see these structures after they form, but in this case we're watching them assemble and answering some questions about how they form," Kerfeld said. "This is the first time anyone has visualized the self-assembly of the facets, or sides, of the microcompartments. It's like seeing walls, made up of hexagonally shaped tiles, being built by unseen hands." The study was published online Nov. 30 in Nano Letters. Several models had been proposed for how these compartments are built from scratch inside bacteria by proteins, and there were many open questions about the construction process. Researchers combined X-ray studies of the 3-D structure of a protein that resembles a hexagon with imaging by an atomic-force microscope to reveal how the hexagons arrange in a honeycomb pattern in the microcompartment's walls. Markus Sutter, a Berkeley Lab scientist who is the study's lead author, determined the 3-D structure of the basic building block protein at the Advanced Light Source at Berkeley Lab using crystallized samples. Patterns produced when X-rays struck the protein crystals provided key details about the protein's shape, at the scale of individual atoms. "That gave us some exact dimensions," Sutter said, which helped to interpret the microscope images. "It also showed us that hexagons had distinct sidedness: One side is concave, the other side is convex." Liverpool's atomic-force microscope, BioAFM, showed that individual hexagon-shaped protein pieces naturally join to form ever-larger protein sheets in a liquid solution. The hexagons only assembled with each other if they had the same orientation--convex with convex or concave with concave. "Somehow they selectively make sure they end up facing the same way," Kerfeld added. The study also found that individual hexagon-shaped pieces of the protein sheet can dislodge and move from one protein sheet to another. Such dynamics may allow fully formed compartments to repair individual sides. 
The protein sheets studied were not viewed inside living bacteria, though the conditions of the microscope experiment were designed to mimic those of the natural bacterial environment. "We think this is what goes on when these compartments assemble inside the microbe," Kerfeld said. Some studies have proposed that the protein shell of microcompartments might be several layers thick. However, this study suggests that the shell facets are composed of a single protein layer. Sutter said this makes sense: The compartments are known to selectively allow some chemical exchanges between their contents and their outside environment, and a thicker shell could complicate these exchanges. The exact mechanism for this chemical exchange is not yet well understood. This and other mysteries of the microcompartments can hopefully be resolved with follow-up studies that seek to chronicle the complete assembly process, the researchers said. Fully formed 3-D microcompartments have a soccer-ball-like geometry that incorporates pentagon-shaped protein structures, known as pentamers, that were not included in the latest study. "The holy grail is to see the structure and dynamics of an intact shell, composed of several different types of hexagonal proteins and with the pentagons that cap its corners," Kerfeld said. It's possible that simply adding these pentamers to the protein sheets from the latest experiment could stimulate the growth of a complete 3-D structure, but Kerfeld added, "I wouldn't be surprised if there's more to the story." Once more is learned about the microcompartments, it's conceivable they could be used to concentrate the production of beneficial enzymes, organize them to produce an ordered sequence of chemical reactions, or remove particular toxins from the surrounding environment, she said. ### The X-ray studies were performed at Berkeley Lab's Advanced Light Source, a DOE Office of Science User Facility, and the microscope work was performed by Kerfeld's collaborator, Luning Liu at the University of Liverpool. Matthew Faulkner, Clément Aussignargues, Bradley C. Paasch and Steve Barrett also participated in the study. This research was supported by the National Institutes of Health. About Lawrence Berkeley National Laboratory Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit www.lbl.gov. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit the Office of Science website at science.energy.gov/.


Home > Press > Revealing the fluctuations of flexible DNA in 3-D: First-of-their-kind images by Berkeley Lab-led research team could aid in use of DNA to build nanoscale devices Abstract: An international team working at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) has captured the first high-resolution 3-D images from individual double-helix DNA segments attached at either end to gold nanoparticles. The images detail the flexible structure of the DNA segments, which appear as nanoscale jump ropes. This video shows techniques that scientists used to produce 3-D reconstructions of shape fluctuations in double-helix DNA segments attached to gold nanoparticles. Credit: Lei Zhang, Dongsheng Lei, Jessica M. Smith, Meng Zhang, Huimin Tong, Xing Zhang, Zhuoyang Lu, Jiankang Liu, A. Paul Alivisatos & Gang "Gary" Ren These views compare the various shape fluctuations obtained from different samples of the same type of double-helix DNA segment (DNA renderings in green, 3-D reconstructions in purple) connected to gold nanoparticles (yellow). Credit: Berkeley Lab This unique imaging capability, pioneered by Berkeley Lab scientists, could aid in the use of DNA segments as building blocks for molecular devices that function as nanoscale drug-delivery systems, markers for biological research, and components for computer memory and electronic devices. It could also lead to images of important disease-relevant proteins that have proven elusive for other imaging techniques, and of the assembly process that forms DNA from separate, individual strands. The shapes of the coiled DNA strands, which were sandwiched between polygon-shaped gold nanoparticles, were reconstructed in 3-D using a cutting-edge electron microscope technique coupled with a protein-staining process and sophisticated software that provided structural details to the scale of about 2 nanometers, or two billionths of a meter. "We had no idea about what the double-strand DNA would look like between the nanogold particles," said Gang "Gary" Ren, a Berkeley Lab scientist who led the research. "This is the first time for directly visualizing an individual double-strand DNA segment in 3-D," he said. The results were published in the March 30 edition of Nature Communications. The method developed by this team, called individual-particle electron tomography (IPET), had earlier captured the 3-D structure of a single protein that plays a key role in human cholesterol metabolism. By grabbing 2-D images of the same object from different angles, the technique allows researchers to assemble a 3-D image of that object. The team has also used the technique to uncover the fluctuation of another well-known flexible protein, human immunoglobulin 1, which plays a role in our immune system. For this latest study of DNA nanostructures, Ren used an electron-beam study technique called cryo-electron microscopy (cryo-EM) to examine frozen DNA-nanogold samples, and used IPET to reconstruct 3-D images from samples stained with heavy metal salts. The team also used molecular simulation tools to test the natural shape variations, called "conformations," in the samples, and compared these simulated shapes with observations. Ren explained that the naturally flexible dynamics of samples, like a man waving his arms, cannot be fully detailed by any method that uses an average of many observations. 
A popular way to view the nanoscale structural details of delicate biological samples is to form them into crystals and zap them with X-rays, though this does not preserve their natural shape, and the DNA-nanogold samples in this study are incredibly challenging to crystallize. Other common research techniques may require a collection of thousands of near-identical objects, viewed with an electron microscope, to compile a single, averaged 3-D structure. But this 3-D image may not adequately show the natural shape fluctuations of a given object. The samples in the latest experiment were formed from individual polygon gold nanostructures, measuring about 5 nanometers across, connected to single DNA-segment strands with 84 base pairs. Base pairs are basic chemical building blocks that give DNA its structure. Each individual DNA segment and gold nanoparticle naturally zipped together with a partner to form the double-stranded DNA segment with a gold particle at either end. The samples were flash-frozen to preserve their structure for study with cryo-EM imaging, and the distance between the two gold particles in individual samples varied from 20 to 30 nanometers based on the different shapes observed in the DNA segments. Researchers used a cryo-electron microscope at Berkeley Lab's Molecular Foundry for this study. They collected a series of tilted images of the stained objects, and reconstructed 14 electron-density maps that detailed the structure of individual samples using the IPET technique. They gathered a dozen conformations for the samples and found the DNA shape variations were consistent with those measured in the flash-frozen cryo-EM samples. The shapes were also consistent with samples studied using other electron-based imaging and X-ray scattering methods, and with computer simulations. While the 3-D reconstructions show the basic nanoscale structure of the samples, Ren said that the next step will be to work to improve the resolution to the sub-nanometer scale. "Even in this current state we begin to see 3-D structures at 1- to 2-nanometer resolution," he said. "Through better instrumentation and improved computational algorithms, it would be promising to push the resolution to the point of visualizing a single DNA helix within an individual protein." The technique, he said, has already excited interest among some prominent pharmaceutical companies and nanotechnology researchers, and his science team already has dozens of related research projects in the pipeline. In future studies, researchers could attempt to improve the imaging resolution for complex structures that incorporate more DNA segments as a sort of "DNA origami," Ren said. Researchers hope to build and better characterize nanoscale molecular devices using DNA segments that can, for example, store and deliver drugs to targeted areas in the body. "DNA is easy to program, synthesize and replicate, so it can be used as a special material to quickly self-assemble into nanostructures and to guide the operation of molecular-scale devices," he said. "Our current study is just a proof of concept for imaging these kinds of molecular devices' structures." ### The Molecular Foundry is a DOE Office of Science User Facility. In addition to Berkeley Lab scientists, other researchers contributing to this study were from UC Berkeley, the Kavli Energy NanoSciences Institute at Berkeley Lab and UC Berkeley, and Xi'an Jiaotong University in China.
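IPET builds a 3-D map by combining 2-D views of the same particle recorded at many tilt angles. The actual reconstruction pipeline is far more sophisticated, but the core idea behind any tilt-series method, tomographic back-projection, can be sketched in a few lines: project a known test image at several angles, then smear each projection back across the plane and average. Everything below (the test image, the angle sampling, and the use of scipy.ndimage.rotate) is an illustrative assumption, not the IPET algorithm itself.

```python
# Conceptual sketch of tomographic back-projection: recover a 2-D object
# from its 1-D projections at several tilt angles. A toy illustration of
# tilt-series reconstruction, not the IPET method used in the study.
import numpy as np
from scipy import ndimage

# Simple 64x64 test "object": a bright off-center rectangle.
obj = np.zeros((64, 64))
obj[20:35, 28:44] = 1.0

angles = np.arange(0, 180, 10)  # tilt angles in degrees (assumed sampling)

# Forward step: rotate the object to each tilt and sum along columns,
# mimicking the projection image recorded at that angle.
projections = [ndimage.rotate(obj, a, reshape=False, order=1).sum(axis=0)
               for a in angles]

# Back-projection: smear each 1-D projection across the plane at its angle
# and accumulate; the average is a blurred but recognizable reconstruction.
recon = np.zeros_like(obj)
for a, p in zip(angles, projections):
    smear = np.tile(p, (obj.shape[0], 1))            # spread projection over rows
    recon += ndimage.rotate(smear, -a, reshape=False, order=1)
recon /= len(angles)

print("reconstructed value inside the rectangle: %.2f" % recon[27, 35])
print("reconstructed value in empty background:  %.2f" % recon[5, 5])
```

Real single-particle tomography adds contrast-transfer corrections, iterative refinement, and careful handling of the missing tilt wedge, but the project-and-back-project loop above is the geometric heart of turning a tilt series into a 3-D density map.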
This work was supported by the National Science Foundation, DOE Office of Basic Energy Sciences, National Institutes of Health, the National Natural Science Foundation of China, Xi'an Jiaotong University in China, and the Ministry of Science and Technology in China. About Berkeley Lab Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit www.lbl.gov. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | November 18, 2016
Site: www.csmonitor.com

To feed the growing global population, food production will have to increase significantly in the next 30 years. Scientists may have just come up with one possible solution: more efficient photosynthesis. Three tobacco plants transformed to improve photoprotection recovery are visibly larger than the normal (wildtype) tobacco plant (r.). Scientists have altered a plant’s genes to make it use sunlight more efficiently, a breakthrough that may eventually dramatically increase the amount of food grown. About 7 billion people live on Earth today and that number is expected to rise to more than 9 billion by 2050. In order to meet that growing population's demand for food, the United Nations Food and Agriculture Organization has estimated that we will need to increase global food production by at least 70 percent. To make that happen, scientists have come up with a gene editing method that could make the process of photosynthesis more efficient in plants, thus increasing crop yield without increasing agricultural land use. Their technique is detailed in a study published Thursday in the journal Science. The purpose of the research, which was funded by the Bill and Melinda Gates Foundation, is to generate more sustainable farming practices for subsistence farmers in developing countries in the future. “[Plants] are not optimized for productivity and that is the key change in thinking that you need to have,” study co-lead author Johannes Kromdijk of the University of Illinois at Urbana-Champaign tells The Christian Science Monitor. “They are optimized to reproduce, but they are not necessarily optimized to reproduce as much as possible.” But to meet the demand for food, humans need plants to produce more. This is not the first time that scientists have worked to genetically improve crop yield, says Donald Ort, a professor of plant biology also at the University of Illinois at Urbana-Champaign who was not an author on the paper. During the green revolution of the 1950s through the 1980s, breeding research improved plants’ ability to intercept radiation. Today the best crops absorb around 90 percent of the light available during the growing season. Scientists also worked to improve the harvest index – the percentage of the above-ground dry matter that contributes to food – so that now 50 to 60 percent of that biomass is seeds. “The other determinant of yield potential is photosynthetic energy conversion efficiency – the efficiency with which the plant can take that intercepted light and turn it into biomass,” Dr. Ort tells the Monitor. “During the first green revolution that was not improved at all, so it seems like if there is going to be a second green revolution it would have to be driven by improving that process.” And photosynthesis is frequently an inefficient process. Direct sunlight can damage important molecules within the plant, so when the sun is particularly bright the plant converts excess absorbed light energy into heat in a process called photoprotection – a kind of sunblock that protects the plant from sunburn. However, as a result, the efficiency of the plant's photosynthesis process drops significantly. While it only takes a few minutes for the plant to react to direct sunlight, it takes hours for it to return to a normal rate of photosynthesis, and in that time the plant loses out on an estimated 20 percent of potential growth and crop yield.
This lag time would not be a problem in the wild, where the only evolutionary pressure is to reproduce enough to pass on genes, but it does raise concerns when the world’s food supply may need to double in 30 to 40 years. Using tobacco plants, because their genes can be easily modified, the researchers tested what would happen if they sped up the photosynthetic recovery time. By changing just three genes involved in photoprotection, the plants' leaves, stems, and roots all got larger, says Krishna Niyogi, a co-author on the study and professor of plant and microbial biology at University of California Berkeley and researcher at the Lawrence Berkeley National Laboratory. After being tested in a lab and a greenhouse, the plants were transplanted to a field, where in 22 days they weighed 14 to 20 percent more than unmodified plants. “None of us care at all about tobacco,” Dr. Kromdijk says. “It is just a health-damaging crop that we really don’t need to improve, but we used it as a quick proof of concept and now we can start with the real interesting stuff by trying to put this technology to use to improve food crops.” Eventually, the researchers will test the same process on such crops as cassava, rice, and cowpea, which will help farmers in the developing world, particularly across sub-saharan Africa and southeast Asia. If the concept applies to these crops as well, it would allow for more food to be produced without increasing the amount of land dedicated to agricultural production, which is already at its limit in many parts of the world. There will likely be some social push back from the vocal anti-GMO movement, but Dr. Niyogi says the actual gene manipulation in this highly productive process is fairly minimal. “In this first proof of concept, we did take the genes of one plant and put them into another plant, but all plants have these three genes,” Niyogi tells the Monitor. “So in the long run, what we can do is just manipulate the genes that are already in the plant and not introduce the genes from another organism.” And ending hunger, Ort says, also makes for a pretty convincing argument. “There is the belief that we need to develop these capabilities so that they exist at the time when push comes to shove and we simply don't have enough food and then maybe some of these things get reassessed at that time,” Ort says.
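The roughly 20 percent figure quoted above comes from the mismatch between how quickly photoprotection switches on (minutes) and how slowly it relaxes (up to hours) as leaves move between sun and shade. A toy calculation makes that logic concrete: assume photosynthetic efficiency drops in bright light and then recovers exponentially once the light dims, and integrate the shortfall over a shaded interval. The time constant, efficiency drop, and shade duration below are illustrative assumptions, not the values or canopy model used in the study.

```python
# Toy model of assimilation lost to slow photoprotection (NPQ) relaxation.
# All parameters are illustrative assumptions, not values from the Science paper.
import numpy as np

dt = 1.0                 # time step, minutes
shade_minutes = 120      # length of a shaded interval following full sun (assumed)
tau_recovery = 30.0      # assumed exponential recovery time constant, minutes
efficiency_drop = 0.35   # assumed fractional efficiency loss carried over from full sun

t = np.arange(0, shade_minutes, dt)
# Efficiency relative to a fully relaxed leaf: starts depressed, recovers as exp(-t/tau).
efficiency = 1.0 - efficiency_drop * np.exp(-t / tau_recovery)

potential = t.size * dt            # what a fully relaxed leaf could fix in that time
actual = efficiency.sum() * dt     # what the slowly recovering leaf actually fixes
loss_fraction = 1.0 - actual / potential

print(f"fraction of potential assimilation lost during the shaded period: {loss_fraction:.1%}")
# Shrinking tau_recovery, which is what the engineered plants effectively do,
# shrinks this shaded-period loss.
```

Summed over a whole day of fluctuating light across a crop canopy, many such shaded intervals are what produce losses on the order the researchers describe, which is why speeding up the relaxation step pays off in biomass.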


News Article | November 18, 2016
Site: www.csmonitor.com

To feed the growing global population, food production will have to increase significantly in the next 30 years. Scientists may have just come up with one possible solution: more efficient photosyntesis. Three tobacco plants transformed to improve photoprotection recovery are visibily larger than the normal (wildtype) tobacco plant (r.). Scientists have altered a plant’s genes to make it use sunlight more efficiently, a breakthrough that may eventually dramatically increase the amount of food grown. About 7 billion people live on Earth today and that number is expected to rise to more than 9 billion by 2050. In order to meet that growing population's demand for food, the United Nations Food and Agriculture Organization has estimated that we will need to increase global food production by at least 70 percent. To make that happen, scientists have come up with a gene editing method that could make the process of photosynthesis more efficient in plants, thus increasing crop yield without increasing agricultural land use. Their technique is detailed in a study published Thursday in the journal Science. The purpose of the research, which was funded by the Bill and Melinda Gates Foundation, is to generate more sustainable farming practices for subsistence farmers in developing countries in the future. “[Plants] are not optimized for productivity and that is the key change in thinking that you need to have,” study co-lead author Johannes Kromdijk of the University of Illinois at Urbana-Champaign, tells The Christian Science Monitor. “They are optimized to reproduce, but they are not necessarily optimized to reproduce as much as possible.” But to meet the demand for food, humans need plants to produce more. This is not the first time that scientists have worked to genetically improve crop yield, says Donald Ort, a professor of plant biology also at the University of Illinois at Urbana-Champaign who was not an author on the paper. During the green revolution of the 1950s through the 1980s, breeding research improved plants’ ability to intercept radiation. Today the best crops absorb around 90 percent of the light available during the growing season. Scientists also worked to improve the harvest index – the percentage of the above ground dry matter that contributes to food – so that now 50 to 60 percent of that biomass is seeds. “The other determinant of yield potential is photosynthetic energy conversion efficiency – the efficiency with which the plant can take that intercepted light and turn it into biomass,” Dr. Ort tells the Monitor. “During the first green revolution that was not improved at all, so it seems like if there is gong to be a second green revolution it would have to be driven by improving that process.” And photosynthesis is frequently an inefficient process. Direct sunlight can damage important molecules within the plant, so when the sun is particularly bright the plant converts photons into heat in a process called photoprotection – a kind of sunblock that protects the plant from sunburn. However, as a result, the efficiency of the plant's photosynthesis process drops significantly. While it only takes a few minutes for the plant to react to direct sunlight, it takes hours for it to return to a normal rate of photosynthesis, and in that time the plant loses out on an estimated 20 percent of potential growth and crop yield. 
This lag time would not be a problem in the wild, where the only evolutionary pressure is to reproduce enough to pass on genes, but it does raise concerns when the world’s food supply may need to double in 30 to 40 years. Using tobacco plants, because their genes can be easily modified, the researchers tested what would happen if they sped up the photosynthetic recovery time. By changing just three genes involved in photoprotection, the plants' leaves, stems, and roots all got larger, says Krishna Niyogi, a co-author on the study, professor of plant and microbial biology at the University of California, Berkeley, and researcher at the Lawrence Berkeley National Laboratory. After being tested in a lab and a greenhouse, the plants were transplanted to a field, where in 22 days they weighed 14 to 20 percent more than unmodified plants. “None of us care at all about tobacco,” Dr. Kromdijk says. “It is just a health-damaging crop that we really don’t need to improve, but we used it as a quick proof of concept and now we can start with the real interesting stuff by trying to put this technology to use to improve food crops.” Eventually, the researchers will test the same process on crops such as cassava, rice, and cowpea, which could help farmers in the developing world, particularly across sub-Saharan Africa and Southeast Asia. If the concept applies to these crops as well, it would allow more food to be produced without increasing the amount of land dedicated to agriculture, which is already at its limit in many parts of the world. There will likely be some social pushback from the vocal anti-GMO movement, but Dr. Niyogi says the actual gene manipulation behind this highly productive process is fairly minimal. “In this first proof of concept, we did take the genes of one plant and put them into another plant, but all plants have these three genes,” Niyogi tells the Monitor. “So in the long run, what we can do is just manipulate the genes that are already in the plant and not introduce the genes from another organism.” And ending hunger, Ort says, also makes for a pretty convincing argument. “There is the belief that we need to develop these capabilities so that they exist at the time when push comes to shove and we simply don't have enough food and then maybe some of these things get reassessed at that time,” Ort says.


Weaving a new story for COFs and MOFs: First materials to be woven at the atomic and molecular levels created at Berkeley Abstract: There are many different ways to make nanomaterials, but weaving, the oldest and most enduring method of making fabrics, has not been one of them - until now. An international collaboration led by scientists at the U.S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley has woven the first three-dimensional covalent organic frameworks (COFs) from helical organic threads. The woven COFs display significant advantages in structural flexibility, resiliency and reversibility over previous COFs - materials that are highly prized for their potential to capture and store carbon dioxide and then convert it into valuable chemical products. "We have taken the art of weaving into the atomic and molecular level, giving us a powerful new way of manipulating matter with incredible precision in order to achieve unique and valuable mechanical properties," says Omar Yaghi, a chemist who holds joint appointments with Berkeley Lab's Materials Sciences Division and UC Berkeley's Chemistry Department, and is the co-director of the Kavli Energy NanoScience Institute (Kavli-ENSI). "Weaving in chemistry has long been sought after and is unknown in biology," Yaghi says. "However, we have found a way of weaving organic threads that enables us to design and make complex two- and three-dimensional organic extended structures." Yaghi is the corresponding author of a paper in Science reporting this new technique. The paper is titled "Weaving of organic threads into a crystalline covalent organic framework." The lead authors are Yuzhong Liu, Yanhang Ma and Yingbo Zhao. Other co-authors are Xixi Sun, Felipe Gándara, Hiroyasu Furukawa, Zheng Liu, Hanyu Zhu, Chenhui Zhu, Kazutomo Suenaga, Peter Oleynikov, Ahmad Alshammari, Xiang Zhang and Osamu Terasaki. COFs and their cousin materials, metal-organic frameworks (MOFs), are porous three-dimensional crystals with extraordinarily large internal surface areas that can absorb and store enormous quantities of targeted molecules. Invented by Yaghi, COFs and MOFs consist of molecules (organics for COFs and metal-organics for MOFs) that are stitched into large, extended, netlike frameworks whose structures are held together by strong chemical bonds. Such frameworks show great promise for, among other applications, carbon sequestration. Through another technique developed by Yaghi, called "reticular chemistry," these frameworks can also be embedded with catalysts to carry out desired functions: for example, reducing carbon dioxide into carbon monoxide, which serves as a primary building block for a wide range of chemical products including fuels, pharmaceuticals and plastics. In this latest study, Yaghi and his collaborators used a copper(I) complex as a template for bringing threads of the organic compound phenanthroline into a woven pattern to produce an imine-based framework they dubbed COF-505. Through X-ray and electron diffraction characterizations, the researchers discovered that the copper(I) ions can be reversibly removed from COF-505 or restored to it without changing its woven structure. Demetalation of the COF resulted in a tenfold increase in its elasticity, and remetalation restored the COF to its original stiffness. 
"That our system can switch between two states of elasticity reversibly by a simple operation, the first such demonstration in an extended chemical structure, means that cycling between these states can be done repeatedly without degrading or altering the structure," Yaghi says. "Based on these results, it is easy to imagine the creation of molecular cloths that combine unusual resiliency, strength, flexibility and chemical variability in one material." Yaghi says that MOFs can also be woven as can all structures based on netlike frameworks. In addition, these woven structures can also be made as nanoparticles or polymers, which means they can be fabricated into thin films and electronic devices. "Our weaving technique allows long threads of covalently linked molecules to cross at regular intervals," Yaghi says. "These crossings serve as points of registry, so that the threads have many degrees of freedom to move away from and back to such points without collapsing the overall structure, a boon to making materials with exceptional mechanical properties and dynamics." ### This research was primarily supported by BASF (Germany) and King Abdulaziz City for Science and Technology (KACST). About Berkeley Lab Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more information, please click If you have a comment, please us. Issuers of news releases, not 7th Wave, Inc. or Nanotechnology Now, are solely responsible for the accuracy of the content.


Researchers at MIT have developed a practical, physically based way of treating the surface of materials called perovskite oxides to make them more durable and improve their performance. These materials are promising candidates to serve as electrodes in energy-conversion devices such as fuel cells and electrolyzers. This surface treatment could solve one of the major challenges that has hindered widespread deployment of fuel cell technology, which, when operated reversibly, can offer a promising alternative to batteries for renewable-energy storage. The new findings are being reported today in the journal Nature Materials, in a paper by MIT Associate Professor Bilge Yildiz of the departments of Nuclear Science and Engineering and Materials Science and Engineering, former MIT postdoc Nikolai Tsvetkov, graduate students Qiyang Lu and Lixin Sun, and Ethan Crumlin of the Lawrence Berkeley National Laboratory. Perovskites have become a bustling area of research in recent years, with potential applications in areas ranging from fuel cell electrodes, to nonvolatile memory chips for computers, to solar thermochemical fuel production through the splitting of water and carbon dioxide. They are a broad class of oxide materials, and many teams are exploring variations of perovskite composition in search of the most promising candidates for different uses. But the relative instability of the material’s surface ove