Berkeley, CA, United States

The Lawrence Berkeley National Laboratory, also known as "Berkeley Lab", is a United States national laboratory located in the Berkeley Hills near Berkeley, California, that conducts unclassified scientific research on behalf of the United States Department of Energy. It is managed and operated by the University of California and overlooks the university's oldest campus, the University of California, Berkeley. Plans announced by the university in 2012 called for a second Berkeley Lab campus to be built on land it owns nearby at the Richmond Field Station.



Kusoglu A.,Lawrence Berkeley National Laboratory | Weber A.Z.,Lawrence Berkeley National Laboratory
Chemical Reviews | Year: 2017

In this comprehensive review, recent progress and developments on perfluorinated sulfonic-acid (PFSA) membranes are summarized across many key topics. Although investigated for decades, the complex behavior of PFSA ionomers, along with their key role in many emerging technologies, has presented significant scientific challenges but has also helped create a unique cross-disciplinary research field to overcome them. Research on PFSAs, especially when considered alongside their applications, is at the forefront of bridging electrochemistry and polymer physics, and has also driven the development of state-of-the-art in situ characterization techniques as well as multiphysics computational models. Topics reviewed stem from correlating the various physical (e.g., mechanical) and transport properties with morphology and structure across time and length scales. In addition, topics of recent interest such as structure/transport correlations and modeling, composite PFSA membranes, degradation phenomena, and PFSA thin films are presented. Throughout, the impact of PFSA chemistry and side-chain structure is also discussed to present a broader perspective. © 2017 American Chemical Society.


Cate J.H.D.,University of California at Berkeley | Cate J.H.D.,Lawrence Berkeley National Laboratory
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2017

Translation in eukaryotes is highly regulated during initiation, a process impacted by numerous readouts of a cell’s state. There are many cases in which cellular messenger RNAs likely do not follow the canonical ‘scanning’ mechanism of translation initiation, but the molecular mechanisms underlying these pathways are still being uncovered. Some RNA viruses such as the hepatitis C virus use highly structured RNA elements termed internal ribosome entry sites (IRESs) that commandeer eukaryotic translation initiation, by using specific interactions with the general eukaryotic translation initiation factor eIF3. Here, I present evidence that, in addition to its general role in translation, eIF3 in humans and likely in all multicellular eukaryotes also acts as a translational activator or repressor by binding RNA structures in the 5′-untranslated regions of specific mRNAs, analogous to the role of the mediator complex in transcription. Furthermore, eIF3 in multicellular eukaryotes also harbours a 5′ 7-methylguanosine cap-binding subunit—eIF3d—which replaces the general cap-binding initiation factor eIF4E in the translation of select mRNAs. Based on results from cell biological, biochemical and structural studies of eIF3, it is likely that human translation initiation proceeds through dozens of different molecular pathways, the vast majority of which remain to be explored. This article is part of the themed issue ‘Perspectives on the ribosome’. © 2017 The Authors.


Williams I.N.,Lawrence Berkeley National Laboratory | Pierrehumbert R.T.,University of Oxford
Geophysical Research Letters | Year: 2017

We present a method to attribute cloud radiative feedbacks to convective processes, using subcloud layer buoyancy as a diagnostic of stable and deep convective regimes. Applying this approach to tropical remote sensing measurements over years 2000-2016 shows that an inferred negative short-term cloud feedback from deep convection was nearly offset by a positive cloud feedback from stable regimes. The net cloud feedback was within statistical uncertainty of the National Center for Atmospheric Research Community Atmosphere Model (CAM5) with historical forcings, with discrepancies in the partitioning of the cloud feedback into convective regimes. Compensation between high-cloud responses to tropics-wide warming in stable and unstable regimes resulted in smaller net changes in high-cloud fraction with warming. In addition, deep convection and associated high clouds set in at warmer temperatures in response to warming, as a consequence of nearly invariant subcloud buoyancy. This invariance further constrained the magnitude of cloud radiative feedbacks and is consistent with climate model projections. © 2017. American Geophysical Union. All Rights Reserved.


Fuller F.D.,Lawrence Berkeley National Laboratory | Bollinger J.M. Jr. | Phillips G.N. Jr.
Nature Methods | Year: 2017

X-ray crystallography at X-ray free-electron laser sources is a powerful method for studying macromolecules at biologically relevant temperatures. Moreover, when combined with complementary techniques like X-ray emission spectroscopy, both global structures and chemical properties of metalloenzymes can be obtained concurrently, providing insights into the interplay between the protein structure and dynamics and the chemistry at an active site. The implementation of such a multimodal approach can be compromised by conflicting requirements to optimize each individual method. In particular, the method used for sample delivery greatly affects the data quality. We present here a robust way of delivering controlled sample amounts on demand using acoustic droplet ejection coupled with a conveyor belt drive that is optimized for crystallography and spectroscopy measurements of photochemical and chemical reactions over a wide range of time scales. Studies with photosystem II, the phytochrome photoreceptor, and ribonucleotide reductase R2 illustrate the power and versatility of this method. © 2017 Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved.


Green M.A.,Lawrence Berkeley National Laboratory
IEEE Transactions on Applied Superconductivity | Year: 2017

This paper is about the first 25 years of the author's career. The author started working with superconducting (S/C) magnets in 1965 and has been in the field for more than 50 years. He worked on S/C magnet refrigeration and with S/C magnets from 1966 to 1971. He made the first measurement of the higher multipoles generated by the circulating currents within the superconductor filaments in the first Berkeley S/C dipole magnet, built in 1969. He also performed cooling experiments with supercritical and two-phase helium on a small S/C magnet using a refrigerator. In the spring of 1971, the author joined the staff of the Institute for Experimental Nuclear Physics (Institut für Experimentelle Kernphysik) in Karlsruhe, Germany. He was also part of the GESSS (Group for European Superconducting Synchrotron Studies) collaboration between the European Organization for Nuclear Research (CERN), Kernforschungszentrum Karlsruhe, Saclay, France, and RAL, U.K. The German S/C dipole is described, and his work on S/C synchrotrons is discussed. In the 1970s, he did extensive work on quench protection of high-current-density S/C magnets. This work led to a number of insights that are shared here. © 2002-2011 IEEE.


Green M.A.,Lawrence Berkeley National Laboratory
IEEE Transactions on Applied Superconductivity | Year: 2017

This paper describes the development of detector magnets from the mid-1960s to the LHC detector magnets. The discovery of cryogenic stability in March 1965 made large detector magnets possible, and a number of bubble chamber magnets were built as a result. In the 1970s, the quest for thin magnets started. This led to the development of quench protection through quench-back and of indirect cooling with two-phase helium in tubes. The late 1970s and 1980s saw the development of conductors with ultrapure aluminum as a stabilizer, which led to large coils wound inside a strong aluminum mandrel. The LHC era saw the development of open magnets that return their own flux without iron. This paper describes the development of detector magnets that have been used in high-energy physics. © 2002-2011 IEEE.


Van Tilborg J.,Lawrence Berkeley National Laboratory
Optics InfoBase Conference Papers | Year: 2016

Ongoing efforts towards a compact free electron laser based on laser-plasma acceleration (LPA) are presented. Recent results [van Tilborg (PRL 2015) and Steinke (Nature, accepted)] demonstrate compact transport and support LPA-based FEL design and simulations. © OSA 2016.


Belkacem A.,Lawrence Berkeley National Laboratory
Optics InfoBase Conference Papers | Year: 2016

Sources of intense femtosecond pulses at short wavelengths, such as high-intensity high-harmonic sources, are intense enough to drive a sizable fraction (>10%) of a sample to a well-defined excited state or to a coherent superposition of states. These dynamically evolving excited states can then be probed with pump-probe, multi-color or non-linear photoionization techniques. In general, electronic excited states are short-lived. The most important tool needed to carry out this task is a high-harmonic source optimized for producing very intense short pulses of EUV photons in the 5-60 eV energy range. We will present the application of the intense EUV pulses to study the femtosecond dynamics of molecules such as methanol, carbon dioxide and ethylene in the gas phase, as well as light-induced self-organization of nanoparticles at interfaces. We use a variety of techniques, including time-of-flight, electron spectroscopy and velocity map imaging, to unravel the ultrafast chemical dynamics. © OSA 2016.


Byrd J.,Lawrence Berkeley National Laboratory
Optics InfoBase Conference Papers | Year: 2016

Recent technological developments promise to enable compact high brightness storage rings. We review these developments in the context of a nominal 500 MeV storage ring as high brightness light sources in the EUV, soft/hard x-ray and gamma regimes. © OSA 2016.


Anders A.,Lawrence Berkeley National Laboratory
Journal of Applied Physics | Year: 2017

High Power Impulse Magnetron Sputtering (HiPIMS) is a coating technology that combines magnetron sputtering with pulsed power concepts. By applying power in pulses of high amplitude and a relatively low duty cycle, large fractions of sputtered atoms and near-target gases are ionized. In contrast to conventional magnetron sputtering, HiPIMS is characterized by self-sputtering or repeated gas recycling for high and low sputter yield materials, respectively, and both for most intermediate materials. The dense plasma in front of the target has the dual function of sustaining the discharge and providing plasma-assistance to film growth, affecting the microstructure of growing films. Many technologically interesting thin films are compound films, which are composed of one or more metals and a reactive gas, most often oxygen or nitrogen. When reactive gas is added, non-trivial consequences arise for the system because the target may become “poisoned,” i.e., a compound layer forms on the target surface affecting the sputtering yield and the yield of secondary electron emission and thereby all other parameters. It is emphasized that the target state depends not only on the reactive gas' partial pressure (balanced via gas flow and pumping) but also on the ion flux to the target, which can be controlled by pulse parameters. This is a critical technological opportunity for reactive HiPIMS (R-HiPIMS). The scope of this tutorial is focused on plasma processes and mechanisms of operation and only briefly touches upon film properties. It introduces R-HiPIMS in a systematic, step-by-step approach by covering sputtering, magnetron sputtering, reactive magnetron sputtering, pulsed reactive magnetron sputtering, HiPIMS, and finally R-HiPIMS. The tutorial is concluded by considering variations of R-HiPIMS known as modulated pulsed power magnetron sputtering and deep-oscillation magnetron sputtering and combinations of R-HiPIMS with superimposed dc magnetron sputtering. © 2017 Author(s).
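To make "high amplitude and a relatively low duty cycle" concrete, the short sketch below computes the time-averaged target power density from an assumed peak power density, pulse length, and repetition rate; the numerical values are illustrative order-of-magnitude assumptions, not parameters taken from this tutorial.

```python
# Illustrative only: typical-order-of-magnitude HiPIMS pulse parameters are
# assumed here to show how a high peak power combines with a low duty cycle
# to give a modest average power on the target, comparable to conventional sputtering.
peak_power_density = 1000.0   # W/cm^2 on the target during the pulse (assumed)
pulse_length = 100e-6         # s (assumed)
repetition_rate = 100.0       # Hz (assumed)

duty_cycle = pulse_length * repetition_rate          # fraction of time the pulse is on
avg_power_density = peak_power_density * duty_cycle  # time-averaged thermal load on the target

print(f"duty cycle: {duty_cycle:.1%}")                            # -> 1.0%
print(f"average power density: {avg_power_density:.0f} W/cm^2")   # -> 10 W/cm^2
```

With these assumed values, the target sees kilowatt-per-square-centimeter peak loading but only about 10 W/cm^2 on average, which is why the pulsing keeps the target thermally manageable while still producing a dense, highly ionized plasma during each pulse.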


Lomnitz M.R.,Lawrence Berkeley National Laboratory
Journal of Physics: Conference Series | Year: 2017

Due to their large masses, heavy quarks are predominantly produced through initial hard scatterings in heavy-ion collisions. As such, they experience the entire evolution of the hot and dense medium created in such collisions and are expected to thermalize much more slowly than light flavor quarks. For instance, the azimuthal anisotropy of charm quarks with respect to the reaction plane over a broad momentum range can provide insights into the degree of thermalization and the bulk properties of the system. Specifically at low transverse momenta we can examine the bulk properties in the strongly coupled regime. In this talk we present the STAR measurement of elliptic (v2) and triangular (v3) flow of D0 mesons in Au+Au collisions at √s_NN = 200 GeV obtained from the first year of physics running with the new STAR Heavy Flavor Tracker. Comparison with the azimuthal anisotropy of other particle species and a series of model calculations will be shown, and the charm quark dynamics in the sQGP medium will be discussed. © Published under licence by IOP Publishing Ltd.
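For readers outside heavy-ion physics, the flow coefficients quoted above are the Fourier coefficients of the particle yield's azimuthal distribution with respect to the event plane; a standard textbook form (not specific to this STAR analysis) is:

```latex
\frac{dN}{d\varphi} \;\propto\; 1 + 2\sum_{n\ge 1} v_n \cos\!\bigl[n\,(\varphi - \Psi_n)\bigr],
\qquad v_2:\ \text{elliptic flow},\quad v_3:\ \text{triangular flow},
```

where \varphi is the particle azimuthal angle and \Psi_n is the n-th order event-plane angle.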


Gregorich K.,Lawrence Berkeley National Laboratory
EPJ Web of Conferences | Year: 2016

Proton number, Z, and mass number, A, assignments for newly discovered heavy element nuclides have historically been made by observing α decay to a daughter with well-established Z and A, and then observing the well-known decay of that daughter. For all of the new superheavy element isotopes observed in 48Ca irradiations of actinide targets, this correlation technique has not been possible, because the α-decay chains end in spontaneous fission of previously unknown isotopes. Consequently, Z and A assignments have been made by less-direct means. The superheavy element Z and A assignment methods are summarized, and possibilities for how they may be incorrect are explored. While it is highly likely that most of the superheavy element Z and A assignments are correct, there is a real need for a direct proof. © The Authors, published by EDP Sciences.


Gates J.M.,Lawrence Berkeley National Laboratory
EPJ Web of Conferences | Year: 2016

The identification within the last 15 years of six new elements with proton numbers Z = 113-118 has transformed the heavy element field. However, one key piece of information on these nuclei remains unmeasured: their proton numbers, Z, and mass numbers, A. At Lawrence Berkeley National Laboratory, the heavy element group has undertaken a program to perform experiments aimed at measuring Z and A for these new elements. Here, an overview of recent experiments aimed at identifying the Z of superheavy elements (SHE) is presented, along with the prospects for Z and A identification experiments at LBNL. © The Authors, published by EDP Sciences.


Clark R.M.,Lawrence Berkeley National Laboratory
EPJ Web of Conferences | Year: 2016

In this contribution I discuss recent results on the decay of excited states, via electromagnetic transitions (γ-ray and conversion electrons), α decay, and fission, and discuss what these studies reveal in terms of the stability of the heaviest elements. © The Authors, published by EDP Sciences.


Frei H.,Lawrence Berkeley National Laboratory
Journal of Energy Chemistry | Year: 2017

Water oxidation, as a mandatory reaction of solar fuels conversion systems, requires the use of light absorbers with electronic properties that are well matched with those of the multi-electron catalyst in order to achieve high efficiency. Molecular light absorbers offer flexibility in fine tuning of orbital energetics, and metal oxide nanoparticles have emerged as robust oxygen evolving catalysts. Hence, these material choices offer a promising approach for the development of photocatalytic systems for water oxidation. However, efficient charge transfer coupling of molecular light absorbers and metal oxide nanoparticle catalysts has proven a challenge. Recent new approaches toward the efficient coupling of these components based on synthetic design improvements combined with direct spectroscopic observation and kinetic evaluation of charge transfer processes are discussed. © 2017 Science Press.


News Article | April 14, 2017
Site: www.scientificcomputing.com

Cryo-electron microscopy (cryo-EM)—which enables the visualization of viruses, proteins, and other biological structures at the molecular level—is a critical tool used to advance biochemical knowledge. Now Lawrence Berkeley National Laboratory (Berkeley Lab) researchers have extended cryo-EM’s impact further by developing a new computational algorithm that was instrumental in constructing a 3-D atomic-scale model of bacteriophage P22 for the first time. Over 20,000 two-dimensional cryo-EM images of bacteriophage P22 (also known as the P22 virus that infects the common bacterium Salmonella) from Baylor College of Medicine were used to make the model. The results were published by researchers from Baylor College of Medicine, Massachusetts Institute of Technology, Purdue University and Berkeley Lab in the Proceedings of the National Academy of Sciences earlier in March.

“This is a great example of how to exploit electron microscopy technology and combine it with new computational methods to determine a bacteriophage’s structure,” said Paul Adams, Berkeley Lab’s Molecular Biophysics & Integrated Bioimaging division director and a co-author of the paper. “We developed the algorithms—the computational code—to optimize the atomic model so that it best fit the experimental data.” Pavel Afonine, a Berkeley Lab computational research scientist and paper co-author, took the lead in developing the algorithm using Phenix, a software suite used traditionally in X-ray crystallography for determining macromolecular structures.

The successful rendering of bacteriophage P22’s 3-D atomic-scale model allows researchers to peek inside the virus’ protein coats at the atomic scale. It is the culmination of several years of work that previously had enabled Baylor College researchers to trace out most of the protein’s backbone, but not the fine details, according to Corey Hryc, co-first author and a graduate student of Baylor biochemistry professor Wah Chiu.

“Thanks to this exquisite structural detail, we have determined the protein chemistry of the P22 virus,” Chiu said. “I think it is important that we provide detailed annotations with the structure so other researchers can use it for their future experiments,” he added. Chiu’s lab has been using cryo-EM and computer reconstruction techniques to build 3-D molecular structures for almost 30 years.

And the findings could have valuable biological implications as well. Thanks to the 3-D atomic-scale model, it’s now “possible to see the interactions between the pieces making up the P22 virus, which are critical to making it stable,” Adams said. This helps researchers figure out how to make chemicals that can bind to certain proteins. Adams underscores that the ability to understand the configuration of atoms in molecular space can be used to generate new insights into drug design and development.
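The optimization Adams describes—adjusting an atomic model until it best fits the experimental data—can be illustrated with a toy least-squares refinement. The sketch below is purely conceptual: it fits made-up 1-D "density" data with a sum of Gaussians and is not the Phenix code used in the study.

```python
# Conceptual sketch only: a toy least-squares refinement in the spirit of fitting
# model parameters against experimental data. Synthetic 1-D data stands in for a
# cryo-EM map; this is NOT the Phenix algorithm described in the article.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
true_centers = np.array([2.5, 5.0, 7.5])          # hypothetical "true" atom positions

def model_density(centers, width=0.4):
    # Sum of Gaussians as a crude stand-in for atomic contributions to a density map.
    return sum(np.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers)

observed = model_density(true_centers) + 0.05 * rng.standard_normal(x.size)

def objective(centers):
    # Least-squares misfit between the modeled and "observed" density.
    return np.sum((model_density(centers) - observed) ** 2)

# Start from perturbed positions and refine them against the data.
start = true_centers + rng.normal(scale=0.3, size=3)
result = minimize(objective, start, method="L-BFGS-B")
print("refined centers:", np.round(result.x, 3))
```

Real model-to-map refinement works on thousands of atomic coordinates in 3-D with stereochemical restraints, but the basic loop—propose model, score against data, update—is the same idea sketched here.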


California’s push to make aggregated distributed energy resources into transmission grid market players is the most developed in the country. But it’s still about a year from going live in a big way. It’s also facing some key challenges, like getting approval -- or at least “concurrence” -- from the utilities that run the distribution grids where these newly minted DER providers will carry out their megawatt-scale energy shifting acts. And then there’s the question of whether distributed energy resources (DERs) will be worth more at wholesale than they are under California’s new distribution grid values -- or whether those values can be stacked together. All of this uncertainty hasn’t stopped companies from applying for the job. Lorenzo Kristov, market and infrastructure principal at state grid operator CAISO, said at last week's California’s Distributed Energy Future 2017 conference in San Francisco that several companies have already submitted applications to become DER providers under the new program. “I’d imagine they’re in the process of developing their actual resources they’ll be providing in the market,” he said. Kristov didn’t name the companies involved. But a November 2016 report from CAISO to the Federal Energy Regulatory Commission does name four companies that have signed up for a “pro forma distributed energy resource provider agreement” -- the first step in becoming a distributed energy resource provider, or DERP. One was utility San Diego Gas & Electric, which proposed a 3- to 4-megawatt aggregation of energy storage sites across its territory -- the largest of the four proposed projects. SDG&E proposed a 2018 start date. Another was Apparent Energy, which said it was ready to launch in early 2017, working with Silicon Valley Power and Palo Alto’s municipal utility on two aggregations of 1 to 1.5 megawatts each. But a December report from Silicon Valley Power noted that Apparent “could not make a business case in SVP territory” at that time, although “as DG resources potentially grow and as the CAISO markets evolve, there could be potential.” A third was Galt Power, a participant in other North American transmission markets, which proposed working with energy developer Customized Energy Solutions. The companies are “in discussions with several entities seeking to aggregate renewables and small-scale storage.” Finally, there was Olivine, a scheduling coordinator that serves as an intermediary between CAISO and DER providers, which is “working with a number of clients, including municipalities, community choice aggregators, and resource owners.” Because every would-be DERP has to work through a scheduling coordinator, it’s hard to know which of Olivine’s clients might be involved in the company’s application. Olivine is also involved in the Demand Response Auction Mechanism, or DRAM, pilot program, which has so far put together more than 100 megawatts of DER resources from companies including Stem, Advanced Microgrid Solutions, EnergyHub, Ohmconnect and AutoGrid. CAISO’s report notes that Olivine is “considering the addition of distributed energy resources and the potential conversion of storage and electric vehicle assets currently participating as demand response resources,” indicating that some of these DRAM clients could also be eyeing their potential as DERPs. CAISO just published its “new resource implementation process” on its DERP website this week, opening up the potential for more applications. 
Last summer, after years of effort, CAISO got federal approval for its new distributed energy resource provider tariff. It allows for DERPs to submit aggregations of between 500 kilowatts and 10 megawatts that can meet the requirements for its day-ahead and hourly energy markets, or its faster-responding ancillary services markets. Since then, California’s efforts have helped jump-start bigger changes. In October, FERC issued a ruling that opened the option of aggregated DERs for the rest of the country’s independent system operators (ISOs) or regional transmission organizations (RTOs), opening a vast new potential market. That’s only potential, though. It can take years for FERC orders like these to make their way through grid operator technical working groups and stakeholder proceedings and into real-world markets. No other region is as far along as California right now, although mid-Atlantic grid operator PJM has opened a discussion, or a “problem statement” in its terminology, and Texas grid operator ERCOT, which is outside FERC’s jurisdiction, has held an on-again, off-again discussion on the subject. Applying for DERP status is only the first step in a multi-stage process, Kristov noted. CAISO’s recently released “new resource implementation process” includes a 43-item list of requirements involving interconnection, metering, telemetry, topology and other such technical details. Once those are completed, it will take months more to process and verify each aggregation, he said. In the meantime, CAISO is busy working with the California Public Utilities Commission and utilities in the state on another challenge -- getting more visibility between transmission and distribution grids. “The ISO only sees the system down to the transmission-distribution interface,” or the transmission substations that connect the state’s high-voltage grid with the distribution grid. “Even if we have telemetry to some of the devices, we don’t have the distribution system data," said Kristov. That can cause problems in two directions, Kristov said. For the distribution utility, there’s the prospect of half a megawatt or more of load suddenly dropping away or coming on-line under CAISO dispatch, causing local grid instability. FERC’s order this fall specified that distribution utilities have the right to review the composition of these DER aggregations. To solve that problem right now, CAISO requires each DERP to “obtain concurrence from the applicable utility distribution company (UDC) or metered sub-system (MSS)” to alleviate concerns, involving a utility-by-utility process that takes up to 30 business days. In the other direction, CAISO needs to worry about distribution grid topologies, or states of network interconnection, he said. California’s transmission system is pretty stable topologically -- it doesn’t see major switches and shifts in the flow of power. “But in distribution, they’re having changes in topology all the time, they’re switching circuits,” he said, and “that can affect whether a DER can respond to a dispatch or not.” Both of these problems could be addressed by better visibility and data-sharing between utilities and CAISO, he noted. 
“The ISO could provide those dispatch instructions to the distribution company, and the distribution company could know...‘Oh, that’s where it is; it’s going to happen 5 minutes from now -- will that cause a problem for us?’” California’s utilities are arguably ahead of many in the country in terms of visibility into their distribution grids, with widely deployed smart meters and multiple pilot projects integrating DERs into the software and control systems that run their low-voltage networks. But they’ve still got a long way to go, as evidenced by the multibillion-dollar grid modernization investments utilities are asking the CPUC to approve for the coming years. The state’s big investor-owned utilities are also mapping out their distribution grids to find the value of DERs as part of their multibillion-dollar annual capital investment budgets, under the CPUC’s distribution resources plan and integration of distributed energy resources proceedings. This process will create valuable data for CAISO as well as the utilities, Kristov noted. Indeed, the value of DERs for local grid needs may well exceed the value they can realize on wholesale energy and ancillary services markets, he said. “DER substituting for distribution assets is probably more promising than DER substituting for transmission assets,” explained Kristov -- an observation backed up by a Lawrence Berkeley National Laboratory analysis of the state’s future energy needs. At the same time, CAISO does see great value in DERs that can help it manage the "duck curve" imbalances that solar power is causing on California’s grid, he said. “That problem can be solved very well at the distribution level.” But “not all the value has been clearly monetized in terms of services to be able to do that.”
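As a small illustration of the participation rule described above—aggregations between 500 kilowatts and 10 megawatts—here is a hypothetical pre-screening sketch a scheduling coordinator might run; the function and the sample portfolio are invented for illustration and are not part of any CAISO tooling.

```python
# Hypothetical pre-screen against the CAISO DERP size window described above.
# The 500 kW - 10 MW limits come from the article; everything else is made up.
MIN_KW = 500
MAX_KW = 10_000

def within_derp_window(total_kw: float) -> bool:
    """Return True if an aggregation's total capacity fits the 500 kW - 10 MW window."""
    return MIN_KW <= total_kw <= MAX_KW

# Example: a hypothetical aggregation of distributed batteries (kW each).
sites_kw = [120, 250, 400, 75, 300]
total = sum(sites_kw)
print(total, "kW ->", "eligible" if within_derp_window(total) else "outside window")
```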


Rust usually indicates neglect; it undermines the structures and tools we rely on every day, from cars to bridges and buildings. But if carefully controlled, the same process that creates rust – metal oxidation – could offer scientists ways to advance state-of-the-art battery or drug delivery technologies. To achieve such control, scientists must first understand exactly how the oxidation process works. With the help of supercomputers and synchrotrons, researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory and Temple University are illustrating the process on a finer scale than ever before. In a paper published today in Science, Argonne and Temple University researchers describe the behavior of metal nanoparticles by watching them in real time as they oxidized. By using a combination of X-ray scattering and computational simulation, the researchers were able to observe and model the changes in nanoparticle geometry as they occurred. This knowledge adds to our understanding of fundamental everyday reactions like corrosion, and builds a foundation for developing new techniques to image, manipulate and control such reactions at the atomic scale. “During oxidation of metals, there is a directional flow of material across a solid/gas or solid/liquid interface which can sometimes lead to the formation of holes in the atomic lattice. This process is known as the Kirkendall effect. If well understood, it can be used to design exotic materials at the nanoscale,” said computational scientist Subramanian Sankaranarayanan, one of the principal investigators of the study and a researcher at Argonne’s Center for Nanoscale Materials. In their study, researchers sought to understand the Kirkendall effect in small particles of iron during oxidation at the nanoscale level, specifically in the 10-nanometer range. At this scale, roughly 10,000 times thinner than a sheet of paper, iron nanoparticles exposed to an oxygen environment exhibit a unique property – they form exotic structures, such as hollowed-out nanoparticles or nanoshells, which already have been used as electrodes in battery applications and as vehicles for drug delivery in medicine. The shape, structure and distribution of the holes in these nanoshells depend on how oxidation progresses in time. “What we’ve done, through experimental and theoretical approaches, is build an understanding of the process itself — how these holes form and coalesce,” said co-author Badri Narayanan, an Argonne staff scientist who was a postdoctoral appointee at the time of study. “Without understanding these processes as they naturally occur, you can never hope to control them to produce new materials with exceptional functionality.” The Argonne study was the first time-resolved analysis to use two X-ray scattering techniques to monitor structural evolution during nanoparticle oxidation in 3-D. Small-angle X-ray scattering at the Advanced Photon Source helped characterize the void structures, while wide-angle X-ray scattering provided information on the crystalline structure of the nanoparticles; the combination of the two enabled researchers to experimentally investigate both the metal lattice and pore structure. With these experimental techniques, researchers could see how voids formed at a relatively high spatial resolution, but not one that reached the level of individual atoms. For this insight, researchers turned to the supercomputing resources at the Argonne Leadership Computing Facility. 
Computer simulations complemented the experimental observations and enabled the researchers to simulate the oxidation of iron nanoparticles atom-by-atom – meaning researchers could visualize the formation and breakage of bonds and track the movement of individual atoms. X-ray experiments and multimillion-atom reactive simulations were performed on exactly the same particle size to facilitate direct comparison of the evolving structure. “We needed the immense computing power of the Argonne Leadership Computing Facility’s 10-petaflop supercomputer, Mira, to perform these large-scale reactive simulations,” said Narayanan. “The simulations provided more detailed insight into the transformation of nanoparticles into nanoshells and the atomic-scale processes that govern their evolution.” The ability to integrate synthesis and experimental methods (X-ray imaging and nanoparticle synthesis and transmission electron microscopy performed at the Center for Nanoscale Materials) with computer modeling and simulation to build new knowledge is among the most valuable aspects of the study, the authors said. “A full 3-D evolution of morphologies of nanoparticles under real reaction conditions with sub-nanometer to atomistic resolution had not been realized until this study,” said Sankaranarayanan. “This truly exemplifies how the sum can be greater than the parts – how theory and imaging together give us information that is better than what can be obtained when these methods are used independently.” The study, titled “Quantitative 3D Evolution of Colloidal Nanoparticle Oxidation in Solution,” is published in Science. Other authors of this study include Yugang Sun, Xiaobing Zuo, Sheng Peng and Ganesh Kamath. The research was supported by Temple University. The work was completed using resources at the Advanced Photon Source, the Center for Nanoscale Materials and the Argonne Leadership Computing Facility, as well as the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory – all DOE Office of Science User Facilities. Computing time at the Argonne Leadership Computing Facility was awarded through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program and the ASCR Leadership Computing Challenge.


News Article | April 17, 2017
Site: www.chromatographytechniques.com

A solar-powered water harvester can pull water molecules directly from the air – making drinkable water available even in the desert, according to a new proof-of-concept study in the journal Science. The new technology, a collaboration between scientists at the University of California at Berkeley and the Massachusetts Institute of Technology, is based on a passive system that draws moisture into a foam-like material and condenses the water even at humidity levels as low as 20 percent, according to the scientists.

The concept is similar to an electric dehumidifier found in many homes – but this one can work without using the extra energy that’s not available in extreme environments, said Omar Yaghi, one of the senior authors, a chemist at UC Berkeley and faculty scientist at Lawrence Berkeley National Laboratory. “I call it personalized water,” said Yaghi, in a school statement. “One vision for the future is to have water off-grid, where you have a device at home running on ambient solar for delivering water that satisfies the needs of a household.” Or it could be used to provide life-saving water in the most inhospitable deserts of the world, they added.

Yaghi and the team in California developed the foundational technology at work in the new tool. Metal-organic frameworks (MOFs) employ magnesium, aluminum or other metals in a complex arrangement with organic molecules. Particular MOFs capture different molecules. Three years ago, they developed a combination of zirconium and adipic acid that captures water vapor.

The concept was picked up by collaborators Evelyn Wang at MIT and her team of mechanical engineers, and funded by the Advanced Research Projects Agency of the U.S. Department of Energy. The MIT engineers developed two pounds of minuscule MOF crystals, which were compressed between a solar absorber and a condenser plate. The air diffuses through the porous MOF. Water molecules bind to the structure. The solar energy drives the captured water vapor toward the condenser, where the liquid collects and drips down, they explain.

The device was able to pull 2.8 liters of water from the air over 12 hours using the 2 pounds of MOF material. The tool was also tested in a real-world scenario: on an MIT rooftop, they said. Yaghi said it could collect 12 ounces of water – about a Coke can’s worth – in under an hour.

Further engineering could refine the tool for use in particular environments. For instance, a technique of fog harvesting near 100 percent humidity is already employed in select areas like Chile and Morocco. The refinement of the MOF and its collection methods could be key, said Hyunho Kim, a graduate student at MIT and one of the authors. “By carefully designing this material, we can have surface properties that can absorb water very efficiently at 50 percent humidity, but with a different design, it can work at 30 percent,” said Kim. “By selecting the right materials, we can make it suitable for different conditions. Eventually we can harvest water from the entire spectrum (of water concentrations).”

The technology could be refined for the world’s harshest deserts, they added. For instance, the average humidity of the Sahara Desert, the largest hot desert on Earth, is 25 percent. Slightly before dawn this morning, the humidity in Death Valley, the most arid place in the United States, was 19 percent.
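The article's own figures are enough for a rough specific-yield estimate; the sketch below simply converts the quoted quantities (2 pounds of MOF, 2.8 liters over 12 hours, 12 ounces in under an hour) into liters per kilogram. Unit conversions aside, no numbers beyond those reported are assumed.

```python
# Rough specific-yield arithmetic using only the quantities quoted in the article.
LB_TO_KG = 0.4536     # kilograms per pound
FLOZ_TO_L = 0.02957   # liters per US fluid ounce

mof_mass_kg = 2 * LB_TO_KG          # "two pounds of ... MOF crystals"
water_12h_l = 2.8                   # liters pulled from air over 12 hours
burst_l = 12 * FLOZ_TO_L            # "12 ounces ... in under an hour"

print(f"MOF mass:        {mof_mass_kg:.2f} kg")
print(f"12-hour yield:   {water_12h_l / mof_mass_kg:.1f} L of water per kg of MOF")
print(f"Short-term rate: {burst_l:.2f} L in under an hour (~{burst_l / mof_mass_kg:.2f} L/kg)")
```

Run as written, this works out to roughly 3 liters of water per kilogram of MOF per 12-hour harvest cycle.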


News Article | May 2, 2017
Site: www.treehugger.com

There's been a lot of news about artificial photosynthesis lately (here and here), and while I think it's probably more useful to generate electricity via solar photovoltaic and solar thermal technology, there will no doubt be some cases where it'll make sense to use the sun to split water into hydrogen and oxygen directly. Researchers at the Lawrence Berkeley National Laboratory have shown what they call the "first fully integrated nanosystem for artificial photosynthesis". A key to the system is its structure on the nanoscale:

“Similar to the chloroplasts in green plants that carry out photosynthesis, our artificial photosynthetic system is composed of two semiconductor light absorbers, an interfacial layer for charge transport, and spatially separated co-catalysts,” says Peidong Yang, a chemist with Berkeley Lab’s Materials Sciences Division, who led this research. “To facilitate solar water-splitting in our system, we synthesized tree-like nanowire heterostructures, consisting of silicon trunks and titanium oxide branches. Visually, arrays of these nanostructures very much resemble an artificial forest.” (source)

But while it works, the system so far only has a sunlight-to-fuel efficiency of 0.12%. That's not quite as low as it might seem, because natural photosynthesis isn't that efficient to begin with, but to be commercially viable the efficiency would have to be much higher. Maybe that's possible, but it'll be hard to compete with solar PV and solar thermal, which already have efficiencies that are orders of magnitude better (see this chart), so it could actually make more sense to generate electricity and then produce hydrogen via electrolysis rather than use artificial leaves.

See also: IBM solar collector magnifies sun by 2,000x (without cooking itself), costs 3x less than similar systems
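The "orders of magnitude" comparison can be made concrete with rough numbers. In the sketch below, the 0.12% figure comes from the article, while the solar PV and electrolyzer efficiencies are assumed round values chosen only for illustration.

```python
# Back-of-the-envelope comparison hinted at in the article. The 0.12% figure is
# from the article; the PV and electrolyzer efficiencies are assumed, round
# illustrative values, not measurements.
artificial_photosynthesis_eff = 0.0012   # sunlight-to-fuel, from the article

pv_eff = 0.20                            # assumed solar PV module efficiency
electrolyzer_eff = 0.70                  # assumed electricity-to-hydrogen efficiency

pv_plus_electrolysis_eff = pv_eff * electrolyzer_eff
print(f"PV + electrolysis (assumed): {pv_plus_electrolysis_eff:.1%} sunlight-to-hydrogen")
print(f"Integrated nanosystem:       {artificial_photosynthesis_eff:.2%}")
print(f"Ratio: ~{pv_plus_electrolysis_eff / artificial_photosynthesis_eff:.0f}x")
```

Under these assumptions the conventional PV-plus-electrolysis route comes out roughly two orders of magnitude more efficient, which is the gap the integrated approach would need to close.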


The soils and sediments beneath our feet can contain an astonishing amount of carbon -- more than in all of the world's plants and the atmosphere combined -- and represent a significant potential source of the greenhouse gas carbon dioxide. In a new study, Stanford scientists have uncovered a previously unknown mechanism that explains why microbes sometimes fail to break down all the plant and animal matter, leaving carbon underfoot. Understanding where, and how long, this buried organic matter lingers is crucial for scientists and policymakers to better predict and respond to climate change.

"Our picture of how organic matter is broken down in soils and sediments is incomplete," said study lead author Kristin Boye, an associate staff scientist at the Stanford Synchrotron Radiation Lightsource at the SLAC National Accelerator Laboratory and former postdoctoral scholar at Stanford's School of Earth, Energy & Environmental Sciences. "With this study, we are gaining new insights into the mechanisms of carbon preservation in low- or no-oxygen subterranean environments."

In oxygen-starved places such as marshes and floodplains, microorganisms do not equally break down all of the available organic matter, the study shows. Instead, carbon compounds that do not provide enough energy to be worthwhile for microorganisms to degrade end up accumulating. This passed-over carbon, however, does not necessarily stay locked away below ground in the long run. Being water soluble, the carbon can seep into nearby oxygen-rich waterways, where microbes readily consume it. To date, models of local ecosystems and broader climate change have failed to take into account this newfound carbon preservation mechanism, having focused chiefly on microbial enzymes and the availability of other elements for organic matter breakdown.

"Soils and sediments are a huge and dynamic reservoir of carbon," said study senior author Scott Fendorf, a professor of soil biogeochemistry at Stanford Earth. "That's why we worry about turnover times here with regard to how fast organic carbon is degraded and released as carbon dioxide into the atmosphere."

Tracking the fate of the carbon

For the new study, published today in Nature Geoscience, the research team collected core samples of buried sediments from four floodplains in the upper Colorado River Basin in the states of Colorado and New Mexico. The approximately 3-foot-long, column-shaped samples went deep enough to reach oxygen-starved layers where microbes must switch from doing the microbial equivalent of breathing oxygen to breathing sulfur. In either case, the microbes combine oxygen or sulfur with carbon-based food to produce energy and release either carbon dioxide or hydrogen sulfide. (That hydrogen sulfide is responsible for the distinctive smell of oxygen-poor wetlands.)

To identify where in the sediment samples microbes had made the switch, the researchers turned to the Stanford Synchrotron Radiation Lightsource facility. The synchrotron machine generates extremely bright X-ray light that, when shone upon the samples, generates a signal revealing the chemistry of the sulfur. The presence of sulfide minerals indicates where the microbes began making use of sulfur alongside carbon to power their biochemical machinery. The question was whether the switch to sulfur influenced the carbon sources the microbes ate or left behind.
To find out, the researchers relied on unique instrumentation and collaborations within the Environmental Molecular Sciences Laboratory at the Pacific Northwest National Laboratory in Richland, Washington. With the aid of a very strong magnet, an instrument called a mass spectrometer at the lab characterized the water-soluble organic material. The tests found that, in contrast to the layers where oxygen was available, leftover carbon compounds in the sediment samples where sulfur had been used for respiration were mostly of the sort that requires more energy to degrade than would be liberated through the degradation itself. Of no use, then, to growing microbes, these carbon compounds had remained within the deeper sediment layers.

Floodplains, like those sampled in the study, rank among the most common areas globally for the interment of plant and animal matter by water-borne sediments. The oxygen-poor conditions created underground there are known to sequester carbon, but as the study suggests, partly for reasons previously unrecognized and with unforeseen consequences, because such flood-prone, low-lying areas are by definition close to waterways. Soluble, unused organic material can migrate quite easily into an aerated waterway for subsequent breakdown, triggering algae blooms and other water quality issues while also leading to carbon dioxide production.

Models of how living organisms, the ground, bodies of water and the atmosphere recycle carbon will increasingly need to incorporate key nuances, like the preservation mechanism described in the new Stanford study, in order to inform scientists' understanding as well as policymakers' decisions. "Getting the constraints right on what really controls the processes of carbon breakdown is essential," said Fendorf. "That's what our study helps illuminate."

Other co-authors on the study, titled "Thermodynamically controlled preservation of organic carbon in floodplains," include Vincent Noel, Sharon Bone and John Bargar of the SLAC National Accelerator Laboratory; Malak Tfaily of the Pacific Northwest National Laboratory; and Kenneth Williams of the Lawrence Berkeley National Laboratory. Funding was provided by the U.S. Department of Energy, the Office of Biological and Environmental Research, the SLAC National Accelerator Laboratory and the Lawrence Berkeley National Laboratory.
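A compact way to state the "not enough energy to be worthwhile" idea above is the standard thermodynamic bookkeeping used in biogeochemistry (a generic framing, not an equation quoted from the paper): degradation of a compound only pays off for a microbe when the reaction's Gibbs energy yield exceeds some minimum needed to sustain metabolism,

```latex
\Delta G_r \;=\; \Delta G_r^{\circ} + RT\ln Q,
\qquad \text{degradation viable only if } \Delta G_r \le -\Delta G_{\min},
```

where Q is the reaction quotient under in-situ conditions and \Delta G_{\min} is the minimum energy gain a cell must capture. Compounds that fail this criterion under sulfate-reducing, oxygen-free conditions accumulate, consistent with the leftover carbon described in the study.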


News Article | May 2, 2017
Site: www.cemag.us

Physicists at the University of California, Irvine and elsewhere have fabricated new two-dimensional materials with breakthrough electrical and magnetic attributes that could make them building blocks of future quantum computers and other advanced electronics. In three separate studies appearing this month in Nature, Science Advances, and Nature Materials, UCI researchers and colleagues from UC Berkeley, Lawrence Berkeley National Laboratory, Princeton University, Fudan University, and the University of Maryland explored the physics behind the 2D states of novel materials and determined they could push computers to new heights of speed and power. “Finally, we can take exotic, high-end theories in physics and make something useful,” says UCI associate professor of physics and astronomy Jing Xia, a corresponding author on the three studies. “We’re exploring the possibility of making topological quantum computers for the next 100 years.” The common threads running through the papers are that the research is conducted at extremely cold temperatures and that the signal carriers in all three studies are not electrons — as with traditional silicon-based technologies — but Dirac or Majorana fermions, particles without mass that move at nearly the speed of light. One of the key challenges of such research is handling and analyzing miniscule material samples, just two atoms thick, several microns long and a few microns across. Xia’s lab at UCI is equipped with a fiber-optic Sagnac interferometer microscope that he built. (The only other one in existence is at Stanford University, assembled by Xia when he was a graduate student there.) Calling it the most sensitive magnetic microscope in the world, Xia compares it to a telescope that an ornithologist in Irvine could use to inspect the eye of a bird in New York. “This machine is the ideal measurement tool for these discoveries,” says UCI graduate student Alex Stern, lead author on two of the papers. “It’s the most accurate way to optically measure magnetism in a material.” In a study published in Nature, the researchers detail their observation — via the Sagnac interferometer — of magnetism in a microscopic flake of chromium germanium telluride. The compound, which they created, was viewed at minus 387 degrees Fahrenheit. CGT is a cousin of graphene, a superthin atomic carbon film. Since its discovery, graphene has been considered a potential replacement for silicon in next-generation computers and other devices because of the speed at which electronic signals skitter across its almost perfectly flat surface. But there’s a catch: Certain computer components, such as memory and storage systems, need to be made of materials that have both electronic and magnetic properties. Graphene has the former but not the latter. CGT has both. His lab also used the Sagnac interferometer for a study published earlier this month in Science Advances examining what happens at the precise moment bismuth and nickel are brought into contact with one another — again at a very low temperature (in this case, minus 452 degrees Fahrenheit). Xia says his team found at the interface between the two metals “an exotic superconductor that breaks time-reversal symmetry.” “Imagine you turn back the clock and a cup of red tea turns green. Wouldn’t that make this tea very exotic? This is indeed exotic for superconductors,” he says. 
“And it’s the first time it’s been observed in 2D materials.” The signal carriers in this 2D superconductor are Majorana fermions, which could be used for a braiding operation that theorists believe is vital to quantum computing. “The issue now is to try to achieve this at normal temperatures,” Xia says. The third study shows promise in overcoming that hurdle. In 2012, Xia’s lab delivered to the Defense Advanced Research Projects Agency a radio-frequency oscillator built around samarium hexaboride. The substance is an insulator on the inside but allows signal-carrying current made of Dirac fermions to flow freely on its 2D surface. Using a special apparatus built in the Xia lab — also one of only two in the world — UCI researchers applied tensile strain to the samarium hexaboride sample and demonstrated in the Nature Materials study that they could stabilize the 2D surface state at minus 27 degrees Fahrenheit. “Believe it or not, that’s hotter than some parts of Canada,” Xia quips. “This work is a big step toward developing future quantum computers at nearly room temperature.” Funding for UCI’s involvement in the three studies was provided by the National Science Foundation. Additional support was furnished by the U.S. Department of Energy.


News Article | April 17, 2017
Site: www.renewableenergyworld.com

To meet skyrocketing demand for electricity, African countries may have to triple their energy output by 2030. While hydropower and fossil fuel power plants are favored approaches in some quarters, a new assessment by the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) has found that wind and solar can be economically and environmentally competitive options and can contribute significantly to the rising demand.


The global market for electric vehicles is expected to more than double in the next two years, when more than 25 million electric cars are expected to be on the road. These electric cars will account for one-quarter of all new cars purchased. But one of the key hurdles to the adoption of electric cars is the performance of the lithium-ion batteries used to power them. “Car owners want batteries that can go long distances on a single charge, can charge quickly, have long lives, are efficient, are not too costly, and are safe,” says Gao Liu, a group leader in the Energy Storage & Distributed Resources Division at Lawrence Berkeley National Laboratory. To meet these challenges, lithium-ion battery makers have explored new anode and cathode materials and even modified the electrolytes that transfer charge between them. But some of the most promising improvements are coming from an oft-overlooked place: the binders that hold these lithium-ion battery components together. New binders are poised to improve the lifetime and capacity of lithium-ion batteries. As noted, rapid charging is high on customers’ wish lists. It takes about five minutes to fill a regular car with gas, but it can take hours to recharge an electric car or hybrid, depending on the type of charger used. Commercial electric-car charging stations cost about $100,000 and can provide a charge in about half an hour. The more typical home units cost a lot less but can take up to seven hours to charge the car. “We’d like to cut the typical charge time to 15 minutes,” Liu says, “and get a lot more energy per pound of battery.” Charging stations are still relatively rare, although they are becoming more prevalent. There are approximately 55,000 places electric cars can be charged in the U.S., compared to some 150,000 gas stations. Until charging stations are as common as gas stations, increasing charge capacity will continue to be a key goal in the race to develop better lithium battery technology. The more energy a battery can store, the more miles an electric vehicle can travel between charges. This distance between charges is called driving range. “The battery industry is now focused on helping consumers avoid what’s known as ‘range anxiety,’ ” says Robert Gibbison, director of marketing and product management at specialty chemical company Ashland. “That’s a fear of not having enough charge to get home or to the next charging station without running out of power. Customers want to be sure they can get from point A to point B on a single charge.” Right now, energy capacity, and the driving range associated with it, is largely limited by the size, weight, and efficiency of the battery, and those factors are governed by cost. A $75,000 Tesla can go more than 250 miles on a single charge, but most pure-electric vehicles priced around $30,000 (the average selling price for most mass-produced electric vehicles) are limited to fewer than 100 miles per charge. Most carmakers are aiming to boost that figure to at least 200 miles per charge. With a hybrid, consumers can always rely on gasoline if necessary, but the pure-electric-vehicle range is, obviously, tied to the battery. The Soteras MSi binder is a two-component system that is typically added at a 95:5 ratio. The binder relies on cross-linking to control swelling of silicon, which promotes superior cycle performance at capacities greater than 400 mAh/g when used with silicon oxide, silicon composite, or silicon-graphene technologies. 
The Soteras MSi binder offers good electrochemical stability and allows manufacturers to produce batteries with a greater capacity than what conventional binders allow. It also has good slurry properties, which facilitate smooth-coated surfaces on the current collector and help prevent battery failure. It is also water based, which follows an industry trend, because that reduces health risks, cost, and processing. In testing on a silicon oxide carbon anode at 420 mAh/g, the Soteras MSi binder retained more than 90% of its charge after 100 cycles. In contrast, a conventional binder made of carboxymethyl cellulose and styrene butadiene retained less than 70% of its charge after the same number of cycles. The Soteras MSi binder also outperformed this conventional binder at high charge rates. The Soteras CCS binder addresses heat shrinkage of the battery’s separator. The binders are electrolyte insoluble and work with single- and multilayer ceramic-coated separators. The two-component coating system is compatible with typical coating processes and provides good lithium-ion permeability even as it minimizes negative effects on cell electrochemistry. One new approach to improve battery charge capacity is to move from graphite-based anodes to ones made with silicon. Silicon packs a higher density of energy per unit of material, according to Donghai Wang, associate professor of mechanical engineering at Pennsylvania State University. “Silicon can store 10 times the charge graphite can,” says Wang, “so it offers a great charge for performance in lithium-ion batteries.” Liu says most battery makers are mixing silicon with graphite, but they’d like to replace the graphite entirely. The shift hasn’t been easy, and it has taken years of research to achieve. “Across industry and academia, people have been struggling to make the switch to silicon,” says Alan Goliaszewski, research director of energy and specialties at Ashland. That’s largely because silicon particles in the anode change size during discharge and charge, lowering battery capacities over time. “One solution is to use different binders,” says Wang. Binders hold the active ingredients in batteries together and put them in contact with the foils that make it possible for the batteries to deliver electricity. Many experts see binders that stabilize these silicon particles as key to developing batteries that better satisfy customers’ needs in a variety of ways—not just by being longer lasting but also by having thermostability and greater capacity. Ashland’s new Soteras™ MSi binder, for example, supports incorporation of silicon in place of some or all of the graphite in silicon-based anodes. Used appropriately, it can increase battery capacity by up to 30%, Goliaszewski says. The firm’s Soteras™ CCS binder, meanwhile, allows battery manufacturers to address another important problem: heat shrinkage of the battery’s separator, a problem that can seriously reduce battery life span and impact safety. The binder allows battery makers to incorporate ceramic-coated polyethylene and polypropylene separators, which deliver better performance. “If you coat [the separator] with ceramic particles, even if you get a substantial temperature increase, the ceramic stays intact, and that keeps the anode and cathode from touching each other,” Wang explains. The technology works even up to temperatures as high as 140 °C. Importantly, these new binders are near “drop-in” solutions, Goliaszewski says. 
“They can help manufacturers improve the efficacy of their materials to make longer-lasting batteries without having to change other aspects of their production process.” Despite this progress, there’s much room for improvement of lithium batteries for electric cars, including shortening charge rates. “In a car battery, you are storing a lot more energy than in a cell phone or laptop,” Goliaszewski says. Putting all that energy back into a discharged battery takes time—and that doesn’t sit well with consumers. So manufacturers are looking, in particular, to increase the speed at which batteries recharge. But high charge rates come at a price, Gibbison explains. “Typically, the faster you charge, the less total charge you get.” New binders will clearly be one significant part of the solution to making better and more efficient lithium batteries for cars. “We’re looking forward to seeing how our customers, who formulate these lithium batteries, use these new technologies,” Gibbison says. “The data suggest that these new binders are a step-change technology that can improve the usability of lithium-ion batteries and potentially allow manufacturers to command a higher selling price. That can directly impact their profitability.”
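The cycling figures quoted above, roughly 90 percent versus 70 percent capacity retention after 100 cycles on a 420 mAh/g silicon oxide carbon anode, translate into very different per-cycle fade rates. The short Python sketch below makes that concrete under a simple assumption of my own, namely that capacity fades by a constant fraction each cycle; the retention numbers come from the article, but the exponential-fade model and the 80 percent end-of-life threshold are illustrative assumptions, not Ashland or Berkeley Lab data.

```python
import math

# Retention figures are taken from the article; the constant-fraction-per-cycle fade model
# and the 80% end-of-life threshold are assumptions made for illustration only.
retention_after_100_cycles = {"Soteras MSi binder": 0.90, "conventional CMC/SBR binder": 0.70}

def cycles_to_threshold(retention_100, threshold=0.80):
    """Estimate cycles until capacity falls to `threshold` of its initial value,
    assuming a constant fractional capacity loss per cycle."""
    per_cycle_retention = retention_100 ** (1 / 100)   # r such that r**100 equals the 100-cycle retention
    return math.log(threshold) / math.log(per_cycle_retention)

for binder, retention in retention_after_100_cycles.items():
    loss_per_cycle = 1 - retention ** (1 / 100)
    print(f"{binder}: ~{loss_per_cycle * 100:.2f}% capacity loss per cycle, "
          f"~{cycles_to_threshold(retention):.0f} cycles to reach 80% of initial capacity")
```

Under those assumptions, the higher-retention binder reaches the 80 percent mark after roughly 210 cycles versus roughly 60 for the conventional binder, which is one way to read the "longer-lasting batteries" claim.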


News Article | April 19, 2017
Site: www.eurekalert.org

OAKLAND, Calif. - California and more than two dozen other states require oil and gas producers to disclose the chemicals they use during hydraulic fracturing activities, enabling scientific and public scrutiny of the environmental and human health hazards these substances may pose. But all existing disclosure regulations cover chemical use only in hydraulic fracturing, known as fracking, and, in California, two other types of well-stimulation treatments. Many of the same chemicals used in hydraulic fracturing go undisclosed when they are used in numerous routine, unregulated oil- and gas-field activities such as the drilling, cleaning and maintenance of wells, according to a study published in PLOS ONE today (April 19). The study, conducted by scientists at the Lawrence Berkeley National Laboratory, University of the Pacific and the California-based energy science and policy institute PSE Healthy Energy, is the first published research to investigate chemicals used in unregulated routine oil- and gas-field activities, including the overlap between chemicals used in both regulated and unregulated activities. Analyzing publicly available data of chemical use in oil and gas production operations in the Los Angeles Basin, researchers found that the number of the chemicals used for routine activities is as high or higher than the number used for hydraulic fracturing, and those chemicals are used frequently and in high quantities. Further, the disclosure data showed that the same chemicals used in hydraulic fracturing were also used in more than half of recorded routine activities, which are unregulated. For example, they found common use of biocides, a class of hazardous chemicals that includes formaldehyde, and acidizing agents including hydrofluoric acid, in both regulated well-stimulation activities and unregulated routine activities. These findings have major implications for chemical disclosure policies and risk assessments of oil and gas development in California and across the nation, the researchers concluded. "Policies that focus exclusively on hydraulic fracturing or well stimulation miss a huge swath of chemical usage that poses environmental and human health hazards," said Seth B.C. Shonkoff, executive director of PSE and corresponding author on the study. "Especially as water produced by oil and gas development is increasingly used to replenish aquifers, irrigate agriculture, water livestock and increase stream flow around the country, we need to know, more than ever, what's in it. Policies that govern chemical use in oil and gas development should apply to all uses -- none of these known hazardous substances should be getting a free pass," he said. Scientists analyzed data from the South Coast Air Quality Management District, which encompasses the densely populated Southern California counties of San Bernardino, Orange, Riverside and Los Angeles -- the second most productive oil and gas region in the third largest oil producing state in the United States, according to the study authors. The air district's unique chemical disclosure rules for the oil and gas sector provided researchers with the only existing source of chemical use in routine activities in the United States; unlike the rest of the state, oil and gas operators in the jurisdiction must publicly disclose chemical-usage information related to most oil and gas activities, not just in fracking and well stimulation. 
The analysis found that of the 548 chemical additives reported to the Los Angeles Basin database, 525 of those - about 95 percent - were used in activities that do not require disclosure under statewide policy. The most frequently used chemicals included solvents, petroleum products and salts, which are blended for production uses; carboxylic acids, which are used to control scale and iron; corrosion inhibitors; and hydrochloric and hydrofluoric acid, which were used extensively and in large quantities for well cleaning and routine maintenance. Biocides, a class of hazardous chemicals that kill bacteria, were used in 63 percent of routine events recorded, compared to 93 percent of hydraulic fracturing treatments. In routine uses, the most commonly used biocides were formaldehyde, a preservative with numerous medical and industrial applications, which was used in 57 percent of events; and glutaraldehyde, a disinfectant with a variety of applications in medical settings, which was found in 23 percent of events. "A comparison within the statewide database shows a significant overlap between both types and amounts of chemicals used for well-stimulation treatments included under California's mandatory-disclosure regulations and routine activities that are currently excluded from state regulations," said study lead author William Stringfellow, director and professor of the Ecological Engineering Research Program at University of the Pacific in Stockton, California, and an environmental staff engineer at Berkeley Lab. Due to the extent of chemical usage and the overlap with chemicals that are regulated when used during hydraulic fracturing, researchers called for further study and more policies that expand beyond a focus strictly on hydraulic fracturing. "Given concerns regarding the use and release of hazardous chemicals during oil and gas development around the country, the increasing reuse of produced water and its rising potential for human exposure, it is important to evaluate chemicals put down wells for any purpose, not just those used during the limited phase of hydraulic fracturing," Shonkoff said. Growing interest in expanding the reuse of produced water -- water generated as a byproduct of oil and gas production -- for food-crop irrigation and aquifer recharging, along with oil and gas fields increasingly being co-located in areas of high population density like the Los Angeles Basin, raises the potential for chemical exposures. Hazards posed by these chemical additives are in addition to those posed by naturally occurring constituents of produced water, such as salts, heavy metals, volatile organic compounds, hydrocarbons and naturally occurring radioactive materials. The authors also note that they found many chemicals in use that were considered low-risk for human and environmental exposures. The study grew out of research published in 2015 in response to the state's 2013 Senate Bill 4. In addition to requiring disclosure of chemical use in hydraulic fracturing and two other types of well-stimulation treatments, the bill mandated an independent scientific study to identify and describe the hazards, risks and impacts of hydraulic fracturing and oil and gas development throughout the state.
Shonkoff, who is also a visiting scholar at the University of California, Berkeley, and an affiliate of Berkeley Lab, was the lead author on the public health portion of that report, which was produced on behalf of the California Council on Science and Technology. "When we evaluated the reported hydraulic-fracturing chemicals, we saw numerous other chemicals that were unaccounted for in the disclosures," Shonkoff said. "We saw that there's a lot we don't know about the chemicals used in non-hydraulic fracturing events in oil and gas fields -- that information gap can put the public's health at risk." PSE (Physicians, Scientists and Engineers) for Healthy Energy is a nonprofit research institute dedicated to supplying evidence-based scientific and technical information on the public health, environmental and climate dimensions of energy production and use. We are the only interdisciplinary collaboration focused specifically on health and sustainability at the intersection of energy science and policy. Visit us at psehealthyenergy.org and follow us on Twitter @PhySciEng.
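The headline statistic in the article above, 525 of 548 reported additives (about 95 percent) used in activities that fall outside the statewide disclosure rules, is simple set arithmetic. The sketch below reproduces that bookkeeping with invented chemical identifiers; it is only a minimal illustration of the calculation, not the authors' analysis code or the actual Los Angeles Basin dataset.

```python
# Minimal illustration of the disclosure-gap bookkeeping described in the article.
# Chemical identifiers are invented; only the totals (548 reported, 525 outside the rules) come from the study summary.
all_reported_additives = {f"additive_{i:03d}" for i in range(548)}
regulated_use_only = {f"additive_{i:03d}" for i in range(23)}   # stand-in for additives seen only in fracking/well stimulation

outside_disclosure_rules = all_reported_additives - regulated_use_only
share = len(outside_disclosure_rules) / len(all_reported_additives)
print(f"{len(outside_disclosure_rules)} of {len(all_reported_additives)} additives "
      f"({share:.1%}) were used in activities not covered by statewide disclosure policy")
# -> 525 of 548 additives (95.8%) ...
```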


News Article | April 17, 2017
Site: www.renewableenergyworld.com

The U.S. Department of Energy’s Lawrence Berkeley National Laboratory on March 1 said that its scientists have launched two California Energy Commission (CEC)-funded projects aimed at making geothermal energy more cost-effective to deploy and operate.


News Article | April 30, 2017
Site: www.scientificamerican.com

Since the 1970s, annual per capita electricity consumption in the United States – the amount of electricity that we each consume over the course of a year – has more than doubled. But in California, per capita electricity demand has remained almost constant. This fact – dubbed the “Rosenfeld Effect” – is named in honor of the godfather of energy efficiency, Dr. Arthur H. Rosenfeld, who passed away earlier this year at the age of 90. Arthur Hinton Rosenfeld was born in 1926 in Birmingham, Alabama. After earning a PhD in physics from the University of Chicago – where he studied under Enrico Fermi – Rosenfeld became a physics professor at UC Berkeley and a senior scientist at Lawrence Berkeley National Laboratory. Throughout his career, Rosenfeld's work to curb energy waste helped lay the foundation for state and federal energy efficiency regulations. He was even honored for his work in energy efficiency with his own unit: a single Rosenfeld is defined as an electricity savings of 3 billion kilowatt-hours (kWh) per year, roughly the efficiency savings needed to replace a 500-megawatt coal-fired power plant. Over his career, Dr. Rosenfeld served as a Senior Advisor at the U.S. Department of Energy (1994-1999) and Commissioner of the California Energy Commission (2000-2010). In 2011, he received the National Medal of Technology and Innovation from President Barack Obama.
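The Rosenfeld unit links an electricity saving of 3 billion kilowatt-hours per year to the output of a 500-megawatt coal-fired plant. The back-of-the-envelope check below makes the connection explicit; the roughly 70 percent capacity factor is my own assumption for a typical baseload coal unit, not a figure from the article.

```python
# Back-of-the-envelope check of the Rosenfeld unit: 3 billion kWh/year vs. one 500 MW coal plant.
# The 70% capacity factor is an assumed typical value for a baseload coal unit, not a figure from the article.
plant_capacity_mw = 500
capacity_factor = 0.70          # assumed fraction of the year the plant effectively runs at full output
hours_per_year = 8760

annual_output_kwh = plant_capacity_mw * 1_000 * hours_per_year * capacity_factor
print(f"Annual output of a {plant_capacity_mw} MW plant at {capacity_factor:.0%} capacity factor: "
      f"{annual_output_kwh / 1e9:.2f} billion kWh")   # ~3.07 billion kWh/year, i.e. about one Rosenfeld
```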


News Article | May 1, 2017
Site: www.cemag.us

It may not seem like a material as thin as an atom could hide any surprises, but a research team led by scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) discovered an unexpected magnetic property in a two-dimensional material. The scientists found that a 2D van der Waals crystal, part of a class of material whose atomically thin layers can be peeled off one by one with adhesive tape, possessed an intrinsic ferromagnetism. The discovery, published April 26 in the journal Nature, could have major implications for a wide range of applications that rely upon ferromagnetic materials, such as nanoscale memory, spintronic devices, and magnetic sensors. “This is an exciting discovery,” says study principal investigator Xiang Zhang, senior faculty scientist at Berkeley Lab’s Materials Sciences Division and UC Berkeley professor. “This experiment presents smoking-gun evidence for an atomically thin — and atomically flat — magnet, which surprised many people. It opens the door for exploring fundamental spin physics and spintronic applications at low dimensions.” The study tackles a long-standing issue in quantum physics about whether magnetism would survive when materials shrink down to two dimensions. For half a century, the Mermin-Wagner theorem has addressed this question by stating that if 2D materials lack magnetic anisotropy, a directional alignment of electron spins in the material, there may be no magnetic order. “Interestingly, we found that magnetic anisotropy is an inherent property in the 2D material we studied, and because of this characteristic, we were able to detect the intrinsic ferromagnetism,” says study lead author Cheng Gong, a postdoctoral researcher in Zhang’s lab. Van der Waals forces, named after a Dutch scientist, refers to intermolecular forces of attraction that do not arise from the typical covalent or ionic bonds that keep molecules intact. These quantum forces are used by geckos as they effortlessly scamper along walls and ceilings. Van der Waals crystals describe materials in which the 2D layers are not connected to each other via traditional bonds, allowing them to be easily exfoliated with tape. Research on graphene, the most well-known van der Waals material, earned the Nobel Prize in physics in 2010. “It’s like the pages of a book,” says Gong. “The pages can be stacked on top of each other, but the forces linking one page to another is much weaker than the in-plane forces that keep a single sheet intact.” Gong estimates that for this study, he peeled off more than 3,000 flakes of chromium germanium telluride (Cr2Ge2Te6, or CGT). While CGT has existed as a bulk material for decades, the researchers say the 2D flakes could represent an exciting new family of 2D van der Waals crystal. “CGT is also a semiconductor, and the ferromagnetism is intrinsic,” says co-senior author Jing Xia, UC Irvine associate professor of physics and astronomy. “That makes it cleaner for applications in memory and spintronics.” The researchers detected the magnetization from atomically thin materials using a technique called magneto-optic Kerr effect. The method involves the super-sensitive detection of the rotation of linearly polarized light when it interacts with electron spins in the material. The key to one of the study’s more surprising findings is that the magnetic anisotropy was very small in the CGT material. 
That enabled researchers to easily control the temperature at which the material loses its ferromagnetism, known as the transition or Curie temperature. “This is a significant discovery,” says Gong, “People believe that the Curie temperature is an inherent property of a magnetic material and cannot be changed. Our study shows that it can.” The researchers showed that they could control the transition temperature of the CGT flake using surprisingly small magnetic fields of 0.3 tesla or less. “Thin films of metals like iron, cobalt, and nickel, unlike 2D van der Waals materials, are structurally imperfect and susceptible to various disturbances, which contribute to a huge and unpredictable spurious anisotropy,” says Gong. “In contrast, the highly crystalline and uniformly flat 2-D CGT, together with its small intrinsic anisotropy, allows small external magnetic fields to effectively engineer the anisotropy, enabling an unprecedented magnetic field control of ferromagnetic transition temperatures.” The study authors also point out that a striking feature of van der Waals crystals is that they can be easily combined with dissimilar materials without restrictions based on structural or chemical compatibility. “The opportunities to combine different materials to develop new functionalities are appealing,” says co-senior author Steven Louie, senior faculty scientist at Berkeley Lab’s Materials Sciences Division and UC Berkeley professor of physics. “This offers a huge amount of flexibility in designing artificial structures for diverse magneto-electric and magneto-optical applications.” Other senior collaborators on this work include Zi Qiang Qiu at Berkeley Lab and UC Berkeley; and Robert Cava at Princeton University. This work was primarily supported by DOE’s Office of Science.
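In practice, the Kerr-effect measurement described above yields a magnetization-proportional signal as a function of temperature, and the Curie temperature is read off from where that signal vanishes. The sketch below shows one common way to do that, fitting synthetic data to a power-law onset of the form amplitude * (1 - T/Tc)**beta; the data, the 30 K transition, and the exponent are invented for illustration and are not the study's measurements or analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for Kerr-rotation-vs-temperature data (arbitrary units).
# The "true" Tc of 30 K, the exponent, and the noise level are invented for illustration;
# they are not the measured CGT values or the study's analysis code.
rng = np.random.default_rng(0)
temperature = np.linspace(5.0, 45.0, 40)                          # K
true_tc, true_beta = 30.0, 0.30
kerr_rotation = (np.clip(1.0 - temperature / true_tc, 0.0, None) ** true_beta
                 + rng.normal(0.0, 0.02, temperature.size))

def order_parameter(T, amplitude, tc, beta):
    """Power-law onset: amplitude * (1 - T/Tc)**beta below Tc, zero above."""
    reduced = np.clip(1.0 - T / tc, 0.0, None)
    return amplitude * reduced ** beta

popt, _ = curve_fit(order_parameter, temperature, kerr_rotation,
                    p0=[1.0, 28.0, 0.35],
                    bounds=([0.1, 10.0, 0.05], [5.0, 45.0, 1.0]))
print(f"Fitted Curie temperature: {popt[1]:.1f} K (exponent beta = {popt[2]:.2f})")
```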


News Article | May 5, 2017
Site: www.rdmag.com

A recent discovery by Sandia National Laboratories researchers may unlock the potential of biofuel waste -- and ultimately make biofuels competitive with petroleum. Fuel made from plants is much more expensive than petroleum, but one way to decrease the cost would be to sell products made from lignin, the plant waste left over from biofuel production. Lignin typically is either burned to produce electricity or left unused in piles because no one has yet determined how to convert it into useful products, such as renewable plastics, fabrics, nylon and adhesives. The electricity isn't even available to the general public; it's only used by companies that create large amounts of lignin, like pulp and paper manufacturers. Now Sandia scientists, working with researchers from Lawrence Berkeley National Laboratory at the Joint BioEnergy Institute, have decoded the structure and behavior of LigM, an enzyme that breaks down molecules derived from lignin. The group's work on LigM appears in the latest issue of the Proceedings of the National Academy of Sciences. The enzyme has little in common with other, better-understood proteins, which previously made it impossible for scientists to guess how it functions. This paper marks the first time anyone has solved the structure of LigM, opening a path toward new molecules and new, marketable products. For decades, scientists have wrestled with the problem of breaking down lignin, the part of plant cell walls that provides structure and protection from bacterial and insect attacks. This strength also makes lignin difficult to deconstruct, though there have been recent breakthroughs. The plant matter used to produce ethanol can be chemically or physically pre-treated so that the lignin is deconstructed in the process. However, these methods can be expensive and reduce the amount of biofuel that can be harvested. They could also interfere with later-stage lignin harvesting. That's why some researchers are focused on finding enzymes that convert lignin naturally and gently. Sandia scientist and lead author Amanda Kohler said her team knew enzymes could metabolize lignin and its derivatives because there are decades-old records of bacteria using enzymes for this purpose. Sphingomonas bacteria were discovered living in the wastewater of a pulp mill more than 30 years ago. Once researchers realized the bacterium's unique enzymatic pathways enabled it to live on lignin, their challenge was then to understand the enzymes in these pathways so they could mimic what nature had already done, and use that understanding productively. Kohler and her team focused on LigM, an enzyme used by Sphingomonas, because it performs a key step in the conversion of lignin derivatives and it is the simplest of the known enzyme systems that perform this function. "When trying to mimic natural systems in a laboratory setting, the simplest, most direct systems are the best," Kohler explained. The team found that half of LigM's structure is composed of a common protein architecture found in all forms of life, from bacteria to humans. The rest of the enzyme -- the active portion -- is not found in any other known protein structure. This unique structure gives LigM the ability to bind specifically to molecules derived from lignin. "Solving the structure allows us to understand how the organism may have evolved its unique function, which I think is scientifically one of the most interesting findings," said paper co-author and Sandia scientist Ken Sale. 
The team used the Advanced Light Source Synchrotron at Lawrence Berkeley National Laboratories, along with high-performance computing and fundamental biochemistry to gain their insights into LigM. LigM is designed to break down lignin derivatives, not lignin itself. It is important to understand that LigM's function is only one key step in a longer pathway of reactions needed to fully deconstruct lignin. One active area of research involves finding other organisms, possibly fungi, that can execute the first step of breaking down large lignin mass into smaller fragments. Some of the Sandia scientists who solved LigM's structure, Sale and Matthew Mills, have recently learned more about another enzyme that helps drive the breakdown of lignin into smaller fragments. LigM works on a later stage in the process, when smaller lignin fragments already have been converted into a molecule called vanillic acid. "There is still work to be done to figure out the whole reaction pathway," Kohler says. "But now we have a much-needed understanding of a key step in this process, and are developing enzymes to fit our end goals of lowering the cost of biofuels by making products from lignin."


Irvine, Calif. - Physicists at the University of California, Irvine and elsewhere have fabricated new two-dimensional quantum materials with breakthrough electrical and magnetic attributes that could make them building blocks of future quantum computers and other advanced electronics. In three separate studies appearing this month in Nature, Science Advances and Nature Materials, UCI researchers and colleagues from UC Berkeley, Lawrence Berkeley National Laboratory, Princeton University, Fudan University and the University of Maryland explored the physics behind the 2-D states of novel materials and determined they could push computers to new heights of speed and power. The common threads running through the papers are that the research is conducted at extremely cold temperatures and that the signal carriers in all three studies are not electrons - as with traditional silicon-based technologies - but Dirac or Majorana fermions, particles without mass that move at nearly the speed of light. "Finally, we can take exotic, high-end theories in physics and make something useful," said UCI associate professor of physics & astronomy Jing Xia, a corresponding author on two of the studies. "We're exploring the possibility of making topological quantum computers [currently theoretical] for the next 100 years." One of the key challenges of such research is handling and analyzing miniscule material samples, just two atoms thick, several microns long and a few microns across. Xia's lab at UCI is equipped with a fiber-optic Sagnac interferometer microscope that he built. (The only other one in existence is at Stanford University, assembled by Xia when he was a graduate student there.) Calling it the most sensitive magnetic microscope in the world, Xia compares it to a telescope that an ornithologist in Irvine could use to inspect the eye of a bird in New York. "This machine is the ideal measurement tool for these discoveries," said UCI graduate student Alex Stern, lead author on two of the papers. "It's the most accurate way to optically measure magnetism in a material." In a study to be published April 24 in Nature, the researchers detail their observation - via the Sagnac interferometer - of magnetism in a microscopic flake of chromium germanium telluride. The compound, which they created, was viewed at minus 387 degrees Fahrenheit. CGT is a cousin of graphene, a superthin atomic carbon film. Since its discovery, graphene has been considered a potential replacement for silicon in next-generation computers and other devices because of the speed at which electronic signals skitter across its almost perfectly flat surface. But there's a catch: Certain computer components, such as memory and storage systems, need to be made of materials that have both electronic and magnetic properties. Graphene has the former but not the latter. CGT has both. His lab also used the Sagnac interferometer for a study published in Science Advances examining what happens at the precise moment bismuth and nickel are brought into contact with one another - again at a very low temperature (in this case, minus 452 degrees Fahrenheit). Xia said his team found at the interface between the two metals "an exotic superconductor that breaks time-reversal symmetry." "Imagine you turn back the clock and a cup of red tea turns green. Wouldn't that make this tea very exotic? This is indeed exotic for superconductors," he said. "And it's the first time it's been observed in 2-D materials." 
The signal carriers in this 2-D superconductor are Majorana fermions, which could be used for a braiding operation that theorists believe is vital to quantum computing. "The issue now is to try to achieve this at normal temperatures," Xia said. The third study shows promise in overcoming that hurdle. In 2012, Xia's lab delivered to the Defense Advanced Research Projects Agency a radio-frequency oscillator built around samarium hexaboride. The substance is an insulator on the inside but allows signal-carrying current made of Dirac fermions to flow freely on its 2-D surface. Using a special apparatus built in the Xia lab - also one of only two in the world - UCI researchers applied tensile strain to the samarium hexaboride sample and demonstrated in the Nature Materials study that they could stabilize the 2-D surface state at minus 27 degrees Fahrenheit. "Believe it or not, that's hotter than some parts of Canada," Xia quipped. "This work is a big step toward developing future quantum computers at nearly room temperature." Funding for UCI's involvement in the three studies was provided by the National Science Foundation. Additional support was furnished by the U.S. Department of Energy. About the University of California, Irvine: Founded in 1965, UCI is the youngest member of the prestigious Association of American Universities. The campus has produced three Nobel laureates and is known for its academic achievement, premier research, innovation and anteater mascot. Led by Chancellor Howard Gillman, UCI has more than 30,000 students and offers 192 degree programs. It's located in one of the world's safest and most economically vibrant communities and is Orange County's second-largest employer, contributing $5 billion annually to the local economy. For more on UCI, visit http://www. . Media access: Radio programs/stations may, for a fee, use an on-campus ISDN line to interview UCI faculty and experts, subject to availability and university approval. For more UCI news, visit news.uci.edu. Additional resources for journalists may be found at communications.uci.edu/for-journalists.
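The temperatures quoted above are given in Fahrenheit, which obscures just how cold these experiments run. Converting them with the standard Fahrenheit-to-kelvin formula, as in the sketch below, shows that the three operating points correspond to roughly 40 K, 4 K, and 240 K; the labels are mine, while the temperatures are the ones quoted in the article.

```python
# Convert the Fahrenheit temperatures quoted in the article to kelvins.
def fahrenheit_to_kelvin(temp_f):
    return (temp_f - 32) * 5 / 9 + 273.15

quoted_temperatures_f = [
    ("chromium germanium telluride magnetism measurement", -387),
    ("bismuth-nickel interface superconductivity", -452),
    ("strained samarium hexaboride surface state", -27),
]
for label, temp_f in quoted_temperatures_f:
    print(f"{label}: {temp_f} °F ≈ {fahrenheit_to_kelvin(temp_f):.1f} K")
```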


News Article | April 26, 2017
Site: www.eurekalert.org

It may not seem like a material as thin as an atom could hide any surprises, but a research team led by scientists at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) discovered an unexpected magnetic property in a two-dimensional material. The scientists found that a 2-D van der Waals crystal, part of a class of material whose atomically thin layers can be peeled off one by one with adhesive tape, possessed an intrinsic ferromagnetism. The discovery, to be published April 26 in the journal Nature, could have major implications for a wide range of applications that rely upon ferromagnetic materials, such as nanoscale memory, spintronic devices, and magnetic sensors. "This is an exciting discovery," said study principal investigator Xiang Zhang, senior faculty scientist at Berkeley Lab's Materials Sciences Division and UC Berkeley professor. "This experiment presents smoking-gun evidence for an atomically thin--and atomically flat--magnet, which surprised many people. It opens the door for exploring fundamental spin physics and spintronic applications at low dimensions." The study tackles a long-standing issue in quantum physics about whether magnetism would survive when materials shrink down to two dimensions. For half a century, the Mermin-Wagner theorem has addressed this question by stating that if 2-D materials lack magnetic anisotropy, a directional alignment of electron spins in the material, there may be no magnetic order. "Interestingly, we found that magnetic anisotropy is an inherent property in the 2-D material we studied, and because of this characteristic, we were able to detect the intrinsic ferromagnetism," said study lead author Cheng Gong, a postdoctoral researcher in Zhang's lab. Van der Waals forces, named after a Dutch scientist, refers to intermolecular forces of attraction that do not arise from the typical covalent or ionic bonds that keep molecules intact. These quantum forces are used by geckos as they effortlessly scamper along walls and ceilings. Van der Waals crystals describe materials in which the 2-D layers are not connected to each other via traditional bonds, allowing them to be easily exfoliated with tape. Research on graphene, the most well-known van der Waals material, earned the Nobel Prize in physics in 2010. "It's like the pages of a book," said Gong. "The pages can be stacked on top of each other, but the forces linking one page to another is much weaker than the in-plane forces that keep a single sheet intact." Gong estimates that for this study, he peeled off more than 3,000 flakes of chromium germanium telluride (Cr2Ge2Te6, or CGT). While CGT has existed as a bulk material for decades, the researchers say the 2-D flakes could represent an exciting new family of 2-D van der Waals crystal. "CGT is also a semiconductor, and the ferromagnetism is intrinsic," said co-senior author Jing Xia, UC Irvine associate professor of physics and astronomy. "That makes it cleaner for applications in memory and spintronics." The researchers detect the magnetization from atomically thin materials using a technique called magneto-optic Kerr effect. The method involves the super-sensitive detection of the rotation of linearly polarized light when it interacts with electron spins in the material. The key to one of the study's more surprising findings is that the magnetic anisotropy was very small in the CGT material. 
That enabled researchers to easily control the temperature at which the material loses its ferromagnetism, known as the transition or Curie temperature. "This is a significant discovery," said Gong, "People believe that the Curie temperature is an inherent property of a magnetic material and cannot be changed. Our study shows that it can." The researchers showed that they could control the transition temperature of the CGT flake using surprisingly small magnetic fields of 0.3 tesla or less. "Thin films of metals like iron, cobalt, and nickel, unlike 2-D van der Waals materials, are structurally imperfect and susceptible to various disturbances, which contribute to a huge and unpredictable spurious anisotropy," said Gong. "In contrast, the highly crystalline and uniformly flat 2-D CGT, together with its small intrinsic anisotropy, allows small external magnetic fields to effectively engineer the anisotropy, enabling an unprecedented magnetic field control of ferromagnetic transition temperatures." The study authors also pointed out that a striking feature of van der Waals crystals is that they can be easily combined with dissimilar materials without restrictions based on structural or chemical compatibility. "The opportunities to combine different materials to develop new functionalities are appealing," said co-senior author Steven Louie, senior faculty scientist at Berkeley Lab's Materials Sciences Division and UC Berkeley professor of physics. "This offers a huge amount of flexibility in designing artificial structures for diverse magneto-electric and magneto-optical applications." Other senior collaborators on this work include Zi Qiang Qiu at Berkeley Lab and UC Berkeley; and Robert Cava at Princeton University. This work was primarily supported by DOE's Office of Science. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel Prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit http://www. . DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | April 17, 2017
Site: www.eurekalert.org

Zika virus is a mosquito-borne infectious disease linked to certain birth defects in infants in South and Central America and the United States. A Lawrence Berkeley National Laboratory (Berkeley Lab) researcher, Banumathi Sankaran, worked as part of a multi-institutional team to map a key viral protein called NS5. Necessary to virus reproduction, NS5 contains two enzyme activities: one reduces the body's ability to mount an immune response against infection and the other helps start the genetic replication process. The work was led by Indiana University's Cheng Kao and Pingwei Li at Texas A&M University (TAMU). In a study published March 27 in Nature Communications, the team described the structure and function of these two enzyme active sites. They also showed comparisons between this protein and those from other related viruses that cause dengue fever, West Nile virus, Japanese encephalitis virus, and hepatitis C. These comparisons will help researchers as they search for possible compounds to halt the ability of the virus to reproduce. Working with researchers from TAMU, Sankaran, a research scientist in the Molecular Biophysics and Integrated Bioimaging Division at Berkeley Lab, used X-ray crystallography to solve the atomic structure of NS5 in the Berkeley Center for Structural Biology at the Advanced Light Source (ALS). "The ALS was critical to the success of this project," said Li. "The powerful beam and the sensitive detector on beamline 5.0.2 made it possible for us to obtain data on our poor quality crystals." Sankaran is in charge of the Collaborative Crystallography (CC) program at the ALS. Funded by the NIH, this program is a fast, reliable and transparent mail-in crystallographic service for the structural biology community. Since the mosquitos carrying the vector have been spreading into the Southern U.S., interest in studying the virus has increased and TAMU's Li indicated that several groups currently are working on similar structural determination efforts. "Having access to state-of-the-art facilities provided by our Collaborative Crystallography program resulted in a rapid turnaround of this project," according to Sankaran. While the researchers have gathered a lot of information about how to target this protein, there are still puzzles remaining. "One of the most important unresolved questions about Zika NS5 is how it catalyzes the synthesis of RNA," said Li. "We look forward to further studies and future collaborations, which will be invaluable for therapeutics discovery." Funding for this work was provided by the National Institutes of Health and the Howard Hughes Medical Institute. The ALS is a DOE Office of Science User Facility. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel Prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit http://www. . DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | May 8, 2017
Site: www.eurekalert.org

Study over Eastern North Atlantic aims to clear up understanding of how aerosols affect low marine clouds and improve assessments of their impact on Earth's climate IMAGE:  Using an aircraft outfitted with 55 atmospheric instrument systems, scientists will traverse horizontal tracks above and through clouds and spiral down through atmospheric layers to provide detailed, on-site measurements of... view more UPTON, NY--For an intensive period throughout June and July, and again next January and February, scientists from U.S. Department of Energy (DOE) laboratories will take to the skies in a specially outfitted aircraft to gather data on the properties of aerosols and clouds above the Eastern North Atlantic. They'll traverse horizontal tracks above and through clouds and spiral down through atmospheric layers to gather samples and measurements that add fine-scale detail to 3D cloud profiles constructed by nearby instruments on the ground. This synergistic sampling from throughout the clouds and below will improve scientists' understanding of how aerosol properties affect low marine boundary layer clouds, and help improve the representations of clouds and aerosols in models of Earth's climate. "Low level clouds are abundant and can have a big influence on global climate by affecting how much solar energy is reflected back into space," said Jian Wang, an atmospheric scientist at DOE's Brookhaven National Laboratory and lead researcher on the study. "One of the biggest uncertainties in Earth system models is our understanding of how clouds respond to changes in the properties of aerosols, tiny particles wafted into the atmosphere as salty sea spray or emitted from distant industrial sources." To improve that understanding, DOE's Atmospheric Radiation Measurement (ARM) Climate Research Facility has conducted a variety of aerosol studies. They recently installed a fixed ground station with radars and other meteorological instruments on Graciosa Island in the Azores west of Portugal. This site operates 24/7 collecting data on aerosols, trace gases, atmospheric thermodynamics, and other factors that influence clouds and climate in the Eastern North Atlantic--an area influenced by a variety of aerosol sources where persistent low marine boundary layer clouds are highly susceptible to changes in aerosol particle properties. The new study will supplement the year-round, ground-based dataset with a series of airborne sampling missions--approximately 20 in the summer and 20 in the winter--flown aboard ARM's Gulfstream-159 (G-1) research aircraft. The G-1 is the centerpiece of the ARM Aerial Facility managed by DOE's Pacific Northwest National Laboratory (PNNL). For this study it will be outfitted with 55 atmospheric instrument systems that will provide detailed "on-site" measures of the physical and chemical properties of aerosols and cloud particles, including variations in their distribution throughout the horizontal and vertical structure of the clouds. One instrument on board for the first time on a DOE aircraft will be a Holographic Detector for Clouds (HOLODEC), a wing-mounted device that takes detailed multidimensional pictures of cloud droplets to provide information about how they interact on microphysical scales. The data will help the scientists learn how the size distribution of cloud droplets is affected by the mixing of cloudy, wet air and drier, non-cloudy air. On some flights, the scientists will also sample the free troposphere, high above the boundary layer clouds. 
Data collected there will help determine whether the observed aerosol particles are coming from local sources or being transported long distances. The G-1 will take off from a joint U.S.-Portugal airbase 90 kilometers from Graciosa, with a large fraction of the computer-choreographed flight tracks crossing within the ranges of the ground-based scanning radars. Flying within range of the radars will allow scientists to use the detailed microphysical information obtained from the aircraft flights to evaluate, validate, and improve how they interpret the ground-based data. Collaborators will also bring samples of aerosol particles collected during the flights to PNNL's Environmental Molecular Sciences Laboratory and Lawrence Berkeley National Laboratory's Advanced Light Source--both DOE Office of Science User Facilities--for subsequent analysis. "Overall, the combination of ground-based and airborne data will give us a more complete picture of aerosols and clouds, particularly allowing us to better quantify the processes controlling the life cycles of aerosol and clouds, and how clouds respond to aerosol changes," said Wang. The ARM Climate Research Facility is a DOE Office of Science User Facility. The ARM Facility is operated by nine DOE national laboratories. Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | May 8, 2017
Site: www.eurekalert.org

Just four letters -- A, C, T, and G -- make up an organism's genetic code. Changing a single letter, or base, can lead to changes in protein structures and functions, impacting an organism's traits. In addition, though, subtler changes can and do happen, involving modifications of the DNA bases themselves. The best-known example of this kind of change is a methylation of the base cytosine at the 5th position on its carbon ring (5mC). In eukaryotes, a less-well known modification involves adding a methyl group to base 6 of adenine (6mA). In the May 8, 2017 issue of Nature Genetics, a team led by scientists at the U.S. Department of Energy Joint Genome Institute (DOE JGI), a DOE Office of Science User Facility, report the prevalence of 6mA modifications in the earliest branches of the fungal kingdom. Though fungi have been around for a billion years and collectively are capable of degrading nearly all naturally-occurring polymers and even some human-made ones, most of the species that have been studied belong to just two phyla, the Ascomycota and Basidiomycota. The remaining 6 groups of fungi are classified as "early diverging lineages," the earliest branches in fungal genealogy. They comprise a little-explored realm of fungi, providing a repertoire of important and valuable gene products for DOE missions in bioenergy and environment. "By and large, early-diverging fungi are very poorly understood compared to other lineages. However, many of these fungi turn out to be important in a variety of ways," said study first author and DOE JGI analyst Stephen Mondo. "Consider the Neocallimastigomycetes -- these fungi are one of the most powerful degraders of plant biomass currently known and have a tremendous arsenal of plant cell wall degrading enzymes which may be useful for bioenergy production. They are a good example of how exploring these understudied lineages leads to valuable biological and technological insights." Many of the fungal genomes used in the study were sequenced as part of the DOE JGI's 1000 Fungal Genomes initiative aimed at producing at least one reference genome for every family of fungi. For the study, the team used 16 fungal genomes sequenced at the DOE JGI using the Pacific Biosciences sequencing platform. While the technology was used with the goal of attaining very high quality genome assemblies, DOE JGI scientists have now additionally taken advantage of this sequencing platform to explore epigenetic (5mC, 6mA) modifications. They discovered very high levels of 6mA in fungi, where up to 2.8% of all adenines were methylated, confirming these findings using multiple independent methods. The previous record holder for genomic 6mA, noted Mondo, is the alga Chlamydomonas reinhardtii (sequenced and annotated by the DOE JGI), in which just 0.4% of adenines were methylated. "This is one of the first direct comparisons of 6mA and 5mC in eukaryotes, and the first 6mA study across the fungal kingdom," said DOE JGI Fungal Genomics head and senior author Igor Grigoriev. "6mA has been shown to have different functions depending on the organism. For example, in animals it is involved in suppressing transposon activity, while in algae it is positively associated with gene expression. Our analysis has shown that 6mA modifications are associated with expressed genes and is preferentially deposited based on gene function and conservation, revealing 6mA as a marker of expression for important functionally-relevant genes." 
In addition to 6mA performing what seems to be the opposite role of 5mC (which suppresses expression), the team found that the presence of 5mC and 6mA are inversely correlated. Specifically, while 5mC is found at repetitive regions of the genome, the methylated adenines were clustered into dense "methylated adenine clusters" (MACs) at gene promoters. 6mA was also found consistently on both strands of DNA, which may enable propagation of methylation through cell division. "Using genomics, we explore the diversity of fungi to develop catalogs of genes, enzymes, and pathways -- parts lists for bio-based economy and bioenergy applications," said Grigoriev. "A lot of this is encoded in early diverging fungi. In these fungi, we found that majority of expressed genes have 6mA MACs. Thus, the discovery of DNA methylation in early diverging fungi helps the research community better understand regulation of genes that encode the parts for bio-based economy and bioenergy applications." Mondo described the identification and proposed role of 6mA in early diverging fungi at the 2017 Genomics of Energy & Environment Meeting. Watch his talk on the DOE JGI's YouTube channel at http://bit. . The U.S. Department of Energy Joint Genome Institute, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory, is committed to advancing genomics in support of DOE missions related to clean energy generation and environmental characterization and cleanup. DOE JGI, headquartered in Walnut Creek, Calif., provides integrated high-throughput sequencing and computational analysis that enable systems-based scientific approaches to these challenges. Follow @doe_jgi on Twitter. DOE's Office of Science is the largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
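The comparison above, up to 2.8 percent of adenines carrying 6mA in some early-diverging fungi versus about 0.4 percent in Chlamydomonas reinhardtii, is ultimately a genome-wide fraction computed from per-base methylation calls. The toy sketch below shows that bookkeeping with invented records; it is a hypothetical illustration, not the DOE JGI analysis pipeline or its data format.

```python
# Hypothetical illustration of computing a genome-wide 6mA fraction from per-base methylation calls.
# The records are invented; real pipelines derive such calls from PacBio sequencing kinetics.
calls = [
    {"base": "A", "has_6mA": True},
    {"base": "A", "has_6mA": False},
    {"base": "T", "has_6mA": False},
    {"base": "A", "has_6mA": False},
    {"base": "G", "has_6mA": False},
]

adenines = [c for c in calls if c["base"] == "A"]
methylated = sum(c["has_6mA"] for c in adenines)
print(f"6mA fraction: {methylated}/{len(adenines)} adenines = {methylated / len(adenines):.1%}")
# The study reports up to 2.8% across some early-diverging fungi, versus ~0.4% in C. reinhardtii.
```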


News Article | March 26, 2017
Site: www.huffingtonpost.com

Astronomers hate dust as much as the rest of us ― even more, perhaps. It’s one thing to get a little dust in your eye that takes your vision out of focus. Now imagine this kind of dust problem on a galactic scale where it hinders scientists trying to focus their telescopes into space. That’s part of the challenge faced by researchers at the Lawrence Berkeley National Laboratory in California, reports Space.com. They’re creating an instrument that will offer specific information on how fast the universe is growing. This device ― the Dark Energy Spectroscopic Instrument ― will create a map of over 30 million galaxies. But in order for that map to be precise, astronomers will have to correct for the problems created by all the dust particles in the Milky Way galaxy that impede their deep space investigations. “Because outer space is a very dusty place, that makes things very difficult for astronomers and astrophysicists who are trying to peer farther across the universe or deep into the center of our own galaxy to learn more about their structure, formation and evolution,” according to a Berkeley Lab statement. To help with the DESI project, Berkeley researchers created the following detailed 3D animation showing dust that spans thousands of light-years in our home galaxy. The study was published this week in The Astrophysical Journal. Here’s the animation: “The light from those distant galaxies travels for billions of years before we see it,” said Edward F. Schlafly in the Berkeley Lab statement. “But in the last thousand years of its journey toward us, a few percent of that light is absorbed and scattered by dust in our own galaxy. We need to correct for that.” Schlafly and his team created the 3D map (seen above) fly-through of the Milky Way by using data composed of separate sky surveys from telescopes on Maui and in New Mexico, resulting in 3D dust maps “many times better than anything that previously existed,” Schlafly said. The Berkeley scientists are looking to create higher quality dust maps which will, in turn, provide them with improved details about space dust properties. “This can give us a much more precise gauge of distances to faraway stars in the Milky Way, like a galactic GPS,” they said.
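Correcting for the "few percent" of light lost to Galactic dust, as Schlafly describes above, typically means subtracting a wavelength-dependent extinction from each object's observed magnitude, with the extinction scaled from the reddening E(B-V) that a dust map reports along that line of sight. The sketch below applies that standard correction; the reddening value, the band coefficients, and the magnitudes are illustrative placeholders, not numbers from the Berkeley Lab map or the DESI pipeline.

```python
# Illustrative foreground-dust correction for one extragalactic source.
# E(B-V), the band coefficients, and the magnitudes are placeholder values, not numbers from the new 3D map.
ebv = 0.02                     # reddening along a hypothetical sight line, in magnitudes
R_V = 3.1                      # conventional Milky Way value; A_V = R_V * E(B-V)
print(f"Visual-band extinction A_V ≈ {R_V * ebv:.3f} mag")

# Approximate A_band / E(B-V) extinction coefficients for a few optical bands at R_V = 3.1.
band_coefficients = {"g": 3.3, "r": 2.3, "i": 1.7}
observed_mag = {"g": 20.10, "r": 19.65, "i": 19.40}   # made-up observed magnitudes

for band, m_obs in observed_mag.items():
    a_band = band_coefficients[band] * ebv            # extinction in this band, magnitudes
    m_corrected = m_obs - a_band                      # what the source would look like without foreground dust
    lost = 1 - 10 ** (-0.4 * a_band)                  # fraction of the light absorbed or scattered
    print(f"{band}: A = {a_band:.3f} mag, corrected m = {m_corrected:.2f}, ~{lost:.1%} of light lost to dust")
```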


News Article | April 21, 2017
Site: www.materialstoday.com

In a surprising new discovery, alpha-tin, commonly called gray tin, exhibits a novel electronic phase when its crystal structure is strained, putting it in a rare new class of three-dimensional (3D) materials called topological Dirac semi-metals (TDSs). Only two other TDS materials are known to exist, discovered as recently as 2013. Alpha-tin now joins this class as its only simple-element member. This discovery, which is reported in a paper in Physical Review Letters, holds promise for novel physics and many potential technological applications. It is the work of Caizhi Xu, a physics graduate student at the University of Illinois at Urbana-Champaign, working under Tai-Chang Chiang and in collaboration with scientists at the Advanced Light Source at the Lawrence Berkeley National Laboratory and six other institutions internationally. TDSs exhibit electronic properties akin to those found at the surface of the now much-studied topological insulators (TIs). At the surfaces of TIs, electrons are able to conduct freely, like in a metal, while the ‘bulk’ or interior acts as an insulator. The surface electrons behave as two-dimensional (2D) massless, spin-polarized Dirac fermions that are robust against non-magnetic impurities, giving them potential applications in spintronic devices and fault-tolerant quantum computing. By contrast, the bulk electrons in TDSs can behave as massless Dirac fermions in all three dimensions, which leads to additional possibilities for novel physical behaviors. "TDSs are of profound interest to condensed matter physicists, primarily because they exhibit a number of novel physical properties, including ultrahigh carrier mobility, giant linear magnetoresistance, chiral anomaly and novel quantum oscillations," explains Xu. "Secondly, this class of materials can realize many interesting topological phases – under controlled conditions, the material can undergo phase transitions and can become a topological insulator, a Weyl semimetal or a topological superconductor." Tin has two well-known allotropes: at 13.2°C and above, white tin, or beta-tin, is metallic; below that temperature, the atomic structure of tin transitions, and the material becomes gray tin, or alpha-tin, which is semi-metallic. In thin films grown on a substrate such as indium antimonide (InSb), however, the transition temperature of tin goes up to 200°C, allowing alpha-tin to remain stable well above room temperature. Normally, alpha-tin's diamond-cubic crystal structure exhibits an ordinary semi-metallic phase – and the material has no common uses at the present time. In fact, gray tin can be problematic in many applications that involve tin – the so-called ‘tin pest’ problem involves the formation of gray tin that can cause disintegration of parts containing white tin. In their experiment, Xu and his colleagues engineered a strain in the material by growing layers of alpha-tin on a substrate of InSb, which has a slightly different lattice constant. "That lattice mismatch leads to strain, or compression, in the alpha-tin," Xu says. "It was believed that strain would open a band gap in gray tin and turn it into a TI. In a few recent studies researchers observed topological surface states in strained tin, but they didn't observe the strain-induced band gap because they were not able to access the conduction band. In this study, we used potassium doping and with this simple method were able to reach the conductance band. 
We were able to see the gapless and linear band dispersion that is the hallmark of a Dirac semi-metal." "This discovery is kind of unexpected. I decided to study the material because of its known TI phase. Once I dug into the experimental results and performed some theoretical calculations, what I found is that alpha-tin under a compressive strain is not an insulator, as had been thought. It turns out to be a Dirac semi-metal. Our calculations also show that it is only under a tensile strain that alpha-tin becomes a TI." Chiang believes these findings will open up new avenues of research. "Caizhi Xu's work illustrates that interesting new physics can still be found in simple common materials, such as gray tin, which has been known and studied for decades," he says. "It's clear from this study that strain engineering can open up many possibilities. My group is currently exploring a different way to apply strain, by mechanically stretching a sample. The strain will be uniaxial – along one direction only – and it will be tunable, but limited by sample breakage." Mankind has extracted and used tin in alloys since the Bronze Age, around 3000 BC. Before the advent of aluminum cans, tin cans, which were actually steel lined with tin, were used for preserving food. This new discovery could allow alpha-tin to be a highly useful material in future technologies. "Potential applications of alpha-tin as a topological Dirac semi-metal could include taking advantage of its high carrier mobility to generate ultrafast electronic devices. Additionally, the giant magnetoresistance could be useful in developing ultra-compact storage devices, like computer hard disks," suggests Xu. "Furthermore, this material could be a platform for further fundamental research related to optical properties, or to transport properties, including superconductivity. There is even potential that it could be used as a platform to realize Majorana fermions. I believe our new finding will be of interest to many physicists." This story is adapted from material from the University of Illinois at Urbana-Champaign, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.
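The size of the strain described above can be estimated from the lattice mismatch between film and substrate, strain = (a_substrate - a_film) / a_film. The lattice constants in this Python sketch are approximate literature values (alpha-Sn near 6.489 angstroms, InSb near 6.479 angstroms) used only to show the order of magnitude; they are not taken from the paper.

# Order-of-magnitude estimate of epitaxial strain from lattice mismatch.
# Lattice constants are approximate literature values, not values from the study.
a_film = 6.489       # alpha-Sn lattice constant, angstroms (approx.)
a_substrate = 6.479  # InSb lattice constant, angstroms (approx.)

in_plane_strain = (a_substrate - a_film) / a_film
print(f"In-plane strain: {100 * in_plane_strain:.2f}%")  # ~ -0.15%, i.e. compressive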


News Article | April 17, 2017
Site: www.eurekalert.org

In a surprising new discovery, alpha-tin, commonly called gray tin, exhibits a novel electronic phase when its crystal structure is strained, putting it in a rare new class of 3D materials called topological Dirac semimetals (TDSs). Only two other TDS materials are known to exist, discovered as recently as 2013. Alpha-tin now joins this class as its only simple-element member. This discovery holds promise for novel physics and many potential applications in technology. The findings are the work of Caizhi Xu, a physics graduate student at the University of Illinois at Urbana-Champaign, working under U. of I. Professor Tai-Chang Chiang and in collaboration with scientists at the Advanced Light Source at the Lawrence Berkeley National Laboratory and six other institutions internationally. TDSs exhibit electronic properties akin to those of the now much-studied topological insulators (TIs) on their surfaces. The surfaces of TIs allow electrons to conduct freely like a metal, while the "bulk" or interior behaves as an insulator. The surface electrons behave as 2D massless spin-polarized Dirac fermions that are robust against non-magnetic impurities, which yields potential applications in spintronic devices and fault-tolerant quantum computing. By contrast, the bulk electrons in TDSs behave as massless Dirac fermions in all three dimensions, which leads to additional possibilities for novel physical behaviors. Xu explains, "TDSs are of profound interest to condensed matter physicists, primarily because they exhibit a number of novel physical properties, including ultrahigh carrier mobility, giant linear magnetoresistance, chiral anomaly, and novel quantum oscillations. Secondly, this class of materials can realize many interesting topological phases--under controlled conditions, the material can undergo phase transitions and can become a topological insulator, a Weyl semimetal, or a topological superconductor." Tin has two well-known allotropes: at 13.2? Celsius and above, white tin, or beta-tin, is metallic. Below that temperature, the atomic structure of tin transitions, and the material becomes gray tin, or alpha-tin, which is semi-metallic. In thin films grown on a substrate such as indium antimonide (InSb), however, the transition temperature of tin goes up to 200? C, which means alpha-tin remains stable well above room temperature. Normally, alpha-tin's diamond-cubic crystal structure exhibits an ordinary semi-metallic phase--and the material has no common uses at the present time. In fact, gray tin can be problematic in many applications that involve tin - the so-called "tin pest" problem is the formation of gray tin that causes disintegration of parts containing white tin. In their experiment, Xu et al. engineered a strain on the material by growing alpha-tin samples in layers on a substrate of another crystalline material, InSb, which has a slightly different lattice constant. "That lattice mismatch leads to strain, or compression, in the alpha-tin," Xu goes on to explain. "It was believed that strain would open a band gap in gray tin and turn it into a TI. In a few recent studies researchers observed topological surface states in strained tin, but they didn't observe the strain-induced band gap because they were not able to access the conduction band. In this study, we used potassium doping and with this simple method were able to reach the conductance band. We were able to see the gapless and linear band dispersion that is the hallmark of a Dirac semimetal. 
"This discovery is kind of unexpected. I decided to study the material because of its known TI phase. Once I dug into the experimental results and performed some theoretical calculations, what I found is that alpha-tin under a compressive strain is not an insulator, as had been thought. It turns out to be a Dirac semimetal. Our calculations also show that it is only under a tensile strain that alpha-tin becomes a TI." Chiang believes these findings will open new avenues of research: "Caizhi Xu's work illustrates that interesting new physics can still be found in simple common materials, such as gray tin, which has been known and studied for decades." "It's clear from this study that strain engineering can open up many possibilities," Chiang continues. "My group is currently exploring a different way to apply strain, by mechanically stretching a sample. The strain will be uniaxial--along one direction only--and it will be tunable, but limited by sample breakage." Mankind has extracted and used tin in alloys since the Bronze Age, c. 3000 BC. Before the advent of aluminum cans, tin cans, which were actually steel lined with tin, were used in preservation of food. With this discovery, alpha-tin may prove a highly useful material in future technologies. Xu shares, "Potential applications of alpha-tin as a topological Dirac semimetal could include taking advantage of its high carrier mobility to generate ultrafast electronic devices. Additionally, the giant magneto resistance could be useful in developing ultra-compact storage devices, like computer hard disks. "Furthermore, this material could be a platform for further fundamental research related to optical properties, or to transport properties, including superconductivity. There is even potential that it could be used as a platform to realize Majorana fermions. I believe our new finding will be of interest to many physicists." These findings are published in the April 4, 2017 Physical Review Letters, in the article "Elemental topological Dirac semimetal α-Sn on InSb," which is highlighted as a PRL Editor's Suggestion. This research is supported by the U. S. Department of Energy and the National Science Foundation, and by various international institutions. The samples were grown at Lawrence Berkeley National Laboratory's Advanced Light Source, which is supported by the U.S. Department of Energy.


In three separate studies appearing this month in Nature, Science Advances and Nature Materials, UCI researchers and colleagues from UC Berkeley, Lawrence Berkeley National Laboratory, Princeton University, Fudan University and the University of Maryland explored the physics behind the 2-D states of novel materials and determined they could push computers to new heights of speed and power. The common threads running through the papers are that the research is conducted at extremely cold temperatures and that the signal carriers in all three studies are not electrons - as with traditional silicon-based technologies - but Dirac or Majorana fermions, particles without mass that move at nearly the speed of light. "Finally, we can take exotic, high-end theories in physics and make something useful," said UCI associate professor of physics & astronomy Jing Xia, a corresponding author on two of the studies. "We're exploring the possibility of making topological quantum computers [currently theoretical] for the next 100 years." One of the key challenges of such research is handling and analyzing miniscule material samples, just two atoms thick, several microns long and a few microns across. Xia's lab at UCI is equipped with a fiber-optic Sagnac interferometer microscope that he built. (The only other one in existence is at Stanford University, assembled by Xia when he was a graduate student there.) Calling it the most sensitive magnetic microscope in the world, Xia compares it to a telescope that an ornithologist in Irvine could use to inspect the eye of a bird in New York. "This machine is the ideal measurement tool for these discoveries," said UCI graduate student Alex Stern, lead author on two of the papers. "It's the most accurate way to optically measure magnetism in a material." In a study to be published April 24 in Nature, the researchers detail their observation - via the Sagnac interferometer - of magnetism in a microscopic flake of chromium germanium telluride. The compound, which they created, was viewed at minus 387 degrees Fahrenheit. CGT is a cousin of graphene, a superthin atomic carbon film. Since its discovery, graphene has been considered a potential replacement for silicon in next-generation computers and other devices because of the speed at which electronic signals skitter across its almost perfectly flat surface. But there's a catch: Certain computer components, such as memory and storage systems, need to be made of materials that have both electronic and magnetic properties. Graphene has the former but not the latter. CGT has both. His lab also used the Sagnac interferometer for a study published in Science Advances examining what happens at the precise moment bismuth and nickel are brought into contact with one another - again at a very low temperature (in this case, minus 452 degrees Fahrenheit). Xia said his team found at the interface between the two metals "an exotic superconductor that breaks time-reversal symmetry." "Imagine you turn back the clock and a cup of red tea turns green. Wouldn't that make this tea very exotic? This is indeed exotic for superconductors," he said. "And it's the first time it's been observed in 2-D materials." The signal carriers in this 2-D superconductor are Majorana fermions, which could be used for a braiding operation that theorists believe is vital to quantum computing. "The issue now is to try to achieve this at normal temperatures," Xia said. The third study shows promise in overcoming that hurdle. 
In 2012, Xia's lab delivered to the Defense Advanced Research Projects Agency a radio-frequency oscillator built around samarium hexaboride. The substance is an insulator on the inside but allows signal-carrying current made of Dirac fermions to flow freely on its 2-D surface. Using a special apparatus built in the Xia lab - also one of only two in the world - UCI researchers applied tensile strain to the samarium hexaboride sample and demonstrated in the Nature Materials study that they could stabilize the 2-D surface state at minus 27 degrees Fahrenheit. "Believe it or not, that's hotter than some parts of Canada," Xia quipped. "This work is a big step toward developing future quantum computers at nearly room temperature." More information: Cheng Gong et al, Discovery of intrinsic ferromagnetism in two-dimensional van der Waals crystals, Nature (2017). DOI: 10.1038/nature22060
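Because the article quotes its temperatures in Fahrenheit, a quick conversion puts them in the kelvin units physicists typically use: roughly 40 K for the chromium germanium telluride measurement, about 4 K for the bismuth-nickel interface, and about 240 K for the strained samarium hexaboride. A minimal Python check of that arithmetic:

# Convert the Fahrenheit temperatures quoted in the article to kelvin.
def fahrenheit_to_kelvin(t_f):
    return (t_f - 32.0) * 5.0 / 9.0 + 273.15

for label, t_f in [("CGT magnetism measurement", -387),
                   ("Bi/Ni interface superconductor", -452),
                   ("strained samarium hexaboride", -27)]:
    print(f"{label}: {t_f} F is about {fahrenheit_to_kelvin(t_f):.0f} K")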


News Article | April 26, 2017
Site: phys.org

Lawrence Livermore National Laboratory (LLNL) physicist Felicie Albert led an international team pursuing this new regime in laser research, which was described in a Physical Review Letters (PRL) paper published online March 31. Albert and the team spent more than two years experimenting with new ways to generate X-rays capable of probing the size, density, pressure and composition of highly transient states of matter, such as those found in the cores of planets and in fusion plasmas. Plasmas make up 99 percent of the known universe. The researchers studied betatron X-ray radiation, emitted when electrons are accelerated to relativistic energies and wiggle in the plasma wave produced by the interaction of a short, intense laser pulse with a gas. Traditionally, this source has been well studied for laser pulses with femtosecond (quadrillionth of a second)-long durations. To study betatron X-ray emission at the intensities and pulse durations relevant to larger-scale laser facilities, such as LLNL's Advanced Radiographic Capability (ARC) laser, the researchers conducted an experiment on the Titan Laser at the Laboratory's Jupiter Laser Facility. There they observed betatron X-ray radiation driven by much longer, picosecond-duration laser pulses. "For me a picosecond is forever," Albert joked. While picoseconds measure time in trillionths of a second, that's slow to a researcher who prefers even shorter laser pulses. The experimental work shows that the new radiation source holds great promise for undertaking applications at international large-scale laser facilities, where it potentially could be used for X-ray radiography and phase contrast imaging of laser-driven shocks, absorption spectroscopy and opacity measurements. Other LLNL colleagues include Nuno Lemos, Brad Pollock, Clement Goyon, Arthur Pak, Joseph Ralph and John Moody, along with collaborators from the University of California-Los Angeles, the SLAC National Accelerator Laboratory, Lawrence Berkeley National Laboratory, the University of California-Berkeley and the University of Lisbon in Portugal. Albert noted that the results did not reveal themselves immediately as in some experiments, and that it took the team a lot of analysis and hard work to uncover the new regime. They note in their paper the wide variety of potential uses of the technology: Betatron X-ray radiation driven by short-pulse lasers has been used for biological and medical purposes, such as X-ray phase contrast imaging of insects and hard X-ray radiography of bone. Its unique properties also make it suitable for studying the dynamics of high-energy-density plasmas and warm dense matter, a state at near-solid densities and temperatures found in the cores of giant planets like Jupiter and in inertial confinement fusion plasmas. More information: F. Albert et al. Observation of Betatron X-Ray Radiation in a Self-Modulated Laser Wakefield Accelerator Driven with Picosecond Laser Pulses, Physical Review Letters (2017). DOI: 10.1103/PhysRevLett.118.134801


Discovery of new transparent thin film material could improve electronics and solar cells: Conductivity is highest-ever for thin film oxide semiconductor material Abstract: A team of researchers, led by the University of Minnesota, have discovered a new nano-scale thin film material with the highest-ever conductivity in its class. The new material could lead to smaller, faster, and more powerful electronics, as well as more efficient solar cells. The discovery is being published today in Nature Communications, an open access journal that publishes high-quality research from all areas of the natural sciences. Researchers say that what makes this new material so unique is that it has a high conductivity, which helps electronics conduct more electricity and become more powerful. But the material also has a wide bandgap, which means light can easily pass through the material making it optically transparent. In most cases, materials with wide bandgap, usually have either low conductivity or poor transparency. "The high conductivity and wide bandgap make this an ideal material for making optically transparent conducting films which could be used in a wide variety of electronic devices, including high power electronics, electronic displays, touchscreens and even solar cells in which light needs to pass through the device," said Bharat Jalan, a University of Minnesota chemical engineering and materials science professor and the lead researcher on the study. Currently, most of the transparent conductors in our electronics use a chemical element called indium. The price of indium has gone up tremendously in the past few years significantly adding to the cost of current display technology. As a result, there has been tremendous effort to find alternative materials that work as well, or even better, than indium-based transparent conductors. In this study, researchers found a solution. They developed a new transparent conducting thin film using a novel synthesis method, in which they grew a BaSnO3 thin film (a combination of barium, tin and oxygen, called barium stannate), but replaced elemental tin source with a chemical precursor of tin. The chemical precursor of tin has unique, radical properties that enhanced the chemical reactivity and greatly improved the metal oxide formation process. Both barium and tin are significantly cheaper than indium and are abundantly available. "We were quite surprised at how well this unconventional approach worked the very first time we used the tin chemical precursor," said University of Minnesota chemical engineering and materials science graduate student Abhinav Prakash, the first author of the paper. "It was a big risk, but it was quite a big breakthrough for us." Jalan and Prakash said this new process allowed them to create this material with unprecedented control over thickness, composition, and defect concentration and that this process should be highly suitable for a number of other material systems where the element is hard to oxidize. The new process is also reproducible and scalable. They further added that it was the structurally superior quality with improved defect concentration that allowed them to discover high conductivity in the material. They said the next step is to continue to reduce the defects at the atomic scale.
"Even though this material has the highest conductivity within the same materials class, there is much room for improvement in addition, to the outstanding potential for discovering new physics if we decrease the defects. That's our next goal," Jalan said. ### The research was funded by the National Science Foundation (NSF), Air Force Office of Scientific Research (AFOSR), and U.S. Department of Energy. In addition to Jalan and Prakash, the research team included Peng Xu, University of Minnesota chemical engineering and materials science graduate student; Cynthia S. Lo, Washington University assistant professor; Alireza Faghaninia, former graduate student at Washington University; Sudhanshu Shukla, researcher at Lawrence Berkeley National Laboratory and Nanyang Technological University; and Joel W. Ager III, Lawrence Berkeley National Laboratory and University of California Berkeley adjunct professor. For more information, please click If you have a comment, please us. Issuers of news releases, not 7th Wave, Inc. or Nanotechnology Now, are solely responsible for the accuracy of the content.


A team of researchers, led by the University of Minnesota, have discovered a new nano-scale thin film material with the highest-ever conductivity in its class. The new material could lead to smaller, faster, and more powerful electronics, as well as more efficient solar cells. The discovery is being published today in Nature Communications, an open access journal that publishes high-quality research from all areas of the natural sciences. Researchers say that what makes this new material so unique is that it has a high conductivity, which helps electronics conduct more electricity and become more powerful. But the material also has a wide bandgap, which means light can easily pass through the material making it optically transparent. In most cases, materials with wide bandgap, usually have either low conductivity or poor transparency. "The high conductivity and wide bandgap make this an ideal material for making optically transparent conducting films which could be used in a wide variety of electronic devices, including high power electronics, electronic displays, touchscreens and even solar cells in which light needs to pass through the device," said Bharat Jalan, a University of Minnesota chemical engineering and materials science professor and the lead researcher on the study. Currently, most of the transparent conductors in our electronics use a chemical element called indium. The price of indium has generally gone up over the last two decades, which has added to the cost of current display technology. As a result, there has been tremendous effort to find alternative materials that work as well, or even better, than indium-based transparent conductors. In this study, researchers found a solution. They developed a new transparent conducting thin film using a novel synthesis method, in which they grew a BaSnO3 thin film (a combination of barium, tin and oxygen, called barium stannate), but replaced elemental tin source with a chemical precursor of tin. The chemical precursor of tin has unique, radical properties that enhanced the chemical reactivity and greatly improved the metal oxide formation process. Both barium and tin are significantly cheaper than indium and are abundantly available. "We were quite surprised at how well this unconventional approach worked the very first time we used the tin chemical precursor," said University of Minnesota chemical engineering and materials science graduate student Abhinav Prakash, the first author of the paper. "It was a big risk, but it was quite a big breakthrough for us." Jalan and Prakash said this new process allowed them to create this material with unprecedented control over thickness, composition, and defect concentration and that this process should be highly suitable for a number of other material systems where the element is hard to oxidize. The new process is also reproducible and scalable. They further added that it was the structurally superior quality with improved defect concentration that allowed them to discover high conductivity in the material. They said the next step is to continue to reduce the defects at the atomic scale. "Even though this material has the highest conductivity within the same materials class, there is much room for improvement in addition, to the outstanding potential for discovering new physics if we decrease the defects. That's our next goal," Jalan said. The research was funded by the National Science Foundation (NSF), Air Force Office of Scientific Research (AFOSR), and U.S. Department of Energy. 
In addition to Jalan and Prakash, the research team included Peng Xu, University of Minnesota chemical engineering and materials science graduate student; Cynthia S. Lo, Washington University assistant professor; Alireza Faghaninia, former graduate student at Washington University; Sudhanshu Shukla, researcher at Lawrence Berkeley National Laboratory and Nanyang Technological University; and Joel W. Ager III, Lawrence Berkeley National Laboratory and University of California Berkeley adjunct professor. To read the full paper, entitled "Wide Bandgap BaSnO3 Films with Room Temperature Conductivity Exceeding 10^4 S cm^-1," visit the Nature Communications website.
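For a sense of scale, the conductivity in the paper's title follows from the textbook relation sigma = n * e * mu. The carrier density and mobility in this Python sketch are hypothetical round numbers chosen only to show what combination lands near 10^4 S cm^-1; they are not the measured values from the study.

# sigma = n * e * mu, with hypothetical round numbers (not the paper's data).
E_CHARGE = 1.602176634e-19   # elementary charge, C

n = 3e20      # assumed carrier density, cm^-3
mu = 200.0    # assumed electron mobility, cm^2 V^-1 s^-1

sigma = n * E_CHARGE * mu    # S/cm
print(f"sigma ~ {sigma:.2e} S/cm")   # ~ 9.6e3 S/cm, i.e. of order 10^4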


News Article | April 30, 2017
Site: www.theguardian.com

It started with a crowdfunding startup, an investment from Prince, and the idea to help new solar companies tackle business challenges that can be hard to overcome on their own. Now, four years later, the idea has morphed into a group called Powerhouse, and notably, in a world flush with tech startups, it’s one of the only incubators out there focused on launching and growing solar companies. Powerhouse runs an accelerator and an incubator program. An accelerator typically provides a small amount of funding, free or low-cost office space, and networking opportunities with investors and customers for young companies that are still developing their first technology and business plans. Since its launch in 2013, Powerhouse has invested hundreds of thousands of dollars collectively into 15 startups, and this summer plans to welcome another few solar entrepreneurs into the program. The group’s incubator division rents office space to more established solar and energy startups across 15,000 sq ft and three floors in downtown Oakland, California. Sometimes the accelerator entrepreneurs graduate into rent-paying companies in the co-working space. Powerhouse now hosts about 15 companies and about 100 people across both groups. Its goal is simple. The organization wants to play a unique role in fostering a new wave of tech innovation in the solar market. Many of the Powerhouse companies are using software, data and the web to make selling or designing solar systems cheaper and easier. They rely on the advice and networking opportunities through Powerhouse to raise money, find customers or exit – through an initial public offering or acquisition. “Powerhouse gave us so much validation and credibility at the beginning, when we didn’t have much to show. It was just enough to get people to believe in us,” says Elena Lucas, the co-founder and CEO of UtilityAPI, an energy data startup. An earlier wave of solar startups was dominated by companies experimenting with different materials and designs for solar cells and panels. Many of those materials-focused solar startups failed in getting the desired technical performance despite large investments from the Bay Area’s venture capitalists. As the price of solar panels dropped dramatically in recent years, the new generation of entrepreneurs and startups are chipping away at other stubborn problems, such as shortening the time it takes to get permits or honing the sales pitch to homeowners. It’s like when fast internet connections finally got cheap and ubiquitous enough to attract the entrepreneurial-minded to build new websites and services on top of it. Tough challenges remain for solar startups. Big utilities and power companies, who are potential investors or customers, don’t generally have experience working with young, renewable energy companies. Meanwhile, US government funding for energy innovation is minimal, particularly with potential federal budget cuts looming and a lack of clean energy support in the White House. But as solar energy becomes cheaper, it’s attracting public and private investments worldwide, evidenced by the $116bn that flowed into solar projects, companies and technologies in 2016, according to Bloomberg New Energy Finance. “The ultimate mission of Powerhouse is to make solar energy the most accessible form of energy in the world,” says Emily Kirsch, co-founder of Powerhouse. 
Sitting on a bean bag in a nook of the seventh floor of Powerhouse’s headquarters, Kirsch says that despite the rise and success of Silicon Valley-style tech accelerators such as Y Combinator and Techstars, no one else has tried to do the same targeting only the solar industry: “We’re it so far.” The group’s model is showing some success, at least on a small scale, though it’s still early days. Powerhouse takes a small equity stake in its accelerator companies and makes money if they get acquired or go public. Currently Powerhouse gets the bulk of its investment money from a combination of grants, corporate sponsors, like SolarCity and SunPower, and office space rental fees. It’s considering raising money from angel investors so that it could make larger investments and in more companies. None of the companies in its portfolio has gone public or been bought yet, but some of them have attracted funding since going through the accelerator program and increased the value of the companies in the process. Kirsch says the top startups in the accelerator program have seen their values increase by as many as 40 times. Four of the startups in its incubator program have been acquired so far, says Kirsch, though the company doesn’t take a stake in those. But their exits help to build Powerhouse’s reputation among entrepreneurs and investors. Kirsch has been involved since day one. Years ago, when Kirsch was working for Van Jones, an environmental and human rights activist who briefly served as a green jobs adviser to former President Obama, he asked her if she would be interested in helping the then new startup called Solar Mosaic, which provides financing to install solar panels on rooftops, pilot a solar program in Oakland. Meanwhile Jones’s friend Prince was looking to invest a quarter of a million dollars into solar projects in Oakland, and ended up funding Solar Mosaic’s first four solar buildings. Based on that experience – connecting a young solar startup with partners and capital – Kirsch and Danny Kennedy, a former Greenpeace campaign manager who co-founded solar installer Sungevity, launched a company to try to see if the model could work for many more young solar companies. They changed the name of the company, SFunCube, to Powerhouse two years ago. On a visit earlier this year to Powerhouse’s headquarters, dozens of entrepreneurs were heads down working on their products and mingling with potential partners during a weekly open house event. The Powerhouse team connected UtilityAPI with its first investor, Better Ventures, as well as an adviser, Jon Wellinghoff, who is a former chairman of the Federal Energy Regulatory Commission. After going through the accelerator program, UtilityAPI, which creates software to collect data about a building’s energy use and deliver it to customers such as solar or energy storage installers, has grown to nine people from the two co-founders. It now has an office space on the sixth floor of Powerhouse after previously using shared desks. Lucas says the co-working space served as a “brain trust” because all the entrepreneurs brought with them different types of expertise. That allowed her to get quick answers about energy policy or technical standards. Another accelerator program graduate, BrightCurrent, which works with big box retailers and solar companies on marketing solar panels and installation services, now employs 120 people and became profitable last year, says John Bourne, the co-founder and CEO of the five-year-old company. 
Bourne says Powerhouse helped his company connect with investors (like Better Ventures) and customers and hone his sales pitch. During the accelerator program, Bourne met with Kirsch or Kennedy once a week to walk through BrightCurrent’s plans and brainstorm for ways to overcome obstacles. “It can be really isolating, lonely and tough being an entrepreneur. You’re working alone and trying to build something,” Bourne says. When he joined, Powerhouse was operating out of Sungevity’s offices and, he says: “It was a warm great environment, and I found people who cared about what I cared about. That was a huge win for me.” Solar Mosaic’s co-founder and CEO, Billy Parish, says that his company – which is now six years old and employs more than 150 people – has partnered with at least three of the Powerhouse startups on projects, including UtilityAPI, Sunible and BrightCurrent. “Powerhouse is one of the hubs of the solar ecosystem and they are helping bring breakthrough ideas for the industry into existence. Being close to them keeps us in touch with those new ideas and entrepreneurs,” says Parish. In total numbers, Powerhouse is still pretty small. Its companies have contributed to the installation of 242 megawatts of solar, employ 386 people, and have generated $52m in revenue. That’s probably the group’s biggest drawback – it’s limited, it’s very narrowly focused and it’s still operating on a tiny scale. But they’re part of a larger movement to invest and nurture new companies in low-carbon energy. Other companies running energy-related accelerator programs include Cyclotron Road, which has partnered with Lawrence Berkeley National Laboratory, and Otherlab in the Mission District of San Francisco. Last year, Bill Gates and a group of investors launched Breakthrough Energy Ventures to spend $1bn on early stage breakthroughs in energy. Powerhouse co-founder Danny Kennedy, who now heads up the California Clean Energy Fund, describes the importance of ventures like Powerhouse and the California Clean Energy Fund like this: “We need early-stage energy investing programs now more than ever to enable the energy transition. It’s critical.”


Martin L.W.,University of Illinois at Urbana - Champaign | Chu Y.-H.,National Chiao Tung University | Ramesh R.,University of California at Berkeley | Ramesh R.,Lawrence Berkeley National Laboratory
Materials Science and Engineering R: Reports | Year: 2010

The growth and characterization of functional oxide thin films that are ferroelectric, magnetic, or both at the same time are reviewed. The evolution of synthesis techniques and how advances in in situ characterization have enabled significant acceleration in improvements to these materials are described. Methods for enhancing the properties of functional materials or creating entirely new functionality at interfaces are covered, including strain engineering and layering control at the atomic-layer level. Emerging applications of these functional oxides, such as achieving electrical control of ferromagnetism, and the future of these complex functional oxides are discussed. © 2010 Elsevier B.V. All rights reserved.


Mony L.,University of California at Berkeley | Berger T.K.,University of California at Berkeley | Isacoff E.Y.,University of California at Berkeley | Isacoff E.Y.,Lawrence Berkeley National Laboratory
Nature Structural and Molecular Biology | Year: 2015

The Hv1 proton channel is unique among voltage-gated channels for containing the pore and gate within its voltage-sensing domain. Pore opening has been proposed to include assembly of the selectivity filter between an arginine (R3) of segment S4 and an aspartate (D1) of segment S1. We determined whether gating involves motion of S1, using Ciona intestinalis Hv1. We found that channel opening is concomitant with solution access to the pore-lining face of S1, from the cytoplasm to deep inside the pore. Voltage- and patch-clamp fluorometry showed that this involves a motion of S1 relative to its surroundings. S1 motion and the S4 motion that precedes it are each influenced by residues on the other helix, thus suggesting a dynamic interaction between S1 and S4. Our findings suggest that the S1 of Hv1 has specialized to function as part of the channel's gate. © 2015 Nature America, Inc. All rights reserved.


Cappa C.D.,University of California at Davis | Wilson K.R.,Lawrence Berkeley National Laboratory
Atmospheric Chemistry and Physics | Year: 2011

Vacuum Ultraviolet (VUV) photoionization mass spectrometry has been used to measure the evolution of chemical composition for two distinct organic aerosol types as they are passed through a thermodenuder at different temperatures. The two organic aerosol types considered are primary lubricating oil (LO) aerosol and secondary aerosol from the α-pinene + O3 reaction (αP). The evolution of the VUV mass spectra for the two aerosol types with temperature is observed to differ dramatically. For LO particles, the spectra exhibit distinct changes with temperature in which the lower peaks, corresponding to compounds with higher vapor pressures, disappear more rapidly than the high peaks. In contrast, the αP aerosol spectrum is essentially unchanged by temperature even though the particles experience significant mass loss due to evaporation. The variations in the LO spectra are found to be quantitatively in agreement with expectations from absorptive partitioning theory whereas the αP spectra suggest that the evaporation of αP derived aerosol appears to not be governed by partitioning theory. We postulate that this difference arises from diffusivity within the αP particles being sufficiently slow that they do not exhibit the expected liquid-like behavior and perhaps exist in a glassy state. To reconcile these observations with decades of aerosol growth measurements, which indicate that OA formation is described by equilibrium partitioning, we present a conceptual model wherein the secondary OA is formed and then rapidly converted from an absorbing form to a non-absorbing form. The results suggest that, although OA growth may be describable by equilibrium partitioning theory, the properties of organic aerosol once formed may differ significantly from the properties determined in the equilibrium framework. © 2011 Author(s).


Luo Y.,Beijing Normal University | Wang L.-W.,Lawrence Berkeley National Laboratory
ACS Nano | Year: 2010

The electronic structures of the CdSe/CdS core-shell nanorods are systemically investigated by large-scale first-principles quality calculations. The effects of band alignment, quantum confinement, piezoelectric field, and dipole moments are analyzed and delineated by comparing the results of systems with or without some of these attributes. We found complicated interplays between these effects in determining the nanorod band gap and electron hole wave function localizations. The hole wave function is found to be localized inside the CdSe core, while the electron wave function is localized in the CdS shell, with its distance to the CdSe core depending on the surface passivation. The permanent dipole moment induced by different surface passivations can change the electron hole separation, while the piezoelectric effect plays a relatively minor role. Finally, we demonstrate that it is straightforward to manipulate the nanorod electronic structure by changing its CdSe core position. © 2010 American Chemical Society.


Nord A.S.,University of California at Davis | Pattabiraman K.,University of California at San Francisco | Visel A.,Lawrence Berkeley National Laboratory | Visel A.,U.S. Department of Energy | And 2 more authors.
Neuron | Year: 2015

The forebrain is the seat of higher-order brain functions, and many human neuropsychiatric disorders are due to genetic defects affecting forebrain development, making it imperative to understand the underlying genetic circuitry. Recent progress now makes it possible to begin fully elucidating the genomic regulatory mechanisms that control forebrain gene expression. Herein, we discuss the current knowledge of how transcription factors drive gene expression programs through their interactions with cis-acting genomic elements, such as enhancers; how analyses of chromatin and DNA modifications provide insights into gene expression states; and how these approaches yield insights into the evolution of the human brain. © 2015 Elsevier Inc.


Cappa C.D.,University of California at Davis | Wilson K.R.,Lawrence Berkeley National Laboratory
Atmospheric Chemistry and Physics | Year: 2012

A new model of secondary organic aerosol (SOA) formation is developed that explicitly takes into account multi-generational oxidation as well as fragmentation of gas-phase compounds, and assumes equilibrium gas-particle partitioning. The model framework requires specification of a limited number of tunable parameters to describe the kinetic evolution of SOA mass, the average oxygen-to-carbon atomic ratio and the mean particle volatility as oxidation proceeds. These parameters describe (1) the relationship between oxygen content and volatility, (2) the probability of fragmentation and (3) the amount of oxygen added per reaction. The time-evolution and absolute value of the simulated SOA mass depends sensitively on all tunable parameters. Of the tunable parameters, the mean O : C is most sensitive to the oxygen/volatility relationship, exhibiting only a weak dependence on the other relationships. The model mean particle O : C produced from a given compound is primarily controlled by the number of carbon atoms comprising the SOA precursor, with some sensitivity to the specified oxygen/volatility relationship. The model is tested against laboratory measurements of time-dependent SOA formation from the photooxidation of α-pinene and n-pentadecane and performs well (after tuning). The model can also accurately simulate the carbon-number dependence of aerosol yields previously observed for oxidation of straight-chain alkanes. This model may provide a generalized framework for the interpretation of laboratory SOA formation experiments in which explicit consideration of multiple-generations of products is required, which is true for all photo-oxidation experiments. © 2012 Author(s).
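The equilibrium gas-particle partitioning assumed in this framework is commonly written as F_i = 1 / (1 + C_i*/C_OA), where C_i* is a compound's effective saturation concentration and C_OA is the organic aerosol mass loading. The Python sketch below evaluates that standard relation for a few illustrative volatilities; it is not code from the paper's model, and the loading value is an assumption.

# Standard absorptive-partitioning relation (illustrative, not the paper's model code).
def particle_fraction(c_star, c_oa):
    """Fraction of a compound in the particle phase for effective saturation
    concentration c_star and organic aerosol loading c_oa (same units, ug/m^3)."""
    return 1.0 / (1.0 + c_star / c_oa)

c_oa = 10.0  # assumed organic aerosol loading, ug/m^3
for c_star in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"C* = {c_star:7.1f} ug/m^3 -> particle-phase fraction = {particle_fraction(c_star, c_oa):.2f}")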


Wang F.,Peking University | Kivelson S.A.,Stanford University | Lee D.-H.,University of California at Berkeley | Lee D.-H.,Lawrence Berkeley National Laboratory
Nature Physics | Year: 2015

In common with other iron-based high-temperature superconductors, FeSe exhibits a transition to a nematic phase below 90 K in which the crystal rotation symmetry is spontaneously broken. However, the absence of strong low-frequency magnetic fluctuations near or above the transition has been interpreted as implying the primacy of orbital ordering. In contrast, we establish that quantum fluctuations of spin-1 local moments with strongly frustrated exchange interactions can lead to a nematic quantum paramagnetic phase consistent with the observations in FeSe. We show that this phase is a fundamental expression of the existence of a Berry's phase associated with the topological defects of a Néel antiferromagnet, in a manner analogous to that which gives rise to valence bond crystal order for spin-1/2 systems. We present an exactly solvable model realizing the nematic quantum paramagnetic phase, discuss its relation with the spin-1 J1-J2 model, and construct a field theory of the Landau-forbidden transition between the Néel state and this nematic quantum paramagnet. © 2015 Macmillan Publishers Limited. All rights reserved.


Doudna J.A.,Howard Hughes Medical Institute | Doudna J.A.,University of California at Berkeley | Doudna J.A.,Lawrence Berkeley National Laboratory
Annual Review of Biophysics | Year: 2013

Small RNA molecules regulate eukaryotic gene expression during development and in response to stresses including viral infection. Specialized ribonucleases and RNA-binding proteins govern the production and action of small regulatory RNAs. After initial processing in the nucleus by Drosha, precursor microRNAs (pre-miRNAs) are transported to the cytoplasm, where Dicer cleavage generates mature microRNAs (miRNAs) and short interfering RNAs (siRNAs). These double-stranded products assemble with Argonaute proteins such that one strand is preferentially selected and used to guide sequence-specific silencing of complementary target mRNAs by endonucleolytic cleavage or translational repression. Molecular structures of Dicer and Argonaute proteins, and of RNA-bound complexes, have offered exciting insights into the mechanisms operating at the heart of RNA-silencing pathways. Copyright © 2013 by Annual Reviews.


Catalan G.,Catalan Institution for Research and Advanced Studies | Catalan G.,Catalan Institute of Nanoscience and Nanotechnology | Seidel J.,Lawrence Berkeley National Laboratory | Seidel J.,University of California at Berkeley | And 4 more authors.
Reviews of Modern Physics | Year: 2012

Domains in ferroelectrics were considered to be well understood by the middle of the last century: They were generally rectilinear, and their walls were Ising-like. Their simplicity stood in stark contrast to the more complex Bloch walls or Néel walls in magnets. Only within the past decade and with the introduction of atomic-resolution studies via transmission electron microscopy, electron holography, and atomic force microscopy with polarization sensitivity has their real complexity been revealed. Additional phenomena appear in recent studies, especially of magnetoelectric materials, where functional properties inside domain walls are being directly measured. In this paper these studies are reviewed, focusing attention on ferroelectrics and multiferroics but making comparisons where possible with magnetic domains and domain walls. An important part of this review will concern device applications, with the spotlight on a new paradigm of ferroic devices where the domain walls, rather than the domains, are the active element. Here magnetic wall microelectronics is already in full swing, owing largely to the work of Cowburn and of Parkin and their colleagues. These devices exploit the high domain wall mobilities in magnets and their resulting high velocities, which can be supersonic, as shown by Kreines' and co-workers 30 years ago. By comparison, nanoelectronic devices employing ferroelectric domain walls often have slower domain wall speeds, but may exploit their smaller size as well as their different functional properties. These include domain wall conductivity (metallic or even superconducting in bulk insulating or semiconducting oxides) and the fact that domain walls can be ferromagnetic while the surrounding domains are not. © 2012 American Physical Society.


Haxton W.C.,University of California at Berkeley | Haxton W.C.,Lawrence Berkeley National Laboratory | Hamish Robertson R.G.,University of Washington | Serenelli A.M.,Institute Ciencias Del Espacio CSIC IEEC | Serenelli A.M.,Max Planck Institute for Astrophysics
Annual Review of Astronomy and Astrophysics | Year: 2013

We describe the current status of solar neutrino measurements and of the theory - both neutrino physics and solar astrophysics - employed in interpreting measurements. Important recent developments include Super-Kamiokande's determination of the ν-e elastic scattering rate for 8B neutrinos to 3%; the latest Sudbury Neutrino Observatory (SNO) global analysis in which the inclusion of low-energy data from SNO I and II significantly narrowed the range of allowed values for the neutrino mixing angle θ12; Borexino results for both the 7Be and proton-electron-proton (pep) neutrino fluxes, the first direct measurements constraining the rate of proton-proton (pp) I and pp II burning in the Sun; global reanalyses of solar neutrino data that take into account new reactor results on θ13; a new decadal evaluation of the nuclear physics of the pp chain and CNO cycle defining best values and uncertainties in the nuclear microphysics input to solar models; recognition of an emerging discrepancy between two tests of solar metallicity, helioseismological mappings of the sound speed in the solar interior, and analyses of the metal photoabsorption lines based on our best current description of the Sun's photosphere; a new round of standard solar model calculations optimized to agree either with helioseismology or with the new photospheric analysis; and, motivated by the solar abundance problem, the development of nonstandard, accreting solar models, in order to investigate possible consequences of the metal segregation that occurred in the proto-solar disk. We review this progress and describe how new experiments such as SNO+ could help us further exploit neutrinos as a unique probe of stellar interiors. Copyright ©2013 by Annual Reviews. All rights reserved.


News Article | August 31, 2016
Site: www.greencarcongress.com

Among the 43 small businesses participating in the newly announced second round of the US Department of Energy (DOE) Small Business Vouchers (SBV) pilot are 10 projects in the area of hydrogen and fuel cells. These selected projects are aimed at reducing the costs and improving the performance of hydrogen and fuel cell technologies, such as finding ways to lower costs of membrane electrode assemblies (MEAs), a critical component of fuel cell stacks and electrolyzers. Some of these collaborations will explore using lower-cost materials, reducing the quantity of pricey raw material required, and developing better and more precise methods in manufacturing MEAs. The ten companies receiving Round 2 SBVs for hydrogen and fuel cell projects include:
Altergy. Altergy will work with the National Renewable Energy Laboratory (NREL) to leverage lab expertise to reduce cost and improve performance in its proprietary fuel cell products. The objective of their work with NREL is to observe the fabrication of MEAs at NREL using non-proprietary methods, and to evaluate MEAs made at NREL with ETFECS-based catalyst to reduce costs and/or enhance durability.
American Fuel Cell. AFC will work with Oak Ridge National Laboratory (ORNL) to optimize deposition techniques for roll-to-roll direct coating of electrodes on anode and cathode gas diffusion media, leading to improved quality and lower-cost manufacturing for various fuel cell applications.
Garmor Inc. Garmor, working with NREL, will utilize existing bipolar plate (BPP) technology to develop a BPP with composites used to form microstructured surface features that improve BPP surface tension for water droplet control.
Ion Power. Ion Power will work with Los Alamos National Laboratory (LANL) to analyze various performance improvement mechanisms to improve the durability of its fuel cell catalyst layer.
NanoSonic. NanoSonic, working with LANL, will receive assistance in preparing membrane electrode assemblies and in fuel cell testing and validation of new anion exchange fuel cell membranes.
Nzyme2HC LLC. Nzyme2HC has a novel acellular biological approach to scalable, clean, and potentially low-cost H2 generation, combining minimal electric current, bacterially extracted hydrogenase, and industrial waste as the feedstocks and production materials. RFA award monies would go to building an end-to-end bench prototype at NREL for testing and metrics.
Oorja Fuel Cells. Oorja Fuel Cells will work with NREL to determine the efficacy and performance of the membrane electrode assembly (MEA) in Oorja's direct methanol fuel cell (DMFC) under its operating conditions and compare the results with the commercially available MEA used in the Oorja DMFC power system.
Opus 12. Opus 12 will receive assistance to develop a computational model of ion, product, reactant, and water transport within its PEM electrolyzer for the co-electrolysis of water and carbon dioxide to make syngas. The model will aid in increasing system performance to make Opus 12's syngas cost-competitive with traditional syngas sources. Opus 12 is working with Lawrence Berkeley National Laboratory.
Pajarito Powder. Pajarito will work with LANL to improve electrode structures and to implement electrode manufacturing techniques that allow full catalyst utilization. Ultimately, this work will increase customer confidence in the technology and accelerate commercial adoption.
Proton Energy Systems. Proton Energy Systems will work with NREL to develop, design and validate a small-scale advanced power converter to provide a pathway to reduced capital cost and improved efficiency for Proton's M-series electrolyzer, which offers a carbon-free source of hydrogen fuel or process gas.


News Article | August 18, 2016
Site: www.aweablog.org

A number of recent reports paint a bright future for the U.S. wind industry thanks to continuing cost declines, with reductions of more than 60 percent since 2009. This cost trajectory is attributable to, among other things, advanced manufacturing techniques, improved construction methods, larger turbine blades, economies of scale, and other advances in turbine technology. While manufacturers and construction companies continue to bring down the cost of building and installing wind turbines, innovative technology enables turbines to access better wind speeds at higher elevations, capturing more of the wind’s energy and turning it into electricity with greater efficiency. Wind energy has reached a pivotal cost position in the power sector. Many recent reports from a variety of consultants, research groups and banks highlight the positive outlook for the U.S. wind industry thanks to the compelling cost position of new wind projects compared to other forms of new power generation. This builds on several earlier reports that we’ve highlighted previously. Thanks to the low cost of wind combined with the current availability of the wind production tax credit (PTC), a number of utilities have recently announced plans for very large wind investments. Here’s a review of some of those reports, along with highlights of some of those utility announcements. In an equity research note dated June 30, Goldman Sachs concluded that “when compared to other incremental sources of power, wind provides the lowest cost source of new capacity.” The investment bank calculated the current wind levelized cost of electricity at $29/megawatt-hour (MWh), compared to $38/MWh for natural gas combined-cycle plants and $57/MWh for utility-scale solar PV projects. For reference, one MWh provides enough electricity to power a typical home for a month. Goldman expects this comparative cost advantage to drive “robust” wind development into 2020. Into the next decade, Goldman expects wind capacity factors to increase due to technology advancements while costs continue to fall. In combination, the bank expects wind costs to fall into the $30-$35/MWh range as the PTC expires. In sum, Goldman finds “the economics of wind development remain compelling.” On July 11, Macquarie Research published a new analysis showing wind to be cost competitive on a levelized cost basis in 2023, following the expiration of the PTC. Macquarie projects wind costs to lie around $42/MWh compared to $66/MWh for natural gas combined-cycle plants. As the 2020s progress, wind continues to be cheaper than gas plants through the end of the decade. Macquarie Research also looked at levelized cost of electricity comparisons in 2016 including the PTC. With the tax credit in place, Macquarie projects electricity from new wind projects to cost around $23/MWh compared to $50/MWh for new gas plants. Given U.S. wind energy’s cost position, Macquarie finds the U.S. to be the most attractive wind market in the world, surpassing both China and Europe. More recently, Morgan Stanley published a report on July 27 concluding that wind and solar developments will perform strongly through the next decade regardless of who is elected president, primarily due to their economics. In the interior of the country, Morgan Stanley calculates the levelized cost of wind electricity in the $15-$25/MWh range – far lower than the cost of a new natural gas combined-cycle facility, which costs roughly $55-$65/MWh in the same region.
One of the first reports this summer highlighting wind’s cost competitiveness came from Bloomberg New Energy Finance (BNEF). On June 13, BNEF published an article noting that “in many regions, the lifetime cost of wind and solar is less than the cost of building new fossil fuel plants.” BNEF sees technological innovations driving much of wind’s cost declines. Larger turbine blades that are able to capture more wind – combined with smart technology that allows project developers to optimize turbine placement and operators to optimize production and predict component issues before they happen – enable wind projects to operate at higher capacity factors. These higher capacity factors mean wind turbines are generating more energy over a given time period. Given zero fuel costs and fixed capital costs, this lowers the levelized cost of wind-generated electricity. Looking forward, BNEF expects wind costs to fall even further. Globally, for every doubling of installed wind power capacity, prices fall 19 percent. At that rate “wind and solar will be the cheapest forms of producing electricity in most of the world by the 2030s.” While this collection of new reports is a refreshing confirmation of wind’s low cost, the results are nothing new. Yesterday, Lawrence Berkeley National Laboratory (LBNL) released the latest version of its Wind Technologies Market Report. Besides a rich discussion of industry, technology, and performance trends, the report provides an insightful section tracking historical wind cost and power price trends. Documenting levelized power purchase agreement (PPA) prices going back to 1996, LBNL’s sample of wind PPAs shows national average wind prices to be “around the $20/MWh level” compared to $70/MWh during the peak in 2009. The price trend has been consistently downward since 2009, with some variability in prices across different regions and projects, as one would expect. LBNL attributes wind’s low price to two primary factors: 1) low wind turbine pricing pushing down installed project costs and 2) larger, more advanced turbine designs delivering increased output. Installed wind project costs in the U.S. in 2015 averaged roughly $1,690/kW, down 27 percent from peak installed costs back in 2009 and 2010. More interestingly, installed costs are down even as wind turbines grow larger and increase their rotor diameters, which tends to increase costs per kilowatt (kW). These technological advances are helping new wind turbines to capture more wind and perform more efficiently. LBNL notes that wind turbines installed in 2014 had a capacity factor of 41 percent during 2015, compared to just 31 percent for projects built in the period from 2004 to 2011, one of the main reasons that PPA prices and levelized costs per MWh continue to fall. As shown in the chart below, the increase in capacity factor has been very rapid over the last several years of project installations. The report finds that this increase in output has been driven by both the move to larger turbines and by siting turbines in areas with higher wind resources, the latter being driven in part by recent grid upgrades. Earlier this month, the Energy Information Administration (EIA) also released its widely-used estimate of wind energy costs. EIA reports an average installed cost for wind of $1,536/kW, much lower than its cost assumption in recent years. EIA’s estimate for the capacity factor of new wind projects has also significantly increased from prior years.
As a result, EIA shows wind as having a levelized cost of $58.50/MWh without incentives, which is down significantly from $73.60/MWh last year. This levelized wind cost is about the same as its estimate of $56.40/MWh for conventional combined cycle gas generation or $55.80/MWh for “advanced combined cycle.” Notably, EIA’s analysis understates the real-world cost of gas combined cycle generation by assuming that the plant will operate at a very high 87 percent capacity factor, thereby diluting the fixed costs across more MWh, even though the average natural gas combined cycle plant had a capacity factor of 56.3 percent in 2015. Correcting to the actual average capacity factor yields a gas combined cycle cost that is $8-9/MWh higher than indicated by EIA’s levelized cost figures, and higher than wind’s levelized cost. EIA also reports a weighted average wind cost of $34/MWh after accounting for the value of the PTC. While EIA fixed the flaws in its assumptions that had previously caused it to overestimate wind costs and underestimate wind plant capacity factors, other flaws in its analysis cause it to still underestimate wind deployment going forward. Despite 60 percent cost reductions over the last seven years, and NREL projections of 20 percent or more cost declines for wind energy through 2030, EIA inexplicably assumes that wind energy is a mature technology that only sees five percent cost reductions through 2035, while other technologies like solar, nuclear, and even many types of gas generation are assumed to experience many times larger cost reductions. As a result, EIA’s report still greatly underestimates wind energy deployment, particularly in the 2020s and beyond. Lazard, a Wall Street investment firm, also found new wind generation to be the lowest-cost energy source in its latest annual levelized cost of energy analysis. At the low end, Lazard calculates the levelized cost of wind energy without the PTC at $32/MWh, compared to utility-scale solar PV at $50/MWh and gas combined-cycle at $52/MWh, without accounting for incentives. With incentives, Lazard finds wind pricing available as low as $14/MWh. Lazard also reports regional ranges for wind costs, which show wind to be competitive in nearly all regions. A number of utilities are capitalizing on wind’s low cost. As shown in the following chart from NREL’s February report, time is of the essence for utilities seeking to lock in maximum savings for their customers. Prices for wind purchases from projects that begin construction this year are likely to be at an all-time low, thanks to the combined impact of the full-value PTC and recent wind cost reductions. Forward-looking utilities buying wind this year are locking in prices at the lowest point on that V-shaped price curve before the PTC begins to decline in value.
Utilities across the country are aggressively pursuing wind investments and purchases to capitalize on recent reductions in wind costs and the full value PTC:
-Kansas City Power and Light recently announced a 500 MW wind PPA, noting “This tax credit allows KCP&L to pass savings along to customers, keeping rates lower than would otherwise be possible.”
-Appalachian Power announced a 120 MW wind PPA in June, explaining “The extension … of the wind PTC provides cost advantages to wind energy resources constructed over the next five years and makes this agreement cost-effective for Appalachian Power customers.”
-MidAmerican Energy recently announced a $3.6 billion, 2,000 MW investment in wind, explaining that “Investments of this scale are viable because federal production tax credits are at their highest level.”
-Earlier this year, PacifiCorp and Portland General Electric both announced RFPs for renewable resources, with Portland General noting it was doing so now “in order to procure renewable resources to maximize the economic value of available tax credits on behalf of our customers.”
-WPPI Energy also announced a 100 MW RFP in June, stating that “With the recent extension of federal renewable energy tax credits, we anticipate that bidders might be able to offer renewables to cost-effectively fill our need for future power supply resources.”
-Many other utilities, including Tennessee Valley Authority, are moving aggressively to pursue large quantities of wind energy while the tax credit is in place at full value.
In sum, thanks to improved turbine technology, wind energy now consistently ranks as the lowest-cost source of new electricity generation across the country, providing a bright future for the U.S. wind industry.
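Returning to the EIA capacity-factor point above, the correction is straightforward to reproduce. The sketch below is purely illustrative: the split of EIA's roughly $56.40/MWh combined-cycle estimate into fixed and variable components is an assumption made for demonstration, not a figure from EIA or from any of the reports cited.

```python
# Illustrative only: how a levelized cost shifts when fixed costs are spread
# over fewer MWh because the real-world capacity factor is lower than assumed.

def levelized_cost(fixed_at_ref, ref_cf, actual_cf, variable):
    """Rescale the fixed-cost portion of a $/MWh figure by capacity factor."""
    return fixed_at_ref * (ref_cf / actual_cf) + variable

# Hypothetical split of the ~$56.40/MWh combined-cycle estimate (assumed).
fixed_at_87pct = 16.0   # $/MWh of fixed costs at an 87% capacity factor
variable_cost = 40.4    # $/MWh of fuel and variable O&M

cost_at_87 = levelized_cost(fixed_at_87pct, 0.87, 0.87, variable_cost)
cost_at_56 = levelized_cost(fixed_at_87pct, 0.87, 0.563, variable_cost)

print(f"At an 87% capacity factor:  ${cost_at_87:.1f}/MWh")
print(f"At a 56.3% capacity factor: ${cost_at_56:.1f}/MWh "
      f"(about ${cost_at_56 - cost_at_87:.0f}/MWh higher)")
```

Under these assumed inputs, diluting the same fixed costs over a 56.3 percent capacity factor instead of 87 percent raises the figure by roughly the $8-9/MWh the article describes.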


News Article | December 16, 2016
Site: www.eurekalert.org

MADISON, Wis. - Mother-of-pearl or nacre (pronounced nay-ker), the lustrous, tough-as-nails biomineral that lines some seashells, has been shown to be a faithful record of ancient ocean temperature. Writing online Thursday, Dec. 15, in the journal Earth and Planetary Science Letters, a team led by University of Wisconsin-Madison physics Professor Pupa Gilbert describes studies of the physical attributes of nacre in modern and fossil shells showing that the biomineral provides an accurate record of temperature as the material is formed, layer upon layer, in a mollusk. “We can very accurately correlate nacre tablet thickness with temperature,” says Gilbert, explaining that mother-of-pearl is formed as mollusks lay down microscopic polygonal tablets of the mineral aragonite like brickwork to build layers of the shiny biomineral. The work is important because it provides scientists with a new and potentially more accurate method of measuring ancient ocean temperatures, improving on methods now used with other biominerals to tease out the record of the environmental conditions under which the materials formed in the distant past. “Everyone else measures temperatures in the ancient world using chemical proxies,” says Gilbert, referencing methods that, for example, use ratios of isotopic oxygen locked into tiny fossil shells made by marine microorganisms known as Foraminifera to get a snapshot of ocean temperatures in the distant past. The method devised by Gilbert and her collaborators is extraordinarily simple: using just a scanning electron microscope and a cross section of shell, it is possible to measure the thickness of the layered microscopic tablets that compose nacre in a shell. The thickness of the tablets, explains Gilbert, correlates with ocean temperature, a relationship established by measuring modern shells that formed at known ocean temperatures. The new work by the researchers from Wisconsin, Harvard, and the Lawrence Berkeley National Laboratory provides a novel physical approach to measuring past climate, says Gilbert, an expert in biomineral formation. “If what you are measuring is a physical structure, you see it directly,” says Gilbert. “You just measure nacre tablet thickness, the spacing of the lines, and it corresponds to temperature. When the temperature is warmer, the layers get thicker.” The new study looked at fossil samples of nacre as old as 200 million years from a mollusk in the family Pinnidae, large, fast-growing saltwater clams that live in shallow ocean environments. Today, as in the distant past, the bivalves are widespread in tropical and temperate coastal and shallow continental shelf environments. The new method is potentially more accurate, Gilbert notes, because the chemistry of fossil shells can be altered by diagenesis. Diagenesis occurs over geologic time, during or after sediments rain down on ocean beds to form sedimentary rock. Fossil shells may partially dissolve and re-precipitate as calcite, which fills cracks in aragonite nacre, thus skewing the chemical analysis if a sample is analyzed in bulk. “If the chemistry changes after the death of a fossil, the formation chemistry isn’t necessarily preserved,” says Gilbert. On the other hand, “if the physical structure is altered by diagenesis, you will notice immediately that nacre is no longer layered, and so you will know that it’s not worth analyzing that area.
If just a few nacre tablets are preserved, their thickness can easily be measured,” meaning the new technique can augment current geochemical methods used to assess past temperatures, and thus help reconstruct ancient climates, especially in the shallow marine environments that preserve most of the world’s invertebrate fossil record. The family of mollusks in the new study has lived in the world’s oceans for more than 400 million years, potentially leaving a clear record of ocean temperatures into the distant past. For purposes of assessing climate, the record is valuable because not only does it say something about past climate, but the data can also help modelers forecast future climate and environmental change. “The only thing you can do to understand climate in the future is to look at climate in the past,” Gilbert notes. The new study was supported by grants from the U.S. Department of Energy, the National Science Foundation, and the Radcliffe Institute for Advanced Study Fellowship Program.
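As a rough sketch of how such a thickness-to-temperature calibration could work in principle, the example below fits a simple linear relationship to hypothetical modern-shell measurements and then applies it to a hypothetical fossil measurement. All of the numbers are invented for illustration and are not the study's data.

```python
# Illustrative sketch only: calibrate tablet thickness against known modern
# temperatures, then apply the fit to a fossil measurement.
import numpy as np

# Hypothetical calibration set: mean nacre tablet thickness (nm) measured by
# SEM in modern shells that grew at known water temperatures (deg C).
thickness_nm = np.array([300.0, 340.0, 390.0, 450.0, 520.0])
temperature_c = np.array([12.0, 15.0, 18.0, 22.0, 26.0])

# Fit a simple linear calibration: temperature as a function of thickness.
slope, intercept = np.polyfit(thickness_nm, temperature_c, 1)

# Apply the calibration to a hypothetical fossil measurement.
fossil_thickness = 410.0
estimated_temp = slope * fossil_thickness + intercept
print(f"Estimated formation temperature: {estimated_temp:.1f} deg C")
```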


News Article | September 19, 2016
Site: motherboard.vice.com

No one said detecting dark matter would be easy. We didn’t even know about the stuff until a couple of decades ago, after all, despite the fact that it represents some 85 percent of all of the mass in the universe and is what’s responsible for giving structure to the cosmos. We see its effects across the universe, but we have yet to see it. We’re not even sure what exactly we’re looking for—there are many theories as to the exact properties of a dark matter particle. Some aren’t even all that dark. The leading candidate for dark matter is a particle known as a WIMP, or weakly interacting massive particle. These are really heavy, classically “dark” particles. They interact with other matter only through gravity and the weak nuclear force, crucially evading electromagnetic interactions, which are what most of the interactions we see out in the world are based on: from a baseball whapping into a catcher’s mitt to the nanoscale electrical circuits enabling the machine you are now staring at. WIMP detection is premised on WIMPs having sufficient mass to smack into an atomic nucleus with enough force to create a bit of light or heat, which can then be registered by the detector. A problem then arises when we start trying to imagine dark matter particles that maybe aren’t so heavy and, as such, may result in interactions below the sensitivity of current detectors. This is where the work of Kathryn Zurek and colleagues at the Lawrence Berkeley National Laboratory comes in—bagging superlight dark matter may require supermaterials. Zurek and co.’s work is described in two papers published last week in Physical Review Letters. One looks at the possibility of using helium superfluids to detect dark matter while the other considers superconducting metals. Both possibilities offer detector sensitivities below current offerings of around 10 million electronvolts. The lower limit for light, “warm” dark matter is around 1,000 electronvolts. The idea behind the helium superfluid detector is that the incoming superlight dark matter particle doesn’t just couple to one collision event. Instead, the one incoming particle is able to yield several interactions—enough to be detected. This is part of the quirkiness of superfluidic materials, which are able to flow without losing kinetic energy thanks to their complete absence of viscosity. Somewhat famously, superfluidic helium has the bizarre capability of creeping up the walls of its container and leaking away. So, the incoming superlight DM particles can couple to the superfluid without burning off kinetic energy—that is, without stopping. A similar sort of thing happens in a metallic superconductor, which is the basis of the detector described in Zurek’s second paper. Here, current is able to flow without resistance in much the same way that the superfluid is able to flow without viscosity. The difference detector-wise is that the superlight dark matter particles interact with pairs of electrons rather than atomic nuclei. The pairs are split, and the result is particle-like sound waves called phonons. Because the electrons in a superconductor are already moving at velocities faster than those expected of a superlight dark matter particle, the momentum of the incoming DM particles isn’t “lost” in the collision event, and the interaction can be observed as particles in the superconductor recoil from the collision.
The superconductor method will only work with dark matter particles that are able to couple to electrons, which means that they interact to some extent electromagnetically and are thus not so dark. As odd as it sounds, light-based dark matter hasn't been excluded from the hunt. The superfluid-based detector would presumably be bagging more conventional dark-dark matter particles, albeit very light ones. In any case, as dark matter searches across the globe continue to come up dry, it's clear that we need more tools.


News Article | September 22, 2016
Site: www.washingtonpost.com

On Thursday the Rocky Mountain Institute — an energy-focused think tank known for its boldness in predicting technological change — released a truly sweeping report on the future of vehicles. And if it’s right — something that, to be sure, can be questioned — then we would be on the verge of an energy and technology transformation to rival the sudden decline of the coal industry. The institute certainly isn’t the first to suggest self-driving cars are something special. Uber and Lyft are already exploring self-driving robotaxis. Apple, Google, and Tesla are also in the game. But what’s striking is the optimism about a real technological transformation that could arrive from the merger of two vehicle trends — automation and electrification — and the idea that it could happen quite fast, with major consequences for reducing our greenhouse gas emissions. “It is possible that a new mobility system will emerge in the next few years that is superior to our existing system in almost every way,” write the study’s authors, Charlie Johnson and Jonathan Walker. Autonomous cars won’t just be safer and more energy efficient, the study argues. Hiring one to taxi you around will also become very cheap, they argue, once these vehicles can be electrified, rather than running on gas. The upshot is that at some point, perhaps quite soon, it will make more economic sense to have a robotaxi ferry you around on all your trips than to personally own a car. At the same time, as the trend becomes widespread — with fleets of autonomous vehicles, owned by companies or cities, on the road and not running on gas — our greenhouse gas emissions will plunge, the research suggests. By the year 2018, the study argues, “solely using autonomous taxis for transportation could cost the same as owning and operating a car.” More specifically, the researchers think that electric robotaxis might quickly rival the per-mile cost of personal car use, which they say is 85 cents per mile. “We think as soon as Uber is ready to pull the driver out of their autonomous vehicle — right now they have a safety driver in the vehicle, but they’re testing the autonomous software,” says Walker. “As soon as they can remove that safety driver, the cost of that service is going to be about the same as owning and operating a sedan in Pittsburgh.” And of course these vehicles will be electric because that, too, will save money — in this case, for the companies that own and operate them. The huge distances that autonomous vehicles might drive in their lifetimes will mean that even if an electric vehicle costs more than a gas vehicle to purchase, it will make up the difference over the course of its use. Most striking is what this would do to greenhouse gas emissions. The study suggests a major decrease in gasoline demand and, as a result, a major increase in electricity demand, and a plummeting of transportation-sector greenhouse-gas emissions. Walker said the reduction could be as large as 800 million tons per year in the U.S. by 2035, a huge contribution to fighting climate change. “Personal mobility hasn’t changed in 100 years, and it actually plausibly could in the next five to 10,” Walker said. The study also forecasts that this means that “peak car ownership in the United States will occur around 2020 and will drop quickly after that.” The Rocky Mountain Institute isn’t the only such bullish voice. John Zimmer, the co-founder of Lyft, recently wrote at Medium that “by 2025, private car ownership will all-but end in major U.S. 
cities.” Meanwhile, a study last year of the adoption of electrified robotic cabs found they could slash our greenhouse gas emissions, by 2030, by tremendous percentages. But is this all too gung-ho? To find out, I queried several outside experts about the scenario. Jeffrey Greenblatt, a researcher at the Lawrence Berkeley National Laboratory who co-authored the just-mentioned study on greenhouse gas reductions from electric, shared, and autonomous vehicles, said the work seemed pretty solid. “It’s clearly optimistic, but I think it’s based on reasonable assumptions of the future cost of automated technology, as well as the total cost of ownership of vehicles,” he said. Margo Oge, a former EPA transportation and air quality official and author of the book “Driving the Future: Combating Climate Change with Cleaner, Smarter Cars,” agreed about the Rocky Mountain Institute’s “overall findings that autonomous, shared mobility will utilize largely electric power trains as they will be cheaper to operate. It’s the most economical strategy for autonomous shared mobility and it will help significantly reduce carbon pollution – especially as the utilization of those EVs will be much higher than the conventional powertrains they displace.” However, the speed of the transition is another matter, Oge said, and here, the group might be off. “In the end the overall success of autonomous mobility will be based on public trust,” she said. “It’s not just an issue of technology. Trust takes time to develop.” It’s still not clear how people — or, for that matter, legal systems — will deal with the possibility of occasional autonomous vehicle crashes, even if they are significantly more rare than today’s car crashes. Moreover, the programming of these vehicles to deal with situations like an automotive version of the ethical “trolley dilemma” — in which a vehicle would have to “choose” whether or not to kill its driver in order to save multiple other individuals — is bound to spur controversy. Another transportation researcher, Susan Shaheen of the University of California-Berkeley, praised the Rocky Mountain Institute work for pushing us to think about what the future could be like. “I think the purpose of papers like this is to set a visionary plan,” she said. But Shaheen also noted some potential drawbacks — for instance, regulatory. “By 2018, a question to ask is, are cities going to be ready to convert to a shared, automated, electrified fleet?” she said. “Chances are, we will not be at that point.” The Rocky Mountain Institute’s Walker admits that there could also be regulatory and cultural hurdles to such a vast transition. “Certain cities and states may make it legal, and some cities and states may not,” he said. And then, there’s just the possibility that some Americans love their cars and won’t want to give them up, he said. And there’s yet another problem, noted Don MacKenzie, a transportation researcher at the University of Washington in Seattle — what happens outside of urban areas? “This will not work so well when you need a ride to your cottage or the state park for the weekend, where there is no one else for the vehicle pick up after dropping you off,” he wrote by email (though MacKenzie agreed with other aspects of the report). “In such conditions, an important question becomes whether people will be willing to give up owning a personal vehicle.
If they still have that personal vehicle, there’s a good chance they will continue to drive it (and drive it more).” Nonetheless, Walker said that the bullish scenario is something the group is standing by. “There’s people who say the technology’s not going to be ready, but they’re quoting things like 5 or 10 years, when a year ago, they were quoting 30 years,” Walker said.
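As a rough sketch of the amortization argument behind those per-mile figures, the comparison below uses entirely hypothetical purchase prices, lifetime mileages, and operating costs (none of these numbers come from the Rocky Mountain Institute report) to show how a pricier electric robotaxi can still come out ahead per mile when it is driven far enough.

```python
# Illustrative only: per-mile cost falls as a vehicle's purchase price is
# amortized over more lifetime miles. All inputs are assumptions.

def cost_per_mile(purchase_price, lifetime_miles, energy_per_mile, maintenance_per_mile):
    """Amortized purchase cost plus per-mile operating costs, in dollars."""
    return purchase_price / lifetime_miles + energy_per_mile + maintenance_per_mile

# Hypothetical, heavily used fleet vehicles.
ev_taxi = cost_per_mile(purchase_price=40000, lifetime_miles=300000,
                        energy_per_mile=0.04, maintenance_per_mile=0.03)
gas_taxi = cost_per_mile(purchase_price=30000, lifetime_miles=200000,
                         energy_per_mile=0.10, maintenance_per_mile=0.06)

print(f"Hypothetical electric robotaxi: ${ev_taxi:.2f} per mile")
print(f"Hypothetical gasoline taxi:     ${gas_taxi:.2f} per mile")
```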


News Article | March 3, 2016
Site: www.aweablog.org

Oregon just decided to add a lot of renewable energy to its electricity mix. Yesterday, the Oregon state Senate approved a bill that would increase Oregon’s renewable energy portfolio standard (RPS) to 50 percent, while phasing coal out of the state’s electricity mix. Having already passed the House, the bill will now be sent to Gov. Kate Brown, who is expected to sign it into law. The measure firmly establishes Oregon as a renewable energy leader. Vermont, California and Hawaii are the only other states with renewable energy requirements of 50 percent or higher, and Oregon is the first state to officially ban coal via legislative action. The new law, called the Clean Electricity and Coal Transition Plan, is also particularly noteworthy because of the diversity of the coalition that supported it. Oregon’s two largest utilities were in favor of the bill, along with business groups, community organizations and environmental advocates. Supporters as varied as Pacific Power and the Sierra Club all advocated for the bill, recognizing the initiative was in the best interests of all Oregonians. The Clean Electricity and Coal Transition Plan will have a big effect on carbon emissions. It’s expected to reduce carbon pollution by 30 million metric tons, or the same amount as 6.4 million cars’ worth. Oregon’s decision to deploy a stronger RPS is a smart choice from a policy perspective. The Lawrence Berkeley National Laboratory and the National Renewable Energy Laboratory recently examined the effects of state-level RPS policies. Their findings were overwhelmingly positive. State RPS policies resulted in $7.5 billion in annual environmental benefits from reduced air emissions, 27 billion gallons in reduced water consumption annually, $1.3 billion to $4.9 billion in reduced consumer energy prices, and 200,000 American jobs and $20 billion in annual gross domestic product from the renewable energy developed to meet state RPSs through the year 2013. Hopefully other states will follow Oregon’s lead in the coming months and years. The New York state legislature is also currently mulling a 50 percent renewable energy standard, and similar initiatives may end up on the dockets of state legislatures throughout the country.


News Article | February 23, 2017
Site: www.rdmag.com

A national lab and NASA are working together with X-ray and 3D visualization technologies to create materials for use in space. The goal of the collaboration—between NASA and a science group at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory—is to establish a suite of tools that includes X-ray imaging and small laboratory experiments, computer-based analysis and simulation tools, as well as large-scale high-heat and wind-tunnel tests, to allow for rapid development of new materials with established performance and reliability in space. This system can heat sample materials to thousands of degrees and subject them to a mixture of different gases found in other planets’ atmospheres—with pistons stretching the materials to their breaking points—all while imaging in real time their 3D behavior at the microstructure level. For humans to explore Mars and for other large-payload missions, a new type of heat shield that is flexible and can remain folded up until needed is likely required. An initial X-ray study has been deemed successful and led to a renewed interest in exploring the use of X-ray experiments to guide a better understanding of meteorite breakup. Scientists are using data from these experiments in risk analyses that aid in assessing threats posed by large asteroids. Michael Barnhardt, a senior research scientist at NASA Ames Research Center (NASA ARC) and principal investigator of the Entry Systems Modeling Project, explained that the research led to a better understanding of what was going on at the microscale. “Before this collaboration, we didn’t understand what was happening at the microscale,” Barnhardt said in a statement. “We didn’t have a way to test it.” “X-rays gave us a way to peek inside the material and get a view we didn’t have before,” he added. “With this understanding, we will be able to design new materials with properties tailored to a certain mission.” According to Barnhardt, this foundation will support more predictive models, reduce risk, and provide more assurance about a new material’s performance even at the initial stages. Researchers at Berkeley are testing several candidates for a flexible heat shield, in addition to fabrics for Mars-mission parachutes that can be deployed at supersonic speeds, using Berkeley Lab’s Advanced Light Source (ALS) and other techniques. “We are developing a system at the ALS that can simulate all material loads and stresses over the course of the atmospheric entry process,” Harold Barnard, a scientist at Berkeley Lab’s ALS who is spearheading the Lab’s X-ray work with NASA, said in a statement. A group of researchers at NASA ARC can blast materials with a giant superhot blowtorch that accelerates hot air to velocities topping 11,000 miles per hour, with temperatures exceeding that of the surface of the Sun. The scientists also test parachutes and spacecraft at their wind-tunnel facilities, which can produce supersonic wind speeds faster than 1,900 miles per hour. While it takes rocket science to launch and fly spacecraft, an understanding of how materials perform under extreme conditions is also needed to enter and land on planets with different atmospheres. X-ray science is also necessary for ensuring spacecraft survive extreme environments as they descend through otherworldly atmospheres and touch down safely on foreign surfaces. Francesco Panerai, a materials scientist with NASA contractor AMA Inc.
and the X-ray experiments test lead for NASA ARC, said the aim is to modernize how scientists produce and study materials. “We need to use modern measurement techniques to improve our understanding of material response,” Panerai said in a statement. The experiments are being conducted at an ALS experimental station that captures a sequence of images as a sample is rotated in front of an X-ray beam. These images, which provide views inside the samples and can resolve details less than one micron or one millionth of a meter, can be compiled to form detailed 3D images and animations of samples.


News Article | April 26, 2016
Site: www.topix.com

The renewable portfolio standards that many states have enacted are responsible for 60 percent of the growth in non-hydro renewable energy generation, according to a new Lawrence Berkeley National Laboratory study. Most of the additional capacity to meet RPS requirements has come from wind, but in recent years solar energy has been gaining traction.


Some laws are made to be broken, or at least challenged. A research team led by faculty scientist Ali Javey at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) has done just that by creating a transistor with a working 1-nanometer gate. For comparison, a strand of human hair is about 50,000 nanometers thick. "We made the smallest transistor reported to date," said Javey, a lead principal investigator of the Electronic Materials program in Berkeley Lab's Materials Science Division. "The gate length is considered a defining dimension of the transistor. We demonstrated a 1-nanometer-gate transistor, showing that with the choice of proper materials, there is a lot more room to shrink our electronics." The key was to use carbon nanotubes and molybdenum disulfide (MoS2), an engine lubricant commonly sold in auto parts shops. MoS2 is part of a family of materials with immense potential for applications in LEDs, lasers, nanoscale transistors, solar cells, and more. The findings will appear in the Oct. 7 issue of the journal Science. Other investigators on this paper include Jeff Bokor, a faculty senior scientist at Berkeley Lab and a professor at UC Berkeley; Chenming Hu, a professor at UC Berkeley; Moon Kim, a professor at the University of Texas at Dallas; and H.S. Philip Wong, a professor at Stanford University. The development could be key to keeping alive Intel co-founder Gordon Moore's prediction that the density of transistors on integrated circuits would double every two years, enabling the increased performance of our laptops, mobile phones, televisions, and other electronics. "The semiconductor industry has long assumed that any gate below 5 nanometers wouldn't work, so anything below that was not even considered," said study lead author Sujay Desai, a graduate student in Javey's lab. "This research shows that sub-5-nanometer gates should not be discounted. Industry has been squeezing every last bit of capability out of silicon. By changing the material from silicon to MoS2, we can make a transistor with a gate that is just 1 nanometer in length, and operate it like a switch."
When 'electrons are out of control'
Transistors consist of three terminals: a source, a drain, and a gate. Current flows from the source to the drain, and that flow is controlled by the gate, which switches on and off in response to the voltage applied. Both silicon and MoS2 have a crystalline lattice structure, but electrons flowing through silicon are lighter and encounter less resistance compared with MoS2. That is a boon when the gate is 5 nanometers or longer. But below that length, a quantum mechanical phenomenon called tunneling kicks in, and the gate barrier is no longer able to keep the electrons from barging through from the source to the drain terminals. "This means we can't turn off the transistors," said Desai. "The electrons are out of control." Because electrons flowing through MoS2 are heavier, their flow can be controlled with smaller gate lengths. MoS2 can also be scaled down to atomically thin sheets, about 0.65 nanometers thick, with a lower dielectric constant, a measure reflecting the ability of a material to store energy in an electric field. Both of these properties, in addition to the mass of the electron, help improve the control of the flow of current inside the transistor when the gate length is reduced to 1 nanometer. Once they settled on MoS2 as the semiconductor material, it was time to construct the gate.
Making a 1-nanometer structure, it turns out, is no small feat. Conventional lithography techniques don't work well at that scale, so the researchers turned to carbon nanotubes, hollow cylindrical tubes with diameters as small as 1 nanometer. They then measured the electrical properties of the devices to show that the MoS2 transistor with the carbon nanotube gate effectively controlled the flow of electrons. "This work demonstrated the shortest transistor ever," said Javey, who is also a UC Berkeley professor of electrical engineering and computer sciences. "However, it's a proof of concept. We have not yet packed these transistors onto a chip, and we haven't done this billions of times over. We also have not developed self-aligned fabrication schemes for reducing parasitic resistances in the device. But this work is important to show that we are no longer limited to a 5-nanometer gate for our transistors. Moore's Law can continue a while longer by proper engineering of the semiconductor material and device architecture."


State renewables portfolio standards, known as RPS policies, have contributed to more than half of all renewable electricity growth in the United States since 2000. Most state RPS requirements will continue to rise through at least 2020, if not beyond, and collectively these policies will require substantial further growth in U.S. renewable electricity supplies. These findings are part of a new annual status report on state RPS policies, from Lawrence Berkeley National Laboratory (Berkeley Lab).


News Article | August 29, 2016
Site: cleantechnica.com

If you’ve never heard of covalent organic frameworks before, that could be about to change. To the eye, these delicate, super lightweight crystalline materials look like nothing more than a pile of fluffy powder, but they are beginning to emerge as a force to be reckoned with in the energy storage field. In the latest development, a research team based at Northwestern University has figured out how to get one such material to behave like a battery and a supercapacitor, all in one. That could be an important development for the EV market. An improved battery means greater range without adding more weight, and a more efficient supercapacitor translates into faster charging and more powerful discharging. For a quick primer on COFs, you can check out Lawrence Berkeley National Laboratory, which provides this rundown: “COFs and their cousin materials, metal organic frameworks (MOFs), are porous three-dimensional crystals with extraordinarily large internal surface areas that can absorb and store enormous quantities of targeted molecules.” If that sounds a little vague, CleanTechnica offered up the origin story last year: “COFs were first developed at the University of Michigan back in 2005, when researchers ‘coaxed’ rigid plastics to organize into predictable crystal structures — something that had never been done before. The research team did it by slowing down the reaction time involved in synthesizing rigid plastics.” Like their MOF cousins, COFs benefit from a hub-and-strut structure (think Tinker Toys and you’re on the right track) that in effect loads the material with nanoscale pores, like a sponge. The pores can be altered by adjusting the angles at which the struts connect to the hubs, to provide a COF with unique properties. As for why COFs instead of MOFs, for one thing, they contain no metal, so they are much lighter than MOFs (COFs are typically made with hydrogen, boron, carbon, nitrogen, and oxygen). As an organic (aka plastic) material, COFs also have the potential to come in at a far lower cost than other materials. MOFs have previously been investigated for energy storage, but COFs are at a huge disadvantage in that field because they lack sufficient conductivity. One key problem is the disordered nature of COFs as applied to electrodes. So far, the body of research has arrived at one solution, which is to apply them to an electrode as a two-dimensional material: “Two-dimensional covalent organic frameworks (2D COFs) address these limitations by predictably and deliberately organizing redox-active groups into insoluble, high-surface area polymer networks with uniform micropores.” That’s all well and good, but the problem is that the charge/discharge rate is very slow, so you can kiss any high-performance applications goodbye. That’s where the new Northwestern University research comes in. Instead of just fiddling with the angles of their COF, they modded it out with the conducting polymer PEDOT, which is short for poly(3,4-ethylenedioxythiophene). Northwestern provides a rundown on the process: “… Two organic molecules self-assembled and condensed into a honeycomb-like grid, one 2-D layer stacked on top of the other.
Into the grid’s holes, or pores, the researchers deposited the conducting polymer.” The pores in the new material are 2.3 nanometers wide, and each pore translates into increased surface area: “A small amount of the fluffy COF powder, just enough to fill a shot glass and weighing the same as a dollar bill, has the surface area of an Olympic swimming pool.” I know right? Here’s the result in plain language: “The modified COF showed a dramatic improvement in its ability to both store energy and to rapidly charge and discharge the device. The material can store roughly 10 times more electrical energy than the unmodified COF, and it can get the electrical charge in and out of the device 10 to 15 times faster.” The new device is also quite durable. According to the study, it can withstand 10,000 charge/discharge cycles. You can get all the details from the American Chemical Society journal Central Science under the title, “Superior Charge Storage and Power Density of a Conducting Polymer-Modified Covalent Organic Framework.” The EV market is slowly gathering steam, and if the Northwestern research pans out, it looks like you ain’t seen nothing yet. The researchers are confident that this is a breakthrough for EV battery technology: “This work represents the first time that electroactive COFs or crystalline framework materials have shown volumetric energy and power densities comparable with other porous carbon-based electrodes, thereby demonstrating the promise of redox-active COFs for EES [electrical energy storage] devices.” In other words, if the EV of tomorrow is equipped with a COF battery (or some similar future technology), it will go significantly farther, cost less, and charge faster than the current crop. Make that the day after tomorrow. So far, the Northwestern research has been tested out on a coin-cell-sized prototype that can light up an LED for 30 seconds. So, don’t hold your breath. But, when that affordable, sporty, 500-mile range EV of the future does come rolling into your garage, the US Army (yes, the Army) might take partial credit. Among other funders, the Northwestern team is supported by a US Army Research Office Multidisciplinary University Research Initiatives grant.


News Article | October 5, 2016
Site: www.cemag.us

When the local water management agency closes your favorite beach due to unhealthy water quality, how reliable are the tests they base their decisions on? As it turns out, those tests, as well as the standards behind them, have not been updated in decades. Now scientists from Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a highly accurate, DNA-based method to detect and distinguish sources of microbial contamination in water. The new method, which uses the award-winning PhyloChip, a credit card-sized device that can detect the presence of more than 60,000 species of bacteria and archaea, was found to be more sensitive than conventional methods at assessing health risks. In tests at the Russian River watershed in Northern California, the Berkeley Lab researchers found instances where their method identified potential human health risks that conventional fecal indicator tests had failed to detect. Conversely, they also found instances where the conventional tests flagged bacteria that weren’t likely risks to human health. The research was led by Eric Dubinsky and Gary Andersen, microbial ecologists at Berkeley Lab, and was published recently in the journal Water Research in an article titled, “Microbial source tracking in impaired watersheds using PhyloChip and machine-learning classification.” Steven Butkus of the North Coast Regional Water Quality Control Board, which supported part of the research, was also a co-author. “With the PhyloChip, in an overnight test we can get a full picture of the microorganisms in any given sample,” Dubinsky says. “Instead of targeting one organism, we’re essentially getting a fingerprint of the microbial community of potential sources in that sample. So it gives us a more comprehensive picture of what’s going on. It’s a novel way of going about source tracking.” What local water agencies currently do is collect water samples, culture the bacteria overnight, and then check the growth level of two types of bacteria, E. coli and Enterococcus, which are presumed to be indicators of fecal contamination. However, this method doesn’t distinguish between sources. The bacteria could have come from humans, cows, ducks, sewage, or even decaying vegetation. “These tests have been used for decades and are relatively primitive,” Dubinsky says. “Back in the 1970s when the Clean Water Act was developed and we had sewage basically flowing into our waters, these tests worked really well. Epidemiological studies showed an association of these bacteria with levels of illness of people who used the water. These bacteria don’t necessarily get you sick, but they’re found in sewage and fecal matter. That’s why they’re measured.” As pollution from point sources — single identifiable sources such as sewage — has been cleaned up over time, the emerging concern has become what are known as nonpoint sources, or diffuse sources, throughout the watershed, such as agricultural lands. “The picture is much more complicated now than it was back then, when the concern was really point sources,” Dubinsky adds. The PhyloChip, which was developed by Andersen and several other Berkeley Lab scientists, has been used for a number of medical, agricultural, and environmental purposes, including understanding air pollution, the ecology of coral reefs, and environmental conditions of the Gulf of Mexico after the BP oil spill. With 1 million probes, it identifies microbes based on variations of a specific gene, with no culturing needed.
“About seven years ago we started doing water quality work, and we realized the PhyloChip could provide a fundamentally new and improved method for doing source tracking,” Andersen says. Determining the source of any particular pathogen is not a straightforward task. In most cases, a single microbe is not a definitive marker of an animal or other source. “A microbial community is complex,” Dubinsky says. “A cow may have 1,000 different organisms.” So Andersen and Dubinsky had an idea. “We had Laleh Coté, an intern at the time and now a Lab employee, run around and basically collect poop from all sorts of animals,” says Andersen. “What we’ve done since then is develop a reference library of the microbial communities that occur in different types of poop — we have cows, horses, raccoons, humans, different types of birds, pigs, sea lions, and other animals, as well as sewage and septage. We used that library to develop a model.” The new method takes the unknown sample and compares it against this microbial reference library. “We’ve used the PhyloChip in a way that it hasn’t been used before by using machine learning models to analyze the data in order to detect and classify sources,” Andersen says. “It’s essentially giving you a statistical probability that a microbial community came from a particular source.” They validated their method by comparing it to about 40 other methods of microbial source tracking in a California study. “We were the only method that could detect all sources and get them right,” Dubinsky says. If the source is an animal that is not in the reference library, their method can still point you in the right direction. “For example, in that study, one sample was a chicken,” says Dubinsky. “We hadn’t analyzed chickens, but we had geese, gulls, and pigeons. We were still able to determine that the sample was a bird.” In extensive testing throughout the Russian River watershed, which is out of compliance with the Clean Water Act, the Berkeley Lab researchers found widespread contamination by human sources close to areas where communities rely on aging septic tanks. They also found significant human contamination immediately after a weekend jazz festival, whereas testing by conventional methods yielded a much weaker signal after a time lag of a couple days. “Our method is more sensitive to human contamination than those fecal indicator tests are,” Dubinsky says. The team is now working on characterizing the microbial community of naturally occurring E. coli and Enterococci, using Hawaii with its warm waters as a testing ground. “They can occur naturally in sediments and decaying kelp and vegetation,” Dubinsky says. “It is known that they do, but nobody has developed a test to definitively show that.” The researchers will also be able to study whether climate affects microbial communities. “Does a Hawaiian cow look like a California cow in terms of fecal bacteria composition? That’s a good question and something we’ll be able to find out,” he says. They are working closely with the U.S. Environmental Protection Agency (EPA), which is looking at new technologies for what it calls “next generation compliance.” Ultimately the goal is to develop their method — possibly with a downsized version of the PhyloChip — to the point where it can be universally used in any location and by non-experts. 
Dubinsky says the method should also be useful with the burgeoning issue of algal blooms, to understand, for example, the processes by which they form, the microbial dynamics before and after a bloom, and specifically, whether runoff from livestock production in the Midwest is related to algal blooms in the Great Lakes, a question they’re investigating with the EPA.
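The article does not say which machine-learning model the Berkeley Lab team used, but the reference-library idea it describes can be sketched generically: train a classifier on microbial community profiles from known fecal sources, then report a probability for each candidate source for an unknown water sample. The example below uses a random forest and randomly generated stand-in data purely for illustration; none of it reflects the actual PhyloChip data or model.

```python
# Illustrative sketch of the reference-library approach: known-source
# community profiles train a classifier, which then assigns source
# probabilities to an unknown sample. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_taxa = 500  # stand-in for the thousands of taxa a PhyloChip can detect

# Hypothetical reference library: relative-abundance profiles labeled by source.
sources = ["human", "cow", "bird", "septage"]
X_ref = rng.random((200, n_taxa))
y_ref = rng.choice(sources, size=200)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_ref, y_ref)

# Classify an unknown water sample and report a probability for each source.
unknown_sample = rng.random((1, n_taxa))
for source, prob in zip(model.classes_, model.predict_proba(unknown_sample)[0]):
    print(f"{source}: {prob:.2f}")
```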


News Article | January 6, 2016
Site: www.aweablog.org

New report highlights many benefits of wind energy. Billions of dollars in benefits from wind power American wind energy drastically reduces pollution and energy prices while creating jobs, the Lawrence Berkeley National Laboratory (LBNL) and National Renewable Energy Laboratory (NREL) concluded today in “A retrospective analysis of the benefits and impacts of U.S. renewable portfolio standards.” These results confirm other recent analyses that found wind energy to be the most cost-effective way to cut carbon pollution. Today’s report found around $7.5 billion in annual environmental benefits from reduced air emissions, 27 billion gallons in reduced water consumption annually, $1.3 billion to $4.9 billion in reduced consumer energy prices, and 200,000 American jobs and $20 billion in annual GDP from the renewable energy developed to meet state Renewable Portfolio Standards through the year 2013. The following graphic from today’s report summarizes these benefits and impacts. The study’s results show wind energy is highly effective at reducing carbon pollution while also protecting ratepayers against fuel price increases, a critical message for states and utilities as they begin to craft plans to reduce emissions to comply with EPA’s Clean Power Plan. Wind is particularly effective at displacing carbon pollution as most of the best wind resources are located in regions with the most carbon-intensive electricity mixes. Analysis by DOE and others has shown that wind provides the most bang-for-the-buck in emissions reductions and also protects consumers against fuel price spikes from an over-reliance on any one fuel source. The above graphic from the report also translates the benefits of wind energy into per-kWh increments, illustrating that wind energy provides a large bang-for-the-buck. Specifically, the study found each kWh of wind provided around 2.2 cents/kWh in economic value from reducing carbon pollution and 5.3 cents/kWh from reducing other forms of air pollution, for total air pollution benefits of around 7.5 cents/kWh. Consumers also benefit from lower energy prices, to the tune of 1.3 cents to 4.9 cents/kWh. Water savings and job creation impacts are additional to those amounts. For reference, in 2014 utilities paid an average of 2.3 cents per kWh for wind energy. In most cases that is well below the cost of other energy sources, so wind energy provides net savings to consumers even without factoring in the environmental benefits and energy price reduction impacts. Once the approximately 7.5 cents/kWh of environmental benefits and several cents/kWh of energy price reduction impacts are factored in, wind energy is a clear winner for consumers and the environment. Policies to promote wind energy, most notably the 2.3 cent/kWh renewable Production Tax Credit and low-cost state RPS policies, also clearly provide net benefits to society because the environmental benefits of wind energy are much higher than any net cost of wind energy and those policies. It is widely appreciated that wind energy protects the environment by directly displacing the burning of fossil fuels at power plants. Today’s study confirms that wind energy also benefits consumers by keeping electricity and natural gas prices low and protecting against energy price spikes, as AWEA has noted in previous reports. Today’s study presents a conservative view of the environmental and consumer benefits of total U.S. 
wind energy, as it only examined the impacts of renewable energy developed to meet state Renewable Portfolio Standards (RPSs). Wind energy accounted for around 75 percent of RPS generation in 2013, and there has been a large amount of additional wind deployment in states that have exceeded their RPSs and in states without RPSs. Specifically, today’s report examined the benefits of 98 million MWh of total renewable energy in 2013, while total U.S. wind generation in 2014 was about 181.7 million MWh. Had the study examined the total nationwide benefits of wind energy, as AWEA has done in its own calculations, the environmental benefits and likely the consumer savings would have been around twice as large. Importantly, these environmental and health benefits accrue to all regions of the country, as AWEA has also noted. Maps in today’s study show that the renewable energy deployed due to state RPS policies reduced water consumption and provided millions of dollars in health and economic savings from reduced air pollution in all regions of the country. Notably, the air pollution map uses a conservative estimate of the cost of air pollution and only focuses on savings from renewable energy’s reduction of one air pollutant: particulate matter. Wind energy and some other forms of renewable energy drastically reduce the large amount of water that is withdrawn and consumed at most conventional power plants. AWEA has previously noted the important role of wind energy in reducing water consumption in drought-parched California, calculations that are confirmed by today’s report. A map in today’s report shows that renewable energy is saving the most water in Western and Interior regions of the country that have been most severely affected by drought in recent years. Not only does wind energy reduce water consumption, it also improves electric reliability by making our power system more resilient against climate change-induced drought and temperature increases that are expected to reduce the output of conventional power plants.
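For readers who want to check the arithmetic, the per-kWh figures above combine in a few lines. The sketch below simply adds the report's benefit estimates and nets out the average 2014 wind purchase price as a rough cost proxy; it illustrates the accounting rather than reproducing any calculation from the report itself.

```python
# Back-of-the-envelope combination of the per-kWh figures quoted above.
# All inputs are the article's numbers, in cents per kWh; treating the 2014
# average wind purchase price as the "cost" side is a simplifying assumption.
carbon_benefit = 2.2            # value of reduced carbon pollution
air_quality_benefit = 5.3       # value of reduced other air pollution
price_suppression = (1.3, 4.9)  # range of consumer energy price reductions
wind_purchase_price = 2.3       # average price utilities paid for wind in 2014

total_env_benefit = carbon_benefit + air_quality_benefit
print(f"Environmental benefits: {total_env_benefit:.1f} cents/kWh")  # ~7.5

for label, price_cut in zip(("low", "high"), price_suppression):
    net = total_env_benefit + price_cut - wind_purchase_price
    print(f"Net value ({label} price-reduction estimate): {net:.1f} cents/kWh")
```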


News Article | January 28, 2016
Site: cleantechnica.com

How low can solar go? We keep asking that question, and solar researchers keep coming up with new answers. In the latest cost-cutting development, researchers at the Lawrence Berkeley National Laboratory have come up with a relatively inexpensive, energy efficient process for manufacturing standard silicon solar cells without the use of an atomic “dopant.” Dopants are atomic-scale impurities added to the contacts of an electronic device, such as a silicon solar cell, to juice up their conductivity. An archived article from Berkeley Lab explains: “Doping materials is a fundamental component of the entire modern electronics industry,” Crommie says, referring to the process of adding impurities like phosphorus or boron to semiconductors like silicon to give the doped material an excess of negatively charged electrons or positively charged holes. Fundamental or not, doping also adds complexity to electronic devices, which leads to increased manufacturing costs and can limit the lifespan of optimal performance. Considering the razor-thin margins of the global solar industry, every tiny increment of cost savings can result in a significant competitive edge, which accounts for Berkeley Lab’s interest in dopant-free solar cells. The Berkeley Lab DASH (dopant-free asymmetric heterocontact) solar cell is the result of a collaboration with scientists from Australian National University and the Swiss Federal Institute of Technology of Lausanne. The issue that the solar industry faces is one of balance. By not doping, you reduce costs but you also reduce efficiency. Doped silicon solar cells have achieved a range of 20 percent efficiency and slightly higher, but until now dopant-free versions have only gotten up to 14 percent. The new DASH solar cell cranked that up to 19 percent. It consists of a crystalline silicon wafer covered in thin layers of amorphous silicon (amorphous is fancyspeak for non-crystalline; these layers are also left undoped). The next step is the application of nano-thin coatings of “moly” or molybdenum oxide on top (molybdenum is a brittle transition metal and is getting to be one of our favorite clean tech materials). The moly oxide coatings serve as a dopant-free contact that collects positive charge carriers (holes). On the back side is a thin layer of lithium fluoride, also dopant-free, that acts as a contact for electrons. The two materials can be layered on at room temperature using a standard thermal evaporation process, which helps to keep manufacturing costs down. From end to end, the entire process only takes seven steps (following is from the study, see link below): …the simplified architectures inherent to this approach allow cell fabrication in only seven low-temperature (≤200 °C), lithography-free steps. This is a marked improvement on conventional doped-silicon high-efficiency processes, and highlights potential improvements on both sides of the cost-to-performance ratio for c-Si photovoltaics. The two materials were selected because they have a successful track record when used in other types of devices, though not specifically solar cells. They are both transparent, and their electronic structures complement each other. In addition, the naturally occurring defects in thin films of moly oxide are actually a positive feature that enhances the material’s electronic properties. According to the lab, that’s just one combination, so the next phase of the research will involve trying out other materials in different combinations.
If you want more details you can find them in the journal Nature Energy under the title “Efficient silicon solar cells with dopant-free asymmetric heterocontacts.” The cost of solar cells accounts for only the “hard” part of a solar installation. Other costs include labor, permitting and other “soft” factors. While the soft costs of an installation are not dropping as fast as the hard costs, both are falling, and that brings us to the political background. In tandem with energy storage — which is also becoming more accessible and affordable — low cost solar enables more individual property owners to drift off the grid either partly or fully. That trend is already roiling the established utility marketplace, with a consequent pushback from the utility industry and its political supporters. One hotspot that we’re watching in the US (along with our sister site PlanetSave) is Florida, where a battle royal has erupted over a new solar initiative that would enable property owners to buy solar arrays from independent companies. Florida is currently one of only four states in the US that prohibit such deals. Because Florida is a high-heat state, its electricity consumption is tied to air conditioning, a ripe condition for distributed solar even without storage. We’re also watching the state of play in Nevada, where a recent net metering decision by the state’s Public Utilities Commission has resulted in a “black hole” for independent solar customers. The independent solar industry has already pulled hundreds of jobs out of Nevada as a result, with SolarCity leading the pack. We’re thinking that pushback from the solar industry won’t change any minds on the Nevada Public Utilities Commission, but it could serve as a warning to other states that are considering changing the rules in midstream.


News Article | August 15, 2016
Site: www.scientificcomputing.com

Imagine a future in which hyper-efficient solar panels provide renewable sources of energy, improved water filters quickly remove toxins from drinking water, and the air is scrubbed clean of pollution and greenhouse gases. That could become a reality with the right molecules and materials. Scientists from Harvard and Google have taken a major step toward making the search for those molecules easier, demonstrating for the first time that a quantum computer could be used to model the electron interactions in a complex molecule. The work is described in a new paper published in the journal Physical Review X by Professor Alán Aspuru-Guzik from the Department of Chemistry and Chemical Biology and several co-authors. “There are a number of applications that a quantum computer would be useful for: cryptography, machine learning, and certain number-theory problems,” Aspuru-Guzik said. “But one that has always been mentioned, even from the first conceptions of a quantum computer, was to use it to simulate matter. In this case, we use it to simulate chemistry.” “There are millions or even trillions of possible molecules,” said Jonathan Romero Fontalvo, a Ph.D. student in Aspuru-Guzik’s lab and one of the lead authors of the study. “If you are an experimentalist hoping to find a new molecule for a drug, you need to consider a huge number of possibilities, and you need to synthesize and test them all. That is extremely costly and requires a great deal of time and effort.” Classical computers can model simple molecules, but they lack the horsepower needed to model all the possible interactions that take place in more complex molecules. A molecule like cholesterol, Aspuru-Guzik said, is all but impossible to model exactly in traditional systems because it would require decades to describe how its electrons interact. Though Aspuru-Guzik and colleagues had described an algorithm to model molecules using quantum computers more than a decade ago, quantum computing resources were limited at the time, meaning the team was only able to test certain parts of the algorithm. The new study not only marks the first time the entire algorithm has been tested in a scalable manner, but also implements it with a new algorithm, dubbed the variational quantum eigensolver. Even more importantly, Aspuru-Guzik said, both algorithms were implemented in a scalable approach, meaning that while they were tested on a small molecule, they would work equally well on a larger, more complex compound. “We were actually able to compare our old algorithm against the new one,” he said. “The machine is so powerful we can do that. And because it’s scalable, the same algorithm we would run against any molecule in this case was run against a small molecule.” Using the algorithm, Aspuru-Guzik and colleagues are able to model the electronic structure of a given molecule, and then to “read” that information, giving them precise data about behavior and interactions. Armed with that information, he said, researchers can understand whether a molecule will possess the properties desired — whether it will bind to an enzyme or protein, whether it will catalyze certain reactions, and whether a material will possess specific traits. “This is arguably the most valuable application for a quantum computer,” he continued. “Commercially, the market for fine chemicals is estimated to be worth $3 trillion. 
A number of other teams, including researchers at Microsoft and national labs, have made this area a priority.” But without quantum computers, Aspuru-Guzik said, that search would amount to little more than a guessing game. That is why “I like to call this a disruptive innovation,” he said. “All the methods we have now are approximations, which means if we want to discover a new battery or a better LED, our calculations will sometimes fail. But the change a quantum computer brings is that the answer is exact. Period. That trust will allow us to discover molecules much quicker, make molecules that are much more interesting, and explore chemical spaces significantly faster. When I started at Harvard in 2006, I never imagined that 10 years later we would be at this point.” The future is likely to bring hardware advances. “We are currently fabricating a new generation of quantum chips that will enable much larger calculations,” said co-author Ryan Babbush, a quantum engineer at Google. “We are optimistic that these chips will be large enough to model small transition metal complexes, which are notoriously difficult for classical computers.” The study also has implications for other areas of research, such as machine learning. “I very much like to think of variational quantum algorithms as a quantum generalization of neural networks,” Google co-author Hartmut Neven said. “Naturally we expect quantum neural networks to be more capable than their classical counterparts in describing quantum systems. An interesting open question is whether they will also prove superior in learning classical data.” Harvard’s Office of Technology Development has filed several patent applications relating to the software that drives Aspuru-Guzik’s quantum computing platforms. The study was funded by the Luis W. Alvarez Postdoctoral Fellowship in Computing Sciences at Lawrence Berkeley National Laboratory, the Air Force Office of Scientific Research, the Army Research Office, the Office of Naval Research, and the National Science Foundation.
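To make the idea of the variational quantum eigensolver concrete, here is a purely classical toy sketch: numpy stands in for the quantum processor, and the 2x2 Hamiltonian and single-parameter trial state are illustrative assumptions rather than anything from the Harvard-Google experiment. What it shows is the core loop the article describes, a classical optimizer steering a parameterized state toward the lowest energy.

```python
# Minimal classical illustration of the variational principle behind VQE:
# prepare a parameterized state |psi(theta)>, evaluate <H>, and let a classical
# optimizer drive theta toward the ground-state energy.
import numpy as np

# Toy 2x2 Hamiltonian written in the Pauli basis: H = a*Z + b*X (illustrative numbers)
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
H = 0.7 * Z + 0.4 * X

def ansatz(theta):
    """Single-parameter trial state: a rotation of |0> about the Y axis."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = ansatz(theta)
    return psi @ H @ psi  # expectation value <psi|H|psi> (real for this H)

# Crude classical optimizer: scan theta and keep the minimum.
thetas = np.linspace(0, 2 * np.pi, 2001)
best = min(thetas, key=energy)
print(f"VQE estimate : {energy(best):.6f}")
print(f"Exact ground : {np.linalg.eigvalsh(H)[0]:.6f}")
```

On real hardware the energy evaluation would be replaced by repeated measurements on a quantum chip, which is where the expected advantage for large molecules comes from.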


News Article | February 15, 2017
Site: cen.acs.org

Chaos reigned as visitors from the seven countries—Iraq, Iran, Libya, Somalia, Sudan, Syria, and Yemen—were detained at airports or prevented from leaving on trips that had already been approved. Permanent residents who have lived in the U.S. for years were also affected. Universities, federal researchers, and companies all wrestled with what the ban would mean for them, their employees, or their students. The ban was designed to help prevent terrorist attacks, Trump said. Administration officials say it is temporary in most cases; entry to the U.S. was suspended for 90 days for most of those affected. Syrian refugee programs were halted altogether. But enforcement has been erratic, and it’s unclear when people with visas will be allowed to enter the U.S. C&EN did not uncover reports of chemists who were caught up in the immediate detentions at airports. But the normally cautious American Chemical Society, which publishes C&EN, reacted strongly. “The order itself is overly broad in its reach, unfairly targets individuals from a handful of nations, ignores established mechanisms designed to achieve the ends sought by the order, and sets potential precedent for future executive orders,” the society said in a statement. ACS has 109 members in Iraq, 41 in Iran, two in Libya, and one in Sudan. The largest number of affected scientists likely comes from Iran. More than 12,000 students came from Iran to the U.S. last year, according to data from the Institute of International Education. That’s far more than from the other countries covered by the ban. Universities scrambled to help international students and scholars who were held in airports or turned back. “The order is stranding students who have been approved to study here and are trying to get back to campus and threatens to disrupt the education and research of many others,” says Mary Sue Coleman, president of the Association of American Universities, a coalition of the top research universities. She urged the Administration to make it clear that the U.S. “continues to welcome the most talented individuals from all countries.” Alison Hatt, the user program director at Lawrence Berkeley National Laboratory’s Molecular Foundry, says an Iranian postdoc from an Italian research institution was supposed to work on a project there but would be unlikely to get a visa. “Our researchers are definitely being impacted by the anti-immigration order.” Collaborators from other countries will be reluctant to work with U.S. scientists if this is not resolved, says Zafra Lerman, president of the Malta Conferences Foundation, which promotes international scientific and technical collaborations. “We are losing the opportunity to lead by example that was ours for so many years, and we are impacting the progress of science,” Lerman says. “By putting obstacles on science diplomacy, we will just torpedo its purpose of security and safety.” Although many companies spoke out, the life sciences industry was largely silent on the ban. Neither of the main industry groups—the Pharmaceutical Research & Manufacturers of America and the Biotechnology Industry Organization—commented on the executive order. Neither did major U.S. chemical companies, none of which responded to C&EN’s request for comment. Other than Iran, most of the countries with large chemical industries, such as Saudi Arabia, Kuwait, and the United Arab Emirates, were excluded from the order.


News Article | January 11, 2016
Site: cleantechnica.com

An estimated $2.2 billion in benefits accompanying reduced greenhouse gas emissions, and $5.2 billion in benefits accompanying the reduction of other air pollutant emissions, resulted from US state renewable portfolio standard (RPS) policies operating in 2013, according to a new study from the US Department of Energy’s National Renewable Energy Laboratory (NREL) and Lawrence Berkeley National Laboratory (Berkeley Lab). The numbers quoted above are for mid-range estimates, it should be noted, so actual benefits could be notably higher (or perhaps lower). If you are not familiar with the policy in question, RPS policies require utility companies (or other electricity providers) to source a minimum share of the electricity they sell from renewable generation. Also worth noting, the report — “A Retrospective Analysis of the Benefits and Impacts of US Renewable Portfolio Standards” — demonstrated that “national water withdrawals and water consumption by fossil-fuel plants were reduced by 830 billion gallons and 27 billion gallons in 2013, respectively.” Around 200,000 renewable energy related jobs were reportedly supported by RPS policies in 2013; most of these jobs were in California and were related to utility-scale solar photovoltaic (PV) projects under development at the time. “This work is intended to inform ongoing policy discussions by helping states evaluate RPS programs,” stated Berkeley Lab’s Ryan Wiser, one of the report authors. While the overall benefits reported are large, the study carefully documents its methods and highlights where uncertainties exist. For example, benefits from greenhouse gas reductions were found to range from $0.7 to $6.3 billion, reflecting differences in underlying estimates of potential damages caused by climate change. Similarly, air pollution reduction benefits, which arise primarily from avoided premature mortality, were estimated to range from $2.6 to $9.9 billion in 2013, reflecting differences in underlying epidemiological literature, among other factors. Although the study takes a national view — evaluating all state RPS programs as a whole — many of the associated benefits and impacts were highly regional. For example, the economic benefits from air pollution reductions are associated mostly with reduced sulfur dioxide (SO2) emissions from coal-fired power plants, and are concentrated primarily in the Mid-Atlantic, Great Lakes, Northeast, and Texas. Reductions in water withdrawal and consumption were largest in California and Texas, respectively, both states that experience intermittent drought conditions. Having examined both the costs and benefits of state RPS programs historically, the researchers are planning a follow-up effort to evaluate the costs and benefits of RPS programs prospectively, considering scheduled increases to each state’s requirements as well as potential policy revisions. “Our goal was to estimate the magnitude of RPS benefits and impacts at a national-level, using established, consistent methodologies, while recognizing that states could perform their own more-detailed assessments,” NREL’s Jenny Heeter, one of the report authors, said.
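To see why the greenhouse gas benefit spans such a wide dollar range, it helps to write the calculation out: avoided emissions multiplied by an assumed damage cost per ton. The numbers below are placeholders chosen only to reproduce the shape of the published range; they are not figures from the report.

```python
# Illustrative only: how a single avoided-emissions estimate maps to the wide
# dollar range quoted above. The tonnage is a placeholder assumption, not a
# figure from the report; the spread comes from differing estimates of the
# damage caused by each ton of CO2 (the "social cost of carbon").
avoided_co2_tons = 60e6   # hypothetical avoided CO2 in 2013, metric tons

damage_estimates = {      # $/ton, spanning low to high damage assumptions
    "low damages": 12,
    "central damages": 40,
    "high damages": 105,
}

for label, dollars_per_ton in damage_estimates.items():
    benefit = avoided_co2_tons * dollars_per_ton
    print(f"{label:16s}: ${benefit / 1e9:.1f} billion")
```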


News Article | April 15, 2016
Site: www.greentechmedia.com

The renewable portfolio standards that many states have enacted are responsible for 60 percent of the growth in non-hydro renewable energy generation, according to a new study from Lawrence Berkeley National Laboratory. Most of the additional capacity to meet RPS requirements has come from wind, but in recent years solar energy has been gaining traction. Renewable portfolio standards call for a certain percentage of power generation to come from renewable resources, although what qualifies as renewable and which power generators have to meet the obligations varies by state. The majority of states are on target to meet their differing requirements in the near and long term. In 2014, RPS achievement averaged 95 percent, with only Illinois and New York lagging significantly. Each state has very different rules around RPS requirements, but the outcome is that they are achieving what they set out to do. The Berkeley Lab study found that build rates are on schedule to deliver the requirements of more than 400 terawatt-hours of clean energy by 2030, roughly double what is available today. Many states have strengthened their RPS criteria in the past few years, and only two have weakened them. For most states, the adjustments have either been higher overall targets, or specific carve-outs for solar or distributed generation. Compliance costs are also all over the map, but in all cases they are below, and often well below, cost caps. The average compliance cost for RPS requirements equated to approximately 1.3 percent of retail electricity bills in 2014, up from 1 percent in 2013. In states with solar carve-outs, the price for solar renewable energy credits varies wildly. As with other aspects of an RPS, the rules are highly state-specific. Some states, such as Massachusetts and New Jersey, have an oversupply, while the District of Columbia and New Hampshire have an undersupply. The states that have an RPS are also driving investment in renewables in other states that don’t have the mandates. More than 10 percent of projects to support renewable portfolio standards have been built in states that do not have an RPS. Only a handful of states in the U.S. have not seen any renewable generation build-out because of an RPS. Moving forward, RPS mandates will not be as big a driver for renewable adoption as they had been in the past, even though they are expected to add another 60 gigawatts of clean power capacity to the U.S. grid by 2030. Instead, market drivers and other regulations are increasingly bringing wind and solar on-line across the U.S. as prices for renewable energy continue to drop. The shift away from a reliance on RPS mandates to deploy renewables is already underway. In 2013, 71 percent of renewable energy installations were due to RPS requirements, a figure that dropped to 46 percent in 2015.


News Article | January 6, 2016
Site: www.greentechmedia.com

State renewable portfolio standards, the key policies driving renewable power growth across the United States, are under attack from opponents who say they increase electricity costs for consumers. But a new report finds that any costs are far outweighed by the economic benefits derived from the environmental and health improvements that come with greening the energy mix in the 29 states that have adopted RPS targets. That’s not counting the potential for billions of dollars in reduced natural-gas costs and wholesale electricity costs -- although these additional cost reductions come at the expense of the companies that make money on generating electricity from fossil fuels. Wednesday’s report, A Retrospective Analysis of the Benefits and Impacts of U.S. Renewable Portfolio Standards, found an average of $2.2 billion in economic benefits from reduced greenhouse gas emissions, and another $5.2 billion in benefits from reductions in sulfur dioxide and other air pollutants, for RPS programs in 2013. These averages are taken from wider-range estimates of the value of mitigating the negative effects of climate change for GHG reductions, as well as reducing mortality from air pollution. These figures are much higher than the roughly $1 billion in annual compliance costs (that is, higher electricity prices) that have come about as a result of state RPS mandates between the years 2010 and 2013. Those figures come from a previous study from the Department of Energy’s Lawrence Berkeley National Laboratory (LBNL) and the National Renewable Energy Laboratory, which also co-authored Wednesday’s report. The two reports use different methodologies, which makes a direct comparison difficult, said Ryan Wiser, LBNL analyst and report co-author, in an interview. Specifically, the previous study relied on state-by-state measures of RPS compliance costs, which vary widely in their sophistication and methods, while the new report used a common methodology across all 29 states (plus the District of Columbia) that have RPS mandates in place. Still, the new report does help provide an important counterweight to arguments that state RPS targets are driving up electricity costs to unacceptable levels, he said. “Most people and most organizations that have come out in opposition to RPS programs have focused on the cost angle,” said Wiser. “We did this work previously that compiled all these RPS state cost analyses, while recognizing that the states developed their RPS programs not to deliver short-term savings, but to deliver long-term benefits.” “This more recent work helps us understand what we are getting for these costs, and [whether] those benefits seem like they’re in excess of those costs,” he said. Groups such as American Legislative Exchange Council, Americans for Prosperity, Americans for Tax Reform and the Chamber of Commerce have been fighting to block or overturn RPS mandates in individual states, citing costs to consumers. These groups are also funded by fossil fuel companies that stand to lose money in the transition to green energy -- and which have been caught many times making misleading claims about the costs of wind and solar power. The new report also calculates a separate set of economic measures that are termed “impacts,” rather than benefits, Wiser said. 
These include estimates of nearly 200,000 jobs created in the renewable energy sector, driving over $20 billion in gross domestic product in 2013, along with some potentially large (but still unknown) effects on lowering wholesale electricity prices and natural-gas prices. The difference here, he said, is that “benefits are undoubtedly societal benefits." Few would argue that reducing early deaths from air pollution and mitigating the harm of climate change aren’t good for everyone. But new green energy jobs could be seen as coming at the expense of jobs lost in the fossil fuel sector, and reduced wholesale electricity and natural-gas prices, while good for people who pay for them, aren’t good for the power-plant owners and natural-gas producers who make money on them. One very interesting figure is the calculation of how state RPS goals could lead to reducing wholesale electricity prices. According to Wednesday’s report, the growth of wind, solar and other renewable energy driven by these mandates could lead to up to $1.2 billion in reduced electricity costs and consumer bills -- but it could also be calculated as delivering an impact of zero, or no reduction at all. “In simplest form, the question here becomes one of the adjustment time period,” Wiser explained. “The year you inject a huge amount of solar and wind into the system, you depress wholesale electricity prices,” which one might assume leads to lower costs. But if that leads to the retirement of existing coal, natural gas or nuclear power plants, or the decision not to build new power plants, that could lead to a shortage of electricity, which would cause prices to rise yet again, potentially erasing the short-term benefits. One can see these concepts playing out in Germany, where massive injections of solar and wind power under lucrative feed-in tariff regimes have led to significant disruptions in its energy markets and pushed traditional utility business models to the breaking point. Of course, the United States isn’t nearly as far along in the green energy transformation as is Germany, Wiser noted. Even so, it’s a factor that is being considered by states that are targeting 50 percent or higher shares of renewables within traditional energy market structures that weren’t designed to manage them.
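Setting aside the methodological caveat Wiser raises, the rough benefit-to-cost comparison implied by these numbers is easy to reproduce. The sketch below simply combines the mid-range benefit estimates with the roughly $1 billion annual compliance cost from the earlier study; it is indicative only, since the two figures were not derived the same way.

```python
# Quick comparison of the mid-range benefit estimates above with the roughly
# $1 billion per year in compliance costs cited from the earlier LBNL/NREL study.
ghg_benefits = 2.2e9          # mid-range value of avoided climate damages, 2013
air_quality_benefits = 5.2e9  # mid-range value of avoided health damages, 2013
compliance_costs = 1.0e9      # approximate annual RPS compliance cost, 2010-2013

total_benefits = ghg_benefits + air_quality_benefits
print(f"Total societal benefits: ${total_benefits / 1e9:.1f} billion")
print(f"Benefit-to-cost ratio  : {total_benefits / compliance_costs:.1f}x")
```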


News Article | November 10, 2016
Site: www.gizmag.com

A new study has found that in response to increasing rates of human-induced CO2 in the atmosphere, the Earth's vegetation has increased the rate at which it absorbs carbon. We might have to treat our humble house plants to an extra helping of sunlight this week. As human-induced CO2 emissions continue to increase, a new study suggests that the Earth's vegetation has upped its game to help offset that growth. After a 40-year upward trend, the rate at which atmospheric CO2 levels increased hit a plateau between 2002 and 2014, thanks largely to plants plucking more CO2 out of the air than they have previously. In 1959, the rate of growth of carbon dioxide levels in the atmosphere sat at about 0.75 parts per million (ppm) per year, and over the second half of the 20th century, that figure increased rapidly, up to 1.86 ppm per year in 2002. But in the years since then, the rate of growth has flatlined, despite the fact that human activity continues to pump out more and more CO2. That's not to say the amount of CO2 in the atmosphere isn't still going up – it is – but the rate of increase has slowed, remaining steady on about 1.9 ppm each year since 2002. Not only that, but the amount of anthropogenic CO2 that stays in the air has dropped by around 20 percent. So where is it all going? New research, headed up by the Department of Energy's Lawrence Berkeley National Laboratory, analyzed data from the Global Carbon Project and ran computer models to determine what changes may be occurring in Earth's carbon cycle. "We believed one of the planet's main carbon sinks had unexpectedly strengthened," says Trevor Keenan, a corresponding author of the paper. "The question was: which one?" Oceans are considered a carbon sink, but computer models indicate their rate of absorption remains fairly stable. Instead, the team looked to vegetation, which can fluctuate year-to-year in terms of carbon uptake, and tested 10 "global dynamic vegetation models," which simulate different combinations of atmospheric CO2 levels, temperature, soil moisture, and other factors, to see how that carbon cycle might change over time. From those models, it appears that over the decades these ecosystems have effectively doubled the rate at which they absorb CO2. In the 1950s, the Earth's vegetation absorbed between 1 and 2 petagrams (1 trillion to 2 trillion kg, or 2.2 to 4.4 trillion lb) of CO2 each year, but in the 21st century that number had risen to between 2 and 4 petagrams annually. Interestingly, the models suggest that the increase in CO2 levels is itself responsible for the improved absorption performance of the planet's plants. With more CO2 in the air, plants photosynthesize more readily, so they grow more and take up still more carbon. That boom was helped along by the fact that plant respiration, a process which sees them take in oxygen and "exhale" CO2, didn't increase at the same rate. "These changes decreased the amount of anthropogenic CO2 that stays in the atmosphere, and thus slowed the accumulation of atmospheric CO2," says Keenan. "This highlights the need to identify and protect ecosystems where the carbon sink is growing rapidly." But as encouraging as these figures may appear, the news isn't quite as rosy as it may seem at a glance. "Unfortunately, this increase is nowhere near enough to stop climate change," says Keenan. "We've shown the increase in terrestrial carbon uptake is happening, and with a plausible explanation why.
But we don't know exactly where the carbon sink is increasing the most, how long this increase will last, or what it means for the future of Earth's climate."
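The "airborne fraction" idea behind these findings can be illustrated with some rough bookkeeping: compare how much CO2 is emitted each year with how much actually accumulates in the atmosphere, and attribute the difference to land and ocean sinks. The conversion factor and emissions total below are approximations supplied for illustration, not numbers from the study, and land-use emissions are ignored.

```python
# Rough bookkeeping behind the airborne fraction: the share of emitted CO2
# that stays in the atmosphere. Conversion factor and emissions figure are
# approximate assumptions, not values reported by the study.
PPM_TO_PG_CO2 = 7.8          # roughly 7.8 petagrams (Gt) of CO2 per 1 ppm of atmosphere

annual_growth_ppm = 1.9      # observed growth rate since ~2002, per the article
annual_emissions_pg = 36.0   # approximate recent fossil-fuel CO2 emissions, Pg/yr

atmospheric_increase = annual_growth_ppm * PPM_TO_PG_CO2
airborne_fraction = atmospheric_increase / annual_emissions_pg
land_and_ocean_sinks = annual_emissions_pg - atmospheric_increase

print(f"CO2 staying in the air : {atmospheric_increase:.1f} Pg/yr")
print(f"Airborne fraction      : {airborne_fraction:.0%}")
print(f"Absorbed by sinks      : {land_and_ocean_sinks:.1f} Pg/yr")
```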


News Article | December 6, 2016
Site: www.theguardian.com

Tech giants are jockeying to be the first to hit a 100% renewable energy goal. Google, which has invested in solar and wind energy for a decade, intends to get there by 2017. Google is the largest corporate buyer of renewable energy, and plans to buy enough wind and solar energy to offset all the electricity used by its 13 data centers and offices in 150 cities worldwide, the company said Tuesday. Apple seems close to reaching its own 100% goal as well. The company said it achieved 93% in 2015. An Apple spokeswoman said the company has yet to set a year for when it would likely cross the finish line. For Google, hitting the 100% target means that for every unit of electricity it consumes – typically from coal or natural gas power plants – it would buy a unit of wind or solar electricity. The company wouldn’t say how much electricity it will need to have purchased by the end of next year to reach its 100% goal, but did say that the amount would exceed the 5.7 terawatt-hours of solar and wind energy that it bought in 2015. “We want to run our business in an environmentally responsible way, and energy consumption is the largest portion,” said Neha Palmer, head of energy strategy and development at Google’s Global Infrastructure Group. Google is taking a big leap to that 100% goal, having achieved just 37% in 2014. The company has invested in renewable energy ever since it kicked off the construction of a 1.6-megawatt solar energy system in 2006. Since 2010, it’s signed 2.6 gigawatts’ worth of solar and wind contracts. The tech giant isn’t alone in setting the 100% target. A global campaign to promote 100% renewable energy use in the business world includes Ikea, Facebook, Starbucks and Johnson & Johnson. Businesses, like homeowners, have historically relied on their local utilities for power. In 2015, about 67% of the electricity generated in the US came from fossil fuels. Businesses would have to build and run their own solar or wind farms if they wanted to hit their 100% goals more quickly, but that would require hefty investments and expertise they don’t have. As a result, when companies set strong renewable energy goals, they often reach them by buying enough solar and wind electricity or renewable energy credits to offset their use of electricity from coal or natural gas power plants. Some companies, such as Google and Microsoft, have invested in solar and wind power plants to help increase the amount of renewable energy on local electric grids. Or, using their clout as large energy customers, they work on convincing their local utilities to invest in renewable energy. Once extremely expensive, solar and wind have seen their prices fall significantly in the past 10 years. Government tax breaks have also helped to make solar and wind more affordable to both businesses and homeowners. The price of building large solar farms, which have the scale to supply utilities, dropped 12% in 2015, according to Lawrence Berkeley National Laboratory, part of the US Department of Energy. Market analysts expect the prices to continue to fall even with the abundance of cheap coal and natural gas. The pressure on countries around the world to meet their targets from the Paris climate agreement, which went into effect last month, will make renewable energy attractive, said Bloomberg New Energy Finance. The research firm has projected that solar and wind will become the cheapest sources of electricity for “most of the world” after 2030.
“Renewable energy has become incredibly cost effective,” said Dan Kammen, a professor of energy at the University of California-Berkeley. “There is no company on the planet that can’t make a 100% energy target a viable, cost-effective strategy.” Yet most businesses have yet to set a renewable energy target. That’s because cost isn’t the only issue. Big companies tend to have an advantage over smaller firms because they have the resources to understand the energy markets and negotiate contracts to buy renewable energy, said Colin Smith, solar analyst for GTM Research. “You’re talking about very complex deals and arrangements with unique risk profiles that most companies aren’t fully well equipped to understand,” he said. Google currently pays for wind and solar power from 20 renewable energy projects in the US and abroad, in places such as Sweden and Chile. However, it doesn’t limit itself to buying solar and wind only in regions where it operates, and sees itself as a champion of reducing the emissions produced by the electric industry, which is the largest source of greenhouse gas emissions in the US. “Wind and solar developers couldn’t get financing without us guaranteeing to pay for the power in the long term,” said Palmer. Setting the 100% renewable energy goal is not the only way to reduce a company’s carbon footprint, said Dan Reicher, executive director of the Steyer-Taylor Center for Energy Policy and Finance at Stanford University. Businesses should first look at cutting down on their energy use and becoming more energy efficient, said Reicher. That will then reduce the amount of renewable energy they need to procure. “Energy efficiency tends not to sound as sexy, just as putting solar panels on your roof is more interesting than putting an efficient furnace in your basement,” he said. “But from an economic and environmental perspective, you want to start with energy efficiency.” Reicher also noted that electricity isn’t always the biggest source of energy for a business. FedEx, for example, uses far more energy in transportation, he said. “Often a big chunk of a company’s business goes beyond electricity,” said Reicher. “What are they doing on the industrial side, on the transportation side, for heating and cooling?” The election of Donald Trump as president has renewable energy supporters worried that the government will stop backing the policies that have helped make solar and wind competitive against coal and natural gas. Trump has vowed to revive the coal industry, which has seen half a dozen bankruptcies in recent years as it struggles to compete with cheaper natural gas. But whether the new administration will create anti-renewable energy policies remains to be seen, Reicher said. Google doesn’t anticipate changes to its renewable energy initiatives in the near future. “The results of any one election won’t change our plans,” said Palmer.
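The "100%" claims above rest on annual accounting rather than real-time supply, which a few lines make clear. The figures below are made-up placeholders, not Google's actual consumption data; the point is only how the matching percentage is computed.

```python
def renewable_match(purchased_mwh: float, consumed_mwh: float) -> float:
    """Share of annual consumption offset by renewable purchases (can exceed 1)."""
    return purchased_mwh / consumed_mwh

# Hypothetical company-wide numbers, for illustration only.
consumed = 4_000_000    # MWh drawn from local grids over the year
purchased = 3_000_000   # MWh of wind and solar bought under long-term contracts

print(f"Annual renewable match: {renewable_match(purchased, consumed):.0%}")  # 75%
# A "100% renewable" year means purchased >= consumed, even though facilities
# still draw whatever generation mix the local grid supplies in real time.
```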


News Article | October 4, 2016
Site: www.chromatographytechniques.com

When the local water management agency closes your favorite beach due to unhealthy water quality, how reliable are the tests they base their decisions on? As it turns out, those tests, as well as the standards behind them, have not been updated in decades. Now scientists from Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a highly accurate, DNA-based method to detect and distinguish sources of microbial contamination in water. The new method uses the award-winning PhyloChip, a credit card-sized device that can detect the presence of more than 60,000 species of bacteria and archaea, and was found to be more sensitive than conventional methods at assessing health risks. In tests at the Russian River watershed in Northern California, the Berkeley Lab researchers found instances where their method identified potential human health risks that conventional fecal indicator tests had failed to detect. Conversely, they also found instances where the conventional tests flagged bacteria that weren’t likely risks to human health. The research was led by Eric Dubinsky and Gary Andersen, microbial ecologists at Berkeley Lab, and was published recently in the journal Water Research in an article titled, “Microbial source tracking in impaired watersheds using PhyloChip and machine-learning classification.” Steven Butkus of the North Coast Regional Water Quality Control Board, which supported part of the research, was also a co-author. “With the PhyloChip, in an overnight test we can get a full picture of the microorganisms in any given sample,” Dubinsky said. “Instead of targeting one organism, we’re essentially getting a fingerprint of the microbial community of potential sources in that sample. So it gives us a more comprehensive picture of what’s going on. It’s a novel way of going about source tracking.” What local water agencies currently do is collect water samples, culture the bacteria overnight, and then check the growth level of two types of bacteria, E. coli and Enterococcus, which are presumed to be indicators of fecal contamination. However, this method doesn’t distinguish between sources. The bacteria could have come from humans, cows, ducks, sewage, or even decaying vegetation. “These tests have been used for decades and are relatively primitive,” Dubinsky said. “Back in the 1970s when the Clean Water Act was developed and we had sewage basically flowing into our waters, these tests worked really well. Epidemiological studies showed an association of these bacteria with levels of illness of people who used the water. These bacteria don’t necessarily get you sick, but they’re found in sewage and fecal matter. That’s why they’re measured.” As pollution from point sources—single identifiable sources such as sewage—has been cleaned up over time, the emerging concern has become what are known as nonpoint sources, or diffuse sources, throughout the watershed, such as agricultural lands. “The picture is much more complicated now than it was back then, when the concern was really point sources,” Dubinsky added. The PhyloChip, which was developed by Andersen and several other Berkeley Lab scientists, has been used for a number of medical, agricultural, and environmental purposes, including understanding air pollution, the ecology of coral reefs, and environmental conditions of the Gulf of Mexico after the BP oil spill. With 1 million probes, it identifies microbes based on variations of a specific gene, with no culturing needed.
“About seven years ago we started doing water quality work, and we realized the PhyloChip could provide a fundamentally new and improved method for doing source tracking,” Andersen said. Determining the source of any particular pathogen is not a straightforward task. In most cases, a single microbe is not a definitive marker of an animal or other source. “A microbial community is complex,” Dubinsky said. “A cow may have 1,000 different organisms.” So Andersen and Dubinsky had an idea. “We had Laleh Coté, an intern at the time and now a Lab employee, run around and basically collect poop from all sorts of animals,” said Andersen. “What we’ve done since then is develop a reference library of the microbial communities that occur in different types of poop—we have cows, horses, raccoons, humans, different types of birds, pigs, sea lions, and other animals, as well as sewage and septage. We used that library to develop a model.” The new method takes the unknown sample and compares it against this microbial reference library. “We’ve used the PhyloChip in a way that it hasn’t been used before by using machine learning models to analyze the data in order to detect and classify sources,” Andersen said. “It’s essentially giving you a statistical probability that a microbial community came from a particular source.” They validated their method by comparing it to about 40 other methods of microbial source tracking in a California study. “We were the only method that could detect all sources and get them right,” Dubinsky said. If the source is an animal that is not in the reference library, their method can still point you in the right direction. “For example, in that study, one sample was a chicken,” said Dubinsky. “We hadn’t analyzed chickens, but we had geese, gulls, and pigeons. We were still able to determine that the sample was a bird.” In extensive testing throughout the Russian River watershed, which is out of compliance with the Clean Water Act, the Berkeley Lab researchers found widespread contamination by human sources close to areas where communities rely on aging septic tanks. They also found significant human contamination immediately after a weekend jazz festival, whereas testing by conventional methods yielded a much weaker signal after a time lag of a couple days. “Our method is more sensitive to human contamination than those fecal indicator tests are,” Dubinsky said. The team is now working on characterizing the microbial community of naturally occurring E. coli and Enterococci, using Hawaii with its warm waters as a testing ground. “They can occur naturally in sediments and decaying kelp and vegetation,” Dubinsky said. “It is known that they do, but nobody has developed a test to definitively show that.” The researchers will also be able to study whether climate affects microbial communities. “Does a Hawaiian cow look like a California cow in terms of fecal bacteria composition? That’s a good question and something we’ll be able to find out,” he said. They are working closely with the U.S. Environmental Protection Agency (EPA), which is looking at new technologies for what it calls “next generation compliance.” Ultimately the goal is to develop their method—possibly with a downsized version of the PhyloChip—to the point where it can be universally used in any location and by non-experts.
Dubinsky says the method should also be useful with the burgeoning issue of algal blooms, to understand, for example, the processes by which they form, the microbial dynamics before and after a bloom, and specifically, whether runoff from livestock production in the Midwest is related to algal blooms in the Great Lakes, a question they’re investigating with the EPA.
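Conceptually, the source-tracking step works like any supervised classification problem: fingerprints labeled by source form the training library, and an unknown sample gets a probability for each source. The sketch below uses tiny synthetic abundance profiles and an off-the-shelf random forest as stand-ins; it is not PhyloChip data or the team's actual model, just an illustration of the machine-learning idea Andersen describes.

```python
# Conceptual sketch of microbial source tracking as supervised classification.
# The profiles and model choice are illustrative assumptions, not the
# Berkeley Lab method or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each source gets a characteristic (made-up) abundance profile over 6 taxa.
profiles = {
    "human": [0.40, 0.05, 0.30, 0.05, 0.15, 0.05],
    "cow":   [0.05, 0.45, 0.05, 0.30, 0.05, 0.10],
    "bird":  [0.10, 0.05, 0.05, 0.10, 0.40, 0.30],
}

# Build a reference library: noisy replicate samples per source.
X, y = [], []
for source, profile in profiles.items():
    for _ in range(30):
        X.append(rng.normal(profile, 0.03))
        y.append(source)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Classify an "unknown" water sample that resembles a human-dominated signal.
unknown = rng.normal(profiles["human"], 0.05).reshape(1, -1)
for source, prob in zip(model.classes_, model.predict_proba(unknown)[0]):
    print(f"P({source:5s}) = {prob:.2f}")
```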


News Article | October 26, 2016
Site: www.nature.com

List of discoveries shows US contributions have declined, but Japanese, Russian and European work is on the rise. When it comes to discovering nuclear isotopes, retired physicist Gottfried Münzenberg is top, with 222. His colleague Hans Geissel, from the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany, is next with 211. Then comes Francis Aston, the British chemist who won the 1922 Nobel Prize in Chemistry, with 207. These provisional rankings have been assembled by Michael Thoennessen, a physicist at the National Superconducting Cyclotron Laboratory at Michigan State University in East Lansing. With the help of several undergraduate students, he has assembled a database of the almost 3,100 known nuclear isotopes — atoms that share the same number of protons but have varying numbers of neutrons. He lists the discovery of each by researcher, the lab at which they worked and the method by which they produced the isotope. The vast majority of these isotopes only exist naturally for fleeting moments inside stars, formed during the production of heavy elements. They never exist on Earth at all, except for a few nanoseconds in elite nuclear-physics labs. The International Union of Pure and Applied Chemistry, headquartered in Zurich, Switzerland, assigns the official credit for discovering elements, but Thoennessen is now doing the same for all the isotopes of those elements. "I don't think it matters so much who gets the credit, but historically it's important to identify which countries and labs are driving the field," he says. The database lets Thoennessen rank labs as well as people, and to track how different countries' rising and falling investments in nuclear technology have affected where and how isotopes are discovered (an accompanying animated chart of known isotopes shows how many were found in quick succession with each new wave of technology). Top of the lab list is Lawrence Berkeley National Laboratory in Berkeley, California, with 634 isotopes. GSI Darmstadt is second with 368; the University of Cambridge, UK, is third with 222, its discoveries made mostly in the days of Aston; and the Joint Institute for Nuclear Research in Dubna, Russia, is fourth with 215. Berkeley may reign supreme in terms of sheer numbers, but the trend over time tells a different story. In the 1980s and 1990s the lab failed to upgrade its equipment and so fell behind other labs. It also lost some credibility in 2002, when allegations surfaced that physicist Victor Ninov had faked the discovery of elements 116 and 118 there. These days, most new finds come from GSI Darmstadt, Dubna and other locations, such as the Radioactive Isotope Beam Factory (RIBF) in Wako, part of Japan's RIKEN national network of labs. Although the decline of Berkeley means that the United States is losing ground, in the longer term Thoennessen expects the country to maintain its supremacy with the Facility for Rare Isotope Beams (FRIB), construction of which is expected to begin in 2012 at Michigan State University. Not everyone agrees that this ranking is the best way to capture the science of the field. Krzysztof Rykaczewski, a nuclear physicist at Oak Ridge National Laboratory in Tennessee, who according to Thoennessen's list has discovered 60 isotopes, would like to see a ranking include not only discoveries, but also the first person to study the properties and nuclear structure of the atoms. "Identifying is only the first step," he says.
But Patrick Regan, a physicist at the University of Surrey in Guildford, UK, who has 20 isotope discoveries to his name, thinks that the discovery itself does reflect something important. "I'm proud of each one of those 20," he says, "each of them is like a child to science." Thoennessen set the bar high when deciding what counted as a discovery, in the hope of improving standards in the field. He considers an isotope "discovered" only if its mass and charge have been identified in a peer-reviewed paper. As he dug into the literature, he found that some generally accepted isotopes in fact appeared only in less rigorously vetted sources, such as PhD theses, conference proceedings or unpublished 'private communications'. He discounted such discoveries. "There are numbers people take for granted where one really shouldn't," says Thoennessen. One person who is pleasantly surprised by the ranking is Münzenberg. He was aware that he had made a large contribution, including discovering elements 107–112 and some of their isotopes, but he hadn't realized that he would be top in the world. "I didn't know how many we'd made," he says. GSI Darmstadt is expected to announce a further 60 isotopes in the next year, and as equipment Münzenberg designed is still in use he may well remain ahead for the foreseeable future.
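The rankings themselves are straightforward bookkeeping once each isotope record carries its discoverers and lab, as the short sketch below illustrates with a made-up excerpt (the isotope labels and discoverer names are placeholders, not entries from Thoennessen's database).

```python
# Illustrative tally of discoveries per lab and per person from a list of
# records. The entries are fabricated placeholders, used only to show the
# counting step behind a ranking like Thoennessen's.
from collections import Counter

discoveries = [
    {"isotope": "isotope-1", "lab": "GSI Darmstadt", "discoverers": ["Physicist A", "Physicist B"]},
    {"isotope": "isotope-2", "lab": "GSI Darmstadt", "discoverers": ["Physicist B"]},
    {"isotope": "isotope-3", "lab": "Berkeley Lab",  "discoverers": ["Physicist C"]},
    {"isotope": "isotope-4", "lab": "RIKEN RIBF",    "discoverers": ["Physicist D"]},
]

lab_counts = Counter(record["lab"] for record in discoveries)
person_counts = Counter(name for record in discoveries for name in record["discoverers"])

print("Labs   :", lab_counts.most_common())
print("People :", person_counts.most_common())
```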


News Article | February 21, 2017
Site: www.eurekalert.org

UC Riverside's Ming Lee Tang is focused on developing nanoparticles that can increase the efficiency of solar panels. RIVERSIDE, Calif. -- Ming Lee Tang, an assistant professor of chemistry at the University of California, Riverside, has been awarded a Sloan Research Fellowship for her research with nanoparticles that could have a strong impact on the solar power industry and biomedical fields. Awarded annually since 1955, Sloan Research Fellowships honor early-career scientists and scholars whose achievements and potential identify them as rising stars, the next generation of scientific leaders. The two-year, $60,000 fellowships are awarded yearly to 126 researchers. "I am happy to get it," Tang said, referring to the fellowship. Tang, who has been at UC Riverside since 2012, is focused on designing, synthesizing and characterizing hybrid organic-inorganic nanocrystal-based materials. Her lab uses the toolbox of synthetic chemistry to create novel nanostructures that have synergy between soft matter and traditional semiconductor materials. These nanostructures can confine light at nanoscale dimensions, which gives them unique properties. Applications include biomedical imaging to treat diseases such as cancer, as well as uses in the solar industry. For example, she recently co-authored a paper that outlines a new way to make solar energy conversion more efficient. The researchers created a hybrid material that captures two infrared photons that would normally pass right through a solar cell without being converted to electricity. The ability to utilize the infrared portion of the solar spectrum could boost the efficiency in generating solar power by 30 percent or more. Tang received her Ph.D. from Stanford University and then conducted postdoctoral research at the University of California, Berkeley and Lawrence Berkeley National Laboratory. The fellowships -- given by the Alfred P. Sloan Foundation, a philanthropic, not-for-profit grant-making institution based in New York City -- are awarded in eight scientific and technical fields -- chemistry, computer science, economics, mathematics, computational and evolutionary molecular biology, neuroscience, ocean sciences, and physics. Candidates must be nominated by their fellow scientists and winning fellows are selected by an independent panel of senior scholars on the basis of a candidate's independent research accomplishments, creativity, and potential to become a leader in his or her field. UC Riverside has had 12 fellows in its history, according to the Sloan Foundation website. Seven of those have been in chemistry. The most recent faculty member named a fellow was Naveen Reddy, an assistant professor in the Department of Physics and Astronomy, in 2013.


News Article | March 28, 2016
Site: www.greentechmedia.com

The growth of distributed energy resources is causing a transition as fundamental as the shift to competitive power markets in the 1990s. Distributed energy resources (DERs) are diverse -- from energy-efficiency and peak-shaving initiatives to distributed generation, storage, and microgrids -- but they all enable customers to reduce their use of traditional grid services, and even to provide valuable services to the grid. These resources fit like a square peg in the round hole of the traditional regulatory paradigm. As a result, regulators are taking a more serious look at a concept called performance-based regulation (PBR). Consultants Mark Newton Lowry and Tim Woolf explore PBR as part of a new series of reports called Future Electric Utility Regulation from Lawrence Berkeley National Laboratory, funded by the U.S. Department of Energy. In their paper, Lowry and Woolf ask: Do today’s utilities need a PBR? Under traditional “cost of service” regulation, revenues are based on the utility’s total costs of providing service. All the investments made by the utility (net of depreciation) are known as the “rate base,” and utilities earn a percentage return on it. The bigger the rate base, the greater the profit. And if sales of electricity increase more than expected between rate cases, the utility keeps the additional margin. As the Alberta Utility Commission put it in a 2010 proceeding that led to a PBR mandate, traditional regulation “offers few incentives to improve efficiency, and produces incentives for regulated companies to maximize costs and inefficiently allocate resources.” Customer-side DERs erode utility earnings by reducing utility sales and opportunities for capital spending. This gives utilities “a material disincentive to accommodate DERs, even when DERs meet customer needs at lower cost than traditional grid service,” write Lowry and Woolf. This is where the round hole comes in. Through proposed rate designs, engineering requirements, demand-side management budgets, and other means, utilities play a key role in determining whether DERs thrive. Because of the incentives inherent in traditional regulation, utilities are unlikely to support new programs and technologies for DERs -- resources that offer considerable benefits to customers. Regulatory approaches that align the interests of consumers and utilities can help achieve a reliable, affordable and flexible power system. While New York’s Reforming the Energy Vision (REV) proceeding is the most high-profile attempt in the United States to solve this problem, states across the country are dealing with it in different ways. Regulators are witnessing increasingly complex dockets brought on by the electric sector’s transformation. Some are asking: Why not tie utility earnings to outcomes rather than to investments and throughput? PBR is an alternative to traditional utility regulation that emphasizes incentives for good performance. It can reduce utilities’ incentives to grow their rate base and use of their system and strengthen incentives to use DERs to reduce costs. PBR allows utilities to innovate and test new approaches, becoming more efficient in the process. If the utility succeeds, it gets to keep a share of the savings, creating a win for both customers and the utility. The flexibility inherent in PBR is especially important in a time of rapid technology change. This is not to say that PBR is the right solution in every case. 
According to Lowry and Woolf, “It has its pros and cons, and it looks different from the perspective of consumers and utilities. Nor is it always the same; different states and countries have developed many variations.” They document a variety of PBR tools that are available and in active use:

Revenue regulation: Also known as “decoupling,” revenue regulation sets utility revenues based on factors other than the megawatt-hours of throughput. This reduces the impact of lower throughput on utility profits, and thus makes utilities more open to the growth of DERs.

Multiyear rate plans: MRPs are the most common approach to PBR worldwide. Instead of coming in for a rate case every year or two, the utility operates under a rate plan that generally lasts four to five years. Formulas trigger automatic adjustments to the utility’s allowed revenues between rate cases without linking these adjustments to a utility's actual cost. These formulas are known as attrition relief mechanisms, or ARMs (a simple numerical sketch of one common ARM form appears below). Stronger incentives for cost containment can help utilities to better appreciate the cost-saving potential of DERs.

Performance incentive mechanisms: PIMs strengthen incentives for good performance in targeted areas such as reliability, safety and demand-side management. PIMs for DERs could include things like the speed of the interconnection process and the success of peak load management programs. PIMs can be used with multi-year rate plans or added to traditional cost-of-service regulation. For example, nearly half of all U.S. retail jurisdictions already use PIMs to encourage demand-side management.

Regulators have tried PBR in many forms and places over the past 25 years. The “RIIO” approach in Britain has sparked particular interest in some regions. RIIO, short for Revenues = Incentives + Innovation + Outputs, brings all of these elements together into an eight-year plan, with an extensive set of performance metrics and PIMs. While Lowry and Woolf consider the RIIO incentives to be strong and balanced, they note that RIIO plans are complicated, with each plan taking two and a half years to develop. They cite other advantages and disadvantages that PBR offers to both customers and utilities. As demand for power flattens or falls -- as it has for the past decade -- investment opportunities dry up. With PIMs, they note, “Improved performance can become a new profit center for a utility at a time when traditional opportunities for earnings growth are diminishing.” MRPs can even “change the culture of utility management by fostering innovation, giving greater flexibility, and creating an increased focus on opportunities to reduce cost and improve long-term performance.” Another benefit is increased awareness of how a utility’s performance compares to those of peer utilities. Customers can also benefit from PBR. If done right, customers could get better performance from the utility and possibly lower costs. They’ll find it easier to use DERs and may even get proactive help from their utility. Of course, setting incentives and formulas is not easy, and mistakes can lead to problems. If the incentives are too low, utilities won’t respond. If they are too rich, profits will rise at the expense of consumers. PBR mechanisms can be complex and controversial. And having many years between rate cases -- as is the case in Britain -- can provide ample opportunity to go off track, especially in a period of rapid change. 
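The ARM adjustment just described is easy to see with numbers. Below is a minimal sketch, in Python, of one common ARM form -- an index-based revenue cap that escalates allowed revenue by inflation minus a productivity offset (the "X factor") -- using purely illustrative figures; actual formulas vary by jurisdiction and are not taken from the Lowry and Woolf report.

```python
# Illustrative attrition relief mechanism (ARM): an index-based revenue cap.
# Allowed revenue escalates each year by inflation minus a productivity
# offset (the "X factor"), with no link to the utility's actual costs.
# All numbers here are hypothetical, chosen only to show the arithmetic.

def allowed_revenue(base_revenue, inflation, x_factor, years):
    """Return the allowed-revenue path over a multiyear rate plan."""
    path = [base_revenue]
    for _ in range(years):
        path.append(path[-1] * (1 + inflation - x_factor))
    return path

# A five-year plan starting from $500M allowed revenue,
# with 2.5% inflation and a 1.0% productivity offset.
for year, revenue in enumerate(allowed_revenue(500e6, 0.025, 0.010, 5)):
    print(f"Year {year}: ${revenue / 1e6:.1f}M")
```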
While regulators in the United States may not go as far as RIIO, several states have approved MRPs, many states have approved PIMs without MRPs, and many states use decoupling. These PBR approaches, along with continuing adaptations to traditional cost of service regulation, can help support cost-effective use of DERs. Since goals and situations vary widely from state to state, Lowry and Woolf stop short of recommending a single approach to regulators and stakeholders. “If their main concern is to improve performance in specific areas, standalone PIMs might be sufficient. […] If they instead seek wide-ranging performance improvements, including better capital-cost management, MRPs may be better suited to these goals than PIMs alone. Regulators and stakeholders who wish to improve performance comprehensively and also focus on specific areas of performance in need of improvement should consider MRPs with an appropriately tailored package of PIMs.”


State renewable portfolio standards are responsible for over half of all renewable electricity growth in the US since 2000. This is the primary conclusion from a new report published earlier this month by Lawrence Berkeley National Laboratory, which also concluded that total renewable portfolio standards (RPS) demand over the next 15 years is only going to increase, requiring significant growth from the US power sector. The report, US Renewables Portfolio Standards: 2016 Annual Status Report, presented in slide format (PDF), revealed that RPS policies currently apply to 55% of all US retail electricity supplies. Additionally, 60% of all growth in renewable electricity generation, and 57% of capacity increase since 2000, can be associated with state RPS requirements. Looking forward, however, RPS demand is going to require a lot of work. Total RPS demand will double from 215 TWh in 2015 to 431 TWh in 2030, requiring non-hydro renewable electricity generation to reach 12.1% of retail sales to keep pace. In the end, total RPS demand could require an additional 60 GW of renewable electricity capacity by 2030 to meet this goal — approximately a 50% increase over the current 114 GW of non-hydro renewable energy capacity. “This document is intended as a progress report to help policy-makers and other electricity industry participants understand the past and future role of state RPS programs—recognizing that they are but one of a number of key drivers affecting renewable energy development in the United States,” said Berkeley Lab’s Galen Barbose, the report’s author. RPS requirements have mostly been met, with states collectively meeting 95% of their interim RPS targets over the last few years. Specifically, the report highlighted “significant recent policy revisions” in locations such as California, Hawaii, Oregon, Vermont, and New York. In total, nearly 150 RPS-related bills have been introduced throughout the US since the beginning of 2015. Publicity surrounding renewable portfolio standards has risen dramatically over the past 14 months, thanks in part to their obvious success. A new report published earlier this year found that the economic benefits of state renewable portfolio standards significantly outweigh the costs. That report concluded that the economic benefits of these standards in terms of their impact on greenhouse gas emissions averaged $2.2 billion, with the economic benefits accompanying the reduction of other air pollution averaging $5.2 billion in 2013. In March, a new report from GTM Research found that more than half of all new US utility-scale solar installations in 2016 will be built above obligations set in state renewable portfolio standards. Specifically, GTM predicts more than 6 GW of non-RPS utility-scale solar will come online in 2016 — well above the 4.1 GW installed for the entire sector during 2015, which itself was a record-breaking year.
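The report's roughly 60 GW figure can be sanity-checked with back-of-the-envelope arithmetic. The short Python sketch below assumes a blended capacity factor of about 40% for new wind and solar; that capacity factor is an illustrative assumption, not a number from the Berkeley Lab report.

```python
# Back-of-the-envelope check on the incremental capacity implied by RPS demand
# growth, assuming a blended capacity factor of ~40% for new wind and solar.
# The capacity factor is an illustrative assumption, not a figure from the report.

rps_demand_2015_twh = 215
rps_demand_2030_twh = 431
capacity_factor = 0.40          # assumed blend of wind and solar
hours_per_year = 8760

additional_twh = rps_demand_2030_twh - rps_demand_2015_twh
additional_gw = additional_twh * 1000 / (capacity_factor * hours_per_year)

print(f"Additional generation needed: {additional_twh} TWh/yr")
print(f"Implied new capacity: ~{additional_gw:.0f} GW")  # ~62 GW, near the report's ~60 GW
```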


News Article | February 5, 2016
Site: www.theenergycollective.com

State renewable portfolio standards are widely recognized for their critical role in driving renewable energy growth. However, their full impacts on future renewable energy development have not been fully recognized, even by key government agencies. In a recent study, Benjamin Sovacool and I found that the Energy Information Administration has consistently under-projected renewable energy growth, due in part to improper modelling of state RPS mandates. Last year, the Union of Concerned Scientists similarly found that EPA’s modelling for the proposed Clean Power Plan did not fully account for state RPS policies in several states. These forecasts are critical to shaping perspectives about renewable energy and are major inputs in developing policy, particularly at the federal level. To develop a better idea of the future impact of state RPS policies, this analysis provides quantitative estimates of how much renewable energy is mandated by existing state policies. Although state RPSs are only part of the picture, this bottom-up approach is a first step in assessing the future of renewable energy in the U.S.

State RPS Policies Likely to Drive Continuing Gains in Renewable Energy

To date, state renewable policies have been critical in driving renewable energy growth. By 2013, RPS-compliant renewable energy comprised almost 5% of total U.S. electricity generation, bringing in billions in economic and environmental benefits at limited cost. As states ramp up their RPS requirements in the next fifteen years, state RPS policies are likely to become even more important. Data from Lawrence Berkeley National Laboratory provides an initial indication of why. In addition to tracking compliance, LBNL provides estimates of how much renewable energy will be needed to meet current renewable mandates during the next twenty years. These projections indicate that at least 359,474 GWh of eligible renewables will be needed to comply with state standards by 2030. This nears 9% of total U.S. electricity generation and is more than double the amount of renewable energy required by RPS mandates in 2013. Notably, RPS-compliant renewable energy is not expected to grow uniformly between 2015 and 2030. Instead we see a rapid ramp up in RPS requirements between 2015 and 2021, with an annualized growth rate above 9% per year. This is a result of many state RPSs having final goals around 2020. Geographically, RPS-compliant energy to date has been very concentrated. Over the next fifteen years, it will become somewhat less concentrated. California is likely to remain a major RPS state – LBNL’s projections for 2030 indicate California would be responsible for around 26% of total RPS-compliant renewable energy in the U.S., near 2013 levels. However, the share of RPS-compliant energy from Texas, New Jersey, New York, and Pennsylvania would shrink from 26% in 2013 to less than 11% in 2030. The remaining share of RPS-compliant renewable energy (63%) will become more geographically diverse, coming from the remaining 24 states with mandatory RPS policies. Critically, the above estimates from LBNL were last updated in July 2015 and hence do not include several major changes in state policies since then. Most dramatically, they do not include California increasing its RPS goals from 33% by 2020 to 50% by 2030. Similarly, New York Governor Andrew Cuomo has directed the New York Public Service Commission to increase the state’s RPS to 50% by 2030 from its current level of 30% by 2015. Hawaii has also increased its RPS to 100% by 2045. 
One of the key findings in my recent paper with Benjamin Sovacool was that a dynamic policy environment is a challenge to estimating future levels of renewable energy. These new policies are prime examples. My ballpark estimates indicate that the revisions in California and New York alone could increase the amount of renewable energy required for state RPS policies in 2030 by 70,000-80,000 GWh, depending on load growth (a rough reconstruction of this arithmetic appears at the end of this article). Thus, current state RPS policies would lead to around 11% of U.S. generation coming from RPS-compliant renewable electricity in 2030, up from less than 3% in 2010. This is more than a 20% increase from what RPS mandates would have required six months ago. Clearly, our expectations regarding the future of energy must be continuously updated as policies and technology change. Wind generation has been the primary compliance option for utilities to meet RPS goals in the last decade, primarily due to the relatively low cost of wind generation and federal tax credit support. Long term contracts and continued wind installations mean that wind will continue to play a major role in meeting RPS requirements moving forward. However, solar is likely to become a major compliance method for renewable standards over the next fifteen years. The primary reason is the massive and rapid decrease in solar PV costs. In the past year, the rapid decreases in solar costs and corresponding increase in installations have been pretty thoroughly covered, so I’ll keep the discussion brief. Again, LBNL provides great in-depth analysis for utility-scale solar and distributed solar. In LBNL’s sample, the average price for power purchase agreements for utility solar has dropped from more than $100/MWh in 2011 to less than $50/MWh in 2015. These major cost decreases are driving solar installations to new levels. Installation levels in 2015 set new records, with as much as 3 GW installed in Q4 2015 alone. More dramatically, EIA data indicates that more than 8.3 GW of large-scale solar PV are planned for 2016. That’s excluding distributed generation less than 1.0 MW in capacity, which is likely to drive several more gigawatts of new solar. In many ways, solar is more attractive than wind. While wind generation is primarily concentrated in the Great Plains, solar PV resources are pretty good across most of the country. Further, most solar generation occurs during peak, daylight hours when wholesale electricity prices are higher. Even if solar costs somewhat more than wind on a per MWh basis, it usually offsets higher-cost electricity, making it more financially attractive. As solar reaches full maturity and commercialization nationwide, we are likely to see it adopted even more quickly than wind, with RPS policies providing major demand support. Already, solar is beginning to dominate new builds for RPS-compliance. Between 2013 and 2014, LBNL and NREL estimate that solar was responsible for 76% of new capacity installations to meet RPS requirements. RPS policies have been critical policy mechanisms to incentivize the development of wind and, increasingly, solar markets across the United States. This analysis indicates that existing RPS policies are likely to drive non-hydro renewables to more than 11% of total U.S. electricity generation by 2030, more than double current levels. Voluntary renewable markets, boosted by cost innovation from renewable deployment in RPS states, could lead to at least an additional 2-4% of U.S. generation coming from renewables by 2030. 
This provides a solid base from which renewables can become a central electricity source in the U.S. Moreover, the recent extension of federal renewable energy tax credits only makes the future more bullish for both wind and solar. The 30% investment tax credit for solar has been extended in full for another three years before ramping down to a permanent 10% in 2022. Meanwhile, the wind production tax credit will be ramped down through 2020. This additional financial support, coming at a time when many state RPSs are ramping up, will drive a major boom in both solar and wind installations. Critically, the new tax credits will provide support through the beginning of Clean Power Plan implementation. Existing state RPS mandates, renewed federal tax credits, and the Clean Power Plan mean further cost reductions in both industries are likely in the next few years. Federal and state policies promise to be even more complementary in the future, with significant economic and environmental ramifications. As it becomes cheaper and cheaper to implement an RPS, more states may create mandatory standards or significantly strengthen existing ones. The recent moves by California, New York, and Hawaii could signal the beginnings of a new wave of RPS policies.
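The ballpark estimate above for the California and New York revisions can be roughly reproduced with simple arithmetic. The Python sketch below uses approximate annual retail-sales figures (my own round-number assumptions, not values from LBNL) and ignores load growth and eligibility details.

```python
# Rough reproduction of the ballpark estimate for the California and New York
# RPS revisions. Retail-sales figures are approximate assumptions (in GWh);
# load growth and eligibility details are ignored.

states = {
    # state: (approx. annual retail sales in GWh, old RPS target, new RPS target)
    "California": (260_000, 0.33, 0.50),
    "New York":   (150_000, 0.30, 0.50),
}

additional_gwh = sum(sales * (new - old) for sales, old, new in states.values())
print(f"Additional RPS-driven renewable energy by 2030: ~{additional_gwh:,.0f} GWh")
# ~74,000 GWh, consistent with the 70,000-80,000 GWh range cited above.
```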


News Article | October 28, 2016
Site: www.prweb.com

Enbala Power Networks has been selected by General Electric Global Research as a key partner in its DOE-funded ARPA-E Network Optimized Distributed Energy Systems (NODES) project. The $3.9 million project’s objective is to create transformational distributed flexibility resource (DFR) technology that aggregates responsive flexible loads and distributed energy resources (DERs) to provide synthetic reserve services to the grid while maintaining high customer quality-of-service. Specifically, a fast reserve similar to a regulating/spinning reserve and a multi-hour, ramping reserve will be developed to provide the same kind of grid balancing flexibility now provided by power plants and large-scale demand response. Other project participants include GE Energy Consulting, the Lawrence Berkeley National Laboratory, Consolidated Edison, Inc., Southern California Edison, Sacramento Municipal Utility District and California Independent System Operator. The 12 NODES projects, including this one, aim to develop innovative and disruptive technologies for real-time management of T&D networks through system-wide control and coordination of flexible load and DERs. In a DOE press release ARPA-E Director Dr. Ellen D. Williams commented, “The research and development of these grid control technologies will make the concept of virtual energy storage a practical reality. The result will enhance the resiliency, security and flexibility of our nation’s electric grid and allow the U.S. to make the best use of its abundant renewable energy resources.” One novel aspect of the GE ARPA-E project is development of a tool that will use short-term and real-time weather forecasts along with other data to estimate the reserve potential of aggregate loads and DERs on a day-ahead basis. An optimization framework that will enable aggregation of large numbers of flexible loads and DERs and determine the optimal day-ahead schedule to bid into the market will also be developed. This will provide the flexible resources required to meet the transformational requirements of today’s evolving grid, while also opening up new opportunities for customers to monetize their assets. Using its Symphony by Enbala distributed energy resource management platform, Enbala is responsible for the project’s control infrastructure and for working collaboratively with Consolidated Edison to recruit a diverse set of customers and distributed energy assets. The advanced control functionality developed within the NODES project will be implemented as micro-services leveraging the GE Predix cloud platform. “This is an innovative project that will effectively demonstrate how grid edge assets can be effectively networked into virtual storage systems that manage the intermittency of renewable energy and help us meet the growing operational challenges of grid infrastructure management,” commented Enbala President and CEO Arthur “Bud” Vos. The DFR technology being created must be able to aggregate and control thousands of customer DERs in real time and match them with production projections. GE electrical engineer Naresh Acharya explained that this project will enable a grid that can reliably manage a power mix where nearly half or more is supplied by renewables. About Enbala Power Networks® Enbala Power Networks is focused on making the world’s power grids greener and more reliable, efficient and predictable by harnessing the power of distributed energy. 
Enbala’s real-time energy-balancing platform - Symphony by Enbala - is transforming energy system operations through its revolutionary, highly flexible approach for creating controllable and dispatchable energy resources. It unobtrusively captures and aggregates available customer loads, energy storage and renewable energy sources to form a network of continuously controlled energy resources. The platform dynamically optimizes and dispatches these resources to respond to the real-time needs of the power system – all without impacting customer operations. For more information, visit http://www.enbala.com or follow @enbala on Twitter.


News Article | January 16, 2016
Site: cleantechnica.com

The economic benefits of state renewable portfolio standards greatly outweigh the costs, according to a new report. There are now 29 US states that have adopted renewable portfolio standard targets, and they are benefiting as a result. As it stands currently, state renewable portfolio standards are an important driver of renewable energy generation growth in the US — so the new findings are very worth taking note of, especially as the argument used by most critics of the standards is that they raise electricity costs. The new report — A Retrospective Analysis of the Benefits and Impacts of US Renewable Portfolio Standards — stated that the economic benefits of renewable portfolio standards with regard to reduced greenhouse gas emissions averaged $2.2 billion, and the economic benefits accompanying the reduction of other air pollution averaged $5.2 billion, in 2013. Greentech Media provides some more info: These averages are taken from wider-range estimates of the value of mitigating the negative effects of climate change for GHG reductions, as well as reducing mortality from air pollution. These figures are much higher than the roughly $1 billion in annual compliance costs (that is, higher electricity prices) that have come about as a result of state RPS mandates between the years 2010 and 2013. Those figures come from a previous study from the Department of Energy’s Lawrence Berkeley National Laboratory (LBNL) and the National Renewable Energy Laboratory, which also co-authored Wednesday’s report. The two reports use different methodologies, which makes a direct comparison difficult, said Ryan Wiser, LBNL analyst and report co-author, in an interview. Specifically, the previous study relied on state-by-state measures of RPS compliance costs, which vary widely in their sophistication and methods, while the new report used a common methodology across all 29 states (plus the District of Columbia) that have RPS mandates in place. Still, the new report does help provide an important counterweight to arguments that state RPS targets are driving up electricity costs to unacceptable levels, he said. “Most people and most organizations that have come out in opposition to RPS programs have focused on the cost angle,” continued Wiser. “We did this work previously that compiled all these RPS state cost analyses, while recognizing that the states developed their RPS programs not to deliver short-term savings, but to deliver long-term benefits. This more recent work helps us understand what we are getting for these costs, and (whether) those benefits seem like they’re in excess of those costs.”


News Article | November 9, 2016
Site: www.sciencedaily.com

New findings suggest the rate at which CO2 is accumulating in the atmosphere has plateaued in recent years because Earth's vegetation is grabbing more carbon from the air than in previous decades. That's the conclusion of a new multi-institutional study led by a scientist from the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab). It's based on extensive ground and atmospheric observations of CO2, satellite measurements of vegetation, and computer modeling. The research is published online Nov. 8 in the journal Nature Communications. To be clear, human activity continues to emit increasing amounts of carbon, and the atmospheric concentration of CO2, now at 400 parts per million (ppm), continues to rise. But the scientists found that between 2002 and 2014, the rate at which CO2 increased in the atmosphere held steady at about 1.9 ppm/year. In addition, the proportion of the CO2 emitted annually by human activity that remains in the atmosphere declined by about 20 percent. This slowdown can't keep pace with emissions, so the overall amount of human-caused CO2 in the atmosphere increased, just not as quickly. And for that, new research suggests, we can thank plants. "This highlights the need to identify and protect ecosystems where the carbon sink is growing rapidly," says Trevor Keenan, a research scientist in Berkeley Lab's Climate & Ecosystem Sciences Division and the corresponding author of the paper. The scientists attribute the stalled CO2 growth rate to an uptick in land-based photosynthetic activity, fueled by rising CO2 levels from fossil fuel emissions. It's a snowball effect: as CO2 levels rise in the atmosphere, photosynthetic activity flourishes and plants take in more carbon, sparking more plant growth, more photosynthesis, and more carbon uptake. They also identified another player. Plant respiration, a process in which plants use oxygen and produce CO2, did not increase as quickly as photosynthesis in recent years. This is because plant respiration is sensitive to temperature, and it was affected by the recent slowdown in global warming that was observed most significantly over vegetated land. So, between 2002 and 2014, plants took in more CO2 through photosynthesis, but did not "exhale" more CO2 into the atmosphere through respiration. "These changes decreased the amount of anthropogenic CO2 that stays in the atmosphere, and thus slowed the accumulation of atmospheric CO2," says Keenan. Their findings provide a possible answer to a climatic mystery. The growth rate of atmospheric CO2 climbed steadily during the latter half of the 20th century, from 0.75 ppm/year in 1959 to 1.86 ppm/year in 2002. But Keenan and colleagues discovered an inflection point last year when they analyzed the latest data from the Global Carbon Project, which quantifies carbon emissions and their sources annually. Since 2002, the growth rate has remained flat. This pause is especially surprising because it has occurred as human activity pumps more and more carbon into the atmosphere. All that CO2 must be going somewhere, so the scientists suspected something about the carbon cycle has recently changed in a big way. "We believed one of the planet's main carbon sinks had unexpectedly strengthened. The question was: which one?" says Keenan. The scientists ruled out oceans as a dominant cause because most computer models agree the amount of carbon taken in by oceans has increased steadily in recent years. 
That left terrestrial ecosystems, which undergo a large year-to-year variability in carbon uptake, and the two biggest influences on this variability are photosynthesis and plant respiration. To study these influences, the scientists used ten "global dynamic vegetation models" that predict how the terrestrial carbon cycle changes over time. They also used a model that incorporates satellite measurements of vegetation cover and plant activity to predict global photosynthesis and respiration rates. They validated the model by comparing its results with data from AmeriFlux and FLUXNET, which are networks of eddy-covariance research towers that measure ecosystem carbon, water, and energy fluxes in North and South America. Berkeley Lab manages AmeriFlux for the Department of Energy. Model projections were generated using different scenarios of atmospheric CO2 level, temperature, soil moisture, and other processes. This enabled the researchers to evaluate the impacts of these processes on the planet's terrestrial carbon cycle. Taken together, the models zeroed in on rising CO2 levels as having the biggest impact on photosynthesis and plant respiration. The result is a boost in terrestrial carbon uptake, particularly in tropical and high-latitude ecosystems. Specifically, the models suggest rising CO2 levels caused terrestrial ecosystems to double the rate at which they take in carbon, from between one and two petagrams of carbon per year in the 1950s, to between two and four petagrams of carbon per year in the 2000s. For comparison, human activity emits between nine and ten petagrams of carbon per year (one petagram is one trillion kilograms). The scientists conclude this increase in carbon uptake put the brakes on the growth rate of atmospheric CO2 between 2002 and 2014. "Unfortunately, this increase is nowhere near enough to stop climate change," says Keenan, adding that their results answer questions and pose new ones. "We've shown the increase in terrestrial carbon uptake is happening, and with a plausible explanation why. But we don't know exactly where the carbon sink is increasing the most, how long this increase will last, or what it means for the future of Earth's climate."
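The roughly 20 percent decline in the proportion of emitted CO2 remaining airborne, described above, follows from simple bookkeeping. The Python sketch below uses the standard conversion of about 2.12 petagrams of carbon (PgC) per ppm of atmospheric CO2; the emission totals assumed for 2002 and 2014 are rough illustrative numbers, not figures from the study.

```python
# Illustrative airborne-fraction arithmetic behind the "~20 percent decline".
# Conversion: 1 ppm of atmospheric CO2 corresponds to roughly 2.12 petagrams
# of carbon (PgC). Emission totals below are rough assumptions for illustration.

PGC_PER_PPM = 2.12

growth_rate_ppm = 1.9                                   # ppm/yr, roughly flat 2002-2014
atmospheric_increase = growth_rate_ppm * PGC_PER_PPM    # ~4.0 PgC/yr

emissions_2002 = 8.0    # assumed total anthropogenic emissions, PgC/yr
emissions_2014 = 10.0   # assumed total anthropogenic emissions, PgC/yr

fraction_2002 = atmospheric_increase / emissions_2002   # ~0.50
fraction_2014 = atmospheric_increase / emissions_2014   # ~0.40

decline = 1 - fraction_2014 / fraction_2002
print(f"Airborne fraction: {fraction_2002:.2f} -> {fraction_2014:.2f} "
      f"({decline:.0%} decline)")   # ~20% decline
```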


News Article | December 8, 2016
Site: www.eurekalert.org

Rapid advances in computing constantly translate into new technologies in our everyday lives. The same is true for high-energy physics. The field has always been an early adopter of new technologies, applying them in ever more complex experiments that study fine details of nature's most fundamental processes. However, these sophisticated experiments produce floods of complex data that become increasingly challenging to handle and analyze. Researchers estimate a decade from now, computing resources may have a hard time keeping up with the slew of data produced by state-of-the-art discovery machines. CERN's Large Hadron Collider, for example, already generates tens of petabytes (millions of gigabytes) of data per year today, and it will produce ten times more after a future high-luminosity upgrade. Big data challenges like these are not limited to high-energy physics. When the Large Synoptic Survey Telescope begins observing the entire southern sky in never-before-seen detail, it will create a stream of 10 million time-dependent events every night and a catalog of 37 billion astronomical objects over 10 years. Another example is the future LCLS-II X-ray laser at the Department of Energy's SLAC National Accelerator Laboratory, which will fire up to a million X-ray pulses per second at materials to provide unprecedented views of atoms in motion. It will also generate tons of scientific data. To make things more challenging, all big data applications will have to compete for available computing resources, for example when shuttling information around the globe via shared networks. What are the tools researchers will need to handle future data piles, sift through them and identify interesting science? How will they be able to do it as fast as possible? How will they move and store tremendous data volumes efficiently and reliably? And how can they possibly accomplish all of this while facing budgets that are expected to stay flat? "Clearly, we're at a point where we need to discuss in what direction scientific computing should be going in order to address increasing computational demands and expected shortfalls," says Richard Mount, head of computing for SLAC's Elementary Particle Physics Division. The researcher co-chaired the 22nd International Conference on Computing in High-Energy and Nuclear Physics (CHEP 2016), held Oct. 10-14 in San Francisco, where more than 500 physicists and computing experts brainstormed possible solutions. Here are some of their ideas. Scientific computing has greatly benefited from what is known as Moore's law--the observation that the performance of computer chips has doubled every 18 months or so for the past decades. This trend has allowed scientists to handle data from increasingly sophisticated machines and perform ever more complex calculations in reasonable amounts of time. Moore's law, based on the fact that hardware engineers were able to squeeze more and more transistors into computer chips, has recently reached its limits because transistor densities have begun to cause problems with heat. Instead, modern hardware architectures involve multiple processor cores that run in parallel to speed up performance. Today's fastest supercomputers, which are used for demanding calculations such as climate modeling and cosmological simulations, have millions of cores and can perform tens of millions of billions of computing operations per second. 
"In the US, we have a presidential mandate to further push the limits of this technology," says Debbie Bard, a big-data architect at the National Energy Research Scientific Computing Center. "The goal is to develop computing systems within the next 10 years that will allow calculations on the exascale, corresponding to at least a billion billion operations per second." Running more data analyses on supercomputers could help address some of the foreseeable computing shortfalls in high-energy physics, but the approach comes with its very own challenges. "Existing analysis codes have to be reengineered," Bard says. "This is a monumental task, considering that many have been developed over several decades." Maria Girone, chief technology officer at CERN openlab, a collaboration of public and private partners developing IT solutions for the global LHC community and other scientific research, says, "Computer chip manufacturers keep telling us that our software only uses a small percentage of today's processor capabilities. To catch up with the technology, we need to rewrite software in a way that it can be adapted to future hardware developments." Part of this effort will be educating members of the high-energy physics community to write more efficient software. "This was much easier in the past when the hardware was less complicated," says Makoto Asai, who leads SLAC's team for the development of Geant4, a widely used simulation toolkit for high-energy physics and many other applications. "We must learn the new architectures and make them more understandable for physicists, who will have to write software for our experiments." Today, LHC computing is accomplished with the Worldwide LHC Computing Grid, or WLCG, a network of more than 170 linked computer centers in 42 countries that provides the necessary resources to store, distribute and analyze the tens of petabytes of data produced by LHC experiments annually. "The WLCG is working very successfully, but it doesn't always operate in the most cost-efficient way," says Ian Fisk, deputy director for computing at the Simons Foundation and former computing coordinator of the CMS experiment at the LHC. "We need to move large amounts of data and store many copies so that they can be analyzed in various locations. In fact, two-thirds of the computing-related costs are due to storage, and we need to ask ourselves if computing can evolve so that we don't have to distribute LHC data so widely." More use of cloud services that offer internet-based, on-demand computing could be a viable solution for remote data processing and analysis without reproducing data. Commercial clouds have the capacity and capability to take on big data: Google, receives billions of photos per day and hundreds of hours of video every minute, posing technical challenges that have led to the development of powerful computing, storage and networking solutions. While conventional computer algorithms perform only operations that they are explicitly programmed to perform, machine learning uses algorithms that learn from the data and successively become better at analyzing them. In the case of deep learning, data are processed in several computational layers that form a network of algorithms inspired by neural networks. Deep learning methods are particularly good at finding patterns in data. Search engines, text and speech recognition, and computer vision are all examples. 
"There are many areas where we can learn from technology developments outside the high-energy physics realm," says Craig Tull, who co-chaired CHEP 2016 and is head of the Science Software Systems Group at Lawrence Berkeley National Laboratory. "Machine learning is a very good example. It could help us find interesting patterns in our data and detect anomalies that could potentially hint at new science." At present, machine learning in high-energy physics is in its infancy, but researchers have begun implementing it in the analysis of data from a number of experiments, including ATLAS at the LHC, the Daya Bay neutrino experiment in China and multiple experiments at Fermi National Accelerator Laboratory near Chicago. The most futuristic approach to scientific computing is quantum computing, an idea that goes back to the 1980s when it was first brought up by Richard Feynman and other researchers. Unlike conventional computers, which encode information as a series of bits that can have only one of two values, quantum computers use a series of quantum bits, or qubits, that can exist in several states at once. This multitude of states at any given time exponentially increases the computing power. A simple one-qubit system could be an atom that can be in its ground state, excited state or a superposition of both, all at the same time. "A quantum computer with 300 qubits will have more states than there are atoms in the universe," said Professor John Martinis from the University of California, Santa Barbara, during his presentation at CHEP 2016. "We're at a point where these qubit systems work quite well and can perform simple calculations." Martinis has teamed up with Google to build a quantum computer. In a year or so, he says, they will have built the first 50-qubit system. Then, it will take days or weeks for the largest supercomputers to validate the calculations done within a second on the quantum computer. We might soon find out in what directions scientific computing in high-energy physics will develop: The community will give the next update at CHEP 2018 in Bulgaria.


News Article | February 4, 2016
Site: www.theenergycollective.com

Federal energy policy developments received a significant amount of attention in 2015 – most notably, the finalized Clean Power Plan and renewal of renewable energy tax credits dominated headlines. However, this focus on national issues tends to obscure critical actions at the local and state levels that are just as important for energy systems. Among the most important state-level policies, renewable portfolio standards play essential, but not fully appreciated, roles in complementing and enabling federal policy action. Renewable portfolio standards (RPSs) are the most influential state-level policies for renewable energy development in the United States. Combined with federal tax incentives, state RPSs have driven wind generation from less than 0.4% of U.S. generation in 2004 to more than 4.4% in 2014 while boosting solar above 1% of total electricity in the U.S. in 2015 for the first time ever. Despite broad recognition of their importance, the central role that state RPSs have played in getting renewable energy to the point it is today is often not fully appreciated. By understanding what state RPS policies have accomplished to date, we can better understand what they are likely to do in the next several years. While the first RPS was passed in Iowa in 1983, it was not until the late 1990s and mid-2000s that many states began to follow suit. These early policies often came in response to increasing concerns about energy security and environmental considerations specifically at the state level. While the exact requirements vary by state, RPSs generally require that a certain percentage of state electricity retail sales come from renewable energy by a specified year. As of October 2015, 29 states have mandatory standards while an additional 8 states have voluntary goals. Remarkably, renewable energy mandates are one of the most politically attractive energy policies at the state level. They have been enacted in states ranging from liberal California to moderate Ohio to conservative Texas.

Note: This map does not reflect several recent policy changes increasing RPS requirements in CA, NY, HI, and VT.

Critically, state RPS mandates are generally non-partisan. Despite recent legislative attempts in some states to remove RPS goals, they have proven surprisingly resilient. With only a handful of exceptions, the vast majority of revisions to renewable mandates have led to more stringent renewable requirements. Public opinion polls show consistent support for efforts to increase wind and solar generation, making RPS policies popular for state governors and legislatures.

State Renewable Policies Key to Renewable Growth of the Last Decade

These widespread policies have had a tremendous impact on renewable energy growth. Data from the Lawrence Berkeley National Laboratory (LBNL) indicates that, in 2013, at least 171,950 GWh of renewable energy was used to comply with state renewable mandates. This nearly equals 5% of total U.S. electricity generation in 2013. LBNL’s estimates are based on RPS compliance filings at the state level, meaning they have a high level of reliability. By tracking the retirement of renewable energy credits, we can accurately gauge how much renewable energy has been used to comply with state mandates.

Source: Spark Library, based on data from LBNL and U.S. EIA

Critically, RPS compliance to date has been heavily concentrated geographically. 
In 2013, almost 27% of RPS-compliant generation came from California alone (where it made up almost 18% of total retail sales). Texas, New Jersey, New York, and Pennsylvania combined made up another 26%. Hence, the remaining 24 states with mandatory RPSs contributed less than half of RPS-compliant renewable energy in 2013. This is partially due to California and the other states’ being large electricity consumers. However, it is also a timing issue – California, Texas, New Jersey, and Pennsylvania were some of the first states to enact RPS mandates. RPS requirements in other states are still ramping up, meaning that renewable generation is likely to become somewhat less geographically concentrated in the next decade. Most of the renewable energy used for RPS compliance so far has been from wind energy. A separate estimate from LBNL indicated that as much as 88% of new capacity motivated by state RPSs between 1998 and 2012 was wind. As the figure above illustrates, total U.S. wind generation very closely tracks the total amount of RPS-compliant renewable energy between 2001 and 2013. This high level of wind generation is not necessarily surprising, even though wind capacity was limited as recently as ten years ago. Even then, there was significant ongoing research and large deployments in Europe helping the industry reach large-scale commercial viability. Perhaps more critically, the federal production tax credit made wind generation even more cost competitive in the mid-2000s. Nationwide average prices for wind power from capacity installed between 2002 and 2011 ranged between the mid-$30s/MWh and the upper-$60s/MWh. Compared to prevailing wholesale electricity prices at the time, these were cost competitive levels. Without the tax credit, wind prices would have been around $20-23/MWh more expensive in all years. In effect, state RPSs and federal tax incentives should be seen as complementary policies. Federal tax incentives made it very inexpensive for states to meet RPS mandates with wind energy, helping encourage the adoption of new and more stringent renewable policies. These state policies then drove significant state economic benefits, strengthening the argument for continued tax credit support. As a quick side note, the price of wind power purchase agreements has been falling since 2009, reaching an average as low as $23.50/MWh in 2014. This is important as it indicates that wind may remain cost competitive with low-cost natural gas resulting from the shale revolution, particularly with the most recent renewal of the wind production tax credit. Although wind has been the primary RPS-compliant generation source historically, existing state RPS policies have also benefitted solar energy. Many state renewable mandates have specific provisions (or carve-outs) intended to directly benefit solar and/or distributed generation. The form of these provisions varies significantly. For example, in Colorado, solar located in cooperatives or municipal utility service areas receives a 3X multiplier for the state RPS. Meanwhile, Massachusetts’ RPS contains a specific target of 1,600 MW of solar PV. These solar provisions have played notable roles in driving initial solar market growth in many states. The economic benefits of renewable standards are significant. 
A recent joint study by two federal energy laboratories (LBNL and NREL) found that in 2013 renewables used for RPS compliance delivered roughly $2.2 billion in benefits from reduced greenhouse gas emissions, $5.2 billion in benefits from reduced air pollution, and support for some 200,000 renewable energy-related jobs. In particular, the direct economic impacts were a principal cause behind the recent decisions to extend both the federal wind and solar tax credits. RPS and federal tax credits continue to be mutually reinforcing, complementary policies. Perhaps more importantly, state RPSs and federal tax credits have driven significant cost innovation in both the wind and solar industries. Long term, renewable energy will continue to grow, bringing in greater economic and environmental benefits at lower (and potentially negative) costs. Further, renewable portfolio standards have come at relatively little cost. State renewable standards have rather limited cost impacts, as demonstrated by yet another joint NREL and LBNL study. In most states, incremental compliance costs for existing standards through 2012 were generally at or below 2% of average retail rates. Even if existing standards are implemented fully, rates are expected to increase by less than 5% in most states, while delivering even greater economic and environmental benefits.


News Article | February 21, 2017
Site: www.chromatographytechniques.com

Note: This is the first installment in a four-part series that focuses on a partnership between NASA and Berkeley Lab to explore spacecraft materials and meteorites with X-rays in microscale detail. It takes rocket science to launch and fly spacecraft to faraway planets and moons, but a deep understanding of how materials perform under extreme conditions is also needed to enter and land on planets with atmospheres. X-ray science is playing a key role, too, in ensuring future spacecraft survive in extreme environments as they descend through otherworldly atmospheres and touch down safely on the surface. Scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and NASA are using X-rays to explore, via 3-D visualizations, how the microscopic structures of spacecraft heat shield and parachute materials survive extreme temperatures and pressures, including simulated atmospheric entry conditions on Mars. Human exploration of Mars and other large-payload missions may require a new type of heat shield that is flexible and can remain folded up until needed. Candidate materials for this type of flexible heat shield, in addition to fabrics for Mars-mission parachutes deployed at supersonic speeds, are being tested with X-rays at Berkeley Lab’s Advanced Light Source (ALS) and with other techniques. “We are developing a system at the ALS that can simulate all material loads and stresses over the course of the atmospheric entry process,” said Harold Barnard, a scientist at Berkeley Lab’s ALS who is spearheading the Lab’s X-ray work with NASA. The success of the initial X-ray studies has also excited interest from the planetary defense scientific community looking to explore the use of X-ray experiments to guide our understanding of meteorite breakup. Data from these experiments will be used in risk analysis and aid in assessing threats posed by large asteroids. The ultimate objective of the collaboration is to establish a suite of tools that includes X-ray imaging and small laboratory experiments, computer-based analysis and simulation tools, as well as large-scale high-heat and wind-tunnel tests. These allow for the rapid development of new materials with established performance and reliability. This system can heat sample materials to thousands of degrees, subject them to a mixture of different gases found in other planets’ atmospheres, and with pistons stretch the material to its breaking point, all while imaging in real time their 3-D behavior at the microstructure level. NASA Ames Research Center (NASA ARC) in California’s Silicon Valley has traditionally used extreme heat tests at its Arc Jet Complex to simulate atmospheric entry conditions. Researchers at ARC can blast materials with a giant superhot blowtorch that accelerates hot air to velocities topping 11,000 miles per hour, with temperatures exceeding that at the surface of the sun. Scientists there also test parachutes and spacecraft at its wind-tunnel facilities, which can produce supersonic wind speeds faster than 1,900 miles per hour. Michael Barnhardt, a senior research scientist at NASA ARC and principal investigator of the Entry Systems Modeling Project, said the X-ray work opens a new window into the structure and strength properties of materials at the microscopic scale, and expands the tools and processes NASA uses to “test drive” spacecraft materials before launch. “Before this collaboration, we didn’t understand what was happening at the microscale. 
We didn’t have a way to test it,” Barnhardt said. “X-rays gave us a way to peek inside the material and get a view we didn’t have before. With this understanding, we will be able to design new materials with properties tailored to a certain mission.” He added, “What we’re trying to do is to build the basis for more predictive models. Rather than build and test and see if it works,” the X-ray work could reduce risk and provide more assurance about a new material’s performance even at the drawing-board stage. Francesco Panerai, a materials scientist with NASA contractor AMA Inc. and the X-ray experiments test lead for NASA ARC, said that the X-ray experiments at Berkeley Lab were on samples about the size of a postage stamp. The experimental data is used to improve realistic computer simulations of heat shield and parachute systems. “We need to use modern measurement techniques to improve our understanding of material response,” Panerai said. The 3-D X-ray imaging technique and simulated planetary conditions that NASA is enlisting at the ALS provide the best pictures yet of the behavior of the internal 3-D microstructure of spacecraft materials. The experiments are being conducted at an ALS experimental station that captures a sequence of images as a sample is rotated in front of an X-ray beam. These images, which provide views inside the samples and can resolve details less than 1 micron, or 1 millionth of a meter, can be compiled to form detailed 3-D images and animations of samples. This study technique is known as X-ray microtomography. “We have started developing computational tools based on these 3-D images, and we want to try to apply this methodology to other research areas, too,” he said.
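The reconstruction step behind X-ray microtomography -- turning a rotation series of projection images into cross-sectional slices -- can be illustrated with standard open-source tools. The Python sketch below uses scikit-image's radon/iradon transforms and a synthetic test phantom; it is a generic filtered-back-projection demonstration, not the ALS or NASA processing pipeline.

```python
# Minimal sketch of the reconstruction step behind X-ray microtomography:
# projections taken at many rotation angles are combined (here via filtered
# back projection) into a cross-sectional image. This uses a standard test
# phantom from scikit-image, not NASA/ALS data, and is purely illustrative.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

slice_2d = rescale(shepp_logan_phantom(), scale=0.25)   # small 2-D test "slice"

# Simulate the rotation series: one projection per angle over 180 degrees.
angles = np.linspace(0.0, 180.0, max(slice_2d.shape), endpoint=False)
sinogram = radon(slice_2d, theta=angles)

# Reconstruct the slice from its projections; stacking many such slices
# yields the kind of 3-D volume described above.
reconstruction = iradon(sinogram, theta=angles)
error = np.sqrt(np.mean((reconstruction - slice_2d) ** 2))
print(f"Reconstruction RMS error: {error:.4f}")
```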


News Article | November 2, 2016
Site: www.sciencenews.org

Scientists have lost their latest round of hide-and-seek with dark matter, but they’re not out of the game. Despite overwhelming evidence that an exotic form of matter lurks unseen in the cosmos, decades of searches have failed to definitively detect a single particle of dark matter. While some scientists continue down the road of increasingly larger detectors designed to catch the particles, others are beginning to consider a broader landscape of possibilities for what dark matter might be. “We’ve been looking where our best guess told us to look for all these years, and we’re starting to wonder if we maybe guessed wrong,” says theoretical astrophysicist Dan Hooper of Fermilab in Batavia, Ill. “People are just opening their minds to a wider range of options.” Dark matter permeates the cosmos: The material keeps galaxies from flying apart and has left its imprints in the oldest light in the universe, the cosmic microwave background, which dates back to just 380,000 years after the Big Bang. Indirect evidence from dark matter’s gravitational influences shows that it makes up the bulk of the mass in the universe. But scientists can’t pin down what dark matter is without detecting it directly. In new results published in August and September, three teams of scientists have come up empty-handed, finding no hints of dark matter. The trio of experiments searched for one particular variety of dark matter — hypothetical particles known as WIMPs, or weakly interacting massive particles, with a range of possible masses that starts at several times that of a proton. WIMPs, despite their name, are dark matter bigwigs — they have long been the favorite explanation for the universe’s missing mass. WIMPs are thought to interact with normal matter only via the weak nuclear force and gravity. Part of WIMPs’ appeal comes from a prominent but unverified theory, supersymmetry, which independently predicts such particles. Supersymmetry posits that each known elementary particle has a heavier partner; the lightest partner particle could be a dark matter WIMP. But evidence for supersymmetry hasn’t materialized in particle collisions at the Large Hadron Collider in Geneva, so supersymmetry’s favored status is eroding (SN: 10/1/16, p. 12). Supersymmetry arguments for WIMPs are thus becoming shakier — especially since WIMPs aren’t showing up in detectors. Scientists typically search for WIMPs by looking for interactions with normal matter inside a detector. Several current experiments use tanks of liquefied xenon, an element found in trace amounts in Earth’s atmosphere, in hopes of detecting the tiny amounts of light and electric charge that would be released when a WIMP strikes a xenon nucleus and causes it to recoil. The three xenon experiments are the Large Underground Xenon, or LUX, experiment, located in the Sanford Underground Research Facility in Lead, S.D.; the PandaX-II experiment, located in China’s JinPing underground laboratory in Sichuan; and the XENON100 experiment, located in the Gran Sasso National Laboratory in Italy. Teams of scientists at the three locations each reported no signs of dark matter particles. The experiments are most sensitive to particles with masses around 40 or 50 times that of a proton. Scientists can’t completely rule out WIMPs of these masses, but the interactions would have to be exceedingly rare. In initial searches, proponents of WIMPs expected that the particles would be easy to find. 
“It was thought to be like, ‘OK, we’ll run the detector for five minutes, discover dark matter, and we’re all done,’” says physicist Matthew Szydagis of the University at Albany in New York, a member of LUX. That has turned into decades of hard work. As WIMPs keep failing to turn up, some scientists are beginning to become less enamored with the particles and are considering other possibilities more closely. One alternative dark matter contender now attracting more attention is the axion. This particle was originally proposed decades ago as part of the solution to a particle physics quandary known as the strong CP problem — the question of why the strong nuclear force, which holds particles together inside the nucleus, treats matter and antimatter  equally. If dark matter consists of axions, the particle could therefore solve two problems at once. Axions are small fry as dark matter goes — they can be as tiny as a millionth of a billionth the mass of a WIMP. The particles interact so feebly that they are extremely difficult to detect. If axions are dark matter, “you’re sitting in an enormous, dense sea of axions and you don’t even notice them,” says physicist Leslie Rosenberg of the University of Washington in Seattle, the leader of the Axion Dark Matter eXperiment. After a recent upgrade to the experiment, ADMX scientists are searching for dark matter axions using a magnetic field and special equipment to coax the particles to convert into photons, which can then be detected. Although WIMPs and axions remain the front-runners, scientists are beginning to move beyond these two possibilities. In between the featherweight axions and hulking WIMPs lies a broad range of masses that hasn’t been well explored. Scientists’ favorite theories don’t predict dark matter particles with such intermediate masses, says theoretical physicist Kathryn Zurek of Lawrence Berkeley National Laboratory in California, but that doesn’t mean that dark matter couldn’t be found there. Zurek advocates a diverse search over a broad range of masses, instead of focusing on one particular theory. “Dark matter direct detection is not one-size-fits-all,” she says. In two papers published in Physical Review Letters on January 7 and September 14,  Zurek and colleagues proposed using superconductors — materials that allow electricity to flow without resistance — and superfluids, which allow fluids to flow without friction, to detect light dark matter particles. “We are trying to broaden as much as possible the tools to search for dark matter,” says Zurek. Likewise, scientists with the upcoming Super Cryogenic Dark Matter Search SNOLAB experiment, to be located in an underground lab in Sudbury, Canada, will use detectors made of germanium and silicon to search for dark matter with smaller masses than the xenon experiments can. Scientists have not given up on xenon WIMP experiments. Soon some of those experiments will be scaling up — going from hundreds of kilograms of liquid xenon to tons — to improve their chances of catching a dark matter particle on the fly. The next version of XENON100, the XENON1T experiment (pronounced “XENON one ton”) is nearly ready to begin taking data. LUX’s next generation experiment, known as LUX-ZEPLIN or LZ, is scheduled to begin in 2020. PandaX-II scientists are also planning a sequel. Physicists are still optimistic that these detectors will finally find the elusive particles. 
“Maybe we will have some opportunity to see something nobody has seen,” says Xiangdong Ji of Shanghai Jiao Tong University, the leader of PandaX-II. “That’s what’s so exciting.” In the sea of nondetections of dark matter, there is one glaring exception. For years, scientists with the DAMA/LIBRA experiment at Gran Sasso have claimed to see signs of dark matter, using crystals of sodium iodide. But other experiments have found no signs of DAMA’s dark matter. Many scientists believe that DAMA has been debunked. “I don't know what generates the weird signal that DAMA sees,” says Hooper. “That being said, I don't think it's likely that it’s dark matter.” But other experiments have not used the same technology as DAMA, says theoretical astrophysicist Katherine Freese of the University of Michigan in Ann Arbor. “There is no alternative explanation that anybody can think of, so that is why it is actually still very interesting.” Three upcoming experiments should soon close the door on the mystery, by searching for dark matter using sodium iodide, as DAMA does: the ANAIS experiment in the Canfranc Underground Laboratory in Spain, the COSINE-100 experiment at YangYang Underground Laboratory in South Korea, and the SABRE experiment, planned for the Stawell Underground Physics Laboratory in Australia. Scientists’ efforts could still end up being for naught; dark matter may not be directly detectable at all. “It’s possible that gravity is the only lens with which we can view dark matter,” says Szydagis. Dark matter could interact only via gravity, not via the weak force or any other force. Or it could live in its own “hidden sector” of particles that interact among themselves, but mostly shun normal matter. Even if no particles are detected anytime soon, most scientists remain convinced that an unseen form of matter exists. No alternative theory can explain all of scientists’ cosmological observations. “The human being is not going to give up for a long, long time to try to search for dark matter, because it’s such a big problem for us,” says Ji.
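The detection principle these xenon experiments rely on, a WIMP elastically scattering off a xenon nucleus and making it recoil, can be illustrated with a few lines of back-of-the-envelope kinematics. The sketch below is not taken from any experiment's analysis code; the WIMP mass and velocity are assumptions chosen only to show why the expected signals are so faint.

```python
# Back-of-the-envelope kinematics for WIMP-nucleus elastic scattering.
# Illustrative only: the WIMP mass and velocity below are assumptions,
# not parameters from the LUX, PandaX-II or XENON100 analyses.

GEV_TO_KEV = 1.0e6      # 1 GeV = 10^6 keV
C_KM_S = 3.0e5          # speed of light, km/s

def max_recoil_energy_kev(m_wimp_gev, m_nucleus_gev, v_km_s):
    """Maximum nuclear recoil energy for non-relativistic elastic scattering,
    E_max = 2 * mu^2 * v^2 / m_N, with mu the reduced mass."""
    mu = m_wimp_gev * m_nucleus_gev / (m_wimp_gev + m_nucleus_gev)
    beta = v_km_s / C_KM_S
    return 2.0 * mu**2 * beta**2 / m_nucleus_gev * GEV_TO_KEV

m_wimp = 50.0    # hypothetical WIMP mass, GeV/c^2 (~50 proton masses)
m_xe = 122.0     # xenon nucleus mass, roughly 131 u * 0.931 GeV/u
v_halo = 230.0   # typical galactic WIMP speed, km/s

print(f"Maximum Xe recoil energy: {max_recoil_energy_kev(m_wimp, m_xe, v_halo):.1f} keV")
```

For these assumed inputs the maximum recoil energy comes out to roughly 12 keV, which is why the detectors are engineered to register such tiny flashes of light and charge.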


Cover for 'A Retrospective Analysis of the Benefits and Impacts of US Renewable Portfolio Standards'. Credit: Berkeley Lab A new study estimates that $2.2 billion in benefits came from reduced greenhouse gas emissions and $5.2 billion from reductions in other air pollution for state renewable portfolio standard (RPS) policies operating in 2013. The report also shows national water withdrawals and consumption were reduced by 830 billion gallons and 27 billion gallons in 2013, respectively. The report, entitled A Retrospective Analysis of the Benefits and Impacts of U.S. Renewable Portfolio Standards, was conducted by researchers from the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and National Renewable Energy Laboratory (NREL) and evaluates the benefits and other impacts of RPS policies. RPS policies require utilities or other electricity providers to meet a minimum portion of their load with eligible forms of renewable electricity. They currently exist in 29 U.S. states plus Washington, D.C., and have been a driver for renewable electricity generation in the United States over the past decade. Many states are currently considering whether to extend, eliminate, or otherwise revise existing RPS policies. "This work is intended to inform these ongoing discussions by helping states evaluate RPS programs," said Berkeley Lab's Ryan Wiser, one of the report authors. The study takes care to describe its methods and highlights uncertainties in its findings. For example, benefits from greenhouse gas reductions were estimated to range from $0.7 to $6.3 billion, reflecting differences in underlying estimates of damages caused by climate change. Similarly, air pollution reduction benefits—which arise primarily from avoided premature mortality—were estimated to range from $2.6 to $9.9 billion in 2013, reflecting differences in underlying epidemiological literature, among other factors. "Our goal was to estimate the magnitude of RPS benefits and impacts at a national-level, using established methodologies, while recognizing that individual states can perform their own, more-detailed assessments," adds NREL's Jenny Heeter, another of the report's authors. In addition to evaluating environmental benefits, the study also assessed other impacts. Specifically, the research estimates that RPS policies supported 200,000 renewable energy-related jobs in 2013, saved consumers up to $1.2 billion from reduced wholesale electricity prices and another $1.3 to $3.7 billion from reduced natural gas prices. Consumer savings from reduced electricity and natural gas prices occur because renewable electricity displaces other electricity generation with higher operating costs, much of which is fueled by natural gas. The study is careful to describe these as impacts rather than benefits as they represent resource transfers from some stakeholders to others, rather than net societal benefits on state, national, or global scales. This work was a follow-up and complement to an earlier study by the two labs that focused on the costs of state RPS programs to-date and that noted the need for a full understanding of the potential benefits, impacts, and costs of RPS programs. To that end, this most recent study provides a point of comparison for estimates of RPS program costs. 
Based on the results of this national study, benefits from reduced greenhouse gas emissions equate to 0.7 to 6.4 cents per kilowatt-hour (kWh) of renewable energy, while benefits from reduced emissions of criteria air pollutants amount to 2.6 to 10.1 cents per kWh. Consumer savings from wholesale electricity market and natural gas price reductions represent another 0 to 1.2 cents per kWh and 1.3 to 3.7 cents per kWh, respectively. Although the study takes a national view—evaluating all state RPS programs as a whole—many of the associated benefits and impacts were highly regional. For example, the economic benefits from air pollution reductions are associated mostly with reduced sulfur dioxide (SO2) emissions from coal-fired power plants and are concentrated primarily in the Mid-Atlantic, Great Lakes, Northeast, and Texas. Reductions in water withdrawal and consumption were largest in California and Texas, respectively—both states that regularly experience droughts—while renewable energy jobs from RPS projects were concentrated mostly in California, where large amounts of utility-scale photovoltaic capacity were being built in 2013. Having now examined both the costs and benefits of state RPS programs historically, the researchers are planning a follow-up effort for the coming year to evaluate the costs and benefits of RPS programs prospectively, considering scheduled increases to each state's requirements as well as potential policy revisions.
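The per-kilowatt-hour figures above are simply the dollar estimates divided by the renewable generation attributable to RPS policies in 2013. A minimal sketch of that conversion is shown below; the generation total used here is a placeholder assumption for illustration, not a figure quoted in the Berkeley Lab/NREL report.

```python
# Convert a national benefit estimate (in dollars) into cents per kWh of
# RPS-driven renewable generation. The generation total is a placeholder
# assumption, not the value used in the Berkeley Lab/NREL report.

def cents_per_kwh(benefit_dollars, generation_kwh):
    return benefit_dollars / generation_kwh * 100.0   # dollars -> cents

rps_generation_kwh = 98e9                    # assumed 2013 RPS generation, kWh
ghg_low, ghg_high = 0.7e9, 6.3e9             # $0.7B to $6.3B GHG benefit range

print(f"GHG benefit: {cents_per_kwh(ghg_low, rps_generation_kwh):.1f}"
      f" to {cents_per_kwh(ghg_high, rps_generation_kwh):.1f} cents/kWh")
```

The same division maps each dollar range in the report onto its cents-per-kWh counterpart.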


News Article | September 22, 2016
Site: www.washingtonpost.com

On Thursday the Rocky Mountain Institute — an energy-focused think tank known for its boldness in predicting technological change — released a truly sweeping report on the future of vehicles. And if it’s right — something that, to be sure, can be questioned — then we would be on the verge of an energy and technology transformation to rival the sudden decline of the coal industry. The institute certainly isn’t the first to suggest self-driving cars are something special. Uber and Lyft are already exploring self-driving robotaxis. Apple, Google, and Tesla are also in the game. But what’s striking is the optimism about a real technological transformation that could arrive from the merger of two vehicle trends — automation and electrification — and the idea that it could happen quite fast, with major consequences for reducing our greenhouse gas emissions. “It is possible that a new mobility system will emerge in the next few years that is superior to our existing system in almost every way,” write the study’s authors, Charlie Johnson and Jonathan Walker. Autonomous cars won’t just be safer and more energy efficient, the study argues. Hiring one to taxi you around will also become very cheap, they argue, once these vehicles can be electrified, rather than running on gas. The upshot is that at some point, perhaps quite soon, it will make more economic sense to have a robotaxi ferry you around on all your trips than to personally own a car. At the same time, as the trend becomes widespread — with fleets of autonomous vehicles, owned by companies or cities, on the road and not running on gas — our greenhouse gas emissions will plunge, the research suggests. By the year 2018, the study argues, “solely using autonomous taxis for transportation could cost the same as owning and operating a car.” More specifically, the researchers think that electric robotaxis might quickly rival the per-mile cost of personal car use, which they say is 85 cents per mile. “We think as soon as Uber is ready to pull the driver out of their autonomous vehicle — right now they have a safety driver in the vehicle, but they’re testing the autonomous software,” says Walker. “As soon as they can remove that safety driver, the cost of that service is going to be about the same as owning and operating a sedan in Pittsburgh.” And of course these vehicles will be electric because that, too, will save money — in this case, for the companies that own and operate them. The huge distances that autonomous vehicles might drive in their lifetimes will mean that even if an electric vehicle costs more than a gas vehicle to purchase, it will make up the difference over the course of its use. Most striking is what this would do to greenhouse gas emissions. The study suggests a major decrease in gasoline demand and, as a result, a major increase in electricity demand, and a plummeting of transportation-sector greenhouse-gas emissions. Walker said the reduction could be as large as 800 million tons per year in the U.S. by 2035, a huge contribution to fighting climate change. “Personal mobility hasn’t changed in 100 years, and it actually plausibly could in the next five to 10,” Walker said. The study also forecasts that this means that “peak car ownership in the United States will occur around 2020 and will drop quickly after that.” The Rocky Mountain Institute isn’t the only such bullish voice. John Zimmer, the co-founder of Lyft, recently wrote at Medium that “by 2025, private car ownership will all-but end in major U.S. 
cities.” Meanwhile, a study last year of the adoption of electrified robotic cabs found they could slash our greenhouse gas emissions, by 2030, by tremendous percentages. But is this all too gung-ho? To find out, I queried several outside experts about the scenario. Jeffrey Greenblatt, a researcher at the Lawrence Berkeley National Laboratory who co-authored the just mentioned study on greenhouse gas reductions from electric, shared, and autonomous vehicles, said the work seemed pretty solid. “It’s clearly optimistic, but I think it’s based on reasonable assumptions of the future cost of automated technology, as well as the total cost of ownership of vehicles,” he said. Margo Oge, a former EPA transportation and air quality official and author of the book “Driving the Future: Combating Climate Change with Cleaner, Smarter Cars,” agreed about the Rocky Mountain Institute’s “overall findings that autonomous, shared mobility will utilize largely electric power trains as they will be cheaper to operate. It’s the most economical strategy for autonomous shared mobility and it will help significantly reduce carbon pollution – especially as the utilization of those EVs will be much higher than the conventional powertrains they displace.” However, the speed of the transition is another matter, Oge said, and here, the group might be off. “In the end the overall success of autonomous mobility will be based on public trust,” she said. “It’s not just an issue of technology. Trust takes time to develop.” It’s still not clear how people — or, for that matter, legal systems — will deal with the possibility of occasional autonomous vehicle crashes, even if they are significantly more rare than today’s car crashes. Moreover, the programming of these vehicles to deal with situations like an automotive version of the ethical “trolley dilemma” — in which a vehicle would have to “choose” whether or not to kill its driver in order to save multiple other individuals — is bound to spur controversy. Another transportation researcher, Susan Shaheen of the University of California-Berkeley, praised the Rocky Mountain Institute work for trying to push us in thinking about what the future could be like. “I think the purpose of papers like this is to set a visionary plan,” she said. But Shaheen also noted some potential drawbacks — for instance, regulatory. “By 2018, a question to ask is, are cities going to be ready to convert to a shared, automated, electrified fleet?” she said. “Chances are, we will not be at that point.” The Rocky Mountain Institute’s Walker admits that there could also be regulatory and cultural hurdles to such a vast transition. “Certain cities and states may make it legal, and some cities and states may not,” he said. And then, there’s just the possibility that some Americans love their cars and won’t want to give them up, he said. And there’s yet another problem, noted Don MacKenzie, a transportation researcher at the University of Washington in Seattle — what happens outside of urban areas? “This will not work so well when you need a ride to your cottage or the state park for the weekend, where there is no one else for the vehicle to pick up after dropping you off,” he wrote by email (though MacKenzie agreed with other aspects of the report). “In such conditions, an important question becomes whether people will be willing to give up owning a personal vehicle. 
If they still have that personal vehicle, there’s a good chance they will continue to drive it (and drive it more).” Nonetheless, Walker said that the bullish scenario is something the group is standing by. “There’s people who say the technology’s not going to be ready, but they’re quoting things like 5 or 10 years, when a year ago, they were quoting 30 years,” Walker said.
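The report's central argument is a per-mile cost comparison: an electric, driverless taxi whose purchase price is spread over very high lifetime mileage versus the roughly 85 cents per mile the authors attribute to owning and operating a personal car. The toy calculation below illustrates the shape of that comparison; every number in it is an invented round figure for illustration, not an input from the Rocky Mountain Institute analysis.

```python
# Toy per-mile cost comparison: a personally owned gasoline sedan vs. an
# electric robotaxi amortized over high lifetime mileage. All inputs are
# illustrative assumptions, not figures from the Rocky Mountain Institute report.

def cost_per_mile(vehicle_cost, lifetime_miles, energy_per_mile,
                  maintenance_per_mile, other_per_mile):
    """Amortized capital cost plus per-mile operating costs, in dollars/mile."""
    return (vehicle_cost / lifetime_miles + energy_per_mile
            + maintenance_per_mile + other_per_mile)

# Owned gasoline sedan: modest lifetime mileage; "other" covers insurance,
# parking and registration paid by the owner.
owned = cost_per_mile(vehicle_cost=30_000, lifetime_miles=150_000,
                      energy_per_mile=0.12, maintenance_per_mile=0.09,
                      other_per_mile=0.45)

# Electric robotaxi: higher purchase price but far higher lifetime mileage,
# cheaper energy, and no human driver wage in this toy model.
robotaxi = cost_per_mile(vehicle_cost=50_000, lifetime_miles=500_000,
                         energy_per_mile=0.04, maintenance_per_mile=0.06,
                         other_per_mile=0.15)

print(f"Owned sedan: ${owned:.2f} per mile")
print(f"Robotaxi:    ${robotaxi:.2f} per mile")
```

In this toy model the owned sedan lands near the 85-cents-per-mile figure cited above and the robotaxi well below it, but the crossover depends entirely on the assumed lifetime mileage and operating costs.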


News Article | December 14, 2015
Site: www.rdmag.com

The Large Underground Xenon (LUX) dark matter experiment, which operates nearly a mile underground at the Sanford Underground Research Facility (SURF) in the Black Hills of South Dakota, has already proven itself to be the most sensitive detector in the hunt for dark matter, the unseen stuff believed to account for most of the matter in the universe. Now, a new set of calibration techniques employed by LUX scientists has again dramatically improved the detector's sensitivity. Researchers with LUX are looking for WIMPs, or weakly interacting massive particles, which are among the leading candidates for dark matter. "We have improved the sensitivity of LUX by more than a factor of 20 for low-mass dark matter particles, significantly enhancing our ability to look for WIMPs," said Rick Gaitskell, professor of physics at Brown University and co-spokesperson for the LUX experiment. "It is vital that we continue to push the capabilities of our detector in the search for the elusive dark matter particles," Gaitskell said. LUX improvements, coupled to advanced computer simulations at the U.S. Department of Energy's Lawrence Berkeley National Laboratory's (Berkeley Lab) National Energy Research Scientific Computing Center (NERSC) and Brown University's Center for Computation and Visualization (CCV), have allowed scientists to test additional particle models of dark matter that now can be excluded from the search. NERSC also stores large volumes of LUX data--measured in trillions of bytes, or terabytes--and Berkeley Lab has a growing role in the LUX collaboration. Scientists are confident that dark matter exists because the effects of its gravity can be seen in the rotation of galaxies and in the way light bends as it travels through the universe. Because WIMPs are thought to interact with other matter only on very rare occasions, they have yet to be detected directly. "We have looked for dark matter particles during the experiment's first three-month run, but are exploiting new calibration techniques better pinning down how they would appear to our detector," said Alastair Currie of Imperial College London, a LUX researcher. "These calibrations have deepened our understanding of the response of xenon to dark matter, and to backgrounds. This allows us to search, with improved confidence, for particles that we hadn't previously known would be visible to LUX." The new research is described in a paper submitted to Physical Review Letters. The work reexamines data collected during LUX's first three-month run in 2013 and helps to rule out the possibility of dark matter detections at low-mass ranges where other experiments had previously reported potential detections. LUX consists of one-third ton of liquid xenon surrounded with sensitive light detectors. It is designed to identify the very rare occasions when a dark matter particle collides with a xenon atom inside the detector. When a collision happens, a xenon atom will recoil and emit a tiny flash of light, which is detected by LUX's light sensors. The detector's location at Sanford Lab beneath a mile of rock helps to shield it from cosmic rays and other radiation that would interfere with a dark matter signal. So far LUX hasn't detected a dark matter signal, but its exquisite sensitivity has allowed scientists to all but rule out vast mass ranges where dark matter particles might exist. These new calibrations increase that sensitivity even further. One calibration technique used neutrons as stand-ins for dark matter particles. 
Bouncing neutrons off the xenon atoms allows scientists to quantify how the LUX detector responds to the recoiling process. "It is like a giant game of pool with a neutron as the cue ball and the xenon atoms as the stripes and solids," Gaitskell said. "We can track the neutron to deduce the details of the xenon recoil, and calibrate the response of LUX better than anything previously possible." The nature of the interaction between neutrons and xenon atoms is thought to be very similar to the interaction between dark matter and xenon. "It's just that dark matter particles interact very much more weakly--about a million-million-million-million times more weakly," Gaitskell said. The neutron experiments help to calibrate the detector for interactions with the xenon nucleus. But LUX scientists have also calibrated the detector's response to the deposition of small amounts of energy by struck atomic electrons. That's done by injecting tritiated methane--a radioactive gas--into the detector. "In a typical science run, most of what LUX sees are background electron recoil events," said Carter Hall, a University of Maryland professor. "Tritiated methane is a convenient source of similar events, and we've now studied hundreds of thousands of its decays in LUX. This gives us confidence that we won't mistake these garden-variety events for dark matter." Another radioactive gas, krypton, was injected to help scientists distinguish between signals produced by ambient radioactivity and a potential dark matter signal. "The krypton mixes uniformly in the liquid xenon and emits radiation with a known, specific energy, but then quickly decays away to a stable, non-radioactive form," said Dan McKinsey, a UC Berkeley physics professor and co-spokesperson for LUX who is also an affiliate with Berkeley Lab. By precisely measuring the light and charge produced by this interaction, researchers can effectively filter out background events from their search. "And so the search continues," McKinsey said. "LUX is once again in dark matter detection mode at Sanford Lab. The latest run began in late 2014 and is expected to continue until June 2016. This run will represent an increase in exposure of more than four times compared to our previous 2013 run. We will be very excited to see if any dark matter particles have shown themselves in the new data." McKinsey, formerly at Yale University, joined UC Berkeley and Berkeley Lab in July, accompanied by members of his research team. The Sanford Lab is a South Dakota-owned facility. Homestake Mining Co. donated its gold mine in Lead to the South Dakota Science and Technology Authority (SDSTA), which reopened the facility in 2007 with $40 million in funding from the South Dakota State Legislature and a $70 million donation from philanthropist T. Denny Sanford. The U.S. Department of Energy (DOE) supports Sanford Lab's operations. Kevin Lesko, who oversees SURF operations and leads the Dark Matter Research Group at Berkeley Lab, said, "It's good to see that the experiments installed in SURF continue to produce world-leading results." The LUX scientific collaboration, which is supported by the DOE and National Science Foundation (NSF), includes 19 research universities and national laboratories in the United States, the United Kingdom and Portugal. "The global search for dark matter aims to answer one of the biggest questions about the makeup of our universe. 
We're proud to support the LUX collaboration and congratulate them on achieving an even greater level of sensitivity," said Mike Headley, Executive Director of the SDSTA. Planning for the next-generation dark matter experiment at Sanford Lab is already under way. In late 2016 LUX will be decommissioned to make way for a new, much larger xenon detector, known as the LUX-ZEPLIN (LZ) experiment. LZ would have a 10-ton liquid xenon target, which will fit inside the same 72,000-gallon tank of pure water used by LUX. Berkeley Lab scientists will have major leadership roles in the LZ collaboration. "The innovations of the LUX experiment form the foundation for the LZ experiment, which is planned to achieve over 100 times the sensitivity of LUX. The LZ experiment is so sensitive that it should begin to detect a type of neutrino originating in the Sun that even Ray Davis' Nobel Prize-winning experiment at the Homestake mine was unable to detect," according to Harry Nelson of UC Santa Barbara, spokesperson for LZ.


News Article | October 5, 2016
Site: www.nature.com

The ExbB construct with and without a C-terminal 6×His tag was subcloned into pET26b (Novagen). ExbD was subcloned into pACYCDuet-1 vector (Novagen) with an N-terminal Strep-tag and a C-terminal 10×His tag. ExbD was also subcloned into a pCDF-1b vector (Novagen) containing a C-terminal TEV protease site followed by a 10×His tag. An ExbD construct containing a C-terminal TEV protease site (preceded by a Gly-Gly-Gly linker for efficient digestion by TEV protease) followed by a 10×His tag was constructed by deletion of the sequence encoding the periplasmic domain of ExbD (residues 50–141). TonB was cloned into a pACYCDUET-1 vector with an N-terminal 10×His tag followed by a TEV protease site. Mutants of TonB (C18A), ExbD (D25A, N78C and E113C), and ExbB (C25S) were prepared by site-directed mutagenesis (primer sequences for all cloning and mutagenesis experiments are available upon request). The sequences of all plasmid constructs and mutations were verified by sequence analysis (Macrogen USA and Eurofins Genomics GmbH). Expression of ExbB with a C-terminal 6×His tag was performed by transforming E. coli BL21(DE3) cells (NEB) with the pET26b/ExbB vector. Co-expression was performed by co-transforming E. coli BL21(DE3) cells with the respective ExbB, ExbD, and/or TonB plasmids. For all transformations, cells were plated onto LB agar plates supplemented with appropriate antibiotics. Colonies were then used for a starter culture to inoculate 12 flasks containing either 1 l 2×YT medium (Ton subcomplex) or SelenoMet medium supplemented with l-methionine at 40 mg/l (Molecular Dimensions) (Ton complex), with appropriate antibiotics. Cultures were grown at 37 °C with shaking at 220 r.p.m. until they reached an OD of 0.5–1.0, induced with isopropyl β-d-1-thiogalactopyranoside (IPTG) to 0.1 mM final concentration, and then allowed to continue to grow overnight at 28 °C. For selenomethionine-substituted samples for experimental phasing, B834(DE3) cells (NEB) were co-transformed with pET26b/ExbB and pCDF-1b/ExbD plasmids. Single colonies were used to inoculate 12 flasks containing 1 l SelenoMet medium (Molecular Dimensions) supplemented with 40 mg/ml l-selenomethionine and appropriate antibiotics. Cultures were grown at 37 °C with shaking at 220 r.p.m. until they reached an OD of 0.5–1.0, induced with IPTG to 0.1 mM final concentration, and then allowed to continue to grow overnight at 28 °C. Cells were harvested and used immediately or stored at −80 °C. For purification, cells were resuspended in either 1×PBS (Ton subcomplex) or TBS (Ton complex) supplemented with 100 μM 4-(2-aminoethyl)benzenesulfonyl fluoride (AEBSF), 100 μM DNase, and 50 μg/ml lysozyme, and disrupted with two passages through an EmulsiFlex-C3 (Avestin) operating at ~15,000 p.s.i. Membranes were pelleted by ultracentrifugation in a Type 45 Ti Beckman rotor at 200,000g for 1 h at 4 °C. Membranes were then resuspended in 1×PBS or TBS using a dounce homogenizer and solubilized by the addition of Triton X-100 (Ton subcomplex) or DDM (Anatrace) (Ton complex) to a final concentration of 1% by stirring at medium speed for 1 h to overnight at 4 °C. Insoluble material was pelleted by ultracentrifugation in a Type 45 Ti Beckman rotor at 200,000g for 1 h at 4 °C and the supernatant was used immediately. Immobilized metal affinity chromatography (IMAC) was performed on an AkTA Purifier (GE Healthcare) using a 15-ml Ni-NTA agarose column (Qiagen) equilibrated with 1×PBS or TBS supplemented with 0.1% Triton X-100 or 0.1% DDM. 
The supernatant was supplemented with 10 mM imidazole and loaded onto the column. The column was washed in three steps with 1×PBS or TBS supplemented with 20, 40 and 60 mM imidazole, respectively, and eluted with 1×PBS or TBS supplemented with 250 mM imidazole in 2-ml fractions. Fractions were analysed by SDS–PAGE and those fractions containing the complex were pooled. To remove the 10×His tag, TEV protease was added to the sample at 0.1 mg/ml final concentration and rocked overnight at 4 °C. For the Ton complex, the sample was then diluted 2–3 times with 25 mM HEPES, pH 7.3, and 0.1% DDM and loaded onto an anion exchange 6-ml ResourceQ column (GE Healthcare). Elution was performed with a 0–1 M NaCl gradient over 5 column volumes. For the Ton subcomplex, the sample was concentrated using an Amicon Ultra-15 Centrifugal Filter Unit with a 50-kDa MW cut-off (Millipore), filtered, and purified by size-exclusion chromatography using a Superdex 200 HL 16/600 column (GE Healthcare) at a flow rate of 0.5–1.0 ml/min. The buffer consisted of 20 mM HEPES-NaOH, pH 7.0, 150 mM NaCl, 0.01% NaN3, and 0.08% C E . For the Ton complex, eluted fractions were concentrated using an Amicon Ultra-15 Centrifugal Filter Unit with a 100-kDa MW cut-off (Millipore), and passed over a Superose 6 HR 10/30 column (GE Healthcare) at a flow rate of 0.5 ml/min using 20 mM HEPES-NaOH, pH 7.0, 150 mM NaCl, and 0.05% DDM. Far-UV circular dichroism (CD) spectra (185–260 nm) were measured in 0.1 M NaPi, pH 7.0, and 0.03% DDM using quartz cuvettes with a 0.02–0.2 mm optical path length. The results were analysed using the DichroWeb package of programs42 and different sets of reference proteins, including the SMP180 set of membrane proteins. The analysis of the thermal stability of the complexes reconstituted into liposomes was measured by the temperature dependence of the CD signal amplitude at 222 nm. Thermal melting was performed in a magnetically stirred 1-cm quartz cuvette containing 10 mM HEPES, pH 7.0, and 100 mM NaCl with a rate of temperature increase of 0.5 °C/min. Melting curves were normalized to the measured value of the molar ellipticity change at 10 °C. For crystallization, samples were concentrated to ~10 mg/ml and sparse matrix screening was performed using a TTP Labtech Mosquito crystallization robot using hanging drop vapour diffusion and plates incubated at 15–21 °C. Initially, many lead conditions were observed to produce crystals with hexagonal morphology; however, none diffracted to better than ~7 Å and most suffered from anisotropy. To avoid this packing, we performed reductive methylation of our samples before crystallization using the Reductive Alkylation Kit (Hampton Research), followed by an additional size-exclusion chromatography step. This led to a condition which produced diffraction spots to ~4 Å resolution. Further optimization and screening allowed us to grow crystals in 100 mM Na-acetate, pH 4.5, 100 mM MgCl2, and 25% PEG 400 that routinely diffracted to ~3.5 Å resolution or better. For heavy atom soaking, crystals were transferred to a drop containing 1 mM HgCl2 and incubated overnight at room temperature and then harvested directly from the soaking condition. The best native crystals for the ExbB–ExbD complex, however, were grown from 100 mM HEPES-NaOH, pH 7.0, 100 mM CaCl2, and 22% PEG MME 550 and diffracted to 2.6 Å resolution; these crystals were also used for heavy atom soaking experiments. 
Unfortunately, none of the heavy atom soaked crystals (nor the selenomethionine substituted crystals) were useful for phasing owing to crystal pathologies, which we suspected were twinning related. However, selenomethionine substituted crystals of the ExbB –ExbD complex were obtained using 100 mM MES/imidazole, pH 6.5, 30 mM MgCl2, 30 mM CaCl2, 50% ethylene glycol, and 8% PEG 8000 and diffracted to 5.2 Å resolution with no twinning-related issues. Both native and selenomethionine-substituted crystals were harvested directly from the crystallization drops. Screening for diffraction quality was performed at the GM/CA-CAT and SER-CAT beamlines at the Advanced Photon Source at Argonne National Laboratory and at beamlines 5.0.1 and 8.2.1 at the Advanced Light Source at Lawrence Berkeley National Laboratory. Final datasets were collected at the SER-CAT beamline and all data were processed using either HKL200043 or Xia244. A summary of the data collection statistics can be found in Supplementary Table 1. The presence of both components of the Ton subcomplex within the crystals was confirmed by SDS–PAGE and mass spectrometry analyses of harvested crystals. For phasing the ExbB–ExbD complex structure, three datasets were collected on selenomethionine substituted crystals of the ExbB –ExbD complex at a wavelength of 0.979 Å. The data were processed with Xia244 and, based on non-isomorphism, one dataset was removed. The final two datasets were processed together in space group P4 2 2 to a final resolution of 5.2 Å. Selenium sites (35 total) were located using HKL2MAP45 after 5,000 tries within SHELXD at a resolution range of 20–6 Å. The sites were then fed into AutoSol (PHENIX)46 which removed one site, producing a phase-extended density-modified electron density map into which we could build an initial poly-alanine model. Five-fold symmetry was clearly observed, with each monomer consisting of very elongated α-helices, and directionality was determined on the basis of the predicted topology of ExbB, which contains a single large cytoplasmic domain. This model was then used as a search model to solve the native and Hg-soaked structures by molecular replacement using PHASER/PHENIX46, 47 and the sequence docked on the basis of anomalous peaks from the SeSAD dataset. The ExbB–ExbD complex was solved in space group P2 to 2.6 Å resolution with Rwork/Rfree values of 0.21/0.26 and the Hg-soaked structure in space group P2 2 2 to 3.5 Å resolution with Rwork/Rfree values of 0.25/0.30. All model building was performed using COOT and subsequent refinement done in PHENIX46. r.m.s.d. analysis was performed within PyMOL (Schrödinger). Electrostatic surface properties (calculated using the Linearized Poisson-Boltzmann Equation mode with a solvent radius of 1.4), including generation of the electric field lines, were analysed and visualized using the APBS plugin within PyMOL (Schrödinger). Buried surface area was calculated using the PDBePISA server48. Structure-related figures were made with PyMOL (Schrödinger) and Chimera49 and annotated and finalized with Adobe Photoshop and Illustrator. Coordinates and structure factors for the ExbB/ExbD complexes have been deposited into the Protein Data Bank (PDB accession codes 5SV0 and 5SV1). For 2D crystallization experiments, the Ton subcomplex (ExbB–ExbD) was extracted and purified by IMAC as previously described. The sample was passed over a Superose 12 HR 10/30 column using 20 mM Tris-HCl, pH 7, 150 mM NaCl, 0.01% NaN3, and 0.035% Triton X-100. 
The purified complex was then mixed with a solution stock of E. coli polar lipid (Avanti Polar Lipids, Inc.) at 10 mg/ml in 2% Triton X-100, to reach final concentrations of 0.5–1.0 mg/ml protein and 0.1–0.4 mg/ml lipid. The lipid-protein-detergent sample solutions were placed into Mini Slide-A-Lyser dialysis devices (Pierce) with a 20-kDa MW cutoff, and dialysed in 1 l of 25 mM Tris-HCl, pH 7.0, 150 mM NaCl, and 0.01% NaN3 at 4 °C. Aliquots of dialysed samples were observed periodically by electron microscopy to monitor the formation of 2D crystals. Sample preparation for electron microscopy was carried out by applying a 5-μl drop of protein-lipid material on a glow discharged carbon-coated electron microscopy grid. Staining was performed by addition of 1% (w/v) uranyl acetate and incubation for 1 min. Grids were then imaged on a Tecnai G2 200 LaB6 electron microscope operating at 200 kV at the Institut de Microbiologie de la Méditerranée. Images were recorded with a 2K Eagle CCD camera. The best 2D crystals were selected through observation of the power spectrum of the images using ImageJ software41. Selected images were processed using the IPLT Correlation Averaging suite program50. A filtered image was generated by optical filtering of the low resolution spots, and padded to contain only 4–6 unit cells. The padded image was cross-correlated with the original large image. The positions of the cross-correlation peaks were determined and used to extract sub-images that were summed to generate an average image of the 2D unit cell. Site-directed spin labelling was used to covalently attach the spin label (1-oxyl-2,2,5,5-tetramethyl-∆3-pyrroline-3-methyl) methanethiosulfonate (MTSL) (Toronto Research Chemicals) to Cys25 on ExbB and to cysteines engineered at positions 78 and 113 on ExbD (N78C, E113C; ExbD constructs were in the pACYC vector containing an N-terminal strep-tag and a C-terminal 10×His tag for the Ton subcomplex, and in the pCDF-1b vector for the Ton complex). For labelling with MTSL, samples were first incubated with 2–10 mM dithiothreitol (DTT) for 1–2 h and the DTT then removed by passage over a HiTrap desalting column (GE Healthcare) or during anion exchange (Ton complex). Samples were then incubated with a 10× molar excess of MTSL overnight at 4 °C and then passed over a Superose 6 HR 10/30 gel filtration column (GE Healthcare) using 20 mM HEPES-NaOH, pH 7.5, 200 mM NaCl, 0.08% C E or 0.03% DDM (Ton subcomplex); or 20 mM HEPES-NaOH, pH 7.0, 150 mM NaCl, and 0.05% DDM (Ton complex). For DEER measurements, the samples were diluted with D2O to a final concentration of 30% and cryoprotected with 10% v/v D8-glycerol before being flash frozen in liquid nitrogen. Continuous wave (CW) electron paramagnetic resonance (EPR) experiments were carried out at room temperature on a bench-top X-band MiniScope MS 400 (Magnettech by Freiberg Instrument) at 9.5 GHz (X-band) with 2.5 mW microwave power, 15 mT sweep width and 0.15 mT modulation amplitude. Spin labelling efficiency was calculated from the second integral of the derivative spectra compared to a standard spin concentration of 100 μM (Tempol in water). The ExbB native cysteine C25 was labelled with a 50% efficiency, while the ExbD mutants were labelled with efficiencies >80%. DEER measurements were initially performed at ETH Zurich on a commercial Bruker ELEXSYS-II E580 Q-band spectrometer (34–35 GHz) and later on a Bruker ELEXSYS E580Q-AWG dedicated pulse Q-band spectrometer operating at 34–35 GHz. 
Both spectrometers were equipped with a TWT amplifier (150 W) and a home-made rectangular resonator (from ETH Zurich) enabling the insertion of 30–40 μl sample volume in quartz tubes with 3 mm outer diameter51. Dipolar time evolution data were acquired using the four-pulse DEER experiment at 50 K. All pulses were set to be rectangular with 12 ns length, with the pump frequency at the maximum of the echo-detected field swept spectrum, 100 MHz higher than the observer frequency. Deuterium nuclear modulations were averaged by increasing the first interpulse delay by 16 ns for 8 steps as previously described51. The background of the normalized DEER primary data (V(t)/V(0)) was fitted with optimized dimensions from 2.5 to 3.2 and the resulting normalized secondary data (F(t)/F(0)) were converted by model-free Tikhonov regularization to distance distributions with the software DeerAnalysis201552, 53. The simulation of the possible spin label rotamers populated at selected positions in the protein was performed using the Matlab program package MMM2015.1 using the MTSL ambient temperature library54. The ExbB –ExbD complex (ExbD was in the pACYC vector containing an N-terminal strep-tag and a C-terminal 6×His tag) was expressed and purified as described earlier. To prepare the sample for crosslinking, the sample was incubated at 4 °C with 5 mM DTT for at least 1 h. The DTT was then removed using a desalting column in 20 mM HEPES, pH 7.0, 150 mM NaCl, and 0.1% DDM. The crosslinker 1,8-bismaleimido-diethyleneglycol (BM(PEG)2) (Pierce) was added at a final concentration of 0.2 mM and the reaction was incubated at 4 °C overnight. The sample was concentrated and passed over a Superose 6 HR 10/30 gel filtration column using 20 mM HEPES-NaOH, pH 7.0, 150 mM NaCl, and 0.035% DDM on an AkTA Purifier system (GE Healthcare). The results were visualized by SDS–PAGE analysis. Protein complexes were reconstituted into liposomes by dialysis of the protein–lipid–detergent mixture. Lipids (DOPG, DOPC and DOPE) dissolved in chloroform were mixed in a molar ratio of 2:3:5. Chloroform was removed by vortexing in a stream of nitrogen gas in a glass tube followed by drying in vacuum for 2–3 h. The lipid film was hydrated in 1 ml TN buffer (10 mM Tris-HCl, pH 7.5, 50 mM NaCl), followed by five cycles of freeze–thaw and sonication using a water bath sonicator until the suspension of lipids became clear (10–15 min). For proteoliposome preparation, small unilamellar vesicles (SUVs) were mixed with octylglucoside (final concentration, 2%) and then proteins added to achieve a molar ratio of total lipid to protein ∼500–2,000 mol/mol. After 1 h incubation in ice, the lipid–protein–detergent mixture was dialysed into 10 mM Tris-HCl, pH 7.5, 0.3 M sucrose, and 50 mM KCl for 30–40 h using a dialysis membrane with a MW cut-off pore size of 10 kDa. Mueller-Rudin type planar bilayer membranes were formed on a 0.2-mm diameter aperture in a partition that separates two 1-ml compartments, using a mixture of lipids, DOPG, DOPC and DOPE, at a molar ratio of 2:3:5 (10 mg/ml) in n-decane, applied by a brush technique55. The aqueous solution in both compartments consisted of 2 mM KPi, pH 7.0, and 0.1 M and 0.4 M KCl in the cis- and trans-compartments, respectively. To study the pH dependence of channel activity, bathing solutions were buffered with 2 mM Na-acetate (pKa 4.8), Na-cacodylate (pKa 6.2), and Tris (pKa 8.3). The pH of the bathing solution was changed by adding 10–20 μl 0.1 M HCl or KOH. 
The cis-side of the planar bilayer is defined as that to which the electrical potential is applied. Proteoliposomes, 0.1–2 μl, were added to the trans-compartment, and the solutions were stirred until the transmembrane current appeared. A large concentration of an osmolyte inside of the liposomes and the transmembrane KCl concentration gradient caused proteoliposome fusion with the pre-formed planar lipid membrane bilayer. The transmembrane current was measured in voltage-clamp mode with Ag/AgCl electrodes and agar bridges, using a BC-525C amplifier (Warner Instruments). The single-channel conductance of the ExbB–ExbD complexes was measured in symmetrical salt conditions: 0.1 M KCl solution, pH 7.5, at a holding potential of +50 or −50 mV. For ion selectivity experiments, zero-current potential (V ) was determined from volt-ampere characteristics measured in asymmetric salt conditions. Relative cation/anion permeability was calculated using the Goldman-Hodgkin-Katz equation56.
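For a single KCl gradient, the Goldman-Hodgkin-Katz voltage equation reduces to a form that can be inverted directly for the relative cation/anion permeability. The sketch below shows that inversion; the example reversal potential and the assignment of the cis and trans sides to the "internal" and "external" terms of the equation are illustrative assumptions, not values or conventions taken from the paper.

```python
import math

# Sketch: extract a relative permeability P_K / P_Cl from a measured
# zero-current (reversal) potential using the Goldman-Hodgkin-Katz
# voltage equation for a pure KCl gradient:
#   V_rev = (RT/F) * ln( (P_K*[K]_o + P_Cl*[Cl]_i) / (P_K*[K]_i + P_Cl*[Cl]_o) )
# The reversal potential below is a made-up example, and treating the cis
# side as "i" and the trans side as "o" is an assumed sign convention.

RT_OVER_F = 25.69e-3   # volts, at roughly 25 degrees C

def pk_over_pcl(v_rev_volts, c_cis_molar, c_trans_molar):
    """Solve the GHK voltage equation for P_K/P_Cl with [K] = [Cl] on each side."""
    x = math.exp(v_rev_volts / RT_OVER_F)
    return (c_cis_molar - x * c_trans_molar) / (x * c_cis_molar - c_trans_molar)

# Asymmetric conditions from the experiment: 0.1 M KCl (cis), 0.4 M KCl (trans).
# Hypothetical measured reversal potential of +20 mV:
print(f"P_K/P_Cl ~ {pk_over_pcl(0.020, 0.1, 0.4):.1f}")
```

A ratio well above 1 indicates a cation-selective channel, while a ratio near 1 indicates little selectivity between K+ and Cl-.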


News Article | March 7, 2016
Site: www.rdmag.com

Cutting-edge simulations run at Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) over a two-year period are helping physicists better understand what influences the behavior of the plasma turbulence that is driven by the intense heating necessary to create fusion energy. This research has yielded exciting answers to long-standing questions about plasma heat loss that have previously stymied efforts to predict the performance of fusion reactors and could help pave the way for this alternative energy source. The key to making fusion work is to maintain a sufficiently high temperature and density to enable the atoms in the reactor to overcome their mutual repulsion and bind to form helium. But one side effect of this process is turbulence, which can increase the rate of plasma heat loss, significantly limiting the resulting energy output. So researchers have been working to pinpoint both what causes the turbulence and how to control or possibly eliminate it. Because fusion reactors are extremely complex and expensive to design and build, supercomputers have been used for more than 40 years to simulate the conditions to create better reactor designs. NERSC is a Department of Energy Office of Science User Facility that has supported fusion research since 1974. One roadblock in the quest for fusion is that, to date, computer models have often been unable to predict exactly how turbulence will behave inside the reactor. In fact, there have long been differences between predictions and experimental results in fusion experiments when studying how turbulence contributes to heat loss in the confined plasma. Now researchers at MIT's Plasma Science and Fusion Center, in collaboration with colleagues at the University of California at San Diego (UCSD) and General Atomics, have discovered a solution to this disparity. By performing high-resolution multi-scale simulations, the team was able to simultaneously resolve multiple turbulence instabilities that have previously been treated in separate simulations. A series of these multi-scale simulations run on NERSC's Edison system found that interactions between turbulence at the tiniest scale (that of electrons) and turbulence at a scale 60 times larger (that of ions) can account for the mysterious mismatch between theoretical predictions and experimental observations of the heat loss. The findings, published December 17, 2015 in Nuclear Fusion, could greatly improve our knowledge of what's really going on inside the current tokamak research experiments that exist around the world and in future experimental reactors under construction or planning, the researchers noted. "For a very long time, the predictions from leading theories have been unable to explain how much heat loss is coming from electrons in fusion plasma," said Nathan Howard, a research scientist at MIT's Plasma and Fusion Science Center and lead author on the Nuclear Fusion paper. "You apply your best theories, but they have underpredicted the amount of heat loss coming from the electrons. In this particular work, we have shown that using the coupled model--where you capture both the large-scale and small-scale turbulence simultaneously--you can actually reproduce the experimental electron heat losses, in part because there appear to be strong interactions between the large-scale and small-scale turbulence that weren't well understood previously." 
Doing so requires prodigious amounts of computer time to run simulations that encompass such widely disparate scales, he emphasized. The entire study took between 100 million and 120 million CPU hours on Edison, Howard noted; each simulation required approximately 15 million hours of computation, carried out by using 17,000-30,000 processors at NERSC, making Howard the biggest user of NERSC computing time in 2014. Using an ordinary MacBook Pro to run the full set of six simulations that the team carried out, he estimates, would have taken about 3,000 years. "I ran one entire simulation in 36-hour jobs and had to restart it about 24 times to get it to run long enough," he explained. "And that's not accounting for the amount of time it took to get jobs through the queue on the supercomputer." For the simulations, the researchers used data from experiments conducted in 2012 at MIT's Alcator C-Mod tokamak. The multi-scale simulations used the gyrokinetic model implemented by the GYRO code developed by Jeff Candy at General Atomics. The key inputs into the simulation codes were calculated from the experimental data; the simulation results were then compared to the experimentally derived measurements. "We are actually able to reproduce multiple aspects of the experiment with our simulations, which is important because it gives you some confidence that what you are seeing in the simulations is actually representative of reality," Howard said. For more than a decade, the general expectation by physicists had been that, because the turbulent "swirls" associated with the ions are so much larger than those associated with electrons, electron-scale swirls would simply be smeared out by the much larger turbulent ion motion. And even if the smaller swirls survived the larger ion-scale turbulence, their tiny size suggested that their effects on heat loss would be negligible. But the new findings show that this thinking is not always correct. The two scales of turbulence can indeed coexist, the researchers found, and when they do they can interact with each other so strongly that it's impossible to predict the total heat loss accurately unless using a simulation that simultaneously resolves both scales. In fact, far from being eliminated by the larger-scale turbulence, the tiny swirls produced by electrons continue to be clearly visible in the results, stretched out into long ribbons that wind around the donut-shaped vacuum chamber that characterizes a tokamak reactor. Despite the temperature of 100 million degrees Celsius inside the plasma, these ribbon-like swirls, or eddies, persist long enough to influence how heat gets dissipated from the turbulent mass--a determining factor in how much fusion can actually take place inside the reactor. "This is the first time a lot of these very big hypotheses have been confirmed and shown to be operative in relevant plasma conditions," said Chris Holland, associate research scientist in the Center for Energy Research at UCSD and a co-author on the Nuclear Fusion paper. Looking ahead, work is already underway to modify the numerical and parallelization algorithms in GYRO so it will be able to make optimal use of many-integrated-core architectures (like Cori Phase II) as well as sophisticated accelerator hardware on emerging and future exascale platforms. "We are redesigning the GYRO code from the ground up to take advantage of these powerful but challenging new architectures and to increase the physics fidelity of the simulations," Candy said. 
"The challenge we are working on right now is figuring out how to get the complicated simulations, which require significant memory size and bandwidth, to work efficiently and scale well on these new platforms so we can continue to study even more complex scenarios," Holland added. "I think this is a problem that can be solved, and if we are successful it opens up exciting possibilities for new ways of doing the simulations, looking at more scales and doing more comprehensive predictions for ITER and other kinds of reactors. If we can find a way to use the new generation of platforms and make these simulations more routine, then it becomes a really exciting tool."


News Article | October 10, 2016
Site: www.rdmag.com

Did you know that your brain processes information in a hierarchy? As you are reading this page, the signal coming in through your eyes enters your brain through the thalamus, which organizes it. That information then goes on to the primary visual cortex at the back of the brain, where populations of neurons respond to very specific basic properties. For instance, one set of neurons might fire up because the text on your screen is black and another set might activate because there are vertical lines. This population will then trigger a secondary set of neurons that respond to more complex shapes like circles, and so on until you have a complete picture. For the first time, a new tool developed at the Department of Energy's (DOE's) Lawrence Berkeley National Laboratory (Berkeley Lab) allows researchers to interactively explore the hierarchical processes that happen in the brain when it is resting or performing tasks. Scientists also hope that the tool can shed some light on how neurological diseases like Alzheimer's spread throughout the brain. Created in conjunction with computer scientists at University of California, Davis (UC Davis) and with input from neuroscientists at UC San Francisco (UCSF), the software, called Brain Modulyzer, combines multiple coordinated views of functional magnetic resonance imaging (fMRI) data--like heat maps, node link diagrams and anatomical views--to provide context for brain connectivity data. "The tool provides a novel framework of visualization and new interaction techniques that explore the brain connectivity at various hierarchical levels. This method allows researchers to explore multipart observations that have not been looked at before," says Sugeerth Murugesan, who co-led the development of Brain Modulyzer. He is currently a graduate student researcher at Berkeley Lab and a PhD candidate at UC Davis. "Other tools tend to look at abstract or statistical network connections but don't do quite a good job at connecting back to the anatomy of the brain. We made sure that Brain Modulyzer connects to brain anatomy so that we can simultaneously appreciate the abstract information in anatomical context," says Jesse Brown, a postdoctoral researcher at UCSF who advised the Berkeley Lab development team on the tool's functionality. A paper describing Brain Modulyzer was recently published online in the IEEE/ACM Transactions on Computational Biology and Bioinformatics. Brain Modulyzer is now available on github. Murugesan and Berkeley Lab Computer Scientist Gunther Weber developed the tool together. Weber is also an adjunct professor in the Department of Computer Science at UC Davis. UCSF Associate Professor of Neurology William Seeley also advised the tool's development. As a neuroscientist at UCSF's Memory and Aging Center, Brown and his colleagues use neuroimaging to diagnose diseases, like Alzheimer's and dementia, as well as monitor how the diseases progress over time. Ultimately, their goal is to build a predictive model of how a disease will spread in the brain based on where it starts. "We know that the brain is built like a network, with axons at the tip of neurons that project to other neurons. That's the main way the neurons connect with each other, so one way to think about disease spreading in the brain is that it starts in one place and kind of jumps over along the network connections," says Brown. To see how a brain region is connected to other brain regions, Brown and his colleagues examine the fMRIs of healthy subjects. 
The set of connections observed in the fMRIs are visualized as a network. "For us the connection pattern of the network in healthy subjects is valuable information, because if we then study a patient with dementia and see that the disease is starting at point a in that network, we can expect that it will soon spread through the network connections to points b and c," Brown adds. Before Brain Modulyzer, researchers could only explore these neural networks by creating static images of the brain regions they were studying and superimposing those pictures on an anatomical diagram of the entire brain. On the same screen, they'd also look at fMRI data that had been reduced to a static network diagram. "The problem with this analysis process is that it's all static. If I wanted to explore another region of the brain, which would be a different pattern, I'd have to input a whole different set of data and create another set of static images," says Brown. But with Brain Modulyzer, all he has to do is input a matrix that describes the connection strengths between all of the brain regions that he is interested in studying and the tool will automatically detect the networks. Each network is colored differently in the anatomical view and the information visualized abstractly in a number of graph and matrix views. "Modulyzer is such a helpful tool for discovery because it bubbles up really important information about functional brain properties, including information that we knew was there before, but it also connects to brain regions that we didn't realize existed before in the dataset. Every time I use it, I find something surprising in the data," says Brown. "It is also incredibly valuable for researchers who don't know these methods as well. It will allow them to be a lot more efficient in detecting connections between brain regions that are important for cognition." The idea for Brain Modulyzer initiated when Berkeley Lab's Weber and Seeley met at the "Computational Challenges for Precision Medicine" in November 2012. This workshop brought together investigators from Berkeley and UCSF to focus on computational challenges posed by precision medicine. Their initial discussions led to a collaboration with Oblong Industries--a company that builds computer interfaces--to translate laboratory data collected at UCSF into 3D visualizations of brain structures and activity. The results of this collaboration were presented at the Precision Medicine Summit in May 2013. "At the Aging and Memory Center at UCSF, our expertise is in neuroscience, neurological diseases and dementia. We are really fortunate to be in touch with Berkeley Lab scientists whose expertise in visualization, maps and working with big data exploration helped us build such amazing tools," says Brown. "The precision medicine collaboration was such a fruitful collaboration for everyone that we decided to stay in touch." After the Precision Medicine Summit, the team discussed possibilities for further collaboration, which led to a Laboratory Directed Research and Development (LDRD) project at Berkeley Lab called "Graph-based Analysis and Visualization of Multimodal Multi-resolution Large-scale Neuropathology Data." Part of the funding for Brain Modulyzer development came from this LDRD, as well as grants to Seeley from the Tau Consortium and National Institutes of Health. Soon, the team hopes to present their Brain Modulyzer paper to the neuroscience community for feedback. 
"We want to make sure that this tool is useful to the community, so we will keep iterating on it," says Brown. "We have plenty of ideas to improve on what we have, and we think that Modulyzer will keep getting better over time."


News Article | October 12, 2016
Site: www.biosciencetechnology.com

Did you know that your brain processes information in a hierarchy? As you are reading this page, the signal coming in through your eyes enters your brain through the thalamus, which organizes it. That information then goes on to the primary visual cortex at the back of the brain, where populations of neurons respond to very specific basic properties. For instance, one set of neurons might fire up because the text on your screen is black and another set might activate because there are vertical lines. This population will then trigger a secondary set of neurons that respond to more complex shapes like circles, and so on until you have a complete picture. For the first time, a new tool developed at the Department of Energy's (DOE's) Lawrence Berkeley National Laboratory (Berkeley Lab) allows researchers to interactively explore the hierarchical processes that happen in the brain when it is resting or performing tasks. Scientists also hope that the tool can shed some light on how neurological diseases like Alzheimer's spread throughout the brain. Created in conjunction with computer scientists at University of California, Davis (UC Davis) and with input from neuroscientists at UC San Francisco (UCSF), the software, called Brain Modulyzer, combines multiple coordinated views of functional magnetic resonance imaging (fMRI) data--like heat maps, node link diagrams and anatomical views--to provide context for brain connectivity data. "The tool provides a novel framework of visualization and new interaction techniques that explore the brain connectivity at various hierarchical levels. This method allows researchers to explore multipart observations that have not been looked at before," says Sugeerth Murugesan, who co-led the development of Brain Modulyzer. He is currently a graduate student researcher at Berkeley Lab and a PhD candidate at UC Davis. "Other tools tend to look at abstract or statistical network connections but don't do quite a good job at connecting back to the anatomy of the brain. We made sure that Brain Modulyzer connects to brain anatomy so that we can simultaneously appreciate the abstract information in anatomical context," says Jesse Brown, a postdoctoral researcher at UCSF who advised the Berkeley Lab development team on the tool's functionality. A paper describing Brain Modulyzer was recently published online in the IEEE/ACM Transactions on Computational Biology and Bioinformatics. Brain Modulyzer is now available on github. Murugesan and Berkeley Lab Computer Scientist Gunther Weber developed the tool together. Weber is also an adjunct professor in the Department of Computer Science at UC Davis. UCSF Associate Professor of Neurology William Seeley also advised the tool's development. As a neuroscientist at UCSF's Memory and Aging Center, Brown and his colleagues use neuroimaging to diagnose diseases, like Alzheimer's and dementia, as well as monitor how the diseases progress over time. Ultimately, their goal is to build a predictive model of how a disease will spread in the brain based on where it starts. "We know that the brain is built like a network, with axons at the tip of neurons that project to other neurons. That's the main way the neurons connect with each other, so one way to think about disease spreading in the brain is that it starts in one place and kind of jumps over along the network connections," says Brown. To see how a brain region is connected to other brain regions, Brown and his colleagues examine the fMRIs of healthy subjects. 
The set of connections observed in the fMRIs is visualized as a network. "For us, the connection pattern of the network in healthy subjects is valuable information, because if we then study a patient with dementia and see that the disease is starting at point a in that network, we can expect that it will soon spread through the network connections to points b and c," Brown adds. Before Brain Modulyzer, researchers could only explore these neural networks by creating static images of the brain regions they were studying and superimposing those pictures on an anatomical diagram of the entire brain. On the same screen, they'd also look at fMRI data that had been reduced to a static network diagram. "The problem with this analysis process is that it's all static. If I wanted to explore another region of the brain, which would be a different pattern, I'd have to input a whole different set of data and create another set of static images," says Brown. But with Brain Modulyzer, all he has to do is input a matrix that describes the connection strengths between all of the brain regions that he is interested in studying, and the tool will automatically detect the networks. Each network is colored differently in the anatomical view, and the information is visualized abstractly in a number of graph and matrix views. "Modulyzer is such a helpful tool for discovery because it bubbles up really important information about functional brain properties, including information that we knew was there before, but it also connects to brain regions that we didn't realize existed before in the dataset. Every time I use it, I find something surprising in the data," says Brown. "It is also incredibly valuable for researchers who don't know these methods as well. It will allow them to be a lot more efficient in detecting connections between brain regions that are important for cognition." The idea for Brain Modulyzer originated when Berkeley Lab's Weber and Seeley met at the "Computational Challenges for Precision Medicine" workshop in November 2012. This workshop brought together investigators from Berkeley and UCSF to focus on computational challenges posed by precision medicine. Their initial discussions led to a collaboration with Oblong Industries--a company that builds computer interfaces--to translate laboratory data collected at UCSF into 3D visualizations of brain structures and activity. The results of this collaboration were presented at the Precision Medicine Summit in May 2013. "At the Aging and Memory Center at UCSF, our expertise is in neuroscience, neurological diseases and dementia. We are really fortunate to be in touch with Berkeley Lab scientists whose expertise in visualization, maps and working with big data exploration helped us build such amazing tools," says Brown. "The precision medicine collaboration was such a fruitful collaboration for everyone that we decided to stay in touch." After the Precision Medicine Summit, the team discussed possibilities for further collaboration, which led to a Laboratory Directed Research and Development (LDRD) project at Berkeley Lab called "Graph-based Analysis and Visualization of Multimodal Multi-resolution Large-scale Neuropathology Data." Part of the funding for Brain Modulyzer development came from this LDRD, as well as grants to Seeley from the Tau Consortium and the National Institutes of Health. Soon, the team hopes to present their Brain Modulyzer paper to the neuroscience community for feedback.
"We want to make sure that this tool is useful to the community, so we will keep iterating on it," says Brown. "We have plenty of ideas to improve on what we have, and we think that Modulyzer will keep getting better over time."


News Article | November 29, 2016
Site: globenewswire.com

Copenhagen, 2016-11-29 12:30 CET (GLOBE NEWSWIRE) -- The Carlsberg Forum series reflects the strong link between science and business. Each year, the Kaj Linderstrøm-Lang awards are given to prominent scientists for their achievements within biochemistry or physiology, the fields of science in which Kaj Linderstrøm-Lang, a professor at Carlsberg Research Laboratory from 1939 to 1959, distinguished himself as a pioneer. This year, Professor Henrik V. Scheller, Joint BioEnergy Institute, Lawrence Berkeley National Laboratory, USA, and Professor Geoff Fincher, School of Agriculture, Food & Wine, The University of Adelaide, Australia, received the Kaj Linderstrøm-Lang Prize as an acknowledgement of their outstanding achievements in identifying and characterizing enzymes involved in synthesis and modification of the plant cell wall. The third Kaj Linderstrøm-Lang Prize was awarded to Professor Richard Henderson, MRC Laboratory of Molecular Biology, Cambridge, UK, for his pioneering work towards obtaining high-resolution atomic structures of membrane proteins and membrane protein complexes by electron cryomicroscopy. The prize is a gold-plated medal and a personal financial award of DKK 40,000. Finally, the Emil Chr. Hansen Golden Medal was awarded to Jef Boeke, Director, Institute for Systems Genetics at New York University, for his seminal contributions to yeast genome scrambling and the design of synthetic yeast. The Emil Chr. Hansen Foundation sponsors the Emil Chr. Hansen Golden Medal. Professor Emil Chr. Hansen worked at the Carlsberg Laboratory from 1877 to 1909 and was the first to isolate a pure yeast that revolutionized the brewing industry. Birgitte Skadhauge, Head of Carlsberg Research Laboratory, says: "The distinguished work of Carlsberg Research Laboratory scientists Kaj Linderstrøm-Lang and Emil Chr. Hansen made it possible to have the high quality beers we have today. This year marks the 140th anniversary of the Carlsberg Research Laboratory, and we found it particularly important to celebrate the connection between beer and science this year by honouring not only Professors Richard Henderson, Geoff Fincher and Henrik Scheller but also Jef Boeke with the Emil Christian Hansen Golden Medal for their notable scientific contributions to business and society." The program for this year's Carlsberg Forum consisted of eight lectures by outstanding international scientists and businesspersons spanning trend-setting research in, for example, yeast, cereal crops, food for health and well-being, and new technologies. The business part of the program included lectures on business and health challenges in a globalized and rapidly changing world. In a special endnote address, Birgitte Skadhauge reviewed the Laboratory's anniversary and the famed re-brew project. Some years ago, scientists at the Laboratory made an extraordinary discovery in the old cellars of Carlsberg in Copenhagen, Denmark: a very old Carlsberg bottle that surprisingly still contained living yeast cells. They grew and analyzed the cells, and, as it turns out, the bottle was one of the very first beers brewed with the original pure yeast from 1883, when Carlsberg democratized modern lager beers. Earlier this year, specially invited guests celebrated the Laboratory's 140th anniversary by tasting, for the first time in more than a hundred years, the original Carlsberg quality lager, re-brewed by the Laboratory.
Birgitte Skadhauge says: "What better way to take science to business than by re-brewing the father of quality lager. The Laboratory has pioneered beer quality since it was founded 140 years ago. On the eve of our anniversary, we are proud to have demonstrated scientifically why Carlsberg is probably the best beer in the world." Warner Bros' documentary about the re-brew project premiered in Copenhagen and Fredericia, Denmark, on 24 and 25 November. A movie trailer is available on rebrewproject.com. The Carlsberg Group is one of the leading brewery groups in the world, with a large portfolio of beer and other beverage brands. Our flagship brand – Carlsberg – is one of the best-known beer brands in the world, and the Baltika, Carlsberg and Tuborg brands are among the eight biggest brands in Europe. More than 47,000 people work for the Carlsberg Group, and our products are sold in more than 150 markets. In 2015, the Carlsberg Group sold 120 million hectolitres of beer, which is more than 35 billion bottles. Find out more at www.carlsberggroup.com.


News Article | September 26, 2016
Site: www.scientificcomputing.com

The roar can be deafening. Cooling fans and power supplies whoosh and whine from rows and rows of supercomputers at the main data center of the Texas Advanced Computing Center in Austin. The power bill at TACC can reach over a million dollars a year to keep the machines humming. But there's a stranger in town that might change how data centers power their systems. A new kind of advanced computing system called Hikari (Japanese for the word "light") came online at TACC in late August 2016. What's new is that Hikari runs on solar power and high voltage direct current, or HVDC. Hikari is a microgrid that supports a supercomputer, a first for the U.S. By day, solar panels that shade a TACC parking lot provide nearly all of Hikari's power, up to 208 kilowatts. At night it switches back to conventional AC power from the utility grid. "The Hikari project is a Japan and Texas collaboration project, and it aims to demonstrate the potential of the HVDC system," said Toshihiro Hayashi, assistant manager in the engineering divisions of NTT FACILITIES, INC., Japan. Engineers of the Hikari HVDC power feeding system predict it will save 15 percent compared to conventional systems. "The 380 volt design reduces the number of power conversions when compared to AC voltage systems," said James Stark, director of Engineering and Construction at the Electronic Environments Corporation (EEC), a division of NTT FACILITIES, INC. "What's interesting about that," Stark added, "is the computers themselves - the supercomputer, the blade servers, cooling units, and lighting - are really all designed to run on DC voltage. By supplying 380 volts DC to Hikari instead of having an AC supply with conversion steps, it just makes a lot more sense. That's really the largest technical innovation." Data centers in the U.S. consumed an estimated 70 billion kilowatt-hours in 2014, which represents about 1.8 percent of total U.S. electricity consumption. That's according to a June 2016 Department of Energy report by the Lawrence Berkeley National Laboratory. High-voltage direct current also makes it easier to connect renewable energy sources: solar in Hikari's case, but potentially other sources like wind and hydrogen fuel cells. "That's really one of our main focuses, trying to make data centers more sustainable so that we're reducing overall power consumption within an industry that has traditionally consumed a lot of power," James Stark said. With sustainability comes reliability. This held true for another NTT FACILITIES, INC. project called the Sendai Microgrid on the fateful day of the Tohoku earthquake, March 11, 2011. The Sendai Microgrid was equipped with gas engines, a fuel cell, and a photovoltaic array. It continued to supply power and heat to facilities at Tohoku Fukushi University for hospital patients and elderly nursing home residents despite days of blackouts from the catastrophic damage to the district's energy supply system caused by the earthquake. "This microgrid power supply system activated very well after the earthquake," Hayashi said. "It showed that it's a very highly reliable energy system." Another NTT FACILITIES project leading to Hikari is the Tsukuba Research and Development Center, an HVDC microgrid that generates 70 kilowatts for a modular data center. "We have various experiences in Japan of having this HVDC system," Hayashi said. These projects, including Hikari, are supported by NEDO, the New Energy and Industrial Technology Development Organization, a public agency of Japan.
The Hikari project partners are NEDO, NTT FACILITIES, INC., and the University of Texas at Austin through the Texas Advanced Computing Center. The collaboration started with a visit to TACC by engineers at NTT FACILITIES, INC. in early 2014, on the heels of basic studies they'd done for installing HVDC at U.S. sites. TACC also shares a strong interest in developing new technologies, including technologies for energy savings. "We're very interested in UT Austin's motto," said Hayashi, "which is 'what starts here changes the world.' We very much agree with this motto." Hayashi's team worked with TACC to develop feasibility studies of the Hikari HVDC project from December 2014 to May 2015. This effort led to a Memorandum of Understanding between the State of Texas and NEDO in August of 2015. NTT FACILITIES, INC. worked with EEC to build out Hikari, which completed the system installation in late August 2016. "If there wasn't such partnership, we wouldn't have launched this project. I would like to express my gratitude to NEDO for establishing the partnership," Hayashi said. The Hikari supercomputer cluster consists of 432 Hewlett Packard Enterprise (HPE) Apollo 8000 XL730f servers coupled with HPE DL380 and DL360 nodes that are interconnected with a first-of-its-kind Mellanox end-to-end EDR InfiniBand fabric at 100 gigabits per second. Over 10,000 cores from "Haswell" Xeon processors will deliver more than 400 teraflops. The Hikari project also aims to demonstrate energy efficiency through more than just HVDC. The HPE Apollo 8000 systems use a warm water-based liquid cooling system that eliminates the need for fans within the nodes and reduces the energy that would normally be required for water refrigeration and excess heat removal. The solar energy that would have been used for powering fans and chillers can be used for computational work. When it reaches production in 2017, Hikari will be used by University of Texas medical researchers to make progress on diseases like cancer and disorders like autism. "We really hope this project will demonstrate the efficiency advantages of using 380 volt DC, not only in data centers, but in any commercial building," James Stark said. "The hope is that the research that comes out of this demonstration project will help to open the door to more widespread use of 380 volt systems throughout data centers and commercial buildings worldwide."
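As a rough sense of scale for the 15 percent figure quoted above, the sketch below works through the arithmetic for a Hikari-sized system. The average load, electricity price, and hours of operation are made-up assumptions for illustration, not figures reported by TACC or NTT FACILITIES, INC.

```python
# Back-of-the-envelope sketch of the HVDC saving quoted above (~15 percent versus a
# conventional AC conversion chain). Hikari's 208 kW figure is its peak solar capacity;
# the average load, utility rate, and hours below are assumptions for illustration only.
HVDC_SAVING_FRACTION = 0.15      # claimed reduction vs. conventional AC distribution
ASSUMED_AVG_LOAD_KW = 150        # hypothetical average draw for a Hikari-sized cluster
HOURS_PER_YEAR = 8760
ASSUMED_PRICE_PER_KWH = 0.10     # hypothetical utility rate, USD

annual_kwh = ASSUMED_AVG_LOAD_KW * HOURS_PER_YEAR
kwh_saved = annual_kwh * HVDC_SAVING_FRACTION
print(f"~{annual_kwh:,.0f} kWh/yr load -> ~{kwh_saved:,.0f} kWh/yr "
      f"(~${kwh_saved * ASSUMED_PRICE_PER_KWH:,.0f}) potentially avoided")
```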


News Article | November 10, 2016
Site: www.washingtonpost.com

On election day, the U.S. was busy voting for a president who openly scoffs at the reality of human-caused climate change and has expressed the goal of withdrawing the U.S. from the international process for dealing with it. So if climate change is something you worry about, you’re probably extra worried right now. But on the same day, a paper published in Nature Communications Tuesday actually contained some of the better news about climate change that we’ve heard in a while. Granted, even that doesn’t make it very sunny — just a modest bit of evidence suggesting a slight, temporary reprieve in the rate at which we’re altering the planet. Trevor Keenan of Lawrence Berkeley National Laboratory and his colleagues examined what is called the “terrestrial carbon sink”: How much trees, plants, and other aspects of global land surfaces are managing to pull carbon dioxide out of the air. Every year, you see, humans pump many billion tons of the stuff into the atmosphere, but not all of it stays there to warm the planet. A very significant fraction ends up getting absorbed by the ocean. Another large fraction gets pulled in by land-based plants which use it for photosynthesis. And so on. What the new study shows is that from 2002 to 2014, plants appear to have gone into overdrive, and started pulling more carbon dioxide out of the air than they had before. The result was that the rate at which carbon dioxide accumulates in the atmosphere did not increase during this time period, although previously, it had grown considerably in concert with growing greenhouse gas emissions. Granted, this does not mean that plants revved up nearly enough to prevent steady accumulation of the long-lived gas in the atmosphere each year. There is still more added every year. But they did increase the fraction of it that they were gobbling up, in a way that scientists could measure. “That portion [of carbon dioxide] that stays in the atmosphere, that’s called the airborne fraction, and that has reduced by about 20 percent over the last 15 years,” said Keenan in an interview. “So around the year 2000 it was about 0.5, and currently it’s about 0.4.” In other words, in recent years the land and ocean started absorbing 60 percent of all emitted carbon dioxide, whereas before they had been absorbing more like 50 percent. Carbon dioxide in the atmosphere still went up steadily each year, but maybe not at quite as fast a rate as before. Keenan completed the work with scientists from U.S., U.K., Australian and Chinese institutions — including two researchers with the Global Carbon Project, which every year tracks how much CO2 the world emits, as well as how much of it ends up in the atmosphere as opposed to being trapped by the ocean and land “sinks.” One reason for the change is the so-called global warming “pause” that persisted during much of the 2000s (and now appears to have ended), explained Keenan. You see, plants engage in photosynthesis which pulls in carbon dioxide, but they also engage in respiration, which releases it again. The same is true of microorganisms living on land – in its soil, for instance. And respiration rates increase at higher temperatures. But during the pause, Keenan continued, carbon dioxide levels kept rising but temperature didn’t rise as much over land surfaces. That seems to have triggered a disproportionate response from living things on land. “We have more and more CO2 going into photosynthesis, but we don’t have more and more coming out from respiration,” he explained. 
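The airborne fraction Keenan describes is simple bookkeeping: the share of emitted carbon dioxide that stays in the atmosphere, with the remainder taken up by the land and ocean sinks. The sketch below just reproduces the 0.5-to-0.4 shift quoted above; the absolute emission numbers are placeholders rather than values from the paper.

```python
# The "airborne fraction" is the share of emitted CO2 that remains in the atmosphere;
# the rest is absorbed by land and ocean. The two cases below reproduce the 0.5 -> 0.4
# shift quoted in the article; the absolute numbers are placeholders, not data from the paper.
def airborne_fraction(atmospheric_growth, total_emissions):
    """Fraction of emitted CO2 remaining airborne (same units for both inputs)."""
    return atmospheric_growth / total_emissions

for label, growth, emissions in [("around 2000", 5.0, 10.0), ("recent years", 4.0, 10.0)]:
    af = airborne_fraction(growth, emissions)
    print(f"{label}: airborne fraction ~{af:.1f}, land and ocean sinks absorb ~{1 - af:.0%}")
```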
One related trend may be what has been termed "Arctic greening." Scientists have been finding, of late, that as northern portions of the planet warm up and total atmospheric carbon dioxide increases, there has been an increase in plant growth in these regions. If all of this were to continue — if more carbon dioxide in the air continually meant an increased response from photosynthesis, and if respiration stayed in check too — then you might think the growing land carbon "sink" would be able to really help save us. And indeed, climate change "skeptics" have often pointed to plant growth as a possible factor that would mitigate climate change. However, that is most emphatically not the conclusion of the new paper. Rather, what it has detected seems to have been a temporary effect. It's even possible the situation could fully reverse itself in the future. "What we do know is that the effect of CO2 fertilization diminishes as CO2 goes up, so it's a diminishing return," said Keenan. "Whereas the effect of temperature on respiration is thought to increase." Especially in a context in which Trump's election could significantly hobble global climate progress, the new research underscores how effective it could be to curtail another major climate driver – deforestation, especially in tropical countries. If we not only stopped cutting down tropical forests, but also reclaimed lost forested areas and allowed them to grow back on a mass scale, research like this does serve as a reminder that there is a way to further fire up the land sink, and reduce the rate of global warming still more. And that could buy the planet some much-needed time. In the meantime, it remains to be seen whether the 2015-2016 El Niño event has already shifted the trends reported in the new paper, making the land "sink" weaker once again. There are, after all, good reasons to think that El Niño events, by causing droughts and large-scale wildfires (such as what we saw in Indonesia a year ago), can actually hurt the land's ability to pull carbon dioxide out of the air. Which just goes to show you how fragile any shift in the land "sink" could be. "It's good news for now," said Keenan of the study. "We can't expect it to continue." Correction: This article had previously stated that the land and ocean carbon sinks had gone from absorbing 40 percent of airborne carbon dioxide to 50 percent. It should have been from 50 percent to 60 percent.


An international team led by researchers at UCLA and Caltech has demonstrated that altering the form of platinum nanoscale wires from a smooth surface to a jagged one can significantly reduce the amount of precious metal required as a catalyst for the oxygen reduction reaction (ORR) in fuel cells and thus lower the cost. According to the findings, the newly developed catalyst is so active that the amount of platinum required for a fuel cell could be 1/50 of what is needed today. In a paper published in Science, the team reports that the jagged Pt nanowires exhibit an ECSA (electrochemically active surface area) of 118 m2 per gram Pt and a specific activity of 11.5 mA per square centimeter for ORR, corresponding to a mass activity of 13.6 amperes per milligram Pt, nearly doubling previously reported best values. Reactive molecular dynamics simulations suggested that the highly stressed, under-coordinated rhombohedral-rich surface configurations of the jagged nanowire enhanced ORR activity versus more relaxed surfaces. Platinum is used to catalyze the key oxygen reduction reaction in the fuel cell's cathode, directly converting the chemical energy in hydrogen fuel into electricity, with water vapor as the byproduct. Oxygen reduction represents the key reaction step that limits the rate of power generation, and platinum is currently still the most viable catalyst to speed up the reaction and power generation, despite a great deal of research exploring alternatives. However, current state-of-the-art platinum catalysts are not active enough, which necessitates the use of a relatively large amount of platinum, contributing to the high price of fuel cells. Manufacturing nanowires with jagged surfaces, rather than smooth, creates new types of highly active sites that can significantly reduce the reaction barrier and speed up the oxygen reduction reaction. Further, the thin body of the nanowire ensures most of the platinum atoms are exposed on the surface to actively participate in the reaction instead of being embedded inside the body and making little contribution to the reaction. All of that results in a reduction in the amount of platinum used, and therefore the cost, while at the same time ramping up the reaction efficiency and power generation rate. This work is a perfect example of what one can achieve by the atomic scale control of nanoscale materials, and how structural modifications at such small dimension can lead to big gain in functions for real applications. This is a fascinating world to explore for a material scientist. —Yu Huang, professor of materials science and engineering at the UCLA Henry Samueli School of Engineering and Applied Science. Huang was a co-principal investigator on the research along with Xiangfeng Duan, UCLA professor of chemistry and biochemistry. Both are members of the California NanoSystems Institute at UCLA. The other principal investigator was William Goddard, the Charles and Mary Ferkel Professor of Chemistry, Materials Science, and Applied Physics at Caltech. The lead author of the research was Mufan Li, a UCLA doctoral student advised by Duan and Huang. The researchers created the wires in a two-step process. First, they used a heating process to create nanowires of a platinum-nickel alloy. Then they used an electrochemical process to selectively remove the nickel atoms from the alloy nanowires.
Following testing, the researchers found that the jagged nanowires made the reaction much more efficient, delivering 50 times more current (power) than the same amount of commercial catalysts can. Other authors hailed from the National Research Council of Italy; Tsinghua University in China; The Chinese Academy of Sciences; California State University, Long Beach; Northeastern University; and Lawrence Berkeley National Laboratory. The research was supported by the Department of Energy and the National Science Foundation.
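The reported numbers are internally consistent: multiplying the specific activity by the electrochemically active surface area, once the units are aligned, recovers the quoted mass activity. The short sketch below is just that unit conversion, not code from the study.

```python
# Quick unit-conversion check on the figures reported above: mass activity should be
# roughly specific activity x electrochemically active surface area (ECSA).
ecsa_m2_per_g = 118.0                 # reported ECSA, m^2 per gram of Pt
specific_activity_mA_per_cm2 = 11.5   # reported ORR specific activity

ecsa_cm2_per_mg = ecsa_m2_per_g * 1e4 / 1e3          # m^2/g -> cm^2/mg
mass_activity_A_per_mg = specific_activity_mA_per_cm2 * 1e-3 * ecsa_cm2_per_mg
print(f"implied mass activity ~ {mass_activity_A_per_mg:.1f} A per mg Pt")   # ~13.6
```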


News Article | November 17, 2016
Site: phys.org

The Top500 list was announced Monday, Nov. 14, at The International Conference for High Performance Computing, Networking, Storage and Analysis, SC16, in Salt Lake City. It is announced semi-annually and comprises systems ranked according to their computation speeds as determined by the LINPACK benchmark, which solves a system of linear equations represented by a large, double-precision dense matrix. SciPhi-XVI was delivered to Jefferson Lab and installed in mid-August and was deployed on Sept. 30 with 200 nodes. An additional 64 nodes were added to the machine in late October, and the High Performance Computing group then benchmarked the machine. The SciPhi-XVI machine benchmarked at 425.8 teraflops using 256 of its 264 nodes. The cluster's 264 nodes are built on second-generation Intel Xeon Phi chips, which were released in 2016 and are codenamed Knights Landing. Each node has 64 computing cores, and individual nodes within the machine are connected via the Intel Omni-Path Architecture, which supports a 100 Gigabit/second data transfer rate between the nodes. "A special Intel distributed LINPACK benchmark software, HPL (High Performance Linpack), was used to rank the Jefferson Lab cluster, and it is based on an open source distribution with performance enhancement for Intel Xeon Phi processors," said Jie Chen, a member of Jefferson Lab's High Performance Computing group. "To achieve the performance number for the SciPhi-XVI cluster, a set of HPL running parameters has to be determined through many trials. In particular, the size of the matrix representing the linear equations solved by the HPL was chosen to be 2,457,600." Jefferson Lab conducts discovery-caliber nuclear physics research in exploring the atomic nucleus and its fundamental constituents, such as protons and neutrons, and these particles' building blocks: quarks and gluons. The theory of how these particles interact, Quantum Chromodynamics (QCD), is calculated in the form of lattice QCD, or LQCD. Scientists will use the SciPhi-XVI machine for LQCD calculations and as a test system for using the Xeon Phi processors to analyze data generated in Jefferson Lab's experimental physics program. The Chroma code, which is used to solve LQCD on the Xeon Phi processor platform, will be used with an open source library of equation solvers for LQCD. The High Performance Computing group has been modernizing the freely available Chroma code so that it will run faster and provide better results more quickly on new generations of hardware. "There are always challenges to bringing online state-of-the-art hardware systems. We are very proud of our staff for their hard work and dedication that made this achievement possible," said Jefferson Lab Chief Information Officer Amber Boehnlein. "This hardware achievement, together with our efforts to continually optimize the performance of our software for the newest systems, positions the laboratory to continue performing great science in high-performance computation." Jefferson Lab work on Chroma is funded through the U.S. Department of Energy Office of Science. The research is carried out in collaboration with partner institutions, such as DOE's National Energy Research Scientific Computing Center (NERSC) Exascale Science Application Program (NESAP) and the Scientific Discovery through Advanced Computing (SciDAC) program. NERSC is a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory.
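The problem size Chen quotes fixes the memory footprint of the benchmark, since HPL factors a dense double-precision matrix of order N. The sketch below works through that arithmetic and the implied per-node rate using only the figures reported here; it is illustrative, not the actual benchmark configuration.

```python
# Rough arithmetic behind the HPL run described above: a dense double-precision matrix
# of order N occupies N*N*8 bytes, which bounds the problem size that fits in the
# cluster's aggregate memory. All inputs are figures quoted in the article.
N = 2_457_600                      # HPL matrix order quoted by Jie Chen
NODES_USED = 256
BYTES_PER_DOUBLE = 8

matrix_bytes = N * N * BYTES_PER_DOUBLE
per_node_gb = matrix_bytes / NODES_USED / 1e9
print(f"HPL matrix: ~{matrix_bytes / 1e12:.1f} TB total, ~{per_node_gb:.0f} GB per node")

# Reported result: 425.8 teraflops across 256 Knights Landing nodes.
print(f"average sustained rate: ~{425.8 / NODES_USED * 1000:.0f} gigaflops per node")
```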
Supercomputer computation time on SciPhi-XVI is allocated based on the recommendations of the United States Lattice Gauge Theory Computational Program (USQCD), a consortium of top LQCD theorists in the United States that spans both high-energy physics and nuclear physics. The USQCD consortium developed the QUDA code library that is used with the Chroma code. Explore further: K computer takes first place on HPCG benchmark


News Article | March 4, 2016
Site: www.cemag.us

Electrons can extend our view of microscopic objects well beyond what’s possible with visible light — all the way to the atomic scale. A popular method in electron microscopy for looking at tough, resilient materials in atomic detail is called STEM, or scanning transmission electron microscopy, but the highly focused beam of electrons used in STEM can also easily destroy delicate samples. This is why using electrons to image biological or other organic compounds, such as chemical mixes that include lithium — a light metal that is a popular element in next-generation battery research — requires a very low electron dose. Scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a new imaging technique, tested on samples of nanoscale gold and carbon, that greatly improves images of light elements using fewer electrons. The newly demonstrated technique, dubbed MIDI-STEM, for matched illumination and detector interferometry STEM, combines STEM with an optical device called a phase plate that modifies the alternating peak-to-trough, wave-like properties (called the phase) of the electron beam. This phase plate modifies the electron beam in a way that allows subtle changes in a material to be measured, even revealing materials that would be invisible in traditional STEM imaging. Another electron-based method, which researchers use to determine the detailed structure of delicate, frozen biological samples, is called cryo-electron microscopy, or cryo-EM. While single-particle cryo-EM is a powerful tool — it was named as science journal Nature’s 2015 Method of the Year — it typically requires taking an average over many identical samples to be effective. Cryo-EM is generally not useful for studying samples with a mixture of heavy elements (for example, most types of metals) and light elements like oxygen and carbon. “The MIDI-STEM method provides hope for seeing structures with a mixture of heavy and light elements, even when they are bunched closely together,” says Colin Ophus, a project scientist at Berkeley Lab’s Molecular Foundry and lead author of a study, published last week in Nature Communications, that details this method. If you take a heavy-element nanoparticle and add molecules to give it a specific function, conventional techniques don’t provide an easy, clear way to see the areas where the nanoparticle and added molecules meet. “How are they aligned? How are they oriented?” Ophus asks. “There are so many questions about these systems, and because there wasn’t a way to see them, we couldn’t directly answer them.” While traditional STEM is effective for “hard” samples that can stand up to intense electron beams, and cryo-EM can image biological samples, “We can do both at once” with the MIDI-STEM technique, says Peter Ercius, a Berkeley Lab staff scientist at the Molecular Foundry and co-author of the study. The phase plate in the MIDI-STEM technique allows a direct measure of the phase of electrons that are weakly scattered as they interact with light elements in the sample. These measurements are then used to construct so-called phase-contrast images of the elements. Without this phase information, the high-resolution images of these elements would not be possible. In this study, the researchers combined phase plate technology with one of the world’s highest resolution STEMs, at Berkeley Lab’s Molecular Foundry, and a high-speed electron detector. 
They produced images of samples of crystalline gold nanoparticles, which measured several nanometers across, and the superthin film of amorphous carbon that the particles sat on. They also performed computer simulations that validated what they saw in the experiment. The phase plate technology was developed as part of a Berkeley Lab Laboratory Directed Research and Development grant in collaboration with Ben McMorran at University of Oregon. The MIDI-STEM technique could prove particularly useful for directly viewing nanoscale objects with a mixture of heavy and light materials, such as some battery and energy-harvesting materials, that are otherwise difficult to view together at atomic resolution. It also might be useful in revealing new details about important two-dimensional proteins, called S-layer proteins, that could serve as foundations for engineered nanostructures but are challenging to study in atomic detail using other techniques. In the future, a faster, more sensitive electron detector could allow researchers to study even more delicate samples at improved resolution by exposing them to fewer electrons per image. “If you can lower the electron dose you can tilt beam-sensitive samples into many orientations and reconstruct the sample in 3D, like a medical CT scan. There are also data issues that need to be addressed,” Ercius says, as faster detectors will generate huge amounts of data. Another goal is to make the technique more “plug-and-play,” so it is broadly accessible to other scientists. Berkeley Lab’s Molecular Foundry is a DOE Office of Science User Facility. Researchers from the University of Oregon, Gatan Inc., and Ulm University in Germany also participated in the study. Source: Berkeley Lab
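As a toy illustration of the point above (not the MIDI-STEM matched-illumination scheme itself), the snippet below shows why a weakly scattering, light-element feature is nearly invisible to a plain intensity measurement but becomes measurable once the scattered wave is interfered with a reference wave shifted by a quarter wave. The phase value is made up for illustration.

```python
# Toy demonstration (not the MIDI-STEM reconstruction) of why phase-sensitive detection
# helps with light elements: a weak phase object shifts the electron wave's phase but
# leaves its directly measured intensity almost unchanged, while interference with a
# quarter-wave-shifted reference converts that phase shift into contrast.
import numpy as np

phi = 0.05                                   # small phase shift from a light-element feature (radians)
scattered = np.exp(1j * phi)                 # unit-amplitude exit wave

direct = np.abs(scattered) ** 2              # plain intensity: exactly 1, the phase is invisible
with_reference = np.abs(1j + scattered) ** 2 / 4   # mix with a quarter-wave-shifted reference wave
no_sample = np.abs(1j + 1) ** 2 / 4          # the same interferometric measurement without a sample

print(f"direct signal change:          {abs(direct - 1):.1e}")                   # ~0
print(f"phase-sensitive signal change: {abs(with_reference - no_sample):.1e}")   # ~phi/2
```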


News Article | December 14, 2016
Site: www.eurekalert.org

Findings by Berkeley Lab researchers mean the Arctic may be even less of a carbon sink than previously thought. When the frozen Arctic tundra starts to thaw around June of each year, the snow melting and the ground softening, the soil may release a large pulse of greenhouse gases, namely, carbon dioxide and methane. Little has been known about such releases. Now scientists at the U.S. Department of Energy's (DOE) Lawrence Berkeley National Laboratory, in collaboration with a team of other scientists taking measurements both in the field and in the lab, have quantified the scale of such releases and explained the underlying mechanisms for the phenomenon. Their study was based on a spring pulse in northern Alaska, documented in 2014, that included CO2 emissions equivalent to 46 percent of the net CO2 absorbed in the summer months and methane emissions that added 6 percent to summer fluxes. What's more, recent climate trends may make such emissions more frequent, the scientists conclude. "We can see the effects of climate change happening more rapidly in the Arctic than in any other part of the world," said Berkeley Lab scientist Naama Raz-Yaseef. "So we need to understand the processes that are occurring and what to expect in the future. The amount of CO2 and methane (CH4) in the atmosphere determines the greenhouse effect--so we need to know more about these sources." Their study was recently published in the journal Geophysical Research Letters in a paper titled "Large CO2 and CH4 Emissions from Polygonal Tundra During Spring Thaw in Northern Alaska." Raz-Yaseef, an ecohydrologist, was the lead author. Co-authors were Berkeley Lab researchers Margaret Torn, Yuxin Wu, and Tim Kneafsey; Dave Billesbach of the University of Nebraska; Anna Liljedahl and Vladimir Romanovsky of the University of Alaska; David Cook of Argonne National Laboratory; and Stan Wullschleger of Oak Ridge National Laboratory. The study was a project of DOE's Next-Generation Ecosystem Experiment (NGEE-Arctic), which seeks to gain a predictive understanding of the Arctic terrestrial ecosystem's feedback to climate. The team used two towers 4 meters high to collect gas flux readings. The towers are located about 5 km apart near Barrow, Alaska, the northernmost town in the U.S., with one tower operated by NGEE and the other by DOE's Atmospheric Radiation Measurement (ARM) Climate Research Facility. "Typically we just measure what happens during summer," Raz-Yaseef said. "We assume nothing happens during winter because it's frozen, so there's no biological activity. But in 2014 we measured a big flux of carbon and methane emitted from the ground, at the very beginning of thaw and snowmelt. At first we didn't know if it was real or a measurement error. Then we confirmed it with another tower 5 km away (the ARM tower), which measured a very similar flux of the same size during the same period." That spring they measured a total of three pulses, the longest lasting five days. These findings are important in understanding the carbon balance of ecosystems in the Arctic. So far, the Arctic is considered a carbon sink, meaning it absorbs more CO2 than it emits on an annual basis, thanks mainly to the vegetation that grows in the summer. (Oceans and forests are far larger carbon sinks.) "Given our findings, the Arctic is an even smaller carbon sink than we thought since during some years nearly half of the summer uptake of CO2 is offset with these spring emissions," Raz-Yaseef said. Why is the soil releasing gases? 
Soils everywhere are full of microbes, and microbial activity emits gases. What makes the Arctic different is the way the ground freezes. As temperatures start to drop in the late fall, the soil freezes from the permafrost up and from the surface down. The active layer in between is trapped in the middle. "For about a month after the surface freezes, the middle layer is still active," Raz-Yaseef explained. "The temperature is still around 0 (Celsius), so microbial activity continues to take place, producing carbon and methane, but it's trapped beneath the surface ice. It can't be emitted, so it's stored throughout the winter." When the ice cap thaws in spring, the gases escape to the atmosphere. The scientists confirmed these observations in several ways. First, they measured soil temperatures every 10 cm, down through the permafrost 1.5 meters below ground, every 5 minutes year-round. Second, they extracted large, frozen soil cores that were transported to the laboratory for experiments. "(Berkeley Lab scientist) Yuxin Wu thawed a core under controlled conditions, but just like in nature," Raz-Yaseef said. "Just when the surface ice melted in the lab, he measured a large flux of carbon and methane. This is formed from gases trapped in the soil over winter." Separately, Berkeley Lab scientist Timothy Kneafsey took CT scans of hundreds of frozen cores and found gas-rich channels and pockets near the surface, which the scientists posited could serve as accumulation zones and pathways for gas flow. Spring pulses are not annual but may become more frequent. In fact, it's these channels and pockets that enable the emission pulses, the scientists conclude in their paper. The spring pulses were not detected in 2013 or 2016, and 2015 could not be measured due to equipment malfunction. Since it's not an annual event, they analyzed the conditions under which the pulses might occur. "The process is more likely to occur when there are events of rain on ice," Raz-Yaseef said. "When it thaws and freezes repeatedly, that produces cracks in the frozen soil, and through these cracks the gas can be emitted." During warmer years, the scientists expect the spring pulses to be more frequent. "We expect there will be more gas built up due to longer and warmer fall seasons and more frequent pulse events due to more rain on ice in the spring," Raz-Yaseef said. One thing that's for sure is the scientists now know not to ignore gas emissions in early spring. "Now we'll put the towers up much, much earlier, just in case there's another event, we'll be sure to catch it," Raz-Yaseef said. "It's one of those things, once you're aware of it, you open your eyes and look for it." NGEE-Arctic is supported by the DOE Office of Science. ARM is a DOE Office of Science User Facility.
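The 46 percent figure reported above translates directly into a smaller effective annual sink in years with a spring pulse. The minimal sketch below shows that bookkeeping; the absolute summer uptake value is a made-up placeholder, since the article reports only the ratio.

```python
# Bookkeeping sketch for the finding above: in a year with a spring pulse, CO2 emissions
# equivalent to ~46% of the net summer uptake escape at thaw, shrinking the effective
# annual sink. The absolute summer uptake is a placeholder assumption for illustration.
SPRING_PULSE_FRACTION = 0.46           # reported: spring CO2 pulse relative to net summer uptake
assumed_summer_uptake_gC_m2 = 50.0     # hypothetical net summer CO2 uptake, grams of carbon per m^2

spring_release = SPRING_PULSE_FRACTION * assumed_summer_uptake_gC_m2
effective_sink = assumed_summer_uptake_gC_m2 - spring_release
print(f"spring release ~{spring_release:.0f} gC/m^2 -> effective annual sink ~{effective_sink:.0f} gC/m^2")
```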


News Article | November 29, 2016
Site: www.eurekalert.org

Tiny, glowing crystals designed to detect and capture heavy-metal toxins such as lead and mercury could prove to be a powerful new tool in locating and cleaning up contaminated water sources. Motivated by publicized cases in which high levels of heavy metals were found in drinking water in Flint, Mich., and Newark, N.J., a science team led by researchers at Rutgers University used intense X-rays at Lawrence Berkeley National Laboratory (Berkeley Lab) to probe the structure of the crystals they developed and learn how they bind to heavy metals. The crystals function like miniature, reusable sensors and traps, and are known as luminescent metal-organic frameworks, or LMOFs. One type of LMOF that the team tested was found to selectively take up more than 99 percent of mercury from a test mixture of heavy and light metals within 30 minutes, according to recent results published in Applied Materials and Interfaces. No other MOFs have performed as well in this dual role of detecting and capturing, or "adsorbing," toxic heavy metals, the team reported. Simon Teat, a Berkeley Lab staff scientist, studied individual LMOF crystals, each measuring about 100 microns (millionths of a meter), with X-rays at the lab's Advanced Light Source (ALS). Using diffraction patterns produced as the X-ray light struck the LMOF samples, Teat applied software tools to map their three-dimensional structure with atomic resolution. The ALS is one of just a few synchrotron X-ray light sources in the world that have dedicated experimental stations for chemical crystallography studies of crystallized chemical compounds such as MOFs. It all starts with structure What he found was a patterned, grid-like 3-D structure containing carbon, hydrogen, oxygen, nitrogen, and zinc atoms that framed large, open channels. These atomic-scale structural details are key to understanding how the LMOFs bind heavy metals, and can also aid in designing more highly specialized structures. "With MOFs, you're typically interested in using the holes for something," Teat explained. In this case, the structure allows heavy metals to enter these open channels and chemically bind to the MOFs. Their very open framework gives the MOFs an abundant surface area relative to their size, which allows them to take in a large amount of contaminants. The LMOF structure was engineered to glow by incorporating a fluorescent chemical component, or ligand. "When the metal binds to the fluorescent ligand, the resulting framework fluoresces," Teat said. The fluorescence of the LMOFs switches off when they interact with the heavy metals. According to Jing Li, a chemistry professor at Rutgers University who led the research, the technology could be a money-saving solution. "Others had developed MOFs for either the detection of heavy metals or for their removal, but nobody before had really investigated one that does both," Li added. Intense X-rays produced at synchrotrons are the best way to map the 3-D structure of the MOFs, Li said, adding, "Knowing the crystal structures is one of the most important aspects of our research. You need those in order to perform subsequent characterizations and to understand the properties of these materials." In their tests, researchers found that the LMOFs bind strongly to mercury and lead, but bind weakly to lighter metals such as magnesium and calcium that are also found in water supplies but do not pose the same hazards. This selective trait, based on the molecular makeup of the LMOFs, is important, Li said. 
"We need to have a MOF that is selective and will only take the harmful species." The LMOFs can also be recycled. Researchers found that they could collect, clean, and then reuse the LMOFs for three cycles of toxic cleansing before their performance began to degrade. The study notes that heavily industrialized areas, cities with antiquated water regulations, and agricultural communities can be particularly susceptible to groundwater contamination, which can lead to soil contamination if not addressed. This can cause the contaminants to be taken up by plants and animals in the surrounding environment, broadening the pathways of exposure. Li said that further R&D could explore lower-cost and more durable LMOFs that could last for more cycles, and researchers could also pursue the development of water filters by blending the LMOFs with polymers to create a solid film. "These filters could be used for capture on a larger scale," she said. "We would like to continue with this research," Li said, adding that her team would like to test the system's performance on actual contaminated water sources if funding becomes available. "These are promising results, but we have a long way to go." Her team also has used Berkeley Lab's ALS to determine the crystal structures of MOFs for a wide variety of other applications, including high-explosives detection; toxin detection in foods; and new types of light-emitting components for LEDs, known as phosphors, that incorporate cheaper, more abundant materials. The Advanced Light Source is a DOE Office of Science User Facility. Researchers from the University of Texas at Dallas and Rider University also participated in this research. The work was supported by the DOE Office of Science. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel Prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit http://www. . DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | November 8, 2016
Site: www.eurekalert.org

New findings suggest the rate at which CO2 is accumulating in the atmosphere has plateaued in recent years because Earth's vegetation is grabbing more carbon from the air than in previous decades. That's the conclusion of a new multi-institutional study led by a scientist from the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab). It's based on extensive ground and atmospheric observations of CO2, satellite measurements of vegetation, and computer modeling. The research is published online Nov. 8 in the journal Nature Communications. To be clear, human activity continues to emit increasing amounts of carbon, and the atmospheric concentration of CO2, now at 400 parts per million (ppm), continues to rise. But the scientists found that between 2002 and 2014, the rate at which CO2 increased in the atmosphere held steady at about 1.9 ppm/year. In addition, the proportion of the CO2 emitted annually by human activity that remains in the atmosphere declined by about 20 percent. This slowdown can't keep pace with emissions, so the overall amount of human-caused CO2 in the atmosphere increased, just not as quickly. And for that, new research suggests, we can thank plants. "This highlights the need to identify and protect ecosystems where the carbon sink is growing rapidly," says Trevor Keenan, a research scientist in Berkeley Lab's Climate & Ecosystem Sciences Division and the corresponding author of the paper. The scientists attribute the stalled CO2 growth rate to an uptick in land-based photosynthetic activity, fueled by rising CO2 levels from fossil fuel emissions. It's a snowball effect: as CO2 levels rise in the atmosphere, photosynthetic activity flourishes and plants take in more carbon, sparking more plant growth, more photosynthesis, and more carbon uptake. They also identified another player. Plant respiration, a process in which plants use oxygen and produce CO2, did not increase as quickly as photosynthesis in recent years. This is because plant respiration is sensitive to temperature, and it was affected by the recent slowdown in global warming that was observed most significantly over vegetated land. So, between 2002 and 2014, plants took in more CO2 through photosynthesis, but did not "exhale" more CO2 into the atmosphere through respiration. "These changes decreased the amount of anthropogenic CO2 that stays in the atmosphere, and thus slowed the accumulation of atmospheric CO2," says Keenan. Their findings provide a possible answer to a climatic mystery. The growth rate of atmospheric CO2 climbed steadily during the latter half of the 20th century, from 0.75 ppm/year in 1959 to 1.86 ppm/year in 2002. But Keenan and colleagues discovered an inflection point last year when they analyzed the latest data from the Global Carbon Project, which quantifies carbon emissions and their sources annually. Since 2002, the growth rate has remained flat. This pause is especially surprising because it has occurred as human activity pumps more and more carbon into the atmosphere. All that CO2 must be going somewhere, so the scientists suspected something about the carbon cycle has recently changed in a big way. "We believed one of the planet's main carbon sinks had unexpectedly strengthened. The question was: which one?" says Keenan. The scientists ruled out oceans as a dominant cause because most computer models agree the amount of carbon taken in by oceans has increased steadily in recent years. 
That left terrestrial ecosystems, which undergo a large year-to-year variability in carbon uptake, and the two biggest influences on this variability are photosynthesis and plant respiration. To study these influences, the scientists used ten "global dynamic vegetation models" that predict how the terrestrial carbon cycle changes over time. They also used a model that incorporates satellite measurements of vegetation cover and plant activity to predict global photosynthesis and respiration rates. They validated the model by comparing its results with data from AmeriFlux and FLUXNET, which are networks of eddy-covariance research towers that measure ecosystem carbon, water, and energy fluxes in North and South America. Berkeley Lab manages AmeriFlux for the Department of Energy. Model projections were generated using different scenarios of atmospheric CO2 level, temperature, soil moisture, and other processes. This enabled the researchers to evaluate the impacts of these processes on the planet's terrestrial carbon cycle. Taken together, the models zeroed in on rising CO2 levels as having the biggest impact on photosynthesis and plant respiration. The result is a boost in terrestrial carbon uptake, particularly in tropical and high-latitude ecosystems. Specifically, the models suggest rising CO2 levels caused terrestrial ecosystems to double the rate at which they take in carbon, from between one and two petagrams of carbon per year in the 1950s, to between two and four petagrams of carbon per year in the 2000s. For comparison, human activity emits between nine and ten petagrams of carbon per year (one petagram is one trillion kilograms). The scientists conclude this increase in carbon uptake put the brakes on the growth rate of atmospheric CO2 between 2002 and 2014. "Unfortunately, this increase is nowhere near enough to stop climate change," says Keenan, adding that their results answer questions and pose new ones. "We've shown the increase in terrestrial carbon uptake is happening, and with a plausible explanation why. But we don't know exactly where the carbon sink is increasing the most, how long this increase will last, or what it means for the future of Earth's climate." The research is partly funded by the Laboratory Directed Research and Development Program of Berkeley Lab and the Department of Energy's Office of Science. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit http://www.lbl.gov. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
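To put the figures quoted above in context, the back-of-the-envelope sketch below converts the reported growth rate and emissions into a rough airborne fraction. It assumes the commonly used conversion of about 2.12 petagrams of carbon per ppm of atmospheric CO2, a value that does not appear in the article; it is illustrative arithmetic, not the study's own analysis.

```python
# Illustrative back-of-the-envelope check of the numbers quoted in the article.
# Assumption (not from the article): 1 ppm of atmospheric CO2 corresponds to
# roughly 2.12 petagrams of carbon (PgC), a commonly used conversion factor.

PPM_TO_PGC = 2.12  # PgC per ppm of atmospheric CO2 (approximate)

def airborne_fraction(growth_rate_ppm_per_yr, emissions_pgc_per_yr):
    """Fraction of emitted carbon that remains in the atmosphere."""
    atmospheric_increase_pgc = growth_rate_ppm_per_yr * PPM_TO_PGC
    return atmospheric_increase_pgc / emissions_pgc_per_yr

# Figures quoted in the article: growth rate ~1.9 ppm/yr since 2002,
# human emissions ~9-10 PgC/yr in recent years.
recent = airborne_fraction(1.9, 9.5)
print(f"Recent airborne fraction: ~{recent:.2f}")   # roughly 0.4

# With a terrestrial sink of ~2-4 PgC/yr (the article's doubled land uptake),
# the remainder of the budget is left for the ocean sink:
land_sink = 3.0   # PgC/yr, midpoint of the 2-4 PgC/yr range quoted above
ocean_sink = 9.5 - 1.9 * PPM_TO_PGC - land_sink
print(f"Implied ocean sink: ~{ocean_sink:.1f} PgC/yr")
```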


News Article | November 30, 2016
Site: www.cemag.us

Tiny, glowing crystals designed to detect and capture heavy-metal toxins such as lead and mercury could prove to be a powerful new tool in locating and cleaning up contaminated water sources. Motivated by publicized cases in which high levels of heavy metals were found in drinking water in Flint, Mich., and Newark, N.J., a science team led by researchers at Rutgers University used intense X-rays at Lawrence Berkeley National Laboratory (Berkeley Lab) to probe the structure of the crystals they developed and learn how they bind to heavy metals. The crystals function like miniature, reusable sensors, and traps, and are known as luminescent metal-organic frameworks, or LMOFs. One type of LMOF that the team tested was found to selectively take up more than 99 percent of mercury from a test mixture of heavy and light metals within 30 minutes, according to recent results published in Applied Materials and Interfaces. No other MOFs have performed as well in this dual role of detecting and capturing, or “adsorbing,” toxic heavy metals, the team reports. Simon Teat, a Berkeley Lab staff scientist, studied individual LMOF crystals, each measuring about 100 microns (millionths of a meter), with X-rays at the lab’s Advanced Light Source (ALS). Using diffraction patterns produced as the X-ray light struck the LMOF samples, Teat applied software tools to map their three-dimensional structure with atomic resolution. The ALS is one of just a few synchrotron X-ray light sources in the world that have dedicated experimental stations for chemical crystallography studies of crystallized chemical compounds such as MOFs. What he found was a patterned, grid-like 3D structure containing carbon, hydrogen, oxygen, nitrogen, and zinc atoms that framed large, open channels. These atomic-scale structural details are key to understanding how the LMOFs bind heavy metals, and can also aid in designing more highly specialized structures. “With MOFs, you’re typically interested in using the holes for something,” Teat explains. In this case, the structure allows heavy metals to enter these open channels and chemically bind to the MOFs. Their very open framework gives the MOFs an abundant surface area relative to their size, which allows them to take in a large amount of contaminants. The LMOF structure was engineered to glow by incorporating a fluorescent chemical component, or ligand. “When the metal binds to the fluorescent ligand, the resulting framework fluoresces,” Teat says. The fluorescence of the LMOFs switches off when they interact with the heavy metals. According to Jing Li, a chemistry professor at Rutgers University who led the research, the technology could be a money-saving solution. “Others had developed MOFs for either the detection of heavy metals or for their removal, but nobody before had really investigated one that does both,” Li adds. Intense X-rays produced at synchrotrons are the best way to map the 3D structure of the MOFs, Li says, adding, “Knowing the crystal structures is one of the most important aspects of our research. You need those in order to perform subsequent characterizations and to understand the properties of these materials.” In their tests, researchers found that the LMOFs bind strongly to mercury and lead, but bind weakly to lighter metals such as magnesium and calcium that are also found in water supplies but do not pose the same hazards. This selective trait, based on the molecular makeup of the LMOFs, is important, Li says. 
“We need to have a MOF that is selective and will only take the harmful species.” The LMOFs can also be recycled. Researchers found that they could collect, clean, and then reuse the LMOFs for three cycles of toxic cleansing before their performance began to degrade. The study notes that heavily industrialized areas, cities with antiquated water regulations, and agricultural communities can be particularly susceptible to groundwater contamination, which can lead to soil contamination if not addressed. This can cause the contaminants to be taken up by plants and animals in the surrounding environment, broadening the pathways of exposure. Li said that further R&D could explore lower-cost and more durable LMOFs that could last for more cycles, and researchers could also pursue the development of water filters by blending the LMOFs with polymers to create a solid film. “These filters could be used for capture on a larger scale,” she says. “We would like to continue with this research,” Li says, adding that her team would like to test the system’s performance on actual contaminated water sources if funding becomes available. “These are promising results, but we have a long way to go.” Her team also has used Berkeley Lab’s ALS to determine the crystal structures of MOFs for a wide variety of other applications, including high-explosives detection; toxin detection in foods; and new types of light-emitting components for LEDs, known as phosphors, that incorporate cheaper, more abundant materials. The Advanced Light Source is a DOE Office of Science User Facility. Researchers from the University of Texas at Dallas and Rider University also participated in this research. The work was supported by the DOE Office of Science.
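For readers who want to connect the reported uptake to the metrics typically quoted in adsorption studies, the sketch below computes two standard figures of merit: removal efficiency and the distribution coefficient Kd. The formulas are generic, and the concentrations, volume, and sorbent mass are hypothetical placeholders rather than data from the Rutgers-led paper; only the greater-than-99-percent mercury removal figure above comes from the study.

```python
# Generic adsorption metrics often reported in heavy-metal capture studies.
# The formulas are standard; the example concentrations below are hypothetical
# placeholders, not data from the Rutgers/Berkeley Lab paper.

def removal_efficiency(c_initial, c_final):
    """Percent of the metal removed from solution."""
    return 100.0 * (c_initial - c_final) / c_initial

def distribution_coefficient(c_initial, c_final, volume_mL, mass_g):
    """Kd (mL/g): ratio of metal bound to the sorbent vs. left in solution.
    Values above ~1e4 mL/g are generally considered excellent sorbents."""
    return (c_initial - c_final) / c_final * (volume_mL / mass_g)

# Hypothetical test: 10 ppm Hg reduced to 0.05 ppm by 10 mg of LMOF in 10 mL.
c0, ce = 10.0, 0.05          # ppm (mg/L)
print(removal_efficiency(c0, ce))                    # 99.5% -- consistent with ">99%"
print(distribution_coefficient(c0, ce, 10.0, 0.010)) # ~2e5 mL/g
```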


News Article | November 10, 2016
Site: www.eurekalert.org

At its start, the universe was a superhot melting pot that very briefly served up a particle soup resembling a "perfect," frictionless fluid. Scientists have recreated this "soup," known as quark-gluon plasma, in high-energy nuclear collisions to better understand our universe's origins and the nature of matter itself. The physics can also be relevant to neutron stars, which are the extraordinarily dense cores of collapsed stars. Now, powerful supercomputer simulations of colliding atomic nuclei, conducted by an international team of researchers including a Berkeley Lab physicist, provide new insights into the twisting, whirlpool-like structure of this soup and what's at work inside of it, and also light a path to how experiments could confirm these characteristics. The work is published in the Nov. 1 edition of Physical Review Letters. This soup contains the deconstructed ingredients of matter, namely fundamental particles known as quarks and other particles called gluons that typically bind quarks to form other particles, such as the protons and neutrons found at the cores of atoms. In this exotic plasma state -- which can reach trillions of degrees Fahrenheit, hundreds of thousands of times hotter than the sun's core -- protons and neutrons melt, freeing quarks and gluons from their usual confines at the center of atoms. These record-high temperatures have been achieved by colliding gold nuclei at Brookhaven National Laboratory's RHIC (Relativistic Heavy Ion Collider), for example, and lead nuclei at CERN's LHC (Large Hadron Collider). Experiments at RHIC discovered in 2005 that quark-gluon plasma behaves like a fluid. In addition to gold nuclei, RHIC has also been used to collide protons, copper and uranium. The LHC began conducting heavy-ion experiments in 2014, and has confirmed that the quark-gluon plasma behaves like a fluid. There remain many mysteries about the inner workings of this short-lived plasma state, which may only have existed for millionths of a second in the newborn universe, and nuclear physicists are using a blend of theory, simulations and experiments to glean new details about this subatomic soup. "In our sophisticated simulations, we found that there is much more structure to this plasma than we realized," said Xin-Nian Wang, a theorist in the Nuclear Science Division at Berkeley Lab who has worked for years on the physics of high-energy nuclear collisions. When plotted out in two dimensions, the simulations found that slightly off-center collisions of heavy nuclei produce a wobbling and expanding fluid, Wang said, with local rotation that is twisted in a corkscrew-like fashion. This corkscrew character relates to the properties of the colliding nuclei that created the plasma, which the simulation showed expanding along -- and perpendicular to -- the beam direction. Like spinning a coin by flicking it with your finger, the simulations showed that the angular momentum properties of the colliding nuclei can transfer spin properties to the quark-gluon plasma in the form of swirling, ring-like structures known as vortices. The simulations showed two of these doughnut-shaped vortices -- each with a right-handed orientation around each direction of the separate beams of the colliding nuclei -- and also many pairs of oppositely oriented vortices along the longest dimension of the plasma. These doughnut-shaped features are analogous to swirling smoke rings and are a common feature in classical studies of fluids, a field known as hydrodynamics.
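As a plain-language aid to what "vorticity" means here, the toy calculation below computes the local rotation of a simple two-dimensional swirling flow as the curl of its velocity field. It is a generic hydrodynamics illustration in Python/NumPy, not the relativistic hydrodynamic code used in the study, and the Gaussian vortex it constructs is purely illustrative.

```python
# Minimal illustration of "vorticity" -- the local rotation of a fluid --
# computed as the curl of a velocity field on a 2D grid. This is a toy
# example, not the relativistic hydrodynamic model used in the study.
import numpy as np

# 2D grid
x = np.linspace(-1.0, 1.0, 200)
y = np.linspace(-1.0, 1.0, 200)
X, Y = np.meshgrid(x, y, indexing="xy")

# Toy swirling flow: a Gaussian vortex centered at the origin
vx = -Y * np.exp(-(X**2 + Y**2) / 0.1)
vy =  X * np.exp(-(X**2 + Y**2) / 0.1)

# Vorticity (z-component of the curl): d(vy)/dx - d(vx)/dy
dx = x[1] - x[0]
dy = y[1] - y[0]
dvy_dx = np.gradient(vy, dx, axis=1)
dvx_dy = np.gradient(vx, dy, axis=0)
vorticity = dvy_dx - dvx_dy

print("peak vorticity:", vorticity.max())  # positive: counter-clockwise swirl
```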
The simulations also revealed a patterned outward flow from hot spots in the plasma that resemble the spokes of a wheel. The time scale covered in the simulation was infinitesimally small, Wang said, roughly the amount of time it takes light to travel the distance of 10-20 protons. During this time the wobbling fluid explodes like a fireball, spurting the particle soup outward from its middle more rapidly than from its top. Any new understanding of quark-gluon plasma properties should be helpful in interpreting data from nuclei-colliding experiments, Wang said, noting that the emergence of several localized doughnut-like structures in the simulations was "completely unexpected." "We can think about this as opening a completely new window of looking at quark-gluon plasmas, and how to study them," he said. "Hopefully this will provide another gateway into understanding why this quark-gluon fluid is such a perfect fluid -- the nature of why this is so is still a puzzle. This work will benefit not only theory, but also experiments." The simulations provide more evidence that the quark-gluon plasma behaves like a fluid, and not a gas as had once been theorized. "The only way you can describe this is to have a very small viscosity," or barely any friction, "a characteristic of a so-called 'perfect fluid' or 'fundamental fluid,'" Wang said. But unlike a familiar fluid like water, the simulation focuses on a fluid state hundreds of times smaller than a water molecule. Michael Lisa, a physics professor at Ohio State University who is part of the collaboration supporting the Solenoidal Tracker at RHIC (STAR), said the so-called vorticity or "swirl structure" of this plasma has never been measured experimentally, though this latest theoretical work may help to home in on it. STAR is designed to study the formation and characteristics of the quark-gluon plasma. "Wang and his collaborators have developed a sophisticated, state-of-the-art hydrodynamic model of the quark-gluon plasma and have identified swirling structures that vary within the fluid itself," he said. "Even more useful is the fact that they propose a method to measure these structures in the laboratory." Lisa also said there is ongoing analysis work to confirm the simulation's findings in data from experiments at RHIC and the LHC. "It is precisely innovations like this, where theory and experiment collaborate to explore new phenomena, that hold the greatest hope for greater insight into the quark-gluon plasma," he said. "Many tools have been used to probe the inner working mechanics and symmetry properties of this unique matter," said Zhangbu Xu, a spokesperson for the STAR collaboration and a staff scientist at Brookhaven National Laboratory. He added that preliminary results from STAR also suggest some spinning motion in the fluid, and the simulation work "adds a new dimension" to this possibility. To carry out the simulations, the team performed calculations using the Green Cube at the GSI Helmholtz Center for Heavy Ion Research in Germany and also at Central China Normal University. Researchers from Germany's Frankfurt Institute for Advanced Studies, Goethe University, and GSI Helmholtz Center for Heavy Ion Research, and from the University of Science and Technology of China and Central China Normal University, also participated in this work.
This work was supported by the Department of Energy's Office of Science, Germany's Helmholtz Association, the GSI Helmholtz Center for Heavy Ion Research, and the State Offensive for the Development of Scientific and Economic Excellence (LOEWE) program. The Relativistic Heavy Ion Collider is a DOE Office of Science User Facility. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit http://www.lbl.gov. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | November 17, 2016
Site: www.eurekalert.org

For the third time in its history, Thomas Jefferson National Accelerator Facility is home to one of the world's 500 fastest supercomputers. The SciPhi-XVI supercomputer was just listed as a TOP500 Supercomputer Site on November 14, placing 397th on the 48th edition of the list of the world's top supercomputers. The Top500 list was announced Monday, Nov. 14, at The International Conference for High Performance Computing, Networking, Storage and Analysis, SC16, in Salt Lake City. It's announced semi-annually and comprises systems ranked according to their computation speeds as determined by the LINPACK Benchmark, which measures how quickly a system solves a large, dense system of linear equations in double precision. SciPhi-XVI was delivered to Jefferson Lab and installed in mid-August and was deployed on Sept. 30 with 200 nodes. An additional 64 nodes were added to the machine in late October, and the High Performance Computing group then benchmarked the machine. The SciPhi-XVI machine benchmarked at 425.8 Teraflops using 256 of its 264 nodes. The cluster's 264 nodes are built around second-generation Intel® Xeon Phi™ chips, which were released in 2016 and are codenamed Knights Landing. Each node has 64 computing cores, and individual nodes within the machine are connected via the Intel® Omni-Path Architecture, which supports a 100 Gigabit/second data transfer rate between the nodes. "A special Intel distributed LINPACK benchmark software, HPL (High Performance Linpack), was used to rank the Jefferson Lab cluster, and it is based on an open source distribution with performance enhancement for Intel® Xeon Phi™ processors," said Jie Chen, a member of Jefferson Lab's High Performance Computing group. "To achieve the performance number for the SciPhi-XVI cluster, a set of HPL running parameters has to be determined through many trials. In particular, the size of the matrix representing the linear equations solved by the HPL was chosen to be 2,457,600." Jefferson Lab conducts discovery-caliber nuclear physics research in exploring the atomic nucleus and its fundamental constituents, such as protons and neutrons and these particles' building blocks: quarks and gluons. The theory of how these particles interact -- Quantum Chromodynamics, or QCD -- is calculated in the form of lattice QCD, or LQCD. Scientists will use the SciPhi-XVI machine for LQCD calculations, as well as a test system for using the Xeon Phi™ processors to analyze data generated in Jefferson Lab's experimental physics program. The Chroma code, which is used to solve LQCD on the Xeon Phi™ processor platform, will be used with an open source library of equation solvers for LQCD. The High Performance Computing group has been modernizing the freely available Chroma code, so that it will run faster and provide better results more quickly on new generations of hardware. "There are always challenges to bringing online state-of-the-art hardware systems. We are very proud of our staff for their hard work and dedication that made this achievement possible," said Jefferson Lab Chief Information Officer Amber Boehnlein. "This hardware achievement, together with our efforts to continually optimize the performance of our software for the newest systems, positions the laboratory to continue performing great science in high-performance computation." Jefferson Lab work on Chroma is funded through the U.S. Department of Energy Office of Science.
The research is carried out in collaboration with partner institutions, such as DOE's National Energy Research Scientific Computing Center (NERSC) Exascale Science Application Program (NESAP) and the Scientific Discovery through Advanced Computing (SciDAC) program. NERSC is a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory. Supercomputer computation time on SciPhi-XVI is allocated based on the recommendations of the United States Lattice Gauge Theory Computational Program (USQCD), a consortium of top LQCD theorists in the United States that spans both high-energy physics and nuclear physics. The USQCD consortium developed the QUDA code library that is used with the Chroma code. Jefferson Science Associates, LLC, a joint venture of the Southeastern Universities Research Association, Inc. and PAE, manages and operates the Thomas Jefferson National Accelerator Facility for the U.S. Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.
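The quoted benchmark figures can be sanity-checked with the standard HPL accounting, which assumes 8-byte double-precision matrix entries and roughly (2/3)n^3 + 2n^2 floating-point operations for the factorization. The sketch below applies that accounting to the matrix order and performance reported above; it is illustrative arithmetic, not Jefferson Lab's actual benchmark configuration.

```python
# Rough, standard HPL (LINPACK) accounting applied to the figures quoted above.
# Assumptions: 8-byte double-precision entries and the usual LU-factorization
# operation count of ~(2/3)n^3 + 2n^2 flops; this is illustrative arithmetic,
# not Jefferson Lab's actual benchmark configuration.

n = 2_457_600                 # matrix order quoted for the SciPhi-XVI run
rmax = 425.8e12               # achieved performance, flops/s (425.8 teraflops)
nodes = 256                   # nodes used in the benchmarked run

matrix_bytes = n * n * 8
flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
runtime_s = flops / rmax

print(f"matrix size : {matrix_bytes / 1e12:.1f} TB "
      f"(~{matrix_bytes / nodes / 1e9:.0f} GB per node)")
print(f"work        : {flops:.2e} floating-point operations")
print(f"runtime     : ~{runtime_s / 3600:.1f} hours at 425.8 TFLOP/s")
```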


News Article | December 16, 2016
Site: www.eurekalert.org

Scientists have enlisted the exotic properties of graphene, a one-atom-thick layer of carbon, to function like the film of an incredibly sensitive camera system in visually mapping tiny electric fields in a liquid. Researchers hope the new method will allow more extensive and precise imaging of the electrical signaling networks in our hearts and brains. The ability to visually depict the strength and motion of very faint electrical fields could also aid in the development of so-called lab-on-a-chip devices that use very small quantities of fluids on a microchip-like platform to diagnose disease or aid in drug development, for example, or that automate a range of other biological and chemical analyses. The setup could potentially be adapted for sensing or trapping specific chemicals, too, and for studies of light-based electronics (a field known as optoelectronics). "This was a completely new, innovative idea that graphene could be used as a material to sense electrical fields in a liquid," said Jason Horng, a co-lead author of a study published Dec. 16 in Nature Communications that details the first demonstration of this graphene-based imaging system. Horng is affiliated with the Kavli Energy NanoSciences Institute, a joint institute at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley, and is a postdoctoral researcher at UC Berkeley. The idea sprang from a conversation between Feng Wang, a faculty scientist in Berkeley Lab's Materials Sciences Division whose research focuses on the control of light-matter interactions at the nanoscale, and Bianxiao Cui, who leads a research team at Stanford University that specializes in the study of nerve-cell signaling. Wang is also a UC Berkeley associate professor of physics, and Cui is an associate professor of chemistry at Stanford University. "The basic concept was how graphene could be used as a very general and scalable method for resolving very small changes in the magnitude, position, and timing pattern of a local electric field, such as the electrical impulses produced by a single nerve cell," said Halleh B. Balch, a co-lead author in the work. Balch is also affiliated with the Kavli Energy NanoSciences Institute and is a physics PhD student at UC Berkeley. "One of the outstanding problems in studying a large network of cells is understanding how information propagates between them," Balch said. Other techniques have been developed to measure electrical signals from small arrays of cells, though these methods can be difficult to scale up to larger arrays and in some cases cannot trace individual electrical impulses to a specific cell. Also, Cui said, "This new method does not perturb cells in any way, which is fundamentally different from existing methods that use either genetic or chemical modifications of the cell membrane." The new platform should more easily permit single-cell measurements of electrical impulses traveling across networks containing 100 or more living cells, researchers said. Graphene, which is composed of a honeycomb arrangement of carbon atoms, is the focus of intense R&D because of its incredible strength, ability to very efficiently conduct electricity, high degree of chemical stability, the speed at which electrons can move across its surface, and other exotic properties. Some of this research is focused on the use of graphene as a component in computer circuits and display screens, in drug delivery systems, and in solar cells and batteries.
In the latest study, researchers first used infrared light produced at Berkeley Lab's Advanced Light Source to understand the effects of an electric field on graphene's absorption of infrared light. In the experiment, they aimed an infrared laser through a prism to a thin layer called a waveguide. The waveguide was designed to precisely match graphene's light-absorbing properties so that all of the light was absorbed along the graphene layer in the absence of an electric field. Researchers then fired tiny electrical pulses in a liquid solution above the graphene layer that very slightly disrupted the graphene layer's light absorption, allowing some light to escape in a way that carried a precise signature of the electrical field. Researchers captured a sequence of images of this escaping light in thousandths-of-a-second intervals, and these images provided a direct visualization of the electrical field's strength and location along the surface of the graphene. The new imaging platform -- dubbed CAGE for "Critically coupled waveguide-Amplified Graphene Electric field imaging device" -- proved sensitive to voltages of a few microvolts (millionths of a volt). This will make it ultrasensitive to the electric fields between cells in networks of heart cells and nerve cells, which can range from tens of microvolts to a few millivolts (thousandths of a volt). Researchers found that they could pinpoint an electric field's location along the graphene sheet's surface down to tens of microns (millionths of a meter), and capture its fading strength in a sequence of time steps separated by as few as five milliseconds, or thousandths of a second. In one sequence, researchers detailed the position and dissipation, or fade, of a local electric field generated by a 10-thousandths-of-a-volt pulse over a period of about 240 milliseconds, with sensitivity down to about 100 millionths-of-a-volt. Balch said that there are already plans to test the platforms with living cells. "We are working with collaborators to test this with real heart cells," she said. "There are several potential applications for this research in heart health and drug screening." There is also potential to use other atomically thin materials besides graphene in the imaging setup, she said. "The kind of elegance behind this system comes from its generality," Balch said. "It can be sensitive to anything that carries charge." The research team included participants from Berkeley Lab, UC Berkeley, and Stanford University. The work was supported by the U.S. Department of Energy Office of Science, the National Science Foundation, the David and Lucile Packard Foundation, and the Stanford University Bio-X Graduate Fellowship Program. The Advanced Light Source is a DOE Office of Science User Facility. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel Prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit http://www.lbl.gov. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
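The "critically coupled" waveguide at the heart of CAGE can be understood with a generic coupled-mode-theory argument: when the waveguide's radiative coupling rate exactly matches graphene's absorption rate, no light is reflected, so any field-induced change in the absorption lets a measurable amount of light escape. The sketch below illustrates that balance with arbitrary, normalized rates; it is not the model from the Nature Communications paper.

```python
# Generic coupled-mode-theory sketch of "critical coupling": when a resonant
# waveguide's radiative loss rate equals its absorption rate, no light is
# reflected -- it is all absorbed. A small, field-induced change in graphene's
# absorption breaks that balance and lets some light escape. Parameters are
# illustrative; this is not the model from the Nature Communications paper.
import numpy as np

def reflected_fraction(gamma_rad, gamma_abs):
    """On-resonance reflectance of a one-port resonator (coupled-mode theory)."""
    return ((gamma_rad - gamma_abs) / (gamma_rad + gamma_abs)) ** 2

gamma_rad = 1.0                            # radiative coupling rate (normalized)
deltas = np.array([0.0, 1e-3, 1e-2, 0.1])  # fractional change in absorption

for d in deltas:
    R = reflected_fraction(gamma_rad, gamma_rad * (1.0 - d))
    print(f"absorption changed by {d:6.1%} -> reflected light: {R:.2e}")
# At perfect critical coupling (0%) nothing escapes; even a 0.1% change in
# absorption produces a small but nonzero signal that a camera can image.
```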


A new way to look at MOFs: International study challenges prevailing view on how metal organic frameworks store gases Abstract: An international collaboration of scientists led by Omar Yaghi, a renowned chemist with the Lawrence Berkeley National Laboratory (Berkeley Lab), has developed a technique they dubbed "gas adsorption crystallography" that provides a new way to study the process by which metal-organic frameworks (MOFs) - 3D crystals with extraordinarily large internal surface areas - are able to store immense volumes of gases such as carbon dioxide, hydrogen and methane. This new look at MOFs led to a discovery that holds promise for the improved design of MOFs tailored specifically for carbon capture, or for the use of hydrogen and natural gas (methane) fuels. "Up to this point we have been shooting in the dark in our designing of MOFs without really understanding the fundamental reasons for why one MOF is better than another," says Yaghi. "Our new study expands our view and thinking about MOFs by introducing gas-gas interactions and their organization into superlattices that are a major factor in achieving high storage capacity for gases." Yaghi, who invented MOFs in the early 1990s while at Arizona State University, is now a faculty scientist with Berkeley Lab's Materials Sciences Division and the University of California (UC) Berkeley Chemistry Department, where he also serves as co-director of the Kavli Energy NanoScience Institute (Kavli-ENSI). For this latest study, Yaghi and Osamu Terasaki, a chemist with Stockholm University, along with collaborators from institutes in the United States, China, South Korea and Saudi Arabia, interfaced a gas adsorption apparatus with a form of X-ray crystallography, called in-situ small angle X-ray scattering (SAXS). The result was a gas adsorption crystallography technique that uncovered evidence of cooperative interactions between gas molecules within a MOF. "These cooperative gas-gas interactions lead to highly organized behavior, which results in the formation of gas aggregates about 40 nanometers in size," Yaghi says. "The aggregates are arranged in orderly superlattice structures, which is in stark contrast to the prevailing view that the adsorption of gas molecules by MOFs occurs stochastically." Yaghi and Terasaki are the corresponding authors of a paper describing this study that has been published in Nature. The paper is titled "Extra adsorption and adsorbate superlattice formation in metal-organic frameworks." The lead authors are Hae Sung Cho, Hexiang Deng and Keiichi Miyasaka. Other co-authors are Zhiyue Dong, Minhyung Cho, Alexander Neimark and Jeung Ku Kang. Since Yaghi's original invention, thousands of different types of MOFs have been created. A typical MOF consists of a metal oxide center surrounded by organic molecules that form a highly porous three-dimensional crystal framework. The variations on this basic structure are virtually limitless and can be customized so that a MOF's pores adsorb specific gas molecules, making MOFs potentially ideal gas storage vessels. "One gram of MOF has a surface area of up to 10,000 square meters onto which it is possible to compact gas molecules into MOF pores like so many bees on a honeycomb without the high pressures and low temperatures usually required for compressed gas storage," Yaghi says. The selectivity and uptake capacity of a MOF are determined by the nature of the gas molecule being adsorbed and its interactions with the MOF's constituents.
While the interactions of gas molecules with the internal surface of a MOF and among themselves within individual pores have been extensively studied, the gas-gas interactions across a MOF's pore walls have not been explored until now. With their SAXS-based gas adsorption crystallography technique, Yaghi, Terasaki and their collaborators discovered that local strain in the MOF induced by pore-filling can give rise to collective and long-range gas-gas interactions, resulting in the formation of superlattices that extend over several pores. "We were able to track and map the distribution and ordering of adsorbate molecules in five members of the mesoporous MOF-74 series along entire adsorption-desorption isotherms," Yaghi says. "In all cases, we found that the capillary condensation that fills the pores gives rise to the formation of extra-adsorption domains that span several neighboring pores and have a higher adsorbate density than non-domain pores." The next step, Yaghi says, will be to apply this new gas adsorption crystallography technique to other porous molecular systems that can serve as gas storage vessels, such as covalent organic frameworks (COFs) and zeolitic imidazolate frameworks (ZIFs). "We want to generate a comprehensive view of how various gases interact collectively within porous materials' interiors," says Yaghi. "We will then feed this data into computer models to improve the theory of gas adsorption." Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit www.lbl.gov. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit the Office of Science website at science.energy.gov.
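For context on how a superlattice of roughly 40-nanometer gas aggregates shows up in a SAXS measurement, the sketch below applies the standard scattering relations d = 2π/q and q = 4π sin(θ)/λ. The X-ray wavelength is an assumed, typical synchrotron value and the calculation is illustrative; only the approximately 40 nm aggregate size is taken from the article.

```python
# Minimal sketch of the small-angle X-ray scattering (SAXS) relations behind
# "gas adsorption crystallography": a periodic superlattice with repeat
# distance d produces a peak at scattering vector q = 2*pi/d, observed at a
# scattering angle given by q = 4*pi*sin(theta)/lambda. The wavelength is an
# assumed, typical value; only the ~40 nm aggregate size comes from the text.
import math

wavelength_nm = 0.1   # ~12 keV synchrotron X-rays (assumed, not from the paper)
d_nm = 40.0           # approximate size of the gas aggregates quoted above

q = 2.0 * math.pi / d_nm                       # peak position, 1/nm
theta = math.asin(q * wavelength_nm / (4.0 * math.pi))
two_theta_deg = math.degrees(2.0 * theta)

print(f"q = {q:.3f} 1/nm  ->  2-theta = {two_theta_deg:.3f} degrees")
# A ~40 nm periodicity shows up at a fraction of a degree -- hence the need
# for a dedicated small-angle scattering setup rather than wide-angle
# crystallography.
```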


New form of electron-beam imaging can see elements that are 'invisible' to common methods: Berkeley Lab-pioneered 'MIDI-STEM' produces high-resolution views of lightweight atoms Abstract: Electrons can extend our view of microscopic objects well beyond what's possible with visible light -- all the way to the atomic scale. A popular method in electron microscopy for looking at tough, resilient materials in atomic detail is called STEM, or scanning transmission electron microscopy, but the highly focused beam of electrons used in STEM can also easily destroy delicate samples. This is why using electrons to image biological or other organic compounds, such as chemical mixes that include lithium -- a light metal that is a popular element in next-generation battery research -- requires a very low electron dose. Scientists at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a new imaging technique, tested on samples of nanoscale gold and carbon, that greatly improves images of light elements using fewer electrons. The newly demonstrated technique, dubbed MIDI-STEM, for matched illumination and detector interferometry STEM, combines STEM with an optical device called a phase plate that modifies the alternating peak-to-trough, wave-like properties (called the phase) of the electron beam. This phase plate modifies the electron beam in a way that allows subtle changes in a material to be measured, even revealing materials that would be invisible in traditional STEM imaging. Another electron-based method, which researchers use to determine the detailed structure of delicate, frozen biological samples, is called cryo-electron microscopy, or cryo-EM. While single-particle cryo-EM is a powerful tool -- it was named science journal Nature's 2015 Method of the Year -- it typically requires taking an average over many identical samples to be effective. Cryo-EM is generally not useful for studying samples with a mixture of heavy elements (for example, most types of metals) and light elements like oxygen and carbon. "The MIDI-STEM method provides hope for seeing structures with a mixture of heavy and light elements, even when they are bunched closely together," said Colin Ophus, a project scientist at Berkeley Lab's Molecular Foundry and lead author of a study, published Feb. 29 in Nature Communications, that details this method. If you take a heavy-element nanoparticle and add molecules to give it a specific function, conventional techniques don't provide an easy, clear way to see the areas where the nanoparticle and added molecules meet. "How are they aligned? How are they oriented?" Ophus asked. "There are so many questions about these systems, and because there wasn't a way to see them, we couldn't directly answer them." While traditional STEM is effective for "hard" samples that can stand up to intense electron beams, and cryo-EM can image biological samples, "We can do both at once" with the MIDI-STEM technique, said Peter Ercius, a Berkeley Lab staff scientist at the Molecular Foundry and co-author of the study. The phase plate in the MIDI-STEM technique allows a direct measure of the phase of electrons that are weakly scattered as they interact with light elements in the sample. These measurements are then used to construct so-called phase-contrast images of the elements. Without this phase information, the high-resolution images of these elements would not be possible.
In this study, the researchers combined phase plate technology with one of the world's highest resolution STEMs, at Berkeley Lab's Molecular Foundry, and a high-speed electron detector. They produced images of samples of crystalline gold nanoparticles, which measured several nanometers across, and the super-thin film of amorphous carbon that the particles sat on. They also performed computer simulations that validated what they saw in the experiment. The phase plate technology was developed as part of a Berkeley Lab Laboratory Directed Research and Development grant in collaboration with Ben McMorran at the University of Oregon. The MIDI-STEM technique could prove particularly useful for directly viewing nanoscale objects with a mixture of heavy and light materials, such as some battery and energy-harvesting materials, that are otherwise difficult to view together at atomic resolution. It also might be useful in revealing new details about important two-dimensional proteins, called S-layer proteins, that could serve as foundations for engineered nanostructures but are challenging to study in atomic detail using other techniques. In the future, a faster, more sensitive electron detector could allow researchers to study even more delicate samples at improved resolution by exposing them to fewer electrons per image. "If you can lower the electron dose you can tilt beam-sensitive samples into many orientations and reconstruct the sample in 3-D, like a medical CT scan. There are also data issues that need to be addressed," Ercius said, as faster detectors will generate huge amounts of data. Another goal is to make the technique more "plug-and-play," so it is broadly accessible to other scientists. Berkeley Lab's Molecular Foundry is a DOE Office of Science User Facility. Researchers from the University of Oregon, Gatan Inc. and Ulm University in Germany also participated in the study. Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit www.lbl.gov. The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
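The value of recovering the electron phase can be seen with a textbook weak-phase-object argument, sketched below: a light element shifts the beam's phase by a small angle phi, which changes the recorded intensity only at second order (about phi squared) in a conventional image but at first order (about 2 phi) once the unscattered beam is given a 90-degree phase shift. This Zernike-style illustration is generic and is not the MIDI-STEM reconstruction itself.

```python
# Generic weak-phase-object argument for why phase-sensitive imaging helps
# with light elements (a Zernike-style illustration, not the actual MIDI-STEM
# reconstruction). A light atom shifts the electron wave's phase by a small
# angle phi; without a phase plate the recorded intensity changes only at
# second order in phi, with one it changes at first order.
import numpy as np

phi = 0.01  # small phase shift (radians) from a weak, light-element scatterer

# Exit wave of a weak phase object (first-order expansion): 1 + i*phi
psi = 1.0 + 1j * phi
contrast_no_plate = abs(abs(psi) ** 2 - 1.0)          # = phi^2, ~1e-4

# Zernike-style phase plate: shift the unscattered beam by 90 degrees so it
# interferes constructively with the scattered wave.
psi_plate = 1j + 1j * phi
contrast_with_plate = abs(abs(psi_plate) ** 2 - 1.0)  # = 2*phi + phi^2, ~2e-2

print(f"contrast without phase plate: {contrast_no_plate:.2e}")
print(f"contrast with phase plate   : {contrast_with_plate:.2e}")
```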


News Article | November 4, 2016
Site: www.eurekalert.org

From deep within the Earth to the upper atmosphere, the organisms and ecosystems highlighted in the 37 projects selected for the 2017 Community Science Program (CSP) of the U.S. Department of Energy Joint Genome Institute (DOE JGI), a DOE Office of Science User Facility, reflect the breadth and depth of interests researchers are exploring to find solutions to energy and environmental challenges. "These new CSP projects, selected through our external review process, exploit DOE JGI's experimental and analytical 'omics' capabilities and build our portfolio in key focus areas including sustainable bioenergy production, plant microbiomes and terrestrial biogeochemistry," said Susannah Tringe, DOE JGI User Programs Deputy. The CSP 2017 projects were selected from 98 full proposals received, resulting from 123 letters of intent submitted. The full list of projects may be found on the DOE JGI website. A number of the accepted proposals call for developing reference genomes for plants of relevance to bioenergy, either as potential feedstocks or because they possess biochemistries that could provide insights into better ways to derive biofuels from plant biomass. These will take advantage of powerful DOE JGI sequencing upgrades. Building upon the previously published reference genome for Sorghum bicolor generated by the DOE JGI, Todd Mockler of the Donald Danforth Plant Science Center has targeted the sequencing of a number of highly diverse sorghum lines. The project will explore and begin to assemble the pangenome to accelerate gene discovery and increase understanding of which variants are associated with different functional outcomes. This work is being conducted in close concert with DOE's Advanced Research Projects Agency-Energy's (ARPA-E) Transportation Energy Resources from Renewable Agriculture (TERRA) program. David Des Marais of Harvard University is similarly focused on Brachypodium, which has had two genomes - Brachypodium distachyon and B. sylvaticum - sequenced by the DOE JGI. Among his plans are estimating a pangenome for B. sylvaticum and sequencing four additional Brachypodium species to further functional genomic analysis research in grasses. Karen Aitken of Australia's national science agency CSIRO is focused on producing the first genome assembly of a cultivated sugarcane plant, namely variety R570. Cultivated sugarcane produces 80 percent of the world's sugar and is already harvested in large volumes; along with its efficient transportation systems, it is helping to guide optimized production strategies for other renewable and sustainable biofuel crops. Many projects focus on plants with demonstrated tolerance to stressors such as drought. The proposal from John Cushman of the University of Nevada seeks to establish the common or crystalline ice plant (Mesembryanthemum crystallinum L.) as a DOE JGI Flagship Genome species like poplar, soybean, and Brachypodium distachyon. Most plants use what is known as the C3 pathway to photosynthetically fix carbon (from CO2) in the presence of abundant nutrients, while plants in arid conditions rely on a different mechanism, the water-conserving CAM pathway. A rapidly growing annual native to the Namibian desert, the common ice plant is the first reported species that can switch from the C3 pathway to CAM when stressed by factors such as drought and salinity. Elizabeth Kellogg, also of the Danforth Center, requested a genome sequence for the big bluestem (Andropogon gerardii subsp. gerardii), the plant most associated with the Great Plains.
The big bluestem dominates the tall grass prairie, accounting for up to 70 percent of biomass in some areas, and is highly resistant to climate change. In a complementary project, Karolina Heyduk of the University of Georgia seeks genome sequences for two Yucca species: the C3 species Yucca aloifolia and the CAM species Y. filamentosa. The comparison of the genomes of these two related species could illuminate the genes and pathways involved in these two forms of photosynthesis. Similarly, the gene atlas for the CAM plant Kalanchoe laxiflora proposed by Xiaohan Yang of Oak Ridge National Laboratory could allow researchers to understand how it responds to changes in environmental conditions, including temperature, water, and nitrogen source. Few conifers have had their genomes sequenced thus far, and Joerg Bohlmann of Canada's University of British Columbia wants to raise the number. He has targeted the genomes and transcriptomes of the common yew and western red cedar, as well as the transcriptome of the Jeffrey pine, all candidate bioenergy feedstocks. Two projects focus on the impact of fire on forests, as full recovery can take decades. Daniel Cullen of the Forest Products Laboratory is comparing how microbial communities in forests dominated by lodgepole pine (Pinus contorta) fare in fire-disturbed and undisturbed forests, in part to define processes that underlie carbon cycling in coniferous forests. Similarly, Thomas Bruns of the University of California, Berkeley seeks to learn more about the effects of pyrophilous fungi on post-fire soil carbon by studying the genomes and transcriptomes of 13 fungi as well as the metatranscriptomes of burned soils. Several projects focus on plant-microbe interactions, both on micro- and macroscales. Sharon Doty of the University of Washington is investigating two poplar endophytes: one that fixes nitrogen and one involved in phytoremediation. Plant root microbial consortia are known to be critically important to plant growth and resilience to changing soil conditions, but much remains to be learned about community composition and function. Devin Coleman-Derr of the USDA-ARS, with his resource allocation, is seeking to learn more about the role of root-associated Actinobacteria in promoting host fitness in sorghum and rice under drought conditions. Paul Dijkstra of Northern Arizona University, with his allocation, will illuminate soil bacterial transcriptional regulatory networks in switchgrass fields. William Whitman of the University of Georgia plans to develop pangenomes of 100-200 species of soil or plant-associated prokaryotes. The pangenome concept is vital to understanding the repertoire of genes upon which microbial populations may call as local environments change. Jared LeBoldus of Oregon State University is targeting metabolites of Sphaerulina musiva, the cause of Septoria stem canker and leaf spot disease in poplar. Christine Smart of Cornell University is characterizing the mechanisms of willow rust (Melampsora americana), and conducting a comparative genomic study involving M. americana and other Melampsora genomes sequenced by the DOE JGI that are known plant pathogens of poplar and other candidate bioenergy feedstocks. Gregory Bonito of Michigan State University will identify the mechanisms of attraction, communication, and growth-promoting activity between Mortierella, a close relative of arbuscular mycorrhizal fungi, and the DOE JGI Flagship plants.
Nhu Nguyen of the University of California, Berkeley will be generating a genus-wide molecular phylogeny of Suillus, asking for 50 fungal genome sequences. Suillus fungi tolerate heavy metals, but the protection varies among hosts. Wayne Nicholson of the University of Florida will be generating a more comprehensive assessment of Carnobacterium, strains of which can be found in all pressure niches from the deep ocean to the upper atmosphere. Sean Crowe of Canada's University of British Columbia will characterize the methanome, the genomic information distributed across organisms that either produce or consume methane, which is both a source of fuel and a greenhouse gas. Graeme Attwood of New Zealand's AgResearch Ltd and his team seek to define gene function in rumen microbes, in part to control the methane emissions these microbes produce in cattle. Soil emissions are a focus of Jennifer Pett-Ridge of Lawrence Livermore National Laboratory and Jonathan Raff of Indiana University. Pett-Ridge is determining the impact microbes in humid tropical forest soils have on carbon cycling, work that complements her DOE Early Career Research Program award. Raff intends to use samples from temperate hardwood forest sites to learn more about soil emissions of nitric oxide (NO) and more accurately represent NO sinks and sources in climate models. Alison Buchan of the University of Tennessee, Knoxville, is studying how lignin-related aromatic compounds are removed in salt marshes between river mouths and the open ocean, and the biochemical pathways employed in this process. A similar project comes from Christopher Francis of Stanford University and involves the San Francisco Bay Delta, the largest estuary on the west coast of North America. His team is investigating how environmental changes drive estuarine microbial community changes, and whether certain pathways and organisms dominate under certain conditions, or whether genes co-vary with specific functional gene ecotypes. Several projects are focused on algae for their roles in carbon fixation and for potential bioenergy applications. Chi Zhang of the University of Nebraska - Lincoln is focused on Zygnematales, the closest algal lineage to land plants, to learn more about how plant cell walls formed in the evolutionary transition from aquatic to land plants. The information could shed light on how to deconstruct plant cell walls for biofuel production without impacting plant viability. Matthew Johnson of the Woods Hole Oceanographic Institution is interested in the bloom-forming Mesodinium rubrum and M. major complex. While these algae are major contributors to primary production, they acquire their ability to harness light energy for nutrients through predation, so genome sequencing would help distinguish native metabolic pathways from those of the prey. Jeffry Dudycha of the University of South Carolina is pursuing a project based on cryptophytes, eukaryotic microalgae that are important primary producers in aquatic environments and are capable of capturing a broad spectrum of available light. By sequencing representatives from all major clades, the team hopes to maximize ways diverse algal communities could boost lipid yields for biofuels. Biological soil crusts, or biocrusts, are extremely sensitive to climate changes.
As the ability to survive extreme drought is rare in plants, Elena Lopez Peredo of the Marine Biological Laboratory will be generating annotated genomes of Scenedesmus algae to learn more about the desiccation tolerance of green microalgae in biocrusts. In a complementary project, Steven Harris of the University of Nebraska-Lincoln is detailing the interactions of the fungal and algal components of biocrusts to learn more about how they tolerate environmental stress.

Several projects build on previous efforts to sequence 1,000 microbial genomes (KMG), which have so far led to the selection of nearly 2,300 type strains. Markus Göker of Germany's DSMZ is spearheading KMG-4 to sequence environmentally relevant cultures. Ramunas Stepanauskas of Bigelow Laboratory for Ocean Sciences is using single-cell genomics to target taxonomic "blind spots": clades systematically underrepresented or missed due to primer mismatches or intervening sequences in marker genes. Barbara MacGregor of the University of North Carolina is exploring how horizontal gene transfer has shaped the genomes of large sulfur bacteria, which are often found in dense microbial mats and play roles in the carbon, nitrogen, sulfur, and phosphorus cycles. Mary Ann Moran of the University of Georgia is examining organic sulfur biogeochemistry in a coastal ocean ecosystem. She and colleagues from the Monterey Bay Aquarium Research Institute collected a time series of coastal microbes and microbial communities, which will be sequenced to explore how metabolism shifts as community composition changes. Mark Dopson of Sweden's Linnaeus University has a project that deploys the DOE JGI's single-cell genomics resources on samples sourced from the deep subsurface. Targeted microbiomes will come from deep bedrock waters, including the Mont Terri Underground Rock Lab in Switzerland and sites constructed to investigate the safety of deep geological storage of spent nuclear fuel; the resulting data would inform decisions about the safety of such storage. Ashley Shade of Michigan State University plans to use a synthetic bacterial community to study the interactions among multiple community members, and then link community structure to activity and to exometabolite interactions.

In sum, these 37 new projects will provide a genomic basis for ongoing and new explorations of the plant, fungal, algal, microbial, and microbial community mechanisms that underlie biofuel and bioenergy feedstock composition and metabolism, as well as related environmental processes important to DOE missions.

The U.S. Department of Energy Joint Genome Institute, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory, is committed to advancing genomics in support of DOE missions related to clean energy generation and environmental characterization and cleanup. DOE JGI, headquartered in Walnut Creek, Calif., provides integrated high-throughput sequencing and computational analysis that enable systems-based scientific approaches to these challenges. Follow @doe_jgi on Twitter. DOE's Office of Science is the largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | April 19, 2016
Site: phys.org

That same energy also makes actinides highly radioactive. Scientists doing early research with these substances in the 1940s and 50s did not fully understand the risks of long-term handling and storage for the resulting waste products. Locations such as the Department of Energy's (DOE's) Hanford and Savannah River Sites still have radioactive materials from the Manhattan Project in storage awaiting processing and final disposition. Often stored in large holding containers, these volatile substances mix with one another over time. One of the major challenges for decontaminating these sites lies in separating radioactive compounds from their more inert counterparts for safe removal and storage. To this end, a multi-institution team has been using computing resources at DOE's Oak Ridge National Laboratory (ORNL) to understand actinide chemistry at the molecular level in hopes of designing methods to clean up contamination and safely store spent nuclear fuel. "We put this group together because we were already collaborating on scientific papers," said professor David Dixon of The University of Alabama. "Rather than just saying that we were going to do one reaction and needed computing time, we're using a team approach to try and get a broad, fundamental understanding of actinide science." Dixon serves as the principal investigator on the project, which includes collaborators from Lawrence Berkeley National Laboratory, Washington State University, The State University of New York at Buffalo, the University of Minnesota, and Rice University. The team is using the Titan supercomputer at the Oak Ridge Leadership Computing Facility—a DOE Office of Science User Facility located at ORNL—to work on a variety of research goals related to understanding the chemical reactions taking place with actinides. Supercomputing plays an essential role in understanding these elements. Because of their high radioactivity, actinides are difficult to handle, and those experiments that can be performed are expensive. "These elements are really radioactive, and in many cases, there is not much of it around," Dixon said. "The radioactivity part makes it very difficult to handle in a lab. We want to try and get as much information as we can about their properties before people do specific lab experiments to test out ideas and design new systems to help remediate contamination." The Dixon team gained access to Titan through DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. Within the larger allocation of computing time, the team is divided into several smaller groups looking at specific problems related to actinide chemistry. After spent nuclear fuel begins to dissolve within a storage tank, actinides still exist as ions in an aqueous mixture. To clean up this mixture, researchers must separate these ions into groups based on how they affect the remediation process and by the radioactive hazards they present. Currently, state-of-the-art methods revolve around a multistage process where researchers form a metal-ligand complex with an actinide ion, which then helps the ion migrate across the boundary between oil and water. Washington State University professor Aurora Clark leads a group that studies how solutions of actinides can be separated, primarily focusing on liquid-liquid interfaces. 
"One of the most critical components of actinide separation is the interface between water or an aqueous solvent system and an organic solvent (such as oil), and often these mixtures are incredibly complicated," Clark said. "We're doing simulations to understand the essential physics of what's going on at these liquid-liquid interfaces. We're trying to understand how the structure and dynamics at the oil-water interface can influence permeability of different compounds across these interfaces and can influence the ability of an actinide-containing molecule to migrate across the interface. Essentially, much of the storage for radioactive waste needs to be cleaned of actinides. Clark's group runs large-scale molecular dynamics (MD) simulations to understand this process with a level of detail unattainable through experiment. The team uses Titan to design a solution containing thousands to tens of thousands of individual atoms. During the simulation, the team charts the particles' interactions as the solution moves to a state of equilibrium—or where a reaction and any counter-reaction proceed at an equal rate—and then analyzes the data from these interactions. Clark's group uses some novel approaches for data analysis. One of the team's achievements was creating intermolecular network theory, which allows the team to focus less on the positions of each individual particle in the simulation (like many MD simulations) and more on how the interactions of various particles dictate the physical properties of these radioactive mixtures. Intermolecular network theory gets the team to take its data and analyze a "network" of the various chemical interactions. Researchers then analyze that network much like a network analyst would observe internet connectivity or monitor a network of computers—in fact, the team uses a modified version of Google's PageRank algorithm, which is instrumental in ranking the importance and relevance of web content, to help understand the immediate organization of solvents around ions. Through its suite of computational tools and access to Titan in the last year, the Clark group was able to observe how the molecule tributyl phosphate changes its conformation when at the interface of water and hexane. As the molecule changes, a massive reorganization at the interface makes the transfer of an actinide across the interface more permeable. In addition to working on liquid-liquid interfaces, Clark's group is also starting to simulate liquid-solid interactions, with a specific focus on metal organic frameworks—compounds that create nanoscale pores that attract and bind to actinides. As supercomputing power continues to increase, Clark hopes that metal organic frameworks may become the gold standard for scrubbing actinides from an environment. "The strategy [of employing metal organic frameworks] compliments our other research and provides a way to both separate and potentially store radioactive material in a single step," Clark said. Clark's group is generating large amounts of data in these simulations and has been working with OLCF support staff to develop methods for in situ data analysis techniques that would allow the team to drastically reduce the amount of data it would need to transport out of the OLCF. Other researchers on the Dixon team, such as Lawrence Berkeley National Laboratory's (LBNL's) Bert de Jong, are using their time on Titan and the OLCF's developmental and analysis cluster Eos to collaborate closely with experimentalists. 
Other researchers on the Dixon team, such as Lawrence Berkeley National Laboratory's (LBNL's) Bert de Jong, are using their time on Titan and the OLCF's developmental and analysis cluster Eos to collaborate closely with experimentalists. De Jong's group works with teams at both LBNL and the University of Utah to understand how to change and control an element's oxidation state, or how many electrons an atom sheds or gains to change its charge. By altering the behavior of elements like uranium and its actinide relatives thorium, neptunium, and plutonium, incorporating them in molecules and forming chemical bonds, researchers hope to extract radioactive elements from the environment or waste storage tanks and keep them in a more innocuous form, or to separate them for reuse as nuclear fuel.

De Jong's group benefited from Titan's hybrid architecture. "For our thorium work, we had to do very high-accuracy simulations, and for those, we have to do coupled cluster calculations," de Jong said. "The NWChem code that we use is GPU enabled, so that allows us to do these kinds of calculations with the fastest time-to-solution possible."

Throughout the group's work on Titan and Eos, simulations have served both as a basis for driving experimental research and as verification of certain experimental hypotheses. "I believe that computing is both predictive and interpretive," de Jong said. "Experiments often can only tackle aspects of the problem, but computing can help broaden insights and trends that can confirm or change interpretations and provide more additional information very quickly and safely."

De Jong pointed to research on chemical reactions involving ligands and proteins (molecules that bind to radioactive elements), where Titan simulations suggested that certain reactions would shed water molecules while reducing uranium. This phenomenon was subsequently observed experimentally in the Netherlands. Simulations on Titan also predicted the formation of an unusually bonded uranium-nitrosyl molecule, work that was published in the Journal of the American Chemical Society. "Simulations on elements on the bottom of the periodic table, such as uranium and plutonium, are a challenge, and we are using accurate experimental data to verify that our models and methods are accurate," de Jong said. The verification of experiments performed at the University of Utah, studying thorium reactions in the gas phase, lays the foundation for complex simulations of uranium or heavier, more volatile elements that are dangerous to work with in a laboratory.

As the OLCF moves toward its next-generation supercomputer, Summit, the Dixon team plans to focus on optimizing its various codes to take full advantage of GPUs and other computing methods. The team wants to continue expanding its work on metal organic frameworks and to incorporate quantum Monte Carlo methods, a computational technique that uses statistical sampling and random numbers to accurately describe electron behavior in a simulation.

More information:
Richard M. Cox et al. Reactions of Th+ + H2, D2, and HD Studied by Guided Ion Beam Tandem Mass Spectrometry and Quantum Chemical Calculations, The Journal of Physical Chemistry B (2016). DOI: 10.1021/acs.jpcb.5b08008
Yasaman Ghadar et al. Solutes at the liquid:liquid phase boundary—Solubility and solvent conformational response alter interfacial microsolvation, The Journal of Chemical Physics (2015). DOI: 10.1063/1.4914142
Yu Gong et al. Gas Phase Uranyl Activation: Formation of a Uranium Nitrosyl Complex from Uranyl Azide, Journal of the American Chemical Society (2015). DOI: 10.1021/jacs.5b02420
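As a rough, self-contained illustration of the kind of coupled-cluster calculation described above, the sketch below runs CCSD on a water molecule with the open-source PySCF package. This is an assumption-laden stand-in, not the group's NWChem/Titan workflow: real actinide calculations need relativistic Hamiltonians, specialized basis sets, and far more computing power.

```python
# Minimal coupled-cluster (CCSD) calculation with PySCF, shown only to
# illustrate the class of method discussed above; NOT the NWChem workflow
# used by de Jong's group, and water stands in for an actinide system.
from pyscf import gto, scf, cc

mol = gto.M(
    atom="O 0.000 0.000 0.000; H 0.757 0.586 0.000; H -0.757 0.586 0.000",
    basis="cc-pvdz",
)

mf = scf.RHF(mol).run()    # Hartree-Fock reference wavefunction
ccsd = cc.CCSD(mf).run()   # coupled-cluster singles and doubles on top of it

print("SCF  total energy (Hartree):", mf.e_tot)
print("CCSD total energy (Hartree):", ccsd.e_tot)
```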


News Article | December 19, 2016
Site: www.cemag.us

Scientists have enlisted the exotic properties of graphene, a one-atom-thick layer of carbon, to function like the film of an incredibly sensitive camera system in visually mapping tiny electric fields in a liquid. Researchers hope the new method will allow more extensive and precise imaging of the electrical signaling networks in our hearts and brains. The ability to visually depict the strength and motion of very faint electrical fields could also aid in the development of so-called lab-on-a-chip devices that use very small quantities of fluids on a microchip-like platform to diagnose disease or aid in drug development, for example, or that automate a range of other biological and chemical analyses. The setup could potentially be adapted for sensing or trapping specific chemicals, too, and for studies of light-based electronics (a field known as optoelectronics).

"This was a completely new, innovative idea that graphene could be used as a material to sense electrical fields in a liquid," says Jason Horng, a co-lead author of a study published in Nature Communications that details the first demonstration of this graphene-based imaging system. Horng is affiliated with the Kavli Energy NanoSciences Institute, a joint institute at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley, and is a postdoctoral researcher at UC Berkeley. The idea sprang from a conversation between Feng Wang, a faculty scientist in Berkeley Lab's Materials Sciences Division whose research focuses on the control of light-matter interactions at the nanoscale, and Bianxiao Cui, who leads a research team at Stanford University that specializes in the study of nerve-cell signaling. Wang is also a UC Berkeley associate professor of physics, and Cui is an associate professor of chemistry at Stanford University.

"The basic concept was how graphene could be used as a very general and scalable method for resolving very small changes in the magnitude, position, and timing pattern of a local electric field, such as the electrical impulses produced by a single nerve cell," says Halleh B. Balch, a co-lead author in the work. Balch is also affiliated with the Kavli Energy NanoSciences Institute and is a physics PhD student at UC Berkeley. "One of the outstanding problems in studying a large network of cells is understanding how information propagates between them," Balch says. Other techniques have been developed to measure electrical signals from small arrays of cells, though these methods can be difficult to scale up to larger arrays and in some cases cannot trace individual electrical impulses to a specific cell. Also, Cui says, "This new method does not perturb cells in any way, which is fundamentally different from existing methods that use either genetic or chemical modifications of the cell membrane." The new platform should more easily permit single-cell measurements of electrical impulses traveling across networks containing 100 or more living cells, researchers say.

Graphene, which is composed of a honeycomb arrangement of carbon atoms, is the focus of intense R&D because of its incredible strength, ability to very efficiently conduct electricity, high degree of chemical stability, the speed at which electrons can move across its surface, and other exotic properties. Some of this research is focused on the use of graphene as a component in computer circuits and display screens, in drug delivery systems, and in solar cells and batteries.
In the latest study, researchers first used infrared light produced at Berkeley Lab's Advanced Light Source to understand the effects of an electric field on graphene's absorption of infrared light. In the experiment, they aimed an infrared laser through a prism onto a thin layer called a waveguide. The waveguide was designed to precisely match graphene's light-absorbing properties so that all of the light was absorbed along the graphene layer in the absence of an electric field. Researchers then fired tiny electrical pulses in a liquid solution above the graphene layer that very slightly disrupted the graphene layer's light absorption, allowing some light to escape in a way that carried a precise signature of the electrical field. Researchers captured a sequence of images of this escaping light at thousandths-of-a-second intervals, and these images provided a direct visualization of the electrical field's strength and location along the surface of the graphene.

The new imaging platform, dubbed CAGE for "Critically coupled waveguide-Amplified Graphene Electric field imaging device," proved sensitive to voltages of a few microvolts (millionths of a volt). This will make it ultrasensitive to the electric fields between cells in networks of heart cells and nerve cells, which can range from tens of microvolts to a few millivolts (thousandths of a volt). Researchers found that they could pinpoint an electric field's location along the graphene sheet's surface down to tens of microns (millionths of a meter) and capture its fading strength in a sequence of time steps separated by as little as five milliseconds. In one sequence, researchers detailed the position and dissipation, or fade, of a local electric field generated by a 10-millivolt pulse over a period of about 240 milliseconds, with sensitivity down to about 100 microvolts.

Balch says that there are already plans to test the platform with living cells. "We are working with collaborators to test this with real heart cells," she says. "There are several potential applications for this research in heart health and drug screening." There is also potential to use other atomically thin materials besides graphene in the imaging setup, she says. "The kind of elegance behind this system comes from its generality," Balch says. "It can be sensitive to anything that carries charge."

The research team included participants from Berkeley Lab, UC Berkeley, and Stanford University. The work was supported by the U.S. Department of Energy Office of Science, the National Science Foundation, the David and Lucile Packard Foundation, and the Stanford University Bio-X Graduate Fellowship Program. The Advanced Light Source is a DOE Office of Science User Facility.
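As a purely hypothetical sketch of the kind of image-to-field conversion described in this article (not the CAGE team's actual processing code), the snippet below assumes the change in transmitted light intensity at each pixel is roughly proportional to the local field, and applies a single made-up calibration constant to a stack of frames.

```python
# Hypothetical post-processing sketch: convert a stack of transmitted-light
# frames into per-pixel electric-field maps, assuming a simple linear relation
# between intensity change and field. All values are invented; this is not
# the CAGE team's code.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(loc=1.0, scale=0.001, size=(48, 64, 64))  # stand-in frames, ~5 ms apart
baseline = frames[0]              # frame recorded before the voltage pulse
calib_uV_per_dI = 1.0e5           # microvolts of field per unit intensity change (made up)

field_maps = (frames - baseline) * calib_uV_per_dI  # microvolts, per pixel, per frame

peak = np.abs(field_maps).max(axis=(1, 2))          # strongest response in each frame
print(f"Peak response of {peak.max():.1f} microvolts in frame {int(peak.argmax())}")
```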


News Article | November 14, 2016
Site: cleantechnica.com

Although snowbound in winter, Colorado is also famous for its sunny skies. In fact, Pueblo, Colorado, is tied for 7th place among the sunniest US cities, according to NOAA's annual average percent-possible-sunshine ranking, with an average of 76%. Only Arizona, California, Nevada, and Texas have sunnier cities.

It's no wonder, then, that Colorado ranks high in the US for installed solar energy capacity. According to the US Solar Energy Industries Association (SEIA), Colorado sits at #9 among the top 10 states for total installed solar capacity. Installations grew 117% over 2014, with 146 megawatts (MW) of solar added in 2015 alone, bringing Colorado's total to 599 MW, enough to power 114,000 homes. SEIA projects that, over the next 5 years, Colorado will install another 1,878 MW of solar capacity, more than 5 times the amount installed in the state over the last 5 years. Over 382 companies contribute to the full value chain in Colorado's solar market, employing nearly 5,000 people. On the investment side, $305 million went into solar installations during 2015, a 44% increase over 2014, and significant growth in Colorado solar investment is expected again in 2016.

Currently, Xcel Energy has around 370 MW of Colorado's 599 MW of solar power on its system, and approximately 230 MW of that comes from rooftop PV systems. This large share gives Xcel Energy considerable leverage in driving Colorado's solar rates and policies. In the past few years, Xcel has been beating the war drums, attempting to persuade Colorado's Public Utilities Commission (CPUC) to reduce benefits being realized by rooftop solar customers. In August 2015, solar advocates in Colorado believed their latest troubles were finally over when a year-long debate over net metering ended with no rate reductions. But it was clear the fight was not yet over when Xcel introduced a proposed grid-use fee last spring. Arranged on a sliding scale, the proposed fee ranged from $2.62 per month for customers using fewer than 200 kWh per month to $44.79 for consumption of 1,401 kWh or more per month. The proposal also lowered customers' volumetric charge to help offset the fixed cost increase. Solar proponents rallied once again and argued that "the rate proposal would undermine the economic benefits of installing residential solar by introducing an unavoidable fee and lowering the per-kilowatt-hour charge, which decreases the net metering credit solar customers receive for excess electricity sent back to the grid."

SEIA reports that installed solar PV system prices in the US have dropped by 12% from last year and 66% from 2010. In 2016, the latest data indicates that the average installed cost of solar in the US is just below $3.50/watt for residential and $2.00/watt for commercial installations. Of course, everyone's needs are unique, so it is always best to get a personalized solar quote, but at $3.50/watt it currently costs about $17,500 to buy an average-size (5 kW) system in Colorado. The federal ITC of 30% will reduce this out-of-pocket cost by about $5,250, and you can expect about $1,224/yr ($102/mo) in estimated energy savings. Together these reductions work out to about $6,474, so the first-year cost of installing a 5 kW solar PV system would be around $11,026.
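To make the arithmetic behind that first-year figure explicit, here is a small sketch that simply reproduces the numbers above; the $3.50/watt price, 30% ITC, and $102/month savings estimate come from the article, and everything else is multiplication.

```python
# First-year cost of a 5 kW residential system, using the figures cited above.
system_kw = 5
price_per_watt = 3.50        # $/W, 2016 average residential installed cost
monthly_savings = 102        # $/month, estimated energy savings (2011-era data)
itc_rate = 0.30              # federal investment tax credit

gross_cost = system_kw * 1000 * price_per_watt       # $17,500
itc_credit = gross_cost * itc_rate                   # $5,250
first_year_savings = monthly_savings * 12            # $1,224
net_first_year_cost = gross_cost - itc_credit - first_year_savings

print(f"Gross cost:        ${gross_cost:,.0f}")
print(f"ITC credit:        ${itc_credit:,.0f}")
print(f"Year-one savings:  ${first_year_savings:,.0f}")
print(f"Net year-one cost: ${net_first_year_cost:,.0f}")   # about $11,026
```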
Granted, the monthly energy savings estimate is based on 2011 data, but over the course of 20 years this still works out to be about $24,480, or a net gain of about $12,230 for a 5 kW system installed in 2016. That's a great return on your investment, and we didn't even calculate the additional gain you might make selling excess electricity back to the utility under Colorado's current net metering rate of $0.1243/kWh, according to EIA. And what's more, even if you don't stay in your home for the full 20 years, the sales value of your home increases significantly from the first moment your solar system is installed. According to a study by the Lawrence Berkeley National Laboratory (LBNL), every kilowatt of solar installed on your roof will increase your home's sales value by an average of $5,911. A 5 kW system will add $29,555 to your home's value, so even if you sell after one year, you could potentially pay off your ($11,026) solar system and still make $18,529 on your investment!

In a nation that increasingly challenges consumers of solar power, Colorado is blazing a bright path to a clean, renewable future. It has been blazing this trail since 2004, when it became the first state to adopt a Renewable Energy Standard by ballot initiative. Tirelessly carving a trail through political skirmishes and scuffles, Colorado has served as a solar energy guide to new business models, most recently becoming a pioneer in the shared solar energy market.

In January 2016, the Colorado Energy Office (CEO) and GRID Alternatives launched a pilot optimizing the community solar model to reduce energy costs for low-income customers. The pilot proposed five demonstration projects, building a total of 579 kW of shared solar, or community solar gardens (CSGs). Colorado Energy Office Director Jeff Ackermann stated, "Colorado has always been a leader in renewable energy, and now we take another innovative step forward as we create community solar models that are more affordable and available to Colorado rural electric cooperatives and the low-income communities they serve." CSGs provide access to solar energy generation for renters, people living in multi-family housing, and anyone unable to take advantage of rooftop solar for other reasons, including roof condition, shading, and, especially, low income.

And, if approved by Colorado's PUC, VoteSolar's Rick Gilliam noted, the recent settlement agreement reached this past August will significantly expand solar access across the state, especially among low-income consumers. The new settlement will add to Colorado's already successful CSG shared solar program in several key ways: Xcel will take on a 5% low-income participation requirement, and an incremental 4 MW/year CSG program will offer 100% low-income participation. Gilliam explained, "Affordable solar offers an exciting opportunity to address some of the biggest financial and health challenges facing low-income families." Today, according to Shared Renewables HQ, 14 states and Washington, DC have policies in place for shared renewables and CSGs, and many more states are working to expand access to shared renewable energy resources.

With a settlement agreement finally reached in August 2016, Xcel's proposed grid fee was dropped and net metering was preserved.
Instead, according to the settlement, the utility "will institute time-of-use rates on a trial basis in anticipation of moving to time-of-use rates as the default rate design for residential customers." The time-of-use (TOU) rate pilot is intended to help Xcel assign an appropriate value to electricity that is produced when it is most valuable. New voluntary programs will be offered to up to 20,000 customers in 2017, extending to 34,000 customers in 2018 and 48,000 in 2019. The results of the pilot program will be studied for broad implementation of TOU rates in 2020.

More than 24 parties signed on to the agreement, including Colorado Public Utilities Commission staff; the Office of Consumer Counsel; several clean energy, consumer, and environmental groups; Walmart; VoteSolar; and the Colorado Solar Energy Industries Association (COSEIA). COSEIA Board President John Bringenberg stated, "This settlement signals cooperation rather than confrontation, which COSEIA believes is the most productive way to advance solar, allow citizens and companies to own clean energy rather than rent dirty fossil fuel, and impact catastrophic climate change." He noted, "We must work with Xcel and other stakeholders to support the competitive solar industry."

The CPUC is expected to make a final decision on the settlement filing by the end of 2016. It is not unanimously supported, and some groups that signed on have requested changes. For example, some parties have asked that Xcel's proposed Solar*Connect program be changed to Renewable*Connect. This program would allow Xcel to build a solar plant up to 50 MW in size and sell shares of the facility to the public.

The settlement agreement also changed Xcel's 2017 Renewable Energy Plan, calling for expanding clean energy resources in a "measured and economically reasonable way." The settlement adds 225 MW of solar capacity to Xcel's Solar*Rewards program from 2017-2019 and specifies that 123 MW should come from small-scale solar projects. Over the same three-year period, the settlement also requests that up to 105 MW of community solar gardens (CSGs) be developed, carving out an additional 12 MW for low-income customers. This adds up to a total of 342 MW of new solar capacity slated for installation in 2017-2019. Limited numbers of low-income solar program participants will be offered solar rooftops, coupled with the Colorado Energy Office's weatherization program that leverages DOE funding, and Xcel will also provide a $2/W rebate. With a total of 300 participants, the low-income solar program will be offered to 75 participants in 2017, 100 in 2018, and 125 in 2019.

Alice Jackson, Xcel Energy's regional vice president for rates and regulatory affairs, hailed the recent settlement agreement. "It will allow us to meet our customers' expectations," she stated, "by giving them more control over their energy choices. It will bring more renewable and carbon-free energy to Colorado through the use of new technologies, and it will provide affordable and reliable energy to further power the state's economy." Likewise encouraged, Rick Gilliam, Vote Solar's Program Director of DG Regulatory Policy, stated, "Colorado is already a leader in many ways when it comes to our nation's growing solar economy." He continued, "This settlement means more good news for Colorado.
It will ensure that more of the state's energy consumers will have the opportunity to benefit from the local jobs, energy bill savings, and cleaner air that solar delivers." Gilliam continued, "The agreement also recognizes the importance of good regulatory process through a series of subsequent stakeholder workshops in several important technical and policy areas when deciding these critical solar issues. We look forward to participating in those future conversations to help keep Colorado shining."

The following is an in-depth exploration of the state of solar energy in Colorado. Please feel free to offer further contributions in the comments, below.

The federal solar investment tax credit (ITC), available in every state of the US, offers a credit from the federal government of up to 30% of a solar installation's total costs. However, you can only take full advantage of this credit if you have that much tax liability in the tax year. Current law extends the federal ITC through 2019, after which it will be phased out. Corporate depreciation incentives are also available for businesses, as well as a Business Energy Investment Tax Credit (ITC-1603) and a Renewable Energy Production Tax Credit (PTC).

The following is a limited selection of state solar energy incentives listed for Colorado in the DSIRE Incentives Database. The Database of State Incentives for Renewables & Efficiency (DSIRE) website is operated by the N.C. Clean Energy Technology Center at N.C. State University and offers up-to-date, comprehensive public information on specific solar energy incentives across the US, including state and federal policies, programs, and incentives.

• Eligible Renewable/Other Technologies: Geothermal Electric, Solar Thermal Electric, Solar Photovoltaics, Wind (All), Biomass, Hydroelectric, Wind (Small), Hydroelectric (Small), Fuel Cells using Renewable Fuels
• Applicable Sectors: Commercial, Industrial, Local Government, Nonprofit, Residential, Schools, State Government, Low Income Residential, Institutional
• Applicable Utilities: All utilities (except municipal utilities with less than 5,000 customers)
• System Capacity Limit: IOU customers: 120% of the customer's average annual consumption. Municipality and co-op customers: 25 kW for non-residential; 10 kW for residential.
• Aggregate Capacity Limit: No limit specified. Community solar gardens: 6 MW/yr for 2011-2013; set by PUC thereafter
• Net Excess Generation: Credited to customer's next bill at retail rate. After 12-month cycle, IOU customers may opt to roll over credit indefinitely or to receive payment at IOU's average hourly incremental cost. Municipality and co-ops provide annual reconciliation at a rate they deem appropriate.
• Ownership of Renewable Energy Credits: Customer owns RECs
• Meter Aggregation: Allowed for IOU customers. Community solar gardens are allowed

Note: Colorado's Public Utilities Commission opened a proceeding (14M-0235E) in March 2014 to consider changes to net metering and other issues related to retail renewable distributed generation. However, this docket was closed in August 2015 after the Commission voted to keep the current net metering rules.

Eligibility and Availability: Net metering was adopted by the Colorado PUC in December 2005. Utilities must provide net metering service at non-discriminatory rates to customer-generators. Utilities must provide a single bi-directional meter free of charge to customer-generators who do not already own one.
A second meter is required for systems over 10 kW to measure output for the counting of renewable energy credits (RECs).

Net Excess Generation: Monthly net excess generation (NEG) is applied on a 1-for-1 basis as a kilowatt-hour (kWh) credit to the customer's next bill. At the end of the year, or upon termination of service, the NEG must be reimbursed to the customer by the utility at the most recent year's average hourly incremental cost. Net metering customers may instead choose to have their NEG carried forward from month to month indefinitely, but this one-time-only election must be made in writing on or before the end of the calendar year. It's a pretty bad idea, though, because all kWh credits carried forward will be lost upon terminating service with the utility.

Renewable Energy Credits: Customer-generators retain ownership of RECs associated with energy generated by the customer-generator's system. Utilities may acquire RECs by purchasing them from customer-generators via a standard offer with a minimum term of 20 years if the system is less than 100 kW.

Meter Aggregation: A customer with more than one meter located on contiguous property can choose to have their generator offset the load measured at more than one meter. Aggregating meters under net metering requires that all affected meters be on the same rate schedule and that the utility receive 30 days' notice specifying the order in which kWh credits will be applied to the multiple meters.

Municipal Utilities and Electric Cooperatives: Colorado law enacted in March 2008 (H.B. 1160) requires all electric co-ops, as well as all municipal utilities with over 5,000 customers, to offer net metering for residential systems up to 10 kW and commercial and industrial systems up to 25 kW. NEG follows the same rules as above, receiving 1-for-1 credit in following months and being reimbursed at the end of an annual period.

Eligibility and Availability: Colorado enacted the Community Solar Gardens (CSGs) Act in 2010, allowing for the creation of CSGs of up to 2 megawatts in an IOU's service territory (H.B. 1342). H.B. 15-1377, enacted in 2015, specifies that CSGs can be located in an electric co-op's service territory and used for complying with the retail distributed generation requirements of Colorado's Renewable Energy Standard. CSGs may be owned by a nonprofit, a for-profit organization, or even the utility itself, but must have at least 10 subscribers, unless the CSG is in the territory of an electric co-op and the system size is under 50 kW. Subscribers can purchase up to 40% of the electricity produced by the array, receiving kWh credits on their utility bills relative to the subscription amount. Unless owned by a low-income subscriber, CSG subscriptions must be for at least 1 kW. No more than 120% of the subscriber's annual energy consumption may be supplied by the CSG. H.B. 15-1284, enacted May 2015, added the requirement that CSG subscribers be "located in the service territory of the same qualifying retail utility and also in the same county as, or a county adjacent to, that of the community solar garden."

Net Excess Generation: If a CSG subscriber has NEG in a monthly billing period, the credit will offset future consumption, rolling over from month to month indefinitely until the customer terminates service with the IOU, at which time any remaining billing credits expire.

Aggregate Cap: Since 2015, IOUs have been acquiring renewable energy from CSGs as part of their renewable portfolio standard compliance plans. Also, a minimum of 5% of an IOU's purchases from CSGs must be reserved for subscribers in low-income brackets.
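As a simplified illustration of the IOU net metering mechanics described above (monthly 1-for-1 kWh credits, with any balance left at the annual true-up either paid out at the utility's average hourly incremental cost or, on request, carried forward), here is a small sketch; the avoided-cost rate and monthly figures are hypothetical, and real tariffs contain more detail.

```python
# Simplified sketch of Colorado-style IOU net metering, for illustration only.
# Monthly net excess generation (NEG) rolls forward as a kWh credit; at the
# annual true-up it is either paid out at a hypothetical avoided-cost rate or
# carried forward indefinitely, depending on the customer's one-time election.

def annual_net_metering(monthly_net_kwh, avoided_cost=0.03, carry_forward=False):
    """monthly_net_kwh: generation minus consumption for each of 12 months."""
    bank = 0.0        # kWh credit carried month to month
    billed_kwh = []   # kWh the customer is actually billed for each month
    for net in monthly_net_kwh:
        balance = net + bank
        if balance >= 0:          # surplus month: bank the credit, bill nothing
            bank, billed = balance, 0.0
        else:                     # deficit month: banked credits are used up first
            bank, billed = 0.0, -balance
        billed_kwh.append(billed)
    payout = 0.0 if carry_forward else bank * avoided_cost
    leftover = bank if carry_forward else 0.0
    return billed_kwh, leftover, payout

months = [120, 150, 60, -40, -80, -200, -220, -180, -60, 30, 90, 110]
billed, banked, check = annual_net_metering(months)
print("kWh billed by month:", billed)
print(f"Year-end payout at avoided cost: ${check:.2f}")
```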
Other Colorado incentives listed in the DSIRE database include:

• Sales and Use Tax Exemption for Renewable Energy Equipment (last updated 07/21/2015)
• Local Option — Sales and Use Tax Exemption for Renewable Energy Systems (last updated 10/08/2015)
• City of Boulder — Solar Sales and Use Tax Rebate (last updated 01/25/2016)
• Renewable Energy and Energy Efficiency for Schools Loan (last updated 07/21/2015)

With Xcel's settlement agreement in August 2016 to withdraw its proposed grid-use fee, the utility stated that, instead, it "will institute time-of-use rates on a trial basis in anticipation of moving to time-of-use rates as the default rate design for residential customers." New voluntary programs will be launched by Xcel Energy for up to 20,000 customers in 2017, 34,000 in 2018, and 48,000 in 2019. These programs will be carefully followed and studied for a broader TOU program to be launched in 2020. According to VoteSolar, customers who opt into the test TOU program will have a 2-month window to cancel if they don't like it.

Lauren Randall, Sunrun's senior manager of public policy, stated that she sees time-of-use rates as a "beneficial compromise" as opposed to changing net metering rates and adding new fees. Randall said, "We think moving forward this could be a model for utilities and solar companies collaborating on rates that are good for consumers who want more energy choice." She also noted that TOU rates benefit all ratepayers "by producing energy when the grid needs it." More hearings are ongoing, but the Colorado PUC anticipates a final decision on this latest settlement by the end of 2016.

The adoption and integration of distributed energy storage are making steady progress in Colorado. A panel discussion held in April 2015 by the Colorado Public Utilities Commission (CPUC) investigated a broad range of energy storage topics. Participants delved into such issues as costs, economics, anticipated growth rate, and regulatory changes relevant to energy storage strategies for residential, commercial, and industrial customers. The Electric Power Research Institute's (EPRI's) Energy Storage Program Senior Project Manager Ben Kaun outlined the benefits offered by energy storage, emphasizing that it functions as a "capacity resource, supporting the grid through flexible ramping, voltage control, and renewable integration, and providing grid reliability and resiliency services."

SolarCity's Ryan Hanley outlined the ongoing decline in battery prices and the increase in customer interest in pairing storage systems with solar installations. Hanley described SolarCity as "bullish on energy storage" and reported that the company anticipates all solar PV installations will include a storage component by 2020. Hanley emphasized to the Commission that policies are needed to recognize the full value of distributed storage and to allow participants to profit from those advantages. He also pointed out the need for widespread deployment of smart inverters to ensure the fullest range of benefits from distributed energy storage.

Following up on this panel discussion, the CPUC approved Xcel Energy's request last spring to launch two energy storage demonstration projects. The settlement agreement, signed in March 2016, authorized Xcel to invest $9.1 million in two projects that will test energy storage applications, one of which will include a microgrid strategy.
Known as the Panasonic Project, the first test will feature a utility-scale solar generation plant with one large battery installed by Xcel near Denver International Airport. The Panasonic Project will be connected to the regional grid but will have the capability to operate as a microgrid as well. Known as the Stapleton Project, the second application will feature 6 batteries installed on the customer side of the meter at homes with rooftop solar already present. Xcel will also install another 6 batteries on that area's feeder line, which receives significant electricity from distributed generation. These batteries will be used to store excess energy for discharge during peak load hours.

All of the resulting information and data from these two projects will be made publicly available. The PUC noted that milestone reports will provide greater public visibility, "ensuring that costs for the projects would be appropriately vetted in a future rate proceeding, and assuring that all resulting data and information that would benefit the public will be made available to the PUC and the public as a whole." Xcel Energy stated that it will be "evaluating the capabilities of batteries installed on distribution feeders to regulate voltage, reduce peak demand, and reduce energy costs for the benefit of all the company's customers."

Among the notable solar installations in Colorado:

• Alamosa Solar Generating Project — completed in 2012 by developer Cogentrix. This concentrating photovoltaics project has the capacity to generate 30 MW of electricity, enough to power over 5,400 Colorado homes.
• Hooper Solar Project — At 50 MW, the Hooper Solar Project is among the larger solar installations in Colorado. Completed in 2016 by SunPower, this PV project produces enough electric capacity to power over 9,500 homes.
• Kohl's, REI, Safeway, Walmart, and Intel — These are among the larger retailers in Colorado that have gone solar. Intel has installed one of the largest corporate photovoltaic systems in the state, with 1,000 kW of solar capacity at its location in Fort Collins.

According to SEIA, there are currently over 382 solar companies employed throughout the value chain in Colorado, ranging from solar system manufacturers to system installers. The following is an incomplete listing of solar installers in Colorado, chosen based on high customer reviews and recommendations:

Sunlight Solar Energy, Inc. (Colorado Branch), 384 Garden of the Gods Road, #150, Colorado Springs, CO 80907

Colorado State Energy Office – Find a wide variety of information on state government energy programs, policy, projects, energy-saving strategies, and energy-related statistics
Colorado State Legislature – Track pending legislation affecting solar energy, locate and contact individual legislators, and stay up to date on current legislative issues in Colorado
Colorado Public Utilities Commission – Learn about the governing body that regulates the electricity rates and services of Colorado public utilities
DSIRE Incentives Database – Colorado – Search a public clearinghouse for specific solar energy incentives in Colorado and across the United States
U.S. Energy Information Administration – Colorado State Profile – Explore official energy statistics, including data on electricity supply and demand, from the U.S. government


News Article | November 15, 2016
Site: cleantechnica.com

As one of the sunniest states in the US, Arizona has enjoyed early and great success as a leader in solar energy production. With 2,453 megawatts (MW) of solar installed, Arizona ranks #2 in the US Solar Energy Industries Association's (SEIA's) ranking of total installed solar capacity. Installations expanded 4% over 2014, with 258 MW of solar capacity added in 2015, ranking Arizona 5th in the nation for capacity installed that year. Over $582 million was invested last year in Arizona's solar installations. Currently, the total amount of solar power capacity installed in Arizona is enough to power 348,000 homes, but over the next 5 years, SEIA expects that number to more than double. With over 375 solar companies serving the value chain in Arizona, there are more than 6,900 people employed in the state's solar market.

After all this tremendous growth in Arizona's solar industry over the past 4 years, it is sad to see the current negative effects of solar policy manipulation and politics at play in Arizona's energy arena. Changes in net metering incentives, the imposition of new rate plans, proposed demand rates and grid-use charges, and, on top of all of this, politically nuanced media representation have given rise to huge instability in the state's solar market. Engaging with local stakeholders and policy makers, SEIA states that it is working to promote "stability and transparency into policies so that the market can recover from this market disturbance and continue to grow."

A huge back-and-forth exchange between solar proponents and state utilities has developed into a full-blown battle over the value of rooftop solar in Arizona. At the root of the tumult is the view of Arizona Public Service Company (APS) and state regulators that grid-connected solar customers must pay their "fair share" for grid services in the form of an additional fee tacked on to their utility bill. Solar proponents respond that grid-connected solar energy supplies a bonus to the grid that other customers don't offer, so paying an additional charge is an unfair penalty. Especially in the face of lowered net metering rates for excess solar energy entering the grid, the proposal of an additional charge for grid-connected solar customers has struck rooftop solar owners as a slap in the face and has heated up the public debate.

The volley reached the level of drafting competing solar ballot initiatives to amend the state's constitution, until solar advocates and state lawmakers entered mediation, brokered a compromise, and dropped the ballot initiatives. The ceasefire, brokered between SolarCity (representing "Yes on AZ Solar," a political action committee funded by SolarCity) and Arizona Public Service Company, finally occurred when the two sides agreed to mediate over how solar customers will be compensated for their excess electricity. The ceasefire came less than 1 hour after the Arizona State Senate initiated steps to send the electorate measures mandating separate rates for rooftop solar customers and regulating solar leasing companies as utilities. Head utility regulator Doug Little, chairman of the Arizona Corporation Commission (ACC), expressed relief at the ceasefire. He said, "It would have been a pretty ugly dispute between the ballot measures over the summer." In August, after 2 full days of testimony before the commission, the ACC rejected requests to add fees for solar customers and to end net metering.
Earthjustice attorney Michael Hiatt reported, "This decision is great news for Arizona families and small businesses that plan on going solar, and for everyone who breathes cleaner air as a result." He added, "The decision sends a powerful message to Arizona utilities that the Commission will not simply rubberstamp their anti-solar agenda."

However, the squabbles over Arizona's solar energy future are not over yet. A year-long Value of Solar docket was opened at this time last year in the hope of resolving the rancorous debate over solar customer compensation once and for all. The VOS docket could conclude within the next month and promises to offer another round of spectacular fireworks, rhetoric, and politically nuanced media representation. Stay tuned!

The following is an (untumultuous) in-depth exploration of the state of solar energy in Arizona. Please feel free to offer further contributions in the comments, below.

Because everyone's situation is unique, there's really no substitute for simply getting a solar quote. However, research and data are always being updated, and SEIA reports that installed solar PV system prices in the US have dropped by 12% from last year and 66% from 2010. The latest data indicates that in 2016 the average installed cost of solar in the US is just below $3.50/watt for residential and $2.00/watt for commercial installations. This means that today it will cost about $17,500 to buy an average-size (5 kW) system at roughly $3.50/watt. In Arizona, 6 kW systems are fairly common, so this price works out to about $21,000. Even using 2011 data for monthly savings ($137 saved per month in Arizona), over the course of 20 years this works out to $32,880, netting a gain of (at least) $11,880 on a 6 kW system installed in 2016.

What if you don't stay in your home for the full 20 years? In Arizona, property taxes don't increase with the addition of a rooftop solar installation, but your home's sales value increases an average of $5,911 per kilowatt of solar installed, according to a 2013 Lawrence Berkeley National Laboratory (LBNL) study. So, even if you sell your home in the year following your 6 kW solar rooftop installation, it is reasonable to expect a sales price increase of something like $35,466. This leaves you with a net gain of about $14,466 on your ($21,000) solar rooftop, assuming you sell your house at the asking price. Sell or stay, either way, rooftop solar is a win-win scenario!

The federal solar investment tax credit (ITC) is available in all states of the US. Up to 30% of a solar installation's costs can be credited via the ITC. However, you can only take advantage of the credit once, and only if you have that much tax liability in that tax year. Currently, the 30% ITC extends through 2019 but is scheduled to be phased out after that. Businesses may also qualify for incentives, including the Renewable Energy Production Tax Credit (PTC), the Business Energy Investment Tax Credit (ITC-1603), and a corporate depreciation incentive.

The following are a few of Arizona's state solar energy incentives listed in the DSIRE Incentives Database. Operated by the N.C. Clean Energy Technology Center at N.C. State University, the Database of State Incentives for Renewables & Efficiency (DSIRE) offers an up-to-date and comprehensive website for solar energy incentives across the US, including federal and state incentives, programs, and policies.
• Start Date: 05/23/2009
• Eligible Renewable/Other Technologies: Geothermal Electric, Solar Thermal Electric, Solar Photovoltaics, Wind (All), Biomass, Hydroelectric, Hydrogen, Municipal Solid Waste, Combined Heat & Power, Landfill Gas, Wind (Small), Hydroelectric (Small), Anaerobic Digestion, Fuel Cells using Renewable Fuels
• Applicable Sectors: Commercial, Industrial, Local Government, Nonprofit, Residential, Schools, State Government, Federal Government, Agricultural, Institutional, Senior citizens
• Applicable Utilities: Investor-owned utilities, electric cooperatives
• System Capacity Limit: No capacity limit specified, but system must be sized to meet part or all of customer's electric load and may not exceed 125% of customer's total connected load
• Aggregate Capacity Limit: No limit specified
• Net Excess Generation: Credited to customer's next bill at retail rate; excess reconciled annually at avoided-cost rate
• Ownership of Renewable Energy Credits: Not addressed in net metering rules; customer owns RECs unless participating in a utility incentive
• Meter Aggregation: Not addressed

Net Excess Generation: Accomplished using a single bi-directional meter, customer net excess generation (NEG) "is carried over to the customer's next bill at the utility's retail rate, as a kilowatt-hour (kWh) credit. Any NEG remaining at the customer's last monthly bill in the annual true-up period will be paid to the customer, via check or billing credit, at the utility's avoided cost payment." The NEG rule continues, "For customers taking service under a time-of-use rate, off-peak generation will be credited against off-peak consumption, and on-peak generation will be credited against on-peak consumption. The customer's monthly bill is based on the net on-peak kWh and net off-peak kWh amounts. Any monthly customer NEG will be carried over to the customer's next bill as an off-peak or on-peak kWh credit."

Additional Charges: The ACC requires net metering charges be assessed non-discriminatorily. Any new or additional charges that increase eligible customer-generators' costs beyond those of other customers in the rate class to which the customer-generator would otherwise be assigned must first be proposed to the ACC for consideration and approval. The ACC also notes that the utility has the burden of proof in any such cost increase proposal. In December 2013, responding to an application from APS addressing cost shifting, the ACC ordered a $0.70 per kW charge (a $3.50 monthly charge for a 5 kW system) for all residential distributed generation systems installed on or after January 1, 2014.** However, this charge doesn't apply to customers with systems installed on or before December 31, 2013. The ACC also requires APS to file quarterly reports on the number of new distributed generation installations per month and the kW size of those installations, and to report the amount of revenue collected from customers through the lost fixed cost recovery charge. Other utilities have also proposed additional charges for customer-generators.

*Salt River Project (SRP) and municipal utilities do not fall under the jurisdiction of the Arizona Corporation Commission and are therefore not subject to the state net metering rules; see the SRP Net Metering Program Overview, below.

**The charge applies specifically to "distributed generation" systems, not "net-metered systems." However, as net metering only applies to systems located on the customer's premises, this charge will affect net metering customers.
The charge only applies to APS customers.

• Website: http://www.srpnet.com/environment/earthwise/solar/solarelectric.aspx
• Utilities: Salt River Project
• Eligible Renewable/Other Technologies: Geothermal Electric, Solar Photovoltaics, Wind (All), Wind (Small)
• Applicable Sectors: Commercial, Residential, Low-Income Residential
• Applicable Utilities: SRP
• System Capacity Limit: Existing self-generation customers: 300 kW; new self-generation customers: not specified
• Aggregate Capacity Limit: No limit specified
• Net Excess Generation: Credited to customer's next bill at retail rate; held for the same time-of-use period. For existing DG customers, excess is reconciled annually in April at the average annual market price minus a price adjustment of 0.00017/kWh
• Ownership of Renewable Energy Credits: Not addressed
• Meter Aggregation: Not allowed

Note: Salt River Project (SRP) requires an interconnection agreement. SRP changed its existing net metering program for residential customers in February 2015. These changes went into effect with the April 2015 billing cycle. However, customers who purchased their distributed energy system or signed a lease agreement before December 8, 2014, are allowed to keep their original net metering rate plan for 20 years.

Net Excess Generation: According to the SRP NEG rules, "The kWh delivered to SRP are subtracted from the kWh delivered from SRP for each billing cycle. If the kWh calculation is net positive for the billing cycle, SRP will bill the net kWh to the customer under the applicable price plan. If the kWh calculation is net negative for the billing cycle, SRP will carry forward and credit the kWh against customer kWh usage on the next monthly bill. However, if the kWh is net negative at the end of the April billing cycle, SRP will credit the net kWh from the customer at an average annual market price. No credits will be carried forward to the May billing cycle."

Additional Charges: Under SRP's self-generation plan, customers must pay a fixed monthly service fee based on the size of their electricity service. They also must pay a grid, or demand, charge based on the customer's maximum energy usage during peak electricity times. According to SRP, the energy charges per kilowatt-hour (kWh) are lower with the customer generation plan than with the standard residential rate plans.

Aimed at reducing Arizona's staggering peak load, a $4 million residential storage program was mandated in July 2016 by the Arizona Corporation Commission (ACC). Arizona Public Service (APS) received a 120-day deadline to firm up the specifics and get approval from the ACC, making this the first time a state besides California has mandated investment in home energy storage. Remarkably, the ACC has initiated this hallmark policy at no additional cost to ratepayers by tapping into an existing fund for energy-efficiency projects.

Commissioner Andy Tobin proposed the idea last June. Refined with the help of Commissioner Bob Stump, the proposal passed at the ACC's July meeting. Commissioner Tobin said, "My view is, why should we be waiting around when we can help stimulate the storage market with surplus funds that are already there, that are supposed to be used for these energy efficiencies?" He added, "We're missing the boat across the country, quite frankly, in pushing for storage the way we pushed for solar 10 years ago." The ACC has mandated that energy storage should help residents lower demand during peak system load times.
The proposal states, "Distributed generation technology may require sending more complex price signals not otherwise appropriate for traditional customers," explaining that the utility "may need to offer participating customers advanced, time-differentiated rate plans." "Arizona's only energy problem in the short [term] and now," said Tobin, "is peak demand four hours a day, five days a week for three months." He explained, "When you're talking 110, 115 degrees on a hot day and you've got kids in the house or seniors at home, it's not like you have a lot of options." The ACC's proposed residential energy storage program offers a new option. Incentivizing residents to use stored energy at peak load times may be the best solution for Arizona. The pilot plan may even lead to a robust market for energy storage in the state, as well as offering significant savings for residential ratepayers. Commissioner Tobin is optimistic about the future of energy storage for Arizona. "If you can take two hours in storage out of a four-hour peak in an Arizona summer," he said, "that's huge."

The Solar Energy Industries Association (SEIA) lists the following solar installations as especially noteworthy in Arizona: • Agua Caliente in Yuma County was completed in 2013 by developer First Solar. This photovoltaic project has the capacity to generate 290 MW of electricity – enough to power over 41,200 Arizona homes. • At 125 MW, Arlington Valley Solar Project II is among the largest solar installations in Arizona. Completed in 2015 by LS Power, this photovoltaic project has enough electric capacity to power more than 17,700 homes. • Several large companies in Arizona have gone solar, including Wal-Mart, REI, Intel, and IKEA. Macy's has installed one of the largest corporate photovoltaic systems in the state, with 3,456 kW of solar capacity at its location in Goodyear. According to SEIA, there are currently over 375 solar companies at work throughout the value chain in Arizona, ranging from solar system installers to manufacturers. The following is not a complete listing of all solar installers in Arizona; the companies included were chosen based on high customer recommendations and reviews:

It is worse than ironic that Arizona should be debating the value of rooftop solar energy generation. Arizona Public Service (APS) has around $2 billion invested in solar energy capacity, with a portfolio divided between 551 MW of rooftop solar installations and 499 MW of utility-scale solar plants. This past summer, amidst all the tumult and rancor over rooftop solar customer compensation, APS became the first utility outside of California to surpass 1 GW of solar energy capacity. Reaching this milestone hasn't exactly made APS a champion of rooftop solar, however. As noted in the introduction, APS has instead been battling Arizona's solar advocates, narrowly avoiding a ballot initiative to amend the state constitution in favor of rooftop solar customers. A rate increase proposal before the Arizona Corporation Commission (ACC) is currently under review, pushing for a mandatory demand charge that would increase residential customers' bills by an average of 7.96%. "In its upcoming rate case," states Solar Strong America Chairman Kris Mayes, "APS suggests ratepayers shoulder an additional $3.6 billion in new costs.
That’s real money out of household budgets, and if the ACC can help mitigate those costs by encouraging more rooftop solar energy, then the benefits of rooftop solar must be fully acknowledged.” Mayes, a former ACC Commissioner, adds dryly, “Arizona ratepayers could have cost savings falling from the sky, if our regulators see the light.” In November, the ACC is expected to vote on a methodology for quantifying the benefits of rooftop solar for utility ratepayers. The “Value of Solar” docket, as the ACC vote is referred to, is a critical pivot point upon which Arizona’s rooftop solar industry is poised, breathless but hopeful for its future. Retired Congressman Barry Goldwater Jr. and Retired Mohave County Sheriff Tom Sheahan are encouraging all solar advocates to weigh in on the value that rooftop solar and energy choice has in Arizona. In their recent opinion piece, published in the AZ Capitol Times, Goldwater and Sheahan note: “The Arizona Corporation Commission is holding a proceeding to determine the value of rooftop solar, the value that utilities should pay customers who deliver electricity back to the grid. It is clear that rooftop solar benefits all Arizonans. It drives down costs for utilities by lessening the need to build new generating stations and transmission lines, for starters – costs that are passed onto us. “Solar gives Arizonans options to generate their own electricity and lower their bills. It reduces pollution. Solar saves water. Solar creates jobs and provides almost 9,000 Arizona families with incomes higher than the national average according to the Solar Foundation. The benefits of our sunshine and rooftop solar are numerous. “Rooftop solar encourages free market competition. That’s why Arizona monopoly utilities have been trying to undermine rooftop solar. They don’t want you to supply any of your own energy. “The outcome of the ACC Value of Solar docket will impact all Arizonans. Recent independent studies conducted in other states show rooftop solar provides a long-term benefit to all, whether you have solar on your roof or not. “If our Arizona Commissioners conclude that solar is not a net positive for everyone, utilities across our state will use this as an excuse to strengthen their monopolies. This will hurt Arizona energy choice and send jobs and business running from the state. All benefits need to be thoroughly considered in the Commission’s deliberations. “Currently six utilities regulated by the ACC have filed proposals that threaten to eliminate thousands of solar jobs and the growing solar industry. Now, they are hoping that the value of solar proceeding will give them an excuse to do so. “A fair methodology for assessing all the benefits of rooftop solar would do the opposite and stop the utility assault on rooftop solar in its tracks. “That’s why it’s important for Arizonans to weigh in. Let regulators know the value that rooftop solar and energy choice has for Arizona. Let regulators know that our values lie in individual choice and freedom, homegrown jobs, innovation and competition.” Goldwater and Sheahan urge interested people to call the Arizona Corporation Commission and reference the Value of Solar Docket. ACC Phoenix Office: (602) 542-4251 or 1-(800) 222-7000 (Toll Free In-State Only). ACC Tucson Office: (520) 628-6550 or 1-(800) 535-0148 (Toll Free In-State Only). 
In addition, you can email Arizona's Commissioners and offer your thoughts on the value of rooftop solar.
• Arizona Corporation Commission – Learn about the governing body that regulates the electricity rates and services of Arizona public utilities
• Arizona State Legislature – Track pending legislation affecting solar energy, locate and contact individual legislators, and stay up to date on current legislative issues in Arizona
• Arizona Governor's Office of Energy Policy – Find a wide variety of information on state government energy programs, policy, projects, energy-saving strategies and energy-related statistics
• Arizona State University – Solar Power Lab – Read about the innovative solar technology research taking place at Arizona State University
• Arizona Goes Solar – Learn about residential and commercial solar energy options and related regulations through a collaborative effort led by the Arizona Corporation Commission and implemented by the regulated electric utilities in Arizona
• DSIRE Incentives Database — Arizona – Search a public clearinghouse for specific solar energy incentives in Arizona and across the United States
• U.S. Energy Information Administration – Arizona State Profile – Explore official energy statistics, including data on electricity supply and demand, from the U.S. government
• BBB Accredited Business Directory — Solar Energy System Design & Installation – Better Business Bureau's comprehensive directory of Arizona's solar energy contractors, dealers, suppliers, manufacturers, and distributors
Check out this YouTube clip on Arizona's Solar Story:


News Article | December 4, 2016
Site: cleantechnica.com

Connecticut enjoys one of the fastest-growing solar markets among the smaller US states. One reason is that the Nutmeg State actually receives more sunshine than most of the solar farms in Germany, averaging close to 4.5 hours of direct sunlight per day. It may be a small state, but Connecticut is big on renewables and makes it easy for residents and businesses to go solar. The state has consistently ranked in the top 10 nationally for energy efficiency programs & policies since the early 2000s. Over 10,000 homes in Connecticut have added solar panels since 2012, with nearly 5,000 installations completed in 2014 alone. And, with the state's recent launch of C-PACE, municipalities and commercial property owners are increasingly installing solar systems, as well. The Solar Energy Industries Association (SEIA) reports that over 147 solar companies employ nearly 2,000 people in Connecticut. The total installed capacity of solar energy in Connecticut is 265 MW, ranking it 17th in the US. The state's 2015 installation of solar capacity ranks 14th nationally, growing 64% in 2015, with 91 MW installed for the year. Investments in solar installations for 2015 represented a whopping 124% increase over 2014, pulling in $270 million, and 2016 is expected to see even greater growth. Unlike many states where going solar is treated like an act of treason, Connecticut has been consistently and refreshingly open-minded and generous in its adoption of clean energy policies, programs, and incentives. With the electric restructuring legislation of 1998, Connecticut's Public Act 98-28 established separate funds supporting renewable energy and energy efficiency. A surcharge on ratepayers' utility bills started up in 2000 for the renewable energy fund, charging $0.0005 per kilowatt-hour (0.5 mill per kWh), rising to 1 mill in July 2004. The Connecticut Green Bank, formerly known as the Clean Energy Finance and Investment Authority (CEFIA), administers the CT Clean Energy Fund (CCEF) fed by the renewable energy fund surcharge. Designated by the CT General Assembly, the CT Green Bank is a quasi-governmental investment organization that develops programs and funds projects that meet the CCEF mission. The total funding for the CCEF from 2000–2010 was $151 million, with annual funding of approximately $20 million. Eligible projects include solar photovoltaics (PV), biomass, hydroelectric, hydrogen, combined heat & power (CHP), fuel cells using non-renewable or renewable fuels, tidal, wave, ocean thermal, and other distributed generation technologies. Applicable sectors include residential, commercial, industrial, institutional, and utilities. Also, since 1977, Connecticut residents have enjoyed a 100% exemption from additional property taxes when installing solar panels on their homes. A solar installation represents an instant property value boost of many thousands of dollars for a homeowner. However, as long as the proper claim is filed in time, Connecticut never charges the homeowner any extra taxes for installing solar panels. Add to this a 100% sales tax exemption on the solar installation, an immediate additional savings of 6.35%. Thanks to this early foresight, Connecticut offers a wide range of generously funded incentives, rebates, and financing programs to help its citizens transition to solar energy production for their homes and businesses. The following is an in-depth exploration of the state of solar energy in Connecticut.
Please feel free to offer further contributions in the comments, below. Determining what a solar PV system will cost up front and save you over the long run depends on a number of factors, including system size, the length of time the system is in operation, and the price of electricity over the course of this time. In Connecticut, the average residential solar system is 7 kW and generates around 8,400 kWh per year, but your needs and capabilities may be different, so it is always best to get a personalized solar quote. According to the SEIA, installed solar PV system prices in the US have dropped 12% from 2015 and 66% since 2010. In 2016, the latest data indicate that the average installed cost of solar in the US is $2.89/watt for residential and around $2.00/watt for commercial installations. At Connecticut's average of roughly $3.50/watt, an average-size (7 kW) system will cost roughly $24,500 to buy. The federal ITC of 30% reduces this out-of-pocket expense by about $7,350, and you can expect first-year energy savings of about $1,488 ($124/mo). The total reduction works out to be around $8,838, so the first-year cost of installing a 7 kW solar PV system would be around $15,662. Keep in mind that the monthly energy savings estimate is based on 2011 data, but over 20 years this still works out to be around $29,760, or a net gain of around $14,098 for a 7 kW system installed in 2016. No matter how you look at it, this is a great return on your investment, and we didn't even calculate the additional gain you will make by selling your excess electricity back to the utility under Connecticut's generous net metering rate of $0.18/kWh! And finally, even if you don't remain in your home for a full 20 years, your home's resale value significantly increases as soon as your solar system is installed. According to a Lawrence Berkeley National Laboratory (LBNL) study, every kilowatt of solar installed on your roof increases the sales value of your home by an average of $5,911. This means that a 7 kW system will add $41,377 to your home's value, so even if you sell it after the first year, you could potentially pay off your $15,662 solar PV system and still make $25,715 on your investment! The federal solar investment tax credit (ITC) is available in every state and offers a credit from the federal government of up to 30% of the total cost of a solar installation. To claim the credit, fill out the one-page form with your federal tax return. If you are not expecting to owe taxes in the same year as your installation, roll over your credit to the following year. The federal ITC is currently scheduled to remain through 2019 and is to be phased out thereafter. Businesses are also eligible for corporate depreciation incentives, a Business Energy Investment Tax Credit (ITC-1603) and a Renewable Energy Production Tax Credit (PTC).
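The arithmetic above is easy to reproduce. The short sketch below simply re-runs the article's own numbers (7 kW system, $3.50/watt, 30% ITC, $124/month savings, $5,911 of resale value per kW); it is a back-of-the-envelope illustration, not a quote tool.

```python
# Reproduces the Connecticut back-of-the-envelope figures above, using the
# article's own inputs. Results are rounded estimates, not quotes.

SYSTEM_KW = 7
COST_PER_WATT = 3.50          # $/W assumed for Connecticut
ITC_RATE = 0.30               # federal investment tax credit
MONTHLY_SAVINGS = 124         # $/month estimated energy savings
VALUE_PER_KW = 5911           # $ of resale value per installed kW (LBNL estimate)

gross_cost = SYSTEM_KW * 1000 * COST_PER_WATT                    # $24,500
itc_credit = gross_cost * ITC_RATE                               # $7,350
first_year_savings = MONTHLY_SAVINGS * 12                        # $1,488
first_year_cost = gross_cost - itc_credit - first_year_savings   # ~$15,662
twenty_year_savings = first_year_savings * 20                    # ~$29,760
net_gain_20yr = twenty_year_savings - first_year_cost            # ~$14,098
resale_boost = SYSTEM_KW * VALUE_PER_KW                          # ~$41,377

print(f"Gross cost:          ${gross_cost:,.0f}")
print(f"Net first-year cost: ${first_year_cost:,.0f}")
print(f"20-year net gain:    ${net_gain_20yr:,.0f}")
print(f"Resale value gain:   ${resale_boost:,.0f}")
```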
The following is a limited listing of Connecticut solar energy incentives from the DSIRE Incentives Database. Operated by the NC Clean Energy Technology Center at NC State University, the Database of State Incentives for Renewables & Efficiency (DSIRE) website offers current, comprehensive public information on US solar energy incentives, including federal and state policies, programs, and incentives.

• Eligible Renewable/Other Technologies: Solar Thermal Electric, Solar Photovoltaics, Wind (All), Biomass, Hydroelectric, Municipal Solid Waste, Combined Heat & Power, Fuel Cells using Non-Renewable Fuels, Landfill Gas, Tidal, Wave, Ocean Thermal, Wind (Small), Hydroelectric (Small), Fuel Cells using Renewable Fuels • Eligible Efficiency Technologies: Heat recovery (specific technologies not identified) • Applicable Sectors: Commercial, Industrial, Local Government, Nonprofit, Residential, Schools, State Government, Federal Government, Agricultural, Multifamily Residential, Institutional • Applicable Utilities: Investor-owned utilities • System Capacity Limit: Standard net metering: 2 MW; Virtual net metering: 3 MW • Aggregate Capacity Limit: No limit specified • Net Excess Generation: Carried over as a kWh credit for one year; reimbursed to customer at the avoided cost of wholesale power at the end of the year (March 31) • Ownership of Renewable Energy Credits: Customer owns RECs • Meter Aggregation: Yes (virtual net metering allowed for municipal, state, or agricultural customers)

Connecticut Light and Power Company (CL&P) and United Illuminating Company (UI), CT's two investor-owned utilities, are required to provide net metering to customers that generate electricity using "Class I" renewable energy resources, which include solar and other renewables, up to two megawatts (MW) in capacity, with no stated limit on the aggregate capacity of net-metered systems in the utility's service territory. Any net excess generation (NEG) during a monthly billing cycle is carried over to the following month as a kWh credit for one year. On March 31 (the year's end), the utility must pay the customer for all remaining NEG at the "avoided cost of wholesale power."

Virtual Net Metering — Virtual net metering is allowed for state, municipal, and agricultural customers generating electricity from either Class I or Class III* resources from facilities of up to 3 MW. The system may be owned by the customer, leased by the customer, or owned by a third party on a customer's property. It may supply electricity for the municipal host customer and additional beneficial accounts so long as the beneficial accounts and the host account are within the same electric distribution company's service territory. Municipal or state customers may host up to 5 additional municipal or state accounts, and 5 additional non-state or non-municipal buildings if those accounts are "critical facilities"** and are connected to a microgrid. Agricultural customers may host up to 10 beneficial accounts so long as those accounts either use electricity for agricultural purposes, or are municipal or non-commercial critical facilities. Additionally, all virtual net metering hosts may aggregate all meters owned by the customer host. When host customers produce more electricity than is consumed, excess electricity is credited to the beneficial accounts for the next billing cycle at the retail rate "against the generation service component and a declining percentage of the transmission and distribution charges that are billed to the beneficial accounts." The declining percentages are as follows:
• First year of commercial operation: 80% of transmission and distribution charges
• Second year of commercial operation: 60% of transmission and distribution charges
• Third year of commercial operation and after: 40% of transmission and distribution charges
Excess credits are rolled over every month for one year (a simplified example of this credit calculation follows below).
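The sketch below works through the declining-percentage crediting just described for a single billing cycle. The generation and transmission & distribution rates are hypothetical placeholders; only the 80/60/40% schedule comes from the rule.

```python
# Sketch of Connecticut virtual-net-metering crediting: excess generation is
# credited at the retail generation rate plus a declining share of transmission &
# distribution (T&D) charges. Rates below are hypothetical; the percentages are
# the 80/60/40% schedule in the rule.

TD_SHARE_BY_YEAR = {1: 0.80, 2: 0.60}   # year of commercial operation -> T&D share
DEFAULT_TD_SHARE = 0.40                  # third year of operation and after

def vnm_credit(excess_kwh, generation_rate, td_rate, year_of_operation):
    """Dollar credit applied to beneficial accounts for one billing cycle."""
    td_share = TD_SHARE_BY_YEAR.get(year_of_operation, DEFAULT_TD_SHARE)
    return excess_kwh * (generation_rate + td_share * td_rate)

# Example: 1,000 excess kWh with hypothetical $0.10/kWh generation and $0.05/kWh T&D rates
for year in (1, 2, 3, 4):
    print(f"Year {year}: ${vnm_credit(1000, 0.10, 0.05, year):.2f}")
```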
The electric distribution company must compensate municipal or state host customers for excess virtual net metering credits remaining, if any, at the end of the calendar year “at the retail generation rate and the above declining percentage of transmission and distribution charges.” Note: HB 5496 enacted on June 2016 requires that the virtual net metering facilities must be operational within 18 months from the date CT Department of Energy and Environmental Protection (DEEP) issues final permit. Shared Clean Energy Pilot Program — Enacted in June 2015, SB 928 requires DEEP to create a two-year pilot “shared energy facility program,” also known as community net metering. This program allows individuals to subscribe and receive credits for electricity generated from an off-site solar or other renewable energy facility. Subscribers must be located within the same electric distribution company service territory as the facility is located. To be eligible, a “Shared Clean Energy Facility” must: • Have a nameplate capacity of 4 MW or less • Have at least two subscribers DEEP was required to issue a Request for Proposal (RFP) to develop shared clean energy facilities by 1 July 2016. The success of the pilot program will be studied for two years, with full analysis and recommendations to be issued by 1 July 2018. Shared clean energy facilities may be owned by for-profit or not-for-profit organizations and third-party entities may be contracted with to build, own, or operate these facilities. The pilot program total capacity is capped at 6 MW, with 2 MW allocated for the United Illuminating service area and 4 MW for the Eversource service area. *Class III resources are defined as “the electricity output from combined heat and power systems with an operating efficiency level of no less than fifty per cent that are part of customer-side distributed resources developed at commercial and industrial facilities in this state on or after January 1, 2006, a waste heat recovery system installed on or after April 1, 2007, that produces electrical or thermal energy by capturing preexisting waste heat or pressure from industrial or commercial processes, or the electricity savings created in this state from conservation and load management programs begun on or after January 1, 2006.” **Critical Facilities are defined as “a hospital, police station, fire station, water treatment plant, sewage treatment plant, public shelter, correctional facility, production and transmission facilities of a television or radio station, commercial area of a municipality, municipal center, or any other area identified by the Department of Energy and Environmental Protection as critical.” • Sales and Use Tax Exemption for Energy-Efficient Products Last updated: 12/12/2014 • Sales and Use Tax Exemption for Solar and Geothermal Systems Last updated: 12/12/2014 • The United Illuminating Company – ZREC and LREC Long Term Contracts Last updated: 02/23/2015 In September 2015, the CT DEEP opened a proceeding to advance energy storage systems and other electric grid improvements. 
This proceeding supports Section 103 of Public Act 15-5, passed in June 2015, requiring Connecticut electric distribution companies to submit proposals to DEEP for “demonstration projects to build, own, or operate grid-side system enhancements, such as energy storage systems.” As an initial step, the proceeding offered opportunities for public comment and stakeholder workshop, as well as establishing priority goals and objectives for the Distributed Energy Resource (DER) Integration Demonstration Projects. The (DER) Integration Demonstration Projects program is “designed to find best practices on how different grid-side system enhancements can be reliably and efficiently integrated into the grid in a manner that is cost-effective for all ratepayers.” All projects are subject to approval by DEEP first, and then by the Connecticut Public Utilities Regulatory Authority (PURA). Proposals submitted to DEEP are required to: • Demonstrate and investigate how distributed energy resources (DER) can be reliably and efficiently integrated into the electric distribution system • Maximize the value provided to the electric grid, electric ratepayers, and the public from distributed energy resources • Complement and enhance the programs, products, and incentives available through the Connecticut Green Bank, the Connecticut Energy Efficiency Fund, and other similar programs In January 2017, Connecticut’s state legislative committee with jurisdiction over energy will receive DEEP’s evaluation on all approved proposals. Energy storage systems are expected to be the most likely beneficiaries of the program. Distribution system automation and controls are also expected to receive attention, as well as advanced distribution system metering, communication, and system enhancements enabling two-way power flow. One example of an energy storage system already on the ground in Connecticut comes from a ground-breaking project teaming SolarCity with Connecticut Municipal Electric Energy Cooperative (CMEEC) and Brightfields Development, LLC. SolarCity’s largest such installation in the Northeast, this project provides 13 MW of solar and 6 MWh of energy storage systems in southern Connecticut. CMEEC will use the solar and stored energy to serve its municipal members and wholesale electric customers. Seven solar project sites in total will comprise almost 57,000 solar panels. All of the lands used for the project sites were considered under-utilized or were no longer being used for farming. One of the sites was a former landfill. The energy storage system is SolarCity’s first capacity services project. SolarCity is providing the engineering, installation, and maintenance of the battery storage system to be controlled by CMEEC via SolarCity’s proprietary GridLogic control platform. GridLogic allows CMEEC to remotely and instantaneously dispatch stored solar energy to optimize performance and manage load spikes. CMEEC CEO Drew Rankin explains, “CMEEC’s focus is on energizing customers and community prosperity, so we are excited to help create clean, reliable and locally based power resources.” Rankin continues, “These systems utilize leading edge technology that is both powerful and fiscally responsible. With these solar and storage projects, we are pleased to join SolarCity at the forefront of energy innovation.” • Somers Solar Center — completed in 2013 by developer HelioSage Energy. This photovoltaic project has a power output capacity of 5 MW — enough to power over 700 Connecticut homes. 
• East Lyme Solar Park — At 5 MW, East Lyme is among the largest solar installations in Connecticut. Completed in 2014 by Greenskies Renewable Energy, this photovoltaic project has enough electric capacity to power more than 700 homes. • Walmart, Kohl’s, Staples, Target, and Walgreens — These are among the larger retailers in Connecticut that have gone solar. • IKEA New Haven — with 940 kW, IKEA’s New Haven installation is one of the largest corporate photovoltaic systems in the state. With over 147 solar companies supporting Connecticut’s solar industry, a wide variety of products and services are locally available. A recent survey by the Solar Foundation ranked Connecticut 18th in the nation for solar industry jobs per capita. Over 50% of these jobs are held by installers and 29% are sales and distribution positions. As a fast-growing market, it is not surprising to find many of the top national solar companies working in CT. A pioneering program called Solarize Connecticut was “sparked” by the Connecticut Green Bank and is administered by SmartPower. Town by town, community by community, Connecticut is going solar in a well-organized fashion. Currently, 58 communities in the state have successfully completed Solarize campaigns. Over 2,000 homeowners have already installed over 17 MW of solar on their rooftops, and Solarize Connecticut campaigns are enjoying ongoing success throughout the state. In addition to federal and state incentives to lower the cost of going solar, Solarize Connecticut offers significant discounts due to economies of scale. The process starts in a community through the help of town officials and local program management volunteers from Energy Task Forces. SmartPower issues a Request for Proposals from installers interested in serving the town participating in the Solarize campaign. SmartPower and a town committee interviews the installers and ranks their proposals based on such factors as prices, equipment quality, quality of service offered, and overall installer track record. To take advantage of Solarize Connecticut’s special pricing, residents must agree to work with the installers competitively selected by the local committee. Residents are free to work with any other installer to go solar on their own, of course, but many folks feel comfortable working under the umbrella of the Solarize Connecticut program. Homeowners have access to $0 down financing through Dividend Solar, and the Smart-E Loan offers 10- and 12-year terms, as well, making financing a solar installation very convenient in Connecticut. Check out this quick YouTube clip showcasing the launch of the Solarize Stamford Program: Connecticut Public Utility Regulatory Authority – Learn about the governing body that regulates the electricity rates and services of Connecticut public utilities Connecticut General Assembly – Track pending legislation affecting solar energy, locate and contact individual legislators, and stay up to date on current legislative issues in Connecticut Connecticut Clean Energy Fund: Clean Energy Finance and Investment Authority (CEFIA) – As the nation’s first full-scale clean energy finance authority, CEFIA will leverage public and private funds to drive investment and scale up clean energy deployment in Connecticut. Energize Connecticut – A cost-saving clean energy initiative of the Energy Efficiency Fund, the Connecticut Green Bank, the State and local electric and gas utilities with funding from a charge on customer energy bills. 
Solar Energy Association of Connecticut, Inc. – nonprofit organization promoting the development of renewable energy and environmentally benign technologies in Connecticut. DSIRE Incentives DataBase Connecticut – Search a public clearinghouse for specific solar energy incentives in Connecticut and across the United States.
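As a footnote to the financing options mentioned above (the $0-down Dividend Solar loans and the 10- and 12-year Smart-E terms), a standard amortization formula shows roughly what a loan-financed system costs per month. The interest rate below is a hypothetical placeholder, not a quoted Smart-E or Dividend rate.

```python
# Standard loan amortization: monthly payment M = P*r / (1 - (1+r)^-n), where P is
# the principal, r the monthly interest rate and n the number of monthly payments.
# The 4.49% annual rate is a hypothetical placeholder, not a quoted program rate.

def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12.0
    n = years * 12
    return principal * r / (1.0 - (1.0 + r) ** -n)

principal = 15662          # net first-year cost of the 7 kW example system above
for years in (10, 12):     # Smart-E loan terms mentioned above
    print(f"{years}-year term: ${monthly_payment(principal, 0.0449, years):.2f}/month")
```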


News Article | January 22, 2016
Site: cleantechnica.com

When a solar company analyzes a residence for rooftop solar, they look at several items, including available unobstructed, shade-free roof space, electric bill usage, and budget. Overbuilding a rooftop solar array means installing more panels than are needed to cover the home's current electricity demand. There are a couple of points worth discussing before diving into the enumerated list. The first is an obvious potential limitation of this plan: whether or not there is enough usable space on the rooftop for additional panels. If there isn't unshaded rooftop real estate available, then overbuilding rooftop solar becomes a moot point. The second consideration is whether or not the homeowner(s) can afford to overbuild their solar project. It may not be that difficult to overbuild, though, since the added solar panels will have a reduced cost per panel relative to the panels in the primary solar array footprint, due to reduced relative soft costs (more on that later). This leads us to consider the reasons to overbuild, and whether or not they are compelling enough for such an investment. Here are some points for consideration. Everyone is likely to rate the value of each point differently depending upon one's point of view, and it's possible that not all points will apply to everyone. Core home functioning is moving away from fossil fuels and toward electricity use. We are seeing more "smart home" connected controls for the thermostat, security, light control, keypad locks, refrigerators, and more. Many of us believe that home HVAC is moving toward all-electric systems. For example, when solar air heaters are installed on the sun-facing side of the home, electricity-using fans are used to move air, and when solar hot water systems are installed, electricity is needed for water pumps. Overbuilding solar arrays leaves capacity available to accommodate these converted home services. Deploying expanded electricity capacity makes sense in this increasingly electrified world. You may need, now or in the future, to add one or more EV charging stations. The odds are increasing that an overnight house guest who drives to your home will be driving an EV that needs a charge. Overbuilding solar leaves room for EV expansion and accommodation. The idea here is that a larger solar array will benefit the environment more than a smaller rooftop solar array. One expanded home solar array will obviously not change the world, but if more and more people do this, then in aggregate it will contribute in the right direction by displacing fossil fuels. One clear benefit of migrating from burning fossil fuels to increasing renewable electricity production is slowing climate disruption by emitting fewer greenhouse gases (GHGs). As more solar electricity is produced, gas and/or coal plants are used less and ultimately, in time, taken offline faster as we migrate to a clean energy world. The environmental benefits of solar are a well-discussed topic on CleanTechnica. If you have interest in learning more, please see "Going Solar Simplest & Biggest Action Anyone Can Make, Says SolarCity" by Joshua S. Hill. Many utility-scale energy plants require long-distance transmission to get to the final consumption point.
The first and most obvious benefit of decentralized energy is that it doesn’t have to be transmitted if it’s used at the point of production; this increases efficiency. Decentralized energy production also makes any country safer from a malicious attack on the grid. As Amory Lovins’ Rocky Mountain Institute states it: “Centralized electricity systems with giant power plants are becoming obsolete. In their place are emerging “distributed resources” — smaller, decentralized electricity supply sources (including efficiency) that are cheaper, cleaner, less risky, more flexible, and quicker to deploy.” (A brief summary of Distributed Energy can be found on Rocky Mountain Institute’s site here.) For additional information about distributed energy, please consider reading these two CleanTechnica articles: “Moving Beyond Utility 2.0, To Energy Democracy,” by John Farrell Whenever energy-related purchases can be diverted from the fossil fuels industry and directed to the solar power (or any renewable energy) industry, this helps with new R&D funds, more clean jobs, and further reduced costs for renewables due to increased production volume. It’s a well-known fact that solar increases a home’s value when it comes time to sell the home. According to the Lawrence Berkeley National Laboratory (Berkeley Lab), as reported in an article on Fool.com (run by The Motley Fool guys), it is claimed that rooftop solar can increase a home’s resale value by about $15,000. By comparing the sale prices of around 4,000 solar rooftop homes across 8 states and 12 years to nearly 19,000 comparable non-solar homes, researchers found that, for an average 3.6kW solar system house, buyers were generally willing to pay around $15,000 more than for a comparable non-solar home. How much some added overbuilt solar would increase a home’s value has not been studied, but it has been estimated. In the same Berkeley Lab document, the researchers state that their models “… indicate decreasing marginal returns to increasing PV system size.” This means that, to a large extent, overbuilding solar would have to be a labor of love and for motives that consider the big picture, and that aren’t strictly related to financial return. Especially if you have a Powerwall or other storage system, overbuilding your rooftop solar could effectively make your electricity production and use self sufficient. The combination of overproduction and storage can provide you with flexibility to bring you effectively offgrid while still being connected to the grid. The combination of solar and storage would allow you to at least not lose energy that you produce that you may otherwise be providing back to the grid for no compensation (depending upon your utility’s net metering policy). Overbuilding prepares you for the possibility that your utility could have a favorable-to-consumer net metering policy in place at the time of solar installation that may be withdrawn later by the utility. Overbuilt rooftop solar, if done by enough people, would likely contribute by some modicum amount to reducing the need for new dirty coal or nat gas plants to be built as your utility’s electricity demand increases over time. This increase in demand could potentially be satisfied by this extra solar electricity production if enough people did it. All renewable energy production contributes favorably to the global ecosystem to some degree by reducing electricity production from dirty fossil fuel energy or nuclear energy. 
If you are already going to install rooftop solar, the soft costs are already built into that project, so adding more panels is a relatively efficient increase on a per-panel basis. According to Barry Cinnamon of Spice Solar of Campbell, California, most of the soft costs are already covered if you were to increase the size of your system. Barry states: "Soft costs are generally fixed for a residential system. The fixed soft costs such as system design, permitting, interconnection, customer acquisition, labor and monitoring changes only slightly if the system size increases." New, fun gadgets are being developed every year that either use electricity directly or run on batteries that need recharging. For example, how can we live without Samsung's smart refrigerator, announced this year here in Las Vegas at CES, that notifies the user when a food item in the refrigerator is about to pass its expiration date? (To be fair to other manufacturers, this idea isn't brand new.) Indeed, we are seeing more and more devices with no end in sight. A variant concept is to choose more efficient panels (if the project isn't already using the best panels available), which would increase output further; ideally, one would install both more efficient panels and more solar panels. Overall, if your rooftop solar project size can be increased, there will be a number of sure benefits, and other possible benefits. The increased array size has long-term benefits, but of course must be affordable for the buyers. Go big. If you work in solar sales and wish to reprint this article for customers, you have permission to do so. If you increase your project size due to this article, or if you work in solar sales and use it, please feel free to write a comment below the article even if the comment comes long after this article is published. I'll see your comment and reply.
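To make the soft-cost point above concrete, the rough sketch below spreads a fixed per-project soft-cost figure over an array of increasing size, so the marginal cost of the added panels approaches hardware-plus-labor alone. All figures are hypothetical placeholders, not quoted prices from the article.

```python
# Rough illustration of the fixed-soft-cost effect: overbuilt panels are cheaper
# per watt because design, permitting, interconnection and customer-acquisition
# costs do not grow with system size. All numbers are hypothetical placeholders.

FIXED_SOFT_COSTS = 6000.0         # $ per project: design, permitting, interconnection, sales
VARIABLE_COST_PER_W = 1.80        # $/W for hardware and installation labor
PANEL_WATTS = 300                 # W per panel

def total_cost(n_panels):
    return FIXED_SOFT_COSTS + VARIABLE_COST_PER_W * n_panels * PANEL_WATTS

base_panels, overbuilt_panels = 20, 27          # ~6 kW baseline vs ~8 kW overbuilt
avg_ppw = total_cost(base_panels) / (base_panels * PANEL_WATTS)
marginal_ppw = (total_cost(overbuilt_panels) - total_cost(base_panels)) / (
    (overbuilt_panels - base_panels) * PANEL_WATTS)

print(f"Average $/W for the baseline array: {avg_ppw:.2f}")      # soft costs included
print(f"Marginal $/W for the added panels:  {marginal_ppw:.2f}")  # ~hardware + labor only
```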


News Article | November 11, 2015
Site: phys.org

Basic biology research at Lawrence Berkeley National Laboratory (Berkeley Lab) has led to the formation of Cinder Biological, or CinderBio, a startup company producing a new class of enzymes made from microbes that thrive in hot volcanic waters. Co-founded by Berkeley Lab scientists Steve Yannone and Jill Fuss, CinderBio will first target the food processing industry, where its enzymes can significantly reduce the vast amounts of chemicals and water used to clean equipment. Eventually it expects to formulate enzymes for the biofuels, paper, and textile industries, and possibly even more. "Our enzymes are unique—they can operate in conditions that nobody else's can, so this opens up a lot of previously unexplored applications," said Fuss, a molecular biologist and the company's chief technology officer. "They're made from microbes that come from Yellowstone and live in hot acidic pools, so they thrive in nearly boiling acid. We've been able to take gene sequences and make them into enzymes that are extraordinarily stable." CinderBio won a cash prize in the 2014 FLoW DOE National Clean Energy Business Plan Competition at Caltech. It was also one of eight technologies selected to be a project for the Fall 2013 UC Berkeley Haas School of Business Cleantech to Market program, which seeks to translate cleantech research into market opportunities. This year, with an SBIR (Small Business Innovation Research) Phase 1 grant from the National Science Foundation, CinderBio tested its technology at a local creamery. The cleaning of dairy and other food processing equipment is normally a multi-step process. "They use a lot of base, then they wash, then acid, then wash, then sanitizer," said Fuss. "It takes a lot of time and uses a lot of chemicals and water. The microbes in the equipment can be hard to get rid of, so in some areas they've started to use harsher chemicals to combat those biofilms." In the field trials, using CinderBio's enzymes to do the cleaning in place of chemicals, Yannone, a named inventor and the company's CEO, said he was amazed at how successful they were. "I came back after our first trials and recalculated everything because I thought we had moved a decimal point," he said. "We reduced water usage by almost 30 percent and our enzymes removed contaminants much faster and much more effectively than we had expected. The managers and owner at the creamery were all very excited by our results." Enzymes are naturally occurring molecules that act as catalysts to activate or speed up biochemical reactions; their use in industry is commonplace and usually beneficial to the environment, since they can replace or reduce the use of chemicals or allow processes to take place at lower temperatures, lowering energy use. CinderBio's technology comes from extremophilic microbes that grow naturally in thermal pools and have been studied for many years. The Berkeley Lab team was able to exploit aspects of the biology of these extreme microbes to turn them into enzyme-making machines. They developed a set of genetic tools that allow them to introduce naturally occurring genes into these microbes to produce enzymes that are more heat- and acid-stable than what has been available. "Historically, scientists have thought that all the information needed to make an active enzyme is entirely encoded in the DNA sequence," said Yannone.
"Some of our revelations center on the idea that that's not true, that there are other components in a microbe making an enzyme that's stable and works in hot acid other than just the DNA sequence." Yannone and Fuss first started studying the microbe about 10 years ago, looking for the fundamental biological processes that allowed it to thrive in such extreme conditions. Although scientists are still investigating what exactly confers such extreme stability on the microbe, Fuss was able to characterize the structural biochemistry of one of the enzymes. And Yannone, working with Adam Barnebey, first his undergraduate student and later a technician, developed the molecular biology system that led to the formation of CinderBio. Their invention was patented by Berkeley Lab, and Barnebey was the company's first employee. They are under consideration for a Phase 2 SBIR grant and next year may seek additional outside funding. They plan to further refine the platform and scale up production while seeking practical feedback from the food processing industry, a vast market. "We think a lot about what we eat and how our food is grown, but we don't think about what happens in the middle," Fuss said. "Our technology helps make food processing more sustainable and could be used for any food product that goes in a container and ends up on a shelf." Besides saving water and time, use of CinderBio's enzymes would also cut down on chemical waste. Chemical wastewater cannot be reclaimed and instead often has to be trucked off-site. Replacing these chemicals with biodegradable CinderBio enzymes makes that wastewater reclaimable, of particular importance given the ongoing drought in California. "It's great when Berkeley Lab scientists are able to transform their fundamental research into an innovative technology that not only solves industry challenges but also contributes to our country's energy and environmental challenges," said Elsie Quaite-Randall, Berkeley Lab's Chief Technology Transfer Officer.


Thin films of (LuFeO3)m/(LuFe2O4)n were grown by reactive-oxide molecular-beam epitaxy in a Veeco GEN10 system on (111) (ZrO2)0.905(Y2O3)0.095 (9.5 mol% yttria-stabilized zirconia) substrates, denoted YSZ, held at a temperature of about 700 °C. This substrate temperature was determined by an optical pyrometer focused on an opaque, 200-nm-thick platinum layer on the backside of the YSZ substrates used to absorb the heat from the SiC radiative substrate heater. Lutetium and iron were evaporated from elemental sources, each at a flux of approximately 1 × 10^13 atoms per square centimetre per second. The fluxes were first calibrated (approximately) with a quartz crystal microbalance. The lutetium flux was then more accurately refined using the reflection high-energy electron diffraction (RHEED) intensity oscillations that occurred when (111) Lu2O3 was deposited on (111) YSZ. With the known lutetium flux, the iron flux was adjusted to produce a stoichiometric LuFeO3 film as judged by the out-of-plane lattice parameter measured by XRD (ref. 33). The background oxygen partial pressure was meticulously varied during the superlattice growth to provide an environment sufficiently oxidizing for the Fe3+ in the LuFeO3 without over-oxidizing the Fe2.5+ in the LuFe2O4 layers. The background partial pressure of a mixture of about 2% O3 and O2 was varied between 3 × 10^−7 Torr and 9 × 10^−7 Torr using a capacitance manometer to control the inlet pressure of a piezoelectric leak valve. Each growth sequence commenced with at least five monolayers of the LuFeO3 structure before the first LuFe2O4 layer to avoid over-oxidation of the LuFe2O4 by oxygen supplied by the YSZ substrate. (LuFeO3)m/(LuFe2O4)n superlattices with thicknesses ranging from 28 nm to 35 nm were studied. The structure was characterized by XRD using a four-circle Rigaku SmartLab diffractometer equipped with a Ge(220) × 2 monochromator on the incident side and a Ge(220) × 2 analyser on the diffracted side, with Cu Kα radiation. XRD θ–2θ scans of all samples presented, as well as a representative rocking curve, are shown in Extended Data Fig. 1. Cross-sectional TEM specimens were prepared using an FEI Strata 400 Focused Ion Beam (FIB) with a final milling step at 2 keV to reduce surface damage. High-resolution HAADF-STEM images were acquired on a 100-keV Nion UltraSTEM, a fifth-order aberration-corrected microscope. The images in Fig. 1b were acquired from a single sample containing the representative layering patterns in this study. The lutetium distortions were quantified from HAADF-STEM images. Several images were averaged to reduce scan noise, and the lutetium atomic positions were determined through an iterative two-dimensional Gaussian fitting procedure and segmentation using a watershed algorithm. The position of each atom was then compared to its neighbouring atom on each side in the same atomic plane. The magnitude and direction of the distortion was calculated by comparing the height difference of the three atoms, as shown in Extended Data Fig. 6a. The schematics show different patterns of three-atom sets corresponding to the marked polarization. This distortion directly correlates with the ferroelectric polarization, as shown in Extended Data Fig. 2. The resulting polarization at different locations in the structure is plotted in Extended Data Fig. 5 and in the aggregate in Extended Data Fig. 6b, c. In sum, the distortions from over 90,000 lutetium atoms were analysed to generate the results shown in Fig. 2e.
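A rough sketch of the three-atom classification just described is given below. It assumes the fitted lutetium column positions are already available, and uses synthetic coordinates and an arbitrary threshold; it is an illustration of the idea, not the authors' analysis pipeline.

```python
from collections import Counter
import numpy as np

# Illustrative sketch (not the authors' code): each lutetium column's out-of-plane
# position is compared with its two in-plane neighbours, and the buckling pattern of
# successive three-atom sets tracks the local trimer distortion that correlates with
# the ferroelectric polarization. Positions and threshold are synthetic, arbitrary units.

def classify_trimer(z_left, z_centre, z_right, threshold=0.02):
    """Label a three-atom set by the centre atom's displacement from its neighbours."""
    d = z_centre - 0.5 * (z_left + z_right)
    if d > threshold:
        return "up"
    if d < -threshold:
        return "down"
    return "undistorted"

# Synthetic Lu plane with a repeating up-up-down buckling pattern
z = np.array([0.05, 0.05, -0.10] * 10)
labels = [classify_trimer(z[i - 1], z[i], z[i + 1]) for i in range(1, len(z) - 1)]
print(Counter(labels))   # tally of local buckling patterns along the plane
```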
The temperature dependence of the lutetium distortions was measured using a Protochips Aduro double-tilt heating holder in a 200-keV FEI Tecnai F20-ST microscope, as shown in Extended Data Fig. 6d. A cross-sectional TEM specimen was prepared by FIB as above and then mounted onto a ceramic heating chip. The temperature was controlled on the chip by passing a current, which was calibrated to be within 50 K of the intended temperature. HAADF-STEM images were recorded at intervals as the sample was heated from 290 K to 675 K. The ferroelectric distortions were estimated from the strength of the superlattice reflections observed in the fast Fourier transform of the image, corresponding to the tripling of the unit cell by the lutetium distortions. We label regions where ferroelectric distortions were observed, where weak distortions were observed and where the distortions were not observed (although they may be present, they are below our resolution limit). Magnetic measurements were performed with a Quantum Design MPMS3 SQUID magnetometer. M–T curves with the sample field-cooled in 1 kOe and zero-field-cooled were measured for each sample, along with a bare YSZ substrate that had undergone identical deposition conditions as a superlattice (same oxygen partial pressure and time at growth temperature, but not exposed to the lutetium or iron molecular beams). The M–T curves of the YSZ substrate were used to subtract the diamagnetic and paramagnetic backgrounds from the (LuFeO3)m/(LuFe2O4)n M–T curves. The curves in Fig. 2a were normalized to facilitate comparison of the Curie temperatures (T_C) at which the superlattices order ferromagnetically (or ferrimagnetically). The ferromagnetic (or ferrimagnetic) Curie temperature was taken to be the intersection between a line extending from the point of steepest descent of the field-cooled curve and a line extending from the high-temperature background. M–H loops (where H is the magnetic field) out to 70 kOe were measured for each sample at 50 K, 100 K, 200 K and 300 K. The linear diamagnetic background for each of these loops was subtracted with a linear fit to the high-field data. At higher temperature the loops saturate, but below 130 K the loops did not saturate, and an additional M–H loop was measured with the sample field-cooled in 70 kOe to measure the saturated magnetic moment at these temperatures. The known bulk magnetic moment of LuFeO3 (ref. 11) was subtracted from the total measured magnetic moment to obtain the LuFe2O4 magnetic moments shown in Fig. 2d. Extended Data Fig. 3c displays 'excess magnetization' as a function of composition at 50 K, 100 K and 200 K. This value was found by subtracting the magnetization due to the end-member LuFeO3 and LuFe2O4 signals from the total moment observed. Neutron diffraction was performed on the BT-4 triple-axis spectrometer at the NIST Center for Neutron Research using pyrolytic graphite monochromator and analyser crystals with neutrons of initial and final energies of 14.7 meV (λ = 2.359 Å). Relaxed horizontal collimation settings of the neutron optics, in stream order open–monochromator–40′–sample–40′–analyser–open–detector, were used to maximize the intensity of the very weak scattered signal. The sample was mounted onto a silicon wafer using a small amount of aluminium foil and sealed in an aluminium canister with 4He gas to ensure thermal equilibration with the closed-cycle refrigerator. Neutron diffraction was used to determine the onset of long-range magnetic order of a (LuFeO3)m/(LuFe2O4)n superlattice.
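Before turning to the neutron data, the Curie-temperature construction used for the magnetometry above (tangent at the steepest descent of the field-cooled M–T curve intersected with a line through the high-temperature background) can be sketched as follows. The data here are synthetic and the implementation is illustrative only, not the authors' analysis code.

```python
import numpy as np

# Sketch of the Curie-temperature estimate: T_C is taken as the intersection of the
# tangent at the steepest descent of the field-cooled M(T) curve with a linear fit to
# the high-temperature background. Synthetic data stand in for the measured curves.

def estimate_tc(T, M, background_window=20):
    dMdT = np.gradient(M, T)
    i = np.argmin(dMdT)                                   # point of steepest descent
    slope, intercept = dMdT[i], M[i] - dMdT[i] * T[i]     # tangent line at that point
    bg_T, bg_M = T[-background_window:], M[-background_window:]
    b_slope, b_intercept = np.polyfit(bg_T, bg_M, 1)      # high-temperature background
    return (b_intercept - intercept) / (slope - b_slope)  # intersection temperature

# Synthetic field-cooled curve: ordered moment collapsing near 250 K over a flat background
T = np.linspace(5, 350, 200)
M = 1.0 / (1.0 + np.exp((T - 250) / 10)) + 0.02
print(f"Estimated T_C ~ {estimate_tc(T, M):.0f} K")
```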
Neutron scattering is not sensitive to small impurity phases that may influence other bulk characterization techniques. Magnetic reflections of the (LuFeO3)m/(LuFe2O4)n superlattice were observed in neutron diffraction by scanning along the [10L] reciprocal lattice direction, where the Miller index L is along the c axis or growth direction, at several temperatures between 5 K and 325 K. We found a single 101 magnetic reflection (indexed to the magnetic unit cell of 10.4 nm, or two superlattice repeat distances) at 5 K that was not present at room temperature. This peak is observed to show a considerable change in intensity between 5 K and room temperature, as shown in Extended Data Fig. 4a. The centre of the peak is offset slightly from the 101 reflection owing to a slight misalignment of the sample. A Gaussian function is fit to the signal to determine the integrated intensity as a function of temperature. Disregarding finite-size effects of the films along the growth direction, the full width of the resolution function is shown as the horizontal black line, demonstrating that the peak is not resolution-limited. The magnetic correlation length from this is found to be ξ ≈ 30 nm, or roughly six superlattice units along the c direction. The onset of magnetic order was determined by fitting the temperature-dependent integrated intensity of the peak with a mean-field order parameter. As shown in Extended Data Fig. 4b, we find T_C = 238 ± 28 K. This is consistent with the onset of ferromagnetism (or ferrimagnetism) determined from magnetometry of a (LuFeO3)m/(LuFe2O4)n superlattice (246 K). To understand the origin of this 101 reflection, we calculated the magnetic structure factors for the magnetic moment arrangement obtained from DFT for the COII and Fe3+-doped configurations of the (LuFeO3)m/(LuFe2O4)n superlattice. These calculations show that 10L-type reflections are particularly sensitive to the onset of ferrimagnetic order along the c axis in the LuFe2O4 layers in the charge-ordered state. X-ray absorption spectroscopy at the Fe L edge was performed at Beamline 4.0.2 of the Advanced Light Source at Lawrence Berkeley National Laboratory. Spectra were acquired with 100% linearly polarized light oriented nearly parallel to and perpendicular to the c axis of the sample, for an angle of X-ray incidence of 20° to the sample surface. The normalized difference between these spectra was collected at temperatures ranging from 300 K to 700 K for (LuFeO3)m/(LuFe2O4)n superlattices of four different layering ratios. All temperatures measured were above the magnetic transition temperatures of the corresponding films, so the dichroic signal records an asymmetry in the electronic structure, as previously recorded for ferroelectric thin films (ref. 31). As shown in Extended Data Fig. 7b, one of the (LuFeO3)m/(LuFe2O4)n samples displays dichroism of about 20% at 300 K due to the structural anisotropy between the in-plane and out-of-plane electronic configurations in the hexagonal structure. A smaller anisotropy was also previously recorded for LuFe2O4 (ref. 34). Previous calibration of the endstation has identified a systematic error of approximately ±5% in the measured dichroism due to the variation of the degree of X-ray polarization, the angle of X-ray incidence and the photon energy with time.
To further determine the uncertainty associated with these measurements, we measured the dichroic signal from one (LuFeO3)m/(LuFe2O4)n sample multiple times before sample heating and again after the sample heating/irradiation. From comparison of the dichroic signals, we find a standard error of the mean of approximately 0.016 (4% in un-normalized units), within the uncertainty estimated from the previous calibration. The three (LuFeO3)m/(LuFe2O4)n samples that were identified as displaying ferroelectric lutetium distortions by HAADF-STEM demonstrated a dichroism of about 40% at 300 K. The temperature-dependent fits of the recorded difference spectra as a linear combination of the structural (non-ferroelectric) reference component and the ferroelectric superlattice component, both at 300 K, are shown in Fig. 2f. There is a drop in the recorded signal for two of the ferroelectric superlattices, indicative of a ferroelectric transition. The XLD signal does not decrease fully to the level observed in the structural reference, suggesting that there may be an additional structural component in these films that is not captured by the reference component. The data shown in Fig. 2f were fitted to an order parameter plus a constant background using a Bayesian fitting algorithm. From this, we identify ferroelectric transitions at about 550 K and 500 K in these two samples, respectively. Assuming the same constant background for the third ferroelectric sample, we estimate a transition near 900 K, beyond the measurement range of this experiment. The variable-temperature HAADF-STEM images (Extended Data Fig. 6d) recorded from these layering patterns are also consistent with transitions at comparable temperatures, although slightly different transitions between the m = 3 and m = 5 superlattices are found. The high-temperature transition in this superlattice is consistent with the observation of superlattice reflections in RHEED corresponding to the trimer distortion (ref. 24) during thin-film deposition at the growth temperature for this film. Vertical (or out-of-plane) PFM was used to measure the ferroelectric polarizability of the (LuFeO3)m/(LuFe2O4)n superlattice shown in Figs 2g and 4a (refs 35–37). A bottom electrode was added to the sample after thin-film deposition. The substrate was first thinned using tripod mechanical polishing at a 1° angle; the measured region had a substrate thickness of approximately 150 nm. A copper adhesion layer and subsequent platinum layer were deposited onto the thinned substrate by direct-current magnetron sputtering in an argon background of 3 × 10^−3 Torr at room temperature. This provided a conducting back electrode for the PFM experiments without disturbing the epitaxy of the (LuFeO3)m/(LuFe2O4)n film on the insulating YSZ substrate. In Fig. 2g, a 20 μm × 20 μm region was poled with a d.c. bias of −15 V to switch the sample to the 'upward'-oriented out-of-plane polarization. A second switch with a d.c. bias of +15 V was then performed to write the pattern shown, with the corresponding 'downward'-oriented polarization. The resulting structure was imaged directly after poling and again 100 h later using a 2-V a.c. voltage. The image acquired after 100 h was rotated after acquisition to follow the orientation of the original image. The persistence of the written structure demonstrates the ability to reversibly switch the ferroelectric polarization with an electric field. In Fig.
4a, a 15 μm × 15 μm region was poled using a single switch of a d.c. bias at +15 V and −15 V (applied to the cantilever tip), generating ‘downward’- and ‘upward’-oriented out-of-plane polarization, plotted in red and turquoise, respectively. The magnetic structure presented in Fig. 4b–d was measured nearly 200 h after the initial poling shown in Fig. 4a, further corroborating the persistence of the switched state.

Variable-temperature and magnetic-field transport measurements were performed in the van der Pauw geometry using a combination of a Quantum Design PPMS and external Keithley electronics. Measurements were made of the in-plane resistivity, that is, parallel to the (0001) planes. The resistivity increases substantially as the temperature is lowered, until it exceeds our measurement capabilities below about 200 K. Resistivity values for the entire stack of the two (LuFeO3)m/(LuFe2O4)1 samples we measured are 4.3–86.5 Ω cm over the 330–198 K temperature range. Plotting the natural logarithm of the conductivity against various powers of inverse temperature suggests that conduction is primarily through thermally activated carriers. No anomaly in the resistivity at TC, nor any change in resistivity with the application of a magnetic field (0 T, 5 T and 9 T), was observed. To determine whether the primary conduction path was mediated by oxygen vacancies generated in the substrate, we attempted to measure a bare YSZ substrate that had undergone deposition conditions identical to those of a superlattice (the same oxygen partial pressure and time at the growth temperature, but without exposure to the lutetium or iron molecular beams), but were unable to do so because the sample was too resistive.

The enhancement of TC in region I could be caused by strain or oxygen stoichiometry effects. Lutetium atom distortions were not observed in these superlattices. A slight mismatch between the LuFeO3 in-plane lattice (a = 5.989 ± 0.005 Å in thick relaxed epitaxial films25, or 3.458 Å for the primitive cell) and that of bulk LuFe2O4 (a = 3.4406 Å; ref. 38) produces a tensile strain of up to 0.25% on the LuFe2O4 layers; epitaxial strain has previously been shown to affect ferromagnetic transitions7, 39, 40. In region I, TC reaches 250 K, approximately the transition temperature observed in LuFe2O4 single crystals17. Previous experiments have shown that the magnetic properties of LuFe2O4 are extremely sensitive to deviations in oxygen stoichiometry41, and thus a small amount of oxygen diffusion between the LuFe2O4 and LuFeO3 layers could also have caused the increase in TC to 250 K.

We use first-principles DFT calculations to elucidate the origin of the increase in the magnetic transition temperature TC with the number of LuFeO3 layers (Fig. 2c) and, correspondingly, with the magnitude of the electrical polarization (Fig. 2e). In particular, we perform DFT calculations on what we call ‘model LuFe2O4’ systems: bulk-like LuFe2O4 structures with the addition of the trimer distortion Q. This trimer distortion is characteristic of LuFeO3 and, as shown in Fig. 1b, is experimentally observed in the (LuFeO3)m/(LuFe2O4)1 superlattices, yet is not observed in bulk LuFe2O4. This allows us to derive a simple model that elucidates the origin of the effect in the superlattices, without performing DFT calculations directly on these larger structures. Previous experimental work17 has suggested that bulk LuFe2O4 has a net polarization due to a charge disproportionation between the constituent iron bilayers.
In the first layer (A), an Fe2+ ion is at the centre of a hexagon composed of six Fe3+ ions, leading to a 1:2 ratio of the charges. The second layer (B) has a similar arrangement with a majority of Fe2+. It was proposed that the bilayers order to form an ABAB stacking along the c direction. We denote this structure ‘COII’; it is displayed in Fig. 3b and reproduced in Extended Data Fig. 10b. This stacking sequence breaks inversion symmetry and so is compatible with a ferroelectric state (space group Cm). We have also considered multiple additional charge-order arrangements, including those compatible with a non-ferroelectric state, consistent with later experimental results42, 43. We introduce only the low-energy arrangements here. One such arrangement has the same bilayer structure as COII, but the opposite stacking of the bilayers, that is, ABBA ordering. In this case, a mirror plane is located at the Lu–O plane, preserving inversion symmetry (space group C2/m). We refer to this structure, displayed in Fig. 3a and reproduced in Extended Data Fig. 10a, as ‘COI’. In addition, the layers can be stacked in an AABB arrangement (proposed in ref. 42, but studied theoretically for the first time here), which we denote ‘COIII’ (space group C2/m) and display in Extended Data Fig. 10c. In this configuration, the AA and BB bilayers have excess charge, and a mirror plane is located at the centre between two A (B) layers. COI and COIII are non-polar (space group C2/m)42, 43 and therefore not ferroelectric. We find that COI is the lowest-energy configuration, followed by COII, which is higher than COI by only about 4 meV per formula unit. COIII is higher in energy than COI by more than about 50 meV per formula unit and therefore will not be considered further. The small difference in energy between COI and COII is expected because the two structures differ only in long-range inter-bilayer Coulomb interactions. Structurally, however, COI and COII have a quite distinct (and relatively easy to discern experimentally) difference: the COII configuration induces the lutetium trimer distortions, which are forbidden by symmetry in COI.

One difficulty in definitively determining the magnetic ground state of LuFe2O4 from first principles is the presence of multiple magnetic configurations of COI and COII with similar energy. The energetics of these magnetic configurations in COI and COII are shown in Extended Data Fig. 9b. The ground-state magnetization identified for COI is M = 0.5μB/Fe, whereas the lowest-energy state of COII has M = 1.2μB/Fe. Both COI and COII have low-energy states with M ranging from 0μB/Fe to 1.2μB/Fe. According to experimental measurements44, LuFe2O4 undergoes a ferrimagnetic transition at TC = 240–250 K, resulting in an Ising-like net magnetic moment along the c direction. The reported moment ranges from 0.8μB/Fe to 1.4μB/Fe and is particularly sensitive to oxygen stoichiometry41. We thus consider an Ising Hamiltonian, H = Σi<j Jij SiSj, to describe the magnetic properties of the LuFe2O4 system. Here Jij is the symmetric super-exchange interaction between spins i and j, and the spin S = ±2 (S = 5/2) for Fe2+ (Fe3+) in trigonal bipyramidal coordination with high spin. The six super-exchange interactions considered correspond to nearest-neighbour interactions for iron (2+/3+) located either in the same plane, in neighbouring planes within a bilayer, or in neighbouring planes between different bilayers; see Extended Data Fig. 8a. These parameters are estimated from first principles.
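Exchange parameters of this kind are typically obtained by computing DFT total energies for a set of collinear spin configurations and solving for the J values by linear least squares. The sketch below illustrates the idea only; the bond-counting design matrix and the energies are hypothetical and are not taken from the study.

# Illustrative only: fit Ising exchange constants to (hypothetical) DFT total
# energies, E(config) = E0 + sum over bond types of J_type * sum(Si*Sj).
import numpy as np

# Rows: spin configurations. Columns: [1, sum(Si*Sj) per bond type] for three
# illustrative bond types (in-plane, intra-bilayer, inter-bilayer).
A = np.array([
    [1.0,  24.0,  12.0,   6.0],
    [1.0, -24.0,  12.0,  -6.0],
    [1.0,  24.0, -12.0,   6.0],
    [1.0,   0.0,   0.0,  -6.0],
    [1.0, -24.0, -12.0,   6.0],
])
E = np.array([-72.4, -146.8, -53.2, -101.2, -125.2])   # meV, hypothetical

(E0, J_inplane, J_intra, J_inter), *_ = np.linalg.lstsq(A, E, rcond=None)
print(f"J_in-plane = {J_inplane:.2f} meV, J_intra-bilayer = {J_intra:.2f} meV, "
      f"J_inter-bilayer = {J_inter:.2f} meV")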
The magnetic ground states and Curie transition temperatures are then determined from Monte Carlo simulations. The simulated TC for COI (TC ≈ 500 K) is found to be about a factor of two higher than the experimentally reported value (TC ≈ 240 K). The possible origins of this discrepancy are: (1) correlations among electrons that are not captured by DFT + U and that strongly influence the magnetic exchange interactions, or (2) relativistic interactions between spins that are not included in the present model, yet often strongly compete with symmetric exchange interactions. We therefore introduce a normalization factor of approximately 0.48 to match the calculated TC of COI with the corresponding experimentally reported value, and use this same factor throughout our study. Although, in essence, the TC of bulk (COI) LuFe2O4 becomes a parameter, the changes in TC (for example, from COI to COII) are calculated from a parameter-free, first-principles theory. The corresponding results are displayed in Fig. 3a and b for COI and COII (with Q ≈ 1), respectively, where it is seen that the ferrimagnetic transition temperature of COII is substantially larger (TC ≈ 300 K) than that of COI. Noting that COII induces lutetium trimer distortions, we repeat our Monte Carlo simulations of the magnetic transition for different values of Q. As shown in Fig. 3b, we observe that TC increases monotonically with the magnitude of this distortion. Calculated super-exchange values as a function of distortion are shown in Extended Data Fig. 8b. The largest super-exchange interactions correspond to in-plane interactions and follow a linear trend as a function of Q. Although the Fe3+–Fe3+ interactions increase and become more antiferromagnetic, the Fe2+–Fe2+ interactions follow the opposite trend. The Fe2+–Fe3+ interactions do not exhibit substantial variation as a function of Q. This result is consistent with the Goodenough–Kanamori–Anderson (GKA) rules, considering the change in angle of the in-plane Fe–O–Fe super-exchange paths, as illustrated in Extended Data Fig. 8a. As the distortions are increased, on average the Fe–O–Fe angle becomes larger than the undistorted value of 118° for Fe3+–Fe3+ interactions, smaller for Fe2+–Fe2+ and remains unchanged for Fe2+–Fe3+. Because the Fe2+ and Fe3+ ions both have partially filled d orbitals, the GKA rules predict antiferromagnetic and ferromagnetic super-exchange interactions for 180° and 90° angles, respectively, consistent with the trend observed in Extended Data Fig. 8b. The observed control of TC by the trimer distortion is universal and parameter-free, and is independent of any specific detail of the first-principles methods.

DFT calculations were also performed on periodic (LuFeO3)m/(LuFe2O4)1 supercells for superlattices with m = 1, 3 and 5. The size of these supercells varies depending on whether m is odd or even. For odd values of m, the number of (LuFeO3)m/(LuFe2O4)1 blocks has to be doubled along the out-of-plane direction to obtain a periodic supercell. For even values, the number of blocks has to be tripled. Additionally, taking into account that trimerization of the LuFeO3 structure and charge order of the LuFe2O4 structure demand a supercell containing three formula units per plane, the total number of atoms for the different periodic (LuFeO3)m/(LuFe2O4)1 supercells is N = 6(5m + 7) and N = 9(5m + 7) for odd and even values of m, respectively.
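A small helper reproduces these atom counts, assuming five atoms per LuFeO3 formula unit and seven per LuFe2O4 formula unit; it is a bookkeeping sketch, not part of the original analysis.

def supercell_atoms(m: int) -> int:
    """Atoms in a periodic (LuFeO3)m/(LuFe2O4)1 supercell: 5m + 7 atoms per
    formula-unit column, 3 formula units per plane, and the block repeated
    twice along c for odd m or three times for even m."""
    repeats = 2 if m % 2 == 1 else 3
    return repeats * 3 * (5 * m + 7)

print([(m, supercell_atoms(m)) for m in (1, 3, 5)])   # [(1, 72), (3, 132), (5, 192)]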
Owing to computational constraints, most of our calculations were performed on (LuFeO3)1/(LuFe2O4)1 (72 atoms) and (LuFeO3)3/(LuFe2O4)1 (132 atoms) supercells. Calculations were also performed on (LuFeO3)5/(LuFe2O4)1 supercells (192 atoms)—without considering full structural optimizations—to confirm the key result obtained for m = 3, that is, the stabilization of hole-doped LuFe2O4 blocks with head-to-head domain walls occurring in the middle of the LuFeO3 blocks, as discussed below. In a single ferroelectric domain of a (LuFeO3)m/(LuFe2O4)1 superlattice, the most stable charge-order configuration is COII, as shown in Extended Data Fig. 10d for m = 3. The lowest-energy magnetic configuration corresponds to the same ferrimagnetic arrangement found in bulk LuFe2O4 with the COII pattern, characterized by M = 1.2μB/Fe. Once LuFeO3 is added to LuFe2O4—that is, in (LuFeO3)m/(LuFe2O4)1 superlattices with m ≥ 1—the COI pattern becomes unstable. The single-domain configuration with the COII pattern of the m = 3 superlattice is metastable against multidomain configurations, even under E = 0 electrical boundary conditions. We refer to the multidomain configurations as (1) ‘undoped-type’ (Extended Data Fig. 10e) and (2) ‘doped-type’ (Extended Data Fig. 10f). For the m = 1 composition, the difference in energy between the domain types is negligible. By contrast, for the m = 3 superlattice, the doped-type domain configuration is about 8 meV per formula unit more stable than the undoped-type, which itself is about 8 meV per formula unit more stable than the single-domain configuration, demonstrating the added stability of this configuration with increasing m. We also verified that the doped-type domain configuration remains the most stable for the m = 5 superlattice.

The undoped-type structure corresponds to the COII ordering pattern in which the AB bilayers are stacked in an ABAB sequence along the c direction. Here the superlattice exhibits charged tail-to-tail and head-to-head ferroelectric domain walls. For the m = 3 superlattice, the tail-to-tail domain wall occurs at the interface between a bilayer and LuFeO3, whereas the head-to-head domain wall always occurs in the LuFeO3 block next to the second bilayer, as labelled in Extended Data Fig. 10e. The magnetism of this structure is similar to that of the single ferroelectric domain case shown in Extended Data Fig. 10d. The starting point for understanding the doped-type domain structure is again the COII ordering pattern of bulk LuFe2O4. In this case, however, nearly an electron of charge from an Fe2+ cation located in the A layer of LuFe2O4 is transferred to an Fe3+ cation located in the LuFeO3 layer, as illustrated in Extended Data Fig. 10f. This results in Fe3+-rich iron bilayers and a corresponding stacking sequence of AAAA (see Extended Data Fig. 10f). Therefore, we can consider the head-to-head domains in this configuration as being stabilized as a result of the LuFe2O4 layers electron-doping the LuFeO3 layers. Indeed, in the doped-type domains, the tail-to-tail domain wall occurs at the interface between a bilayer and LuFeO3, whereas the head-to-head domain wall occurs right at the doping layer in the LuFeO3 block, as shown in Extended Data Fig. 10f. Alternatively, we can consider the LuFeO3 layers as hole-doping the LuFe2O4. In this doped type of ferroelectric domain, the only contribution to the total magnetization in the hole-doped LuFe2O4 layers is from ferromagnetically aligned Fe2+ ions located at the centre of the Fe3+ hexagons.
Because the in-plane Fe3+–Fe3+ super-exchange interactions are always found to be antiferromagnetic, there is no net contribution from the Fe3+ ions. This leads to a magnetization of M = 1.33μB/Fe for each Fe3+-doped bilayer, as shown in Extended Data Fig. 9c. Although this charged domain-wall configuration might appear energetically unfavourable, mobile carriers redistribute to screen the excess of bound charges. The calculated density of states for each iron cation in the heterostructure is shown in Extended Data Fig. 10e. Our calculations demonstrate that electrons migrate from the LuFe2O4 block to occupy iron states in the contiguous LuFeO3 layers, which nevertheless remain insulating. Owing to the shift in the electrostatic potential caused by the LuFeO3 dipoles, conducting Fe d states appear at the head-to-head domain walls (Extended Data Fig. 10f). Because the potential drop across the structure should increase with superlattice periodicity, we expect the electrical conductivity of the walls to grow monotonically with m. The same qualitative trends are observed in the m = 5 superlattices.

Structural relaxations of bulk LuFe2O4 were performed using the DFT + U method45 with the Perdew–Burke–Ernzerhof (PBE)46 form of the exchange-correlation functional, as implemented in the VASP code47. We used the projector augmented-wave method48. We considered the Lu 4f states in the core; for the Fe 3d states we chose U = 4.5 eV (where U is the screened Coulomb interaction) and J = 0.95 eV (where J is the Hund's coupling on the iron site). The choice of U and J is based on our previous study of the LuFeO3 system16, but all results remain qualitatively similar for choices of U > 4.5 eV. We used a 6 × 6 × 2 k-point mesh and a kinetic-energy cut-off of 500 eV. The Hellmann–Feynman forces were converged to 0.001 eV Å−1. To account for the different charge orders (COI, COII and COIII), a supercell of the high-symmetry cell was considered. Using the structures optimized with VASP, the spin–spin exchange interactions were estimated by fitting the Ising model to the energies of different magnetic configurations calculated with the linearized augmented plane-wave (LAPW) method as implemented in the Wien2k code49. We considered 25 magnetic configurations to construct the model Hamiltonian for the high-symmetry phase and for each charge-ordered state. To be consistent, we used the same U and J values and the same k-point mesh as for VASP. We used U = 8.8 eV and J = 0.95 eV for the Lu 4f states. The plane-wave cut-off, defined as the ‘muffin tin’ radius multiplied by kmax, is 7.0. For the (LuFeO3)m/(LuFe2O4)1 superlattices, all internal parameters as well as the out-of-plane lattice constants were fully relaxed, whereas the in-plane lattice constant was fixed to the average value of the corresponding LuFeO3 and LuFe2O4 lattice constants, a = 3.46 Å. All calculations involving superlattices were performed in VASP.

Magnetic imaging was performed using cryogenic PEEM at Beamline 11.0.1 (ref. 50) of the Advanced Light Source, taking advantage of XMCD at the Fe L edge51. Magnetic images are obtained by dividing images recorded with left and right circular polarization. The resulting dark and bright contrast is a measure of the projection of the magnetization direction onto the X-ray polarization vector, which has a 30° angle of incidence relative to the surface of the sample.
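A minimal sketch of how such a magnetic-contrast image can be formed from the two polarization channels is given below; the arrays are synthetic placeholders rather than recorded PEEM images.

# Sketch: form the magnetic-contrast image from PEEM images recorded with
# left- and right-circularly polarized X-rays. The text above divides the two
# images; the normalized asymmetry (I_L - I_R)/(I_L + I_R) is an equivalent
# common choice.
import numpy as np

rng = np.random.default_rng(1)
I_left = rng.uniform(900.0, 1100.0, size=(256, 256))    # counts, placeholder
I_right = rng.uniform(900.0, 1100.0, size=(256, 256))

xmcd_ratio = I_left / I_right
xmcd_asymmetry = (I_left - I_right) / (I_left + I_right)
print(xmcd_ratio.mean(), xmcd_asymmetry.std())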
To probe the coupling between the ferroic orders, XMCD-PEEM measurements were made on a region of the (LuFeO3)m/(LuFe2O4)1 sample that was poled with a PFM tip to form distinct regions of ‘up’ and ‘down’ c-oriented ferroelectric polarization. The PFM poling was performed at 300 K, eight days before the PEEM imaging, to ensure that the imaged polarization configuration was robust. In the PEEM, the sample was first cooled to 200 K to increase the magnetic contrast from the buried LuFe2O4 layer. As shown in Fig. 4b, the magnetic structure displays the same distinct pattern as the ferroelectric polarization. Finally, to confirm that the magnetic image shown in Fig. 4b could not be due to extrinsic effects, the sample was heated to 320 K. As shown in Fig. 4d, the resulting dichroism dropped by about 70%, consistent with the drop in the saturation magnetization between these temperatures identified by SQUID magnetometry. Because the overall XLD contrast was constant between these two temperatures (not displayed in Fig. 2f, but acquired using the same experimental configuration) and any extrinsic chemical contrast would similarly be constant, this indicates that the strong dichroism observed at 200 K must arise from the magnetic order.
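Both the neutron integrated intensity in Extended Data Fig. 4b and the XLD data in Fig. 2f are described above by fits of an order parameter (plus, for the XLD, a constant background). The sketch below shows a fit of this type on synthetic data, using ordinary least squares with the mean-field exponent rather than the Bayesian algorithm applied to Fig. 2f; the numbers are illustrative only.

# Minimal sketch of an order-parameter fit, I(T) = I0*(1 - T/Tc)^(2*beta) + bg
# below Tc and bg above, with the mean-field exponent beta = 0.5.
import numpy as np
from scipy.optimize import curve_fit

def order_parameter(T, I0, Tc, bg, beta=0.5):
    reduced = np.clip(1.0 - T / Tc, 0.0, None)
    return I0 * reduced**(2.0 * beta) + bg

rng = np.random.default_rng(0)
T_data = np.linspace(5.0, 325.0, 15)                                      # K
I_data = order_parameter(T_data, 1.0, 238.0, 0.05) + rng.normal(0, 0.02, T_data.size)

popt, pcov = curve_fit(lambda T, I0, Tc, bg: order_parameter(T, I0, Tc, bg),
                       T_data, I_data, p0=[1.0, 250.0, 0.0])
print(f"Tc = {popt[1]:.0f} +/- {np.sqrt(pcov[1, 1]):.0f} K")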


News Article | November 21, 2016
Site: phys.org

New X-ray methods at the Department of Energy's SLAC National Accelerator Laboratory have captured the highest resolution room-temperature images of this protein complex, which allows scientists to closely watch how water is split during photosynthesis at the temperature at which it occurs naturally. The research team took diffraction images using the bright, fast pulses of X-rays at SLAC's X-ray free-electron laser - the Linac Coherent Light Source (LCLS), a DOE Office of Science User Facility. Nature published the study on Nov. 21. Previously, the resting state of photosystem II had been seen in detail using samples that were frozen. In this latest study, the researchers were able to see two key steps in photosynthetic water splitting under conditions as it occurs in nature, a big step to decoding how the process works in detail. A damage-free, room temperature study means there are fewer artifacts in the images, and this gives a clearer picture of how photosystem II works in nature. For many years now, scientists have been trying to understand this process in meticulous detail. Besides its fundamental importance to science, understanding this process might help develop ways to create artificial photosynthesis devices that can serve as potential clean energy sources. "The eventual goal is to emulate what photosynthesis has been doing for about three billion years. This has been a research challenge for decades," said Junko Yano, principal investigator and senior scientist at Lawrence Berkeley National Laboratory. "We now have the right tool, the femtosecond X-ray laser pulses created at LCLS, that allows us to observe the water-splitting reaction as it happens, in real time, and as it happens in nature." "We want to have enough snapshots of this process to see how exactly the oxygen is formed," added Uwe Bergmann, a distinguished scientist at SLAC, and co-author of the Nature paper. "This method - to be able to get high resolution images at room temperature and under illumination in real time - is really the strategy we need to catch the molecule in the act." The international research team is a long-standing collaboration between SLAC and Berkeley Lab, and includes Humboldt University in Germany, Umeå University and Uppsala University in Sweden, Stanford University, Brookhaven National Laboratory and University of Oxford in the United Kingdom. New Techniques and New Research Directions In previous high-resolution studies of the system at synchrotron light sources, samples experienced radiation damage from longer exposure to X-rays. At LCLS, the researchers were able to take advantage of the ultrafast laser pulses to collect X-ray crystallography and spectroscopy data before damage occurred. "The beauty of the LCLS is that the laser pulses are so short - only 40 femtoseconds in duration, but very intense - that you can collect the data before the sample is destroyed," said co-author Jan Kern, scientist at Berkeley Lab and SLAC, in a related press release. "It's very new, and there are only two places in the world, LCLS and the SPring-8 Angstrom Compact free electron LAser (SACLA), where this can be done at present." In a recent experiment at SACLA, the only other operating hard X-ray free-electron laser, another team of scientists was able to look at the first step in the cycle in a frozen sample, and under dark conditions - which means the water-splitting reaction had not yet been initiated in the protein complex. 
This latest study at SLAC catches this first step, as well as another step that is two stages further along in the four-step cycle, and at the same temperature at which the process normally occurs in nature. To do this, the scientists placed droplets of a solution containing small crystals of photosystem II on a moving conveyor belt and illuminated the samples with pulses of green light from a laser to initiate the water-splitting reaction. After two light pulses, they captured images of the crystals using X-rays, with a resolution finer than 2.5 angstroms - significantly better than previously achieved at room temperature. The water-splitting reaction takes place at a metal catalyst within the photosystem II protein, known as the oxygen-evolving complex, which is made up of four manganese atoms and one calcium atom. The complex uses the energy from light to form pure oxygen from two water molecules. The four manganese atoms are critical in shuffling electrons through the cycle, but it is unknown where exactly in the complex the water involved is located or where the oxygen formation occurs. To sort this out, the researchers used ammonia, a water substitute, to narrow down where oxygen atoms from two water molecules combine to form an oxygen molecule. If ammonia was bound to a site and the reaction still proceeded, that site is unlikely to be part of the oxygen molecule formation. The results from this study offered a surprise - the data do not seem to support two leading theories for how the reaction proceeds within the oxygen-evolving complex. In future studies using this same technique, the researchers hope to capture more images at different steps of the process, which will allow them to further refine the details of the water-splitting reaction. "The chemistry is so unusual," said co-principal investigator Vittal Yachandra, a senior scientist at Berkeley Lab. "Learning how exactly this water-splitting process works will be a breakthrough in our understanding, and it can help in the development of solar fuels and renewable energy." More information: Iris D. Young et al, Structure of photosystem II and substrate binding at room temperature, Nature (2016). DOI: 10.1038/nature20161


News Article | November 2, 2016
Site: www.nature.com

At the threshold of the Sahara Desert near Ouarzazate, Morocco, some 500,000 parabolic mirrors run in neat rows across a valley, moving slowly in unison as the Sun sweeps overhead. This US$660-million solar-energy facility opened in February and will soon have company. Morocco has committed to generating 42% of its electricity from renewable sources by 2020. Across Africa, several nations are moving aggressively to develop their solar and wind capacity. The momentum has some experts wondering whether large parts of the continent can vault into a clean future, bypassing some of the environmentally destructive practices that have plagued the United States, Europe and China, among other places. “African nations do not have to lock into developing high-carbon old technologies,” wrote Kofi Annan, former secretary-general of the United Nations, in a report last year1. “We can expand our power generation and achieve universal access to energy by leapfrogging into new technologies that are transforming energy systems across the world.” That's an intoxicating message, not just for Africans but for the entire world, because electricity demand on the continent is exploding. Africa's population is booming faster than anywhere in the world: it is expected to almost quadruple by 2100. More than half of the 1.2 billion people living there today lack electricity, but may get it soon. If much of that power were to come from coal, oil and natural gas, it could kill international efforts to slow the pace of global warming. But a greener path is possible because many African nations are just starting to build up much of their energy infrastructure and have not yet committed to dirtier technology. Several factors are fuelling the push for renewables in Africa. More than one-third of the continent's nations get the bulk of their power from hydroelectric plants, and droughts in the past few years have made that supply unreliable. Countries that rely primarily on fossil fuels have been troubled by price volatility and increasing regulations. At the same time, the cost of renewable technology has been dropping dramatically. And researchers are finding that there is more potential solar and wind power on the continent than previously thought — as much as 3,700 times the current total consumption of electricity. This has all led to a surging interest in green power. Researchers are mapping the best places for renewable-energy projects. Forward-looking companies are investing in solar and wind farms. And governments are teaming up with international-development agencies to make the arena more attractive to private firms. Yet this may not be enough to propel Africa to a clean, electrified future. Planners need more data to find the best sites for renewable-energy projects. Developers are wary about pouring money into many countries, especially those with a history of corruption and governmental problems. And nations will need tens of billions of dollars to strengthen the energy infrastructure. Still, green ambitions in Africa are higher now than ever before. Eddie O'Connor, chief executive of developer Mainstream Renewable Power in Dublin, sees great potential for renewable energy in Africa. His company is building solar- and wind-energy facilities there and he calls it “an unparalleled business opportunity for entrepreneurs”. Power outages are a common problem in many African nations, but Zambia has suffered more than most in the past year. 
It endured a string of frequent and long-lasting blackouts that crippled the economy. Pumps could not supply clean water to the capital, Lusaka, and industries had to slash production, leading to massive job lay-offs. The source of Zambia's energy woes is the worst drought in southern Africa in 35 years. The nation gets nearly 100% of its electricity from hydropower, mostly from three large dams, where water levels have plummeted. Nearby Zimbabwe, South Africa and Botswana have also had to curtail electricity production. And water shortages might get worse. Projections suggest that the warming climate could reduce rainfall in southern Africa even further in the second half of the twenty-first century. Renewable energy could help to fill the gap, because wind and solar projects can be built much more quickly than hydropower, nuclear or fossil-fuel plants. And green-power installations can be expanded piecemeal as demand increases. Egypt, Ethiopia, Kenya, Morocco and South Africa are leading the charge to build up renewable power, but one of the biggest barriers is insufficient data. Most existing maps of wind and solar resources in Africa do not contain enough detailed information to allow companies to select sites for projects, says Grace Wu, an energy researcher at the University of California, Berkeley. She co-authored a report2 on planning renewable-energy zones in 21 African countries, a joint project by the Lawrence Berkeley National Laboratory (LBNL) in California and the International Renewable Energy Agency (IRENA) in Abu Dhabi. The study is the most comprehensive mapping effort so far for most of those countries, says Wu. It weighs the amount of solar and wind energy in the nations, along with factors such as whether power projects would be close to transmission infrastructure and customers, and whether they would cause social or environmental harm. “The IRENA–LBNL study is the only one that has applied a consistent methodology across a large region of Africa,” says Wu. High-resolution measurements of wind and solar resources have typically been done by government researchers or companies, which kept tight control of their data. The Berkeley team used a combination of satellite and ground measurements purchased from Vaisala, an environmental monitoring company based in Finland that has since made those data publicly available through IRENA's Global Atlas for Renewable Energy. The team also incorporated geospatial data — the locations of roads, towns, existing power lines and other factors — that could influence decisions about where to put energy projects. “If there's a forest, you don't want to cut it down and put a solar plant there,” says co-author Ranjit Deshmukh, also an energy researcher at Berkeley. The amount of green energy that could be harvested in Africa is absolutely massive, according to another IRENA report3, which synthesized 6 regional studies and found potential for 300 million megawatts of solar photovoltaic power and more than 250 million megawatts of wind (see 'Power aplenty'). By contrast, the total installed generating capacity — the amount of electricity the entire continent could produce if all power plants were running at full tilt — was just 150,000 megawatts at the end of 2015. Solar and wind power accounted for only 3.6% of that. The estimate of wind resources came as a surprise, says Oliver Knight, a senior energy specialist for the World Bank's Energy Sector Management Assistance Program in Washington DC. 
Although people have long been aware of Africa's solar potential, he says, as of about a decade ago, few local decision-makers recognized the strength of the wind. “People would have told you there isn't any wind in regions such as East Africa.” The World Bank is doing its own studies, which will assess wind speeds and solar radiation at least every 10 minutes at selected sites across target countries. It will ask governments to add their own geospatial data, and will combine all the information into a user-friendly format that is freely available and doesn't require advanced technical knowledge, says Knight.“It should be possible for a mid-level civil servant in a developing country to get online and actually start playing with this.” In the semi-arid Karoo region of South Africa, a constellation of bright white wind turbines rises 150 metres above the rolling grassland. Mainstream Renewable Power brought this project online in July, 17 months after starting construction. The 35 turbines add 80 megawatts to South Africa's supply, enough to power about 70,000 homes there. The Noupoort Wind Farm is just one of about 100 wind and solar projects that South Africa has developed in the past 4 years, as prices fell below that of coal and construction lagged on two new massive coal plants. South Africa is primed to move quickly to expand renewable energy, in part thanks to its investment in data. Environmental scientist Lydia Cape works for the Council for Scientific and Industrial Research, a national lab in Stellenbosch. She and her team have created planning maps for large-scale wind and solar development and grid expansion. Starting with data on the energy resources, they assessed possible development sites for many types of socio-economic and environmental impact, including proximity to electricity demand, economic benefits and effects on biodiversity. The South African government accepted the team's recommendations and designated eight Renewable Energy Development Zones that are close to consumers and to transmission infrastructure — and where power projects will cause the least harm to people and ecosystems. They total “about 80,000 square kilometres, the size of Ireland or Scotland, roughly”, says Cape. The areas have been given streamlined environmental authorization for renewable projects and transmission corridors, she says. But for African nations to go green in a big way, they will need a huge influx of cash. Meeting sub-Saharan Africa's power needs will cost US$40.8 billion a year, equivalent to 6.35% of Africa's gross domestic product, according to the World Bank. Existing public funding falls far short, so attracting private investors is crucial. Yet many investors perceive African countries as risky, in part because agreements there require long and complex negotiations and capital costs are high. “It's a real challenge,” says Daniel Kammen, a special envoy for energy for the US Department of State and an energy researcher at the University of California, Berkeley. “Many of these countries have not had the best credit ratings.” Elham Ibrahim, the African Union's commissioner for infrastructure and energy, advises countries to take steps to reassure private investors. Clear legislation supporting renewable energy is key, she says, along with a track record of enforcing commercial laws. South Africa is setting a good example. In 2011, it established a transparent process for project bidding called the Renewable Energy Independent Power Producer Procurement Programme (REIPPPP). 
The programme has generated private investments of more than $14 billion to develop 6,327 megawatts of wind and solar. Mainstream Renewable Power has won contracts for six wind farms and two solar photovoltaic plants through REIPPPP. “This programme is purer than the driven snow,” says O'Connor. “They publish their results. They give state guarantees. They don't delay you too much.” Although the country's main electricity supplier has wavered in its support for renewables, the central government remains committed to the programme, he says. “I would describe the risks in South Africa as far less than the risks in England in investing in renewables.” For countries less immediately attractive to investors, the World Bank Group launched the Scaling Solar project in January 2015. This reduces risk to investors with a suite of guarantees, says Yasser Charafi, principal investment officer for African infrastructure with the International Finance Corporation (IFC) in Dakar, which is part of the World Bank Group. Through the Scaling Solar programme, the IFC offers low-priced loans; the World Bank guarantees that governments will buy the power generated by the projects; and the group's Multilateral Investment Guarantee Agency offers political insurance in case of a war or civil unrest. Zambia, the first country to have access to Scaling Solar, has won two solar projects that will together provide 73 megawatts. Senegal and Madagascar were next, with agreements to produce 200 and 40 megawatts, respectively. Ethiopia has just joined, and the IFC will give two further countries access to the programme soon; its target is to develop 1,000 megawatts in the first 5 years. That power won't be useful if it can't get to users. One of the big barriers to a clean-energy future in Africa is that the continent lacks robust electricity grids and transmission lines to move large amounts of power within countries and across regions. But that gap also provides some opportunities. Without a lot of existing infrastructure and entrenched interests, countries there might be able to scale up renewable projects and manage electricity more nimbly than developed nations. That's what happened with the telephone industry: in the absence of much existing land-line infrastructure, African nations rapidly embraced mobile phones. The future could look very different from today's electricity industry. Experts say that Africa is likely to have a blend of power-delivery options. Some consumers will get electricity from a grid, whereas people in rural areas and urban slums — where it is too remote or too expensive to connect to the grid — might end up with small-scale solar and wind installations and minigrids. Still, grid-connected power is crucial for many city dwellers and for industrial development, says Ibrahim. And for renewables to become an important component of the energy landscape, the grid will need to be upgraded to handle fluctuations in solar and wind production. African nations can look to countries such as Germany and Denmark, which have pioneered ways to deal with the intermittent nature of renewable energy. One option is generating power with existing dams when solar and wind lag, and cutting hydropower when they are plentiful. Another technique shuttles electricity around the grid: for example, if solar drops off in one place, power generated by wind elsewhere can pick up the slack. 
A third strategy, called demand response, reduces electricity delivery to multiple customers by imperceptible amounts when demand is peaking. These cutting-edge approaches require a smart grid and infrastructure that connects smaller grids in different regions so that they can share electricity. Africa has some of these 'regional interconnections', but they are incomplete. Four planned major transmission corridors will need at least 16,500 kilometres of new transmission lines, costing more than $18 billion, says Ibrahim. Likewise, many countries' internal power grids are struggling to keep up. That's part of what makes working in energy in Africa challenging. Prosper Amuquandoh is an inspector for the Ghana Energy Commission and the chief executive of Smart and Green Energy Group, an energy-management firm in Accra. In Ghana, he says, “there's a lot of generation coming online”. The country plans to trade electricity with its neighbours in a West African Power Pool, Amuquandoh says, but the current grid cannot handle large amounts of intermittent power. Despite the challenges, he brims with enthusiasm when he talks about the future: “The prospects are huge.” With prices of renewables falling, that kind of optimism is spreading across Africa. Electrifying the continent is a moral imperative for everyone, says Charafi. “We cannot just accept in the twenty-first century that hundreds of millions of people are left out.”


Diz-Munoz A.,University of California at Berkeley | Diz-Munoz A.,University of California at San Francisco | Fletcher D.A.,University of California at Berkeley | Fletcher D.A.,Lawrence Berkeley National Laboratory | Weiner O.D.,University of California at San Francisco
Trends in Cell Biology | Year: 2013

Many cell phenomena that involve shape changes are affected by the intrinsic deformability of the plasma membrane (PM). Far from being a passive participant, the PM is now known to physically, as well as biochemically, influence cell processes ranging from vesicle trafficking to actin assembly. Here we review current understanding of how changes in PM tension regulate cell shape and movement, as well as how cells sense PM tension. © 2012 Elsevier Ltd.


Larabell C.A.,University of California at San Francisco | Larabell C.A.,Lawrence Berkeley National Laboratory | Nugent K.A.,University of Melbourne
Current Opinion in Structural Biology | Year: 2010

X-ray imaging of biological samples is progressing rapidly. In this paper we review the progress to date in high-resolution imaging of cellular architecture. In particular we survey the progress in soft X-ray tomography and argue that the field is coming of age and that important biological insights are starting to emerge. We then review the new ideas based on coherent diffraction. These methods are at a much earlier stage of development but, as they eliminate the need for X-ray optics, have the capacity to provide substantially better spatial resolution than zone plate-based methods. © 2010 Elsevier Ltd.


Martin L.W.,University of Illinois at Urbana - Champaign | Ramesh R.,Lawrence Berkeley National Laboratory
Acta Materialia | Year: 2012

We review recent developments and advances in multiferroic and magnetoelectric heterostructures. Driven by the promise of new materials functionality (i.e. electric field control of ferromagnetism), extensive on-going research is focused on the search for and characterization of new multiferroic materials. In this review we develop a comprehensive overview of multiferroic materials, including details on the nature of order parameters and coupling in these materials, the scarcity of such materials in single phase form, routes to create and control the properties of these materials, and we finish by investigating such effects in a number of model materials and heterostructures. This includes an in-depth discussion of BiFeO3, an investigation of recent advances in magnetoelectric materials, and an introduction to a variety of approaches by which one can achieve novel materials functionality. © 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.


Frauendorf S.,University of Notre Dame | Macchiavelli A.O.,Lawrence Berkeley National Laboratory
Progress in Particle and Nuclear Physics | Year: 2014

The role of neutron-proton pairing correlations on the structure of nuclei along the N=Z line is reviewed. Particular emphasis is placed on the competition between isovector (T=1) and isoscalar (T=0) pair fields. The expected properties of these systems, in terms of pairing collective motion, are assessed by different theoretical frameworks including schematic models, realistic Shell Model and mean field approaches. The results are contrasted with experimental data with the goal of establishing clear signals for the existence of neutron-proton (np) condensates. We will show that there is clear evidence for an isovector np condensate as expected from isospin invariance. However, and contrary to early expectations, a condensate of deuteron-like pairs appears quite elusive and pairing collectivity in the T=0 channel may only show in the form of a phonon. Arguments are presented for the use of direct reactions, adding or removing an np pair, as the most promising tool to provide a definite answer to this intriguing question. © 2014 Elsevier B.V. All rights reserved.


Bouchard K.E.,University of California at San Francisco | Bouchard K.E.,Lawrence Berkeley National Laboratory | Chang E.F.,University of California at San Francisco
Journal of Neuroscience | Year: 2014

Speech production requires the precise control of vocal tract movements to generate individual speech sounds (phonemes) which, in turn, are rapidly organized into complex sequences. Multiple productions of the same phoneme can exhibit substantial variability, some of which is inherent to control of the vocal tract and its biomechanics, and some of which reflects the contextual effects of surrounding phonemes (“coarticulation”). The role of the CNS in these aspects of speech motor control is not well understood. To address these issues, we recorded multielectrode cortical activity directly from human ventral sensory-motor cortex (vSMC) during the production of consonant-vowel syllables. We analyzed the relationship between the acoustic parameters of vowels (pitch and formants) and cortical activity on a single-trial level. We found that vSMC activity robustly predicted acoustic parameters across vowel categories (up to 80% of variance), as well as different renditions of the same vowel (up to 25% of variance). Furthermore, we observed significant contextual effects on vSMC representations of produced phonemes that suggest active control of coarticulation: vSMC representations for vowels were biased toward the representations of the preceding consonant, and conversely, representations for consonants were biased toward upcoming vowels. These results reveal that vSMC activity for phonemes are not invariant and provide insight into the cortical mechanisms of coarticulation. © 2014 the authors.


Bzdak A.,Brookhaven National Laboratory | Koch V.,Lawrence Berkeley National Laboratory
Physical Review C - Nuclear Physics | Year: 2012

We show that the effect of finite acceptance drastically influences the net-baryon and net-charge cumulants, which are believed to be sensitive probes of the QCD phase diagram. We derive general formulas that relate the true cumulants Kn, which reflect the full dynamics of the system, with the actually measured cumulants cn for a given acceptance, modeled by a binomial probability parameter p. We find that this relation involves additional moments which cannot be expressed by cumulants and should be measured in order to extract any potential information about the QCD critical point. We demonstrate that for a wide range of the true cumulant ratios Kn/Km the measured ratios cn/cm quickly converge if p<1/2, which makes the interpretation of the data very challenging, especially in the case of the net-proton cumulants. Our study further suggests that the measurement of net-charge cumulants may be more advantageous for the investigation of the QCD phase diagram. © 2012 American Physical Society.


Schneidman-Duhovny D.,University of California at San Francisco | Hammel M.,Lawrence Berkeley National Laboratory | Sali A.,University of California at San Francisco
Nucleic Acids Research | Year: 2010

Small angle X-ray scattering (SAXS) is an increasingly common technique for low-resolution structural characterization of molecules in solution. SAXS experiment determines the scattering intensity of a molecule as a function of spatial frequency, termed SAXS profile. SAXS profiles can contribute to many applications, such as comparing a conformation in solution with the corresponding X-ray structure, modeling a flexible or multi-modular protein, and assembling a macromolecular complex from its subunits. These applications require rapid computation of a SAXS profile from a molecular structure. FoXS (Fast X-Ray Scattering) is a rapid method for computing a SAXS profile of a given structure and for matching of the computed and experimental profiles. Here, we describe the interface and capabilities of the FoXS web server (http://salilab.org/foxs). © The Author(s) 2010. Published by Oxford University Press.
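For orientation, the core of such a profile calculation is a Debye-type double sum over atom pairs. The sketch below uses unit form factors and random coordinates purely as an illustration; FoXS itself uses atomic form factors with excluded-volume and hydration-layer corrections and fits the computed profile to the experimental one.

# Rough illustration of a Debye-type profile calculation,
# I(q) = sum_ij f_i(q) f_j(q) sin(q r_ij)/(q r_ij), with unit form factors.
import numpy as np

def debye_profile(coords, q_values):
    """coords: (N, 3) atomic positions in Å; q_values: momentum transfer in 1/Å."""
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # np.sinc(x) = sin(pi x)/(pi x), so sin(qr)/(qr) = np.sinc(q*r/pi)
    return np.array([np.sinc(q * r / np.pi).sum() for q in q_values])

coords = np.random.default_rng(2).uniform(0.0, 30.0, size=(100, 3))  # toy "structure"
q = np.linspace(0.01, 0.5, 50)
profile = debye_profile(coords, q)
print(profile[:3])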


Dobrowolska M.,University of Notre Dame | Tivakornsasithorn K.,University of Notre Dame | Liu X.,University of Notre Dame | Furdyna J.K.,University of Notre Dame | And 3 more authors.
Nature Materials | Year: 2012

The ferromagnetic semiconductor (Ga,Mn)As has emerged as the most studied material for prototype applications in semiconductor spintronics. Because ferromagnetism in (Ga,Mn)As is hole-mediated, the nature of the hole states has direct and crucial bearing on its Curie temperature TC. It is vigorously debated, however, whether holes in (Ga,Mn)As reside in the valence band or in an impurity band. Here we combine results of channelling experiments, which measure the concentrations both of Mn ions and of holes relevant to the ferromagnetic order, with magnetization, transport, and magneto-optical data to address this issue. Taken together, these measurements provide strong evidence that it is the location of the Fermi level within the impurity band that determines TC through determining the degree of hole localization. This finding differs drastically from the often accepted view that TC is controlled by valence band holes, thus opening new avenues for achieving higher values of TC. © 2012 Macmillan Publishers Limited. All rights reserved.


Minor D.L.,University of California at San Francisco | Minor D.L.,Lawrence Berkeley National Laboratory
Cell | Year: 2016

Ligand binding usually moves the target protein from an ensemble of inactive states to a well-defined active conformation. Matthies et al. flip this scheme around, finding that, for the magnesium channel CorA, loss of ligand binding induces an ensemble of conformations that turn the channel on. © 2016 Elsevier Inc.


Bagriantsev S.N.,University of California at San Francisco | Clark K.A.,University of California at San Francisco | Minor D.L.,University of California at San Francisco | Minor D.L.,Lawrence Berkeley National Laboratory
EMBO Journal | Year: 2012

K2P2.1 (TREK-1) is a polymodal two-pore domain leak potassium channel that responds to external pH, GPCR-mediated phosphorylation signals, and temperature through the action of distinct sensors within the channel. How the various intracellular and extracellular sensory elements control channel function remains unresolved. Here, we show that the K2P2.1 (TREK-1) intracellular C-terminal tail (Ct), a major sensory element of the channel, perceives metabolic and thermal commands and relays them to the extracellular C-type gate through transmembrane helix M4 and pore helix 1. By decoupling Ct from the pore-forming core, we further demonstrate that Ct is the primary heat-sensing element of the channel, whereas, in contrast, the pore domain lacks robust temperature sensitivity. Together, our findings outline a mechanism for signal transduction within K2P2.1 (TREK-1) in which there is a clear crosstalk between the C-type gate and the intracellular Ct domain. In addition, our findings support the general notion of the existence of modular temperature-sensing domains in temperature-sensitive ion channels. This marked distinction between gating and sensory elements suggests a general design principle that may underlie the function of a variety of temperature-sensitive channels. © 2012 European Molecular Biology Organization.


Browne E.,Lawrence Berkeley National Laboratory | Tuli J.K.,Brookhaven National Laboratory
Nuclear Data Sheets | Year: 2012

The evaluators present in this publication spectroscopic data and level schemes from radioactive decay and nuclear reactions for all isobars with mass number A=230. This evaluation includes the first experimental evidence of 230Am, produced through the 197Au(40Ar,3n)234Bk (α decay to 230Am) reaction, E(40Ar)=188.4 MeV (2003MoZX). © 2012 Elsevier Inc.


Grant
Agency: Department of Energy | Branch: | Program: STTR | Phase: Phase I | Award Amount: 224.78K | Year: 2014

Generating and screening multiple combinations of genes, enzymes, and other biological parts has become vital to next generation biofuel development and production. This currently requires large personnel costs and sizeable capital investments in robotics equipment. Commercial adoption of the synthetic biology technology will require high throughput capabilities, low cost, product reliability, and full automation with minimal human intervention. HJ Science & Technology, Inc. proposes to demonstrate the feasibility of an automated, programmable, low-cost, and compact platform capable of running rapid and complex bioengineering processes and optimizing next generation biofuel development and production. Our approach leverages a Technology Transfer Opportunity developed by the Joint BioEnergy Institute for microfluidic automation of combinatorial DNA assembly, cell transformation, and screening of cells. The microfluidic automation platform offers significant labor savings by simplifying and accelerating the process and forgoes the need for robotics, as well as reduces costs by using much smaller volumes of reagent materials. In Phase I, we will establish the feasibility of the proposed microfluidic automation technology by engineering and screening E. coli cells for the production of fatty acid ethyl esters (FAEE). By performing all three key operations: 1) construction of a plasmid containing genes for FAEE production in E. coli, 2) transformation into E. coli cells/chromosomal integration, and 3) screening of expression products, on the same microfluidic chip in a fully automated format, we can assess the FAEE production yield as a function of the gene variant in almost real time, thereby greatly enhancing the overall efficiency of the FAEE production process. Commercial Applications and Other Benefits: In addition to biofuel, the proposed microfluidic automation technology has applications in combinatorial DNA construction and functional screening in these industries: 1) pharmaceuticals, especially protein-based drugs, 2) agriculture, 3) bio-based chemicals, 4) biotechnology, 5) enzymes, and 6) DNA synthesis. The proposed technology will also greatly benefit the life-science instrumentation and microfluidic device industries.


Grosse Perdekamp M.,University of Illinois at Urbana - Champaign | Yuan F.,Lawrence Berkeley National Laboratory
Annual Review of Nuclear and Particle Science | Year: 2015

We review the current status and future perspectives of theory and experiments of transverse spin phenomena in high-energy scattering processes off nucleon targets and related issues in nucleon structure and QCD. Systematic exploration of transverse spin effects requires measurements in polarized deep-inelastic scattering, polarized pp collisions, and e+e- annihilations. Sophisticated QCD-based techniques are also needed to analyze the experimental data sets. Copyright ©2015 by Annual Reviews. All rights reserved.


Fletcher D.A.,University of California at Berkeley | Fletcher D.A.,Lawrence Berkeley National Laboratory | Mullins R.D.,University of California at San Francisco
Nature | Year: 2010

The ability of a eukaryotic cell to resist deformation, to transport intracellular cargo and to change shape during movement depends on the cytoskeleton, an interconnected network of filamentous polymers and regulatory proteins. Recent work has demonstrated that both internal and external physical forces can act through the cytoskeleton to affect local mechanical properties and cellular behaviour. Attention is now focused on how cytoskeletal networks generate, transmit and respond to mechanical signals over both short and long timescales. An important insight emerging from this work is that long-lived cytoskeletal structures may act as epigenetic determinants of cell shape, function and fate. © 2010 Macmillan Publishers Limited. All rights reserved.


Patent
Clemson University, Pacific Northwest National Laboratory and Lawrence Berkeley National Laboratory | Date: 2014-02-04

A liquid sampling, atmospheric pressure, glow discharge (LS-APGD) device as well as systems that incorporate the device and methods for using the device and systems are described. The LS-APGD includes a hollow capillary for delivering an electrolyte solution to a glow discharge space. The device also includes a counter electrode in the form of a second hollow capillary that can deliver the analyte into the glow discharge space. A voltage across the electrolyte solution and the counter electrode creates the microplasma within the glow discharge space that interacts with the analyte to move it to a higher energy state (vaporization, excitation, and/or ionization of the analyte).


News Article | December 2, 2016
Site: www.eurekalert.org

David Bailey, Jonathan Borwein, Andrew Mattingly, and Glenn Wightwick will receive the 2017 AMS Levi L. Conant Prize. The four co-authors are honored for their article "The Computation of Previously Inaccessible Digits of π² and Catalan's Constant," Notices of the AMS, August 2013. "This fascinating article will delight any mathematician who has ever been intrigued by the mysteries of the digits of π," the prize citation says. The number π, known to the ancients, is familiar from many contexts, including its role in the formulas for the circumference and area of a circle. Catalan's constant, named after the mathematician Eugène Charles Catalan (1814-1894), is of a more recent vintage. This mysterious number crops up in various problems in mathematics, but it is not known whether it is transcendental, as is π---or even whether it is irrational. The article opens with a historical journey, from Archimedes to the computer age, with many interesting anecdotes along the way. It then goes on to discuss the remarkable "BBP" formula, discovered by Bailey together with Peter Borwein and Simon Plouffe. The formula allows one to calculate binary or hexadecimal digits of π beginning with the nth digit without first calculating any of the preceding n - 1 digits. The article leads readers through not only an elementary proof of the BBP formula but also the unconventional search that originally led to this formula as well as similar formulas for Catalan's constant and π². The article also provides intriguing insights into the age-old question of whether the digits of π are truly randomly distributed. Certainly the task of computing digits of π has a practical side: running paired computations of π provides a strenuous integrity test of computer hardware and software. But, as this outstanding article makes clear, it is the fundamental properties and deep mystery of π that have made it so powerfully fascinating for so long. David H. Bailey is a retired senior scientist at the Lawrence Berkeley National Laboratory, and a research associate at the University of California, Davis. Andrew Mattingly is senior information technology architect at IBM Australia. Glenn Wightwick is deputy vice-chancellor and vice-president (Research) at the University of Technology Sydney. The prize will be awarded posthumously to Jonathan Borwein, who died in August 2016, before learning that he and his co-authors would receive the prize. Borwein was professor of mathematics at the Centre for Computer Assisted Research Mathematics and its Applications at the University of Newcastle. "Borwein's creative work and beautiful expositions will be sorely missed," the prize citation says. Presented annually, the AMS Conant Prize recognizes the best expository paper published in either the Notices of the AMS or the Bulletin of the AMS in the preceding five years. The prize will be awarded Thursday, January 5, 2017, at the Joint Mathematics Meetings in Atlanta. Find out more about AMS prizes and awards at http://www. . Founded in 1888 to further mathematical research and scholarship, today the American Mathematical Society fulfills its mission through programs and services that promote mathematical research and its uses, strengthen mathematical education, and foster awareness and appreciation of mathematics and its connections to other disciplines and to everyday life.
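As a toy illustration of the digit-extraction idea behind the BBP formula described above, the fractional part of 16^n·π can be assembled term by term with modular exponentiation, yielding hexadecimal digits starting at position n+1 without computing any earlier digits. The double-precision sketch below is adequate only for modest n and is far from the optimized arithmetic used in record computations.

# Toy BBP digit extraction: hexadecimal digits of pi starting at position n+1,
# using pi = sum_k 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)).

def bbp_series(n, j):
    """Fractional part of sum_k 16^(n-k) / (8k + j)."""
    s = 0.0
    for k in range(n + 1):                   # 16^(n-k) is an integer here
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    k = n + 1
    while True:                              # tail terms, where 16^(n-k) < 1
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        s = (s + term) % 1.0
        k += 1
    return s

def pi_hex_digits(n, count=8):
    """count hex digits of pi starting at fractional position n+1."""
    x = (4 * bbp_series(n, 1) - 2 * bbp_series(n, 4)
         - bbp_series(n, 5) - bbp_series(n, 6)) % 1.0
    digits = ""
    for _ in range(count):
        x *= 16
        digits += "0123456789ABCDEF"[int(x)]
        x %= 1
    return digits

print(pi_hex_digits(0))   # 243F6A88, the first hex digits of pi after the point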


The US Department of Energy (DOE) selected 33 small businesses to work directly with DOE national labs to accelerate the commercialization of new clean energy technologies. The department's Office of Energy Efficiency and Renewable Energy is investing nearly $6.7 million under Round 1 of the new Small Business Vouchers (SBV) pilot.

For Round 1, the small businesses and laboratories will collaborate on advancing a number of clean energy technologies, including water, wind, bioenergy, solar, buildings, vehicles, fuel cells, geothermal technologies, and advanced manufacturing. The selected small businesses will work with scientists at nine department laboratories: Oak Ridge National Laboratory (ORNL); National Renewable Energy Laboratory (NREL); Lawrence Berkeley National Laboratory (LBNL); Sandia National Laboratories (SNL); Pacific Northwest National Laboratory (PNNL); Idaho National Laboratory (INL); Los Alamos National Laboratory (LANL); Argonne National Laboratory (ANL); and Lawrence Livermore National Laboratory (LLNL).

SBV is a collaborative, national effort that provides $20 million for US companies to help improve industry awareness of national laboratory capabilities and to provide small and mid-size businesses with access to the resources available within the national laboratory system. Vouchers range from $50,000 to $300,000. The companies were competitively chosen from a pool of hundreds of applications, and almost 40% of the businesses represent new DOE-industry partnerships and relationships with the national labs.

Building on the tremendous response to Round 1, the department has also begun accepting applications for Round 2 of the SBV pilot. A total of $13 million in funding remains, and up to 100 vouchers will be awarded over the course of the year.


Researchers at MIT have developed a practical, physically based way of treating the surfaces of materials called perovskite oxides to make them more durable and improve their performance. These materials are promising candidates to serve as electrodes in energy-conversion devices such as fuel cells and electrolyzers. The surface treatment could address one of the major challenges that have hindered widespread deployment of fuel cell technology, which, when operated reversibly, can offer a promising alternative to batteries for renewable-energy storage. The new findings are reported today in the journal Nature Materials, in a paper by MIT Associate Professor Bilge Yildiz of the departments of Nuclear Science and Engineering and Materials Science and Engineering, former MIT postdoc Nikolai Tsvetkov, graduate students Qiyang Lu and Lixin Sun, and Ethan Crumlin of the Lawrence Berkeley National Laboratory.

Perovskites have become a bustling area of research in recent years, with potential applications in areas ranging from fuel cell electrodes, to nonvolatile memory chips for computers, to solar thermochemical fuel production through the splitting of water and carbon dioxide. They are a broad class of oxide materials, and many teams are exploring variations of perovskite composition in search of the most promising candidates for different uses. But the relative instability of the material's surface over time has been one of the major limitations to the use of perovskites. The surfaces of these materials, when exposed to water or to gases such as oxygen or carbon dioxide at elevated temperatures, as they often are in actual applications, "suffer from degradation because of chemical segregation and phase separation," Yildiz explains. She says, "We, as well as others in the field, have discovered in the past several years that the surfaces of these perovskites get covered up by a strontium-oxide related layer, and this layer is insulating against oxygen reduction and oxygen evolution reactions, which are critical for the performance of fuel cells, electrolyzers, and thermochemical fuel production. This layer on the electrode surface is detrimental to the efficiency and durability of the device, causing the surface reactions to slow down by more than an order of magnitude."

In earlier work, Yildiz and her team uncovered the reasons behind this detrimental surface segregation of strontium. "We have found it to be governed by enrichment of oxygen vacancies at the surface," she says. Oxygen vacancies are atomic defects in the lattice where oxygen atoms are missing. "Then the solution was to kill some of those oxygen vacancies." This idea runs contrary to the conventional understanding that oxygen vacancies assist reactions with oxygen molecules at the perovskite oxide surface and improve the rate of the oxygen reduction reaction in fuel cells. Simply adding a small fraction of more-oxidizable elements at the perovskite surface "annihilates some of the oxygen vacancies, makes the surface more oxidized, and prevents the formation of insulating phases that block oxygen exchange reactions at the surface of the material," Yildiz says. In this way, the surface retains the intrinsically good electronic, ionic, and catalytic properties of the perovskite oxide and enables fast oxygen exchange reactions. The team's analysis shows that there is a sweet spot in the addition of more-oxidizable elements to the surface, in terms of both composition and concentration.
In these initial experiments, the team tried several different elements to provide the protective effect. The improvement increases up to a certain concentration, but adding more of the surface additive beyond that point starts to make things worse again, so for any given material there is an optimum amount, they found. Using hafnium, the new treatment has been shown to reduce the rate of degradation and to increase the rate of oxygen exchange reactions at the surface by 30 times.

The result was quite unexpected, Yildiz says. "Nobody would have planned to use hafnium to improve anything in this field," she says, as that element or its oxide shows almost no reactivity by itself. But as a surface treatment for the perovskite, it caused the greatest improvement of all the elements tested, "because it provides a good balance between the stability of the surface and the availability of oxygen vacancies," she explains. Importantly, Yildiz adds: "We believe the value of the work is not only in having found a potential improvement to fuel cell electrode durability, but also in fundamentally proving the mechanism behind this improvement. For that, our in situ X-ray spectroscopy experiments enabled by the Advanced Light Source have been critical."

The surface treatment process is simple and requires only a tiny amount of the additive elements, deposited from a solution of the metal chloride. "What we put on the surface is a very small amount, so it's not changing the bulk material," Yildiz says. In fact, the surface treatment amounts to no more than a single atomic layer over the bulk material.

The findings could be quite significant for making perovskite oxide electrocatalysts for some applications, she says, including solid oxide fuel cells. "The bulk electronic and ionic properties of perovskite oxides are really good, as they have been optimized over several decades for use in fuel cells," she explains, but "the bottleneck now was to improve the oxygen reduction reaction kinetics at the surface," and many teams have been stymied because the behavior of the material at its surface turned out to be "not nearly as good as hoped," or as models predicted. Now, the researchers say, they have a handle on why the problem occurs and how to deal with it.

"The observations could be used to produce more robust fuel cells with lower degradation rates, which at the moment is a major target for solid oxide fuel cell developers," says John Kilner, a professor of energy materials at Imperial College London, who was not involved in this work. "In many catalytic materials, stability and performance do not come hand in hand; the most active catalysts are also the least stable ones," says William Chueh, an assistant professor of materials science and engineering at Stanford University, who also was not connected with the research. "In this work, Yildiz and co-workers identified a new way to substantially improve the stability of cobalt-based electrocatalysts simply by adding a small amount of dopants on the surface." Chueh adds, "The most promising application of this work is to substantially improve the stability of solid-oxide fuel cells. This is the key issue that controls the cost, and limits the widespread adoption of this technology. The work is excellent in both fundamental insights and technological implications."

The work received support from the National Science Foundation's Division of Materials Research Ceramics Program through a CAREER Award and from the NASA Jet Propulsion Laboratory, and it made use of the Advanced Light Source facility at the Lawrence Berkeley National Laboratory, which is supported by the Department of Energy.


News Article | August 22, 2016
Site: www.scientificcomputing.com

A moment of inspiration during a wiring diagram review has saved more than $2 million